Dataset schema (per-column types and observed ranges):

| field | dtype | observed range / values |
| --- | --- | --- |
| id | string | lengths 9–16 |
| submitter | string | lengths 3–64 |
| authors | string | lengths 5–6.63k |
| title | string | lengths 7–245 |
| comments | string | lengths 1–482 |
| journal-ref | string | lengths 4–382 |
| doi | string | lengths 9–151 |
| report-no | string (categorical) | 984 distinct values |
| categories | string | lengths 5–108 |
| license | string (categorical) | 9 distinct values |
| abstract | string | lengths 83–3.41k |
| versions | list | lengths 1–20 |
| update_date | timestamp[s] | 2007-05-23 00:00:00 to 2025-04-11 00:00:00 |
| authors_parsed | list | lengths 1–427 |
| prompt | string | lengths 166–3.49k |
| label | string (categorical) | 2 distinct values |
| prob | float64 | 0.5–0.98 |
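A minimal sketch of loading and inspecting a dataset with this schema via the Hugging Face `datasets` library, assuming it is hosted on the Hub; the repository id below is a placeholder, since the card does not name the actual path:

```python
from datasets import load_dataset

# Placeholder repository id (assumption): the card does not state the repo path.
ds = load_dataset("your-org/arxiv-new-dataset-labels", split="train")

# Each row carries arXiv metadata plus a binary `label`
# ("new_dataset" / "no_new_dataset") and a confidence `prob` in [0.5, 0.98].
row = ds[0]
print(row["id"], row["title"])
print(row["label"], row["prob"])

# Example: restrict to high-confidence predictions.
confident = ds.filter(lambda r: r["prob"] >= 0.9)
print(f"{len(confident)} of {len(ds)} rows have prob >= 0.9")
```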
2503.05162
Chao Zhang
Chao Zhang, Yifeng Zhou, Shuheng Wang, Wenfa Li, Degang Wang, Yi Xu, Shaohui Jiao
EvolvingGS: High-Fidelity Streamable Volumetric Video via Evolving 3D Gaussian Representation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have recently seen great progress in 3D scene reconstruction through explicit point-based 3D Gaussian Splatting (3DGS), notable for its high quality and fast rendering speed. However, reconstructing dynamic scenes such as complex human performances with long durations remains challenging. Prior efforts fall short of modeling a long-term sequence with drastic motions, frequent topology changes or interactions with props, and resort to segmenting the whole sequence into groups of frames that are processed independently, which undermines temporal stability and thereby leads to an unpleasant viewing experience and an inefficient storage footprint. In view of this, we introduce EvolvingGS, a two-stage strategy that first deforms the Gaussian model to coarsely align with the target frame, and then refines it with minimal point addition/subtraction, particularly in fast-changing areas. Owing to the flexibility of the incrementally evolving representation, our method outperforms existing approaches in terms of both per-frame and temporal quality metrics while maintaining fast rendering through its purely explicit representation. Moreover, by exploiting temporal coherence between successive frames, we propose a simple yet effective compression algorithm that achieves a compression rate of over 50x. Extensive experiments on both public benchmarks and challenging custom datasets demonstrate that our method significantly advances the state-of-the-art in dynamic scene reconstruction, particularly for extended sequences with complex human performances.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 06:01:07 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhang", "Chao", "" ], [ "Zhou", "Yifeng", "" ], [ "Wang", "Shuheng", "" ], [ "Li", "Wenfa", "" ], [ "Wang", "Degang", "" ], [ "Xu", "Yi", "" ], [ "Jiao", "Shaohui", "" ] ]
TITLE: EvolvingGS: High-Fidelity Streamable Volumetric Video via Evolving 3D Gaussian Representation ABSTRACT: We have recently seen great progress in 3D scene reconstruction through explicit point-based 3D Gaussian Splatting (3DGS), notable for its high quality and fast rendering speed. However, reconstructing dynamic scenes such as complex human performances with long durations remains challenging. Prior efforts fall short of modeling a long-term sequence with drastic motions, frequent topology changes or interactions with props, and resort to segmenting the whole sequence into groups of frames that are processed independently, which undermines temporal stability and thereby leads to an unpleasant viewing experience and an inefficient storage footprint. In view of this, we introduce EvolvingGS, a two-stage strategy that first deforms the Gaussian model to coarsely align with the target frame, and then refines it with minimal point addition/subtraction, particularly in fast-changing areas. Owing to the flexibility of the incrementally evolving representation, our method outperforms existing approaches in terms of both per-frame and temporal quality metrics while maintaining fast rendering through its purely explicit representation. Moreover, by exploiting temporal coherence between successive frames, we propose a simple yet effective compression algorithm that achieves a compression rate of over 50x. Extensive experiments on both public benchmarks and challenging custom datasets demonstrate that our method significantly advances the state-of-the-art in dynamic scene reconstruction, particularly for extended sequences with complex human performances.
no_new_dataset
0.945197
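Judging from the records, the `prompt` column appears to be a mechanical concatenation of `title` and `abstract`. A minimal sketch of that presumed construction (the helper name `build_prompt` is ours; the real pipeline may differ):

```python
def build_prompt(title: str, abstract: str) -> str:
    # Inferred from the rows in this dump, where every prompt reads
    # "TITLE: <title> ABSTRACT: <abstract>"; an assumption, not a documented spec.
    return f"TITLE: {title} ABSTRACT: {abstract}"
```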
2503.05164
Jiangtao Gong
Shanhe You, Xuewen Luo, Xinhe Liang, Jiashu Yu, Chen Zheng and Jiangtao Gong
A Comprehensive LLM-powered Framework for Driving Intelligence Evaluation
8 pages, 3 figures
ICRA2025
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Evaluation methods for autonomous driving are crucial for algorithm optimization. However, due to the complexity of driving intelligence, there is currently no comprehensive evaluation method for the level of autonomous driving intelligence. In this paper, we propose an evaluation framework for driving behavior intelligence in complex traffic environments, aiming to fill this gap. We constructed a natural language evaluation dataset of human professional drivers and passengers through naturalistic driving experiments and post-driving behavior evaluation interviews. Based on this dataset, we developed an LLM-powered driving evaluation framework. The effectiveness of this framework was validated through simulated experiments in the CARLA urban traffic simulator and further corroborated by human assessment. Our research provides valuable insights for evaluating and designing more intelligent, human-like autonomous driving agents. The implementation details of the framework and detailed information about the dataset can be found on GitHub.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 06:03:02 GMT" } ]
2025-03-10T00:00:00
[ [ "You", "Shanhe", "" ], [ "Luo", "Xuewen", "" ], [ "Liang", "Xinhe", "" ], [ "Yu", "Jiashu", "" ], [ "Zheng", "Chen", "" ], [ "Gong", "Jiangtao", "" ] ]
TITLE: A Comprehensive LLM-powered Framework for Driving Intelligence Evaluation ABSTRACT: Evaluation methods for autonomous driving are crucial for algorithm optimization. However, due to the complexity of driving intelligence, there is currently no comprehensive evaluation method for the level of autonomous driving intelligence. In this paper, we propose an evaluation framework for driving behavior intelligence in complex traffic environments, aiming to fill this gap. We constructed a natural language evaluation dataset of human professional drivers and passengers through naturalistic driving experiments and post-driving behavior evaluation interviews. Based on this dataset, we developed an LLM-powered driving evaluation framework. The effectiveness of this framework was validated through simulated experiments in the CARLA urban traffic simulator and further corroborated by human assessment. Our research provides valuable insights for evaluating and designing more intelligent, human-like autonomous driving agents. The implementation details of the framework and detailed information about the dataset can be found on GitHub.
new_dataset
0.961425
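The `authors_parsed` entries follow a `[family, given, suffix]` layout, as visible in the rows above. A small sketch reassembling them into display names (the helper is hypothetical):

```python
def format_author(entry: list[str]) -> str:
    # `authors_parsed` rows look like ["Gong", "Jiangtao", ""]:
    # family name, given name, then an often-empty suffix.
    family, given, suffix = entry
    return " ".join(part for part in (given, family, suffix) if part)

authors_parsed = [["You", "Shanhe", ""], ["Gong", "Jiangtao", ""]]
print(", ".join(format_author(a) for a in authors_parsed))
# -> Shanhe You, Jiangtao Gong
```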
2503.05167
Ruotai Li
Xinhan Zheng and Huyu Wu and Haopeng Jin and Ruotai Li
FMCHS: Advancing Traditional Chinese Medicine Herb Recommendation with Fusion of Multiscale Correlations of Herbs and Symptoms
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional Chinese medicine (TCM) exhibits remarkable therapeutic efficacy in disease treatment and healthcare through personalized herb prescriptions. However, current herb recommendation models inadequately capture the multiscale relations between herbs and clinical symptoms, particularly neglecting latent correlations at the chemical-molecular scale. To address these limitations, we propose the Fusion of Multiscale Correlations of Herbs and Symptoms (FMCHS), an innovative framework that synergistically integrates molecular-scale chemical characteristics of herbs with clinical symptoms. The framework employs multi-relational graph transformer layers to generate enriched embeddings that preserve both structural and semantic features within herbs and symptoms. Through systematic incorporation of herb chemical profiles into node embeddings and implementation of attention-based feature fusion, FMCHS effectively utilizes multiscale correlations. Comprehensive evaluations demonstrate FMCHS's superior performance over the state-of-the-art (SOTA) baseline, achieving relative improvements of 8.85% in Precision@5, 12.30% in Recall@5, and 10.86% in F1@5 compared to the SOTA model on benchmark datasets. This work facilitates the practical application of TCM in disease treatment and healthcare.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 06:14:26 GMT" } ]
2025-03-10T00:00:00
[ [ "Zheng", "Xinhan", "" ], [ "Wu", "Huyu", "" ], [ "Jin", "Haopeng", "" ], [ "Li", "Ruotai", "" ] ]
TITLE: FMCHS: Advancing Traditional Chinese Medicine Herb Recommendation with Fusion of Multiscale Correlations of Herbs and Symptoms ABSTRACT: Traditional Chinese medicine (TCM) exhibits remarkable therapeutic efficacy in disease treatment and healthcare through personalized herb prescriptions. However, current herb recommendation models inadequately capture the multiscale relations between herbs and clinical symptoms, particularly neglecting latent correlations at the chemical-molecular scale. To address these limitations, we propose the Fusion of Multiscale Correlations of Herbs and Symptoms (FMCHS), an innovative framework that synergistically integrates molecular-scale chemical characteristics of herbs with clinical symptoms. The framework employs multi-relational graph transformer layers to generate enriched embeddings that preserve both structural and semantic features within herbs and symptoms. Through systematic incorporation of herb chemical profiles into node embeddings and implementation of attention-based feature fusion, FMCHS effectively utilizes multiscale correlations. Comprehensive evaluations demonstrate FMCHS's superior performance over the state-of-the-art (SOTA) baseline, achieving relative improvements of 8.85% in Precision@5, 12.30% in Recall@5, and 10.86% in F1@5 compared to the SOTA model on benchmark datasets. This work facilitates the practical application of TCM in disease treatment and healthcare.
no_new_dataset
0.949763
2503.05169
Juniper Tyree
Juniper Tyree, Andreas Rupp, Petri S. Clusius, and Michael H. Boy
phepy: Visual Benchmarks and Improvements for Out-of-Distribution Detectors
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Applying machine learning to increasingly high-dimensional problems with sparse or biased training data increases the risk that a model is used on inputs outside its training domain. For such out-of-distribution (OOD) inputs, the model can no longer make valid predictions, and its error is potentially unbounded. Testing OOD detection methods on real-world datasets is complicated by the ambiguity around which inputs are in-distribution (ID) or OOD. We design a benchmark for OOD detection, which includes three novel and easily visualisable toy examples. These simple examples provide direct and intuitive insight into whether the detector is able to detect (1) linear and (2) non-linear concepts and (3) identify thin ID subspaces (needles) within high-dimensional spaces (haystacks). We use our benchmark to evaluate the performance of various methods from the literature. Since tactile examples of OOD inputs may benefit OOD detection, we also review several simple methods to synthesise OOD inputs for supervised training. We introduce two improvements, $t$-poking and OOD sample weighting, to make supervised detectors more precise at the ID-OOD boundary. This is especially important when conflicts between real ID and synthetic OOD samples blur the decision boundary. Finally, we provide recommendations for constructing and applying out-of-distribution detectors in machine learning.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 06:25:20 GMT" } ]
2025-03-10T00:00:00
[ [ "Tyree", "Juniper", "" ], [ "Rupp", "Andreas", "" ], [ "Clusius", "Petri S.", "" ], [ "Boy", "Michael H.", "" ] ]
TITLE: phepy: Visual Benchmarks and Improvements for Out-of-Distribution Detectors ABSTRACT: Applying machine learning to increasingly high-dimensional problems with sparse or biased training data increases the risk that a model is used on inputs outside its training domain. For such out-of-distribution (OOD) inputs, the model can no longer make valid predictions, and its error is potentially unbounded. Testing OOD detection methods on real-world datasets is complicated by the ambiguity around which inputs are in-distribution (ID) or OOD. We design a benchmark for OOD detection, which includes three novel and easily visualisable toy examples. These simple examples provide direct and intuitive insight into whether the detector is able to detect (1) linear and (2) non-linear concepts and (3) identify thin ID subspaces (needles) within high-dimensional spaces (haystacks). We use our benchmark to evaluate the performance of various methods from the literature. Since tactile examples of OOD inputs may benefit OOD detection, we also review several simple methods to synthesise OOD inputs for supervised training. We introduce two improvements, $t$-poking and OOD sample weighting, to make supervised detectors more precise at the ID-OOD boundary. This is especially important when conflicts between real ID and synthetic OOD samples blur the decision boundary. Finally, we provide recommendations for constructing and applying out-of-distribution detectors in machine learning.
no_new_dataset
0.886027
2503.05170
Willmer Quinones
Willmer Rafell Quinones Robles, Sakonporn Noree, Young Sin Ko, Bryan Wong, Jongwoo Kim, Mun Yong Yi
Spatial Context-Driven Positive Pair Sampling for Enhanced Histopathology Image Classification
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has demonstrated great promise in cancer classification from whole-slide images (WSIs) but remains constrained by the need for extensive annotations. Annotation-free methods, such as multiple instance learning (MIL) and self-supervised learning (SSL), have emerged to address this challenge; however, current SSL techniques often depend on synthetic augmentations or temporal context, which may not adequately capture the intricate spatial relationships inherent to histopathology. In this work, we introduce a novel spatial context-driven positive pair sampling strategy for SSL that leverages the natural coherence of adjacent patches in WSIs. By constructing biologically relevant positive pairs from spatially proximate patches, our approach harnesses inherent spatial coherence to enhance patch-level representations, ultimately boosting slide-level classification performance. Experiments on multiple datasets reveal that our strategy improves classification accuracy by 5\% to 10\% over the standard method, paving the way for more clinically relevant AI models in cancer diagnosis. The code is available at https://anonymous.4open.science/r/contextual-pairs-E72F/.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 06:31:19 GMT" } ]
2025-03-10T00:00:00
[ [ "Robles", "Willmer Rafell Quinones", "" ], [ "Noree", "Sakonporn", "" ], [ "Ko", "Young Sin", "" ], [ "Wong", "Bryan", "" ], [ "Kim", "Jongwoo", "" ], [ "Yi", "Mun Yong", "" ] ]
TITLE: Spatial Context-Driven Positive Pair Sampling for Enhanced Histopathology Image Classification ABSTRACT: Deep learning has demonstrated great promise in cancer classification from whole-slide images (WSIs) but remains constrained by the need for extensive annotations. Annotation-free methods, such as multiple instance learning (MIL) and self-supervised learning (SSL), have emerged to address this challenge; however, current SSL techniques often depend on synthetic augmentations or temporal context, which may not adequately capture the intricate spatial relationships inherent to histopathology. In this work, we introduce a novel spatial context-driven positive pair sampling strategy for SSL that leverages the natural coherence of adjacent patches in WSIs. By constructing biologically relevant positive pairs from spatially proximate patches, our approach harnesses inherent spatial coherence to enhance patch-level representations, ultimately boosting slide-level classification performance. Experiments on multiple datasets reveal that our strategy improves classification accuracy by 5\% to 10\% over the standard method, paving the way for more clinically relevant AI models in cancer diagnosis. The code is available at https://anonymous.4open.science/r/contextual-pairs-E72F/.
no_new_dataset
0.95222
2503.05173
Samson Zhou
Vincent Cohen-Addad, Shaofeng H.-C. Jiang, Qiaoyuan Yang, Yubo Zhang, Samson Zhou
Fair Clustering in the Sliding Window Model
ICLR 2025
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
We study streaming algorithms for proportionally fair clustering, a notion originally suggested by Chierichetti et al. (2017), in the sliding window model. We show that although there exist efficient streaming algorithms in the insertion-only model, surprisingly no algorithm can achieve a finite multiplicative ratio without violating the fairness constraint in the sliding window. Hence, the problem of fair clustering is a rare separation between the insertion-only streaming model and the sliding window model. On the other hand, we show that if the fairness constraint is relaxed by a multiplicative $(1+\varepsilon)$ factor, there exists a $(1 + \varepsilon)$-approximate sliding window algorithm that uses $\text{poly}(k\varepsilon^{-1}\log n)$ space. This achieves essentially the best parameters (up to degree in the polynomial) given the aforementioned lower bound. We also conduct a number of empirical evaluations on real datasets to complement our theoretical results.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 06:39:51 GMT" } ]
2025-03-10T00:00:00
[ [ "Cohen-Addad", "Vincent", "" ], [ "Jiang", "Shaofeng H. -C.", "" ], [ "Yang", "Qiaoyuan", "" ], [ "Zhang", "Yubo", "" ], [ "Zhou", "Samson", "" ] ]
TITLE: Fair Clustering in the Sliding Window Model ABSTRACT: We study streaming algorithms for proportionally fair clustering, a notion originally suggested by Chierichetti et al. (2017), in the sliding window model. We show that although there exist efficient streaming algorithms in the insertion-only model, surprisingly no algorithm can achieve a finite multiplicative ratio without violating the fairness constraint in the sliding window. Hence, the problem of fair clustering is a rare separation between the insertion-only streaming model and the sliding window model. On the other hand, we show that if the fairness constraint is relaxed by a multiplicative $(1+\varepsilon)$ factor, there exists a $(1 + \varepsilon)$-approximate sliding window algorithm that uses $\text{poly}(k\varepsilon^{-1}\log n)$ space. This achieves essentially the best parameters (up to degree in the polynomial) given the aforementioned lower bound. We also conduct a number of empirical evaluations on real datasets to complement our theoretical results.
no_new_dataset
0.949153
2503.05174
Linqi Yang
Linqi Yang, Xiongwei Zhao, Qihao Sun, Ke Wang, Ao Chen, Peng Kang
SplatPose: Geometry-Aware 6-DoF Pose Estimation from Single RGB Image via 3D Gaussian Splatting
Submitted to IROS 2025
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
6-DoF pose estimation is a fundamental task in computer vision with wide-ranging applications in augmented reality and robotics. Existing single RGB-based methods often compromise accuracy due to their reliance on initial pose estimates and susceptibility to rotational ambiguity, while approaches requiring depth sensors or multi-view setups incur significant deployment costs. To address these limitations, we introduce SplatPose, a novel framework that synergizes 3D Gaussian Splatting (3DGS) with a dual-branch neural architecture to achieve high-precision pose estimation using only a single RGB image. Central to our approach is the Dual-Attention Ray Scoring Network (DARS-Net), which innovatively decouples positional and angular alignment through geometry-domain attention mechanisms, explicitly modeling directional dependencies to mitigate rotational ambiguity. Additionally, a coarse-to-fine optimization pipeline progressively refines pose estimates by aligning dense 2D features between query images and 3DGS-synthesized views, effectively correcting feature misalignment and depth errors from sparse ray sampling. Experiments on three benchmark datasets demonstrate that SplatPose achieves state-of-the-art 6-DoF pose estimation accuracy in single RGB settings, rivaling approaches that depend on depth or multi-view images.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 06:40:06 GMT" } ]
2025-03-10T00:00:00
[ [ "Yang", "Linqi", "" ], [ "Zhao", "Xiongwei", "" ], [ "Sun", "Qihao", "" ], [ "Wang", "Ke", "" ], [ "Chen", "Ao", "" ], [ "Kang", "Peng", "" ] ]
TITLE: SplatPose: Geometry-Aware 6-DoF Pose Estimation from Single RGB Image via 3D Gaussian Splatting ABSTRACT: 6-DoF pose estimation is a fundamental task in computer vision with wide-ranging applications in augmented reality and robotics. Existing single RGB-based methods often compromise accuracy due to their reliance on initial pose estimates and susceptibility to rotational ambiguity, while approaches requiring depth sensors or multi-view setups incur significant deployment costs. To address these limitations, we introduce SplatPose, a novel framework that synergizes 3D Gaussian Splatting (3DGS) with a dual-branch neural architecture to achieve high-precision pose estimation using only a single RGB image. Central to our approach is the Dual-Attention Ray Scoring Network (DARS-Net), which innovatively decouples positional and angular alignment through geometry-domain attention mechanisms, explicitly modeling directional dependencies to mitigate rotational ambiguity. Additionally, a coarse-to-fine optimization pipeline progressively refines pose estimates by aligning dense 2D features between query images and 3DGS-synthesized views, effectively correcting feature misalignment and depth errors from sparse ray sampling. Experiments on three benchmark datasets demonstrate that SplatPose achieves state-of-the-art 6-DoF pose estimation accuracy in single RGB settings, rivaling approaches that depend on depth or multi-view images.
no_new_dataset
0.943191
2503.05179
Simon Aytes
Simon A. Aytes, Jinheon Baek, Sung Ju Hwang
Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in large language models have demonstrated remarkable reasoning capabilities through Chain of Thought (CoT) prompting, but often at the cost of excessive verbosity in their intermediate outputs, which increases computational overhead. We introduce Sketch-of-Thought (SoT), a novel prompting framework that combines cognitive-inspired reasoning paradigms with linguistic constraints to minimize token usage while preserving reasoning accuracy. SoT is designed as a flexible framework that can incorporate any custom reasoning paradigms based on cognitive science, and we instantiate it with three such paradigms - Conceptual Chaining, Chunked Symbolism, and Expert Lexicons - each tailored to different reasoning tasks and selected dynamically via a lightweight routing model. Through comprehensive evaluation across 15 reasoning datasets with multiple languages and multimodal scenarios, we demonstrate that SoT achieves token reductions of 76% with negligible accuracy impact. In certain domains like mathematical and multi-hop reasoning, it even improves accuracy while using significantly fewer tokens. Our code is publicly available: https://www.github.com/SimonAytes/SoT.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 06:57:17 GMT" } ]
2025-03-10T00:00:00
[ [ "Aytes", "Simon A.", "" ], [ "Baek", "Jinheon", "" ], [ "Hwang", "Sung Ju", "" ] ]
TITLE: Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching ABSTRACT: Recent advances in large language models have demonstrated remarkable reasoning capabilities through Chain of Thought (CoT) prompting, but often at the cost of excessive verbosity in their intermediate outputs, which increases computational overhead. We introduce Sketch-of-Thought (SoT), a novel prompting framework that combines cognitive-inspired reasoning paradigms with linguistic constraints to minimize token usage while preserving reasoning accuracy. SoT is designed as a flexible framework that can incorporate any custom reasoning paradigms based on cognitive science, and we instantiate it with three such paradigms - Conceptual Chaining, Chunked Symbolism, and Expert Lexicons - each tailored to different reasoning tasks and selected dynamically via a lightweight routing model. Through comprehensive evaluation across 15 reasoning datasets with multiple languages and multimodal scenarios, we demonstrate that SoT achieves token reductions of 76% with negligible accuracy impact. In certain domains like mathematical and multi-hop reasoning, it even improves accuracy while using significantly fewer tokens. Our code is publicly available: https://www.github.com/SimonAytes/SoT.
no_new_dataset
0.942823
2503.05180
Zherui Huang
Zherui Huang, Xing Gao, Guanjie Zheng, Licheng Wen, Xuemeng Yang, Xiao Sun
Safety-Critical Traffic Simulation with Adversarial Transfer of Driving Intentions
Accepted by ICRA 2025
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traffic simulation, complementing real-world data with a long-tail distribution, allows for effective evaluation and enhancement of the ability of autonomous vehicles to handle accident-prone scenarios. However, simulating such safety-critical scenarios from log data, which typically capture regular scenarios, is nontrivial, especially considering the dynamic adversarial interactions between the future motions of autonomous vehicles and surrounding traffic participants. To address this challenge, this paper proposes an innovative and efficient strategy, termed IntSim, that explicitly decouples the driving intentions of surrounding actors from their motion planning for realistic and efficient safety-critical simulation. We formulate the adversarial transfer of driving intention as an optimization problem, facilitating extensive exploration of diverse attack behaviors and efficient solution convergence. Simultaneously, intention-conditioned motion planning benefits from powerful deep models and large-scale real-world data, permitting the simulation of realistic motion behaviors for actors. Specifically, by adapting driving intentions based on the environment, IntSim facilitates the flexible realization of dynamic adversarial interactions with autonomous vehicles. Finally, extensive open-loop and closed-loop experiments on real-world datasets, including nuScenes and Waymo, demonstrate that the proposed IntSim achieves state-of-the-art performance in simulating realistic safety-critical scenarios and further improves planners in handling such scenarios.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 06:59:27 GMT" } ]
2025-03-10T00:00:00
[ [ "Huang", "Zherui", "" ], [ "Gao", "Xing", "" ], [ "Zheng", "Guanjie", "" ], [ "Wen", "Licheng", "" ], [ "Yang", "Xuemeng", "" ], [ "Sun", "Xiao", "" ] ]
TITLE: Safety-Critical Traffic Simulation with Adversarial Transfer of Driving Intentions ABSTRACT: Traffic simulation, complementing real-world data with a long-tail distribution, allows for effective evaluation and enhancement of the ability of autonomous vehicles to handle accident-prone scenarios. However, simulating such safety-critical scenarios from log data, which typically capture regular scenarios, is nontrivial, especially considering the dynamic adversarial interactions between the future motions of autonomous vehicles and surrounding traffic participants. To address this challenge, this paper proposes an innovative and efficient strategy, termed IntSim, that explicitly decouples the driving intentions of surrounding actors from their motion planning for realistic and efficient safety-critical simulation. We formulate the adversarial transfer of driving intention as an optimization problem, facilitating extensive exploration of diverse attack behaviors and efficient solution convergence. Simultaneously, intention-conditioned motion planning benefits from powerful deep models and large-scale real-world data, permitting the simulation of realistic motion behaviors for actors. Specifically, by adapting driving intentions based on the environment, IntSim facilitates the flexible realization of dynamic adversarial interactions with autonomous vehicles. Finally, extensive open-loop and closed-loop experiments on real-world datasets, including nuScenes and Waymo, demonstrate that the proposed IntSim achieves state-of-the-art performance in simulating realistic safety-critical scenarios and further improves planners in handling such scenarios.
no_new_dataset
0.945701
2503.05182
Qingyuan Zhou
Qingyuan Zhou, Yuehu Gong, Weidong Yang, Jiaze Li, Yeqi Luo, Baixin Xu, Shuhao Li, Ben Fei, Ying He
MGSR: 2D/3D Mutual-boosted Gaussian Splatting for High-fidelity Surface Reconstruction under Various Light Conditions
11 pages, 7 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Novel view synthesis (NVS) and surface reconstruction (SR) are essential tasks in 3D Gaussian Splatting (3D-GS). Despite recent progress, these tasks are often addressed independently, with GS-based rendering methods struggling under diverse light conditions and failing to produce accurate surfaces, while GS-based reconstruction methods frequently compromise rendering quality. This raises a central question: must rendering and reconstruction always involve a trade-off? To address this, we propose MGSR, a 2D/3D Mutual-boosted Gaussian splatting for Surface Reconstruction that enhances both rendering quality and 3D reconstruction accuracy. MGSR introduces two branches--one based on 2D-GS and the other on 3D-GS. The 2D-GS branch excels in surface reconstruction, providing precise geometry information to the 3D-GS branch. Leveraging this geometry, the 3D-GS branch employs a geometry-guided illumination decomposition module that captures reflected and transmitted components, enabling realistic rendering under varied light conditions. Using the transmitted component as supervision, the 2D-GS branch also achieves high-fidelity surface reconstruction. Throughout the optimization process, the 2D-GS and 3D-GS branches undergo alternating optimization, providing mutual supervision. Prior to this, each branch completes an independent warm-up phase, with an early stopping strategy implemented to reduce computational costs. We evaluate MGSR on a diverse set of synthetic and real-world datasets, at both object and scene levels, demonstrating strong performance in rendering and surface reconstruction.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 07:06:47 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhou", "Qingyuan", "" ], [ "Gong", "Yuehu", "" ], [ "Yang", "Weidong", "" ], [ "Li", "Jiaze", "" ], [ "Luo", "Yeqi", "" ], [ "Xu", "Baixin", "" ], [ "Li", "Shuhao", "" ], [ "Fei", "Ben", "" ], [ "He", "Ying", "" ] ]
TITLE: MGSR: 2D/3D Mutual-boosted Gaussian Splatting for High-fidelity Surface Reconstruction under Various Light Conditions ABSTRACT: Novel view synthesis (NVS) and surface reconstruction (SR) are essential tasks in 3D Gaussian Splatting (3D-GS). Despite recent progress, these tasks are often addressed independently, with GS-based rendering methods struggling under diverse light conditions and failing to produce accurate surfaces, while GS-based reconstruction methods frequently compromise rendering quality. This raises a central question: must rendering and reconstruction always involve a trade-off? To address this, we propose MGSR, a 2D/3D Mutual-boosted Gaussian splatting for Surface Reconstruction that enhances both rendering quality and 3D reconstruction accuracy. MGSR introduces two branches--one based on 2D-GS and the other on 3D-GS. The 2D-GS branch excels in surface reconstruction, providing precise geometry information to the 3D-GS branch. Leveraging this geometry, the 3D-GS branch employs a geometry-guided illumination decomposition module that captures reflected and transmitted components, enabling realistic rendering under varied light conditions. Using the transmitted component as supervision, the 2D-GS branch also achieves high-fidelity surface reconstruction. Throughout the optimization process, the 2D-GS and 3D-GS branches undergo alternating optimization, providing mutual supervision. Prior to this, each branch completes an independent warm-up phase, with an early stopping strategy implemented to reduce computational costs. We evaluate MGSR on a diverse set of synthetic and real-world datasets, at both object and scene levels, demonstrating strong performance in rendering and surface reconstruction.
no_new_dataset
0.949106
2503.05183
Quan Yu
Quan Yu, Yu-Hong Dai, Minru Bai
Spectral-Spatial Extraction through Layered Tensor Decomposition for Hyperspectral Anomaly Detection
null
null
null
null
cs.CV math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Low rank tensor representation (LRTR) methods are very useful for hyperspectral anomaly detection (HAD). To overcome the limitations that they often overlook spectral anomaly and rely on large-scale matrix singular value decomposition, we first apply non-negative matrix factorization (NMF) to alleviate spectral dimensionality redundancy and extract spectral anomaly and then employ LRTR to extract spatial anomaly while mitigating spatial redundancy, yielding a highly efficient layered tensor decomposition (LTD) framework for HAD. An iterative algorithm based on proximal alternating minimization is developed to solve the proposed LTD model, with convergence guarantees provided. Moreover, we introduce a rank reduction strategy with a validation mechanism that adaptively reduces data size while preventing excessive reduction. Theoretically, we rigorously establish the equivalence between the tensor tubal rank and tensor group sparsity regularization (TGSR) and, under mild conditions, demonstrate that the relaxed formulation of TGSR shares the same global minimizers and optimal values as its original counterpart. Experimental results on the Airport-Beach-Urban and MVTec datasets demonstrate that our approach outperforms state-of-the-art methods in the HAD task.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 07:08:14 GMT" } ]
2025-03-10T00:00:00
[ [ "Yu", "Quan", "" ], [ "Dai", "Yu-Hong", "" ], [ "Bai", "Minru", "" ] ]
TITLE: Spectral-Spatial Extraction through Layered Tensor Decomposition for Hyperspectral Anomaly Detection ABSTRACT: Low rank tensor representation (LRTR) methods are very useful for hyperspectral anomaly detection (HAD). To overcome the limitations that they often overlook spectral anomaly and rely on large-scale matrix singular value decomposition, we first apply non-negative matrix factorization (NMF) to alleviate spectral dimensionality redundancy and extract spectral anomaly and then employ LRTR to extract spatial anomaly while mitigating spatial redundancy, yielding a highly efficient layered tensor decomposition (LTD) framework for HAD. An iterative algorithm based on proximal alternating minimization is developed to solve the proposed LTD model, with convergence guarantees provided. Moreover, we introduce a rank reduction strategy with a validation mechanism that adaptively reduces data size while preventing excessive reduction. Theoretically, we rigorously establish the equivalence between the tensor tubal rank and tensor group sparsity regularization (TGSR) and, under mild conditions, demonstrate that the relaxed formulation of TGSR shares the same global minimizers and optimal values as its original counterpart. Experimental results on the Airport-Beach-Urban and MVTec datasets demonstrate that our approach outperforms state-of-the-art methods in the HAD task.
no_new_dataset
0.9462
2503.05190
Lei Zhu
Lei Zhu, Yanyu Xu, Huazhu Fu, Xinxing Xu, Rick Siow Mong Goh, and Yong Liu
Partially Supervised Unpaired Multi-Modal Learning for Label-Efficient Medical Image Segmentation
Accepted to MLMI 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unpaired Multi-Modal Learning (UMML), which leverages unpaired multi-modal data to boost model performance on each individual modality, has attracted a lot of research interest in medical image analysis. However, existing UMML methods require multi-modal datasets to be fully labeled, which incurs tremendous annotation cost. In this paper, we investigate the use of partially labeled data for label-efficient unpaired multi-modal learning, which can reduce the annotation cost by up to one half. We term the new learning paradigm as Partially Supervised Unpaired Multi-Modal Learning (PSUMML) and propose a novel Decomposed partial class adaptation with snapshot Ensembled Self-Training (DEST) framework for it. Specifically, our framework consists of a compact segmentation network with modality-specific normalization layers for learning with partially labeled unpaired multi-modal data. The key challenge in PSUMML lies in the complex partial class distribution discrepancy due to partial class annotation, which hinders effective knowledge transfer across modalities. We theoretically analyze this phenomenon with a decomposition theorem and propose a decomposed partial class adaptation technique to precisely align the partially labeled classes across modalities to reduce the distribution discrepancy. We further propose a snapshot ensembled self-training technique to leverage the valuable snapshot models during training to assign pseudo-labels to partially labeled pixels for self-training to boost model performance. We perform extensive experiments under different scenarios of PSUMML for two medical image segmentation tasks, namely cardiac substructure segmentation and abdominal multi-organ segmentation. Our framework outperforms existing methods significantly.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 07:22:42 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhu", "Lei", "" ], [ "Xu", "Yanyu", "" ], [ "Fu", "Huazhu", "" ], [ "Xu", "Xinxing", "" ], [ "Goh", "Rick Siow Mong", "" ], [ "Liu", "Yong", "" ] ]
TITLE: Partially Supervised Unpaired Multi-Modal Learning for Label-Efficient Medical Image Segmentation ABSTRACT: Unpaired Multi-Modal Learning (UMML), which leverages unpaired multi-modal data to boost model performance on each individual modality, has attracted a lot of research interest in medical image analysis. However, existing UMML methods require multi-modal datasets to be fully labeled, which incurs tremendous annotation cost. In this paper, we investigate the use of partially labeled data for label-efficient unpaired multi-modal learning, which can reduce the annotation cost by up to one half. We term the new learning paradigm as Partially Supervised Unpaired Multi-Modal Learning (PSUMML) and propose a novel Decomposed partial class adaptation with snapshot Ensembled Self-Training (DEST) framework for it. Specifically, our framework consists of a compact segmentation network with modality-specific normalization layers for learning with partially labeled unpaired multi-modal data. The key challenge in PSUMML lies in the complex partial class distribution discrepancy due to partial class annotation, which hinders effective knowledge transfer across modalities. We theoretically analyze this phenomenon with a decomposition theorem and propose a decomposed partial class adaptation technique to precisely align the partially labeled classes across modalities to reduce the distribution discrepancy. We further propose a snapshot ensembled self-training technique to leverage the valuable snapshot models during training to assign pseudo-labels to partially labeled pixels for self-training to boost model performance. We perform extensive experiments under different scenarios of PSUMML for two medical image segmentation tasks, namely cardiac substructure segmentation and abdominal multi-organ segmentation. Our framework outperforms existing methods significantly.
no_new_dataset
0.949576
2503.05207
Yunkai Gao
Yunkai Gao, Jiaming Guo, Fan Wu, Rui Zhang
Policy Constraint by Only Support Constraint for Offline Reinforcement Learning
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Offline reinforcement learning (RL) aims to optimize a policy using pre-collected datasets to maximize cumulative rewards. However, offline reinforcement learning suffers from challenges due to the distributional shift between the learned and behavior policies, leading to errors when computing Q-values for out-of-distribution (OOD) actions. To mitigate this issue, policy constraint methods aim to constrain the learned policy's distribution with the distribution of the behavior policy or confine action selection within the support of the behavior policy. However, current policy constraint methods tend to exhibit excessive conservatism, hindering the policy from further surpassing the behavior policy's performance. In this work, we present Only Support Constraint (OSC), which is derived from maximizing the total probability of the learned policy within the support of the behavior policy, to address the conservatism of policy constraint. OSC presents a regularization term that only restricts policies to the support without imposing extra constraints on actions within the support. Additionally, to fully harness the performance of the new policy constraints, OSC utilizes a diffusion model to effectively characterize the support of behavior policies. Experimental evaluations across a variety of offline RL benchmarks demonstrate that OSC significantly enhances performance, alleviating the challenges associated with distributional shifts and mitigating conservatism of policy constraints. Code is available at https://github.com/MoreanP/OSC.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 07:55:51 GMT" } ]
2025-03-10T00:00:00
[ [ "Gao", "Yunkai", "" ], [ "Guo", "Jiaming", "" ], [ "Wu", "Fan", "" ], [ "Zhang", "Rui", "" ] ]
TITLE: Policy Constraint by Only Support Constraint for Offline Reinforcement Learning ABSTRACT: Offline reinforcement learning (RL) aims to optimize a policy using pre-collected datasets to maximize cumulative rewards. However, offline reinforcement learning suffers from challenges due to the distributional shift between the learned and behavior policies, leading to errors when computing Q-values for out-of-distribution (OOD) actions. To mitigate this issue, policy constraint methods aim to constrain the learned policy's distribution with the distribution of the behavior policy or confine action selection within the support of the behavior policy. However, current policy constraint methods tend to exhibit excessive conservatism, hindering the policy from further surpassing the behavior policy's performance. In this work, we present Only Support Constraint (OSC), which is derived from maximizing the total probability of the learned policy within the support of the behavior policy, to address the conservatism of policy constraint. OSC presents a regularization term that only restricts policies to the support without imposing extra constraints on actions within the support. Additionally, to fully harness the performance of the new policy constraints, OSC utilizes a diffusion model to effectively characterize the support of behavior policies. Experimental evaluations across a variety of offline RL benchmarks demonstrate that OSC significantly enhances performance, alleviating the challenges associated with distributional shifts and mitigating conservatism of policy constraints. Code is available at https://github.com/MoreanP/OSC.
no_new_dataset
0.950732
2503.05212
Aixin Sun
Guoxiu He, Xin Song, Aixin Sun
Knowledge Updating? No More Model Editing! Just Selective Contextual Reasoning
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
As real-world knowledge evolves, the information embedded within large language models (LLMs) can become outdated, inadequate, or erroneous. Model editing has emerged as a prominent approach for updating LLMs' knowledge with minimal computational costs and parameter changes. This approach typically identifies and adjusts specific model parameters associated with newly acquired knowledge. However, existing methods often underestimate the adverse effects that parameter modifications can have on broadly distributed knowledge. More critically, post-edit LLMs frequently struggle with multi-hop reasoning and continuous knowledge updates. Although various studies have discussed these shortcomings, there is a lack of comprehensive evaluation. In this paper, we provide an evaluation of ten model editing methods along four dimensions: reliability, generalization, locality, and portability. Results confirm that all ten popular model editing methods show significant shortcomings across multiple dimensions, suggesting model editing is less promising. We then propose a straightforward method called Selective Contextual Reasoning (SCR) for knowledge updating. SCR does not modify model parameters but harnesses the LLM's inherent contextual reasoning capabilities, utilizing the updated knowledge pieces. Under SCR, an LLM first assesses whether an incoming query falls within the scope of an external knowledge base. If it does, the relevant external knowledge texts are contextualized to enhance reasoning; otherwise, the query is answered directly. We evaluate SCR against the ten model editing methods on two counterfactual datasets with three backbone LLMs. Empirical results confirm the effectiveness and efficiency of contextual reasoning for knowledge updating.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 08:04:25 GMT" } ]
2025-03-10T00:00:00
[ [ "He", "Guoxiu", "" ], [ "Song", "Xin", "" ], [ "Sun", "Aixin", "" ] ]
TITLE: Knowledge Updating? No More Model Editing! Just Selective Contextual Reasoning ABSTRACT: As real-world knowledge evolves, the information embedded within large language models (LLMs) can become outdated, inadequate, or erroneous. Model editing has emerged as a prominent approach for updating LLMs' knowledge with minimal computational costs and parameter changes. This approach typically identifies and adjusts specific model parameters associated with newly acquired knowledge. However, existing methods often underestimate the adverse effects that parameter modifications can have on broadly distributed knowledge. More critically, post-edit LLMs frequently struggle with multi-hop reasoning and continuous knowledge updates. Although various studies have discussed these shortcomings, there is a lack of comprehensive evaluation. In this paper, we provide an evaluation of ten model editing methods along four dimensions: reliability, generalization, locality, and portability. Results confirm that all ten popular model editing methods show significant shortcomings across multiple dimensions, suggesting model editing is less promising. We then propose a straightforward method called Selective Contextual Reasoning (SCR) for knowledge updating. SCR does not modify model parameters but harnesses the LLM's inherent contextual reasoning capabilities, utilizing the updated knowledge pieces. Under SCR, an LLM first assesses whether an incoming query falls within the scope of an external knowledge base. If it does, the relevant external knowledge texts are contextualized to enhance reasoning; otherwise, the query is answered directly. We evaluate SCR against the ten model editing methods on two counterfactual datasets with three backbone LLMs. Empirical results confirm the effectiveness and efficiency of contextual reasoning for knowledge updating.
no_new_dataset
0.9434
2503.05217
Gulpi Pratamasunu Qorik Oktagalu
Gulpi Qorik Oktagalu Pratamasunu, Guoqing Hao and Kazuhiro Fukui
Separability Membrane: 3D Active Contour for Point Cloud Surface Reconstruction
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper proposes Separability Membrane, a robust 3D active contour for extracting a surface from a 3D point cloud object. Our approach defines the surface of a 3D object as the boundary that maximizes the separability of point features, such as intensity, color, or local density, between its inner and outer regions based on Fisher's ratio. Separability Membrane identifies the exact surface of a 3D object by maximizing class separability while controlling the rigidity of the 3D surface model with an adaptive B-spline surface that adjusts its properties based on the local and global separability. A key advantage of our method is its ability to accurately reconstruct surface boundaries even when they are ambiguous due to noise or outliers, without requiring any training data or conversion to volumetric representation. Evaluations on a synthetic 3D point cloud dataset and the 3DNet dataset demonstrate the membrane's effectiveness and robustness under diverse conditions.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 08:15:02 GMT" } ]
2025-03-10T00:00:00
[ [ "Pratamasunu", "Gulpi Qorik Oktagalu", "" ], [ "Hao", "Guoqing", "" ], [ "Fukui", "Kazuhiro", "" ] ]
TITLE: Separability Membrane: 3D Active Contour for Point Cloud Surface Reconstruction ABSTRACT: This paper proposes Separability Membrane, a robust 3D active contour for extracting a surface from a 3D point cloud object. Our approach defines the surface of a 3D object as the boundary that maximizes the separability of point features, such as intensity, color, or local density, between its inner and outer regions based on Fisher's ratio. Separability Membrane identifies the exact surface of a 3D object by maximizing class separability while controlling the rigidity of the 3D surface model with an adaptive B-spline surface that adjusts its properties based on the local and global separability. A key advantage of our method is its ability to accurately reconstruct surface boundaries even when they are ambiguous due to noise or outliers, without requiring any training data or conversion to volumetric representation. Evaluations on a synthetic 3D point cloud dataset and the 3DNet dataset demonstrate the membrane's effectiveness and robustness under diverse conditions.
no_new_dataset
0.949201
2503.05223
Yifan Liu
Yifan Liu, Yu Fang, Zhouhan Lin
DiVISe: Direct Visual-Input Speech Synthesis Preserving Speaker Characteristics And Intelligibility
to be published in NAACL 25
null
null
null
cs.SD cs.CV cs.LG cs.MM eess.AS
http://creativecommons.org/licenses/by/4.0/
Video-to-speech (V2S) synthesis, the task of generating speech directly from silent video input, is inherently more challenging than other speech synthesis tasks due to the need to accurately reconstruct both speech content and speaker characteristics from visual cues alone. Recently, audio-visual pre-training has eliminated the need for additional acoustic hints in V2S, which previous methods often relied on to ensure training convergence. However, even with pre-training, existing methods continue to face challenges in achieving a balance between acoustic intelligibility and the preservation of speaker-specific characteristics. We analyzed this limitation and were motivated to introduce DiVISe (Direct Visual-Input Speech Synthesis), an end-to-end V2S model that predicts Mel-spectrograms directly from video frames alone. Despite not taking any acoustic hints, DiVISe effectively preserves speaker characteristics in the generated audio, and achieves superior performance on both objective and subjective metrics across the LRS2 and LRS3 datasets. Our results demonstrate that DiVISe not only outperforms existing V2S models in acoustic intelligibility but also scales more effectively with increased data and model parameters. Code and weights can be found at https://github.com/PussyCat0700/DiVISe.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 08:21:48 GMT" } ]
2025-03-10T00:00:00
[ [ "Liu", "Yifan", "" ], [ "Fang", "Yu", "" ], [ "Lin", "Zhouhan", "" ] ]
TITLE: DiVISe: Direct Visual-Input Speech Synthesis Preserving Speaker Characteristics And Intelligibility ABSTRACT: Video-to-speech (V2S) synthesis, the task of generating speech directly from silent video input, is inherently more challenging than other speech synthesis tasks due to the need to accurately reconstruct both speech content and speaker characteristics from visual cues alone. Recently, audio-visual pre-training has eliminated the need for additional acoustic hints in V2S, which previous methods often relied on to ensure training convergence. However, even with pre-training, existing methods continue to face challenges in achieving a balance between acoustic intelligibility and the preservation of speaker-specific characteristics. We analyzed this limitation and were motivated to introduce DiVISe (Direct Visual-Input Speech Synthesis), an end-to-end V2S model that predicts Mel-spectrograms directly from video frames alone. Despite not taking any acoustic hints, DiVISe effectively preserves speaker characteristics in the generated audio, and achieves superior performance on both objective and subjective metrics across the LRS2 and LRS3 datasets. Our results demonstrate that DiVISe not only outperforms existing V2S models in acoustic intelligibility but also scales more effectively with increased data and model parameters. Code and weights can be found at https://github.com/PussyCat0700/DiVISe.
no_new_dataset
0.950686
2503.05228
Ruoxuan Zhang
Ruoxuan Zhang, Hongxia Xie, Yi Yao, Jian-Yu Jiang-Lin, Bin Wen, Ling Lo, Hong-Han Shuai, Yung-Hui Li, Wen-Huang Cheng
RecipeGen: A Benchmark for Real-World Recipe Image Generation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recipe image generation is an important challenge in food computing, with applications from culinary education to interactive recipe platforms. However, there is currently no real-world dataset that comprehensively connects recipe goals, sequential steps, and corresponding images. To address this, we introduce RecipeGen, the first real-world goal-step-image benchmark for recipe generation, featuring diverse ingredients, varied recipe steps, multiple cooking styles, and a broad collection of food categories. Data is available at https://github.com/zhangdaxia22/RecipeGen.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 08:25:28 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhang", "Ruoxuan", "" ], [ "Xie", "Hongxia", "" ], [ "Yao", "Yi", "" ], [ "Jiang-Lin", "Jian-Yu", "" ], [ "Wen", "Bin", "" ], [ "Lo", "Ling", "" ], [ "Shuai", "Hong-Han", "" ], [ "Li", "Yung-Hui", "" ], [ "Cheng", "Wen-Huang", "" ] ]
TITLE: RecipeGen: A Benchmark for Real-World Recipe Image Generation ABSTRACT: Recipe image generation is an important challenge in food computing, with applications from culinary education to interactive recipe platforms. However, there is currently no real-world dataset that comprehensively connects recipe goals, sequential steps, and corresponding images. To address this, we introduce RecipeGen, the first real-world goal-step-image benchmark for recipe generation, featuring diverse ingredients, varied recipe steps, multiple cooking styles, and a broad collection of food categories. Data is available at https://github.com/zhangdaxia22/RecipeGen.
new_dataset
0.9549
2503.05231
Li Haonan
Shuo Jiang, Haonan Li, Ruochen Ren, Yanmin Zhou, Zhipeng Wang, Bin He
Kaiwu: A Multimodal Manipulation Dataset and Framework for Robot Learning and Human-Robot Interaction
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
Cutting-edge robot learning techniques, including foundation models and imitation learning from humans, all pose huge demands on large-scale and high-quality datasets, which constitute one of the bottlenecks in the general intelligent robot field. This paper presents the Kaiwu multimodal dataset to address the missing real-world synchronized multimodal data problems in the sophisticated assembling scenario, especially with dynamics information and its fine-grained labelling. The dataset first provides an integrated human, environment, and robot data collection framework with 20 subjects and 30 interaction objects, resulting in a total of 11,664 instances of integrated actions. For each demonstration, hand motions, operation pressures, sounds of the assembling process, multi-view videos, high-precision motion capture information, eye gaze with first-person videos, and electromyography signals are all recorded. Fine-grained multi-level annotation based on absolute timestamps and semantic segmentation labelling are performed. The Kaiwu dataset aims to facilitate robot learning, dexterous manipulation, human intention investigation, and human-robot collaboration research.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 08:28:24 GMT" } ]
2025-03-10T00:00:00
[ [ "Jiang", "Shuo", "" ], [ "Li", "Haonan", "" ], [ "Ren", "Ruochen", "" ], [ "Zhou", "Yanmin", "" ], [ "Wang", "Zhipeng", "" ], [ "He", "Bin", "" ] ]
TITLE: Kaiwu: A Multimodal Manipulation Dataset and Framework for Robot Learning and Human-Robot Interaction ABSTRACT: Cutting-edge robot learning techniques, including foundation models and imitation learning from humans, all pose huge demands on large-scale, high-quality datasets, which constitute one of the bottlenecks in the field of general intelligent robots. This paper presents the Kaiwu multimodal dataset to address the lack of real-world synchronized multimodal data in sophisticated assembly scenarios, especially dynamics information and its fine-grained labelling. The dataset first provides an integrated human, environment, and robot data collection framework with 20 subjects and 30 interaction objects, resulting in a total of 11,664 instances of integrated actions. For each demonstration, hand motions, operation pressures, sounds of the assembly process, multi-view videos, high-precision motion capture information, eye gaze with first-person videos, and electromyography signals are all recorded. Fine-grained multi-level annotation based on absolute timestamps and semantic segmentation labelling are performed. The Kaiwu dataset aims to facilitate research on robot learning, dexterous manipulation, human intention investigation, and human-robot collaboration.
new_dataset
0.971966
2503.05236
Yibin Wang
Yibin Wang and Yuhang Zang and Hao Li and Cheng Jin and Jiaqi Wang
Unified Reward Model for Multimodal Understanding and Generation
project page: https://codegoat24.github.io/UnifiedReward/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in human preference alignment have significantly enhanced multimodal generation and understanding. A key approach is training reward models to guide preference optimization. However, existing models are often task-specific, limiting their adaptability across diverse visual applications. We also argue that jointly learning to assess multiple tasks may foster a synergistic effect, where improved image understanding enhances image generation assessment, and refined image evaluation benefits video assessment through better frame analysis. To this end, this paper proposes UnifiedReward, the first unified reward model for multimodal understanding and generation assessment, enabling both pairwise ranking and pointwise scoring, which can be employed for vision model preference alignment. Specifically, (1) we first develop UnifiedReward on our constructed large-scale human preference dataset, including both image and video generation/understanding tasks. (2) Then, it is utilized to automatically construct high-quality preference pair data based on the vision models, progressively filtering their outputs through pairwise ranking and pointwise sifting. (3) Finally, these data are used for their preference alignment through Direct Preference Optimization (DPO). Experimental results demonstrate that joint learning to assess diverse visual tasks can lead to substantial mutual benefits, and we apply our pipeline to both image and video understanding/generation tasks, significantly improving the performance in each domain.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 08:36:05 GMT" } ]
2025-03-10T00:00:00
[ [ "Wang", "Yibin", "" ], [ "Zang", "Yuhang", "" ], [ "Li", "Hao", "" ], [ "Jin", "Cheng", "" ], [ "Wang", "Jiaqi", "" ] ]
TITLE: Unified Reward Model for Multimodal Understanding and Generation ABSTRACT: Recent advances in human preference alignment have significantly enhanced multimodal generation and understanding. A key approach is training reward models to guide preference optimization. However, existing models are often task-specific, limiting their adaptability across diverse visual applications. We also argue that jointly learning to assess multiple tasks may foster a synergistic effect, where improved image understanding enhances image generation assessment, and refined image evaluation benefits video assessment through better frame analysis. To this end, this paper proposes UnifiedReward, the first unified reward model for multimodal understanding and generation assessment, enabling both pairwise ranking and pointwise scoring, which can be employed for vision model preference alignment. Specifically, (1) we first develop UnifiedReward on our constructed large-scale human preference dataset, including both image and video generation/understanding tasks. (2) Then, it is utilized to automatically construct high-quality preference pair data based on the vision models, progressively filtering their outputs through pairwise ranking and pointwise sifting. (3) Finally, these data are used for their preference alignment through Direct Preference Optimization (DPO). Experimental results demonstrate that joint learning to assess diverse visual tasks can lead to substantial mutual benefits, and we apply our pipeline to both image and video understanding/generation tasks, significantly improving the performance in each domain.
new_dataset
0.964589
2503.05251
Daniele Palossi
Lorenzo Scarciglia, Antonio Paolillo, Daniele Palossi
A Map-free Deep Learning-based Framework for Gate-to-Gate Monocular Visual Navigation aboard Miniaturized Aerial Vehicles
©2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Palm-sized autonomous nano-drones, i.e., sub-50g in weight, recently entered the drone racing scenario, where they are tasked to avoid obstacles and navigate as fast as possible through gates. However, in contrast with their bigger counterparts, i.e., kg-scale drones, nano-drones expose three orders of magnitude less onboard memory and compute power, demanding more efficient and lightweight vision-based pipelines to win the race. This work presents a map-free vision-based (using only a monocular camera) autonomous nano-drone that combines a real-time deep learning gate detection front-end with a classic yet elegant and effective visual servoing control back-end, relying only on onboard resources. Starting from two state-of-the-art tiny deep learning models, we adapt them for our specific task, and after a mixed simulator-real-world training, we integrate and deploy them aboard our nano-drone. Our best-performing pipeline costs only 24M multiply-accumulate operations per frame, resulting in a closed-loop control performance of 30 Hz, while achieving a gate detection root mean square error of 1.4 pixels on our ~20k real-world image dataset. In-field experiments highlight the capability of our nano-drone to successfully navigate through 15 gates in 4 min, never crashing and covering a total travel distance of ~100m, with a peak flight speed of 1.9 m/s. Finally, to stress the generalization capability of our system, we also test it in a never-seen-before environment, where it navigates through gates for more than 4 min.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 09:07:07 GMT" } ]
2025-03-10T00:00:00
[ [ "Scarciglia", "Lorenzo", "" ], [ "Paolillo", "Antonio", "" ], [ "Palossi", "Daniele", "" ] ]
TITLE: A Map-free Deep Learning-based Framework for Gate-to-Gate Monocular Visual Navigation aboard Miniaturized Aerial Vehicles ABSTRACT: Palm-sized autonomous nano-drones, i.e., sub-50g in weight, recently entered the drone racing scenario, where they are tasked to avoid obstacles and navigate as fast as possible through gates. However, in contrast with their bigger counterparts, i.e., kg-scale drones, nano-drones expose three orders of magnitude less onboard memory and compute power, demanding more efficient and lightweight vision-based pipelines to win the race. This work presents a map-free vision-based (using only a monocular camera) autonomous nano-drone that combines a real-time deep learning gate detection front-end with a classic yet elegant and effective visual servoing control back-end, relying only on onboard resources. Starting from two state-of-the-art tiny deep learning models, we adapt them for our specific task, and after a mixed simulator-real-world training, we integrate and deploy them aboard our nano-drone. Our best-performing pipeline costs only 24M multiply-accumulate operations per frame, resulting in a closed-loop control performance of 30 Hz, while achieving a gate detection root mean square error of 1.4 pixels on our ~20k real-world image dataset. In-field experiments highlight the capability of our nano-drone to successfully navigate through 15 gates in 4 min, never crashing and covering a total travel distance of ~100m, with a peak flight speed of 1.9 m/s. Finally, to stress the generalization capability of our system, we also test it in a never-seen-before environment, where it navigates through gates for more than 4 min.
no_new_dataset
0.834811
2503.05255
Yan Xia
Guanghao Zhang, Tao Zhong, Yan Xia, Zhelun Yu, Haoyuan Li, Wanggui He, Fangxun Shu, Mushui Liu, Dong She, Yi Wang, Hao Jiang
CMMCoT: Enhancing Complex Multi-Image Comprehension via Multi-Modal Chain-of-Thought and Memory Augmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While previous multimodal slow-thinking methods have demonstrated remarkable success in single-image understanding scenarios, their effectiveness becomes fundamentally constrained when extended to more complex multi-image comprehension tasks. This limitation stems from their predominant reliance on text-based intermediate reasoning processes. In contrast, humans engaging in sophisticated multi-image analysis typically perform two complementary cognitive operations: (1) continuous cross-image visual comparison through region-of-interest matching, and (2) dynamic memorization of critical visual concepts throughout the reasoning chain. Motivated by these observations, we propose the Complex Multi-Modal Chain-of-Thought (CMMCoT) framework, a multi-step reasoning framework that mimics human-like "slow thinking" for multi-image understanding. Our approach incorporates two key innovations: 1. The construction of interleaved multimodal multi-step reasoning chains, which utilize critical visual region tokens, extracted from intermediate reasoning steps, as supervisory signals. This mechanism not only facilitates comprehensive cross-modal understanding but also enhances model interpretability. 2. The introduction of a test-time memory augmentation module that expands the model's reasoning capacity during inference while preserving parameter efficiency. Furthermore, to facilitate research in this direction, we have curated a novel multi-image slow-thinking dataset. Extensive experiments demonstrate the effectiveness of our model.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 09:13:17 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhang", "Guanghao", "" ], [ "Zhong", "Tao", "" ], [ "Xia", "Yan", "" ], [ "Yu", "Zhelun", "" ], [ "Li", "Haoyuan", "" ], [ "He", "Wanggui", "" ], [ "Shu", "Fangxun", "" ], [ "Liu", "Mushui", "" ], [ "She", "Dong", "" ], [ "Wang", "Yi", "" ], [ "Jiang", "Hao", "" ] ]
TITLE: CMMCoT: Enhancing Complex Multi-Image Comprehension via Multi-Modal Chain-of-Thought and Memory Augmentation ABSTRACT: While previous multimodal slow-thinking methods have demonstrated remarkable success in single-image understanding scenarios, their effectiveness becomes fundamentally constrained when extended to more complex multi-image comprehension tasks. This limitation stems from their predominant reliance on text-based intermediate reasoning processes. In contrast, humans engaging in sophisticated multi-image analysis typically perform two complementary cognitive operations: (1) continuous cross-image visual comparison through region-of-interest matching, and (2) dynamic memorization of critical visual concepts throughout the reasoning chain. Motivated by these observations, we propose the Complex Multi-Modal Chain-of-Thought (CMMCoT) framework, a multi-step reasoning framework that mimics human-like "slow thinking" for multi-image understanding. Our approach incorporates two key innovations: 1. The construction of interleaved multimodal multi-step reasoning chains, which utilize critical visual region tokens, extracted from intermediate reasoning steps, as supervisory signals. This mechanism not only facilitates comprehensive cross-modal understanding but also enhances model interpretability. 2. The introduction of a test-time memory augmentation module that expands the model's reasoning capacity during inference while preserving parameter efficiency. Furthermore, to facilitate research in this direction, we have curated a novel multi-image slow-thinking dataset. Extensive experiments demonstrate the effectiveness of our model.
new_dataset
0.864996
2503.05260
Matteo Ceccarello
Matteo Ceccarello, Andrea Pietracaprina, Geppino Pucci, Francesco Vison\`a
Fair Center Clustering in Sliding Windows
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
The $k$-center problem requires the selection of $k$ points (centers) from a given metric pointset $W$ so as to minimize the maximum distance of any point of $W$ from the closest center. This paper focuses on a fair variant of the problem, known as \emph{fair center}, where each input point belongs to some category and each category may contribute a limited number of points to the center set. We present the first space-efficient streaming algorithm for fair center in general metrics, under the sliding window model. At any time $t$, the algorithm is able to provide a solution for the current window whose quality is almost as good as the one guaranteed by the best, polynomial-time sequential algorithms run on the entire window, and exhibits space and time requirements independent of the window size. Our theoretical results are backed by an extensive set of experiments on both real-world and synthetic datasets, which provide evidence of the practical viability of the algorithm.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 09:19:56 GMT" } ]
2025-03-10T00:00:00
[ [ "Ceccarello", "Matteo", "" ], [ "Pietracaprina", "Andrea", "" ], [ "Pucci", "Geppino", "" ], [ "Visonà", "Francesco", "" ] ]
TITLE: Fair Center Clustering in Sliding Windows ABSTRACT: The $k$-center problem requires the selection of $k$ points (centers) from a given metric pointset $W$ so as to minimize the maximum distance of any point of $W$ from the closest center. This paper focuses on a fair variant of the problem, known as \emph{fair center}, where each input point belongs to some category and each category may contribute a limited number of points to the center set. We present the first space-efficient streaming algorithm for fair center in general metrics, under the sliding window model. At any time $t$, the algorithm is able to provide a solution for the current window whose quality is almost as good as the one guaranteed by the best, polynomial-time sequential algorithms run on the entire window, and exhibits space and time requirements independent of the window size. Our theoretical results are backed by an extensive set of experiments on both real-world and synthetic datasets, which provide evidence of the practical viability of the algorithm.
no_new_dataset
0.944893
2503.05268
Francesco Cazzaro
Francesco Cazzaro, Justin Kleindienst, Sofia Marquez, Ariadna Quattoni
ZOGRASCOPE: A New Benchmark for Property Graphs
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Natural language interfaces to knowledge graphs have become increasingly important in recent years, enabling easy and efficient access to structured data. In particular, property graphs have seen growing adoption. However, these kinds of graphs remain relatively underrepresented in research, which has focused in large part on RDF-style graphs. In fact, there is a lack of resources for evaluating systems on property graphs, with many existing datasets featuring relatively simple queries. To address this gap, we introduce ZOGRASCOPE, a benchmark designed specifically for the Cypher query language. The benchmark includes a diverse set of manually annotated queries of varying complexity. We complement this paper with a set of experiments that test the performance of out-of-the-box LLMs of different sizes. Our experiments show that semantic parsing over graphs is still a challenging open problem that cannot be solved by prompting LLMs alone.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 09:33:30 GMT" } ]
2025-03-10T00:00:00
[ [ "Cazzaro", "Francesco", "" ], [ "Kleindienst", "Justin", "" ], [ "Marquez", "Sofia", "" ], [ "Quattoni", "Ariadna", "" ] ]
TITLE: ZOGRASCOPE: A New Benchmark for Property Graphs ABSTRACT: Natural language interfaces to knowledge graphs have become increasingly important in recent years, enabling easy and efficient access to structured data. In particular, property graphs have seen growing adoption. However, these kinds of graphs remain relatively underrepresented in research, which has focused in large part on RDF-style graphs. In fact, there is a lack of resources for evaluating systems on property graphs, with many existing datasets featuring relatively simple queries. To address this gap, we introduce ZOGRASCOPE, a benchmark designed specifically for the Cypher query language. The benchmark includes a diverse set of manually annotated queries of varying complexity. We complement this paper with a set of experiments that test the performance of out-of-the-box LLMs of different sizes. Our experiments show that semantic parsing over graphs is still a challenging open problem that cannot be solved by prompting LLMs alone.
new_dataset
0.961498
2503.05274
Mohammad Sajad Marvi
Sajad Marvi and Christoph Rist and Julian Schmidt and Julian Jordan and Abhinav Valada
Evidential Uncertainty Estimation for Multi-Modal Trajectory Prediction
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate trajectory prediction is crucial for autonomous driving, yet uncertainty in agent behavior and perception noise makes it inherently challenging. While multi-modal trajectory prediction models generate multiple plausible future paths with associated probabilities, effectively quantifying uncertainty remains an open problem. In this work, we propose a novel multi-modal trajectory prediction approach based on evidential deep learning that estimates both positional and mode probability uncertainty in real time. Our approach leverages a Normal Inverse Gamma distribution for positional uncertainty and a Dirichlet distribution for mode uncertainty. Unlike sampling-based methods, it infers both types of uncertainty in a single forward pass, significantly improving efficiency. Additionally, we experimented with uncertainty-driven importance sampling to improve training efficiency by prioritizing underrepresented high-uncertainty samples over redundant ones. We perform extensive evaluations of our method on the Argoverse 1 and Argoverse 2 datasets, demonstrating that it provides reliable uncertainty estimates while maintaining high trajectory prediction accuracy.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 09:46:21 GMT" } ]
2025-03-10T00:00:00
[ [ "Marvi", "Sajad", "" ], [ "Rist", "Christoph", "" ], [ "Schmidt", "Julian", "" ], [ "Jordan", "Julian", "" ], [ "Valada", "Abhinav", "" ] ]
TITLE: Evidential Uncertainty Estimation for Multi-Modal Trajectory Prediction ABSTRACT: Accurate trajectory prediction is crucial for autonomous driving, yet uncertainty in agent behavior and perception noise makes it inherently challenging. While multi-modal trajectory prediction models generate multiple plausible future paths with associated probabilities, effectively quantifying uncertainty remains an open problem. In this work, we propose a novel multi-modal trajectory prediction approach based on evidential deep learning that estimates both positional and mode probability uncertainty in real time. Our approach leverages a Normal Inverse Gamma distribution for positional uncertainty and a Dirichlet distribution for mode uncertainty. Unlike sampling-based methods, it infers both types of uncertainty in a single forward pass, significantly improving efficiency. Additionally, we experimented with uncertainty-driven importance sampling to improve training efficiency by prioritizing underrepresented high-uncertainty samples over redundant ones. We perform extensive evaluations of our method on the Argoverse 1 and Argoverse 2 datasets, demonstrating that it provides reliable uncertainty estimates while maintaining high trajectory prediction accuracy.
no_new_dataset
0.944944
2503.05283
Souhail Hadgi
Souhail Hadgi, Luca Moschella, Andrea Santilli, Diego Gomez, Qixing Huang, Emanuele Rodol\`a, Simone Melzi, Maks Ovsjanikov
Escaping Plato's Cave: Towards the Alignment of 3D and Text Latent Spaces
Accepted at CVPR 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent works have shown that, when trained at scale, uni-modal 2D vision and text encoders converge to learned features that share remarkable structural properties, despite arising from different representations. However, the role of 3D encoders with respect to other modalities remains unexplored. Furthermore, existing 3D foundation models that leverage large datasets are typically trained with explicit alignment objectives with respect to frozen encoders from other representations. In this work, we investigate the possibility of a posteriori alignment of representations obtained from uni-modal 3D encoders with text-based feature spaces. We show that naive post-training feature alignment of uni-modal text and 3D encoders results in limited performance. We then focus on extracting subspaces of the corresponding feature spaces and discover that by projecting learned representations onto well-chosen lower-dimensional subspaces the quality of alignment becomes significantly higher, leading to improved accuracy on matching and retrieval tasks. Our analysis further sheds light on the nature of these shared subspaces, which roughly separate between semantic and geometric data representations. Overall, ours is the first work that helps to establish a baseline for post-training alignment of 3D uni-modal and text feature spaces, and helps to highlight both the shared and unique properties of 3D data compared to other representations.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 09:51:56 GMT" } ]
2025-03-10T00:00:00
[ [ "Hadgi", "Souhail", "" ], [ "Moschella", "Luca", "" ], [ "Santilli", "Andrea", "" ], [ "Gomez", "Diego", "" ], [ "Huang", "Qixing", "" ], [ "Rodolà", "Emanuele", "" ], [ "Melzi", "Simone", "" ], [ "Ovsjanikov", "Maks", "" ] ]
TITLE: Escaping Plato's Cave: Towards the Alignment of 3D and Text Latent Spaces ABSTRACT: Recent works have shown that, when trained at scale, uni-modal 2D vision and text encoders converge to learned features that share remarkable structural properties, despite arising from different representations. However, the role of 3D encoders with respect to other modalities remains unexplored. Furthermore, existing 3D foundation models that leverage large datasets are typically trained with explicit alignment objectives with respect to frozen encoders from other representations. In this work, we investigate the possibility of a posteriori alignment of representations obtained from uni-modal 3D encoders with text-based feature spaces. We show that naive post-training feature alignment of uni-modal text and 3D encoders results in limited performance. We then focus on extracting subspaces of the corresponding feature spaces and discover that by projecting learned representations onto well-chosen lower-dimensional subspaces the quality of alignment becomes significantly higher, leading to improved accuracy on matching and retrieval tasks. Our analysis further sheds light on the nature of these shared subspaces, which roughly separate between semantic and geometric data representations. Overall, ours is the first work that helps to establish a baseline for post-training alignment of 3D uni-modal and text feature spaces, and helps to highlight both the shared and unique properties of 3D data compared to other representations.
no_new_dataset
0.94428
2503.05289
Eliav Mor
Eliav Mor and Yair Carmon
An Analytical Model for Overparameterized Learning Under Class Imbalance
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We study class-imbalanced linear classification in a high-dimensional Gaussian mixture model. We develop a tight, closed-form approximation for the test error of several practical learning methods, including logit adjustment and class-dependent temperature. Our approximation allows us to analytically tune and compare these methods, highlighting how and when they overcome the pitfalls of standard cross-entropy minimization. We test our theoretical findings on simulated data and imbalanced CIFAR10, MNIST and FashionMNIST datasets.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 10:09:16 GMT" } ]
2025-03-10T00:00:00
[ [ "Mor", "Eliav", "" ], [ "Carmon", "Yair", "" ] ]
TITLE: An Analytical Model for Overparameterized Learning Under Class Imbalance ABSTRACT: We study class-imbalanced linear classification in a high-dimensional Gaussian mixture model. We develop a tight, closed-form approximation for the test error of several practical learning methods, including logit adjustment and class-dependent temperature. Our approximation allows us to analytically tune and compare these methods, highlighting how and when they overcome the pitfalls of standard cross-entropy minimization. We test our theoretical findings on simulated data and imbalanced CIFAR10, MNIST and FashionMNIST datasets.
no_new_dataset
0.948058
2503.05301
Adrian Pfisterer
Adrian Pfisterer, Xing Li, Vito Mengers, Oliver Brock
A Helping (Human) Hand in Kinematic Structure Estimation
Accepted at ICRA25; 8 pages + 7 figures; For supplementary material, see https://www.tu.berlin/robotics/papers/helpinghands
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Visual uncertainties such as occlusions, lack of texture, and noise present significant challenges in obtaining accurate kinematic models for safe robotic manipulation. We introduce a probabilistic real-time approach that leverages the human hand as a prior to mitigate these uncertainties. By tracking the constrained motion of the human hand during manipulation and explicitly modeling uncertainties in visual observations, our method reliably estimates an object's kinematic model online. We validate our approach on a novel dataset featuring challenging objects that are occluded during manipulation and offer limited articulations for perception. The results demonstrate that by incorporating an appropriate prior and explicitly accounting for uncertainties, our method produces accurate estimates, outperforming two recent baselines by 195% and 140%, respectively. Furthermore, we demonstrate that our approach's estimates are precise enough to allow a robot to manipulate even small objects safely.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 10:29:25 GMT" } ]
2025-03-10T00:00:00
[ [ "Pfisterer", "Adrian", "" ], [ "Li", "Xing", "" ], [ "Mengers", "Vito", "" ], [ "Brock", "Oliver", "" ] ]
TITLE: A Helping (Human) Hand in Kinematic Structure Estimation ABSTRACT: Visual uncertainties such as occlusions, lack of texture, and noise present significant challenges in obtaining accurate kinematic models for safe robotic manipulation. We introduce a probabilistic real-time approach that leverages the human hand as a prior to mitigate these uncertainties. By tracking the constrained motion of the human hand during manipulation and explicitly modeling uncertainties in visual observations, our method reliably estimates an object's kinematic model online. We validate our approach on a novel dataset featuring challenging objects that are occluded during manipulation and offer limited articulations for perception. The results demonstrate that by incorporating an appropriate prior and explicitly accounting for uncertainties, our method produces accurate estimates, outperforming two recent baselines by 195% and 140%, respectively. Furthermore, we demonstrate that our approach's estimates are precise enough to allow a robot to manipulate even small objects safely.
new_dataset
0.961025
2503.05305
Hu Yu
Hu Yu, Hao Luo, Hangjie Yuan, Yu Rong, Feng Zhao
Frequency Autoregressive Image Generation with Continuous Tokens
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Autoregressive (AR) models for image generation typically adopt a two-stage paradigm of vector quantization and raster-scan ``next-token prediction'', inspired by its great success in language modeling. However, due to the huge modality gap, image autoregressive models may require a systematic reevaluation from two perspectives: tokenizer format and regression direction. In this paper, we introduce the frequency progressive autoregressive (\textbf{FAR}) paradigm and instantiate FAR with the continuous tokenizer. Specifically, we identify spectral dependency as the desirable regression direction for FAR, wherein higher-frequency components build upon the lower ones to progressively construct a complete image. This design seamlessly fits the causality requirement for autoregressive models and preserves the unique spatial locality of image data. Besides, we delve into the integration of FAR and the continuous tokenizer, introducing a series of techniques to address optimization challenges and improve the efficiency of training and inference processes. We demonstrate the efficacy of FAR through comprehensive experiments on the ImageNet dataset and verify its potential on text-to-image generation.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 10:34:04 GMT" } ]
2025-03-10T00:00:00
[ [ "Yu", "Hu", "" ], [ "Luo", "Hao", "" ], [ "Yuan", "Hangjie", "" ], [ "Rong", "Yu", "" ], [ "Zhao", "Feng", "" ] ]
TITLE: Frequency Autoregressive Image Generation with Continuous Tokens ABSTRACT: Autoregressive (AR) models for image generation typically adopt a two-stage paradigm of vector quantization and raster-scan ``next-token prediction'', inspired by its great success in language modeling. However, due to the huge modality gap, image autoregressive models may require a systematic reevaluation from two perspectives: tokenizer format and regression direction. In this paper, we introduce the frequency progressive autoregressive (\textbf{FAR}) paradigm and instantiate FAR with the continuous tokenizer. Specifically, we identify spectral dependency as the desirable regression direction for FAR, wherein higher-frequency components build upon the lower ones to progressively construct a complete image. This design seamlessly fits the causality requirement for autoregressive models and preserves the unique spatial locality of image data. Besides, we delve into the integration of FAR and the continuous tokenizer, introducing a series of techniques to address optimization challenges and improve the efficiency of training and inference processes. We demonstrate the efficacy of FAR through comprehensive experiments on the ImageNet dataset and verify its potential on text-to-image generation.
no_new_dataset
0.949295
2503.05306
Hyungkyu Kang
Hyungkyu Kang and Min-hwan Oh
Adversarial Policy Optimization for Offline Preference-based Reinforcement Learning
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study offline preference-based reinforcement learning (PbRL), where learning is based on pre-collected preference feedback over pairs of trajectories. While offline PbRL has demonstrated remarkable empirical success, existing theoretical approaches face challenges in ensuring conservatism under uncertainty, requiring computationally intractable confidence set constructions. We address this limitation by proposing Adversarial Preference-based Policy Optimization (APPO), a computationally efficient algorithm for offline PbRL that guarantees sample complexity bounds without relying on explicit confidence sets. By framing PbRL as a two-player game between a policy and a model, our approach enforces conservatism in a tractable manner. Using standard assumptions on function approximation and bounded trajectory concentrability, we derive a sample complexity bound. To our knowledge, APPO is the first offline PbRL algorithm to offer both statistical efficiency and practical applicability. Experimental results on continuous control tasks demonstrate that APPO effectively learns from complex datasets, showing comparable performance with existing state-of-the-art methods.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 10:35:01 GMT" } ]
2025-03-10T00:00:00
[ [ "Kang", "Hyungkyu", "" ], [ "Oh", "Min-hwan", "" ] ]
TITLE: Adversarial Policy Optimization for Offline Preference-based Reinforcement Learning ABSTRACT: In this paper, we study offline preference-based reinforcement learning (PbRL), where learning is based on pre-collected preference feedback over pairs of trajectories. While offline PbRL has demonstrated remarkable empirical success, existing theoretical approaches face challenges in ensuring conservatism under uncertainty, requiring computationally intractable confidence set constructions. We address this limitation by proposing Adversarial Preference-based Policy Optimization (APPO), a computationally efficient algorithm for offline PbRL that guarantees sample complexity bounds without relying on explicit confidence sets. By framing PbRL as a two-player game between a policy and a model, our approach enforces conservatism in a tractable manner. Using standard assumptions on function approximation and bounded trajectory concentrability, we derive a sample complexity bound. To our knowledge, APPO is the first offline PbRL algorithm to offer both statistical efficiency and practical applicability. Experimental results on continuous control tasks demonstrate that APPO effectively learns from complex datasets, showing comparable performance with existing state-of-the-art methods.
no_new_dataset
0.946349
2503.05319
Xinkun Wang
Xinkun Wang, Yifang Wang, Senwei Liang, Feilong Tang, Chengzhi Liu, Ming Hu, Chao Hu, Junjun He, Zongyuan Ge, and Imran Razzak
Robust Multimodal Learning for Ophthalmic Disease Grading via Disentangled Representation
10pages
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper discusses how ophthalmologists often rely on multimodal data to improve diagnostic accuracy. However, complete multimodal data is rare in real-world applications due to a lack of medical equipment and concerns about data privacy. Traditional deep learning methods typically address these issues by learning representations in latent space. However, the paper highlights two key limitations of these approaches: (i) Task-irrelevant redundant information (e.g., numerous slices) in complex modalities leads to significant redundancy in latent space representations. (ii) Overlapping multimodal representations make it difficult to extract unique features for each modality. To overcome these challenges, the authors propose the Essence-Point and Disentangle Representation Learning (EDRL) strategy, which integrates a self-distillation mechanism into an end-to-end framework to enhance feature selection and disentanglement for more robust multimodal learning. Specifically, the Essence-Point Representation Learning module selects discriminative features that improve disease grading performance. The Disentangled Representation Learning module separates multimodal data into modality-common and modality-unique representations, reducing feature entanglement and enhancing both robustness and interpretability in ophthalmic disease diagnosis. Experiments on multimodal ophthalmology datasets show that the proposed EDRL strategy significantly outperforms current state-of-the-art methods.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 10:58:38 GMT" } ]
2025-03-10T00:00:00
[ [ "Wang", "Xinkun", "" ], [ "Wang", "Yifang", "" ], [ "Liang", "Senwei", "" ], [ "Tang", "Feilong", "" ], [ "Liu", "Chengzhi", "" ], [ "Hu", "Ming", "" ], [ "Hu", "Chao", "" ], [ "He", "Junjun", "" ], [ "Ge", "Zongyuan", "" ], [ "Razzak", "Imran", "" ] ]
TITLE: Robust Multimodal Learning for Ophthalmic Disease Grading via Disentangled Representation ABSTRACT: This paper discusses how ophthalmologists often rely on multimodal data to improve diagnostic accuracy. However, complete multimodal data is rare in real-world applications due to a lack of medical equipment and concerns about data privacy. Traditional deep learning methods typically address these issues by learning representations in latent space. However, the paper highlights two key limitations of these approaches: (i) Task-irrelevant redundant information (e.g., numerous slices) in complex modalities leads to significant redundancy in latent space representations. (ii) Overlapping multimodal representations make it difficult to extract unique features for each modality. To overcome these challenges, the authors propose the Essence-Point and Disentangle Representation Learning (EDRL) strategy, which integrates a self-distillation mechanism into an end-to-end framework to enhance feature selection and disentanglement for more robust multimodal learning. Specifically, the Essence-Point Representation Learning module selects discriminative features that improve disease grading performance. The Disentangled Representation Learning module separates multimodal data into modality-common and modality-unique representations, reducing feature entanglement and enhancing both robustness and interpretability in ophthalmic disease diagnosis. Experiments on multimodal ophthalmology datasets show that the proposed EDRL strategy significantly outperforms current state-of-the-art methods.
no_new_dataset
0.95253
2503.05320
Zitao Fang
Zitao Fang, Guodong DU, Shuyang Yu, Yifei Guo, Yiwei Zhang, Jing Li, Ho-Kin Tang, Sim Kuan Goh
Disentangling Task Interference within Neurons: Model Merging in Alignment with Neuronal Mechanisms
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fine-tuning pre-trained models on targeted datasets enhances task-specific performance but often comes at the expense of generalization. Model merging techniques, which integrate multiple fine-tuned models into a single multi-task model through task arithmetic at various levels: model, layer, or parameter, offer a promising solution. However, task interference remains a fundamental challenge, leading to performance degradation and suboptimal merged models. Existing approaches largely overlook the fundamental role of individual neurons and their connectivity, resulting in a lack of interpretability in both the merging process and the merged models. In this work, we present the first study on the impact of neuronal alignment in model merging. We decompose task-specific representations into two complementary neuronal subspaces that regulate neuron sensitivity and input adaptability. Leveraging this decomposition, we introduce NeuroMerging, a novel merging framework developed to mitigate task interference within neuronal subspaces, enabling training-free model fusion across diverse tasks. Through extensive experiments, we demonstrate that NeuroMerging achieves superior performance compared to existing methods on multi-task benchmarks across both vision and natural language domains. Our findings highlight the importance of aligning neuronal mechanisms in model merging, offering new insights into mitigating task interference and improving knowledge fusion.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 11:00:24 GMT" } ]
2025-03-10T00:00:00
[ [ "Fang", "Zitao", "" ], [ "DU", "Guodong", "" ], [ "Yu", "Shuyang", "" ], [ "Guo", "Yifei", "" ], [ "Zhang", "Yiwei", "" ], [ "Li", "Jing", "" ], [ "Tang", "Ho-Kin", "" ], [ "Goh", "Sim Kuan", "" ] ]
TITLE: Disentangling Task Interference within Neurons: Model Merging in Alignment with Neuronal Mechanisms ABSTRACT: Fine-tuning pre-trained models on targeted datasets enhances task-specific performance but often comes at the expense of generalization. Model merging techniques, which integrate multiple fine-tuned models into a single multi-task model through task arithmetic at various levels: model, layer, or parameter, offer a promising solution. However, task interference remains a fundamental challenge, leading to performance degradation and suboptimal merged models. Existing approaches largely overlook the fundamental role of individual neurons and their connectivity, resulting in a lack of interpretability in both the merging process and the merged models. In this work, we present the first study on the impact of neuronal alignment in model merging. We decompose task-specific representations into two complementary neuronal subspaces that regulate neuron sensitivity and input adaptability. Leveraging this decomposition, we introduce NeuroMerging, a novel merging framework developed to mitigate task interference within neuronal subspaces, enabling training-free model fusion across diverse tasks. Through extensive experiments, we demonstrate that NeuroMerging achieves superior performance compared to existing methods on multi-task benchmarks across both vision and natural language domains. Our findings highlight the importance of aligning neuronal mechanisms in model merging, offering new insights into mitigating task interference and improving knowledge fusion.
no_new_dataset
0.942348
2503.05328
Anar Yeginbergen
Anar Yeginbergen and Maite Oronoz and Rodrigo Agerri
Dynamic Knowledge Integration for Evidence-Driven Counter-Argument Generation with Large Language Models
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper investigates the role of dynamic external knowledge integration in improving counter-argument generation using Large Language Models (LLMs). While LLMs have shown promise in argumentative tasks, their tendency to generate lengthy, potentially unfactual responses highlights the need for more controlled and evidence-based approaches. We introduce a new manually curated dataset of argument and counter-argument pairs specifically designed to balance argumentative complexity with evaluative feasibility. We also propose a new LLM-as-a-Judge evaluation methodology that shows a stronger correlation with human judgments compared to traditional reference-based metrics. Our experimental results demonstrate that integrating dynamic external knowledge from the web significantly improves the quality of generated counter-arguments, particularly in terms of relatedness, persuasiveness, and factuality. The findings suggest that combining LLMs with real-time external knowledge retrieval offers a promising direction for developing more effective and reliable counter-argumentation systems.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 11:13:33 GMT" } ]
2025-03-10T00:00:00
[ [ "Yeginbergen", "Anar", "" ], [ "Oronoz", "Maite", "" ], [ "Agerri", "Rodrigo", "" ] ]
TITLE: Dynamic Knowledge Integration for Evidence-Driven Counter-Argument Generation with Large Language Models ABSTRACT: This paper investigates the role of dynamic external knowledge integration in improving counter-argument generation using Large Language Models (LLMs). While LLMs have shown promise in argumentative tasks, their tendency to generate lengthy, potentially unfactual responses highlights the need for more controlled and evidence-based approaches. We introduce a new manually curated dataset of argument and counter-argument pairs specifically designed to balance argumentative complexity with evaluative feasibility. We also propose a new LLM-as-a-Judge evaluation methodology that shows a stronger correlation with human judgments compared to traditional reference-based metrics. Our experimental results demonstrate that integrating dynamic external knowledge from the web significantly improves the quality of generated counter-arguments, particularly in terms of relatedness, persuasiveness, and factuality. The findings suggest that combining LLMs with real-time external knowledge retrieval offers a promising direction for developing more effective and reliable counter-argumentation systems.
new_dataset
0.952662
2503.05331
Matic Pikovnik
Matic Pikovnik (1) and \v{Z}iga Zaplotnik (2 and 1) ((1) University of Ljubljana, Faculty of Mathematics and Physics, Jadranska 19, 1000 Ljubljana, Slovenia, (2) European Centre for Medium-range Weather Forecasts, Robert-Schuman-Platz 3, 53175 Bonn, Germany)
The Changes of the Northern Hadley Cell Strength in Reanalyses and Radiosonde Observations
17 pages (12 main, 5 supplementary), 11 figures (4 main, 7 supplementary)
null
null
null
physics.ao-ph
http://creativecommons.org/licenses/by/4.0/
This study examines mean meridional winds and their trends in the Northern Hadley Cell (NHC) from 1980 to 2022 using reanalysis datasets and radiosonde observations. Compared to radiosonde data, reanalyses underestimate the mean upper-tropospheric poleward flow of the NHC but accurately capture the mean equatorward flow in the lower troposphere. While climate models generally project a weakening of the NHC, our study finds no significant trend in radiosonde observations, adding to the uncertainty in future climate projections. In contrast, reanalyses indicate a strengthening, primarily due to an intensification of the upper-tropospheric poleward flow. Our examination of ERA5 analysis increments confirms that the NHC strengthening trend in ERA5 is not an artefact of data assimilation. Instead, the increments correct the first guess, which underestimates the strength of the NHC, nudging it toward a stronger circulation.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 11:15:53 GMT" } ]
2025-03-10T00:00:00
[ [ "Pikovnik", "Matic", "", "2 and 1" ], [ "Zaplotnik", "Žiga", "", "2 and 1" ] ]
TITLE: The Changes of the Northern Hadley Cell Strength in Reanalyses and Radiosonde Observations ABSTRACT: This study examines mean meridional winds and their trends in the Northern Hadley Cell (NHC) from 1980 to 2022 using reanalysis datasets and radiosonde observations. Compared to radiosonde data, reanalyses underestimate the mean upper-tropospheric poleward flow of the NHC but accurately capture the mean equatorward flow in the lower troposphere. While climate models generally project a weakening of the NHC, our study finds no significant trend in radiosonde observations, adding to the uncertainty in future climate projections. In contrast, reanalyses indicate a strengthening, primarily due to an intensification of the upper-tropospheric poleward flow. Our examination of ERA5 analysis increments confirms that the NHC strengthening trend in ERA5 is not an artefact of data assimilation. Instead, the increments correct the first guess, which underestimates the strength of the NHC, nudging it toward a stronger circulation.
no_new_dataset
0.948394
2503.05332
Jungho Lee
Jungho Lee, Donghyeong Kim, Dogyoon Lee, Suhwan Cho, Minhyeok Lee, Wonjoon Lee, Taeoh Kim, Dongyoon Wee, Sangyoun Lee
CoMoGaussian: Continuous Motion-Aware Gaussian Splatting from Motion-Blurred Images
Revised Version of CRiM-GS, Github: https://github.com/Jho-Yonsei/CoMoGaussian
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
3D Gaussian Splatting (3DGS) has gained significant attention for its high-quality novel view rendering, motivating research to address real-world challenges. A critical issue is the camera motion blur caused by movement during exposure, which hinders accurate 3D scene reconstruction. In this study, we propose CoMoGaussian, a Continuous Motion-Aware Gaussian Splatting that reconstructs precise 3D scenes from motion-blurred images while maintaining real-time rendering speed. Considering the complex motion patterns inherent in real-world camera movements, we predict continuous camera trajectories using neural ordinary differential equations (ODEs). To ensure accurate modeling, we employ rigid body transformations, which preserve the shape and size of the object but rely on the discrete integration of sampled frames. To better approximate the continuous nature of motion blur, we introduce a continuous motion refinement (CMR) transformation that refines rigid transformations by incorporating additional learnable parameters. By revisiting fundamental camera theory and leveraging advanced neural ODE techniques, we achieve precise modeling of continuous camera trajectories, leading to improved reconstruction accuracy. Extensive experiments demonstrate state-of-the-art performance both quantitatively and qualitatively on benchmark datasets, which include a wide range of motion blur scenarios, from moderate to extreme blur.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 11:18:43 GMT" } ]
2025-03-10T00:00:00
[ [ "Lee", "Jungho", "" ], [ "Kim", "Donghyeong", "" ], [ "Lee", "Dogyoon", "" ], [ "Cho", "Suhwan", "" ], [ "Lee", "Minhyeok", "" ], [ "Lee", "Wonjoon", "" ], [ "Kim", "Taeoh", "" ], [ "Wee", "Dongyoon", "" ], [ "Lee", "Sangyoun", "" ] ]
TITLE: CoMoGaussian: Continuous Motion-Aware Gaussian Splatting from Motion-Blurred Images ABSTRACT: 3D Gaussian Splatting (3DGS) has gained significant attention for its high-quality novel view rendering, motivating research to address real-world challenges. A critical issue is the camera motion blur caused by movement during exposure, which hinders accurate 3D scene reconstruction. In this study, we propose CoMoGaussian, a Continuous Motion-Aware Gaussian Splatting that reconstructs precise 3D scenes from motion-blurred images while maintaining real-time rendering speed. Considering the complex motion patterns inherent in real-world camera movements, we predict continuous camera trajectories using neural ordinary differential equations (ODEs). To ensure accurate modeling, we employ rigid body transformations, which preserve the shape and size of the object but rely on the discrete integration of sampled frames. To better approximate the continuous nature of motion blur, we introduce a continuous motion refinement (CMR) transformation that refines rigid transformations by incorporating additional learnable parameters. By revisiting fundamental camera theory and leveraging advanced neural ODE techniques, we achieve precise modeling of continuous camera trajectories, leading to improved reconstruction accuracy. Extensive experiments demonstrate state-of-the-art performance both quantitatively and qualitatively on benchmark datasets, which include a wide range of motion blur scenarios, from moderate to extreme blur.
no_new_dataset
0.953622
2503.05333
Martin Spitznagel
Martin Spitznagel, Jan Vaillant, Janis Keuper
PhysicsGen: Can Generative Models Learn from Images to Predict Complex Physical Relations?
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The image-to-image translation abilities of generative learning models have recently made significant progress in the estimation of complex (steered) mappings between image distributions. While appearance-based tasks like image in-painting or style transfer have been studied at length, we propose to investigate the potential of generative models in the context of physical simulations. Providing a dataset of 300k image pairs and baseline evaluations for three different physical simulation tasks, we propose a benchmark to investigate the following research questions: i) are generative models able to learn complex physical relations from input-output image pairs? ii) what speedups can be achieved by replacing differential equation based simulations? While baseline evaluations of different current models show the potential for high speedups (ii), these results also show strong limitations regarding physical correctness (i). This underlines the need for new methods to enforce physical correctness. Data, baseline models, and evaluation code are available at http://www.physics-gen.org.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 11:19:13 GMT" } ]
2025-03-10T00:00:00
[ [ "Spitznagel", "Martin", "" ], [ "Vaillant", "Jan", "" ], [ "Keuper", "Janis", "" ] ]
TITLE: PhysicsGen: Can Generative Models Learn from Images to Predict Complex Physical Relations? ABSTRACT: The image-to-image translation abilities of generative learning models have recently made significant progress in the estimation of complex (steered) mappings between image distributions. While appearance-based tasks like image in-painting or style transfer have been studied at length, we propose to investigate the potential of generative models in the context of physical simulations. Providing a dataset of 300k image pairs and baseline evaluations for three different physical simulation tasks, we propose a benchmark to investigate the following research questions: i) are generative models able to learn complex physical relations from input-output image pairs? ii) what speedups can be achieved by replacing differential equation based simulations? While baseline evaluations of different current models show the potential for high speedups (ii), these results also show strong limitations regarding physical correctness (i). This underlines the need for new methods to enforce physical correctness. Data, baseline models, and evaluation code are available at http://www.physics-gen.org.
new_dataset
0.957715
2503.05335
Joel Honkamaa
Joel Honkamaa and Pekka Marttinen
New multimodal similarity measure for image registration via modeling local functional dependence with linear combination of learned basis functions
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The deformable registration of images of different modalities, essential in many medical imaging applications, remains challenging. The main challenge is developing a robust measure for image overlap despite the compared images capturing different aspects of the underlying tissue. Here, we explore similarity metrics based on functional dependence between intensity values of registered images. Although functional dependence is too restrictive on the global scale, earlier work has shown competitive performance in deformable registration when such measures are applied over small enough contexts. We confirm this finding and further develop the idea by modeling local functional dependence via the linear basis function model with the basis functions learned jointly with the deformation. The measure can be implemented via convolutions, making it efficient to compute on GPUs. We release the method as an easy-to-use tool and show good performance on three datasets compared to well-established baseline and earlier functional dependence-based methods.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 11:22:33 GMT" } ]
2025-03-10T00:00:00
[ [ "Honkamaa", "Joel", "" ], [ "Marttinen", "Pekka", "" ] ]
TITLE: New multimodal similarity measure for image registration via modeling local functional dependence with linear combination of learned basis functions ABSTRACT: The deformable registration of images of different modalities, essential in many medical imaging applications, remains challenging. The main challenge is developing a robust measure for image overlap despite the compared images capturing different aspects of the underlying tissue. Here, we explore similarity metrics based on functional dependence between intensity values of registered images. Although functional dependence is too restrictive on the global scale, earlier work has shown competitive performance in deformable registration when such measures are applied over small enough contexts. We confirm this finding and further develop the idea by modeling local functional dependence via the linear basis function model with the basis functions learned jointly with the deformation. The measure can be implemented via convolutions, making it efficient to compute on GPUs. We release the method as an easy-to-use tool and show good performance on three datasets compared to well-established baseline and earlier functional dependence-based methods.
no_new_dataset
0.946349
2503.05339
Zhenxuan Zhang
Zhenxuan Zhang, Peiyuan Jing, Coraline Beitone, Jiahao Huang, Zhifan Gao, Guang Yang, and Pete Lally
Pretext Task Adversarial Learning for Unpaired Low-field to Ultra High-field MRI Synthesis
null
null
null
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given the scarcity and cost of high-field MRI, the synthesis of high-field MRI from low-field MRI holds significant potential when there is limited data for training downstream tasks (e.g. segmentation). Low-field MRI often suffers from a reduced signal-to-noise ratio (SNR) and spatial resolution compared to high-field MRI. However, synthesizing high-field MRI data presents challenges. These involve aligning image features across domains while preserving anatomical accuracy and enhancing fine details. To address these challenges, we propose a Pretext Task Adversarial (PTA) learning framework for high-field MRI synthesis from low-field MRI data. The framework comprises three processes: (1) The slice-wise gap perception (SGP) network aligns the slice inconsistencies of low-field and high-field datasets based on contrastive learning. (2) The local structure correction (LSC) network extracts local structures by restoring the locally rotated and masked images. (3) The pretext task-guided adversarial training process introduces additional supervision and incorporates a discriminator to improve image realism. Extensive experiments on the low-field to ultra-high-field task demonstrate the effectiveness of our method, achieving state-of-the-art performance (16.892 in FID, 1.933 in IS, and 0.324 in MS-SSIM). This enables the generation of high-quality high-field-like MRI data from low-field MRI data to augment training datasets for downstream tasks. The code is available at: https://github.com/Zhenxuan-Zhang/PTA4Unpaired_HF_MRI_SYN.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 11:28:55 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhang", "Zhenxuan", "" ], [ "Jing", "Peiyuan", "" ], [ "Beitone", "Coraline", "" ], [ "Huang", "Jiahao", "" ], [ "Gao", "Zhifan", "" ], [ "Yang", "Guang", "" ], [ "Lally", "Pete", "" ] ]
TITLE: Pretext Task Adversarial Learning for Unpaired Low-field to Ultra High-field MRI Synthesis ABSTRACT: Given the scarcity and cost of high-field MRI, the synthesis of high-field MRI from low-field MRI holds significant potential when there is limited data for training downstream tasks (e.g. segmentation). Low-field MRI often suffers from a reduced signal-to-noise ratio (SNR) and spatial resolution compared to high-field MRI. However, synthesizing high-field MRI data presents challenges. These involve aligning image features across domains while preserving anatomical accuracy and enhancing fine details. To address these challenges, we propose a Pretext Task Adversarial (PTA) learning framework for high-field MRI synthesis from low-field MRI data. The framework comprises three processes: (1) The slice-wise gap perception (SGP) network aligns the slice inconsistencies of low-field and high-field datasets based on contrastive learning. (2) The local structure correction (LSC) network extracts local structures by restoring the locally rotated and masked images. (3) The pretext task-guided adversarial training process introduces additional supervision and incorporates a discriminator to improve image realism. Extensive experiments on the low-field to ultra-high-field task demonstrate the effectiveness of our method, achieving state-of-the-art performance (16.892 in FID, 1.933 in IS, and 0.324 in MS-SSIM). This enables the generation of high-quality high-field-like MRI data from low-field MRI data to augment training datasets for downstream tasks. The code is available at: https://github.com/Zhenxuan-Zhang/PTA4Unpaired_HF_MRI_SYN.
no_new_dataset
0.952882
2503.05347
Zhenxuan Zhang
Zhenxuan Zhang, Kinhei Lee, Weihang Deng, Huichi Zhou, Zihao Jin, Jiahao Huang, Zhifan Gao, Dominic C Marshall, Yingying Fang, Guang Yang
GEMA-Score: Granular Explainable Multi-Agent Score for Radiology Report Evaluation
null
null
null
null
cs.CL cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic medical report generation supports clinical diagnosis, reduces the workload of radiologists, and holds the promise of improving diagnosis consistency. However, existing evaluation metrics primarily assess the accuracy of key medical information coverage in generated reports compared to human-written reports, while overlooking crucial details such as the location and certainty of reported abnormalities. These limitations hinder the comprehensive assessment of the reliability of generated reports and pose risks in their selection for clinical use. Therefore, we propose a Granular Explainable Multi-Agent Score (GEMA-Score) in this paper, which conducts both objective quantification and subjective evaluation through a large language model-based multi-agent workflow. Our GEMA-Score parses structured reports and employs NER-F1 calculations through interactive exchanges of information among agents to assess disease diagnosis, location, severity, and uncertainty. Additionally, an LLM-based scoring agent evaluates completeness, readability, and clinical terminology while providing explanatory feedback. Extensive experiments validate that GEMA-Score achieves the highest correlation with human expert evaluations on a public dataset, demonstrating its effectiveness in clinical scoring (Kendall coefficient = 0.70 for Rexval dataset and Kendall coefficient = 0.54 for RadEvalX dataset). The anonymous project demo is available at: https://github.com/Zhenxuan-Zhang/GEMA_score.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 11:42:22 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhang", "Zhenxuan", "" ], [ "Lee", "Kinhei", "" ], [ "Deng", "Weihang", "" ], [ "Zhou", "Huichi", "" ], [ "Jin", "Zihao", "" ], [ "Huang", "Jiahao", "" ], [ "Gao", "Zhifan", "" ], [ "Marshall", "Dominic C", "" ], [ "Fang", "Yingying", "" ], [ "Yang", "Guang", "" ] ]
TITLE: GEMA-Score: Granular Explainable Multi-Agent Score for Radiology Report Evaluation ABSTRACT: Automatic medical report generation supports clinical diagnosis, reduces the workload of radiologists, and holds the promise of improving diagnosis consistency. However, existing evaluation metrics primarily assess the accuracy of key medical information coverage in generated reports compared to human-written reports, while overlooking crucial details such as the location and certainty of reported abnormalities. These limitations hinder the comprehensive assessment of the reliability of generated reports and pose risks in their selection for clinical use. Therefore, we propose a Granular Explainable Multi-Agent Score (GEMA-Score) in this paper, which conducts both objective quantification and subjective evaluation through a large language model-based multi-agent workflow. Our GEMA-Score parses structured reports and employs NER-F1 calculations through interactive exchanges of information among agents to assess disease diagnosis, location, severity, and uncertainty. Additionally, an LLM-based scoring agent evaluates completeness, readability, and clinical terminology while providing explanatory feedback. Extensive experiments validate that GEMA-Score achieves the highest correlation with human expert evaluations on a public dataset, demonstrating its effectiveness in clinical scoring (Kendall coefficient = 0.70 for Rexval dataset and Kendall coefficient = 0.54 for RadEvalX dataset). The anonymous project demo is available at: https://github.com/Zhenxuan-Zhang/GEMA_score.
no_new_dataset
0.95594
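As a rough illustration of the NER-F1 component of GEMA-Score (2503.05347 above), the snippet below computes micro-averaged entity-level F1 between predicted and reference findings; the (finding, location, certainty) tuple schema is a hypothetical stand-in for the fields the agents extract (disease, location, severity, uncertainty).

```python
from collections import Counter

def entity_f1(pred_entities, gold_entities):
    """Micro entity-level F1 between predicted and reference findings.
    Each entity is a hashable tuple; the schema is an assumption."""
    pred, gold = Counter(pred_entities), Counter(gold_entities)
    tp = sum((pred & gold).values())          # exact-match true positives
    p = tp / max(sum(pred.values()), 1)
    r = tp / max(sum(gold.values()), 1)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

# Hypothetical report annotations:
pred = [("effusion", "left lower lobe", "definite"),
        ("nodule", "right upper lobe", "possible")]
gold = [("effusion", "left lower lobe", "definite"),
        ("pneumothorax", "apex", "definite")]
print(f"NER-F1 = {entity_f1(pred, gold):.2f}")  # -> 0.50
```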
2503.05349
Dingkun Liu
Dingkun Liu, Siyang Li, Ziwei Wang, Wei Li and Dongrui Wu
Spatial Distillation based Distribution Alignment (SDDA) for Cross-Headset EEG Classification
10 pages, 5 figures
null
null
null
cs.LG cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A non-invasive brain-computer interface (BCI) enables direct interaction between the user and external devices, typically via electroencephalogram (EEG) signals. However, decoding EEG signals across different headsets remains a significant challenge due to differences in the number and locations of the electrodes. To address this challenge, we propose a spatial distillation based distribution alignment (SDDA) approach for heterogeneous cross-headset transfer in non-invasive BCIs. SDDA first uses spatial distillation to make use of the full set of electrodes, and then applies input/feature/output-space distribution alignments to cope with the significant differences between the source and target domains. To our knowledge, this is the first work to use knowledge distillation in cross-headset transfers. Extensive experiments on six EEG datasets from two BCI paradigms demonstrated that SDDA achieved superior performance in both offline unsupervised domain adaptation and online supervised domain adaptation scenarios, consistently outperforming 10 classical and state-of-the-art transfer learning algorithms.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 11:44:49 GMT" } ]
2025-03-10T00:00:00
[ [ "Liu", "Dingkun", "" ], [ "Li", "Siyang", "" ], [ "Wang", "Ziwei", "" ], [ "Li", "Wei", "" ], [ "Wu", "Dongrui", "" ] ]
TITLE: Spatial Distillation based Distribution Alignment (SDDA) for Cross-Headset EEG Classification ABSTRACT: A non-invasive brain-computer interface (BCI) enables direct interaction between the user and external devices, typically via electroencephalogram (EEG) signals. However, decoding EEG signals across different headsets remains a significant challenge due to differences in the number and locations of the electrodes. To address this challenge, we propose a spatial distillation based distribution alignment (SDDA) approach for heterogeneous cross-headset transfer in non-invasive BCIs. SDDA first uses spatial distillation to make use of the full set of electrodes, and then applies input/feature/output-space distribution alignments to cope with the significant differences between the source and target domains. To our knowledge, this is the first work to use knowledge distillation in cross-headset transfers. Extensive experiments on six EEG datasets from two BCI paradigms demonstrated that SDDA achieved superior performance in both offline unsupervised domain adaptation and online supervised domain adaptation scenarios, consistently outperforming 10 classical and state-of-the-art transfer learning algorithms.
no_new_dataset
0.947186
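The sketch below shows one plausible instantiation of two of SDDA's steps (2503.05349 above): Euclidean alignment of raw EEG trials for the input space, and a distillation term for the output space. Both are common building blocks in EEG transfer learning; the paper's exact alignment operators may differ.

```python
import numpy as np
import torch.nn.functional as F

def euclidean_alignment(X):
    """Input-space alignment of EEG trials: whiten by the inverse square
    root of the subject's mean spatial covariance. One common way to align
    EEG distributions across subjects/headsets (an assumption here).

    X: (n_trials, n_channels, n_samples) array from one headset.
    """
    covs = np.stack([x @ x.T / x.shape[1] for x in X])
    vals, vecs = np.linalg.eigh(covs.mean(0))
    vals = np.clip(vals, 1e-10, None)              # guard rank deficiency
    R_isqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.stack([R_isqrt @ x for x in X])

def spatial_distillation_loss(student_logits, teacher_logits, T=2.0):
    """Output-space term: distill a teacher trained with the full electrode
    set into a student that only sees the target headset's electrodes."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T
```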
2503.05357
Jan Fillies
Jan Fillies and Adrian Paschke
Improving Hate Speech Classification with Cross-Taxonomy Dataset Integration
Accepted for publication at LaTeCH-CLfL 2025. The 9th Joint ACL Special Interest Group on Language Technologies for the Socio-Economic Sciences and Humanities (SIGHUM) Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
null
null
null
cs.CL cs.AI cs.LG cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Algorithmic hate speech detection faces significant challenges due to the diverse definitions and datasets used in research and practice. Social media platforms, legal frameworks, and institutions each apply distinct yet overlapping definitions, complicating classification efforts. This study addresses these challenges by demonstrating that existing datasets and taxonomies can be integrated into a unified model, enhancing prediction performance and reducing reliance on multiple specialized classifiers. The work introduces a universal taxonomy and a hate speech classifier capable of detecting a wide range of definitions within a single framework. Our approach is validated by combining two widely used but differently annotated datasets, showing improved classification performance on an independent test set. This work highlights the potential of dataset and taxonomy integration in advancing hate speech detection, increasing efficiency, and ensuring broader applicability across contexts.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 12:01:02 GMT" } ]
2025-03-10T00:00:00
[ [ "Fillies", "Jan", "" ], [ "Paschke", "Adrian", "" ] ]
TITLE: Improving Hate Speech Classification with Cross-Taxonomy Dataset Integration ABSTRACT: Algorithmic hate speech detection faces significant challenges due to the diverse definitions and datasets used in research and practice. Social media platforms, legal frameworks, and institutions each apply distinct yet overlapping definitions, complicating classification efforts. This study addresses these challenges by demonstrating that existing datasets and taxonomies can be integrated into a unified model, enhancing prediction performance and reducing reliance on multiple specialized classifiers. The work introduces a universal taxonomy and a hate speech classifier capable of detecting a wide range of definitions within a single framework. Our approach is validated by combining two widely used but differently annotated datasets, showing improved classification performance on an independent test set. This work highlights the potential of dataset and taxonomy integration in advancing hate speech detection, increasing efficiency, and ensuring broader applicability across contexts.
no_new_dataset
0.953708
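A minimal sketch of the cross-taxonomy integration described in 2503.05357 above: map each dataset's native labels into a shared taxonomy, then pool the examples into one training corpus. The universal labels and per-dataset maps here are hypothetical placeholders for the paper's taxonomy.

```python
# Hypothetical universal taxonomy and per-dataset label maps; the real
# taxonomy in the paper is richer, this only shows the integration step.
TAXONOMY_MAP = {
    "dataset_a": {"hateful": "hate", "abusive": "offensive", "normal": "neutral"},
    "dataset_b": {"hate_speech": "hate", "profanity": "offensive", "none": "neutral"},
}

def integrate(datasets):
    """datasets: {name: [(text, native_label), ...]} -> one pooled corpus
    with labels in the shared taxonomy; unmappable labels are dropped."""
    merged = []
    for name, rows in datasets.items():
        mapping = TAXONOMY_MAP[name]
        merged += [(text, mapping[lab], name)
                   for text, lab in rows if lab in mapping]
    return merged

corpus = integrate({
    "dataset_a": [("example text 1", "hateful")],
    "dataset_b": [("example text 2", "none")],
})
# [('example text 1', 'hate', 'dataset_a'), ('example text 2', 'neutral', 'dataset_b')]
```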
2503.05362
Weixiang Zhao
Weixiang Zhao, Xingyu Sui, Xinyang Han, Yang Deng, Yulin Hu, Jiahe Guo, Libo Qin, Qianyun Du, Shijin Wang, Yanyan Zhao, Bing Qin, Ting Liu
Chain of Strategy Optimization Makes Large Language Models Better Emotional Supporter
19 pages, 9 figures, 15 tables
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growing emotional stress in modern society has increased the demand for Emotional Support Conversations (ESC). While Large Language Models (LLMs) show promise for ESC, they face two key challenges: (1) low strategy selection accuracy, and (2) preference bias, limiting their adaptability to the emotional needs of users. Existing supervised fine-tuning (SFT) struggles to address these issues, as it rigidly trains models on single gold-standard responses without modeling nuanced strategy trade-offs. To overcome these limitations, we propose Chain-of-Strategy Optimization (CSO), a novel approach that optimizes strategy selection preferences at each dialogue turn. We first leverage Monte Carlo Tree Search to construct ESC-Pro, a high-quality preference dataset with turn-level strategy-response pairs. Training on ESC-Pro with CSO improves both strategy accuracy and bias mitigation, enabling LLMs to generate more empathetic and contextually appropriate responses. Experiments on LLaMA-3.1-8B, Gemma-2-9B, and Qwen2.5-7B demonstrate that CSO outperforms standard SFT, highlighting the efficacy of fine-grained, turn-level preference modeling in ESC.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 12:07:59 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhao", "Weixiang", "" ], [ "Sui", "Xingyu", "" ], [ "Han", "Xinyang", "" ], [ "Deng", "Yang", "" ], [ "Hu", "Yulin", "" ], [ "Guo", "Jiahe", "" ], [ "Qin", "Libo", "" ], [ "Du", "Qianyun", "" ], [ "Wang", "Shijin", "" ], [ "Zhao", "Yanyan", "" ], [ "Qin", "Bing", "" ], [ "Liu", "Ting", "" ] ]
TITLE: Chain of Strategy Optimization Makes Large Language Models Better Emotional Supporter ABSTRACT: The growing emotional stress in modern society has increased the demand for Emotional Support Conversations (ESC). While Large Language Models (LLMs) show promise for ESC, they face two key challenges: (1) low strategy selection accuracy, and (2) preference bias, limiting their adaptability to the emotional needs of users. Existing supervised fine-tuning (SFT) struggles to address these issues, as it rigidly trains models on single gold-standard responses without modeling nuanced strategy trade-offs. To overcome these limitations, we propose Chain-of-Strategy Optimization (CSO), a novel approach that optimizes strategy selection preferences at each dialogue turn. We first leverage Monte Carlo Tree Search to construct ESC-Pro, a high-quality preference dataset with turn-level strategy-response pairs. Training on ESC-Pro with CSO improves both strategy accuracy and bias mitigation, enabling LLMs to generate more empathetic and contextually appropriate responses. Experiments on LLaMA-3.1-8B, Gemma-2-9B, and Qwen2.5-7B demonstrate that CSO outperforms standard SFT, highlighting the efficacy of fine-grained, turn-level preference modeling in ESC.
new_dataset
0.937612
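Turn-level preference optimization as in CSO (2503.05362 above) can be instantiated with a DPO-style objective over the strategy-response pairs mined by MCTS; the sketch below shows such a loss for a single dialogue turn. This is a plausible reading of the abstract, not necessarily the paper's exact objective.

```python
import torch.nn.functional as F

def turn_level_preference_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style objective over turn-level strategy-response pairs.
    Inputs are summed token log-probabilities of the preferred (w) and
    dispreferred (l) continuations for one turn, under the policy and a
    frozen reference model; beta is the usual preference temperature.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()
```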
2503.05365
Zhigang Wang
Zhigang Wang, Shaojing Fan, Zhenguang Liu, Zheqi Wu, Sifan Wu, Yingying Jiao
Multi-Grained Feature Pruning for Video-Based Human Pose Estimation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Human pose estimation, with its broad applications in action recognition and motion capture, has experienced significant advancements. However, current Transformer-based methods for video pose estimation often face challenges in managing redundant temporal information and achieving fine-grained perception because they only focus on processing low-resolution features. To address these challenges, we propose a novel multi-scale resolution framework that encodes spatio-temporal representations at varying granularities and executes fine-grained perception compensation. Furthermore, we employ a density peaks clustering method to dynamically identify and prioritize tokens that offer important semantic information. This strategy effectively prunes redundant feature tokens, especially those arising from multi-frame features, thereby optimizing computational efficiency without sacrificing semantic richness. Empirically, it sets new benchmarks for both performance and efficiency on three large-scale datasets. Our method achieves a 93.8% improvement in inference speed compared to the baseline, while also enhancing pose estimation accuracy, reaching 87.4 mAP on the PoseTrack2017 dataset.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 12:14:51 GMT" } ]
2025-03-10T00:00:00
[ [ "Wang", "Zhigang", "" ], [ "Fan", "Shaojing", "" ], [ "Liu", "Zhenguang", "" ], [ "Wu", "Zheqi", "" ], [ "Wu", "Sifan", "" ], [ "Jiao", "Yingying", "" ] ]
TITLE: Multi-Grained Feature Pruning for Video-Based Human Pose Estimation ABSTRACT: Human pose estimation, with its broad applications in action recognition and motion capture, has experienced significant advancements. However, current Transformer-based methods for video pose estimation often face challenges in managing redundant temporal information and achieving fine-grained perception because they only focus on processing low-resolution features. To address these challenges, we propose a novel multi-scale resolution framework that encodes spatio-temporal representations at varying granularities and executes fine-grained perception compensation. Furthermore, we employ a density peaks clustering method to dynamically identify and prioritize tokens that offer important semantic information. This strategy effectively prunes redundant feature tokens, especially those arising from multi-frame features, thereby optimizing computational efficiency without sacrificing semantic richness. Empirically, it sets new benchmarks for both performance and efficiency on three large-scale datasets. Our method achieves a 93.8% improvement in inference speed compared to the baseline, while also enhancing pose estimation accuracy, reaching 87.4 mAP on the PoseTrack2017 dataset.
no_new_dataset
0.951414
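The density-peaks token selection in 2503.05365 above can be sketched as scoring each token by local density (rho) times the distance to the nearest denser token (delta), in the style of Rodriguez and Laio, and keeping the top scorers; the cutoff heuristic and keep ratio below are assumptions.

```python
import torch

def density_peak_token_scores(tokens, d_c=None):
    """Score tokens by density-peaks criteria: rho = local density,
    delta = distance to the nearest token of strictly higher density.
    Tokens with large rho * delta act as semantic cluster centers; the
    rest can be pruned or merged.

    tokens: (N, D) feature tokens, e.g. pooled over frames.
    """
    dist = torch.cdist(tokens, tokens)                  # (N, N) pairwise
    if d_c is None:
        d_c = dist[dist > 0].median()                   # heuristic cutoff
    rho = (dist < d_c).float().sum(1) - 1               # exclude self
    denser = rho.unsqueeze(0) > rho.unsqueeze(1)        # denser[i, j]: rho_j > rho_i
    masked = dist.masked_fill(~denser, float("inf"))
    delta = masked.min(1).values
    delta[torch.isinf(delta)] = dist.max()              # the densest token
    return rho * delta

tokens = torch.randn(196, 256)
keep = density_peak_token_scores(tokens).topk(64).indices  # keep 64 of 196
```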
2503.05370
Alea Miako Tokita
Alea Miako Tokita, Timothée Devergne, A. Marco Saitta, Jörg Behler
Free energy profiles for chemical reactions in solution from high-dimensional neural network potentials: The case of the Strecker synthesis
null
null
null
null
physics.chem-ph
http://creativecommons.org/licenses/by/4.0/
Machine learning potentials (MLPs) have become a popular tool in chemistry and materials science as they combine the accuracy of electronic structure calculations with the high computational efficiency of analytic potentials. MLPs are particularly useful for computationally demanding simulations such as the determination of free energy profiles governing chemical reactions in solution, but to date such applications are still rare. In this work we show how umbrella sampling simulations can be combined with active learning of high-dimensional neural network potentials (HDNNPs) to construct free energy profiles in a systematic way. For the example of the first step of the Strecker synthesis of glycine in aqueous solution, we provide a detailed analysis of the improving quality of HDNNPs for datasets of increasing size. We find that, in addition to the typical quantification of energy and force errors with respect to the underlying density functional theory data, the long-term stability of the simulations and the convergence of physical properties should also be rigorously monitored to obtain reliable and converged free energy profiles of chemical reactions in solution.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 12:22:32 GMT" } ]
2025-03-10T00:00:00
[ [ "Tokita", "Alea Miako", "" ], [ "Devergne", "Timothée", "" ], [ "Saitta", "A. Marco", "" ], [ "Behler", "Jörg", "" ] ]
TITLE: Free energy profiles for chemical reactions in solution from high-dimensional neural network potentials: The case of the Strecker synthesis ABSTRACT: Machine learning potentials (MLPs) have become a popular tool in chemistry and materials science as they combine the accuracy of electronic structure calculations with the high computational efficiency of analytic potentials. MLPs are particularly useful for computationally demanding simulations such as the determination of free energy profiles governing chemical reactions in solution, but to date such applications are still rare. In this work we show how umbrella sampling simulations can be combined with active learning of high-dimensional neural network potentials (HDNNPs) to construct free energy profiles in a systematic way. For the example of the first step of the Strecker synthesis of glycine in aqueous solution, we provide a detailed analysis of the improving quality of HDNNPs for datasets of increasing size. We find that, in addition to the typical quantification of energy and force errors with respect to the underlying density functional theory data, the long-term stability of the simulations and the convergence of physical properties should also be rigorously monitored to obtain reliable and converged free energy profiles of chemical reactions in solution.
no_new_dataset
0.947332
2503.05371
Zara Siddique
Zara Siddique, Irtaza Khalid, Liam D. Turner, Luis Espinosa-Anke
Shifting Perspectives: Steering Vector Ensembles for Robust Bias Mitigation in LLMs
Submitted to ACL 2025
null
null
null
cs.LG cs.AI cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
We present a novel approach to bias mitigation in large language models (LLMs) by applying steering vectors to modify model activations in forward passes. We employ Bayesian optimization to systematically identify effective contrastive pair datasets across nine bias axes. When optimized on the BBQ dataset, our individually tuned steering vectors achieve average improvements of 12.2%, 4.7%, and 3.2% over the baseline for Mistral, Llama, and Qwen, respectively. Building on these promising results, we introduce Steering Vector Ensembles (SVE), a method that averages multiple individually optimized steering vectors, each targeting a specific bias axis such as age, race, or gender. By leveraging their collective strength, SVE outperforms individual steering vectors in both bias reduction and maintaining model performance. The work presents the first systematic investigation of steering vectors for bias mitigation, and we demonstrate that SVE is a powerful and computationally efficient strategy for reducing bias in LLMs, with broader implications for enhancing AI safety.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 12:25:29 GMT" } ]
2025-03-10T00:00:00
[ [ "Siddique", "Zara", "" ], [ "Khalid", "Irtaza", "" ], [ "Turner", "Liam D.", "" ], [ "Espinosa-Anke", "Luis", "" ] ]
TITLE: Shifting Perspectives: Steering Vector Ensembles for Robust Bias Mitigation in LLMs ABSTRACT: We present a novel approach to bias mitigation in large language models (LLMs) by applying steering vectors to modify model activations in forward passes. We employ Bayesian optimization to systematically identify effective contrastive pair datasets across nine bias axes. When optimized on the BBQ dataset, our individually tuned steering vectors achieve average improvements of 12.2%, 4.7%, and 3.2% over the baseline for Mistral, Llama, and Qwen, respectively. Building on these promising results, we introduce Steering Vector Ensembles (SVE), a method that averages multiple individually optimized steering vectors, each targeting a specific bias axis such as age, race, or gender. By leveraging their collective strength, SVE outperforms individual steering vectors in both bias reduction and maintaining model performance. The work presents the first systematic investigation of steering vectors for bias mitigation, and we demonstrate that SVE is a powerful and computationally efficient strategy for reducing bias in LLMs, with broader implications for enhancing AI safety.
no_new_dataset
0.945651
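A minimal sketch of Steering Vector Ensembles (2503.05371 above): average individually optimized per-axis vectors and add the result to a decoder layer's residual stream during the forward pass. The module path `model.transformer.h[layer]` is a placeholder that depends on the checkpoint's architecture (e.g. `model.model.layers` for Llama-style models).

```python
import torch

def make_sve(steering_vectors):
    """Steering Vector Ensemble: average per-axis vectors (age, race,
    gender, ...) into a single steering direction."""
    return torch.stack(steering_vectors).mean(0)

def add_steering_hook(model, layer, vector, alpha=1.0):
    """Add `alpha * vector` to one decoder layer's output on every forward
    pass. The attribute path below is an assumption, adapt per model."""
    def hook(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * vector.to(hidden.dtype).to(hidden.device)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return model.transformer.h[layer].register_forward_hook(hook)

# sve = make_sve([v_age, v_race, v_gender])   # individually optimized vectors
# handle = add_steering_hook(model, layer=14, vector=sve, alpha=1.0)
# ... generate ...
# handle.remove()
```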
2503.05373
Linh Le
Linh Le, Guido Zuccon, Gianluca Demartini, Genghong Zhao, Xia Zhang
Leveraging Semantic Type Dependencies for Clinical Named Entity Recognition
null
AMIA - American Medical Informatics Association 2022
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Previous work on clinical relation extraction from free-text sentences leveraged information about semantic types from clinical knowledge bases as a part of entity representations. In this paper, we exploit additional evidence by also making use of domain-specific semantic type dependencies. We encode the relation between a span of tokens matching a Unified Medical Language System (UMLS) concept and other tokens in the sentence. We implement our method and compare against different named entity recognition (NER) architectures (i.e., BiLSTM-CRF and BiLSTM-GCN-CRF) using different pre-trained clinical embeddings (i.e., BERT, BioBERT, UMLSBert). Our experimental results on clinical datasets show that in some cases NER effectiveness can be significantly improved by making use of domain-specific semantic type dependencies. Our work is also the first study to generate a matrix encoding that makes use of more than three dependencies in a single pass for the NER task.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 12:29:21 GMT" } ]
2025-03-10T00:00:00
[ [ "Le", "Linh", "" ], [ "Zuccon", "Guido", "" ], [ "Demartini", "Gianluca", "" ], [ "Zhao", "Genghong", "" ], [ "Zhang", "Xia", "" ] ]
TITLE: Leveraging Semantic Type Dependencies for Clinical Named Entity Recognition ABSTRACT: Previous work on clinical relation extraction from free-text sentences leveraged information about semantic types from clinical knowledge bases as a part of entity representations. In this paper, we exploit additional evidence by also making use of domain-specific semantic type dependencies. We encode the relation between a span of tokens matching a Unified Medical Language System (UMLS) concept and other tokens in the sentence. We implement our method and compare against different named entity recognition (NER) architectures (i.e., BiLSTM-CRF and BiLSTM-GCN-CRF) using different pre-trained clinical embeddings (i.e., BERT, BioBERT, UMLSBert). Our experimental results on clinical datasets show that in some cases NER effectiveness can be significantly improved by making use of domain-specific semantic type dependencies. Our work is also the first study to generate a matrix encoding that makes use of more than three dependencies in a single pass for the NER task.
no_new_dataset
0.946745
2503.05388
Mohammad Javad Saeedizade
Anna Sofia Lippolis, Mohammad Javad Saeedizade, Robin Keskisärkkä, Sara Zuppiroli, Miguel Ceriani, Aldo Gangemi, Eva Blomqvist, Andrea Giovanni Nuzzolese
Ontology Generation using Large Language Models
2 figures and 3 tables. 20 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ontology engineering process is complex, time-consuming, and error-prone, even for experienced ontology engineers. In this work, we investigate the potential of Large Language Models (LLMs) to provide effective OWL ontology drafts directly from ontological requirements described using user stories and competency questions. Our main contribution is the presentation and evaluation of two new prompting techniques for automated ontology development: Memoryless CQbyCQ and Ontogenia. We also emphasize the importance of three structural criteria for ontology assessment, alongside expert qualitative evaluation, highlighting the need for a multi-dimensional evaluation in order to capture the quality and usability of the generated ontologies. Our experiments, conducted on a benchmark dataset of ten ontologies with 100 distinct CQs and 29 different user stories, compare the performance of three LLMs using the two prompting techniques. The results demonstrate improvements over the current state-of-the-art in LLM-supported ontology engineering. More specifically, the model OpenAI o1-preview with Ontogenia produces ontologies of sufficient quality to meet the requirements of ontology engineers, significantly outperforming novice ontology engineers in modelling ability. However, we still note some common mistakes and variability of result quality, which is important to take into account when using LLMs for ontology authoring support. We discuss these limitations and propose directions for future research.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 13:03:28 GMT" } ]
2025-03-10T00:00:00
[ [ "Lippolis", "Anna Sofia", "" ], [ "Saeedizade", "Mohammad Javad", "" ], [ "Keskisärkkä", "Robin", "" ], [ "Zuppiroli", "Sara", "" ], [ "Ceriani", "Miguel", "" ], [ "Gangemi", "Aldo", "" ], [ "Blomqvist", "Eva", "" ], [ "Nuzzolese", "Andrea Giovanni", "" ] ]
TITLE: Ontology Generation using Large Language Models ABSTRACT: The ontology engineering process is complex, time-consuming, and error-prone, even for experienced ontology engineers. In this work, we investigate the potential of Large Language Models (LLMs) to provide effective OWL ontology drafts directly from ontological requirements described using user stories and competency questions. Our main contribution is the presentation and evaluation of two new prompting techniques for automated ontology development: Memoryless CQbyCQ and Ontogenia. We also emphasize the importance of three structural criteria for ontology assessment, alongside expert qualitative evaluation, highlighting the need for a multi-dimensional evaluation in order to capture the quality and usability of the generated ontologies. Our experiments, conducted on a benchmark dataset of ten ontologies with 100 distinct CQs and 29 different user stories, compare the performance of three LLMs using the two prompting techniques. The results demonstrate improvements over the current state-of-the-art in LLM-supported ontology engineering. More specifically, the model OpenAI o1-preview with Ontogenia produces ontologies of sufficient quality to meet the requirements of ontology engineers, significantly outperforming novice ontology engineers in modelling ability. However, we still note some common mistakes and variability of result quality, which is important to take into account when using LLMs for ontology authoring support. We discuss these limitations and propose directions for future research.
no_new_dataset
0.60092
2503.05423
Run He
Run He, Di Fang, Yicheng Xu, Yawen Cui, Ming Li, Cen Chen, Ziqian Zeng, Huiping Zhuang
Semantic Shift Estimation via Dual-Projection and Classifier Reconstruction for Exemplar-Free Class-Incremental Learning
14 pages, 7 figures
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Exemplar-Free Class-Incremental Learning (EFCIL) aims to sequentially learn from distinct categories without retaining exemplars but easily suffers from catastrophic forgetting of learned knowledge. While existing EFCIL methods leverage knowledge distillation to alleviate forgetting, they still face two critical challenges: semantic shift and decision bias. Specifically, the embeddings of old tasks shift in the embedding space after learning new tasks, and the classifier becomes biased towards new tasks due to training solely with new data, thereby hindering the balance between old and new knowledge. To address these issues, we propose the Dual-Projection Shift Estimation and Classifier Reconstruction (DPCR) approach for EFCIL. DPCR effectively estimates semantic shift through a dual-projection, which combines a learnable transformation with a row-space projection to capture both task-wise and category-wise shifts. Furthermore, to mitigate decision bias, DPCR employs ridge regression to reformulate classifier training as a reconstruction process. This reconstruction exploits previous information encoded in covariance and prototype of each class after calibration with estimated shift, thereby reducing decision bias. Extensive experiments demonstrate that, across various datasets, DPCR effectively balances old and new tasks, outperforming state-of-the-art EFCIL methods.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 13:50:29 GMT" } ]
2025-03-10T00:00:00
[ [ "He", "Run", "" ], [ "Fang", "Di", "" ], [ "Xu", "Yicheng", "" ], [ "Cui", "Yawen", "" ], [ "Li", "Ming", "" ], [ "Chen", "Cen", "" ], [ "Zeng", "Ziqian", "" ], [ "Zhuang", "Huiping", "" ] ]
TITLE: Semantic Shift Estimation via Dual-Projection and Classifier Reconstruction for Exemplar-Free Class-Incremental Learning ABSTRACT: Exemplar-Free Class-Incremental Learning (EFCIL) aims to sequentially learn from distinct categories without retaining exemplars but easily suffers from catastrophic forgetting of learned knowledge. While existing EFCIL methods leverage knowledge distillation to alleviate forgetting, they still face two critical challenges: semantic shift and decision bias. Specifically, the embeddings of old tasks shift in the embedding space after learning new tasks, and the classifier becomes biased towards new tasks due to training solely with new data, thereby hindering the balance between old and new knowledge. To address these issues, we propose the Dual-Projection Shift Estimation and Classifier Reconstruction (DPCR) approach for EFCIL. DPCR effectively estimates semantic shift through a dual-projection, which combines a learnable transformation with a row-space projection to capture both task-wise and category-wise shifts. Furthermore, to mitigate decision bias, DPCR employs ridge regression to reformulate classifier training as a reconstruction process. This reconstruction exploits previous information encoded in covariance and prototype of each class after calibration with estimated shift, thereby reducing decision bias. Extensive experiments demonstrate that, across various datasets, DPCR effectively balances old and new tasks, outperforming state-of-the-art EFCIL methods.
no_new_dataset
0.94625
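DPCR's classifier reconstruction (2503.05423 above) can be sketched as ridge regression from accumulated per-class statistics; the snippet below solves the one-hot ridge problem in closed form and shows the streaming statistics update. Calibration of the statistics with the estimated semantic shift is omitted here.

```python
import torch

def reconstruct_classifier(cov, prototypes, lam=1e-3):
    """Rebuild a linear classifier by ridge regression from class
    statistics, in the spirit of DPCR's reconstruction step.

    cov:        (D, D) accumulated feature Gram matrix  sum_x x x^T
    prototypes: (C, D) per-class feature sums
    Solves  W = (cov + lam I)^{-1} P^T  for one-hot ridge regression.
    """
    D = cov.shape[0]
    G = cov + lam * torch.eye(D, device=cov.device)
    return torch.linalg.solve(G, prototypes.t())   # (D, C) weight matrix

def update_stats(cov, prototypes, counts, feats, labels, num_classes):
    """Streaming update of the statistics after a new task's data, so no
    exemplars need to be stored."""
    cov = cov + feats.t() @ feats
    prototypes = prototypes.index_add(0, labels, feats)
    counts = counts + torch.bincount(labels, minlength=num_classes).float()
    return cov, prototypes, counts
```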
2503.05425
Huai Yu
Jian Shen, Huai Yu, Ji Wu, Wen Yang, Gui-Song Xia
LiDAR-enhanced 3D Gaussian Splatting Mapping
Accepted by ICRA 2025
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This paper introduces LiGSM, a novel LiDAR-enhanced 3D Gaussian Splatting (3DGS) mapping framework that improves the accuracy and robustness of 3D scene mapping by integrating LiDAR data. LiGSM constructs a joint loss from images and LiDAR point clouds to estimate the poses and optimize their extrinsic parameters, enabling dynamic adaptation to variations in sensor alignment. Furthermore, it leverages LiDAR point clouds to initialize 3DGS, providing denser and more reliable starting points compared to sparse SfM points. In scene rendering, the framework augments standard image-based supervision with depth maps generated from LiDAR projections, ensuring an accurate scene representation in both geometry and photometry. Experiments on public and self-collected datasets demonstrate that LiGSM outperforms comparative methods in pose tracking and scene rendering.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 13:51:34 GMT" } ]
2025-03-10T00:00:00
[ [ "Shen", "Jian", "" ], [ "Yu", "Huai", "" ], [ "Wu", "Ji", "" ], [ "Yang", "Wen", "" ], [ "Xia", "Gui-Song", "" ] ]
TITLE: LiDAR-enhanced 3D Gaussian Splatting Mapping ABSTRACT: This paper introduces LiGSM, a novel LiDAR-enhanced 3D Gaussian Splatting (3DGS) mapping framework that improves the accuracy and robustness of 3D scene mapping by integrating LiDAR data. LiGSM constructs a joint loss from images and LiDAR point clouds to estimate the poses and optimize their extrinsic parameters, enabling dynamic adaptation to variations in sensor alignment. Furthermore, it leverages LiDAR point clouds to initialize 3DGS, providing denser and more reliable starting points compared to sparse SfM points. In scene rendering, the framework augments standard image-based supervision with depth maps generated from LiDAR projections, ensuring an accurate scene representation in both geometry and photometry. Experiments on public and self-collected datasets demonstrate that LiGSM outperforms comparative methods in pose tracking and scene rendering.
no_new_dataset
0.662469
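The depth supervision in LiGSM (2503.05425 above) relies on projecting the LiDAR cloud into the camera; a minimal NumPy sketch follows. Camera conventions (camera looks along +z, pixel origin top-left) and the z-buffer handling are assumptions to adapt to the dataset's calibration.

```python
import numpy as np

def lidar_depth_map(points, K, T_cam_lidar, hw):
    """Project a LiDAR cloud into the camera to get a sparse depth map.

    points:      (N, 3) LiDAR points
    K:           (3, 3) camera intrinsics
    T_cam_lidar: (4, 4) extrinsics, LiDAR frame -> camera frame
    hw:          (H, W) output size
    """
    H, W = hw
    pts_h = np.c_[points, np.ones(len(points))]
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    cam = cam[cam[:, 2] > 0.1]                 # keep points in front of camera
    uvz = (K @ cam.T).T
    u = (uvz[:, 0] / uvz[:, 2]).round().astype(int)
    v = (uvz[:, 1] / uvz[:, 2]).round().astype(int)
    z = cam[:, 2]
    depth = np.full((H, W), np.inf)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    np.minimum.at(depth, (v[ok], u[ok]), z[ok])  # nearest return per pixel
    depth[np.isinf(depth)] = 0.0                 # 0 marks "no measurement"
    return depth
```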
2503.05427
Simon Malacek
Simon Malacek, José Portela, Yannick Marcus Werner, Sonja Wogrin
Generating Building-Level Heat Demand Time Series by Combining Occupancy Simulations and Thermal Modeling
null
null
null
null
eess.SY cs.SY
http://creativecommons.org/licenses/by/4.0/
Despite various efforts, decarbonizing the heating sector remains a significant challenge. To tackle it by smart planning, the availability of highly resolved heating demand data is key. Several existing models provide heating demand only for specific applications. Typically, they either offer time series for a larger area or annual demand data on a building level, but not both simultaneously. Additionally, the diversity in heating demand across different buildings is often not considered. To address these limitations, this paper presents a novel method for generating temporally resolved heat demand time series at the building level using publicly available data. The approach integrates a thermal building model with stochastic occupancy simulations that account for variability in user behavior. As a result, the tool serves as a cost-effective resource for cross-sectoral energy system planning and policy development, particularly with a focus on the heating sector. The obtained data can be used to assess the impact of renovation and retrofitting strategies, or to analyze district heating expansion. To illustrate the potential applications of this approach, we conducted a case study in Puertollano (Spain), where we prepared a dataset of heating demand with hourly resolution for each of 9,298 residential buildings. This data was then used to compare two different pathways for the thermal renovation of these buildings. By relying on publicly available data, this method can be adapted and applied to various European regions, offering broad usability in energy system optimization and analysis of decarbonization strategies.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 13:56:14 GMT" } ]
2025-03-10T00:00:00
[ [ "Malacek", "Simon", "" ], [ "Portela", "José", "" ], [ "Werner", "Yannick Marcus", "" ], [ "Wogrin", "Sonja", "" ] ]
TITLE: Generating Building-Level Heat Demand Time Series by Combining Occupancy Simulations and Thermal Modeling ABSTRACT: Despite various efforts, decarbonizing the heating sector remains a significant challenge. To tackle it by smart planning, the availability of highly resolved heating demand data is key. Several existing models provide heating demand only for specific applications. Typically, they either offer time series for a larger area or annual demand data on a building level, but not both simultaneously. Additionally, the diversity in heating demand across different buildings is often not considered. To address these limitations, this paper presents a novel method for generating temporally resolved heat demand time series at the building level using publicly available data. The approach integrates a thermal building model with stochastic occupancy simulations that account for variability in user behavior. As a result, the tool serves as a cost-effective resource for cross-sectoral energy system planning and policy development, particularly with a focus on the heating sector. The obtained data can be used to assess the impact of renovation and retrofitting strategies, or to analyze district heating expansion. To illustrate the potential applications of this approach, we conducted a case study in Puertollano (Spain), where we prepared a dataset of heating demand with hourly resolution for each of 9,298 residential buildings. This data was then used to compare two different pathways for the thermal renovation of these buildings. By relying on publicly available data, this method can be adapted and applied to various European regions, offering broad usability in energy system optimization and analysis of decarbonization strategies.
new_dataset
0.971699
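A simplified stand-in for the thermal model in 2503.05427 above: a lumped 1R1C building model driven by outdoor temperature and an occupancy-dependent setpoint. R, C, the setback, and the power cap are illustrative parameters, not the paper's calibrated values.

```python
import numpy as np

def heat_demand_series(t_out, t_set, occupied, R=5e-3, C=1e8,
                       dt=3600.0, q_max=2e4):
    """Hourly heat demand [W] from a lumped 1R1C building model.

    t_out:    outdoor temperature per step [degC]
    t_set:    setpoint per step [degC]
    occupied: bool per step, from the stochastic occupancy simulation;
              an absent occupant lowers the setpoint (assumed setback)
    R [K/W], C [J/K], q_max [W]: illustrative building parameters
    """
    T = t_set[0]                                     # initial indoor temp
    demand = np.zeros(len(t_out))
    for k, (to, occ) in enumerate(zip(t_out, occupied)):
        target = t_set[k] if occ else t_set[k] - 4.0  # away/night setback
        loss = (T - to) / R                           # transmission loss [W]
        q = np.clip((target - T) * C / dt + loss, 0.0, q_max)
        T += dt * (q - loss) / C                      # integrate indoor temp
        demand[k] = q
    return demand
```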
2503.05439
Lachlan McPheat
Navdeep Kaur, Lachlan McPheat, Alessandra Russo, Anthony G Cohn, Pranava Madhyastha
An Empirical Study of Conformal Prediction in LLM with ASP Scaffolds for Robust Reasoning
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we examine the use of Conformal Language Modelling (CLM) alongside Answer Set Programming (ASP) to enhance the performance of standard open-weight LLMs on complex multi-step reasoning tasks. Using the StepGame dataset, which requires spatial reasoning, we apply CLM to generate sets of ASP programs from an LLM, providing statistical guarantees on the correctness of the outputs. Experimental results show that CLM significantly outperforms baseline models that use standard sampling methods, achieving substantial accuracy improvements across different levels of reasoning complexity. Additionally, the LLM-as-Judge metric enhances CLM's performance, especially in assessing structurally and logically correct ASP outputs. However, calibrating CLM with diverse calibration sets did not improve generalizability for tasks requiring much longer reasoning steps, indicating limitations in handling more complex tasks.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 14:10:10 GMT" } ]
2025-03-10T00:00:00
[ [ "Kaur", "Navdeep", "" ], [ "McPheat", "Lachlan", "" ], [ "Russo", "Alessandra", "" ], [ "Cohn", "Anthony G", "" ], [ "Madhyastha", "Pranava", "" ] ]
TITLE: An Empirical Study of Conformal Prediction in LLM with ASP Scaffolds for Robust Reasoning ABSTRACT: In this paper, we examine the use of Conformal Language Modelling (CLM) alongside Answer Set Programming (ASP) to enhance the performance of standard open-weight LLMs on complex multi-step reasoning tasks. Using the StepGame dataset, which requires spatial reasoning, we apply CLM to generate sets of ASP programs from an LLM, providing statistical guarantees on the correctness of the outputs. Experimental results show that CLM significantly outperforms baseline models that use standard sampling methods, achieving substantial accuracy improvements across different levels of reasoning complexity. Additionally, the LLM-as-Judge metric enhances CLM's performance, especially in assessing structurally and logically correct ASP outputs. However, calibrating CLM with diverse calibration sets did not improve generalizability for tasks requiring much longer reasoning steps, indicating limitations in handling more complex tasks.
no_new_dataset
0.945045
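The CLM loop in 2503.05439 above can be sketched as rejection sampling with a calibrated stopping rule; in the snippet, `generate` and `solver_accepts` are placeholder hooks for the LLM sampler and the ASP solver, and `lam` stands in for a threshold fitted on a calibration split.

```python
def conformal_program_set(generate, solver_accepts, k_max=20, lam=0.8):
    """Sketch of CLM-style set construction for ASP programs: keep
    sampling candidates, retain those the ASP solver accepts, and stop
    once the best score clears the calibrated threshold. `generate`
    returns (program, score), e.g. an LLM sample with an LLM-as-Judge
    score; both hooks and `lam` are placeholders for the paper's
    calibrated procedure.
    """
    kept = []
    for _ in range(k_max):
        program, score = generate()
        if solver_accepts(program):
            kept.append((program, score))
        if kept and max(s for _, s in kept) >= lam:
            break                      # stopping rule satisfied
    return [p for p, s in kept if s >= lam]
```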
2503.05474
Ziran Zhou
Ziran Zhou and Guanyu Gao and Xiaohu Wu and Yan Lyu
Personalized Federated Learning via Learning Dynamic Graphs
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Personalized Federated Learning (PFL) aims to train a personalized model for each client that is tailored to its local data distribution, as a single global model fails to perform well on individual clients due to variations in their local data distributions. Most existing PFL methods focus on personalizing the aggregated global model for each client, neglecting the fundamental aspect of federated learning: the regulation of how client models are aggregated. Additionally, almost all of them overlook the graph structure formed by clients in federated learning. In this paper, we propose a novel method, Personalized Federated Learning with Graph Attention Network (pFedGAT), which captures the latent graph structure between clients and dynamically determines the importance of other clients for each client, enabling fine-grained control over the aggregation process. We evaluate pFedGAT across multiple data distribution scenarios, comparing it with twelve state-of-the-art methods on three datasets: Fashion MNIST, CIFAR-10, and CIFAR-100, and find that it consistently performs well.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 14:47:03 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhou", "Ziran", "" ], [ "Gao", "Guanyu", "" ], [ "Wu", "Xiaohu", "" ], [ "Lyu", "Yan", "" ] ]
TITLE: Personalized Federated Learning via Learning Dynamic Graphs ABSTRACT: Personalized Federated Learning (PFL) aims to train a personalized model for each client that is tailored to its local data distribution, as a single global model fails to perform well on individual clients due to variations in their local data distributions. Most existing PFL methods focus on personalizing the aggregated global model for each client, neglecting the fundamental aspect of federated learning: the regulation of how client models are aggregated. Additionally, almost all of them overlook the graph structure formed by clients in federated learning. In this paper, we propose a novel method, Personalized Federated Learning with Graph Attention Network (pFedGAT), which captures the latent graph structure between clients and dynamically determines the importance of other clients for each client, enabling fine-grained control over the aggregation process. We evaluate pFedGAT across multiple data distribution scenarios, comparing it with twelve state-of-the-art methods on three datasets: Fashion MNIST, CIFAR-10, and CIFAR-100, and find that it consistently performs well.
no_new_dataset
0.948298
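One way to realize attention-based aggregation as in pFedGAT (2503.05474 above): score pairwise client affinities with learned projections and aggregate the flattened client models with the resulting attention weights, giving each client its own personalized aggregate. The exact parameterization in the paper may differ.

```python
import torch
import torch.nn.functional as F

def attention_aggregate(client_vecs, W_q, W_k, tau=1.0):
    """Aggregate client models with learned pairwise attention.

    client_vecs: (N, D) each row is one client's flattened parameters
    W_q, W_k:    (D, d) learnable projections defining the latent graph
    Returns (N, D): a personalized aggregate per client.
    """
    q = client_vecs @ W_q                          # (N, d)
    k = client_vecs @ W_k                          # (N, d)
    logits = q @ k.t() / (tau * q.shape[1] ** 0.5)
    attn = F.softmax(logits, dim=1)                # row i: weights over clients
    return attn @ client_vecs                      # personalized models
```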
2503.05477
Gazi Tanbhir
Nizo Jaman Shohan, Gazi Tanbhir, Faria Elahi, Ahsan Ullah, Md. Nazmus Sakib
Enhancing Network Security: A Hybrid Approach for Detection and Mitigation of Distributed Denial-of-Service Attacks Using Machine Learning
Part of the book series: Communications in Computer and Information Science ((CCIS,volume 2091))
ANTIC 2023. Communications in Computer and Information Science, vol 2091. Springer, Cham
10.1007/978-3-031-64064-3_7
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The distributed denial-of-service (DDoS) attack stands out as a highly formidable cyber threat, representing an advanced form of the denial-of-service (DoS) attack. A DDoS attack involves multiple computers working together to overwhelm a system, making it unavailable. On the other hand, a DoS attack is a one-on-one attempt to make a system or website inaccessible. Thus, it is crucial to construct an effective model for identifying various DDoS incidents. Although extensive research has focused on binary detection models for DDoS identification, they face challenges in adapting to evolving threats, necessitating frequent updates. Multiclass detection models, in contrast, offer a comprehensive defense against diverse DDoS attacks, ensuring adaptability in the ever-changing cyber threat landscape. In this paper, we propose a Hybrid Model to strengthen network security by combining the feature-extraction abilities of 1D Convolutional Neural Networks (CNNs) with the classification skills of Random Forest (RF) and Multi-layer Perceptron (MLP) classifiers. Using the CIC-DDoS2019 dataset, we perform multiclass classification of various DDoS attacks and conduct a comparative analysis of evaluation metrics for RF, MLP, and our proposed Hybrid Model. After analyzing the results, we draw meaningful conclusions and confirm the superiority of our Hybrid Model by performing thorough cross-validation. Additionally, we integrate our machine learning model with Snort, which provides a robust and adaptive solution for detecting and mitigating various DDoS attacks.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 14:47:56 GMT" } ]
2025-03-10T00:00:00
[ [ "Shohan", "Nizo Jaman", "" ], [ "Tanbhir", "Gazi", "" ], [ "Elahi", "Faria", "" ], [ "Ullah", "Ahsan", "" ], [ "Sakib", "Md. Nazmus", "" ] ]
TITLE: Enhancing Network Security: A Hybrid Approach for Detection and Mitigation of Distributed Denial-of-Service Attacks Using Machine Learning ABSTRACT: The distributed denial-of-service (DDoS) attack stands out as a highly formidable cyber threat, representing an advanced form of the denial-of-service (DoS) attack. A DDoS attack involves multiple computers working together to overwhelm a system, making it unavailable. On the other hand, a DoS attack is a one-on-one attempt to make a system or website inaccessible. Thus, it is crucial to construct an effective model for identifying various DDoS incidents. Although extensive research has focused on binary detection models for DDoS identification, they face challenges in adapting to evolving threats, necessitating frequent updates. Multiclass detection models, in contrast, offer a comprehensive defense against diverse DDoS attacks, ensuring adaptability in the ever-changing cyber threat landscape. In this paper, we propose a Hybrid Model to strengthen network security by combining the feature-extraction abilities of 1D Convolutional Neural Networks (CNNs) with the classification skills of Random Forest (RF) and Multi-layer Perceptron (MLP) classifiers. Using the CIC-DDoS2019 dataset, we perform multiclass classification of various DDoS attacks and conduct a comparative analysis of evaluation metrics for RF, MLP, and our proposed Hybrid Model. After analyzing the results, we draw meaningful conclusions and confirm the superiority of our Hybrid Model by performing thorough cross-validation. Additionally, we integrate our machine learning model with Snort, which provides a robust and adaptive solution for detecting and mitigating various DDoS attacks.
no_new_dataset
0.945701
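The hybrid pipeline of 2503.05477 above pairs a 1D CNN feature extractor with a classical classifier; a minimal PyTorch/scikit-learn sketch follows. The layer sizes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class FlowCNN(nn.Module):
    """1D CNN over per-flow feature vectors (illustrative sizes)."""
    def __init__(self, emb=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb),
        )

    def forward(self, x):                 # x: (batch, 1, n_flow_features)
        return self.net(x)

def hybrid_predict(cnn, rf: RandomForestClassifier, X):
    """CNN embeddings feed the Random Forest for multiclass attack labels;
    assumes `rf` was fit on embeddings of the training flows."""
    with torch.no_grad():
        Z = cnn(torch.as_tensor(X, dtype=torch.float32).unsqueeze(1)).numpy()
    return rf.predict(Z)
```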
2503.05482
Christofer Fellicious
Christofer Fellicious, Hans P. Reiser, Michael Granitzer
Bridging the Semantic Gap in Virtual Machine Introspection and Forensic Memory Analysis
null
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Forensic Memory Analysis (FMA) and Virtual Machine Introspection (VMI) are critical tools for security in a virtualization-based approach. VMI and FMA involve using digital forensic methods to extract information from the system to identify and explain security incidents. A key challenge in both FMA and VMI is the "Semantic Gap", which is the difficulty of interpreting raw memory data without specialized tools and expertise. In this work, we investigate how a priori knowledge, metadata and engineered features can aid VMI and FMA, leveraging machine learning to automate information extraction and reduce the workload of forensic investigators. We choose OpenSSH as our use case to test different methods to extract high-level structures. We also test our method on complete physical memory dumps to showcase the effectiveness of the engineered features. Our features range from basic statistical features to advanced graph-based representations using malloc headers and pointer translations. The training and testing are carried out on public datasets, against which we compare already recognized baseline methods. We show that using metadata, we can improve the performance of the algorithm when there is very little training data and also quantify how having more data results in better generalization performance. The final contribution is an open dataset of physical memory dumps, totalling more than 1 TB across different memory states, software environments, main memory capacities and operating system versions. Our methods show that having more metadata boosts performance, with all methods obtaining an F1-Score of over 80%. Our research underscores the possibility of using feature engineering and machine learning techniques to bridge the semantic gap.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 14:51:32 GMT" } ]
2025-03-10T00:00:00
[ [ "Fellicious", "Christofer", "" ], [ "Reiser", "Hans P.", "" ], [ "Granitzer", "Michael", "" ] ]
TITLE: Bridging the Semantic Gap in Virtual Machine Introspection and Forensic Memory Analysis ABSTRACT: Forensic Memory Analysis (FMA) and Virtual Machine Introspection (VMI) are critical tools for security in a virtualization-based approach. VMI and FMA involve using digital forensic methods to extract information from the system to identify and explain security incidents. A key challenge in both FMA and VMI is the "Semantic Gap", which is the difficulty of interpreting raw memory data without specialized tools and expertise. In this work, we investigate how a priori knowledge, metadata and engineered features can aid VMI and FMA, leveraging machine learning to automate information extraction and reduce the workload of forensic investigators. We choose OpenSSH as our use case to test different methods to extract high-level structures. We also test our method on complete physical memory dumps to showcase the effectiveness of the engineered features. Our features range from basic statistical features to advanced graph-based representations using malloc headers and pointer translations. The training and testing are carried out on public datasets, against which we compare already recognized baseline methods. We show that using metadata, we can improve the performance of the algorithm when there is very little training data and also quantify how having more data results in better generalization performance. The final contribution is an open dataset of physical memory dumps, totalling more than 1 TB across different memory states, software environments, main memory capacities and operating system versions. Our methods show that having more metadata boosts performance, with all methods obtaining an F1-Score of over 80%. Our research underscores the possibility of using feature engineering and machine learning techniques to bridge the semantic gap.
no_new_dataset
0.951097
2503.05490
Amit Levy
Amit Levy, Itzik Klein
Adaptive Neural Unscented Kalman Filter
eight pages, ten figures
null
null
null
cs.RO eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The unscented Kalman filter is an algorithm capable of handling nonlinear scenarios. Uncertainty in process noise covariance may decrease the filter estimation performance or even lead to its divergence. Therefore, it is important to adjust the process noise covariance matrix in real time. In this paper, we developed an adaptive neural unscented Kalman filter to cope with time-varying uncertainties during platform operation. To this end, we devised ProcessNet, a simple yet efficient end-to-end regression network to adaptively estimate the process noise covariance matrix. We focused on the nonlinear inertial sensor and Doppler velocity log fusion problem in the case of autonomous underwater vehicle navigation. Using a real-world recorded dataset from an autonomous underwater vehicle, we demonstrated our filter performance and showed its advantages over other adaptive and non-adaptive nonlinear filters.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 14:59:30 GMT" } ]
2025-03-10T00:00:00
[ [ "Levy", "Amit", "" ], [ "Klein", "Itzik", "" ] ]
TITLE: Adaptive Neural Unscented Kalman Filter ABSTRACT: The unscented Kalman filter is an algorithm capable of handling nonlinear scenarios. Uncertainty in process noise covariance may decrease the filter estimation performance or even lead to its divergence. Therefore, it is important to adjust the process noise covariance matrix in real time. In this paper, we developed an adaptive neural unscented Kalman filter to cope with time-varying uncertainties during platform operation. To this end, we devised ProcessNet, a simple yet efficient end-to-end regression network to adaptively estimate the process noise covariance matrix. We focused on the nonlinear inertial sensor and Doppler velocity log fusion problem in the case of autonomous underwater vehicle navigation. Using a real-world recorded dataset from an autonomous underwater vehicle, we demonstrated our filter performance and showed its advantages over other adaptive and non-adaptive nonlinear filters.
no_new_dataset
0.950824
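ProcessNet in 2503.05490 above regresses the process-noise covariance online; the sketch below shows one way to guarantee a valid covariance by predicting a Cholesky factor. The input featurization and layer sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProcessNetSketch(nn.Module):
    """Regress the process-noise covariance Q from a window of recent
    measurements, in the spirit of ProcessNet. Predicting a Cholesky
    factor keeps Q symmetric positive definite by construction, so the
    UKF prediction step stays valid.
    """
    def __init__(self, window, meas_dim, state_dim):
        super().__init__()
        self.n = state_dim
        self.mlp = nn.Sequential(
            nn.Linear(window * meas_dim, 128), nn.ReLU(),
            nn.Linear(128, state_dim * (state_dim + 1) // 2),
        )

    def forward(self, meas_window):            # (B, window, meas_dim)
        theta = self.mlp(meas_window.flatten(1))
        tril = torch.tril_indices(self.n, self.n, device=theta.device)
        diag = tril[0] == tril[1]
        theta = torch.where(diag, F.softplus(theta) + 1e-6, theta)
        L = torch.zeros(theta.shape[0], self.n, self.n, device=theta.device)
        L[:, tril[0], tril[1]] = theta          # lower-triangular factor
        return L @ L.transpose(1, 2)            # Q for the UKF predict step
```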
2503.05492
Haotian Hu
Haotian Hu and Jingwei Xu and Fanyi Wang and Toyota Li and Yaonong Wang and Laifeng Hu and Zhiwang Zhang
FastMap: Fast Queries Initialization Based Vectorized HD Map Reconstruction Framework
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstruction of high-definition maps is a crucial task in perceiving the autonomous driving environment, as its accuracy directly impacts the reliability of prediction and planning capabilities in downstream modules. Current vectorized map reconstruction methods based on the DETR framework encounter limitations due to the redundancy in the decoder structure, necessitating the stacking of six decoder layers to maintain performance, which significantly hampers computational efficiency. To tackle this issue, we introduce FastMap, an innovative framework designed to reduce decoder redundancy in existing approaches. FastMap optimizes the decoder architecture by employing a single-layer, two-stage transformer that achieves multilevel representation capabilities. Our framework eliminates the conventional practice of randomly initializing queries and instead incorporates a heatmap-guided query generation module during the decoding phase, which effectively maps image features into structured query vectors using learnable positional encoding. Additionally, we propose a geometry-constrained point-to-line loss mechanism for FastMap, which adeptly addresses the challenge of distinguishing highly homogeneous features that often arise in traditional point-to-point loss computations. Extensive experiments demonstrate that FastMap achieves state-of-the-art performance on both the nuScenes and Argoverse2 datasets, with its decoder operating 3.2× faster than the baseline. Code and more demos are available at https://github.com/hht1996ok/FastMap.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 15:01:55 GMT" } ]
2025-03-10T00:00:00
[ [ "Hu", "Haotian", "" ], [ "Xu", "Jingwei", "" ], [ "Wang", "Fanyi", "" ], [ "Li", "Toyota", "" ], [ "Wang", "Yaonong", "" ], [ "Hu", "Laifeng", "" ], [ "Zhang", "Zhiwang", "" ] ]
TITLE: FastMap: Fast Queries Initialization Based Vectorized HD Map Reconstruction Framework ABSTRACT: Reconstruction of high-definition maps is a crucial task in perceiving the autonomous driving environment, as its accuracy directly impacts the reliability of prediction and planning capabilities in downstream modules. Current vectorized map reconstruction methods based on the DETR framework encounter limitations due to the redundancy in the decoder structure, necessitating the stacking of six decoder layers to maintain performance, which significantly hampers computational efficiency. To tackle this issue, we introduce FastMap, an innovative framework designed to reduce decoder redundancy in existing approaches. FastMap optimizes the decoder architecture by employing a single-layer, two-stage transformer that achieves multilevel representation capabilities. Our framework eliminates the conventional practice of randomly initializing queries and instead incorporates a heatmap-guided query generation module during the decoding phase, which effectively maps image features into structured query vectors using learnable positional encoding. Additionally, we propose a geometry-constrained point-to-line loss mechanism for FastMap, which adeptly addresses the challenge of distinguishing highly homogeneous features that often arise in traditional point-to-point loss computations. Extensive experiments demonstrate that FastMap achieves state-of-the-art performance on both the nuScenes and Argoverse2 datasets, with its decoder operating 3.2× faster than the baseline. Code and more demos are available at https://github.com/hht1996ok/FastMap.
no_new_dataset
0.943452
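The point-to-line idea in the abstract has a direct geometric reading: penalize each predicted point's distance to the nearest segment of the ground-truth polyline rather than to a single matched point. Below is a minimal, differentiable PyTorch sketch of such a loss; the paper's exact formulation and weighting may differ.

```python
import torch

def point_to_polyline_loss(pred, gt):
    """pred: (N,2) predicted points; gt: (M,2) ordered polyline vertices."""
    a, b = gt[:-1], gt[1:]                         # (M-1,2) segment endpoints
    ab = b - a                                     # segment direction vectors
    ap = pred[:, None, :] - a[None, :, :]          # (N,M-1,2)
    t = (ap * ab).sum(-1) / (ab * ab).sum(-1).clamp(min=1e-8)
    t = t.clamp(0.0, 1.0)                          # project onto each segment
    closest = a[None] + t[..., None] * ab[None]
    dist = (pred[:, None, :] - closest).norm(dim=-1)   # (N, M-1)
    return dist.min(dim=1).values.mean()           # nearest-segment distance

pred = torch.rand(20, 2, requires_grad=True)
gt = torch.tensor([[0., 0.], [1., 0.], [1., 1.]])  # an L-shaped lane polyline
point_to_polyline_loss(pred, gt).backward()
```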
2503.05493
Qijiong Liu
Qijiong Liu, Jieming Zhu, Lu Fan, Kun Wang, Hengchang Hu, Wei Guo, Yong Liu, Xiao-Ming Wu
Benchmarking LLMs in Recommendation Tasks: A Comparative Evaluation with Conventional Recommenders
null
null
null
null
cs.IR cs.CL
http://creativecommons.org/licenses/by/4.0/
In recent years, integrating large language models (LLMs) into recommender systems has created new opportunities for improving recommendation quality. However, a comprehensive benchmark is needed to thoroughly evaluate and compare the recommendation capabilities of LLMs with traditional recommender systems. In this paper, we introduce RecBench, which systematically investigates various item representation forms (including unique identifier, text, semantic embedding, and semantic identifier) and evaluates two primary recommendation tasks, i.e., click-through rate prediction (CTR) and sequential recommendation (SeqRec). Our extensive experiments cover up to 17 large models and are conducted across five diverse datasets from fashion, news, video, books, and music domains. Our findings indicate that LLM-based recommenders outperform conventional recommenders, achieving up to a 5% AUC improvement in the CTR scenario and up to a 170% NDCG@10 improvement in the SeqRec scenario. However, these substantial performance gains come at the expense of significantly reduced inference efficiency, rendering the LLM-as-RS paradigm impractical for real-time recommendation environments. We aim for our findings to inspire future research, including recommendation-specific model acceleration methods. We will release our code, data, configurations, and platform to enable other researchers to reproduce and build upon our experimental results.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 15:05:23 GMT" } ]
2025-03-10T00:00:00
[ [ "Liu", "Qijiong", "" ], [ "Zhu", "Jieming", "" ], [ "Fan", "Lu", "" ], [ "Wang", "Kun", "" ], [ "Hu", "Hengchang", "" ], [ "Guo", "Wei", "" ], [ "Liu", "Yong", "" ], [ "Wu", "Xiao-Ming", "" ] ]
TITLE: Benchmarking LLMs in Recommendation Tasks: A Comparative Evaluation with Conventional Recommenders ABSTRACT: In recent years, integrating large language models (LLMs) into recommender systems has created new opportunities for improving recommendation quality. However, a comprehensive benchmark is needed to thoroughly evaluate and compare the recommendation capabilities of LLMs with traditional recommender systems. In this paper, we introduce RecBench, which systematically investigates various item representation forms (including unique identifier, text, semantic embedding, and semantic identifier) and evaluates two primary recommendation tasks, i.e., click-through rate prediction (CTR) and sequential recommendation (SeqRec). Our extensive experiments cover up to 17 large models and are conducted across five diverse datasets from fashion, news, video, books, and music domains. Our findings indicate that LLM-based recommenders outperform conventional recommenders, achieving up to a 5% AUC improvement in the CTR scenario and up to a 170% NDCG@10 improvement in the SeqRec scenario. However, these substantial performance gains come at the expense of significantly reduced inference efficiency, rendering the LLM-as-RS paradigm impractical for real-time recommendation environments. We aim for our findings to inspire future research, including recommendation-specific model acceleration methods. We will release our code, data, configurations, and platform to enable other researchers to reproduce and build upon our experimental results.
no_new_dataset
0.772144
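For reference, the two headline metrics in this benchmark are standard and easy to sketch: rank-based AUC for CTR prediction and NDCG@10 for sequential recommendation with a single held-out target item per user. This is generic evaluation code, not RecBench's implementation.

```python
import numpy as np

def auc(y_true, y_score):
    """Mann-Whitney formulation of ROC-AUC (ties ignored for brevity)."""
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def ndcg_at_10(ranked_items, target):
    """With one relevant item, NDCG reduces to 1/log2(rank+1) if in top-10."""
    for i, item in enumerate(ranked_items[:10]):
        if item == target:
            return 1.0 / np.log2(i + 2)
    return 0.0

print(auc(np.array([1, 0, 1, 0]), np.array([0.9, 0.2, 0.6, 0.4])))  # 1.0
print(ndcg_at_10(["i7", "i3", "i9"], "i3"))                          # ~0.631
```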
2503.05505
Yusong Ke
Yusong Ke
Statistical Guarantees of Correctness Coverage for Medical Multiple-Choice Question Answering
Under Review
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) are increasingly deployed in real-world question-answering (QA) applications. However, LLMs have been proven to generate hallucinations and nonfactual information, undermining their trustworthiness in high-stakes medical tasks. Conformal prediction (CP) is well known to be model-agnostic and distribution-free, creating statistically rigorous prediction sets in classification tasks. In this work, we adapt the CP framework, for the first time, to medical multiple-choice question-answering (MCQA) tasks, by correlating the nonconformity score with the frequency score of correct options grounded in self-consistency theory, assuming no access to internal model information. Considering that the adapted CP framework can only control the (mis)coverage rate, we employ a risk control framework, which can manage task-specific metrics by devising a monotonically decreasing loss function. We evaluate our framework on 3 popular medical MCQA datasets utilizing 4 ``off-the-shelf'' LLMs. Empirical results demonstrate that we achieve user-specified average (or marginal) error rates on the test set. Furthermore, we observe that the average prediction set size (APSS) on the test set decreases as the risk level increases, suggesting APSS as a promising evaluation metric for the uncertainty of LLMs.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 15:22:10 GMT" } ]
2025-03-10T00:00:00
[ [ "Ke", "Yusong", "" ] ]
TITLE: Statistical Guarantees of Correctness Coverage for Medical Multiple-Choice Question Answering ABSTRACT: Large language models (LLMs) are increasingly deployed in real-world question-answering (QA) applications. However, LLMs have been proven to generate hallucinations and nonfactual information, undermining their trustworthiness in high-stakes medical tasks. Conformal prediction (CP) is well known to be model-agnostic and distribution-free, creating statistically rigorous prediction sets in classification tasks. In this work, we adapt the CP framework, for the first time, to medical multiple-choice question-answering (MCQA) tasks, by correlating the nonconformity score with the frequency score of correct options grounded in self-consistency theory, assuming no access to internal model information. Considering that the adapted CP framework can only control the (mis)coverage rate, we employ a risk control framework, which can manage task-specific metrics by devising a monotonically decreasing loss function. We evaluate our framework on 3 popular medical MCQA datasets utilizing 4 ``off-the-shelf'' LLMs. Empirical results demonstrate that we achieve user-specified average (or marginal) error rates on the test set. Furthermore, we observe that the average prediction set size (APSS) on the test set decreases as the risk level increases, suggesting APSS as a promising evaluation metric for the uncertainty of LLMs.
no_new_dataset
0.947527
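A hedged sketch of the split-conformal recipe described above: the nonconformity score of an option is one minus its frequency among sampled LLM answers (self-consistency), a threshold is calibrated with the usual finite-sample correction, and the prediction set keeps every option whose score falls below it. Names and numbers are illustrative.

```python
import numpy as np

def calibrate_threshold(cal_scores, alpha=0.1):
    """cal_scores: nonconformity of the *true* option on calibration questions."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n        # finite-sample corrected level
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

def prediction_set(option_freqs, threshold):
    """option_freqs: option -> frequency among, e.g., 20 sampled generations."""
    return {o for o, f in option_freqs.items() if 1.0 - f <= threshold}

cal = 1.0 - np.array([0.9, 0.6, 0.8, 0.95, 0.7, 0.85, 0.5, 0.75])
tau = calibrate_threshold(cal, alpha=0.1)
print(prediction_set({"A": 0.55, "B": 0.30, "C": 0.10, "D": 0.05}, tau))
```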
2503.05520
Romain Hermary
Romain Hermary, Vincent Gaudilli\`ere, Abd El Rahman Shabayek, Djamila Aouada
Removing Geometric Bias in One-Class Anomaly Detection with Adaptive Feature Perturbation
Published in WACV 2025
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
One-class anomaly detection aims to detect objects that do not belong to a predefined normal class. In practice, training data lack such anomalous samples; hence, state-of-the-art methods are trained to discriminate between normal and synthetically-generated pseudo-anomalous data. Most methods use data augmentation techniques on normal images to simulate anomalies. However, the best-performing ones implicitly leverage a geometric bias present in the benchmarking datasets, which limits their usability in more general conditions. Others rely on basic noising schemes that may be suboptimal in capturing the underlying structure of normal data. In addition, most still favour the image domain for generating pseudo-anomalies, training models end-to-end from only the normal class and overlooking richer representations of the information. To overcome these limitations, we consider frozen yet rich feature spaces given by pretrained models and create pseudo-anomalous features with a novel adaptive linear feature perturbation technique. It adapts the noise distribution to each sample, applies decaying linear perturbations to feature vectors, and further guides the classification process using a contrastive learning objective. Experimental evaluation conducted on both standard and geometric bias-free datasets demonstrates the superiority of our approach with respect to comparable baselines. The codebase is accessible via our public repository.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 15:42:51 GMT" } ]
2025-03-10T00:00:00
[ [ "Hermary", "Romain", "" ], [ "Gaudillière", "Vincent", "" ], [ "Shabayek", "Abd El Rahman", "" ], [ "Aouada", "Djamila", "" ] ]
TITLE: Removing Geometric Bias in One-Class Anomaly Detection with Adaptive Feature Perturbation ABSTRACT: One-class anomaly detection aims to detect objects that do not belong to a predefined normal class. In practice, training data lack such anomalous samples; hence, state-of-the-art methods are trained to discriminate between normal and synthetically-generated pseudo-anomalous data. Most methods use data augmentation techniques on normal images to simulate anomalies. However, the best-performing ones implicitly leverage a geometric bias present in the benchmarking datasets, which limits their usability in more general conditions. Others rely on basic noising schemes that may be suboptimal in capturing the underlying structure of normal data. In addition, most still favour the image domain for generating pseudo-anomalies, training models end-to-end from only the normal class and overlooking richer representations of the information. To overcome these limitations, we consider frozen yet rich feature spaces given by pretrained models and create pseudo-anomalous features with a novel adaptive linear feature perturbation technique. It adapts the noise distribution to each sample, applies decaying linear perturbations to feature vectors, and further guides the classification process using a contrastive learning objective. Experimental evaluation conducted on both standard and geometric bias-free datasets demonstrates the superiority of our approach with respect to comparable baselines. The codebase is accessible via our public repository.
no_new_dataset
0.947817
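To make the feature-space idea concrete, here is a loose sketch (not the paper's scheme) of generating pseudo-anomalous features from frozen backbone features: noise is scaled per sample and applied with a linearly decaying magnitude, yielding negatives at several distances from the normal manifold for a downstream discriminator trained with a contrastive objective.

```python
import torch

def pseudo_anomalies(feats, n_steps=4, base_scale=0.5):
    """feats: (B,D) frozen features. Returns (B*n_steps, D) perturbed copies."""
    std = feats.std(dim=1, keepdim=True)       # adapt noise scale to each sample
    out = []
    for k in range(1, n_steps + 1):
        decay = base_scale / k                 # decaying perturbation magnitude
        out.append(feats + decay * torch.randn_like(feats) * std)
    return torch.cat(out, dim=0)

feats = torch.randn(8, 512)                    # e.g., pooled pretrained features
negatives = pseudo_anomalies(feats)            # (32, 512) pseudo-anomalous set
```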
2503.05522
Eren Erogullari
Eren Erogullari, Sebastian Lapuschkin, Wojciech Samek, Frederik Pahde
Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Concept Activation Vectors (CAVs) are widely used to model human-understandable concepts as directions within the latent space of neural networks. They are trained by identifying directions from the activations of concept samples to those of non-concept samples. However, this method often produces similar, non-orthogonal directions for correlated concepts, such as "beard" and "necktie" within the CelebA dataset, which frequently co-occur in images of men. This entanglement complicates the interpretation of concepts in isolation and can lead to undesired effects in CAV applications, such as activation steering. To address this issue, we introduce a post-hoc concept disentanglement method that employs a non-orthogonality loss, facilitating the identification of orthogonal concept directions while preserving directional correctness. We evaluate our approach with real-world and controlled correlated concepts in CelebA and a synthetic FunnyBirds dataset with VGG16 and ResNet18 architectures. We further demonstrate the superiority of orthogonalized concept representations in activation steering tasks, allowing (1) the insertion of isolated concepts into input images through generative models and (2) the removal of concepts for effective shortcut suppression with reduced impact on correlated concepts in comparison to baseline CAVs.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 15:45:43 GMT" } ]
2025-03-10T00:00:00
[ [ "Erogullari", "Eren", "" ], [ "Lapuschkin", "Sebastian", "" ], [ "Samek", "Wojciech", "" ], [ "Pahde", "Frederik", "" ] ]
TITLE: Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations ABSTRACT: Concept Activation Vectors (CAVs) are widely used to model human-understandable concepts as directions within the latent space of neural networks. They are trained by identifying directions from the activations of concept samples to those of non-concept samples. However, this method often produces similar, non-orthogonal directions for correlated concepts, such as "beard" and "necktie" within the CelebA dataset, which frequently co-occur in images of men. This entanglement complicates the interpretation of concepts in isolation and can lead to undesired effects in CAV applications, such as activation steering. To address this issue, we introduce a post-hoc concept disentanglement method that employs a non-orthogonality loss, facilitating the identification of orthogonal concept directions while preserving directional correctness. We evaluate our approach with real-world and controlled correlated concepts in CelebA and a synthetic FunnyBirds dataset with VGG16 and ResNet18 architectures. We further demonstrate the superiority of orthogonalized concept representations in activation steering tasks, allowing (1) the insertion of isolated concepts into input images through generative models and (2) the removal of concepts for effective shortcut suppression with reduced impact on correlated concepts in comparison to baseline CAVs.
no_new_dataset
0.947332
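A small sketch of the core objective (assumptions flagged): fit two linear concept probes jointly, trading off classification correctness against a penalty on the cosine similarity of the two directions. This approximates the non-orthogonality loss idea; the paper's post-hoc procedure on pretrained CAVs may differ in detail.

```python
import torch

acts = torch.randn(256, 128)             # latent activations from a frozen model
y1 = (torch.rand(256) > 0.5).float()     # concept 1 labels, e.g. "beard"
y2 = (torch.rand(256) > 0.5).float()     # concept 2 labels, e.g. "necktie"

v1 = torch.randn(128, requires_grad=True)
v2 = torch.randn(128, requires_grad=True)
opt = torch.optim.Adam([v1, v2], lr=1e-2)
bce = torch.nn.BCEWithLogitsLoss()

for _ in range(200):
    opt.zero_grad()
    cls = bce(acts @ v1, y1) + bce(acts @ v2, y2)        # directional correctness
    cos = torch.nn.functional.cosine_similarity(v1, v2, dim=0)
    (cls + 5.0 * cos ** 2).backward()                    # non-orthogonality penalty
    opt.step()
```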
2503.05531
Alex Fedorov
Alex Fedorov, Yutong Bu, Xiao Hu, Chris Rorden, Sergey Plis
State-of-the-Art Stroke Lesion Segmentation at 1/1000th of Parameters
International Symposium on Biomedical Imaging, April 14-17, 2025
null
null
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient and accurate whole-brain lesion segmentation remains a challenge in medical image analysis. In this work, we revisit MeshNet, a parameter-efficient segmentation model, and introduce a novel multi-scale dilation pattern with an encoder-decoder structure. This innovation enables capturing broad contextual information and fine-grained details without traditional downsampling, upsampling, or skip-connections. Unlike previous approaches processing subvolumes or slices, we operate directly on whole-brain $256^3$ MRI volumes. Evaluations on the Aphasia Recovery Cohort (ARC) dataset demonstrate that MeshNet achieves superior or comparable DICE scores to state-of-the-art architectures such as MedNeXt and U-MAMBA at 1/1000th of parameters. Our results validate MeshNet's strong balance of efficiency and performance, making it particularly suitable for resource-limited environments such as web-based applications and opening new possibilities for the widespread deployment of advanced medical image analysis tools.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 15:58:36 GMT" } ]
2025-03-10T00:00:00
[ [ "Fedorov", "Alex", "" ], [ "Bu", "Yutong", "" ], [ "Hu", "Xiao", "" ], [ "Rorden", "Chris", "" ], [ "Plis", "Sergey", "" ] ]
TITLE: State-of-the-Art Stroke Lesion Segmentation at 1/1000th of Parameters ABSTRACT: Efficient and accurate whole-brain lesion segmentation remains a challenge in medical image analysis. In this work, we revisit MeshNet, a parameter-efficient segmentation model, and introduce a novel multi-scale dilation pattern with an encoder-decoder structure. This innovation enables capturing broad contextual information and fine-grained details without traditional downsampling, upsampling, or skip-connections. Unlike previous approaches processing subvolumes or slices, we operate directly on whole-brain $256^3$ MRI volumes. Evaluations on the Aphasia Recovery Cohort (ARC) dataset demonstrate that MeshNet achieves superior or comparable DICE scores to state-of-the-art architectures such as MedNeXt and U-MAMBA at 1/1000th of parameters. Our results validate MeshNet's strong balance of efficiency and performance, making it particularly suitable for resource-limited environments such as web-based applications and opening new possibilities for the widespread deployment of advanced medical image analysis tools.
no_new_dataset
0.942981
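The architectural idea is simple to sketch: a fully convolutional 3D network whose receptive field grows through dilation rather than down/upsampling, keeping spatial resolution and the parameter count small. The dilation schedule and channel width below are placeholders, not MeshNet's published configuration.

```python
import torch
import torch.nn as nn

def block(cin, cout, d):
    return nn.Sequential(
        nn.Conv3d(cin, cout, 3, padding=d, dilation=d),  # resolution preserved
        nn.BatchNorm3d(cout), nn.ReLU(inplace=True))

dilations = [1, 2, 4, 8, 4, 2, 1]          # encoder-decoder-like dilation pattern
layers = [block(1, 16, dilations[0])]
layers += [block(16, 16, d) for d in dilations[1:]]
layers += [nn.Conv3d(16, 2, 1)]            # two classes: background / lesion
net = nn.Sequential(*layers)

x = torch.randn(1, 1, 64, 64, 64)          # whole-volume crop for illustration
print(net(x).shape)                        # same spatial size as the input
print(sum(p.numel() for p in net.parameters()), "parameters")
```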
2503.05534
Adrien Meyer
Adrien Meyer, Lorenzo Arboit, Giuseppe Massimiani, Francesco Brucchi, Luca Emanuele Amodio, Didier Mutter, Nicolas Padoy
S4M: Segment Anything with 4 Extreme Points
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Segment Anything Model (SAM) has revolutionized open-set interactive image segmentation, inspiring numerous adapters for the medical domain. However, SAM primarily relies on sparse prompts such as point or bounding box, which may be suboptimal for fine-grained instance segmentation, particularly in endoscopic imagery, where precise localization is critical and existing prompts struggle to capture object boundaries effectively. To address this, we introduce S4M (Segment Anything with 4 Extreme Points), which augments SAM by leveraging extreme-point prompts -- the top-, bottom-, left-, and right-most points of an instance. These points are intuitive to identify and provide a faster, structured alternative to box prompts. However, a na\"ive use of extreme points degrades performance, due to SAM's inability to interpret their semantic roles. To resolve this, we introduce dedicated learnable embeddings, enabling the model to distinguish extreme points from generic free-form points and better reason about their spatial relationships. We further propose an auxiliary training task through the Canvas module, which operates solely on prompts -- without vision input -- to predict a coarse instance mask. This encourages the model to internalize the relationship between extreme points and mask distributions, leading to more robust segmentation. S4M outperforms other SAM-based approaches on three endoscopic surgical datasets, demonstrating its effectiveness in complex scenarios. Finally, we validate our approach through a human annotation study on surgical endoscopic videos, confirming that extreme points are faster to acquire than bounding boxes.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 16:02:11 GMT" } ]
2025-03-10T00:00:00
[ [ "Meyer", "Adrien", "" ], [ "Arboit", "Lorenzo", "" ], [ "Massimiani", "Giuseppe", "" ], [ "Brucchi", "Francesco", "" ], [ "Amodio", "Luca Emanuele", "" ], [ "Mutter", "Didier", "" ], [ "Padoy", "Nicolas", "" ] ]
TITLE: S4M: Segment Anything with 4 Extreme Points ABSTRACT: The Segment Anything Model (SAM) has revolutionized open-set interactive image segmentation, inspiring numerous adapters for the medical domain. However, SAM primarily relies on sparse prompts such as point or bounding box, which may be suboptimal for fine-grained instance segmentation, particularly in endoscopic imagery, where precise localization is critical and existing prompts struggle to capture object boundaries effectively. To address this, we introduce S4M (Segment Anything with 4 Extreme Points), which augments SAM by leveraging extreme-point prompts -- the top-, bottom-, left-, and right-most points of an instance. These points are intuitive to identify and provide a faster, structured alternative to box prompts. However, a na\"ive use of extreme points degrades performance, due to SAM's inability to interpret their semantic roles. To resolve this, we introduce dedicated learnable embeddings, enabling the model to distinguish extreme points from generic free-form points and better reason about their spatial relationships. We further propose an auxiliary training task through the Canvas module, which operates solely on prompts -- without vision input -- to predict a coarse instance mask. This encourages the model to internalize the relationship between extreme points and mask distributions, leading to more robust segmentation. S4M outperforms other SAM-based approaches on three endoscopic surgical datasets, demonstrating its effectiveness in complex scenarios. Finally, we validate our approach through a human annotation study on surgical endoscopic videos, confirming that extreme points are faster to acquire than bounding boxes.
no_new_dataset
0.94625
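Extreme-point prompts are cheap to derive when masks exist, which is part of their appeal. The snippet below extracts the four extreme points from a binary instance mask; this is generic geometry, not S4M's code.

```python
import numpy as np

def extreme_points(mask: np.ndarray):
    """mask: (H,W) boolean. Returns top, bottom, left, right (y, x) points."""
    ys, xs = np.nonzero(mask)
    top = (ys.min(), xs[ys.argmin()])
    bottom = (ys.max(), xs[ys.argmax()])
    left = (ys[xs.argmin()], xs.min())
    right = (ys[xs.argmax()], xs.max())
    return top, bottom, left, right

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 10:50] = True                  # a rectangular toy instance
print(extreme_points(mask))
```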
2503.05541
Juan Miguel Valverde
Juan Miguel Valverde, Maja {\O}stergaard, Adrian Rodriguez-Palomo, Peter Alling Strange Vibe, Nina K{\o}lln Wittig, Henrik Birkedal, Anders Bjorholm Dahl
Disconnect to Connect: A Data Augmentation Method for Improving Topology Accuracy in Image Segmentation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Accurate segmentation of thin, tubular structures (e.g., blood vessels) is challenging for deep neural networks. These networks classify individual pixels, and even minor misclassifications can break the thin connections within these structures. Existing methods for improving topology accuracy, such as topology loss functions, rely on very precise, topologically-accurate training labels, which are difficult to obtain. This is because annotating images, especially 3D images, is extremely laborious and time-consuming. Low image resolution and contrast further complicate the annotation by causing tubular structures to appear disconnected. We present CoLeTra, a data augmentation strategy that instills in models the prior knowledge that structures that appear broken are actually connected. This is achieved by creating images with the appearance of disconnected structures while maintaining the original labels. Our extensive experiments, involving different architectures, loss functions, and datasets, demonstrate that CoLeTra leads to topologically more accurate segmentations while often improving the Dice coefficient and Hausdorff distance. CoLeTra's hyper-parameters are intuitive to tune, and our sensitivity analysis shows that CoLeTra is robust to changes in these hyper-parameters. We also release a dataset specifically suited for image segmentation methods with a focus on topology accuracy. CoLeTra's code can be found at https://github.com/jmlipman/CoLeTra.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 16:11:55 GMT" } ]
2025-03-10T00:00:00
[ [ "Valverde", "Juan Miguel", "" ], [ "Østergaard", "Maja", "" ], [ "Rodriguez-Palomo", "Adrian", "" ], [ "Vibe", "Peter Alling Strange", "" ], [ "Wittig", "Nina Kølln", "" ], [ "Birkedal", "Henrik", "" ], [ "Dahl", "Anders Bjorholm", "" ] ]
TITLE: Disconnect to Connect: A Data Augmentation Method for Improving Topology Accuracy in Image Segmentation ABSTRACT: Accurate segmentation of thin, tubular structures (e.g., blood vessels) is challenging for deep neural networks. These networks classify individual pixels, and even minor misclassifications can break the thin connections within these structures. Existing methods for improving topology accuracy, such as topology loss functions, rely on very precise, topologically-accurate training labels, which are difficult to obtain. This is because annotating images, especially 3D images, is extremely laborious and time-consuming. Low image resolution and contrast further complicate the annotation by causing tubular structures to appear disconnected. We present CoLeTra, a data augmentation strategy that instills in models the prior knowledge that structures that appear broken are actually connected. This is achieved by creating images with the appearance of disconnected structures while maintaining the original labels. Our extensive experiments, involving different architectures, loss functions, and datasets, demonstrate that CoLeTra leads to topologically more accurate segmentations while often improving the Dice coefficient and Hausdorff distance. CoLeTra's hyper-parameters are intuitive to tune, and our sensitivity analysis shows that CoLeTra is robust to changes in these hyper-parameters. We also release a dataset specifically suited for image segmentation methods with a focus on topology accuracy. CoLeTra's code can be found at https://github.com/jmlipman/CoLeTra.
new_dataset
0.960768
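A rough sketch of the augmentation principle (details assumed, not CoLeTra's exact sampling or blending): overlay a background-like patch on a random point of the thin structure so it looks disconnected in the image, while the label stays untouched, teaching the model that apparent breaks are still connected.

```python
import numpy as np

def disconnect_augment(image, label, patch=9, alpha=0.8, rng=np.random):
    img = image.copy()
    ys, xs = np.nonzero(label)                  # points on the tubular structure
    i = rng.randint(len(ys))
    y, x, h = ys[i], xs[i], patch // 2
    bg_val = np.median(image[label == 0])       # background-like intensity
    y0, y1 = max(y - h, 0), min(y + h + 1, image.shape[0])
    x0, x1 = max(x - h, 0), min(x + h + 1, image.shape[1])
    img[y0:y1, x0:x1] = alpha * bg_val + (1 - alpha) * img[y0:y1, x0:x1]
    return img, label                           # label deliberately unchanged

image = np.random.rand(64, 64)
label = np.zeros((64, 64), dtype=np.uint8)
label[32, 5:60] = 1                             # a thin synthetic "vessel"
aug_img, aug_lbl = disconnect_augment(image, label)
```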
2503.05549
Junpeng Jing
Junpeng Jing, Weixun Luo, Ye Mao, Krystian Mikolajczyk
Stereo Any Video: Temporally Consistent Stereo Matching
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces Stereo Any Video, a powerful framework for video stereo matching. It can estimate spatially accurate and temporally consistent disparities without relying on auxiliary information such as camera poses or optical flow. This strong capability is driven by rich priors from monocular video depth models, which are integrated with convolutional features to produce stable representations. To further enhance performance, key architectural innovations are introduced: all-to-all-pairs correlation, which constructs smooth and robust matching cost volumes, and temporal convex upsampling, which improves temporal coherence. These components collectively ensure robustness, accuracy, and temporal consistency, setting a new standard in video stereo matching. Extensive experiments demonstrate that our method achieves state-of-the-art performance across multiple datasets both qualitatively and quantitatively in zero-shot settings, as well as strong generalization to real-world indoor and outdoor scenarios.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 16:20:36 GMT" } ]
2025-03-10T00:00:00
[ [ "Jing", "Junpeng", "" ], [ "Luo", "Weixun", "" ], [ "Mao", "Ye", "" ], [ "Mikolajczyk", "Krystian", "" ] ]
TITLE: Stereo Any Video: Temporally Consistent Stereo Matching ABSTRACT: This paper introduces Stereo Any Video, a powerful framework for video stereo matching. It can estimate spatially accurate and temporally consistent disparities without relying on auxiliary information such as camera poses or optical flow. This strong capability is driven by rich priors from monocular video depth models, which are integrated with convolutional features to produce stable representations. To further enhance performance, key architectural innovations are introduced: all-to-all-pairs correlation, which constructs smooth and robust matching cost volumes, and temporal convex upsampling, which improves temporal coherence. These components collectively ensure robustness, accuracy, and temporal consistency, setting a new standard in video stereo matching. Extensive experiments demonstrate that our method achieves state-of-the-art performance across multiple datasets both qualitatively and quantitatively in zero-shot settings, as well as strong generalization to real-world indoor and outdoor scenarios.
no_new_dataset
0.949902
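As a hedged illustration of the all-to-all-pairs correlation mentioned above, the snippet computes an all-pairs matching-cost volume between left and right feature maps along each image row (the epipolar line for rectified stereo); the paper's variant additionally spans temporal pairs.

```python
import torch

def all_pairs_correlation(f_left, f_right):
    """f_left, f_right: (B,C,H,W) -> (B,H,W,W) costs per rectified row."""
    B, C, H, W = f_left.shape
    l = f_left.permute(0, 2, 3, 1)             # (B,H,W,C)
    r = f_right.permute(0, 2, 1, 3)            # (B,H,C,W)
    return torch.matmul(l, r) / C ** 0.5       # corr[b,h,x_left,x_right]

fl, fr = torch.randn(1, 64, 32, 48), torch.randn(1, 64, 32, 48)
print(all_pairs_correlation(fl, fr).shape)     # torch.Size([1, 32, 48, 48])
```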
2503.05568
Xiaobei Zhao
Xiaobei Zhao (1), Xiangrong Zeng (1), Yihang Ma (1), Pengjin Tang (1), Xiang Li (1) ((1) China Agricultural University)
TomatoScanner: phenotyping tomato fruit based on only RGB image
12 pages, 37 figures. Codes and datasets are open-sourced in https://github.com/AlexTraveling/TomatoScanner
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In tomato greenhouses, phenotypic measurement helps researchers and farmers monitor crop growth and thereby precisely control environmental conditions in time, leading to better quality and higher yield. Traditional phenotyping mainly relies on manual measurement, which is accurate but inefficient and, more importantly, endangers the health and safety of people. Several studies have explored computer vision-based methods to replace manual phenotyping. However, 2D-based methods need extra calibration, damage the fruit, or can only measure a limited set of less meaningful traits, while 3D-based methods require an extra depth camera, which is expensive and unaffordable for most farmers. In this paper, we propose a non-contact tomato fruit phenotyping method, titled TomatoScanner, where an RGB image is all you need as input. First, pixel features are extracted by instance segmentation with our proposed EdgeYOLO, with preprocessing for individual separation and pose correction. Second, depth features are extracted by depth estimation with Depth Pro. Third, pixel and depth features are fused to output real-world phenotype results. We establish a self-built Tomato Phenotype Dataset to test TomatoScanner, which achieves excellent phenotyping on width, height, vertical area and volume, with median relative errors of 5.63%, 7.03%, -0.64% and 37.06%, respectively. We propose and add three innovative modules - EdgeAttention, EdgeLoss and EdgeBoost - into EdgeYOLO to enhance segmentation accuracy on edge portions. Precision and mean Edge Error greatly improve from 0.943 and 5.641% to 0.986 and 2.963%, respectively. Meanwhile, EdgeYOLO remains lightweight and efficient, with a weights size of 48.7 M and 76.34 FPS. Codes and datasets: https://github.com/AlexTraveling/TomatoScanner.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 16:47:48 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhao", "Xiaobei", "", "China Agricultural University" ], [ "Zeng", "Xiangrong", "", "China Agricultural University" ], [ "Ma", "Yihang", "", "China Agricultural University" ], [ "Tang", "Pengjin", "", "China Agricultural University" ], [ "Li", "Xiang", "", "China Agricultural University" ] ]
TITLE: TomatoScanner: phenotyping tomato fruit based on only RGB image ABSTRACT: In tomato greenhouses, phenotypic measurement helps researchers and farmers monitor crop growth and thereby precisely control environmental conditions in time, leading to better quality and higher yield. Traditional phenotyping mainly relies on manual measurement, which is accurate but inefficient and, more importantly, endangers the health and safety of people. Several studies have explored computer vision-based methods to replace manual phenotyping. However, 2D-based methods need extra calibration, damage the fruit, or can only measure a limited set of less meaningful traits, while 3D-based methods require an extra depth camera, which is expensive and unaffordable for most farmers. In this paper, we propose a non-contact tomato fruit phenotyping method, titled TomatoScanner, where an RGB image is all you need as input. First, pixel features are extracted by instance segmentation with our proposed EdgeYOLO, with preprocessing for individual separation and pose correction. Second, depth features are extracted by depth estimation with Depth Pro. Third, pixel and depth features are fused to output real-world phenotype results. We establish a self-built Tomato Phenotype Dataset to test TomatoScanner, which achieves excellent phenotyping on width, height, vertical area and volume, with median relative errors of 5.63%, 7.03%, -0.64% and 37.06%, respectively. We propose and add three innovative modules - EdgeAttention, EdgeLoss and EdgeBoost - into EdgeYOLO to enhance segmentation accuracy on edge portions. Precision and mean Edge Error greatly improve from 0.943 and 5.641% to 0.986 and 2.963%, respectively. Meanwhile, EdgeYOLO remains lightweight and efficient, with a weights size of 48.7 M and 76.34 FPS. Codes and datasets: https://github.com/AlexTraveling/TomatoScanner.
no_new_dataset
0.948058
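The pixel+depth fusion step reduces, at its simplest, to the pinhole relation: a length of px pixels observed at depth Z maps to px * Z / f metres, with f the focal length in pixels. The numbers below are illustrative, not TomatoScanner's calibration.

```python
def pixel_to_metric(px: float, depth_m: float, focal_px: float) -> float:
    """Convert an image-space length to metres under a pinhole camera model."""
    return px * depth_m / focal_px

width_px = 180           # fruit width measured on the instance mask
depth_m = 0.55           # depth at the fruit, e.g. from monocular estimation
focal_px = 1400.0        # camera intrinsic (pixels), assumed known
print(f"width = {100 * pixel_to_metric(width_px, depth_m, focal_px):.1f} cm")
```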
2503.05578
Jian Liu
Jian Liu, Wei Sun, Kai Zeng, Jin Zheng, Hui Yang, Lin Wang, Hossein Rahmani, Ajmal Mian
Novel Object 6D Pose Estimation with a Single Reference View
17 pages, 12 figures (including supplementary material)
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing novel object 6D pose estimation methods typically rely on CAD models or dense reference views, which are both difficult to acquire. Using only a single reference view is more scalable, but challenging due to large pose discrepancies and limited geometric and spatial information. To address these issues, we propose a Single-Reference-based novel object 6D (SinRef-6D) pose estimation method. Our key idea is to iteratively establish point-wise alignment in the camera coordinate system based on state space models (SSMs). Specifically, iterative camera-space point-wise alignment can effectively handle large pose discrepancies, while our proposed RGB and Points SSMs can capture long-range dependencies and spatial information from a single view, offering linear complexity and superior spatial modeling capability. Once pre-trained on synthetic data, SinRef-6D can estimate the 6D pose of a novel object using only a single reference view, without requiring retraining or a CAD model. Extensive experiments on six popular datasets and real-world robotic scenes demonstrate that we achieve on-par performance with CAD-based and dense reference view-based methods, despite operating in the more challenging single reference setting. Code will be released at https://github.com/CNJianLiu/SinRef-6D.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:00:41 GMT" } ]
2025-03-10T00:00:00
[ [ "Liu", "Jian", "" ], [ "Sun", "Wei", "" ], [ "Zeng", "Kai", "" ], [ "Zheng", "Jin", "" ], [ "Yang", "Hui", "" ], [ "Wang", "Lin", "" ], [ "Rahmani", "Hossein", "" ], [ "Mian", "Ajmal", "" ] ]
TITLE: Novel Object 6D Pose Estimation with a Single Reference View ABSTRACT: Existing novel object 6D pose estimation methods typically rely on CAD models or dense reference views, which are both difficult to acquire. Using only a single reference view is more scalable, but challenging due to large pose discrepancies and limited geometric and spatial information. To address these issues, we propose a Single-Reference-based novel object 6D (SinRef-6D) pose estimation method. Our key idea is to iteratively establish point-wise alignment in the camera coordinate system based on state space models (SSMs). Specifically, iterative camera-space point-wise alignment can effectively handle large pose discrepancies, while our proposed RGB and Points SSMs can capture long-range dependencies and spatial information from a single view, offering linear complexity and superior spatial modeling capability. Once pre-trained on synthetic data, SinRef-6D can estimate the 6D pose of a novel object using only a single reference view, without requiring retraining or a CAD model. Extensive experiments on six popular datasets and real-world robotic scenes demonstrate that we achieve on-par performance with CAD-based and dense reference view-based methods, despite operating in the more challenging single reference setting. Code will be released at https://github.com/CNJianLiu/SinRef-6D.
no_new_dataset
0.948917
2503.05582
Yang Mu
Yang Mu, Muhammad Shahzad, Xiao Xiang Zhu
MPTSNet: Integrating Multiscale Periodic Local Patterns and Global Dependencies for Multivariate Time Series Classification
Accepted by AAAI2025
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multivariate Time Series Classification (MTSC) is crucial in extensive practical applications, such as environmental monitoring, medical EEG analysis, and action recognition. Real-world time series datasets typically exhibit complex dynamics. To capture this complexity, RNN-based, CNN-based, Transformer-based, and hybrid models have been proposed. Unfortunately, current deep learning-based methods often neglect the simultaneous construction of local features and global dependencies at different time scales, lacking sufficient feature extraction capabilities to achieve satisfactory classification accuracy. To address these challenges, we propose a novel Multiscale Periodic Time Series Network (MPTSNet), which integrates multiscale local patterns and global correlations to fully exploit the inherent information in time series. Recognizing the multi-periodicity and complex variable correlations in time series, we use the Fourier transform to extract primary periods, enabling us to decompose data into multiscale periodic segments. Leveraging the inherent strengths of CNN and attention mechanism, we introduce the PeriodicBlock, which adaptively captures local patterns and global dependencies while offering enhanced interpretability through attention integration across different periodic scales. The experiments on UEA benchmark datasets demonstrate that the proposed MPTSNet outperforms 21 existing advanced baselines in the MTSC tasks.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:07:51 GMT" } ]
2025-03-10T00:00:00
[ [ "Mu", "Yang", "" ], [ "Shahzad", "Muhammad", "" ], [ "Zhu", "Xiao Xiang", "" ] ]
TITLE: MPTSNet: Integrating Multiscale Periodic Local Patterns and Global Dependencies for Multivariate Time Series Classification ABSTRACT: Multivariate Time Series Classification (MTSC) is crucial in extensive practical applications, such as environmental monitoring, medical EEG analysis, and action recognition. Real-world time series datasets typically exhibit complex dynamics. To capture this complexity, RNN-based, CNN-based, Transformer-based, and hybrid models have been proposed. Unfortunately, current deep learning-based methods often neglect the simultaneous construction of local features and global dependencies at different time scales, lacking sufficient feature extraction capabilities to achieve satisfactory classification accuracy. To address these challenges, we propose a novel Multiscale Periodic Time Series Network (MPTSNet), which integrates multiscale local patterns and global correlations to fully exploit the inherent information in time series. Recognizing the multi-periodicity and complex variable correlations in time series, we use the Fourier transform to extract primary periods, enabling us to decompose data into multiscale periodic segments. Leveraging the inherent strengths of CNN and attention mechanism, we introduce the PeriodicBlock, which adaptively captures local patterns and global dependencies while offering enhanced interpretability through attention integration across different periodic scales. The experiments on UEA benchmark datasets demonstrate that the proposed MPTSNet outperforms 21 existing advanced baselines in the MTSC tasks.
no_new_dataset
0.949201
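The Fourier-based period extraction described above is easy to sketch: take the FFT amplitude spectrum of a (centered) series, keep the top-k frequency bins, and convert each to a candidate period for multiscale segmentation. This is the generic recipe, not MPTSNet's exact implementation.

```python
import numpy as np

def primary_periods(x: np.ndarray, k: int = 3):
    amp = np.abs(np.fft.rfft(x - x.mean()))
    amp[0] = 0.0                                  # drop the DC component
    freqs = np.argsort(amp)[-k:][::-1]            # dominant frequency bins
    return [len(x) // f for f in freqs if f > 0]  # period = N / frequency

t = np.arange(512)
x = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 16)
print(primary_periods(x, k=2))                    # -> [64, 16]
```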
2503.05587
Shiping Yang
Shiping Yang, Jie Wu, Wenbiao Ding, Ning Wu, Shining Liang, Ming Gong, Hengyuan Zhang, Dongmei Zhang
Quantifying the Robustness of Retrieval-Augmented Language Models Against Spurious Features in Grounding Data
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robustness has become a critical attribute for the deployment of RAG systems in real-world applications. Existing research focuses on robustness to explicit noise (e.g., document semantics) but overlooks spurious features (a.k.a. implicit noise). While previous works have explored spurious features in LLMs, they are limited to specific features (e.g., formats) and narrow scenarios (e.g., ICL). In this work, we statistically confirm the presence of spurious features in the RAG paradigm, a robustness problem caused by the sensitivity of LLMs to semantic-agnostic features. Moreover, we provide a comprehensive taxonomy of spurious features and empirically quantify their impact through controlled experiments. Further analysis reveals that not all spurious features are harmful and they can even be beneficial sometimes. Extensive evaluation results across multiple LLMs suggest that spurious features are a widespread and challenging problem in the field of RAG. To facilitate future research, we release all code and data at https://github.com/maybenotime/RAG-SpuriousFeatures.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:11:34 GMT" } ]
2025-03-10T00:00:00
[ [ "Yang", "Shiping", "" ], [ "Wu", "Jie", "" ], [ "Ding", "Wenbiao", "" ], [ "Wu", "Ning", "" ], [ "Liang", "Shining", "" ], [ "Gong", "Ming", "" ], [ "Zhang", "Hengyuan", "" ], [ "Zhang", "Dongmei", "" ] ]
TITLE: Quantifying the Robustness of Retrieval-Augmented Language Models Against Spurious Features in Grounding Data ABSTRACT: Robustness has become a critical attribute for the deployment of RAG systems in real-world applications. Existing research focuses on robustness to explicit noise (e.g., document semantics) but overlooks spurious features (a.k.a. implicit noise). While previous works have explored spurious features in LLMs, they are limited to specific features (e.g., formats) and narrow scenarios (e.g., ICL). In this work, we statistically confirm the presence of spurious features in the RAG paradigm, a robustness problem caused by the sensitivity of LLMs to semantic-agnostic features. Moreover, we provide a comprehensive taxonomy of spurious features and empirically quantify their impact through controlled experiments. Further analysis reveals that not all spurious features are harmful and they can even be beneficial sometimes. Extensive evaluation results across multiple LLMs suggest that spurious features are a widespread and challenging problem in the field of RAG. To facilitate future research, we release all code and data at https://github.com/maybenotime/RAG-SpuriousFeatures.
no_new_dataset
0.917525
2503.05595
Zheng Li
Zheng Li, Liangbin Xie, Jiantao Zhou, Xintao Wang, Haiwei Wu, Jinyu Tian
Anti-Diffusion: Preventing Abuse of Modifications of Diffusion-Based Models
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although diffusion-based techniques have shown remarkable success in image generation and editing tasks, their abuse can lead to severe negative social impacts. Recently, some works have been proposed to provide defense against the abuse of diffusion-based methods. However, their protection may be limited in specific scenarios by manually defined prompts or the stable diffusion (SD) version. Furthermore, these methods solely focus on tuning methods, overlooking editing methods that could also pose a significant threat. In this work, we propose Anti-Diffusion, a privacy protection system designed for general diffusion-based methods, applicable to both tuning and editing techniques. To mitigate the limitations of manually defined prompts on defense performance, we introduce the prompt tuning (PT) strategy that enables precise expression of original images. To provide defense against both tuning and editing methods, we propose the semantic disturbance loss (SDL) to disrupt the semantic information of protected images. Given the limited research on the defense against editing methods, we develop a dataset named Defense-Edit to assess the defense performance of various methods. Experiments demonstrate that our Anti-Diffusion achieves superior defense performance across a wide range of diffusion-based techniques in different scenarios.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:23:52 GMT" } ]
2025-03-10T00:00:00
[ [ "Li", "Zheng", "" ], [ "Xie", "Liangbin", "" ], [ "Zhou", "Jiantao", "" ], [ "Wang", "Xintao", "" ], [ "Wu", "Haiwei", "" ], [ "Tian", "Jinyu", "" ] ]
TITLE: Anti-Diffusion: Preventing Abuse of Modifications of Diffusion-Based Models ABSTRACT: Although diffusion-based techniques have shown remarkable success in image generation and editing tasks, their abuse can lead to severe negative social impacts. Recently, some works have been proposed to provide defense against the abuse of diffusion-based methods. However, their protection may be limited in specific scenarios by manually defined prompts or the stable diffusion (SD) version. Furthermore, these methods solely focus on tuning methods, overlooking editing methods that could also pose a significant threat. In this work, we propose Anti-Diffusion, a privacy protection system designed for general diffusion-based methods, applicable to both tuning and editing techniques. To mitigate the limitations of manually defined prompts on defense performance, we introduce the prompt tuning (PT) strategy that enables precise expression of original images. To provide defense against both tuning and editing methods, we propose the semantic disturbance loss (SDL) to disrupt the semantic information of protected images. Given the limited research on the defense against editing methods, we develop a dataset named Defense-Edit to assess the defense performance of various methods. Experiments demonstrate that our Anti-Diffusion achieves superior defense performance across a wide range of diffusion-based techniques in different scenarios.
new_dataset
0.959687
2503.05602
Jan Schnabel
Roberto Fl\'orez Ablan, Marco Roth, and Jan Schnabel
On the similarity of bandwidth-tuned quantum kernels and classical kernels
9 main pages with 5 figures, and 9 appendix pages with 12 figures
null
null
null
quant-ph cs.LG
http://creativecommons.org/licenses/by/4.0/
Quantum kernels (QK) are widely used in quantum machine learning applications; yet, their potential to surpass classical machine learning methods on classical datasets remains uncertain. This limitation can be attributed to the exponential concentration phenomenon, which can impair both trainability and generalization. A common strategy to alleviate this is bandwidth tuning, which involves rescaling data points in the quantum model to improve generalization. In this work, we numerically demonstrate that optimal bandwidth tuning results in QKs that closely resemble radial basis function (RBF) kernels, leading to a lack of quantum advantage over classical methods. Moreover, we reveal that the size of optimal bandwidth tuning parameters further simplifies QKs, causing them to behave like polynomial kernels, corresponding to a low-order Taylor approximation of a RBF kernel. We thoroughly investigate this for fidelity quantum kernels and projected quantum kernels using various data encoding circuits across several classification datasets. We provide numerical evidence and derive a simple analytical model that elucidates how bandwidth tuning influences key quantities in classification tasks. Overall, our findings shed light on the mechanisms that render QK methods classically simulatable.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:28:02 GMT" } ]
2025-03-10T00:00:00
[ [ "Ablan", "Roberto Flórez", "" ], [ "Roth", "Marco", "" ], [ "Schnabel", "Jan", "" ] ]
TITLE: On the similarity of bandwidth-tuned quantum kernels and classical kernels ABSTRACT: Quantum kernels (QK) are widely used in quantum machine learning applications; yet, their potential to surpass classical machine learning methods on classical datasets remains uncertain. This limitation can be attributed to the exponential concentration phenomenon, which can impair both trainability and generalization. A common strategy to alleviate this is bandwidth tuning, which involves rescaling data points in the quantum model to improve generalization. In this work, we numerically demonstrate that optimal bandwidth tuning results in QKs that closely resemble radial basis function (RBF) kernels, leading to a lack of quantum advantage over classical methods. Moreover, we reveal that the size of optimal bandwidth tuning parameters further simplifies QKs, causing them to behave like polynomial kernels, corresponding to a low-order Taylor approximation of a RBF kernel. We thoroughly investigate this for fidelity quantum kernels and projected quantum kernels using various data encoding circuits across several classification datasets. We provide numerical evidence and derive a simple analytical model that elucidates how bandwidth tuning influences key quantities in classification tasks. Overall, our findings shed light on the mechanisms that render QK methods classically simulatable.
no_new_dataset
0.947769
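The paper's observation can be illustrated numerically without any quantum machinery: as the bandwidth shrinks (data rescaled toward zero), an RBF kernel is increasingly well approximated by its low-order Taylor polynomial, i.e. it degenerates toward a polynomial kernel. The sketch below checks this on random data; it mimics the phenomenon, not the paper's quantum-kernel experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)        # squared pairwise distances

for gamma in [1.0, 0.1, 0.01]:                    # smaller gamma = more rescaling
    rbf = np.exp(-gamma * d2)
    taylor = 1 - gamma * d2 + 0.5 * (gamma * d2) ** 2   # 2nd-order expansion
    print(f"gamma={gamma:<5} max |RBF - Taylor| = {np.abs(rbf - taylor).max():.2e}")
```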
2503.05604
Ahmed Alagha
Hanae Elmekki, Ahmed Alagha, Hani Sami, Amanda Spilkin, Antonela Mariel Zanuttini, Ehsan Zakeri, Jamal Bentahar, Lyes Kadem, Wen-Fang Xie, Philippe Pibarot, Rabeb Mizouni, Hadi Otrok, Shakti Singh, Azzam Mourad
CACTUS: An Open Dataset and Framework for Automated Cardiac Assessment and Classification of Ultrasound Images Using Deep Transfer Learning
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Cardiac ultrasound (US) scanning is a commonly used technique in cardiology to diagnose the health of the heart and its proper functioning. Therefore, it is necessary to consider ways to automate these tasks and assist medical professionals in classifying and assessing cardiac US images. Machine learning (ML) techniques are regarded as a prominent solution due to their success in numerous applications aimed at enhancing the medical field, including addressing the shortage of echography technicians. However, the limited availability of medical data presents a significant barrier to applying ML in cardiology, particularly regarding US images of the heart. This paper addresses this challenge by introducing the first open graded dataset for Cardiac Assessment and ClassificaTion of UltraSound (CACTUS), which is available online. This dataset contains images obtained from scanning a CAE Blue Phantom and representing various heart views and different quality levels, exceeding the conventional cardiac views typically found in the literature. Additionally, the paper introduces a Deep Learning (DL) framework consisting of two main components. The first component classifies cardiac US images based on the heart view using a Convolutional Neural Network (CNN). The second component uses Transfer Learning (TL) to fine-tune the knowledge from the first component and create a model for grading and assessing cardiac images. The framework demonstrates high performance in both classification and grading, achieving up to 99.43% accuracy and as low as 0.3067 error, respectively. To showcase its robustness, the framework is further fine-tuned using new images representing additional cardiac views and compared to several other state-of-the-art architectures. The framework's outcomes and performance in handling real-time scans were also assessed using a questionnaire answered by cardiac experts.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:29:04 GMT" } ]
2025-03-10T00:00:00
[ [ "Elmekki", "Hanae", "" ], [ "Alagha", "Ahmed", "" ], [ "Sami", "Hani", "" ], [ "Spilkin", "Amanda", "" ], [ "Zanuttini", "Antonela Mariel", "" ], [ "Zakeri", "Ehsan", "" ], [ "Bentahar", "Jamal", "" ], [ "Kadem", "Lyes", "" ], [ "Xie", "Wen-Fang", "" ], [ "Pibarot", "Philippe", "" ], [ "Mizouni", "Rabeb", "" ], [ "Otrok", "Hadi", "" ], [ "Singh", "Shakti", "" ], [ "Mourad", "Azzam", "" ] ]
TITLE: CACTUS: An Open Dataset and Framework for Automated Cardiac Assessment and Classification of Ultrasound Images Using Deep Transfer Learning ABSTRACT: Cardiac ultrasound (US) scanning is a commonly used technique in cardiology to diagnose the health of the heart and its proper functioning. Therefore, it is necessary to consider ways to automate these tasks and assist medical professionals in classifying and assessing cardiac US images. Machine learning (ML) techniques are regarded as a prominent solution due to their success in numerous applications aimed at enhancing the medical field, including addressing the shortage of echography technicians. However, the limited availability of medical data presents a significant barrier to applying ML in cardiology, particularly regarding US images of the heart. This paper addresses this challenge by introducing the first open graded dataset for Cardiac Assessment and ClassificaTion of UltraSound (CACTUS), which is available online. This dataset contains images obtained from scanning a CAE Blue Phantom and representing various heart views and different quality levels, exceeding the conventional cardiac views typically found in the literature. Additionally, the paper introduces a Deep Learning (DL) framework consisting of two main components. The first component classifies cardiac US images based on the heart view using a Convolutional Neural Network (CNN). The second component uses Transfer Learning (TL) to fine-tune the knowledge from the first component and create a model for grading and assessing cardiac images. The framework demonstrates high performance in both classification and grading, achieving up to 99.43% accuracy and as low as 0.3067 error, respectively. To showcase its robustness, the framework is further fine-tuned using new images representing additional cardiac views and compared to several other state-of-the-art architectures. The framework's outcomes and performance in handling real-time scans were also assessed using a questionnaire answered by cardiac experts.
new_dataset
0.910107
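The two-stage design lends itself to a generic transfer-learning sketch: train a small CNN for view classification, then freeze its backbone and fit a new head for grading. Layer shapes and names below are placeholders, not the CACTUS architecture.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(                     # stage 1: trained for view labels
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
view_head = nn.Linear(32, 5)                  # e.g., 5 cardiac views

for p in backbone.parameters():               # stage 2: freeze and transfer
    p.requires_grad = False
grade_head = nn.Linear(32, 1)                 # regress an image quality grade

x = torch.randn(4, 1, 128, 128)               # grayscale ultrasound frames
views = view_head(backbone(x))                # stage-1 logits
grades = grade_head(backbone(x))              # stage-2 quality estimates
```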
2503.05605
Silvia Garc\'ia-M\'endez
Francisco de Arriba-P\'erez, Silvia Garc\'ia-M\'endez, F\'atima Leal, Benedita Malheiro, Juan C Burguillo
Identification and explanation of disinformation in wiki data streams
(2025) Integrated Computer-Aided Engineering
null
10.1177/10692509241306580
null
cs.IR cs.CY cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Social media platforms, increasingly used as news sources for varied data analytics, have transformed how information is generated and disseminated. However, the unverified nature of this content raises concerns about trustworthiness and accuracy, potentially negatively impacting readers' critical judgment due to disinformation. This work aims to contribute to the automatic data quality validation field, addressing the rapid growth of online content on wiki pages. Our scalable solution includes stream-based data processing with feature engineering, feature analysis and selection, stream-based classification, and real-time explanation of prediction outcomes. The explainability dashboard is designed for the general public, who may lack the specialized knowledge needed to interpret the model's predictions. Experimental results on two datasets attain values of approximately 90% across all evaluation metrics, demonstrating robust and competitive performance compared to works in the literature. In summary, the system assists editors by reducing their effort and time in detecting disinformation.
[ { "version": "v1", "created": "Mon, 3 Feb 2025 08:34:39 GMT" } ]
2025-03-10T00:00:00
[ [ "de Arriba-Pérez", "Francisco", "" ], [ "García-Méndez", "Silvia", "" ], [ "Leal", "Fátima", "" ], [ "Malheiro", "Benedita", "" ], [ "Burguillo", "Juan C", "" ] ]
TITLE: Identification and explanation of disinformation in wiki data streams ABSTRACT: Social media platforms, increasingly used as news sources for varied data analytics, have transformed how information is generated and disseminated. However, the unverified nature of this content raises concerns about trustworthiness and accuracy, potentially negatively impacting readers' critical judgment due to disinformation. This work aims to contribute to the automatic data quality validation field, addressing the rapid growth of online content on wiki pages. Our scalable solution includes stream-based data processing with feature engineering, feature analysis and selection, stream-based classification, and real-time explanation of prediction outcomes. The explainability dashboard is designed for the general public, who may lack the specialized knowledge needed to interpret the model's predictions. Experimental results on two datasets attain values of approximately 90% across all evaluation metrics, demonstrating robust and competitive performance compared to works in the literature. In summary, the system assists editors by reducing their effort and time in detecting disinformation.
no_new_dataset
0.952353
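The stream-based classification step described in the record above maps naturally onto incremental learners. A sketch under stated assumptions (the features, model choice, and toy stream below are illustrative, not the authors' pipeline):

```python
# Hedged sketch: one-pass incremental classification of wiki edits.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])  # 0 = legitimate content, 1 = disinformation
clf = SGDClassifier(loss="log_loss", random_state=0)

def feature_vector(edit):
    # Placeholder feature engineering for one incoming edit.
    return np.array([[edit["n_chars"], edit["n_links"], edit["user_age_days"]]])

stream = [
    {"n_chars": 120, "n_links": 2, "user_age_days": 900, "label": 0},
    {"n_chars": 4000, "n_links": 40, "user_age_days": 1, "label": 1},
]
for edit in stream:
    x, y = feature_vector(edit), np.array([edit["label"]])
    if hasattr(clf, "coef_"):                # score before updating
        print("p(disinfo) =", clf.predict_proba(x)[0, 1])
    clf.partial_fit(x, y, classes=classes)   # incremental update
```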
2503.05609
Charvi Rastogi
Pushkar Mishra, Charvi Rastogi, Stephen R. Pfohl, Alicia Parrish, Roma Patel, Mark Diaz, Ding Wang, Michela Paganini, Vinodkumar Prabhakaran, Lora Aroyo, Verena Rieser
Nuanced Safety for Generative AI: How Demographics Shape Responsiveness to Severity
null
null
null
null
cs.CY cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ensuring the safety of Generative AI requires a nuanced understanding of pluralistic viewpoints. In this paper, we introduce a novel data-driven approach for calibrating granular ratings in pluralistic datasets. Specifically, we address the challenge of interpreting responses of a diverse population to safety expressed via ordinal scales (e.g., Likert scale). We distill non-parametric responsiveness metrics that quantify the consistency of raters in scoring the varying levels of the severity of safety violations. Using safety evaluation of AI-generated content as a case study, we investigate how raters from different demographic groups (age, gender, ethnicity) use an ordinal scale to express their perception of the severity of violations in a pluralistic safety dataset. We apply our metrics across violation types, demonstrating their utility in extracting nuanced insights that are crucial for developing reliable AI systems in multi-cultural contexts. We show that our approach offers improved capabilities for prioritizing safety concerns by capturing nuanced viewpoints across different demographic groups, hence improving the reliability of pluralistic data collection and in turn contributing to more robust AI evaluations.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:32:31 GMT" } ]
2025-03-10T00:00:00
[ [ "Mishra", "Pushkar", "" ], [ "Rastogi", "Charvi", "" ], [ "Pfohl", "Stephen R.", "" ], [ "Parrish", "Alicia", "" ], [ "Patel", "Roma", "" ], [ "Diaz", "Mark", "" ], [ "Wang", "Ding", "" ], [ "Paganini", "Michela", "" ], [ "Prabhakaran", "Vinodkumar", "" ], [ "Aroyo", "Lora", "" ], [ "Rieser", "Verena", "" ] ]
TITLE: Nuanced Safety for Generative AI: How Demographics Shape Responsiveness to Severity ABSTRACT: Ensuring the safety of Generative AI requires a nuanced understanding of pluralistic viewpoints. In this paper, we introduce a novel data-driven approach for calibrating granular ratings in pluralistic datasets. Specifically, we address the challenge of interpreting responses of a diverse population to safety expressed via ordinal scales (e.g., Likert scale). We distill non-parametric responsiveness metrics that quantify the consistency of raters in scoring the varying levels of the severity of safety violations. Using safety evaluation of AI-generated content as a case study, we investigate how raters from different demographic groups (age, gender, ethnicity) use an ordinal scale to express their perception of the severity of violations in a pluralistic safety dataset. We apply our metrics across violation types, demonstrating their utility in extracting nuanced insights that are crucial for developing reliable AI systems in multi-cultural contexts. We show that our approach offers improved capabilities for prioritizing safety concerns by capturing nuanced viewpoints across different demographic groups, hence improving the reliability of pluralistic data collection and in turn contributing to more robust AI evaluations.
no_new_dataset
0.946101
2503.05618
Luca Mossina
Luca Mossina and Corentin Friedrich
Conformal Prediction for Image Segmentation Using Morphological Prediction Sets
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image segmentation is a challenging task influenced by multiple sources of uncertainty, such as the data labeling process or the sampling of training data. In this paper, we focus on binary segmentation and address these challenges using conformal prediction, a family of model- and data-agnostic methods for uncertainty quantification that provide finite-sample theoretical guarantees and are applicable to any pretrained predictor. Our approach involves computing nonconformity scores, a type of prediction residual, on held-out calibration data not used during training. We use dilation, one of the fundamental operations in mathematical morphology, to construct a margin added to the borders of predicted segmentation masks. At inference, the predicted set formed by the mask and its margin contains the ground-truth mask with high probability, at a confidence level specified by the user. The size of the margin serves as an indicator of predictive uncertainty for a given model and dataset. We work in a regime of minimal information as we do not require any feedback from the predictor: only the predicted masks are needed for computing the prediction sets. Hence, our method is applicable to any segmentation model, including those based on deep learning; we evaluate our approach on several medical imaging applications.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:42:30 GMT" } ]
2025-03-10T00:00:00
[ [ "Mossina", "Luca", "" ], [ "Friedrich", "Corentin", "" ] ]
TITLE: Conformal Prediction for Image Segmentation Using Morphological Prediction Sets ABSTRACT: Image segmentation is a challenging task influenced by multiple sources of uncertainty, such as the data labeling process or the sampling of training data. In this paper, we focus on binary segmentation and address these challenges using conformal prediction, a family of model- and data-agnostic methods for uncertainty quantification that provide finite-sample theoretical guarantees and are applicable to any pretrained predictor. Our approach involves computing nonconformity scores, a type of prediction residual, on held-out calibration data not used during training. We use dilation, one of the fundamental operations in mathematical morphology, to construct a margin added to the borders of predicted segmentation masks. At inference, the predicted set formed by the mask and its margin contains the ground-truth mask with high probability, at a confidence level specified by the user. The size of the margin serves as an indicator of predictive uncertainty for a given model and dataset. We work in a regime of minimal information as we do not require any feedback from the predictor: only the predicted masks are needed for computing the prediction sets. Hence, our method is applicable to any segmentation model, including those based on deep learning; we evaluate our approach on several medical imaging applications.
no_new_dataset
0.950227
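The dilation-based margin in the conformal segmentation record above can be sketched end to end. The nonconformity score below (number of dilations needed to cover the ground truth) and the toy calibration data are assumptions; the paper's exact score may differ:

```python
# Hedged sketch of conformal segmentation with a morphological margin.
import numpy as np
from scipy.ndimage import binary_dilation

def dilations_to_cover(pred, gt, max_iter=50):
    """Nonconformity score: dilations of `pred` needed to cover `gt`."""
    covered = pred.copy()
    for k in range(max_iter + 1):
        if np.all(covered | ~gt):          # gt is a subset of covered
            return k
        covered = binary_dilation(covered)
    return max_iter

# Toy calibration pairs of (predicted mask, ground-truth mask).
rng = np.random.default_rng(0)
calibration_pairs = []
for _ in range(20):
    gt = rng.random((32, 32)) > 0.7
    pred = gt & (rng.random((32, 32)) > 0.1)   # imperfect prediction
    calibration_pairs.append((pred, gt))

alpha = 0.1                                     # user-chosen miscoverage
scores = sorted(dilations_to_cover(p, g) for p, g in calibration_pairs)
k = min(int(np.ceil((len(scores) + 1) * (1 - alpha))), len(scores))
margin = scores[k - 1]                          # conformal quantile

def prediction_set(pred_mask):
    """Mask plus margin; covers the true mask w.p. >= 1 - alpha."""
    return binary_dilation(pred_mask, iterations=margin) if margin else pred_mask

print("calibrated margin:", margin)
```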
2503.05620
Xuanqing Liu
Xuanqing Liu, Luyang Kong, Wei Niu, Afshin Khashei, Belinda Zeng, Steve Johnson, Jon Jay, Davor Golac, Matt Pope
Learning LLM Preference over Intra-Dialogue Pairs: A Framework for Utterance-level Understandings
7 pages, 4 figures
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have demonstrated remarkable capabilities in handling complex dialogue tasks without requiring use case-specific fine-tuning. However, analyzing live dialogues in real-time necessitates low-latency processing systems, making it impractical to deploy models with billions of parameters due to latency constraints. As a result, practitioners often prefer smaller models with millions of parameters, trained on high-quality, human-annotated datasets. Yet, curating such datasets is both time-consuming and costly. Consequently, there is a growing need to combine the scalability of LLM-generated labels with the precision of human annotations, enabling fine-tuned smaller models to achieve both higher speed and accuracy comparable to larger models. In this paper, we introduce a simple yet effective framework to address this challenge. Our approach is specifically designed for per-utterance classification problems, which encompass tasks such as intent detection, dialogue state tracking, and more. To mitigate the impact of labeling errors from LLMs -- the primary source of inaccuracies in student models -- we propose a noise-reduced preference learning loss. Experimental results demonstrate that our method significantly improves accuracy across utterance-level dialogue tasks, including sentiment detection (over $2\%$), dialogue act classification (over $1.5\%$), etc.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:46:13 GMT" } ]
2025-03-10T00:00:00
[ [ "Liu", "Xuanqing", "" ], [ "Kong", "Luyang", "" ], [ "Niu", "Wei", "" ], [ "Khashei", "Afshin", "" ], [ "Zeng", "Belinda", "" ], [ "Johnson", "Steve", "" ], [ "Jay", "Jon", "" ], [ "Golac", "Davor", "" ], [ "Pope", "Matt", "" ] ]
TITLE: Learning LLM Preference over Intra-Dialogue Pairs: A Framework for Utterance-level Understandings ABSTRACT: Large language models (LLMs) have demonstrated remarkable capabilities in handling complex dialogue tasks without requiring use case-specific fine-tuning. However, analyzing live dialogues in real-time necessitates low-latency processing systems, making it impractical to deploy models with billions of parameters due to latency constraints. As a result, practitioners often prefer smaller models with millions of parameters, trained on high-quality, human-annotated datasets. Yet, curating such datasets is both time-consuming and costly. Consequently, there is a growing need to combine the scalability of LLM-generated labels with the precision of human annotations, enabling fine-tuned smaller models to achieve both higher speed and accuracy comparable to larger models. In this paper, we introduce a simple yet effective framework to address this challenge. Our approach is specifically designed for per-utterance classification problems, which encompass tasks such as intent detection, dialogue state tracking, and more. To mitigate the impact of labeling errors from LLMs -- the primary source of inaccuracies in student models -- we propose a noise-reduced preference learning loss. Experimental results demonstrate that our method significantly improves accuracy across utterance-level dialogue tasks, including sentiment detection (over $2\%$), dialogue act classification (over $1.5\%$), etc.
no_new_dataset
0.950595
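The "noise-reduced preference learning loss" in the record above is not spelled out here, but the general shape of a pairwise preference objective with a simple noise-discounting term can be sketched; the label-smoothing mechanism below is an assumption, not the paper's exact objective:

```python
# Illustrative pairwise preference loss with label smoothing as a
# simple noise-discounting heuristic (an assumption).
import torch
import torch.nn.functional as F

def smoothed_preference_loss(score_pos, score_neg, eps=0.1):
    """Bradley-Terry style loss over intra-dialogue utterance pairs.

    score_pos / score_neg: model scores for the utterance an LLM
    labeler preferred vs. rejected; eps discounts full confidence in
    that (possibly noisy) preference.
    """
    logits = score_pos - score_neg                 # preference margin
    target = torch.full_like(logits, 1.0 - eps)
    return F.binary_cross_entropy_with_logits(logits, target)

pos, neg = torch.tensor([2.1, 0.3]), torch.tensor([0.4, 0.9])
print(smoothed_preference_loss(pos, neg).item())
```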
2503.05626
Jingyu Xu
Jingyu Xu, Yang Wang
FMT: A Multimodal Pneumonia Detection Model Based on Stacking MOE Framework
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence has shown the potential to improve diagnostic accuracy through medical image analysis for pneumonia diagnosis. However, traditional multimodal approaches often fail to address real-world challenges such as incomplete data and modality loss. In this study, a Flexible Multimodal Transformer (FMT) was proposed, which uses ResNet-50 and BERT for joint representation learning, followed by a dynamic masked attention strategy that simulates clinical modality loss to improve robustness; finally, a sequential mixture of experts (MOE) architecture was used to achieve multi-level decision refinement. After evaluation on a small multimodal pneumonia dataset, FMT achieved state-of-the-art performance with 94% accuracy, 95% recall, and 93% F1 score, outperforming single-modal baselines (ResNet: 89%; BERT: 79%) and the medical benchmark CheXMed (90%), providing a scalable solution for multimodal diagnosis of pneumonia in resource-constrained medical settings.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:52:12 GMT" } ]
2025-03-10T00:00:00
[ [ "Xu", "Jingyu", "" ], [ "Wang", "Yang", "" ] ]
TITLE: FMT: A Multimodal Pneumonia Detection Model Based on Stacking MOE Framework ABSTRACT: Artificial intelligence has shown the potential to improve diagnostic accuracy through medical image analysis for pneumonia diagnosis. However, traditional multimodal approaches often fail to address real-world challenges such as incomplete data and modality loss. In this study, a Flexible Multimodal Transformer (FMT) was proposed, which uses ResNet-50 and BERT for joint representation learning, followed by a dynamic masked attention strategy that simulates clinical modality loss to improve robustness; finally, a sequential mixture of experts (MOE) architecture was used to achieve multi-level decision refinement. After evaluation on a small multimodal pneumonia dataset, FMT achieved state-of-the-art performance with 94% accuracy, 95% recall, and 93% F1 score, outperforming single-modal baselines (ResNet: 89%; BERT: 79%) and the medical benchmark CheXMed (90%), providing a scalable solution for multimodal diagnosis of pneumonia in resource-constrained medical settings.
no_new_dataset
0.946151
2503.05629
Samaneh Zolfaghari Dr.
Ali Samimi Fard, Mohammadreza Mashhadigholamali, Samaneh Zolfaghari, Hajar Abedi, Mainak Chakraborty, Luigi Borz\`i, Masoud Daneshtalab, George Shaker
Exploring FMCW Radars and Feature Maps for Activity Recognition: A Benchmark Study
null
null
null
null
cs.ET cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Human Activity Recognition has gained significant attention due to its diverse applications, including ambient assisted living and remote sensing. Wearable sensor-based solutions often suffer from user discomfort and reliability issues, while video-based methods raise privacy concerns and perform poorly in low-light conditions or long ranges. This study introduces a Frequency-Modulated Continuous Wave radar-based framework for human activity recognition, leveraging a 60 GHz radar and multi-dimensional feature maps. Unlike conventional approaches that process feature maps as images, this study feeds multi-dimensional feature maps -- Range-Doppler, Range-Azimuth, and Range-Elevation -- as data vectors directly into the machine learning (SVM, MLP) and deep learning (CNN, LSTM, ConvLSTM) models, preserving the spatial and temporal structures of the data. These features were extracted from a novel dataset with seven activity classes and validated using two different validation approaches. The ConvLSTM model outperformed conventional machine learning and deep learning models, achieving an accuracy of 90.51% and an F1-score of 87.31% on cross-scene validation and an accuracy of 89.56% and an F1-score of 87.15% on leave-one-person-out cross-validation. The results highlight the approach's potential for scalable, non-intrusive, and privacy-preserving activity monitoring in real-world scenarios.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:53:29 GMT" } ]
2025-03-10T00:00:00
[ [ "Fard", "Ali Samimi", "" ], [ "Mashhadigholamali", "Mohammadreza", "" ], [ "Zolfaghari", "Samaneh", "" ], [ "Abedi", "Hajar", "" ], [ "Chakraborty", "Mainak", "" ], [ "Borzì", "Luigi", "" ], [ "Daneshtalab", "Masoud", "" ], [ "Shaker", "George", "" ] ]
TITLE: Exploring FMCW Radars and Feature Maps for Activity Recognition: A Benchmark Study ABSTRACT: Human Activity Recognition has gained significant attention due to its diverse applications, including ambient assisted living and remote sensing. Wearable sensor-based solutions often suffer from user discomfort and reliability issues, while video-based methods raise privacy concerns and perform poorly in low-light conditions or long ranges. This study introduces a Frequency-Modulated Continuous Wave radar-based framework for human activity recognition, leveraging a 60 GHz radar and multi-dimensional feature maps. Unlike conventional approaches that process feature maps as images, this study feeds multi-dimensional feature maps -- Range-Doppler, Range-Azimuth, and Range-Elevation -- as data vectors directly into the machine learning (SVM, MLP) and deep learning (CNN, LSTM, ConvLSTM) models, preserving the spatial and temporal structures of the data. These features were extracted from a novel dataset with seven activity classes and validated using two different validation approaches. The ConvLSTM model outperformed conventional machine learning and deep learning models, achieving an accuracy of 90.51% and an F1-score of 87.31% on cross-scene validation and an accuracy of 89.56% and an F1-score of 87.15% on leave-one-person-out cross-validation. The results highlight the approach's potential for scalable, non-intrusive, and privacy-preserving activity monitoring in real-world scenarios.
no_new_dataset
0.580471
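Feeding stacked radar feature maps to a ConvLSTM, as the record above describes, looks roughly like the following; the tensor layout, the channel stacking of the three maps, and the layer sizes are illustrative assumptions rather than the authors' configuration:

```python
# Illustrative ConvLSTM classifier over stacked radar feature maps.
import tensorflow as tf

T, H, W, C = 30, 32, 32, 3   # frames x map size x {RD, RA, RE} channels
inputs = tf.keras.Input(shape=(T, H, W, C))
x = tf.keras.layers.ConvLSTM2D(16, 3)(inputs)     # spatio-temporal features
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(7, activation="softmax")(x)  # 7 activities
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```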
2503.05630
Yu Jiang
Tian Qiu, Ruiming Du, Nikolai Spine, Lailiang Cheng, Yu Jiang
Joint 3D Point Cloud Segmentation using Real-Sim Loop: From Panels to Trees and Branches
Accepted by ICRA 2025
null
null
null
cs.RO cs.CV q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Modern orchards are planted in structured rows with distinct panel divisions to improve management. Accurate and efficient joint segmentation of point cloud from Panel to Tree and Branch (P2TB) is essential for robotic operations. However, most current segmentation methods focus on single instance segmentation and depend on a sequence of deep networks to perform joint tasks. This strategy hinders the use of hierarchical information embedded in the data, leading to both error accumulation and increased costs for annotation and computation, which limits its scalability for real-world applications. In this study, we proposed a novel approach that incorporated a Real2Sim L-TreeGen for training data generation and a joint model (J-P2TB) designed for the P2TB task. The J-P2TB model, trained on the generated simulation dataset, was used for joint segmentation of real-world panel point clouds via zero-shot learning. Compared to representative methods, our model outperformed them in most segmentation metrics while using 40% fewer learnable parameters. This Sim2Real result highlighted the efficacy of L-TreeGen in model training and the performance of J-P2TB for joint segmentation, demonstrating its strong accuracy, efficiency, and generalizability for real-world applications. These improvements would not only greatly benefit the development of robots for automated orchard operations but also advance digital twin technology.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:54:02 GMT" } ]
2025-03-10T00:00:00
[ [ "Qiu", "Tian", "" ], [ "Du", "Ruiming", "" ], [ "Spine", "Nikolai", "" ], [ "Cheng", "Lailiang", "" ], [ "Jiang", "Yu", "" ] ]
TITLE: Joint 3D Point Cloud Segmentation using Real-Sim Loop: From Panels to Trees and Branches ABSTRACT: Modern orchards are planted in structured rows with distinct panel divisions to improve management. Accurate and efficient joint segmentation of point cloud from Panel to Tree and Branch (P2TB) is essential for robotic operations. However, most current segmentation methods focus on single instance segmentation and depend on a sequence of deep networks to perform joint tasks. This strategy hinders the use of hierarchical information embedded in the data, leading to both error accumulation and increased costs for annotation and computation, which limits its scalability for real-world applications. In this study, we proposed a novel approach that incorporated a Real2Sim L-TreeGen for training data generation and a joint model (J-P2TB) designed for the P2TB task. The J-P2TB model, trained on the generated simulation dataset, was used for joint segmentation of real-world panel point clouds via zero-shot learning. Compared to representative methods, our model outperformed them in most segmentation metrics while using 40% fewer learnable parameters. This Sim2Real result highlighted the efficacy of L-TreeGen in model training and the performance of J-P2TB for joint segmentation, demonstrating its strong accuracy, efficiency, and generalizability for real-world applications. These improvements would not only greatly benefit the development of robots for automated orchard operations but also advance digital twin technology.
no_new_dataset
0.946101
2503.05638
Wenbo Hu
Mark YU, Wenbo Hu, Jinbo Xing, Ying Shan
TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models
Project webpage: https://trajectorycrafter.github.io/
null
null
null
cs.CV cs.AI cs.GR
http://creativecommons.org/licenses/by/4.0/
We present TrajectoryCrafter, a novel approach to redirect camera trajectories for monocular videos. By disentangling deterministic view transformations from stochastic content generation, our method achieves precise control over user-specified camera trajectories. We propose a novel dual-stream conditional video diffusion model that concurrently integrates point cloud renders and source videos as conditions, ensuring accurate view transformations and coherent 4D content generation. Instead of leveraging scarce multi-view videos, we curate a hybrid training dataset combining web-scale monocular videos with static multi-view datasets, by our innovative double-reprojection strategy, significantly fostering robust generalization across diverse scenes. Extensive evaluations on multi-view and large-scale monocular videos demonstrate the superior performance of our method.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 17:57:53 GMT" } ]
2025-03-10T00:00:00
[ [ "YU", "Mark", "" ], [ "Hu", "Wenbo", "" ], [ "Xing", "Jinbo", "" ], [ "Shan", "Ying", "" ] ]
TITLE: TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models ABSTRACT: We present TrajectoryCrafter, a novel approach to redirect camera trajectories for monocular videos. By disentangling deterministic view transformations from stochastic content generation, our method achieves precise control over user-specified camera trajectories. We propose a novel dual-stream conditional video diffusion model that concurrently integrates point cloud renders and source videos as conditions, ensuring accurate view transformations and coherent 4D content generation. Instead of leveraging scarce multi-view videos, we curate a hybrid training dataset combining web-scale monocular videos with static multi-view datasets, by our innovative double-reprojection strategy, significantly fostering robust generalization across diverse scenes. Extensive evaluations on multi-view and large-scale monocular videos demonstrate the superior performance of our method.
new_dataset
0.856152
2503.05648
Bharat Jayaprakash
Harish Panneer Selvam, Bharat Jayaprakash, Yan Li, Shashi Shekhar, William F. Northrop
Physics-based machine learning framework for predicting NOx emissions from compression ignition engines using on-board diagnostics data
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a physics-based machine learning framework to predict and analyze oxides of nitrogen (NOx) emissions from compression-ignition engine-powered vehicles using on-board diagnostics (OBD) data as input. Accurate NOx prediction from OBD datasets is difficult because NOx formation inside an engine combustion chamber is governed by complex processes occurring on timescales much shorter than the data collection rate. Thus, emissions generally cannot be predicted accurately using simple empirically derived physics models. Black box models like genetic algorithms or neural networks can be more accurate, but have poor interpretability. The transparent model presented in this paper has both high accuracy and can explain potential sources of high emissions. The proposed framework consists of two major steps: a physics-based NOx prediction model combined with a novel Divergent Window Co-occurrence (DWC) Pattern detection algorithm to analyze operating conditions that are not adequately addressed by the physics-based model. The proposed framework is validated for generalizability with a second vehicle OBD dataset, a sensitivity analysis is performed, and model predictions are compared with those from a deep neural network. The results show that NOx emissions predictions using the proposed model have around 55% better root mean square error and around 60% better mean absolute error compared to the baseline NOx prediction model from previously published work. The DWC Pattern Detection Algorithm identified low engine power conditions to have high statistical significance, indicating an operating regime where the model can be improved. This work shows that the physics-based machine learning framework is a viable method for predicting NOx emissions from engines that do not incorporate NOx sensing.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 18:11:23 GMT" } ]
2025-03-10T00:00:00
[ [ "Selvam", "Harish Panneer", "" ], [ "Jayaprakash", "Bharat", "" ], [ "Li", "Yan", "" ], [ "Shekhar", "Shashi", "" ], [ "Northrop", "William F.", "" ] ]
TITLE: Physics-based machine learning framework for predicting NOx emissions from compression ignition engines using on-board diagnostics data ABSTRACT: This work presents a physics-based machine learning framework to predict and analyze oxides of nitrogen (NOx) emissions from compression-ignition engine-powered vehicles using on-board diagnostics (OBD) data as input. Accurate NOx prediction from OBD datasets is difficult because NOx formation inside an engine combustion chamber is governed by complex processes occurring on timescales much shorter than the data collection rate. Thus, emissions generally cannot be predicted accurately using simple empirically derived physics models. Black box models like genetic algorithms or neural networks can be more accurate, but have poor interpretability. The transparent model presented in this paper has both high accuracy and can explain potential sources of high emissions. The proposed framework consists of two major steps: a physics-based NOx prediction model combined with a novel Divergent Window Co-occurrence (DWC) Pattern detection algorithm to analyze operating conditions that are not adequately addressed by the physics-based model. The proposed framework is validated for generalizability with a second vehicle OBD dataset, a sensitivity analysis is performed, and model predictions are compared with those from a deep neural network. The results show that NOx emissions predictions using the proposed model have around 55% better root mean square error and around 60% better mean absolute error compared to the baseline NOx prediction model from previously published work. The DWC Pattern Detection Algorithm identified low engine power conditions to have high statistical significance, indicating an operating regime where the model can be improved. This work shows that the physics-based machine learning framework is a viable method for predicting NOx emissions from engines that do not incorporate NOx sensing.
no_new_dataset
0.949153
2503.05657
Yasser Khalil
Yasser H. Khalil, Leo Brunswic, Soufiane Lamghari, Xu Li, Mahdi Beitollahi, Xi Chen
NoT: Federated Unlearning via Weight Negation
The 42nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville TN, US. 2025
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Federated unlearning (FU) aims to remove a participant's data contributions from a trained federated learning (FL) model, ensuring privacy and regulatory compliance. Traditional FU methods often depend on auxiliary storage on either the client or server side or require direct access to the data targeted for removal, a dependency that may not be feasible if the data is no longer available. To overcome these limitations, we propose NoT, a novel and efficient FU algorithm based on weight negation (multiplying by -1), which circumvents the need for additional storage and access to the target data. We argue that effective and efficient unlearning can be achieved by perturbing model parameters away from the set of optimal parameters, yet being well-positioned for quick re-optimization. This technique, though seemingly contradictory, is theoretically grounded: we prove that the weight negation perturbation effectively disrupts inter-layer co-adaptation, inducing unlearning while preserving an approximate optimality property, thereby enabling rapid recovery. Experimental results across three datasets and three model architectures demonstrate that NoT significantly outperforms existing baselines in unlearning efficacy as well as in communication and computational efficiency.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 18:19:19 GMT" } ]
2025-03-10T00:00:00
[ [ "Khalil", "Yasser H.", "" ], [ "Brunswic", "Leo", "" ], [ "Lamghari", "Soufiane", "" ], [ "Li", "Xu", "" ], [ "Beitollahi", "Mahdi", "" ], [ "Chen", "Xi", "" ] ]
TITLE: NoT: Federated Unlearning via Weight Negation ABSTRACT: Federated unlearning (FU) aims to remove a participant's data contributions from a trained federated learning (FL) model, ensuring privacy and regulatory compliance. Traditional FU methods often depend on auxiliary storage on either the client or server side or require direct access to the data targeted for removal, a dependency that may not be feasible if the data is no longer available. To overcome these limitations, we propose NoT, a novel and efficient FU algorithm based on weight negation (multiplying by -1), which circumvents the need for additional storage and access to the target data. We argue that effective and efficient unlearning can be achieved by perturbing model parameters away from the set of optimal parameters, yet being well-positioned for quick re-optimization. This technique, though seemingly contradictory, is theoretically grounded: we prove that the weight negation perturbation effectively disrupts inter-layer co-adaptation, inducing unlearning while preserving an approximate optimality property, thereby enabling rapid recovery. Experimental results across three datasets and three model architectures demonstrate that NoT significantly outperforms existing baselines in unlearning efficacy as well as in communication and computational efficiency.
no_new_dataset
0.943504
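The weight-negation perturbation at the core of NoT is simple enough to sketch directly; which layers to negate is an assumption here, and the paper's selection rule may differ:

```python
# Minimal sketch of the weight-negation perturbation (multiply
# selected parameters by -1); layer choice here is an assumption.
import torch
import torch.nn as nn

@torch.no_grad()
def negate_weights(model, prefixes=("0.",)):
    for name, p in model.named_parameters():
        if name.startswith(prefixes):
            p.mul_(-1.0)   # in-place negation disrupts co-adaptation

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
x = torch.randn(1, 4)
print("before:", model(x))
negate_weights(model)       # negate the first linear layer only
print("after: ", model(x))
# A short fine-tuning phase on the retained clients' data would
# follow, which the abstract argues recovers utility quickly.
```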
2503.05665
Zengqun Zhao
Zengqun Zhao, Ziquan Liu, Yu Cao, Shaogang Gong, Ioannis Patras
AIM-Fair: Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data
Accepted at CVPR 2025. Github: https://github.com/zengqunzhao/AIM-Fair. Project page: https://zengqunzhao.github.io/AIMFair
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent advances in generative models have sparked research on improving model fairness with AI-generated data. However, existing methods often face limitations in the diversity and quality of synthetic data, leading to compromised fairness and overall model accuracy. Moreover, many approaches rely on the availability of demographic group labels, which are often costly to annotate. This paper proposes AIM-Fair, aiming to overcome these limitations and harness the potential of cutting-edge generative models in promoting algorithmic fairness. We investigate a fine-tuning paradigm starting from a biased model initially trained on real-world data without demographic annotations. This model is then fine-tuned using unbiased synthetic data generated by a state-of-the-art diffusion model to improve its fairness. Two key challenges are identified in this fine-tuning paradigm: 1) the low quality of synthetic data, which can still happen even with advanced generative models, and 2) the domain and bias gap between real and synthetic data. To address the limitation of synthetic data quality, we propose Contextual Synthetic Data Generation (CSDG) to generate data using a text-to-image diffusion model (T2I) with prompts generated by a context-aware LLM, ensuring both data diversity and control of bias in synthetic data. To resolve domain and bias shifts, we introduce a novel selective fine-tuning scheme in which only model parameters more sensitive to bias and less sensitive to domain shift are updated. Experiments on CelebA and UTKFace datasets show that our AIM-Fair improves model fairness while maintaining utility, outperforming both fully and partially fine-tuned approaches to model fairness.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 18:26:48 GMT" } ]
2025-03-10T00:00:00
[ [ "Zhao", "Zengqun", "" ], [ "Liu", "Ziquan", "" ], [ "Cao", "Yu", "" ], [ "Gong", "Shaogang", "" ], [ "Patras", "Ioannis", "" ] ]
TITLE: AIM-Fair: Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data ABSTRACT: Recent advances in generative models have sparked research on improving model fairness with AI-generated data. However, existing methods often face limitations in the diversity and quality of synthetic data, leading to compromised fairness and overall model accuracy. Moreover, many approaches rely on the availability of demographic group labels, which are often costly to annotate. This paper proposes AIM-Fair, aiming to overcome these limitations and harness the potential of cutting-edge generative models in promoting algorithmic fairness. We investigate a fine-tuning paradigm starting from a biased model initially trained on real-world data without demographic annotations. This model is then fine-tuned using unbiased synthetic data generated by a state-of-the-art diffusion model to improve its fairness. Two key challenges are identified in this fine-tuning paradigm: 1) the low quality of synthetic data, which can still happen even with advanced generative models, and 2) the domain and bias gap between real and synthetic data. To address the limitation of synthetic data quality, we propose Contextual Synthetic Data Generation (CSDG) to generate data using a text-to-image diffusion model (T2I) with prompts generated by a context-aware LLM, ensuring both data diversity and control of bias in synthetic data. To resolve domain and bias shifts, we introduce a novel selective fine-tuning scheme in which only model parameters more sensitive to bias and less sensitive to domain shift are updated. Experiments on CelebA and UTKFace datasets show that our AIM-Fair improves model fairness while maintaining utility, outperforming both fully and partially fine-tuned approaches to model fairness.
no_new_dataset
0.953405
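The selective fine-tuning scheme in the AIM-Fair record, updating only parameters sensitive to bias and insensitive to domain shift, suggests a gradient-based selection rule. The scoring criterion below is a hypothetical reading, not the paper's concrete method:

```python
# Hypothetical selection rule for selective fine-tuning: keep a
# parameter trainable only if it is much more sensitive to the bias
# objective than to the domain objective.
import torch
import torch.nn as nn

def sensitivity(model, loss):
    model.zero_grad()
    loss.backward(retain_graph=True)
    return {n: p.grad.abs().mean().item()
            for n, p in model.named_parameters() if p.grad is not None}

def select_params(model, bias_loss, domain_loss, top_frac=0.25):
    s_bias = sensitivity(model, bias_loss)
    s_dom = sensitivity(model, domain_loss)
    ratio = {n: s_bias[n] / (s_dom[n] + 1e-8) for n in s_bias}
    cutoff = sorted(ratio.values())[int((1 - top_frac) * len(ratio))]
    for n, p in model.named_parameters():
        p.requires_grad = ratio.get(n, 0.0) >= cutoff

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))
x = torch.randn(8, 4)
bias_loss = model(x).var()         # stand-ins for the two objectives
domain_loss = model(x).mean() ** 2
select_params(model, bias_loss, domain_loss)
print([n for n, p in model.named_parameters() if p.requires_grad])
```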
2503.05678
Zhongyi Shui
Zhongyi Shui, Ruizhe Guo, Honglin Li, Yuxuan Sun, Yunlong Zhang, Chenglu Zhu, Jiatong Cai, Pingyi Chen, Yanzhou Su, Lin Yang
Towards Effective and Efficient Context-aware Nucleus Detection in Histopathology Whole Slide Images
under review
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Nucleus detection in histopathology whole slide images (WSIs) is crucial for a broad spectrum of clinical applications. Current approaches for nucleus detection in gigapixel WSIs utilize a sliding window methodology, which overlooks broader contextual information (e.g., tissue structure) and easily leads to inaccurate predictions. To address this problem, recent studies additionally crop a large Field-of-View (FoV) region around each sliding window to extract contextual features. However, such methods substantially increase the inference latency. In this paper, we propose an effective and efficient context-aware nucleus detection algorithm. Specifically, instead of leveraging large FoV regions, we aggregate contextual clues from off-the-shelf features of historically visited sliding windows. This design greatly reduces computational overhead. Moreover, compared to large FoV regions at a low magnification, the sliding window patches have higher magnification and provide finer-grained tissue details, thereby enhancing the detection accuracy. To further improve the efficiency, we propose a grid pooling technique to compress dense feature maps of each patch into a few contextual tokens. Finally, we craft OCELOT-seg, the first benchmark dedicated to context-aware nucleus instance segmentation. Code, dataset, and model checkpoints will be available at https://github.com/windygoo/PathContext.
[ { "version": "v1", "created": "Tue, 4 Mar 2025 02:01:53 GMT" } ]
2025-03-10T00:00:00
[ [ "Shui", "Zhongyi", "" ], [ "Guo", "Ruizhe", "" ], [ "Li", "Honglin", "" ], [ "Sun", "Yuxuan", "" ], [ "Zhang", "Yunlong", "" ], [ "Zhu", "Chenglu", "" ], [ "Cai", "Jiatong", "" ], [ "Chen", "Pingyi", "" ], [ "Su", "Yanzhou", "" ], [ "Yang", "Lin", "" ] ]
TITLE: Towards Effective and Efficient Context-aware Nucleus Detection in Histopathology Whole Slide Images ABSTRACT: Nucleus detection in histopathology whole slide images (WSIs) is crucial for a broad spectrum of clinical applications. Current approaches for nucleus detection in gigapixel WSIs utilize a sliding window methodology, which overlooks broader contextual information (e.g., tissue structure) and easily leads to inaccurate predictions. To address this problem, recent studies additionally crop a large Field-of-View (FoV) region around each sliding window to extract contextual features. However, such methods substantially increase the inference latency. In this paper, we propose an effective and efficient context-aware nucleus detection algorithm. Specifically, instead of leveraging large FoV regions, we aggregate contextual clues from off-the-shelf features of historically visited sliding windows. This design greatly reduces computational overhead. Moreover, compared to large FoV regions at a low magnification, the sliding window patches have higher magnification and provide finer-grained tissue details, thereby enhancing the detection accuracy. To further improve the efficiency, we propose a grid pooling technique to compress dense feature maps of each patch into a few contextual tokens. Finally, we craft OCELOT-seg, the first benchmark dedicated to context-aware nucleus instance segmentation. Code, dataset, and model checkpoints will be available at https://github.com/windygoo/PathContext.
new_dataset
0.739046
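The grid pooling step in the record above, compressing a dense patch feature map into a few contextual tokens, can be realized with adaptive pooling; the 4x4 grid size here is an assumed hyperparameter:

```python
# Grid pooling: compress a dense patch feature map into a small grid
# of contextual tokens (4x4 assumed).
import torch
import torch.nn.functional as F

feat = torch.randn(1, 256, 64, 64)            # (B, C, H, W) patch features
tokens = F.adaptive_avg_pool2d(feat, (4, 4))  # (B, 256, 4, 4)
tokens = tokens.flatten(2).transpose(1, 2)    # (B, 16, 256) context tokens
print(tokens.shape)
```

Tokens like these, cached from previously visited windows, are what later windows would attend to under the described scheme.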
2503.05684
Parameswaran Kamalaruban Dr.
Parameswaran Kamalaruban, Mark Anderson, Stuart Burrell, Maeve Madigan, Piotr Skalski, David Sutton
Fairness-Aware Low-Rank Adaptation Under Demographic Privacy Constraints
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Pre-trained foundation models can be adapted for specific tasks using Low-Rank Adaptation (LoRA). However, the fairness properties of these adapted classifiers remain underexplored. Existing fairness-aware fine-tuning methods rely on direct access to sensitive attributes or their predictors, but in practice, these sensitive attributes are often held under strict consumer privacy controls, and neither the attributes nor their predictors are available to model developers, hampering the development of fair models. To address this issue, we introduce a set of LoRA-based fine-tuning methods that can be trained in a distributed fashion, where model developers and fairness auditors collaborate without sharing sensitive attributes or predictors. In this paper, we evaluate three such methods - sensitive unlearning, adversarial training, and orthogonality loss - against a fairness-unaware baseline, using experiments on the CelebA and UTK-Face datasets with an ImageNet pre-trained ViT-Base model. We find that orthogonality loss consistently reduces bias while maintaining or improving utility, whereas adversarial training improves False Positive Rate Parity and Demographic Parity in some cases, and sensitive unlearning provides no clear benefit. In tasks where significant biases are present, distributed fairness-aware fine-tuning methods can effectively eliminate bias without compromising consumer privacy and, in most cases, improve model utility.
[ { "version": "v1", "created": "Fri, 7 Mar 2025 18:49:57 GMT" } ]
2025-03-10T00:00:00
[ [ "Kamalaruban", "Parameswaran", "" ], [ "Anderson", "Mark", "" ], [ "Burrell", "Stuart", "" ], [ "Madigan", "Maeve", "" ], [ "Skalski", "Piotr", "" ], [ "Sutton", "David", "" ] ]
TITLE: Fairness-Aware Low-Rank Adaptation Under Demographic Privacy Constraints ABSTRACT: Pre-trained foundation models can be adapted for specific tasks using Low-Rank Adaptation (LoRA). However, the fairness properties of these adapted classifiers remain underexplored. Existing fairness-aware fine-tuning methods rely on direct access to sensitive attributes or their predictors, but in practice, these sensitive attributes are often held under strict consumer privacy controls, and neither the attributes nor their predictors are available to model developers, hampering the development of fair models. To address this issue, we introduce a set of LoRA-based fine-tuning methods that can be trained in a distributed fashion, where model developers and fairness auditors collaborate without sharing sensitive attributes or predictors. In this paper, we evaluate three such methods - sensitive unlearning, adversarial training, and orthogonality loss - against a fairness-unaware baseline, using experiments on the CelebA and UTK-Face datasets with an ImageNet pre-trained ViT-Base model. We find that orthogonality loss consistently reduces bias while maintaining or improving utility, whereas adversarial training improves False Positive Rate Parity and Demographic Parity in some cases, and sensitive unlearning provides no clear benefit. In tasks where significant biases are present, distributed fairness-aware fine-tuning methods can effectively eliminate bias without compromising consumer privacy and, in most cases, improve model utility.
no_new_dataset
0.948155
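Of the three methods compared in the record above, the orthogonality loss admits a compact sketch under one plausible reading: push task embeddings orthogonal to a direction that predicts the auditor-held sensitive attribute. The formulation below is an assumption, not the paper's definition:

```python
# One plausible orthogonality loss: penalize alignment between task
# embeddings and a sensitive-attribute direction (an assumption).
import torch

def orthogonality_loss(z, w):
    """z: (B, D) embeddings; w: (D,) sensitive-attribute direction."""
    w = w / w.norm()
    return (z @ w).pow(2).mean()   # zero when every z is orthogonal to w

z = torch.randn(16, 32, requires_grad=True)
w = torch.randn(32)                # held by the fairness auditor
orthogonality_loss(z, w).backward()
print(z.grad.shape)                # gradients flow to the embeddings
```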
2206.02796
Xihong Yang
Xihong Yang, Yiqi Wang, Yue Liu, Yi Wen, Lingyuan Meng, Sihang Zhou, Xinwang Liu, En Zhu
Mixed Graph Contrastive Network for Semi-Supervised Node Classification
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) have achieved promising performance in semi-supervised node classification in recent years. However, the problem of insufficient supervision, together with representation collapse, largely limits the performance of GNNs in this field. To alleviate the collapse of node representations in the semi-supervised scenario, we propose a novel graph contrastive learning method, termed Mixed Graph Contrastive Network (MGCN). In our method, we improve the discriminative capability of the latent embeddings by an interpolation-based augmentation strategy and a correlation reduction mechanism. Specifically, we first conduct the interpolation-based augmentation in the latent space and then force the prediction model to change linearly between samples. Second, we enable the learned network to tell apart samples across two interpolation-perturbed views by forcing the correlation matrix across views to approximate an identity matrix. By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning. Extensive experimental results on six datasets demonstrate the effectiveness and the generality of MGCN compared to the existing state-of-the-art methods. The code of MGCN is available at https://github.com/xihongyang1999/MGCN on Github.
[ { "version": "v1", "created": "Mon, 6 Jun 2022 14:26:34 GMT" }, { "version": "v2", "created": "Thu, 16 Jun 2022 14:25:36 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 09:10:18 GMT" } ]
2025-03-07T00:00:00
[ [ "Yang", "Xihong", "" ], [ "Wang", "Yiqi", "" ], [ "Liu", "Yue", "" ], [ "Wen", "Yi", "" ], [ "Meng", "Lingyuan", "" ], [ "Zhou", "Sihang", "" ], [ "Liu", "Xinwang", "" ], [ "Zhu", "En", "" ] ]
TITLE: Mixed Graph Contrastive Network for Semi-Supervised Node Classification ABSTRACT: Graph Neural Networks (GNNs) have achieved promising performance in semi-supervised node classification in recent years. However, the problem of insufficient supervision, together with representation collapse, largely limits the performance of GNNs in this field. To alleviate the collapse of node representations in the semi-supervised scenario, we propose a novel graph contrastive learning method, termed Mixed Graph Contrastive Network (MGCN). In our method, we improve the discriminative capability of the latent embeddings by an interpolation-based augmentation strategy and a correlation reduction mechanism. Specifically, we first conduct the interpolation-based augmentation in the latent space and then force the prediction model to change linearly between samples. Second, we enable the learned network to tell apart samples across two interpolation-perturbed views by forcing the correlation matrix across views to approximate an identity matrix. By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning. Extensive experimental results on six datasets demonstrate the effectiveness and the generality of MGCN compared to the existing state-of-the-art methods. The code of MGCN is available at https://github.com/xihongyang1999/MGCN on Github.
no_new_dataset
0.950411
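The two MGCN ingredients, interpolation-based augmentation in the latent space and cross-view correlation reduction toward an identity matrix, can be sketched as generic tensor operations; the normalization details follow common practice rather than the paper:

```python
# Generic stand-ins for the two MGCN ingredients.
import torch

def latent_mixup(h1, h2, alpha=0.2):
    """Interpolation-based augmentation in the latent space."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * h1 + (1 - lam) * h2

def correlation_reduction_loss(h1, h2):
    """Push the cross-view correlation matrix toward the identity."""
    h1 = (h1 - h1.mean(0)) / (h1.std(0) + 1e-8)
    h2 = (h2 - h2.mean(0)) / (h2.std(0) + 1e-8)
    c = (h1.T @ h2) / h1.shape[0]             # (D, D) cross-correlation
    return ((c - torch.eye(c.shape[0])) ** 2).sum()

h1, h2 = torch.randn(64, 16), torch.randn(64, 16)
print(latent_mixup(h1, h2).shape, correlation_reduction_loss(h1, h2).item())
```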
2210.06746
Hao Cui
Hao Cui, Rahmadi Trimananda, Scott Jordan, Athina Markopoulou
PoliGraph: Automated Privacy Policy Analysis using Knowledge Graphs (Journal Version)
46 pages, 15 figures (including subfigures). This is the 2025 updated journal version. For the conference version published at USENIX Security '23, see arXiv:2210.06746v2
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Privacy policies disclose how an organization collects and handles personal information. Recent work has made progress in leveraging natural language processing (NLP) to automate privacy policy analysis and extract data collection statements from different sentences, considered in isolation from each other. In this paper, we view and analyze, for the first time, the entire text of a privacy policy in an integrated way. In terms of methodology: (1) we define PoliGraph, a type of knowledge graph that captures statements in a policy as relations between different parts of the text; and (2) we revisit the notion of ontologies, previously defined in heuristic ways, to capture subsumption relations between terms. We make a clear distinction between local and global ontologies to capture the context of individual policies, application domains, and privacy laws. We develop PoliGrapher, an NLP tool to automatically extract PoliGraph from the text using linguistic analysis. Using a public dataset for evaluation, we show that PoliGrapher identifies 40% more collection statements than prior state-of-the-art, with 97% precision. In terms of applications, PoliGraph enables automated analysis of a corpus of policies and allows us to: (1) reveal common patterns in the texts across different policies, and (2) assess the correctness of the terms as defined within a policy. We also apply PoliGraph to: (3) detect contradictions in a policy, where we show false alarms by prior work, and (4) analyze the consistency of policies and network traffic, where we identify significantly more clear disclosures than prior work. Finally, leveraging the capabilities of the emerging large language models (LLMs), we also present PoliGrapher-LM, a tool that uses LLM prompting instead of NLP linguistic analysis, to extract PoliGraph from the policy text, and we show that it further improves coverage.
[ { "version": "v1", "created": "Thu, 13 Oct 2022 05:16:22 GMT" }, { "version": "v2", "created": "Tue, 20 Jun 2023 19:45:23 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 06:47:40 GMT" } ]
2025-03-07T00:00:00
[ [ "Cui", "Hao", "" ], [ "Trimananda", "Rahmadi", "" ], [ "Jordan", "Scott", "" ], [ "Markopoulou", "Athina", "" ] ]
TITLE: PoliGraph: Automated Privacy Policy Analysis using Knowledge Graphs (Journal Version) ABSTRACT: Privacy policies disclose how an organization collects and handles personal information. Recent work has made progress in leveraging natural language processing (NLP) to automate privacy policy analysis and extract data collection statements from different sentences, considered in isolation from each other. In this paper, we view and analyze, for the first time, the entire text of a privacy policy in an integrated way. In terms of methodology: (1) we define PoliGraph, a type of knowledge graph that captures statements in a policy as relations between different parts of the text; and (2) we revisit the notion of ontologies, previously defined in heuristic ways, to capture subsumption relations between terms. We make a clear distinction between local and global ontologies to capture the context of individual policies, application domains, and privacy laws. We develop PoliGrapher, an NLP tool to automatically extract PoliGraph from the text using linguistic analysis. Using a public dataset for evaluation, we show that PoliGrapher identifies 40% more collection statements than prior state-of-the-art, with 97% precision. In terms of applications, PoliGraph enables automated analysis of a corpus of policies and allows us to: (1) reveal common patterns in the texts across different policies, and (2) assess the correctness of the terms as defined within a policy. We also apply PoliGraph to: (3) detect contradictions in a policy, where we show false alarms by prior work, and (4) analyze the consistency of policies and network traffic, where we identify significantly more clear disclosures than prior work. Finally, leveraging the capabilities of the emerging large language models (LLMs), we also present PoliGrapher-LM, a tool that uses LLM prompting instead of NLP linguistic analysis, to extract PoliGraph from the policy text, and we show that it further improves coverage.
no_new_dataset
0.951863
2306.00723
Lakmal Meegahapola
Wageesha Bangamuarachchi and Anju Chamantha and Lakmal Meegahapola and Haeeun Kim and Salvador Ruiz-Correa and Indika Perera and Daniel Gatica-Perez
Inferring Mood-While-Eating with Smartphone Sensing and Community-Based Model Personalization
ACM Transactions on Computing for Healthcare (HEALTH)
null
null
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The interplay between mood and eating episodes has been extensively researched, revealing a connection between the two. Previous studies have relied on questionnaires and mobile phone self-reports to investigate the relationship between mood and eating. However, current literature exhibits several limitations: a lack of investigation into the generalization of mood inference models trained with data from various everyday life situations to specific contexts like eating; an absence of studies using sensor data to explore the intersection of mood and eating; and inadequate examination of model personalization techniques within limited label settings, a common challenge in mood inference (i.e., far fewer negative mood reports compared to positive or neutral reports). In this study, we sought to examine everyday eating behavior and mood using two datasets of college students in Mexico (N_mex = 84, 1843 mood-while-eating reports) and eight countries (N_mul = 678, 24K mood-while-eating reports), which contain both passive smartphone sensing and self-report data. Our results indicate that generic mood inference models experience a decline in performance in specific contexts, such as during eating, highlighting the issue of sub-context shifts in mobile sensing. Moreover, we discovered that population-level (non-personalized) and hybrid (partially personalized) modeling techniques fall short in the commonly used three-class mood inference task (positive, neutral, negative). To overcome these limitations, we implemented a novel community-based personalization approach. Our findings demonstrate that mood-while-eating can be inferred with accuracies of 63.8% (F1-score of 62.5) on the MEX dataset and 88.3% (F1-score of 85.7) on the MUL dataset using community-based models, surpassing those achieved with traditional methods.
[ { "version": "v1", "created": "Thu, 1 Jun 2023 14:24:10 GMT" }, { "version": "v2", "created": "Sat, 9 Dec 2023 21:58:40 GMT" }, { "version": "v3", "created": "Wed, 5 Mar 2025 21:05:04 GMT" } ]
2025-03-07T00:00:00
[ [ "Bangamuarachchi", "Wageesha", "" ], [ "Chamantha", "Anju", "" ], [ "Meegahapola", "Lakmal", "" ], [ "Kim", "Haeeun", "" ], [ "Ruiz-Correa", "Salvador", "" ], [ "Perera", "Indika", "" ], [ "Gatica-Perez", "Daniel", "" ] ]
TITLE: Inferring Mood-While-Eating with Smartphone Sensing and Community-Based Model Personalization ABSTRACT: The interplay between mood and eating episodes has been extensively researched, revealing a connection between the two. Previous studies have relied on questionnaires and mobile phone self-reports to investigate the relationship between mood and eating. However, current literature exhibits several limitations: a lack of investigation into the generalization of mood inference models trained with data from various everyday life situations to specific contexts like eating; an absence of studies using sensor data to explore the intersection of mood and eating; and inadequate examination of model personalization techniques within limited label settings, a common challenge in mood inference (i.e., far fewer negative mood reports compared to positive or neutral reports). In this study, we sought to examine everyday eating behavior and mood using two datasets of college students in Mexico (N_mex = 84, 1843 mood-while-eating reports) and eight countries (N_mul = 678, 24K mood-while-eating reports), which contain both passive smartphone sensing and self-report data. Our results indicate that generic mood inference models experience a decline in performance in specific contexts, such as during eating, highlighting the issue of sub-context shifts in mobile sensing. Moreover, we discovered that population-level (non-personalized) and hybrid (partially personalized) modeling techniques fall short in the commonly used three-class mood inference task (positive, neutral, negative). To overcome these limitations, we implemented a novel community-based personalization approach. Our findings demonstrate that mood-while-eating can be inferred with accuracies of 63.8% (F1-score of 62.5) on the MEX dataset and 88.3% (F1-score of 85.7) on the MUL dataset using community-based models, surpassing those achieved with traditional methods.
no_new_dataset
0.937268
2307.04014
Amirhossein Askari Farsangi
Amirhossein Askari Farsangi, Ali Sharifi-Zarchi, Mohammad Hossein Rohban
Novel Pipeline for Diagnosing Acute Lymphoblastic Leukemia Sensitive to Related Biomarkers
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Acute Lymphoblastic Leukemia (ALL) is one of the most common types of childhood blood cancer. The quick start of the treatment process is critical to saving the patient's life, and for this reason, early diagnosis of this disease is essential. Examining the blood smear images of these patients is one of the methods used by expert doctors to diagnose this disease. Deep learning-based methods have numerous applications in medical fields, as they have significantly advanced in recent years. ALL diagnosis is not an exception in this field, and several machine learning-based methods for this problem have been proposed. In previous methods, high diagnostic accuracy was reported, but our work showed that this alone is not sufficient, as it can lead to models taking shortcuts and not making meaningful decisions. This issue arises due to the small size of medical training datasets. To address this, we constrained our model to follow a pipeline inspired by experts' work. We also demonstrated that, since a judgement based on only one image is insufficient, redefining the problem as a multiple-instance learning problem is necessary for achieving a practical result. Our model is the first to provide a solution to this problem in a multiple-instance learning setup. We introduced a novel pipeline for diagnosing ALL that approximates the process used by hematologists, is sensitive to disease biomarkers, and achieves an accuracy of 96.15%, an F1-score of 94.24%, a sensitivity of 97.56%, and a specificity of 90.91% on ALL IDB 1. Our method was further evaluated on an out-of-distribution dataset, which posed a challenging test and had acceptable performance. Notably, our model was trained on a relatively small dataset, highlighting the potential for our approach to be applied to other medical datasets with limited data availability.
[ { "version": "v1", "created": "Sat, 8 Jul 2023 16:46:16 GMT" }, { "version": "v2", "created": "Tue, 11 Jul 2023 11:16:17 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 06:55:15 GMT" } ]
2025-03-07T00:00:00
[ [ "Farsangi", "Amirhossein Askari", "" ], [ "Sharifi-Zarchi", "Ali", "" ], [ "Rohban", "Mohammad Hossein", "" ] ]
TITLE: Novel Pipeline for Diagnosing Acute Lymphoblastic Leukemia Sensitive to Related Biomarkers ABSTRACT: Acute Lymphoblastic Leukemia (ALL) is one of the most common types of childhood blood cancer. The quick start of the treatment process is critical to saving the patient's life, and for this reason, early diagnosis of this disease is essential. Examining the blood smear images of these patients is one of the methods used by expert doctors to diagnose this disease. Deep learning-based methods have numerous applications in medical fields, as they have significantly advanced in recent years. ALL diagnosis is not an exception in this field, and several machine learning-based methods for this problem have been proposed. In previous methods, high diagnostic accuracy was reported, but our work showed that this alone is not sufficient, as it can lead to models taking shortcuts and not making meaningful decisions. This issue arises due to the small size of medical training datasets. To address this, we constrained our model to follow a pipeline inspired by experts' work. We also demonstrated that, since a judgement based on only one image is insufficient, redefining the problem as a multiple-instance learning problem is necessary for achieving a practical result. Our model is the first to provide a solution to this problem in a multiple-instance learning setup. We introduced a novel pipeline for diagnosing ALL that approximates the process used by hematologists, is sensitive to disease biomarkers, and achieves an accuracy of 96.15%, an F1-score of 94.24%, a sensitivity of 97.56%, and a specificity of 90.91% on ALL IDB 1. Our method was further evaluated on an out-of-distribution dataset, which posed a challenging test, and achieved acceptable performance. Notably, our model was trained on a relatively small dataset, highlighting the potential for our approach to be applied to other medical datasets with limited data availability.
no_new_dataset
0.945248
2307.15421
Wei Jiang
Wei Jiang, Jiayu Yang, Yongqi Zhai, Feng Gao, Ronggang Wang
MLIC++: Linear Complexity Multi-Reference Entropy Modeling for Learned Image Compression
Accepted to ICML 2023 Neural Compression Workshop and ACM Transactions on Multimedia Computing, Communications, and Applications 2025
null
10.1145/3719011
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The latent representation in learned image compression encompasses channel-wise, local spatial, and global spatial correlations, which are essential for the entropy model to capture for conditional entropy minimization. Efficiently capturing these contexts within a single entropy model, especially in high-resolution image coding, presents a challenge due to the computational complexity of existing global context modules. To address this challenge, we propose the Linear Complexity Multi-Reference Entropy Model (MEM$^{++}$). Specifically, the latent representation is partitioned into multiple slices. For channel-wise contexts, previously compressed slices serve as the context for compressing a particular slice. For local contexts, we introduce a shifted-window-based checkerboard attention module. This module ensures linear complexity without sacrificing performance. For global contexts, we propose a linear complexity attention mechanism. It captures global correlations by decomposing the softmax operation, enabling the implicit computation of attention maps from previously decoded slices. Using MEM$^{++}$ as the entropy model, we develop the image compression method MLIC$^{++}$. Extensive experimental results demonstrate that MLIC$^{++}$ achieves state-of-the-art performance, reducing BD-rate by $13.39\%$ on the Kodak dataset compared to VTM-17.0 in Peak Signal-to-Noise Ratio (PSNR). Furthermore, MLIC$^{++}$ exhibits linear computational complexity and memory consumption with resolution, making it highly suitable for high-resolution image coding. Code and pre-trained models are available at https://github.com/JiangWeibeta/MLIC. Training dataset is available at https://huggingface.co/datasets/Whiteboat/MLIC-Train-100K.
[ { "version": "v1", "created": "Fri, 28 Jul 2023 09:11:37 GMT" }, { "version": "v2", "created": "Sun, 3 Sep 2023 09:01:43 GMT" }, { "version": "v3", "created": "Mon, 30 Oct 2023 05:56:08 GMT" }, { "version": "v4", "created": "Mon, 18 Dec 2023 09:02:57 GMT" }, { "version": "v5", "created": "Sun, 7 Jan 2024 03:52:03 GMT" }, { "version": "v6", "created": "Tue, 16 Jan 2024 15:15:49 GMT" }, { "version": "v7", "created": "Sat, 3 Feb 2024 09:12:10 GMT" }, { "version": "v8", "created": "Wed, 14 Feb 2024 11:13:49 GMT" }, { "version": "v9", "created": "Tue, 20 Feb 2024 03:25:43 GMT" }, { "version": "v10", "created": "Sat, 8 Feb 2025 08:12:31 GMT" }, { "version": "v11", "created": "Mon, 17 Feb 2025 08:41:30 GMT" } ]
2025-03-07T00:00:00
[ [ "Jiang", "Wei", "" ], [ "Yang", "Jiayu", "" ], [ "Zhai", "Yongqi", "" ], [ "Gao", "Feng", "" ], [ "Wang", "Ronggang", "" ] ]
TITLE: MLIC++: Linear Complexity Multi-Reference Entropy Modeling for Learned Image Compression ABSTRACT: The latent representation in learned image compression encompasses channel-wise, local spatial, and global spatial correlations, which are essential for the entropy model to capture for conditional entropy minimization. Efficiently capturing these contexts within a single entropy model, especially in high-resolution image coding, presents a challenge due to the computational complexity of existing global context modules. To address this challenge, we propose the Linear Complexity Multi-Reference Entropy Model (MEM$^{++}$). Specifically, the latent representation is partitioned into multiple slices. For channel-wise contexts, previously compressed slices serve as the context for compressing a particular slice. For local contexts, we introduce a shifted-window-based checkerboard attention module. This module ensures linear complexity without sacrificing performance. For global contexts, we propose a linear complexity attention mechanism. It captures global correlations by decomposing the softmax operation, enabling the implicit computation of attention maps from previously decoded slices. Using MEM$^{++}$ as the entropy model, we develop the image compression method MLIC$^{++}$. Extensive experimental results demonstrate that MLIC$^{++}$ achieves state-of-the-art performance, reducing BD-rate by $13.39\%$ on the Kodak dataset compared to VTM-17.0 in Peak Signal-to-Noise Ratio (PSNR). Furthermore, MLIC$^{++}$ exhibits linear computational complexity and memory consumption with resolution, making it highly suitable for high-resolution image coding. Code and pre-trained models are available at https://github.com/JiangWeibeta/MLIC. Training dataset is available at https://huggingface.co/datasets/Whiteboat/MLIC-Train-100K.
no_new_dataset
0.950824
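The MLIC++ entry above turns on replacing softmax attention with a decomposed, linear-complexity variant. As a rough, hedged sketch of how such a decomposition reduces the cost from O(N^2 d) to linear in sequence length, here is generic kernelized linear attention in PyTorch; the `elu + 1` feature map and all names are assumptions for illustration, not the exact MEM$^{++}$ module, and the causal masking over previously decoded slices is omitted.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Generic linear attention: softmax(QK^T)V is approximated by
    phi(Q) @ (phi(K)^T @ V), computed right-to-left so the cost is
    linear in sequence length N. Shapes: q, k are (B, N, d); v is (B, N, e)."""
    phi_q = F.elu(q) + 1.0  # positive feature map (an illustrative choice)
    phi_k = F.elu(k) + 1.0
    kv = torch.einsum("bnd,bne->bde", phi_k, v)                    # (B, d, e)
    norm = torch.einsum("bnd,bd->bn", phi_q, phi_k.sum(dim=1)) + eps
    return torch.einsum("bnd,bde->bne", phi_q, kv) / norm.unsqueeze(-1)
```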
2309.00889
Alper Ahmetoglu
Alper Ahmetoglu, Batuhan Celik, Erhan Oztop, Emre Ugur
Discovering Predictive Relational Object Symbols with Symbolic Attentive Layers
arXiv admin note: text overlap with arXiv:2208.01021
null
10.1109/LRA.2024.3350994
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose and realize a new deep learning architecture for discovering symbolic representations for objects and their relations based on the self-supervised continuous interaction of a manipulator robot with multiple objects in a tabletop environment. The key feature of the model is that it can handle a changing number of objects naturally and map the object-object relations into the symbolic domain explicitly. In the model, we employ a self-attention layer that computes discrete attention weights from object features, which are treated as relational symbols between objects. These relational symbols are then used to aggregate the learned object symbols and predict the effects of executed actions on each object. The result is a pipeline that allows the formation of object symbols and relational symbols from a dataset of object features, actions, and effects in an end-to-end manner. We compare the performance of our proposed architecture with state-of-the-art symbol discovery methods in a simulated tabletop environment where the robot needs to discover symbols related to the relative positions of objects to predict the observed effect successfully. Our experiments show that the proposed architecture performs better than other baselines in effect prediction while forming not only object symbols but also relational symbols. Furthermore, we analyze the learned symbols and relational patterns between objects to learn about how the model interprets the environment. Our analysis shows that the learned symbols relate to the relative positions of objects, object types, and their horizontal alignment on the table, which reflect the regularities in the environment.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 10:06:10 GMT" } ]
2025-03-07T00:00:00
[ [ "Ahmetoglu", "Alper", "" ], [ "Celik", "Batuhan", "" ], [ "Oztop", "Erhan", "" ], [ "Ugur", "Emre", "" ] ]
TITLE: Discovering Predictive Relational Object Symbols with Symbolic Attentive Layers ABSTRACT: In this paper, we propose and realize a new deep learning architecture for discovering symbolic representations for objects and their relations based on the self-supervised continuous interaction of a manipulator robot with multiple objects in a tabletop environment. The key feature of the model is that it can handle a changing number of objects naturally and map the object-object relations into the symbolic domain explicitly. In the model, we employ a self-attention layer that computes discrete attention weights from object features, which are treated as relational symbols between objects. These relational symbols are then used to aggregate the learned object symbols and predict the effects of executed actions on each object. The result is a pipeline that allows the formation of object symbols and relational symbols from a dataset of object features, actions, and effects in an end-to-end manner. We compare the performance of our proposed architecture with state-of-the-art symbol discovery methods in a simulated tabletop environment where the robot needs to discover symbols related to the relative positions of objects to predict the observed effect successfully. Our experiments show that the proposed architecture performs better than other baselines in effect prediction while forming not only object symbols but also relational symbols. Furthermore, we analyze the learned symbols and relational patterns between objects to learn about how the model interprets the environment. Our analysis shows that the learned symbols relate to the relative positions of objects, object types, and their horizontal alignment on the table, which reflect the regularities in the environment.
no_new_dataset
0.949529
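The entry above rests on an attention layer whose discrete weights serve as relational symbols. A straight-through Gumbel-softmax is one standard way to obtain discrete yet differentiable attention; the sketch below illustrates that mechanism under stated assumptions and is not the paper's exact layer.

```python
import torch
import torch.nn.functional as F

class SymbolicAttention(torch.nn.Module):
    """Toy attention whose hard one-hot weights can be read off as binary
    relational symbols between objects (hypothetical layer for illustration)."""

    def __init__(self, feat_dim: int, key_dim: int = 32):
        super().__init__()
        self.to_q = torch.nn.Linear(feat_dim, key_dim)
        self.to_k = torch.nn.Linear(feat_dim, key_dim)

    def forward(self, obj_feats: torch.Tensor, tau: float = 1.0):
        # obj_feats: (B, num_objects, feat_dim); logits: (B, N, N)
        logits = self.to_q(obj_feats) @ self.to_k(obj_feats).transpose(1, 2)
        # Hard one-hot samples in the forward pass, soft gradients backward;
        # the resulting (B, N, N) tensor is a discrete object-object relation matrix.
        relations = F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)
        # Aggregate object features over the discovered relations, as the
        # abstract describes, to predict per-object action effects.
        return relations @ obj_feats, relations
```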
2311.07978
David Adelani
Jessica Ojo, Odunayo Ogundepo, Akintunde Oladipo, Kelechi Ogueji, Jimmy Lin, Pontus Stenetorp, David Ifeoluwa Adelani
AfroBench: How Good are Large Language Models on African Languages?
Under review
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Large-scale multilingual evaluations, such as MEGA, often include only a handful of African languages due to the scarcity of high-quality evaluation data and the limited discoverability of existing African datasets. This lack of representation hinders comprehensive LLM evaluation across a diverse range of languages and tasks. To address these challenges, we introduce AfroBench -- a multi-task benchmark for evaluating the performance of LLMs across 64 African languages, 15 tasks and 22 datasets. AfroBench consists of nine natural language understanding datasets, six text generation datasets, six knowledge and question answering tasks, and one mathematical reasoning task. We present results comparing the performance of prompting LLMs to fine-tuned baselines based on BERT and T5-style models. Our results suggest large gaps in performance between high-resource languages, such as English, and African languages across most tasks; but performance also varies based on the availability of monolingual data resources. Our findings confirm that performance on African languages continues to remain a hurdle for current LLMs, underscoring the need for additional efforts to close this gap. https://mcgill-nlp.github.io/AfroBench/
[ { "version": "v1", "created": "Tue, 14 Nov 2023 08:10:14 GMT" }, { "version": "v2", "created": "Tue, 30 Apr 2024 16:04:16 GMT" }, { "version": "v3", "created": "Wed, 26 Feb 2025 15:16:47 GMT" }, { "version": "v4", "created": "Thu, 6 Mar 2025 13:29:24 GMT" } ]
2025-03-07T00:00:00
[ [ "Ojo", "Jessica", "" ], [ "Ogundepo", "Odunayo", "" ], [ "Oladipo", "Akintunde", "" ], [ "Ogueji", "Kelechi", "" ], [ "Lin", "Jimmy", "" ], [ "Stenetorp", "Pontus", "" ], [ "Adelani", "David Ifeoluwa", "" ] ]
TITLE: AfroBench: How Good are Large Language Models on African Languages? ABSTRACT: Large-scale multilingual evaluations, such as MEGA, often include only a handful of African languages due to the scarcity of high-quality evaluation data and the limited discoverability of existing African datasets. This lack of representation hinders comprehensive LLM evaluation across a diverse range of languages and tasks. To address these challenges, we introduce AfroBench -- a multi-task benchmark for evaluating the performance of LLMs across 64 African languages, 15 tasks and 22 datasets. AfroBench consists of nine natural language understanding datasets, six text generation datasets, six knowledge and question answering tasks, and one mathematical reasoning task. We present results comparing the performance of prompting LLMs to fine-tuned baselines based on BERT and T5-style models. Our results suggest large gaps in performance between high-resource languages, such as English, and African languages across most tasks; but performance also varies based on the availability of monolingual data resources. Our findings confirm that performance on African languages continues to remain a hurdle for current LLMs, underscoring the need for additional efforts to close this gap. https://mcgill-nlp.github.io/AfroBench/
new_dataset
0.969928
2312.03286
Hongsin Lee
Hongsin Lee, Seungju Cho, Changick Kim
Indirect Gradient Matching for Adversarial Robust Distillation
ICLR 2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial training significantly improves adversarial robustness, but superior performance is primarily attained with large models. This substantial performance gap for smaller models has spurred active research into adversarial distillation (AD) to mitigate the difference. Existing AD methods leverage the teacher's logits as a guide. In contrast to these approaches, we aim to transfer another piece of knowledge from the teacher, the input gradient. In this paper, we propose a distillation module termed Indirect Gradient Distillation Module (IGDM) that indirectly matches the student's input gradient with that of the teacher. Experimental results show that IGDM seamlessly integrates with existing AD methods, significantly enhancing their performance. Particularly, utilizing IGDM on the CIFAR-100 dataset improves the AutoAttack accuracy from 28.06% to 30.32% with the ResNet-18 architecture and from 26.18% to 29.32% with the MobileNetV2 architecture when integrated into the SOTA method without additional data augmentation.
[ { "version": "v1", "created": "Wed, 6 Dec 2023 04:32:38 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 10:12:31 GMT" } ]
2025-03-07T00:00:00
[ [ "Lee", "Hongsin", "" ], [ "Cho", "Seungju", "" ], [ "Kim", "Changick", "" ] ]
TITLE: Indirect Gradient Matching for Adversarial Robust Distillation ABSTRACT: Adversarial training significantly improves adversarial robustness, but superior performance is primarily attained with large models. This substantial performance gap for smaller models has spurred active research into adversarial distillation (AD) to mitigate the difference. Existing AD methods leverage the teacher's logits as a guide. In contrast to these approaches, we aim to transfer another piece of knowledge from the teacher, the input gradient. In this paper, we propose a distillation module termed Indirect Gradient Distillation Module (IGDM) that indirectly matches the student's input gradient with that of the teacher. Experimental results show that IGDM seamlessly integrates with existing AD methods, significantly enhancing their performance. Particularly, utilizing IGDM on the CIFAR-100 dataset improves the AutoAttack accuracy from 28.06% to 30.32% with the ResNet-18 architecture and from 26.18% to 29.32% with the MobileNetV2 architecture when integrated into the SOTA method without additional data augmentation.
no_new_dataset
0.948917
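The IGDM entry matches the student's input gradient to the teacher's indirectly. One way to make that concrete: by a first-order Taylor expansion, f(x_adv) - f(x) ≈ ∇_x f(x) · (x_adv - x), so aligning output differences approximately aligns input gradients without second-order backpropagation. The sketch below illustrates that identity; the function name, loss form, and weighting are assumptions, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def igdm_style_loss(student, teacher, x_clean, x_adv):
    """Align student/teacher output *differences* between clean and
    perturbed inputs; by Taylor expansion this indirectly matches their
    input gradients. (Hypothetical helper; distance choice is illustrative.)"""
    with torch.no_grad():
        t_diff = teacher(x_adv) - teacher(x_clean)
    s_diff = student(x_adv) - student(x_clean)
    return F.mse_loss(s_diff, t_diff)
```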
2401.13097
Michelle Greene
Michelle R. Greene, Mariam Josyula, Wentao Si and Jennifer A. Hart
Digital Divides in Scene Recognition: Uncovering Socioeconomic Biases in Deep Learning Systems
28 pages, 3 figures, 6 tables
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Computer-based scene understanding has influenced fields ranging from urban planning to autonomous vehicle performance, yet little is known about how well these technologies work across social differences. We investigate the biases of deep convolutional neural networks (dCNNs) in scene classification, using nearly one million images from global and US sources, including user-submitted home photographs and Airbnb listings. We applied statistical models to quantify the impact of socioeconomic indicators such as family income, Human Development Index (HDI), and demographic factors from public data sources (CIA and US Census) on dCNN performance. Our analyses revealed significant socioeconomic bias, where pretrained dCNNs demonstrated lower classification accuracy, lower classification confidence, and a higher tendency to assign labels that could be offensive when applied to homes (e.g., "ruin", "slum"), especially in images from homes with lower socioeconomic status (SES). This trend is consistent across two datasets of international images and within the diverse economic and racial landscapes of the United States. This research contributes to understanding biases in computer vision, emphasizing the need for more inclusive and representative training datasets. By mitigating the bias in the computer vision pipelines, we can ensure fairer and more equitable outcomes for applied computer vision, including home valuation and smart home security systems. There is urgency in addressing these biases, which can significantly impact critical decisions in urban development and resource allocation. Our findings also motivate the development of AI systems that better understand and serve diverse communities, moving towards technology that equitably benefits all sectors of society.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 21:22:06 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2025 21:31:31 GMT" } ]
2025-03-07T00:00:00
[ [ "Greene", "Michelle R.", "" ], [ "Josyula", "Mariam", "" ], [ "Si", "Wentao", "" ], [ "Hart", "Jennifer A.", "" ] ]
TITLE: Digital Divides in Scene Recognition: Uncovering Socioeconomic Biases in Deep Learning Systems ABSTRACT: Computer-based scene understanding has influenced fields ranging from urban planning to autonomous vehicle performance, yet little is known about how well these technologies work across social differences. We investigate the biases of deep convolutional neural networks (dCNNs) in scene classification, using nearly one million images from global and US sources, including user-submitted home photographs and Airbnb listings. We applied statistical models to quantify the impact of socioeconomic indicators such as family income, Human Development Index (HDI), and demographic factors from public data sources (CIA and US Census) on dCNN performance. Our analyses revealed significant socioeconomic bias, where pretrained dCNNs demonstrated lower classification accuracy, lower classification confidence, and a higher tendency to assign labels that could be offensive when applied to homes (e.g., "ruin", "slum"), especially in images from homes with lower socioeconomic status (SES). This trend is consistent across two datasets of international images and within the diverse economic and racial landscapes of the United States. This research contributes to understanding biases in computer vision, emphasizing the need for more inclusive and representative training datasets. By mitigating the bias in the computer vision pipelines, we can ensure fairer and more equitable outcomes for applied computer vision, including home valuation and smart home security systems. There is urgency in addressing these biases, which can significantly impact critical decisions in urban development and resource allocation. Our findings also motivate the development of AI systems that better understand and serve diverse communities, moving towards technology that equitably benefits all sectors of society.
no_new_dataset
0.947672
2401.13898
Huy Le Quang
Huy Q. Le, Chu Myaet Thwal, Yu Qiao, Ye Lin Tun, Minh N. H. Nguyen, Eui-Nam Huh and Choong Seon Hong
Cross-Modal Prototype based Multimodal Federated Learning under Severely Missing Modality
14 pages, 8 figures, 11 tables
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal federated learning (MFL) has emerged as a decentralized machine learning paradigm, allowing multiple clients with different modalities to collaborate on training a global model across diverse data sources without sharing their private data. However, challenges such as data heterogeneity and severely missing modalities pose crucial hindrances to the robustness of MFL, significantly impacting the performance of the global model. The occurrence of missing modalities in real-world applications, such as autonomous driving, often arises from factors like sensor failures, leading to knowledge gaps during the training process. Specifically, the absence of a modality introduces misalignment during the local training phase, stemming from zero-filling in the case of clients with missing modalities. Consequently, achieving robust generalization in the global model becomes imperative, especially when dealing with clients that have incomplete data. In this paper, we propose $\textbf{Multimodal Federated Cross Prototype Learning (MFCPL)}$, a novel approach for MFL under severely missing modalities. Our MFCPL leverages complete prototypes to provide diverse modality knowledge at the modality-shared level with cross-modal regularization and at the modality-specific level with a cross-modal contrastive mechanism. Additionally, our approach introduces cross-modal alignment to provide regularization for modality-specific features, thereby enhancing overall performance, particularly in scenarios involving severely missing modalities. Through extensive experiments on three multimodal datasets, we demonstrate the effectiveness of MFCPL in mitigating the challenges of data heterogeneity and severely missing modalities while improving the overall performance and robustness of MFL.
[ { "version": "v1", "created": "Thu, 25 Jan 2024 02:25:23 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 11:38:00 GMT" } ]
2025-03-07T00:00:00
[ [ "Le", "Huy Q.", "" ], [ "Thwal", "Chu Myaet", "" ], [ "Qiao", "Yu", "" ], [ "Tun", "Ye Lin", "" ], [ "Nguyen", "Minh N. H.", "" ], [ "Huh", "Eui-Nam", "" ], [ "Hong", "Choong Seon", "" ] ]
TITLE: Cross-Modal Prototype based Multimodal Federated Learning under Severely Missing Modality ABSTRACT: Multimodal federated learning (MFL) has emerged as a decentralized machine learning paradigm, allowing multiple clients with different modalities to collaborate on training a global model across diverse data sources without sharing their private data. However, challenges such as data heterogeneity and severely missing modalities pose crucial hindrances to the robustness of MFL, significantly impacting the performance of the global model. The occurrence of missing modalities in real-world applications, such as autonomous driving, often arises from factors like sensor failures, leading to knowledge gaps during the training process. Specifically, the absence of a modality introduces misalignment during the local training phase, stemming from zero-filling in the case of clients with missing modalities. Consequently, achieving robust generalization in the global model becomes imperative, especially when dealing with clients that have incomplete data. In this paper, we propose $\textbf{Multimodal Federated Cross Prototype Learning (MFCPL)}$, a novel approach for MFL under severely missing modalities. Our MFCPL leverages complete prototypes to provide diverse modality knowledge at the modality-shared level with cross-modal regularization and at the modality-specific level with a cross-modal contrastive mechanism. Additionally, our approach introduces cross-modal alignment to provide regularization for modality-specific features, thereby enhancing overall performance, particularly in scenarios involving severely missing modalities. Through extensive experiments on three multimodal datasets, we demonstrate the effectiveness of MFCPL in mitigating the challenges of data heterogeneity and severely missing modalities while improving the overall performance and robustness of MFL.
no_new_dataset
0.951908
2402.01879
Antonio Emanuele Cinà
Antonio Emanuele Cinà, Francesco Villani, Maura Pintor, Lea Schönherr, Battista Biggio, and Marcello Pelillo
$\sigma$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples
Paper accepted at International Conference on Learning Representations (ICLR 2025). Code available at https://github.com/sigma0-advx/sigma-zero
null
null
null
cs.LG cs.CR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evaluating the adversarial robustness of deep networks to gradient-based attacks is challenging. While most attacks consider $\ell_2$- and $\ell_\infty$-norm constraints to craft input perturbations, only a few investigate sparse $\ell_1$- and $\ell_0$-norm attacks. In particular, $\ell_0$-norm attacks remain the least studied due to the inherent complexity of optimizing over a non-convex and non-differentiable constraint. However, evaluating adversarial robustness under these attacks could reveal weaknesses otherwise left untested with more conventional $\ell_2$- and $\ell_\infty$-norm attacks. In this work, we propose a novel $\ell_0$-norm attack, called $\sigma$-zero, which leverages a differentiable approximation of the $\ell_0$ norm to facilitate gradient-based optimization, and an adaptive projection operator to dynamically adjust the trade-off between loss minimization and perturbation sparsity. Extensive evaluations using MNIST, CIFAR10, and ImageNet datasets, involving robust and non-robust models, show that $\sigma$-zero finds minimum $\ell_0$-norm adversarial examples without requiring any time-consuming hyperparameter tuning, and that it outperforms all competing sparse attacks in terms of success rate, perturbation size, and efficiency.
[ { "version": "v1", "created": "Fri, 2 Feb 2024 20:08:11 GMT" }, { "version": "v2", "created": "Wed, 2 Oct 2024 12:42:56 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 11:05:33 GMT" } ]
2025-03-07T00:00:00
[ [ "Cinà", "Antonio Emanuele", "" ], [ "Villani", "Francesco", "" ], [ "Pintor", "Maura", "" ], [ "Schönherr", "Lea", "" ], [ "Biggio", "Battista", "" ], [ "Pelillo", "Marcello", "" ] ]
TITLE: $\sigma$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples ABSTRACT: Evaluating the adversarial robustness of deep networks to gradient-based attacks is challenging. While most attacks consider $\ell_2$- and $\ell_\infty$-norm constraints to craft input perturbations, only a few investigate sparse $\ell_1$- and $\ell_0$-norm attacks. In particular, $\ell_0$-norm attacks remain the least studied due to the inherent complexity of optimizing over a non-convex and non-differentiable constraint. However, evaluating adversarial robustness under these attacks could reveal weaknesses otherwise left untested with more conventional $\ell_2$- and $\ell_\infty$-norm attacks. In this work, we propose a novel $\ell_0$-norm attack, called $\sigma$-zero, which leverages a differentiable approximation of the $\ell_0$ norm to facilitate gradient-based optimization, and an adaptive projection operator to dynamically adjust the trade-off between loss minimization and perturbation sparsity. Extensive evaluations using MNIST, CIFAR10, and ImageNet datasets, involving robust and non-robust models, show that $\sigma$-zero finds minimum $\ell_0$-norm adversarial examples without requiring any time-consuming hyperparameter tuning, and that it outperforms all competing sparse attacks in terms of success rate, perturbation size, and efficiency.
no_new_dataset
0.93511
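The core of the $\sigma$-zero entry is a differentiable surrogate for the non-differentiable $\ell_0$ norm plus a projection that trades loss against sparsity. The sketch below uses one common smooth surrogate, $\sum_i \delta_i^2/(\delta_i^2+\sigma)$, and a fixed-threshold projection; both are assumptions standing in for the paper's exact operators.

```python
import torch

def smooth_l0(delta: torch.Tensor, sigma: float = 1e-3) -> torch.Tensor:
    """Differentiable l0 surrogate: each coordinate contributes
    d_i^2 / (d_i^2 + sigma), which is ~1 for clearly non-zero entries
    and ~0 near zero, so the sum approximates the number of non-zeros."""
    d2 = delta.pow(2)
    return (d2 / (d2 + sigma)).sum()

def project_sparse(delta: torch.Tensor, tau: float) -> torch.Tensor:
    """Stand-in for the adaptive projection: zero out coordinates whose
    magnitude falls below tau (in the attack, tau would be adapted during
    optimization rather than fixed)."""
    return torch.where(delta.abs() < tau, torch.zeros_like(delta), delta)
```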
2402.02998
Pierre Ablin
Yu-Guan Hsieh, James Thornton, Eugene Ndiaye, Michal Klein, Marco Cuturi, Pierre Ablin
Careful with that Scalpel: Improving Gradient Surgery with an EMA
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Beyond minimizing a single training loss, many deep learning estimation pipelines rely on an auxiliary objective to quantify and encourage desirable properties of the model (e.g. performance on another dataset, robustness, agreement with a prior). Although the simplest approach to incorporating an auxiliary loss is to sum it with the training loss as a regularizer, recent works have shown that one can improve performance by blending the gradients beyond a simple sum; this is known as gradient surgery. We cast the problem as a constrained minimization problem where the auxiliary objective is minimized among the set of minimizers of the training loss. To solve this bilevel problem, we follow a parameter update direction that combines the training loss gradient and the orthogonal projection of the auxiliary gradient to the training gradient. In a setting where gradients come from mini-batches, we explain how, using a moving average of the training loss gradients, we can carefully maintain this critical orthogonality property. We demonstrate that our method, Bloop, can lead to much better performances on NLP and vision experiments than other gradient surgery methods without EMA.
[ { "version": "v1", "created": "Mon, 5 Feb 2024 13:37:00 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 08:57:29 GMT" } ]
2025-03-07T00:00:00
[ [ "Hsieh", "Yu-Guan", "" ], [ "Thornton", "James", "" ], [ "Ndiaye", "Eugene", "" ], [ "Klein", "Michal", "" ], [ "Cuturi", "Marco", "" ], [ "Ablin", "Pierre", "" ] ]
TITLE: Careful with that Scalpel: Improving Gradient Surgery with an EMA ABSTRACT: Beyond minimizing a single training loss, many deep learning estimation pipelines rely on an auxiliary objective to quantify and encourage desirable properties of the model (e.g. performance on another dataset, robustness, agreement with a prior). Although the simplest approach to incorporating an auxiliary loss is to sum it with the training loss as a regularizer, recent works have shown that one can improve performance by blending the gradients beyond a simple sum; this is known as gradient surgery. We cast the problem as a constrained minimization problem where the auxiliary objective is minimized among the set of minimizers of the training loss. To solve this bilevel problem, we follow a parameter update direction that combines the training loss gradient and the orthogonal projection of the auxiliary gradient to the training gradient. In a setting where gradients come from mini-batches, we explain how, using a moving average of the training loss gradients, we can carefully maintain this critical orthogonality property. We demonstrate that our method, Bloop, can lead to much better performances on NLP and vision experiments than other gradient surgery methods without EMA.
no_new_dataset
0.942665
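The Bloop entry describes a precise update rule: the training gradient plus the component of the auxiliary gradient orthogonal to an EMA of training gradients. A minimal NumPy sketch, assuming flattened parameter gradients and illustrative function names:

```python
import numpy as np

def bloop_direction(train_grad, aux_grad, ema_grad, eps=1e-12):
    """Training gradient plus the auxiliary gradient with its component
    parallel to the EMA-smoothed training gradient removed, so the
    auxiliary objective is pursued only within (approximate) minimizers
    of the training loss."""
    parallel = (aux_grad @ ema_grad) / (ema_grad @ ema_grad + eps) * ema_grad
    return train_grad + (aux_grad - parallel)

def update_ema(ema_grad, train_grad, beta=0.9):
    # EMA of mini-batch training gradients, maintaining the critical
    # orthogonality property across noisy stochastic estimates.
    return beta * ema_grad + (1.0 - beta) * train_grad
```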
2402.04722
Kit Gallagher
Kit Gallagher, Richard Creswell, Ben Lambert, Martin Robinson, Chon Lok Lei, Gary R. Mirams, David J. Gavaghan
Ten simple rules for training scientists to make better software
20 pages, 7 figures
null
10.1371/journal.pcbi.1012410
null
cs.CY cs.SE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Computational methods and associated software implementations are central to every field of scientific investigation. Modern biological research, particularly within systems biology, has relied heavily on the development of software tools to process and organize increasingly large datasets, simulate complex mechanistic models, provide tools for the analysis and management of data, and visualize and organize outputs. However, developing high-quality research software requires scientists to develop a host of software development skills, and teaching these skills to students is challenging. Growing importance has been placed on ensuring reproducibility and good development practices in computational research. However, less attention has been devoted to identifying the specific teaching strategies that are effective at nurturing in researchers the complex skillset needed to produce high-quality software that, increasingly, is required to underpin both academic and industrial biomedical research. Recent articles in the Ten Simple Rules collection have discussed the teaching of foundational computer science and coding techniques to biology students. We advance this discussion by describing the specific steps for effectively teaching the necessary skills scientists need to develop sustainable software packages which are fit for (re-)use in academic research or more widely. Although our advice is likely to be applicable to all students and researchers hoping to improve their software development skills, our guidelines are directed towards an audience of students that have some programming literacy but little formal training in software development or engineering, typical of early doctoral students. These practices are also applicable outside of doctoral training environments, and we believe they should form a key part of postgraduate training schemes more generally in the life sciences.
[ { "version": "v1", "created": "Wed, 7 Feb 2024 10:16:20 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 17:54:09 GMT" } ]
2025-03-07T00:00:00
[ [ "Gallagher", "Kit", "" ], [ "Creswell", "Richard", "" ], [ "Lambert", "Ben", "" ], [ "Robinson", "Martin", "" ], [ "Lei", "Chon Lok", "" ], [ "Mirams", "Gary R.", "" ], [ "Gavaghan", "David J.", "" ] ]
TITLE: Ten simple rules for training scientists to make better software ABSTRACT: Computational methods and associated software implementations are central to every field of scientific investigation. Modern biological research, particularly within systems biology, has relied heavily on the development of software tools to process and organize increasingly large datasets, simulate complex mechanistic models, provide tools for the analysis and management of data, and visualize and organize outputs. However, developing high-quality research software requires scientists to develop a host of software development skills, and teaching these skills to students is challenging. Growing importance has been placed on ensuring reproducibility and good development practices in computational research. However, less attention has been devoted to identifying the specific teaching strategies that are effective at nurturing in researchers the complex skillset needed to produce high-quality software that, increasingly, is required to underpin both academic and industrial biomedical research. Recent articles in the Ten Simple Rules collection have discussed the teaching of foundational computer science and coding techniques to biology students. We advance this discussion by describing the specific steps for effectively teaching the necessary skills scientists need to develop sustainable software packages which are fit for (re-)use in academic research or more widely. Although our advice is likely to be applicable to all students and researchers hoping to improve their software development skills, our guidelines are directed towards an audience of students that have some programming literacy but little formal training in software development or engineering, typical of early doctoral students. These practices are also applicable outside of doctoral training environments, and we believe they should form a key part of postgraduate training schemes more generally in the life sciences.
no_new_dataset
0.940079
2403.16182
Yifei Huang
Yifei Huang, Guo Chen, Jilan Xu, Mingfang Zhang, Lijin Yang, Baoqi Pei, Hongjie Zhang, Lu Dong, Yali Wang, Limin Wang, Yu Qiao
EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World
CVPR 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Being able to map the activities of others into one's own point of view is one fundamental human skill even from a very early age. Taking a step toward understanding this human ability, we introduce EgoExoLearn, a large-scale dataset that emulates the human demonstration-following process, in which individuals record egocentric videos as they execute tasks guided by demonstration videos. Focusing on the potential applications in daily assistance and professional support, EgoExoLearn contains egocentric and demonstration video data spanning 120 hours captured in daily life scenarios and specialized laboratories. Along with the videos, we record high-quality gaze data and provide detailed multimodal annotations, formulating a playground for modeling the human ability to bridge asynchronous procedural actions from different viewpoints. To this end, we present benchmarks such as cross-view association, cross-view action planning, and cross-view referenced skill assessment, along with detailed analysis. We expect EgoExoLearn can serve as an important resource for bridging the actions across views, thus paving the way for creating AI agents capable of seamlessly learning by observing humans in the real world. Code and data can be found at: https://github.com/OpenGVLab/EgoExoLearn
[ { "version": "v1", "created": "Sun, 24 Mar 2024 15:00:44 GMT" }, { "version": "v2", "created": "Wed, 5 Jun 2024 09:44:52 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 02:46:51 GMT" } ]
2025-03-07T00:00:00
[ [ "Huang", "Yifei", "" ], [ "Chen", "Guo", "" ], [ "Xu", "Jilan", "" ], [ "Zhang", "Mingfang", "" ], [ "Yang", "Lijin", "" ], [ "Pei", "Baoqi", "" ], [ "Zhang", "Hongjie", "" ], [ "Dong", "Lu", "" ], [ "Wang", "Yali", "" ], [ "Wang", "Limin", "" ], [ "Qiao", "Yu", "" ] ]
TITLE: EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World ABSTRACT: Being able to map the activities of others into one's own point of view is one fundamental human skill even from a very early age. Taking a step toward understanding this human ability, we introduce EgoExoLearn, a large-scale dataset that emulates the human demonstration-following process, in which individuals record egocentric videos as they execute tasks guided by demonstration videos. Focusing on the potential applications in daily assistance and professional support, EgoExoLearn contains egocentric and demonstration video data spanning 120 hours captured in daily life scenarios and specialized laboratories. Along with the videos, we record high-quality gaze data and provide detailed multimodal annotations, formulating a playground for modeling the human ability to bridge asynchronous procedural actions from different viewpoints. To this end, we present benchmarks such as cross-view association, cross-view action planning, and cross-view referenced skill assessment, along with detailed analysis. We expect EgoExoLearn can serve as an important resource for bridging the actions across views, thus paving the way for creating AI agents capable of seamlessly learning by observing humans in the real world. Code and data can be found at: https://github.com/OpenGVLab/EgoExoLearn
new_dataset
0.96707
2404.05569
Hao Li
Shen Gao, Hao Li, Chengrui Huang, Quan Tu, Zhiliang Tian, Minlie Huang, Shuo Shang
360$^\circ$REA: Towards A Reusable Experience Accumulation with 360{\deg} Assessment for Multi-Agent System
null
null
null
null
cs.AI cs.CL cs.MA
http://creativecommons.org/licenses/by/4.0/
Large language model agents have demonstrated remarkable advancements across various complex tasks. Recent works focus on optimizing the agent team or employing self-reflection to iteratively solve complex tasks. Since these agents are all based on the same LLM, only conducting self-evaluation or removing underperforming agents does not substantively enhance the capability of the agents. We argue that a comprehensive evaluation and accumulating experience from evaluation feedback is an effective approach to improving system performance. In this paper, we propose Reusable Experience Accumulation with 360$^\circ$ Assessment (360$^\circ$REA), a hierarchical multi-agent framework inspired by corporate organizational practices. The framework employs a novel 360$^\circ$ performance assessment method for multi-perspective performance evaluation with fine-grained assessment. To enhance the capability of agents in addressing complex tasks, we introduce a dual-level experience pool for agents to accumulate experience through fine-grained assessment. Extensive experiments on complex task datasets demonstrate the effectiveness of 360$^\circ$REA.
[ { "version": "v1", "created": "Mon, 8 Apr 2024 14:43:13 GMT" }, { "version": "v2", "created": "Wed, 26 Jun 2024 11:42:10 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 12:54:37 GMT" } ]
2025-03-07T00:00:00
[ [ "Gao", "Shen", "" ], [ "Li", "Hao", "" ], [ "Huang", "Chengrui", "" ], [ "Tu", "Quan", "" ], [ "Tian", "Zhiliang", "" ], [ "Huang", "Minlie", "" ], [ "Shang", "Shuo", "" ] ]
TITLE: 360$^\circ$REA: Towards A Reusable Experience Accumulation with 360{\deg} Assessment for Multi-Agent System ABSTRACT: Large language model agents have demonstrated remarkable advancements across various complex tasks. Recent works focus on optimizing the agent team or employing self-reflection to iteratively solve complex tasks. Since these agents are all based on the same LLM, only conducting self-evaluation or removing underperforming agents does not substantively enhance the capability of the agents. We argue that a comprehensive evaluation and accumulating experience from evaluation feedback is an effective approach to improving system performance. In this paper, we propose Reusable Experience Accumulation with 360$^\circ$ Assessment (360$^\circ$REA), a hierarchical multi-agent framework inspired by corporate organizational practices. The framework employs a novel 360$^\circ$ performance assessment method for multi-perspective performance evaluation with fine-grained assessment. To enhance the capability of agents in addressing complex tasks, we introduce a dual-level experience pool for agents to accumulate experience through fine-grained assessment. Extensive experiments on complex task datasets demonstrate the effectiveness of 360$^\circ$REA.
no_new_dataset
0.949201
2405.01924
Jiawei Zhou
Jiawei Zhou, Li Dong, Furu Wei, Lei Chen
Semi-Parametric Retrieval via Binary Bag-of-Tokens Index
null
null
null
null
cs.CL cs.AI cs.IR
http://creativecommons.org/licenses/by/4.0/
Information retrieval has transitioned from standalone systems into essential components across broader applications, with indexing efficiency, cost-effectiveness, and freshness becoming increasingly critical yet often overlooked. In this paper, we introduce SemI-parametric Disentangled Retrieval (SiDR), a bi-encoder retrieval framework that decouples retrieval index from neural parameters to enable efficient, low-cost, and parameter-agnostic indexing for emerging use cases. Specifically, in addition to using embeddings as indexes like existing neural retrieval methods, SiDR supports a non-parametric tokenization index for search, achieving BM25-like indexing complexity with significantly better effectiveness. Our comprehensive evaluation across 16 retrieval benchmarks demonstrates that SiDR outperforms both neural and term-based retrieval baselines under the same indexing workload: (i) When using an embedding-based index, SiDR exceeds the performance of conventional neural retrievers while maintaining similar training complexity; (ii) When using a tokenization-based index, SiDR drastically reduces indexing cost and time, matching the complexity of traditional term-based retrieval, while consistently outperforming BM25 on all in-domain datasets; (iii) Additionally, we introduce a late parametric mechanism that matches BM25 index preparation time while outperforming other neural retrieval baselines in effectiveness.
[ { "version": "v1", "created": "Fri, 3 May 2024 08:34:13 GMT" }, { "version": "v2", "created": "Thu, 6 Mar 2025 10:39:48 GMT" } ]
2025-03-07T00:00:00
[ [ "Zhou", "Jiawei", "" ], [ "Dong", "Li", "" ], [ "Wei", "Furu", "" ], [ "Chen", "Lei", "" ] ]
TITLE: Semi-Parametric Retrieval via Binary Bag-of-Tokens Index ABSTRACT: Information retrieval has transitioned from standalone systems into essential components across broader applications, with indexing efficiency, cost-effectiveness, and freshness becoming increasingly critical yet often overlooked. In this paper, we introduce SemI-parametric Disentangled Retrieval (SiDR), a bi-encoder retrieval framework that decouples retrieval index from neural parameters to enable efficient, low-cost, and parameter-agnostic indexing for emerging use cases. Specifically, in addition to using embeddings as indexes like existing neural retrieval methods, SiDR supports a non-parametric tokenization index for search, achieving BM25-like indexing complexity with significantly better effectiveness. Our comprehensive evaluation across 16 retrieval benchmarks demonstrates that SiDR outperforms both neural and term-based retrieval baselines under the same indexing workload: (i) When using an embedding-based index, SiDR exceeds the performance of conventional neural retrievers while maintaining similar training complexity; (ii) When using a tokenization-based index, SiDR drastically reduces indexing cost and time, matching the complexity of traditional term-based retrieval, while consistently outperforming BM25 on all in-domain datasets; (iii) Additionally, we introduce a late parametric mechanism that matches BM25 index preparation time while outperforming other neural retrieval baselines in effectiveness.
no_new_dataset
0.949716
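The SiDR entry contrasts an embedding index with a non-parametric tokenization index. A hedged sketch of the latter: documents become binary bag-of-tokens vectors (built with no encoder forward pass), and retrieval scores them against a query-side term-weight vector. Function names and the scoring form are assumptions for illustration, not the framework's exact API.

```python
import numpy as np

def bag_of_tokens(token_ids, vocab_size: int) -> np.ndarray:
    """Binary bag-of-tokens index entry: v[t] = 1 iff token t occurs in the
    document. Indexing needs only a tokenizer, giving BM25-like preparation
    cost and making the index independent of the neural parameters."""
    v = np.zeros(vocab_size, dtype=np.int8)
    v[np.unique(token_ids)] = 1
    return v

def rank(query_weights: np.ndarray, doc_index: np.ndarray) -> np.ndarray:
    """Score all documents by inner product with a query term-weight vector
    (which, in the framework, a neural query encoder would produce);
    returns document indices sorted best-first."""
    scores = doc_index @ query_weights  # doc_index: (num_docs, vocab_size)
    return np.argsort(-scores)
```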
2405.02318
Abhinav Lalwani
Abhinav Lalwani, Tasha Kim, Lovish Chopra, Christopher Hahn, Zhijing Jin and Mrinmaya Sachan
Autoformalizing Natural Language to First-Order Logic: A Case Study in Logical Fallacy Detection
null
null
null
null
cs.CL cs.AI cs.LG cs.LO
http://creativecommons.org/licenses/by/4.0/
Translating natural language into formal language such as First-Order Logic (FOL) is a foundational challenge in NLP with wide-ranging applications in automated reasoning, misinformation tracking, and knowledge validation. In this paper, we introduce Natural Language to First-Order Logic (NL2FOL), a framework to autoformalize natural language to FOL step by step using Large Language Models (LLMs). Our approach addresses key challenges in this translation process, including the integration of implicit background knowledge. By leveraging structured representations generated by NL2FOL, we use Satisfiability Modulo Theory (SMT) solvers to reason about the logical validity of natural language statements. We present logical fallacy detection as a case study to evaluate the efficacy of NL2FOL. Being neurosymbolic, our approach also provides interpretable insights into the reasoning process and demonstrates robustness without requiring model fine-tuning or labeled training data. Our framework achieves strong performance on multiple datasets. On the LOGIC dataset, NL2FOL achieves an F1-score of 78%, while generalizing effectively to the LOGICCLIMATE dataset with an F1-score of 80%.
[ { "version": "v1", "created": "Thu, 18 Apr 2024 00:20:48 GMT" }, { "version": "v2", "created": "Mon, 3 Mar 2025 00:38:48 GMT" }, { "version": "v3", "created": "Thu, 6 Mar 2025 07:29:44 GMT" } ]
2025-03-07T00:00:00
[ [ "Lalwani", "Abhinav", "" ], [ "Kim", "Tasha", "" ], [ "Chopra", "Lovish", "" ], [ "Hahn", "Christopher", "" ], [ "Jin", "Zhijing", "" ], [ "Sachan", "Mrinmaya", "" ] ]
TITLE: Autoformalizing Natural Language to First-Order Logic: A Case Study in Logical Fallacy Detection ABSTRACT: Translating natural language into formal language such as First-Order Logic (FOL) is a foundational challenge in NLP with wide-ranging applications in automated reasoning, misinformation tracking, and knowledge validation. In this paper, we introduce Natural Language to First-Order Logic (NL2FOL), a framework to autoformalize natural language to FOL step by step using Large Language Models (LLMs). Our approach addresses key challenges in this translation process, including the integration of implicit background knowledge. By leveraging structured representations generated by NL2FOL, we use Satisfiability Modulo Theory (SMT) solvers to reason about the logical validity of natural language statements. We present logical fallacy detection as a case study to evaluate the efficacy of NL2FOL. Being neurosymbolic, our approach also provides interpretable insights into the reasoning process and demonstrates robustness without requiring model fine-tuning or labeled training data. Our framework achieves strong performance on multiple datasets. On the LOGIC dataset, NL2FOL achieves an F1-score of 78%, while generalizing effectively to the LOGICCLIMATE dataset with an F1-score of 80%.
no_new_dataset
0.94625
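The NL2FOL entry closes the section; its pipeline checks the logical validity of formalized statements with an SMT solver. A minimal Z3 sketch of that final step, using a propositional modus ponens for brevity (the framework itself targets first-order formulas; the example formula is illustrative):

```python
from z3 import And, Bool, Implies, Not, Solver, unsat

# A claim is logically valid iff its negation is unsatisfiable.
rain, wet = Bool("rain"), Bool("wet")
premises = And(Implies(rain, wet), rain)
claim = Implies(premises, wet)          # modus ponens: should be valid

s = Solver()
s.add(Not(claim))
print("valid" if s.check() == unsat else "potential fallacy")
```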