id: string
submitter: string
authors: string
title: string
comments: string
journal-ref: string
doi: string
report-no: string
categories: string
license: string
abstract: string
versions: list
update_date: timestamp[s]
authors_parsed: sequence
prompt: string
2503.22392
Xiao Jiang
Xiao Jiang, Grace J. Gang, J. Webster Stayman
Volumetric Material Decomposition Using Spectral Diffusion Posterior Sampling with a Compressed Polychromatic Forward Model
null
null
null
null
physics.med-ph eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have previously introduced Spectral Diffusion Posterior Sampling (Spectral DPS) as a framework for accurate one-step material decomposition by integrating analytic spectral system models with priors learned from large datasets. This work extends the 2D Spectral DPS algorithm to 3D by addressing the potentially limiting large-memory requirements with a pre-trained 2D diffusion model for slice-by-slice processing and a compressed polychromatic forward model that ensures accurate physical modeling. Simulation studies demonstrate that the proposed memory-efficient 3D Spectral DPS enables material decomposition of clinically significant volume sizes. Quantitative analysis reveals that Spectral DPS outperforms other deep-learning algorithms, such as InceptNet and conditional DDPM, in contrast quantification, inter-slice continuity, and resolution preservation. This study establishes a foundation for advancing one-step material decomposition in volumetric spectral CT.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 12:52:59 GMT" } ]
2025-03-31T00:00:00
[ [ "Jiang", "Xiao", "" ], [ "Gang", "Grace J.", "" ], [ "Stayman", "J. Webster", "" ] ]
TITLE: Volumetric Material Decomposition Using Spectral Diffusion Posterior Sampling with a Compressed Polychromatic Forward Model ABSTRACT: We have previously introduced Spectral Diffusion Posterior Sampling (Spectral DPS) as a framework for accurate one-step material decomposition by integrating analytic spectral system models with priors learned from large datasets. This work extends the 2D Spectral DPS algorithm to 3D by addressing the potentially limiting large-memory requirements with a pre-trained 2D diffusion model for slice-by-slice processing and a compressed polychromatic forward model that ensures accurate physical modeling. Simulation studies demonstrate that the proposed memory-efficient 3D Spectral DPS enables material decomposition of clinically significant volume sizes. Quantitative analysis reveals that Spectral DPS outperforms other deep-learning algorithms, such as InceptNet and conditional DDPM, in contrast quantification, inter-slice continuity, and resolution preservation. This study establishes a foundation for advancing one-step material decomposition in volumetric spectral CT.
2503.22394
Rulin Zhou
Rulin Zhou and Wenlong He and An Wang and Qiqi Yao and Haijun Hu and Jiankun Wang and Xi Zhang and Hongliang Ren
Endo-TTAP: Robust Endoscopic Tissue Tracking via Multi-Facet Guided Attention and Hybrid Flow-point Supervision
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Accurate tissue point tracking in endoscopic videos is critical for robotic-assisted surgical navigation and scene understanding, but remains challenging due to complex deformations, instrument occlusion, and the scarcity of dense trajectory annotations. Existing methods struggle with long-term tracking under these conditions due to limited feature utilization and annotation dependence. We present Endo-TTAP, a novel framework addressing these challenges through: (1) A Multi-Facet Guided Attention (MFGA) module that synergizes multi-scale flow dynamics, DINOv2 semantic embeddings, and explicit motion patterns to jointly predict point positions with uncertainty and occlusion awareness; (2) A two-stage curriculum learning strategy employing an Auxiliary Curriculum Adapter (ACA) for progressive initialization and hybrid supervision. Stage I utilizes synthetic data with optical flow ground truth for uncertainty-occlusion regularization, while Stage II combines unsupervised flow consistency and semi-supervised learning with refined pseudo-labels from off-the-shelf trackers. Extensive validation on two MICCAI Challenge datasets and our collected dataset demonstrates that Endo-TTAP achieves state-of-the-art performance in tissue point tracking, particularly in scenarios characterized by complex endoscopic conditions. The source code and dataset will be available at https://anonymous.4open.science/r/Endo-TTAP-36E5.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 13:00:07 GMT" } ]
2025-03-31T00:00:00
[ [ "Zhou", "Rulin", "" ], [ "He", "Wenlong", "" ], [ "Wang", "An", "" ], [ "Yao", "Qiqi", "" ], [ "Hu", "Haijun", "" ], [ "Wang", "Jiankun", "" ], [ "Ren", "Xi Zhang an Hongliang", "" ] ]
TITLE: Endo-TTAP: Robust Endoscopic Tissue Tracking via Multi-Facet Guided Attention and Hybrid Flow-point Supervision ABSTRACT: Accurate tissue point tracking in endoscopic videos is critical for robotic-assisted surgical navigation and scene understanding, but remains challenging due to complex deformations, instrument occlusion, and the scarcity of dense trajectory annotations. Existing methods struggle with long-term tracking under these conditions due to limited feature utilization and annotation dependence. We present Endo-TTAP, a novel framework addressing these challenges through: (1) A Multi-Facet Guided Attention (MFGA) module that synergizes multi-scale flow dynamics, DINOv2 semantic embeddings, and explicit motion patterns to jointly predict point positions with uncertainty and occlusion awareness; (2) A two-stage curriculum learning strategy employing an Auxiliary Curriculum Adapter (ACA) for progressive initialization and hybrid supervision. Stage I utilizes synthetic data with optical flow ground truth for uncertainty-occlusion regularization, while Stage II combines unsupervised flow consistency and semi-supervised learning with refined pseudo-labels from off-the-shelf trackers. Extensive validation on two MICCAI Challenge datasets and our collected dataset demonstrates that Endo-TTAP achieves state-of-the-art performance in tissue point tracking, particularly in scenarios characterized by complex endoscopic conditions. The source code and dataset will be available at https://anonymous.4open.science/r/Endo-TTAP-36E5.
2503.22395
Tereza Vrabcov\'a
Tereza Vrabcov\'a, Marek Kadl\v{c}\'ik, Petr Sojka, Michal \v{S}tef\'anik, Michal Spiegel
Negation: A Pink Elephant in the Large Language Models' Room?
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Negations are key to determining sentence meaning, making them essential for logical reasoning. Despite their importance, negations pose a substantial challenge for large language models (LLMs) and remain underexplored. We construct two multilingual natural language inference (NLI) datasets with \textit{paired} examples differing in negation. We investigate how model size and language affect a model's ability to handle negation correctly by evaluating popular LLMs. Contrary to previous work, we show that increasing the model size consistently improves the models' ability to handle negations. Furthermore, we find that both the models' reasoning accuracy and robustness to negation are language-dependent, and that the length and explicitness of the premise have a greater impact on robustness than language. Our datasets can facilitate further research and improvement of language model reasoning in multilingual settings.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 13:04:41 GMT" } ]
2025-03-31T00:00:00
[ [ "Vrabcová", "Tereza", "" ], [ "Kadlčík", "Marek", "" ], [ "Sojka", "Petr", "" ], [ "Štefánik", "Michal", "" ], [ "Spiegel", "Michal", "" ] ]
TITLE: Negation: A Pink Elephant in the Large Language Models' Room? ABSTRACT: Negations are key to determining sentence meaning, making them essential for logical reasoning. Despite their importance, negations pose a substantial challenge for large language models (LLMs) and remain underexplored. We construct two multilingual natural language inference (NLI) datasets with \textit{paired} examples differing in negation. We investigate how model size and language affect a model's ability to handle negation correctly by evaluating popular LLMs. Contrary to previous work, we show that increasing the model size consistently improves the models' ability to handle negations. Furthermore, we find that both the models' reasoning accuracy and robustness to negation are language-dependent, and that the length and explicitness of the premise have a greater impact on robustness than language. Our datasets can facilitate further research and improvement of language model reasoning in multilingual settings.
2503.22397
Vida Adeli
Vida Adeli, Soroush Mehraban, Majid Mirmehdi, Alan Whone, Benjamin Filtjens, Amirhossein Dadashzadeh, Alfonso Fasano, Andrea Iaboni, Babak Taati
GAITGen: Disentangled Motion-Pathology Impaired Gait Generative Model -- Bringing Motion Generation to the Clinical Domain
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Gait analysis is crucial for the diagnosis and monitoring of movement disorders like Parkinson's Disease. While computer vision models have shown potential for objectively evaluating parkinsonian gait, their effectiveness is limited by scarce clinical datasets and the challenge of collecting large and well-labelled data, impacting model accuracy and risk of bias. To address these gaps, we propose GAITGen, a novel framework that generates realistic gait sequences conditioned on specified pathology severity levels. GAITGen employs a Conditional Residual Vector Quantized Variational Autoencoder to learn disentangled representations of motion dynamics and pathology-specific factors, coupled with Mask and Residual Transformers for conditioned sequence generation. GAITGen generates realistic, diverse gait sequences across severity levels, enriching datasets and enabling large-scale model training in parkinsonian gait analysis. Experiments on our new PD-GaM (real) dataset demonstrate that GAITGen outperforms adapted state-of-the-art models in both reconstruction fidelity and generation quality, accurately capturing critical pathology-specific gait features. A clinical user study confirms the realism and clinical relevance of our generated sequences. Moreover, incorporating GAITGen-generated data into downstream tasks improves parkinsonian gait severity estimation, highlighting its potential for advancing clinical gait analysis.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 13:06:45 GMT" } ]
2025-03-31T00:00:00
[ [ "Adeli", "Vida", "" ], [ "Mehraban", "Soroush", "" ], [ "Mirmehdi", "Majid", "" ], [ "Whone", "Alan", "" ], [ "Filtjens", "Benjamin", "" ], [ "Dadashzadeh", "Amirhossein", "" ], [ "Fasano", "Alfonso", "" ], [ "Taati", "Andrea Iaboni Babak", "" ] ]
TITLE: GAITGen: Disentangled Motion-Pathology Impaired Gait Generative Model -- Bringing Motion Generation to the Clinical Domain ABSTRACT: Gait analysis is crucial for the diagnosis and monitoring of movement disorders like Parkinson's Disease. While computer vision models have shown potential for objectively evaluating parkinsonian gait, their effectiveness is limited by scarce clinical datasets and the challenge of collecting large and well-labelled data, impacting model accuracy and risk of bias. To address these gaps, we propose GAITGen, a novel framework that generates realistic gait sequences conditioned on specified pathology severity levels. GAITGen employs a Conditional Residual Vector Quantized Variational Autoencoder to learn disentangled representations of motion dynamics and pathology-specific factors, coupled with Mask and Residual Transformers for conditioned sequence generation. GAITGen generates realistic, diverse gait sequences across severity levels, enriching datasets and enabling large-scale model training in parkinsonian gait analysis. Experiments on our new PD-GaM (real) dataset demonstrate that GAITGen outperforms adapted state-of-the-art models in both reconstruction fidelity and generation quality, accurately capturing critical pathology-specific gait features. A clinical user study confirms the realism and clinical relevance of our generated sequences. Moreover, incorporating GAITGen-generated data into downstream tasks improves parkinsonian gait severity estimation, highlighting its potential for advancing clinical gait analysis.
2503.22398
David Fischinger
David Fischinger and Martin Boyer
DF-Net: The Digital Forensics Network for Image Forgery Detection
Published in 2023 at the 25th Irish Machine Vision and Image Processing Conference (IMVIP), https://iprcs.github.io/pdf/IMVIP2023_Proceeding.pdf
2023 | 25th Irish Machine Vision and Image Processing Conference (IMVIP) | ISBN: 978-0-9934207-8-8
10.5281/zenodo.8214996 10.5281/zenodo.8142658
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The orchestrated manipulation of public opinion, particularly through manipulated images, often spread via online social networks (OSN), has become a serious threat to society. In this paper, we introduce the Digital Forensics Net (DF-Net), a deep neural network for pixel-wise image forgery detection. The released model outperforms several state-of-the-art methods on four established benchmark datasets. Most notably, DF-Net's detection is robust against lossy image operations (e.g., resizing, compression) as they are automatically performed by social networks.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 13:06:59 GMT" } ]
2025-03-31T00:00:00
[ [ "Fischinger", "David", "" ], [ "Boyer", "Martin", "" ] ]
TITLE: DF-Net: The Digital Forensics Network for Image Forgery Detection ABSTRACT: The orchestrated manipulation of public opinion, particularly through manipulated images, often spread via online social networks (OSN), has become a serious threat to society. In this paper, we introduce the Digital Forensics Net (DF-Net), a deep neural network for pixel-wise image forgery detection. The released model outperforms several state-of-the-art methods on four established benchmark datasets. Most notably, DF-Net's detection is robust against lossy image operations (e.g., resizing, compression) as they are automatically performed by social networks.
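Editorial note on the record above: robustness to lossy OSN operations is commonly obtained by augmenting training images with the same resize-and-recompress transformations that social networks apply. The Python sketch below illustrates that generic technique under this assumption; it is not the published DF-Net training recipe, and the parameter ranges are illustrative.

```python
# Hedged sketch: simulate OSN-style lossy operations as a training augmentation.
import io
import random
from PIL import Image

def osn_style_augment(img: Image.Image) -> Image.Image:
    """Apply a random resize followed by JPEG re-compression,
    mimicking what social networks do to uploaded images."""
    # Random downscale between 50% and 100% of the original size.
    scale = random.uniform(0.5, 1.0)
    w, h = img.size
    img = img.resize((max(1, int(w * scale)), max(1, int(h * scale))), Image.BILINEAR)
    # Re-encode as JPEG at a random quality, then decode again.
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=random.randint(40, 95))
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```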
2503.22408
Xiaolei Bian
Xiaolei Bian, Changfu Zou, Bj\"orn Fridholm, Christian Sundvall, Torsten Wik
Smart Sensing Breaks the Accuracy Barrier in Battery State Monitoring
null
null
null
null
eess.SY cs.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Accurate state-of-charge (SOC) estimation is essential for optimizing battery performance, ensuring safety, and maximizing economic value. Conventional current and voltage measurements, however, have inherent limitations in fully inferring the multiphysics-resolved dynamics inside battery cells. This creates an accuracy barrier that constrains battery usage and reduces cost-competitiveness and sustainability across industries dependent on battery technology. In this work, we introduce an integrated sensor framework that combines novel mechanical, thermal, gas, optical, and electrical sensors with traditional measurements to break through this barrier. We generate three unique datasets with eleven measurement types and propose an explainable machine-learning approach for SOC estimation. This approach renders the measured signals and the predictive result of machine learning physically interpretable with respect to battery SOC, offering fundamental insights into the time-varying importance of different signals. Our experimental results reveal a marked increase in SOC estimation accuracy--enhanced from 46.1% to 74.5%--compared to conventional methods. This approach not only advances SOC monitoring precision but also establishes a foundation for monitoring additional battery states to further improve safety, extend lifespan, and facilitate fast charging.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 13:17:58 GMT" } ]
2025-03-31T00:00:00
[ [ "Bian", "Xiaolei", "" ], [ "Zou", "Changfu", "" ], [ "Fridholm", "Björn", "" ], [ "Sundvall", "Christian", "" ], [ "Wik", "Torsten", "" ] ]
TITLE: Smart Sensing Breaks the Accuracy Barrier in Battery State Monitoring ABSTRACT: Accurate state-of-charge (SOC) estimation is essential for optimizing battery performance, ensuring safety, and maximizing economic value. Conventional current and voltage measurements, however, have inherent limitations in fully inferring the multiphysics-resolved dynamics inside battery cells. This creates an accuracy barrier that constrains battery usage and reduces cost-competitiveness and sustainability across industries dependent on battery technology. In this work, we introduce an integrated sensor framework that combines novel mechanical, thermal, gas, optical, and electrical sensors with traditional measurements to break through this barrier. We generate three unique datasets with eleven measurement types and propose an explainable machine-learning approach for SOC estimation. This approach renders the measured signals and the predictive result of machine learning physically interpretable with respect to battery SOC, offering fundamental insights into the time-varying importance of different signals. Our experimental results reveal a marked increase in SOC estimation accuracy--enhanced from 46.1% to 74.5%--compared to conventional methods. This approach not only advances SOC monitoring precision but also establishes a foundation for monitoring additional battery states to further improve safety, extend lifespan, and facilitate fast charging.
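As an editorial illustration of the kind of signal-importance analysis the record above describes (time-varying importance of eleven measurement types for SOC estimation), the sketch below applies permutation importance to synthetic sensor channels. The model choice, channel semantics, and data are assumptions, not the paper's specific explainable-ML method.

```python
# Hedged sketch: rank synthetic sensor channels by permutation importance.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 11))  # stand-ins for the eleven measurement types
# Toy SOC target depending mostly on channels 0 and 3.
soc = 0.5 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.1, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, soc, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:5]:
    print(f"channel {i}: importance {imp.importances_mean[i]:.3f}")
```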
2503.22411
Petter T\"ornberg
Petter T\"ornberg and Juliana Chueri
Elite Political Discourse has Become More Toxic in Western Countries
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Toxic and uncivil politics is widely seen as a growing threat to democratic values and governance, yet our understanding of the drivers and evolution of political incivility remains limited. Leveraging a novel dataset of nearly 18 million Twitter messages from parliamentarians in 17 countries over five years, this paper systematically investigates whether politics internationally is becoming more uncivil and what the determinants of political incivility are. Our analysis reveals a marked increase in toxic discourse among political elites and shows that it is associated with radical-right parties and parties in opposition. Toxicity diminished markedly during the early phase of the COVID-19 pandemic and, surprisingly, during election campaigns. Furthermore, our results indicate that posts relating to ``culture war'' topics, such as migration and LGBTQ+ rights, are substantially more toxic than debates focused on welfare or economic issues. These findings underscore a troubling shift in international democracies toward an erosion of constructive democratic dialogue.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 13:21:49 GMT" } ]
2025-03-31T00:00:00
[ [ "Törnberg", "Petter", "" ], [ "Chueri", "Juliana", "" ] ]
TITLE: Elite Political Discourse has Become More Toxic in Western Countries ABSTRACT: Toxic and uncivil politics is widely seen as a growing threat to democratic values and governance, yet our understanding of the drivers and evolution of political incivility remains limited. Leveraging a novel dataset of nearly 18 million Twitter messages from parliamentarians in 17 countries over five years, this paper systematically investigates whether politics internationally is becoming more uncivil and what the determinants of political incivility are. Our analysis reveals a marked increase in toxic discourse among political elites and shows that it is associated with radical-right parties and parties in opposition. Toxicity diminished markedly during the early phase of the COVID-19 pandemic and, surprisingly, during election campaigns. Furthermore, our results indicate that posts relating to ``culture war'' topics, such as migration and LGBTQ+ rights, are substantially more toxic than debates focused on welfare or economic issues. These findings underscore a troubling shift in international democracies toward an erosion of constructive democratic dialogue.
2503.22417
David Fischinger
David Fischinger and Martin Boyer
DF2023: The Digital Forensics 2023 Dataset for Image Forgery Detection
Published at the 25th Irish Machine Vision and Image Processing Conference (IMVIP) --- Proceedings: https://iprcs.github.io/pdf/IMVIP2023_Proceeding.pdf --- Dataset download: https://zenodo.org/records/7326540/files/DF2023_train.zip https://zenodo.org/records/7326540/files/DF2023_val.zip Kaggle: https://www.kaggle.com/datasets/davidfischinger/df2023-digital-forensics-2023-dataset/data
2023 | 25th Irish Machine Vision and Image Processing Conference (IMVIP) | ISBN: 978-0-9934207-8-8
10.5281/zenodo.8215043 10.5281/zenodo.7326540
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The deliberate manipulation of public opinion, especially through altered images, which are frequently disseminated through online social networks, poses a significant danger to society. To fight this issue on a technical level, we support the research community by releasing the Digital Forensics 2023 (DF2023) training and validation dataset, comprising one million images from four major forgery categories: splicing, copy-move, enhancement, and removal. This dataset enables an objective comparison of network architectures and can significantly reduce the time and effort of researchers preparing datasets.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 13:31:19 GMT" } ]
2025-03-31T00:00:00
[ [ "Fischinger", "David", "" ], [ "Boyer", "Martin", "" ] ]
TITLE: DF2023: The Digital Forensics 2023 Dataset for Image Forgery Detection ABSTRACT: The deliberate manipulation of public opinion, especially through altered images, which are frequently disseminated through online social networks, poses a significant danger to society. To fight this issue on a technical level, we support the research community by releasing the Digital Forensics 2023 (DF2023) training and validation dataset, comprising one million images from four major forgery categories: splicing, copy-move, enhancement, and removal. This dataset enables an objective comparison of network architectures and can significantly reduce the time and effort of researchers preparing datasets.
2503.22427
Rajkumar Muthusamy DSc (Tech)
Abhinav Pathak and Rajkumar Muthusamy
Collapse and Collision Aware Grasping for Cluttered Shelf Picking
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Efficient and safe retrieval of stacked objects in warehouse environments is a significant challenge due to complex spatial dependencies and structural inter-dependencies. Traditional vision-based methods excel at object localization but often lack the physical reasoning required to predict the consequences of extraction, leading to unintended collisions and collapses. This paper proposes a collapse- and collision-aware grasp planner that integrates dynamic physics simulations for robotic decision-making. Using a single image and depth map, an approximate 3D representation of the scene is reconstructed in a simulation environment, enabling the robot to evaluate different retrieval strategies before execution. Two approaches, (1) heuristic-based and (2) physics-based, are proposed for both single-box extraction and shelf clearance tasks. Extensive real-world experiments on structured and unstructured box stacks, along with validation using datasets from existing databases, show that our physics-aware method significantly improves efficiency and success rates compared to baseline heuristics.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 13:42:54 GMT" } ]
2025-03-31T00:00:00
[ [ "Pathak", "Abhinav", "" ], [ "Muthusamy", "Rajkumar", "" ] ]
TITLE: Collapse and Collision Aware Grasping for Cluttered Shelf Picking ABSTRACT: Efficient and safe retrieval of stacked objects in warehouse environments is a significant challenge due to complex spatial dependencies and structural inter-dependencies. Traditional vision-based methods excel at object localization but often lack the physical reasoning required to predict the consequences of extraction, leading to unintended collisions and collapses. This paper proposes a collapse- and collision-aware grasp planner that integrates dynamic physics simulations for robotic decision-making. Using a single image and depth map, an approximate 3D representation of the scene is reconstructed in a simulation environment, enabling the robot to evaluate different retrieval strategies before execution. Two approaches, (1) heuristic-based and (2) physics-based, are proposed for both single-box extraction and shelf clearance tasks. Extensive real-world experiments on structured and unstructured box stacks, along with validation using datasets from existing databases, show that our physics-aware method significantly improves efficiency and success rates compared to baseline heuristics.
2503.22436
Fuhao Li
Fuhao Li, Huan Jin, Bin Gao, Liaoyuan Fan, Lihui Jiang, Long Zeng
NuGrounding: A Multi-View 3D Visual Grounding Framework in Autonomous Driving
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-view 3D visual grounding is critical for autonomous driving vehicles to interpret natural language and localize target objects in complex environments. However, existing datasets and methods suffer from coarse-grained language instructions and inadequate integration of 3D geometric reasoning with linguistic comprehension. To this end, we introduce NuGrounding, the first large-scale benchmark for multi-view 3D visual grounding in autonomous driving. We present a Hierarchy of Grounding (HoG) method to construct NuGrounding with hierarchical multi-level instructions, ensuring comprehensive coverage of human instruction patterns. To tackle this challenging dataset, we propose a novel paradigm that seamlessly combines the instruction comprehension abilities of multi-modal LLMs (MLLMs) with the precise localization abilities of specialist detection models. Our approach introduces two decoupled task tokens and a context query to aggregate 3D geometric information and semantic instructions, followed by a fusion decoder to refine spatial-semantic feature fusion for precise localization. Extensive experiments demonstrate that our method outperforms baselines adapted from representative 3D scene understanding methods by a significant margin, achieving 0.59 precision and 0.64 recall, improvements of 50.8% and 54.7%, respectively.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 13:55:16 GMT" } ]
2025-03-31T00:00:00
[ [ "Li", "Fuhao", "" ], [ "Jin", "Huan", "" ], [ "Gao", "Bin", "" ], [ "Fan", "Liaoyuan", "" ], [ "Jiang", "Lihui", "" ], [ "Zeng", "Long", "" ] ]
TITLE: NuGrounding: A Multi-View 3D Visual Grounding Framework in Autonomous Driving ABSTRACT: Multi-view 3D visual grounding is critical for autonomous driving vehicles to interpret natural language and localize target objects in complex environments. However, existing datasets and methods suffer from coarse-grained language instructions and inadequate integration of 3D geometric reasoning with linguistic comprehension. To this end, we introduce NuGrounding, the first large-scale benchmark for multi-view 3D visual grounding in autonomous driving. We present a Hierarchy of Grounding (HoG) method to construct NuGrounding with hierarchical multi-level instructions, ensuring comprehensive coverage of human instruction patterns. To tackle this challenging dataset, we propose a novel paradigm that seamlessly combines the instruction comprehension abilities of multi-modal LLMs (MLLMs) with the precise localization abilities of specialist detection models. Our approach introduces two decoupled task tokens and a context query to aggregate 3D geometric information and semantic instructions, followed by a fusion decoder to refine spatial-semantic feature fusion for precise localization. Extensive experiments demonstrate that our method outperforms baselines adapted from representative 3D scene understanding methods by a significant margin, achieving 0.59 precision and 0.64 recall, improvements of 50.8% and 54.7%, respectively.
2503.22437
Xu Wang Mr
Xu Wang, Shuai Zhang, Baoru Huang, Danail Stoyanov, Evangelos B. Mazomenos
EndoLRMGS: Complete Endoscopic Scene Reconstruction combining Large Reconstruction Modelling and Gaussian Splatting
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complete reconstruction of surgical scenes is crucial for robot-assisted surgery (RAS). Deep depth estimation is promising, but existing works struggle with depth discontinuities, resulting in noisy predictions at object boundaries, and do not achieve complete reconstruction, omitting occluded surfaces. To address these issues we propose EndoLRMGS, which combines Large Reconstruction Modelling (LRM) and Gaussian Splatting (GS) for complete surgical scene reconstruction. GS reconstructs deformable tissues and LRM generates 3D models for surgical tools, whose position and scale are subsequently optimized by introducing orthogonal perspective joint projection optimization (OPjPO) to enhance accuracy. In experiments on four surgical videos from three public datasets, our method improves the Intersection-over-Union (IoU) of tool 3D models in 2D projections by >40%. Additionally, EndoLRMGS improves the PSNR of the tool projections from 3.82% to 11.07%. Tissue rendering quality also improves, with PSNR increasing from 0.46% to 49.87% and SSIM from 1.53% to 29.21% across all test videos.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 13:57:12 GMT" } ]
2025-03-31T00:00:00
[ [ "Wang", "Xu", "" ], [ "Zhang", "Shuai", "" ], [ "Huang", "Baoru", "" ], [ "Stoyanov", "Danail", "" ], [ "Mazomenos", "Evangelos B.", "" ] ]
TITLE: EndoLRMGS: Complete Endoscopic Scene Reconstruction combining Large Reconstruction Modelling and Gaussian Splatting ABSTRACT: Complete reconstruction of surgical scenes is crucial for robot-assisted surgery (RAS). Deep depth estimation is promising, but existing works struggle with depth discontinuities, resulting in noisy predictions at object boundaries, and do not achieve complete reconstruction, omitting occluded surfaces. To address these issues we propose EndoLRMGS, which combines Large Reconstruction Modelling (LRM) and Gaussian Splatting (GS) for complete surgical scene reconstruction. GS reconstructs deformable tissues and LRM generates 3D models for surgical tools, whose position and scale are subsequently optimized by introducing orthogonal perspective joint projection optimization (OPjPO) to enhance accuracy. In experiments on four surgical videos from three public datasets, our method improves the Intersection-over-Union (IoU) of tool 3D models in 2D projections by >40%. Additionally, EndoLRMGS improves the PSNR of the tool projections from 3.82% to 11.07%. Tissue rendering quality also improves, with PSNR increasing from 0.46% to 49.87% and SSIM from 1.53% to 29.21% across all test videos.
2503.22448
Graciana Puentes
Graciana Puentes
Comparison between neural network clustering, hierarchical clustering and k-means clustering: Applications using fluidic lenses
19 pages, 9 figures
null
null
null
physics.optics cs.LG
http://creativecommons.org/licenses/by/4.0/
A comparison between neural network clustering (NNC), hierarchical clustering (HC) and K-means clustering (KMC) is performed to evaluate the computational superiority of these three machine learning (ML) techniques for organizing large datasets into clusters. For NNC, a self-organizing map (SOM) training was applied to a collection of wavefront sensor reconstructions, decomposed in terms of 15 Zernike coefficients, characterizing the optical aberrations of the phase front transmitted by fluidic lenses. In order to understand the distribution and structure of the 15 Zernike variables within an input space, SOM-neighboring weight distances, SOM-sample hits, SOM-weight positions and SOM-weight planes were analyzed to form a visual interpretation of the system's structural properties. In the case of HC, the data was partitioned using a combined dissimilarity-linkage matrix computation. The effectiveness of this method was confirmed by a high cophenetic correlation coefficient value (c=0.9651). Additionally, a maximum number of clusters was established by setting an inconsistency cutoff of 0.8, yielding a total of 7 clusters for system segmentation. In addition, a KMC approach was employed to establish a quantitative measure of clustering segmentation efficiency, obtaining an average silhouette value of 0.905 for data segmentation into K=5 non-overlapping clusters. On the other hand, the NNC analysis revealed that the 15 variables could be characterized through the collective influence of 8 clusters. It was established that the formation of clusters through the combined linkage and dissimilarity algorithms of HC alongside KMC is a more dependable clustering solution than separate assessment via NNC or HC, where altering the SOM size or inconsistency cutoff can lead to completely new clustering configurations.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 14:01:12 GMT" } ]
2025-03-31T00:00:00
[ [ "Puentes", "Graciana", "" ] ]
TITLE: Comparison between neural network clustering, hierarchical clustering and k-means clustering: Applications using fluidic lenses ABSTRACT: A comparison between neural network clustering (NNC), hierarchical clustering (HC) and K-means clustering (KMC) is performed to evaluate the computational superiority of these three machine learning (ML) techniques for organizing large datasets into clusters. For NNC, a self-organizing map (SOM) training was applied to a collection of wavefront sensor reconstructions, decomposed in terms of 15 Zernike coefficients, characterizing the optical aberrations of the phase front transmitted by fluidic lenses. In order to understand the distribution and structure of the 15 Zernike variables within an input space, SOM-neighboring weight distances, SOM-sample hits, SOM-weight positions and SOM-weight planes were analyzed to form a visual interpretation of the system's structural properties. In the case of HC, the data was partitioned using a combined dissimilarity-linkage matrix computation. The effectiveness of this method was confirmed by a high cophenetic correlation coefficient value (c=0.9651). Additionally, a maximum number of clusters was established by setting an inconsistency cutoff of 0.8, yielding a total of 7 clusters for system segmentation. In addition, a KMC approach was employed to establish a quantitative measure of clustering segmentation efficiency, obtaining an average silhouette value of 0.905 for data segmentation into K=5 non-overlapping clusters. On the other hand, the NNC analysis revealed that the 15 variables could be characterized through the collective influence of 8 clusters. It was established that the formation of clusters through the combined linkage and dissimilarity algorithms of HC alongside KMC is a more dependable clustering solution than separate assessment via NNC or HC, where altering the SOM size or inconsistency cutoff can lead to completely new clustering configurations.
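The diagnostics named in the record above (cophenetic correlation, an inconsistency cutoff on the dendrogram, and the average silhouette for K=5) map directly onto standard scipy/scikit-learn calls. A minimal sketch on synthetic stand-ins for the 15 Zernike coefficients; the linkage method and data are assumptions:

```python
# Hedged sketch: hierarchical and k-means clustering diagnostics on toy data.
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet, fcluster
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))  # stand-in for 15 Zernike coefficients per sample

# HC: pairwise dissimilarity + linkage matrix, then cophenetic correlation
# (the paper reports c = 0.9651 on its real data).
D = pdist(X)
Z = linkage(D, method="average")
c, _ = cophenet(Z, D)
print(f"cophenetic correlation: {c:.4f}")

# Cut the dendrogram with an inconsistency cutoff (the paper uses 0.8 -> 7 clusters).
labels_hc = fcluster(Z, t=0.8, criterion="inconsistent")

# KMC with K=5 and the average silhouette (the paper reports 0.905).
labels_km = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(f"mean silhouette: {silhouette_score(X, labels_km):.3f}")
```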
2503.22454
Ayan Majumdar
Ayan Majumdar and Deborah D. Kanubala and Kavya Gupta and Isabel Valera
A Causal Framework to Measure and Mitigate Non-binary Treatment Discrimination
24 pages, 5 figures
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Fairness studies of algorithmic decision-making systems often simplify complex decision processes, such as bail or loan approvals, into binary classification tasks. However, these approaches overlook that such decisions are not inherently binary (e.g., approve or not approve bail or loan); they also involve non-binary treatment decisions (e.g., bail conditions or loan terms) that can influence the downstream outcomes (e.g., loan repayment or reoffending). In this paper, we argue that non-binary treatment decisions are integral to the decision process and controlled by decision-makers and, therefore, should be central to fairness analyses in algorithmic decision-making. We propose a causal framework that extends fairness analyses and explicitly distinguishes between decision-subjects' covariates and the treatment decisions. This specification allows decision-makers to use our framework to (i) measure treatment disparity and its downstream effects in historical data and, using counterfactual reasoning, (ii) mitigate the impact of past unfair treatment decisions when automating decision-making. We use our framework to empirically analyze four widely used loan approval datasets to reveal potential disparity in non-binary treatment decisions and their discriminatory impact on outcomes, highlighting the need to incorporate treatment decisions in fairness assessments. Moreover, by intervening in treatment decisions, we show that our framework effectively mitigates treatment discrimination from historical data to ensure fair risk score estimation and (non-binary) decision-making processes that benefit all stakeholders.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 14:06:35 GMT" } ]
2025-03-31T00:00:00
[ [ "Majumdar", "Ayan", "" ], [ "Kanubala", "Deborah D.", "" ], [ "Gupta", "Kavya", "" ], [ "Valera", "Isabel", "" ] ]
TITLE: A Causal Framework to Measure and Mitigate Non-binary Treatment Discrimination ABSTRACT: Fairness studies of algorithmic decision-making systems often simplify complex decision processes, such as bail or loan approvals, into binary classification tasks. However, these approaches overlook that such decisions are not inherently binary (e.g., approve or not approve bail or loan); they also involve non-binary treatment decisions (e.g., bail conditions or loan terms) that can influence the downstream outcomes (e.g., loan repayment or reoffending). In this paper, we argue that non-binary treatment decisions are integral to the decision process and controlled by decision-makers and, therefore, should be central to fairness analyses in algorithmic decision-making. We propose a causal framework that extends fairness analyses and explicitly distinguishes between decision-subjects' covariates and the treatment decisions. This specification allows decision-makers to use our framework to (i) measure treatment disparity and its downstream effects in historical data and, using counterfactual reasoning, (ii) mitigate the impact of past unfair treatment decisions when automating decision-making. We use our framework to empirically analyze four widely used loan approval datasets to reveal potential disparity in non-binary treatment decisions and their discriminatory impact on outcomes, highlighting the need to incorporate treatment decisions in fairness assessments. Moreover, by intervening in treatment decisions, we show that our framework effectively mitigates treatment discrimination from historical data to ensure fair risk score estimation and (non-binary) decision-making processes that benefit all stakeholders.
2503.22460
Xin He
YongKang Yan, Zeqian Gan, Luying Hu, Xinrui Xu, Ran Kang, Chengwei Qian, Jianqiang Mei, Paul Beckett, William Shieh, Rui Yin, Xin He, Xu Liu
High-Dimensional Encoding Computational Imaging
18 pages, 10 figures, 1 table
null
null
null
physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-dimensional imaging technology has demonstrated significant research value across diverse fields, including environmental monitoring, agricultural inspection, and biomedical imaging, through integrating spatial (X*Y), spectral, and polarization detection functionalities. Here, we report a high-dimensional encoding computational imaging technique, utilizing 4 high-dimensional encoders (HDE1-4) and a high-dimensional neural network (HDNN) to reconstruct 80 high-dimensional images of the target. The system efficiently acquires spectral-polarization information, spanning a wavelength range of 400-800 nm at intervals of 20 nm, obtaining 20 spectral datasets. Each dataset contains images captured at 4 polarization angles (0{\deg}, 45{\deg}, 90{\deg}, and -45{\deg}), and the image resolution can reach up to 1280 * 960 pixels, achieving a reconstruction ratio of 1:20. Experimental validation confirms that the spectral reconstruction error consistently remains below 0.14%. Extensive high-dimensional imaging experiments were conducted under indoor and outdoor conditions, showing the system's significant adaptability and robustness in various environments. Compared to traditional imaging devices, such as hyperspectral cameras, which can only acquire spectral information, and polarization cameras, which are limited to polarization imaging, this integrated system successfully overcomes these technological constraints, providing an innovative and efficient solution for high-dimensional optical sensing applications.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 14:13:32 GMT" } ]
2025-03-31T00:00:00
[ [ "Yan", "YongKang", "" ], [ "Gan", "Zeqian", "" ], [ "Hu", "Luying", "" ], [ "Xu", "Xinrui", "" ], [ "Kang", "Ran", "" ], [ "Qian", "Chengwei", "" ], [ "Mei", "Jianqiang", "" ], [ "Beckett", "Paul", "" ], [ "Shieh", "William", "" ], [ "Yin", "Rui", "" ], [ "He", "Xin", "" ], [ "Liu", "Xu", "" ] ]
TITLE: High-Dimensional Encoding Computational Imaging ABSTRACT: High-dimensional imaging technology has demonstrated significant research value across diverse fields, including environmental monitoring, agricultural inspection, and biomedical imaging, through integrating spatial (X*Y), spectral, and polarization detection functionalities. Here, we report a high-dimensional encoding computational imaging technique, utilizing 4 high-dimensional encoders (HDE1-4) and a high-dimensional neural network (HDNN) to reconstruct 80 high-dimensional images of the target. The system efficiently acquires spectral-polarization information, spanning a wavelength range of 400-800 nm at intervals of 20 nm, obtaining 20 spectral datasets. Each dataset contains images captured at 4 polarization angles (0{\deg}, 45{\deg}, 90{\deg}, and -45{\deg}), and the image resolution can reach up to 1280 * 960 pixels, achieving a reconstruction ratio of 1:20. Experimental validation confirms that the spectral reconstruction error consistently remains below 0.14%. Extensive high-dimensional imaging experiments were conducted under indoor and outdoor conditions, showing the system's significant adaptability and robustness in various environments. Compared to traditional imaging devices, such as hyperspectral cameras, which can only acquire spectral information, and polarization cameras, which are limited to polarization imaging, this integrated system successfully overcomes these technological constraints, providing an innovative and efficient solution for high-dimensional optical sensing applications.
2503.22462
Krispin Wandel
Krispin Wandel, Hesheng Wang
SemAlign3D: Semantic Correspondence between RGB-Images through Aligning 3D Object-Class Representations
Accepted to CVPR 2025. Poster: https://cvpr.thecvf.com/virtual/2025/poster/32799
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantic correspondence has made tremendous progress through the recent advancements of large vision models (LVMs). While these LVMs have been shown to reliably capture local semantics, the same can currently not be said for capturing global geometric relationships between semantic object regions. This problem leads to unreliable performance for semantic correspondence between images with extreme view variation. In this work, we aim to leverage monocular depth estimates to capture these geometric relationships for more robust and data-efficient semantic correspondence. First, we introduce a simple but effective method to build 3D object-class representations from monocular depth estimates and LVM features using a sparsely annotated image correspondence dataset. Second, we formulate an alignment energy that can be minimized using gradient descent to obtain an alignment between the 3D object-class representation and the object-class instance in the input RGB-image. Our method achieves state-of-the-art matching accuracy in multiple categories on the challenging SPair-71k dataset, increasing the PCK@0.1 score by more than 10 points on three categories and overall by 3.3 points from 85.6% to 88.9%. Additional resources and code are available at https://dub.sh/semalign3d.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 14:14:19 GMT" } ]
2025-03-31T00:00:00
[ [ "Wandel", "Krispin", "" ], [ "Wang", "Hesheng", "" ] ]
TITLE: SemAlign3D: Semantic Correspondence between RGB-Images through Aligning 3D Object-Class Representations ABSTRACT: Semantic correspondence has made tremendous progress through the recent advancements of large vision models (LVMs). While these LVMs have been shown to reliably capture local semantics, the same can currently not be said for capturing global geometric relationships between semantic object regions. This problem leads to unreliable performance for semantic correspondence between images with extreme view variation. In this work, we aim to leverage monocular depth estimates to capture these geometric relationships for more robust and data-efficient semantic correspondence. First, we introduce a simple but effective method to build 3D object-class representations from monocular depth estimates and LVM features using a sparsely annotated image correspondence dataset. Second, we formulate an alignment energy that can be minimized using gradient descent to obtain an alignment between the 3D object-class representation and the object-class instance in the input RGB-image. Our method achieves state-of-the-art matching accuracy in multiple categories on the challenging SPair-71k dataset, increasing the PCK@0.1 score by more than 10 points on three categories and overall by 3.3 points from 85.6% to 88.9%. Additional resources and code are available at https://dub.sh/semalign3d.
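The record above minimizes an alignment energy by gradient descent. As a hedged sketch of that general pattern only, the code below fits a similarity transform (log-scale and translation) between a synthetic 3D template and a target using a plain L2 energy; the paper's actual energy, which couples LVM features with monocular depth, is more involved, and all names here are illustrative.

```python
# Hedged sketch: minimize a toy alignment energy by gradient descent.
import torch

template = torch.randn(100, 3)  # stand-in 3D object-class representation
target = template * 1.7 + torch.tensor([0.3, -0.2, 0.5])  # scaled/shifted instance

log_s = torch.zeros(1, requires_grad=True)  # log-scale parameter
t = torch.zeros(3, requires_grad=True)      # translation parameter
opt = torch.optim.Adam([log_s, t], lr=0.05)

for step in range(500):
    opt.zero_grad()
    aligned = template * log_s.exp() + t                   # apply similarity transform
    energy = ((aligned - target) ** 2).sum(dim=1).mean()   # L2 alignment energy
    energy.backward()
    opt.step()

print(f"recovered scale ~ {log_s.exp().item():.3f}, translation ~ {t.detach().numpy()}")
```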
2503.22473
Hanchao Liu
Hanchao Liu, Rongjun Li, Weimin Xiong, Ziyu Zhou, Wei Peng
WorkTeam: Constructing Workflows from Natural Language with Multi-Agents
Accepted in NAACL 2025 Industry Track
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Workflows play a crucial role in enhancing enterprise efficiency by orchestrating complex processes with multiple tools or components. However, hand-crafted workflow construction requires expert knowledge, presenting significant technical barriers. Recent advancements in Large Language Models (LLMs) have improved the generation of workflows from natural language instructions (aka NL2Workflow), yet existing single LLM agent-based methods face performance degradation on complex tasks due to the need for specialized knowledge and the strain of task-switching. To tackle these challenges, we propose WorkTeam, a multi-agent NL2Workflow framework comprising a supervisor, orchestrator, and filler agent, each with distinct roles that collaboratively enhance the conversion process. As there are currently no publicly available NL2Workflow benchmarks, we also introduce the HW-NL2Workflow dataset, which includes 3,695 real-world business samples for training and evaluation. Experimental results show that our approach significantly increases the success rate of workflow construction, providing a novel and effective solution for enterprise NL2Workflow services.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 14:33:29 GMT" } ]
2025-03-31T00:00:00
[ [ "Liu", "Hanchao", "" ], [ "Li", "Rongjun", "" ], [ "Xiong", "Weimin", "" ], [ "Zhou", "Ziyu", "" ], [ "Peng", "Wei", "" ] ]
TITLE: WorkTeam: Constructing Workflows from Natural Language with Multi-Agents ABSTRACT: Workflows play a crucial role in enhancing enterprise efficiency by orchestrating complex processes with multiple tools or components. However, hand-crafted workflow construction requires expert knowledge, presenting significant technical barriers. Recent advancements in Large Language Models (LLMs) have improved the generation of workflows from natural language instructions (aka NL2Workflow), yet existing single LLM agent-based methods face performance degradation on complex tasks due to the need for specialized knowledge and the strain of task-switching. To tackle these challenges, we propose WorkTeam, a multi-agent NL2Workflow framework comprising a supervisor, orchestrator, and filler agent, each with distinct roles that collaboratively enhance the conversion process. As there are currently no publicly available NL2Workflow benchmarks, we also introduce the HW-NL2Workflow dataset, which includes 3,695 real-world business samples for training and evaluation. Experimental results show that our approach significantly increases the success rate of workflow construction, providing a novel and effective solution for enterprise NL2Workflow services.
2503.22475
Bo Shen
Chenyang Li, Tanmay Sunil Kapure, Prokash Chandra Roy, Zhengtao Gan, Bo Shen
DeepOFormer: Deep Operator Learning with Domain-informed Features for Fatigue Life Prediction
6 pages, 4 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Fatigue life characterizes the duration a material can function before failure under specific environmental conditions, and is traditionally assessed using stress-life (S-N) curves. While machine learning and deep learning offer promising results for fatigue life prediction, they face the challenge of overfitting because of the small size of fatigue experimental data for specific materials. To address this challenge, we propose DeepOFormer, formulating S-N curve prediction as an operator learning problem. DeepOFormer improves the deep operator learning framework with a transformer-based encoder and a mean L2 relative error loss function. We also consider Stussi, Weibull, and Pascual and Meeker (PM) features as domain-informed features. These features are motivated by empirical fatigue models. To evaluate the performance of our DeepOFormer, we compare it with different deep learning models and XGBoost on a dataset with 54 S-N curves of aluminum alloys. With seven different aluminum alloys selected for testing, our DeepOFormer achieves an R2 of 0.9515, a mean absolute error of 0.2080, and a mean relative error of 0.5077, significantly outperforming state-of-the-art deep/machine learning methods including DeepONet, TabTransformer, and XGBoost. The results highlight that our DeepOFormer, integrating domain-informed features, substantially improves prediction accuracy and generalization capabilities for fatigue life prediction in aluminum alloys.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 14:34:35 GMT" } ]
2025-03-31T00:00:00
[ [ "Li", "Chenyang", "" ], [ "Kapure", "Tanmay Sunil", "" ], [ "Roy", "Prokash Chandra", "" ], [ "Gan", "Zhengtao", "" ], [ "Shen", "Bo", "" ] ]
TITLE: DeepOFormer: Deep Operator Learning with Domain-informed Features for Fatigue Life Prediction ABSTRACT: Fatigue life characterizes the duration a material can function before failure under specific environmental conditions, and is traditionally assessed using stress-life (S-N) curves. While machine learning and deep learning offer promising results for fatigue life prediction, they face the challenge of overfitting because of the small size of fatigue experimental data for specific materials. To address this challenge, we propose DeepOFormer, formulating S-N curve prediction as an operator learning problem. DeepOFormer improves the deep operator learning framework with a transformer-based encoder and a mean L2 relative error loss function. We also consider Stussi, Weibull, and Pascual and Meeker (PM) features as domain-informed features. These features are motivated by empirical fatigue models. To evaluate the performance of our DeepOFormer, we compare it with different deep learning models and XGBoost on a dataset with 54 S-N curves of aluminum alloys. With seven different aluminum alloys selected for testing, our DeepOFormer achieves an R2 of 0.9515, a mean absolute error of 0.2080, and a mean relative error of 0.5077, significantly outperforming state-of-the-art deep/machine learning methods including DeepONet, TabTransformer, and XGBoost. The results highlight that our DeepOFormer, integrating domain-informed features, substantially improves prediction accuracy and generalization capabilities for fatigue life prediction in aluminum alloys.
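A minimal sketch of a mean L2 relative error loss as named in the record above; the exact normalization and reduction used by DeepOFormer are assumptions here.

```python
# Hedged sketch: batch-averaged relative L2 error, ||pred - true|| / ||true||.
import torch

def mean_l2_relative_error(pred: torch.Tensor, true: torch.Tensor,
                           eps: float = 1e-8) -> torch.Tensor:
    """Average over the batch of ||pred - true||_2 / ||true||_2."""
    num = torch.linalg.vector_norm(pred - true, dim=-1)
    den = torch.linalg.vector_norm(true, dim=-1).clamp_min(eps)  # avoid divide-by-zero
    return (num / den).mean()

# Usage on a synthetic batch of predicted vs. measured S-N curve values.
pred = torch.randn(32, 10)
true = torch.randn(32, 10)
loss = mean_l2_relative_error(pred, true)
```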
2503.22498
Jing Li
Jing Li and Hao Sun
Learnable cut flow
26 pages, 33 figures
null
null
null
cs.LG hep-ph
http://creativecommons.org/licenses/by/4.0/
Neural networks have emerged as a powerful paradigm for tasks in high energy physics, yet their opaque training process renders them a black box. In contrast, the traditional cut flow method offers simplicity and interpretability but demands human effort to identify optimal boundaries. To merge the strengths of both approaches, we propose the Learnable Cut Flow (LCF), a neural network that transforms traditional cut selection into a fully differentiable, data-driven process. LCF implements two cut strategies to flexibly determine optimal boundaries: parallel, where observable distributions are treated independently, and sequential, where prior cuts shape subsequent ones. Building on this, we introduce the Learnable Importance, a metric that quantifies feature importance and adjusts their contributions to the loss accordingly, offering model-driven insights unlike ad-hoc metrics. To ensure differentiability, a modified loss function replaces hard cuts with mask operations, preserving data shape throughout the training process. LCF is tested on six varied mock datasets and a realistic diboson vs. QCD dataset. Results demonstrate that LCF (1) accurately learns cut boundaries across typical feature distributions in both parallel and sequential strategies, (2) assigns higher importance to discriminative features with minimal overlap, (3) handles redundant or correlated features robustly, and (4) performs effectively in real-world scenarios. On the diboson dataset, LCF initially underperforms boosted decision trees and multilayer perceptrons when using all observables. However, pruning less critical features, guided by learned importance, boosts its performance to match or exceed these baselines. LCF bridges the gap between the traditional cut flow method and modern black-box neural networks, delivering actionable insights into the training process and feature importance.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 15:04:06 GMT" } ]
2025-03-31T00:00:00
[ [ "Li", "Jing", "" ], [ "Sun", "Hao", "" ] ]
TITLE: Learnable cut flow ABSTRACT: Neural networks have emerged as a powerful paradigm for tasks in high energy physics, yet their opaque training process renders them a black box. In contrast, the traditional cut flow method offers simplicity and interpretability but demands human effort to identify optimal boundaries. To merge the strengths of both approaches, we propose the Learnable Cut Flow (LCF), a neural network that transforms traditional cut selection into a fully differentiable, data-driven process. LCF implements two cut strategies to flexibly determine optimal boundaries: parallel, where observable distributions are treated independently, and sequential, where prior cuts shape subsequent ones. Building on this, we introduce the Learnable Importance, a metric that quantifies feature importance and adjusts their contributions to the loss accordingly, offering model-driven insights unlike ad-hoc metrics. To ensure differentiability, a modified loss function replaces hard cuts with mask operations, preserving data shape throughout the training process. LCF is tested on six varied mock datasets and a realistic diboson vs. QCD dataset. Results demonstrate that LCF (1) accurately learns cut boundaries across typical feature distributions in both parallel and sequential strategies, (2) assigns higher importance to discriminative features with minimal overlap, (3) handles redundant or correlated features robustly, and (4) performs effectively in real-world scenarios. On the diboson dataset, LCF initially underperforms boosted decision trees and multilayer perceptrons when using all observables. However, pruning less critical features, guided by learned importance, boosts its performance to match or exceed these baselines. LCF bridges the gap between the traditional cut flow method and modern black-box neural networks, delivering actionable insights into the training process and feature importance.
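The core trick described in the record above, replacing a hard cut x > b with a differentiable mask, can be sketched in a few lines of PyTorch. The boundary parameters, the sigmoid temperature, and the toy labels below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: learnable soft cuts via sigmoid masks (parallel strategy).
import torch

class SoftCut(torch.nn.Module):
    def __init__(self, n_features: int, temperature: float = 0.1):
        super().__init__()
        self.boundaries = torch.nn.Parameter(torch.zeros(n_features))
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-feature soft masks in (0, 1); their product approximates
        # "event passes all cuts" while staying differentiable in the boundaries.
        masks = torch.sigmoid((x - self.boundaries) / self.temperature)
        return masks.prod(dim=-1)

# Train the boundaries so signal events pass and background events are cut.
cut = SoftCut(n_features=4)
opt = torch.optim.Adam(cut.parameters(), lr=0.01)
x = torch.randn(256, 4)                 # toy observables
y = (x.sum(dim=-1) > 0).float()         # toy signal/background labels
for _ in range(200):
    opt.zero_grad()
    p = cut(x).clamp(1e-6, 1 - 1e-6)
    loss = torch.nn.functional.binary_cross_entropy(p, y)
    loss.backward()
    opt.step()
```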
2503.22510
Yongmin Li
Simran Kaur Ghatoray and Yongmin Li
Automated UX Insights from User Research Videos by Integrating Facial Emotion and Text Sentiment
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Emotion recognition technology has been studied over the past decade. Given its growing importance and its applications in areas such as customer service, medicine, and education, this study explores its potential and importance in the field of user experience (UX) evaluation. Recognizing and tracking user emotions in user research videos is important for understanding user needs and expectations of a service or product. Little research in UX has focused on automating emotion extraction from a video using more than one modality. This study implements multiple modalities, namely facial emotion recognition, speech-to-text, and text-based emotion recognition, to capture emotional nuances from a user research video and extract meaningful, actionable insights. To select a facial emotion recognition model, 10 pre-trained models were evaluated on three benchmark datasets, i.e., FER-2013, AffectNet, and CK+, and the model with the best generalization ability was selected. OpenAI's Whisper model was implemented to extract speech and convert it to text, and emotions were then recognized from the text using a pre-trained model available on the Hugging Face website with an evaluation accuracy above 95%. The study also integrates the gathered data using temporal alignment and fusion for deeper, contextual insights. It further demonstrates a way of automating data analysis through the PandasAI Python library, where OpenAI's GPT-4o model was implemented, along with a discussion of other possible solutions. This study is an attempt to demonstrate a proof of concept in which automated, meaningful insights are extracted from a video based on user emotions.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 15:14:08 GMT" } ]
2025-03-31T00:00:00
[ [ "Ghatoray", "Simran Kaur", "" ], [ "Li", "Yongmin", "" ] ]
TITLE: Automated UX Insights from User Research Videos by Integrating Facial Emotion and Text Sentiment ABSTRACT: Emotion recognition technology has been studied over the past decade. Given its growing importance and its applications in areas such as customer service, medicine, and education, this study explores its potential and importance in the field of user experience (UX) evaluation. Recognizing and tracking user emotions in user research videos is important for understanding user needs and expectations of a service or product. Little research in UX has focused on automating emotion extraction from a video using more than one modality. This study implements multiple modalities, namely facial emotion recognition, speech-to-text, and text-based emotion recognition, to capture emotional nuances from a user research video and extract meaningful, actionable insights. To select a facial emotion recognition model, 10 pre-trained models were evaluated on three benchmark datasets, i.e., FER-2013, AffectNet, and CK+, and the model with the best generalization ability was selected. OpenAI's Whisper model was implemented to extract speech and convert it to text, and emotions were then recognized from the text using a pre-trained model available on the Hugging Face website with an evaluation accuracy above 95%. The study also integrates the gathered data using temporal alignment and fusion for deeper, contextual insights. It further demonstrates a way of automating data analysis through the PandasAI Python library, where OpenAI's GPT-4o model was implemented, along with a discussion of other possible solutions. This study is an attempt to demonstrate a proof of concept in which automated, meaningful insights are extracted from a video based on user emotions.
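A minimal sketch of the speech-to-text and text-emotion stages of such a pipeline might look like the following. It assumes the open-source `whisper` and `transformers` packages; the audio filename and the specific Hugging Face emotion model are placeholders, not necessarily those used in the study.

```python
import whisper
from transformers import pipeline

# Transcribe the user research audio with OpenAI's Whisper.
asr_model = whisper.load_model("base")  # model size is an assumption
result = asr_model.transcribe("user_session.mp3")  # hypothetical file

# Classify emotions in the transcript with a pre-trained text model.
# The model name below is a placeholder for whichever emotion
# classifier is selected.
emotion_clf = pipeline("text-classification",
                       model="j-hartmann/emotion-english-distilroberta-base")

# Whisper returns timestamped segments, enabling temporal alignment
# with the facial emotion stream.
for segment in result["segments"]:
    emotion = emotion_clf(segment["text"])[0]
    print(f"[{segment['start']:.1f}s-{segment['end']:.1f}s] "
          f"{emotion['label']} ({emotion['score']:.2f}): {segment['text']}")
```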
2503.22513
Martin Ki\v{s}\v{s}
Martin Ki\v{s}\v{s} and Michal Hradi\v{s}
Masked Self-Supervised Pre-Training for Text Recognition Transformers on Large-Scale Datasets
18 pages, 7 tables, 6 figures; Submitted to ICDAR25
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-supervised learning has emerged as a powerful approach for leveraging large-scale unlabeled data to improve model performance in various domains. In this paper, we explore masked self-supervised pre-training for text recognition transformers. Specifically, we propose two modifications to the pre-training phase: progressively increasing the masking probability, and modifying the loss function to incorporate both masked and non-masked patches. We conduct extensive experiments using a dataset of 50M unlabeled text lines for pre-training and four differently sized annotated datasets for fine-tuning. Furthermore, we compare our pre-trained models against those trained with transfer learning, demonstrating the effectiveness of the self-supervised pre-training. In particular, pre-training consistently improves the character error rate of models, in some cases by up to 30% in relative terms. It is also on par with transfer learning, but without relying on extra annotated text lines.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 15:16:48 GMT" } ]
2025-03-31T00:00:00
[ [ "Kišš", "Martin", "" ], [ "Hradiš", "Michal", "" ] ]
TITLE: Masked Self-Supervised Pre-Training for Text Recognition Transformers on Large-Scale Datasets ABSTRACT: Self-supervised learning has emerged as a powerful approach for leveraging large-scale unlabeled data to improve model performance in various domains. In this paper, we explore masked self-supervised pre-training for text recognition transformers. Specifically, we propose two modifications to the pre-training phase: progressively increasing the masking probability, and modifying the loss function to incorporate both masked and non-masked patches. We conduct extensive experiments using a dataset of 50M unlabeled text lines for pre-training and four differently sized annotated datasets for fine-tuning. Furthermore, we compare our pre-trained models against those trained with transfer learning, demonstrating the effectiveness of the self-supervised pre-training. In particular, pre-training consistently improves the character error rate of models, in some cases by up to 30% in relative terms. It is also on par with transfer learning, but without relying on extra annotated text lines.
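The two pre-training modifications named in this abstract can be sketched in a few lines: a masking probability that ramps up over training, and a loss that weighs masked and non-masked patches separately. The linear schedule shape, its endpoints, and the weight `alpha` below are illustrative assumptions, not the paper's exact choices.

```python
import torch

def masking_probability(step, total_steps, p_start=0.1, p_end=0.5):
    """Progressively increase the masking probability over training.

    The start/end values and the linear ramp are illustrative
    assumptions, not the paper's exact schedule.
    """
    frac = min(step / total_steps, 1.0)
    return p_start + frac * (p_end - p_start)

def patch_loss(pred, target, mask, alpha=0.1):
    """Reconstruction loss over both masked and non-masked patches.

    `mask` is 1 for masked patches, 0 otherwise. Non-masked patches
    contribute with a smaller assumed weight alpha.
    """
    per_patch = ((pred - target) ** 2).mean(dim=-1)  # (batch, patches)
    masked = (per_patch * mask).sum() / mask.sum().clamp(min=1)
    visible = (per_patch * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1)
    return masked + alpha * visible
```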
2503.22524
Shuze Wang
Shuze Wang, Yunpeng Mei, Hongjie Cao, Yetian Yuan, Gang Wang, Jian Sun, Jie Chen
Robust Offline Imitation Learning Through State-level Trajectory Stitching
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imitation learning (IL) has proven effective for enabling robots to acquire visuomotor skills through expert demonstrations. However, traditional IL methods are limited by their reliance on high-quality, often scarce, expert data, and suffer from covariate shift. To address these challenges, recent advances in offline IL have incorporated suboptimal, unlabeled datasets into training. In this paper, we propose a novel approach to enhance policy learning from mixed-quality offline datasets by leveraging task-relevant trajectory fragments and rich environmental dynamics. Specifically, we introduce a state-based search framework that stitches state-action pairs from imperfect demonstrations, generating more diverse and informative training trajectories. Experimental results on standard IL benchmarks and real-world robotic tasks show that our proposed method significantly improves both generalization and performance.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 15:28:36 GMT" } ]
2025-03-31T00:00:00
[ [ "Wang", "Shuze", "" ], [ "Mei", "Yunpeng", "" ], [ "Cao", "Hongjie", "" ], [ "Yuan", "Yetian", "" ], [ "Wang", "Gang", "" ], [ "Sun", "Jian", "" ], [ "Chen", "Jie", "" ] ]
TITLE: Robust Offline Imitation Learning Through State-level Trajectory Stitching ABSTRACT: Imitation learning (IL) has proven effective for enabling robots to acquire visuomotor skills through expert demonstrations. However, traditional IL methods are limited by their reliance on high-quality, often scarce, expert data, and suffer from covariate shift. To address these challenges, recent advances in offline IL have incorporated suboptimal, unlabeled datasets into training. In this paper, we propose a novel approach to enhance policy learning from mixed-quality offline datasets by leveraging task-relevant trajectory fragments and rich environmental dynamics. Specifically, we introduce a state-based search framework that stitches state-action pairs from imperfect demonstrations, generating more diverse and informative training trajectories. Experimental results on standard IL benchmarks and real-world robotic tasks show that our proposed method significantly improves both generalization and performance.
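One simple way to realize state-level stitching, sketched below, is to pool states from all demonstrations and let a trajectory continue from the successor of any sufficiently similar state. The Euclidean metric, the threshold, and the data layout are illustrative assumptions, not the paper's exact search procedure.

```python
import numpy as np

def stitched_continuations(query_state, states, next_indices, threshold=0.1):
    """Return indices where a trajectory may continue from `query_state`.

    `states` (N, d) pools states from all (possibly imperfect)
    demonstrations; `next_indices[i]` is the index of state i's
    successor in its source trajectory, or -1 at a trajectory end.
    Any pooled state within `threshold` of the query offers its
    successor as a stitch point, yielding training trajectories more
    diverse than any single demonstration.
    """
    dists = np.linalg.norm(states - query_state, axis=1)
    candidates = np.where(dists < threshold)[0]
    return [next_indices[i] for i in candidates if next_indices[i] >= 0]
```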
2503.22526
Martin Ki\v{s}\v{s}
Martin Ki\v{s}\v{s} and Michal Hradi\v{s} and Martina Dvo\v{r}\'akov\'a and V\'aclav Jirou\v{s}ek and Filip Kersch
AnnoPage Dataset: Dataset of Non-Textual Elements in Documents with Fine-Grained Categorization
15 pages, 2 tables, 6 figures; Submitted to ICDAR25
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the AnnoPage Dataset, a novel collection of 7550 pages from historical documents, primarily in Czech and German, spanning from 1485 to the present, focusing on the late 19th and early 20th centuries. The dataset is designed to support research in document layout analysis and object detection. Each page is annotated with axis-aligned bounding boxes (AABB) representing elements of 25 categories of non-textual elements, such as images, maps, decorative elements, or charts, following the Czech Methodology of image document processing. The annotations were created by expert librarians to ensure accuracy and consistency. The dataset also incorporates pages from multiple, mainly historical, document datasets to enhance variability and maintain continuity. The dataset is divided into development and test subsets, with the test set carefully selected to maintain the category distribution. We provide baseline results using YOLO and DETR object detectors, offering a reference point for future research. The AnnoPage Dataset is publicly available on Zenodo (https://doi.org/10.5281/zenodo.12788419), along with ground-truth annotations in YOLO format.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 15:30:42 GMT" } ]
2025-03-31T00:00:00
[ [ "Kišš", "Martin", "" ], [ "Hradiš", "Michal", "" ], [ "Dvořáková", "Martina", "" ], [ "Jiroušek", "Václav", "" ], [ "Kersch", "Filip", "" ] ]
TITLE: AnnoPage Dataset: Dataset of Non-Textual Elements in Documents with Fine-Grained Categorization ABSTRACT: We introduce the AnnoPage Dataset, a novel collection of 7550 pages from historical documents, primarily in Czech and German, spanning from 1485 to the present, focusing on the late 19th and early 20th centuries. The dataset is designed to support research in document layout analysis and object detection. Each page is annotated with axis-aligned bounding boxes (AABB) representing elements of 25 categories of non-textual elements, such as images, maps, decorative elements, or charts, following the Czech Methodology of image document processing. The annotations were created by expert librarians to ensure accuracy and consistency. The dataset also incorporates pages from multiple, mainly historical, document datasets to enhance variability and maintain continuity. The dataset is divided into development and test subsets, with the test set carefully selected to maintain the category distribution. We provide baseline results using YOLO and DETR object detectors, offering a reference point for future research. The AnnoPage Dataset is publicly available on Zenodo (https://doi.org/10.5281/zenodo.12788419), along with ground-truth annotations in YOLO format.
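Since the ground-truth annotations for this dataset are distributed in YOLO format, a small parsing helper may be useful context. YOLO text files store one `class cx cy w h` line per box, with all coordinates normalized to the image size; the sketch below converts them to pixel-space axis-aligned bounding boxes.

```python
def read_yolo_annotations(path, img_width, img_height):
    """Parse a YOLO-format annotation file into pixel-space AABBs.

    Each line is: <class_id> <cx> <cy> <w> <h>, all normalized to [0, 1].
    Returns a list of (class_id, x_min, y_min, x_max, y_max) tuples.
    """
    boxes = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 5:
                continue  # skip malformed lines
            cls = int(parts[0])
            cx, cy, w, h = map(float, parts[1:])
            x_min = (cx - w / 2) * img_width
            y_min = (cy - h / 2) * img_height
            x_max = (cx + w / 2) * img_width
            y_max = (cy + h / 2) * img_height
            boxes.append((cls, x_min, y_min, x_max, y_max))
    return boxes
```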
2503.22531
Qisheng He
Qisheng He, Nicholas Summerfield, Peiyong Wang, Carri Glide-Hurst, Ming Dong
Deterministic Medical Image Translation via High-fidelity Brownian Bridges
null
null
null
null
eess.IV cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent studies have shown that diffusion models produce superior synthetic images when compared to Generative Adversarial Networks (GANs). However, their outputs are often non-deterministic and lack high fidelity to the ground truth due to the inherent randomness. In this paper, we propose a novel High-fidelity Brownian bridge model (HiFi-BBrg) for deterministic medical image translations. Our model comprises two distinct yet mutually beneficial mappings: a generation mapping and a reconstruction mapping. The Brownian bridge training process is guided by the fidelity loss and adversarial training in the reconstruction mapping. This ensures that translated images can be accurately reversed to their original forms, thereby achieving consistent translations with high fidelity to the ground truth. Our extensive experiments on multiple datasets show HiFi-BBrg outperforms state-of-the-art methods in multi-modal image translation and multi-image super-resolution.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 15:33:28 GMT" } ]
2025-03-31T00:00:00
[ [ "He", "Qisheng", "" ], [ "Summerfield", "Nicholas", "" ], [ "Wang", "Peiyong", "" ], [ "Glide-Hurst", "Carri", "" ], [ "Dong", "Ming", "" ] ]
TITLE: Deterministic Medical Image Translation via High-fidelity Brownian Bridges ABSTRACT: Recent studies have shown that diffusion models produce superior synthetic images when compared to Generative Adversarial Networks (GANs). However, their outputs are often non-deterministic and lack high fidelity to the ground truth due to the inherent randomness. In this paper, we propose a novel High-fidelity Brownian bridge model (HiFi-BBrg) for deterministic medical image translations. Our model comprises two distinct yet mutually beneficial mappings: a generation mapping and a reconstruction mapping. The Brownian bridge training process is guided by the fidelity loss and adversarial training in the reconstruction mapping. This ensures that translated images can be accurately reversed to their original forms, thereby achieving consistent translations with high fidelity to the ground truth. Our extensive experiments on multiple datasets show HiFi-BBrg outperforms state-of-the-art methods in multi-modal image translation and multi-image super-resolution.
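For background on the stochastic process named in this abstract: a standard Brownian bridge pinned at x_0 (time 0) and x_T (time T) has, at time t, a Gaussian marginal with mean (1 - t/T) x_0 + (t/T) x_T and variance t(T - t)/T. The sketch below samples such intermediate states; the paper's fidelity loss and reconstruction mapping are its own contributions and are not reproduced here.

```python
import numpy as np

def brownian_bridge_sample(x0, xT, t, T=1.0, rng=None):
    """Sample the Brownian bridge state at time t in (0, T).

    The bridge is pinned at x0 (t=0) and xT (t=T); its marginal at t
    is Gaussian with mean (1 - t/T) * x0 + (t/T) * xT and variance
    t * (T - t) / T (standard Brownian bridge with unit diffusion).
    """
    if rng is None:
        rng = np.random.default_rng()
    mean = (1 - t / T) * x0 + (t / T) * xT
    var = t * (T - t) / T
    return mean + np.sqrt(var) * rng.standard_normal(np.shape(x0))
```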
2503.22537
Remy Sabathier
Remy Sabathier, Niloy J. Mitra, David Novotny
LIM: Large Interpolator Model for Dynamic Reconstruction
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Reconstructing dynamic assets from video data is central to many computer vision and graphics tasks. Existing 4D reconstruction approaches are limited by category-specific models or slow optimization-based methods. Inspired by the recent Large Reconstruction Model (LRM), we present the Large Interpolation Model (LIM), a transformer-based feed-forward solution, guided by a novel causal consistency loss, for interpolating implicit 3D representations across time. Given implicit 3D representations at times $t_0$ and $t_1$, LIM produces a deformed shape at any continuous time $t\in[t_0,t_1]$, delivering high-quality interpolated frames in seconds. Furthermore, LIM allows explicit mesh tracking across time, producing a consistently uv-textured mesh sequence ready for integration into existing production pipelines. We also use LIM, in conjunction with a diffusion-based multiview generator, to produce dynamic 4D reconstructions from monocular videos. We evaluate LIM on various dynamic datasets, benchmarking against image-space interpolation methods (e.g., FiLM) and direct triplane linear interpolation, and demonstrate clear advantages. In summary, LIM is the first feed-forward model capable of high-speed tracked 4D asset reconstruction across diverse categories.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 15:36:53 GMT" } ]
2025-03-31T00:00:00
[ [ "Sabathier", "Remy", "" ], [ "Mitra", "Niloy J.", "" ], [ "Novotny", "David", "" ] ]
TITLE: LIM: Large Interpolator Model for Dynamic Reconstruction ABSTRACT: Reconstructing dynamic assets from video data is central to many computer vision and graphics tasks. Existing 4D reconstruction approaches are limited by category-specific models or slow optimization-based methods. Inspired by the recent Large Reconstruction Model (LRM), we present the Large Interpolation Model (LIM), a transformer-based feed-forward solution, guided by a novel causal consistency loss, for interpolating implicit 3D representations across time. Given implicit 3D representations at times $t_0$ and $t_1$, LIM produces a deformed shape at any continuous time $t\in[t_0,t_1]$, delivering high-quality interpolated frames in seconds. Furthermore, LIM allows explicit mesh tracking across time, producing a consistently uv-textured mesh sequence ready for integration into existing production pipelines. We also use LIM, in conjunction with a diffusion-based multiview generator, to produce dynamic 4D reconstructions from monocular videos. We evaluate LIM on various dynamic datasets, benchmarking against image-space interpolation methods (e.g., FiLM) and direct triplane linear interpolation, and demonstrate clear advantages. In summary, LIM is the first feed-forward model capable of high-speed tracked 4D asset reconstruction across diverse categories.
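The "direct triplane linear interpolation" baseline named in this abstract is simply a lerp over the implicit representation; a one-function sketch (assuming array-like representations) makes the comparison point concrete.

```python
def lerp_representation(rep_t0, rep_t1, t, t0, t1):
    """Direct linear interpolation baseline between two implicit
    (e.g., triplane) representations captured at times t0 and t1.

    rep(t) = (1 - s) * rep(t0) + s * rep(t1), with s = (t - t0)/(t1 - t0).
    """
    s = (t - t0) / (t1 - t0)
    return (1 - s) * rep_t0 + s * rep_t1
```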
2503.22539
Yijun Quan
Yijun Quan, Zushu Li, Giovanni Montana
Efficient Verified Machine Unlearning For Distillation
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Growing data privacy demands, driven by regulations like GDPR and CCPA, require machine unlearning methods capable of swiftly removing the influence of specific training points. Although verified approaches like SISA, using data slicing and checkpointing, achieve efficient unlearning for single models by reverting to intermediate states, these methods struggle in teacher-student knowledge distillation settings. Unlearning in the teacher typically forces costly, complete student retraining due to pervasive information propagation during distillation. Our primary contribution is PURGE (Partitioned Unlearning with Retraining Guarantee for Ensembles), a novel framework integrating verified unlearning with distillation. We introduce constituent mapping and an incremental multi-teacher strategy that partitions the distillation process, confines each teacher constituent's impact to distinct student data subsets, and crucially maintains data isolation. The PURGE framework substantially reduces retraining overhead, requiring only partial student updates when teacher-side unlearning occurs. We provide both theoretical analysis, quantifying significant speed-ups in the unlearning process, and empirical validation on multiple datasets, demonstrating that PURGE achieves these efficiency gains while maintaining student accuracy comparable to standard baselines.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 15:38:07 GMT" } ]
2025-03-31T00:00:00
[ [ "Quan", "Yijun", "" ], [ "Li", "Zushu", "" ], [ "Montana", "Giovanni", "" ] ]
TITLE: Efficient Verified Machine Unlearning For Distillation ABSTRACT: Growing data privacy demands, driven by regulations like GDPR and CCPA, require machine unlearning methods capable of swiftly removing the influence of specific training points. Although verified approaches like SISA, using data slicing and checkpointing, achieve efficient unlearning for single models by reverting to intermediate states, these methods struggle in teacher-student knowledge distillation settings. Unlearning in the teacher typically forces costly, complete student retraining due to pervasive information propagation during distillation. Our primary contribution is PURGE (Partitioned Unlearning with Retraining Guarantee for Ensembles), a novel framework integrating verified unlearning with distillation. We introduce constituent mapping and an incremental multi-teacher strategy that partitions the distillation process, confines each teacher constituent's impact to distinct student data subsets, and crucially maintains data isolation. The PURGE framework substantially reduces retraining overhead, requiring only partial student updates when teacher-side unlearning occurs. We provide both theoretical analysis, quantifying significant speed-ups in the unlearning process, and empirical validation on multiple datasets, demonstrating that PURGE achieves these efficiency gains while maintaining student accuracy comparable to standard baselines.
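The constituent-mapping idea described in this abstract amounts, loosely, to bookkeeping: each teacher constituent distills only into a fixed slice of the student's data, so unlearning inside one constituent invalidates only the mapped slices. The even split and the data structures below are illustrative assumptions, not the PURGE algorithm itself.

```python
def build_constituent_map(num_teachers, student_indices):
    """Partition the student's training data across teacher constituents.

    Each teacher constituent distills only into its own slice, so
    teacher-side unlearning requires retraining only the mapped
    student slice. The even split here is an assumed simplification.
    """
    slices = {}
    per_teacher = len(student_indices) // num_teachers
    for t in range(num_teachers):
        start = t * per_teacher
        end = None if t == num_teachers - 1 else (t + 1) * per_teacher
        slices[t] = student_indices[start:end]
    return slices

def student_slices_to_retrain(constituent_map, affected_teachers):
    """Return only the student data indices touched by unlearning."""
    return [idx for t in affected_teachers for idx in constituent_map[t]]
```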
2503.22541
Haicheng Liao
Haicheng Liao, Hanlin Kong, Bin Rao, Bonan Wang, Chengyue Wang, Guyang Yu, Yuming Huang, Ruru Tang, Chengzhong Xu, and Zhenning Li
SafeCast: Risk-Responsive Motion Forecasting for Autonomous Vehicles
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate motion forecasting is essential for the safety and reliability of autonomous driving (AD) systems. While existing methods have made significant progress, they often overlook explicit safety constraints and struggle to capture the complex interactions among traffic agents, environmental factors, and motion dynamics. To address these challenges, we present SafeCast, a risk-responsive motion forecasting model that integrates safety-aware decision-making with uncertainty-aware adaptability. SafeCast is the first to incorporate the Responsibility-Sensitive Safety (RSS) framework into motion forecasting, encoding interpretable safety rules--such as safe distances and collision avoidance--based on traffic norms and physical principles. To further enhance robustness, we introduce the Graph Uncertainty Feature (GUF), a graph-based module that injects learnable noise into Graph Attention Networks, capturing real-world uncertainties and enhancing generalization across diverse scenarios. We evaluate SafeCast on four real-world benchmark datasets--Next Generation Simulation (NGSIM), Highway Drone (HighD), ApolloScape, and the Macao Connected Autonomous Driving (MoCAD)--covering highway, urban, and mixed-autonomy traffic environments. Our model achieves state-of-the-art (SOTA) accuracy while maintaining a lightweight architecture and low inference latency, underscoring its potential for real-time deployment in safety-critical AD systems.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 15:38:21 GMT" } ]
2025-03-31T00:00:00
[ [ "Liao", "Haicheng", "" ], [ "Kong", "Hanlin", "" ], [ "Rao", "Bin", "" ], [ "Wang", "Bonan", "" ], [ "Wang", "Chengyue", "" ], [ "Yu", "Guyang", "" ], [ "Huang", "Yuming", "" ], [ "Tang", "Ruru", "" ], [ "Xu", "Chengzhong", "" ], [ "Li", "Zhenning", "" ] ]
TITLE: SafeCast: Risk-Responsive Motion Forecasting for Autonomous Vehicles ABSTRACT: Accurate motion forecasting is essential for the safety and reliability of autonomous driving (AD) systems. While existing methods have made significant progress, they often overlook explicit safety constraints and struggle to capture the complex interactions among traffic agents, environmental factors, and motion dynamics. To address these challenges, we present SafeCast, a risk-responsive motion forecasting model that integrates safety-aware decision-making with uncertainty-aware adaptability. SafeCast is the first to incorporate the Responsibility-Sensitive Safety (RSS) framework into motion forecasting, encoding interpretable safety rules--such as safe distances and collision avoidance--based on traffic norms and physical principles. To further enhance robustness, we introduce the Graph Uncertainty Feature (GUF), a graph-based module that injects learnable noise into Graph Attention Networks, capturing real-world uncertainties and enhancing generalization across diverse scenarios. We evaluate SafeCast on four real-world benchmark datasets--Next Generation Simulation (NGSIM), Highway Drone (HighD), ApolloScape, and the Macao Connected Autonomous Driving (MoCAD)--covering highway, urban, and mixed-autonomy traffic environments. Our model achieves state-of-the-art (SOTA) accuracy while maintaining a lightweight architecture and low inference latency, underscoring its potential for real-time deployment in safety-critical AD systems.
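For reference, the RSS framework cited above defines a longitudinal safe distance between a rear vehicle at speed v_r and a front vehicle at speed v_f as d_min = [v_r*rho + 0.5*a_max*rho^2 + (v_r + rho*a_max)^2 / (2*b_min) - v_f^2 / (2*b_max)] clipped at zero. A direct transcription follows; the default response time and acceleration bounds are illustrative assumptions, not the paper's settings.

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=0.5,
                                   a_max_accel=3.0, b_min_brake=4.0,
                                   b_max_brake=8.0):
    """Minimum safe following distance from the RSS framework.

    v_rear, v_front: speeds (m/s) of the rear and front vehicles.
    rho: response time (s); a_max_accel: max acceleration of the rear
    vehicle during rho; b_min_brake: minimum braking of the rear
    vehicle; b_max_brake: maximum braking of the front vehicle (m/s^2).
    Default parameter values are assumptions for illustration.
    """
    v_after_response = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_after_response ** 2 / (2 * b_min_brake)
         - v_front ** 2 / (2 * b_max_brake))
    return max(d, 0.0)

# Example: rear car at 20 m/s behind a front car at 15 m/s.
print(f"{rss_safe_longitudinal_distance(20.0, 15.0):.1f} m")
```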
2503.22557
Zhendi Gong
Zhendi Gong, Susan Francis, Eleanor Cox, Stamatios N. Sotiropoulos, Dorothee P. Auer, Guoping Qiu, Andrew P. French, Xin Chen
MO-CTranS: A unified multi-organ segmentation model learning from multiple heterogeneously labelled datasets
Accepted by International Symposium on Biomedical Imaging (ISIB) 2025 as an oral presentation
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Multi-organ segmentation holds paramount significance in many clinical tasks. In practice, compared to large fully annotated datasets, multiple small datasets are often more accessible and organs are not labelled consistently. Normally, an individual model is trained for each of these datasets, which is not an effective use of data for model learning. It remains challenging to train a single model that can robustly learn from several partially labelled datasets due to label conflict and data imbalance problems. We propose MO-CTranS: a single model that can overcome such problems. MO-CTranS contains a CNN-based encoder and a Transformer-based decoder, which are connected in a multi-resolution manner. Task-specific tokens are introduced in the decoder to help differentiate label discrepancies. Our method was evaluated and compared to several baseline models and state-of-the-art (SOTA) solutions on abdominal MRI datasets that were acquired in different views (i.e. axial and coronal) and annotated for different organs (i.e. liver, kidney, spleen). Our method achieved better performance (most improvements were statistically significant) than the compared methods. GitHub link: https://github.com/naisops/MO-CTranS.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 16:00:59 GMT" } ]
2025-03-31T00:00:00
[ [ "Gong", "Zhendi", "" ], [ "Francis", "Susan", "" ], [ "Cox", "Eleanor", "" ], [ "Sotiropoulos", "Stamatios N.", "" ], [ "Auer", "Dorothee P.", "" ], [ "Qiu", "Guoping", "" ], [ "French", "Andrew P.", "" ], [ "Chen", "Xin", "" ] ]
TITLE: MO-CTranS: A unified multi-organ segmentation model learning from multiple heterogeneously labelled datasets ABSTRACT: Multi-organ segmentation holds paramount significance in many clinical tasks. In practice, compared to large fully annotated datasets, multiple small datasets are often more accessible and organs are not labelled consistently. Normally, an individual model is trained for each of these datasets, which is not an effective use of data for model learning. It remains challenging to train a single model that can robustly learn from several partially labelled datasets due to label conflict and data imbalance problems. We propose MO-CTranS: a single model that can overcome such problems. MO-CTranS contains a CNN-based encoder and a Transformer-based decoder, which are connected in a multi-resolution manner. Task-specific tokens are introduced in the decoder to help differentiate label discrepancies. Our method was evaluated and compared to several baseline models and state-of-the-art (SOTA) solutions on abdominal MRI datasets that were acquired in different views (i.e. axial and coronal) and annotated for different organs (i.e. liver, kidney, spleen). Our method achieved better performance (most improvements were statistically significant) than the compared methods. GitHub link: https://github.com/naisops/MO-CTranS.
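The task-specific tokens mentioned in this abstract can be pictured as learnable embeddings, one per source dataset or annotation protocol, prepended to the decoder input so a single decoder can disambiguate conflicting label spaces. The sketch below is a generic PyTorch rendering of that idea under assumed dimensions, not the MO-CTranS architecture.

```python
import torch

class TaskTokenDecoderInput(torch.nn.Module):
    """Prepend a learnable task-specific token to decoder inputs.

    One token per source dataset / annotation protocol lets a shared
    decoder condition its predictions on which label space applies.
    Shapes and placement are illustrative assumptions.
    """
    def __init__(self, num_tasks: int, dim: int):
        super().__init__()
        self.task_tokens = torch.nn.Parameter(torch.randn(num_tasks, dim))

    def forward(self, features: torch.Tensor, task_id: int) -> torch.Tensor:
        # features: (batch, seq, dim); prepend the chosen task token.
        batch = features.size(0)
        tok = self.task_tokens[task_id].expand(batch, 1, -1)
        return torch.cat([tok, features], dim=1)
```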
2503.22563
Andrea Sebastiani
Pasquale Cascarano, Lorenzo Stacchio, Andrea Sebastiani, Alessandro Benfenati, Ulugbek S. Kamilov, Gustavo Marfia
RELD: Regularization by Latent Diffusion Models for Image Restoration
null
null
null
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, Diffusion Models have become the new state-of-the-art in deep generative modeling, ending the long-time dominance of Generative Adversarial Networks. Inspired by the Regularization by Denoising principle, we introduce an approach that integrates a Latent Diffusion Model, trained for the denoising task, into a variational framework using Half-Quadratic Splitting, exploiting its regularization properties. This approach, under appropriate conditions that can be easily met in various imaging applications, allows for reduced computational cost while achieving high-quality results. The proposed strategy, called Regularization by Latent Denoising (RELD), is then tested on a dataset of natural images, for image denoising, deblurring, and super-resolution tasks. The numerical experiments show that RELD is competitive with other state-of-the-art methods, particularly achieving remarkable results when evaluated using perceptual quality metrics.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 16:04:21 GMT" } ]
2025-03-31T00:00:00
[ [ "Cascarano", "Pasquale", "" ], [ "Stacchio", "Lorenzo", "" ], [ "Sebastiani", "Andrea", "" ], [ "Benfenati", "Alessandro", "" ], [ "Kamilov", "Ulugbek S.", "" ], [ "Marfia", "Gustavo", "" ] ]
TITLE: RELD: Regularization by Latent Diffusion Models for Image Restoration ABSTRACT: In recent years, Diffusion Models have become the new state-of-the-art in deep generative modeling, ending the long-time dominance of Generative Adversarial Networks. Inspired by the Regularization by Denoising principle, we introduce an approach that integrates a Latent Diffusion Model, trained for the denoising task, into a variational framework using Half-Quadratic Splitting, exploiting its regularization properties. This approach, under appropriate conditions that can be easily met in various imaging applications, allows for reduced computational cost while achieving high-quality results. The proposed strategy, called Regularization by Latent Denoising (RELD), is then tested on a dataset of natural images, for image denoising, deblurring, and super-resolution tasks. The numerical experiments show that RELD is competitive with other state-of-the-art methods, particularly achieving remarkable results when evaluated using perceptual quality metrics.
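The Half-Quadratic Splitting scheme underlying this kind of regularization-by-denoising approach alternates a data-fidelity step with a denoising step. Below is a generic plug-and-play sketch in which `denoiser` stands in for the latent diffusion denoiser; all hyperparameters and the gradient-descent x-step are assumptions for illustration, not RELD's exact algorithm.

```python
import numpy as np

def hqs_restore(y, forward, adjoint, denoiser, mu=0.5,
                num_iters=30, lr=0.1, inner_steps=10):
    """Generic plug-and-play Half-Quadratic Splitting loop.

    Targets min_x 0.5*||A x - y||^2 + lambda*R(x) by alternating:
      x-step: descend 0.5*||A x - y||^2 + (mu/2)*||x - z||^2
      z-step: z = denoiser(x), playing the role of the regularizer.
    `forward`/`adjoint` implement A and A^T; `denoiser` is any image
    denoiser (a latent diffusion model in RELD). All hyperparameters
    here are illustrative assumptions.
    """
    x = adjoint(y)        # simple initialization from the measurements
    z = x.copy()
    for _ in range(num_iters):
        for _ in range(inner_steps):  # inexact x-step by gradient descent
            grad = adjoint(forward(x) - y) + mu * (x - z)
            x = x - lr * grad
        z = denoiser(x)   # proximal step replaced by a learned denoiser
    return x
```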
2503.22569
Barbara Alexandra Hoffmann
Barbara Hoffmann, Ruben Mayer
Comparing Methods for Bias Mitigation in Graph Neural Networks
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper examines the critical role of Graph Neural Networks (GNNs) in data preparation for generative artificial intelligence (GenAI) systems, with a particular focus on addressing and mitigating biases. We present a comparative analysis of three distinct methods for bias mitigation: data sparsification, feature modification, and synthetic data augmentation. Through experimental analysis using the German Credit dataset, we evaluate these approaches using multiple fairness metrics, including statistical parity, equality of opportunity, and false positive rates. Our research demonstrates that while all methods improve fairness metrics compared to the original dataset, stratified sampling and synthetic data augmentation using GraphSAGE prove particularly effective in balancing demographic representation while maintaining model performance. The results provide practical insights for developing more equitable AI systems.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 16:18:48 GMT" } ]
2025-03-31T00:00:00
[ [ "Hoffmann", "Barbara", "" ], [ "Mayer", "Ruben", "" ] ]
TITLE: Comparing Methods for Bias Mitigation in Graph Neural Networks ABSTRACT: This paper examines the critical role of Graph Neural Networks (GNNs) in data preparation for generative artificial intelligence (GenAI) systems, with a particular focus on addressing and mitigating biases. We present a comparative analysis of three distinct methods for bias mitigation: data sparsification, feature modification, and synthetic data augmentation. Through experimental analysis using the German Credit dataset, we evaluate these approaches using multiple fairness metrics, including statistical parity, equality of opportunity, and false positive rates. Our research demonstrates that while all methods improve fairness metrics compared to the original dataset, stratified sampling and synthetic data augmentation using GraphSAGE prove particularly effective in balancing demographic representation while maintaining model performance. The results provide practical insights for developing more equitable AI systems.
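The fairness metrics named in this abstract have direct formulas: statistical parity compares positive prediction rates across groups, equality of opportunity compares true positive rates, and the false-positive-rate gap is analogous. A minimal NumPy rendering, assuming binary labels, binary predictions, and a binary group attribute:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(y_hat = 1 | group = 1) - P(y_hat = 1 | group = 0)."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

def false_positive_rate_difference(y_true, y_pred, group):
    """Difference in false positive rates between the two groups."""
    fpr = lambda g: y_pred[(group == g) & (y_true == 0)].mean()
    return fpr(1) - fpr(0)
```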
2503.22582
Sarubi Thillainathan
Sarubi Thillainathan, Songchen Yuan, En-Shiun Annie Lee, Sanath Jayasena, Surangika Ranathunga
Beyond Vanilla Fine-Tuning: Leveraging Multistage, Multilingual, and Domain-Specific Methods for Low-Resource Machine Translation
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Fine-tuning multilingual sequence-to-sequence large language models (msLLMs) has shown promise in developing neural machine translation (NMT) systems for low-resource languages (LRLs). However, conventional single-stage fine-tuning methods struggle in extremely low-resource NMT settings, where training data is very limited. This paper contributes to artificial intelligence by proposing two approaches for adapting msLLMs in these challenging scenarios: (1) continual pre-training (CPT), where the msLLM is further trained with domain-specific monolingual data to compensate for the under-representation of LRLs, and (2) intermediate task transfer learning (ITTL), a method that fine-tunes the msLLM with both in-domain and out-of-domain parallel data to enhance its translation capabilities across various domains and tasks. As an application in engineering, these methods are implemented in NMT systems for Sinhala, Tamil, and English (six language pairs) in domain-specific, extremely low-resource settings (datasets containing fewer than 100,000 samples). Our experiments reveal that these approaches enhance translation performance by an average of +1.47 bilingual evaluation understudy (BLEU) points compared to the standard single-stage fine-tuning baseline across all translation directions. Additionally, a multi-model ensemble further improves performance by an additional BLEU point.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 16:30:28 GMT" } ]
2025-03-31T00:00:00
[ [ "Thillainathan", "Sarubi", "" ], [ "Yuan", "Songchen", "" ], [ "Lee", "En-Shiun Annie", "" ], [ "Jayasena", "Sanath", "" ], [ "Ranathunga", "Surangika", "" ] ]
TITLE: Beyond Vanilla Fine-Tuning: Leveraging Multistage, Multilingual, and Domain-Specific Methods for Low-Resource Machine Translation ABSTRACT: Fine-tuning multilingual sequence-to-sequence large language models (msLLMs) has shown promise in developing neural machine translation (NMT) systems for low-resource languages (LRLs). However, conventional single-stage fine-tuning methods struggle in extremely low-resource NMT settings, where training data is very limited. This paper contributes to artificial intelligence by proposing two approaches for adapting msLLMs in these challenging scenarios: (1) continual pre-training (CPT), where the msLLM is further trained with domain-specific monolingual data to compensate for the under-representation of LRLs, and (2) intermediate task transfer learning (ITTL), a method that fine-tunes the msLLM with both in-domain and out-of-domain parallel data to enhance its translation capabilities across various domains and tasks. As an application in engineering, these methods are implemented in NMT systems for Sinhala, Tamil, and English (six language pairs) in domain-specific, extremely low-resource settings (datasets containing fewer than 100,000 samples). Our experiments reveal that these approaches enhance translation performance by an average of +1.47 bilingual evaluation understudy (BLEU) points compared to the standard single-stage fine-tuning baseline across all translation directions. Additionally, a multi-model ensemble further improves performance by an additional BLEU point.
2503.22585
Laura Manrique-G\'omez
Kevin Cohen, Laura Manrique-G\'omez, Rub\'en Manrique
Historical Ink: Exploring Large Language Models for Irony Detection in 19th-Century Spanish
null
null
null
null
cs.CL cs.AI cs.DL
http://creativecommons.org/licenses/by/4.0/
This study explores the use of large language models (LLMs) to enhance datasets and improve irony detection in 19th-century Latin American newspapers. Two strategies were employed to evaluate the efficacy of BERT and GPT-4o models in capturing the subtle, nuanced nature of irony, through both multi-class and binary classification tasks. First, we implemented dataset enhancements focused on enriching emotional and contextual cues; however, these showed limited impact on historical language analysis. The second strategy, a semi-automated annotation process, effectively addressed class imbalance and augmented the dataset with high-quality annotations. Despite the challenges posed by the complexity of irony, this work contributes to the advancement of sentiment analysis through two key contributions: introducing a new historical Spanish dataset tagged for sentiment analysis and irony detection, and proposing a semi-automated annotation methodology where human expertise is crucial for refining LLM results, enriched by incorporating historical and cultural contexts as core features.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 16:33:24 GMT" } ]
2025-03-31T00:00:00
[ [ "Cohen", "Kevin", "" ], [ "Manrique-Gómez", "Laura", "" ], [ "Manrique", "Rubén", "" ] ]
TITLE: Historical Ink: Exploring Large Language Models for Irony Detection in 19th-Century Spanish ABSTRACT: This study explores the use of large language models (LLMs) to enhance datasets and improve irony detection in 19th-century Latin American newspapers. Two strategies were employed to evaluate the efficacy of BERT and GPT-4o models in capturing the subtle, nuanced nature of irony, through both multi-class and binary classification tasks. First, we implemented dataset enhancements focused on enriching emotional and contextual cues; however, these showed limited impact on historical language analysis. The second strategy, a semi-automated annotation process, effectively addressed class imbalance and augmented the dataset with high-quality annotations. Despite the challenges posed by the complexity of irony, this work contributes to the advancement of sentiment analysis through two key contributions: introducing a new historical Spanish dataset tagged for sentiment analysis and irony detection, and proposing a semi-automated annotation methodology where human expertise is crucial for refining LLM results, enriched by incorporating historical and cultural contexts as core features.
2503.22589
Bryce Dietrich
Adam Breuer, Bryce J. Dietrich, Michael H. Crespin, Matthew Butler, J.A. Pyrse, and Kosuke Imai
Using AI to Summarize US Presidential Campaign TV Advertisement Videos, 1952-2012
17 pages, 7 tables, 4 figures, and linked datasets
null
null
null
cs.MM cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the largest and most comprehensive dataset of US presidential campaign television advertisements, available in digital format. The dataset also includes machine-searchable transcripts and high-quality summaries designed to facilitate a variety of academic research. To date, there has been great interest in collecting and analyzing US presidential campaign advertisements, but the need for manual procurement and annotation led many to rely on smaller subsets. We design a large-scale parallelized, AI-based analysis pipeline that automates the laborious process of preparing, transcribing, and summarizing videos. We then apply this methodology to the 9,707 presidential ads from the Julian P. Kanter Political Commercial Archive. We conduct extensive human evaluations to show that these transcripts and summaries match the quality of manually generated alternatives. We illustrate the value of this data by including an application that tracks the genesis and evolution of current focal issue areas over seven decades of presidential elections. Our analysis pipeline and codebase also show how to use LLM-based tools to obtain high-quality summaries for other video datasets.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 16:36:23 GMT" } ]
2025-03-31T00:00:00
[ [ "Breuer", "Adam", "" ], [ "Dietrich", "Bryce J.", "" ], [ "Crespin", "Michael H.", "" ], [ "Butler", "Matthew", "" ], [ "Pyrse", "J. A.", "" ], [ "Imai", "Kosuke", "" ] ]
TITLE: Using AI to Summarize US Presidential Campaign TV Advertisement Videos, 1952-2012 ABSTRACT: This paper introduces the largest and most comprehensive dataset of US presidential campaign television advertisements, available in digital format. The dataset also includes machine-searchable transcripts and high-quality summaries designed to facilitate a variety of academic research. To date, there has been great interest in collecting and analyzing US presidential campaign advertisements, but the need for manual procurement and annotation led many to rely on smaller subsets. We design a large-scale parallelized, AI-based analysis pipeline that automates the laborious process of preparing, transcribing, and summarizing videos. We then apply this methodology to the 9,707 presidential ads from the Julian P. Kanter Political Commercial Archive. We conduct extensive human evaluations to show that these transcripts and summaries match the quality of manually generated alternatives. We illustrate the value of this data by including an application that tracks the genesis and evolution of current focal issue areas over seven decades of presidential elections. Our analysis pipeline and codebase also show how to use LLM-based tools to obtain high-quality summaries for other video datasets.
2503.22592
Thomas Boucher
Thomas Boucher, Nicholas Tetlow, Annie Fung, Amy Dewar, Pietro Arina, Sven Kerneis, John Whittle, Evangelos B. Mazomenos
KEVS: Enhancing Segmentation of Visceral Adipose Tissue in Pre-Cystectomy CT with Gaussian Kernel Density Estimation
Preprint for submission to IPCAI special edition of IJCARS 2025, version prior to any peer review
null
null
null
eess.IV cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Purpose: The distribution of visceral adipose tissue (VAT) in cystectomy patients is indicative of the incidence of post-operative complications. Existing VAT segmentation methods for computed tomography (CT) employing intensity thresholding have limitations relating to inter-observer variability. Moreover, the difficulty in creating ground-truth masks limits the development of deep learning (DL) models for this task. This paper introduces a novel method for VAT prediction in pre-cystectomy CT, which is fully automated and does not require ground-truth VAT masks for training, overcoming the aforementioned limitations. Methods: We introduce the Kernel density Enhanced VAT Segmentator (KEVS), combining a DL semantic segmentation model for multi-body feature prediction with Gaussian kernel density estimation analysis of predicted subcutaneous adipose tissue to achieve accurate scan-specific predictions of VAT in the abdominal cavity. Uniquely for a DL pipeline, KEVS does not require ground-truth VAT masks. Results: We verify the ability of KEVS to accurately segment abdominal organs in unseen CT data and compare KEVS VAT segmentation predictions to existing state-of-the-art (SOTA) approaches in a dataset of 20 pre-cystectomy CT scans, collected from University College London Hospital (UCLH-Cyst), with expert ground-truth annotations. KEVS presents a 4.80% and 6.02% improvement in Dice Coefficient over the second-best DL and thresholding-based VAT segmentation techniques, respectively, when evaluated on UCLH-Cyst. Conclusion: This research introduces KEVS, an automated, SOTA method for the prediction of VAT in pre-cystectomy CT which eliminates inter-observer variability and is trained entirely on open-source CT datasets which do not contain ground-truth VAT masks.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 16:41:09 GMT" } ]
2025-03-31T00:00:00
[ [ "Boucher", "Thomas", "" ], [ "Tetlow", "Nicholas", "" ], [ "Fung", "Annie", "" ], [ "Dewar", "Amy", "" ], [ "Arina", "Pietro", "" ], [ "Kerneis", "Sven", "" ], [ "Whittle", "John", "" ], [ "Mazomenos", "Evangelos B.", "" ] ]
TITLE: KEVS: Enhancing Segmentation of Visceral Adipose Tissue in Pre-Cystectomy CT with Gaussian Kernel Density Estimation ABSTRACT: Purpose: The distribution of visceral adipose tissue (VAT) in cystectomy patients is indicative of the incidence of post-operative complications. Existing VAT segmentation methods for computed tomography (CT) employing intensity thresholding have limitations relating to inter-observer variability. Moreover, the difficulty in creating ground-truth masks limits the development of deep learning (DL) models for this task. This paper introduces a novel method for VAT prediction in pre-cystectomy CT, which is fully automated and does not require ground-truth VAT masks for training, overcoming the aforementioned limitations. Methods: We introduce the Kernel density Enhanced VAT Segmentator (KEVS), combining a DL semantic segmentation model for multi-body feature prediction with Gaussian kernel density estimation analysis of predicted subcutaneous adipose tissue to achieve accurate scan-specific predictions of VAT in the abdominal cavity. Uniquely for a DL pipeline, KEVS does not require ground-truth VAT masks. Results: We verify the ability of KEVS to accurately segment abdominal organs in unseen CT data and compare KEVS VAT segmentation predictions to existing state-of-the-art (SOTA) approaches in a dataset of 20 pre-cystectomy CT scans, collected from University College London Hospital (UCLH-Cyst), with expert ground-truth annotations. KEVS presents a 4.80% and 6.02% improvement in Dice Coefficient over the second-best DL and thresholding-based VAT segmentation techniques, respectively, when evaluated on UCLH-Cyst. Conclusion: This research introduces KEVS, an automated, SOTA method for the prediction of VAT in pre-cystectomy CT which eliminates inter-observer variability and is trained entirely on open-source CT datasets which do not contain ground-truth VAT masks.
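The kernel density estimation step can be illustrated with SciPy: fit a Gaussian KDE to the intensities of voxels predicted as subcutaneous adipose tissue, then derive a scan-specific adipose intensity window from that density. The mode-centred, highest-density-region rule below is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

def scan_specific_fat_window(sat_intensities, coverage=0.95, grid_size=2048):
    """Derive a scan-specific adipose intensity window via Gaussian KDE.

    Fits a KDE to intensities (e.g., Hounsfield units) of voxels the
    segmentation model labelled as subcutaneous adipose tissue, then
    returns the grid interval of highest density containing `coverage`
    of the estimated probability mass.
    """
    kde = gaussian_kde(sat_intensities)
    grid = np.linspace(sat_intensities.min(), sat_intensities.max(), grid_size)
    density = kde(grid)
    order = np.argsort(density)[::-1]           # highest-density points first
    mass = density[order] / density.sum()
    n_keep = int(np.searchsorted(np.cumsum(mass), coverage)) + 1
    keep = order[:n_keep]                       # approximate HDR on the grid
    return grid[keep].min(), grid[keep].max()
```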
2503.22594
Philipp Schaer
Dirk Tunger and Philipp Schaer
On the Alignment of Post-Publication Reviews & Bibliometric and Altmetric Impact -- A Case Study on Expert Statements from the Science Media Center Germany
Accepted at The First Workshop on Scholarly Information Access (SCOLIA)
null
null
null
cs.DL
http://creativecommons.org/licenses/by-sa/4.0/
In the context of academic publishing and peer review, this study investigates the relationship between post-publication expert evaluations, their agreement levels, and the subsequent scientific and public recognition of the reviewed research. Using expert statements from the Science Media Center Germany as a dataset, we analyze Research in Context reviews to examine the alignment between qualitative post-publication assessments and bibliometric as well as altmetric indicators. We employ a Large Language Model to translate unstructured expert reviews into a structured rating scheme. Furthermore, we correlate these evaluations with citation counts from the Web of Science and alternative impact metrics such as the Altmetric Attention Score, news mentions, and Mendeley readership statistics from the Altmetric Explorer. We investigate the alignment of positive or critical post-publication reviews and high or low citation or altmetric counts.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 16:41:41 GMT" } ]
2025-03-31T00:00:00
[ [ "Tunger", "Dirk", "" ], [ "Schaer", "Philipp", "" ] ]
TITLE: On the Alignment of Post-Publication Reviews & Bibliometric and Altmetric Impact -- A Case Study on Expert Statements from the Science Media Center Germany ABSTRACT: In the context of academic publishing and peer review, this study investigates the relationship between post-publication expert evaluations, their agreement levels, and the subsequent scientific and public recognition of the reviewed research. Using expert statements from the Science Media Center Germany as a dataset, we analyze Research in Context reviews to examine the alignment between qualitative post-publication assessments and bibliometric as well as altmetric indicators. We employ a Large Language Model to translate unstructured expert reviews into a structured rating scheme. Furthermore, we correlate these evaluations with citation counts from the Web of Science and alternative impact metrics such as the Altmetric Attention Score, news mentions, and Mendeley readership statistics from the Altmetric Explorer. We investigate the alignment of positive or critical post-publication reviews and high or low citation or altmetric counts.
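The alignment analysis described in this abstract reduces, at its core, to correlating LLM-derived ratings with impact counts; a rank correlation is a natural fit since citation counts are heavy-tailed. The sketch below uses hypothetical data purely to show the shape of the computation.

```python
from scipy.stats import spearmanr

# Hypothetical parallel lists: LLM-derived structured review ratings
# (e.g., on a 1-5 scale) and citation counts for the same papers.
ratings = [4, 2, 5, 3, 1, 4, 2]
citations = [120, 15, 300, 40, 5, 90, 22]

rho, p_value = spearmanr(ratings, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```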
2503.22595
Steven McClendon
S. Aaron McClendon, Vishaal Venkatesh, Juan Morinelli
Reinforcement Learning for Machine Learning Model Deployment: Evaluating Multi-Armed Bandits in ML Ops Environments
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
In modern ML Ops environments, model deployment is a critical process that traditionally relies on static heuristics such as validation error comparisons and A/B testing. However, these methods require human intervention to adapt to real-world deployment challenges, such as model drift or unexpected performance degradation. We investigate whether reinforcement learning, specifically multi-armed bandit (MAB) algorithms, can dynamically manage model deployment decisions more effectively. Our approach enables more adaptive production environments by continuously evaluating deployed models and rolling back underperforming ones in real time. We test six model selection strategies across two real-world datasets and find that RL-based approaches match or exceed traditional methods in performance. Our findings suggest that reinforcement learning (RL)-based model management can improve automation, reduce reliance on manual interventions, and mitigate risks associated with post-deployment model failures.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 16:42:21 GMT" } ]
2025-03-31T00:00:00
[ [ "McClendon", "S. Aaron", "" ], [ "Venkatesh", "Vishaal", "" ], [ "Morinelli", "Juan", "" ] ]
TITLE: Reinforcement Learning for Machine Learning Model Deployment: Evaluating Multi-Armed Bandits in ML Ops Environments ABSTRACT: In modern ML Ops environments, model deployment is a critical process that traditionally relies on static heuristics such as validation error comparisons and A/B testing. However, these methods require human intervention to adapt to real-world deployment challenges, such as model drift or unexpected performance degradation. We investigate whether reinforcement learning, specifically multi-armed bandit (MAB) algorithms, can dynamically manage model deployment decisions more effectively. Our approach enables more adaptive production environments by continuously evaluating deployed models and rolling back underperforming ones in real time. We test six model selection strategies across two real-world datasets and find that RL-based approaches match or exceed traditional methods in performance. Our findings suggest that reinforcement learning (RL)-based model management can improve automation, reduce reliance on manual interventions, and mitigate risks associated with post-deployment model failures.
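As a concrete instance of the MAB framing, each deployed model version can be treated as an arm, with traffic routed by Thompson sampling over Bernoulli rewards (e.g., a correct prediction or a satisfied request). The sketch below is a generic textbook implementation, not the paper's exact strategy.

```python
import random

class ThompsonSamplingRouter:
    """Route requests among deployed model versions (arms).

    Each arm keeps a Beta(successes + 1, failures + 1) posterior over
    its Bernoulli reward rate; an arm whose posterior collapses can be
    rolled back. The reward definition is deployment-specific.
    """
    def __init__(self, num_models):
        self.successes = [0] * num_models
        self.failures = [0] * num_models

    def choose(self):
        samples = [random.betavariate(s + 1, f + 1)
                   for s, f in zip(self.successes, self.failures)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        if reward:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1

# Toy usage with two models of (assumed) true quality 0.9 and 0.7.
router = ThompsonSamplingRouter(num_models=2)
true_rates = [0.9, 0.7]
for _ in range(1000):
    arm = router.choose()
    router.update(arm, random.random() < true_rates[arm])
print(router.successes, router.failures)  # traffic concentrates on model 0
```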
2503.22629
Stefano Grassi
Stefano Grassi
Sentiment Classification of Thai Central Bank Press Releases Using Supervised Learning
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Central bank communication plays a critical role in shaping economic expectations and monetary policy effectiveness. This study applies supervised machine learning techniques to classify the sentiment of press releases from the Bank of Thailand, addressing gaps in research that primarily focus on lexicon-based approaches. My findings show that supervised learning can be an effective method, even with smaller datasets, and serves as a starting point for further automation. However, achieving higher accuracy and better generalization requires a substantial amount of labeled data, which is time-consuming and demands expertise. Using models such as Na\"ive Bayes, Random Forest and SVM, this study demonstrates the applicability of machine learning for central bank sentiment analysis, with English-language communications from the Thai Central Bank as a case study.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 17:20:41 GMT" } ]
2025-03-31T00:00:00
[ [ "Grassi", "Stefano", "" ] ]
TITLE: Sentiment Classification of Thai Central Bank Press Releases Using Supervised Learning ABSTRACT: Central bank communication plays a critical role in shaping economic expectations and monetary policy effectiveness. This study applies supervised machine learning techniques to classify the sentiment of press releases from the Bank of Thailand, addressing gaps in research that primarily focus on lexicon-based approaches. My findings show that supervised learning can be an effective method, even with smaller datasets, and serves as a starting point for further automation. However, achieving higher accuracy and better generalization requires a substantial amount of labeled data, which is time-consuming and demands expertise. Using models such as Na\"ive Bayes, Random Forest and SVM, this study demonstrates the applicability of machine learning for central bank sentiment analysis, with English-language communications from the Thai Central Bank as a case study.
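A minimal supervised baseline of the kind evaluated in this study is a TF-IDF representation feeding a Naive Bayes classifier. The scikit-learn sketch below assumes press-release texts paired with hand-assigned sentiment labels; the example texts and label scheme are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled data: press-release snippets and sentiment labels.
texts = ["The committee decided to raise the policy rate ...",
         "Economic activity continued to recover steadily ...",
         "Inflationary pressures remain a significant concern ..."]
labels = ["hawkish", "neutral", "hawkish"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # unigrams + bigrams
    MultinomialNB(),
)
model.fit(texts, labels)
print(model.predict(["Growth is expected to remain subdued ..."]))
```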
2503.22634
Adam Wei
Adam Wei, Abhinav Agarwal, Boyuan Chen, Rohan Bosworth, Nicholas Pfaff, Russ Tedrake
Empirical Analysis of Sim-and-Real Cotraining Of Diffusion Policies For Planar Pushing from Pixels
9 pages, 15 figures, In Submission to IROS 2025
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In imitation learning for robotics, cotraining with demonstration data generated both in simulation and on real hardware has emerged as a powerful recipe to overcome the sim2real gap. This work seeks to elucidate basic principles of this sim-and-real cotraining to help inform simulation design, sim-and-real dataset creation, and policy training. Focusing narrowly on the canonical task of planar pushing from camera inputs enabled us to be thorough in our study. These experiments confirm that cotraining with simulated data \emph{can} dramatically improve performance in real, especially when real data is limited. Performance gains scale with simulated data, but eventually plateau; real-world data increases this performance ceiling. The results also suggest that reducing the domain gap in physics may be more important than visual fidelity for non-prehensile manipulation tasks. Perhaps surprisingly, having some visual domain gap actually helps the cotrained policy -- binary probes reveal that high-performing policies learn to distinguish simulated domains from real. We conclude by investigating this nuance and mechanisms that facilitate positive transfer between sim-and-real. In total, our experiments span over 40 real-world policies (evaluated on 800+ trials) and 200 simulated policies (evaluated on 40,000+ trials).
[ { "version": "v1", "created": "Fri, 28 Mar 2025 17:25:57 GMT" } ]
2025-03-31T00:00:00
[ [ "Wei", "Adam", "" ], [ "Agarwal", "Abhinav", "" ], [ "Chen", "Boyuan", "" ], [ "Bosworth", "Rohan", "" ], [ "Pfaff", "Nicholas", "" ], [ "Tedrake", "Russ", "" ] ]
TITLE: Empirical Analysis of Sim-and-Real Cotraining Of Diffusion Policies For Planar Pushing from Pixels ABSTRACT: In imitation learning for robotics, cotraining with demonstration data generated both in simulation and on real hardware has emerged as a powerful recipe to overcome the sim2real gap. This work seeks to elucidate basic principles of this sim-and-real cotraining to help inform simulation design, sim-and-real dataset creation, and policy training. Focusing narrowly on the canonical task of planar pushing from camera inputs enabled us to be thorough in our study. These experiments confirm that cotraining with simulated data \emph{can} dramatically improve performance in real, especially when real data is limited. Performance gains scale with simulated data, but eventually plateau; real-world data increases this performance ceiling. The results also suggest that reducing the domain gap in physics may be more important than visual fidelity for non-prehensile manipulation tasks. Perhaps surprisingly, having some visual domain gap actually helps the cotrained policy -- binary probes reveal that high-performing policies learn to distinguish simulated domains from real. We conclude by investigating this nuance and mechanisms that facilitate positive transfer between sim-and-real. In total, our experiments span over 40 real-world policies (evaluated on 800+ trials) and 200 simulated policies (evaluated on 40,000+ trials).
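Operationally, sim-and-real cotraining reduces to a sampling ratio in the dataloader; a minimal sketch follows, where `real_fraction` is an assumed mixing knob of the kind whose scaling behavior the paper studies.

```python
import random

def cotraining_batch(sim_data, real_data, batch_size=64, real_fraction=0.1):
    """Draw a cotraining batch mixing simulated and real demonstrations.

    `real_fraction` is the assumed share of real samples per batch;
    simulated samples (drawn with replacement) fill the remainder.
    """
    n_real = max(1, int(batch_size * real_fraction))
    batch = random.sample(real_data, min(n_real, len(real_data)))
    batch += random.choices(sim_data, k=batch_size - len(batch))
    return batch
```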
2503.22655
Xiaomin Yu
Xiaomin Yu, Pengxiang Ding, Wenjie Zhang, Siteng Huang, Songyang Gao, Chengwei Qin, Kejian Wu, Zhaoxin Fan, Ziyue Qiao, Donglin Wang
Unicorn: Text-Only Data Synthesis for Vision Language Model Training
null
null
null
null
cs.AI cs.CV cs.MM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Training vision-language models (VLMs) typically requires large-scale, high-quality image-text pairs, but collecting or synthesizing such data is costly. In contrast, text data is abundant and inexpensive, prompting the question: can high-quality multimodal training data be synthesized purely from text? To tackle this, we propose a cross-integrated three-stage multimodal data synthesis framework, which generates two datasets: Unicorn-1.2M and Unicorn-471K-Instruction. In Stage 1: Diverse Caption Data Synthesis, we construct 1.2M semantically diverse high-quality captions by expanding sparse caption seeds using large language models (LLMs). In Stage 2: Instruction-Tuning Data Generation, we further process 471K captions into multi-turn instruction-tuning tasks to support complex reasoning. Finally, in Stage 3: Modality Representation Transfer, the representations of these textual captions are transformed into visual representations, resulting in diverse synthetic image representations. This three-stage process enables us to construct Unicorn-1.2M for pretraining and Unicorn-471K-Instruction for instruction-tuning, without relying on real images. By eliminating the dependency on real images while maintaining data quality and diversity, our framework offers a cost-effective and scalable solution for VLM training. Code is available at https://github.com/Yu-xm/Unicorn.git.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 17:43:00 GMT" } ]
2025-03-31T00:00:00
[ [ "Yu", "Xiaomin", "" ], [ "Ding", "Pengxiang", "" ], [ "Zhang", "Wenjie", "" ], [ "Huang", "Siteng", "" ], [ "Gao", "Songyang", "" ], [ "Qin", "Chengwei", "" ], [ "Wu", "Kejian", "" ], [ "Fan", "Zhaoxin", "" ], [ "Qiao", "Ziyue", "" ], [ "Wang", "Donglin", "" ] ]
TITLE: Unicorn: Text-Only Data Synthesis for Vision Language Model Training ABSTRACT: Training vision-language models (VLMs) typically requires large-scale, high-quality image-text pairs, but collecting or synthesizing such data is costly. In contrast, text data is abundant and inexpensive, prompting the question: can high-quality multimodal training data be synthesized purely from text? To tackle this, we propose a cross-integrated three-stage multimodal data synthesis framework, which generates two datasets: Unicorn-1.2M and Unicorn-471K-Instruction. In Stage 1: Diverse Caption Data Synthesis, we construct 1.2M semantically diverse high-quality captions by expanding sparse caption seeds using large language models (LLMs). In Stage 2: Instruction-Tuning Data Generation, we further process 471K captions into multi-turn instruction-tuning tasks to support complex reasoning. Finally, in Stage 3: Modality Representation Transfer, the representations of these textual captions are transformed into visual representations, resulting in diverse synthetic image representations. This three-stage process enables us to construct Unicorn-1.2M for pretraining and Unicorn-471K-Instruction for instruction-tuning, without relying on real images. By eliminating the dependency on real images while maintaining data quality and diversity, our framework offers a cost-effective and scalable solution for VLM training. Code is available at https://github.com/Yu-xm/Unicorn.git.
2503.22668
Sindhu Hegde
Sindhu B Hegde, K R Prajwal, Taein Kwon, Andrew Zisserman
Understanding Co-speech Gestures in-the-wild
Main paper - 11 pages, 4 figures, Supplementary - 5 pages, 4 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Co-speech gestures play a vital role in non-verbal communication. In this paper, we introduce a new framework for co-speech gesture understanding in the wild. Specifically, we propose three new tasks and benchmarks to evaluate a model's capability to comprehend gesture-text-speech associations: (i) gesture-based retrieval, (ii) gestured word spotting, and (iii) active speaker detection using gestures. We present a new approach that learns a tri-modal speech-text-video-gesture representation to solve these tasks. By leveraging a combination of global phrase contrastive loss and local gesture-word coupling loss, we demonstrate that a strong gesture representation can be learned in a weakly supervised manner from videos in the wild. Our learned representations outperform previous methods, including large vision-language models (VLMs), across all three tasks. Further analysis reveals that speech and text modalities capture distinct gesture-related signals, underscoring the advantages of learning a shared tri-modal embedding space. The dataset, model, and code are available at: https://www.robots.ox.ac.uk/~vgg/research/jegal
[ { "version": "v1", "created": "Fri, 28 Mar 2025 17:55:52 GMT" } ]
2025-03-31T00:00:00
[ [ "Hegde", "Sindhu B", "" ], [ "Prajwal", "K R", "" ], [ "Kwon", "Taein", "" ], [ "Zisserman", "Andrew", "" ] ]
TITLE: Understanding Co-speech Gestures in-the-wild ABSTRACT: Co-speech gestures play a vital role in non-verbal communication. In this paper, we introduce a new framework for co-speech gesture understanding in the wild. Specifically, we propose three new tasks and benchmarks to evaluate a model's capability to comprehend gesture-text-speech associations: (i) gesture-based retrieval, (ii) gestured word spotting, and (iii) active speaker detection using gestures. We present a new approach that learns a tri-modal speech-text-video-gesture representation to solve these tasks. By leveraging a combination of global phrase contrastive loss and local gesture-word coupling loss, we demonstrate that a strong gesture representation can be learned in a weakly supervised manner from videos in the wild. Our learned representations outperform previous methods, including large vision-language models (VLMs), across all three tasks. Further analysis reveals that speech and text modalities capture distinct gesture-related signals, underscoring the advantages of learning a shared tri-modal embedding space. The dataset, model, and code are available at: https://www.robots.ox.ac.uk/~vgg/research/jegal
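A minimal sketch of the global phrase contrastive component, assuming a symmetric InfoNCE form between pooled gesture-video and phrase embeddings; the encoders are stubbed with random tensors and the temperature is an illustrative choice, not the paper's setting.

```python
# Hedged sketch: symmetric InfoNCE-style "global phrase contrastive" loss
# between pooled gesture-video embeddings and pooled speech/text embeddings.
import torch
import torch.nn.functional as F

def global_contrastive(video_emb, phrase_emb, temperature=0.07):
    v = F.normalize(video_emb, dim=-1)
    p = F.normalize(phrase_emb, dim=-1)
    logits = v @ p.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(v.size(0))           # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

video_emb = torch.randn(8, 256)   # stand-in for pooled gesture-video features
phrase_emb = torch.randn(8, 256)  # stand-in for pooled speech/text phrase features
print(global_contrastive(video_emb, phrase_emb).item())
```

The paper pairs such a global term with a local gesture-word coupling loss; only the global half is sketched here.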
2503.22675
Jiakai Tang
Jiakai Tang, Sunhao Dai, Teng Shi, Jun Xu, Xu Chen, Wen Chen, Wu Jian, Yuning Jiang
Think Before Recommend: Unleashing the Latent Reasoning Power for Sequential Recommendation
null
null
null
null
cs.IR cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Sequential Recommendation (SeqRec) aims to predict the next item by capturing sequential patterns from users' historical interactions, playing a crucial role in many real-world recommender systems. However, existing approaches predominantly adopt a direct forward computation paradigm, where the final hidden state of the sequence encoder serves as the user representation. We argue that this inference paradigm, due to its limited computational depth, struggles to model the complex evolving nature of user preferences and lacks a nuanced understanding of long-tail items, leading to suboptimal performance. To address this issue, we propose \textbf{ReaRec}, the first inference-time computing framework for recommender systems, which enhances user representations through implicit multi-step reasoning. Specifically, ReaRec autoregressively feeds the sequence's last hidden state into the sequential recommender while incorporating special reasoning position embeddings to decouple the original item encoding space from the multi-step reasoning space. Moreover, we introduce two lightweight reasoning-based learning methods, Ensemble Reasoning Learning (ERL) and Progressive Reasoning Learning (PRL), to further effectively exploit ReaRec's reasoning potential. Extensive experiments on five public real-world datasets and different SeqRec architectures demonstrate the generality and effectiveness of our proposed ReaRec. Remarkably, post-hoc analyses reveal that ReaRec significantly elevates the performance ceiling of multiple sequential recommendation backbones by approximately 30\%-50\%. Thus, we believe this work can open a new and promising avenue for future research in inference-time computing for sequential recommendation.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 17:59:03 GMT" } ]
2025-03-31T00:00:00
[ [ "Tang", "Jiakai", "" ], [ "Dai", "Sunhao", "" ], [ "Shi", "Teng", "" ], [ "Xu", "Jun", "" ], [ "Chen", "Xu", "" ], [ "Chen", "Wen", "" ], [ "Jian", "Wu", "" ], [ "Jiang", "Yuning", "" ] ]
TITLE: Think Before Recommend: Unleashing the Latent Reasoning Power for Sequential Recommendation ABSTRACT: Sequential Recommendation (SeqRec) aims to predict the next item by capturing sequential patterns from users' historical interactions, playing a crucial role in many real-world recommender systems. However, existing approaches predominantly adopt a direct forward computation paradigm, where the final hidden state of the sequence encoder serves as the user representation. We argue that this inference paradigm, due to its limited computational depth, struggles to model the complex evolving nature of user preferences and lacks a nuanced understanding of long-tail items, leading to suboptimal performance. To address this issue, we propose \textbf{ReaRec}, the first inference-time computing framework for recommender systems, which enhances user representations through implicit multi-step reasoning. Specifically, ReaRec autoregressively feeds the sequence's last hidden state into the sequential recommender while incorporating special reasoning position embeddings to decouple the original item encoding space from the multi-step reasoning space. Moreover, we introduce two lightweight reasoning-based learning methods, Ensemble Reasoning Learning (ERL) and Progressive Reasoning Learning (PRL), to further effectively exploit ReaRec's reasoning potential. Extensive experiments on five public real-world datasets and different SeqRec architectures demonstrate the generality and effectiveness of our proposed ReaRec. Remarkably, post-hoc analyses reveal that ReaRec significantly elevates the performance ceiling of multiple sequential recommendation backbones by approximately 30\%-50\%. Thus, we believe this work can open a new and promising avenue for future research in inference-time computing for sequential recommendation.
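The following is a minimal sketch of the implicit multi-step reasoning loop described above, assuming a generic Transformer sequence encoder as the SeqRec backbone; the class name ReasoningHead and all dimensions are hypothetical.

```python
# Hedged sketch: feed the encoder's last hidden state back autoregressively,
# offset by a learned "reasoning position" embedding at each step.
import torch
import torch.nn as nn

class ReasoningHead(nn.Module):
    def __init__(self, d_model=64, n_steps=3):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.reason_pos = nn.Embedding(n_steps, d_model)  # reasoning position embeddings
        self.n_steps = n_steps

    def forward(self, item_emb):                 # (B, L, d) embedded interaction history
        seq = item_emb
        states = []
        for step in range(self.n_steps):
            h = self.encoder(seq)[:, -1]         # last hidden state
            states.append(h)
            nxt = h + self.reason_pos.weight[step]  # decouple reasoning space from items
            seq = torch.cat([seq, nxt.unsqueeze(1)], dim=1)
        return states                            # intermediate states usable by ERL/PRL-style losses

head = ReasoningHead()
user_repr = head(torch.randn(4, 10, 64))
print(len(user_repr), user_repr[-1].shape)       # 3 torch.Size([4, 64])
```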
2503.22677
Ruining Li
Ruining Li, Chuanxia Zheng, Christian Rupprecht, Andrea Vedaldi
DSO: Aligning 3D Generators with Simulation Feedback for Physical Soundness
Project page: https://ruiningli.com/dso
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Most 3D object generators focus on aesthetic quality, often neglecting physical constraints necessary in applications. One such constraint is that the 3D object should be self-supporting, i.e., remain balanced under gravity. Prior approaches to generating stable 3D objects used differentiable physics simulators to optimize geometry at test-time, which is slow, unstable, and prone to local optima. Inspired by the literature on aligning generative models to external feedback, we propose Direct Simulation Optimization (DSO), a framework to use the feedback from a (non-differentiable) simulator to increase the likelihood that the 3D generator outputs stable 3D objects directly. We construct a dataset of 3D objects labeled with a stability score obtained from the physics simulator. We can then fine-tune the 3D generator using the stability score as the alignment metric, via direct preference optimization (DPO) or direct reward optimization (DRO), a novel objective, which we introduce, to align diffusion models without requiring pairwise preferences. Our experiments show that the fine-tuned feed-forward generator, using either the DPO or the DRO objective, is much faster and more likely to produce stable objects than test-time optimization. Notably, the DSO framework works even without any ground-truth 3D objects for training, allowing the 3D generator to self-improve by automatically collecting simulation feedback on its own outputs.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 17:59:53 GMT" } ]
2025-03-31T00:00:00
[ [ "Li", "Ruining", "" ], [ "Zheng", "Chuanxia", "" ], [ "Rupprecht", "Christian", "" ], [ "Vedaldi", "Andrea", "" ] ]
TITLE: DSO: Aligning 3D Generators with Simulation Feedback for Physical Soundness ABSTRACT: Most 3D object generators focus on aesthetic quality, often neglecting physical constraints necessary in applications. One such constraint is that the 3D object should be self-supporting, i.e., remain balanced under gravity. Prior approaches to generating stable 3D objects used differentiable physics simulators to optimize geometry at test-time, which is slow, unstable, and prone to local optima. Inspired by the literature on aligning generative models to external feedback, we propose Direct Simulation Optimization (DSO), a framework to use the feedback from a (non-differentiable) simulator to increase the likelihood that the 3D generator outputs stable 3D objects directly. We construct a dataset of 3D objects labeled with a stability score obtained from the physics simulator. We can then fine-tune the 3D generator using the stability score as the alignment metric, via direct preference optimization (DPO) or direct reward optimization (DRO), a novel objective, which we introduce, to align diffusion models without requiring pairwise preferences. Our experiments show that the fine-tuned feed-forward generator, using either the DPO or the DRO objective, is much faster and more likely to produce stable objects than test-time optimization. Notably, the DSO framework works even without any ground-truth 3D objects for training, allowing the 3D generator to self-improve by automatically collecting simulation feedback on its own outputs.
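As a hedged illustration of the preference-based fine-tuning route, the sketch below implements a standard DPO-style loss over (stable, unstable) generation pairs, using the common diffusion-model approximation that replaces log-probabilities with negative denoising losses. The paper's DRO objective is defined without pairwise preferences and is not reproduced here; every tensor below is a toy placeholder.

```python
# Hedged sketch: DPO-style preference loss where the "winner" of each pair is
# the shape judged more stable by the simulator.
import torch
import torch.nn.functional as F

def dpo_loss(loss_w, loss_l, loss_w_ref, loss_l_ref, beta=0.1):
    # Lower denoising loss ~ higher likelihood, so negate the differences
    # to approximate log-ratios against the frozen reference model.
    logr_w = -(loss_w - loss_w_ref)
    logr_l = -(loss_l - loss_l_ref)
    return -F.logsigmoid(beta * (logr_w - logr_l)).mean()

# Toy per-sample denoising losses for stable (winner) / unstable (loser) shapes.
loss_w, loss_l = torch.rand(16), torch.rand(16) + 0.2
loss_w_ref, loss_l_ref = torch.rand(16), torch.rand(16) + 0.2
print(dpo_loss(loss_w, loss_l, loss_w_ref, loss_l_ref).item())
```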
2503.22679
Weiqi Li
Weiqi Li, Xuanyu Zhang, Shijie Zhao, Yabin Zhang, Junlin Li, Li Zhang, Jian Zhang
Q-Insight: Understanding Image Quality via Visual Reinforcement Learning
Technical report
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image quality assessment (IQA) focuses on the perceptual visual quality of images, playing a crucial role in downstream tasks such as image reconstruction, compression, and generation. The rapid advancement of multi-modal large language models (MLLMs) has significantly broadened the scope of IQA, moving toward comprehensive image quality understanding that incorporates content analysis, degradation perception, and comparison reasoning beyond mere numerical scoring. Previous MLLM-based methods typically either generate numerical scores lacking interpretability or heavily rely on supervised fine-tuning (SFT) using large-scale annotated datasets to provide descriptive assessments, limiting their flexibility and applicability. In this paper, we propose Q-Insight, a reinforcement learning-based model built upon group relative policy optimization (GRPO), which demonstrates strong visual reasoning capability for image quality understanding while requiring only a limited amount of rating scores and degradation labels. By jointly optimizing score regression and degradation perception tasks with carefully designed reward functions, our approach effectively exploits their mutual benefits for enhanced performance. Extensive experiments demonstrate that Q-Insight substantially outperforms existing state-of-the-art methods in both score regression and degradation perception tasks, while exhibiting impressive zero-shot generalization to comparison reasoning tasks. Code will be available at https://github.com/lwq20020127/Q-Insight.
[ { "version": "v1", "created": "Fri, 28 Mar 2025 17:59:54 GMT" } ]
2025-03-31T00:00:00
[ [ "Li", "Weiqi", "" ], [ "Zhang", "Xuanyu", "" ], [ "Zhao", "Shijie", "" ], [ "Zhang", "Yabin", "" ], [ "Li", "Junlin", "" ], [ "Zhang", "Li", "" ], [ "Zhang", "Jian", "" ] ]
TITLE: Q-Insight: Understanding Image Quality via Visual Reinforcement Learning ABSTRACT: Image quality assessment (IQA) focuses on the perceptual visual quality of images, playing a crucial role in downstream tasks such as image reconstruction, compression, and generation. The rapid advancement of multi-modal large language models (MLLMs) has significantly broadened the scope of IQA, moving toward comprehensive image quality understanding that incorporates content analysis, degradation perception, and comparison reasoning beyond mere numerical scoring. Previous MLLM-based methods typically either generate numerical scores lacking interpretability or heavily rely on supervised fine-tuning (SFT) using large-scale annotated datasets to provide descriptive assessments, limiting their flexibility and applicability. In this paper, we propose Q-Insight, a reinforcement learning-based model built upon group relative policy optimization (GRPO), which demonstrates strong visual reasoning capability for image quality understanding while requiring only a limited amount of rating scores and degradation labels. By jointly optimizing score regression and degradation perception tasks with carefully designed reward functions, our approach effectively exploits their mutual benefits for enhanced performance. Extensive experiments demonstrate that Q-Insight substantially outperforms existing state-of-the-art methods in both score regression and degradation perception tasks, while exhibiting impressive zero-shot generalization to comparison reasoning tasks. Code will be available at https://github.com/lwq20020127/Q-Insight.
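As a hedged sketch of the kind of reward design the abstract alludes to, the function below combines a score-regression reward with a degradation-perception reward; the tolerance, weights, and label format are illustrative assumptions, not the paper's actual reward functions.

```python
# Hedged sketch: a composite reward for jointly optimizing score regression
# and degradation perception, of the kind a GRPO-style trainer could consume.
def q_insight_reward(pred_score, gt_score, pred_degradation, gt_degradation,
                     tol=0.5, w_score=1.0, w_deg=1.0):
    score_reward = 1.0 if abs(pred_score - gt_score) <= tol else 0.0  # within-tolerance score hit
    deg_reward = 1.0 if pred_degradation == gt_degradation else 0.0  # correct degradation label
    return w_score * score_reward + w_deg * deg_reward

print(q_insight_reward(3.8, 4.0, "blur", "blur"))  # 2.0
```

GRPO would then normalize such rewards within each group of sampled responses to form relative advantages; that step is omitted here.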
2503.19588
Mia Siemon
Mia Siemon, Ivan Nikolov, Thomas B. Moeslund and Kamal Nasrollahi
Video Anomaly Detection with Contours -- A Study
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Pose-based Video Anomaly Detection, prior art is rooted in the assumption that abnormal events can mostly be regarded as the result of uncommon human behavior. As opposed to utilizing skeleton representations of humans, however, we investigate the potential of learning recurrent motion patterns of normal human behavior using 2D contours. Keeping all advantages of pose-based methods, such as increased object anonymization, the shift from human skeletons to contours is hypothesized to leave the opportunity to cover more object categories open for future research. We propose formulating the problem as a regression and a classification task, and additionally explore two distinct data representation techniques for contours. To further reduce the computational complexity of Pose-based Video Anomaly Detection solutions, all methods in this study are based on shallow Neural Networks from the field of Deep Learning, and evaluated on the three most prominent benchmark datasets within Video Anomaly Detection and their human-related counterparts, totaling six datasets. Our results indicate that this novel perspective on Pose-based Video Anomaly Detection marks a promising direction for future research.
[ { "version": "v1", "created": "Tue, 25 Mar 2025 12:11:50 GMT" } ]
2025-03-30T00:00:00
[ [ "Siemon", "Mia", "" ], [ "Nikolov", "Ivan", "" ], [ "Moeslund", "Thomas B.", "" ], [ "Nasrollahi", "Kamal", "" ] ]
TITLE: Video Anomaly Detection with Contours -- A Study ABSTRACT: In Pose-based Video Anomaly Detection, prior art is rooted in the assumption that abnormal events can mostly be regarded as the result of uncommon human behavior. As opposed to utilizing skeleton representations of humans, however, we investigate the potential of learning recurrent motion patterns of normal human behavior using 2D contours. Keeping all advantages of pose-based methods, such as increased object anonymization, the shift from human skeletons to contours is hypothesized to leave the opportunity to cover more object categories open for future research. We propose formulating the problem as a regression and a classification task, and additionally explore two distinct data representation techniques for contours. To further reduce the computational complexity of Pose-based Video Anomaly Detection solutions, all methods in this study are based on shallow Neural Networks from the field of Deep Learning, and evaluated on the three most prominent benchmark datasets within Video Anomaly Detection and their human-related counterparts, totaling six datasets. Our results indicate that this novel perspective on Pose-based Video Anomaly Detection marks a promising direction for future research.
2503.19670
Saurav Sharma
Saurav Sharma, Didier Mutter, Nicolas Padoy
fine-CLIP: Enhancing Zero-Shot Fine-Grained Surgical Action Recognition with Vision-Language Models
6 pages, 3 tables, 3 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
While vision-language models like CLIP have advanced zero-shot surgical phase recognition, they struggle with fine-grained surgical activities, especially action triplets. This limitation arises because current CLIP formulations rely on global image features, which overlook the fine-grained semantics and contextual details crucial for complex tasks like zero-shot triplet recognition. Furthermore, these models do not explore the hierarchical structure inherent in triplets, reducing their ability to generalize to novel triplets. To address these challenges, we propose fine-CLIP, which learns object-centric features and leverages the hierarchy in triplet formulation. Our approach integrates three components: hierarchical prompt modeling to capture shared semantics, LoRA-based vision backbone adaptation for enhanced feature extraction, and a graph-based condensation strategy that groups similar patch features into meaningful object clusters. Since triplet classification is a challenging task, we introduce an alternative yet meaningful base-to-novel generalization benchmark with two settings on the CholecT50 dataset: Unseen-Target, assessing adaptability to triplets with novel anatomical structures, and Unseen-Instrument-Verb, where models need to generalize to novel instrument-verb interactions. fine-CLIP shows significant improvements in F1 and mAP, enhancing zero-shot recognition of novel surgical triplets.
[ { "version": "v1", "created": "Tue, 25 Mar 2025 13:57:02 GMT" } ]
2025-03-30T00:00:00
[ [ "Sharma", "Saurav", "" ], [ "Mutter", "Didier", "" ], [ "Padoy", "Nicolas", "" ] ]
TITLE: fine-CLIP: Enhancing Zero-Shot Fine-Grained Surgical Action Recognition with Vision-Language Models ABSTRACT: While vision-language models like CLIP have advanced zero-shot surgical phase recognition, they struggle with fine-grained surgical activities, especially action triplets. This limitation arises because current CLIP formulations rely on global image features, which overlook the fine-grained semantics and contextual details crucial for complex tasks like zero-shot triplet recognition. Furthermore, these models do not explore the hierarchical structure inherent in triplets, reducing their ability to generalize to novel triplets. To address these challenges, we propose fine-CLIP, which learns object-centric features and leverages the hierarchy in triplet formulation. Our approach integrates three components: hierarchical prompt modeling to capture shared semantics, LoRA-based vision backbone adaptation for enhanced feature extraction, and a graph-based condensation strategy that groups similar patch features into meaningful object clusters. Since triplet classification is a challenging task, we introduce an alternative yet meaningful base-to-novel generalization benchmark with two settings on the CholecT50 dataset: Unseen-Target, assessing adaptability to triplets with novel anatomical structures, and Unseen-Instrument-Verb, where models need to generalize to novel instrument-verb interactions. fine-CLIP shows significant improvements in F1 and mAP, enhancing zero-shot recognition of novel surgical triplets.
2503.19860
Junzhi Ning
Junzhi Ning, Dominic Marshall, Yijian Gao, Xiaodan Xing, Yang Nan, Yingying Fang, Sheng Zhang, Matthieu Komorowski, Guang Yang
Unpaired Translation of Chest X-ray Images for Lung Opacity Diagnosis via Adaptive Activation Masks and Cross-Domain Alignment
null
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by/4.0/
Chest X-ray radiographs (CXRs) play a pivotal role in diagnosing and monitoring cardiopulmonary diseases. However, lung opacities in CXRs frequently obscure anatomical structures, impeding clear identification of lung borders and complicating the localization of pathology. This challenge significantly hampers segmentation accuracy and precise lesion identification, which are crucial for diagnosis. To tackle these issues, our study proposes an unpaired CXR translation framework that converts CXRs with lung opacities into counterparts without lung opacities while preserving semantic features. Central to our approach is the use of adaptive activation masks to selectively modify opacity regions in lung CXRs. Cross-domain alignment ensures translated CXRs without opacity issues align with feature maps and prediction labels from a pre-trained CXR lesion classifier, facilitating the interpretability of the translation process. We validate our method using the RSNA, MIMIC-CXR-JPG and JSRT datasets, demonstrating superior translation quality through lower Frechet Inception Distance (FID) and Kernel Inception Distance (KID) scores compared to existing methods (FID: 67.18 vs. 210.4, KID: 0.01604 vs. 0.225). Evaluation on RSNA opacity, MIMIC acute respiratory distress syndrome (ARDS) patient CXRs and JSRT CXRs shows that our method enhances segmentation accuracy of lung borders and improves lesion classification, further underscoring its potential in clinical settings (RSNA: mIoU: 76.58% vs. 62.58%, Sensitivity: 85.58% vs. 77.03%; MIMIC ARDS: mIoU: 86.20% vs. 72.07%, Sensitivity: 92.68% vs. 86.85%; JSRT: mIoU: 91.08% vs. 85.6%, Sensitivity: 97.62% vs. 95.04%). Our approach advances CXR imaging analysis, especially in investigating segmentation impacts through image translation techniques.
[ { "version": "v1", "created": "Tue, 25 Mar 2025 17:26:17 GMT" } ]
2025-03-30T00:00:00
[ [ "Ning", "Junzhi", "" ], [ "Marshall", "Dominic", "" ], [ "Gao", "Yijian", "" ], [ "Nan", "Xiaodan Xing Yang", "" ], [ "Fang", "Yingying", "" ], [ "Zhang", "Sheng", "" ], [ "Komorowski", "Matthieu", "" ], [ "Yang", "Guang", "" ] ]
TITLE: Unpaired Translation of Chest X-ray Images for Lung Opacity Diagnosis via Adaptive Activation Masks and Cross-Domain Alignment ABSTRACT: Chest X-ray radiographs (CXRs) play a pivotal role in diagnosing and monitoring cardiopulmonary diseases. However, lung opacities in CXRs frequently obscure anatomical structures, impeding clear identification of lung borders and complicating the localization of pathology. This challenge significantly hampers segmentation accuracy and precise lesion identification, which are crucial for diagnosis. To tackle these issues, our study proposes an unpaired CXR translation framework that converts CXRs with lung opacities into counterparts without lung opacities while preserving semantic features. Central to our approach is the use of adaptive activation masks to selectively modify opacity regions in lung CXRs. Cross-domain alignment ensures translated CXRs without opacity issues align with feature maps and prediction labels from a pre-trained CXR lesion classifier, facilitating the interpretability of the translation process. We validate our method using the RSNA, MIMIC-CXR-JPG and JSRT datasets, demonstrating superior translation quality through lower Frechet Inception Distance (FID) and Kernel Inception Distance (KID) scores compared to existing methods (FID: 67.18 vs. 210.4, KID: 0.01604 vs. 0.225). Evaluation on RSNA opacity, MIMIC acute respiratory distress syndrome (ARDS) patient CXRs and JSRT CXRs shows that our method enhances segmentation accuracy of lung borders and improves lesion classification, further underscoring its potential in clinical settings (RSNA: mIoU: 76.58% vs. 62.58%, Sensitivity: 85.58% vs. 77.03%; MIMIC ARDS: mIoU: 86.20% vs. 72.07%, Sensitivity: 92.68% vs. 86.85%; JSRT: mIoU: 91.08% vs. 85.6%, Sensitivity: 97.62% vs. 95.04%). Our approach advances CXR imaging analysis, especially in investigating segmentation impacts through image translation techniques.
2012.04726
Jeff Da
Jeff Da and Maxwell Forbes and Rowan Zellers and Anthony Zheng and Jena D. Hwang and Antoine Bosselut and Yejin Choi
Edited Media Understanding Frames: Reasoning About the Intent and Implications of Visual Misinformation
ACL 2021
null
null
null
cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Multimodal disinformation, from 'deepfakes' to simple edits that deceive, is an important societal problem. Yet at the same time, the vast majority of media edits are harmless -- such as a filtered vacation photo. The difference between this example, and harmful edits that spread disinformation, is one of intent. Recognizing and describing this intent is a major challenge for today's AI systems. We present the task of Edited Media Understanding, requiring models to answer open-ended questions that capture the intent and implications of an image edit. We introduce a dataset for our task, EMU, with 48k question-answer pairs written in rich natural language. We evaluate a wide variety of vision-and-language models for our task, and introduce a new model PELICAN, which builds upon recent progress in pretrained multimodal representations. Our model obtains promising results on our dataset, with humans rating its answers as accurate 40.35% of the time. At the same time, there is still much work to be done -- humans prefer human-annotated captions 93.56% of the time -- and we provide analysis that highlights areas for further progress.
[ { "version": "v1", "created": "Tue, 8 Dec 2020 20:30:43 GMT" }, { "version": "v2", "created": "Wed, 26 Mar 2025 20:17:54 GMT" } ]
2025-03-28T00:00:00
[ [ "Da", "Jeff", "" ], [ "Forbes", "Maxwell", "" ], [ "Zellers", "Rowan", "" ], [ "Zheng", "Anthony", "" ], [ "Hwang", "Jena D.", "" ], [ "Bosselut", "Antoine", "" ], [ "Choi", "Yejin", "" ] ]
TITLE: Edited Media Understanding Frames: Reasoning About the Intent and Implications of Visual Misinformation ABSTRACT: Multimodal disinformation, from 'deepfakes' to simple edits that deceive, is an important societal problem. Yet at the same time, the vast majority of media edits are harmless -- such as a filtered vacation photo. The difference between this example, and harmful edits that spread disinformation, is one of intent. Recognizing and describing this intent is a major challenge for today's AI systems. We present the task of Edited Media Understanding, requiring models to answer open-ended questions that capture the intent and implications of an image edit. We introduce a dataset for our task, EMU, with 48k question-answer pairs written in rich natural language. We evaluate a wide variety of vision-and-language models for our task, and introduce a new model PELICAN, which builds upon recent progress in pretrained multimodal representations. Our model obtains promising results on our dataset, with humans rating its answers as accurate 40.35% of the time. At the same time, there is still much work to be done -- humans prefer human-annotated captions 93.56% of the time -- and we provide analysis that highlights areas for further progress.
2301.11923
Alexej Schelle Dr.
A. Schelle and H. L\"uling
Information loss from dimensionality reduction in 5D-Gaussian spectral data
4 pages, 3 figures
Whitepaper on arXiv.org (2023)
null
null
physics.data-an cs.LG quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the loss of information in spectral analytics is a crucial first step towards finding root causes for failures and uncertainties using spectral data in artificial intelligence models built from modern complex data science applications. Here, we show from an elementary Shannon entropy model analysis with quantum statistics of Gaussian distributed spectral data, that the relative loss of information from dimensionality reduction due to the projection of an initial five-dimensional dataset onto two-dimensional diagrams is less than one percent in the parameter range of small data sets with sample sizes on the order of a few hundred data samples. From our analysis, we also conclude that the density and expectation value of the entropy probability distribution increase with the sample number and sample size using artificial data models derived from random sampling Monte Carlo simulation methods.
[ { "version": "v1", "created": "Sun, 22 Jan 2023 14:51:35 GMT" }, { "version": "v2", "created": "Sat, 23 Dec 2023 12:56:33 GMT" } ]
2025-03-28T00:00:00
[ [ "Schelle", "A.", "" ], [ "Lüling", "H.", "" ] ]
TITLE: Information loss from dimensionality reduction in 5D-Gaussian spectral data ABSTRACT: Understanding the loss of information in spectral analytics is a crucial first step towards finding root causes for failures and uncertainties using spectral data in artificial intelligence models built from modern complex data science applications. Here, we show from an elementary Shannon entropy model analysis with quantum statistics of Gaussian distributed spectral data, that the relative loss of information from dimensionality reduction due to the projection of an initial five-dimensional dataset onto two-dimensional diagrams is less than one percent in the parameter range of small data sets with sample sizes on the order of a few hundred data samples. From our analysis, we also conclude that the density and expectation value of the entropy probability distribution increase with the sample number and sample size using artificial data models derived from random sampling Monte Carlo simulation methods.
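A minimal sketch of the underlying calculation, using the analytic differential entropy of a Gaussian, h = 0.5 ln((2*pi*e)^d det(Sigma)), to compare a 5D dataset with its 2D projection; the covariance matrix and the few-hundred-sample size are illustrative choices, not the paper's data.

```python
# Hedged sketch: analytic Gaussian entropy in 5D vs. a 2D projection, plus a
# small-sample empirical estimate from the sample covariance.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + 5 * np.eye(5)          # an illustrative 5x5 covariance

def gaussian_entropy(cov):
    d = cov.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** d) * np.linalg.det(cov))

h5 = gaussian_entropy(Sigma)
h2 = gaussian_entropy(Sigma[:2, :2])     # projection onto the first two axes
print(f"h5 = {h5:.3f} nats, h2 = {h2:.3f} nats")

# Empirical check in the few-hundred-sample regime the abstract mentions.
x = rng.multivariate_normal(np.zeros(5), Sigma, size=300)
print("empirical h5:", gaussian_entropy(np.cov(x.T)))
```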
2308.07421
Hamidreza Behjoo
Hamidreza Behjoo, Michael Chertkov
U-Turn Diffusion
null
Entropy 2025
10.3390/e27040343
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
We investigate diffusion models generating synthetic samples from the probability distribution represented by the Ground Truth (GT) samples. We focus on how GT sample information is encoded in the Score Function (SF), computed (not simulated) from the Wiener-Ito (WI) linear forward process in the artificial time $t\in [0\to \infty]$, and then used as a nonlinear drift in the simulated WI reverse process with $t\in [\infty\to 0]$. We propose U-Turn diffusion, an augmentation of a pre-trained diffusion model, which shortens the forward and reverse processes to $t\in [0\to T_u]$ and $t\in [T_u\to 0]$. The U-Turn reverse process is initialized at $T_u$ with a sample from the probability distribution of the forward process (initialized at $t=0$ with a GT sample), ensuring a detailed balance relation between the shortened forward and reverse processes. Our experiments on the class-conditioned SF of the ImageNet dataset and the multi-class, single SF of the CIFAR-10 dataset reveal a critical Memorization Time $ T_m $, beyond which generated samples diverge from the GT sample used to initialize the U-Turn scheme, and a Speciation Time $ T_s $, where for $ T_u > T_s > T_m $, samples begin representing different classes. We further examine the role of SF non-linearity through a Gaussian Test, comparing empirical and Gaussian-approximated U-Turn auto-correlation functions, and showing that the SF becomes effectively affine for $ t > T_s $, and approximately affine for $t\in [T_m,T_s]$.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 19:21:28 GMT" }, { "version": "v2", "created": "Wed, 22 May 2024 20:00:17 GMT" }, { "version": "v3", "created": "Wed, 25 Dec 2024 18:35:24 GMT" } ]
2025-03-28T00:00:00
[ [ "Behjoo", "Hamidreza", "" ], [ "Chertkov", "Michael", "" ] ]
TITLE: U-Turn Diffusion ABSTRACT: We investigate diffusion models generating synthetic samples from the probability distribution represented by the Ground Truth (GT) samples. We focus on how GT sample information is encoded in the Score Function (SF), computed (not simulated) from the Wiener-Ito (WI) linear forward process in the artificial time $t\in [0\to \infty]$, and then used as a nonlinear drift in the simulated WI reverse process with $t\in [\infty\to 0]$. We propose U-Turn diffusion, an augmentation of a pre-trained diffusion model, which shortens the forward and reverse processes to $t\in [0\to T_u]$ and $t\in [T_u\to 0]$. The U-Turn reverse process is initialized at $T_u$ with a sample from the probability distribution of the forward process (initialized at $t=0$ with a GT sample), ensuring a detailed balance relation between the shortened forward and reverse processes. Our experiments on the class-conditioned SF of the ImageNet dataset and the multi-class, single SF of the CIFAR-10 dataset reveal a critical Memorization Time $ T_m $, beyond which generated samples diverge from the GT sample used to initialize the U-Turn scheme, and a Speciation Time $ T_s $, where for $ T_u > T_s > T_m $, samples begin representing different classes. We further examine the role of SF non-linearity through a Gaussian Test, comparing empirical and Gaussian-approximated U-Turn auto-correlation functions, and showing that the SF becomes effectively affine for $ t > T_s $, and approximately affine for $t\in [T_m,T_s]$.
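A minimal sketch of the U-Turn procedure under standard DDPM conventions: noise a GT sample forward to an intermediate time T_u in closed form, then run ancestral reverse sampling back to t=0. The noise-prediction network below is an untrained stand-in for a pre-trained model, and the linear beta schedule is an illustrative default.

```python
# Hedged sketch: U-Turn as forward noising to T_u plus DDPM ancestral reverse.
import torch

T, t_u = 1000, 400
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = torch.cumprod(alphas, dim=0)

eps_model = lambda x, t: torch.zeros_like(x)  # stand-in for a trained noise predictor

x0 = torch.randn(1, 3, 32, 32)                # stand-in for a GT sample
# Forward: sample q(x_{T_u} | x_0) in closed form.
x = abar[t_u - 1].sqrt() * x0 + (1 - abar[t_u - 1]).sqrt() * torch.randn_like(x0)
# Reverse: ancestral sampling from T_u back to t = 0.
for t in range(t_u, 0, -1):
    a, b, ab = alphas[t - 1], betas[t - 1], abar[t - 1]
    eps = eps_model(x, t)
    x = (x - b / (1 - ab).sqrt() * eps) / a.sqrt()
    if t > 1:
        x = x + b.sqrt() * torch.randn_like(x)
print(x.shape)  # torch.Size([1, 3, 32, 32])
```

Sweeping t_u above and below the memorization and speciation times is then what distinguishes near-copies of the GT sample from genuinely new samples.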
2310.04722
Monan Zhou Dr
Monan Zhou, Shangda Wu, Shaohua Ji, Zijin Li, Wei Li
A Holistic Evaluation of Piano Sound Quality
15 pages, 9 figures
Proceedings of the 10th Conference on Sound and Music Technology. CSMT 2023. Lecture Notes in Electrical Engineering, vol 1268. Springer, Singapore
10.1007/978-981-97-7962-8_1
23638935599966770924
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
This paper aims to develop a holistic evaluation method for piano sound quality to assist in purchasing decisions. Unlike previous studies that focused on the effect of piano performance techniques on sound quality, this study evaluates the inherent sound quality of different pianos. To derive quality evaluation systems, the study uses subjective questionnaires based on a piano sound quality dataset. The method selects the optimal piano classification models by comparing the fine-tuning results of different pre-trained Convolutional Neural Network (CNN) models. To improve the interpretability of the models, the study applies Equivalent Rectangular Bandwidth (ERB) analysis. The results reveal that musically trained individuals are better able to distinguish between the sound quality differences of different pianos. The best fine-tuned pre-trained CNN backbone achieves a high accuracy of 98.3% as the piano classifier. However, the dataset is limited, and the audio is sliced to increase its quantity, resulting in a lack of diversity and balance, so we use focal loss to reduce the impact of data imbalance. To optimize the method, the dataset will be expanded, or few-shot learning techniques will be employed in future research.
[ { "version": "v1", "created": "Sat, 7 Oct 2023 07:51:34 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 02:31:56 GMT" } ]
2025-03-28T00:00:00
[ [ "Zhou", "Monan", "" ], [ "Wu", "Shangda", "" ], [ "Ji", "Shaohua", "" ], [ "Li", "Zijin", "" ], [ "Li", "Wei", "" ] ]
TITLE: A Holistic Evaluation of Piano Sound Quality ABSTRACT: This paper aims to develop a holistic evaluation method for piano sound quality to assist in purchasing decisions. Unlike previous studies that focused on the effect of piano performance techniques on sound quality, this study evaluates the inherent sound quality of different pianos. To derive quality evaluation systems, the study uses subjective questionnaires based on a piano sound quality dataset. The method selects the optimal piano classification models by comparing the fine-tuning results of different pre-trained Convolutional Neural Network (CNN) models. To improve the interpretability of the models, the study applies Equivalent Rectangular Bandwidth (ERB) analysis. The results reveal that musically trained individuals are better able to distinguish between the sound quality differences of different pianos. The best fine-tuned pre-trained CNN backbone achieves a high accuracy of 98.3% as the piano classifier. However, the dataset is limited, and the audio is sliced to increase its quantity, resulting in a lack of diversity and balance, so we use focal loss to reduce the impact of data imbalance. To optimize the method, the dataset will be expanded, or few-shot learning techniques will be employed in future research.
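For reference, a minimal sketch of the multi-class focal loss used to counter the class imbalance mentioned above, FL(p_t) = -(1 - p_t)^gamma log(p_t); the gamma value and class count are illustrative defaults, not the paper's settings.

```python
# Hedged sketch: multi-class focal loss, which down-weights well-classified
# examples so rare classes contribute more to the gradient.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    logp = F.log_softmax(logits, dim=-1)
    logp_t = logp.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of the true class
    p_t = logp_t.exp()
    return (-(1 - p_t) ** gamma * logp_t).mean()

logits = torch.randn(8, 10)               # e.g. logits over 10 piano classes
targets = torch.randint(0, 10, (8,))
print(focal_loss(logits, targets).item())
```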
2311.15917
Zhanbo Liang
Zhanbo Liang, Jie Guo, Weidong Qiu, Zheng Huang and Shujun Li
When Graph Convolution Meets Double Attention: Online Privacy Disclosure Detection with Multi-Label Text Classification
The manuscript is accepted by Data Mining and Knowledge Discovery (ECML PKDD Journal track)
null
10.1007/s10618-023-00992-y
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rise of Web 2.0 platforms such as online social media, people's private information, such as their location, occupation and even family information, is often inadvertently disclosed through online discussions. Therefore, it is important to detect such unwanted privacy disclosures to help alert the people affected and the online platform. In this paper, privacy disclosure detection is modeled as a multi-label text classification (MLTC) problem, and a new privacy disclosure detection model is proposed to construct an MLTC classifier for detecting online privacy disclosures. This classifier takes an online post as the input and outputs multiple labels, each reflecting a possible privacy disclosure. The proposed representation method combines three different sources of information: the input text itself, the label-to-text correlation, and the label-to-label correlation. A double-attention mechanism is used to combine the first two sources of information, and a graph convolutional network (GCN) is employed to extract the third source of information that is then used to help fuse features extracted from the first two sources of information. Our extensive experimental results, obtained on a public dataset of privacy-disclosing posts on Twitter, demonstrated that our proposed privacy disclosure detection method significantly and consistently outperformed other state-of-the-art methods in terms of all key performance indicators.
[ { "version": "v1", "created": "Mon, 27 Nov 2023 15:25:17 GMT" }, { "version": "v2", "created": "Wed, 20 Dec 2023 08:40:33 GMT" } ]
2025-03-28T00:00:00
[ [ "Liang", "Zhanbo", "" ], [ "Guo", "Jie", "" ], [ "Qiu", "Weidong", "" ], [ "Huang", "Zheng", "" ], [ "Li", "Shujun", "" ] ]
TITLE: When Graph Convolution Meets Double Attention: Online Privacy Disclosure Detection with Multi-Label Text Classification ABSTRACT: With the rise of Web 2.0 platforms such as online social media, people's private information, such as their location, occupation and even family information, is often inadvertently disclosed through online discussions. Therefore, it is important to detect such unwanted privacy disclosures to help alert the people affected and the online platform. In this paper, privacy disclosure detection is modeled as a multi-label text classification (MLTC) problem, and a new privacy disclosure detection model is proposed to construct an MLTC classifier for detecting online privacy disclosures. This classifier takes an online post as the input and outputs multiple labels, each reflecting a possible privacy disclosure. The proposed representation method combines three different sources of information: the input text itself, the label-to-text correlation, and the label-to-label correlation. A double-attention mechanism is used to combine the first two sources of information, and a graph convolutional network (GCN) is employed to extract the third source of information that is then used to help fuse features extracted from the first two sources of information. Our extensive experimental results, obtained on a public dataset of privacy-disclosing posts on Twitter, demonstrated that our proposed privacy disclosure detection method significantly and consistently outperformed other state-of-the-art methods in terms of all key performance indicators.
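A minimal sketch of how the label-to-label correlation could be propagated with one graph-convolution step over a normalized label co-occurrence adjacency; the adjacency, label embeddings, and dimensions are random placeholders, not the paper's configuration.

```python
# Hedged sketch: one GCN layer over a label graph, D^-1/2 (A + I) D^-1/2 X W,
# producing label representations enriched with label-to-label correlation.
import torch
import torch.nn as nn

class LabelGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        adj = adj + torch.eye(adj.size(0))           # add self-loops
        deg_inv_sqrt = adj.sum(1).pow(-0.5)
        norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.lin(norm @ x))        # symmetric normalization

L = 6                                               # e.g. 6 privacy-disclosure labels
adj = (torch.rand(L, L) > 0.6).float()              # toy co-occurrence graph
adj = ((adj + adj.t()) > 0).float()                 # symmetrize
label_emb = torch.randn(L, 32)
out = LabelGCNLayer(32, 32)(label_emb, adj)
print(out.shape)                                    # torch.Size([6, 32])
```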
2311.16909
Hylke Donker
H. C. Donker, D. Neijzen, J. de Jong, G. A. Lunter
Multinomial belief networks for healthcare data
18 pages, 4 figs; supplement: 22 pages
PMLR 252, 1-22, 2024
null
null
stat.ML cs.LG stat.AP
http://creativecommons.org/licenses/by/4.0/
Healthcare data from patient or population cohorts are often characterized by sparsity, high missingness and relatively small sample sizes. In addition, being able to quantify uncertainty is often important in a medical context. To address these analytical requirements we propose a deep generative Bayesian model for multinomial count data. We develop a collapsed Gibbs sampling procedure that takes advantage of a series of augmentation relations, inspired by the Zhou$\unicode{x2013}$Cong$\unicode{x2013}$Chen model. We visualise the model's ability to identify coherent substructures in the data using a dataset of handwritten digits. We then apply it to a large experimental dataset of DNA mutations in cancer and show that we can identify biologically meaningful clusters of mutational signatures in a fully data-driven way.
[ { "version": "v1", "created": "Tue, 28 Nov 2023 16:12:50 GMT" }, { "version": "v2", "created": "Mon, 18 Mar 2024 11:53:00 GMT" }, { "version": "v3", "created": "Sat, 6 Apr 2024 11:38:31 GMT" } ]
2025-03-28T00:00:00
[ [ "Donker", "H. C.", "" ], [ "Neijzen", "D.", "" ], [ "de Jong", "J.", "" ], [ "Lunter", "G. A.", "" ] ]
TITLE: Multinomial belief networks for healthcare data ABSTRACT: Healthcare data from patient or population cohorts are often characterized by sparsity, high missingness and relatively small sample sizes. In addition, being able to quantify uncertainty is often important in a medical context. To address these analytical requirements we propose a deep generative Bayesian model for multinomial count data. We develop a collapsed Gibbs sampling procedure that takes advantage of a series of augmentation relations, inspired by the Zhou$\unicode{x2013}$Cong$\unicode{x2013}$Chen model. We visualise the model's ability to identify coherent substructures in the data using a dataset of handwritten digits. We then apply it to a large experimental dataset of DNA mutations in cancer and show that we can identify biologically meaningful clusters of mutational signatures in a fully data-driven way.
2312.00206
Haolin Xiong
Haolin Xiong and Sairisheek Muttukuru and Rishi Upadhyay and Pradyumna Chari and Achuta Kadambi
SparseGS: Real-Time 360{\deg} Sparse View Synthesis using Gaussian Splatting
Version accepted to 3DV 2025. Project page: https://github.com/ForMyCat/SparseGS
null
null
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
3D Gaussian Splatting (3DGS) has recently enabled real-time rendering of unbounded 3D scenes for novel view synthesis. However, this technique requires dense training views to accurately reconstruct 3D geometry. A limited number of input views will significantly degrade reconstruction quality, resulting in artifacts such as "floaters" and "background collapse" at unseen viewpoints. In this work, we introduce SparseGS, an efficient training pipeline designed to address the limitations of 3DGS in scenarios with sparse training views. SparseGS incorporates depth priors, novel depth rendering techniques, and a pruning heuristic to mitigate floater artifacts, alongside an Unseen Viewpoint Regularization module to alleviate background collapses. Our extensive evaluations on the Mip-NeRF360, LLFF, and DTU datasets demonstrate that SparseGS achieves high-quality reconstruction in both unbounded and forward-facing scenarios, with as few as 12 and 3 input images, respectively, while maintaining fast training and real-time rendering capabilities.
[ { "version": "v1", "created": "Thu, 30 Nov 2023 21:38:22 GMT" }, { "version": "v2", "created": "Mon, 13 May 2024 05:11:37 GMT" }, { "version": "v3", "created": "Wed, 26 Mar 2025 19:59:58 GMT" } ]
2025-03-28T00:00:00
[ [ "Xiong", "Haolin", "" ], [ "Muttukuru", "Sairisheek", "" ], [ "Upadhyay", "Rishi", "" ], [ "Chari", "Pradyumna", "" ], [ "Kadambi", "Achuta", "" ] ]
TITLE: SparseGS: Real-Time 360{\deg} Sparse View Synthesis using Gaussian Splatting ABSTRACT: 3D Gaussian Splatting (3DGS) has recently enabled real-time rendering of unbounded 3D scenes for novel view synthesis. However, this technique requires dense training views to accurately reconstruct 3D geometry. A limited number of input views will significantly degrade reconstruction quality, resulting in artifacts such as "floaters" and "background collapse" at unseen viewpoints. In this work, we introduce SparseGS, an efficient training pipeline designed to address the limitations of 3DGS in scenarios with sparse training views. SparseGS incorporates depth priors, novel depth rendering techniques, and a pruning heuristic to mitigate floater artifacts, alongside an Unseen Viewpoint Regularization module to alleviate background collapses. Our extensive evaluations on the Mip-NeRF360, LLFF, and DTU datasets demonstrate that SparseGS achieves high-quality reconstruction in both unbounded and forward-facing scenarios, with as few as 12 and 3 input images, respectively, while maintaining fast training and real-time rendering capabilities.
2312.07669
Yibo Xia
Yibo Xia, Lizhen Wang, Xiang Deng, Xiaoyan Luo, Yunhong Wang and Yebin Liu
GMTalker: Gaussian Mixture-based Audio-Driven Emotional Talking Video Portraits
Project page: https://bob35buaa.github.io/GMTalker. This work has been submitted to the IEEE journal for possible publication
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthesizing high-fidelity and emotion-controllable talking video portraits, with audio-lip sync, vivid expressions, realistic head poses, and eye blinks, has been an important and challenging task in recent years. Most existing methods struggle to achieve personalized and precise emotion control, smooth transitions between different emotion states, and the generation of diverse motions. To tackle these challenges, we present GMTalker, a Gaussian mixture-based emotional talking portrait generation framework. Specifically, we propose a Gaussian mixture-based expression generator that can construct a continuous and disentangled latent space, achieving more flexible emotion manipulation. Furthermore, we introduce a normalizing flow-based motion generator pretrained on a large dataset with a wide range of motions to generate diverse head poses, blinks, and eyeball movements. Finally, we propose a personalized emotion-guided head generator with an emotion mapping network that can synthesize high-fidelity and faithful emotional video portraits. Both quantitative and qualitative experiments demonstrate that our method outperforms previous methods in image quality, photo-realism, emotion accuracy, and motion diversity.
[ { "version": "v1", "created": "Tue, 12 Dec 2023 19:03:04 GMT" }, { "version": "v2", "created": "Tue, 28 May 2024 17:01:00 GMT" }, { "version": "v3", "created": "Thu, 27 Mar 2025 08:47:12 GMT" } ]
2025-03-28T00:00:00
[ [ "Xia", "Yibo", "" ], [ "Wang", "Lizhen", "" ], [ "Deng", "Xiang", "" ], [ "Luo", "Xiaoyan", "" ], [ "Wang", "Yunhong", "" ], [ "Liu", "Yebin", "" ] ]
TITLE: GMTalker: Gaussian Mixture-based Audio-Driven Emotional Talking Video Portraits ABSTRACT: Synthesizing high-fidelity and emotion-controllable talking video portraits, with audio-lip sync, vivid expressions, realistic head poses, and eye blinks, has been an important and challenging task in recent years. Most existing methods struggle to achieve personalized and precise emotion control, smooth transitions between different emotion states, and the generation of diverse motions. To tackle these challenges, we present GMTalker, a Gaussian mixture-based emotional talking portrait generation framework. Specifically, we propose a Gaussian mixture-based expression generator that can construct a continuous and disentangled latent space, achieving more flexible emotion manipulation. Furthermore, we introduce a normalizing flow-based motion generator pretrained on a large dataset with a wide range of motions to generate diverse head poses, blinks, and eyeball movements. Finally, we propose a personalized emotion-guided head generator with an emotion mapping network that can synthesize high-fidelity and faithful emotional video portraits. Both quantitative and qualitative experiments demonstrate that our method outperforms previous methods in image quality, photo-realism, emotion accuracy, and motion diversity.
2401.13174
Dong Zhang
Dong Zhang, Pingcheng Dong, Long Chen, Kwang-Ting Cheng
Towards Complementary Knowledge Distillation for Efficient Dense Image Prediction
under submission
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been revealed that small efficient dense image prediction (EDIP) models, trained using the knowledge distillation (KD) framework, encounter two key challenges, namely maintaining boundary region completeness and preserving target region connectivity, despite their favorable capacity to recognize main object regions. In this work, we propose a complementary boundary and context distillation (BCD) method within the KD framework for EDIPs, which facilitates the targeted knowledge transfer from large accurate teacher models to compact efficient student models. Specifically, the boundary distillation component focuses on extracting explicit object-level semantic boundaries from the hierarchical feature maps of the backbone network to enhance the student model's mask quality in boundary regions. Concurrently, the context distillation component leverages self-relations as a bridge to transfer implicit pixel-level contexts from the teacher model to the student model, ensuring strong connectivity in target regions. Our proposed BCD method is specifically designed for EDIP tasks and is characterized by its simplicity and efficiency. Extensive experimental results across semantic segmentation, object detection, and instance segmentation on various representative datasets demonstrate that our method can outperform existing methods without requiring extra supervision or incurring increased inference costs, resulting in well-defined object boundaries and smooth connecting regions.
[ { "version": "v1", "created": "Wed, 24 Jan 2024 01:41:26 GMT" }, { "version": "v2", "created": "Mon, 2 Dec 2024 02:55:29 GMT" }, { "version": "v3", "created": "Thu, 27 Mar 2025 01:07:52 GMT" } ]
2025-03-28T00:00:00
[ [ "Zhang", "Dong", "" ], [ "Dong", "Pingcheng", "" ], [ "Chen", "Long", "" ], [ "Cheng", "Kwang-Ting", "" ] ]
TITLE: Towards Complementary Knowledge Distillation for Efficient Dense Image Prediction ABSTRACT: It has been revealed that small efficient dense image prediction (EDIP) models, trained using the knowledge distillation (KD) framework, encounter two key challenges, namely maintaining boundary region completeness and preserving target region connectivity, despite their favorable capacity to recognize main object regions. In this work, we propose a complementary boundary and context distillation (BCD) method within the KD framework for EDIPs, which facilitates the targeted knowledge transfer from large accurate teacher models to compact efficient student models. Specifically, the boundary distillation component focuses on extracting explicit object-level semantic boundaries from the hierarchical feature maps of the backbone network to enhance the student model's mask quality in boundary regions. Concurrently, the context distillation component leverages self-relations as a bridge to transfer implicit pixel-level contexts from the teacher model to the student model, ensuring strong connectivity in target regions. Our proposed BCD method is specifically designed for EDIP tasks and is characterized by its simplicity and efficiency. Extensive experimental results across semantic segmentation, object detection, and instance segmentation on various representative datasets demonstrate that our method can outperform existing methods without requiring extra supervision or incurring increased inference costs, resulting in well-defined object boundaries and smooth connecting regions.
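A minimal sketch of the self-relation ("context") distillation idea: match the pairwise pixel-similarity matrices of teacher and student feature maps rather than the raw features, which also sidesteps channel-dimension mismatch between the two networks; the exact loss form in the paper may differ.

```python
# Hedged sketch: context distillation via cosine self-relation matrices.
import torch
import torch.nn.functional as F

def self_relation(feat):                    # feat: (B, C, H, W)
    b, c, h, w = feat.shape
    f = F.normalize(feat.flatten(2), dim=1) # (B, C, HW), unit channel vector per pixel
    return f.transpose(1, 2) @ f            # (B, HW, HW) pairwise cosine similarities

def context_distill_loss(student_feat, teacher_feat):
    return F.mse_loss(self_relation(student_feat), self_relation(teacher_feat))

student = torch.randn(2, 64, 16, 16)
teacher = torch.randn(2, 256, 16, 16)       # channel dims may differ freely
print(context_distill_loss(student, teacher).item())
```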
2403.12922
Hanlin Wang
Hanlin Wang, Zhan Tong, Kecheng Zheng, Yujun Shen and Limin Wang
Contextual AD Narration with Interleaved Multimodal Sequence
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Audio Description (AD) task aims to generate descriptions of visual elements for visually impaired individuals to help them access long-form video content, like movies. With video features, text, a character bank and context information as inputs, the generated ADs are able to refer to the characters by name and provide reasonable, contextual descriptions that help the audience understand the storyline of the movie. To achieve this goal, we propose to leverage pre-trained foundation models through a simple and unified framework to generate ADs with interleaved multimodal sequences as input, termed Uni-AD. To enhance the alignment of features across various modalities with finer granularity, we introduce a simple and lightweight module that maps video features into the textual feature space. Moreover, we also propose a character-refinement module to provide more precise information by identifying the main characters who play more significant roles in the video context. With these unique designs, we further incorporate contextual information and a contrastive loss into our architecture to generate smoother and more contextually appropriate ADs. Experiments on multiple AD datasets show that Uni-AD performs well on AD generation, which demonstrates the effectiveness of our approach. Our code is available at: https://github.com/ant-research/UniAD.
[ { "version": "v1", "created": "Tue, 19 Mar 2024 17:27:55 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 14:51:25 GMT" } ]
2025-03-28T00:00:00
[ [ "Wang", "Hanlin", "" ], [ "Tong", "Zhan", "" ], [ "Zheng", "Kecheng", "" ], [ "Shen", "Yujun", "" ], [ "Wang", "Limin", "" ] ]
TITLE: Contextual AD Narration with Interleaved Multimodal Sequence ABSTRACT: The Audio Description (AD) task aims to generate descriptions of visual elements for visually impaired individuals to help them access long-form video content, like movies. With video features, text, a character bank, and context information as inputs, the generated ADs can refer to characters by name and provide reasonable, contextual descriptions that help the audience understand the storyline of the movie. To achieve this goal, we propose to leverage pre-trained foundation models through a simple and unified framework to generate ADs with an interleaved multimodal sequence as input, termed Uni-AD. To enhance the alignment of features across various modalities with finer granularity, we introduce a simple and lightweight module that maps video features into the textual feature space. Moreover, we propose a character-refinement module to provide more precise information by identifying the main characters who play more significant roles in the video context. With these unique designs, we further incorporate contextual information and a contrastive loss into our architecture to generate smoother and more contextually appropriate ADs. Experiments on multiple AD datasets show that Uni-AD performs well on AD generation, which demonstrates the effectiveness of our approach. Our code is available at: https://github.com/ant-research/UniAD.
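A hedged sketch of the "lightweight module that maps video features into the textual feature space" mentioned above; the single-linear-layer design and the dimensions are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class VideoToTextProjector(nn.Module):
    def __init__(self, video_dim: int = 768, text_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(video_dim, text_dim)

    def forward(self, video_feats: torch.Tensor) -> torch.Tensor:
        # (B, num_frames, video_dim) -> (B, num_frames, text_dim): projected
        # frame tokens can then be interleaved with text and character-bank
        # tokens before being fed to the language model.
        return self.proj(video_feats)
```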
2405.15474
Gongxi Zhu
Hanlin Gu, Gongxi Zhu, Jie Zhang, Xinyuan Zhao, Yuxing Han, Lixin Fan, Qiang Yang
Unlearning during Learning: An Efficient Federated Machine Unlearning Method
Accepted by IJCAI 2024
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent years, Federated Learning (FL) has garnered significant attention as a distributed machine learning paradigm. To facilitate the implementation of the right to be forgotten, the concept of federated machine unlearning (FMU) has also emerged. However, current FMU approaches often involve additional time-consuming steps and may not offer comprehensive unlearning capabilities, which renders them less practical in real FL scenarios. In this paper, we introduce FedAU, an innovative and efficient FMU framework aimed at overcoming these limitations. Specifically, FedAU incorporates a lightweight auxiliary unlearning module into the learning process and employs a straightforward linear operation to facilitate unlearning. This approach eliminates the requirement for extra time-consuming steps, rendering it well-suited for FL. Furthermore, FedAU exhibits remarkable versatility. It not only enables multiple clients to carry out unlearning tasks concurrently but also supports unlearning at various levels of granularity, including individual data samples, specific classes, and even at the client level. We conducted extensive experiments on MNIST, CIFAR10, and CIFAR100 datasets to evaluate the performance of FedAU. The results demonstrate that FedAU effectively achieves the desired unlearning effect while maintaining model accuracy. Our code is available at https://github.com/Liar-Mask/FedAU.
[ { "version": "v1", "created": "Fri, 24 May 2024 11:53:13 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 12:41:08 GMT" } ]
2025-03-28T00:00:00
[ [ "Gu", "Hanlin", "" ], [ "Zhu", "Gongxi", "" ], [ "Zhang", "Jie", "" ], [ "Zhao", "Xinyuan", "" ], [ "Han", "Yuxing", "" ], [ "Fan", "Lixin", "" ], [ "Yang", "Qiang", "" ] ]
TITLE: Unlearning during Learning: An Efficient Federated Machine Unlearning Method ABSTRACT: In recent years, Federated Learning (FL) has garnered significant attention as a distributed machine learning paradigm. To facilitate the implementation of the right to be forgotten, the concept of federated machine unlearning (FMU) has also emerged. However, current FMU approaches often involve additional time-consuming steps and may not offer comprehensive unlearning capabilities, which renders them less practical in real FL scenarios. In this paper, we introduce FedAU, an innovative and efficient FMU framework aimed at overcoming these limitations. Specifically, FedAU incorporates a lightweight auxiliary unlearning module into the learning process and employs a straightforward linear operation to facilitate unlearning. This approach eliminates the requirement for extra time-consuming steps, rendering it well-suited for FL. Furthermore, FedAU exhibits remarkable versatility. It not only enables multiple clients to carry out unlearning tasks concurrently but also supports unlearning at various levels of granularity, including individual data samples, specific classes, and even at the client level. We conducted extensive experiments on MNIST, CIFAR10, and CIFAR100 datasets to evaluate the performance of FedAU. The results demonstrate that FedAU effectively achieves the desired unlearning effect while maintaining model accuracy. Our code is available at https://github.com/Liar-Mask/FedAU.
2405.15668
Mahmoud Afifi
Abdelrahman Abdelhamed, Mahmoud Afifi, Alec Go
What Do You See? Enhancing Zero-Shot Image Classification with Multimodal Large Language Models
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have been effectively used for many computer vision tasks, including image classification. In this paper, we present a simple yet effective approach for zero-shot image classification using multimodal LLMs. Using multimodal LLMs, we generate comprehensive textual representations from input images. These textual representations are then utilized to generate fixed-dimensional features in a cross-modal embedding space. Subsequently, these features are fused together to perform zero-shot classification using a linear classifier. Our method does not require prompt engineering for each dataset; instead, we use a single, straightforward set of prompts across all datasets. We evaluated our method on several datasets and our results demonstrate its remarkable effectiveness, surpassing benchmark accuracy on multiple datasets. On average, for ten benchmarks, our method achieved an accuracy gain of 6.2 percentage points, with an increase of 6.8 percentage points on the ImageNet dataset, compared to prior methods re-evaluated with the same setup. Our findings highlight the potential of multimodal LLMs to enhance computer vision tasks such as zero-shot image classification, offering a significant improvement over traditional methods.
[ { "version": "v1", "created": "Fri, 24 May 2024 16:05:15 GMT" }, { "version": "v2", "created": "Thu, 3 Oct 2024 22:53:09 GMT" }, { "version": "v3", "created": "Sat, 8 Mar 2025 18:53:47 GMT" }, { "version": "v4", "created": "Thu, 27 Mar 2025 09:41:01 GMT" } ]
2025-03-28T00:00:00
[ [ "Abdelhamed", "Abdelrahman", "" ], [ "Afifi", "Mahmoud", "" ], [ "Go", "Alec", "" ] ]
TITLE: What Do You See? Enhancing Zero-Shot Image Classification with Multimodal Large Language Models ABSTRACT: Large language models (LLMs) have been effectively used for many computer vision tasks, including image classification. In this paper, we present a simple yet effective approach for zero-shot image classification using multimodal LLMs. Using multimodal LLMs, we generate comprehensive textual representations from input images. These textual representations are then utilized to generate fixed-dimensional features in a cross-modal embedding space. Subsequently, these features are fused together to perform zero-shot classification using a linear classifier. Our method does not require prompt engineering for each dataset; instead, we use a single, straightforward set of prompts across all datasets. We evaluated our method on several datasets and our results demonstrate its remarkable effectiveness, surpassing benchmark accuracy on multiple datasets. On average, for ten benchmarks, our method achieved an accuracy gain of 6.2 percentage points, with an increase of 6.8 percentage points on the ImageNet dataset, compared to prior methods re-evaluated with the same setup. Our findings highlight the potential of multimodal LLMs to enhance computer vision tasks such as zero-shot image classification, offering a significant improvement over traditional methods.
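A heavily simplified stand-in for the pipeline this record describes: an MLLM produces a textual description of the image, the text is embedded, and classification is done against embedded class names. The `describe_image` helper is a placeholder, the embedding model is illustrative, and matching against class-name embeddings is a simplification of the paper's linear-classifier fusion.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in text embedder

def describe_image(image_path: str) -> str:
    """Placeholder for a multimodal LLM call (e.g. a 'What do you see?' prompt)."""
    raise NotImplementedError("plug in your MLLM of choice")

def zero_shot_classify(image_path: str, class_names: list[str]) -> str:
    text = describe_image(image_path)
    img_emb = encoder.encode([text], normalize_embeddings=True)        # (1, D)
    cls_emb = encoder.encode([f"a photo of a {c}" for c in class_names],
                             normalize_embeddings=True)                # (K, D)
    return class_names[int(np.argmax(cls_emb @ img_emb.T))]
```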
2405.16439
Rohan Chandra
Rohan Chandra, Haresh Karnan, Negar Mehr, Peter Stone, Joydeep Biswas
Multi-Agent Inverse Reinforcement Learning in Real World Unstructured Pedestrian Crowds
null
null
null
null
cs.RO cs.AI cs.LG cs.MA
http://creativecommons.org/licenses/by/4.0/
Social robot navigation in crowded public spaces such as university campuses, restaurants, grocery stores, and hospitals is an increasingly important area of research. One of the core strategies for achieving this goal is to understand humans' intent--underlying psychological factors that govern their motion--by learning their reward functions, typically via inverse reinforcement learning (IRL). Despite significant progress in IRL, learning reward functions of multiple agents simultaneously in dense unstructured pedestrian crowds has remained intractable due to the nature of the tightly coupled social interactions that occur in these scenarios, \textit{e.g.}, passing, intersections, swerving, weaving, etc. In this paper, we present a new multi-agent maximum entropy inverse reinforcement learning algorithm for real world unstructured pedestrian crowds. Key to our approach is a simple but effective mathematical trick, which we call the tractability-rationality trade-off trick, that achieves tractability at the cost of a slight reduction in accuracy. We compare our approach to the classical single-agent MaxEnt IRL as well as state-of-the-art trajectory prediction methods on several datasets including the ETH, UCY, SCAND, JRDB, and a new dataset, called Speedway, collected at a busy intersection on a University campus focusing on dense, complex agent interactions. Our key findings show that, on the dense Speedway dataset, our approach ranks 1st among top 7 baselines with >2X improvement over single-agent IRL, and is competitive with state-of-the-art large transformer-based encoder-decoder models on sparser datasets such as ETH/UCY (ranks 3rd among top 7 baselines).
[ { "version": "v1", "created": "Sun, 26 May 2024 05:48:21 GMT" }, { "version": "v2", "created": "Sun, 15 Dec 2024 03:48:49 GMT" }, { "version": "v3", "created": "Wed, 26 Mar 2025 21:19:58 GMT" } ]
2025-03-28T00:00:00
[ [ "Chandra", "Rohan", "" ], [ "Karnan", "Haresh", "" ], [ "Mehr", "Negar", "" ], [ "Stone", "Peter", "" ], [ "Biswas", "Joydeep", "" ] ]
TITLE: Multi-Agent Inverse Reinforcement Learning in Real World Unstructured Pedestrian Crowds ABSTRACT: Social robot navigation in crowded public spaces such as university campuses, restaurants, grocery stores, and hospitals is an increasingly important area of research. One of the core strategies for achieving this goal is to understand humans' intent--underlying psychological factors that govern their motion--by learning their reward functions, typically via inverse reinforcement learning (IRL). Despite significant progress in IRL, learning reward functions of multiple agents simultaneously in dense unstructured pedestrian crowds has remained intractable due to the nature of the tightly coupled social interactions that occur in these scenarios, \textit{e.g.}, passing, intersections, swerving, weaving, etc. In this paper, we present a new multi-agent maximum entropy inverse reinforcement learning algorithm for real world unstructured pedestrian crowds. Key to our approach is a simple but effective mathematical trick, which we call the tractability-rationality trade-off trick, that achieves tractability at the cost of a slight reduction in accuracy. We compare our approach to the classical single-agent MaxEnt IRL as well as state-of-the-art trajectory prediction methods on several datasets including the ETH, UCY, SCAND, JRDB, and a new dataset, called Speedway, collected at a busy intersection on a University campus focusing on dense, complex agent interactions. Our key findings show that, on the dense Speedway dataset, our approach ranks 1st among top 7 baselines with >2X improvement over single-agent IRL, and is competitive with state-of-the-art large transformer-based encoder-decoder models on sparser datasets such as ETH/UCY (ranks 3rd among top 7 baselines).
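For orientation, the classical single-agent MaxEnt IRL update that this record uses as a baseline is sketched below; the multi-agent tractability trick itself is not reproduced, and the linear reward features and learning rate are assumptions.

```python
import numpy as np

def maxent_irl_step(theta, expert_feat_exp, policy_feat_exp, lr=0.1):
    """One gradient ascent step on the MaxEnt IRL log-likelihood.

    theta:            (D,) linear reward weights, r(s) = theta @ phi(s)
    expert_feat_exp:  (D,) empirical feature expectations of demonstrations
    policy_feat_exp:  (D,) feature expectations under the current soft-optimal policy
    """
    grad = expert_feat_exp - policy_feat_exp   # likelihood gradient
    return theta + lr * grad

theta = np.zeros(8)
# theta = maxent_irl_step(theta, mu_expert, mu_policy)  # iterate to convergence
```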
2405.17712
Mohammad Hasan Dr.
Ahatsham Hayat and Mohammad Rashedul Hasan
A Context-Aware Approach for Enhancing Data Imputation with Pre-trained Language Models
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper presents a novel approach named \textbf{C}ontextually \textbf{R}elevant \textbf{I}mputation leveraging pre-trained \textbf{L}anguage \textbf{M}odels (\textbf{CRILM}) for handling missing data in tabular datasets. Instead of relying on traditional numerical estimations, CRILM uses pre-trained language models (LMs) to create contextually relevant descriptors for missing values. This method aligns datasets with LMs' strengths, allowing large LMs to generate these descriptors and small LMs to be fine-tuned on the enriched datasets for enhanced downstream task performance. Our evaluations demonstrate CRILM's superior performance and robustness across MCAR, MAR, and challenging MNAR scenarios, with up to a 10\% improvement over the best-performing baselines. By mitigating biases, particularly in MNAR settings, CRILM improves downstream task performance and offers a cost-effective solution for resource-constrained environments.
[ { "version": "v1", "created": "Tue, 28 May 2024 00:08:29 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 16:22:43 GMT" } ]
2025-03-28T00:00:00
[ [ "Hayat", "Ahatsham", "" ], [ "Hasan", "Mohammad Rashedul", "" ] ]
TITLE: A Context-Aware Approach for Enhancing Data Imputation with Pre-trained Language Models ABSTRACT: This paper presents a novel approach named \textbf{C}ontextually \textbf{R}elevant \textbf{I}mputation leveraging pre-trained \textbf{L}anguage \textbf{M}odels (\textbf{CRILM}) for handling missing data in tabular datasets. Instead of relying on traditional numerical estimations, CRILM uses pre-trained language models (LMs) to create contextually relevant descriptors for missing values. This method aligns datasets with LMs' strengths, allowing large LMs to generate these descriptors and small LMs to be fine-tuned on the enriched datasets for enhanced downstream task performance. Our evaluations demonstrate CRILM's superior performance and robustness across MCAR, MAR, and challenging MNAR scenarios, with up to a 10\% improvement over the best-performing baselines. By mitigating biases, particularly in MNAR settings, CRILM improves downstream task performance and offers a cost-effective solution for resource-constrained environments.
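A toy illustration of the core idea in this record: a missing numeric value is replaced by a contextually relevant textual descriptor before the enriched rows are handed to a language model. The descriptor wording is an assumption, not the paper's exact template.

```python
import pandas as pd

def row_to_text(row: pd.Series) -> str:
    parts = []
    for col, val in row.items():
        if pd.isna(val):
            parts.append(f"{col} is not reported for this record")  # descriptor
        else:
            parts.append(f"{col} is {val}")
    return "; ".join(parts)

df = pd.DataFrame({"age": [34, None], "income": [52000, 61000]})
print(row_to_text(df.iloc[1]))
# e.g. "age is not reported for this record; income is 61000.0"
```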
2406.02166
Saierdaer Yusuyin
Saierdaer Yusuyin, Te Ma, Hao Huang, Wenbo Zhao, Zhijian Ou
Whistle: Data-Efficient Multilingual and Crosslingual Speech Recognition via Weakly Phonetic Supervision
Accepted by IEEE-TASLP
null
10.1109/TASLPRO.2025.3550683
null
cs.SD cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Three approaches exist for multilingual and crosslingual automatic speech recognition (MCL-ASR): supervised pretraining with phonetic transcription, supervised pretraining with graphemic transcription, and self-supervised pretraining. We find that pretraining with phonetic supervision has been underappreciated so far for MCL-ASR, while conceptually it is more advantageous for information sharing between different languages. This paper explores the approach of pretraining with weakly phonetic supervision towards data-efficient MCL-ASR, which is called Whistle. We relax the requirement of gold-standard human-validated phonetic transcripts, and obtain International Phonetic Alphabet (IPA) based transcription by leveraging the LanguageNet grapheme-to-phoneme (G2P) models. We construct a common experimental setup based on the CommonVoice dataset, called CV-Lang10, with 10 seen languages and 2 unseen languages. A set of experiments is conducted on CV-Lang10 to compare, as fairly as possible, the three approaches under the common setup for MCL-ASR. Experiments demonstrate the advantages of phoneme-based models (Whistle) for MCL-ASR, in terms of speech recognition for seen languages, crosslingual performance for unseen languages with different amounts of few-shot data, overcoming catastrophic forgetting, and training efficiency. It is found that when training data is more limited, phoneme supervision can achieve better results compared to subword supervision and self-supervision, thereby providing higher data-efficiency. To support reproducibility and promote future research along this direction, we release the code, models and data for the entire pipeline of Whistle at https://github.com/thu-spmi/CAT/tree/master/egs/cv-lang10.
[ { "version": "v1", "created": "Tue, 4 Jun 2024 09:56:05 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 16:38:29 GMT" } ]
2025-03-28T00:00:00
[ [ "Yusuyin", "Saierdaer", "" ], [ "Ma", "Te", "" ], [ "Huang", "Hao", "" ], [ "Zhao", "Wenbo", "" ], [ "Ou", "Zhijian", "" ] ]
TITLE: Whistle: Data-Efficient Multilingual and Crosslingual Speech Recognition via Weakly Phonetic Supervision ABSTRACT: Three approaches exist for multilingual and crosslingual automatic speech recognition (MCL-ASR): supervised pretraining with phonetic transcription, supervised pretraining with graphemic transcription, and self-supervised pretraining. We find that pretraining with phonetic supervision has been underappreciated so far for MCL-ASR, while conceptually it is more advantageous for information sharing between different languages. This paper explores the approach of pretraining with weakly phonetic supervision towards data-efficient MCL-ASR, which is called Whistle. We relax the requirement of gold-standard human-validated phonetic transcripts, and obtain International Phonetic Alphabet (IPA) based transcription by leveraging the LanguageNet grapheme-to-phoneme (G2P) models. We construct a common experimental setup based on the CommonVoice dataset, called CV-Lang10, with 10 seen languages and 2 unseen languages. A set of experiments is conducted on CV-Lang10 to compare, as fairly as possible, the three approaches under the common setup for MCL-ASR. Experiments demonstrate the advantages of phoneme-based models (Whistle) for MCL-ASR, in terms of speech recognition for seen languages, crosslingual performance for unseen languages with different amounts of few-shot data, overcoming catastrophic forgetting, and training efficiency. It is found that when training data is more limited, phoneme supervision can achieve better results compared to subword supervision and self-supervision, thereby providing higher data-efficiency. To support reproducibility and promote future research along this direction, we release the code, models and data for the entire pipeline of Whistle at https://github.com/thu-spmi/CAT/tree/master/egs/cv-lang10.
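A toy illustration of weakly phonetic supervision: a grapheme-to-phoneme mapping converts orthographic words into IPA symbol sequences that serve as training targets. The tiny dictionary below is fabricated for illustration; the paper uses LanguageNet G2P models rather than a hand-written mapping.

```python
toy_g2p = {"hello": ["h", "ə", "l", "oʊ"], "world": ["w", "ɜː", "l", "d"]}

def transcribe(sentence: str) -> list[str]:
    phones = []
    for word in sentence.lower().split():
        phones.extend(toy_g2p.get(word, ["<unk>"]))  # unseen words map to <unk>
    return phones

print(transcribe("Hello world"))  # ['h', 'ə', 'l', 'oʊ', 'w', 'ɜː', 'l', 'd']
```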
2407.03314
Zhantao Yang
Zhantao Yang, Ruili Feng, Keyu Yan, Huangji Wang, Zhicai Wang, Shangwen Zhu, Han Zhang, Jie Xiao, Pingyu Wu, Kai Zhu, Jixuan Chen, Chen-Wei Xie, Yue Yang, Hongyang Zhang, Yu Liu, Fan Cheng
BACON: Improving Clarity of Image Captions via Bag-of-Concept Graphs
null
null
null
null
cs.CV cs.CL cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advancements in large Vision-Language Models have brought precise, accurate image captioning, vital for advancing multi-modal image understanding and processing. Yet these captions often carry lengthy, intertwined contexts that are difficult to parse and frequently overlook essential cues, posing a great barrier for models like GroundingDINO and SDXL, which lack the strong text encoding and syntax analysis needed to fully leverage dense captions. To address this, we propose BACON, a prompting method that breaks down VLM-generated captions into disentangled, structured elements such as objects, relationships, styles, and themes. This approach not only minimizes confusion from handling complex contexts but also allows for efficient transfer into a JSON dictionary, enabling models without linguistic processing capabilities to easily access key information. We annotated 100,000 image-caption pairs using BACON with GPT-4V and trained an LLaVA captioner on this dataset, enabling it to produce BACON-style captions without relying on costly GPT-4V. Evaluations of overall quality, precision, and recall, as well as user studies, demonstrate that the resulting caption model consistently outperforms other SOTA VLM models in generating high-quality captions. In addition, we show that BACON-style captions exhibit better clarity when applied to various models, enabling them to accomplish previously unattainable tasks or surpass existing SOTA solutions without training. For example, BACON-style captions help GroundingDINO achieve 1.51x higher recall scores on open-vocabulary object detection tasks compared to leading methods.
[ { "version": "v1", "created": "Wed, 3 Jul 2024 17:55:27 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 17:06:25 GMT" } ]
2025-03-28T00:00:00
[ [ "Yang", "Zhantao", "" ], [ "Feng", "Ruili", "" ], [ "Yan", "Keyu", "" ], [ "Wang", "Huangji", "" ], [ "Wang", "Zhicai", "" ], [ "Zhu", "Shangwen", "" ], [ "Zhang", "Han", "" ], [ "Xiao", "Jie", "" ], [ "Wu", "Pingyu", "" ], [ "Zhu", "Kai", "" ], [ "Chen", "Jixuan", "" ], [ "Xie", "Chen-Wei", "" ], [ "Yang", "Yue", "" ], [ "Zhang", "Hongyang", "" ], [ "Liu", "Yu", "" ], [ "Cheng", "Fan", "" ] ]
TITLE: BACON: Improving Clarity of Image Captions via Bag-of-Concept Graphs ABSTRACT: Advancements in large Vision-Language Models have brought precise, accurate image captioning, vital for advancing multi-modal image understanding and processing. Yet these captions often carry lengthy, intertwined contexts that are difficult to parse and frequently overlook essential cues, posing a great barrier for models like GroundingDINO and SDXL, which lack the strong text encoding and syntax analysis needed to fully leverage dense captions. To address this, we propose BACON, a prompting method that breaks down VLM-generated captions into disentangled, structured elements such as objects, relationships, styles, and themes. This approach not only minimizes confusion from handling complex contexts but also allows for efficient transfer into a JSON dictionary, enabling models without linguistic processing capabilities to easily access key information. We annotated 100,000 image-caption pairs using BACON with GPT-4V and trained an LLaVA captioner on this dataset, enabling it to produce BACON-style captions without relying on costly GPT-4V. Evaluations of overall quality, precision, and recall, as well as user studies, demonstrate that the resulting caption model consistently outperforms other SOTA VLM models in generating high-quality captions. In addition, we show that BACON-style captions exhibit better clarity when applied to various models, enabling them to accomplish previously unattainable tasks or surpass existing SOTA solutions without training. For example, BACON-style captions help GroundingDINO achieve 1.51x higher recall scores on open-vocabulary object detection tasks compared to leading methods.
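A hypothetical example of the disentangled, JSON-style caption structure this record describes; the exact keys used by BACON may differ.

```python
bacon_caption = {
    "objects": [
        {"name": "dog", "attributes": ["brown", "running"]},
        {"name": "frisbee", "attributes": ["red"]},
    ],
    "relationships": [
        {"subject": "dog", "predicate": "chasing", "object": "frisbee"}
    ],
    "style": "outdoor photograph, shallow depth of field",
    "theme": "play in the park",
}
# Models without strong text parsing (e.g. detectors) can read
# bacon_caption["objects"] directly instead of mining a free-form caption.
```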
2407.05608
Xiaoxiao Miao
Xiaoxiao Miao, Ruijie Tao, Chang Zeng, Xin Wang
A Benchmark for Multi-speaker Anonymization
Accepted by TIFS
null
null
null
cs.SD cs.CL eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
Privacy-preserving voice protection approaches primarily suppress privacy-related information derived from paralinguistic attributes while preserving the linguistic content. Existing solutions focus particularly on single-speaker scenarios. However, they lack practicality for real-world applications, i.e., multi-speaker scenarios. In this paper, we present an initial attempt to provide a multi-speaker anonymization benchmark by defining the task and evaluation protocol, proposing benchmarking solutions, and discussing the privacy leakage of overlapping conversations. The proposed benchmark solutions are based on a cascaded system that integrates spectral-clustering-based speaker diarization and disentanglement-based speaker anonymization using a selection-based anonymizer. To improve utility, the benchmark solutions are further enhanced by two conversation-level speaker vector anonymization methods. The first method minimizes the differential similarity across speaker pairs in the original and anonymized conversations, which maintains original speaker relationships in the anonymized version. The other minimizes the aggregated similarity across anonymized speakers, which achieves better differentiation between speakers. Experiments conducted on both non-overlapping simulated and real-world datasets demonstrate the effectiveness of the multi-speaker anonymization system with the proposed speaker anonymizers. Additionally, we analyzed overlapping speech with regard to privacy leakage and provided potential solutions.
[ { "version": "v1", "created": "Mon, 8 Jul 2024 04:48:43 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 06:27:57 GMT" } ]
2025-03-28T00:00:00
[ [ "Miao", "Xiaoxiao", "" ], [ "Tao", "Ruijie", "" ], [ "Zeng", "Chang", "" ], [ "Wang", "Xin", "" ] ]
TITLE: A Benchmark for Multi-speaker Anonymization ABSTRACT: Privacy-preserving voice protection approaches primarily suppress privacy-related information derived from paralinguistic attributes while preserving the linguistic content. Existing solutions focus particularly on single-speaker scenarios. However, they lack practicality for real-world applications, i.e., multi-speaker scenarios. In this paper, we present an initial attempt to provide a multi-speaker anonymization benchmark by defining the task and evaluation protocol, proposing benchmarking solutions, and discussing the privacy leakage of overlapping conversations. The proposed benchmark solutions are based on a cascaded system that integrates spectral-clustering-based speaker diarization and disentanglement-based speaker anonymization using a selection-based anonymizer. To improve utility, the benchmark solutions are further enhanced by two conversation-level speaker vector anonymization methods. The first method minimizes the differential similarity across speaker pairs in the original and anonymized conversations, which maintains original speaker relationships in the anonymized version. The other minimizes the aggregated similarity across anonymized speakers, which achieves better differentiation between speakers. Experiments conducted on both non-overlapping simulated and real-world datasets demonstrate the effectiveness of the multi-speaker anonymization system with the proposed speaker anonymizers. Additionally, we analyzed overlapping speech with regard to privacy leakage and provided potential solutions.
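A sketch of the first conversation-level objective named above: keep pairwise speaker similarities in the anonymized conversation close to those in the original. The cosine-similarity form and squared penalty are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def differential_similarity_loss(orig_vecs: torch.Tensor,
                                 anon_vecs: torch.Tensor) -> torch.Tensor:
    """orig_vecs, anon_vecs: (num_speakers, D) speaker embeddings."""
    def pairwise_cos(x):
        x = F.normalize(x, dim=-1)
        return x @ x.T                    # (num_speakers, num_speakers)
    # preserve the original speaker-relationship structure after anonymization
    return F.mse_loss(pairwise_cos(anon_vecs), pairwise_cos(orig_vecs))
```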
2407.11828
Julien Hauret
Julien Hauret and Malo Olivier and Thomas Joubaud and Christophe Langrenne and Sarah Poir\'ee and V\'eronique Zimpfer and \'Eric Bavu
Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors
23 pages, 42 figures
null
null
null
eess.AS cs.LG
http://creativecommons.org/licenses/by/4.0/
Vibravox is a dataset compliant with the General Data Protection Regulation (GDPR) containing audio recordings using five different body-conduction audio sensors: two in-ear microphones, two bone conduction vibration pickups, and a laryngophone. The dataset also includes audio data from an airborne microphone used as a reference. The Vibravox corpus contains 45 hours per sensor of speech samples and physiological sounds recorded by 188 participants under different acoustic conditions imposed by a high order ambisonics 3D spatializer. Annotations about the recording conditions and linguistic transcriptions are also included in the corpus. We conducted a series of experiments on various speech-related tasks, including speech recognition, speech enhancement, and speaker verification. These experiments were carried out using state-of-the-art models to evaluate and compare their performances on signals captured by the different audio sensors offered by the Vibravox dataset, with the aim of gaining a better grasp of their individual characteristics.
[ { "version": "v1", "created": "Tue, 16 Jul 2024 15:16:10 GMT" }, { "version": "v2", "created": "Wed, 17 Jul 2024 08:09:01 GMT" }, { "version": "v3", "created": "Fri, 21 Feb 2025 17:42:56 GMT" }, { "version": "v4", "created": "Thu, 27 Mar 2025 01:13:48 GMT" } ]
2025-03-28T00:00:00
[ [ "Hauret", "Julien", "" ], [ "Olivier", "Malo", "" ], [ "Joubaud", "Thomas", "" ], [ "Langrenne", "Christophe", "" ], [ "Poirée", "Sarah", "" ], [ "Zimpfer", "Véronique", "" ], [ "Bavu", "Éric", "" ] ]
TITLE: Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors ABSTRACT: Vibravox is a dataset compliant with the General Data Protection Regulation (GDPR) containing audio recordings using five different body-conduction audio sensors: two in-ear microphones, two bone conduction vibration pickups, and a laryngophone. The dataset also includes audio data from an airborne microphone used as a reference. The Vibravox corpus contains 45 hours per sensor of speech samples and physiological sounds recorded by 188 participants under different acoustic conditions imposed by a high order ambisonics 3D spatializer. Annotations about the recording conditions and linguistic transcriptions are also included in the corpus. We conducted a series of experiments on various speech-related tasks, including speech recognition, speech enhancement, and speaker verification. These experiments were carried out using state-of-the-art models to evaluate and compare their performances on signals captured by the different audio sensors offered by the Vibravox dataset, with the aim of gaining a better grasp of their individual characteristics.
2408.00279
Yesheng Zhang
Yesheng Zhang, Shuhan Shen, Xu Zhao
MESA: Effective Matching Redundancy Reduction by Semantic Area Segmentation
18 pages + suppl
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose MESA and DMESA as novel feature matching methods, which utilize Segment Anything Model (SAM) to effectively mitigate matching redundancy. The key insight of our methods is to establish implicit-semantic area matching prior to point matching, based on advanced image understanding of SAM. Then, informative area matches with consistent internal semantics are able to undergo dense feature comparison, facilitating precise inside-area point matching. Specifically, MESA adopts a sparse matching framework and first obtains candidate areas from SAM results through a novel Area Graph (AG). Then, area matching among the candidates is formulated as graph energy minimization and solved by graphical models derived from AG. To address the efficiency issue of MESA, we further propose DMESA as its dense counterpart, applying a dense matching framework. After candidate areas are identified by AG, DMESA establishes area matches through generating dense matching distributions. The distributions are produced from off-the-shelf patch matching utilizing the Gaussian Mixture Model and refined via Expectation Maximization. With less repetitive computation, DMESA showcases a speed improvement of nearly five times compared to MESA, while maintaining competitive accuracy. Our methods are extensively evaluated on five datasets encompassing indoor and outdoor scenes. The results illustrate consistent performance improvements from our methods for five distinct point matching baselines across all datasets. Furthermore, our methods exhibit promising generalization and improved robustness against image resolution variations. The code is publicly available at https://github.com/Easonyesheng/A2PM-MESA.
[ { "version": "v1", "created": "Thu, 1 Aug 2024 04:39:36 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 08:11:57 GMT" } ]
2025-03-28T00:00:00
[ [ "Zhang", "Yesheng", "" ], [ "Shen", "Shuhan", "" ], [ "Zhao", "Xu", "" ] ]
TITLE: MESA: Effective Matching Redundancy Reduction by Semantic Area Segmentation ABSTRACT: We propose MESA and DMESA as novel feature matching methods, which utilize Segment Anything Model (SAM) to effectively mitigate matching redundancy. The key insight of our methods is to establish implicit-semantic area matching prior to point matching, based on advanced image understanding of SAM. Then, informative area matches with consistent internal semantics are able to undergo dense feature comparison, facilitating precise inside-area point matching. Specifically, MESA adopts a sparse matching framework and first obtains candidate areas from SAM results through a novel Area Graph (AG). Then, area matching among the candidates is formulated as graph energy minimization and solved by graphical models derived from AG. To address the efficiency issue of MESA, we further propose DMESA as its dense counterpart, applying a dense matching framework. After candidate areas are identified by AG, DMESA establishes area matches through generating dense matching distributions. The distributions are produced from off-the-shelf patch matching utilizing the Gaussian Mixture Model and refined via Expectation Maximization. With less repetitive computation, DMESA showcases a speed improvement of nearly five times compared to MESA, while maintaining competitive accuracy. Our methods are extensively evaluated on five datasets encompassing indoor and outdoor scenes. The results illustrate consistent performance improvements from our methods for five distinct point matching baselines across all datasets. Furthermore, our methods exhibit promising generalization and improved robustness against image resolution variations. The code is publicly available at https://github.com/Easonyesheng/A2PM-MESA.
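A schematic stand-in for the area-matching step: SAM-derived areas are summarized by descriptors and matched by minimizing a pairwise cost, here with the Hungarian algorithm in place of the paper's graphical-model energy minimization over the Area Graph. Descriptors are assumed precomputed.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_areas(desc_a: np.ndarray, desc_b: np.ndarray):
    """desc_a: (Na, D), desc_b: (Nb, D) area descriptors from the two images."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T                        # unary energy: 1 - cosine sim
    rows, cols = linear_sum_assignment(cost)    # minimum-cost area assignment
    return list(zip(rows.tolist(), cols.tolist()))
```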
2408.09769
Enrico del Re
Enrico Del Re, Amirhesam Aghanouri, Cristina Olaverri-Monreal
Integrating Naturalistic Insights in Objective Multi-Vehicle Safety Framework
null
null
10.1109/ITSC58415.2024.10920258
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As autonomous vehicle technology advances, the precise assessment of safety in complex traffic scenarios becomes crucial, especially in mixed-vehicle environments where human perception of safety must be taken into account. This paper presents a framework designed for assessing traffic safety in multi-vehicle situations, facilitating the simultaneous utilization of diverse objective safety metrics. Additionally, it allows the integration of subjective perception of safety by adjusting model parameters. The framework was applied to evaluate various model configurations in car-following scenarios on a highway, utilizing naturalistic driving datasets. The evaluation of the model showed outstanding performance, particularly when integrating multiple objective safety measures. Furthermore, the performance was significantly enhanced when considering all surrounding vehicles.
[ { "version": "v1", "created": "Mon, 19 Aug 2024 07:58:10 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 12:09:05 GMT" } ]
2025-03-28T00:00:00
[ [ "Del Re", "Enrico", "" ], [ "Aghanouri", "Amirhesam", "" ], [ "Olaverri-Monreal", "Cristina", "" ] ]
TITLE: Integrating Naturalistic Insights in Objective Multi-Vehicle Safety Framework ABSTRACT: As autonomous vehicle technology advances, the precise assessment of safety in complex traffic scenarios becomes crucial, especially in mixed-vehicle environments where human perception of safety must be taken into account. This paper presents a framework designed for assessing traffic safety in multi-vehicle situations, facilitating the simultaneous utilization of diverse objective safety metrics. Additionally, it allows the integration of subjective perception of safety by adjusting model parameters. The framework was applied to evaluate various model configurations in car-following scenarios on a highway, utilizing naturalistic driving datasets. The evaluation of the model showed outstanding performance, particularly when integrating multiple objective safety measures. Furthermore, the performance was significantly enhanced when considering all surrounding vehicles.
2408.09833
Mohamed Sabry MSc
Mohamed Sabry, Walter Morales-Alvarez and Cristina Olaverri-Monreal
Automated Vehicle Driver Monitoring Dataset from Real-World Scenarios
6 pages
null
10.1109/ITSC58415.2024.10920048
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From SAE Level 3 of automation onwards, drivers are allowed to engage in activities that are not directly related to driving during their travel. However, in level 3, a misunderstanding of the capabilities of the system might lead drivers to engage in secondary tasks, which could impair their ability to react to challenging traffic situations. Anticipating driver activity allows for early detection of risky behaviors, to prevent accidents. To be able to predict the driver activity, a Deep Learning network needs to be trained on a dataset. However, the use of datasets based on simulation for training and the migration to real-world data for prediction has proven to be suboptimal. Hence, this paper presents a real-world driver activity dataset, openly accessible on IEEE Dataport, which encompasses various activities that occur in autonomous driving scenarios under various illumination and weather conditions. Results from the training process showed that the dataset provides an excellent benchmark for implementing models for driver activity recognition.
[ { "version": "v1", "created": "Mon, 19 Aug 2024 09:29:00 GMT" }, { "version": "v2", "created": "Wed, 26 Mar 2025 22:41:51 GMT" } ]
2025-03-28T00:00:00
[ [ "Sabry", "Mohamed", "" ], [ "Morales-Alvarez", "Walter", "" ], [ "Olaverri-Monreal", "Cristina", "" ] ]
TITLE: Automated Vehicle Driver Monitoring Dataset from Real-World Scenarios ABSTRACT: From SAE Level 3 of automation onwards, drivers are allowed to engage in activities that are not directly related to driving during their travel. However, in level 3, a misunderstanding of the capabilities of the system might lead drivers to engage in secondary tasks, which could impair their ability to react to challenging traffic situations. Anticipating driver activity allows for early detection of risky behaviors, to prevent accidents. To be able to predict the driver activity, a Deep Learning network needs to be trained on a dataset. However, the use of datasets based on simulation for training and the migration to real-world data for prediction has proven to be suboptimal. Hence, this paper presents a real-world driver activity dataset, openly accessible on IEEE Dataport, which encompasses various activities that occur in autonomous driving scenarios under various illumination and weather conditions. Results from the training process showed that the dataset provides an excellent benchmark for implementing models for driver activity recognition.
2408.12691
Pooya Ashtari
Pooya Ashtari, Pourya Behmandpoor, Fateme Nateghi Haredasht, Jonathan H. Chen, Panagiotis Patrinos and Sabine Van Huffel
Quantization-aware Matrix Factorization for Low Bit Rate Image Compression
22 pages, 6 figures, 1 table, 1 algorithm
null
null
null
eess.IV cs.CV math.OC
http://creativecommons.org/licenses/by/4.0/
Lossy image compression is essential for efficient transmission and storage. Traditional compression methods mainly rely on discrete cosine transform (DCT) or singular value decomposition (SVD), both of which represent image data in continuous domains and, therefore, necessitate carefully designed quantizers. Notably, these methods consider quantization as a separate step, where quantization errors cannot be incorporated into the compression process. The sensitivity of these methods, especially SVD-based ones, to quantization errors significantly degrades reconstruction quality. To address this issue, we introduce a quantization-aware matrix factorization (QMF) to develop a novel lossy image compression method. QMF provides a low-rank representation of the image data as a product of two smaller factor matrices, with elements constrained to bounded integer values, thereby effectively integrating quantization with low-rank approximation. We propose an efficient, provably convergent iterative algorithm for QMF using a block coordinate descent (BCD) scheme, with subproblems having closed-form solutions. Our experiments on the Kodak and CLIC 2024 datasets demonstrate that our QMF compression method consistently outperforms JPEG at low bit rates below 0.25 bits per pixel (bpp) and remains comparable at higher bit rates. We also assessed our method's capability to preserve visual semantics by evaluating an ImageNet pre-trained classifier on compressed images. Remarkably, our method improved top-1 accuracy by over 5 percentage points compared to JPEG at bit rates under 0.25 bpp. The project is available at https://github.com/pashtari/lrf .
[ { "version": "v1", "created": "Thu, 22 Aug 2024 19:08:08 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 14:26:49 GMT" } ]
2025-03-28T00:00:00
[ [ "Ashtari", "Pooya", "" ], [ "Behmandpoor", "Pourya", "" ], [ "Haredasht", "Fateme Nateghi", "" ], [ "Chen", "Jonathan H.", "" ], [ "Patrinos", "Panagiotis", "" ], [ "Van Huffel", "Sabine", "" ] ]
TITLE: Quantization-aware Matrix Factorization for Low Bit Rate Image Compression ABSTRACT: Lossy image compression is essential for efficient transmission and storage. Traditional compression methods mainly rely on discrete cosine transform (DCT) or singular value decomposition (SVD), both of which represent image data in continuous domains and, therefore, necessitate carefully designed quantizers. Notably, these methods consider quantization as a separate step, where quantization errors cannot be incorporated into the compression process. The sensitivity of these methods, especially SVD-based ones, to quantization errors significantly degrades reconstruction quality. To address this issue, we introduce a quantization-aware matrix factorization (QMF) to develop a novel lossy image compression method. QMF provides a low-rank representation of the image data as a product of two smaller factor matrices, with elements constrained to bounded integer values, thereby effectively integrating quantization with low-rank approximation. We propose an efficient, provably convergent iterative algorithm for QMF using a block coordinate descent (BCD) scheme, with subproblems having closed-form solutions. Our experiments on the Kodak and CLIC 2024 datasets demonstrate that our QMF compression method consistently outperforms JPEG at low bit rates below 0.25 bits per pixel (bpp) and remains comparable at higher bit rates. We also assessed our method's capability to preserve visual semantics by evaluating an ImageNet pre-trained classifier on compressed images. Remarkably, our method improved top-1 accuracy by over 5 percentage points compared to JPEG at bit rates under 0.25 bpp. The project is available at https://github.com/pashtari/lrf .
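A simplified sketch of quantization-aware factorization as this record describes it: alternate least-squares updates of the two factors, projecting each back onto a bounded integer grid. The paper's BCD has closed-form subproblem solutions; this round-after-solve variant is an illustrative approximation only, with rank, bound, and iteration count as assumptions.

```python
import numpy as np

def qmf(X, rank=16, bound=127, iters=30, seed=0):
    """Approximate X (m x n) as U @ V.T with bounded-integer factors."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.integers(-bound, bound + 1, size=(m, rank)).astype(float)
    V = rng.integers(-bound, bound + 1, size=(n, rank)).astype(float)
    for _ in range(iters):
        U = X @ V @ np.linalg.pinv(V.T @ V)        # least-squares update of U
        U = np.clip(np.round(U), -bound, bound)    # project onto integer grid
        V = X.T @ U @ np.linalg.pinv(U.T @ U)      # least-squares update of V
        V = np.clip(np.round(V), -bound, bound)
    return U.astype(np.int16), V.astype(np.int16)  # small-int factors to store

# U, V = qmf(image_channel); reconstruction = U @ V.T
```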
2408.16863
Robert Mahari
Alexandre Mojon, Robert Mahari, Sandro Claudio Lera
Data-Driven Law Firm Rankings to Reduce Information Asymmetry in Legal Disputes
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Selecting capable counsel can shape the outcome of litigation, yet evaluating law firm performance remains challenging. Widely used rankings prioritize prestige, size, and revenue rather than empirical litigation outcomes, offering little practical guidance. To address this gap, we build on the Bradley-Terry model and introduce a new ranking framework that treats each lawsuit as a competitive game between plaintiff and defendant law firms. Leveraging a newly constructed dataset of 60,540 U.S. civil lawsuits involving 54,541 law firms, our findings show that existing reputation-based rankings correlate poorly with actual litigation success, whereas our outcome-based ranking substantially improves predictive accuracy. These findings establish a foundation for more transparent, data-driven assessments of legal performance.
[ { "version": "v1", "created": "Thu, 29 Aug 2024 19:04:45 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 00:35:30 GMT" } ]
2025-03-28T00:00:00
[ [ "Mojon", "Alexandre", "" ], [ "Mahari", "Robert", "" ], [ "Lera", "Sandro Claudio", "" ] ]
TITLE: Data-Driven Law Firm Rankings to Reduce Information Asymmetry in Legal Disputes ABSTRACT: Selecting capable counsel can shape the outcome of litigation, yet evaluating law firm performance remains challenging. Widely used rankings prioritize prestige, size, and revenue rather than empirical litigation outcomes, offering little practical guidance. To address this gap, we build on the Bradley-Terry model and introduce a new ranking framework that treats each lawsuit as a competitive game between plaintiff and defendant law firms. Leveraging a newly constructed dataset of 60,540 U.S. civil lawsuits involving 54,541 law firms, our findings show that existing reputation-based rankings correlate poorly with actual litigation success, whereas our outcome-based ranking substantially improves predictive accuracy. These findings establish a foundation for more transparent, data-driven assessments of legal performance.
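A minimal sketch of the ranking idea in this record: each lawsuit is treated as a game between the plaintiff's and defendant's firms, and Bradley-Terry strengths are fit by logistic regression on indicator differences. The data layout is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_bradley_terry(matches, num_firms):
    """matches: list of (winner_firm_id, loser_firm_id) from case outcomes."""
    # Encode each lawsuit in both orientations so the logistic fit sees two classes.
    X = np.zeros((2 * len(matches), num_firms))
    y = np.zeros(2 * len(matches))
    for i, (w, l) in enumerate(matches):
        X[2 * i, w], X[2 * i, l], y[2 * i] = 1.0, -1.0, 1.0   # w beat l
        X[2 * i + 1, w], X[2 * i + 1, l] = -1.0, 1.0          # mirrored row
    clf = LogisticRegression(fit_intercept=False, C=1e3).fit(X, y)
    return clf.coef_.ravel()   # higher coefficient = stronger firm

strengths = fit_bradley_terry([(0, 1), (2, 1), (0, 2)], num_firms=3)
```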
2408.17258
Tong Nie
Tong Nie, Junlin He, Yuewen Mei, Guoyang Qin, Guilong Li, Jian Sun, Wei Ma
Joint Estimation and Prediction of City-wide Delivery Demand: A Large Language Model Empowered Graph-based Learning Approach
null
Transportation Research Part E: Logistics and Transportation Review, 2025
10.1016/j.tre.2025.104075
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The proliferation of e-commerce and urbanization has significantly intensified delivery operations in urban areas, boosting the volume and complexity of delivery demand. Data-driven predictive methods, especially those utilizing machine learning techniques, have emerged to handle these complexities in urban delivery demand management problems. One particularly pressing issue that has yet to be sufficiently addressed is the joint estimation and prediction of city-wide delivery demand, as well as the generalization of the model to new cities. To this end, we formulate this problem as a transferable graph-based spatiotemporal learning task. First, an individual-collective message-passing neural network model is formalized to capture the interaction between demand patterns of associated regions. Second, by exploiting recent advances in large language models (LLMs), we extract general geospatial knowledge encodings from the unstructured locational data using the embedding generated by LLMs. Last, to encourage the cross-city generalization of the model, we integrate the encoding into the demand predictor in a transferable way. Comprehensive empirical evaluation results on two real-world delivery datasets, including eight cities in China and the US, demonstrate that our model significantly outperforms state-of-the-art baselines in accuracy, efficiency, and transferability.
[ { "version": "v1", "created": "Fri, 30 Aug 2024 12:56:17 GMT" }, { "version": "v2", "created": "Sat, 30 Nov 2024 12:13:01 GMT" }, { "version": "v3", "created": "Thu, 27 Mar 2025 11:41:54 GMT" } ]
2025-03-28T00:00:00
[ [ "Nie", "Tong", "" ], [ "He", "Junlin", "" ], [ "Mei", "Yuewen", "" ], [ "Qin", "Guoyang", "" ], [ "Li", "Guilong", "" ], [ "Sun", "Jian", "" ], [ "Ma", "Wei", "" ] ]
TITLE: Joint Estimation and Prediction of City-wide Delivery Demand: A Large Language Model Empowered Graph-based Learning Approach ABSTRACT: The proliferation of e-commerce and urbanization has significantly intensified delivery operations in urban areas, boosting the volume and complexity of delivery demand. Data-driven predictive methods, especially those utilizing machine learning techniques, have emerged to handle these complexities in urban delivery demand management problems. One particularly pressing issue that has yet to be sufficiently addressed is the joint estimation and prediction of city-wide delivery demand, as well as the generalization of the model to new cities. To this end, we formulate this problem as a transferable graph-based spatiotemporal learning task. First, an individual-collective message-passing neural network model is formalized to capture the interaction between demand patterns of associated regions. Second, by exploiting recent advances in large language models (LLMs), we extract general geospatial knowledge encodings from the unstructured locational data using the embedding generated by LLMs. Last, to encourage the cross-city generalization of the model, we integrate the encoding into the demand predictor in a transferable way. Comprehensive empirical evaluation results on two real-world delivery datasets, including eight cities in China and the US, demonstrate that our model significantly outperforms state-of-the-art baselines in accuracy, efficiency, and transferability.
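A hedged sketch of the transferable-encoding idea from this record: a frozen LLM-style text embedding of each region's unstructured location description is fused with the learned demand representation before prediction. The fusion-by-concatenation design and all dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GeoAwarePredictor(nn.Module):
    def __init__(self, demand_dim=64, geo_dim=384, horizon=12):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(demand_dim + geo_dim, 128), nn.ReLU(), nn.Linear(128, horizon)
        )

    def forward(self, demand_repr, geo_embedding):
        # demand_repr:   (num_regions, demand_dim) from the message-passing encoder
        # geo_embedding: (num_regions, geo_dim) frozen LLM encoding of location text
        return self.head(torch.cat([demand_repr, geo_embedding], dim=-1))
```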
2409.09430
Amirreza Mahbod
Amirreza Mahbod, Nematollah Saeidi, Sepideh Hatamikia, Ramona Woitek
Evaluating Pre-trained Convolutional Neural Networks and Foundation Models as Feature Extractors for Content-based Medical Image Retrieval
37 pages
null
10.1016/j.engappai.2025.110571
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Medical image retrieval refers to the task of finding similar images for given query images in a database, with applications such as diagnosis support. While traditional medical image retrieval relied on clinical metadata, content-based medical image retrieval (CBMIR) depends on image features, which can be extracted automatically or semi-automatically. Many approaches have been proposed for CBMIR, and among them, using pre-trained convolutional neural networks (CNNs) is a widely utilized approach. However, considering the recent advances in the development of foundation models for various computer vision tasks, their application for CBMIR can also be investigated. In this study, we used several pre-trained feature extractors from well-known pre-trained CNNs and pre-trained foundation models and investigated the CBMIR performance on eight types of two-dimensional (2D) and three-dimensional (3D) medical images. Furthermore, we investigated the effect of image size on the CBMIR performance. Our results show that, overall, for the 2D datasets, foundation models deliver superior performance by a large margin compared to CNNs, with the general-purpose self-supervised model for computational pathology (UNI) providing the best overall performance across all datasets and image sizes. For 3D datasets, CNNs and foundation models deliver more competitive performance, with contrastive learning from captions for histopathology model (CONCH) achieving the best overall performance. Moreover, our findings confirm that while using larger image sizes (especially for 2D datasets) yields slightly better performance, competitive CBMIR performance can still be achieved even with smaller image sizes. Our code to reproduce the results is available at: https://github.com/masih4/MedImageRetrieval.
[ { "version": "v1", "created": "Sat, 14 Sep 2024 13:07:30 GMT" }, { "version": "v2", "created": "Wed, 26 Mar 2025 19:11:03 GMT" } ]
2025-03-28T00:00:00
[ [ "Mahbod", "Amirreza", "" ], [ "Saeidi", "Nematollah", "" ], [ "Hatamikia", "Sepideh", "" ], [ "Woitek", "Ramona", "" ] ]
TITLE: Evaluating Pre-trained Convolutional Neural Networks and Foundation Models as Feature Extractors for Content-based Medical Image Retrieval ABSTRACT: Medical image retrieval refers to the task of finding similar images for given query images in a database, with applications such as diagnosis support. While traditional medical image retrieval relied on clinical metadata, content-based medical image retrieval (CBMIR) depends on image features, which can be extracted automatically or semi-automatically. Many approaches have been proposed for CBMIR, and among them, using pre-trained convolutional neural networks (CNNs) is a widely utilized approach. However, considering the recent advances in the development of foundation models for various computer vision tasks, their application for CBMIR can also be investigated. In this study, we used several pre-trained feature extractors from well-known pre-trained CNNs and pre-trained foundation models and investigated the CBMIR performance on eight types of two-dimensional (2D) and three-dimensional (3D) medical images. Furthermore, we investigated the effect of image size on the CBMIR performance. Our results show that, overall, for the 2D datasets, foundation models deliver superior performance by a large margin compared to CNNs, with the general-purpose self-supervised model for computational pathology (UNI) providing the best overall performance across all datasets and image sizes. For 3D datasets, CNNs and foundation models deliver more competitive performance, with contrastive learning from captions for histopathology model (CONCH) achieving the best overall performance. Moreover, our findings confirm that while using larger image sizes (especially for 2D datasets) yields slightly better performance, competitive CBMIR performance can still be achieved even with smaller image sizes. Our code to reproduce the results is available at: https://github.com/masih4/MedImageRetrieval.
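A hedged sketch of the retrieval setup this record evaluates: a frozen pre-trained backbone embeds every image, and queries are answered by cosine nearest neighbours. The ResNet-50 choice stands in for the various CNNs and foundation models compared in the study.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose 2048-d pooled features
backbone.eval()

@torch.no_grad()
def embed(batch: torch.Tensor) -> torch.Tensor:
    # batch: (N, 3, H, W) ImageNet-normalized images
    return F.normalize(backbone(batch), dim=-1)

def retrieve(query_feat, gallery_feats, k=5):
    sims = gallery_feats @ query_feat   # cosine similarity (pre-normalized)
    return sims.topk(k).indices         # indices of the k most similar images
```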
2409.11593
Xing Chen
Xing Chen, Dongshu Liu, Jeremie Laydevant, Julie Grollier
Self-Contrastive Forward-Forward Algorithm
null
null
null
null
cs.LG cs.AI cs.CV cs.ET cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Agents that operate autonomously benefit from lifelong learning capabilities. However, compatible training algorithms must comply with the decentralized nature of these systems, which imposes constraints on both the parameter counts and the computational resources. The Forward-Forward (FF) algorithm is one of these. FF relies only on feedforward operations, the same used for inference, for optimizing layer-wise objectives. This purely forward approach eliminates the need for transpose operations required in traditional backpropagation. Despite its potential, FF has failed to reach state-of-the-art performance on most standard benchmark tasks, in part due to unreliable negative data generation methods for unsupervised learning. In this work, we propose the Self-Contrastive Forward-Forward (SCFF) algorithm, a competitive training method aimed at closing this performance gap. Inspired by standard self-supervised contrastive learning for vision tasks, SCFF generates positive and negative inputs applicable across various datasets. The method demonstrates superior performance compared to existing unsupervised local learning algorithms on several benchmark datasets, including MNIST, CIFAR-10, STL-10, and Tiny ImageNet. We extend FF's application to training recurrent neural networks, expanding its utility to sequential data tasks. These findings pave the way for high-accuracy, real-time learning on resource-constrained edge devices.
[ { "version": "v1", "created": "Tue, 17 Sep 2024 22:58:20 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 15:57:57 GMT" } ]
2025-03-28T00:00:00
[ [ "Chen", "Xing", "" ], [ "Liu", "Dongshu", "" ], [ "Laydevant", "Jeremie", "" ], [ "Grollier", "Julie", "" ] ]
TITLE: Self-Contrastive Forward-Forward Algorithm ABSTRACT: Agents that operate autonomously benefit from lifelong learning capabilities. However, compatible training algorithms must comply with the decentralized nature of these systems, which imposes constraints on both the parameter counts and the computational resources. The Forward-Forward (FF) algorithm is one of these. FF relies only on feedforward operations, the same used for inference, for optimizing layer-wise objectives. This purely forward approach eliminates the need for transpose operations required in traditional backpropagation. Despite its potential, FF has failed to reach state-of-the-art performance on most standard benchmark tasks, in part due to unreliable negative data generation methods for unsupervised learning. In this work, we propose the Self-Contrastive Forward-Forward (SCFF) algorithm, a competitive training method aimed at closing this performance gap. Inspired by standard self-supervised contrastive learning for vision tasks, SCFF generates positive and negative inputs applicable across various datasets. The method demonstrates superior performance compared to existing unsupervised local learning algorithms on several benchmark datasets, including MNIST, CIFAR-10, STL-10, and Tiny ImageNet. We extend FF's application to training recurrent neural networks, expanding its utility to sequential data tasks. These findings pave the way for high-accuracy, real-time learning on resource-constrained edge devices.
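A sketch of the layer-local Forward-Forward objective that SCFF builds on: "goodness" is the mean squared activation, pushed above a threshold for positive inputs and below it for negative ones. The pairing rule shown (same-sample vs. different-sample concatenation) is one plausible reading of the self-contrastive construction, stated here as an assumption.

```python
import torch
import torch.nn.functional as F

def goodness(h: torch.Tensor) -> torch.Tensor:
    return h.pow(2).mean(dim=1)                 # (B,) per-sample goodness

def ff_layer_loss(h_pos, h_neg, theta: float = 2.0):
    logits = torch.cat([goodness(h_pos) - theta, theta - goodness(h_neg)])
    return F.softplus(-logits).mean()           # push positives up, negatives down

def make_pairs(x: torch.Tensor):
    # x: (B, D) flattened inputs
    pos = torch.cat([x, x], dim=1)                            # sample with itself
    neg = torch.cat([x, x[torch.randperm(len(x))]], dim=1)    # with another sample
    return pos, neg
```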
2409.12249
Yipeng Xu
Yuzhe Wu, Yipeng Xu, Tianyu Xu, Jialu Zhang, Jianfeng Ren, Xudong Jiang
GCA-SUNet: A Gated Context-Aware Swin-UNet for Exemplar-Free Counting
Accepted by ICME 2025
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Exemplar-Free Counting aims to count objects of interest without intensive annotations of objects or exemplars. To achieve this, we propose a Gated Context-Aware Swin-UNet (GCA-SUNet) to directly map an input image to the density map of countable objects. Specifically, a set of Swin transformers form an encoder to derive a robust feature representation, and a Gated Context-Aware Modulation block is designed to suppress irrelevant objects or background through a gate mechanism and exploit the attentive support of objects of interest through a self-similarity matrix. The gate strategy is also incorporated into the bottleneck network and the decoder of the Swin-UNet to highlight the features most relevant to objects of interest. By explicitly exploiting the attentive support among countable objects and eliminating irrelevant features through the gate mechanisms, the proposed GCA-SUNet focuses on and counts objects of interest without relying on predefined categories or exemplars. Experimental results on real-world datasets such as FSC-147 and CARPK demonstrate that GCA-SUNet significantly and consistently outperforms state-of-the-art methods. The code is available at https://github.com/Amordia/GCA-SUNet.
[ { "version": "v1", "created": "Wed, 18 Sep 2024 18:14:00 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 00:09:03 GMT" } ]
2025-03-28T00:00:00
[ [ "Wu", "Yuzhe", "" ], [ "Xu", "Yipeng", "" ], [ "Xu", "Tianyu", "" ], [ "Zhang", "Jialu", "" ], [ "Ren", "Jianfeng", "" ], [ "Jiang", "Xudong", "" ] ]
TITLE: GCA-SUNet: A Gated Context-Aware Swin-UNet for Exemplar-Free Counting ABSTRACT: Exemplar-Free Counting aims to count objects of interest without intensive annotations of objects or exemplars. To achieve this, we propose a Gated Context-Aware Swin-UNet (GCA-SUNet) to directly map an input image to the density map of countable objects. Specifically, a set of Swin transformers form an encoder to derive a robust feature representation, and a Gated Context-Aware Modulation block is designed to suppress irrelevant objects or background through a gate mechanism and exploit the attentive support of objects of interest through a self-similarity matrix. The gate strategy is also incorporated into the bottleneck network and the decoder of the Swin-UNet to highlight the features most relevant to objects of interest. By explicitly exploiting the attentive support among countable objects and eliminating irrelevant features through the gate mechanisms, the proposed GCA-SUNet focuses on and counts objects of interest without relying on predefined categories or exemplars. Experimental results on real-world datasets such as FSC-147 and CARPK demonstrate that GCA-SUNet significantly and consistently outperforms state-of-the-art methods. The code is available at https://github.com/Amordia/GCA-SUNet.
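As a rough illustration of the mechanism the abstract describes, a gate can suppress background features while a self-similarity matrix aggregates mutual support among repeated objects. The sketch below is a hypothetical composition of those two ideas, not the paper's actual Gated Context-Aware Modulation block:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedModulation(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)  # learned per-channel gate scores

    def forward(self, tokens):            # tokens: (B, N, D)
        # Self-similarity over normalized tokens highlights mutually
        # supporting (repeated) objects within the image.
        t = F.normalize(tokens, dim=-1)
        sim = t @ t.transpose(1, 2)        # (B, N, N) similarity matrix
        support = torch.softmax(sim, dim=-1) @ tokens
        # A sigmoid gate suppresses background / irrelevant features.
        return support * torch.sigmoid(self.gate(tokens))
```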
2409.12259
Rolandos Alexandros Potamias
Rolandos Alexandros Potamias and Jinglei Zhang and Jiankang Deng and Stefanos Zafeiriou
WiLoR: End-to-end 3D Hand Localization and Reconstruction in-the-wild
CVPR 2025, Project Page https://rolpotamias.github.io/WiLoR
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent years, 3D hand pose estimation methods have garnered significant attention due to their extensive applications in human-computer interaction, virtual reality, and robotics. In contrast, there has been a notable gap in hand detection pipelines, posing significant challenges in constructing effective real-world multi-hand reconstruction systems. In this work, we present a data-driven pipeline for efficient multi-hand reconstruction in the wild. The proposed pipeline is composed of two components: a real-time fully convolutional hand localization network and a high-fidelity transformer-based 3D hand reconstruction model. To tackle the limitations of previous methods and build a robust and stable detection network, we introduce a large-scale dataset with more than 2M in-the-wild hand images with diverse lighting, illumination, and occlusion conditions. Our approach outperforms previous methods in both efficiency and accuracy on popular 2D and 3D benchmarks. Finally, we showcase the effectiveness of our pipeline to achieve smooth 3D hand tracking from monocular videos, without utilizing any temporal components. Code, models, and dataset are available at https://rolpotamias.github.io/WiLoR.
[ { "version": "v1", "created": "Wed, 18 Sep 2024 18:46:51 GMT" }, { "version": "v2", "created": "Wed, 26 Mar 2025 18:05:52 GMT" } ]
2025-03-28T00:00:00
[ [ "Potamias", "Rolandos Alexandros", "" ], [ "Zhang", "Jinglei", "" ], [ "Deng", "Jiankang", "" ], [ "Zafeiriou", "Stefanos", "" ] ]
TITLE: WiLoR: End-to-end 3D Hand Localization and Reconstruction in-the-wild ABSTRACT: In recent years, 3D hand pose estimation methods have garnered significant attention due to their extensive applications in human-computer interaction, virtual reality, and robotics. In contrast, there has been a notable gap in hand detection pipelines, posing significant challenges in constructing effective real-world multi-hand reconstruction systems. In this work, we present a data-driven pipeline for efficient multi-hand reconstruction in the wild. The proposed pipeline is composed of two components: a real-time fully convolutional hand localization network and a high-fidelity transformer-based 3D hand reconstruction model. To tackle the limitations of previous methods and build a robust and stable detection network, we introduce a large-scale dataset with more than 2M in-the-wild hand images with diverse lighting, illumination, and occlusion conditions. Our approach outperforms previous methods in both efficiency and accuracy on popular 2D and 3D benchmarks. Finally, we showcase the effectiveness of our pipeline to achieve smooth 3D hand tracking from monocular videos, without utilizing any temporal components. Code, models, and dataset are available at https://rolpotamias.github.io/WiLoR.
2409.15272
Yizhi Li
Yizhi Li, Ge Zhang, Yinghao Ma, Ruibin Yuan, Kang Zhu, Hangyu Guo, Yiming Liang, Jiaheng Liu, Zekun Wang, Jian Yang, Siwei Wu, Xingwei Qu, Jinjie Shi, Xinyue Zhang, Zhenzhu Yang, Xiangzhou Wang, Zhaoxiang Zhang, Zachary Liu, Emmanouil Benetos, Wenhao Huang, Chenghua Lin
OmniBench: Towards The Future of Universal Omni-Language Models
null
null
null
null
cs.CL cs.AI cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Recent advancements in multimodal large language models (MLLMs) have focused on integrating multiple modalities, yet their ability to simultaneously process and reason across different inputs remains underexplored. We introduce OmniBench, a novel benchmark designed to evaluate models' ability to recognize, interpret, and reason across visual, acoustic, and textual inputs simultaneously. We define language models capable of such tri-modal processing as omni-language models (OLMs). OmniBench features high-quality human annotations that require integrated understanding across all modalities. Our evaluation reveals that: i) open-source OLMs show significant limitations in instruction-following and reasoning in tri-modal contexts; and ii) most baseline models perform poorly (around 50% accuracy) even with textual alternatives to image/audio inputs. To address these limitations, we develop OmniInstruct, a 96K-sample instruction tuning dataset for training OLMs. We advocate for developing more robust tri-modal integration techniques and training strategies to enhance OLM performance. Code and data can be found at our repo (https://github.com/multimodal-art-projection/OmniBench).
[ { "version": "v1", "created": "Mon, 23 Sep 2024 17:59:05 GMT" }, { "version": "v2", "created": "Tue, 24 Sep 2024 16:51:45 GMT" }, { "version": "v3", "created": "Thu, 3 Oct 2024 22:32:50 GMT" }, { "version": "v4", "created": "Thu, 27 Mar 2025 16:21:06 GMT" } ]
2025-03-28T00:00:00
[ [ "Li", "Yizhi", "" ], [ "Zhang", "Ge", "" ], [ "Ma", "Yinghao", "" ], [ "Yuan", "Ruibin", "" ], [ "Zhu", "Kang", "" ], [ "Guo", "Hangyu", "" ], [ "Liang", "Yiming", "" ], [ "Liu", "Jiaheng", "" ], [ "Wang", "Zekun", "" ], [ "Yang", "Jian", "" ], [ "Wu", "Siwei", "" ], [ "Qu", "Xingwei", "" ], [ "Shi", "Jinjie", "" ], [ "Zhang", "Xinyue", "" ], [ "Yang", "Zhenzhu", "" ], [ "Wang", "Xiangzhou", "" ], [ "Zhang", "Zhaoxiang", "" ], [ "Liu", "Zachary", "" ], [ "Benetos", "Emmanouil", "" ], [ "Huang", "Wenhao", "" ], [ "Lin", "Chenghua", "" ] ]
TITLE: OmniBench: Towards The Future of Universal Omni-Language Models ABSTRACT: Recent advancements in multimodal large language models (MLLMs) have focused on integrating multiple modalities, yet their ability to simultaneously process and reason across different inputs remains underexplored. We introduce OmniBench, a novel benchmark designed to evaluate models' ability to recognize, interpret, and reason across visual, acoustic, and textual inputs simultaneously. We define language models capable of such tri-modal processing as omni-language models (OLMs). OmniBench features high-quality human annotations that require integrated understanding across all modalities. Our evaluation reveals that: i) open-source OLMs show significant limitations in instruction-following and reasoning in tri-modal contexts; and ii) most baseline models perform poorly (around 50% accuracy) even with textual alternatives to image/audio inputs. To address these limitations, we develop OmniInstruct, a 96K-sample instruction tuning dataset for training OLMs. We advocate for developing more robust tri-modal integration techniques and training strategies to enhance OLM performance. Code and data can be found at our repo (https://github.com/multimodal-art-projection/OmniBench).
2409.18119
Yuexi Du
Yuexi Du, John Onofrey, Nicha C. Dvornek
Multi-View and Multi-Scale Alignment for Contrastive Language-Image Pre-training in Mammography
This paper is accepted by IPMI 2025 for Oral Presentation
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contrastive Language-Image Pre-training (CLIP) demonstrates strong potential in medical image analysis but requires substantial data and computational resources. Due to these restrictions, existing CLIP applications in medical imaging focus mainly on modalities like chest X-rays that have abundant image-report data available, leaving many other important modalities underexplored. Here, we propose one of the first adaptations of the full CLIP model to mammography, which presents significant challenges due to labeled data scarcity, high-resolution images with small regions of interest, and class-wise imbalance. We first develop a specialized supervision framework for mammography that leverages its multi-view nature. Furthermore, we design a symmetric local alignment module to better focus on detailed features in high-resolution images. Lastly, we incorporate a parameter-efficient fine-tuning approach for large language models pre-trained with medical knowledge to address data limitations. Our multi-view and multi-scale alignment (MaMA) method outperforms state-of-the-art baselines for three different tasks on two large real-world mammography datasets, EMBED and RSNA-Mammo, with only 52% of the model size of the largest baseline. The code is available at https://github.com/XYPB/MaMA
[ { "version": "v1", "created": "Thu, 26 Sep 2024 17:56:59 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 17:39:55 GMT" } ]
2025-03-28T00:00:00
[ [ "Du", "Yuexi", "" ], [ "Onofrey", "John", "" ], [ "Dvornek", "Nicha C.", "" ] ]
TITLE: Multi-View and Multi-Scale Alignment for Contrastive Language-Image Pre-training in Mammography ABSTRACT: Contrastive Language-Image Pre-training (CLIP) demonstrates strong potential in medical image analysis but requires substantial data and computational resources. Due to these restrictions, existing CLIP applications in medical imaging focus mainly on modalities like chest X-rays that have abundant image-report data available, leaving many other important modalities underexplored. Here, we propose one of the first adaptations of the full CLIP model to mammography, which presents significant challenges due to labeled data scarcity, high-resolution images with small regions of interest, and class-wise imbalance. We first develop a specialized supervision framework for mammography that leverages its multi-view nature. Furthermore, we design a symmetric local alignment module to better focus on detailed features in high-resolution images. Lastly, we incorporate a parameter-efficient fine-tuning approach for large language models pre-trained with medical knowledge to address data limitations. Our multi-view and multi-scale alignment (MaMA) method outperforms state-of-the-art baselines for three different tasks on two large real-world mammography datasets, EMBED and RSNA-Mammo, with only 52% of the model size of the largest baseline. The code is available at https://github.com/XYPB/MaMA
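MaMA's multi-view supervision and symmetric local alignment extend the standard global CLIP objective, which for reference is the symmetric InfoNCE loss sketched below (the temperature value is a common default, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def clip_loss(image_emb, text_emb, temperature=0.07):
    # Cosine-similarity logits between all image-text pairs in the batch.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: image-to-text plus text-to-image.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```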
2409.19804
Xuyang Wu
Xuyang Wu, Shuowei Li, Hsin-Tai Wu, Zhiqiang Tao and Yi Fang
Does RAG Introduce Unfairness in LLMs? Evaluating Fairness in Retrieval-Augmented Generation Systems
Published at COLING 2025
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Retrieval-Augmented Generation (RAG) has recently gained significant attention for its enhanced ability to integrate external knowledge sources into open-domain question answering (QA) tasks. However, it remains unclear how these models address fairness concerns, particularly with respect to sensitive attributes such as gender, geographic location, and other demographic factors. First, as language models evolve to prioritize utility, like improving exact match accuracy, fairness considerations may have been largely overlooked. Second, the complex, multi-component architecture of RAG methods poses challenges in identifying and mitigating biases, as each component is optimized for distinct objectives. In this paper, we aim to empirically evaluate fairness in several RAG methods. We propose a fairness evaluation framework tailored to RAG, using scenario-based questions and analyzing disparities across demographic attributes. Our experimental results indicate that, despite recent advances in utility-driven optimization, fairness issues persist in both the retrieval and generation stages. These findings underscore the need for targeted interventions to address fairness concerns throughout the RAG pipeline. The dataset and code used in this study are publicly available at this GitHub repository: https://github.com/elviswxy/RAG_fairness.
[ { "version": "v1", "created": "Sun, 29 Sep 2024 22:04:26 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 04:36:46 GMT" } ]
2025-03-28T00:00:00
[ [ "Wu", "Xuyang", "" ], [ "Li", "Shuowei", "" ], [ "Wu", "Hsin-Tai", "" ], [ "Tao", "Zhiqiang", "" ], [ "Fang", "Yi", "" ] ]
TITLE: Does RAG Introduce Unfairness in LLMs? Evaluating Fairness in Retrieval-Augmented Generation Systems ABSTRACT: Retrieval-Augmented Generation (RAG) has recently gained significant attention for its enhanced ability to integrate external knowledge sources into open-domain question answering (QA) tasks. However, it remains unclear how these models address fairness concerns, particularly with respect to sensitive attributes such as gender, geographic location, and other demographic factors. First, as language models evolve to prioritize utility, like improving exact match accuracy, fairness considerations may have been largely overlooked. Second, the complex, multi-component architecture of RAG methods poses challenges in identifying and mitigating biases, as each component is optimized for distinct objectives. In this paper, we aim to empirically evaluate fairness in several RAG methods. We propose a fairness evaluation framework tailored to RAG, using scenario-based questions and analyzing disparities across demographic attributes. Our experimental results indicate that, despite recent advances in utility-driven optimization, fairness issues persist in both the retrieval and generation stages. These findings underscore the need for targeted interventions to address fairness concerns throughout the RAG pipeline. The dataset and code used in this study are publicly available at this GitHub repository: https://github.com/elviswxy/RAG_fairness.
2410.00068
Xinyuan Zheng
Xinyuan Zheng, Orren Ravid, Robert A.J. Barry, Yoojean Kim, Qian Wang, Young-geun Kim, Xi Zhu, Xiaofu He
Denoising VAE as an Explainable Feature Reduction and Diagnostic Pipeline for Autism Based on Resting state fMRI
null
null
null
null
eess.IV cs.LG stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autism spectrum disorders (ASDs) are developmental conditions characterized by restricted interests and difficulties in communication. The complexity of ASD has resulted in a deficiency of objective diagnostic biomarkers. Deep learning methods have gained recognition for addressing these challenges in neuroimaging analysis, but finding and interpreting such diagnostic biomarkers remains computationally challenging. Here, we propose a feature reduction pipeline using resting-state fMRI data. We used the Craddock and Power atlases to extract functional connectivity data from rs-fMRI, resulting in over 30 thousand features. By using a denoising variational autoencoder, our proposed pipeline further compresses the connectivity features into 5 latent Gaussian distributions, providing a low-dimensional representation of the data to promote computational efficiency and interpretability. To test the method, we employed the extracted latent representations to classify ASD using traditional classifiers such as SVM on a large multi-site dataset. The 95% confidence interval for the prediction accuracy of SVM is [0.63, 0.76] after site harmonization using the extracted latent distributions. Without using DVAE for dimensionality reduction, the prediction accuracy is 0.70, which falls within the interval. The DVAE successfully encoded the diagnostic information from rs-fMRI data without sacrificing prediction performance. The runtime for training the DVAE and obtaining classification results from its extracted latent features was 7 times shorter than training classifiers directly on the raw data. Our findings suggest that the Power atlas provides more effective brain connectivity insights for diagnosing ASD than the Craddock atlas. Additionally, we visualized the latent representations to gain insights into the brain networks contributing to the differences between ASD and neurotypical brains.
[ { "version": "v1", "created": "Mon, 30 Sep 2024 09:38:47 GMT" }, { "version": "v2", "created": "Sun, 5 Jan 2025 21:50:03 GMT" }, { "version": "v3", "created": "Thu, 27 Mar 2025 16:25:38 GMT" } ]
2025-03-28T00:00:00
[ [ "Zheng", "Xinyuan", "" ], [ "Ravid", "Orren", "" ], [ "Barry", "Robert A. J.", "" ], [ "Kim", "Yoojean", "" ], [ "Wang", "Qian", "" ], [ "Kim", "Young-geun", "" ], [ "Zhu", "Xi", "" ], [ "He", "Xiaofu", "" ] ]
TITLE: Denoising VAE as an Explainable Feature Reduction and Diagnostic Pipeline for Autism Based on Resting state fMRI ABSTRACT: Autism spectrum disorders (ASDs) are developmental conditions characterized by restricted interests and difficulties in communication. The complexity of ASD has resulted in a deficiency of objective diagnostic biomarkers. Deep learning methods have gained recognition for addressing these challenges in neuroimaging analysis, but finding and interpreting such diagnostic biomarkers remains computationally challenging. Here, we propose a feature reduction pipeline using resting-state fMRI data. We used the Craddock and Power atlases to extract functional connectivity data from rs-fMRI, resulting in over 30 thousand features. By using a denoising variational autoencoder, our proposed pipeline further compresses the connectivity features into 5 latent Gaussian distributions, providing a low-dimensional representation of the data to promote computational efficiency and interpretability. To test the method, we employed the extracted latent representations to classify ASD using traditional classifiers such as SVM on a large multi-site dataset. The 95% confidence interval for the prediction accuracy of SVM is [0.63, 0.76] after site harmonization using the extracted latent distributions. Without using DVAE for dimensionality reduction, the prediction accuracy is 0.70, which falls within the interval. The DVAE successfully encoded the diagnostic information from rs-fMRI data without sacrificing prediction performance. The runtime for training the DVAE and obtaining classification results from its extracted latent features was 7 times shorter than training classifiers directly on the raw data. Our findings suggest that the Power atlas provides more effective brain connectivity insights for diagnosing ASD than the Craddock atlas. Additionally, we visualized the latent representations to gain insights into the brain networks contributing to the differences between ASD and neurotypical brains.
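A generic denoising-VAE objective of the kind this pipeline uses: corrupt the input, encode it into a small set of latent Gaussians, and reconstruct the clean features. The noise model, beta weight, and encoder/decoder interfaces below are assumptions for illustration, not the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def dvae_loss(encoder, decoder, x, noise_std=0.1, beta=1.0):
    # Denoising VAE: encode a corrupted input, reconstruct the clean one.
    x_noisy = x + noise_std * torch.randn_like(x)
    mu, logvar = encoder(x_noisy)          # e.g. 5 latent Gaussians
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    recon = decoder(z)
    # Reconstruction against the clean input, plus the standard KL term.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return F.mse_loss(recon, x) + beta * kl
```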
2410.12399
Xuyuan Li
Xuyuan Li, Zengqiang Shang, Hua Hua, Peiyang Shi, Chen Yang, Li Wang, Pengyuan Zhang
SF-Speech: Straightened Flow for Zero-Shot Voice Clone
Accepted by IEEE Transactions on Audio, Speech and Language Processing
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recently, neural ordinary differential equations (ODE) models trained with flow matching have achieved impressive performance on the zero-shot voice clone task. Nevertheless, postulating standard Gaussian noise as the initial distribution of ODE gives rise to numerous intersections within the fitted targets of flow matching, which presents challenges to model training and increases the curvature of the learned generation trajectories. These curved trajectories restrict the ability of ODE models to generate desirable samples in a few steps. This paper proposes SF-Speech, a novel voice clone model based on ODE and in-context learning. Unlike the previous works, SF-Speech adopts a lightweight multi-stage module to generate a more deterministic initial distribution for ODE. Without introducing any additional loss function, we effectively straighten the curved reverse trajectories of the ODE model by jointly training it with the proposed module. Experimental results on datasets of various scales show that SF-Speech outperforms the state-of-the-art zero-shot TTS methods and requires only a quarter of the solver steps, resulting in a generation speed approximately 3.7 times that of Voicebox and E2 TTS. Audio samples are available at the demo page: https://lixuyuan102.github.io/Demo/.
[ { "version": "v1", "created": "Wed, 16 Oct 2024 09:27:25 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 13:14:57 GMT" } ]
2025-03-28T00:00:00
[ [ "Li", "Xuyuan", "" ], [ "Shang", "Zengqiang", "" ], [ "Hua", "Hua", "" ], [ "Shi", "Peiyang", "" ], [ "Yang", "Chen", "" ], [ "Wang", "Li", "" ], [ "Zhang", "Pengyuan", "" ] ]
TITLE: SF-Speech: Straightened Flow for Zero-Shot Voice Clone ABSTRACT: Recently, neural ordinary differential equations (ODE) models trained with flow matching have achieved impressive performance on the zero-shot voice clone task. Nevertheless, postulating standard Gaussian noise as the initial distribution of ODE gives rise to numerous intersections within the fitted targets of flow matching, which presents challenges to model training and increases the curvature of the learned generation trajectories. These curved trajectories restrict the ability of ODE models to generate desirable samples in a few steps. This paper proposes SF-Speech, a novel voice clone model based on ODE and in-context learning. Unlike the previous works, SF-Speech adopts a lightweight multi-stage module to generate a more deterministic initial distribution for ODE. Without introducing any additional loss function, we effectively straighten the curved reverse trajectories of the ODE model by jointly training it with the proposed module. Experimental results on datasets of various scales show that SF-Speech outperforms the state-of-the-art zero-shot TTS methods and requires only a quarter of the solver steps, resulting in a generation speed approximately 3.7 times that of Voicebox and E2 TTS. Audio samples are available at the demo page: https://lixuyuan102.github.io/Demo/.
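For reference, the conditional flow matching objective with a linear path that ODE-based TTS models of this kind train on is sketched below; in SF-Speech, x0 would come from the proposed multi-stage module rather than standard Gaussian noise, and the v_theta signature is assumed:

```python
import torch

def flow_matching_loss(v_theta, x0, x1):
    # Linear (rectified-flow) path x_t = (1 - t) * x0 + t * x1; the
    # regression target is the constant velocity x1 - x0 along that path.
    t = torch.rand(x0.size(0), *([1] * (x0.dim() - 1)), device=x0.device)
    xt = (1 - t) * x0 + t * x1
    return ((v_theta(xt, t.flatten()) - (x1 - x0)) ** 2).mean()
```

With a Gaussian x0, many (x0, x1) pairings cross, curving the learned trajectories; a more deterministic x0 reduces such intersections, which is the straightening effect the abstract describes.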
2410.14379
Ziming Huang
Ziming Huang, Xurui Li, Haotian Liu, Feng Xue, Yuzhe Wang, Yu Zhou
AnomalyNCD: Towards Novel Anomaly Class Discovery in Industrial Scenarios
Accepted at CVPR2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, multi-class anomaly classification has garnered increasing attention. Previous methods directly cluster anomalies but often struggle due to the lack of anomaly-prior knowledge. Acquiring this knowledge faces two issues: anomalies are often non-prominent, and they carry weak semantics. In this paper, we propose AnomalyNCD, a multi-class anomaly classification network compatible with different anomaly detection methods. To address the non-prominence of anomalies, we design main element binarization (MEBin) to obtain anomaly-centered images, ensuring anomalies are learned while avoiding the impact of incorrect detections. Next, to learn anomalies with weak semantics, we design mask-guided representation learning, which focuses on isolated anomalies guided by masks and reduces confusion from erroneous inputs through corrected pseudo labels. Finally, to enable flexible classification at both region and image levels, we develop a region merging strategy that determines the overall image category based on the classified anomaly regions. Our method outperforms the state-of-the-art works on the MVTec AD and MTD datasets. Compared with the current methods, AnomalyNCD combined with a zero-shot anomaly detection method achieves a 10.8% $F_1$ gain, 8.8% NMI gain, and 9.5% ARI gain on MVTec AD, and 12.8% $F_1$ gain, 5.7% NMI gain, and 10.8% ARI gain on MTD. Code is available at https://github.com/HUST-SLOW/AnomalyNCD.
[ { "version": "v1", "created": "Fri, 18 Oct 2024 11:07:12 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 13:09:07 GMT" } ]
2025-03-28T00:00:00
[ [ "Huang", "Ziming", "" ], [ "Li", "Xurui", "" ], [ "Liu", "Haotian", "" ], [ "Xue", "Feng", "" ], [ "Wang", "Yuzhe", "" ], [ "Zhou", "Yu", "" ] ]
TITLE: AnomalyNCD: Towards Novel Anomaly Class Discovery in Industrial Scenarios ABSTRACT: Recently, multi-class anomaly classification has garnered increasing attention. Previous methods directly cluster anomalies but often struggle due to the lack of anomaly-prior knowledge. Acquiring this knowledge faces two issues: anomalies are often non-prominent, and they carry weak semantics. In this paper, we propose AnomalyNCD, a multi-class anomaly classification network compatible with different anomaly detection methods. To address the non-prominence of anomalies, we design main element binarization (MEBin) to obtain anomaly-centered images, ensuring anomalies are learned while avoiding the impact of incorrect detections. Next, to learn anomalies with weak semantics, we design mask-guided representation learning, which focuses on isolated anomalies guided by masks and reduces confusion from erroneous inputs through corrected pseudo labels. Finally, to enable flexible classification at both region and image levels, we develop a region merging strategy that determines the overall image category based on the classified anomaly regions. Our method outperforms the state-of-the-art works on the MVTec AD and MTD datasets. Compared with the current methods, AnomalyNCD combined with a zero-shot anomaly detection method achieves a 10.8% $F_1$ gain, 8.8% NMI gain, and 9.5% ARI gain on MVTec AD, and 12.8% $F_1$ gain, 5.7% NMI gain, and 10.8% ARI gain on MTD. Code is available at https://github.com/HUST-SLOW/AnomalyNCD.
2410.14770
Jiaxin Lu
Jiaxin Lu, Yongqing Liang, Huijun Han, Jiacheng Hua, Junfeng Jiang, Xin Li, Qixing Huang
A Survey on Computational Solutions for Reconstructing Complete Objects by Reassembling Their Fractured Parts
36 pages, 22 figures
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing a complete object from its parts is a fundamental problem in many scientific domains. The purpose of this article is to provide a systematic survey on this topic. The reassembly problem requires understanding the attributes of individual pieces and establishing matches between different pieces. Many approaches also model priors of the underlying complete object. Existing approaches are tightly connected to the problems of shape segmentation, shape matching, and learning shape priors. We review existing algorithms in this context and emphasize their similarities and differences to general-purpose approaches. We also survey the trends from early non-deep learning approaches to more recent deep learning approaches. In addition to algorithms, this survey will also describe existing datasets, open-source software packages, and applications. To the best of our knowledge, this is the first comprehensive survey on this topic in computer graphics.
[ { "version": "v1", "created": "Fri, 18 Oct 2024 17:53:07 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 17:45:43 GMT" } ]
2025-03-28T00:00:00
[ [ "Lu", "Jiaxin", "" ], [ "Liang", "Yongqing", "" ], [ "Han", "Huijun", "" ], [ "Hua", "Jiacheng", "" ], [ "Jiang", "Junfeng", "" ], [ "Li", "Xin", "" ], [ "Huang", "Qixing", "" ] ]
TITLE: A Survey on Computational Solutions for Reconstructing Complete Objects by Reassembling Their Fractured Parts ABSTRACT: Reconstructing a complete object from its parts is a fundamental problem in many scientific domains. The purpose of this article is to provide a systematic survey on this topic. The reassembly problem requires understanding the attributes of individual pieces and establishing matches between different pieces. Many approaches also model priors of the underlying complete object. Existing approaches are tightly connected to the problems of shape segmentation, shape matching, and learning shape priors. We review existing algorithms in this context and emphasize their similarities and differences to general-purpose approaches. We also survey the trends from early non-deep learning approaches to more recent deep learning approaches. In addition to algorithms, this survey will also describe existing datasets, open-source software packages, and applications. To the best of our knowledge, this is the first comprehensive survey on this topic in computer graphics.
2410.21897
Monan Zhou Dr
Yifu Sun, Xulong Zhang, Monan Zhou, Wei Li
Semi-Supervised Self-Learning Enhanced Music Emotion Recognition
12 pages, 2 figures
Proceedings of the 11th Conference on Sound and Music Technology. CSMT 2024. Lecture Notes in Electrical Engineering. Springer, Singapore
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
Music emotion recognition (MER) aims to identify the emotions conveyed in a given musical piece. However, currently, in the field of MER, the available public datasets have limited sample sizes. Recently, segment-based methods for emotion-related tasks have been proposed, which train backbone networks on shorter segments instead of entire audio clips, thereby naturally augmenting training samples without requiring additional resources. Then, the predicted segment-level results are aggregated to obtain the entire song prediction. The most commonly used method is that the segment inherits the label of the clip containing it, but music emotion is not constant during the whole clip. Doing so will introduce label noise and make the training easy to overfit. To handle the noisy label issue, we propose a semi-supervised self-learning (SSSL) method, which can differentiate between samples with correct and incorrect labels in a self-learning manner, thus effectively utilizing the augmented segment-level data. Experiments on three public emotional datasets demonstrate that the proposed method can achieve better or comparable performance.
[ { "version": "v1", "created": "Tue, 29 Oct 2024 09:42:07 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 02:39:50 GMT" } ]
2025-03-28T00:00:00
[ [ "Sun", "Yifu", "" ], [ "Zhang", "Xulong", "" ], [ "Zhou", "Monan", "" ], [ "Li", "Wei", "" ] ]
TITLE: Semi-Supervised Self-Learning Enhanced Music Emotion Recognition ABSTRACT: Music emotion recognition (MER) aims to identify the emotions conveyed in a given musical piece. However, currently, in the field of MER, the available public datasets have limited sample sizes. Recently, segment-based methods for emotion-related tasks have been proposed, which train backbone networks on shorter segments instead of entire audio clips, thereby naturally augmenting training samples without requiring additional resources. Then, the predicted segment-level results are aggregated to obtain the entire song prediction. The most commonly used method is that the segment inherits the label of the clip containing it, but music emotion is not constant during the whole clip. Doing so will introduce label noise and make the training easy to overfit. To handle the noisy label issue, we propose a semi-supervised self-learning (SSSL) method, which can differentiate between samples with correct and incorrect labels in a self-learning manner, thus effectively utilizing the augmented segment-level data. Experiments on three public emotional datasets demonstrate that the proposed method can achieve better or comparable performance.
2411.01739
Yanyi Zhang
Yanyi Zhang, Binglin Qiu, Qi Jia, Yu Liu, Ran He
Not Just Object, But State: Compositional Incremental Learning without Forgetting
NeurIPS 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Most incremental learners excessively prioritize coarse classes of objects while neglecting various kinds of states (e.g. color and material) attached to the objects. As a result, they are limited in the ability to reason about the fine-grained compositionality of state-object pairs. To remedy this limitation, we propose a novel task called Compositional Incremental Learning (composition-IL), enabling the model to recognize state-object compositions as a whole in an incremental learning fashion. Given the lack of suitable benchmarks, we re-organize two existing datasets and make them tailored for composition-IL. Then, we propose a prompt-based Composition Incremental Learner (CompILer), to overcome the ambiguous composition boundary problem, which poses a major challenge to composition-IL. Specifically, we exploit multi-pool prompt learning, which is regularized by inter-pool prompt discrepancy and intra-pool prompt diversity. Besides, we devise object-injected state prompting by using object prompts to guide the selection of state prompts. Furthermore, we fuse the selected prompts by a generalized-mean strategy, to eliminate irrelevant information learned in the prompts. Extensive experiments on two datasets exhibit state-of-the-art performance achieved by CompILer.
[ { "version": "v1", "created": "Mon, 4 Nov 2024 01:42:41 GMT" }, { "version": "v2", "created": "Tue, 5 Nov 2024 10:23:00 GMT" }, { "version": "v3", "created": "Thu, 27 Mar 2025 08:15:46 GMT" } ]
2025-03-28T00:00:00
[ [ "Zhang", "Yanyi", "" ], [ "Qiu", "Binglin", "" ], [ "Jia", "Qi", "" ], [ "Liu", "Yu", "" ], [ "He", "Ran", "" ] ]
TITLE: Not Just Object, But State: Compositional Incremental Learning without Forgetting ABSTRACT: Most incremental learners excessively prioritize coarse classes of objects while neglecting various kinds of states (e.g. color and material) attached to the objects. As a result, they are limited in the ability to reason about the fine-grained compositionality of state-object pairs. To remedy this limitation, we propose a novel task called Compositional Incremental Learning (composition-IL), enabling the model to recognize state-object compositions as a whole in an incremental learning fashion. Given the lack of suitable benchmarks, we re-organize two existing datasets and make them tailored for composition-IL. Then, we propose a prompt-based Composition Incremental Learner (CompILer), to overcome the ambiguous composition boundary problem, which poses a major challenge to composition-IL. Specifically, we exploit multi-pool prompt learning, which is regularized by inter-pool prompt discrepancy and intra-pool prompt diversity. Besides, we devise object-injected state prompting by using object prompts to guide the selection of state prompts. Furthermore, we fuse the selected prompts by a generalized-mean strategy, to eliminate irrelevant information learned in the prompts. Extensive experiments on two datasets exhibit state-of-the-art performance achieved by CompILer.
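The generalized-mean fusion mentioned in the abstract plausibly follows the standard GeM formulation; a sketch under that assumption (p and eps are illustrative values, not from the paper):

```python
import torch

def generalized_mean(prompts, p=3.0, eps=1e-6):
    # GeM fusion over a set of selected prompts, shape (N, D):
    # p = 1 recovers average pooling, large p approaches max pooling.
    return prompts.clamp(min=eps).pow(p).mean(dim=0).pow(1.0 / p)
```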
2411.03055
Luca Zhou
Luca Zhou, Daniele Solombrino, Donato Crisostomi, Maria Sofia Bucarelli, Fabrizio Silvestri, Emanuele Rodol\`a
ATM: Improving Model Merging by Alternating Tuning and Merging
Main paper: 9 Pages, 9 figures, 1 table
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Model merging has recently emerged as a cost-efficient paradigm for multi-task learning. Among current approaches, task arithmetic stands out for its simplicity and effectiveness. In this paper, we motivate the effectiveness of task vectors by linking them to multi-task gradients. We show that in a single-epoch scenario, if the optimization is performed via gradient descent, task vectors are, after one step, mathematically equivalent to the gradients obtained via gradient descent in a multi-task setting, and still approximate these gradients in subsequent epochs. Furthermore, we show that the effectiveness of task vectors is largely driven by the first epoch's gradient. Given this parallel between task vectors and gradients, we propose viewing model merging as a single step in an iterative process that alternates between tuning and merging (ATM). We then propose two ways to utilize ATM. The first is to replace multi-task learning with ATM in scenarios where data sharing is prohibited, such as federated learning. The second is to improve the outcome of any model merging algorithm by applying a few post-hoc iterations of ATM on a small validation dataset, which is commonly available for hyperparameter tuning. Finally, we provide both empirical and theoretical support for the effectiveness of ATM, demonstrating that it minimizes an upper bound on the loss obtained by jointly finetuning all tasks.
[ { "version": "v1", "created": "Tue, 5 Nov 2024 12:42:42 GMT" }, { "version": "v2", "created": "Wed, 6 Nov 2024 13:24:10 GMT" }, { "version": "v3", "created": "Thu, 27 Mar 2025 08:57:30 GMT" } ]
2025-03-28T00:00:00
[ [ "Zhou", "Luca", "" ], [ "Solombrino", "Daniele", "" ], [ "Crisostomi", "Donato", "" ], [ "Bucarelli", "Maria Sofia", "" ], [ "Silvestri", "Fabrizio", "" ], [ "Rodolà", "Emanuele", "" ] ]
TITLE: ATM: Improving Model Merging by Alternating Tuning and Merging ABSTRACT: Model merging has recently emerged as a cost-efficient paradigm for multi-task learning. Among current approaches, task arithmetic stands out for its simplicity and effectiveness. In this paper, we motivate the effectiveness of task vectors by linking them to multi-task gradients. We show that in a single-epoch scenario, if the optimization is performed via gradient descent, task vectors are, after one step, mathematically equivalent to the gradients obtained via gradient descent in a multi-task setting, and still approximate these gradients in subsequent epochs. Furthermore, we show that the effectiveness of task vectors is largely driven by the first epoch's gradient. Given this parallel between task vectors and gradients, we propose viewing model merging as a single step in an iterative process that alternates between tuning and merging (ATM). We then propose two ways to utilize ATM. The first is to replace multi-task learning with ATM in scenarios where data sharing is prohibited, such as federated learning. The second is to improve the outcome of any model merging algorithm by applying a few post-hoc iterations of ATM on a small validation dataset, which is commonly available for hyperparameter tuning. Finally, we provide both empirical and theoretical support for the effectiveness of ATM, demonstrating that it minimizes an upper bound on the loss obtained by jointly finetuning all tasks.
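The tuning-and-merging loop the abstract describes can be sketched directly from the standard task-arithmetic definitions; finetune_fn, the number of iterations, and the scaling coefficient lam are placeholders, not values from the paper:

```python
def task_vector(finetuned, base):
    # Task vector: parameter-wise difference from the shared base model.
    return {k: finetuned[k] - base[k] for k in base}

def merge(base, task_vectors, lam=0.3):
    # Task arithmetic: add the scaled sum of task vectors to the base.
    merged = dict(base)
    for tv in task_vectors:
        for k, v in tv.items():
            merged[k] = merged[k] + lam * v
    return merged

def atm(base, finetune_fn, tasks, iterations=3, lam=0.3):
    # Alternate tuning and merging: each round finetunes from the current
    # merged model and merges the resulting task vectors back in.
    theta = base
    for _ in range(iterations):
        tvs = [task_vector(finetune_fn(theta, task), theta) for task in tasks]
        theta = merge(theta, tvs, lam)
    return theta
```

Plain task arithmetic is the iterations=1 special case; ATM's claim is that repeating the loop tracks multi-task gradient descent more closely.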
2411.04844
Shaokai Wu
Shaokai Wu, Yuxiang Lu, Wei Ji, Suizhi Huang, Fengyu Yang, Shalayiding Sirejiding, Qichen He, Jing Tong, Yanbiao Ji, Yue Ding, Hongtao Lu
Discretized Gaussian Representation for Tomographic Reconstruction
null
null
null
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computed Tomography (CT) is a widely used imaging technique that provides detailed cross-sectional views of objects. Over the past decade, Deep Learning-based Reconstruction (DLR) methods have led efforts to enhance image quality and reduce noise, yet they often require large amounts of data and are computationally intensive. Inspired by recent advancements in scene reconstruction, some approaches have adapted NeRF and 3D Gaussian Splatting (3DGS) techniques for CT reconstruction. However, these methods are not ideal for direct 3D volume reconstruction. In this paper, we propose a novel Discretized Gaussian Representation (DGR) for CT reconstruction, which directly reconstructs the 3D volume using a set of discretized Gaussian functions in an end-to-end manner. To further enhance computational efficiency, we introduce a Fast Volume Reconstruction technique that aggregates the contributions of these Gaussians into a discretized volume in a highly parallelized fashion. Our extensive experiments on both real-world and synthetic datasets demonstrate that DGR achieves superior reconstruction quality and significantly improved computational efficiency compared to existing DLR and instance reconstruction methods. Our code has been provided for review purposes and will be made publicly available upon publication.
[ { "version": "v1", "created": "Thu, 7 Nov 2024 16:32:29 GMT" }, { "version": "v2", "created": "Wed, 11 Dec 2024 17:40:32 GMT" }, { "version": "v3", "created": "Thu, 27 Mar 2025 15:00:57 GMT" } ]
2025-03-28T00:00:00
[ [ "Wu", "Shaokai", "" ], [ "Lu", "Yuxiang", "" ], [ "Ji", "Wei", "" ], [ "Huang", "Suizhi", "" ], [ "Yang", "Fengyu", "" ], [ "Sirejiding", "Shalayiding", "" ], [ "He", "Qichen", "" ], [ "Tong", "Jing", "" ], [ "Ji", "Yanbiao", "" ], [ "Ding", "Yue", "" ], [ "Lu", "Hongtao", "" ] ]
TITLE: Discretized Gaussian Representation for Tomographic Reconstruction ABSTRACT: Computed Tomography (CT) is a widely used imaging technique that provides detailed cross-sectional views of objects. Over the past decade, Deep Learning-based Reconstruction (DLR) methods have led efforts to enhance image quality and reduce noise, yet they often require large amounts of data and are computationally intensive. Inspired by recent advancements in scene reconstruction, some approaches have adapted NeRF and 3D Gaussian Splatting (3DGS) techniques for CT reconstruction. However, these methods are not ideal for direct 3D volume reconstruction. In this paper, we propose a novel Discretized Gaussian Representation (DGR) for CT reconstruction, which directly reconstructs the 3D volume using a set of discretized Gaussian functions in an end-to-end manner. To further enhance computational efficiency, we introduce a Fast Volume Reconstruction technique that aggregates the contributions of these Gaussians into a discretized volume in a highly parallelized fashion. Our extensive experiments on both real-world and synthetic datasets demonstrate that DGR achieves superior reconstruction quality and significantly improved computational efficiency compared to existing DLR and instance reconstruction methods. Our code has been provided for review purposes and will be made publicly available upon publication.
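The Fast Volume Reconstruction step named in the abstract restricts each Gaussian to nearby voxels and parallelizes the accumulation; the dense sketch below only illustrates the underlying aggregation of isotropic Gaussians into a discretized volume (parameter names and the isotropic assumption are illustrative):

```python
import torch

def splat_gaussians(means, sigmas, weights, voxel_centers):
    # Naive dense accumulation of isotropic 3D Gaussians onto a voxel grid.
    # means: (N, 3), sigmas: (N,), weights: (N,), voxel_centers: (V, 3).
    d2 = ((voxel_centers[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    dens = weights[None, :] * torch.exp(-0.5 * d2 / sigmas[None, :] ** 2)
    return dens.sum(-1)   # (V,) reconstructed density per voxel
```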
2411.10684
Haoxu Huang
Haoxu Huang, Cem M. Deniz, Kyunghyun Cho, Sumit Chopra, Divyam Madaan
HIST-AID: Leveraging Historical Patient Reports for Enhanced Multi-Modal Automatic Diagnosis
In Proceedings of Machine Learning for Health
PMLR 259(2025):502-523
null
null
eess.IV cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Chest X-ray imaging is a widely accessible and non-invasive diagnostic tool for detecting thoracic abnormalities. While numerous AI models assist radiologists in interpreting these images, most overlook patients' historical data. To bridge this gap, we introduce the Temporal MIMIC dataset, which integrates five years of patient history, including radiographic scans and reports from MIMIC-CXR and MIMIC-IV, encompassing 12,221 patients and thirteen pathologies. Building on this, we present HIST-AID, a framework that enhances automatic diagnostic accuracy using historical reports. HIST-AID emulates the radiologist's comprehensive approach, leveraging historical data to improve diagnostic accuracy. Our experiments demonstrate significant improvements, with AUROC increasing by 6.56% and AUPRC by 9.51% compared to models that rely solely on radiographic scans. These gains were consistently observed across diverse demographic groups, including variations in gender, age, and racial categories. We show that while recent data boost performance, older data may reduce accuracy due to changes in patient conditions. Our work demonstrates the potential of incorporating historical data for more reliable automatic diagnosis, providing critical support for clinical decision-making.
[ { "version": "v1", "created": "Sat, 16 Nov 2024 03:20:53 GMT" } ]
2025-03-28T00:00:00
[ [ "Huang", "Haoxu", "" ], [ "Deniz", "Cem M.", "" ], [ "Cho", "Kyunghyun", "" ], [ "Chopra", "Sumit", "" ], [ "Madaan", "Divyam", "" ] ]
TITLE: HIST-AID: Leveraging Historical Patient Reports for Enhanced Multi-Modal Automatic Diagnosis ABSTRACT: Chest X-ray imaging is a widely accessible and non-invasive diagnostic tool for detecting thoracic abnormalities. While numerous AI models assist radiologists in interpreting these images, most overlook patients' historical data. To bridge this gap, we introduce the Temporal MIMIC dataset, which integrates five years of patient history, including radiographic scans and reports from MIMIC-CXR and MIMIC-IV, encompassing 12,221 patients and thirteen pathologies. Building on this, we present HIST-AID, a framework that enhances automatic diagnostic accuracy using historical reports. HIST-AID emulates the radiologist's comprehensive approach, leveraging historical data to improve diagnostic accuracy. Our experiments demonstrate significant improvements, with AUROC increasing by 6.56% and AUPRC by 9.51% compared to models that rely solely on radiographic scans. These gains were consistently observed across diverse demographic groups, including variations in gender, age, and racial categories. We show that while recent data boost performance, older data may reduce accuracy due to changes in patient conditions. Our work demonstrates the potential of incorporating historical data for more reliable automatic diagnosis, providing critical support for clinical decision-making.
2411.14522
Tianbin Li
Tianbin Li, Yanzhou Su, Wei Li, Bin Fu, Zhe Chen, Ziyan Huang, Guoan Wang, Chenglong Ma, Ying Chen, Ming Hu, Yanjun Li, Pengcheng Chen, Xiaowei Hu, Zhongying Deng, Yuanfeng Ji, Jin Ye, Yu Qiao, Junjun He
GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite significant advancements in general AI, its effectiveness in the medical domain is limited by the lack of specialized medical knowledge. To address this, we formulate GMAI-VL-5.5M, a multimodal medical dataset created by converting hundreds of specialized medical datasets with various annotations into high-quality image-text pairs. This dataset offers comprehensive task coverage, diverse modalities, and rich image-text data. Building upon this dataset, we develop GMAI-VL, a general medical vision-language model, with a three-stage training strategy that enhances the integration of visual and textual information. This approach significantly improves the model's ability to process multimodal data, supporting accurate diagnoses and clinical decision-making. Experiments show that GMAI-VL achieves state-of-the-art performance across various multimodal medical tasks, including visual question answering and medical image diagnosis.
[ { "version": "v1", "created": "Thu, 21 Nov 2024 18:59:36 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 15:24:29 GMT" } ]
2025-03-28T00:00:00
[ [ "Li", "Tianbin", "" ], [ "Su", "Yanzhou", "" ], [ "Li", "Wei", "" ], [ "Fu", "Bin", "" ], [ "Chen", "Zhe", "" ], [ "Huang", "Ziyan", "" ], [ "Wang", "Guoan", "" ], [ "Ma", "Chenglong", "" ], [ "Chen", "Ying", "" ], [ "Hu", "Ming", "" ], [ "Li", "Yanjun", "" ], [ "Chen", "Pengcheng", "" ], [ "Hu", "Xiaowei", "" ], [ "Deng", "Zhongying", "" ], [ "Ji", "Yuanfeng", "" ], [ "Ye", "Jin", "" ], [ "Qiao", "Yu", "" ], [ "He", "Junjun", "" ] ]
TITLE: GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI ABSTRACT: Despite significant advancements in general AI, its effectiveness in the medical domain is limited by the lack of specialized medical knowledge. To address this, we formulate GMAI-VL-5.5M, a multimodal medical dataset created by converting hundreds of specialized medical datasets with various annotations into high-quality image-text pairs. This dataset offers comprehensive task coverage, diverse modalities, and rich image-text data. Building upon this dataset, we develop GMAI-VL, a general medical vision-language model, with a three-stage training strategy that enhances the integration of visual and textual information. This approach significantly improves the model's ability to process multimodal data, supporting accurate diagnoses and clinical decision-making. Experiments show that GMAI-VL achieves state-of-the-art performance across various multimodal medical tasks, including visual question answering and medical image diagnosis.
2411.15482
Su Sun
Su Sun, Cheng Zhao, Zhuoyang Sun, Yingjie Victor Chen, Mei Chen
SplatFlow: Self-Supervised Dynamic Gaussian Splatting in Neural Motion Flow Field for Autonomous Driving
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing Dynamic Gaussian Splatting methods for complex dynamic urban scenarios rely on accurate object-level supervision from expensive manual labeling, limiting their scalability in real-world applications. In this paper, we introduce SplatFlow, a Self-Supervised Dynamic Gaussian Splatting framework within Neural Motion Flow Fields (NMFF) that learns 4D space-time representations without requiring tracked 3D bounding boxes, enabling accurate dynamic scene reconstruction and novel view RGB/depth/flow synthesis. SplatFlow designs a unified framework to seamlessly integrate a time-dependent 4D Gaussian representation within NMFF, where NMFF is a set of implicit functions to model temporal motions of both LiDAR points and Gaussians as continuous motion flow fields. Leveraging NMFF, SplatFlow effectively decomposes static background and dynamic objects, representing them with 3D and 4D Gaussian primitives, respectively. NMFF also models the correspondences of each 4D Gaussian across time, which aggregates temporal features to enhance cross-view consistency of dynamic components. SplatFlow further improves dynamic object identification by distilling features from 2D foundation models into 4D space-time representation. Comprehensive evaluations conducted on the Waymo and KITTI Datasets validate SplatFlow's state-of-the-art (SOTA) performance for both image reconstruction and novel view synthesis in dynamic urban scenarios.
[ { "version": "v1", "created": "Sat, 23 Nov 2024 07:39:30 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 00:51:33 GMT" } ]
2025-03-28T00:00:00
[ [ "Sun", "Su", "" ], [ "Zhao", "Cheng", "" ], [ "Sun", "Zhuoyang", "" ], [ "Chen", "Yingjie Victor", "" ], [ "Chen", "Mei", "" ] ]
TITLE: SplatFlow: Self-Supervised Dynamic Gaussian Splatting in Neural Motion Flow Field for Autonomous Driving ABSTRACT: Most existing Dynamic Gaussian Splatting methods for complex dynamic urban scenarios rely on accurate object-level supervision from expensive manual labeling, limiting their scalability in real-world applications. In this paper, we introduce SplatFlow, a Self-Supervised Dynamic Gaussian Splatting framework within Neural Motion Flow Fields (NMFF) that learns 4D space-time representations without requiring tracked 3D bounding boxes, enabling accurate dynamic scene reconstruction and novel view RGB/depth/flow synthesis. SplatFlow designs a unified framework to seamlessly integrate a time-dependent 4D Gaussian representation within NMFF, where NMFF is a set of implicit functions to model temporal motions of both LiDAR points and Gaussians as continuous motion flow fields. Leveraging NMFF, SplatFlow effectively decomposes static background and dynamic objects, representing them with 3D and 4D Gaussian primitives, respectively. NMFF also models the correspondences of each 4D Gaussian across time, which aggregates temporal features to enhance cross-view consistency of dynamic components. SplatFlow further improves dynamic object identification by distilling features from 2D foundation models into 4D space-time representation. Comprehensive evaluations conducted on the Waymo and KITTI Datasets validate SplatFlow's state-of-the-art (SOTA) performance for both image reconstruction and novel view synthesis in dynamic urban scenarios.
2411.18620
Zhi Zhang
Zhi Zhang, Srishti Yadav, Fengze Han, Ekaterina Shutova
Cross-modal Information Flow in Multimodal Large Language Models
null
CVPR2025
null
null
cs.AI cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent advancements in auto-regressive multimodal large language models (MLLMs) have demonstrated promising progress for vision-language tasks. While there exists a variety of studies investigating the processing of linguistic information within large language models, little is currently known about the inner working mechanism of MLLMs and how linguistic and visual information interact within these models. In this study, we aim to fill this gap by examining the information flow between different modalities -- language and vision -- in MLLMs, focusing on visual question answering. Specifically, given an image-question pair as input, we investigate where in the model and how the visual and linguistic information are combined to generate the final prediction. Conducting experiments with a series of models from the LLaVA series, we find that there are two distinct stages in the process of integration of the two modalities. In the lower layers, the model first transfers the more general visual features of the whole image into the representations of (linguistic) question tokens. In the middle layers, it once again transfers visual information about specific objects relevant to the question to the respective token positions of the question. Finally, in the higher layers, the resulting multimodal representation is propagated to the last position of the input sequence for the final prediction. Overall, our findings provide a new and comprehensive perspective on the spatial and functional aspects of image and language processing in the MLLMs, thereby facilitating future research into multimodal information localization and editing. Our code and collected dataset are released here: https://github.com/FightingFighting/cross-modal-information-flow-in-MLLM.git.
[ { "version": "v1", "created": "Wed, 27 Nov 2024 18:59:26 GMT" }, { "version": "v2", "created": "Tue, 25 Mar 2025 18:59:50 GMT" } ]
2025-03-28T00:00:00
[ [ "Zhang", "Zhi", "" ], [ "Yadav", "Srishti", "" ], [ "Han", "Fengze", "" ], [ "Shutova", "Ekaterina", "" ] ]
TITLE: Cross-modal Information Flow in Multimodal Large Language Models ABSTRACT: The recent advancements in auto-regressive multimodal large language models (MLLMs) have demonstrated promising progress for vision-language tasks. While there exists a variety of studies investigating the processing of linguistic information within large language models, little is currently known about the inner working mechanism of MLLMs and how linguistic and visual information interact within these models. In this study, we aim to fill this gap by examining the information flow between different modalities -- language and vision -- in MLLMs, focusing on visual question answering. Specifically, given an image-question pair as input, we investigate where in the model and how the visual and linguistic information are combined to generate the final prediction. Conducting experiments with a series of models from the LLaVA series, we find that there are two distinct stages in the process of integration of the two modalities. In the lower layers, the model first transfers the more general visual features of the whole image into the representations of (linguistic) question tokens. In the middle layers, it once again transfers visual information about specific objects relevant to the question to the respective token positions of the question. Finally, in the higher layers, the resulting multimodal representation is propagated to the last position of the input sequence for the final prediction. Overall, our findings provide a new and comprehensive perspective on the spatial and functional aspects of image and language processing in the MLLMs, thereby facilitating future research into multimodal information localization and editing. Our code and collected dataset are released here: https://github.com/FightingFighting/cross-modal-information-flow-in-MLLM.git.
2411.19835
Mario Koddenbrock
S\"onke Tenckhoff, Mario Koddenbrock, Erik Rodner
Feedback-driven object detection and iterative model improvement
Code: https://github.com/ml-lab-htw/iterative-annotate Video: https://www.youtube.com/watch?v=CM9uhE8NN5E
https://www.gfai.de/fileadmin/Downloads/Tagungsband/gfai-tagungsband-2024.pdf
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated object detection has become increasingly valuable across diverse applications, yet efficient, high-quality annotation remains a persistent challenge. In this paper, we present the development and evaluation of a platform designed to interactively improve object detection models. The platform allows uploading and annotating images as well as fine-tuning object detection models. Users can then manually review and refine annotations, creating improved model snapshots that are used for automatic object detection on subsequent image uploads -- a process we refer to as semi-automatic annotation, which yields a significant gain in annotation efficiency. Whereas iterative refinement of model results to speed up annotation has become common practice, we are the first to quantitatively evaluate its benefits with respect to time, effort, and interaction savings. Our experimental results show clear evidence of a significant time reduction of up to 53% for semi-automatic compared to manual annotation. Importantly, these efficiency gains did not compromise annotation quality: the semi-automatic annotations matched, and occasionally even exceeded, the accuracy of manual annotations. These findings demonstrate the potential of our lightweight annotation platform for creating high-quality object detection datasets and provide best practices to guide the future development of annotation platforms. The platform is open-source, with the frontend and backend repositories available on GitHub. To support the understanding of our labeling process, we have created an explanatory video demonstrating the methodology using microscopy images of E. coli bacteria as an example.
[ { "version": "v1", "created": "Fri, 29 Nov 2024 16:45:25 GMT" }, { "version": "v2", "created": "Tue, 14 Jan 2025 14:53:10 GMT" }, { "version": "v3", "created": "Thu, 27 Mar 2025 08:34:04 GMT" } ]
2025-03-28T00:00:00
[ [ "Tenckhoff", "Sönke", "" ], [ "Koddenbrock", "Mario", "" ], [ "Rodner", "Erik", "" ] ]
TITLE: Feedback-driven object detection and iterative model improvement ABSTRACT: Automated object detection has become increasingly valuable across diverse applications, yet efficient, high-quality annotation remains a persistent challenge. In this paper, we present the development and evaluation of a platform designed to interactively improve object detection models. The platform allows uploading and annotating images as well as fine-tuning object detection models. Users can then manually review and refine annotations, creating improved model snapshots that are used for automatic object detection on subsequent image uploads -- a process we refer to as semi-automatic annotation, which yields a significant gain in annotation efficiency. Whereas iterative refinement of model results to speed up annotation has become common practice, we are the first to quantitatively evaluate its benefits with respect to time, effort, and interaction savings. Our experimental results show clear evidence of a significant time reduction of up to 53% for semi-automatic compared to manual annotation. Importantly, these efficiency gains did not compromise annotation quality: the semi-automatic annotations matched, and occasionally even exceeded, the accuracy of manual annotations. These findings demonstrate the potential of our lightweight annotation platform for creating high-quality object detection datasets and provide best practices to guide the future development of annotation platforms. The platform is open-source, with the frontend and backend repositories available on GitHub. To support the understanding of our labeling process, we have created an explanatory video demonstrating the methodology using microscopy images of E. coli bacteria as an example.
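The semi-automatic loop this abstract describes reduces to: predict with the current snapshot, let a human correct, fine-tune, repeat. A schematic, self-contained Python sketch follows; `predict`, `review`, and `fine_tune` are injected stand-ins and do not reflect the platform's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AnnotationLoop:
    """Schematic semi-automatic annotation loop.

    `predict`, `review`, and `fine_tune` are injected callables standing in
    for the platform's detector, review UI, and training job; none of these
    names come from the actual platform API.
    """
    predict: Callable[[list], list]
    review: Callable[[list, list], list]
    fine_tune: Callable[[list], None]
    dataset: List = field(default_factory=list)

    def run(self, image_batches):
        for batch in image_batches:
            proposals = self.predict(batch)            # automatic pre-annotation
            corrected = self.review(batch, proposals)  # human only fixes boxes
            self.dataset.extend(corrected)
            self.fine_tune(self.dataset)               # new model snapshot
        return self.dataset

# Tiny smoke test with trivial stand-ins:
loop = AnnotationLoop(
    predict=lambda imgs: [[("box", 0, 0, 10, 10)] for _ in imgs],
    review=lambda imgs, props: props,   # pretend the human accepts everything
    fine_tune=lambda data: None,
)
print(len(loop.run([["img1", "img2"], ["img3"]])))  # -> 3
```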
2412.00692
Yizhou Wang
Yizhou Wang, Tim Meinhardt, Orcun Cetintas, Cheng-Yen Yang, Sameer Satish Pusegaonkar, Benjamin Missaoui, Sujit Biswas, Zheng Tang, Laura Leal-Taix\'e
MCBLT: Multi-Camera Multi-Object 3D Tracking in Long Videos
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Object perception from multi-view cameras is crucial for intelligent systems, particularly in indoor environments, e.g., warehouses, retail stores, and hospitals. Most traditional multi-target multi-camera (MTMC) detection and tracking methods rely on 2D object detection, single-view multi-object tracking (MOT), and cross-view re-identification (ReID) techniques, without properly exploiting the 3D information available through multi-view image aggregation. In this paper, we propose a 3D object detection and tracking framework, named MCBLT, which first aggregates multi-view images with the necessary camera calibration parameters to obtain 3D object detections in bird's-eye view (BEV). Then, we introduce hierarchical graph neural networks (GNNs) to track these 3D detections in BEV for MTMC tracking results. Unlike existing methods, MCBLT generalizes well across different scenes and diverse camera settings, with exceptional capability for long-term association handling. As a result, our proposed MCBLT establishes a new state-of-the-art on the AICity'24 dataset with $81.22$ HOTA, and on the WildTrack dataset with $95.6$ IDF1.
[ { "version": "v1", "created": "Sun, 1 Dec 2024 06:18:06 GMT" }, { "version": "v2", "created": "Sat, 7 Dec 2024 22:46:42 GMT" }, { "version": "v3", "created": "Wed, 26 Mar 2025 19:59:25 GMT" } ]
2025-03-28T00:00:00
[ [ "Wang", "Yizhou", "" ], [ "Meinhardt", "Tim", "" ], [ "Cetintas", "Orcun", "" ], [ "Yang", "Cheng-Yen", "" ], [ "Pusegaonkar", "Sameer Satish", "" ], [ "Missaoui", "Benjamin", "" ], [ "Biswas", "Sujit", "" ], [ "Tang", "Zheng", "" ], [ "Leal-Taixé", "Laura", "" ] ]
TITLE: MCBLT: Multi-Camera Multi-Object 3D Tracking in Long Videos ABSTRACT: Object perception from multi-view cameras is crucial for intelligent systems, particularly in indoor environments, e.g., warehouses, retail stores, and hospitals. Most traditional multi-target multi-camera (MTMC) detection and tracking methods rely on 2D object detection, single-view multi-object tracking (MOT), and cross-view re-identification (ReID) techniques, without properly exploiting the 3D information available through multi-view image aggregation. In this paper, we propose a 3D object detection and tracking framework, named MCBLT, which first aggregates multi-view images with the necessary camera calibration parameters to obtain 3D object detections in bird's-eye view (BEV). Then, we introduce hierarchical graph neural networks (GNNs) to track these 3D detections in BEV for MTMC tracking results. Unlike existing methods, MCBLT generalizes well across different scenes and diverse camera settings, with exceptional capability for long-term association handling. As a result, our proposed MCBLT establishes a new state-of-the-art on the AICity'24 dataset with $81.22$ HOTA, and on the WildTrack dataset with $95.6$ IDF1.
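The BEV aggregation step this abstract describes ultimately rests on back-projecting calibrated image points onto the ground plane. The numpy sketch below shows only that geometric core, with made-up calibration values; MCBLT's actual multi-view feature aggregation is richer than this.

```python
import numpy as np

def ground_plane_homography(K, R, t):
    """Homography mapping pixels to the z=0 ground plane.

    A world point (x, y, 0) projects as lambda*[u,v,1]^T = K*(R@X + t)
    = K*[r1 | r2 | t] @ [x,y,1]^T, so the pixel->ground map is the
    inverse of H = K @ [r1 | r2 | t].
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return np.linalg.inv(H)

def pixel_to_ground(H_inv, u, v):
    p = H_inv @ np.array([u, v, 1.0])
    return p[:2] / p[2]          # (x, y) in world/BEV coordinates

# Illustrative calibration: camera 5 m up, looking straight down.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
R = np.array([[1.0, 0, 0], [0, -1.0, 0], [0, 0, -1.0]])  # flips y/z axes
t = -R @ np.array([0.0, 0.0, 5.0])                        # t = -R @ C
H_inv = ground_plane_homography(K, R, t)
print(pixel_to_ground(H_inv, 640, 360))  # principal ray hits (0, 0)
```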
2412.02479
Caixin Kang
Caixin Kang, Yubo Chen, Shouwei Ruan, Shiji Zhao, Ruochen Zhang, Jiayi Wang, Shan Fu, Xingxing Wei
OODFace: Benchmarking Robustness of Face Recognition under Common Corruptions and Appearance Variations
null
null
null
null
cs.CV cs.AI cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rise of deep learning, facial recognition technology has seen extensive research and rapid development. Although facial recognition is considered a mature technology, we find that existing open-source models and commercial algorithms lack robustness in certain complex Out-of-Distribution (OOD) scenarios, raising concerns about the reliability of these systems. In this paper, we introduce OODFace, which explores the OOD challenges faced by facial recognition models from two perspectives: common corruptions and appearance variations. We systematically design 30 OOD scenarios across 9 major categories tailored for facial recognition. By simulating these challenges on public datasets, we establish three robustness benchmarks: LFW-C/V, CFP-FP-C/V, and YTF-C/V. We then conduct extensive experiments on 19 facial recognition models and 3 commercial APIs, along with extended physical experiments on face masks to assess their robustness. Next, we explore potential solutions from two perspectives: defense strategies and Vision-Language Models (VLMs). Based on the results, we draw several key insights, highlighting the vulnerability of facial recognition systems to OOD data and suggesting possible solutions. Additionally, we offer a unified toolkit that includes all corruption and variation types, easily extendable to other datasets. We hope that our benchmarks and findings can provide guidance for future improvements in facial recognition model robustness.
[ { "version": "v1", "created": "Tue, 3 Dec 2024 14:42:31 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 05:40:57 GMT" } ]
2025-03-28T00:00:00
[ [ "Kang", "Caixin", "" ], [ "Chen", "Yubo", "" ], [ "Ruan", "Shouwei", "" ], [ "Zhao", "Shiji", "" ], [ "Zhang", "Ruochen", "" ], [ "Wang", "Jiayi", "" ], [ "Fu", "Shan", "" ], [ "Wei", "Xingxing", "" ] ]
TITLE: OODFace: Benchmarking Robustness of Face Recognition under Common Corruptions and Appearance Variations ABSTRACT: With the rise of deep learning, facial recognition technology has seen extensive research and rapid development. Although facial recognition is considered a mature technology, we find that existing open-source models and commercial algorithms lack robustness in certain complex Out-of-Distribution (OOD) scenarios, raising concerns about the reliability of these systems. In this paper, we introduce OODFace, which explores the OOD challenges faced by facial recognition models from two perspectives: common corruptions and appearance variations. We systematically design 30 OOD scenarios across 9 major categories tailored for facial recognition. By simulating these challenges on public datasets, we establish three robustness benchmarks: LFW-C/V, CFP-FP-C/V, and YTF-C/V. We then conduct extensive experiments on 19 facial recognition models and 3 commercial APIs, along with extended physical experiments on face masks to assess their robustness. Next, we explore potential solutions from two perspectives: defense strategies and Vision-Language Models (VLMs). Based on the results, we draw several key insights, highlighting the vulnerability of facial recognition systems to OOD data and suggesting possible solutions. Additionally, we offer a unified toolkit that includes all corruption and variation types, easily extendable to other datasets. We hope that our benchmarks and findings can provide guidance for future improvements in facial recognition model robustness.
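A corruption benchmark of the kind this abstract describes is assembled by applying parameterized perturbations to probe images and re-measuring verification accuracy. The sketch below shows two illustrative corruptions and a cosine-similarity verification loop with a placeholder embedding; OODFace's 30 scenarios and severity scales are of course far richer.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(img, severity=1):
    """Additive Gaussian noise; severity scales the standard deviation."""
    sigma = 0.04 * severity
    return np.clip(img + rng.normal(0, sigma, img.shape), 0.0, 1.0)

def box_blur(img, severity=1):
    """Crude blur via repeated 3x3 box filtering (no scipy needed)."""
    out = img.astype(float)
    for _ in range(severity):
        padded = np.pad(out, 1, mode="edge")
        out = sum(
            padded[i:i + out.shape[0], j:j + out.shape[1]]
            for i in range(3) for j in range(3)
        ) / 9.0
    return out

def verification_accuracy(embed, pairs, labels, threshold=0.5):
    """Cosine-similarity face verification over (img_a, img_b) pairs."""
    correct = 0
    for (a, b), same in zip(pairs, labels):
        ea, eb = embed(a), embed(b)
        sim = ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb))
        correct += int((sim > threshold) == same)
    return correct / len(labels)

# Benchmarking loop: clean accuracy vs. accuracy under each corruption.
# `embed` would wrap a real face-recognition model; here it is a stub.
embed = lambda img: img.mean(axis=0)  # placeholder embedding
imgs = [rng.random((32, 32)) for _ in range(4)]
pairs, labels = [(imgs[0], imgs[0]), (imgs[1], imgs[2])], [True, False]
for corrupt in (gaussian_noise, box_blur):
    noisy = [(corrupt(a), corrupt(b)) for a, b in pairs]
    print(corrupt.__name__, verification_accuracy(embed, noisy, labels))
```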
2412.03044
Xiaofeng Tan
Xiaofeng Tan, Hongsong Wang, Xin Geng and Liang Wang
Frequency-Guided Diffusion Model with Perturbation Training for Skeleton-Based Video Anomaly Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Video anomaly detection (VAD) is a vital yet complex open-set task in computer vision, commonly tackled through reconstruction-based methods. However, these methods struggle with two key limitations: (1) insufficient robustness in open-set scenarios, where unseen normal motions are frequently misclassified as anomalies, and (2) an overemphasis on, but restricted capacity for, reconstructing local motions, which are inherently difficult to capture accurately due to their diversity. To overcome these challenges, we introduce a novel frequency-guided diffusion model with perturbation training. First, we enhance robustness by training a generator to produce perturbed samples, which are similar to normal samples and target the weaknesses of the reconstruction model. This training paradigm expands the reconstruction domain of the model, improving its generalization to unseen normal motions. Second, to address the overemphasis on motion details, we employ the 2D Discrete Cosine Transform (DCT) to separate high-frequency (local) and low-frequency (global) motion components. By guiding the diffusion model with observed high-frequency information, we prioritize the reconstruction of low-frequency components, enabling more accurate and robust anomaly detection. Extensive experiments on five widely used VAD datasets demonstrate that our approach surpasses state-of-the-art methods, underscoring its effectiveness in open-set scenarios and diverse motion contexts. Our project website is https://xiaofeng-tan.github.io/projects/FG-Diff/index.html.
[ { "version": "v1", "created": "Wed, 4 Dec 2024 05:43:53 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 05:03:14 GMT" } ]
2025-03-28T00:00:00
[ [ "Tan", "Xiaofeng", "" ], [ "Wang", "Hongsong", "" ], [ "Geng", "Xin", "" ], [ "Wang", "Liang", "" ] ]
TITLE: Frequency-Guided Diffusion Model with Perturbation Training for Skeleton-Based Video Anomaly Detection ABSTRACT: Video anomaly detection (VAD) is a vital yet complex open-set task in computer vision, commonly tackled through reconstruction-based methods. However, these methods struggle with two key limitations: (1) insufficient robustness in open-set scenarios, where unseen normal motions are frequently misclassified as anomalies, and (2) an overemphasis on, but restricted capacity for, reconstructing local motions, which are inherently difficult to capture accurately due to their diversity. To overcome these challenges, we introduce a novel frequency-guided diffusion model with perturbation training. First, we enhance robustness by training a generator to produce perturbed samples, which are similar to normal samples and target the weaknesses of the reconstruction model. This training paradigm expands the reconstruction domain of the model, improving its generalization to unseen normal motions. Second, to address the overemphasis on motion details, we employ the 2D Discrete Cosine Transform (DCT) to separate high-frequency (local) and low-frequency (global) motion components. By guiding the diffusion model with observed high-frequency information, we prioritize the reconstruction of low-frequency components, enabling more accurate and robust anomaly detection. Extensive experiments on five widely used VAD datasets demonstrate that our approach surpasses state-of-the-art methods, underscoring its effectiveness in open-set scenarios and diverse motion contexts. Our project website is https://xiaofeng-tan.github.io/projects/FG-Diff/index.html.
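The DCT-based split between global and local motion described in this abstract can be prototyped in a few lines with scipy. A minimal sketch, assuming pose sequences flattened to a (frames x coordinates) array and an illustrative cutoff; the paper's actual band boundaries may differ.

```python
import numpy as np
from scipy.fft import dctn, idctn

def split_frequency(motion, keep=8):
    """Split a (frames x coords) motion array into low/high-frequency parts.

    The type-II orthonormal DCT concentrates smooth, global motion in the
    top-left coefficients; zeroing everything else and inverting gives the
    low-frequency component, and the residual is the high-frequency part.
    """
    coeffs = dctn(motion, type=2, norm="ortho")
    low_mask = np.zeros_like(coeffs)
    low_mask[:keep, :keep] = 1.0
    low = idctn(coeffs * low_mask, type=2, norm="ortho")
    return low, motion - low

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 64)
# Smooth global trajectory plus jittery local detail, 34 coordinates wide.
motion = np.sin(t)[:, None] * np.ones(34) + 0.05 * rng.normal(size=(64, 34))
low, high = split_frequency(motion)
print(np.allclose(low + high, motion))  # exact reconstruction: True
```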
2412.06602
Tianxin Xie
Tianxin Xie, Yan Rong, Pengfei Zhang, Wenwu Wang, Li Liu
Towards Controllable Speech Synthesis in the Era of Large Language Models: A Survey
A comprehensive survey on controllable TTS, 26 pages, 7 tables, 6 figures, 317 references. Under review
null
null
null
cs.CL cs.AI cs.LG cs.MM cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-to-speech (TTS), also known as speech synthesis, is a prominent research area that aims to generate natural-sounding human speech from text. Recently, with the increasing industrial demand, TTS technologies have evolved beyond synthesizing human-like speech to enabling controllable speech generation. This includes fine-grained control over various attributes of synthesized speech such as emotion, prosody, timbre, and duration. In addition, advancements in deep learning, such as diffusion and large language models, have significantly enhanced controllable TTS over the past several years. In this work, we conduct a comprehensive survey of controllable TTS, covering approaches ranging from basic control techniques to methods utilizing natural language prompts, aiming to provide a clear understanding of the current state of research. We examine the general controllable TTS pipeline, challenges, model architectures, and control strategies, offering a comprehensive and clear taxonomy of existing methods. Additionally, we provide a detailed summary of datasets and evaluation metrics and shed some light on the applications and future directions of controllable TTS. To the best of our knowledge, this survey paper provides the first comprehensive review of emerging controllable TTS methods, which can serve as a beneficial resource for both academic researchers and industrial practitioners.
[ { "version": "v1", "created": "Mon, 9 Dec 2024 15:50:25 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 03:56:00 GMT" } ]
2025-03-28T00:00:00
[ [ "Xie", "Tianxin", "" ], [ "Rong", "Yan", "" ], [ "Zhang", "Pengfei", "" ], [ "Wang", "Wenwu", "" ], [ "Liu", "Li", "" ] ]
TITLE: Towards Controllable Speech Synthesis in the Era of Large Language Models: A Survey ABSTRACT: Text-to-speech (TTS), also known as speech synthesis, is a prominent research area that aims to generate natural-sounding human speech from text. Recently, with the increasing industrial demand, TTS technologies have evolved beyond synthesizing human-like speech to enabling controllable speech generation. This includes fine-grained control over various attributes of synthesized speech such as emotion, prosody, timbre, and duration. In addition, advancements in deep learning, such as diffusion and large language models, have significantly enhanced controllable TTS over the past several years. In this work, we conduct a comprehensive survey of controllable TTS, covering approaches ranging from basic control techniques to methods utilizing natural language prompts, aiming to provide a clear understanding of the current state of research. We examine the general controllable TTS pipeline, challenges, model architectures, and control strategies, offering a comprehensive and clear taxonomy of existing methods. Additionally, we provide a detailed summary of datasets and evaluation metrics and shed some light on the applications and future directions of controllable TTS. To the best of our knowledge, this survey paper provides the first comprehensive review of emerging controllable TTS methods, which can serve as a beneficial resource for both academic researchers and industrial practitioners.
2412.09599
Ayaka Higami
Ayaka Higami, Karin Oshima, Tomoyo Isoguchi Shiramatsu, Hirokazu Takahashi, Shohei Nobuhara, Ko Nishino
RatBodyFormer: Rat Body Surface from Keypoints
https://vision.ist.i.kyoto-u.ac.jp/research/ratbodyformer/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analyzing rat behavior lies at the heart of many scientific studies. Past methods for automated rodent modeling have focused on 3D pose estimation from keypoints, e.g., the face and appendages. The pose, however, does not capture the rich body surface movement encoding subtle rat behaviors like curling and stretching. The body surface lacks features that can be visually defined, evading these established keypoint-based methods. In this paper, we introduce the first method for reconstructing the rat body surface as a dense set of points by learning to predict it from the sparse keypoints that can be detected with past methods. Our method consists of two key contributions. The first is RatDome, a novel multi-camera system for rat behavior capture, and a large-scale dataset captured with it that consists of pairs of 3D keypoints and 3D body surface points. The second is RatBodyFormer, a novel network that transforms detected keypoints into 3D body surface points. RatBodyFormer is agnostic to the exact locations of the 3D body surface points in the training data and is trained with masked learning. We experimentally validate our framework with a number of real-world experiments. Our results collectively serve as a novel foundation for automated rat behavior analysis.
[ { "version": "v1", "created": "Thu, 12 Dec 2024 18:59:00 GMT" }, { "version": "v2", "created": "Wed, 18 Dec 2024 03:49:22 GMT" }, { "version": "v3", "created": "Thu, 27 Mar 2025 01:58:34 GMT" } ]
2025-03-28T00:00:00
[ [ "Higami", "Ayaka", "" ], [ "Oshima", "Karin", "" ], [ "Shiramatsu", "Tomoyo Isoguchi", "" ], [ "Takahashi", "Hirokazu", "" ], [ "Nobuhara", "Shohei", "" ], [ "Nishino", "Ko", "" ] ]
TITLE: RatBodyFormer: Rat Body Surface from Keypoints ABSTRACT: Analyzing rat behavior lies at the heart of many scientific studies. Past methods for automated rodent modeling have focused on 3D pose estimation from keypoints, e.g., the face and appendages. The pose, however, does not capture the rich body surface movement encoding subtle rat behaviors like curling and stretching. The body surface lacks features that can be visually defined, evading these established keypoint-based methods. In this paper, we introduce the first method for reconstructing the rat body surface as a dense set of points by learning to predict it from the sparse keypoints that can be detected with past methods. Our method consists of two key contributions. The first is RatDome, a novel multi-camera system for rat behavior capture, and a large-scale dataset captured with it that consists of pairs of 3D keypoints and 3D body surface points. The second is RatBodyFormer, a novel network that transforms detected keypoints into 3D body surface points. RatBodyFormer is agnostic to the exact locations of the 3D body surface points in the training data and is trained with masked learning. We experimentally validate our framework with a number of real-world experiments. Our results collectively serve as a novel foundation for automated rat behavior analysis.
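A toy version of the keypoints-to-surface mapping can be written as a transformer over keypoint tokens plus learnable surface queries, supervised on a random subset of surface points each step in the spirit of masked learning. The PyTorch sketch below is an illustrative stand-in, not the RatBodyFormer architecture; all sizes are made up.

```python
import torch
import torch.nn as nn

class KeypointsToSurface(nn.Module):
    """Toy transformer mapping sparse 3D keypoints to dense surface points.

    Keypoints become input tokens; `n_surface` learnable query tokens are
    appended, and their output embeddings are decoded to 3D coordinates.
    This is an illustrative stand-in, not the RatBodyFormer architecture.
    """
    def __init__(self, n_keypoints=12, n_surface=256, d_model=64):
        super().__init__()
        self.kp_proj = nn.Linear(3, d_model)
        self.queries = nn.Parameter(torch.randn(n_surface, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 3)
        self.n_surface = n_surface

    def forward(self, keypoints):                     # (B, n_keypoints, 3)
        tokens = self.kp_proj(keypoints)
        q = self.queries.expand(keypoints.shape[0], -1, -1)
        out = self.encoder(torch.cat([tokens, q], dim=1))
        return self.head(out[:, -self.n_surface:])    # (B, n_surface, 3)

model = KeypointsToSurface()
surface = model(torch.randn(2, 12, 3))
# Masked-learning-style loss: supervise only a random subset of surface
# points each step, so no fixed point-wise correspondence is required.
mask = torch.rand(2, 256) < 0.5
target = torch.randn(2, 256, 3)
loss = ((surface - target) ** 2).mean(dim=-1)[mask].mean()
print(surface.shape, float(loss))
```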
2412.15215
Tao Xie
Tao Xie, Xi Chen, Zhen Xu, Yiman Xie, Yudong Jin, Yujun Shen, Sida Peng, Hujun Bao, Xiaowei Zhou
EnvGS: Modeling View-Dependent Appearance with Environment Gaussian
Project page: https://zju3dv.github.io/envgs
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing complex reflections in real-world scenes from 2D images is essential for achieving photorealistic novel view synthesis. Existing methods that utilize environment maps to model reflections from distant lighting often struggle with high-frequency reflection details and fail to account for near-field reflections. In this work, we introduce EnvGS, a novel approach that employs a set of Gaussian primitives as an explicit 3D representation for capturing reflections of environments. These environment Gaussian primitives are incorporated with base Gaussian primitives to model the appearance of the whole scene. To efficiently render these environment Gaussian primitives, we developed a ray-tracing-based renderer that leverages the GPU's RT core for fast rendering. This allows us to jointly optimize our model for high-quality reconstruction while maintaining real-time rendering speeds. Results from multiple real-world and synthetic datasets demonstrate that our method produces significantly more detailed reflections, achieving the best rendering quality in real-time novel view synthesis. The code is available at https://zju3dv.github.io/envgs.
[ { "version": "v1", "created": "Thu, 19 Dec 2024 18:59:57 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 11:12:07 GMT" } ]
2025-03-28T00:00:00
[ [ "Xie", "Tao", "" ], [ "Chen", "Xi", "" ], [ "Xu", "Zhen", "" ], [ "Xie", "Yiman", "" ], [ "Jin", "Yudong", "" ], [ "Shen", "Yujun", "" ], [ "Peng", "Sida", "" ], [ "Bao", "Hujun", "" ], [ "Zhou", "Xiaowei", "" ] ]
TITLE: EnvGS: Modeling View-Dependent Appearance with Environment Gaussian ABSTRACT: Reconstructing complex reflections in real-world scenes from 2D images is essential for achieving photorealistic novel view synthesis. Existing methods that utilize environment maps to model reflections from distant lighting often struggle with high-frequency reflection details and fail to account for near-field reflections. In this work, we introduce EnvGS, a novel approach that employs a set of Gaussian primitives as an explicit 3D representation for capturing reflections of environments. These environment Gaussian primitives are incorporated with base Gaussian primitives to model the appearance of the whole scene. To efficiently render these environment Gaussian primitives, we developed a ray-tracing-based renderer that leverages the GPU's RT core for fast rendering. This allows us to jointly optimize our model for high-quality reconstruction while maintaining real-time rendering speeds. Results from multiple real-world and synthetic datasets demonstrate that our method produces significantly more detailed reflections, achieving the best rendering quality in real-time novel view synthesis. The code is available at https://zju3dv.github.io/envgs.
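The core of view-dependent shading with a second, environment-facing representation is tracing the mirrored view ray r = d - 2(d.n)n from each surface point and blending the looked-up color with the base color. A small numpy sketch follows; the `env_color` callable and the fixed blend weight are placeholders for EnvGS's ray-traced environment Gaussians.

```python
import numpy as np

def reflect(d, n):
    """Mirror direction of incoming ray d about unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def shade(base_rgb, view_dir, normal, env_color, reflectance=0.3):
    """Blend a base color with environment color along the reflected ray.

    `env_color(direction)` stands in for querying the environment
    Gaussians by ray tracing; here any direction->RGB callable works.
    """
    r = reflect(view_dir, normal)
    return (1 - reflectance) * base_rgb + reflectance * env_color(r)

# Toy environment: blue "sky" above, gray "floor" below.
env = lambda d: np.array([0.3, 0.5, 1.0]) if d[2] > 0 else np.array([0.4] * 3)
view = np.array([0.0, 0.70710678, -0.70710678])   # looking down at 45 deg
normal = np.array([0.0, 0.0, 1.0])
print(shade(np.array([0.8, 0.2, 0.2]), view, normal, env))
```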
2412.16218
Xinkai Wei
Jianqing Liang, Xinkai Wei, Min Chen, Zhiqiang Wang, Jiye Liang
GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning
In Proceedings of AAAI 2025
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph contrastive learning (GCL) has become a hot topic in the field of graph representation learning. In contrast to traditional supervised learning, which relies on a large number of labels, GCL exploits augmentation strategies to generate multiple views and positive/negative pairs, both of which greatly influence performance. Unfortunately, commonly used random augmentations may disturb the underlying semantics of graphs. Moreover, traditional GNNs, a type of encoder widely employed in GCL, inevitably suffer from over-smoothing and over-squashing problems. To address these issues, we propose the GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning (GTCA), which inherits the advantages of both GNNs and Transformers, incorporating graph topology to obtain comprehensive graph representations. Theoretical analysis verifies the trustworthiness of the proposed method. Extensive experiments on benchmark datasets demonstrate state-of-the-art empirical performance.
[ { "version": "v1", "created": "Wed, 18 Dec 2024 09:20:12 GMT" }, { "version": "v2", "created": "Tue, 24 Dec 2024 02:02:24 GMT" }, { "version": "v3", "created": "Tue, 28 Jan 2025 09:48:54 GMT" }, { "version": "v4", "created": "Thu, 27 Mar 2025 13:44:56 GMT" } ]
2025-03-28T00:00:00
[ [ "Liang", "Jianqing", "" ], [ "Wei", "Xinkai", "" ], [ "Chen", "Min", "" ], [ "Wang", "Zhiqiang", "" ], [ "Liang", "Jiye", "" ] ]
TITLE: GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning ABSTRACT: Graph contrastive learning (GCL) has become a hot topic in the field of graph representation learning. In contrast to traditional supervised learning, which relies on a large number of labels, GCL exploits augmentation strategies to generate multiple views and positive/negative pairs, both of which greatly influence performance. Unfortunately, commonly used random augmentations may disturb the underlying semantics of graphs. Moreover, traditional GNNs, a type of encoder widely employed in GCL, inevitably suffer from over-smoothing and over-squashing problems. To address these issues, we propose the GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning (GTCA), which inherits the advantages of both GNNs and Transformers, incorporating graph topology to obtain comprehensive graph representations. Theoretical analysis verifies the trustworthiness of the proposed method. Extensive experiments on benchmark datasets demonstrate state-of-the-art empirical performance.
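A cooperative encoder in the spirit of this abstract pairs a message-passing view (local, topology-aware) with an attention view (global, free of over-squashing) of the same nodes. The PyTorch sketch below simply concatenates the two; GTCA's actual fusion and trustworthiness analysis are not reproduced, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One propagation step H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, H, A):
        A_hat = A + torch.eye(A.shape[0])
        d = A_hat.sum(dim=1).rsqrt()
        A_norm = d[:, None] * A_hat * d[None, :]
        return torch.relu(self.lin(A_norm @ H))

class GNNTransformerEncoder(nn.Module):
    """Toy cooperative encoder: local GCN view plus global Transformer view."""
    def __init__(self, d_in, d_model=64):
        super().__init__()
        self.gcn = GCNLayer(d_in, d_model)
        self.inp = nn.Linear(d_in, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, X, A):
        local = self.gcn(X, A)                          # message passing
        globl = self.transformer(self.inp(X)[None])[0]  # all-pairs attention
        return torch.cat([local, globl], dim=-1)        # (N, 2*d_model)

X, A = torch.randn(5, 8), (torch.rand(5, 5) > 0.5).float()
A = ((A + A.T) > 0).float()                             # symmetrize
print(GNNTransformerEncoder(8)(X, A).shape)             # torch.Size([5, 128])
```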
2412.20104
Wenkun He
Wenkun He, Yun Liu, Ruitao Liu, Li Yi
SyncDiff: Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis
26 pages, 10 figures
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://creativecommons.org/licenses/by/4.0/
Synthesizing realistic human-object interaction motions is a critical problem in VR/AR and human animation. Unlike the commonly studied scenarios involving a single human or hand interacting with one object, we address a more generic multi-body setting with arbitrary numbers of humans, hands, and objects. This complexity introduces significant challenges in synchronizing motions due to the high correlations and mutual influences among bodies. To address these challenges, we introduce SyncDiff, a novel method for multi-body interaction synthesis using a synchronized motion diffusion strategy. SyncDiff employs a single diffusion model to capture the joint distribution of multi-body motions. To enhance motion fidelity, we propose a frequency-domain motion decomposition scheme. Additionally, we introduce a new set of alignment scores to emphasize the synchronization of different body motions. SyncDiff jointly optimizes both data sample likelihood and alignment likelihood through an explicit synchronization strategy. Extensive experiments across four datasets with various multi-body configurations demonstrate the superiority of SyncDiff over existing state-of-the-art motion synthesis methods.
[ { "version": "v1", "created": "Sat, 28 Dec 2024 10:12:12 GMT" }, { "version": "v2", "created": "Mon, 13 Jan 2025 11:46:06 GMT" }, { "version": "v3", "created": "Tue, 25 Mar 2025 04:15:15 GMT" }, { "version": "v4", "created": "Thu, 27 Mar 2025 02:17:08 GMT" } ]
2025-03-28T00:00:00
[ [ "He", "Wenkun", "" ], [ "Liu", "Yun", "" ], [ "Liu", "Ruitao", "" ], [ "Yi", "Li", "" ] ]
TITLE: SyncDiff: Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis ABSTRACT: Synthesizing realistic human-object interaction motions is a critical problem in VR/AR and human animation. Unlike the commonly studied scenarios involving a single human or hand interacting with one object, we address a more generic multi-body setting with arbitrary numbers of humans, hands, and objects. This complexity introduces significant challenges in synchronizing motions due to the high correlations and mutual influences among bodies. To address these challenges, we introduce SyncDiff, a novel method for multi-body interaction synthesis using a synchronized motion diffusion strategy. SyncDiff employs a single diffusion model to capture the joint distribution of multi-body motions. To enhance motion fidelity, we propose a frequency-domain motion decomposition scheme. Additionally, we introduce a new set of alignment scores to emphasize the synchronization of different body motions. SyncDiff jointly optimizes both data sample likelihood and alignment likelihood through an explicit synchronization strategy. Extensive experiments across four datasets with various multi-body configurations demonstrate the superiority of SyncDiff over existing state-of-the-art motion synthesis methods.
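One simple way to score how synchronized two bodies' motions are, in the spirit of the alignment scores this abstract mentions, is to correlate their per-frame speed profiles. The numpy sketch below is an illustrative stand-in, not SyncDiff's actual alignment terms.

```python
import numpy as np

def alignment_score(motion_a, motion_b):
    """Correlation of per-frame speed profiles of two motion sequences.

    Each motion is (frames x joints x 3); synchronized accelerations and
    pauses yield a score near 1, independent motion a score near 0.
    """
    speed = lambda m: np.linalg.norm(np.diff(m, axis=0), axis=-1).mean(axis=-1)
    sa, sb = speed(motion_a), speed(motion_b)
    sa, sb = sa - sa.mean(), sb - sb.mean()
    return float(sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb) + 1e-9))

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 60)[:, None, None]
hand = np.sin(t) + 0.01 * rng.normal(size=(60, 5, 3))      # 5-joint "hand"
obj_sync = np.sin(t) + 0.01 * rng.normal(size=(60, 1, 3))  # moves with it
obj_rand = rng.normal(size=(60, 1, 3))                     # unrelated motion
print(alignment_score(hand, obj_sync), alignment_score(hand, obj_rand))
```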
2501.01855
Huaxiang Zhang
Huaxiang Zhang, Kai Liu, Zhongxue Gan, and Guo-Niu Zhu
UAV-DETR: Efficient End-to-End Object Detection for Unmanned Aerial Vehicle Imagery
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned aerial vehicle object detection (UAV-OD) has been widely used in various scenarios. However, most existing UAV-OD algorithms rely on manually designed components, which require extensive tuning. End-to-end models that do not depend on such manually designed components are mainly designed for natural images, which are less effective for UAV imagery. To address such challenges, this paper proposes an efficient detection transformer (DETR) framework tailored for UAV imagery, i.e., UAV-DETR. The framework includes a multi-scale feature fusion with frequency enhancement module, which captures both spatial and frequency information at different scales. In addition, a frequency-focused down-sampling module is presented to retain critical spatial details during down-sampling. A semantic alignment and calibration module is developed to align and fuse features from different fusion paths. Experimental results demonstrate the effectiveness and generalization of our approach across various UAV imagery datasets. On the VisDrone dataset, our method improves AP by 3.1\% and $\text{AP}_{50}$ by 4.2\% over the baseline. Similar enhancements are observed on the UAVVaste dataset. The project page: https://github.com/ValiantDiligent/UAV-DETR
[ { "version": "v1", "created": "Fri, 3 Jan 2025 15:11:14 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 14:17:42 GMT" } ]
2025-03-28T00:00:00
[ [ "Zhang", "Huaxiang", "" ], [ "Liu", "Kai", "" ], [ "Gan", "Zhongxue", "" ], [ "Zhu", "Guo-Niu", "" ] ]
TITLE: UAV-DETR: Efficient End-to-End Object Detection for Unmanned Aerial Vehicle Imagery ABSTRACT: Unmanned aerial vehicle object detection (UAV-OD) has been widely used in various scenarios. However, most existing UAV-OD algorithms rely on manually designed components, which require extensive tuning. End-to-end models that do not depend on such manually designed components are mainly designed for natural images, which are less effective for UAV imagery. To address such challenges, this paper proposes an efficient detection transformer (DETR) framework tailored for UAV imagery, i.e., UAV-DETR. The framework includes a multi-scale feature fusion with frequency enhancement module, which captures both spatial and frequency information at different scales. In addition, a frequency-focused down-sampling module is presented to retain critical spatial details during down-sampling. A semantic alignment and calibration module is developed to align and fuse features from different fusion paths. Experimental results demonstrate the effectiveness and generalization of our approach across various UAV imagery datasets. On the VisDrone dataset, our method improves AP by 3.1\% and $\text{AP}_{50}$ by 4.2\% over the baseline. Similar enhancements are observed on the UAVVaste dataset. The project page: https://github.com/ValiantDiligent/UAV-DETR
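Frequency enhancement of a fused feature map can be prototyped by reweighting its Fourier spectrum so that fine, high-frequency structure (small objects, edges) is amplified. The numpy sketch below uses a fixed radius and gain as illustrative assumptions; UAV-DETR's module learns its enhancement rather than hard-coding it.

```python
import numpy as np

def enhance_high_frequency(feat, radius=0.15, gain=2.0):
    """Boost high spatial frequencies of an (H x W) feature map.

    Frequencies farther than `radius` (in normalized units) from DC are
    multiplied by `gain`; the inverse FFT returns an edge-sharpened map.
    """
    H, W = feat.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    weight = np.where(np.sqrt(fx**2 + fy**2) > radius, gain, 1.0)
    spec = np.fft.fft2(feat) * weight
    return np.fft.ifft2(spec).real

rng = np.random.default_rng(0)
smooth = np.outer(np.hanning(64), np.hanning(64))   # low-frequency blob
detail = rng.normal(scale=0.05, size=(64, 64))      # fine texture
out = enhance_high_frequency(smooth + detail)
print(out.std() > (smooth + detail).std())          # detail amplified: True
```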
2501.02471
Yishen Liu
Yishen Liu and Shengda Luo and Zishao Zhong and Tongtong Wu and Jianguo Zhang and Peiyao Ou and Yong Liang and Liang Liu and Hudan Pan
Hengqin-RA-v1: Advanced Large Language Model for Diagnosis and Treatment of Rheumatoid Arthritis with Dataset based Traditional Chinese Medicine
8 pages, 5 figures, AAAI-2025 Workshop
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs), primarily trained on English texts, often face biases and inaccuracies in Chinese contexts. Their limitations are pronounced in fields like Traditional Chinese Medicine (TCM), where cultural and clinical subtleties are vital, and are further compounded by a lack of domain-specific data on conditions such as rheumatoid arthritis (RA). To address these issues, this paper introduces Hengqin-RA-v1, the first large language model specifically tailored for TCM with a focus on diagnosing and treating RA. We also present HQ-GCM-RA-C1, a comprehensive RA-specific dataset curated from ancient Chinese medical literature, classical texts, and modern clinical studies. This dataset empowers Hengqin-RA-v1 to deliver accurate and culturally informed responses, effectively bridging the gaps left by general-purpose models. Extensive experiments demonstrate that Hengqin-RA-v1 outperforms state-of-the-art models, even surpassing the diagnostic accuracy of TCM practitioners in certain cases.
[ { "version": "v1", "created": "Sun, 5 Jan 2025 07:46:51 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 06:39:45 GMT" } ]
2025-03-28T00:00:00
[ [ "Liu", "Yishen", "" ], [ "Luo", "Shengda", "" ], [ "Zhong", "Zishao", "" ], [ "Wu", "Tongtong", "" ], [ "Zhang", "Jianguo", "" ], [ "Ou", "Peiyao", "" ], [ "Liang", "Yong", "" ], [ "Liu", "Liang", "" ], [ "Pan", "Hudan", "" ] ]
TITLE: Hengqin-RA-v1: Advanced Large Language Model for Diagnosis and Treatment of Rheumatoid Arthritis with Dataset based Traditional Chinese Medicine ABSTRACT: Large language models (LLMs), primarily trained on English texts, often face biases and inaccuracies in Chinese contexts. Their limitations are pronounced in fields like Traditional Chinese Medicine (TCM), where cultural and clinical subtleties are vital, and are further compounded by a lack of domain-specific data on conditions such as rheumatoid arthritis (RA). To address these issues, this paper introduces Hengqin-RA-v1, the first large language model specifically tailored for TCM with a focus on diagnosing and treating RA. We also present HQ-GCM-RA-C1, a comprehensive RA-specific dataset curated from ancient Chinese medical literature, classical texts, and modern clinical studies. This dataset empowers Hengqin-RA-v1 to deliver accurate and culturally informed responses, effectively bridging the gaps left by general-purpose models. Extensive experiments demonstrate that Hengqin-RA-v1 outperforms state-of-the-art models, even surpassing the diagnostic accuracy of TCM practitioners in certain cases.
2501.03550
Sha Wang
Pan Guo, Yuan Gao, Yongjie Pu, Zhigang Zhao, Zhenhua Cong and Sha Wang
Intelligent Mode-Locked Single-Cavity Dual-Comb Laser Utilizing Time-Stretch Dispersive Fourier Transform Spectroscopy with Supplemental File
10 pages, 8 figures
null
null
null
physics.optics
http://creativecommons.org/licenses/by/4.0/
As dual combs play a significant role in numerous high-precision measurements, their efficient generation has been widely researched. Although single-cavity dual-comb generation avoids complex active stabilization methods, achieving and maintaining stable dual-comb mode locking within a single cavity remains a critical challenge. To break through this constraint, a two-part evaluation criterion combining a fitness function and a CNN-Transformer network is employed to achieve mode locking and classify the dual-comb mode-locked state. Simulated time-stretch dispersive Fourier transform (DFT) spectra are used as datasets, which simplifies the optimization process and removes the dependence on specific experimental data. A purpose-built evolutionary algorithm (EA) for paddle-based motorized polarization controllers (MPCs) is proposed, enabling the intelligent attainment of dual-comb mode-locked states. A real-time library stores fitness values and MPC angles, facilitating achievement of a mode-locked state within 2 seconds. Finally, long-term operation of dual-comb mode locking is ensured by a random collision algorithm that uses weak soliton peaks as its evaluation criterion.
[ { "version": "v1", "created": "Tue, 7 Jan 2025 05:52:35 GMT" }, { "version": "v2", "created": "Thu, 27 Mar 2025 12:35:53 GMT" } ]
2025-03-28T00:00:00
[ [ "Guo", "Pan", "" ], [ "Gao", "Yuan", "" ], [ "Pu", "Yongjie", "" ], [ "Zhao", "Zhigang", "" ], [ "Cong", "Zhenhua", "" ], [ "Wang", "Sha", "" ] ]
TITLE: Intelligent Mode-Locked Single-Cavity Dual-Comb Laser Utilizing Time-Stretch Dispersive Fourier Transform Spectroscopy with Supplemental File ABSTRACT: As dual combs play a significant role in numerous high-precision measurements, their efficient generation has been widely researched. Although single-cavity dual-comb generation avoids complex active stabilization methods, achieving and maintaining stable dual-comb mode locking within a single cavity remains a critical challenge. To break through this constraint, a two-part evaluation criterion combining a fitness function and a CNN-Transformer network is employed to achieve mode locking and classify the dual-comb mode-locked state. Simulated time-stretch dispersive Fourier transform (DFT) spectra are used as datasets, which simplifies the optimization process and removes the dependence on specific experimental data. A purpose-built evolutionary algorithm (EA) for paddle-based motorized polarization controllers (MPCs) is proposed, enabling the intelligent attainment of dual-comb mode-locked states. A real-time library stores fitness values and MPC angles, facilitating achievement of a mode-locked state within 2 seconds. Finally, long-term operation of dual-comb mode locking is ensured by a random collision algorithm that uses weak soliton peaks as its evaluation criterion.
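The EA over paddle angles reduces to a standard mutate-and-select loop on a small parameter vector, with the fitness function playing the role described in this abstract. A generic sketch follows, with a synthetic quadratic fitness standing in for the DFT-spectrum criterion; the population size and mutation scale are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve(fitness, n_angles=4, pop=16, generations=40, sigma=10.0):
    """Simple (mu+lambda)-style EA over polarization-paddle angles.

    Keeps the best half of each generation and refills by Gaussian
    mutation; `fitness` maps an angle vector (degrees) to a score.
    """
    population = rng.uniform(0, 180, size=(pop, n_angles))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in population])
        elite = population[np.argsort(scores)[-pop // 2:]]
        children = elite + rng.normal(0, sigma, elite.shape)
        population = np.vstack([elite, np.mod(children, 180.0)])
    best = max(population, key=fitness)
    return best, fitness(best)

# Synthetic fitness: peaks at a hidden "mode-locked" angle setting; a real
# system would instead score the measured DFT spectrum of the laser output.
target = np.array([30.0, 75.0, 120.0, 160.0])
fitness = lambda a: -np.sum((a - target) ** 2)
best, score = evolve(fitness)
print(np.round(best, 1), round(score, 2))
```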