| Column | Type | Lengths / values |
|---|---|---|
| id | string | lengths 9–16 |
| submitter | string | lengths 3–64 |
| authors | string | lengths 5–6.63k |
| title | string | lengths 7–245 |
| comments | string | lengths 1–482 |
| journal-ref | string | lengths 4–382 |
| doi | string | lengths 9–151 |
| report-no | string | 984 classes |
| categories | string | lengths 5–108 |
| license | string | 9 classes |
| abstract | string | lengths 83–3.41k |
| versions | list | lengths 1–20 |
| update_date | timestamp[s] | 2007-05-23 00:00:00 to 2025-04-11 00:00:00 |
| authors_parsed | list | lengths 1–427 |
| prompt | string | lengths 166–3.49k |
| label | string | 2 classes |
| prob | float64 | 0.5–0.98 |
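For orientation, here is a minimal sketch of loading a dataset with this schema and inspecting one record via the Hugging Face `datasets` library; the repository path used below is a placeholder, not the actual dataset name.

```python
from datasets import load_dataset

# Placeholder repository path; substitute the real dataset name.
ds = load_dataset("user/arxiv-new-dataset-cls", split="train")

# Column names and types should mirror the schema summarized above.
print(ds.features)

# One record: arXiv metadata plus the classification fields.
example = ds[0]
print(example["title"])
print(example["label"], example["prob"])  # e.g. "no_new_dataset", 0.95
```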
2503.10604
Yingshuang Zou
Yingshuang Zou, Yikang Ding, Chuanrui Zhang, Jiazhe Guo, Bohan Li, Xiaoyang Lyu, Feiyang Tan, Xiaojuan Qi, Haoqian Wang
MuDG: Taming Multi-modal Diffusion with Gaussian Splatting for Urban Scene Reconstruction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent breakthroughs in radiance fields have significantly advanced 3D scene reconstruction and novel view synthesis (NVS) in autonomous driving. Nevertheless, critical limitations persist: reconstruction-based methods exhibit substantial performance deterioration under significant viewpoint deviations from training trajectories, while generation-based techniques struggle with temporal coherence and precise scene controllability. To overcome these challenges, we present MuDG, an innovative framework that integrates Multi-modal Diffusion model with Gaussian Splatting (GS) for Urban Scene Reconstruction. MuDG leverages aggregated LiDAR point clouds with RGB and geometric priors to condition a multi-modal video diffusion model, synthesizing photorealistic RGB, depth, and semantic outputs for novel viewpoints. This synthesis pipeline enables feed-forward NVS without computationally intensive per-scene optimization, providing comprehensive supervision signals to refine 3DGS representations for rendering robustness enhancement under extreme viewpoint changes. Experiments on the Open Waymo Dataset demonstrate that MuDG outperforms existing methods in both reconstruction and synthesis quality.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 17:48:41 GMT" } ]
2025-03-14T00:00:00
[ [ "Zou", "Yingshuang", "" ], [ "Ding", "Yikang", "" ], [ "Zhang", "Chuanrui", "" ], [ "Guo", "Jiazhe", "" ], [ "Li", "Bohan", "" ], [ "Lyu", "Xiaoyang", "" ], [ "Tan", "Feiyang", "" ], [ "Qi", "Xiaojuan", "" ], [ "Wang", "Haoqian", "" ] ]
TITLE: MuDG: Taming Multi-modal Diffusion with Gaussian Splatting for Urban Scene Reconstruction ABSTRACT: Recent breakthroughs in radiance fields have significantly advanced 3D scene reconstruction and novel view synthesis (NVS) in autonomous driving. Nevertheless, critical limitations persist: reconstruction-based methods exhibit substantial performance deterioration under significant viewpoint deviations from training trajectories, while generation-based techniques struggle with temporal coherence and precise scene controllability. To overcome these challenges, we present MuDG, an innovative framework that integrates Multi-modal Diffusion model with Gaussian Splatting (GS) for Urban Scene Reconstruction. MuDG leverages aggregated LiDAR point clouds with RGB and geometric priors to condition a multi-modal video diffusion model, synthesizing photorealistic RGB, depth, and semantic outputs for novel viewpoints. This synthesis pipeline enables feed-forward NVS without computationally intensive per-scene optimization, providing comprehensive supervision signals to refine 3DGS representations for rendering robustness enhancement under extreme viewpoint changes. Experiments on the Open Waymo Dataset demonstrate that MuDG outperforms existing methods in both reconstruction and synthesis quality.
no_new_dataset
0.945751
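As the record above shows, the prompt field concatenates the title and abstract, label takes one of two classes (new_dataset / no_new_dataset), and prob is a confidence score. A small sketch of how such prompts could be rebuilt and how prob might be used for filtering follows; the exact whitespace of the prompt format and the 0.9 threshold are assumptions, not taken from the dataset documentation.

```python
def build_prompt(record: dict) -> str:
    # Mirrors the visible "TITLE: ... ABSTRACT: ..." layout; actual whitespace may differ.
    return f"TITLE: {record['title']} ABSTRACT: {record['abstract']}"

def keep_confident(records, threshold: float = 0.9):
    # Retain only records whose label confidence clears an (assumed) threshold.
    return [r for r in records if r["prob"] >= threshold]
```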
2503.10605
Alexey Nekrasov
Severin Heidrich, Till Beemelmanns, Alexey Nekrasov, Bastian Leibe, Lutz Eckstein
OCCUQ: Exploring Efficient Uncertainty Quantification for 3D Occupancy Prediction
Accepted for publication at ICRA 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Autonomous driving has the potential to significantly enhance productivity and provide numerous societal benefits. Ensuring robustness in these safety-critical systems is essential, particularly when vehicles must navigate adverse weather conditions and sensor corruptions that may not have been encountered during training. Current methods often overlook uncertainties arising from adversarial conditions or distributional shifts, limiting their real-world applicability. We propose an efficient adaptation of an uncertainty estimation technique for 3D occupancy prediction. Our method dynamically calibrates model confidence using epistemic uncertainty estimates. Our evaluation under various camera corruption scenarios, such as fog or missing cameras, demonstrates that our approach effectively quantifies epistemic uncertainty by assigning higher uncertainty values to unseen data. We introduce region-specific corruptions to simulate defects affecting only a single camera and validate our findings through both scene-level and region-level assessments. Our results show superior performance in Out-of-Distribution (OoD) detection and confidence calibration compared to common baselines such as Deep Ensembles and MC-Dropout. Our approach consistently demonstrates reliable uncertainty measures, indicating its potential for enhancing the robustness of autonomous driving systems in real-world scenarios. Code and dataset are available at https://github.com/ika-rwth-aachen/OCCUQ .
[ { "version": "v1", "created": "Thu, 13 Mar 2025 17:50:07 GMT" } ]
2025-03-14T00:00:00
[ [ "Heidrich", "Severin", "" ], [ "Beemelmanns", "Till", "" ], [ "Nekrasov", "Alexey", "" ], [ "Leibe", "Bastian", "" ], [ "Eckstein", "Lutz", "" ] ]
TITLE: OCCUQ: Exploring Efficient Uncertainty Quantification for 3D Occupancy Prediction ABSTRACT: Autonomous driving has the potential to significantly enhance productivity and provide numerous societal benefits. Ensuring robustness in these safety-critical systems is essential, particularly when vehicles must navigate adverse weather conditions and sensor corruptions that may not have been encountered during training. Current methods often overlook uncertainties arising from adversarial conditions or distributional shifts, limiting their real-world applicability. We propose an efficient adaptation of an uncertainty estimation technique for 3D occupancy prediction. Our method dynamically calibrates model confidence using epistemic uncertainty estimates. Our evaluation under various camera corruption scenarios, such as fog or missing cameras, demonstrates that our approach effectively quantifies epistemic uncertainty by assigning higher uncertainty values to unseen data. We introduce region-specific corruptions to simulate defects affecting only a single camera and validate our findings through both scene-level and region-level assessments. Our results show superior performance in Out-of-Distribution (OoD) detection and confidence calibration compared to common baselines such as Deep Ensembles and MC-Dropout. Our approach consistently demonstrates reliable uncertainty measures, indicating its potential for enhancing the robustness of autonomous driving systems in real-world scenarios. Code and dataset are available at https://github.com/ika-rwth-aachen/OCCUQ .
no_new_dataset
0.881819
2503.10621
Ayesha Ishaq Ms
Ayesha Ishaq, Jean Lahoud, Ketan More, Omkar Thawakar, Ritesh Thawkar, Dinura Dissanayake, Noor Ahsan, Yuhao Li, Fahad Shahbaz Khan, Hisham Cholakkal, Ivan Laptev, Rao Muhammad Anwer, Salman Khan
DriveLMM-o1: A Step-by-Step Reasoning Dataset and Large Multimodal Model for Driving Scenario Understanding
8 pages, 4 figures, 3 tables, github: https://github.com/ayesha-ishaq/DriveLMM-o1
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
While large multimodal models (LMMs) have demonstrated strong performance across various Visual Question Answering (VQA) tasks, certain challenges require complex multi-step reasoning to reach accurate answers. One particularly challenging task is autonomous driving, which demands thorough cognitive processing before decisions can be made. In this domain, a sequential and interpretive understanding of visual cues is essential for effective perception, prediction, and planning. Nevertheless, common VQA benchmarks often focus on the accuracy of the final answer while overlooking the reasoning process that enables the generation of accurate responses. Moreover, existing methods lack a comprehensive framework for evaluating step-by-step reasoning in realistic driving scenarios. To address this gap, we propose DriveLMM-o1, a new dataset and benchmark specifically designed to advance step-wise visual reasoning for autonomous driving. Our benchmark features over 18k VQA examples in the training set and more than 4k in the test set, covering diverse questions on perception, prediction, and planning, each enriched with step-by-step reasoning to ensure logical inference in autonomous driving scenarios. We further introduce a large multimodal model that is fine-tuned on our reasoning dataset, demonstrating robust performance in complex driving scenarios. In addition, we benchmark various open-source and closed-source methods on our proposed dataset, systematically comparing their reasoning capabilities for autonomous driving tasks. Our model achieves a +7.49% gain in final answer accuracy, along with a 3.62% improvement in reasoning score over the previous best open-source model. Our framework, dataset, and model are available at https://github.com/ayesha-ishaq/DriveLMM-o1.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 17:59:01 GMT" } ]
2025-03-14T00:00:00
[ [ "Ishaq", "Ayesha", "" ], [ "Lahoud", "Jean", "" ], [ "More", "Ketan", "" ], [ "Thawakar", "Omkar", "" ], [ "Thawkar", "Ritesh", "" ], [ "Dissanayake", "Dinura", "" ], [ "Ahsan", "Noor", "" ], [ "Li", "Yuhao", "" ], [ "Khan", "Fahad Shahbaz", "" ], [ "Cholakkal", "Hisham", "" ], [ "Laptev", "Ivan", "" ], [ "Anwer", "Rao Muhammad", "" ], [ "Khan", "Salman", "" ] ]
TITLE: DriveLMM-o1: A Step-by-Step Reasoning Dataset and Large Multimodal Model for Driving Scenario Understanding ABSTRACT: While large multimodal models (LMMs) have demonstrated strong performance across various Visual Question Answering (VQA) tasks, certain challenges require complex multi-step reasoning to reach accurate answers. One particularly challenging task is autonomous driving, which demands thorough cognitive processing before decisions can be made. In this domain, a sequential and interpretive understanding of visual cues is essential for effective perception, prediction, and planning. Nevertheless, common VQA benchmarks often focus on the accuracy of the final answer while overlooking the reasoning process that enables the generation of accurate responses. Moreover, existing methods lack a comprehensive framework for evaluating step-by-step reasoning in realistic driving scenarios. To address this gap, we propose DriveLMM-o1, a new dataset and benchmark specifically designed to advance step-wise visual reasoning for autonomous driving. Our benchmark features over 18k VQA examples in the training set and more than 4k in the test set, covering diverse questions on perception, prediction, and planning, each enriched with step-by-step reasoning to ensure logical inference in autonomous driving scenarios. We further introduce a large multimodal model that is fine-tuned on our reasoning dataset, demonstrating robust performance in complex driving scenarios. In addition, we benchmark various open-source and closed-source methods on our proposed dataset, systematically comparing their reasoning capabilities for autonomous driving tasks. Our model achieves a +7.49% gain in final answer accuracy, along with a 3.62% improvement in reasoning score over the previous best open-source model. Our framework, dataset, and model are available at https://github.com/ayesha-ishaq/DriveLMM-o1.
new_dataset
0.963472
2503.10629
Hashmat Shadab Malik
Hashmat Shadab Malik, Shahina Kunhimon, Muzammal Naseer, Fahad Shahbaz Khan, Salman Khan
Hierarchical Self-Supervised Adversarial Training for Robust Vision Models in Histopathology
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Adversarial attacks pose significant challenges for vision models in critical fields like healthcare, where reliability is essential. Although adversarial training has been well studied in natural images, its application to biomedical and microscopy data remains limited. Existing self-supervised adversarial training methods overlook the hierarchical structure of histopathology images, where patient-slide-patch relationships provide valuable discriminative signals. To address this, we propose Hierarchical Self-Supervised Adversarial Training (HSAT), which exploits these properties to craft adversarial examples using multi-level contrastive learning and integrate it into adversarial training for enhanced robustness. We evaluate HSAT on multiclass histopathology dataset OpenSRH and the results show that HSAT outperforms existing methods from both biomedical and natural image domains. HSAT enhances robustness, achieving an average gain of 54.31% in the white-box setting and reducing performance drops to 3-4% in the black-box setting, compared to 25-30% for the baseline. These results set a new benchmark for adversarial training in this domain, paving the way for more robust models. Our Code for training and evaluation is available at https://github.com/HashmatShadab/HSAT.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 17:59:47 GMT" } ]
2025-03-14T00:00:00
[ [ "Malik", "Hashmat Shadab", "" ], [ "Kunhimon", "Shahina", "" ], [ "Naseer", "Muzammal", "" ], [ "Khan", "Fahad Shahbaz", "" ], [ "Khan", "Salman", "" ] ]
TITLE: Hierarchical Self-Supervised Adversarial Training for Robust Vision Models in Histopathology ABSTRACT: Adversarial attacks pose significant challenges for vision models in critical fields like healthcare, where reliability is essential. Although adversarial training has been well studied in natural images, its application to biomedical and microscopy data remains limited. Existing self-supervised adversarial training methods overlook the hierarchical structure of histopathology images, where patient-slide-patch relationships provide valuable discriminative signals. To address this, we propose Hierarchical Self-Supervised Adversarial Training (HSAT), which exploits these properties to craft adversarial examples using multi-level contrastive learning and integrate it into adversarial training for enhanced robustness. We evaluate HSAT on multiclass histopathology dataset OpenSRH and the results show that HSAT outperforms existing methods from both biomedical and natural image domains. HSAT enhances robustness, achieving an average gain of 54.31% in the white-box setting and reducing performance drops to 3-4% in the black-box setting, compared to 25-30% for the baseline. These results set a new benchmark for adversarial training in this domain, paving the way for more robust models. Our Code for training and evaluation is available at https://github.com/HashmatShadab/HSAT.
no_new_dataset
0.950503
2503.10632
Subhajit Maity
Subhajit Maity, Killian Hitsman, Xin Li, Aritra Dutta
Kolmogorov-Arnold Attention: Is Learnable Attention Better For Vision Transformers?
Preprint, Appendix included
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Kolmogorov-Arnold networks (KANs) are a remarkable innovation consisting of learnable activation functions with the potential to capture more complex relationships from data. Although KANs are useful in finding symbolic representations and continual learning of one-dimensional functions, their effectiveness in diverse machine learning (ML) tasks, such as vision, remains questionable. Presently, KANs are deployed by replacing multilayer perceptrons (MLPs) in deep network architectures, including advanced architectures such as vision Transformers (ViTs). In this paper, we are the first to design a general learnable Kolmogorov-Arnold Attention (KArAt) for vanilla ViTs that can operate on any choice of basis. However, the computing and memory costs of training them motivated us to propose a more modular version, and we designed particular learnable attention, called Fourier-KArAt. Fourier-KArAt and its variants either outperform their ViT counterparts or show comparable performance on CIFAR-10, CIFAR-100, and ImageNet-1K datasets. We dissect these architectures' performance and generalization capacity by analyzing their loss landscapes, weight distributions, optimizer path, attention visualization, and spectral behavior, and contrast them with vanilla ViTs. The goal of this paper is not to produce parameter- and compute-efficient attention, but to encourage the community to explore KANs in conjunction with more advanced architectures that require a careful understanding of learnable activations. Our open-source code and implementation details are available on: https://subhajitmaity.me/KArAt
[ { "version": "v1", "created": "Thu, 13 Mar 2025 17:59:52 GMT" } ]
2025-03-14T00:00:00
[ [ "Maity", "Subhajit", "" ], [ "Hitsman", "Killian", "" ], [ "Li", "Xin", "" ], [ "Dutta", "Aritra", "" ] ]
TITLE: Kolmogorov-Arnold Attention: Is Learnable Attention Better For Vision Transformers? ABSTRACT: Kolmogorov-Arnold networks (KANs) are a remarkable innovation consisting of learnable activation functions with the potential to capture more complex relationships from data. Although KANs are useful in finding symbolic representations and continual learning of one-dimensional functions, their effectiveness in diverse machine learning (ML) tasks, such as vision, remains questionable. Presently, KANs are deployed by replacing multilayer perceptrons (MLPs) in deep network architectures, including advanced architectures such as vision Transformers (ViTs). In this paper, we are the first to design a general learnable Kolmogorov-Arnold Attention (KArAt) for vanilla ViTs that can operate on any choice of basis. However, the computing and memory costs of training them motivated us to propose a more modular version, and we designed particular learnable attention, called Fourier-KArAt. Fourier-KArAt and its variants either outperform their ViT counterparts or show comparable performance on CIFAR-10, CIFAR-100, and ImageNet-1K datasets. We dissect these architectures' performance and generalization capacity by analyzing their loss landscapes, weight distributions, optimizer path, attention visualization, and spectral behavior, and contrast them with vanilla ViTs. The goal of this paper is not to produce parameter- and compute-efficient attention, but to encourage the community to explore KANs in conjunction with more advanced architectures that require a careful understanding of learnable activations. Our open-source code and implementation details are available on: https://subhajitmaity.me/KArAt
no_new_dataset
0.945248
2503.10633
Eliahu Horwitz
Eliahu Horwitz, Nitzan Kurer, Jonathan Kahana, Liel Amar, Yedid Hoshen
Charting and Navigating Hugging Face's Model Atlas
null
null
null
null
cs.LG cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As there are now millions of publicly available neural networks, searching and analyzing large model repositories becomes increasingly important. Navigating so many models requires an atlas, but as most models are poorly documented, charting such an atlas is challenging. To explore the hidden potential of model repositories, we chart a preliminary atlas representing the documented fraction of Hugging Face. It provides stunning visualizations of the model landscape and evolution. We demonstrate several applications of this atlas, including predicting model attributes (e.g., accuracy) and analyzing trends in computer vision models. However, as the current atlas remains incomplete, we propose a method for charting undocumented regions. Specifically, we identify high-confidence structural priors based on dominant real-world model training practices. Leveraging these priors, our approach enables accurate mapping of previously undocumented areas of the atlas. We publicly release our datasets, code, and interactive atlas.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 17:59:53 GMT" } ]
2025-03-14T00:00:00
[ [ "Horwitz", "Eliahu", "" ], [ "Kurer", "Nitzan", "" ], [ "Kahana", "Jonathan", "" ], [ "Amar", "Liel", "" ], [ "Hoshen", "Yedid", "" ] ]
TITLE: Charting and Navigating Hugging Face's Model Atlas ABSTRACT: As there are now millions of publicly available neural networks, searching and analyzing large model repositories becomes increasingly important. Navigating so many models requires an atlas, but as most models are poorly documented, charting such an atlas is challenging. To explore the hidden potential of model repositories, we chart a preliminary atlas representing the documented fraction of Hugging Face. It provides stunning visualizations of the model landscape and evolution. We demonstrate several applications of this atlas, including predicting model attributes (e.g., accuracy) and analyzing trends in computer vision models. However, as the current atlas remains incomplete, we propose a method for charting undocumented regions. Specifically, we identify high-confidence structural priors based on dominant real-world model training practices. Leveraging these priors, our approach enables accurate mapping of previously undocumented areas of the atlas. We publicly release our datasets, code, and interactive atlas.
new_dataset
0.953751
2503.10635
Zhiqiang Shen
Zhaoyi Li and Xiaohan Zhao and Dong-Dong Wu and Jiacheng Cui and Zhiqiang Shen
A Frustratingly Simple Yet Highly Effective Attack Baseline: Over 90% Success Rate Against the Strong Black-box Models of GPT-4.5/4o/o1
Code at: https://github.com/VILA-Lab/M-Attack
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Despite promising performance on open-source large vision-language models (LVLMs), transfer-based targeted attacks often fail against black-box commercial LVLMs. Analyzing failed adversarial perturbations reveals that the learned perturbations typically originate from a uniform distribution and lack clear semantic details, resulting in unintended responses. This critical absence of semantic information leads commercial LVLMs to either ignore the perturbation entirely or misinterpret its embedded semantics, thereby causing the attack to fail. To overcome these issues, we notice that identifying core semantic objects is a key objective for models trained with various datasets and methodologies. This insight motivates our approach that refines semantic clarity by encoding explicit semantic details within local regions, thus ensuring interoperability and capturing finer-grained features, and by concentrating modifications on semantically rich areas rather than applying them uniformly. To achieve this, we propose a simple yet highly effective solution: at each optimization step, the adversarial image is cropped randomly by a controlled aspect ratio and scale, resized, and then aligned with the target image in the embedding space. Experimental results confirm our hypothesis. Our adversarial examples crafted with local-aggregated perturbations focused on crucial regions exhibit surprisingly good transferability to commercial LVLMs, including GPT-4.5, GPT-4o, Gemini-2.0-flash, Claude-3.5-sonnet, Claude-3.7-sonnet, and even reasoning models like o1, Claude-3.7-thinking and Gemini-2.0-flash-thinking. Our approach achieves success rates exceeding 90% on GPT-4.5, 4o, and o1, significantly outperforming all prior state-of-the-art attack methods. Our optimized adversarial examples under different configurations and training code are available at https://github.com/VILA-Lab/M-Attack.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 17:59:55 GMT" } ]
2025-03-14T00:00:00
[ [ "Li", "Zhaoyi", "" ], [ "Zhao", "Xiaohan", "" ], [ "Wu", "Dong-Dong", "" ], [ "Cui", "Jiacheng", "" ], [ "Shen", "Zhiqiang", "" ] ]
TITLE: A Frustratingly Simple Yet Highly Effective Attack Baseline: Over 90% Success Rate Against the Strong Black-box Models of GPT-4.5/4o/o1 ABSTRACT: Despite promising performance on open-source large vision-language models (LVLMs), transfer-based targeted attacks often fail against black-box commercial LVLMs. Analyzing failed adversarial perturbations reveals that the learned perturbations typically originate from a uniform distribution and lack clear semantic details, resulting in unintended responses. This critical absence of semantic information leads commercial LVLMs to either ignore the perturbation entirely or misinterpret its embedded semantics, thereby causing the attack to fail. To overcome these issues, we notice that identifying core semantic objects is a key objective for models trained with various datasets and methodologies. This insight motivates our approach that refines semantic clarity by encoding explicit semantic details within local regions, thus ensuring interoperability and capturing finer-grained features, and by concentrating modifications on semantically rich areas rather than applying them uniformly. To achieve this, we propose a simple yet highly effective solution: at each optimization step, the adversarial image is cropped randomly by a controlled aspect ratio and scale, resized, and then aligned with the target image in the embedding space. Experimental results confirm our hypothesis. Our adversarial examples crafted with local-aggregated perturbations focused on crucial regions exhibit surprisingly good transferability to commercial LVLMs, including GPT-4.5, GPT-4o, Gemini-2.0-flash, Claude-3.5-sonnet, Claude-3.7-sonnet, and even reasoning models like o1, Claude-3.7-thinking and Gemini-2.0-flash-thinking. Our approach achieves success rates exceeding 90% on GPT-4.5, 4o, and o1, significantly outperforming all prior state-of-the-art attack methods. Our optimized adversarial examples under different configurations and training code are available at https://github.com/VILA-Lab/M-Attack.
no_new_dataset
0.947866
2503.10638
Xiaoming Zhao
Xiaoming Zhao, Alexander G. Schwing
Studying Classifier(-Free) Guidance From a Classifier-Centric Perspective
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Classifier-free guidance has become a staple for conditional generation with denoising diffusion models. However, a comprehensive understanding of classifier-free guidance is still missing. In this work, we carry out an empirical study to provide a fresh perspective on classifier-free guidance. Concretely, instead of solely focusing on classifier-free guidance, we trace back to the root, i.e., classifier guidance, pinpoint the key assumption for the derivation, and conduct a systematic study to understand the role of the classifier. We find that both classifier guidance and classifier-free guidance achieve conditional generation by pushing the denoising diffusion trajectories away from decision boundaries, i.e., areas where conditional information is usually entangled and is hard to learn. Based on this classifier-centric understanding, we propose a generic postprocessing step built upon flow-matching to shrink the gap between the learned distribution for a pre-trained denoising diffusion model and the real data distribution, majorly around the decision boundaries. Experiments on various datasets verify the effectiveness of the proposed approach.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 17:59:59 GMT" } ]
2025-03-14T00:00:00
[ [ "Zhao", "Xiaoming", "" ], [ "Schwing", "Alexander G.", "" ] ]
TITLE: Studying Classifier(-Free) Guidance From a Classifier-Centric Perspective ABSTRACT: Classifier-free guidance has become a staple for conditional generation with denoising diffusion models. However, a comprehensive understanding of classifier-free guidance is still missing. In this work, we carry out an empirical study to provide a fresh perspective on classifier-free guidance. Concretely, instead of solely focusing on classifier-free guidance, we trace back to the root, i.e., classifier guidance, pinpoint the key assumption for the derivation, and conduct a systematic study to understand the role of the classifier. We find that both classifier guidance and classifier-free guidance achieve conditional generation by pushing the denoising diffusion trajectories away from decision boundaries, i.e., areas where conditional information is usually entangled and is hard to learn. Based on this classifier-centric understanding, we propose a generic postprocessing step built upon flow-matching to shrink the gap between the learned distribution for a pre-trained denoising diffusion model and the real data distribution, majorly around the decision boundaries. Experiments on various datasets verify the effectiveness of the proposed approach.
no_new_dataset
0.949902
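For context on the record above: the classifier-free guidance combination that the abstract analyzes is conventionally written as follows (the standard formulation from the guidance literature, not an equation taken from this paper), where $w$ is the guidance scale and $\varnothing$ denotes the null conditioning input:

$$\tilde{\epsilon}_\theta(x_t, c) = \epsilon_\theta(x_t, \varnothing) + w\,\big(\epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing)\big)$$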
2503.10639
Rongyao Fang
Rongyao Fang, Chengqi Duan, Kun Wang, Linjiang Huang, Hao Li, Shilin Yan, Hao Tian, Xingyu Zeng, Rui Zhao, Jifeng Dai, Xihui Liu, Hongsheng Li
GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing
Dataset and models are released in https://github.com/rongyaofang/GoT
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Current image generation and editing methods primarily process textual prompts as direct inputs without reasoning about visual composition and explicit operations. We present Generation Chain-of-Thought (GoT), a novel paradigm that enables generation and editing through an explicit language reasoning process before outputting images. This approach transforms conventional text-to-image generation and editing into a reasoning-guided framework that analyzes semantic relationships and spatial arrangements. We define the formulation of GoT and construct large-scale GoT datasets containing over 9M samples with detailed reasoning chains capturing semantic-spatial relationships. To leverage the advantages of GoT, we implement a unified framework that integrates Qwen2.5-VL for reasoning chain generation with an end-to-end diffusion model enhanced by our novel Semantic-Spatial Guidance Module. Experiments show our GoT framework achieves excellent performance on both generation and editing tasks, with significant improvements over baselines. Additionally, our approach enables interactive visual generation, allowing users to explicitly modify reasoning steps for precise image adjustments. GoT pioneers a new direction for reasoning-driven visual generation and editing, producing images that better align with human intent. To facilitate future research, we make our datasets, code, and pretrained models publicly available at https://github.com/rongyaofang/GoT.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 17:59:59 GMT" } ]
2025-03-14T00:00:00
[ [ "Fang", "Rongyao", "" ], [ "Duan", "Chengqi", "" ], [ "Wang", "Kun", "" ], [ "Huang", "Linjiang", "" ], [ "Li", "Hao", "" ], [ "Yan", "Shilin", "" ], [ "Tian", "Hao", "" ], [ "Zeng", "Xingyu", "" ], [ "Zhao", "Rui", "" ], [ "Dai", "Jifeng", "" ], [ "Liu", "Xihui", "" ], [ "Li", "Hongsheng", "" ] ]
TITLE: GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing ABSTRACT: Current image generation and editing methods primarily process textual prompts as direct inputs without reasoning about visual composition and explicit operations. We present Generation Chain-of-Thought (GoT), a novel paradigm that enables generation and editing through an explicit language reasoning process before outputting images. This approach transforms conventional text-to-image generation and editing into a reasoning-guided framework that analyzes semantic relationships and spatial arrangements. We define the formulation of GoT and construct large-scale GoT datasets containing over 9M samples with detailed reasoning chains capturing semantic-spatial relationships. To leverage the advantages of GoT, we implement a unified framework that integrates Qwen2.5-VL for reasoning chain generation with an end-to-end diffusion model enhanced by our novel Semantic-Spatial Guidance Module. Experiments show our GoT framework achieves excellent performance on both generation and editing tasks, with significant improvements over baselines. Additionally, our approach enables interactive visual generation, allowing users to explicitly modify reasoning steps for precise image adjustments. GoT pioneers a new direction for reasoning-driven visual generation and editing, producing images that better align with human intent. To facilitate future research, we make our datasets, code, and pretrained models publicly available at https://github.com/rongyaofang/GoT.
new_dataset
0.662387
2202.04348
Yunli Wang
Siguang Huang, Yunli Wang, Lili Mou, Huayue Zhang, Han Zhu, Chuan Yu, Bo Zheng
MBCT: Tree-Based Feature-Aware Binning for Individual Uncertainty Calibration
WWW 2022. The new version fixed an error in Eq13
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most machine learning classifiers are only concerned with classification accuracy, while certain applications (such as medical diagnosis, meteorological forecasting, and computational advertising) require the model to predict the true probability, known as a calibrated estimate. In previous work, researchers have developed several calibration methods to post-process the outputs of a predictor to obtain calibrated values, such as binning and scaling methods. Compared with scaling, binning methods are shown to have distribution-free theoretical guarantees, which motivates us to prefer binning methods for calibration. However, we notice that existing binning methods have several drawbacks: (a) the binning scheme only considers the original prediction values, thus limiting the calibration performance; and (b) the binning approach is non-individual, mapping multiple samples in a bin to the same value, and thus is not suitable for order-sensitive applications. In this paper, we propose a feature-aware binning framework, called Multiple Boosting Calibration Trees (MBCT), along with a multi-view calibration loss to tackle the above issues. Our MBCT optimizes the binning scheme by the tree structures of features, and adopts a linear function in a tree node to achieve individual calibration. Our MBCT is non-monotonic, and has the potential to improve order accuracy, due to its learnable binning scheme and the individual calibration. We conduct comprehensive experiments on three datasets in different fields. Results show that our method outperforms all competing models in terms of both calibration error and order accuracy. We also conduct simulation experiments, justifying that the proposed multi-view calibration loss is a better metric in modeling calibration error.
[ { "version": "v1", "created": "Wed, 9 Feb 2022 08:59:16 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 08:15:57 GMT" } ]
2025-03-13T00:00:00
[ [ "Huang", "Siguang", "" ], [ "Wang", "Yunli", "" ], [ "Mou", "Lili", "" ], [ "Zhang", "Huayue", "" ], [ "Zhu", "Han", "" ], [ "Yu", "Chuan", "" ], [ "Zheng", "Bo", "" ] ]
TITLE: MBCT: Tree-Based Feature-Aware Binning for Individual Uncertainty Calibration ABSTRACT: Most machine learning classifiers are only concerned with classification accuracy, while certain applications (such as medical diagnosis, meteorological forecasting, and computational advertising) require the model to predict the true probability, known as a calibrated estimate. In previous work, researchers have developed several calibration methods to post-process the outputs of a predictor to obtain calibrated values, such as binning and scaling methods. Compared with scaling, binning methods are shown to have distribution-free theoretical guarantees, which motivates us to prefer binning methods for calibration. However, we notice that existing binning methods have several drawbacks: (a) the binning scheme only considers the original prediction values, thus limiting the calibration performance; and (b) the binning approach is non-individual, mapping multiple samples in a bin to the same value, and thus is not suitable for order-sensitive applications. In this paper, we propose a feature-aware binning framework, called Multiple Boosting Calibration Trees (MBCT), along with a multi-view calibration loss to tackle the above issues. Our MBCT optimizes the binning scheme by the tree structures of features, and adopts a linear function in a tree node to achieve individual calibration. Our MBCT is non-monotonic, and has the potential to improve order accuracy, due to its learnable binning scheme and the individual calibration. We conduct comprehensive experiments on three datasets in different fields. Results show that our method outperforms all competing models in terms of both calibration error and order accuracy. We also conduct simulation experiments, justifying that the proposed multi-view calibration loss is a better metric in modeling calibration error.
no_new_dataset
0.943556
2208.11636
Julius Gonsior
Julius Gonsior, Maik Thiele, Wolfgang Lehner
ImitAL: Learned Active Learning Strategy on Synthetic Data
arXiv admin note: text overlap with arXiv:2108.07670
null
10.1007/978-3-031-18840-4_4
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Active Learning (AL) is a well-known standard method for efficiently obtaining annotated data by first labeling the samples that contain the most information based on a query strategy. In the past, a large variety of such query strategies has been proposed, with each generation of new strategies increasing the runtime and adding more complexity. However, to the best of our knowledge, none of these strategies excels consistently over a large number of datasets from different application domains. Basically, most of the existing AL strategies are a combination of the two simple heuristics informativeness and representativeness, and the big differences lie in the combination of the often conflicting heuristics. Within this paper, we propose ImitAL, a domain-independent novel query strategy, which encodes AL as a learning-to-rank problem and learns an optimal combination between both heuristics. We train ImitAL on large-scale simulated AL runs on purely synthetic datasets. To show that ImitAL was successfully trained, we perform an extensive evaluation comparing our strategy on 13 different datasets, from a wide range of domains, with 7 other query strategies.
[ { "version": "v1", "created": "Wed, 24 Aug 2022 16:17:53 GMT" } ]
2025-03-13T00:00:00
[ [ "Gonsior", "Julius", "" ], [ "Thiele", "Maik", "" ], [ "Lehner", "Wolfgang", "" ] ]
TITLE: ImitAL: Learned Active Learning Strategy on Synthetic Data ABSTRACT: Active Learning (AL) is a well-known standard method for efficiently obtaining annotated data by first labeling the samples that contain the most information based on a query strategy. In the past, a large variety of such query strategies has been proposed, with each generation of new strategies increasing the runtime and adding more complexity. However, to the best of our knowledge, none of these strategies excels consistently over a large number of datasets from different application domains. Basically, most of the existing AL strategies are a combination of the two simple heuristics informativeness and representativeness, and the big differences lie in the combination of the often conflicting heuristics. Within this paper, we propose ImitAL, a domain-independent novel query strategy, which encodes AL as a learning-to-rank problem and learns an optimal combination between both heuristics. We train ImitAL on large-scale simulated AL runs on purely synthetic datasets. To show that ImitAL was successfully trained, we perform an extensive evaluation comparing our strategy on 13 different datasets, from a wide range of domains, with 7 other query strategies.
no_new_dataset
0.948106
2210.03005
Julius Gonsior
Julius Gonsior, Christian Falkenberg, Silvio Magino, Anja Reusch, Maik Thiele, Wolfgang Lehner
To Softmax, or not to Softmax: that is the question when applying Active Learning for Transformer Models
null
null
10.1007/978-3-031-42914-9_9
null
cs.LG cs.AI cs.CL cs.DB
http://creativecommons.org/licenses/by/4.0/
Despite achieving state-of-the-art results in nearly all Natural Language Processing applications, fine-tuning Transformer-based language models still requires a significant amount of labeled data to work. A well known technique to reduce the amount of human effort in acquiring a labeled dataset is \textit{Active Learning} (AL): an iterative process in which only the minimal amount of samples is labeled. AL strategies require access to a quantified confidence measure of the model predictions. A common choice is the softmax activation function for the final layer. As the softmax function provides misleading probabilities, this paper compares eight alternatives on seven datasets. Our almost paradoxical finding is that most of the methods are too good at identifying the true most uncertain samples (outliers), and that labeling therefore exclusively outliers results in worse performance. As a heuristic we propose to systematically ignore samples, which results in improvements of various methods compared to the softmax function.
[ { "version": "v1", "created": "Thu, 6 Oct 2022 15:51:39 GMT" } ]
2025-03-13T00:00:00
[ [ "Gonsior", "Julius", "" ], [ "Falkenberg", "Christian", "" ], [ "Magino", "Silvio", "" ], [ "Reusch", "Anja", "" ], [ "Thiele", "Maik", "" ], [ "Lehner", "Wolfgang", "" ] ]
TITLE: To Softmax, or not to Softmax: that is the question when applying Active Learning for Transformer Models ABSTRACT: Despite achieving state-of-the-art results in nearly all Natural Language Processing applications, fine-tuning Transformer-based language models still requires a significant amount of labeled data to work. A well known technique to reduce the amount of human effort in acquiring a labeled dataset is \textit{Active Learning} (AL): an iterative process in which only the minimal amount of samples is labeled. AL strategies require access to a quantified confidence measure of the model predictions. A common choice is the softmax activation function for the final layer. As the softmax function provides misleading probabilities, this paper compares eight alternatives on seven datasets. Our almost paradoxical finding is that most of the methods are too good at identifying the true most uncertain samples (outliers), and that labeling therefore exclusively outliers results in worse performance. As a heuristic we propose to systematically ignore samples, which results in improvements of various methods compared to the softmax function.
no_new_dataset
0.949153
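To make the notion of a quantified confidence measure in the record above concrete, here is a generic sketch of two uncertainty scores an AL strategy might rank unlabeled samples by; these are common illustrative baselines, not the specific softmax alternatives compared in the paper.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def least_confidence(logits: np.ndarray) -> np.ndarray:
    # Higher score = the model is less sure about its top prediction.
    return 1.0 - softmax(logits).max(axis=-1)

def predictive_entropy(logits: np.ndarray) -> np.ndarray:
    # Entropy of the predictive distribution; higher = more uncertain.
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)
```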
2303.02278
Chun-Yin Huang
Chun-Yin Huang, Ruinan Jin, Can Zhao, Daguang Xu, and Xiaoxiao Li
Federated Learning on Virtual Heterogeneous Data with Local-global Distillation
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While Federated Learning (FL) is gaining popularity for training machine learning models in a decentralized fashion, numerous challenges persist, such as asynchronization, computational expenses, data heterogeneity, and gradient and membership privacy attacks. Lately, dataset distillation has emerged as a promising solution for addressing the aforementioned challenges by generating a compact synthetic dataset that preserves a model's training efficacy. However, we discover that using distilled local datasets can amplify the heterogeneity issue in FL. To address this, we propose Federated Learning on Virtual Heterogeneous Data with Local-Global Dataset Distillation (FedLGD), where we seamlessly integrate dataset distillation algorithms into the FL pipeline and train FL using a smaller synthetic dataset (referred to as virtual data). Specifically, to harmonize the domain shifts, we propose iterative distribution matching to inpaint global information to local virtual data and use federated gradient matching to distill global virtual data that serve as anchor points to rectify heterogeneous local training, without compromising data privacy. We experiment on both benchmark and real-world datasets that contain heterogeneous data from different sources, and further scale up to an FL scenario that contains a large number of clients with heterogeneous and class-imbalanced data. Our method outperforms state-of-the-art heterogeneous FL algorithms under various settings. Our code is available at https://github.com/ubc-tea/FedLGD.
[ { "version": "v1", "created": "Sat, 4 Mar 2023 00:35:29 GMT" }, { "version": "v2", "created": "Mon, 5 Jun 2023 18:43:26 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 01:01:17 GMT" } ]
2025-03-13T00:00:00
[ [ "Huang", "Chun-Yin", "" ], [ "Jin", "Ruinan", "" ], [ "Zhao", "Can", "" ], [ "Xu", "Daguang", "" ], [ "Li", "Xiaoxiao", "" ] ]
TITLE: Federated Learning on Virtual Heterogeneous Data with Local-global Distillation ABSTRACT: While Federated Learning (FL) is gaining popularity for training machine learning models in a decentralized fashion, numerous challenges persist, such as asynchronization, computational expenses, data heterogeneity, and gradient and membership privacy attacks. Lately, dataset distillation has emerged as a promising solution for addressing the aforementioned challenges by generating a compact synthetic dataset that preserves a model's training efficacy. However, we discover that using distilled local datasets can amplify the heterogeneity issue in FL. To address this, we propose Federated Learning on Virtual Heterogeneous Data with Local-Global Dataset Distillation (FedLGD), where we seamlessly integrate dataset distillation algorithms into the FL pipeline and train FL using a smaller synthetic dataset (referred to as virtual data). Specifically, to harmonize the domain shifts, we propose iterative distribution matching to inpaint global information to local virtual data and use federated gradient matching to distill global virtual data that serve as anchor points to rectify heterogeneous local training, without compromising data privacy. We experiment on both benchmark and real-world datasets that contain heterogeneous data from different sources, and further scale up to an FL scenario that contains a large number of clients with heterogeneous and class-imbalanced data. Our method outperforms state-of-the-art heterogeneous FL algorithms under various settings. Our code is available at https://github.com/ubc-tea/FedLGD.
no_new_dataset
0.946941
2308.04371
Yifan Zhang
Yifan Zhang, Jingqin Yang, Yang Yuan, Andrew Chi-Chih Yao
Cumulative Reasoning with Large Language Models
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advancements in large language models (LLMs) have shown remarkable progress, yet their ability to solve complex problems remains limited. In this work, we introduce Cumulative Reasoning (CR), an approach that utilizes LLMs cumulatively and iteratively, mirroring human thought processes for problem-solving. CR decomposes tasks into smaller, manageable components and leverages previous propositions for effective composition, significantly enhancing problem-solving capabilities. We demonstrate CR's advantage through several complex reasoning tasks: it outperforms existing methods in logical inference tasks with up to a 9.3% improvement, achieving 98.04% accuracy on the curated FOLIO wiki dataset. In the Game of 24, it achieves 98% accuracy, marking a 24% improvement over the prior state-of-the-art. In solving MATH problems, CR achieves a 4.2% increase from previous methods and a 43% relative improvement in the most challenging level 5 problems. When incorporating a code environment with CR, we further harness LLMs' reasoning capabilities and outperform the Program of Thought (PoT) method by 38.8%. The code is available at https://github.com/iiis-ai/cumulative-reasoning.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 16:18:20 GMT" }, { "version": "v2", "created": "Wed, 9 Aug 2023 14:37:37 GMT" }, { "version": "v3", "created": "Thu, 10 Aug 2023 08:24:09 GMT" }, { "version": "v4", "created": "Fri, 25 Aug 2023 02:40:37 GMT" }, { "version": "v5", "created": "Sat, 2 Dec 2023 02:59:12 GMT" }, { "version": "v6", "created": "Tue, 2 Apr 2024 03:37:39 GMT" }, { "version": "v7", "created": "Wed, 12 Mar 2025 02:55:36 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhang", "Yifan", "" ], [ "Yang", "Jingqin", "" ], [ "Yuan", "Yang", "" ], [ "Yao", "Andrew Chi-Chih", "" ] ]
TITLE: Cumulative Reasoning with Large Language Models ABSTRACT: Recent advancements in large language models (LLMs) have shown remarkable progress, yet their ability to solve complex problems remains limited. In this work, we introduce Cumulative Reasoning (CR), an approach that utilizes LLMs cumulatively and iteratively, mirroring human thought processes for problem-solving. CR decomposes tasks into smaller, manageable components and leverages previous propositions for effective composition, significantly enhancing problem-solving capabilities. We demonstrate CR's advantage through several complex reasoning tasks: it outperforms existing methods in logical inference tasks with up to a 9.3% improvement, achieving 98.04% accuracy on the curated FOLIO wiki dataset. In the Game of 24, it achieves 98% accuracy, marking a 24% improvement over the prior state-of-the-art. In solving MATH problems, CR achieves a 4.2% increase from previous methods and a 43% relative improvement in the most challenging level 5 problems. When incorporating a code environment with CR, we further harness LLMs' reasoning capabilities and outperform the Program of Thought (PoT) method by 38.8%. The code is available at https://github.com/iiis-ai/cumulative-reasoning.
new_dataset
0.954223
2309.16460
Qianyu Zhou
Shaocong Long, Qianyu Zhou, Chenhao Ying, Lizhuang Ma, Yuan Luo
Diverse Target and Contribution Scheduling for Domain Generalization
This work has been submitted to the IEEE for possible publication
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generalization under the distribution shift has been a great challenge in computer vision. The prevailing practice of directly employing the one-hot labels as the training targets in domain generalization~(DG) can lead to gradient conflicts, making it insufficient for capturing the intrinsic class characteristics and hard to increase the intra-class variation. Besides, existing methods in DG mostly overlook the distinct contributions of source (seen) domains, resulting in uneven learning from these domains. To address these issues, we first present a theoretical and empirical analysis of the existence of gradient conflicts in DG, unveiling the previously unexplored relationship between distribution shifts and gradient conflicts during the optimization process. In this paper, we present a novel perspective of DG from the empirical source domain's risk and propose a new paradigm for DG called Diverse Target and Contribution Scheduling (DTCS). DTCS comprises two innovative modules: Diverse Target Supervision (DTS) and Diverse Contribution Balance (DCB), with the aim of addressing the limitations associated with the common utilization of one-hot labels and equal contributions for source domains in DG. Specifically, DTS employs distinct soft labels as training targets to account for various feature distributions across domains and thereby mitigates the gradient conflicts, and DCB dynamically balances the contributions of source domains by ensuring a fair decline in losses of different source domains. Extensive experiments with analysis on four benchmark datasets show that the proposed method achieves a competitive performance in comparison with the state-of-the-art approaches, demonstrating the effectiveness and advantages of the proposed DTCS.
[ { "version": "v1", "created": "Thu, 28 Sep 2023 14:10:25 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 08:24:26 GMT" } ]
2025-03-13T00:00:00
[ [ "Long", "Shaocong", "" ], [ "Zhou", "Qianyu", "" ], [ "Ying", "Chenhao", "" ], [ "Ma", "Lizhuang", "" ], [ "Luo", "Yuan", "" ] ]
TITLE: Diverse Target and Contribution Scheduling for Domain Generalization ABSTRACT: Generalization under the distribution shift has been a great challenge in computer vision. The prevailing practice of directly employing the one-hot labels as the training targets in domain generalization~(DG) can lead to gradient conflicts, making it insufficient for capturing the intrinsic class characteristics and hard to increase the intra-class variation. Besides, existing methods in DG mostly overlook the distinct contributions of source (seen) domains, resulting in uneven learning from these domains. To address these issues, we first present a theoretical and empirical analysis of the existence of gradient conflicts in DG, unveiling the previously unexplored relationship between distribution shifts and gradient conflicts during the optimization process. In this paper, we present a novel perspective of DG from the empirical source domain's risk and propose a new paradigm for DG called Diverse Target and Contribution Scheduling (DTCS). DTCS comprises two innovative modules: Diverse Target Supervision (DTS) and Diverse Contribution Balance (DCB), with the aim of addressing the limitations associated with the common utilization of one-hot labels and equal contributions for source domains in DG. Specifically, DTS employs distinct soft labels as training targets to account for various feature distributions across domains and thereby mitigates the gradient conflicts, and DCB dynamically balances the contributions of source domains by ensuring a fair decline in losses of different source domains. Extensive experiments with analysis on four benchmark datasets show that the proposed method achieves a competitive performance in comparison with the state-of-the-art approaches, demonstrating the effectiveness and advantages of the proposed DTCS.
no_new_dataset
0.949342
2310.07259
Haoyu Zhang
Haoyu Zhang, Meng Liu, Yisen Feng, Yaowei Wang, Weili Guan, Liqiang Nie
Uncovering Hidden Connections: Iterative Search and Reasoning for Video-grounded Dialog
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In contrast to conventional visual question answering, video-grounded dialog necessitates a profound understanding of both dialog history and video content for accurate response generation. Despite commendable progress made by existing approaches, they still face the challenges of incrementally understanding complex dialog history and assimilating video information. In response to these challenges, we present an iterative search and reasoning framework, which consists of a textual encoder, a visual encoder, and a generator. Specifically, we devise a path search and aggregation strategy in the textual encoder, mining core cues from dialog history that are pivotal to understanding the posed questions. Concurrently, our visual encoder harnesses an iterative reasoning network to extract and emphasize critical visual markers from videos, enhancing the depth of visual comprehension. Finally, we utilize the pre-trained GPT-2 model as our answer generator to decode the mined hidden clues into coherent and contextualized answers. Extensive experiments on three public datasets demonstrate the effectiveness and generalizability of our proposed framework.
[ { "version": "v1", "created": "Wed, 11 Oct 2023 07:37:13 GMT" }, { "version": "v2", "created": "Wed, 22 May 2024 11:58:12 GMT" }, { "version": "v3", "created": "Mon, 18 Nov 2024 02:18:14 GMT" }, { "version": "v4", "created": "Wed, 12 Mar 2025 05:09:37 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhang", "Haoyu", "" ], [ "Liu", "Meng", "" ], [ "Feng", "Yisen", "" ], [ "Wang", "Yaowei", "" ], [ "Guan", "Weili", "" ], [ "Nie", "Liqiang", "" ] ]
TITLE: Uncovering Hidden Connections: Iterative Search and Reasoning for Video-grounded Dialog ABSTRACT: In contrast to conventional visual question answering, video-grounded dialog necessitates a profound understanding of both dialog history and video content for accurate response generation. Despite commendable progress made by existing approaches, they still face the challenges of incrementally understanding complex dialog history and assimilating video information. In response to these challenges, we present an iterative search and reasoning framework, which consists of a textual encoder, a visual encoder, and a generator. Specifically, we devise a path search and aggregation strategy in the textual encoder, mining core cues from dialog history that are pivotal to understanding the posed questions. Concurrently, our visual encoder harnesses an iterative reasoning network to extract and emphasize critical visual markers from videos, enhancing the depth of visual comprehension. Finally, we utilize the pre-trained GPT-2 model as our answer generator to decode the mined hidden clues into coherent and contextualized answers. Extensive experiments on three public datasets demonstrate the effectiveness and generalizability of our proposed framework.
no_new_dataset
0.933734
2310.14687
Yihan Cao
Yihan Cao, Shuyi Chen, Ryan Liu, Zhiruo Wang, Daniel Fried
API-Assisted Code Generation for Question Answering on Varied Table Structures
EMNLP 2023 camera ready, 13 pages, 11 figures
Proceedings of the Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2023, pages 14536-14548, Singapore
10.18653/v1/2023.emnlp-main.897
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
A persistent challenge to table question answering (TableQA) by generating executable programs has been adapting to varied table structures, typically requiring domain-specific logical forms. In response, this paper introduces a unified TableQA framework that: (1) provides a unified representation for structured tables as multi-index Pandas data frames, (2) uses Python as a powerful querying language, and (3) uses few-shot prompting to translate NL questions into Python programs, which are executable on Pandas data frames. Furthermore, to answer complex relational questions with extended program functionality and external knowledge, our framework allows customized APIs that Python programs can call. We experiment with four TableQA datasets that involve tables of different structures -- relational, multi-table, and hierarchical matrix shapes -- and achieve prominent improvements over past state-of-the-art systems. In ablation studies, we (1) show benefits from our multi-index representation and APIs over baselines that use only an LLM, and (2) demonstrate that our approach is modular and can incorporate additional APIs.
[ { "version": "v1", "created": "Mon, 23 Oct 2023 08:26:28 GMT" } ]
2025-03-13T00:00:00
[ [ "Cao", "Yihan", "" ], [ "Chen", "Shuyi", "" ], [ "Liu", "Ryan", "" ], [ "Wang", "Zhiruo", "" ], [ "Fried", "Daniel", "" ] ]
TITLE: API-Assisted Code Generation for Question Answering on Varied Table Structures ABSTRACT: A persistent challenge to table question answering (TableQA) by generating executable programs has been adapting to varied table structures, typically requiring domain-specific logical forms. In response, this paper introduces a unified TableQA framework that: (1) provides a unified representation for structured tables as multi-index Pandas data frames, (2) uses Python as a powerful querying language, and (3) uses few-shot prompting to translate NL questions into Python programs, which are executable on Pandas data frames. Furthermore, to answer complex relational questions with extended program functionality and external knowledge, our framework allows customized APIs that Python programs can call. We experiment with four TableQA datasets that involve tables of different structures -- relational, multi-table, and hierarchical matrix shapes -- and achieve prominent improvements over past state-of-the-art systems. In ablation studies, we (1) show benefits from our multi-index representation and APIs over baselines that use only an LLM, and (2) demonstrate that our approach is modular and can incorporate additional APIs.
no_new_dataset
0.91782
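A small sketch of the kind of representation and generated program described in the record above: a hierarchical table stored as a multi-index Pandas data frame, queried by a short Python snippet of the sort an LLM might produce. The table contents, column names, and question are made up for illustration.

```python
import pandas as pd

# A small hierarchical (matrix-shaped) table as a multi-index DataFrame.
index = pd.MultiIndex.from_tuples(
    [("2022", "Q1"), ("2022", "Q2"), ("2023", "Q1"), ("2023", "Q2")],
    names=["year", "quarter"],
)
columns = pd.MultiIndex.from_tuples(
    [("revenue", "US"), ("revenue", "EU")], names=["metric", "region"]
)
df = pd.DataFrame([[10, 7], [12, 8], [14, 9], [15, 11]], index=index, columns=columns)

# Question: "What was total EU revenue in 2023?"
# A program of the kind generated against this representation:
answer = df.loc["2023", ("revenue", "EU")].sum()
print(answer)  # 20
```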
2312.04539
Osman \"Ulger
Osman \"Ulger, Maksymilian Kulicki, Yuki Asano, Martin R. Oswald
Auto-Vocabulary Semantic Segmentation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Open-Vocabulary Segmentation (OVS) methods are capable of performing semantic segmentation without relying on a fixed vocabulary, and in some cases, without training or fine-tuning. However, OVS methods typically require a human in the loop to specify the vocabulary based on the task or dataset at hand. In this paper, we introduce Auto-Vocabulary Semantic Segmentation (AVS), advancing open-ended image understanding by eliminating the necessity to predefine object categories for segmentation. Our approach, AutoSeg, presents a framework that autonomously identifies relevant class names using semantically enhanced BLIP embeddings and segments them afterwards. Given that open-ended object category predictions cannot be directly compared with a fixed ground truth, we develop a Large Language Model-based Auto-Vocabulary Evaluator (LAVE) to efficiently evaluate the automatically generated classes and their corresponding segments. With AVS, our method sets new benchmarks on datasets PASCAL VOC, Context, ADE20K, and Cityscapes, while showing competitive performance to OVS methods that require specified class names.
[ { "version": "v1", "created": "Thu, 7 Dec 2023 18:55:52 GMT" }, { "version": "v2", "created": "Wed, 20 Mar 2024 16:11:22 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 12:39:35 GMT" } ]
2025-03-13T00:00:00
[ [ "Ülger", "Osman", "" ], [ "Kulicki", "Maksymilian", "" ], [ "Asano", "Yuki", "" ], [ "Oswald", "Martin R.", "" ] ]
TITLE: Auto-Vocabulary Semantic Segmentation ABSTRACT: Open-Vocabulary Segmentation (OVS) methods are capable of performing semantic segmentation without relying on a fixed vocabulary, and in some cases, without training or fine-tuning. However, OVS methods typically require a human in the loop to specify the vocabulary based on the task or dataset at hand. In this paper, we introduce Auto-Vocabulary Semantic Segmentation (AVS), advancing open-ended image understanding by eliminating the necessity to predefine object categories for segmentation. Our approach, AutoSeg, presents a framework that autonomously identifies relevant class names using semantically enhanced BLIP embeddings and segments them afterwards. Given that open-ended object category predictions cannot be directly compared with a fixed ground truth, we develop a Large Language Model-based Auto-Vocabulary Evaluator (LAVE) to efficiently evaluate the automatically generated classes and their corresponding segments. With AVS, our method sets new benchmarks on datasets PASCAL VOC, Context, ADE20K, and Cityscapes, while showing competitive performance to OVS methods that require specified class names.
no_new_dataset
0.949153
2402.03166
José Morano
José Morano and Guilherme Aresta and Hrvoje Bogunović
RRWNet: Recursive Refinement Network for effective retinal artery/vein segmentation and classification
null
Expert Systems with Applications, 2024
10.1016/j.eswa.2024.124970
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The caliber and configuration of retinal blood vessels serve as important biomarkers for various diseases and medical conditions. A thorough analysis of the retinal vasculature requires the segmentation of the blood vessels and their classification into arteries and veins, typically performed on color fundus images obtained by retinography. However, manually performing these tasks is labor-intensive and prone to human error. While several automated methods have been proposed to address this task, the current state of the art faces challenges due to manifest classification errors affecting the topological consistency of segmentation maps. In this work, we introduce RRWNet, a novel end-to-end deep learning framework that addresses this limitation. The framework consists of a fully convolutional neural network that recursively refines semantic segmentation maps, correcting manifest classification errors and thus improving topological consistency. In particular, RRWNet is composed of two specialized subnetworks: a Base subnetwork that generates base segmentation maps from the input images, and a Recursive Refinement subnetwork that iteratively and recursively improves these maps. Evaluation on three different public datasets demonstrates the state-of-the-art performance of the proposed method, yielding more topologically consistent segmentation maps with fewer manifest classification errors than existing approaches. In addition, the Recursive Refinement module within RRWNet proves effective in post-processing segmentation maps from other methods, further demonstrating its potential. The model code, weights, and predictions will be publicly available at https://github.com/j-morano/rrwnet.
[ { "version": "v1", "created": "Mon, 5 Feb 2024 16:35:29 GMT" }, { "version": "v2", "created": "Wed, 13 Mar 2024 12:52:26 GMT" }, { "version": "v3", "created": "Wed, 3 Apr 2024 07:10:22 GMT" }, { "version": "v4", "created": "Thu, 8 Aug 2024 13:32:21 GMT" }, { "version": "v5", "created": "Wed, 12 Mar 2025 17:04:36 GMT" } ]
2025-03-13T00:00:00
[ [ "Morano", "José", "" ], [ "Aresta", "Guilherme", "" ], [ "Bogunović", "Hrvoje", "" ] ]
TITLE: RRWNet: Recursive Refinement Network for effective retinal artery/vein segmentation and classification ABSTRACT: The caliber and configuration of retinal blood vessels serve as important biomarkers for various diseases and medical conditions. A thorough analysis of the retinal vasculature requires the segmentation of the blood vessels and their classification into arteries and veins, typically performed on color fundus images obtained by retinography. However, manually performing these tasks is labor-intensive and prone to human error. While several automated methods have been proposed to address this task, the current state of the art faces challenges due to manifest classification errors affecting the topological consistency of segmentation maps. In this work, we introduce RRWNet, a novel end-to-end deep learning framework that addresses this limitation. The framework consists of a fully convolutional neural network that recursively refines semantic segmentation maps, correcting manifest classification errors and thus improving topological consistency. In particular, RRWNet is composed of two specialized subnetworks: a Base subnetwork that generates base segmentation maps from the input images, and a Recursive Refinement subnetwork that iteratively and recursively improves these maps. Evaluation on three different public datasets demonstrates the state-of-the-art performance of the proposed method, yielding more topologically consistent segmentation maps with fewer manifest classification errors than existing approaches. In addition, the Recursive Refinement module within RRWNet proves effective in post-processing segmentation maps from other methods, further demonstrating its potential. The model code, weights, and predictions will be publicly available at https://github.com/j-morano/rrwnet.
no_new_dataset
0.949902
2402.03848
David Peer
David Peer, Philemon Schöpf, Volckmar Nebendahl, Alexander Rietzler, Sebastian Stabinger
ANLS* -- A Universal Document Processing Metric for Generative Large Language Models
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Traditionally, discriminative models have been the predominant choice for tasks like document classification and information extraction. These models make predictions that fall into a limited number of predefined classes, facilitating a binary true or false evaluation and enabling the direct calculation of metrics such as the F1 score. However, recent advancements in generative large language models (GLLMs) have prompted a shift in the field due to their enhanced zero-shot capabilities, which eliminate the need for a downstream dataset and computationally expensive fine-tuning. However, evaluating GLLMs presents a challenge as the binary true or false evaluation used for discriminative models is not applicable to the predictions made by GLLMs. This paper introduces a new metric for generative models called ANLS* for evaluating a wide variety of tasks, including information extraction and classification tasks. The ANLS* metric extends existing ANLS metrics as a drop-in-replacement and is still compatible with previously reported ANLS scores. An evaluation of 7 different datasets, and more than 20 different GLLMs together with 3 different prompting methods using the ANLS* metric is also provided, demonstrating the importance of the proposed metric. We also benchmark a novel approach to generate prompts for documents, called SFT, against other prompting techniques such as LATIN. In almost all cases, SFT outperforms other techniques and improves the state-of-the-art, sometimes by as much as $10$ percentage points. Sources are available at https://github.com/deepopinion/anls_star_metric
[ { "version": "v1", "created": "Tue, 6 Feb 2024 09:50:08 GMT" }, { "version": "v2", "created": "Tue, 27 Feb 2024 13:14:28 GMT" }, { "version": "v3", "created": "Thu, 21 Mar 2024 05:58:10 GMT" }, { "version": "v4", "created": "Tue, 16 Apr 2024 09:14:46 GMT" }, { "version": "v5", "created": "Sat, 25 May 2024 06:31:45 GMT" }, { "version": "v6", "created": "Fri, 28 Jun 2024 06:49:39 GMT" }, { "version": "v7", "created": "Tue, 27 Aug 2024 08:33:29 GMT" }, { "version": "v8", "created": "Mon, 3 Mar 2025 12:50:31 GMT" }, { "version": "v9", "created": "Wed, 12 Mar 2025 08:02:54 GMT" } ]
2025-03-13T00:00:00
[ [ "Peer", "David", "" ], [ "Schöpf", "Philemon", "" ], [ "Nebendahl", "Volckmar", "" ], [ "Rietzler", "Alexander", "" ], [ "Stabinger", "Sebastian", "" ] ]
TITLE: ANLS* -- A Universal Document Processing Metric for Generative Large Language Models ABSTRACT: Traditionally, discriminative models have been the predominant choice for tasks like document classification and information extraction. These models make predictions that fall into a limited number of predefined classes, facilitating a binary true or false evaluation and enabling the direct calculation of metrics such as the F1 score. However, recent advancements in generative large language models (GLLMs) have prompted a shift in the field due to their enhanced zero-shot capabilities, which eliminate the need for a downstream dataset and computationally expensive fine-tuning. However, evaluating GLLMs presents a challenge as the binary true or false evaluation used for discriminative models is not applicable to the predictions made by GLLMs. This paper introduces a new metric for generative models called ANLS* for evaluating a wide variety of tasks, including information extraction and classification tasks. The ANLS* metric extends existing ANLS metrics as a drop-in-replacement and is still compatible with previously reported ANLS scores. An evaluation of 7 different datasets, and more than 20 different GLLMs together with 3 different prompting methods using the ANLS* metric is also provided, demonstrating the importance of the proposed metric. We also benchmark a novel approach to generate prompts for documents, called SFT, against other prompting techniques such as LATIN. In almost all cases, SFT outperforms other techniques and improves the state-of-the-art, sometimes by as much as $10$ percentage points. Sources are available at https://github.com/deepopinion/anls_star_metric
no_new_dataset
0.948489
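For reference, a sketch of the classic ANLS computation (average normalized Levenshtein similarity with the usual 0.5 threshold) that the record above extends; the ANLS*-specific handling of lists and structured outputs is not reproduced here.

```python
def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def anls(prediction: str, target: str, threshold: float = 0.5) -> float:
    """Normalized Levenshtein similarity, zeroed below the usual 0.5 threshold."""
    if not prediction and not target:
        return 1.0
    nls = 1 - levenshtein(prediction, target) / max(len(prediction), len(target))
    return nls if nls >= threshold else 0.0

def average_anls(predictions, targets):
    return sum(anls(p, t) for p, t in zip(predictions, targets)) / len(predictions)

print(average_anls(["12 May 2021", "ACME Corp"], ["12 May 2021", "ACME Corporation"]))
```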
2402.15131
Guanming Xiong
Guanming Xiong, Junwei Bao, Wen Zhao
Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models
This work has been accepted by the ACL 2024 main conference. Code and data are available at: https://github.com/JimXiongGM/Interactive-KBQA
null
10.18653/v1/2024.acl-long.569
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
This study explores the realm of knowledge base question answering (KBQA). KBQA is considered a challenging task, particularly in parsing intricate questions into executable logical forms. Traditional semantic parsing (SP)-based methods require extensive data annotations, which result in significant costs. Recently, the advent of few-shot in-context learning, powered by large language models (LLMs), has showcased promising capabilities. However, fully leveraging LLMs to parse questions into logical forms in low-resource scenarios poses a substantial challenge. To tackle these hurdles, we introduce Interactive-KBQA, a framework designed to generate logical forms through direct interaction with knowledge bases (KBs). Within this framework, we have developed three generic APIs for KB interaction. For each category of complex question, we devised exemplars to guide LLMs through the reasoning processes. Our method achieves competitive results on the WebQuestionsSP, ComplexWebQuestions, KQA Pro, and MetaQA datasets with a minimal number of examples (shots). Importantly, our approach supports manual intervention, allowing for the iterative refinement of LLM outputs. By annotating a dataset with step-wise reasoning processes, we showcase our model's adaptability and highlight its potential for contributing significant enhancements to the field.
[ { "version": "v1", "created": "Fri, 23 Feb 2024 06:32:18 GMT" }, { "version": "v2", "created": "Fri, 19 Jul 2024 06:14:20 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 06:15:34 GMT" } ]
2025-03-13T00:00:00
[ [ "Xiong", "Guanming", "" ], [ "Bao", "Junwei", "" ], [ "Zhao", "Wen", "" ] ]
TITLE: Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models ABSTRACT: This study explores the realm of knowledge base question answering (KBQA). KBQA is considered a challenging task, particularly in parsing intricate questions into executable logical forms. Traditional semantic parsing (SP)-based methods require extensive data annotations, which result in significant costs. Recently, the advent of few-shot in-context learning, powered by large language models (LLMs), has showcased promising capabilities. However, fully leveraging LLMs to parse questions into logical forms in low-resource scenarios poses a substantial challenge. To tackle these hurdles, we introduce Interactive-KBQA, a framework designed to generate logical forms through direct interaction with knowledge bases (KBs). Within this framework, we have developed three generic APIs for KB interaction. For each category of complex question, we devised exemplars to guide LLMs through the reasoning processes. Our method achieves competitive results on the WebQuestionsSP, ComplexWebQuestions, KQA Pro, and MetaQA datasets with a minimal number of examples (shots). Importantly, our approach supports manual intervention, allowing for the iterative refinement of LLM outputs. By annotating a dataset with step-wise reasoning processes, we showcase our model's adaptability and highlight its potential for contributing significant enhancements to the field.
no_new_dataset
0.941007
2402.16424
Qingqing Long
Yuqi Li, Qingqing Long, Yihang Zhou, Meng Xiao, Ran Zhang, Zhiyuan Ning, Zhihong Zhu, Xuezhi Wang, Yuanchun Zhou
COMAE: COMprehensive Attribute Exploration for Zero-shot Hashing
18 pages, 7 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Zero-shot hashing (ZSH) has shown excellent success owing to its efficiency and generalization in large-scale retrieval scenarios. While considerable success has been achieved, there still exist urgent limitations. Existing works ignore the locality relationships of representations and attributes, which have effective transferability between seeable classes and unseeable classes. Also, the continuous-value attributes are not fully harnessed. In response, we conduct a COMprehensive Attribute Exploration for ZSH, named COMAE, which depicts the relationships from seen classes to unseen ones through three meticulously designed explorations, i.e., point-wise, pair-wise and class-wise consistency constraints. By regressing attributes from the proposed attribute prototype network, COMAE learns the local features that are relevant to the visual attributes. Then COMAE utilizes contrastive learning to comprehensively depict the context of attributes, rather than instance-independent optimization. Finally, the class-wise constraint is designed to cohesively learn the hash code, image representation, and visual attributes more effectively. Experimental results on the popular ZSH datasets demonstrate that COMAE outperforms state-of-the-art hashing techniques, especially in scenarios with a larger number of unseen label classes.
[ { "version": "v1", "created": "Mon, 26 Feb 2024 09:22:57 GMT" }, { "version": "v2", "created": "Wed, 17 Jul 2024 08:23:33 GMT" }, { "version": "v3", "created": "Sun, 21 Jul 2024 12:37:41 GMT" }, { "version": "v4", "created": "Wed, 12 Mar 2025 14:29:30 GMT" } ]
2025-03-13T00:00:00
[ [ "Li", "Yuqi", "" ], [ "Long", "Qingqing", "" ], [ "Zhou", "Yihang", "" ], [ "Xiao", "Meng", "" ], [ "Zhang", "Ran", "" ], [ "Ning", "Zhiyuan", "" ], [ "Zhu", "Zhihong", "" ], [ "Wang", "Xuezhi", "" ], [ "Zhou", "Yuanchun", "" ] ]
TITLE: COMAE: COMprehensive Attribute Exploration for Zero-shot Hashing ABSTRACT: Zero-shot hashing (ZSH) has shown excellent success owing to its efficiency and generalization in large-scale retrieval scenarios. While considerable success has been achieved, there still exist urgent limitations. Existing works ignore the locality relationships of representations and attributes, which have effective transferability between seeable classes and unseeable classes. Also, the continuous-value attributes are not fully harnessed. In response, we conduct a COMprehensive Attribute Exploration for ZSH, named COMAE, which depicts the relationships from seen classes to unseen ones through three meticulously designed explorations, i.e., point-wise, pair-wise and class-wise consistency constraints. By regressing attributes from the proposed attribute prototype network, COMAE learns the local features that are relevant to the visual attributes. Then COMAE utilizes contrastive learning to comprehensively depict the context of attributes, rather than instance-independent optimization. Finally, the class-wise constraint is designed to cohesively learn the hash code, image representation, and visual attributes more effectively. Experimental results on the popular ZSH datasets demonstrate that COMAE outperforms state-of-the-art hashing techniques, especially in scenarios with a larger number of unseen label classes.
no_new_dataset
0.945197
2403.06681
Jintao Huang
Jintao Huang, Yiu-Ming Cheung, and Chi-Man Vong
PLOOD: Partial Label Learning with Out-of-distribution Objects
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing Partial Label Learning (PLL) methods posit that training and test data adhere to the same distribution, a premise that frequently does not hold in practical applications where Out-of-Distribution (OOD) objects are present. We introduce the OODPLL paradigm to tackle this significant yet underexplored issue, and our newly proposed PLOOD framework enables PLL to tackle OOD objects through Positive-Negative Sample Augmented (PNSA) feature learning and Partial Energy (PE)-based label refinement. The PNSA module enhances feature discrimination and OOD recognition by simulating in- and out-of-distribution instances through structured positive and negative sample augmentation, in contrast to conventional PLL methods, which struggle to distinguish OOD samples. The PE scoring mechanism combines label confidence with energy-based uncertainty estimation, thereby reducing the impact of imprecise supervision and effectively achieving label disambiguation. Experimental results on CIFAR-10 and CIFAR-100, alongside various OOD datasets, demonstrate that conventional PLL methods exhibit substantial degradation in OOD scenarios, underscoring the necessity of incorporating OOD considerations in PLL approaches. Ablation studies show that PNSA feature learning and PE-based label refinement are necessary for PLOOD to work, offering a robust solution for open-set PLL problems.
[ { "version": "v1", "created": "Mon, 11 Mar 2024 12:56:36 GMT" }, { "version": "v2", "created": "Thu, 30 May 2024 11:16:49 GMT" }, { "version": "v3", "created": "Sat, 1 Jun 2024 05:19:24 GMT" }, { "version": "v4", "created": "Wed, 12 Mar 2025 05:54:38 GMT" } ]
2025-03-13T00:00:00
[ [ "Huang", "Jintao", "" ], [ "Cheung", "Yiu-Ming", "" ], [ "Vong", "Chi-Man", "" ] ]
TITLE: PLOOD: Partial Label Learning with Out-of-distribution Objects ABSTRACT: Existing Partial Label Learning (PLL) methods posit that training and test data adhere to the same distribution, a premise that frequently does not hold in practical applications where Out-of-Distribution (OOD) objects are present. We introduce the OODPLL paradigm to tackle this significant yet underexplored issue, and our newly proposed PLOOD framework enables PLL to tackle OOD objects through Positive-Negative Sample Augmented (PNSA) feature learning and Partial Energy (PE)-based label refinement. The PNSA module enhances feature discrimination and OOD recognition by simulating in- and out-of-distribution instances through structured positive and negative sample augmentation, in contrast to conventional PLL methods, which struggle to distinguish OOD samples. The PE scoring mechanism combines label confidence with energy-based uncertainty estimation, thereby reducing the impact of imprecise supervision and effectively achieving label disambiguation. Experimental results on CIFAR-10 and CIFAR-100, alongside various OOD datasets, demonstrate that conventional PLL methods exhibit substantial degradation in OOD scenarios, underscoring the necessity of incorporating OOD considerations in PLL approaches. Ablation studies show that PNSA feature learning and PE-based label refinement are necessary for PLOOD to work, offering a robust solution for open-set PLL problems.
no_new_dataset
0.946151
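A hedged sketch of how an energy score over logits is commonly computed and how it might be combined with candidate-label confidence in the spirit of the PE mechanism above; the combination rule, the weight `lam`, and the function names are assumptions, not the paper's definition.

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Standard energy-based OOD score: lower energy ~ more in-distribution."""
    z = logits / T
    m = z.max(axis=1, keepdims=True)
    return -T * (m.squeeze(1) + np.log(np.exp(z - m).sum(axis=1)))

def partial_energy(logits, candidate_mask, T=1.0, lam=0.5):
    """Illustrative combination of (negated) energy with confidence restricted to the
    candidate label set of each partially labeled sample (lam is a made-up weight)."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    candidate_conf = (probs * candidate_mask).sum(axis=1)
    return lam * (-energy_score(logits, T)) + (1 - lam) * candidate_conf

logits = np.array([[4.0, 0.5, 0.1], [0.3, 0.2, 0.1]])
mask = np.array([[1, 1, 0], [1, 0, 1]], dtype=float)   # candidate label sets
print(partial_energy(logits, mask))
```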
2403.20312
Jaisidh Singh
Jaisidh Singh, Ishaan Shrivastava, Mayank Vatsa, Richa Singh, Aparna Bharati
Learn "No" to Say "Yes" Better: Improving Vision-Language Models via Negations
14 pages + 6 figures in main manuscript (excluding references)
WACV 2025 pages(7991-8001)
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing vision-language models (VLMs) treat text descriptions as a unit, confusing individual concepts in a prompt and impairing visual semantic matching and reasoning. An important aspect of reasoning in logic and language is negation. This paper highlights the limitations of popular VLMs such as CLIP at understanding the implications of negations, i.e., the effect of the word "not" in a given prompt. To enable evaluation of VLMs on fluent prompts with negations, we present CC-Neg, a dataset containing 228,246 images, true captions and their corresponding negated captions. Using CC-Neg along with modifications to the contrastive loss of CLIP, our proposed CoN-CLIP framework has an improved understanding of negations. This training paradigm improves CoN-CLIP's ability to encode semantics reliably, resulting in a 3.85% average gain in top-1 accuracy for zero-shot image classification across 8 datasets. Further, CoN-CLIP outperforms CLIP on challenging compositionality benchmarks such as SugarCREPE by 4.4%, showcasing emergent compositional understanding of objects, relations, and attributes in text. Overall, our work addresses a crucial limitation of VLMs by introducing a dataset and framework that strengthens semantic associations between images and text, demonstrating improved large-scale foundation models with significantly reduced computational cost, promoting efficiency and accessibility.
[ { "version": "v1", "created": "Fri, 29 Mar 2024 17:33:42 GMT" } ]
2025-03-13T00:00:00
[ [ "Singh", "Jaisidh", "" ], [ "Shrivastava", "Ishaan", "" ], [ "Vatsa", "Mayank", "" ], [ "Singh", "Richa", "" ], [ "Bharati", "Aparna", "" ] ]
TITLE: Learn "No" to Say "Yes" Better: Improving Vision-Language Models via Negations ABSTRACT: Existing vision-language models (VLMs) treat text descriptions as a unit, confusing individual concepts in a prompt and impairing visual semantic matching and reasoning. An important aspect of reasoning in logic and language is negations. This paper highlights the limitations of popular VLMs such as CLIP, at understanding the implications of negations, i.e., the effect of the word "not" in a given prompt. To enable evaluation of VLMs on fluent prompts with negations, we present CC-Neg, a dataset containing 228,246 images, true captions and their corresponding negated captions. Using CC-Neg along with modifications to the contrastive loss of CLIP, our proposed CoN-CLIP framework, has an improved understanding of negations. This training paradigm improves CoN-CLIP's ability to encode semantics reliably, resulting in 3.85% average gain in top-1 accuracy for zero-shot image classification across 8 datasets. Further, CoN-CLIP outperforms CLIP on challenging compositionality benchmarks such as SugarCREPE by 4.4%, showcasing emergent compositional understanding of objects, relations, and attributes in text. Overall, our work addresses a crucial limitation of VLMs by introducing a dataset and framework that strengthens semantic associations between images and text, demonstrating improved large-scale foundation models with significantly reduced computational cost, promoting efficiency and accessibility.
new_dataset
0.960584
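One plausible way to fold negated captions into a CLIP-style contrastive loss is to treat each image's negated caption as an extra hard negative; the sketch below illustrates that idea only and is not the actual CoN-CLIP objective, whose exact form the record above does not specify.

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_negations(img, txt, neg_txt, temperature=0.07):
    """img, txt, neg_txt: (N, D) L2-normalized embeddings.
    neg_txt[i] is the negated version of caption txt[i], used as an extra hard negative."""
    logits_pos = img @ txt.t() / temperature                              # (N, N) usual CLIP logits
    logits_neg = (img * neg_txt).sum(dim=1, keepdim=True) / temperature   # (N, 1)
    logits = torch.cat([logits_pos, logits_neg], dim=1)                   # negated caption appended
    labels = torch.arange(img.size(0), device=img.device)
    loss_i2t = F.cross_entropy(logits, labels)
    loss_t2i = F.cross_entropy(logits_pos.t(), labels)                    # text-to-image left unchanged
    return 0.5 * (loss_i2t + loss_t2i)

img = F.normalize(torch.randn(8, 512), dim=1)
txt = F.normalize(torch.randn(8, 512), dim=1)
neg = F.normalize(torch.randn(8, 512), dim=1)
print(contrastive_loss_with_negations(img, txt, neg))
```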
2404.11100
Qiyu Hou
Qiyu Hou, Jun Wang, Meixuan Qiao, Lujun Tian
Synthesizing Realistic Data for Table Recognition
ICDAR 2024
null
10.1007/978-3-031-70533-5_22
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To overcome the limitations and challenges of current automatic table data annotation methods and random table data synthesis approaches, we propose a novel method for synthesizing annotation data specifically designed for table recognition. This method utilizes the structure and content of existing complex tables, facilitating the efficient creation of tables that closely replicate the authentic styles found in the target domain. By leveraging the actual structure and content of tables from Chinese financial announcements, we have developed the first extensive table annotation dataset in this domain. We used this dataset to train several recent deep learning-based end-to-end table recognition models. Additionally, we have established the inaugural benchmark for real-world complex tables in the Chinese financial announcement domain, using it to assess the performance of models trained on our synthetic data, thereby effectively validating our method's practicality and effectiveness. Furthermore, we applied our synthesis method to augment the FinTabNet dataset, extracted from English financial announcements, by increasing the proportion of tables with multiple spanning cells to introduce greater complexity. Our experiments show that models trained on this augmented dataset achieve comprehensive improvements in performance, especially in the recognition of tables with multiple spanning cells.
[ { "version": "v1", "created": "Wed, 17 Apr 2024 06:36:17 GMT" }, { "version": "v2", "created": "Tue, 9 Jul 2024 12:09:32 GMT" } ]
2025-03-13T00:00:00
[ [ "Hou", "Qiyu", "" ], [ "Wang", "Jun", "" ], [ "Qiao", "Meixuan", "" ], [ "Tian", "Lujun", "" ] ]
TITLE: Synthesizing Realistic Data for Table Recognition ABSTRACT: To overcome the limitations and challenges of current automatic table data annotation methods and random table data synthesis approaches, we propose a novel method for synthesizing annotation data specifically designed for table recognition. This method utilizes the structure and content of existing complex tables, facilitating the efficient creation of tables that closely replicate the authentic styles found in the target domain. By leveraging the actual structure and content of tables from Chinese financial announcements, we have developed the first extensive table annotation dataset in this domain. We used this dataset to train several recent deep learning-based end-to-end table recognition models. Additionally, we have established the inaugural benchmark for real-world complex tables in the Chinese financial announcement domain, using it to assess the performance of models trained on our synthetic data, thereby effectively validating our method's practicality and effectiveness. Furthermore, we applied our synthesis method to augment the FinTabNet dataset, extracted from English financial announcements, by increasing the proportion of tables with multiple spanning cells to introduce greater complexity. Our experiments show that models trained on this augmented dataset achieve comprehensive improvements in performance, especially in the recognition of tables with multiple spanning cells.
new_dataset
0.963575
2404.11465
Arvindh Arun
Arvindh Arun, Saurav Chhatani, Jisun An, Ponnurangam Kumaraguru
X-posing Free Speech: Examining the Impact of Moderation Relaxation on Online Social Networks
null
null
10.18653/v1/2024.woah-1.15
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
We investigate the impact of free speech and the relaxation of moderation on online social media platforms using Elon Musk's takeover of Twitter as a case study. By curating a dataset of over 10 million tweets, our study employs a novel framework combining content and network analysis. Our findings reveal a significant increase in the distribution of certain forms of hate content, particularly targeting the LGBTQ+ community and liberals. Network analysis reveals the formation of cohesive hate communities facilitated by influential bridge users, with substantial growth in interactions hinting at increased hate production and diffusion. By tracking the temporal evolution of PageRank, we identify key influencers, primarily self-identified far-right supporters disseminating hate against liberals and woke culture. Ironically, embracing free speech principles appears to have enabled hate speech against the very concept of freedom of expression and free speech itself. Our findings underscore the delicate balance platforms must strike between open expression and robust moderation to curb the proliferation of hate online.
[ { "version": "v1", "created": "Wed, 17 Apr 2024 15:15:47 GMT" }, { "version": "v2", "created": "Thu, 23 May 2024 08:05:39 GMT" } ]
2025-03-13T00:00:00
[ [ "Arun", "Arvindh", "" ], [ "Chhatani", "Saurav", "" ], [ "An", "Jisun", "" ], [ "Kumaraguru", "Ponnurangam", "" ] ]
TITLE: X-posing Free Speech: Examining the Impact of Moderation Relaxation on Online Social Networks ABSTRACT: We investigate the impact of free speech and the relaxation of moderation on online social media platforms using Elon Musk's takeover of Twitter as a case study. By curating a dataset of over 10 million tweets, our study employs a novel framework combining content and network analysis. Our findings reveal a significant increase in the distribution of certain forms of hate content, particularly targeting the LGBTQ+ community and liberals. Network analysis reveals the formation of cohesive hate communities facilitated by influential bridge users, with substantial growth in interactions hinting at increased hate production and diffusion. By tracking the temporal evolution of PageRank, we identify key influencers, primarily self-identified far-right supporters disseminating hate against liberals and woke culture. Ironically, embracing free speech principles appears to have enabled hate speech against the very concept of freedom of expression and free speech itself. Our findings underscore the delicate balance platforms must strike between open expression and robust moderation to curb the proliferation of hate online.
new_dataset
0.964456
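A small networkx sketch of the kind of analysis described in the record above: building a directed interaction graph per time window and tracking PageRank to surface influential users. The edge semantics (who retweets or replies to whom) and the window length are assumptions.

```python
import networkx as nx
from collections import defaultdict

def pagerank_over_windows(interactions, window_days=7):
    """interactions: iterable of (day_index, source_user, target_user) edges,
    e.g. a retweet or reply from source to target."""
    by_window = defaultdict(list)
    for day, src, dst in interactions:
        by_window[day // window_days].append((src, dst))

    scores = {}
    for window, edges in sorted(by_window.items()):
        g = nx.DiGraph()
        g.add_edges_from(edges)
        scores[window] = nx.pagerank(g, alpha=0.85)
    return scores

toy = [(0, "a", "b"), (1, "c", "b"), (8, "a", "c"), (9, "b", "c")]
for w, pr in pagerank_over_windows(toy).items():
    print(w, max(pr, key=pr.get))   # most central user per window
```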
2405.08971
Marvin Pf\"ortner
Marvin Pf\"ortner, Jonathan Wenger, Jon Cockayne, Philipp Hennig
Computation-Aware Kalman Filtering and Smoothing
null
null
null
null
cs.LG cs.NA math.NA stat.ML
http://creativecommons.org/licenses/by/4.0/
Kalman filtering and smoothing are the foundational mechanisms for efficient inference in Gauss-Markov models. However, their time and memory complexities scale prohibitively with the size of the state space. This is particularly problematic in spatiotemporal regression problems, where the state dimension scales with the number of spatial observations. Existing approximate frameworks leverage low-rank approximations of the covariance matrix. But since they do not model the error introduced by the computational approximation, their predictive uncertainty estimates can be overly optimistic. In this work, we propose a probabilistic numerical method for inference in high-dimensional Gauss-Markov models which mitigates these scaling issues. Our matrix-free iterative algorithm leverages GPU acceleration and crucially enables a tunable trade-off between computational cost and predictive uncertainty. Finally, we demonstrate the scalability of our method on a large-scale climate dataset.
[ { "version": "v1", "created": "Tue, 14 May 2024 21:31:11 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 15:51:20 GMT" } ]
2025-03-13T00:00:00
[ [ "Pförtner", "Marvin", "" ], [ "Wenger", "Jonathan", "" ], [ "Cockayne", "Jon", "" ], [ "Hennig", "Philipp", "" ] ]
TITLE: Computation-Aware Kalman Filtering and Smoothing ABSTRACT: Kalman filtering and smoothing are the foundational mechanisms for efficient inference in Gauss-Markov models. However, their time and memory complexities scale prohibitively with the size of the state space. This is particularly problematic in spatiotemporal regression problems, where the state dimension scales with the number of spatial observations. Existing approximate frameworks leverage low-rank approximations of the covariance matrix. But since they do not model the error introduced by the computational approximation, their predictive uncertainty estimates can be overly optimistic. In this work, we propose a probabilistic numerical method for inference in high-dimensional Gauss-Markov models which mitigates these scaling issues. Our matrix-free iterative algorithm leverages GPU acceleration and crucially enables a tunable trade-off between computational cost and predictive uncertainty. Finally, we demonstrate the scalability of our method on a large-scale climate dataset.
no_new_dataset
0.944842
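For context on the record above, the classical dense Kalman predict/update step whose cubic-in-state-dimension cost motivates the work; the computation-aware, matrix-free algorithm itself is not reproduced here.

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One predict/update step of the classical (dense) Kalman filter.
    The dense matrix products and inverse below are what scales poorly with state dimension."""
    # Predict
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

d, k = 4, 2
rng = np.random.default_rng(0)
A, Q, H, R = np.eye(d), 0.1 * np.eye(d), rng.normal(size=(k, d)), np.eye(k)
m, P = np.zeros(d), np.eye(d)
m, P = kalman_step(m, P, rng.normal(size=k), A, Q, H, R)
print(m.shape, P.shape)
```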
2405.18458
Yizhi Wang
Yizhi Wang, Minjia Chen, Chunhui Yao, Jie Ma, Ting Yan, Richard Penty, Qixiang Cheng
Asymmetrical estimator for training encapsulated deep photonic neural networks
23 pages, 6 figures
Nat Commun 16, 2143 (2025)
10.1038/s41467-025-57459-5
null
cs.LG physics.optics
http://creativecommons.org/licenses/by/4.0/
Photonic neural networks (PNNs) are fast in-propagation and high bandwidth paradigms that aim to popularize reproducible NN acceleration with higher efficiency and lower cost. However, the training of PNN is known to be challenging, where the device-to-device and system-to-system variations create imperfect knowledge of the PNN. Despite backpropagation (BP)-based training algorithms being the industry standard for their robustness, generality, and fast gradient convergence for digital training, existing PNN-BP methods rely heavily on accurate intermediate state extraction or extensive computational resources for deep PNNs (DPNNs). The truncated photonic signal propagation and the computation overhead bottleneck DPNN's operation efficiency and increase system construction cost. Here, we introduce the asymmetrical training (AsyT) method, tailored for encapsulated DPNNs, where the signal is preserved in the analogue photonic domain for the entire structure. AsyT offers a lightweight solution for DPNNs with minimum readouts, fast and energy-efficient operation, and minimum system footprint. AsyT's ease of operation, error tolerance, and generality aim to promote PNN acceleration in a widened operational scenario despite the fabrication variations and imperfect controls. We demonstrated AsyT for encapsulated DPNN with integrated photonic chips, repeatably enhancing the performance from in-silico BP for different network structures and datasets.
[ { "version": "v1", "created": "Tue, 28 May 2024 17:27:20 GMT" }, { "version": "v2", "created": "Thu, 15 Aug 2024 10:58:17 GMT" }, { "version": "v3", "created": "Sun, 17 Nov 2024 12:33:25 GMT" }, { "version": "v4", "created": "Thu, 13 Feb 2025 11:59:20 GMT" } ]
2025-03-13T00:00:00
[ [ "Wang", "Yizhi", "" ], [ "Chen", "Minjia", "" ], [ "Yao", "Chunhui", "" ], [ "Ma", "Jie", "" ], [ "Yan", "Ting", "" ], [ "Penty", "Richard", "" ], [ "Cheng", "Qixiang", "" ] ]
TITLE: Asymmetrical estimator for training encapsulated deep photonic neural networks ABSTRACT: Photonic neural networks (PNNs) are fast in-propagation and high bandwidth paradigms that aim to popularize reproducible NN acceleration with higher efficiency and lower cost. However, the training of PNN is known to be challenging, where the device-to-device and system-to-system variations create imperfect knowledge of the PNN. Despite backpropagation (BP)-based training algorithms being the industry standard for their robustness, generality, and fast gradient convergence for digital training, existing PNN-BP methods rely heavily on accurate intermediate state extraction or extensive computational resources for deep PNNs (DPNNs). The truncated photonic signal propagation and the computation overhead bottleneck DPNN's operation efficiency and increase system construction cost. Here, we introduce the asymmetrical training (AsyT) method, tailored for encapsulated DPNNs, where the signal is preserved in the analogue photonic domain for the entire structure. AsyT offers a lightweight solution for DPNNs with minimum readouts, fast and energy-efficient operation, and minimum system footprint. AsyT's ease of operation, error tolerance, and generality aim to promote PNN acceleration in a widened operational scenario despite the fabrication variations and imperfect controls. We demonstrated AsyT for encapsulated DPNN with integrated photonic chips, repeatably enhancing the performance from in-silico BP for different network structures and datasets.
no_new_dataset
0.951006
2405.20903
Valeria Mascolo
Valeria Mascolo, Alessandro Lovo, Corentin Herbert, Freddy Bouchet
Gaussian Framework and Optimal Projection of Weather Fields for Prediction of Extreme Events
40 pages, 11 figures, 6 tables
null
null
null
physics.ao-ph physics.data-an
http://creativecommons.org/licenses/by-nc-sa/4.0/
Extreme events are the major weather-related hazard for humanity. It is then of crucial importance to have a good understanding of their statistics and to be able to forecast them. However, lack of sufficient data makes their study particularly challenging. In this work, we provide a simple framework for studying extreme events that tackles the lack of data issue by using the entire available dataset, rather than focusing on the extremes of the dataset. To do so, we make the assumption that the set of predictors and the observable used to define the extreme event follow a jointly Gaussian distribution. This naturally gives the notion of an optimal projection of the predictors for forecasting the event. We take as a case study extreme heatwaves over France, and we test our method on an 8000-year-long intermediate complexity climate model time series and on the ERA5 reanalysis dataset. For a-posteriori statistics, we observe and motivate the fact that composite maps of very extreme events look similar to less extreme ones. For prediction, we show that our method is competitive with off-the-shelf neural networks on the long dataset and outperforms them on reanalysis. The optimal projection pattern, which makes our forecast intrinsically interpretable, highlights the importance of soil moisture deficit and quasi-stationary Rossby waves as precursors to extreme heatwaves.
[ { "version": "v1", "created": "Fri, 31 May 2024 15:15:29 GMT" }, { "version": "v2", "created": "Wed, 26 Jun 2024 10:42:48 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 18:18:02 GMT" } ]
2025-03-13T00:00:00
[ [ "Mascolo", "Valeria", "" ], [ "Lovo", "Alessandro", "" ], [ "Herbert", "Corentin", "" ], [ "Bouchet", "Freddy", "" ] ]
TITLE: Gaussian Framework and Optimal Projection of Weather Fields for Prediction of Extreme Events ABSTRACT: Extreme events are the major weather-related hazard for humanity. It is then of crucial importance to have a good understanding of their statistics and to be able to forecast them. However, lack of sufficient data makes their study particularly challenging. In this work, we provide a simple framework for studying extreme events that tackles the lack of data issue by using the entire available dataset, rather than focusing on the extremes of the dataset. To do so, we make the assumption that the set of predictors and the observable used to define the extreme event follow a jointly Gaussian distribution. This naturally gives the notion of an optimal projection of the predictors for forecasting the event. We take as a case study extreme heatwaves over France, and we test our method on an 8000-year-long intermediate complexity climate model time series and on the ERA5 reanalysis dataset. For a-posteriori statistics, we observe and motivate the fact that composite maps of very extreme events look similar to less extreme ones. For prediction, we show that our method is competitive with off-the-shelf neural networks on the long dataset and outperforms them on reanalysis. The optimal projection pattern, which makes our forecast intrinsically interpretable, highlights the importance of soil moisture deficit and quasi-stationary Rossby waves as precursors to extreme heatwaves.
no_new_dataset
0.947332
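The jointly Gaussian assumption in the record above admits a standard closed form for the conditional-mean forecast; the identity below is the textbook version, and the paper's exact definition of the optimal projection may differ in detail.

```latex
% For jointly Gaussian predictors X \in \mathbb{R}^d and scalar observable y:
\[
\begin{aligned}
\mathbb{E}[y \mid X = x] &= \mu_y + \Sigma_{yX}\,\Sigma_{XX}^{-1}\,(x - \mu_X), \\
\operatorname{Var}[y \mid X = x] &= \sigma_y^2 - \Sigma_{yX}\,\Sigma_{XX}^{-1}\,\Sigma_{Xy},
\end{aligned}
\]
% so the optimal linear projection of the weather field has weights
% w = \Sigma_{XX}^{-1}\,\Sigma_{Xy}, and the forecast index is w^\top (x - \mu_X).
```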
2406.08788
Jay Revolinsky
Jay Revolinsky, Harry Shomer, Jiliang Tang
Towards Understanding Link Predictor Generalizability Under Distribution Shifts
23 pages, 8 figures, 17 tables
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
State-of-the-art link prediction (LP) models demonstrate impressive benchmark results. However, popular benchmark datasets often assume that training, validation, and testing samples are representative of the overall dataset distribution. In real-world situations, this assumption is often incorrect; uncontrolled factors lead new dataset samples to come from a different distribution than training samples. Additionally, the majority of recent work with graph dataset shift focuses on node- and graph-level tasks, largely ignoring link-level tasks. To bridge this gap, we introduce a novel splitting strategy, known as LPShift, which utilizes structural properties to induce a controlled distribution shift. We verify LPShift's effect through empirical evaluation of SOTA LP models on 16 LPShift variants of original dataset splits, with results indicating drastic changes to model performance. Additional experiments demonstrate graph structure has a strong influence on the success of current generalization methods. Source Code Available Here: https://github.com/revolins/LPShift
[ { "version": "v1", "created": "Thu, 13 Jun 2024 03:47:12 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 19:49:55 GMT" } ]
2025-03-13T00:00:00
[ [ "Revolinsky", "Jay", "" ], [ "Shomer", "Harry", "" ], [ "Tang", "Jiliang", "" ] ]
TITLE: Towards Understanding Link Predictor Generalizability Under Distribution Shifts ABSTRACT: State-of-the-art link prediction (LP) models demonstrate impressive benchmark results. However, popular benchmark datasets often assume that training, validation, and testing samples are representative of the overall dataset distribution. In real-world situations, this assumption is often incorrect; uncontrolled factors lead new dataset samples to come from a different distribution than training samples. Additionally, the majority of recent work with graph dataset shift focuses on node- and graph-level tasks, largely ignoring link-level tasks. To bridge this gap, we introduce a novel splitting strategy, known as LPShift, which utilizes structural properties to induce a controlled distribution shift. We verify LPShift's effect through empirical evaluation of SOTA LP models on 16 LPShift variants of original dataset splits, with results indicating drastic changes to model performance. Additional experiments demonstrate graph structure has a strong influence on the success of current generalization methods. Source Code Available Here: https://github.com/revolins/LPShift
no_new_dataset
0.946695
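The record above does not specify which structural property LPShift thresholds, so the sketch below uses common-neighbor counts purely as an illustration of inducing a controlled train/test shift over link-prediction edges; the property, threshold, and split rule are assumptions.

```python
import networkx as nx

def structural_split(g, threshold=1):
    """Illustrative split: train on edges whose endpoints share at most `threshold`
    common neighbors, test on edges that share more, inducing a distribution shift."""
    train, test = [], []
    for u, v in g.edges():
        cn = len(list(nx.common_neighbors(g, u, v)))
        (train if cn <= threshold else test).append((u, v))
    return train, test

g = nx.karate_club_graph()
train_edges, test_edges = structural_split(g, threshold=2)
print(len(train_edges), len(test_edges))
```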
2406.09836
Zhiwei Zhang
Zhiwei Zhang, Minhua Lin, Junjie Xu, Zongyu Wu, Enyan Dai, Suhang Wang
Robustness Inspired Graph Backdoor Defense
Accepted by ICLR 2025 (Oral)
null
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification. However, recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption. Despite initial efforts to defend against specific graph backdoor attacks, there is no work on defending against various types of backdoor attacks where generated triggers have different properties. Hence, we first empirically verify that prediction variance under edge dropping is a crucial indicator for identifying poisoned nodes. With this observation, we propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones. Furthermore, we introduce a novel robust training strategy to efficiently counteract the impact of the triggers. Extensive experiments on real-world datasets show that our framework can effectively identify poisoned nodes, significantly degrade the attack success rate, and maintain clean accuracy when defending against various types of graph backdoor attacks with different properties.
[ { "version": "v1", "created": "Fri, 14 Jun 2024 08:46:26 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 02:55:02 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhang", "Zhiwei", "" ], [ "Lin", "Minhua", "" ], [ "Xu", "Junjie", "" ], [ "Wu", "Zongyu", "" ], [ "Dai", "Enyan", "" ], [ "Wang", "Suhang", "" ] ]
TITLE: Robustness Inspired Graph Backdoor Defense ABSTRACT: Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification. However, recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption. Despite initial efforts to defend against specific graph backdoor attacks, there is no work on defending against various types of backdoor attacks where generated triggers have different properties. Hence, we first empirically verify that prediction variance under edge dropping is a crucial indicator for identifying poisoned nodes. With this observation, we propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones. Furthermore, we introduce a novel robust training strategy to efficiently counteract the impact of the triggers. Extensive experiments on real-world datasets show that our framework can effectively identify poisoned nodes, significantly degrade the attack success rate, and maintain clean accuracy when defending against various types of graph backdoor attacks with different properties.
no_new_dataset
0.945298
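A framework-agnostic sketch of the detection signal described in the record above: score each node by the variance of its predictions across randomly edge-dropped copies of the graph. The drop rate, number of runs, and the toy stand-in model are placeholders; how the scores are thresholded is left to the defense.

```python
import numpy as np

def prediction_variance(predict, adj, n_runs=20, drop_rate=0.2, seed=0):
    """predict(adj) -> (num_nodes, num_classes) class scores from a fixed trained GNN.
    Returns, per node, the variance of its predictions across edge-dropped graphs."""
    rng = np.random.default_rng(seed)
    rows, cols = np.triu(adj, k=1).nonzero()
    preds = []
    for _ in range(n_runs):
        keep = rng.random(len(rows)) > drop_rate
        a = np.zeros_like(adj)
        a[rows[keep], cols[keep]] = 1
        a[cols[keep], rows[keep]] = 1
        preds.append(predict(a))
    preds = np.stack(preds)                      # (n_runs, nodes, classes)
    return preds.var(axis=0).sum(axis=1)         # one variance score per node

# toy stand-in "model": prediction depends only on node degree
rng = np.random.default_rng(1)
adj = np.triu((rng.random((6, 6)) > 0.6).astype(float), 1)
adj = adj + adj.T
toy_predict = lambda a: np.stack(
    [a.sum(1) / (a.sum() + 1e-9), 1 - a.sum(1) / (a.sum() + 1e-9)], axis=1
)
print(prediction_variance(toy_predict, adj))
```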
2406.16038
Delin Qu
Delin Qu, Qizhi Chen, Pingrui Zhang, Xianqiang Gao, Junzhe Li, Bin Zhao, Dong Wang and Xuelong Li
LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Rendering and Control
Accepted at Neurips 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper scales object-level reconstruction to complex scenes, advancing interactive scene reconstruction. We introduce two datasets, OmniSim and InterReal, featuring 28 scenes with multiple interactive objects. To tackle the challenge of inaccurate interactive motion recovery in complex scenes, we propose LiveScene, a scene-level language-embedded interactive radiance field that efficiently reconstructs and controls multiple objects. By decomposing the interactive scene into local deformable fields, LiveScene enables separate reconstruction of individual object motions, reducing memory consumption. Additionally, our interaction-aware language embedding localizes individual interactive objects, allowing for arbitrary control using natural language. Our approach demonstrates significant superiority in novel view synthesis, interactive scene control, and language grounding performance through extensive experiments. Project page: https://livescenes.github.io.
[ { "version": "v1", "created": "Sun, 23 Jun 2024 07:26:13 GMT" }, { "version": "v2", "created": "Sun, 3 Nov 2024 07:37:05 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 03:19:42 GMT" } ]
2025-03-13T00:00:00
[ [ "Qu", "Delin", "" ], [ "Chen", "Qizhi", "" ], [ "Zhang", "Pingrui", "" ], [ "Gao", "Xianqiang", "" ], [ "Li", "Junzhe", "" ], [ "Zhao", "Bin", "" ], [ "Wang", "Dong", "" ], [ "Li", "Xuelong", "" ] ]
TITLE: LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Rendering and Control ABSTRACT: This paper scales object-level reconstruction to complex scenes, advancing interactive scene reconstruction. We introduce two datasets, OmniSim and InterReal, featuring 28 scenes with multiple interactive objects. To tackle the challenge of inaccurate interactive motion recovery in complex scenes, we propose LiveScene, a scene-level language-embedded interactive radiance field that efficiently reconstructs and controls multiple objects. By decomposing the interactive scene into local deformable fields, LiveScene enables separate reconstruction of individual object motions, reducing memory consumption. Additionally, our interaction-aware language embedding localizes individual interactive objects, allowing for arbitrary control using natural language. Our approach demonstrates significant superiority in novel view synthesis, interactive scene control, and language grounding performance through extensive experiments. Project page: https://livescenes.github.io.
new_dataset
0.952264
2407.02165
Zihao Huang
Zihao Huang, Shoukang Hu, Guangcong Wang, Tianqi Liu, Yuhang Zang, Zhiguo Cao, Wei Li, Ziwei Liu
WildAvatar: Learning In-the-wild 3D Avatars from the Web
CVPR2025, Project page: https://wildavatar.github.io/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing research on avatar creation is typically limited to laboratory datasets, which require high costs against scalability and exhibit insufficient representation of the real world. On the other hand, the web abounds with off-the-shelf real-world human videos, but these videos vary in quality and require accurate annotations for avatar creation. To this end, we propose an automatic annotating pipeline with filtering protocols to curate these humans from the web. Our pipeline surpasses state-of-the-art methods on the EMDB benchmark, and the filtering protocols boost verification metrics on web videos. We then curate WildAvatar, a web-scale in-the-wild human avatar creation dataset extracted from YouTube, with $10000+$ different human subjects and scenes. WildAvatar is at least $10\times$ richer than previous datasets for 3D human avatar creation and closer to the real world. To explore its potential, we demonstrate the quality and generalizability of avatar creation methods on WildAvatar. We will publicly release our code, data source links and annotations to push forward 3D human avatar creation and other related fields for real-world applications.
[ { "version": "v1", "created": "Tue, 2 Jul 2024 11:17:48 GMT" }, { "version": "v2", "created": "Wed, 10 Jul 2024 09:20:39 GMT" }, { "version": "v3", "created": "Sun, 14 Jul 2024 08:15:12 GMT" }, { "version": "v4", "created": "Wed, 12 Mar 2025 14:19:55 GMT" } ]
2025-03-13T00:00:00
[ [ "Huang", "Zihao", "" ], [ "Hu", "Shoukang", "" ], [ "Wang", "Guangcong", "" ], [ "Liu", "Tianqi", "" ], [ "Zang", "Yuhang", "" ], [ "Cao", "Zhiguo", "" ], [ "Li", "Wei", "" ], [ "Liu", "Ziwei", "" ] ]
TITLE: WildAvatar: Learning In-the-wild 3D Avatars from the Web ABSTRACT: Existing research on avatar creation is typically limited to laboratory datasets, which require high costs against scalability and exhibit insufficient representation of the real world. On the other hand, the web abounds with off-the-shelf real-world human videos, but these videos vary in quality and require accurate annotations for avatar creation. To this end, we propose an automatic annotating pipeline with filtering protocols to curate these humans from the web. Our pipeline surpasses state-of-the-art methods on the EMDB benchmark, and the filtering protocols boost verification metrics on web videos. We then curate WildAvatar, a web-scale in-the-wild human avatar creation dataset extracted from YouTube, with $10000+$ different human subjects and scenes. WildAvatar is at least $10\times$ richer than previous datasets for 3D human avatar creation and closer to the real world. To explore its potential, we demonstrate the quality and generalizability of avatar creation methods on WildAvatar. We will publicly release our code, data source links and annotations to push forward 3D human avatar creation and other related fields for real-world applications.
new_dataset
0.96606
2407.06060
Pedro Louro
Pedro Lima Louro, Hugo Redinho, Ricardo Santos, Ricardo Malheiro, Renato Panda, Rui Pedro Paiva
MERGE -- A Bimodal Dataset for Static Music Emotion Recognition
16 pages, 4 figures, 13 tables, submitted to IEEE Transactions on Affective Computing
null
null
null
cs.SD cs.IR cs.LG cs.MM eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Music Emotion Recognition (MER) field has seen steady developments in recent years, with contributions from feature engineering, machine learning, and deep learning. The landscape has also shifted from audio-centric systems to bimodal ensembles that combine audio and lyrics. However, a severe lack of public and sizeable bimodal databases has hampered the development and improvement of bimodal audio-lyrics systems. This article proposes three new audio, lyrics, and bimodal MER research datasets, collectively called MERGE, created using a semi-automatic approach. To comprehensively assess the proposed datasets and establish a baseline for benchmarking, we conducted several experiments for each modality, using feature engineering, machine learning, and deep learning methodologies. In addition, we propose and validate fixed train-validate-test splits. The obtained results confirm the viability of the proposed datasets, achieving the best overall result of 79.21% F1-score for bimodal classification using a deep neural network.
[ { "version": "v1", "created": "Mon, 8 Jul 2024 16:01:04 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 00:52:43 GMT" } ]
2025-03-13T00:00:00
[ [ "Louro", "Pedro Lima", "" ], [ "Redinho", "Hugo", "" ], [ "Santos", "Ricardo", "" ], [ "Malheiro", "Ricardo", "" ], [ "Panda", "Renato", "" ], [ "Paiva", "Rui Pedro", "" ] ]
TITLE: MERGE -- A Bimodal Dataset for Static Music Emotion Recognition ABSTRACT: The Music Emotion Recognition (MER) field has seen steady developments in recent years, with contributions from feature engineering, machine learning, and deep learning. The landscape has also shifted from audio-centric systems to bimodal ensembles that combine audio and lyrics. However, a severe lack of public and sizeable bimodal databases has hampered the development and improvement of bimodal audio-lyrics systems. This article proposes three new audio, lyrics, and bimodal MER research datasets, collectively called MERGE, created using a semi-automatic approach. To comprehensively assess the proposed datasets and establish a baseline for benchmarking, we conducted several experiments for each modality, using feature engineering, machine learning, and deep learning methodologies. In addition, we propose and validate fixed train-validate-test splits. The obtained results confirm the viability of the proposed datasets, achieving the best overall result of 79.21% F1-score for bimodal classification using a deep neural network.
new_dataset
0.961098
2407.08952
Ye Liu
Ye Liu, Jiajun Zhu, Xukai Liu, Haoyu Tang, Yanghai Zhang, Kai Zhang, Xiaofang Zhou, Enhong Chen
Detect, Investigate, Judge and Determine: A Knowledge-guided Framework for Few-shot Fake News Detection
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-Shot Fake News Detection (FS-FND) aims to distinguish inaccurate news from real news in extremely low-resource scenarios. This task has garnered increased attention due to the widespread dissemination and harmful impact of fake news on social media. Large Language Models (LLMs) have demonstrated competitive performance with the help of their rich prior knowledge and excellent in-context learning abilities. However, existing methods face significant limitations, such as Understanding Ambiguity and Information Scarcity, which significantly undermine the potential of LLMs. To address these shortcomings, we propose a Dual-perspective Knowledge-guided Fake News Detection (DKFND) model, designed to enhance LLMs from both inside and outside perspectives. Specifically, DKFND first identifies the knowledge concepts of each news article through a Detection Module. Subsequently, DKFND designs an Investigation Module to retrieve valuable inside and outside information concerning the current news, followed by a Judge Module to evaluate their relevance and confidence. Finally, a Determination Module derives the two respective predictions and obtains the final result. Extensive experiments on two public datasets show the efficacy of our proposed method, particularly in low-resource settings.
[ { "version": "v1", "created": "Fri, 12 Jul 2024 03:15:01 GMT" }, { "version": "v2", "created": "Fri, 14 Feb 2025 04:56:16 GMT" }, { "version": "v3", "created": "Mon, 17 Feb 2025 05:25:32 GMT" }, { "version": "v4", "created": "Tue, 11 Mar 2025 13:06:04 GMT" }, { "version": "v5", "created": "Wed, 12 Mar 2025 04:46:47 GMT" } ]
2025-03-13T00:00:00
[ [ "Liu", "Ye", "" ], [ "Zhu", "Jiajun", "" ], [ "Liu", "Xukai", "" ], [ "Tang", "Haoyu", "" ], [ "Zhang", "Yanghai", "" ], [ "Zhang", "Kai", "" ], [ "Zhou", "Xiaofang", "" ], [ "Chen", "Enhong", "" ] ]
TITLE: Detect, Investigate, Judge and Determine: A Knowledge-guided Framework for Few-shot Fake News Detection ABSTRACT: Few-Shot Fake News Detection (FS-FND) aims to distinguish inaccurate news from real news in extremely low-resource scenarios. This task has garnered increased attention due to the widespread dissemination and harmful impact of fake news on social media. Large Language Models (LLMs) have demonstrated competitive performance with the help of their rich prior knowledge and excellent in-context learning abilities. However, existing methods face significant limitations, such as Understanding Ambiguity and Information Scarcity, which significantly undermine the potential of LLMs. To address these shortcomings, we propose a Dual-perspective Knowledge-guided Fake News Detection (DKFND) model, designed to enhance LLMs from both inside and outside perspectives. Specifically, DKFND first identifies the knowledge concepts of each news article through a Detection Module. Subsequently, DKFND designs an Investigation Module to retrieve valuable inside and outside information concerning the current news, followed by a Judge Module to evaluate their relevance and confidence. Finally, a Determination Module derives the two respective predictions and obtains the final result. Extensive experiments on two public datasets show the efficacy of our proposed method, particularly in low-resource settings.
no_new_dataset
0.944074
2407.12358
Chuwei Luo
Yufan Shen, Chuwei Luo, Zhaoqing Zhu, Yang Chen, Qi Zheng, Zhi Yu, Jiajun Bu, Cong Yao
ProcTag: Process Tagging for Assessing the Efficacy of Document Instruction Data
AAAI 2025
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, large language models (LLMs) and multimodal large language models (MLLMs) have demonstrated promising results on document visual question answering (VQA) task, particularly after training on document instruction datasets. An effective evaluation method for document instruction data is crucial in constructing instruction data with high efficacy, which, in turn, facilitates the training of LLMs and MLLMs for document VQA. However, most existing evaluation methods for instruction data are limited to the textual content of the instructions themselves, thereby hindering the effective assessment of document instruction datasets and constraining their construction. In this paper, we propose ProcTag, a data-oriented method that assesses the efficacy of document instruction data. ProcTag innovatively performs tagging on the execution process of instructions rather than the instruction text itself. By leveraging the diversity and complexity of these tags to assess the efficacy of the given dataset, ProcTag enables selective sampling or filtering of document instructions. Furthermore, DocLayPrompt, a novel semi-structured layout-aware document prompting strategy, is proposed for effectively representing documents. Experiments demonstrate that sampling existing open-sourced and generated document VQA/instruction datasets with ProcTag significantly outperforms current methods for evaluating instruction data. Impressively, with ProcTag-based sampling in the generated document datasets, only 30.5\% of the document instructions are required to achieve 100\% efficacy compared to the complete dataset. The code is publicly available at https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/DocumentUnderstanding/ProcTag.
[ { "version": "v1", "created": "Wed, 17 Jul 2024 07:29:59 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 02:20:28 GMT" } ]
2025-03-13T00:00:00
[ [ "Shen", "Yufan", "" ], [ "Luo", "Chuwei", "" ], [ "Zhu", "Zhaoqing", "" ], [ "Chen", "Yang", "" ], [ "Zheng", "Qi", "" ], [ "Yu", "Zhi", "" ], [ "Bu", "Jiajun", "" ], [ "Yao", "Cong", "" ] ]
TITLE: ProcTag: Process Tagging for Assessing the Efficacy of Document Instruction Data ABSTRACT: Recently, large language models (LLMs) and multimodal large language models (MLLMs) have demonstrated promising results on document visual question answering (VQA) task, particularly after training on document instruction datasets. An effective evaluation method for document instruction data is crucial in constructing instruction data with high efficacy, which, in turn, facilitates the training of LLMs and MLLMs for document VQA. However, most existing evaluation methods for instruction data are limited to the textual content of the instructions themselves, thereby hindering the effective assessment of document instruction datasets and constraining their construction. In this paper, we propose ProcTag, a data-oriented method that assesses the efficacy of document instruction data. ProcTag innovatively performs tagging on the execution process of instructions rather than the instruction text itself. By leveraging the diversity and complexity of these tags to assess the efficacy of the given dataset, ProcTag enables selective sampling or filtering of document instructions. Furthermore, DocLayPrompt, a novel semi-structured layout-aware document prompting strategy, is proposed for effectively representing documents. Experiments demonstrate that sampling existing open-sourced and generated document VQA/instruction datasets with ProcTag significantly outperforms current methods for evaluating instruction data. Impressively, with ProcTag-based sampling in the generated document datasets, only 30.5\% of the document instructions are required to achieve 100\% efficacy compared to the complete dataset. The code is publicly available at https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/DocumentUnderstanding/ProcTag.
no_new_dataset
0.924279
2407.13329
Lorenzo Paolini
Lorenzo Paolini, Sahar Vahdati, Angelo Di Iorio, Robert Wardenga, Ivan Heibi, Silvio Peroni
CiteFusion: An Ensemble Framework for Citation Intent Classification Harnessing Dual-Model Binary Couples and SHAP Analyses
Submitted to Scientometrics Journal
null
10.5281/zenodo.15011985
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the motivations underlying scholarly citations is critical for evaluating research impact and fostering transparent scholarly communication. This study introduces CiteFusion, an ensemble framework designed to address the multiclass Citation Intent Classification (CIC) task on the benchmark datasets SciCite and ACL-ARC. The framework decomposes the task into binary classification subtasks, utilizing complementary pairs of SciBERT and XLNet models fine-tuned independently for each citation intent. These base models are aggregated through a feedforward neural network meta-classifier, ensuring robust performance in imbalanced and data-scarce scenarios. To enhance interpretability, SHAP (SHapley Additive exPlanations) is employed to analyze token-level contributions and interactions among base models, providing transparency into classification dynamics. We further investigate the semantic role of structural context by incorporating section titles into input sentences, demonstrating their significant impact on classification accuracy and model reliability. Experimental results show that CiteFusion achieves state-of-the-art performance, with Macro-F1 scores of 89.60% on SciCite and 76.24% on ACL-ARC. The original intents from both datasets are mapped to Citation Typing Ontology (CiTO) object properties to ensure interoperability and reusability. This mapping highlights overlaps between the two datasets' labels, enhancing their understandability and reusability. Finally, we release a web-based application that classifies citation intents by leveraging the CiteFusion models developed on SciCite.
[ { "version": "v1", "created": "Thu, 18 Jul 2024 09:29:33 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 11:59:18 GMT" } ]
2025-03-13T00:00:00
[ [ "Paolini", "Lorenzo", "" ], [ "Vahdati", "Sahar", "" ], [ "Di Iorio", "Angelo", "" ], [ "Wardenga", "Robert", "" ], [ "Heibi", "Ivan", "" ], [ "Peroni", "Silvio", "" ] ]
TITLE: CiteFusion: An Ensemble Framework for Citation Intent Classification Harnessing Dual-Model Binary Couples and SHAP Analyses ABSTRACT: Understanding the motivations underlying scholarly citations is critical for evaluating research impact and fostering transparent scholarly communication. This study introduces CiteFusion, an ensemble framework designed to address the multiclass Citation Intent Classification (CIC) task on the benchmark datasets SciCite and ACL-ARC. The framework decomposes the task into binary classification subtasks, utilizing complementary pairs of SciBERT and XLNet models fine-tuned independently for each citation intent. These base models are aggregated through a feedforward neural network meta-classifier, ensuring robust performance in imbalanced and data-scarce scenarios. To enhance interpretability, SHAP (SHapley Additive exPlanations) is employed to analyze token-level contributions and interactions among base models, providing transparency into classification dynamics. We further investigate the semantic role of structural context by incorporating section titles into input sentences, demonstrating their significant impact on classification accuracy and model reliability. Experimental results show that CiteFusion achieves state-of-the-art performance, with Macro-F1 scores of 89.60% on SciCite and 76.24% on ACL-ARC. The original intents from both datasets are mapped to Citation Typing Ontology (CiTO) object properties to ensure interoperability and reusability. This mapping highlights overlaps between the two datasets' labels, enhancing their understandability and reusability. Finally, we release a web-based application that classifies citation intents by leveraging the CiteFusion models developed on SciCite.
no_new_dataset
0.950549
2407.14210
Jos\'e Daniel Pascual-Triana
Jos\'e Daniel Pascual-Triana, Alberto Fern\'andez, Paulo Novais, Francisco Herrera
Fair Overlap Number of Balls (Fair-ONB): A Data-Morphology-based Undersampling Method for Bias Reduction
14 pages, 5 tables, 8 figures
null
10.1080/02331888.2025.2476029
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the key issues regarding classification problems in Trustworthy Artificial Intelligence is ensuring Fairness in the prediction of different classes when protected (sensitive) features are present. Data quality is critical in these cases, as biases in training data can be reflected in machine learning, impacting human lives and failing to comply with current regulations. One strategy to improve data quality and avoid these problems is preprocessing the dataset. Instance selection via undersampling can foster balanced learning of classes and protected feature values. Performing undersampling in class overlap areas close to the decision boundary should bolster the impact on the classifier. This work proposes Fair Overlap Number of Balls (Fair-ONB), an undersampling method that harnesses the data morphology of the different data groups (obtained from the combination of classes and protected feature values) to perform guided undersampling in overlap areas. It employs attributes of the ball coverage of the groups, such as the radius, number of covered instances and density, to select the most suitable areas for undersampling and reduce bias. Results show that the Fair-ONB method improves model Fairness with low impact on the classifier's predictive performance.
[ { "version": "v1", "created": "Fri, 19 Jul 2024 11:16:02 GMT" }, { "version": "v2", "created": "Mon, 23 Sep 2024 16:52:05 GMT" } ]
2025-03-13T00:00:00
[ [ "Pascual-Triana", "José Daniel", "" ], [ "Fernández", "Alberto", "" ], [ "Novais", "Paulo", "" ], [ "Herrera", "Francisco", "" ] ]
TITLE: Fair Overlap Number of Balls (Fair-ONB): A Data-Morphology-based Undersampling Method for Bias Reduction ABSTRACT: One of the key issues regarding classification problems in Trustworthy Artificial Intelligence is ensuring Fairness in the prediction of different classes when protected (sensitive) features are present. Data quality is critical in these cases, as biases in training data can be reflected in machine learning, impacting human lives and failing to comply with current regulations. One strategy to improve data quality and avoid these problems is preprocessing the dataset. Instance selection via undersampling can foster balanced learning of classes and protected feature values. Performing undersampling in class overlap areas close to the decision boundary should bolster the impact on the classifier. This work proposes Fair Overlap Number of Balls (Fair-ONB), an undersampling method that harnesses the data morphology of the different data groups (obtained from the combination of classes and protected feature values) to perform guided undersampling in overlap areas. It employs attributes of the ball coverage of the groups, such as the radius, number of covered instances and density, to select the most suitable areas for undersampling and reduce bias. Results show that the Fair-ONB method improves model Fairness with low impact on the classifier's predictive performance.
no_new_dataset
0.953966
2407.15620
Xihong Yang
Xihong Yang, Yiqi Wang, Jin Chen, Wenqi Fan, Xiangyu Zhao, En Zhu, Xinwang Liu, Defu Lian
Dual Test-time Training for Out-of-distribution Recommender System
null
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has been widely applied in recommender systems and has achieved revolutionary progress recently. However, most existing learning-based methods assume that the user and item distributions remain unchanged between the training phase and the test phase. In practice, the distribution of user and item features can naturally shift in real-world scenarios, potentially resulting in a substantial decrease in recommendation performance. This phenomenon can be formulated as an Out-Of-Distribution (OOD) recommendation problem. To address this challenge, we propose a novel Dual Test-Time-Training framework for OOD Recommendation, termed DT3OR. In DT3OR, we incorporate a model adaptation mechanism during the test-time phase to carefully update the recommendation model, allowing the model to adapt specifically to the shifting user and item features. To be specific, we propose a self-distillation task and a contrastive task to assist the model in learning both the user's invariant interest preferences and the variant user/item characteristics during the test-time phase, thus facilitating a smooth adaptation to the shifting features. Furthermore, we provide theoretical analysis to support the rationale behind our dual test-time training framework. To the best of our knowledge, this paper is the first work to address OOD recommendation via a test-time-training strategy. We conduct experiments on three datasets with various backbones. Comprehensive experimental results demonstrate the effectiveness of DT3OR compared to other state-of-the-art baselines.
[ { "version": "v1", "created": "Mon, 22 Jul 2024 13:27:51 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 14:06:24 GMT" } ]
2025-03-13T00:00:00
[ [ "Yang", "Xihong", "" ], [ "Wang", "Yiqi", "" ], [ "Chen", "Jin", "" ], [ "Fan", "Wenqi", "" ], [ "Zhao", "Xiangyu", "" ], [ "Zhu", "En", "" ], [ "Liu", "Xinwang", "" ], [ "Lian", "Defu", "" ] ]
TITLE: Dual Test-time Training for Out-of-distribution Recommender System ABSTRACT: Deep learning has been widely applied in recommender systems and has achieved revolutionary progress recently. However, most existing learning-based methods assume that the user and item distributions remain unchanged between the training phase and the test phase. In practice, the distribution of user and item features can naturally shift in real-world scenarios, potentially resulting in a substantial decrease in recommendation performance. This phenomenon can be formulated as an Out-Of-Distribution (OOD) recommendation problem. To address this challenge, we propose a novel Dual Test-Time-Training framework for OOD Recommendation, termed DT3OR. In DT3OR, we incorporate a model adaptation mechanism during the test-time phase to carefully update the recommendation model, allowing the model to adapt specifically to the shifting user and item features. To be specific, we propose a self-distillation task and a contrastive task to assist the model in learning both the user's invariant interest preferences and the variant user/item characteristics during the test-time phase, thus facilitating a smooth adaptation to the shifting features. Furthermore, we provide theoretical analysis to support the rationale behind our dual test-time training framework. To the best of our knowledge, this paper is the first work to address OOD recommendation via a test-time-training strategy. We conduct experiments on three datasets with various backbones. Comprehensive experimental results demonstrate the effectiveness of DT3OR compared to other state-of-the-art baselines.
no_new_dataset
0.944331
2407.21299
Kaustav Bhattacharjee
Kaustav Bhattacharjee, Soumya Kundu, Indrasis Chakraborty, and Aritra Dasgupta
Who should I trust? A Visual Analytics Approach for Comparing Net Load Forecasting Models
Accepted for publication in the proceedings of 2025 IEEE PES Grid Edge Technologies Conference & Exposition (Grid Edge)
GridEdge 2025, pp. 1-5, 2025
10.1109/GridEdge61154.2025.10887523
null
cs.HC cs.AI cs.LG cs.SY eess.SP eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Net load forecasting is crucial for energy planning and facilitating informed decision-making regarding trade and load distributions. However, evaluating forecasting models' performance against benchmark models remains challenging, thereby impeding experts' trust in the model's performance. In this context, there is a demand for technological interventions that allow scientists to compare models across various timeframes and solar penetration levels. This paper introduces a visual analytics-based application designed to compare the performance of deep-learning-based net load forecasting models with other models for probabilistic net load forecasting. This application employs carefully selected visual analytic interventions, enabling users to discern differences in model performance across different solar penetration levels, dataset resolutions, and hours of the day over multiple months. We also present observations made using our application through a case study, demonstrating the effectiveness of visualizations in aiding scientists in making informed decisions and enhancing trust in net load forecasting models.
[ { "version": "v1", "created": "Wed, 31 Jul 2024 02:57:21 GMT" } ]
2025-03-13T00:00:00
[ [ "Bhattacharjee", "Kaustav", "" ], [ "Kundu", "Soumya", "" ], [ "Chakraborty", "Indrasis", "" ], [ "Dasgupta", "Aritra", "" ] ]
TITLE: Who should I trust? A Visual Analytics Approach for Comparing Net Load Forecasting Models ABSTRACT: Net load forecasting is crucial for energy planning and facilitating informed decision-making regarding trade and load distributions. However, evaluating forecasting models' performance against benchmark models remains challenging, thereby impeding experts' trust in the model's performance. In this context, there is a demand for technological interventions that allow scientists to compare models across various timeframes and solar penetration levels. This paper introduces a visual analytics-based application designed to compare the performance of deep-learning-based net load forecasting models with other models for probabilistic net load forecasting. This application employs carefully selected visual analytic interventions, enabling users to discern differences in model performance across different solar penetration levels, dataset resolutions, and hours of the day over multiple months. We also present observations made using our application through a case study, demonstrating the effectiveness of visualizations in aiding scientists in making informed decisions and enhancing trust in net load forecasting models.
no_new_dataset
0.950227
2408.00374
Rahul Bhadani
Xi Chen, Rahul Bhadani, Larry Head
Conformal Trajectory Prediction with Multi-View Data Integration in Cooperative Driving
null
null
null
null
cs.AI cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Current research on trajectory prediction primarily relies on data collected by onboard sensors of an ego vehicle. With the rapid advancement in connected technologies, such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, valuable information from alternate views becomes accessible via wireless networks. The integration of information from alternative views has the potential to overcome the inherent limitations associated with a single viewpoint, such as occlusions and limited field of view. In this work, we introduce V2INet, a novel trajectory prediction framework designed to model multi-view data by extending existing single-view models. Unlike previous approaches where the multi-view data is manually fused or formulated as a separate training stage, our model supports end-to-end training, enhancing both flexibility and performance. Moreover, the predicted multimodal trajectories are calibrated by a post-hoc conformal prediction module to get valid and efficient confidence regions. We evaluated the entire framework using the real-world V2I dataset V2X-Seq. Our results demonstrate superior performance in terms of Final Displacement Error (FDE) and Miss Rate (MR) using a single GPU. The code is publicly available at: https://github.com/xichennn/V2I_trajectory_prediction.
[ { "version": "v1", "created": "Thu, 1 Aug 2024 08:32:03 GMT" }, { "version": "v2", "created": "Fri, 2 Aug 2024 13:00:46 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 18:19:56 GMT" } ]
2025-03-13T00:00:00
[ [ "Chen", "Xi", "" ], [ "Bhadani", "Rahul", "" ], [ "Head", "Larry", "" ] ]
TITLE: Conformal Trajectory Prediction with Multi-View Data Integration in Cooperative Driving ABSTRACT: Current research on trajectory prediction primarily relies on data collected by onboard sensors of an ego vehicle. With the rapid advancement in connected technologies, such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, valuable information from alternate views becomes accessible via wireless networks. The integration of information from alternative views has the potential to overcome the inherent limitations associated with a single viewpoint, such as occlusions and limited field of view. In this work, we introduce V2INet, a novel trajectory prediction framework designed to model multi-view data by extending existing single-view models. Unlike previous approaches where the multi-view data is manually fused or formulated as a separate training stage, our model supports end-to-end training, enhancing both flexibility and performance. Moreover, the predicted multimodal trajectories are calibrated by a post-hoc conformal prediction module to get valid and efficient confidence regions. We evaluated the entire framework using the real-world V2I dataset V2X-Seq. Our results demonstrate superior performance in terms of Final Displacement Error (FDE) and Miss Rate (MR) using a single GPU. The code is publicly available at: https://github.com/xichennn/V2I_trajectory_prediction.
no_new_dataset
0.943086
2408.00531
Max Klabunde
Max Klabunde, Tassilo Wald, Tobias Schumacher, Klaus Maier-Hein, Markus Strohmaier, Florian Lemmerich
ReSi: A Comprehensive Benchmark for Representational Similarity Measures
ICLR 2025. Code and data at https://github.com/mklabunde/resi
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Measuring the similarity of different representations of neural architectures is a fundamental task and an open research challenge for the machine learning community. This paper presents the first comprehensive benchmark for evaluating representational similarity measures based on well-defined groundings of similarity. The representational similarity (ReSi) benchmark consists of (i) six carefully designed tests for similarity measures, (ii) 24 similarity measures, (iii) 14 neural network architectures, and (iv) seven datasets, spanning over the graph, language, and vision domains. The benchmark opens up several important avenues of research on representational similarity that enable novel explorations and applications of neural architectures. We demonstrate the utility of the ReSi benchmark by conducting experiments on various neural network architectures, real world datasets and similarity measures. All components of the benchmark are publicly available and thereby facilitate systematic reproduction and production of research results. The benchmark is extensible, future research can build on and further expand it. We believe that the ReSi benchmark can serve as a sound platform catalyzing future research that aims to systematically evaluate existing and explore novel ways of comparing representations of neural architectures.
[ { "version": "v1", "created": "Thu, 1 Aug 2024 13:08:02 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 20:01:30 GMT" } ]
2025-03-13T00:00:00
[ [ "Klabunde", "Max", "" ], [ "Wald", "Tassilo", "" ], [ "Schumacher", "Tobias", "" ], [ "Maier-Hein", "Klaus", "" ], [ "Strohmaier", "Markus", "" ], [ "Lemmerich", "Florian", "" ] ]
TITLE: ReSi: A Comprehensive Benchmark for Representational Similarity Measures ABSTRACT: Measuring the similarity of different representations of neural architectures is a fundamental task and an open research challenge for the machine learning community. This paper presents the first comprehensive benchmark for evaluating representational similarity measures based on well-defined groundings of similarity. The representational similarity (ReSi) benchmark consists of (i) six carefully designed tests for similarity measures, (ii) 24 similarity measures, (iii) 14 neural network architectures, and (iv) seven datasets, spanning over the graph, language, and vision domains. The benchmark opens up several important avenues of research on representational similarity that enable novel explorations and applications of neural architectures. We demonstrate the utility of the ReSi benchmark by conducting experiments on various neural network architectures, real world datasets and similarity measures. All components of the benchmark are publicly available and thereby facilitate systematic reproduction and production of research results. The benchmark is extensible, future research can build on and further expand it. We believe that the ReSi benchmark can serve as a sound platform catalyzing future research that aims to systematically evaluate existing and explore novel ways of comparing representations of neural architectures.
new_dataset
0.82566
2408.01434
A'di Dust
Adi Dust, Pat Levitt, Maja Matari\'c
Behind the Smile: Mental Health Implications of Mother-Infant Interactions Revealed Through Smile Analysis
9 pages, 2 Figures, Affective Computing & Intelligent Interaction Conference 2024
null
10.1109/ACII63134.2024.00010
null
cs.CY cs.HC
http://creativecommons.org/licenses/by/4.0/
Mothers of infants have specific demands in fostering emotional bonds with their children, characterized by dynamics that are different from adult-adult interactions, notably requiring heightened maternal emotional regulation. In this study, we analyzed maternal emotional state by modeling maternal emotion regulation reflected in smiles. The dataset comprises N=94 videos of approximately 3 plus or minus 1-minutes, capturing free play interactions between 6 and 12-month-old infants and their mothers. Corresponding demographic details of self-reported maternal mental health provide variables for determining mothers' relations to emotions measured during free play. In this work, we employ diverse methodological approaches to explore the temporal evolution of maternal smiles. Our findings reveal a correlation between the temporal dynamics of mothers' smiles and their emotional state. Furthermore, we identify specific smile features that correlate with maternal emotional state, thereby enabling informed inferences with existing literature on general smile analysis. This study offers insights into emotional labor, defined as the management of one's own emotions for the benefit of others, and emotion regulation entailed in mother-infant interactions.
[ { "version": "v1", "created": "Thu, 18 Jul 2024 23:22:57 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 23:31:31 GMT" } ]
2025-03-13T00:00:00
[ [ "Dust", "Adi", "" ], [ "Levitt", "Pat", "" ], [ "Matarić", "Maja", "" ] ]
TITLE: Behind the Smile: Mental Health Implications of Mother-Infant Interactions Revealed Through Smile Analysis ABSTRACT: Mothers of infants have specific demands in fostering emotional bonds with their children, characterized by dynamics that are different from adult-adult interactions, notably requiring heightened maternal emotional regulation. In this study, we analyzed maternal emotional state by modeling maternal emotion regulation reflected in smiles. The dataset comprises N=94 videos of approximately 3 plus or minus 1-minutes, capturing free play interactions between 6 and 12-month-old infants and their mothers. Corresponding demographic details of self-reported maternal mental health provide variables for determining mothers' relations to emotions measured during free play. In this work, we employ diverse methodological approaches to explore the temporal evolution of maternal smiles. Our findings reveal a correlation between the temporal dynamics of mothers' smiles and their emotional state. Furthermore, we identify specific smile features that correlate with maternal emotional state, thereby enabling informed inferences with existing literature on general smile analysis. This study offers insights into emotional labor, defined as the management of one's own emotions for the benefit of others, and emotion regulation entailed in mother-infant interactions.
new_dataset
0.643861
2408.14998
Alloy Das
Alloy Das, Sanket Biswas, Umapada Pal, Josep Llad\'os, Saumik Bhattacharya
FastTextSpotter: A High-Efficiency Transformer for Multilingual Scene Text Spotting
Accepted in ICPR 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The proliferation of scene text in both structured and unstructured environments presents significant challenges in optical character recognition (OCR), necessitating more efficient and robust text spotting solutions. This paper presents FastTextSpotter, a framework that integrates a Swin Transformer visual backbone with a Transformer Encoder-Decoder architecture, enhanced by a novel, faster self-attention unit, SAC2, to improve processing speeds while maintaining accuracy. FastTextSpotter has been validated across multiple datasets, including ICDAR2015 for regular texts and CTW1500 and TotalText for arbitrary-shaped texts, benchmarking against current state-of-the-art models. Our results indicate that FastTextSpotter not only achieves superior accuracy in detecting and recognizing multilingual scene text (English and Vietnamese) but also improves model efficiency, thereby setting new benchmarks in the field. This study underscores the potential of advanced transformer architectures in improving the adaptability and speed of text spotting applications in diverse real-world settings. The dataset, code, and pre-trained models have been released in our Github.
[ { "version": "v1", "created": "Tue, 27 Aug 2024 12:28:41 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 14:56:20 GMT" } ]
2025-03-13T00:00:00
[ [ "Das", "Alloy", "" ], [ "Biswas", "Sanket", "" ], [ "Pal", "Umapada", "" ], [ "Lladós", "Josep", "" ], [ "Bhattacharya", "Saumik", "" ] ]
TITLE: FastTextSpotter: A High-Efficiency Transformer for Multilingual Scene Text Spotting ABSTRACT: The proliferation of scene text in both structured and unstructured environments presents significant challenges in optical character recognition (OCR), necessitating more efficient and robust text spotting solutions. This paper presents FastTextSpotter, a framework that integrates a Swin Transformer visual backbone with a Transformer Encoder-Decoder architecture, enhanced by a novel, faster self-attention unit, SAC2, to improve processing speeds while maintaining accuracy. FastTextSpotter has been validated across multiple datasets, including ICDAR2015 for regular texts and CTW1500 and TotalText for arbitrary-shaped texts, benchmarking against current state-of-the-art models. Our results indicate that FastTextSpotter not only achieves superior accuracy in detecting and recognizing multilingual scene text (English and Vietnamese) but also improves model efficiency, thereby setting new benchmarks in the field. This study underscores the potential of advanced transformer architectures in improving the adaptability and speed of text spotting applications in diverse real-world settings. The dataset, code, and pre-trained models have been released in our Github.
new_dataset
0.953923
2409.04011
Weijie He
Weijie He, Mushui Liu, Yunlong Yu
Hybrid Mask Generation for Infrared Small Target Detection with Single-Point Supervision
11 pages, 9 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-frame infrared small target (SIRST) detection poses a significant challenge due to the requirement to discern minute targets amidst complex infrared background clutter. In this paper, we focus on a weakly-supervised paradigm to obtain high-quality pseudo masks from point-level annotations by integrating a novel learning-free method with a hybrid learning-based method. The learning-free method adheres to a sequential process, progressing from a point annotation to the bounding box that encompasses the target, and subsequently to detailed pseudo masks, while the hybrid method filters out false alarms and retrieves missed detections from the network's predictions to provide a reliable supplement to the learning-free masks. The experimental results show that our learning-free method generates pseudo masks with an average Intersection over Union (IoU) that is 4.3% higher than the second-best learning-free competitor across three datasets, while the hybrid learning-based method further enhances the quality of the pseudo masks, achieving an additional average IoU increase of 3.4%.
[ { "version": "v1", "created": "Fri, 6 Sep 2024 03:34:44 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 08:13:29 GMT" } ]
2025-03-13T00:00:00
[ [ "He", "Weijie", "" ], [ "Liu", "Mushui", "" ], [ "Yu", "Yunlong", "" ] ]
TITLE: Hybrid Mask Generation for Infrared Small Target Detection with Single-Point Supervision ABSTRACT: Single-frame infrared small target (SIRST) detection poses a significant challenge due to the requirement to discern minute targets amidst complex infrared background clutter. In this paper, we focus on a weakly-supervised paradigm to obtain high-quality pseudo masks from point-level annotations by integrating a novel learning-free method with a hybrid learning-based method. The learning-free method adheres to a sequential process, progressing from a point annotation to the bounding box that encompasses the target, and subsequently to detailed pseudo masks, while the hybrid method filters out false alarms and retrieves missed detections from the network's predictions to provide a reliable supplement to the learning-free masks. The experimental results show that our learning-free method generates pseudo masks with an average Intersection over Union (IoU) that is 4.3% higher than the second-best learning-free competitor across three datasets, while the hybrid learning-based method further enhances the quality of the pseudo masks, achieving an additional average IoU increase of 3.4%.
no_new_dataset
0.951097
2409.04824
Mahmoud Jahanshahi
Mahmoud Jahanshahi, David Reid, Adam McDaniel, Audris Mockus
OSS License Identification at Scale: A Comprehensive Dataset Using World of Code
Accepted in 2025 IEEE/ACM 22st International Conference on Mining Software Repositories (MSR)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The proliferation of open source software (OSS) and different types of reuse has made it incredibly difficult to perform an essential legal and compliance task of accurate license identification within the software supply chain. This study presents a reusable and comprehensive dataset of OSS licenses, created using the World of Code (WoC) infrastructure. By scanning all files containing "license" in their file paths, and applying approximate matching via the winnowing algorithm to identify the most similar license from the SPDX list, we found and identified 5.5 million distinct license blobs in OSS projects. The dataset includes a detailed project-to-license (P2L) map with commit timestamps, enabling dynamic analysis of license adoption and changes over time. To verify the accuracy of the dataset, we use stratified sampling and manual review, achieving a final accuracy of 92.08%, with precision of 87.14%, recall of 95.45%, and an F1 score of 91.11%. This dataset is intended to support a range of research and practical tasks, including the detection of license noncompliance, the investigation of license changes, the study of licensing trends, and the development of compliance tools. The dataset is open, providing a valuable resource for developers, researchers, and legal professionals in the OSS community.
[ { "version": "v1", "created": "Sat, 7 Sep 2024 13:34:55 GMT" }, { "version": "v2", "created": "Fri, 6 Dec 2024 15:04:07 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 20:13:22 GMT" } ]
2025-03-13T00:00:00
[ [ "Jahanshahi", "Mahmoud", "" ], [ "Reid", "David", "" ], [ "McDaniel", "Adam", "" ], [ "Mockus", "Audris", "" ] ]
TITLE: OSS License Identification at Scale: A Comprehensive Dataset Using World of Code ABSTRACT: The proliferation of open source software (OSS) and different types of reuse has made it incredibly difficult to perform an essential legal and compliance task of accurate license identification within the software supply chain. This study presents a reusable and comprehensive dataset of OSS licenses, created using the World of Code (WoC) infrastructure. By scanning all files containing "license" in their file paths, and applying approximate matching via the winnowing algorithm to identify the most similar license from the SPDX list, we found and identified 5.5 million distinct license blobs in OSS projects. The dataset includes a detailed project-to-license (P2L) map with commit timestamps, enabling dynamic analysis of license adoption and changes over time. To verify the accuracy of the dataset, we use stratified sampling and manual review, achieving a final accuracy of 92.08%, with precision of 87.14%, recall of 95.45%, and an F1 score of 91.11%. This dataset is intended to support a range of research and practical tasks, including the detection of license noncompliance, the investigation of license changes, the study of licensing trends, and the development of compliance tools. The dataset is open, providing a valuable resource for developers, researchers, and legal professionals in the OSS community.
new_dataset
0.955236
2409.07041
Xinrui Wang
Xinrui Wang, Lanqing Guo, Xiyu Wang, Siyu Huang, Bihan Wen
SoftShadow: Leveraging Soft Masks for Penumbra-Aware Shadow Removal
This paper has been accepted by CVPR 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent advancements in deep learning have yielded promising results for the image shadow removal task. However, most existing methods rely on binary pre-generated shadow masks. The binary nature of such masks could potentially lead to artifacts near the boundary between shadow and non-shadow areas. In view of this, inspired by the physical model of shadow formation, we introduce novel soft shadow masks specifically designed for shadow removal. To achieve such soft masks, we propose a SoftShadow framework by leveraging the prior knowledge of pretrained SAM and integrating physical constraints. Specifically, we jointly tune the SAM and the subsequent shadow removal network using penumbra formation constraint loss, mask reconstruction loss, and shadow removal loss. This framework enables accurate predictions of penumbra (partially shaded) and umbra (fully shaded) areas while simultaneously facilitating end-to-end shadow removal. Through extensive experiments on popular datasets, we found that our SoftShadow framework, which generates soft masks, can better restore boundary artifacts, achieve state-of-the-art performance, and demonstrate superior generalizability.
[ { "version": "v1", "created": "Wed, 11 Sep 2024 06:12:26 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 07:18:15 GMT" } ]
2025-03-13T00:00:00
[ [ "Wang", "Xinrui", "" ], [ "Guo", "Lanqing", "" ], [ "Wang", "Xiyu", "" ], [ "Huang", "Siyu", "" ], [ "Wen", "Bihan", "" ] ]
TITLE: SoftShadow: Leveraging Soft Masks for Penumbra-Aware Shadow Removal ABSTRACT: Recent advancements in deep learning have yielded promising results for the image shadow removal task. However, most existing methods rely on binary pre-generated shadow masks. The binary nature of such masks could potentially lead to artifacts near the boundary between shadow and non-shadow areas. In view of this, inspired by the physical model of shadow formation, we introduce novel soft shadow masks specifically designed for shadow removal. To achieve such soft masks, we propose a SoftShadow framework by leveraging the prior knowledge of pretrained SAM and integrating physical constraints. Specifically, we jointly tune the SAM and the subsequent shadow removal network using penumbra formation constraint loss, mask reconstruction loss, and shadow removal loss. This framework enables accurate predictions of penumbra (partially shaded) and umbra (fully shaded) areas while simultaneously facilitating end-to-end shadow removal. Through extensive experiments on popular datasets, we found that our SoftShadow framework, which generates soft masks, can better restore boundary artifacts, achieve state-of-the-art performance, and demonstrate superior generalizability.
no_new_dataset
0.948917
2409.07989
Amirreza Fateh
Fatemeh Askari, Amirreza Fateh, Mohammad Reza Mohammadi
Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms
null
null
10.1016/j.neunet.2025.107339
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
In the context of few-shot classification, the goal is to train a classifier using a limited number of samples while maintaining satisfactory performance. However, traditional metric-based methods exhibit certain limitations in achieving this objective. These methods typically rely on a single distance value between the query feature and support feature, thereby overlooking the contribution of shallow features. To overcome this challenge, we propose a novel approach in this paper. Our approach involves utilizing a multi-output embedding network that maps samples into distinct feature spaces. The proposed method extracts feature vectors at different stages, enabling the model to capture both global and abstract features. By utilizing these diverse feature spaces, our model enhances its performance. Moreover, employing a self-attention mechanism improves the refinement of features at each stage, leading to even more robust representations and improved overall performance. Furthermore, assigning learnable weights to each stage significantly improved performance and results. We conducted comprehensive evaluations on the MiniImageNet and FC100 datasets, specifically in the 5-way 1-shot and 5-way 5-shot scenarios. Additionally, we performed cross-domain tasks across eight benchmark datasets, achieving high accuracy in the testing domains. These evaluations demonstrate the efficacy of our proposed method in comparison to state-of-the-art approaches. https://github.com/FatemehAskari/MSENet
[ { "version": "v1", "created": "Thu, 12 Sep 2024 12:34:29 GMT" }, { "version": "v2", "created": "Thu, 16 Jan 2025 14:01:58 GMT" } ]
2025-03-13T00:00:00
[ [ "Askari", "Fatemeh", "" ], [ "Fateh", "Amirreza", "" ], [ "Mohammadi", "Mohammad Reza", "" ] ]
TITLE: Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms ABSTRACT: In the context of few-shot classification, the goal is to train a classifier using a limited number of samples while maintaining satisfactory performance. However, traditional metric-based methods exhibit certain limitations in achieving this objective. These methods typically rely on a single distance value between the query feature and support feature, thereby overlooking the contribution of shallow features. To overcome this challenge, we propose a novel approach in this paper. Our approach involves utilizing a multi-output embedding network that maps samples into distinct feature spaces. The proposed method extracts feature vectors at different stages, enabling the model to capture both global and abstract features. By utilizing these diverse feature spaces, our model enhances its performance. Moreover, employing a self-attention mechanism improves the refinement of features at each stage, leading to even more robust representations and improved overall performance. Furthermore, assigning learnable weights to each stage significantly improved performance and results. We conducted comprehensive evaluations on the MiniImageNet and FC100 datasets, specifically in the 5-way 1-shot and 5-way 5-shot scenarios. Additionally, we performed cross-domain tasks across eight benchmark datasets, achieving high accuracy in the testing domains. These evaluations demonstrate the efficacy of our proposed method in comparison to state-of-the-art approaches. https://github.com/FatemehAskari/MSENet
no_new_dataset
0.947527
2409.09479
Yutian Chen
Yuheng Qiu, Yutian Chen, Zihao Zhang, Wenshan Wang, Sebastian Scherer
MAC-VO: Metrics-aware Covariance for Learning-based Stereo Visual Odometry
null
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose MAC-VO, a novel learning-based stereo VO that leverages the learned metrics-aware matching uncertainty for dual purposes: selecting keypoints and weighting the residuals in pose graph optimization. Compared to traditional geometric methods prioritizing texture-rich features like edges, our keypoint selector employs the learned uncertainty to filter out low-quality features based on global inconsistency. In contrast to learning-based algorithms that model a scale-agnostic diagonal weight matrix for covariance, we design a metrics-aware covariance model to capture the spatial error during keypoint registration and the correlations between different axes. Integrating this covariance model into pose graph optimization enhances the robustness and reliability of pose estimation, particularly in challenging environments with varying illumination, feature density, and motion patterns. On public benchmark datasets, MAC-VO outperforms existing VO algorithms and even some SLAM algorithms in challenging environments. The covariance map also provides valuable information about the reliability of the estimated poses, which can benefit decision-making for autonomous systems.
[ { "version": "v1", "created": "Sat, 14 Sep 2024 16:49:42 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 04:51:33 GMT" } ]
2025-03-13T00:00:00
[ [ "Qiu", "Yuheng", "" ], [ "Chen", "Yutian", "" ], [ "Zhang", "Zihao", "" ], [ "Wang", "Wenshan", "" ], [ "Scherer", "Sebastian", "" ] ]
TITLE: MAC-VO: Metrics-aware Covariance for Learning-based Stereo Visual Odometry ABSTRACT: We propose MAC-VO, a novel learning-based stereo VO that leverages the learned metrics-aware matching uncertainty for dual purposes: selecting keypoints and weighting the residuals in pose graph optimization. Compared to traditional geometric methods prioritizing texture-rich features like edges, our keypoint selector employs the learned uncertainty to filter out low-quality features based on global inconsistency. In contrast to learning-based algorithms that model a scale-agnostic diagonal weight matrix for covariance, we design a metrics-aware covariance model to capture the spatial error during keypoint registration and the correlations between different axes. Integrating this covariance model into pose graph optimization enhances the robustness and reliability of pose estimation, particularly in challenging environments with varying illumination, feature density, and motion patterns. On public benchmark datasets, MAC-VO outperforms existing VO algorithms and even some SLAM algorithms in challenging environments. The covariance map also provides valuable information about the reliability of the estimated poses, which can benefit decision-making for autonomous systems.
no_new_dataset
0.954095
2409.11889
Jiaming Zhou
Jiaming Zhou, Shiwan Zhao, Jiabei He, Hui Wang, Wenjia Zeng, Yong Chen, Haoqin Sun, Aobo Kong, Yong Qin
M2R-Whisper: Multi-stage and Multi-scale Retrieval Augmentation for Enhancing Whisper
Accepted by ICASSP 2025, oral
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art models like OpenAI's Whisper exhibit strong performance in multilingual automatic speech recognition (ASR), but they still face challenges in accurately recognizing diverse subdialects. In this paper, we propose M2R-whisper, a novel multi-stage and multi-scale retrieval augmentation approach designed to enhance ASR performance in low-resource settings. Building on the principles of in-context learning (ICL) and retrieval-augmented techniques, our method employs sentence-level ICL in the pre-processing stage to harness contextual information, while integrating token-level k-Nearest Neighbors (kNN) retrieval as a post-processing step to further refine the final output distribution. By synergistically combining sentence-level and token-level retrieval strategies, M2R-whisper effectively mitigates various types of recognition errors. Experiments conducted on Mandarin and subdialect datasets, including AISHELL-1 and KeSpeech, demonstrate substantial improvements in ASR accuracy, all achieved without any parameter updates.
[ { "version": "v1", "created": "Wed, 18 Sep 2024 11:35:55 GMT" }, { "version": "v2", "created": "Tue, 31 Dec 2024 03:04:54 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 05:22:58 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhou", "Jiaming", "" ], [ "Zhao", "Shiwan", "" ], [ "He", "Jiabei", "" ], [ "Wang", "Hui", "" ], [ "Zeng", "Wenjia", "" ], [ "Chen", "Yong", "" ], [ "Sun", "Haoqin", "" ], [ "Kong", "Aobo", "" ], [ "Qin", "Yong", "" ] ]
TITLE: M2R-Whisper: Multi-stage and Multi-scale Retrieval Augmentation for Enhancing Whisper ABSTRACT: State-of-the-art models like OpenAI's Whisper exhibit strong performance in multilingual automatic speech recognition (ASR), but they still face challenges in accurately recognizing diverse subdialects. In this paper, we propose M2R-whisper, a novel multi-stage and multi-scale retrieval augmentation approach designed to enhance ASR performance in low-resource settings. Building on the principles of in-context learning (ICL) and retrieval-augmented techniques, our method employs sentence-level ICL in the pre-processing stage to harness contextual information, while integrating token-level k-Nearest Neighbors (kNN) retrieval as a post-processing step to further refine the final output distribution. By synergistically combining sentence-level and token-level retrieval strategies, M2R-whisper effectively mitigates various types of recognition errors. Experiments conducted on Mandarin and subdialect datasets, including AISHELL-1 and KeSpeech, demonstrate substantial improvements in ASR accuracy, all achieved without any parameter updates.
no_new_dataset
0.942665
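The M2R-Whisper abstract above describes token-level kNN retrieval as a post-processing step that refines the output distribution. A minimal sketch of that general idea follows, assuming a small datastore of cached hidden states paired with their ground-truth tokens; the function `knn_interpolate`, the interpolation weight `lam`, and the toy vocabulary are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch of token-level kNN post-processing: the model's next-token distribution
# is interpolated with a distribution built from the labels of the k nearest
# cached hidden states.

def knn_interpolate(p_model, query, keys, values, vocab_size, k=4, lam=0.3, temp=1.0):
    """Blend model probabilities with a kNN distribution over cached (key, token) pairs."""
    dists = np.linalg.norm(keys - query, axis=1)   # L2 distance to datastore keys
    nn = np.argsort(dists)[:k]                     # indices of the k nearest neighbors
    weights = np.exp(-dists[nn] / temp)            # closer neighbors get larger weight
    weights /= weights.sum()

    p_knn = np.zeros(vocab_size)
    for w, idx in zip(weights, nn):
        p_knn[values[idx]] += w                    # scatter weight onto neighbor tokens

    return lam * p_knn + (1.0 - lam) * p_model     # final output distribution

# Toy example with a 5-token vocabulary and 6 cached states of dimension 3.
rng = np.random.default_rng(0)
keys = rng.normal(size=(6, 3))
values = np.array([0, 1, 1, 2, 3, 4])              # token id stored with each key
p_model = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
print(knn_interpolate(p_model, keys[1] + 0.01, keys, values, vocab_size=5))
```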
2409.14572
Hongchen Wang
Hongchen Wang, Kangming Li, Scott Ramsay, Yao Fehlis, Edward Kim, and Jason Hattrick-Simpers
Evaluating the Performance and Robustness of LLMs in Materials Science Q&A and Property Predictions
null
null
null
null
cs.CL cond-mat.mtrl-sci cs.AI cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
Large Language Models (LLMs) have the potential to revolutionize scientific research, yet their robustness and reliability in domain-specific applications remain insufficiently explored. In this study, we evaluate the performance and robustness of LLMs for materials science, focusing on domain-specific question answering and materials property prediction across diverse real-world and adversarial conditions. Three distinct datasets are used in this study: 1) a set of multiple-choice questions from undergraduate-level materials science courses, 2) a dataset including various steel compositions and yield strengths, and 3) a band gap dataset, containing textual descriptions of material crystal structures and band gap values. The performance of LLMs is assessed using various prompting strategies, including zero-shot chain-of-thought, expert prompting, and few-shot in-context learning. The robustness of these models is tested against various forms of 'noise', ranging from realistic disturbances to intentionally adversarial manipulations, to evaluate their resilience and reliability under real-world conditions. Additionally, the study showcases unique phenomena of LLMs during predictive tasks, such as mode collapse behavior when the proximity of prompt examples is altered and performance recovery from train/test mismatch. The findings aim to provide informed skepticism for the broad use of LLMs in materials science and to inspire advancements that enhance their robustness and reliability for practical applications.
[ { "version": "v1", "created": "Sun, 22 Sep 2024 19:31:16 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 22:03:26 GMT" } ]
2025-03-13T00:00:00
[ [ "Wang", "Hongchen", "" ], [ "Li", "Kangming", "" ], [ "Ramsay", "Scott", "" ], [ "Fehlis", "Yao", "" ], [ "Kim", "Edward", "" ], [ "Hattrick-Simpers", "Jason", "" ] ]
TITLE: Evaluating the Performance and Robustness of LLMs in Materials Science Q&A and Property Predictions ABSTRACT: Large Language Models (LLMs) have the potential to revolutionize scientific research, yet their robustness and reliability in domain-specific applications remain insufficiently explored. In this study, we evaluate the performance and robustness of LLMs for materials science, focusing on domain-specific question answering and materials property prediction across diverse real-world and adversarial conditions. Three distinct datasets are used in this study: 1) a set of multiple-choice questions from undergraduate-level materials science courses, 2) a dataset including various steel compositions and yield strengths, and 3) a band gap dataset, containing textual descriptions of material crystal structures and band gap values. The performance of LLMs is assessed using various prompting strategies, including zero-shot chain-of-thought, expert prompting, and few-shot in-context learning. The robustness of these models is tested against various forms of 'noise', ranging from realistic disturbances to intentionally adversarial manipulations, to evaluate their resilience and reliability under real-world conditions. Additionally, the study showcases unique phenomena of LLMs during predictive tasks, such as mode collapse behavior when the proximity of prompt examples is altered and performance recovery from train/test mismatch. The findings aim to provide informed skepticism for the broad use of LLMs in materials science and to inspire advancements that enhance their robustness and reliability for practical applications.
new_dataset
0.963369
2409.15949
Adithi Satish
Danqing Chen, Adithi Satish, Rasul Khanbayov, Carolin M. Schuster and Georg Groh
Tuning Into Bias: A Computational Study of Gender Bias in Song Lyrics
Accepted to be presented at the 9th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, co-located with NAACL 2025; also accepted and presented as working paper at the SBP-BRiMS 2024 (see https://sbp-brims.org/2024/papers/working-papers/Chen_SBP-BRiMS2024_Final_31.pdf )
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The application of text mining methods is becoming increasingly prevalent, particularly within Humanities and Computational Social Sciences, as well as in a broader range of disciplines. This paper presents an analysis of gender bias in English song lyrics using topic modeling and bias measurement techniques. Leveraging BERTopic, we cluster a dataset of 537,553 English songs into distinct topics and analyze their temporal evolution. Our results reveal a significant thematic shift in song lyrics over time, transitioning from romantic themes to a heightened focus on the sexualization of women. Additionally, we observe a substantial prevalence of profanity and misogynistic content across various topics, with a particularly high concentration in the largest thematic cluster. To further analyse gender bias across topics and genres in a quantitative way, we employ the Single Category Word Embedding Association Test (SC-WEAT) to calculate bias scores for word embeddings trained on the most prominent topics as well as individual genres. The results indicate a consistent male bias in words associated with intelligence and strength, while appearance and weakness words show a female bias. Further analysis highlights variations in these biases across topics, illustrating the interplay between thematic content and gender stereotypes in song lyrics.
[ { "version": "v1", "created": "Tue, 24 Sep 2024 10:24:53 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 20:54:07 GMT" } ]
2025-03-13T00:00:00
[ [ "Chen", "Danqing", "" ], [ "Satish", "Adithi", "" ], [ "Khanbayov", "Rasul", "" ], [ "Schuster", "Carolin M.", "" ], [ "Groh", "Georg", "" ] ]
TITLE: Tuning Into Bias: A Computational Study of Gender Bias in Song Lyrics ABSTRACT: The application of text mining methods is becoming increasingly prevalent, particularly within Humanities and Computational Social Sciences, as well as in a broader range of disciplines. This paper presents an analysis of gender bias in English song lyrics using topic modeling and bias measurement techniques. Leveraging BERTopic, we cluster a dataset of 537,553 English songs into distinct topics and analyze their temporal evolution. Our results reveal a significant thematic shift in song lyrics over time, transitioning from romantic themes to a heightened focus on the sexualization of women. Additionally, we observe a substantial prevalence of profanity and misogynistic content across various topics, with a particularly high concentration in the largest thematic cluster. To further analyse gender bias across topics and genres in a quantitative way, we employ the Single Category Word Embedding Association Test (SC-WEAT) to calculate bias scores for word embeddings trained on the most prominent topics as well as individual genres. The results indicate a consistent male bias in words associated with intelligence and strength, while appearance and weakness words show a female bias. Further analysis highlights variations in these biases across topics, illustrating the interplay between thematic content and gender stereotypes in song lyrics.
no_new_dataset
0.945197
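The lyrics-bias abstract above relies on the Single Category Word Embedding Association Test (SC-WEAT) to score gender bias in word embeddings. Below is a hedged sketch of a single-category WEAT-style effect size on toy vectors; the helper `sc_weat`, the attribute sets, and all embeddings are made-up placeholders, and the paper's exact normalization may differ.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A minimal sketch of a single-category WEAT-style effect size on toy vectors;
# the study computes this on embeddings trained per topic and per genre.
def sc_weat(target_vec, attr_a_vecs, attr_b_vecs):
    """Effect size: mean cosine similarity to set A minus set B, over the pooled std."""
    sims_a = np.array([cosine(target_vec, a) for a in attr_a_vecs])
    sims_b = np.array([cosine(target_vec, b) for b in attr_b_vecs])
    pooled = np.concatenate([sims_a, sims_b])
    return (sims_a.mean() - sims_b.mean()) / pooled.std(ddof=1)

# Toy 4-dimensional embeddings (hypothetical values, for illustration only).
rng = np.random.default_rng(1)
male_attrs = rng.normal(size=(5, 4))
female_attrs = rng.normal(size=(5, 4))
target_word = rng.normal(size=4)                 # e.g., a word from the "strength" category
print(sc_weat(target_word, male_attrs, female_attrs))  # >0 leans male, <0 leans female
```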
2410.01425
Yingdong Hu
Yingdong Hu, Zhening Liu, Jiawei Shao, Zehong Lin, Jun Zhang
EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Multi-view Camera Settings
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feed-forward based 3D Gaussian Splatting methods have demonstrated exceptional capability in real-time novel view synthesis for human models. However, current approaches are confined to either dense viewpoint configurations or restricted image resolutions. These limitations hinder their flexibility in free-viewpoint rendering across a wide range of camera view angle discrepancies, and also restrict their ability to recover fine-grained human details in real time using commonly available GPUs. To address these challenges, we propose a novel pipeline named EVA-Gaussian for 3D human novel view synthesis across diverse multi-view camera settings. Specifically, we first design an Efficient Cross-View Attention (EVA) module to effectively fuse cross-view information under high-resolution inputs and sparse view settings, while minimizing temporal and computational overhead. Additionally, we introduce a feature refinement mechanism to predict the attributes of the 3D Gaussians and assign a feature value to each Gaussian, enabling the correction of artifacts caused by geometric inaccuracies in position estimation and enhancing overall visual fidelity. Experimental results on the THuman2.0 and THumansit datasets showcase the superiority of EVA-Gaussian in rendering quality across diverse camera settings. Project page: https://zhenliuzju.github.io/huyingdong/EVA-Gaussian.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 11:23:08 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 12:14:39 GMT" } ]
2025-03-13T00:00:00
[ [ "Hu", "Yingdong", "" ], [ "Liu", "Zhening", "" ], [ "Shao", "Jiawei", "" ], [ "Lin", "Zehong", "" ], [ "Zhang", "Jun", "" ] ]
TITLE: EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Multi-view Camera Settings ABSTRACT: Feed-forward based 3D Gaussian Splatting methods have demonstrated exceptional capability in real-time novel view synthesis for human models. However, current approaches are confined to either dense viewpoint configurations or restricted image resolutions. These limitations hinder their flexibility in free-viewpoint rendering across a wide range of camera view angle discrepancies, and also restrict their ability to recover fine-grained human details in real time using commonly available GPUs. To address these challenges, we propose a novel pipeline named EVA-Gaussian for 3D human novel view synthesis across diverse multi-view camera settings. Specifically, we first design an Efficient Cross-View Attention (EVA) module to effectively fuse cross-view information under high-resolution inputs and sparse view settings, while minimizing temporal and computational overhead. Additionally, we introduce a feature refinement mechanism to predict the attributes of the 3D Gaussians and assign a feature value to each Gaussian, enabling the correction of artifacts caused by geometric inaccuracies in position estimation and enhancing overall visual fidelity. Experimental results on the THuman2.0 and THumansit datasets showcase the superiority of EVA-Gaussian in rendering quality across diverse camera settings. Project page: https://zhenliuzju.github.io/huyingdong/EVA-Gaussian.
no_new_dataset
0.951997
2410.02056
Sreyan Ghosh
Sreyan Ghosh and Sonal Kumar and Zhifeng Kong and Rafael Valle and Bryan Catanzaro and Dinesh Manocha
Synthio: Augmenting Small-Scale Audio Classification Datasets with Synthetic Data
Accepted at ICLR 2025. Code and Checkpoints available here: https://github.com/Sreyan88/Synthio
null
null
null
eess.AS cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
We present Synthio, a novel approach for augmenting small-scale audio classification datasets with synthetic data. Our goal is to improve audio classification accuracy with limited labeled data. Traditional data augmentation techniques, which apply artificial transformations (e.g., adding random noise or masking segments), struggle to create data that captures the true diversity present in real-world audios. To address this shortcoming, we propose to augment the dataset with synthetic audio generated from text-to-audio (T2A) diffusion models. However, synthesizing effective augmentations is challenging because not only should the generated data be acoustically consistent with the underlying small-scale dataset, but they should also have sufficient compositional diversity. To overcome the first challenge, we align the generations of the T2A model with the small-scale dataset using preference optimization. This ensures that the acoustic characteristics of the generated data remain consistent with the small-scale dataset. To address the second challenge, we propose a novel caption generation technique that leverages the reasoning capabilities of Large Language Models to (1) generate diverse and meaningful audio captions and (2) iteratively refine their quality. The generated captions are then used to prompt the aligned T2A model. We extensively evaluate Synthio on ten datasets and four simulated limited-data settings. Results indicate our method consistently outperforms all baselines by 0.1%-39% using a T2A model trained only on weakly-captioned AudioSet.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 22:05:36 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 00:25:08 GMT" } ]
2025-03-13T00:00:00
[ [ "Ghosh", "Sreyan", "" ], [ "Kumar", "Sonal", "" ], [ "Kong", "Zhifeng", "" ], [ "Valle", "Rafael", "" ], [ "Catanzaro", "Bryan", "" ], [ "Manocha", "Dinesh", "" ] ]
TITLE: Synthio: Augmenting Small-Scale Audio Classification Datasets with Synthetic Data ABSTRACT: We present Synthio, a novel approach for augmenting small-scale audio classification datasets with synthetic data. Our goal is to improve audio classification accuracy with limited labeled data. Traditional data augmentation techniques, which apply artificial transformations (e.g., adding random noise or masking segments), struggle to create data that captures the true diversity present in real-world audios. To address this shortcoming, we propose to augment the dataset with synthetic audio generated from text-to-audio (T2A) diffusion models. However, synthesizing effective augmentations is challenging because not only should the generated data be acoustically consistent with the underlying small-scale dataset, but they should also have sufficient compositional diversity. To overcome the first challenge, we align the generations of the T2A model with the small-scale dataset using preference optimization. This ensures that the acoustic characteristics of the generated data remain consistent with the small-scale dataset. To address the second challenge, we propose a novel caption generation technique that leverages the reasoning capabilities of Large Language Models to (1) generate diverse and meaningful audio captions and (2) iteratively refine their quality. The generated captions are then used to prompt the aligned T2A model. We extensively evaluate Synthio on ten datasets and four simulated limited-data settings. Results indicate our method consistently outperforms all baselines by 0.1%-39% using a T2A model trained only on weakly-captioned AudioSet.
no_new_dataset
0.951006
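The Synthio abstract above aligns a text-to-audio model with the small-scale dataset via preference optimization. As a loose illustration, the snippet below computes one common preference-optimization objective (a DPO-style loss) from scalar log-probabilities; the paper may use a different variant, and the function `dpo_loss`, `beta`, and the numbers are assumptions for demonstration only.

```python
import numpy as np

# Sketch of one common preference-optimization objective (DPO-style). Inputs are
# log-probabilities of a preferred and a rejected generation under the tuned and
# reference models; the actual alignment objective used in the paper may differ.

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Negative log-sigmoid of the scaled preference margin."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return float(-np.log(1.0 / (1.0 + np.exp(-margin))))

# Toy numbers: the tuned model prefers the in-distribution ("preferred") audio.
print(dpo_loss(logp_w=-10.2, logp_l=-12.5, ref_logp_w=-11.0, ref_logp_l=-11.8))
```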
2410.05440
Zihao Zhou
Zihao Zhou, Rose Yu
Can LLMs Understand Time Series Anomalies?
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have gained popularity in time series forecasting, but their potential for anomaly detection remains largely unexplored. Our study investigates whether LLMs can understand and detect anomalies in time series data, focusing on zero-shot and few-shot scenarios. Inspired by conjectures about LLMs' behavior from time series forecasting research, we formulate key hypotheses about LLMs' capabilities in time series anomaly detection. We design and conduct principled experiments to test each of these hypotheses. Our investigation reveals several surprising findings about LLMs for time series: (1) LLMs understand time series better as images than as text; (2) LLMs do not demonstrate enhanced performance when prompted to engage in explicit reasoning about time series analysis; (3) contrary to common beliefs, LLMs' understanding of time series does not stem from their repetition biases or arithmetic abilities; and (4) LLMs' behaviors and performance in time series analysis vary significantly across different models. This study provides the first comprehensive analysis of contemporary LLM capabilities in time series anomaly detection. Our results suggest that while LLMs can understand trivial time series anomalies, we have no evidence that they can understand more subtle real-world anomalies. Many common conjectures based on their reasoning capabilities do not hold. All synthetic dataset generators, final prompts, and evaluation scripts have been made available at https://github.com/rose-stl-lab/anomllm.
[ { "version": "v1", "created": "Mon, 7 Oct 2024 19:16:02 GMT" }, { "version": "v2", "created": "Mon, 14 Oct 2024 23:32:50 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 18:04:52 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhou", "Zihao", "" ], [ "Yu", "Rose", "" ] ]
TITLE: Can LLMs Understand Time Series Anomalies? ABSTRACT: Large Language Models (LLMs) have gained popularity in time series forecasting, but their potential for anomaly detection remains largely unexplored. Our study investigates whether LLMs can understand and detect anomalies in time series data, focusing on zero-shot and few-shot scenarios. Inspired by conjectures about LLMs' behavior from time series forecasting research, we formulate key hypotheses about LLMs' capabilities in time series anomaly detection. We design and conduct principled experiments to test each of these hypotheses. Our investigation reveals several surprising findings about LLMs for time series: (1) LLMs understand time series better as images than as text; (2) LLMs do not demonstrate enhanced performance when prompted to engage in explicit reasoning about time series analysis; (3) contrary to common beliefs, LLMs' understanding of time series does not stem from their repetition biases or arithmetic abilities; and (4) LLMs' behaviors and performance in time series analysis vary significantly across different models. This study provides the first comprehensive analysis of contemporary LLM capabilities in time series anomaly detection. Our results suggest that while LLMs can understand trivial time series anomalies, we have no evidence that they can understand more subtle real-world anomalies. Many common conjectures based on their reasoning capabilities do not hold. All synthetic dataset generators, final prompts, and evaluation scripts have been made available at https://github.com/rose-stl-lab/anomllm.
no_new_dataset
0.75183
2410.05628
Jeongeun Park
Jeongeun Park, Sungjoon Choi, Sangdoo Yun
A Unified Framework for Motion Reasoning and Generation in Human Interaction
https://vim-motion-language.github.io/
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent advancements in large language models (LLMs) have significantly improved their ability to generate natural and contextually relevant text, enabling more human-like AI interactions. However, generating and understanding interactive human-like motion, where multiple individuals engage in coordinated movements, remains challenging due to the complexity of modeling these interactions. Additionally, a unified and versatile model is needed to handle diverse interactive scenarios, such as chat systems that dynamically adapt to user instructions and assigned roles. To address these challenges, we introduce VIM, the Versatile Interactive Motion-language model, which integrates both language and motion modalities to effectively understand, generate, and control interactive motions in multi-turn conversational contexts. Unlike previous studies that primarily focus on uni-directional tasks such as text-to-motion or motion-to-text, VIM employs a unified architecture capable of simultaneously understanding and generating both motion and text modalities. Given the absence of an appropriate dataset to support this task, we introduce Inter-MT2, a large-scale instruction-tuning dataset containing 82.7K multi-turn interactive motion instructions, covering 153K interactive motion samples. Inter-MT2 spans diverse instructional scenarios, including motion editing, question answering, and story generation, leveraging off-the-shelf large language models and motion diffusion models to construct a broad set of interactive motion instructions. We extensively evaluate the versatility of VIM across multiple interactive motion-related tasks, including motion-to-text, text-to-motion, reaction generation, motion editing, and reasoning about motion sequences.
[ { "version": "v1", "created": "Tue, 8 Oct 2024 02:23:53 GMT" }, { "version": "v2", "created": "Mon, 14 Oct 2024 11:22:39 GMT" }, { "version": "v3", "created": "Thu, 24 Oct 2024 12:47:56 GMT" }, { "version": "v4", "created": "Tue, 11 Mar 2025 15:18:47 GMT" }, { "version": "v5", "created": "Wed, 12 Mar 2025 05:54:44 GMT" } ]
2025-03-13T00:00:00
[ [ "Park", "Jeongeun", "" ], [ "Choi", "Sungjoon", "" ], [ "Yun", "Sangdoo", "" ] ]
TITLE: A Unified Framework for Motion Reasoning and Generation in Human Interaction ABSTRACT: Recent advancements in large language models (LLMs) have significantly improved their ability to generate natural and contextually relevant text, enabling more human-like AI interactions. However, generating and understanding interactive human-like motion, where multiple individuals engage in coordinated movements, remains challenging due to the complexity of modeling these interactions. Additionally, a unified and versatile model is needed to handle diverse interactive scenarios, such as chat systems that dynamically adapt to user instructions and assigned roles. To address these challenges, we introduce VIM, the Versatile Interactive Motion-language model, which integrates both language and motion modalities to effectively understand, generate, and control interactive motions in multi-turn conversational contexts. Unlike previous studies that primarily focus on uni-directional tasks such as text-to-motion or motion-to-text, VIM employs a unified architecture capable of simultaneously understanding and generating both motion and text modalities. Given the absence of an appropriate dataset to support this task, we introduce Inter-MT2, a large-scale instruction-tuning dataset containing 82.7K multi-turn interactive motion instructions, covering 153K interactive motion samples. Inter-MT2 spans diverse instructional scenarios, including motion editing, question answering, and story generation, leveraging off-the-shelf large language models and motion diffusion models to construct a broad set of interactive motion instructions. We extensively evaluate the versatility of VIM across multiple interactive motion-related tasks, including motion-to-text, text-to-motion, reaction generation, motion editing, and reasoning about motion sequences.
new_dataset
0.961965
2410.08917
Till Raphael Saenger
Till Raphael Saenger, Musashi Hinck, Justin Grimmer and Brandon M. Stewart
AutoPersuade: A Framework for Evaluating and Explaining Persuasive Arguments
Published in Proceedings of EMNLP 2024. The official version is available in the ACL Anthology at https://aclanthology.org/2024.emnlp-main.913/
null
10.18653/v1/2024.emnlp-main.913
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
We introduce AutoPersuade, a three-part framework for constructing persuasive messages. First, we curate a large dataset of arguments with human evaluations. Next, we develop a novel topic model to identify argument features that influence persuasiveness. Finally, we use this model to predict the effectiveness of new arguments and assess the causal impact of different components to provide explanations. We validate AutoPersuade through an experimental study on arguments for veganism, demonstrating its effectiveness with human studies and out-of-sample predictions.
[ { "version": "v1", "created": "Fri, 11 Oct 2024 15:46:05 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 19:56:48 GMT" } ]
2025-03-13T00:00:00
[ [ "Saenger", "Till Raphael", "" ], [ "Hinck", "Musashi", "" ], [ "Grimmer", "Justin", "" ], [ "Stewart", "Brandon M.", "" ] ]
TITLE: AutoPersuade: A Framework for Evaluating and Explaining Persuasive Arguments ABSTRACT: We introduce AutoPersuade, a three-part framework for constructing persuasive messages. First, we curate a large dataset of arguments with human evaluations. Next, we develop a novel topic model to identify argument features that influence persuasiveness. Finally, we use this model to predict the effectiveness of new arguments and assess the causal impact of different components to provide explanations. We validate AutoPersuade through an experimental study on arguments for veganism, demonstrating its effectiveness with human studies and out-of-sample predictions.
new_dataset
0.73431
2410.10182
Javier Marín
Javier Marín
Hamiltonian Neural Networks for Robust Out-of-Time Credit Scoring
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper presents a novel credit scoring approach using neural networks to address class imbalance and out-of-time prediction challenges. We develop a specific optimizer and loss function inspired by Hamiltonian mechanics that better captures credit risk dynamics. Testing on the Freddie Mac Single-Family Loan-Level Dataset shows our model achieves superior discriminative power (AUC) in out-of-time scenarios compared to conventional methods. The approach has consistent performance between in-sample and future test sets, maintaining reliability across time periods. This interdisciplinary method spans physical systems theory and financial risk management, offering practical advantages for long-term model stability.
[ { "version": "v1", "created": "Mon, 14 Oct 2024 06:08:26 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 06:03:20 GMT" } ]
2025-03-13T00:00:00
[ [ "Marín", "Javier", "" ] ]
TITLE: Hamiltonian Neural Networks for Robust Out-of-Time Credit Scoring ABSTRACT: This paper presents a novel credit scoring approach using neural networks to address class imbalance and out-of-time prediction challenges. We develop a specific optimizer and loss function inspired by Hamiltonian mechanics that better captures credit risk dynamics. Testing on the Freddie Mac Single-Family Loan-Level Dataset shows our model achieves superior discriminative power (AUC) in out-of-time scenarios compared to conventional methods. The approach has consistent performance between in-sample and future test sets, maintaining reliability across time periods. This interdisciplinary method spans physical systems theory and financial risk management, offering practical advantages for long-term model stability.
no_new_dataset
0.950319
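The credit-scoring abstract above mentions an optimizer and loss function inspired by Hamiltonian mechanics without giving details. The sketch below conveys only the flavor of such an update, a damped leapfrog step on a toy quadratic objective; `leapfrog_step`, the friction term, and the toy loss are hypothetical and should not be read as the paper's actual method.

```python
import numpy as np

# Hypothetical sketch of a Hamiltonian-inspired update (not the paper's exact
# optimizer): parameters play the role of position q, an auxiliary momentum p is
# carried along, and a leapfrog step follows the gradient of a loss "potential".

def leapfrog_step(q, p, grad_fn, step=0.05, friction=0.1):
    """One damped leapfrog update of (position, momentum)."""
    p = p - 0.5 * step * grad_fn(q)        # half-step momentum update
    q = q + step * p                       # full-step position update
    p = p - 0.5 * step * grad_fn(q)        # second half-step momentum update
    return q, (1.0 - friction) * p         # damping so the trajectory settles

# Toy quadratic "loss landscape" standing in for a credit-scoring objective.
grad = lambda q: 2.0 * (q - np.array([1.0, -2.0]))
q, p = np.zeros(2), np.zeros(2)
for _ in range(200):
    q, p = leapfrog_step(q, p, grad)
print(q)   # converges near the minimum at (1, -2)
```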
2410.10782
Eduardo R. Corral-Soto
Eduardo R. Corral-Soto, Yang Liu, Tongtong Cao, Yuan Ren, Liu Bingbing
3DArticCyclists: Generating Synthetic Articulated 8D Pose-Controllable Cyclist Data for Computer Vision Applications
null
null
null
null
cs.CV cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Autonomous Driving (AD) Perception, cyclists are considered safety-critical scene objects. Commonly used publicly-available AD datasets typically contain large amounts of car and vehicle object instances but a low number of cyclist instances, usually with limited appearance and pose diversity. This cyclist training data scarcity problem not only limits the generalization of deep-learning perception models for cyclist semantic segmentation, pose estimation, and cyclist crossing intention prediction, but also limits research on new cyclist-related tasks such as fine-grained cyclist pose estimation and spatio-temporal analysis under complex interactions between humans and articulated objects. To address this data scarcity problem, in this paper we propose a framework to generate synthetic dynamic 3D cyclist data assets that can be used to generate training data for different tasks. In our framework, we designed a methodology for creating a new part-based multi-view articulated synthetic 3D bicycle dataset that we call 3DArticBikes, which we use to train a 3D Gaussian Splatting (3DGS)-based reconstruction and image rendering method. We then propose a parametric bicycle 3DGS composition model to assemble 8-DoF pose-controllable 3D bicycles. Finally, using dynamic information from cyclist videos, we build a complete synthetic dynamic 3D cyclist (rider pedaling a bicycle) by re-posing a selectable synthetic 3D person, while automatically placing the rider onto one of our new articulated 3D bicycles using a proposed 3D Keypoint optimization-based Inverse Kinematics pose refinement. We present both qualitative and quantitative results, where we compare our generated cyclists against those from a recent stable diffusion-based method.
[ { "version": "v1", "created": "Mon, 14 Oct 2024 17:50:47 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 01:15:52 GMT" } ]
2025-03-13T00:00:00
[ [ "Corral-Soto", "Eduardo R.", "" ], [ "Liu", "Yang", "" ], [ "Cao", "Tongtong", "" ], [ "Ren", "Yuan", "" ], [ "Bingbing", "Liu", "" ] ]
TITLE: 3DArticCyclists: Generating Synthetic Articulated 8D Pose-Controllable Cyclist Data for Computer Vision Applications ABSTRACT: In Autonomous Driving (AD) Perception, cyclists are considered safety-critical scene objects. Commonly used publicly-available AD datasets typically contain large amounts of car and vehicle object instances but a low number of cyclist instances, usually with limited appearance and pose diversity. This cyclist training data scarcity problem not only limits the generalization of deep-learning perception models for cyclist semantic segmentation, pose estimation, and cyclist crossing intention prediction, but also limits research on new cyclist-related tasks such as fine-grained cyclist pose estimation and spatio-temporal analysis under complex interactions between humans and articulated objects. To address this data scarcity problem, in this paper we propose a framework to generate synthetic dynamic 3D cyclist data assets that can be used to generate training data for different tasks. In our framework, we designed a methodology for creating a new part-based multi-view articulated synthetic 3D bicycle dataset that we call 3DArticBikes, which we use to train a 3D Gaussian Splatting (3DGS)-based reconstruction and image rendering method. We then propose a parametric bicycle 3DGS composition model to assemble 8-DoF pose-controllable 3D bicycles. Finally, using dynamic information from cyclist videos, we build a complete synthetic dynamic 3D cyclist (rider pedaling a bicycle) by re-posing a selectable synthetic 3D person, while automatically placing the rider onto one of our new articulated 3D bicycles using a proposed 3D Keypoint optimization-based Inverse Kinematics pose refinement. We present both qualitative and quantitative results, where we compare our generated cyclists against those from a recent stable diffusion-based method.
new_dataset
0.963506
2410.12459
Artem Moskalev
Mehdi Yazdani-Jahromi and Mangal Prakash and Tommaso Mansi and Artem Moskalev and Rui Liao
HELM: Hierarchical Encoding for mRNA Language Modeling
null
null
null
null
cs.LG cs.CE
http://creativecommons.org/licenses/by/4.0/
Messenger RNA (mRNA) plays a crucial role in protein synthesis, with its codon structure directly impacting biological properties. While Language Models (LMs) have shown promise in analyzing biological sequences, existing approaches fail to account for the hierarchical nature of mRNA's codon structure. We introduce Hierarchical Encoding for mRNA Language Modeling (HELM), a novel pre-training strategy that incorporates codon-level hierarchical structure into language model training. HELM modulates the loss function based on codon synonymity, aligning the model's learning process with the biological reality of mRNA sequences. We evaluate HELM on diverse mRNA datasets and tasks, demonstrating that HELM outperforms standard language model pre-training as well as existing foundation model baselines on seven diverse downstream property prediction tasks and an antibody region annotation task, on average by around 8%. Additionally, HELM enhances the generative capabilities of the language model, producing diverse mRNA sequences that better align with the underlying true data distribution compared to non-hierarchical baselines.
[ { "version": "v1", "created": "Wed, 16 Oct 2024 11:16:47 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 10:51:14 GMT" } ]
2025-03-13T00:00:00
[ [ "Yazdani-Jahromi", "Mehdi", "" ], [ "Prakash", "Mangal", "" ], [ "Mansi", "Tommaso", "" ], [ "Moskalev", "Artem", "" ], [ "Liao", "Rui", "" ] ]
TITLE: HELM: Hierarchical Encoding for mRNA Language Modeling ABSTRACT: Messenger RNA (mRNA) plays a crucial role in protein synthesis, with its codon structure directly impacting biological properties. While Language Models (LMs) have shown promise in analyzing biological sequences, existing approaches fail to account for the hierarchical nature of mRNA's codon structure. We introduce Hierarchical Encoding for mRNA Language Modeling (HELM), a novel pre-training strategy that incorporates codon-level hierarchical structure into language model training. HELM modulates the loss function based on codon synonymity, aligning the model's learning process with the biological reality of mRNA sequences. We evaluate HELM on diverse mRNA datasets and tasks, demonstrating that HELM outperforms standard language model pre-training as well as existing foundation model baselines on seven diverse downstream property prediction tasks and an antibody region annotation task, on average by around 8%. Additionally, HELM enhances the generative capabilities of the language model, producing diverse mRNA sequences that better align with the underlying true data distribution compared to non-hierarchical baselines.
no_new_dataset
0.952131
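The HELM abstract above states that the loss is modulated by codon synonymity. One way such a modulation could look is sketched below: the cross-entropy for a codon token is discounted by the probability mass the model places on synonymous codons. The function `synonymity_weighted_loss`, the `synonym_discount` factor, and the toy codon-to-amino-acid map are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch, assuming a token vocabulary of codons: predicting a synonymous
# codon (same amino acid) is penalized less than predicting a non-synonymous one.

def synonymity_weighted_loss(logits, targets, codon_to_aa, synonym_discount=0.5):
    probs = F.softmax(logits, dim=-1)                         # (batch, vocab)
    ce = F.cross_entropy(logits, targets, reduction="none")   # standard token loss

    # Probability mass the model placed on codons encoding the same amino acid.
    target_aa = codon_to_aa[targets]                          # (batch,)
    same_aa = (codon_to_aa.unsqueeze(0) == target_aa.unsqueeze(1)).float()
    syn_mass = (probs * same_aa).sum(dim=-1)

    # Discount the loss in proportion to the synonymous mass.
    return (ce * (1.0 - synonym_discount * syn_mass)).mean()

# Toy setup: 6 codons mapping onto 3 amino acids.
codon_to_aa = torch.tensor([0, 0, 1, 1, 2, 2])
logits = torch.randn(4, 6)
targets = torch.tensor([0, 2, 4, 1])
print(synonymity_weighted_loss(logits, targets, codon_to_aa))
```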
2410.14634
Sandeep Nagar Mr.
Sandeep Nagar, Girish Varma
Parallel Backpropagation for Inverse of a Convolution with Application to Normalizing Flows
28th International Conference on Artificial Intelligence and Statistics (AISTATS) 2025
null
null
null
cs.CV cs.LG cs.MM math.PR
http://creativecommons.org/licenses/by/4.0/
The inverse of an invertible convolution is an important operation that comes up in Normalizing Flows, Image Deblurring, etc. The naive algorithm for backpropagation of this operation using Gaussian elimination has running time $O(n^3)$ where $n$ is the number of pixels in the image. We give a fast parallel backpropagation algorithm with running time $O(\sqrt{n})$ for a square image and provide a GPU implementation of the same. Inverses of convolutions are usually used in Normalizing Flows in the sampling pass, making sampling slow. We propose to use the inverse of the convolution in the forward (image to latent vector) pass of the Normalizing Flow. Since the sampling pass is the inverse of the forward pass, it will use convolutions only, resulting in efficient sampling times. We use our parallel backpropagation algorithm to optimize the inverse of the convolution layer, resulting in fast training times. We implement this approach in various Normalizing Flow backbones, resulting in our Inverse-Flow models. We benchmark Inverse-Flow on standard datasets and show significantly improved sampling times with similar bits per dimension compared to previous models.
[ { "version": "v1", "created": "Fri, 18 Oct 2024 17:35:33 GMT" }, { "version": "v2", "created": "Sat, 22 Feb 2025 20:58:09 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 06:28:50 GMT" } ]
2025-03-13T00:00:00
[ [ "Nagar", "Sandeep", "" ], [ "Varma", "Girish", "" ] ]
TITLE: Parallel Backpropagation for Inverse of a Convolution with Application to Normalizing Flows ABSTRACT: The inverse of an invertible convolution is an important operation that comes up in Normalizing Flows, Image Deblurring, etc. The naive algorithm for backpropagation of this operation using Gaussian elimination has running time $O(n^3)$ where $n$ is the number of pixels in the image. We give a fast parallel backpropagation algorithm with running time $O(\sqrt{n})$ for a square image and provide a GPU implementation of the same. Inverses of convolutions are usually used in Normalizing Flows in the sampling pass, making sampling slow. We propose to use the inverse of the convolution in the forward (image to latent vector) pass of the Normalizing Flow. Since the sampling pass is the inverse of the forward pass, it will use convolutions only, resulting in efficient sampling times. We use our parallel backpropagation algorithm to optimize the inverse of the convolution layer, resulting in fast training times. We implement this approach in various Normalizing Flow backbones, resulting in our Inverse-Flow models. We benchmark Inverse-Flow on standard datasets and show significantly improved sampling times with similar bits per dimension compared to previous models.
no_new_dataset
0.949716
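The Inverse-Flow abstract above contrasts an $O(n^3)$ Gaussian-elimination baseline with an $O(\sqrt{n})$ parallel backpropagation algorithm. The sketch below does not reproduce the parallel algorithm; it only illustrates why inverting an invertible (here, causal 1D) convolution reduces to a triangular solve, using the hypothetical helpers `causal_conv` and `invert_causal_conv`.

```python
import numpy as np

# A causal 1D convolution with kernel [1, k1, k2] acts as a banded lower-triangular
# matrix, so its inverse can be applied by forward substitution.

def causal_conv(x, kernel):
    """y[t] = sum_j kernel[j] * x[t - j], zero-padded on the left."""
    n, m = len(x), len(kernel)
    y = np.zeros(n)
    for t in range(n):
        for j in range(m):
            if t - j >= 0:
                y[t] += kernel[j] * x[t - j]
    return y

def invert_causal_conv(y, kernel):
    """Recover x from y by forward substitution (kernel[0] must be nonzero)."""
    n, m = len(y), len(kernel)
    x = np.zeros(n)
    for t in range(n):
        acc = y[t]
        for j in range(1, m):
            if t - j >= 0:
                acc -= kernel[j] * x[t - j]
        x[t] = acc / kernel[0]
    return x

kernel = np.array([1.0, 0.5, -0.2])
x = np.random.default_rng(2).normal(size=8)
y = causal_conv(x, kernel)
print(np.allclose(invert_causal_conv(y, kernel), x))   # True
```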
2410.17448
Tyler Josephson
Samiha Sharlin, Tyler R. Josephson
In Context Learning and Reasoning for Symbolic Regression with Large Language Models
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) are transformer-based machine learning models that have shown remarkable performance in tasks for which they were not explicitly trained. Here, we explore the potential of LLMs to perform symbolic regression -- a machine-learning method for finding simple and accurate equations from datasets. We prompt GPT-4 to suggest expressions from data, which are then optimized and evaluated using external Python tools. These results are fed back to GPT-4, which proposes improved expressions while optimizing for complexity and loss. Using chain-of-thought prompting, we instruct GPT-4 to analyze the data, prior expressions, and the scientific context (expressed in natural language) for each problem before generating new expressions. We evaluated the workflow in rediscovery of five well-known scientific equations from experimental data, and on an additional dataset without a known equation. GPT-4 successfully rediscovered all five equations, and in general, performed better when prompted to use a scratchpad and consider scientific context. We demonstrate how strategic prompting improves the model's performance and how the natural language interface simplifies integrating theory with data. We also observe how theory can sometimes offset noisy data and, in other cases, data can make up for poor context. Although this approach does not outperform established SR programs where target equations are more complex, LLMs can nonetheless iterate toward improved solutions while following instructions and incorporating scientific context in natural language.
[ { "version": "v1", "created": "Tue, 22 Oct 2024 21:50:52 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 13:14:22 GMT" } ]
2025-03-13T00:00:00
[ [ "Sharlin", "Samiha", "" ], [ "Josephson", "Tyler R.", "" ] ]
TITLE: In Context Learning and Reasoning for Symbolic Regression with Large Language Models ABSTRACT: Large Language Models (LLMs) are transformer-based machine learning models that have shown remarkable performance in tasks for which they were not explicitly trained. Here, we explore the potential of LLMs to perform symbolic regression -- a machine-learning method for finding simple and accurate equations from datasets. We prompt GPT-4 to suggest expressions from data, which are then optimized and evaluated using external Python tools. These results are fed back to GPT-4, which proposes improved expressions while optimizing for complexity and loss. Using chain-of-thought prompting, we instruct GPT-4 to analyze the data, prior expressions, and the scientific context (expressed in natural language) for each problem before generating new expressions. We evaluated the workflow in rediscovery of five well-known scientific equations from experimental data, and on an additional dataset without a known equation. GPT-4 successfully rediscovered all five equations, and in general, performed better when prompted to use a scratchpad and consider scientific context. We demonstrate how strategic prompting improves the model's performance and how the natural language interface simplifies integrating theory with data. We also observe how theory can sometimes offset noisy data and, in other cases, data can make up for poor context. Although this approach does not outperform established SR programs where target equations are more complex, LLMs can nonetheless iterate toward improved solutions while following instructions and incorporating scientific context in natural language.
no_new_dataset
0.947721
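The symbolic-regression abstract above describes a loop in which GPT-4 proposes expressions that are then optimized and evaluated by external Python tools. The snippet below sketches only that optimize-and-score step under stated assumptions: `candidate` stands in for an LLM-proposed expression, and the complexity penalty in `fit_and_score` is an illustrative choice, not the paper's scoring rule.

```python
import numpy as np
from scipy.optimize import curve_fit

def candidate(x, a, b):
    """Example expression an LLM might propose: a * x**2 + b (hypothetical)."""
    return a * x**2 + b

def fit_and_score(expr, xdata, ydata, n_params, complexity_weight=0.1):
    """Fit free constants, then combine mean-squared error with a complexity penalty."""
    params, _ = curve_fit(expr, xdata, ydata, p0=np.ones(n_params))
    mse = float(np.mean((expr(xdata, *params) - ydata) ** 2))
    return params, mse + complexity_weight * n_params

# Synthetic data from y = 3x^2 + 1 with noise, standing in for experimental data.
rng = np.random.default_rng(3)
x = np.linspace(-2, 2, 50)
y = 3.0 * x**2 + 1.0 + rng.normal(scale=0.1, size=x.size)
params, score = fit_and_score(candidate, x, y, n_params=2)
print(params, score)   # parameters near (3, 1); the score would be fed back to the LLM
```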
2410.18444
ChaeHun Park
ChaeHun Park, Hojun Cho, Jaegul Choo
Evaluating Automatic Speech Recognition Systems for Korean Meteorological Experts
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper explores integrating Automatic Speech Recognition (ASR) into natural language query systems to improve weather forecasting efficiency for Korean meteorologists. We address challenges in developing ASR systems for the Korean weather domain, specifically specialized vocabulary and Korean linguistic intricacies. To tackle these issues, we constructed an evaluation dataset of spoken queries recorded by native Korean speakers. Using this dataset, we assessed various configurations of a multilingual ASR model family, identifying performance limitations related to domain-specific terminology. We then implemented a simple text-to-speech-based data augmentation method, which improved the recognition of specialized terms while maintaining general-domain performance. Our contributions include creating a domain-specific dataset, comprehensive ASR model evaluations, and an effective augmentation technique. We believe our work provides a foundation for future advancements in ASR for the Korean weather forecasting domain.
[ { "version": "v1", "created": "Thu, 24 Oct 2024 05:40:07 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 13:18:04 GMT" } ]
2025-03-13T00:00:00
[ [ "Park", "ChaeHun", "" ], [ "Cho", "Hojun", "" ], [ "Choo", "Jaegul", "" ] ]
TITLE: Evaluating Automatic Speech Recognition Systems for Korean Meteorological Experts ABSTRACT: This paper explores integrating Automatic Speech Recognition (ASR) into natural language query systems to improve weather forecasting efficiency for Korean meteorologists. We address challenges in developing ASR systems for the Korean weather domain, specifically specialized vocabulary and Korean linguistic intricacies. To tackle these issues, we constructed an evaluation dataset of spoken queries recorded by native Korean speakers. Using this dataset, we assessed various configurations of a multilingual ASR model family, identifying performance limitations related to domain-specific terminology. We then implemented a simple text-to-speech-based data augmentation method, which improved the recognition of specialized terms while maintaining general-domain performance. Our contributions include creating a domain-specific dataset, comprehensive ASR model evaluations, and an effective augmentation technique. We believe our work provides a foundation for future advancements in ASR for the Korean weather forecasting domain.
new_dataset
0.958304
2410.18469
Chung En Sun
Chung-En Sun, Xiaodong Liu, Weiwei Yang, Tsui-Wei Weng, Hao Cheng, Aidan San, Michel Galley, Jianfeng Gao
Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities
Accepted to NAACL 2025 Main (oral)
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent research has shown that Large Language Models (LLMs) are vulnerable to automated jailbreak attacks, where adversarial suffixes crafted by algorithms and appended to harmful queries bypass safety alignment and trigger unintended responses. Current methods for generating these suffixes are computationally expensive and have low Attack Success Rates (ASR), especially against well-aligned models like Llama2 and Llama3. To overcome these limitations, we introduce ADV-LLM, an iterative self-tuning process that crafts adversarial LLMs with enhanced jailbreak ability. Our framework significantly reduces the computational cost of generating adversarial suffixes while achieving nearly 100\% ASR on various open-source LLMs. Moreover, it exhibits strong attack transferability to closed-source models, achieving 99\% ASR on GPT-3.5 and 49\% ASR on GPT-4, despite being optimized solely on Llama3. Beyond improving jailbreak ability, ADV-LLM provides valuable insights for future safety alignment research through its ability to generate large datasets for studying LLM safety.
[ { "version": "v1", "created": "Thu, 24 Oct 2024 06:36:12 GMT" }, { "version": "v2", "created": "Fri, 25 Oct 2024 23:05:59 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 23:26:25 GMT" } ]
2025-03-13T00:00:00
[ [ "Sun", "Chung-En", "" ], [ "Liu", "Xiaodong", "" ], [ "Yang", "Weiwei", "" ], [ "Weng", "Tsui-Wei", "" ], [ "Cheng", "Hao", "" ], [ "San", "Aidan", "" ], [ "Galley", "Michel", "" ], [ "Gao", "Jianfeng", "" ] ]
TITLE: Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities ABSTRACT: Recent research has shown that Large Language Models (LLMs) are vulnerable to automated jailbreak attacks, where adversarial suffixes crafted by algorithms and appended to harmful queries bypass safety alignment and trigger unintended responses. Current methods for generating these suffixes are computationally expensive and have low Attack Success Rates (ASR), especially against well-aligned models like Llama2 and Llama3. To overcome these limitations, we introduce ADV-LLM, an iterative self-tuning process that crafts adversarial LLMs with enhanced jailbreak ability. Our framework significantly reduces the computational cost of generating adversarial suffixes while achieving nearly 100\% ASR on various open-source LLMs. Moreover, it exhibits strong attack transferability to closed-source models, achieving 99\% ASR on GPT-3.5 and 49\% ASR on GPT-4, despite being optimized solely on Llama3. Beyond improving jailbreak ability, ADV-LLM provides valuable insights for future safety alignment research through its ability to generate large datasets for studying LLM safety.
no_new_dataset
0.944587
2410.18857
Sanghyuk Chun
Sanghyuk Chun and Wonjae Kim and Song Park and Sangdoo Yun
Probabilistic Language-Image Pre-Training
Code: https://github.com/naver-ai/prolip; HuggingFace Hub: https://huggingface.co/collections/SanghyukChun/prolip-6712595dfc87fd8597350291; 33 pages, 4.8 MB; LongProLIP paper: arXiv:2503.08048
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-language models (VLMs) embed aligned image-text pairs into a joint space but often rely on deterministic embeddings, assuming a one-to-one correspondence between images and texts. This oversimplifies real-world relationships, which are inherently many-to-many, with multiple captions describing a single image and vice versa. We introduce Probabilistic Language-Image Pre-training (ProLIP), the first probabilistic VLM pre-trained on a billion-scale image-text dataset using only probabilistic objectives, achieving a strong zero-shot capability (e.g., 74.6% ImageNet zero-shot accuracy with ViT-B/16). ProLIP efficiently estimates uncertainty by an "uncertainty token" without extra parameters. We also introduce a novel inclusion loss that enforces distributional inclusion relationships between image-text pairs and between original and masked inputs. Experiments demonstrate that, by leveraging uncertainty estimates, ProLIP benefits downstream tasks and aligns with intuitive notions of uncertainty, e.g., shorter texts being more uncertain and more general inputs including specific ones. Utilizing text uncertainties, we further improve ImageNet accuracy from 74.6% to 75.8% (under a few-shot setting), supporting the practical advantages of our probabilistic approach. The code is available at https://github.com/naver-ai/prolip
[ { "version": "v1", "created": "Thu, 24 Oct 2024 15:42:25 GMT" }, { "version": "v2", "created": "Fri, 6 Dec 2024 15:20:28 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 14:03:31 GMT" } ]
2025-03-13T00:00:00
[ [ "Chun", "Sanghyuk", "" ], [ "Kim", "Wonjae", "" ], [ "Park", "Song", "" ], [ "Yun", "Sangdoo", "" ] ]
TITLE: Probabilistic Language-Image Pre-Training ABSTRACT: Vision-language models (VLMs) embed aligned image-text pairs into a joint space but often rely on deterministic embeddings, assuming a one-to-one correspondence between images and texts. This oversimplifies real-world relationships, which are inherently many-to-many, with multiple captions describing a single image and vice versa. We introduce Probabilistic Language-Image Pre-training (ProLIP), the first probabilistic VLM pre-trained on a billion-scale image-text dataset using only probabilistic objectives, achieving a strong zero-shot capability (e.g., 74.6% ImageNet zero-shot accuracy with ViT-B/16). ProLIP efficiently estimates uncertainty by an "uncertainty token" without extra parameters. We also introduce a novel inclusion loss that enforces distributional inclusion relationships between image-text pairs and between original and masked inputs. Experiments demonstrate that, by leveraging uncertainty estimates, ProLIP benefits downstream tasks and aligns with intuitive notions of uncertainty, e.g., shorter texts being more uncertain and more general inputs including specific ones. Utilizing text uncertainties, we further improve ImageNet accuracy from 74.6% to 75.8% (under a few-shot setting), supporting the practical advantages of our probabilistic approach. The code is available at https://github.com/naver-ai/prolip
no_new_dataset
0.949389
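The ProLIP abstract above builds on probabilistic embeddings whose uncertainty is estimated by an uncertainty token. As a generic illustration of probabilistic embeddings (not ProLIP's objective or architecture), the sketch below represents each image and caption as a diagonal Gaussian and uses the closed-form expected squared distance, so that higher variance reads as higher uncertainty.

```python
import torch

# Each image/text is represented by a mean vector and per-dimension variance;
# similarity accounts for uncertainty via the expected squared distance between
# samples of the two (independent) Gaussians.

def expected_sq_distance(mu1, var1, mu2, var2):
    """E||z1 - z2||^2 for independent diagonal Gaussians z1, z2 (closed form)."""
    return ((mu1 - mu2) ** 2).sum(dim=-1) + var1.sum(dim=-1) + var2.sum(dim=-1)

# Toy 8-dimensional embeddings for one image and two candidate captions.
torch.manual_seed(0)
img_mu, img_var = torch.randn(8), torch.rand(8) * 0.1
cap_mu = torch.randn(2, 8)
cap_var = torch.rand(2, 8) * torch.tensor([[0.05], [0.50]])   # second caption is vaguer

d = expected_sq_distance(img_mu, img_var, cap_mu, cap_var)
print(d)   # larger variance inflates the expected distance, encoding "more uncertain"
```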
2410.23746
Runzhe Zhan
Junchao Wu, Runzhe Zhan, Derek F. Wong, Shu Yang, Xinyi Yang, Yulin Yuan, Lidia S. Chao
DetectRL: Benchmarking LLM-Generated Text Detection in Real-World Scenarios
Accepted to NeurIPS 2024 Datasets and Benchmarks Track (Camera-Ready)
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Detecting text generated by large language models (LLMs) is of great recent interest. With zero-shot methods like DetectGPT, detection capabilities have reached impressive levels. However, the reliability of existing detectors in real-world applications remains underexplored. In this study, we present a new benchmark, DetectRL, highlighting that even state-of-the-art (SOTA) detection techniques still underperform on this task. We collected human-written datasets from domains where LLMs are particularly prone to misuse. Using popular LLMs, we generated data that better aligns with real-world applications. Unlike previous studies, we employed heuristic rules to create adversarial LLM-generated text, simulating various prompt usages, human revisions like word substitutions, and writing noise like spelling mistakes. Our development of DetectRL reveals the strengths and limitations of current SOTA detectors. More importantly, we analyzed the potential impact of writing styles, model types, attack methods, text lengths, and real-world human writing factors on different types of detectors. We believe DetectRL could serve as an effective benchmark for assessing detectors in real-world scenarios, evolving with advanced attack methods, thus providing a more stressful evaluation to drive the development of more efficient detectors. Data and code are publicly available at: https://github.com/NLP2CT/DetectRL.
[ { "version": "v1", "created": "Thu, 31 Oct 2024 09:01:25 GMT" }, { "version": "v2", "created": "Fri, 7 Mar 2025 09:06:03 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 10:08:22 GMT" } ]
2025-03-13T00:00:00
[ [ "Wu", "Junchao", "" ], [ "Zhan", "Runzhe", "" ], [ "Wong", "Derek F.", "" ], [ "Yang", "Shu", "" ], [ "Yang", "Xinyi", "" ], [ "Yuan", "Yulin", "" ], [ "Chao", "Lidia S.", "" ] ]
TITLE: DetectRL: Benchmarking LLM-Generated Text Detection in Real-World Scenarios ABSTRACT: Detecting text generated by large language models (LLMs) is of great recent interest. With zero-shot methods like DetectGPT, detection capabilities have reached impressive levels. However, the reliability of existing detectors in real-world applications remains underexplored. In this study, we present a new benchmark, DetectRL, highlighting that even state-of-the-art (SOTA) detection techniques still underperform on this task. We collected human-written datasets from domains where LLMs are particularly prone to misuse. Using popular LLMs, we generated data that better aligns with real-world applications. Unlike previous studies, we employed heuristic rules to create adversarial LLM-generated text, simulating various prompt usages, human revisions like word substitutions, and writing noise like spelling mistakes. Our development of DetectRL reveals the strengths and limitations of current SOTA detectors. More importantly, we analyzed the potential impact of writing styles, model types, attack methods, text lengths, and real-world human writing factors on different types of detectors. We believe DetectRL could serve as an effective benchmark for assessing detectors in real-world scenarios, evolving with advanced attack methods, thus providing a more stressful evaluation to drive the development of more efficient detectors. Data and code are publicly available at: https://github.com/NLP2CT/DetectRL.
new_dataset
0.544873
2411.00144
Chen Zhao
Chen Zhao, Xuan Wang, Tong Zhang, Saqib Javed, Mathieu Salzmann
Self-Ensembling Gaussian Splatting for Few-Shot Novel View Synthesis
null
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
3D Gaussian Splatting (3DGS) has demonstrated remarkable effectiveness in novel view synthesis (NVS). However, 3DGS tends to overfit when trained with sparse views, limiting its generalization to novel viewpoints. In this paper, we address this overfitting issue by introducing Self-Ensembling Gaussian Splatting (SE-GS). We achieve self-ensembling by incorporating an uncertainty-aware perturbation strategy during training. A $\mathbf{\Delta}$-model and a $\mathbf{\Sigma}$-model are jointly trained on the available images. The $\mathbf{\Delta}$-model is dynamically perturbed based on rendering uncertainty across training steps, generating diverse perturbed models with negligible computational overhead. Discrepancies between the $\mathbf{\Sigma}$-model and these perturbed models are minimized throughout training, forming a robust ensemble of 3DGS models. This ensemble, represented by the $\mathbf{\Sigma}$-model, is then used to generate novel-view images during inference. Experimental results on the LLFF, Mip-NeRF360, DTU, and MVImgNet datasets demonstrate that our approach enhances NVS quality under few-shot training conditions, outperforming existing state-of-the-art methods. The code is released at: https://sailor-z.github.io/projects/SEGS.html.
[ { "version": "v1", "created": "Thu, 31 Oct 2024 18:43:48 GMT" }, { "version": "v2", "created": "Fri, 22 Nov 2024 10:39:59 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 03:26:34 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhao", "Chen", "" ], [ "Wang", "Xuan", "" ], [ "Zhang", "Tong", "" ], [ "Javed", "Saqib", "" ], [ "Salzmann", "Mathieu", "" ] ]
TITLE: Self-Ensembling Gaussian Splatting for Few-Shot Novel View Synthesis ABSTRACT: 3D Gaussian Splatting (3DGS) has demonstrated remarkable effectiveness in novel view synthesis (NVS). However, 3DGS tends to overfit when trained with sparse views, limiting its generalization to novel viewpoints. In this paper, we address this overfitting issue by introducing Self-Ensembling Gaussian Splatting (SE-GS). We achieve self-ensembling by incorporating an uncertainty-aware perturbation strategy during training. A $\mathbf{\Delta}$-model and a $\mathbf{\Sigma}$-model are jointly trained on the available images. The $\mathbf{\Delta}$-model is dynamically perturbed based on rendering uncertainty across training steps, generating diverse perturbed models with negligible computational overhead. Discrepancies between the $\mathbf{\Sigma}$-model and these perturbed models are minimized throughout training, forming a robust ensemble of 3DGS models. This ensemble, represented by the $\mathbf{\Sigma}$-model, is then used to generate novel-view images during inference. Experimental results on the LLFF, Mip-NeRF360, DTU, and MVImgNet datasets demonstrate that our approach enhances NVS quality under few-shot training conditions, outperforming existing state-of-the-art methods. The code is released at: https://sailor-z.github.io/projects/SEGS.html.
no_new_dataset
0.947137
2411.01126
Davin Hill
Davin Hill, Josh Bone, Aria Masoomi, Max Torop, Jennifer Dy
Axiomatic Explainer Globalness via Optimal Transport
Proceedings of the 28th International Conference on Artificial Intelligence and Statistics (AISTATS) 2025
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Explainability methods are often challenging to evaluate and compare. With a multitude of explainers available, practitioners must often compare and select explainers based on quantitative evaluation metrics. One particular differentiator between explainers is the diversity of explanations for a given dataset; i.e. whether all explanations are identical, unique and uniformly distributed, or somewhere between these two extremes. In this work, we define a complexity measure for explainers, globalness, which enables deeper understanding of the distribution of explanations produced by feature attribution and feature selection methods for a given dataset. We establish the axiomatic properties that any such measure should possess and prove that our proposed measure, Wasserstein Globalness, meets these criteria. We validate the utility of Wasserstein Globalness using image, tabular, and synthetic datasets, empirically showing that it both facilitates meaningful comparison between explainers and improves the selection process for explainability methods.
[ { "version": "v1", "created": "Sat, 2 Nov 2024 04:01:38 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 03:46:50 GMT" } ]
2025-03-13T00:00:00
[ [ "Hill", "Davin", "" ], [ "Bone", "Josh", "" ], [ "Masoomi", "Aria", "" ], [ "Torop", "Max", "" ], [ "Dy", "Jennifer", "" ] ]
TITLE: Axiomatic Explainer Globalness via Optimal Transport ABSTRACT: Explainability methods are often challenging to evaluate and compare. With a multitude of explainers available, practitioners must often compare and select explainers based on quantitative evaluation metrics. One particular differentiator between explainers is the diversity of explanations for a given dataset; i.e. whether all explanations are identical, unique and uniformly distributed, or somewhere between these two extremes. In this work, we define a complexity measure for explainers, globalness, which enables deeper understanding of the distribution of explanations produced by feature attribution and feature selection methods for a given dataset. We establish the axiomatic properties that any such measure should possess and prove that our proposed measure, Wasserstein Globalness, meets these criteria. We validate the utility of Wasserstein Globalness using image, tabular, and synthetic datasets, empirically showing that it both facilitates meaningful comparison between explainers and improves the selection process for explainability methods.
no_new_dataset
0.951006
2411.01293
Rafa{\l} Karczewski
Rafa{\l} Karczewski, Markus Heinonen, Vikas Garg
Diffusion Models as Cartoonists: The Curious Case of High Density Regions
ICLR 2025
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
We investigate what kind of images lie in the high-density regions of diffusion models. We introduce a theoretical mode-tracking process capable of pinpointing the exact mode of the denoising distribution, and we propose a practical high-density sampler that consistently generates images of higher likelihood than usual samplers. Our empirical findings reveal the existence of significantly higher likelihood samples that typical samplers do not produce, often manifesting as cartoon-like drawings or blurry images depending on the noise level. Curiously, these patterns emerge in datasets devoid of such examples. We also present a novel approach to track sample likelihoods in diffusion SDEs, which remarkably incurs no additional computational cost.
[ { "version": "v1", "created": "Sat, 2 Nov 2024 16:02:47 GMT" }, { "version": "v2", "created": "Thu, 27 Feb 2025 10:41:45 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 12:08:55 GMT" } ]
2025-03-13T00:00:00
[ [ "Karczewski", "Rafał", "" ], [ "Heinonen", "Markus", "" ], [ "Garg", "Vikas", "" ] ]
TITLE: Diffusion Models as Cartoonists: The Curious Case of High Density Regions ABSTRACT: We investigate what kind of images lie in the high-density regions of diffusion models. We introduce a theoretical mode-tracking process capable of pinpointing the exact mode of the denoising distribution, and we propose a practical high-density sampler that consistently generates images of higher likelihood than usual samplers. Our empirical findings reveal the existence of significantly higher likelihood samples that typical samplers do not produce, often manifesting as cartoon-like drawings or blurry images depending on the noise level. Curiously, these patterns emerge in datasets devoid of such examples. We also present a novel approach to track sample likelihoods in diffusion SDEs, which remarkably incurs no additional computational cost.
no_new_dataset
0.950319
2411.08127
SangHyun Park
Shih-Ying Yeh, Sang-Hyun Park, Yi Li, Giyeong Oh, Xuehai Wang, Min Song, Youngjae Yu
TIPO: Text to Image with Text Presampling for Prompt Optimization
41 pages, 32 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
TIPO (Text-to-Image Prompt Optimization) introduces an efficient approach for automatic prompt refinement in text-to-image (T2I) generation. Starting from simple user prompts, TIPO leverages a lightweight pre-trained model to expand these prompts into richer, detailed versions. Conceptually, TIPO samples refined prompts from a targeted sub-distribution within the broader semantic space, preserving the original intent while significantly improving visual quality, coherence, and detail. Unlike resource-intensive methods based on large language models (LLMs) or reinforcement learning (RL), TIPO provides computational efficiency and scalability, opening new possibilities for effective, automated prompt engineering in T2I tasks. We provide visual results and a human preference report to investigate TIPO's effectiveness. Experimental evaluations on benchmark datasets demonstrate substantial improvements in aesthetic quality, significant reduction of visual artifacts, and enhanced alignment with target distributions along with significant human preference proficiency. These results highlight the importance of targeted prompt engineering in text-to-image tasks and indicate broader opportunities for automated prompt refinement.
[ { "version": "v1", "created": "Tue, 12 Nov 2024 19:09:45 GMT" }, { "version": "v2", "created": "Fri, 22 Nov 2024 14:58:31 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 18:21:57 GMT" } ]
2025-03-13T00:00:00
[ [ "Yeh", "Shih-Ying", "" ], [ "Park", "Sang-Hyun", "" ], [ "Li", "Yi", "" ], [ "Oh", "Giyeong", "" ], [ "Wang", "Xuehai", "" ], [ "Song", "Min", "" ], [ "Yu", "Youngjae", "" ] ]
TITLE: TIPO: Text to Image with Text Presampling for Prompt Optimization ABSTRACT: TIPO (Text-to-Image Prompt Optimization) introduces an efficient approach for automatic prompt refinement in text-to-image (T2I) generation. Starting from simple user prompts, TIPO leverages a lightweight pre-trained model to expand these prompts into richer, detailed versions. Conceptually, TIPO samples refined prompts from a targeted sub-distribution within the broader semantic space, preserving the original intent while significantly improving visual quality, coherence, and detail. Unlike resource-intensive methods based on large language models (LLMs) or reinforcement learning (RL), TIPO provides computational efficiency and scalability, opening new possibilities for effective, automated prompt engineering in T2I tasks. We provide visual results and a human preference report to investigate TIPO's effectiveness. Experimental evaluations on benchmark datasets demonstrate substantial improvements in aesthetic quality, significant reduction of visual artifacts, and enhanced alignment with target distributions along with significant human preference proficiency. These results highlight the importance of targeted prompt engineering in text-to-image tasks and indicate broader opportunities for automated prompt refinement.
no_new_dataset
0.957038
2411.10153
Gerardo Duran-Martin
Gerardo Duran-Martin, Leandro S\'anchez-Betancourt, Alexander Y. Shestopaloff, Kevin Murphy
A unifying framework for generalised Bayesian online learning in non-stationary environments
Published in Transactions on Machine Learning Research (03/2025)
null
null
null
stat.ML cs.LG
http://creativecommons.org/licenses/by/4.0/
We propose a unifying framework for methods that perform probabilistic online learning in non-stationary environments. We call the framework BONE, which stands for generalised (B)ayesian (O)nline learning in (N)on-stationary (E)nvironments. BONE provides a common structure to tackle a variety of problems, including online continual learning, prequential forecasting, and contextual bandits. The framework requires specifying three modelling choices: (i) a model for measurements (e.g., a neural network), (ii) an auxiliary process to model non-stationarity (e.g., the time since the last changepoint), and (iii) a conditional prior over model parameters (e.g., a multivariate Gaussian). The framework also requires two algorithmic choices, which we use to carry out approximate inference under this framework: (i) an algorithm to estimate beliefs (posterior distribution) about the model parameters given the auxiliary variable, and (ii) an algorithm to estimate beliefs about the auxiliary variable. We show how the modularity of our framework allows for many existing methods to be reinterpreted as instances of BONE, and it allows us to propose new methods. We compare experimentally existing methods with our proposed new method on several datasets, providing insights into the situations that make each method more suitable for a specific task. We provide a Jax open source library to facilitate the adoption of this framework.
[ { "version": "v1", "created": "Fri, 15 Nov 2024 12:52:02 GMT" }, { "version": "v2", "created": "Mon, 18 Nov 2024 10:16:14 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 10:05:37 GMT" } ]
2025-03-13T00:00:00
[ [ "Duran-Martin", "Gerardo", "" ], [ "Sánchez-Betancourt", "Leandro", "" ], [ "Shestopaloff", "Alexander Y.", "" ], [ "Murphy", "Kevin", "" ] ]
TITLE: A unifying framework for generalised Bayesian online learning in non-stationary environments ABSTRACT: We propose a unifying framework for methods that perform probabilistic online learning in non-stationary environments. We call the framework BONE, which stands for generalised (B)ayesian (O)nline learning in (N)on-stationary (E)nvironments. BONE provides a common structure to tackle a variety of problems, including online continual learning, prequential forecasting, and contextual bandits. The framework requires specifying three modelling choices: (i) a model for measurements (e.g., a neural network), (ii) an auxiliary process to model non-stationarity (e.g., the time since the last changepoint), and (iii) a conditional prior over model parameters (e.g., a multivariate Gaussian). The framework also requires two algorithmic choices, which we use to carry out approximate inference under this framework: (i) an algorithm to estimate beliefs (posterior distribution) about the model parameters given the auxiliary variable, and (ii) an algorithm to estimate beliefs about the auxiliary variable. We show how the modularity of our framework allows for many existing methods to be reinterpreted as instances of BONE, and it allows us to propose new methods. We compare experimentally existing methods with our proposed new method on several datasets, providing insights into the situations that make each method more suitable for a specific task. We provide a Jax open source library to facilitate the adoption of this framework.
no_new_dataset
0.945349
2411.10224
Kang Liu
Qiguang Miao and Kang Liu and Zhuoqi Ma and Yunan Li and Xiaolu Kang and Ruixuan Liu and Tianyi Liu and Kun Xie and Zhicheng Jiao
EVOKE: Elevating Chest X-ray Report Generation via Multi-View Contrastive Learning and Patient-Specific Knowledge
The code is available at https://github.com/mk-runner/EVOKE
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Radiology reports are crucial for planning treatment strategies and facilitating effective doctor-patient communication. However, the manual creation of these reports places a significant burden on radiologists. While automatic radiology report generation presents a promising solution, existing methods often rely on single-view radiographs, which constrain diagnostic accuracy. To address this challenge, we propose \textbf{EVOKE}, a novel chest X-ray report generation framework that incorporates multi-view contrastive learning and patient-specific knowledge. Specifically, we introduce a multi-view contrastive learning method that enhances visual representation by aligning multi-view radiographs with their corresponding report. After that, we present a knowledge-guided report generation module that integrates available patient-specific indications (e.g., symptom descriptions) to trigger the production of accurate and coherent radiology reports. To support research in multi-view report generation, we construct Multi-view CXR and Two-view CXR datasets using publicly available sources. Our proposed EVOKE surpasses recent state-of-the-art methods across multiple datasets, achieving a 2.9\% F\textsubscript{1} RadGraph improvement on MIMIC-CXR, a 7.3\% BLEU-1 improvement on MIMIC-ABN, a 3.1\% BLEU-4 improvement on Multi-view CXR, and an 8.2\% F\textsubscript{1,mic-14} CheXbert improvement on Two-view CXR.
[ { "version": "v1", "created": "Fri, 15 Nov 2024 14:38:13 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 09:38:02 GMT" } ]
2025-03-13T00:00:00
[ [ "Miao", "Qiguang", "" ], [ "Liu", "Kang", "" ], [ "Ma", "Zhuoqi", "" ], [ "Li", "Yunan", "" ], [ "Kang", "Xiaolu", "" ], [ "Liu", "Ruixuan", "" ], [ "Liu", "Tianyi", "" ], [ "Xie", "Kun", "" ], [ "Jiao", "Zhicheng", "" ] ]
TITLE: EVOKE: Elevating Chest X-ray Report Generation via Multi-View Contrastive Learning and Patient-Specific Knowledge ABSTRACT: Radiology reports are crucial for planning treatment strategies and facilitating effective doctor-patient communication. However, the manual creation of these reports places a significant burden on radiologists. While automatic radiology report generation presents a promising solution, existing methods often rely on single-view radiographs, which constrain diagnostic accuracy. To address this challenge, we propose \textbf{EVOKE}, a novel chest X-ray report generation framework that incorporates multi-view contrastive learning and patient-specific knowledge. Specifically, we introduce a multi-view contrastive learning method that enhances visual representation by aligning multi-view radiographs with their corresponding report. After that, we present a knowledge-guided report generation module that integrates available patient-specific indications (e.g., symptom descriptions) to trigger the production of accurate and coherent radiology reports. To support research in multi-view report generation, we construct Multi-view CXR and Two-view CXR datasets using publicly available sources. Our proposed EVOKE surpasses recent state-of-the-art methods across multiple datasets, achieving a 2.9\% F\textsubscript{1} RadGraph improvement on MIMIC-CXR, a 7.3\% BLEU-1 improvement on MIMIC-ABN, a 3.1\% BLEU-4 improvement on Multi-view CXR, and an 8.2\% F\textsubscript{1,mic-14} CheXbert improvement on Two-view CXR.
no_new_dataset
0.942981
2411.15232
Taha Koleilat
Taha Koleilat, Hojat Asgariandehkordi, Hassan Rivaz, Yiming Xiao
BiomedCoOp: Learning to Prompt for Biomedical Vision-Language Models
Accepted to CVPR 2025
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent advancements in vision-language models (VLMs), such as CLIP, have demonstrated substantial success in self-supervised representation learning for vision tasks. However, effectively adapting VLMs to downstream applications remains challenging, as their accuracy often depends on time-intensive and expertise-demanding prompt engineering, while full model fine-tuning is costly. This is particularly true for biomedical images, which, unlike natural images, typically suffer from limited annotated datasets, unintuitive image contrasts, and nuanced visual features. Recent prompt learning techniques, such as Context Optimization (CoOp) intend to tackle these issues, but still fall short in generalizability. Meanwhile, explorations in prompt learning for biomedical image analysis are still highly limited. In this work, we propose BiomedCoOp, a novel prompt learning framework that enables efficient adaptation of BiomedCLIP for accurate and highly generalizable few-shot biomedical image classification. Our approach achieves effective prompt context learning by leveraging semantic consistency with average prompt ensembles from Large Language Models (LLMs) and knowledge distillation with a statistics-based prompt selection strategy. We conducted comprehensive validation of our proposed framework on 11 medical datasets across 9 modalities and 10 organs against existing state-of-the-art methods, demonstrating significant improvements in both accuracy and generalizability. The code is publicly available at https://github.com/HealthX-Lab/BiomedCoOp.
[ { "version": "v1", "created": "Thu, 21 Nov 2024 19:13:04 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 03:28:09 GMT" } ]
2025-03-13T00:00:00
[ [ "Koleilat", "Taha", "" ], [ "Asgariandehkordi", "Hojat", "" ], [ "Rivaz", "Hassan", "" ], [ "Xiao", "Yiming", "" ] ]
TITLE: BiomedCoOp: Learning to Prompt for Biomedical Vision-Language Models ABSTRACT: Recent advancements in vision-language models (VLMs), such as CLIP, have demonstrated substantial success in self-supervised representation learning for vision tasks. However, effectively adapting VLMs to downstream applications remains challenging, as their accuracy often depends on time-intensive and expertise-demanding prompt engineering, while full model fine-tuning is costly. This is particularly true for biomedical images, which, unlike natural images, typically suffer from limited annotated datasets, unintuitive image contrasts, and nuanced visual features. Recent prompt learning techniques, such as Context Optimization (CoOp) intend to tackle these issues, but still fall short in generalizability. Meanwhile, explorations in prompt learning for biomedical image analysis are still highly limited. In this work, we propose BiomedCoOp, a novel prompt learning framework that enables efficient adaptation of BiomedCLIP for accurate and highly generalizable few-shot biomedical image classification. Our approach achieves effective prompt context learning by leveraging semantic consistency with average prompt ensembles from Large Language Models (LLMs) and knowledge distillation with a statistics-based prompt selection strategy. We conducted comprehensive validation of our proposed framework on 11 medical datasets across 9 modalities and 10 organs against existing state-of-the-art methods, demonstrating significant improvements in both accuracy and generalizability. The code is publicly available at https://github.com/HealthX-Lab/BiomedCoOp.
no_new_dataset
0.944536
2411.16370
Amaan Valiuddin
M.M.A. Valiuddin, R.J.G. van Sloun, C.G.A. Viviers, P.H.N. de With, F. van der Sommen
A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation
20 pages, revised
null
null
null
cs.CV cs.AI cs.LG eess.IV stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Advancements in image segmentation play an integral role within the broad scope of Deep Learning-based Computer Vision. Furthermore, their widespread applicability in critical real-world tasks has resulted in challenges related to the reliability of such algorithms. Hence, uncertainty quantification has been extensively studied within this context, enabling the expression of model ignorance (epistemic uncertainty) or data ambiguity (aleatoric uncertainty) to prevent uninformed decision-making. Due to the rapid adoption of Convolutional Neural Network (CNN)-based segmentation models in high-stake applications, a substantial body of research has been published on this very topic, causing its swift expansion into a distinct field. This work provides a comprehensive overview of probabilistic segmentation, by discussing fundamental concepts of uncertainty quantification, governing advancements in the field as well as the application to various tasks. Moreover, literature on both types of uncertainties traces back to four key applications: (1) to quantify statistical inconsistencies in the annotation process due to ambiguous images, (2) correlating prediction error with uncertainty, (3) expanding the model hypothesis space for better generalization, and (4) Active Learning. An extensive discussion follows that includes an overview of utilized datasets for each of the applications and evaluation of the available methods. We also highlight challenges related to architectures, uncertainty quantification methods, standardization and benchmarking, and finally end with recommendations for future work such as methods based on single forward passes and models that appropriately leverage volumetric data.
[ { "version": "v1", "created": "Mon, 25 Nov 2024 13:26:09 GMT" }, { "version": "v2", "created": "Tue, 7 Jan 2025 09:34:51 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 09:51:17 GMT" } ]
2025-03-13T00:00:00
[ [ "Valiuddin", "M. M. A.", "" ], [ "van Sloun", "R. J. G.", "" ], [ "Viviers", "C. G. A.", "" ], [ "de With", "P. H. N.", "" ], [ "van der Sommen", "F.", "" ] ]
TITLE: A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation ABSTRACT: Advancements in image segmentation play an integral role within the broad scope of Deep Learning-based Computer Vision. Furthermore, their widespread applicability in critical real-world tasks has resulted in challenges related to the reliability of such algorithms. Hence, uncertainty quantification has been extensively studied within this context, enabling the expression of model ignorance (epistemic uncertainty) or data ambiguity (aleatoric uncertainty) to prevent uninformed decision-making. Due to the rapid adoption of Convolutional Neural Network (CNN)-based segmentation models in high-stake applications, a substantial body of research has been published on this very topic, causing its swift expansion into a distinct field. This work provides a comprehensive overview of probabilistic segmentation, by discussing fundamental concepts of uncertainty quantification, governing advancements in the field as well as the application to various tasks. Moreover, literature on both types of uncertainties traces back to four key applications: (1) to quantify statistical inconsistencies in the annotation process due to ambiguous images, (2) correlating prediction error with uncertainty, (3) expanding the model hypothesis space for better generalization, and (4) Active Learning. An extensive discussion follows that includes an overview of utilized datasets for each of the applications and evaluation of the available methods. We also highlight challenges related to architectures, uncertainty quantification methods, standardization and benchmarking, and finally end with recommendations for future work such as methods based on single forward passes and models that appropriately leverage volumetric data.
no_new_dataset
0.945248
2411.16901
Abdesselam Ferdi
Abdesselam Ferdi
Deep Convolutional Neural Networks Structured Pruning via Gravity Regularization
null
null
10.1109/ICMLANT63295.2024.00009
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Structured pruning is a widely employed strategy for accelerating deep convolutional neural networks (DCNNs). However, existing methods often necessitate modifications to the original architectures, involve complex implementations, and require lengthy fine-tuning stages. To address these challenges, we propose a novel physics-inspired approach that integrates the concept of gravity into the training stage of DCNNs. In this approach, the gravity is directly proportional to the product of the masses of the convolution filter and the attracting filter, and inversely proportional to the square of the distance between them. We applied this force to the convolution filters, either drawing filters closer to the attracting filter (experiencing weaker gravity) toward non-zero weights or pulling filters farther away (subject to stronger gravity) toward zero weights. As a result, filters experiencing stronger gravity have their weights reduced to zero, enabling their removal, while filters under weaker gravity retain significant weights and preserve important information. Our method simultaneously optimizes the filter weights and ranks their importance, eliminating the need for complex implementations or extensive fine-tuning. We validated the proposed approach on popular DCNN architectures using the CIFAR dataset, achieving competitive results compared to existing methods.
[ { "version": "v1", "created": "Mon, 25 Nov 2024 20:10:10 GMT" } ]
2025-03-13T00:00:00
[ [ "Ferdi", "Abdesselam", "" ] ]
TITLE: Deep Convolutional Neural Networks Structured Pruning via Gravity Regularization ABSTRACT: Structured pruning is a widely employed strategy for accelerating deep convolutional neural networks (DCNNs). However, existing methods often necessitate modifications to the original architectures, involve complex implementations, and require lengthy fine-tuning stages. To address these challenges, we propose a novel physics-inspired approach that integrates the concept of gravity into the training stage of DCNNs. In this approach, the gravity is directly proportional to the product of the masses of the convolution filter and the attracting filter, and inversely proportional to the square of the distance between them. We applied this force to the convolution filters, either drawing filters closer to the attracting filter (experiencing weaker gravity) toward non-zero weights or pulling filters farther away (subject to stronger gravity) toward zero weights. As a result, filters experiencing stronger gravity have their weights reduced to zero, enabling their removal, while filters under weaker gravity retain significant weights and preserve important information. Our method simultaneously optimizes the filter weights and ranks their importance, eliminating the need for complex implementations or extensive fine-tuning. We validated the proposed approach on popular DCNN architectures using the CIFAR dataset, achieving competitive results compared to existing methods.
no_new_dataset
0.952309
2411.17489
Nicolai Hermann
Nicolai Hermann, Jorge Condor, Piotr Didyk
Puzzle Similarity: A Perceptually-guided Cross-Reference Metric for Artifact Detection in 3D Scene Reconstructions
null
null
null
null
cs.CV cs.AI cs.GR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern reconstruction techniques can effectively model complex 3D scenes from sparse 2D views. However, automatically assessing the quality of novel views and identifying artifacts is challenging due to the lack of ground truth images and the limitations of No-Reference image metrics in predicting reliable artifact maps. The absence of such metrics hinders the assessment of the quality of novel views and limits the adoption of post-processing techniques, such as inpainting, to enhance reconstruction quality. To tackle this, recent work has established a new category of metrics (Cross-Reference), predicting image quality solely by leveraging context from alternate viewpoint captures (arXiv:2404.14409). In this work, we propose a new Cross-Reference metric, Puzzle Similarity, which is designed to localize artifacts in novel views. Our approach utilizes image patch statistics from the input views to establish a scene-specific distribution, later used to identify poorly reconstructed regions in the novel views. Given the lack of good measures to evaluate Cross-Reference methods in the context of 3D reconstruction, we collected a novel human-labeled dataset of artifact and distortion maps in unseen reconstructed views. Through this dataset, we demonstrate that our method achieves state-of-the-art localization of artifacts in novel views, correlating with human assessment, even without aligned references. We can leverage our new metric to enhance applications like automatic image restoration, guided acquisition, or 3D reconstruction from sparse inputs. Find the project page at https://nihermann.github.io/puzzlesim/ .
[ { "version": "v1", "created": "Tue, 26 Nov 2024 14:57:30 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 09:04:43 GMT" } ]
2025-03-13T00:00:00
[ [ "Hermann", "Nicolai", "" ], [ "Condor", "Jorge", "" ], [ "Didyk", "Piotr", "" ] ]
TITLE: Puzzle Similarity: A Perceptually-guided Cross-Reference Metric for Artifact Detection in 3D Scene Reconstructions ABSTRACT: Modern reconstruction techniques can effectively model complex 3D scenes from sparse 2D views. However, automatically assessing the quality of novel views and identifying artifacts is challenging due to the lack of ground truth images and the limitations of No-Reference image metrics in predicting reliable artifact maps. The absence of such metrics hinders the assessment of the quality of novel views and limits the adoption of post-processing techniques, such as inpainting, to enhance reconstruction quality. To tackle this, recent work has established a new category of metrics (Cross-Reference), predicting image quality solely by leveraging context from alternate viewpoint captures (arXiv:2404.14409). In this work, we propose a new Cross-Reference metric, Puzzle Similarity, which is designed to localize artifacts in novel views. Our approach utilizes image patch statistics from the input views to establish a scene-specific distribution, later used to identify poorly reconstructed regions in the novel views. Given the lack of good measures to evaluate Cross-Reference methods in the context of 3D reconstruction, we collected a novel human-labeled dataset of artifact and distortion maps in unseen reconstructed views. Through this dataset, we demonstrate that our method achieves state-of-the-art localization of artifacts in novel views, correlating with human assessment, even without aligned references. We can leverage our new metric to enhance applications like automatic image restoration, guided acquisition, or 3D reconstruction from sparse inputs. Find the project page at https://nihermann.github.io/puzzlesim/ .
new_dataset
0.963403
2412.00139
Muhammad Huzaifa
Muhammad Huzaifa, Yova Kementchedjhieva
EFSA: Episodic Few-Shot Adaptation for Text-to-Image Retrieval
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Text-to-image retrieval is a critical task for managing diverse visual content, but common benchmarks for the task rely on small, single-domain datasets that fail to capture real-world complexity. Pre-trained vision-language models tend to perform well with easy negatives but struggle with hard negatives--visually similar yet incorrect images--especially in open-domain scenarios. To address this, we introduce Episodic Few-Shot Adaptation (EFSA), a novel test-time framework that adapts pre-trained models dynamically to a query's domain by fine-tuning on top-k retrieved candidates and synthetic captions generated for them. EFSA improves performance across diverse domains while preserving generalization, as shown in evaluations on queries from eight highly distinct visual domains and an open-domain retrieval pool of over one million images. Our work highlights the potential of episodic few-shot adaptation to enhance robustness in the critical and understudied task of open-domain text-to-image retrieval.
[ { "version": "v1", "created": "Thu, 28 Nov 2024 17:09:20 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 09:54:42 GMT" } ]
2025-03-13T00:00:00
[ [ "Huzaifa", "Muhammad", "" ], [ "Kementchedjhieva", "Yova", "" ] ]
TITLE: EFSA: Episodic Few-Shot Adaptation for Text-to-Image Retrieval ABSTRACT: Text-to-image retrieval is a critical task for managing diverse visual content, but common benchmarks for the task rely on small, single-domain datasets that fail to capture real-world complexity. Pre-trained vision-language models tend to perform well with easy negatives but struggle with hard negatives--visually similar yet incorrect images--especially in open-domain scenarios. To address this, we introduce Episodic Few-Shot Adaptation (EFSA), a novel test-time framework that adapts pre-trained models dynamically to a query's domain by fine-tuning on top-k retrieved candidates and synthetic captions generated for them. EFSA improves performance across diverse domains while preserving generalization, as shown in evaluations on queries from eight highly distinct visual domains and an open-domain retrieval pool of over one million images. Our work highlights the potential of episodic few-shot adaptation to enhance robustness in the critical and understudied task of open-domain text-to-image retrieval.
no_new_dataset
0.9463
2412.00418
Yu Shi
Yu Shi, Yiqi Wang, WeiXuan Lang, Jiaxin Zhang, Pan Dong, Aiping Li
Mixture of Experts for Node Classification
null
null
null
null
cs.SI cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nodes in real-world graphs exhibit diverse patterns in numerous aspects, such as degree and homophily. However, most existing node predictors fail to capture a wide range of node patterns or to make predictions based on distinct node patterns, resulting in unsatisfactory classification performance. In this paper, we reveal that different node predictors are good at handling nodes with specific patterns and that applying one node predictor uniformly could lead to suboptimal results. To mitigate this gap, we propose a mixture of experts framework, MoE-NP, for node classification. Specifically, MoE-NP combines a mixture of node predictors and strategically selects models based on node patterns. Experimental results from a range of real-world datasets demonstrate significant performance improvements from MoE-NP.
[ { "version": "v1", "created": "Sat, 30 Nov 2024 10:05:03 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 12:33:46 GMT" } ]
2025-03-13T00:00:00
[ [ "Shi", "Yu", "" ], [ "Wang", "Yiqi", "" ], [ "Lang", "WeiXuan", "" ], [ "Zhang", "Jiaxin", "" ], [ "Dong", "Pan", "" ], [ "Li", "Aiping", "" ] ]
TITLE: Mixture of Experts for Node Classification ABSTRACT: Nodes in real-world graphs exhibit diverse patterns in numerous aspects, such as degree and homophily. However, most existing node predictors fail to capture a wide range of node patterns or to make predictions based on distinct node patterns, resulting in unsatisfactory classification performance. In this paper, we reveal that different node predictors are good at handling nodes with specific patterns and that applying one node predictor uniformly could lead to suboptimal results. To mitigate this gap, we propose a mixture of experts framework, MoE-NP, for node classification. Specifically, MoE-NP combines a mixture of node predictors and strategically selects models based on node patterns. Experimental results from a range of real-world datasets demonstrate significant performance improvements from MoE-NP.
no_new_dataset
0.95803
2412.01562
Miroslav Purkrabek
Miroslav Purkrabek and Jiri Matas
Detection, Pose Estimation and Segmentation for Multiple Bodies: Closing the Virtuous Circle
Code: https://mirapurkrabek.github.io/BBox-Mask-Pose
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Human pose estimation methods work well on isolated people but struggle with multiple-bodies-in-proximity scenarios. Previous work has addressed this problem by conditioning pose estimation by detected bounding boxes or keypoints, but overlooked instance masks. We propose to iteratively enforce mutual consistency of bounding boxes, instance masks, and poses. The introduced BBox-Mask-Pose (BMP) method uses three specialized models that improve each other's output in a closed loop. All models are adapted for mutual conditioning, which improves robustness in multi-body scenes. MaskPose, a new mask-conditioned pose estimation model, is the best among top-down approaches on OCHuman. BBox-Mask-Pose pushes SOTA on OCHuman dataset in all three tasks - detection, instance segmentation, and pose estimation. It also achieves SOTA performance on COCO pose estimation. The method is especially good in scenes with large instances overlap, where it improves detection by 39% over the baseline detector. With small specialized models and faster runtime, BMP is an effective alternative to large human-centered foundational models. Code and models are available on https://MiraPurkrabek.github.io/BBox-Mask-Pose.
[ { "version": "v1", "created": "Mon, 2 Dec 2024 14:50:15 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 14:28:25 GMT" } ]
2025-03-13T00:00:00
[ [ "Purkrabek", "Miroslav", "" ], [ "Matas", "Jiri", "" ] ]
TITLE: Detection, Pose Estimation and Segmentation for Multiple Bodies: Closing the Virtuous Circle ABSTRACT: Human pose estimation methods work well on isolated people but struggle with multiple-bodies-in-proximity scenarios. Previous work has addressed this problem by conditioning pose estimation by detected bounding boxes or keypoints, but overlooked instance masks. We propose to iteratively enforce mutual consistency of bounding boxes, instance masks, and poses. The introduced BBox-Mask-Pose (BMP) method uses three specialized models that improve each other's output in a closed loop. All models are adapted for mutual conditioning, which improves robustness in multi-body scenes. MaskPose, a new mask-conditioned pose estimation model, is the best among top-down approaches on OCHuman. BBox-Mask-Pose pushes SOTA on OCHuman dataset in all three tasks - detection, instance segmentation, and pose estimation. It also achieves SOTA performance on COCO pose estimation. The method is especially good in scenes with large instances overlap, where it improves detection by 39% over the baseline detector. With small specialized models and faster runtime, BMP is an effective alternative to large human-centered foundational models. Code and models are available on https://MiraPurkrabek.github.io/BBox-Mask-Pose.
no_new_dataset
0.952706
2412.02386
Blanca Lasheras-Hernandez
Blanca Lasheras-Hernandez, Klaus H. Strobl, Sergio Izquierdo, Tim Bodenm\"uller, Rudolph Triebel, and Javier Civera
Single-Shot Metric Depth from Focused Plenoptic Cameras
8 pages (6 for text + 2 for references), 6 figures, 2 tables. Accepted at IEEE ICRA 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Metric depth estimation from visual sensors is crucial for robots to perceive, navigate, and interact with their environment. Traditional range imaging setups, such as stereo or structured light cameras, face hassles including calibration, occlusions, and hardware demands, with accuracy limited by the baseline between cameras. Single- and multi-view monocular depth offers a more compact alternative, but is constrained by the unobservability of the metric scale. Light field imaging provides a promising solution for estimating metric depth by using a unique lens configuration through a single device. However, its application to single-view dense metric depth is under-addressed mainly due to the technology's high cost, the lack of public benchmarks, and proprietary geometrical models and software. Our work explores the potential of focused plenoptic cameras for dense metric depth. We propose a novel pipeline that predicts metric depth from a single plenoptic camera shot by first generating a sparse metric point cloud using machine learning, which is then used to scale and align a dense relative depth map regressed by a foundation depth model, resulting in dense metric depth. To validate it, we curated the Light Field & Stereo Image Dataset (LFS) of real-world light field images with stereo depth labels, filling a current gap in existing resources. Experimental results show that our pipeline produces accurate metric depth predictions, laying a solid groundwork for future research in this field.
[ { "version": "v1", "created": "Tue, 3 Dec 2024 11:21:17 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 13:31:15 GMT" } ]
2025-03-13T00:00:00
[ [ "Lasheras-Hernandez", "Blanca", "" ], [ "Strobl", "Klaus H.", "" ], [ "Izquierdo", "Sergio", "" ], [ "Bodenmüller", "Tim", "" ], [ "Triebel", "Rudolph", "" ], [ "Civera", "Javier", "" ] ]
TITLE: Single-Shot Metric Depth from Focused Plenoptic Cameras ABSTRACT: Metric depth estimation from visual sensors is crucial for robots to perceive, navigate, and interact with their environment. Traditional range imaging setups, such as stereo or structured light cameras, face hassles including calibration, occlusions, and hardware demands, with accuracy limited by the baseline between cameras. Single- and multi-view monocular depth offers a more compact alternative, but is constrained by the unobservability of the metric scale. Light field imaging provides a promising solution for estimating metric depth by using a unique lens configuration through a single device. However, its application to single-view dense metric depth is under-addressed mainly due to the technology's high cost, the lack of public benchmarks, and proprietary geometrical models and software. Our work explores the potential of focused plenoptic cameras for dense metric depth. We propose a novel pipeline that predicts metric depth from a single plenoptic camera shot by first generating a sparse metric point cloud using machine learning, which is then used to scale and align a dense relative depth map regressed by a foundation depth model, resulting in dense metric depth. To validate it, we curated the Light Field & Stereo Image Dataset (LFS) of real-world light field images with stereo depth labels, filling a current gap in existing resources. Experimental results show that our pipeline produces accurate metric depth predictions, laying a solid groundwork for future research in this field.
no_new_dataset
0.888662
2412.04106
Haoning Wu
Haoning Wu, Ziheng Zhao, Ya Zhang, Yanfeng Wang, Weidi Xie
MRGen: Segmentation Data Engine For Underrepresented MRI Modalities
Technical Report; Project Page: https://haoningwu3639.github.io/MRGen/
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training medical image segmentation models for rare yet clinically significant imaging modalities is challenging due to the scarcity of annotated data, and manual mask annotations can be costly and labor-intensive to acquire. This paper investigates leveraging generative models to synthesize training data, to train segmentation models for underrepresented modalities, particularly on annotation-scarce MRI. Concretely, our contributions are threefold: (i) we introduce MRGen-DB, a large-scale radiology image-text dataset comprising extensive samples with rich metadata, including modality labels, attributes, regions, and organ information, with a subset having pixelwise mask annotations; (ii) we present MRGen, a diffusion-based data engine for controllable medical image synthesis, conditioned on text prompts and segmentation masks. MRGen can generate realistic images for diverse MRI modalities lacking mask annotations, facilitating segmentation training in low-source domains; (iii) extensive experiments across multiple modalities demonstrate that MRGen significantly improves segmentation performance on unannotated modalities by providing high-quality synthetic data. We believe that our method bridges a critical gap in medical image analysis, extending segmentation capabilities to scenarios where manual annotations are challenging to acquire.
[ { "version": "v1", "created": "Wed, 4 Dec 2024 16:34:22 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 11:59:46 GMT" } ]
2025-03-13T00:00:00
[ [ "Wu", "Haoning", "" ], [ "Zhao", "Ziheng", "" ], [ "Zhang", "Ya", "" ], [ "Wang", "Yanfeng", "" ], [ "Xie", "Weidi", "" ] ]
TITLE: MRGen: Segmentation Data Engine For Underrepresented MRI Modalities ABSTRACT: Training medical image segmentation models for rare yet clinically significant imaging modalities is challenging due to the scarcity of annotated data, and manual mask annotations can be costly and labor-intensive to acquire. This paper investigates leveraging generative models to synthesize training data, to train segmentation models for underrepresented modalities, particularly on annotation-scarce MRI. Concretely, our contributions are threefold: (i) we introduce MRGen-DB, a large-scale radiology image-text dataset comprising extensive samples with rich metadata, including modality labels, attributes, regions, and organ information, with a subset having pixelwise mask annotations; (ii) we present MRGen, a diffusion-based data engine for controllable medical image synthesis, conditioned on text prompts and segmentation masks. MRGen can generate realistic images for diverse MRI modalities lacking mask annotations, facilitating segmentation training in low-source domains; (iii) extensive experiments across multiple modalities demonstrate that MRGen significantly improves segmentation performance on unannotated modalities by providing high-quality synthetic data. We believe that our method bridges a critical gap in medical image analysis, extending segmentation capabilities to scenarios where manual annotations are challenging to acquire.
new_dataset
0.964656
2412.06485
Aylar Partovizadeh
Aylar Partovizadeh, Sebastian Sch\"ops, Dimitrios Loukrezis
Fourier-enhanced reduced-order surrogate modeling for uncertainty quantification in electric machine design
null
null
10.1007/s00366-025-02123-1
null
cs.CE
http://creativecommons.org/licenses/by-nc-nd/4.0/
This work proposes a data-driven surrogate modeling framework for cost-effectively inferring the torque of a permanent magnet synchronous machine under geometric design variations. The framework is separated into a reduced-order modeling and an inference part. Given a dataset of torque signals, each corresponding to a different set of design parameters, torque dimension is first reduced by post-processing a discrete Fourier transform and keeping a reduced number of frequency components. This allows to take advantage of torque periodicity and preserve physical information contained in the frequency components. Next, a response surface model is computed by means of machine learning regression, which maps the design parameters to the reduced frequency components. The response surface models of choice are polynomial chaos expansions, feedforward neural networks, and Gaussian processes. Torque inference is performed by evaluating the response surface model for new design parameters and then inverting the dimension reduction. Numerical results show that the resulting surrogate models lead to sufficiently accurate torque predictions for previously unseen design configurations. The framework is found to be significantly advantageous compared to approximating the original (not reduced) torque signal directly, as well as slightly advantageous compared to using principal component analysis for dimension reduction. The combination of discrete Fourier transform-based dimension reduction with Gaussian process-based response surfaces yields the best-in-class surrogate model for this use case. The surrogate models replace the original, high-fidelity model in Monte Carlo-based uncertainty quantification studies, where they provide accurate torque statistics estimates at significantly reduced computational cost.
[ { "version": "v1", "created": "Mon, 9 Dec 2024 13:35:28 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 08:57:50 GMT" } ]
2025-03-13T00:00:00
[ [ "Partovizadeh", "Aylar", "" ], [ "Schöps", "Sebastian", "" ], [ "Loukrezis", "Dimitrios", "" ] ]
TITLE: Fourier-enhanced reduced-order surrogate modeling for uncertainty quantification in electric machine design ABSTRACT: This work proposes a data-driven surrogate modeling framework for cost-effectively inferring the torque of a permanent magnet synchronous machine under geometric design variations. The framework is separated into a reduced-order modeling and an inference part. Given a dataset of torque signals, each corresponding to a different set of design parameters, torque dimension is first reduced by post-processing a discrete Fourier transform and keeping a reduced number of frequency components. This allows to take advantage of torque periodicity and preserve physical information contained in the frequency components. Next, a response surface model is computed by means of machine learning regression, which maps the design parameters to the reduced frequency components. The response surface models of choice are polynomial chaos expansions, feedforward neural networks, and Gaussian processes. Torque inference is performed by evaluating the response surface model for new design parameters and then inverting the dimension reduction. Numerical results show that the resulting surrogate models lead to sufficiently accurate torque predictions for previously unseen design configurations. The framework is found to be significantly advantageous compared to approximating the original (not reduced) torque signal directly, as well as slightly advantageous compared to using principal component analysis for dimension reduction. The combination of discrete Fourier transform-based dimension reduction with Gaussian process-based response surfaces yields the best-in-class surrogate model for this use case. The surrogate models replace the original, high-fidelity model in Monte Carlo-based uncertainty quantification studies, where they provide accurate torque statistics estimates at significantly reduced computational cost.
no_new_dataset
0.949342
2412.07923
Sagi Shaier
Sagi Shaier, Mario Sanz-Guerrero, Katharina von der Wense
Asking Again and Again: Exploring LLM Robustness to Repeated Questions
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This study investigates whether repeating questions within prompts influences the performance of large language models (LLMs). We hypothesize that reiterating a question within a single prompt might enhance the model's focus on key elements of the query. We evaluate five recent LLMs -- including GPT-4o-mini, DeepSeek-V3, and smaller open-source models -- on three reading comprehension datasets under different prompt settings, varying question repetition levels (1, 3, or 5 times per prompt). Our results demonstrate that question repetition can increase models' accuracy by up to $6\%$. However, across all models, settings, and datasets, we do not find the result statistically significant. These findings provide insights into prompt design and LLM behavior, suggesting that repetition alone does not significantly impact output quality.
[ { "version": "v1", "created": "Tue, 10 Dec 2024 21:09:12 GMT" }, { "version": "v2", "created": "Sat, 8 Mar 2025 16:42:51 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 13:48:12 GMT" } ]
2025-03-13T00:00:00
[ [ "Shaier", "Sagi", "" ], [ "Sanz-Guerrero", "Mario", "" ], [ "von der Wense", "Katharina", "" ] ]
TITLE: Asking Again and Again: Exploring LLM Robustness to Repeated Questions ABSTRACT: This study investigates whether repeating questions within prompts influences the performance of large language models (LLMs). We hypothesize that reiterating a question within a single prompt might enhance the model's focus on key elements of the query. We evaluate five recent LLMs -- including GPT-4o-mini, DeepSeek-V3, and smaller open-source models -- on three reading comprehension datasets under different prompt settings, varying question repetition levels (1, 3, or 5 times per prompt). Our results demonstrate that question repetition can increase models' accuracy by up to $6\%$. However, across all models, settings, and datasets, we do not find the result statistically significant. These findings provide insights into prompt design and LLM behavior, suggesting that repetition alone does not significantly impact output quality.
no_new_dataset
0.943556
2412.10211
Paula Daud\'en-Oliver
Paula Daud\'en-Oliver and David Agost-Beltran and Emilio Sansano-Sansano and Valero Laparra and Jes\'us Malo and Marina Mart\'inez-Garcia
RAID-Database: human Responses to Affine Image Distortions
null
null
null
null
cs.CV q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Image quality databases are used to train models for predicting subjective human perception. However, most existing databases focus on distortions commonly found in digital media and not in natural conditions. Affine transformations are particularly relevant to study, as they are among the most commonly encountered by human observers in everyday life. This Data Descriptor presents a set of human responses to suprathreshold affine image transforms (rotation, translation, scaling) and Gaussian noise as a convenient reference to compare with previously existing image quality databases. The responses were measured using well-established psychophysics: the Maximum Likelihood Difference Scaling method. The set contains responses to 864 distorted images. The experiments involved 105 observers and more than 20000 comparisons of quadruples of images. The quality of the dataset is ensured because (a) it reproduces the classical Pi\'eron's law, (b) it reproduces classical absolute detection thresholds, and (c) it is consistent with conventional image quality databases but improves them according to Group-MAD experiments.
[ { "version": "v1", "created": "Fri, 13 Dec 2024 15:34:34 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 15:12:43 GMT" } ]
2025-03-13T00:00:00
[ [ "Daudén-Oliver", "Paula", "" ], [ "Agost-Beltran", "David", "" ], [ "Sansano-Sansano", "Emilio", "" ], [ "Laparra", "Valero", "" ], [ "Malo", "Jesús", "" ], [ "Martínez-Garcia", "Marina", "" ] ]
TITLE: RAID-Database: human Responses to Affine Image Distortions ABSTRACT: Image quality databases are used to train models for predicting subjective human perception. However, most existing databases focus on distortions commonly found in digital media and not in natural conditions. Affine transformations are particularly relevant to study, as they are among the most commonly encountered by human observers in everyday life. This Data Descriptor presents a set of human responses to suprathreshold affine image transforms (rotation, translation, scaling) and Gaussian noise as a convenient reference to compare with previously existing image quality databases. The responses were measured using well-established psychophysics: the Maximum Likelihood Difference Scaling method. The set contains responses to 864 distorted images. The experiments involved 105 observers and more than 20000 comparisons of quadruples of images. The quality of the dataset is ensured because (a) it reproduces the classical Pi\'eron's law, (b) it reproduces classical absolute detection thresholds, and (c) it is consistent with conventional image quality databases but improves them according to Group-MAD experiments.
no_new_dataset
0.876264
2412.10488
Zehao Chen
Zehao Chen, Rong Pan
SVGBuilder: Component-Based Colored SVG Generation with Text-Guided Autoregressive Transformers
Project: https://svgbuilder.github.io
null
null
null
cs.CV cs.AI cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scalable Vector Graphics (SVG) are essential XML-based formats for versatile graphics, offering resolution independence and scalability. Unlike raster images, SVGs use geometric shapes and support interactivity, animation, and manipulation via CSS and JavaScript. Current SVG generation methods face challenges related to high computational costs and complexity. In contrast, human designers use component-based tools for efficient SVG creation. Inspired by this, SVGBuilder introduces a component-based, autoregressive model for generating high-quality colored SVGs from textual input. It significantly reduces computational overhead and improves efficiency compared to traditional methods. Our model generates SVGs up to 604 times faster than optimization-based approaches. To address the limitations of existing SVG datasets and support our research, we introduce ColorSVG-100K, the first large-scale dataset of colored SVGs, comprising 100,000 graphics. This dataset fills the gap in color information for SVG generation models and enhances diversity in model training. Evaluation against state-of-the-art models demonstrates SVGBuilder's superior performance in practical applications, highlighting its efficiency and quality in generating complex SVG graphics.
[ { "version": "v1", "created": "Fri, 13 Dec 2024 15:24:11 GMT" }, { "version": "v2", "created": "Tue, 17 Dec 2024 16:13:15 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 14:34:11 GMT" } ]
2025-03-13T00:00:00
[ [ "Chen", "Zehao", "" ], [ "Pan", "Rong", "" ] ]
TITLE: SVGBuilder: Component-Based Colored SVG Generation with Text-Guided Autoregressive Transformers ABSTRACT: Scalable Vector Graphics (SVG) are essential XML-based formats for versatile graphics, offering resolution independence and scalability. Unlike raster images, SVGs use geometric shapes and support interactivity, animation, and manipulation via CSS and JavaScript. Current SVG generation methods face challenges related to high computational costs and complexity. In contrast, human designers use component-based tools for efficient SVG creation. Inspired by this, SVGBuilder introduces a component-based, autoregressive model for generating high-quality colored SVGs from textual input. It significantly reduces computational overhead and improves efficiency compared to traditional methods. Our model generates SVGs up to 604 times faster than optimization-based approaches. To address the limitations of existing SVG datasets and support our research, we introduce ColorSVG-100K, the first large-scale dataset of colored SVGs, comprising 100,000 graphics. This dataset fills the gap in color information for SVG generation models and enhances diversity in model training. Evaluation against state-of-the-art models demonstrates SVGBuilder's superior performance in practical applications, highlighting its efficiency and quality in generating complex SVG graphics.
new_dataset
0.956877
2412.11464
Quan-Sheng Zeng
Quan-Sheng Zeng, Yunheng Li, Daquan Zhou, Guanbin Li, Qibin Hou, Ming-Ming Cheng
High-Quality Mask Tuning Matters for Open-Vocabulary Segmentation
Revised version according to comments from reviewers of ICLR2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open-vocabulary image segmentation has been advanced through the synergy between mask generators and vision-language models like Contrastive Language-Image Pre-training (CLIP). Previous approaches focus on generating masks while aligning mask features with text embeddings during training. In this paper, we observe that relying on generated low-quality masks can weaken the alignment of vision and language in regional representations. This motivates us to present a new fine-tuning framework, named MaskCLIP++, which uses ground-truth masks instead of generated masks to enhance the mask classification capability of CLIP. Due to the limited diversity of image segmentation datasets with mask annotations, we propose incorporating a consistency alignment principle during fine-tuning, which alleviates categorical bias toward the fine-tuning dataset. After low-cost fine-tuning, MaskCLIP++ significantly improves the mask classification performance on multi-domain datasets. When combined with the mask generators of previous state-of-the-art mask-based open-vocabulary segmentation methods, our framework achieves performance improvements of +1.7, +2.3, +2.1, +3.1, and +0.3 mIoU on the A-847, PC-459, A-150, PC-59, and PAS-20 datasets, respectively. Code is available at https://github.com/HVision-NKU/MaskCLIPpp.
[ { "version": "v1", "created": "Mon, 16 Dec 2024 05:44:45 GMT" }, { "version": "v2", "created": "Tue, 24 Dec 2024 04:13:08 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 08:04:32 GMT" } ]
2025-03-13T00:00:00
[ [ "Zeng", "Quan-Sheng", "" ], [ "Li", "Yunheng", "" ], [ "Zhou", "Daquan", "" ], [ "Li", "Guanbin", "" ], [ "Hou", "Qibin", "" ], [ "Cheng", "Ming-Ming", "" ] ]
TITLE: High-Quality Mask Tuning Matters for Open-Vocabulary Segmentation ABSTRACT: Open-vocabulary image segmentation has been advanced through the synergy between mask generators and vision-language models like Contrastive Language-Image Pre-training (CLIP). Previous approaches focus on generating masks while aligning mask features with text embeddings during training. In this paper, we observe that relying on generated low-quality masks can weaken the alignment of vision and language in regional representations. This motivates us to present a new fine-tuning framework, named MaskCLIP++, which uses ground-truth masks instead of generated masks to enhance the mask classification capability of CLIP. Due to the limited diversity of image segmentation datasets with mask annotations, we propose incorporating a consistency alignment principle during fine-tuning, which alleviates categorical bias toward the fine-tuning dataset. After low-cost fine-tuning, MaskCLIP++ significantly improves the mask classification performance on multi-domain datasets. When combined with the mask generators of previous state-of-the-art mask-based open-vocabulary segmentation methods, our framework achieves performance improvements of +1.7, +2.3, +2.1, +3.1, and +0.3 mIoU on the A-847, PC-459, A-150, PC-59, and PAS-20 datasets, respectively. Code is available at https://github.com/HVision-NKU/MaskCLIPpp.
no_new_dataset
0.948585
2412.14569
Chaoqun Liu
Chaoqun Liu, Xuanpeng Li, Chen Gong, Guangyu Li
Global Spatio-Temporal Fusion-based Traffic Prediction Algorithm with Anomaly Aware
null
GLOBECOM 2024 - 2024 IEEE Global Communications Conference
10.1109/GLOBECOM52923.2024.10901114
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traffic prediction is an indispensable component of urban planning and traffic management. Achieving accurate traffic prediction hinges on the ability to capture the potential spatio-temporal relationships among road sensors. However, the majority of existing works focus on local short-term spatio-temporal correlations, failing to fully consider the interactions of different sensors in the long-term state. In addition, these works do not analyze the influences of anomalous factors, or have insufficient ability to extract personalized features of anomalous factors, which prevents them from effectively capturing their spatio-temporal influences on traffic prediction. To address the aforementioned issues, we propose a global spatio-temporal fusion-based traffic prediction algorithm that incorporates anomaly awareness. Initially, based on the designed anomaly detection network, we construct an efficient anomalous factors impacting module (AFIM) to evaluate the spatio-temporal impact of unexpected external events on traffic prediction. Furthermore, we propose a multi-scale spatio-temporal feature fusion module (MTSFFL) based on the transformer architecture, to capture all possible long- and short-term correlations among different sensors in a wide-area traffic environment for accurate prediction of traffic flow. Finally, experiments are conducted on real-world public transportation datasets (PEMS04 and PEMS08) to demonstrate that our approach can achieve state-of-the-art performance.
[ { "version": "v1", "created": "Thu, 19 Dec 2024 06:40:21 GMT" } ]
2025-03-13T00:00:00
[ [ "Liu", "Chaoqun", "" ], [ "Li", "Xuanpeng", "" ], [ "Gong", "Chen", "" ], [ "Li", "Guangyu", "" ] ]
TITLE: Global Spatio-Temporal Fusion-based Traffic Prediction Algorithm with Anomaly Aware ABSTRACT: Traffic prediction is an indispensable component of urban planning and traffic management. Achieving accurate traffic prediction hinges on the ability to capture the potential spatio-temporal relationships among road sensors. However, the majority of existing works focus on local short-term spatio-temporal correlations, failing to fully consider the interactions of different sensors in the long-term state. In addition, these works do not analyze the influences of anomalous factors, or have insufficient ability to extract personalized features of anomalous factors, which prevents them from effectively capturing their spatio-temporal influences on traffic prediction. To address the aforementioned issues, we propose a global spatio-temporal fusion-based traffic prediction algorithm that incorporates anomaly awareness. Initially, based on the designed anomaly detection network, we construct an efficient anomalous factors impacting module (AFIM) to evaluate the spatio-temporal impact of unexpected external events on traffic prediction. Furthermore, we propose a multi-scale spatio-temporal feature fusion module (MTSFFL) based on the transformer architecture, to capture all possible long- and short-term correlations among different sensors in a wide-area traffic environment for accurate prediction of traffic flow. Finally, experiments are conducted on real-world public transportation datasets (PEMS04 and PEMS08) to demonstrate that our approach can achieve state-of-the-art performance.
no_new_dataset
0.945901
2412.15341
Reza Shirkavand
Reza Shirkavand, Peiran Yu, Shangqian Gao, Gowthami Somepalli, Tom Goldstein, Heng Huang
Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models
CVPR 2025
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent advances in diffusion generative models have yielded remarkable progress. While the quality of generated content continues to improve, these models have grown considerably in size and complexity. This increasing computational burden poses significant challenges, particularly in resource-constrained deployment scenarios such as mobile devices. The combination of model pruning and knowledge distillation has emerged as a promising solution to reduce computational demands while preserving generation quality. However, this technique inadvertently propagates undesirable behaviors, including the generation of copyrighted content and unsafe concepts, even when such instances are absent from the fine-tuning dataset. In this paper, we propose a novel bilevel optimization framework for pruned diffusion models that consolidates the fine-tuning and unlearning processes into a unified phase. Our approach maintains the principal advantages of distillation, namely efficient convergence and style transfer capabilities, while selectively suppressing the generation of unwanted content. This plug-in framework is compatible with various pruning and concept unlearning methods, facilitating efficient, safe deployment of diffusion models in controlled environments.
[ { "version": "v1", "created": "Thu, 19 Dec 2024 19:13:18 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 20:52:10 GMT" } ]
2025-03-13T00:00:00
[ [ "Shirkavand", "Reza", "" ], [ "Yu", "Peiran", "" ], [ "Gao", "Shangqian", "" ], [ "Somepalli", "Gowthami", "" ], [ "Goldstein", "Tom", "" ], [ "Huang", "Heng", "" ] ]
TITLE: Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models ABSTRACT: Recent advances in diffusion generative models have yielded remarkable progress. While the quality of generated content continues to improve, these models have grown considerably in size and complexity. This increasing computational burden poses significant challenges, particularly in resource-constrained deployment scenarios such as mobile devices. The combination of model pruning and knowledge distillation has emerged as a promising solution to reduce computational demands while preserving generation quality. However, this technique inadvertently propagates undesirable behaviors, including the generation of copyrighted content and unsafe concepts, even when such instances are absent from the fine-tuning dataset. In this paper, we propose a novel bilevel optimization framework for pruned diffusion models that consolidates the fine-tuning and unlearning processes into a unified phase. Our approach maintains the principal advantages of distillation, namely efficient convergence and style transfer capabilities, while selectively suppressing the generation of unwanted content. This plug-in framework is compatible with various pruning and concept unlearning methods, facilitating efficient, safe deployment of diffusion models in controlled environments.
no_new_dataset
0.945651
2412.19950
Christian Friedrich
Eric Hirsch and Christian Friedrich
Data-driven tool wear prediction in milling, based on a process-integrated single-sensor approach
This preprint has been submitted to Robotics and Computer-Integrated Manufacturing for possible publication ,14 pages, 12 figures
null
null
null
cs.LG cs.RO eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate tool wear prediction is essential for maintaining productivity and minimizing costs in machining. However, the complex nature of the tool wear process poses significant challenges to achieving reliable predictions. This study explores data-driven methods, in particular deep learning, for tool wear prediction. Traditional data-driven approaches often focus on a single process, relying on multi-sensor setups and extensive data generation, which limits generalization to new settings. Moreover, multi-sensor integration is often impractical in industrial environments. To address these limitations, this research investigates the transferability of predictive models using minimal training data, validated across two processes. Furthermore, it uses a simple setup with a single acceleration sensor to establish a low-cost data generation approach that facilitates the generalization of models to other processes via transfer learning. The study evaluates several machine learning models, including transformer-inspired convolutional neural networks (CNN), long short-term memory networks (LSTM), support vector machines (SVM), and decision trees, trained on different input formats such as feature vectors and short-time Fourier transform (STFT). The performance of the models is evaluated on two machines and on different amounts of training data, including scenarios with significantly reduced datasets, providing insight into their effectiveness under constrained data conditions. The results demonstrate the potential of specific models and configurations for effective tool wear prediction, contributing to the development of more adaptable and efficient predictive maintenance strategies in machining. Notably, the ConvNeXt model shows exceptional performance, achieving 99.1% accuracy in identifying tool wear using data from only four milling tools operated until they are worn.
[ { "version": "v1", "created": "Fri, 27 Dec 2024 23:10:32 GMT" }, { "version": "v2", "created": "Tue, 7 Jan 2025 14:35:01 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 18:20:38 GMT" } ]
2025-03-13T00:00:00
[ [ "Hirsch", "Eric", "" ], [ "Friedrich", "Christian", "" ] ]
TITLE: Data-driven tool wear prediction in milling, based on a process-integrated single-sensor approach ABSTRACT: Accurate tool wear prediction is essential for maintaining productivity and minimizing costs in machining. However, the complex nature of the tool wear process poses significant challenges to achieving reliable predictions. This study explores data-driven methods, in particular deep learning, for tool wear prediction. Traditional data-driven approaches often focus on a single process, relying on multi-sensor setups and extensive data generation, which limits generalization to new settings. Moreover, multi-sensor integration is often impractical in industrial environments. To address these limitations, this research investigates the transferability of predictive models using minimal training data, validated across two processes. Furthermore, it uses a simple setup with a single acceleration sensor to establish a low-cost data generation approach that facilitates the generalization of models to other processes via transfer learning. The study evaluates several machine learning models, including transformer-inspired convolutional neural networks (CNN), long short-term memory networks (LSTM), support vector machines (SVM), and decision trees, trained on different input formats such as feature vectors and short-time Fourier transform (STFT). The performance of the models is evaluated on two machines and on different amounts of training data, including scenarios with significantly reduced datasets, providing insight into their effectiveness under constrained data conditions. The results demonstrate the potential of specific models and configurations for effective tool wear prediction, contributing to the development of more adaptable and efficient predictive maintenance strategies in machining. Notably, the ConvNeXt model shows exceptional performance, achieving 99.1% accuracy in identifying tool wear using data from only four milling tools operated until they are worn.
no_new_dataset
0.944791
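The tool-wear record above feeds models with inputs such as statistical feature vectors and short-time Fourier transforms of a single acceleration channel. Purely as an illustration of that style of preprocessing (not the authors' pipeline), here is a minimal Python sketch; the sampling rate, window length, and choice of summary features are assumptions.

```python
import numpy as np
from scipy.signal import stft

FS = 10_000        # sampling rate in Hz (assumed)
NPERSEG = 1024     # STFT window length (assumed)

def stft_magnitude(accel, fs=FS, nperseg=NPERSEG):
    """Log-magnitude spectrogram of a single acceleration channel."""
    _, _, zxx = stft(accel, fs=fs, nperseg=nperseg)
    return np.log1p(np.abs(zxx))            # shape: (freq_bins, time_frames)

def summary_features(accel):
    """A few classic time-domain features often fed to SVMs or decision trees."""
    rms = np.sqrt(np.mean(accel ** 2))
    kurtosis = np.mean((accel - accel.mean()) ** 4) / (accel.std() ** 4 + 1e-12)
    crest = np.max(np.abs(accel)) / (rms + 1e-12)
    return np.array([rms, kurtosis, crest])

# Synthetic stand-in for one recorded milling pass.
t = np.arange(0, 1.0, 1.0 / FS)
signal = np.sin(2 * np.pi * 180 * t) + 0.3 * np.random.randn(t.size)
print(stft_magnitude(signal).shape)
print(summary_features(signal))
```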
2412.21197
Yang Chen
Yang Chen, Sheng Guo, Bo Zheng and Limin Wang
A Large-Scale Study on Video Action Dataset Condensation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recently, dataset condensation has made significant progress in the image domain. Unlike images, videos possess an additional temporal dimension, which harbors considerable redundant information, making condensation even more crucial. However, video dataset condensation still remains an underexplored area. We aim to bridge this gap by providing a large-scale study with systematic design and fair comparison. Specifically, our work delves into three key aspects to provide valuable empirical insights: (1) temporal processing of video data, (2) the evaluation protocol for video dataset condensation, and (3) adaptation of condensation algorithms to the space-time domain. From this study, we derive several intriguing observations: (i) labeling methods greatly influence condensation performance, (ii) simple sliding-window sampling is effective for temporal processing, and (iii) dataset distillation methods perform better in challenging scenarios, while sample selection methods excel in easier ones. Furthermore, we propose a unified evaluation protocol for the fair comparison of different condensation algorithms and achieve state-of-the-art results on four widely-used action recognition datasets: HMDB51, UCF101, SSv2 and K400. Our code is available at https://github.com/MCG-NJU/Video-DC.
[ { "version": "v1", "created": "Mon, 30 Dec 2024 18:58:29 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 03:28:28 GMT" } ]
2025-03-13T00:00:00
[ [ "Chen", "Yang", "" ], [ "Guo", "Sheng", "" ], [ "Zheng", "Bo", "" ], [ "Wang", "Limin", "" ] ]
TITLE: A Large-Scale Study on Video Action Dataset Condensation ABSTRACT: Recently, dataset condensation has made significant progress in the image domain. Unlike images, videos possess an additional temporal dimension, which harbors considerable redundant information, making condensation even more crucial. However, video dataset condensation still remains an underexplored area. We aim to bridge this gap by providing a large-scale study with systematic design and fair comparison. Specifically, our work delves into three key aspects to provide valuable empirical insights: (1) temporal processing of video data, (2) the evaluation protocol for video dataset condensation, and (3) adaptation of condensation algorithms to the space-time domain. From this study, we derive several intriguing observations: (i) labeling methods greatly influence condensation performance, (ii) simple sliding-window sampling is effective for temporal processing, and (iii) dataset distillation methods perform better in challenging scenarios, while sample selection methods excel in easier ones. Furthermore, we propose a unified evaluation protocol for the fair comparison of different condensation algorithms and achieve state-of-the-art results on four widely-used action recognition datasets: HMDB51, UCF101, SSv2 and K400. Our code is available at https://github.com/MCG-NJU/Video-DC.
no_new_dataset
0.9463
2501.01046
Youngjun Son
Youngjun Son, Chaewon Kim, Jaejin Lee
FED: Fast and Efficient Dataset Deduplication Framework with GPU Acceleration
13 pages, 4 figures
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Dataset deduplication plays a crucial role in enhancing data quality, ultimately improving the training performance and efficiency of large language models. A commonly used method for data deduplication is the MinHash LSH algorithm. Recently, NVIDIA introduced a GPU-based MinHash LSH deduplication method, but it remains suboptimal, leaving room for further improvement in processing efficiency. This paper proposes a GPU-accelerated deduplication framework, FED, that optimizes MinHash LSH for GPU clusters and leverages computationally efficient, partially reusable non-cryptographic hash functions. FED significantly outperforms the CPU-based deduplication tool in SlimPajama (using 64 logical CPU cores) by up to 107.2 times and the GPU-based tool in NVIDIA NeMo Curator by up to 6.3 times when processing 30 million documents on a node with four GPUs. Notably, our method dramatically accelerates the previously time-consuming MinHash signature generation phase, achieving speed-ups of up to 260 times compared to the CPU baseline. Despite these gains in efficiency, FED maintains high deduplication quality, with the duplicate document sets reaching a Jaccard similarity of over 0.96 compared to those identified by the standard MinHash algorithm. In large-scale experiments, the deduplication of 1.2 trillion tokens is completed in just 6 hours in a four-node, 16-GPU environment. The related code is publicly available on GitHub (https://github.com/mcrl/FED).
[ { "version": "v1", "created": "Thu, 2 Jan 2025 04:11:23 GMT" }, { "version": "v2", "created": "Sun, 16 Feb 2025 07:56:11 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 13:36:32 GMT" } ]
2025-03-13T00:00:00
[ [ "Son", "Youngjun", "" ], [ "Kim", "Chaewon", "" ], [ "Lee", "Jaejin", "" ] ]
TITLE: FED: Fast and Efficient Dataset Deduplication Framework with GPU Acceleration ABSTRACT: Dataset deduplication plays a crucial role in enhancing data quality, ultimately improving the training performance and efficiency of large language models. A commonly used method for data deduplication is the MinHash LSH algorithm. Recently, NVIDIA introduced a GPU-based MinHash LSH deduplication method, but it remains suboptimal, leaving room for further improvement in processing efficiency. This paper proposes a GPU-accelerated deduplication framework, FED, that optimizes MinHash LSH for GPU clusters and leverages computationally efficient, partially reusable non-cryptographic hash functions. FED significantly outperforms the CPU-based deduplication tool in SlimPajama (using 64 logical CPU cores) by up to 107.2 times and the GPU-based tool in NVIDIA NeMo Curator by up to 6.3 times when processing 30 million documents on a node with four GPUs. Notably, our method dramatically accelerates the previously time-consuming MinHash signature generation phase, achieving speed-ups of up to 260 times compared to the CPU baseline. Despite these gains in efficiency, FED maintains high deduplication quality, with the duplicate document sets reaching a Jaccard similarity of over 0.96 compared to those identified by the standard MinHash algorithm. In large-scale experiments, the deduplication of 1.2 trillion tokens is completed in just 6 hours in a four-node, 16-GPU environment. The related code is publicly available on GitHub (https://github.com/mcrl/FED).
no_new_dataset
0.952264
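The FED record above is built around MinHash LSH deduplication. As background only, the following pure-Python sketch shows the general CPU-side idea (MinHash signatures over word shingles plus LSH banding); it is not the paper's GPU implementation, and the shingle size, signature length, and band count are assumptions.

```python
import hashlib
from collections import defaultdict

NUM_PERM = 128   # signature length (assumed)
BANDS = 32       # LSH bands; rows per band = NUM_PERM // BANDS
SHINGLE = 5      # word-shingle size (assumed)

def shingles(text, k=SHINGLE):
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def minhash_signature(doc_shingles, num_perm=NUM_PERM):
    # One keyed hash per signature slot; the minimum over shingles approximates a min-hash.
    sig = []
    for seed in range(num_perm):
        salt = seed.to_bytes(8, "little")
        sig.append(min(
            int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8, salt=salt).digest(), "little")
            for s in doc_shingles
        ))
    return sig

def lsh_candidates(signatures, bands=BANDS):
    # Documents whose signatures agree on any full band become candidate duplicates.
    rows = len(next(iter(signatures.values()))) // bands
    buckets = defaultdict(set)
    for doc_id, sig in signatures.items():
        for b in range(bands):
            buckets[(b, tuple(sig[b * rows:(b + 1) * rows]))].add(doc_id)
    return {frozenset(ids) for ids in buckets.values() if len(ids) > 1}

docs = {"a": "the quick brown fox jumps over the lazy dog today",
        "b": "the quick brown fox jumps over the lazy dog yesterday",
        "c": "completely unrelated text about gpu accelerated deduplication"}
sigs = {name: minhash_signature(shingles(text)) for name, text in docs.items()}
print(lsh_candidates(sigs))   # expected to flag {"a", "b"} as near-duplicates
```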
2501.02749
Zhongjin Xu
Hao Luo, Jianjun Wei, Shuchen Zhao, Ankai Liang, Zhongjin Xu, Ruxue Jiang
Intelligent logistics management robot path planning algorithm integrating transformer and GCN network
21 pages
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This research delves into advanced route optimization for robots in smart logistics, leveraging a fusion of Transformer architectures, Graph Neural Networks (GNNs), and Generative Adversarial Networks (GANs). The approach utilizes a graph-based representation encompassing geographical data, cargo allocation, and robot dynamics, addressing both spatial and resource limitations to refine route efficiency. Through extensive testing with authentic logistics datasets, the proposed method achieves notable improvements, including a 15% reduction in travel distance, a 20% boost in time efficiency, and a 10% decrease in energy consumption. These findings highlight the algorithm's effectiveness, promoting enhanced performance in intelligent logistics operations.
[ { "version": "v1", "created": "Mon, 6 Jan 2025 03:53:02 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 03:29:21 GMT" } ]
2025-03-13T00:00:00
[ [ "Luo", "Hao", "" ], [ "Wei", "Jianjun", "" ], [ "Zhao", "Shuchen", "" ], [ "Liang", "Ankai", "" ], [ "Xu", "Zhongjin", "" ], [ "Jiang", "Ruxue", "" ] ]
TITLE: Intelligent logistics management robot path planning algorithm integrating transformer and GCN network ABSTRACT: This research delves into advanced route optimization for robots in smart logistics, leveraging a fusion of Transformer architectures, Graph Neural Networks (GNNs), and Generative Adversarial Networks (GANs). The approach utilizes a graph-based representation encompassing geographical data, cargo allocation, and robot dynamics, addressing both spatial and resource limitations to refine route efficiency. Through extensive testing with authentic logistics datasets, the proposed method achieves notable improvements, including a 15% reduction in travel distance, a 20% boost in time efficiency, and a 10% decrease in energy consumption. These findings highlight the algorithm's effectiveness, promoting enhanced performance in intelligent logistics operations.
no_new_dataset
0.94868
2501.05712
Hyunwoo Ko
Guijin Son, Hyunwoo Ko, Dasol Choi
Multi-Step Reasoning in Korean and the Emergent Mirage
C3NLP @ NAACL 2025
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
We introduce HRMCR (HAE-RAE Multi-Step Commonsense Reasoning), a benchmark designed to evaluate large language models' ability to perform multi-step reasoning in culturally specific contexts, focusing on Korean. The questions are automatically generated via templates and algorithms, requiring LLMs to integrate Korean cultural knowledge into sequential reasoning steps. Consistent with prior observations on emergent abilities, our experiments reveal that models trained on fewer than \(2 \cdot 10^{25}\) training FLOPs struggle to solve any questions, showing near-zero performance. Beyond this threshold, performance improves sharply. State-of-the-art models (e.g., O1) still score under 50\%, underscoring the difficulty of our tasks. Notably, stepwise analysis suggests the observed emergent behavior may stem from compounding errors across multiple steps rather than reflecting a genuinely new capability. We publicly release the benchmark and commit to regularly updating the dataset to prevent contamination.
[ { "version": "v1", "created": "Fri, 10 Jan 2025 05:07:27 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 08:45:28 GMT" } ]
2025-03-13T00:00:00
[ [ "Son", "Guijin", "" ], [ "Ko", "Hyunwoo", "" ], [ "Choi", "Dasol", "" ] ]
TITLE: Multi-Step Reasoning in Korean and the Emergent Mirage ABSTRACT: We introduce HRMCR (HAE-RAE Multi-Step Commonsense Reasoning), a benchmark designed to evaluate large language models' ability to perform multi-step reasoning in culturally specific contexts, focusing on Korean. The questions are automatically generated via templates and algorithms, requiring LLMs to integrate Korean cultural knowledge into sequential reasoning steps. Consistent with prior observations on emergent abilities, our experiments reveal that models trained on fewer than \(2 \cdot 10^{25}\) training FLOPs struggle to solve any questions, showing near-zero performance. Beyond this threshold, performance improves sharply. State-of-the-art models (e.g., O1) still score under 50\%, underscoring the difficulty of our tasks. Notably, stepwise analysis suggests the observed emergent behavior may stem from compounding errors across multiple steps rather than reflecting a genuinely new capability. We publicly release the benchmark and commit to regularly updating the dataset to prevent contamination.
new_dataset
0.946892
2501.05757
Seungjoo Shin
Seungjoo Shin, Jaesik Park, Sunghyun Cho
Locality-aware Gaussian Compression for Fast and High-quality Rendering
Accepted to ICLR 2025. Project page: https://seungjooshin.github.io/LocoGS
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present LocoGS, a locality-aware 3D Gaussian Splatting (3DGS) framework that exploits the spatial coherence of 3D Gaussians for compact modeling of volumetric scenes. To this end, we first analyze the local coherence of 3D Gaussian attributes, and propose a novel locality-aware 3D Gaussian representation that effectively encodes locally-coherent Gaussian attributes using a neural field representation with a minimal storage requirement. On top of the novel representation, LocoGS is carefully designed with additional components such as dense initialization, an adaptive spherical harmonics bandwidth scheme and different encoding schemes for different Gaussian attributes to maximize compression performance. Experimental results demonstrate that our approach outperforms the rendering quality of existing compact Gaussian representations for representative real-world 3D datasets while achieving a 54.6$\times$ to 96.6$\times$ smaller storage size and a 2.1$\times$ to 2.4$\times$ higher rendering speed than 3DGS. Our approach also demonstrates an average 2.4$\times$ higher rendering speed than the state-of-the-art compression method with comparable compression performance.
[ { "version": "v1", "created": "Fri, 10 Jan 2025 07:19:41 GMT" }, { "version": "v2", "created": "Mon, 3 Mar 2025 07:07:28 GMT" }, { "version": "v3", "created": "Wed, 12 Mar 2025 11:12:31 GMT" } ]
2025-03-13T00:00:00
[ [ "Shin", "Seungjoo", "" ], [ "Park", "Jaesik", "" ], [ "Cho", "Sunghyun", "" ] ]
TITLE: Locality-aware Gaussian Compression for Fast and High-quality Rendering ABSTRACT: We present LocoGS, a locality-aware 3D Gaussian Splatting (3DGS) framework that exploits the spatial coherence of 3D Gaussians for compact modeling of volumetric scenes. To this end, we first analyze the local coherence of 3D Gaussian attributes, and propose a novel locality-aware 3D Gaussian representation that effectively encodes locally-coherent Gaussian attributes using a neural field representation with a minimal storage requirement. On top of the novel representation, LocoGS is carefully designed with additional components such as dense initialization, an adaptive spherical harmonics bandwidth scheme and different encoding schemes for different Gaussian attributes to maximize compression performance. Experimental results demonstrate that our approach outperforms the rendering quality of existing compact Gaussian representations for representative real-world 3D datasets while achieving a 54.6$\times$ to 96.6$\times$ smaller storage size and a 2.1$\times$ to 2.4$\times$ higher rendering speed than 3DGS. Our approach also demonstrates an average 2.4$\times$ higher rendering speed than the state-of-the-art compression method with comparable compression performance.
no_new_dataset
0.94743
2501.06557
Marco Giordano
Marco Giordano, Claudia Rinaldi
A Survey on Spoken Italian Datasets and Corpora
Published on IEEE Access Journal on Feb 2025
in IEEE Access, vol. 13, pp. 29190-29205, 2025
10.1109/ACCESS.2025.3538952
null
cs.CL cs.AI cs.DL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Spoken language datasets are vital for advancing linguistic research, Natural Language Processing, and speech technology. However, resources dedicated to Italian, a linguistically rich and diverse Romance language, remain underexplored compared to major languages like English or Mandarin. This survey provides a comprehensive analysis of 66 spoken Italian datasets, highlighting their characteristics, methodologies, and applications. The datasets are categorized by speech type, source and context, and demographic and linguistic features, with a focus on their utility in fields such as Automatic Speech Recognition, emotion detection, and education. Challenges related to dataset scarcity, representativeness, and accessibility are discussed alongside recommendations for enhancing dataset creation and utilization. The full dataset inventory is publicly accessible via GitHub and archived on Zenodo, serving as a valuable resource for researchers and developers. By addressing current gaps and proposing future directions, this work aims to support the advancement of Italian speech technologies and linguistic research.
[ { "version": "v1", "created": "Sat, 11 Jan 2025 14:33:57 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 13:59:29 GMT" } ]
2025-03-13T00:00:00
[ [ "Giordano", "Marco", "" ], [ "Rinaldi", "Claudia", "" ] ]
TITLE: A Survey on Spoken Italian Datasets and Corpora ABSTRACT: Spoken language datasets are vital for advancing linguistic research, Natural Language Processing, and speech technology. However, resources dedicated to Italian, a linguistically rich and diverse Romance language, remain underexplored compared to major languages like English or Mandarin. This survey provides a comprehensive analysis of 66 spoken Italian datasets, highlighting their characteristics, methodologies, and applications. The datasets are categorized by speech type, source and context, and demographic and linguistic features, with a focus on their utility in fields such as Automatic Speech Recognition, emotion detection, and education. Challenges related to dataset scarcity, representativeness, and accessibility are discussed alongside recommendations for enhancing dataset creation and utilization. The full dataset inventory is publicly accessible via GitHub and archived on Zenodo, serving as a valuable resource for researchers and developers. By addressing current gaps and proposing future directions, this work aims to support the advancement of Italian speech technologies and linguistic research.
no_new_dataset
0.938745
2501.08333
Hyeonwoo Kim
Hyeonwoo Kim, Sangwon Beak, Hanbyul Joo
DAViD: Modeling Dynamic Affordance of 3D Objects using Pre-trained Video Diffusion Models
Project Page: https://snuvclab.github.io/david/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Modeling how humans interact with objects is crucial for AI to effectively assist or mimic human behaviors. Existing studies for learning such an ability primarily focus on static human-object interaction (HOI) patterns, such as contact and spatial relationships, while dynamic HOI patterns, capturing the movement of humans and objects over time, remain relatively underexplored. In this paper, we present a novel framework for learning Dynamic Affordance across various target object categories. To address the scarcity of 4D HOI datasets, our method learns the 3D dynamic affordance from synthetically generated 4D HOI samples. Specifically, we propose a pipeline that first generates 2D HOI videos from a given 3D target object using a pre-trained video diffusion model, then lifts them into 3D to generate 4D HOI samples. Leveraging these synthesized 4D HOI samples, we train DAViD, our generative 4D human-object interaction model, which is composed of two key components: (1) a human motion diffusion model (MDM) with a Low-Rank Adaptation (LoRA) module to fine-tune a pre-trained MDM to learn the HOI motion concepts from limited HOI motion samples, (2) a motion diffusion model for 4D object poses conditioned on the generated human interaction motions. Interestingly, DAViD can integrate newly learned HOI motion concepts with pre-trained human motions to create novel HOI motions, even for multiple HOI motion concepts, demonstrating the advantage of our pipeline with LoRA in integrating dynamic HOI concepts. Through extensive experiments, we demonstrate that DAViD outperforms baselines in synthesizing HOI motion.
[ { "version": "v1", "created": "Tue, 14 Jan 2025 18:59:59 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 21:35:21 GMT" } ]
2025-03-13T00:00:00
[ [ "Kim", "Hyeonwoo", "" ], [ "Beak", "Sangwon", "" ], [ "Joo", "Hanbyul", "" ] ]
TITLE: DAViD: Modeling Dynamic Affordance of 3D Objects using Pre-trained Video Diffusion Models ABSTRACT: Modeling how humans interact with objects is crucial for AI to effectively assist or mimic human behaviors. Existing studies for learning such an ability primarily focus on static human-object interaction (HOI) patterns, such as contact and spatial relationships, while dynamic HOI patterns, capturing the movement of humans and objects over time, remain relatively underexplored. In this paper, we present a novel framework for learning Dynamic Affordance across various target object categories. To address the scarcity of 4D HOI datasets, our method learns the 3D dynamic affordance from synthetically generated 4D HOI samples. Specifically, we propose a pipeline that first generates 2D HOI videos from a given 3D target object using a pre-trained video diffusion model, then lifts them into 3D to generate 4D HOI samples. Leveraging these synthesized 4D HOI samples, we train DAViD, our generative 4D human-object interaction model, which is composed of two key components: (1) a human motion diffusion model (MDM) with a Low-Rank Adaptation (LoRA) module to fine-tune a pre-trained MDM to learn the HOI motion concepts from limited HOI motion samples, (2) a motion diffusion model for 4D object poses conditioned on the generated human interaction motions. Interestingly, DAViD can integrate newly learned HOI motion concepts with pre-trained human motions to create novel HOI motions, even for multiple HOI motion concepts, demonstrating the advantage of our pipeline with LoRA in integrating dynamic HOI concepts. Through extensive experiments, we demonstrate that DAViD outperforms baselines in synthesizing HOI motion.
no_new_dataset
0.93835
2501.12106
Stefan Lenz
Stefan Lenz, Arsenij Ustjanzew, Marco Jeray, Torsten Panholzer
Can open source large language models be used for tumor documentation in Germany? -- An evaluation on urological doctors' notes
48 pages, 5 figures
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tumor documentation in Germany is largely done manually, requiring reading patient records and entering data into structured databases. Large language models (LLMs) could potentially enhance this process by improving efficiency and reliability. This evaluation tests eleven different open source LLMs with sizes ranging from 1-70 billion model parameters on three basic tasks of the tumor documentation process: identifying tumor diagnoses, assigning ICD-10 codes, and extracting the date of first diagnosis. For evaluating the LLMs on these tasks, a dataset of annotated text snippets based on anonymized doctors' notes from urology was prepared. Different prompting strategies were used to investigate the effect of the number of examples in few-shot prompting and to explore the capabilities of the LLMs in general. The models Llama 3.1 8B, Mistral 7B, and Mistral NeMo 12 B performed comparably well in the tasks. Models with less extensive training data or having fewer than 7 billion parameters showed notably lower performance, while larger models did not display performance gains. Examples from a different medical domain than urology could also improve the outcome in few-shot prompting, which demonstrates the ability of LLMs to handle tasks needed for tumor documentation. Open source LLMs show a strong potential for automating tumor documentation. Models from 7-12 billion parameters could offer an optimal balance between performance and resource efficiency. With tailored fine-tuning and well-designed prompting, these models might become important tools for clinical documentation in the future. The code for the evaluation is available from https://github.com/stefan-m-lenz/UroLlmEval. We also release the dataset as a new valuable resource that addresses the shortage of authentic and easily accessible benchmarks in German-language medical NLP.
[ { "version": "v1", "created": "Tue, 21 Jan 2025 12:56:47 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 08:48:46 GMT" } ]
2025-03-13T00:00:00
[ [ "Lenz", "Stefan", "" ], [ "Ustjanzew", "Arsenij", "" ], [ "Jeray", "Marco", "" ], [ "Panholzer", "Torsten", "" ] ]
TITLE: Can open source large language models be used for tumor documentation in Germany? -- An evaluation on urological doctors' notes ABSTRACT: Tumor documentation in Germany is largely done manually, requiring reading patient records and entering data into structured databases. Large language models (LLMs) could potentially enhance this process by improving efficiency and reliability. This evaluation tests eleven different open source LLMs with sizes ranging from 1-70 billion model parameters on three basic tasks of the tumor documentation process: identifying tumor diagnoses, assigning ICD-10 codes, and extracting the date of first diagnosis. For evaluating the LLMs on these tasks, a dataset of annotated text snippets based on anonymized doctors' notes from urology was prepared. Different prompting strategies were used to investigate the effect of the number of examples in few-shot prompting and to explore the capabilities of the LLMs in general. The models Llama 3.1 8B, Mistral 7B, and Mistral NeMo 12 B performed comparably well in the tasks. Models with less extensive training data or having fewer than 7 billion parameters showed notably lower performance, while larger models did not display performance gains. Examples from a different medical domain than urology could also improve the outcome in few-shot prompting, which demonstrates the ability of LLMs to handle tasks needed for tumor documentation. Open source LLMs show a strong potential for automating tumor documentation. Models from 7-12 billion parameters could offer an optimal balance between performance and resource efficiency. With tailored fine-tuning and well-designed prompting, these models might become important tools for clinical documentation in the future. The code for the evaluation is available from https://github.com/stefan-m-lenz/UroLlmEval. We also release the dataset as a new valuable resource that addresses the shortage of authentic and easily accessible benchmarks in German-language medical NLP.
no_new_dataset
0.641113
2501.14198
Zeyun Deng
Zeyun Deng, Joseph Campbell
Sparse Mixture-of-Experts for Non-Uniform Noise Reduction in MRI Images
Accepted to the WACV Workshop on Image Quality
null
null
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Magnetic Resonance Imaging (MRI) is an essential diagnostic tool in clinical settings, but its utility is often hindered by noise artifacts introduced during the imaging process. Effective denoising is critical for enhancing image quality while preserving anatomical structures. However, traditional denoising methods, which typically assume uniform noise distributions, struggle to handle the non-uniform noise commonly present in MRI images. In this paper, we introduce a novel approach leveraging a sparse mixture-of-experts framework for MRI image denoising. Each expert is a specialized denoising convolutional neural network fine-tuned to target specific noise characteristics associated with different image regions. Our method demonstrates superior performance over state-of-the-art denoising techniques on both synthetic and real-world MRI datasets. Furthermore, we show that it generalizes effectively to unseen datasets, highlighting its robustness and adaptability.
[ { "version": "v1", "created": "Fri, 24 Jan 2025 03:04:44 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 02:32:20 GMT" } ]
2025-03-13T00:00:00
[ [ "Deng", "Zeyun", "" ], [ "Campbell", "Joseph", "" ] ]
TITLE: Sparse Mixture-of-Experts for Non-Uniform Noise Reduction in MRI Images ABSTRACT: Magnetic Resonance Imaging (MRI) is an essential diagnostic tool in clinical settings, but its utility is often hindered by noise artifacts introduced during the imaging process. Effective denoising is critical for enhancing image quality while preserving anatomical structures. However, traditional denoising methods, which typically assume uniform noise distributions, struggle to handle the non-uniform noise commonly present in MRI images. In this paper, we introduce a novel approach leveraging a sparse mixture-of-experts framework for MRI image denoising. Each expert is a specialized denoising convolutional neural network fine-tuned to target specific noise characteristics associated with different image regions. Our method demonstrates superior performance over state-of-the-art denoising techniques on both synthetic and real-world MRI datasets. Furthermore, we show that it generalizes effectively to unseen datasets, highlighting its robustness and adaptability.
no_new_dataset
0.946646
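The MRI record above describes routing image regions to specialized denoising CNN experts. The generic sketch below shows what top-1 sparse mixture-of-experts routing over image patches can look like in PyTorch; the expert architecture, gate design, and number of experts are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class SmallDenoiser(nn.Module):
    """A tiny residual denoising CNN standing in for one specialized expert."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )
    def forward(self, x):
        return x - self.net(x)                      # predict and subtract the noise

class SparseMoEDenoiser(nn.Module):
    """Top-1 sparse routing: each patch is processed by a single expert."""
    def __init__(self, num_experts=4, channels=1):
        super().__init__()
        self.experts = nn.ModuleList([SmallDenoiser(channels) for _ in range(num_experts)])
        self.gate = nn.Sequential(                  # coarse gate over the whole patch
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(channels * 8 * 8, num_experts),
        )
    def forward(self, x):
        choice = self.gate(x).argmax(dim=1)         # (B,) expert index per patch
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = choice == e
            if mask.any():
                out[mask] = expert(x[mask])
        return out

patches = torch.randn(8, 1, 64, 64)                 # noisy MRI-like patches
model = SparseMoEDenoiser()
print(model(patches).shape)                         # torch.Size([8, 1, 64, 64])
```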
2501.14729
Xin Zhou
Xin Zhou, Dingkang Liang, Sifan Tu, Xiwu Chen, Yikang Ding, Dingyuan Zhang, Feiyang Tan, Hengshuang Zhao, Xiang Bai
HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation
The code will be available at https://github.com/LMD0311/HERMES
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Driving World Models (DWMs) have become essential for autonomous driving by enabling future scene prediction. However, existing DWMs are limited to scene generation and fail to incorporate scene understanding, which involves interpreting and reasoning about the driving environment. In this paper, we present a unified Driving World Model named HERMES. We seamlessly integrate 3D scene understanding and future scene evolution (generation) through a unified framework in driving scenarios. Specifically, HERMES leverages a Bird's-Eye View (BEV) representation to consolidate multi-view spatial information while preserving geometric relationships and interactions. We also introduce world queries, which incorporate world knowledge into BEV features via causal attention in the Large Language Model, enabling contextual enrichment for understanding and generation tasks. We conduct comprehensive studies on nuScenes and OmniDrive-nuScenes datasets to validate the effectiveness of our method. HERMES achieves state-of-the-art performance, reducing generation error by 32.4% and improving understanding metrics such as CIDEr by 8.0%. The model and code will be publicly released at https://github.com/LMD0311/HERMES.
[ { "version": "v1", "created": "Fri, 24 Jan 2025 18:59:51 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 17:58:02 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhou", "Xin", "" ], [ "Liang", "Dingkang", "" ], [ "Tu", "Sifan", "" ], [ "Chen", "Xiwu", "" ], [ "Ding", "Yikang", "" ], [ "Zhang", "Dingyuan", "" ], [ "Tan", "Feiyang", "" ], [ "Zhao", "Hengshuang", "" ], [ "Bai", "Xiang", "" ] ]
TITLE: HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation ABSTRACT: Driving World Models (DWMs) have become essential for autonomous driving by enabling future scene prediction. However, existing DWMs are limited to scene generation and fail to incorporate scene understanding, which involves interpreting and reasoning about the driving environment. In this paper, we present a unified Driving World Model named HERMES. We seamlessly integrate 3D scene understanding and future scene evolution (generation) through a unified framework in driving scenarios. Specifically, HERMES leverages a Bird's-Eye View (BEV) representation to consolidate multi-view spatial information while preserving geometric relationships and interactions. We also introduce world queries, which incorporate world knowledge into BEV features via causal attention in the Large Language Model, enabling contextual enrichment for understanding and generation tasks. We conduct comprehensive studies on nuScenes and OmniDrive-nuScenes datasets to validate the effectiveness of our method. HERMES achieves state-of-the-art performance, reducing generation error by 32.4% and improving understanding metrics such as CIDEr by 8.0%. The model and code will be publicly released at https://github.com/LMD0311/HERMES.
no_new_dataset
0.9463
2501.17202
Chen Chen
Chen Chen, Yuchen Hu, Siyin Wang, Helin Wang, Zhehuai Chen, Chao Zhang, Chao-Han Huck Yang, and Eng Siong Chng
Audio Large Language Models Can Be Descriptive Speech Quality Evaluators
ICLR 2025
null
null
null
cs.SD cs.CL eess.AS
http://creativecommons.org/licenses/by/4.0/
An ideal multimodal agent should be aware of the quality of its input modalities. Recent advances have enabled large language models (LLMs) to incorporate auditory systems for handling various speech-related tasks. However, most audio LLMs remain unaware of the quality of the speech they process. This limitation arises because speech quality evaluation is typically excluded from multi-task training due to the lack of suitable datasets. To address this, we introduce the first natural language-based speech evaluation corpus, generated from authentic human ratings. In addition to the overall Mean Opinion Score (MOS), this corpus offers detailed analysis across multiple dimensions and identifies causes of quality degradation. It also enables descriptive comparisons between two speech samples (A/B tests) with human-like judgment. Leveraging this corpus, we propose an alignment approach with LLM distillation (ALLD) to guide the audio LLM in extracting relevant information from raw speech and generating meaningful responses. Experimental results demonstrate that ALLD outperforms the previous state-of-the-art regression model in MOS prediction, with a mean square error of 0.17 and an A/B test accuracy of 98.6%. Additionally, the generated responses achieve BLEU scores of 25.8 and 30.2 on two tasks, surpassing the capabilities of task-specific models. This work advances the comprehensive perception of speech signals by audio LLMs, contributing to the development of real-world auditory and sensory intelligent agents.
[ { "version": "v1", "created": "Mon, 27 Jan 2025 22:47:51 GMT" }, { "version": "v2", "created": "Wed, 12 Mar 2025 02:01:46 GMT" } ]
2025-03-13T00:00:00
[ [ "Chen", "Chen", "" ], [ "Hu", "Yuchen", "" ], [ "Wang", "Siyin", "" ], [ "Wang", "Helin", "" ], [ "Chen", "Zhehuai", "" ], [ "Zhang", "Chao", "" ], [ "Yang", "Chao-Han Huck", "" ], [ "Chng", "Eng Siong", "" ] ]
TITLE: Audio Large Language Models Can Be Descriptive Speech Quality Evaluators ABSTRACT: An ideal multimodal agent should be aware of the quality of its input modalities. Recent advances have enabled large language models (LLMs) to incorporate auditory systems for handling various speech-related tasks. However, most audio LLMs remain unaware of the quality of the speech they process. This limitation arises because speech quality evaluation is typically excluded from multi-task training due to the lack of suitable datasets. To address this, we introduce the first natural language-based speech evaluation corpus, generated from authentic human ratings. In addition to the overall Mean Opinion Score (MOS), this corpus offers detailed analysis across multiple dimensions and identifies causes of quality degradation. It also enables descriptive comparisons between two speech samples (A/B tests) with human-like judgment. Leveraging this corpus, we propose an alignment approach with LLM distillation (ALLD) to guide the audio LLM in extracting relevant information from raw speech and generating meaningful responses. Experimental results demonstrate that ALLD outperforms the previous state-of-the-art regression model in MOS prediction, with a mean square error of 0.17 and an A/B test accuracy of 98.6%. Additionally, the generated responses achieve BLEU scores of 25.8 and 30.2 on two tasks, surpassing the capabilities of task-specific models. This work advances the comprehensive perception of speech signals by audio LLMs, contributing to the development of real-world auditory and sensory intelligent agents.
no_new_dataset
0.652075