Dataset schema (column: type, observed range):

id: string (length 9 to 16)
submitter: string (length 3 to 64)
authors: string (length 5 to 6.63k)
title: string (length 7 to 245)
comments: string (length 1 to 482)
journal-ref: string (length 4 to 382)
doi: string (length 9 to 151)
report-no: string (984 distinct values)
categories: string (length 5 to 108)
license: string (9 distinct values)
abstract: string (length 83 to 3.41k)
versions: list (length 1 to 20)
update_date: timestamp[s] (2007-05-23 to 2025-04-11)
authors_parsed: list (length 1 to 427)
prompt: string (length 166 to 3.49k)
label: string (2 distinct values: new_dataset, no_new_dataset)
prob: float64 (0.5 to 0.98)
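The records below follow this schema. As a minimal sketch of how such a dataset could be loaded and filtered with the Hugging Face `datasets` library (the repository id below is a placeholder, since the actual id is not given in this dump):

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub;
# "user/arxiv-dataset-labels" is a placeholder id, not the real repository.
from datasets import load_dataset

ds = load_dataset("user/arxiv-dataset-labels", split="train")

# Inspect the declared schema and one record.
print(ds.features)
print(ds[0]["id"], ds[0]["label"], ds[0]["prob"])

# Keep only records confidently labeled as introducing a new dataset.
high_conf_new = ds.filter(lambda r: r["label"] == "new_dataset" and r["prob"] >= 0.9)
print(f"{len(high_conf_new)} high-confidence new-dataset papers")
```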
2503.06136
Ye Tao
Ye Tao, Jiawei Zhang, Yahao Shi, Dongqing Zou, Bin Zhou
GSV3D: Gaussian Splatting-based Geometric Distillation with Stable Video Diffusion for Single-Image 3D Object Generation
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Image-based 3D generation has vast applications in robotics and gaming, where high-quality, diverse outputs and consistent 3D representations are crucial. However, existing methods have limitations: 3D diffusion models are limited by dataset scarcity and the absence of strong pre-trained priors, while 2D diffusion-based approaches struggle with geometric consistency. We propose a method that leverages 2D diffusion models' implicit 3D reasoning ability while ensuring 3D consistency via Gaussian-splatting-based geometric distillation. Specifically, the proposed Gaussian Splatting Decoder enforces 3D consistency by transforming SV3D latent outputs into an explicit 3D representation. Unlike SV3D, which only relies on implicit 2D representations for video generation, Gaussian Splatting explicitly encodes spatial and appearance attributes, enabling multi-view consistency through geometric constraints. These constraints correct view inconsistencies, ensuring robust geometric consistency. As a result, our approach simultaneously generates high-quality, multi-view-consistent images and accurate 3D models, providing a scalable solution for single-image-based 3D generation and bridging the gap between 2D diffusion diversity and 3D structural coherence. Experimental results demonstrate state-of-the-art multi-view consistency and strong generalization across diverse datasets. The code will be made publicly available upon acceptance.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 09:10:31 GMT" } ]
2025-03-11T00:00:00
[ [ "Tao", "Ye", "" ], [ "Zhang", "Jiawei", "" ], [ "Shi", "Yahao", "" ], [ "Zou", "Dongqing", "" ], [ "Zhou", "Bin", "" ] ]
TITLE: GSV3D: Gaussian Splatting-based Geometric Distillation with Stable Video Diffusion for Single-Image 3D Object Generation ABSTRACT: Image-based 3D generation has vast applications in robotics and gaming, where high-quality, diverse outputs and consistent 3D representations are crucial. However, existing methods have limitations: 3D diffusion models are limited by dataset scarcity and the absence of strong pre-trained priors, while 2D diffusion-based approaches struggle with geometric consistency. We propose a method that leverages 2D diffusion models' implicit 3D reasoning ability while ensuring 3D consistency via Gaussian-splatting-based geometric distillation. Specifically, the proposed Gaussian Splatting Decoder enforces 3D consistency by transforming SV3D latent outputs into an explicit 3D representation. Unlike SV3D, which only relies on implicit 2D representations for video generation, Gaussian Splatting explicitly encodes spatial and appearance attributes, enabling multi-view consistency through geometric constraints. These constraints correct view inconsistencies, ensuring robust geometric consistency. As a result, our approach simultaneously generates high-quality, multi-view-consistent images and accurate 3D models, providing a scalable solution for single-image-based 3D generation and bridging the gap between 2D diffusion diversity and 3D structural coherence. Experimental results demonstrate state-of-the-art multi-view consistency and strong generalization across diverse datasets. The code will be made publicly available upon acceptance.
no_new_dataset
0.949059
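The authors_parsed field stores [last, first, suffix] triples. A small illustrative helper (not part of the dataset itself) shows how the display names in the authors field can be rebuilt from the record above:

```python
# Illustrative only: rebuild "First Last" names from the [last, first, suffix]
# triples in `authors_parsed` (values taken from the record above).
authors_parsed = [
    ["Tao", "Ye", ""], ["Zhang", "Jiawei", ""], ["Shi", "Yahao", ""],
    ["Zou", "Dongqing", ""], ["Zhou", "Bin", ""],
]

def display_name(last: str, first: str, suffix: str) -> str:
    # Drop empty components (the suffix is usually blank).
    return " ".join(part for part in (first, last, suffix) if part)

print(", ".join(display_name(*triple) for triple in authors_parsed))
# -> Ye Tao, Jiawei Zhang, Yahao Shi, Dongqing Zou, Bin Zhou
```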
2503.06140
Xiaosen Wang
Bohan Liu, Xiaosen Wang
Boosting the Local Invariance for Better Adversarial Transferability
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transfer-based attacks pose a significant threat to real-world applications by directly targeting victim models with adversarial examples generated on surrogate models. While numerous approaches have been proposed to enhance adversarial transferability, existing works often overlook the intrinsic relationship between adversarial perturbations and input images. In this work, we find that adversarial perturbation often exhibits poor translation invariance for a given clean image and model, which is attributed to local invariance. Through empirical analysis, we demonstrate that there is a positive correlation between the local invariance of adversarial perturbations w.r.t. the input image and their transferability across different models. Based on this finding, we propose a general adversarial transferability boosting technique called Local Invariance Boosting approach (LI-Boost). Extensive experiments on the standard ImageNet dataset demonstrate that LI-Boost could significantly boost various types of transfer-based attacks (e.g., gradient-based, input transformation-based, model-related, advanced objective function, ensemble, etc.) on CNNs, ViTs, and defense mechanisms. Our approach presents a promising direction for future research in improving adversarial transferability across different models.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 09:44:45 GMT" } ]
2025-03-11T00:00:00
[ [ "Liu", "Bohan", "" ], [ "Wang", "Xiaosen", "" ] ]
TITLE: Boosting the Local Invariance for Better Adversarial Transferability ABSTRACT: Transfer-based attacks pose a significant threat to real-world applications by directly targeting victim models with adversarial examples generated on surrogate models. While numerous approaches have been proposed to enhance adversarial transferability, existing works often overlook the intrinsic relationship between adversarial perturbations and input images. In this work, we find that adversarial perturbation often exhibits poor translation invariance for a given clean image and model, which is attributed to local invariance. Through empirical analysis, we demonstrate that there is a positive correlation between the local invariance of adversarial perturbations w.r.t. the input image and their transferability across different models. Based on this finding, we propose a general adversarial transferability boosting technique called Local Invariance Boosting approach (LI-Boost). Extensive experiments on the standard ImageNet dataset demonstrate that LI-Boost could significantly boost various types of transfer-based attacks (e.g., gradient-based, input transformation-based, model-related, advanced objective function, ensemble, etc.) on CNNs, ViTs, and defense mechanisms. Our approach presents a promising direction for future research in improving adversarial transferability across different models.
no_new_dataset
0.941115
2503.06141
Mingxing Li
Mingxing Li, Rui Wang, Lei Sun, Yancheng Bai, Xiangxiang Chu
Next Token Is Enough: Realistic Image Quality and Aesthetic Scoring with Multimodal Large Language Model
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid expansion of mobile internet has resulted in a substantial increase in user-generated content (UGC) images, thereby making the thorough assessment of UGC images both urgent and essential. Recently, multimodal large language models (MLLMs) have shown great potential in image quality assessment (IQA) and image aesthetic assessment (IAA). Despite this progress, effectively scoring the quality and aesthetics of UGC images still faces two main challenges: 1) A single score is inadequate to capture the hierarchical human perception. 2) How to use MLLMs to output numerical scores, such as mean opinion scores (MOS), remains an open question. To address these challenges, we introduce a novel dataset, named Realistic image Quality and Aesthetic (RealQA), including 14,715 UGC images, each of which is annotated with 10 fine-grained attributes. These attributes span three levels: low level (e.g., image clarity), middle level (e.g., subject integrity) and high level (e.g., composition). Besides, we conduct a series of in-depth and comprehensive investigations into how to effectively predict numerical scores using MLLMs. Surprisingly, by predicting just two extra significant digits, the next token paradigm can achieve SOTA performance. Furthermore, with the help of chain of thought (CoT) combined with the learnt fine-grained attributes, the proposed method can outperform SOTA methods on five public datasets for IQA and IAA with superior interpretability and show strong zero-shot generalization for video quality assessment (VQA). The code and dataset will be released.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 09:49:10 GMT" } ]
2025-03-11T00:00:00
[ [ "Li", "Mingxing", "" ], [ "Wang", "Rui", "" ], [ "Sun", "Lei", "" ], [ "Bai", "Yancheng", "" ], [ "Chu", "Xiangxiang", "" ] ]
TITLE: Next Token Is Enough: Realistic Image Quality and Aesthetic Scoring with Multimodal Large Language Model ABSTRACT: The rapid expansion of mobile internet has resulted in a substantial increase in user-generated content (UGC) images, thereby making the thorough assessment of UGC images both urgent and essential. Recently, multimodal large language models (MLLMs) have shown great potential in image quality assessment (IQA) and image aesthetic assessment (IAA). Despite this progress, effectively scoring the quality and aesthetics of UGC images still faces two main challenges: 1) A single score is inadequate to capture the hierarchical human perception. 2) How to use MLLMs to output numerical scores, such as mean opinion scores (MOS), remains an open question. To address these challenges, we introduce a novel dataset, named Realistic image Quality and Aesthetic (RealQA), including 14,715 UGC images, each of which is annotated with 10 fine-grained attributes. These attributes span three levels: low level (e.g., image clarity), middle level (e.g., subject integrity) and high level (e.g., composition). Besides, we conduct a series of in-depth and comprehensive investigations into how to effectively predict numerical scores using MLLMs. Surprisingly, by predicting just two extra significant digits, the next token paradigm can achieve SOTA performance. Furthermore, with the help of chain of thought (CoT) combined with the learnt fine-grained attributes, the proposed method can outperform SOTA methods on five public datasets for IQA and IAA with superior interpretability and show strong zero-shot generalization for video quality assessment (VQA). The code and dataset will be released.
new_dataset
0.961171
2503.06142
Xinan He
Xinan He, Yue Zhou, Bing Fan, Bin Li, Guopu Zhu, Feng Ding
VLForgery Face Triad: Detection, Localization and Attribution via Multimodal Large Language Models
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Faces synthesized by diffusion models (DMs) with high-quality and controllable attributes pose a significant challenge for Deepfake detection. Most state-of-the-art detectors only yield a binary decision, incapable of forgery localization, attribution of forgery methods, and providing analysis on the cause of forgeries. In this work, we integrate Multimodal Large Language Models (MLLMs) within DM-based face forensics, and propose a fine-grained analysis triad framework called VLForgery, which can 1) predict falsified facial images; 2) locate the falsified face regions subjected to partial synthesis; and 3) attribute the synthesis with specific generators. To achieve the above goals, we introduce VLF (Visual Language Forensics), a novel and diverse synthesis face dataset designed to facilitate rich interactions between Visual and Language modalities in MLLMs. Additionally, we propose an extrinsic knowledge-guided description method, termed EkCot, which leverages knowledge from the image generation pipeline to enable MLLMs to quickly capture image content. Furthermore, we introduce a low-level vision comparison pipeline designed to identify differential features between real and fake that MLLMs can inherently understand. These features are then incorporated into EkCot, enhancing its ability to analyze forgeries in a structured manner, following the sequence of detection, localization, and attribution. Extensive experiments demonstrate that VLForgery outperforms other state-of-the-art forensic approaches in detection accuracy, with additional potential for falsified region localization and attribution analysis.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 09:55:19 GMT" } ]
2025-03-11T00:00:00
[ [ "He", "Xinan", "" ], [ "Zhou", "Yue", "" ], [ "Fan", "Bing", "" ], [ "Li", "Bin", "" ], [ "Zhu", "Guopu", "" ], [ "Ding", "Feng", "" ] ]
TITLE: VLForgery Face Triad: Detection, Localization and Attribution via Multimodal Large Language Models ABSTRACT: Faces synthesized by diffusion models (DMs) with high-quality and controllable attributes pose a significant challenge for Deepfake detection. Most state-of-the-art detectors only yield a binary decision, incapable of forgery localization, attribution of forgery methods, and providing analysis on the cause of forgeries. In this work, we integrate Multimodal Large Language Models (MLLMs) within DM-based face forensics, and propose a fine-grained analysis triad framework called VLForgery, which can 1) predict falsified facial images; 2) locate the falsified face regions subjected to partial synthesis; and 3) attribute the synthesis with specific generators. To achieve the above goals, we introduce VLF (Visual Language Forensics), a novel and diverse synthesis face dataset designed to facilitate rich interactions between Visual and Language modalities in MLLMs. Additionally, we propose an extrinsic knowledge-guided description method, termed EkCot, which leverages knowledge from the image generation pipeline to enable MLLMs to quickly capture image content. Furthermore, we introduce a low-level vision comparison pipeline designed to identify differential features between real and fake that MLLMs can inherently understand. These features are then incorporated into EkCot, enhancing its ability to analyze forgeries in a structured manner, following the sequence of detection, localization, and attribution. Extensive experiments demonstrate that VLForgery outperforms other state-of-the-art forensic approaches in detection accuracy, with additional potential for falsified region localization and attribution analysis.
new_dataset
0.966188
2503.06149
Xudong Wang
Xudong Wang, Jiacheng Wang, Lei Feng, Dusit Niyato, Ruichen Zhang, Jiawen Kang, Zehui Xiong, Hongyang Du, Shiwen Mao
Wireless Hallucination in Generative AI-enabled Communications: Concepts, Issues, and Solutions
7 pages, 4 figures
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative AI (GenAI) is driving the intelligence of wireless communications. Due to data limitations, random generation, and dynamic environments, GenAI may generate channel information or optimization strategies that violate physical laws or deviate from actual real-world requirements. We refer to this phenomenon as wireless hallucination, which results in invalid channel information, spectrum wastage, and low communication reliability but remains underexplored. To address this gap, this article provides a comprehensive overview of wireless hallucination in GenAI-driven communications, focusing on hallucination mitigation. Specifically, we first introduce the fundamentals of wireless hallucination, analyze its causes based on the GenAI workflow, and propose mitigation solutions at the data, model, and post-generation levels. Then, we systematically examine representative hallucination scenarios in GenAI-enabled communications and their corresponding solutions. Finally, we propose a novel integrated mitigation solution for GenAI-based channel estimation. At the data level, we establish a channel estimation hallucination dataset and employ generative adversarial networks (GANs)-based data augmentation. Additionally, we incorporate attention mechanisms and large language models (LLMs) to enhance both training and inference performance. Experimental results demonstrate that the proposed hybrid solutions reduce the normalized mean square error (NMSE) by 0.19, effectively reducing wireless hallucinations.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 10:16:24 GMT" } ]
2025-03-11T00:00:00
[ [ "Wang", "Xudong", "" ], [ "Wang", "Jiacheng", "" ], [ "Feng", "Lei", "" ], [ "Niyato", "Dusit", "" ], [ "Zhang", "Ruichen", "" ], [ "Kang", "Jiawen", "" ], [ "Xiong", "Zehui", "" ], [ "Du", "Hongyang", "" ], [ "Mao", "Shiwen", "" ] ]
TITLE: Wireless Hallucination in Generative AI-enabled Communications: Concepts, Issues, and Solutions ABSTRACT: Generative AI (GenAI) is driving the intelligence of wireless communications. Due to data limitations, random generation, and dynamic environments, GenAI may generate channel information or optimization strategies that violate physical laws or deviate from actual real-world requirements. We refer to this phenomenon as wireless hallucination, which results in invalid channel information, spectrum wastage, and low communication reliability but remains underexplored. To address this gap, this article provides a comprehensive overview of wireless hallucination in GenAI-driven communications, focusing on hallucination mitigation. Specifically, we first introduce the fundamentals of wireless hallucination, analyze its causes based on the GenAI workflow, and propose mitigation solutions at the data, model, and post-generation levels. Then, we systematically examine representative hallucination scenarios in GenAI-enabled communications and their corresponding solutions. Finally, we propose a novel integrated mitigation solution for GenAI-based channel estimation. At the data level, we establish a channel estimation hallucination dataset and employ generative adversarial networks (GANs)-based data augmentation. Additionally, we incorporate attention mechanisms and large language models (LLMs) to enhance both training and inference performance. Experimental results demonstrate that the proposed hybrid solutions reduce the normalized mean square error (NMSE) by 0.19, effectively reducing wireless hallucinations.
new_dataset
0.957794
2503.06154
Zidu Wang
Zidu Wang, Jiankuo Zhao, Miao Xu, Xiangyu Zhu, Zhen Lei
SRM-Hair: Single Image Head Mesh Reconstruction via 3D Morphable Hair
Under review
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
3D Morphable Models (3DMMs) have played a pivotal role as a fundamental representation or initialization for 3D avatar animation and reconstruction. However, extending 3DMMs to hair remains challenging due to the difficulty of enforcing vertex-level consistent semantic meaning across hair shapes. This paper introduces a novel method, Semantic-consistent Ray Modeling of Hair (SRM-Hair), for making 3D hair morphable and controlled by coefficients. The key contribution lies in semantic-consistent ray modeling, which extracts ordered hair surface vertices and exhibits notable properties such as additivity for hairstyle fusion, adaptability, flipping, and thickness modification. We collect a dataset of over 250 high-fidelity real hair scans paired with 3D face data to serve as a prior for the 3D morphable hair. Based on this, SRM-Hair can reconstruct a hair mesh combined with a 3D head from a single image. Note that SRM-Hair produces an independent hair mesh, facilitating applications in virtual avatar creation, realistic animation, and high-fidelity hair rendering. Both quantitative and qualitative experiments demonstrate that SRM-Hair achieves state-of-the-art performance in 3D mesh reconstruction. Our project is available at https://github.com/wang-zidu/SRM-Hair
[ { "version": "v1", "created": "Sat, 8 Mar 2025 10:37:46 GMT" } ]
2025-03-11T00:00:00
[ [ "Wang", "Zidu", "" ], [ "Zhao", "Jiankuo", "" ], [ "Xu", "Miao", "" ], [ "Zhu", "Xiangyu", "" ], [ "Lei", "Zhen", "" ] ]
TITLE: SRM-Hair: Single Image Head Mesh Reconstruction via 3D Morphable Hair ABSTRACT: 3D Morphable Models (3DMMs) have played a pivotal role as a fundamental representation or initialization for 3D avatar animation and reconstruction. However, extending 3DMMs to hair remains challenging due to the difficulty of enforcing vertex-level consistent semantic meaning across hair shapes. This paper introduces a novel method, Semantic-consistent Ray Modeling of Hair (SRM-Hair), for making 3D hair morphable and controlled by coefficients. The key contribution lies in semantic-consistent ray modeling, which extracts ordered hair surface vertices and exhibits notable properties such as additivity for hairstyle fusion, adaptability, flipping, and thickness modification. We collect a dataset of over 250 high-fidelity real hair scans paired with 3D face data to serve as a prior for the 3D morphable hair. Based on this, SRM-Hair can reconstruct a hair mesh combined with a 3D head from a single image. Note that SRM-Hair produces an independent hair mesh, facilitating applications in virtual avatar creation, realistic animation, and high-fidelity hair rendering. Both quantitative and qualitative experiments demonstrate that SRM-Hair achieves state-of-the-art performance in 3D mesh reconstruction. Our project is available at https://github.com/wang-zidu/SRM-Hair
new_dataset
0.966156
2503.06161
William Han
Kai Li, Junhao Wang, William Han, Ding Zhao
Feature-EndoGaussian: Feature Distilled Gaussian Splatting in Surgical Deformable Scene Reconstruction
14 pages, 5 figures
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Minimally invasive surgery (MIS) has transformed clinical practice by reducing recovery times, minimizing complications, and enhancing precision. Nonetheless, MIS inherently relies on indirect visualization and precise instrument control, posing unique challenges. Recent advances in artificial intelligence have enabled real-time surgical scene understanding through techniques such as image classification, object detection, and segmentation, with scene reconstruction emerging as a key element for enhanced intraoperative guidance. Although neural radiance fields (NeRFs) have been explored for this purpose, their substantial data requirements and slow rendering inhibit real-time performance. In contrast, 3D Gaussian Splatting (3DGS) offers a more efficient alternative, achieving state-of-the-art performance in dynamic surgical scene reconstruction. In this work, we introduce Feature-EndoGaussian (FEG), an extension of 3DGS that integrates 2D segmentation cues into 3D rendering to enable real-time semantic and scene reconstruction. By leveraging pretrained segmentation foundation models, FEG incorporates semantic feature distillation within the Gaussian deformation framework, thereby enhancing both reconstruction fidelity and segmentation accuracy. On the EndoNeRF dataset, FEG achieves superior performance (SSIM of 0.97, PSNR of 39.08, and LPIPS of 0.03) compared to leading methods. Additionally, on the EndoVis18 dataset, FEG demonstrates competitive class-wise segmentation metrics while balancing model size and real-time performance.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 10:50:19 GMT" } ]
2025-03-11T00:00:00
[ [ "Li", "Kai", "" ], [ "Wang", "Junhao", "" ], [ "Han", "William", "" ], [ "Zhao", "Ding", "" ] ]
TITLE: Feature-EndoGaussian: Feature Distilled Gaussian Splatting in Surgical Deformable Scene Reconstruction ABSTRACT: Minimally invasive surgery (MIS) has transformed clinical practice by reducing recovery times, minimizing complications, and enhancing precision. Nonetheless, MIS inherently relies on indirect visualization and precise instrument control, posing unique challenges. Recent advances in artificial intelligence have enabled real-time surgical scene understanding through techniques such as image classification, object detection, and segmentation, with scene reconstruction emerging as a key element for enhanced intraoperative guidance. Although neural radiance fields (NeRFs) have been explored for this purpose, their substantial data requirements and slow rendering inhibit real-time performance. In contrast, 3D Gaussian Splatting (3DGS) offers a more efficient alternative, achieving state-of-the-art performance in dynamic surgical scene reconstruction. In this work, we introduce Feature-EndoGaussian (FEG), an extension of 3DGS that integrates 2D segmentation cues into 3D rendering to enable real-time semantic and scene reconstruction. By leveraging pretrained segmentation foundation models, FEG incorporates semantic feature distillation within the Gaussian deformation framework, thereby enhancing both reconstruction fidelity and segmentation accuracy. On the EndoNeRF dataset, FEG achieves superior performance (SSIM of 0.97, PSNR of 39.08, and LPIPS of 0.03) compared to leading methods. Additionally, on the EndoVis18 dataset, FEG demonstrates competitive class-wise segmentation metrics while balancing model size and real-time performance.
no_new_dataset
0.944995
2503.06179
Wongi Park
Wongi Park, Myeongseok Nam, Siwon Kim, Sangwoo Jo, Soomok Lee
ForestSplats: Deformable transient field for Gaussian Splatting in the Wild
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recently, 3D Gaussian Splatting (3D-GS) has emerged, showing real-time rendering speeds and high-quality results in static scenes. Although 3D-GS shows effectiveness in static scenes, its performance significantly degrades in real-world environments due to transient objects, lighting variations, and diverse levels of occlusion. To tackle this, existing methods estimate occluders or transient elements by leveraging pre-trained models or integrating additional transient field pipelines. However, these methods still suffer from two defects: 1) Using semantic features from the Vision Foundation model (VFM) causes additional computational costs. 2) The transient field requires significant memory to handle transient elements with per-view Gaussians and struggles to define clear boundaries for occluders, solely relying on photometric errors. To address these problems, we propose ForestSplats, a novel approach that leverages the deformable transient field and a superpixel-aware mask to efficiently represent transient elements in the 2D scene across unconstrained image collections and effectively decompose static scenes from transient distractors without VFM. We designed the transient field to be deformable, capturing per-view transient elements. Furthermore, we introduce a superpixel-aware mask that clearly defines the boundaries of occluders by considering photometric errors and superpixels. Additionally, we propose uncertainty-aware densification to avoid generating Gaussians within the boundaries of occluders during densification. Through extensive experiments across several benchmark datasets, we demonstrate that ForestSplats outperforms existing methods without VFM and shows significant memory efficiency in representing transient elements.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 11:44:57 GMT" } ]
2025-03-11T00:00:00
[ [ "Park", "Wongi", "" ], [ "Nam", "Myeongseok", "" ], [ "Kim", "Siwon", "" ], [ "Jo", "Sangwoo", "" ], [ "Lee", "Soomok", "" ] ]
TITLE: ForestSplats: Deformable transient field for Gaussian Splatting in the Wild ABSTRACT: Recently, 3D Gaussian Splatting (3D-GS) has emerged, showing real-time rendering speeds and high-quality results in static scenes. Although 3D-GS shows effectiveness in static scenes, its performance significantly degrades in real-world environments due to transient objects, lighting variations, and diverse levels of occlusion. To tackle this, existing methods estimate occluders or transient elements by leveraging pre-trained models or integrating additional transient field pipelines. However, these methods still suffer from two defects: 1) Using semantic features from the Vision Foundation model (VFM) causes additional computational costs. 2) The transient field requires significant memory to handle transient elements with per-view Gaussians and struggles to define clear boundaries for occluders, solely relying on photometric errors. To address these problems, we propose ForestSplats, a novel approach that leverages the deformable transient field and a superpixel-aware mask to efficiently represent transient elements in the 2D scene across unconstrained image collections and effectively decompose static scenes from transient distractors without VFM. We designed the transient field to be deformable, capturing per-view transient elements. Furthermore, we introduce a superpixel-aware mask that clearly defines the boundaries of occluders by considering photometric errors and superpixels. Additionally, we propose uncertainty-aware densification to avoid generating Gaussians within the boundaries of occluders during densification. Through extensive experiments across several benchmark datasets, we demonstrate that ForestSplats outperforms existing methods without VFM and shows significant memory efficiency in representing transient elements.
no_new_dataset
0.95018
2503.06182
Antonio Alliegro
Antonio Alliegro, Francesca Pistilli, Tatiana Tommasi, Giuseppe Averta
FORESCENE: FOREcasting human activity via latent SCENE graphs diffusion
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Forecasting human-environment interactions in daily activities is challenging due to the high variability of human behavior. While predicting directly from videos is possible, it is limited by confounding factors like irrelevant objects or background noise that do not contribute to the interaction. A promising alternative is using Scene Graphs (SGs) to track only the relevant elements. However, current methods for forecasting future SGs face significant challenges and often rely on unrealistic assumptions, such as fixed objects over time, limiting their applicability to long-term activities where interacted objects may appear or disappear. In this paper, we introduce FORESCENE, a novel framework for Scene Graph Anticipation (SGA) that predicts both object and relationship evolution over time. FORESCENE encodes observed video segments into a latent representation using a tailored Graph Auto-Encoder and forecasts future SGs using a Latent Diffusion Model (LDM). Our approach enables continuous prediction of interaction dynamics without making assumptions on the graph's content or structure. We evaluate FORESCENE on the Action Genome dataset, where it outperforms existing SGA methods while solving a significantly more complex task.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 11:56:00 GMT" } ]
2025-03-11T00:00:00
[ [ "Alliegro", "Antonio", "" ], [ "Pistilli", "Francesca", "" ], [ "Tommasi", "Tatiana", "" ], [ "Averta", "Giuseppe", "" ] ]
TITLE: FORESCENE: FOREcasting human activity via latent SCENE graphs diffusion ABSTRACT: Forecasting human-environment interactions in daily activities is challenging due to the high variability of human behavior. While predicting directly from videos is possible, it is limited by confounding factors like irrelevant objects or background noise that do not contribute to the interaction. A promising alternative is using Scene Graphs (SGs) to track only the relevant elements. However, current methods for forecasting future SGs face significant challenges and often rely on unrealistic assumptions, such as fixed objects over time, limiting their applicability to long-term activities where interacted objects may appear or disappear. In this paper, we introduce FORESCENE, a novel framework for Scene Graph Anticipation (SGA) that predicts both object and relationship evolution over time. FORESCENE encodes observed video segments into a latent representation using a tailored Graph Auto-Encoder and forecasts future SGs using a Latent Diffusion Model (LDM). Our approach enables continuous prediction of interaction dynamics without making assumptions on the graph's content or structure. We evaluate FORESCENE on the Action Genome dataset, where it outperforms existing SGA methods while solving a significantly more complex task.
no_new_dataset
0.948537
2503.06196
Shashata Sawmya
Shashata Sawmya, Thomas L. Athey, Gwyneth Liu, Nir Shavit
NeuroADDA: Active Discriminative Domain Adaptation in Connectomic
8 pages, 3 figures, 3 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Training segmentation models from scratch has been the standard approach for new electron microscopy connectomics datasets. However, leveraging pretrained models from existing datasets could improve efficiency and performance under a constrained annotation budget. In this study, we investigate domain adaptation in connectomics by analyzing six major datasets spanning different organisms. We show that Maximum Mean Discrepancy (MMD) between neuron image distributions serves as a reliable indicator of transferability, and identifies the optimal source domain for transfer learning. Building on this, we introduce NeuroADDA, a method that combines optimal domain selection with source-free active learning to effectively adapt pretrained backbones to a new dataset. NeuroADDA consistently outperforms training from scratch across diverse datasets and fine-tuning sample sizes, with the largest gain observed at $n=4$ samples with a 25-67\% reduction in Variation of Information. Finally, we show that our analysis of distributional differences among neuron images from multiple species in a learned feature space reveals that these domain "distances" correlate with phylogenetic distance among those species.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 12:40:30 GMT" } ]
2025-03-11T00:00:00
[ [ "Sawmya", "Shashata", "" ], [ "Athey", "Thomas L.", "" ], [ "Liu", "Gwyneth", "" ], [ "Shavit", "Nir", "" ] ]
TITLE: NeuroADDA: Active Discriminative Domain Adaptation in Connectomic ABSTRACT: Training segmentation models from scratch has been the standard approach for new electron microscopy connectomics datasets. However, leveraging pretrained models from existing datasets could improve efficiency and performance under a constrained annotation budget. In this study, we investigate domain adaptation in connectomics by analyzing six major datasets spanning different organisms. We show that Maximum Mean Discrepancy (MMD) between neuron image distributions serves as a reliable indicator of transferability, and identifies the optimal source domain for transfer learning. Building on this, we introduce NeuroADDA, a method that combines optimal domain selection with source-free active learning to effectively adapt pretrained backbones to a new dataset. NeuroADDA consistently outperforms training from scratch across diverse datasets and fine-tuning sample sizes, with the largest gain observed at $n=4$ samples with a 25-67\% reduction in Variation of Information. Finally, we show that our analysis of distributional differences among neuron images from multiple species in a learned feature space reveals that these domain "distances" correlate with phylogenetic distance among those species.
no_new_dataset
0.945751
2503.06200
YeCong Wan
Yecong Wan, Mingwen Shao, Yuanshuo Cheng, Jun Shu, Shuigen Wang
Removing Multiple Hybrid Adverse Weather in Video via a Unified Model
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Videos captured under real-world adverse weather conditions typically suffer from uncertain hybrid weather artifacts with heterogeneous degradation distributions. However, existing algorithms only excel at specific single degradation distributions due to limited adaptation capacity and have to deal with different weather degradations with separately trained models, and thus may fail to handle real-world stochastic weather scenarios. Besides, the model training is also infeasible due to the lack of paired video data to characterize the coexistence of multiple weather conditions. To ameliorate the aforementioned issue, we propose a novel unified model, dubbed UniWRV, to remove multiple heterogeneous video weather degradations in an all-in-one fashion. Specifically, to tackle degenerate spatial feature heterogeneity, we propose a tailored weather prior guided module that queries exclusive priors for different instances as prompts to steer spatial feature characterization. To tackle degenerate temporal feature heterogeneity, we propose a dynamic routing aggregation module that can automatically select optimal fusion paths for different instances to dynamically integrate temporal features. Additionally, we managed to construct a new synthetic video dataset, termed HWVideo, for learning and benchmarking multiple hybrid adverse weather removal, which contains 15 hybrid weather conditions with a total of 1500 adverse-weather/clean paired video clips. Real-world hybrid weather videos are also collected for evaluating model generalizability. Comprehensive experiments demonstrate that our UniWRV exhibits robust and superior adaptation capability in multiple heterogeneous degradations learning scenarios, including various generic video restoration tasks beyond weather removal.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 13:01:22 GMT" } ]
2025-03-11T00:00:00
[ [ "Wan", "Yecong", "" ], [ "Shao", "Mingwen", "" ], [ "Cheng", "Yuanshuo", "" ], [ "Shu", "Jun", "" ], [ "Wang", "Shuigen", "" ] ]
TITLE: Removing Multiple Hybrid Adverse Weather in Video via a Unified Model ABSTRACT: Videos captured under real-world adverse weather conditions typically suffer from uncertain hybrid weather artifacts with heterogeneous degradation distributions. However, existing algorithms only excel at specific single degradation distributions due to limited adaptation capacity and have to deal with different weather degradations with separately trained models, and thus may fail to handle real-world stochastic weather scenarios. Besides, the model training is also infeasible due to the lack of paired video data to characterize the coexistence of multiple weather conditions. To ameliorate the aforementioned issue, we propose a novel unified model, dubbed UniWRV, to remove multiple heterogeneous video weather degradations in an all-in-one fashion. Specifically, to tackle degenerate spatial feature heterogeneity, we propose a tailored weather prior guided module that queries exclusive priors for different instances as prompts to steer spatial feature characterization. To tackle degenerate temporal feature heterogeneity, we propose a dynamic routing aggregation module that can automatically select optimal fusion paths for different instances to dynamically integrate temporal features. Additionally, we managed to construct a new synthetic video dataset, termed HWVideo, for learning and benchmarking multiple hybrid adverse weather removal, which contains 15 hybrid weather conditions with a total of 1500 adverse-weather/clean paired video clips. Real-world hybrid weather videos are also collected for evaluating model generalizability. Comprehensive experiments demonstrate that our UniWRV exhibits robust and superior adaptation capability in multiple heterogeneous degradations learning scenarios, including various generic video restoration tasks beyond weather removal.
new_dataset
0.962391
2503.06201
Yixin Wu
Yixin Wu, Feiran Zhang, Tianyuan Shi, Ruicheng Yin, Zhenghua Wang, Zhenliang Gan, Xiaohua Wang, Changze Lv, Xiaoqing Zheng, Xuanjing Huang
Explainable Synthetic Image Detection through Diffusion Timestep Ensembling
13 pages, 5 figures
null
null
null
cs.CV cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in diffusion models have enabled the creation of deceptively real images, posing significant security risks when misused. In this study, we reveal that natural and synthetic images exhibit distinct differences in the high-frequency domains of their Fourier power spectra after undergoing iterative noise perturbations through an inverse multi-step denoising process, suggesting that such noise can provide additional discriminative information for identifying synthetic images. Based on this observation, we propose a novel detection method that amplifies these differences by progressively adding noise to the original images across multiple timesteps, and train an ensemble of classifiers on these noised images. To enhance human comprehension, we introduce an explanation generation and refinement module to identify flaws located in AI-generated images. Additionally, we construct two new datasets, GenHard and GenExplain, derived from the GenImage benchmark, providing detection samples of greater difficulty and high-quality rationales for fake images. Extensive experiments show that our method achieves state-of-the-art performance with 98.91% and 95.89% detection accuracy on regular and harder samples, improving by at least 2.51% and 3.46% over baselines. Furthermore, our method also generalizes effectively to images generated by other diffusion models. Our code and datasets will be made publicly available.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 13:04:20 GMT" } ]
2025-03-11T00:00:00
[ [ "Wu", "Yixin", "" ], [ "Zhang", "Feiran", "" ], [ "Shi", "Tianyuan", "" ], [ "Yin", "Ruicheng", "" ], [ "Wang", "Zhenghua", "" ], [ "Gan", "Zhenliang", "" ], [ "Wang", "Xiaohua", "" ], [ "Lv", "Changze", "" ], [ "Zheng", "Xiaoqing", "" ], [ "Huang", "Xuanjing", "" ] ]
TITLE: Explainable Synthetic Image Detection through Diffusion Timestep Ensembling ABSTRACT: Recent advances in diffusion models have enabled the creation of deceptively real images, posing significant security risks when misused. In this study, we reveal that natural and synthetic images exhibit distinct differences in the high-frequency domains of their Fourier power spectra after undergoing iterative noise perturbations through an inverse multi-step denoising process, suggesting that such noise can provide additional discriminative information for identifying synthetic images. Based on this observation, we propose a novel detection method that amplifies these differences by progressively adding noise to the original images across multiple timesteps, and train an ensemble of classifiers on these noised images. To enhance human comprehension, we introduce an explanation generation and refinement module to identify flaws located in AI-generated images. Additionally, we construct two new datasets, GenHard and GenExplain, derived from the GenImage benchmark, providing detection samples of greater difficulty and high-quality rationales for fake images. Extensive experiments show that our method achieves state-of-the-art performance with 98.91% and 95.89% detection accuracy on regular and harder samples, improving by at least 2.51% and 3.46% over baselines. Furthermore, our method also generalizes effectively to images generated by other diffusion models. Our code and datasets will be made publicly available.
new_dataset
0.935169
2503.06202
Wei Liu
Wei Liu, Zhiying Deng, Zhongyu Niu, Jun Wang, Haozhao Wang, Zhigang Zeng, Ruixuan Li
Breaking Free from MMI: A New Frontier in Rationalization by Probing Input Utilization
null
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracting a small subset of crucial rationales from the full input is a key problem in explainability research. The most widely used fundamental criterion for rationale extraction is the maximum mutual information (MMI) criterion. In this paper, we first demonstrate that MMI suffers from diminishing marginal returns. Once part of the rationale has been identified, finding the remaining portions contributes only marginally to increasing the mutual information, making it difficult to use MMI to locate the rest. In contrast to MMI that aims to reproduce the prediction, we seek to identify the parts of the input that the network can actually utilize. This is achieved by comparing how different rationale candidates match the capability space of the weight matrix. The weight matrix of a neural network is typically low-rank, meaning that the linear combinations of its column vectors can only cover part of the directions in a high-dimensional space (high-dimension: the dimensions of an input vector). If an input is fully utilized by the network, it generally matches these directions (e.g., a portion of a hypersphere), resulting in a representation with a high norm. Conversely, if an input primarily falls outside (orthogonal to) these directions, its representation norm will approach zero, behaving like noise that the network cannot effectively utilize. Building on this, we propose using the norms of rationale candidates as an alternative objective to MMI. Through experiments on four text classification datasets and one graph classification dataset using three network architectures (GRUs, BERT, and GCN), we show that our method outperforms MMI and its improved variants in identifying better rationales. We also compare our method with a representative LLM (llama-3.1-8b-instruct) and find that our simple method gets comparable results to it and can sometimes even outperform it.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 13:08:46 GMT" } ]
2025-03-11T00:00:00
[ [ "Liu", "Wei", "" ], [ "Deng", "Zhiying", "" ], [ "Niu", "Zhongyu", "" ], [ "Wang", "Jun", "" ], [ "Wang", "Haozhao", "" ], [ "Zeng", "Zhigang", "" ], [ "Li", "Ruixuan", "" ] ]
TITLE: Breaking Free from MMI: A New Frontier in Rationalization by Probing Input Utilization ABSTRACT: Extracting a small subset of crucial rationales from the full input is a key problem in explainability research. The most widely used fundamental criterion for rationale extraction is the maximum mutual information (MMI) criterion. In this paper, we first demonstrate that MMI suffers from diminishing marginal returns. Once part of the rationale has been identified, finding the remaining portions contributes only marginally to increasing the mutual information, making it difficult to use MMI to locate the rest. In contrast to MMI that aims to reproduce the prediction, we seek to identify the parts of the input that the network can actually utilize. This is achieved by comparing how different rationale candidates match the capability space of the weight matrix. The weight matrix of a neural network is typically low-rank, meaning that the linear combinations of its column vectors can only cover part of the directions in a high-dimensional space (high-dimension: the dimensions of an input vector). If an input is fully utilized by the network, it generally matches these directions (e.g., a portion of a hypersphere), resulting in a representation with a high norm. Conversely, if an input primarily falls outside (orthogonal to) these directions, its representation norm will approach zero, behaving like noise that the network cannot effectively utilize. Building on this, we propose using the norms of rationale candidates as an alternative objective to MMI. Through experiments on four text classification datasets and one graph classification dataset using three network architectures (GRUs, BERT, and GCN), we show that our method outperforms MMI and its improved variants in identifying better rationales. We also compare our method with a representative LLM (llama-3.1-8b-instruct) and find that our simple method gets comparable results to it and can sometimes even outperform it.
no_new_dataset
0.948394
2503.06204
Ofir Ben Shoham
Oriel Perets, Ofir Ben Shoham, Nir Grinberg, Nadav Rappoport
CUPCase: Clinically Uncommon Patient Cases and Diagnoses Dataset
Accepted to AAAI 2025
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Medical benchmark datasets significantly contribute to developing Large Language Models (LLMs) for medical knowledge extraction, diagnosis, summarization, and other uses. Yet, current benchmarks are mainly derived from exam questions given to medical students or cases described in the medical literature, lacking the complexity of real-world patient cases that deviate from classic textbook abstractions. These include rare diseases, uncommon presentations of common diseases, and unexpected treatment responses. Here, we construct the Clinically Uncommon Patient Cases and Diagnosis Dataset (CUPCase) based on 3,562 real-world case reports from BMC, including diagnoses in open-ended textual format and as multiple-choice options with distractors. Using this dataset, we evaluate the ability of state-of-the-art LLMs, including both general-purpose and Clinical LLMs, to identify and correctly diagnose a patient case, and test models' performance when only partial information about cases is available. Our findings show that general-purpose GPT-4o attains the best performance in both the multiple-choice task (average accuracy of 87.9%) and the open-ended task (BERTScore F1 of 0.764), outperforming several LLMs with a focus on the medical domain such as Meditron-70B and MedLM-Large. Moreover, GPT-4o was able to maintain 87% and 88% of its performance with only the first 20% of tokens of the case presentation in multiple-choice and free text, respectively, highlighting the potential of LLMs to aid in early diagnosis in real-world cases. CUPCase expands our ability to evaluate LLMs for clinical decision support in an open and reproducible manner.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 13:21:44 GMT" } ]
2025-03-11T00:00:00
[ [ "Perets", "Oriel", "" ], [ "Shoham", "Ofir Ben", "" ], [ "Grinberg", "Nir", "" ], [ "Rappoport", "Nadav", "" ] ]
TITLE: CUPCase: Clinically Uncommon Patient Cases and Diagnoses Dataset ABSTRACT: Medical benchmark datasets significantly contribute to developing Large Language Models (LLMs) for medical knowledge extraction, diagnosis, summarization, and other uses. Yet, current benchmarks are mainly derived from exam questions given to medical students or cases described in the medical literature, lacking the complexity of real-world patient cases that deviate from classic textbook abstractions. These include rare diseases, uncommon presentations of common diseases, and unexpected treatment responses. Here, we construct the Clinically Uncommon Patient Cases and Diagnosis Dataset (CUPCase) based on 3,562 real-world case reports from BMC, including diagnoses in open-ended textual format and as multiple-choice options with distractors. Using this dataset, we evaluate the ability of state-of-the-art LLMs, including both general-purpose and Clinical LLMs, to identify and correctly diagnose a patient case, and test models' performance when only partial information about cases is available. Our findings show that general-purpose GPT-4o attains the best performance in both the multiple-choice task (average accuracy of 87.9%) and the open-ended task (BERTScore F1 of 0.764), outperforming several LLMs with a focus on the medical domain such as Meditron-70B and MedLM-Large. Moreover, GPT-4o was able to maintain 87% and 88% of its performance with only the first 20% of tokens of the case presentation in multiple-choice and free text, respectively, highlighting the potential of LLMs to aid in early diagnosis in real-world cases. CUPCase expands our ability to evaluate LLMs for clinical decision support in an open and reproducible manner.
new_dataset
0.872565
2503.06213
Ruiyu Wang
Ruiyu Wang, Sen Wang, Xinxin Zuo, Qiang Sun
Lifelong Learning with Task-Specific Adaptation: Addressing the Stability-Plasticity Dilemma
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lifelong learning (LL) aims to continuously acquire new knowledge while retaining previously learned knowledge. A central challenge in LL is the stability-plasticity dilemma, which requires models to balance the preservation of previous knowledge (stability) with the ability to learn new tasks (plasticity). While parameter-efficient fine-tuning (PEFT) has been widely adopted in large language models, its application to lifelong learning remains underexplored. To bridge this gap, this paper proposes AdaLL, an adapter-based framework designed to address the dilemma through a simple, universal, and effective strategy. AdaLL co-trains the backbone network and adapters under regularization constraints, enabling the backbone to capture task-invariant features while allowing the adapters to specialize in task-specific information. Unlike methods that freeze the backbone network, AdaLL incrementally enhances the backbone's capabilities across tasks while minimizing interference through backbone regularization. This architectural design significantly improves both stability and plasticity, effectively eliminating the stability-plasticity dilemma. Extensive experiments demonstrate that AdaLL consistently outperforms existing methods across various configurations, including dataset choices, task sequences, and task scales.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 13:33:38 GMT" } ]
2025-03-11T00:00:00
[ [ "Wang", "Ruiyu", "" ], [ "Wang", "Sen", "" ], [ "Zuo", "Xinxin", "" ], [ "Sun", "Qiang", "" ] ]
TITLE: Lifelong Learning with Task-Specific Adaptation: Addressing the Stability-Plasticity Dilemma ABSTRACT: Lifelong learning (LL) aims to continuously acquire new knowledge while retaining previously learned knowledge. A central challenge in LL is the stability-plasticity dilemma, which requires models to balance the preservation of previous knowledge (stability) with the ability to learn new tasks (plasticity). While parameter-efficient fine-tuning (PEFT) has been widely adopted in large language models, its application to lifelong learning remains underexplored. To bridge this gap, this paper proposes AdaLL, an adapter-based framework designed to address the dilemma through a simple, universal, and effective strategy. AdaLL co-trains the backbone network and adapters under regularization constraints, enabling the backbone to capture task-invariant features while allowing the adapters to specialize in task-specific information. Unlike methods that freeze the backbone network, AdaLL incrementally enhances the backbone's capabilities across tasks while minimizing interference through backbone regularization. This architectural design significantly improves both stability and plasticity, effectively eliminating the stability-plasticity dilemma. Extensive experiments demonstrate that AdaLL consistently outperforms existing methods across various configurations, including dataset choices, task sequences, and task scales.
no_new_dataset
0.945651
2503.06222
Meng Wang
Meng Wang, Fan Wu, Yunchuan Qin, Ruihui Li, Zhuo Tang, Kenli Li
Vision-based 3D Semantic Scene Completion via Capture Dynamic Representations
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The vision-based semantic scene completion task aims to predict dense geometric and semantic 3D scene representations from 2D images. However, the presence of dynamic objects in the scene seriously affects the accuracy of the model inferring 3D structures from 2D images. Existing methods simply stack multiple frames of image input to increase dense scene semantic information, but ignore the fact that dynamic objects and non-texture areas violate multi-view consistency and matching reliability. To address these issues, we propose a novel method, CDScene: Vision-based Robust Semantic Scene Completion via Capturing Dynamic Representations. First, we leverage a multimodal large-scale model to extract 2D explicit semantics and align them into 3D space. Second, we exploit the characteristics of monocular and stereo depth to decouple scene information into dynamic and static features. The dynamic features contain structural relationships around dynamic objects, and the static features contain dense contextual spatial information. Finally, we design a dynamic-static adaptive fusion module to effectively extract and aggregate complementary features, achieving robust and accurate semantic scene completion in autonomous driving scenarios. Extensive experimental results on the SemanticKITTI, SSCBench-KITTI360, and SemanticKITTI-C datasets demonstrate the superiority and robustness of CDScene over existing state-of-the-art methods.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 13:49:43 GMT" } ]
2025-03-11T00:00:00
[ [ "Wang", "Meng", "" ], [ "Wu", "Fan", "" ], [ "Qin", "Yunchuan", "" ], [ "Li", "Ruihui", "" ], [ "Tang", "Zhuo", "" ], [ "Li", "Kenli", "" ] ]
TITLE: Vision-based 3D Semantic Scene Completion via Capturing Dynamic Representations ABSTRACT: The vision-based semantic scene completion task aims to predict dense geometric and semantic 3D scene representations from 2D images. However, the presence of dynamic objects in the scene seriously degrades the accuracy of models that infer 3D structures from 2D images. Existing methods simply stack multiple image frames as input to enrich dense scene semantic information, but ignore the fact that dynamic objects and texture-less areas violate multi-view consistency and matching reliability. To address these issues, we propose a novel method, CDScene: Vision-based Robust Semantic Scene Completion via Capturing Dynamic Representations. First, we leverage a multimodal large-scale model to extract 2D explicit semantics and align them into 3D space. Second, we exploit the characteristics of monocular and stereo depth to decouple scene information into dynamic and static features. The dynamic features contain structural relationships around dynamic objects, and the static features contain dense contextual spatial information. Finally, we design a dynamic-static adaptive fusion module to effectively extract and aggregate complementary features, achieving robust and accurate semantic scene completion in autonomous driving scenarios. Extensive experimental results on the SemanticKITTI, SSCBench-KITTI360, and SemanticKITTI-C datasets demonstrate the superiority and robustness of CDScene over existing state-of-the-art methods.
no_new_dataset
0.941493
2503.06229
Federico Mazzoni
Federico Mazzoni, Riccardo Guidotti, Alessio Malizia
A Frank System for Co-Evolutionary Hybrid Decision-Making
13 pages
Advances in Intelligent Data Analysis XXII, Lecture Notes in Computer Science, vol. 14642, Springer, pp. 236-248, 2024
10.1007/978-3-031-58553-1_19
null
cs.HC cs.AI cs.CY cs.LG
http://creativecommons.org/licenses/by/4.0/
We introduce Frank, a human-in-the-loop system for co-evolutionary hybrid decision-making that aids the user in labeling records from an unlabeled dataset. Frank employs incremental learning to ``evolve'' in parallel with the user's decisions, by training an interpretable machine learning model on the records labeled by the user. Furthermore, Frank advances state-of-the-art approaches by simultaneously offering inconsistency controls, explanations, fairness checks, and bad-faith safeguards. We evaluate our proposal by simulating users' behavior with various levels of expertise and reliance on Frank's suggestions. The experiments show that Frank's intervention leads to improvements in both the accuracy and the fairness of the decisions.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 14:06:16 GMT" } ]
2025-03-11T00:00:00
[ [ "Mazzoni", "Federico", "" ], [ "Guidotti", "Riccardo", "" ], [ "Malizia", "Alessio", "" ] ]
TITLE: A Frank System for Co-Evolutionary Hybrid Decision-Making ABSTRACT: We introduce Frank, a human-in-the-loop system for co-evolutionary hybrid decision-making that aids the user in labeling records from an unlabeled dataset. Frank employs incremental learning to ``evolve'' in parallel with the user's decisions, by training an interpretable machine learning model on the records labeled by the user. Furthermore, Frank advances state-of-the-art approaches by simultaneously offering inconsistency controls, explanations, fairness checks, and bad-faith safeguards. We evaluate our proposal by simulating users' behavior with various levels of expertise and reliance on Frank's suggestions. The experiments show that Frank's intervention leads to improvements in both the accuracy and the fairness of the decisions.
no_new_dataset
0.952442
2503.06231
Aditya Shankar
Aditya Shankar, Lydia Y. Chen, Arie van Deursen, Rihan Hai
WaveStitch: Flexible and Fast Conditional Time Series Generation with Diffusion Models
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Generating temporal data under constraints is critical for forecasting, imputation, and synthesis. These datasets often include auxiliary conditions that influence the values within the time series signal. Existing methods face three key challenges: (1) they fail to adapt to conditions at inference time; (2) they rely on sequential generation, which slows the generation speed; and (3) they inefficiently encode categorical features, leading to increased sparsity and input sizes. We propose WaveStitch, a novel method that addresses these challenges by leveraging denoising diffusion probabilistic models to efficiently generate accurate temporal data under given auxiliary constraints. WaveStitch overcomes these limitations by: (1) modeling interactions between constraints and signals to generalize to new, unseen conditions; (2) enabling the parallel synthesis of sequential segments with a novel "stitching" mechanism to enforce coherence across segments; and (3) encoding categorical features as compact periodic signals while preserving temporal patterns. Extensive evaluations across diverse datasets highlight WaveStitch's ability to generalize to unseen conditions during inference, achieving up to a 10x lower mean-squared-error compared to the state-of-the-art methods. Moreover, WaveStitch generates data up to 460x faster than autoregressive methods while maintaining comparable accuracy. By efficiently encoding categorical features, WaveStitch provides a robust and efficient solution for temporal data generation. Our code is open-sourced: https://github.com/adis98/HierarchicalTS
[ { "version": "v1", "created": "Sat, 8 Mar 2025 14:12:10 GMT" } ]
2025-03-11T00:00:00
[ [ "Shankar", "Aditya", "" ], [ "Chen", "Lydia Y.", "" ], [ "van Deursen", "Arie", "" ], [ "Hai", "Rihan", "" ] ]
TITLE: WaveStitch: Flexible and Fast Conditional Time Series Generation with Diffusion Models ABSTRACT: Generating temporal data under constraints is critical for forecasting, imputation, and synthesis. These datasets often include auxiliary conditions that influence the values within the time series signal. Existing methods face three key challenges: (1) they fail to adapt to conditions at inference time; (2) they rely on sequential generation, which slows the generation speed; and (3) they inefficiently encode categorical features, leading to increased sparsity and input sizes. We propose WaveStitch, a novel method that addresses these challenges by leveraging denoising diffusion probabilistic models to efficiently generate accurate temporal data under given auxiliary constraints. WaveStitch overcomes these limitations by: (1) modeling interactions between constraints and signals to generalize to new, unseen conditions; (2) enabling the parallel synthesis of sequential segments with a novel "stitching" mechanism to enforce coherence across segments; and (3) encoding categorical features as compact periodic signals while preserving temporal patterns. Extensive evaluations across diverse datasets highlight WaveStitch's ability to generalize to unseen conditions during inference, achieving up to a 10x lower mean-squared-error compared to the state-of-the-art methods. Moreover, WaveStitch generates data up to 460x faster than autoregressive methods while maintaining comparable accuracy. By efficiently encoding categorical features, WaveStitch provides a robust and efficient solution for temporal data generation. Our code is open-sourced: https://github.com/adis98/HierarchicalTS
no_new_dataset
0.946695
2503.06247
Minghao Fu
Minghao Fu, Danning Li, Aryan Gadhiya, Benjamin Lambright, Mohamed Alowais, Mohab Bahnassy, Saad El Dine Elletter, Hawau Olamide Toyin, Haiyan Jiang, Kun Zhang, Hanan Aldarmaki
Infant Cry Detection Using Causal Temporal Representation
Accepted to ICASSP 2025
null
null
null
cs.SD cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses a major challenge in acoustic event detection, in particular infant cry detection in the presence of other sounds and background noises: the lack of precise annotated data. We present two contributions for supervised and unsupervised infant cry detection. The first is an annotated dataset for cry segmentation, which enables supervised models to achieve state-of-the-art performance. Additionally, we propose a novel unsupervised method, Causal Representation Spare Transition Clustering (CRSTC), based on causal temporal representation, which helps address the issue of data scarcity more generally. By integrating the detected cry segments, we significantly improve the performance of downstream infant cry classification, highlighting the potential of this approach for infant care applications.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 15:15:23 GMT" } ]
2025-03-11T00:00:00
[ [ "Fu", "Minghao", "" ], [ "Li", "Danning", "" ], [ "Gadhiya", "Aryan", "" ], [ "Lambright", "Benjamin", "" ], [ "Alowais", "Mohamed", "" ], [ "Bahnassy", "Mohab", "" ], [ "Elletter", "Saad El Dine", "" ], [ "Toyin", "Hawau Olamide", "" ], [ "Jiang", "Haiyan", "" ], [ "Zhang", "Kun", "" ], [ "Aldarmaki", "Hanan", "" ] ]
TITLE: Infant Cry Detection Using Causal Temporal Representation ABSTRACT: This paper addresses a major challenge in acoustic event detection, in particular infant cry detection in the presence of other sounds and background noises: the lack of precise annotated data. We present two contributions for supervised and unsupervised infant cry detection. The first is an annotated dataset for cry segmentation, which enables supervised models to achieve state-of-the-art performance. Additionally, we propose a novel unsupervised method, Causal Representation Spare Transition Clustering (CRSTC), based on causal temporal representation, which helps address the issue of data scarcity more generally. By integrating the detected cry segments, we significantly improve the performance of downstream infant cry classification, highlighting the potential of this approach for infant care applications.
no_new_dataset
0.95418
2503.06261
Wei-En Tai
Wei-En Tai, Yu-Lin Shih, Cheng Sun, Yu-Chiang Frank Wang, Hwann-Tzong Chen
Segment Anything, Even Occluded
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Amodal instance segmentation, which aims to detect and segment both visible and invisible parts of objects in images, plays a crucial role in various applications including autonomous driving, robotic manipulation, and scene understanding. While existing methods require training both front-end detectors and mask decoders jointly, this approach lacks flexibility and fails to leverage the strengths of pre-existing modal detectors. To address this limitation, we propose SAMEO, a novel framework that adapts the Segment Anything Model (SAM) as a versatile mask decoder capable of interfacing with various front-end detectors to enable mask prediction even for partially occluded objects. Acknowledging the constraints of limited amodal segmentation datasets, we introduce Amodal-LVIS, a large-scale synthetic dataset comprising 300K images derived from the modal LVIS and LVVIS datasets. This dataset significantly expands the training data available for amodal segmentation research. Our experimental results demonstrate that our approach, when trained on the newly extended dataset, including Amodal-LVIS, achieves remarkable zero-shot performance on both COCOA-cls and D2SA benchmarks, highlighting its potential for generalization to unseen scenarios.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 16:14:57 GMT" } ]
2025-03-11T00:00:00
[ [ "Tai", "Wei-En", "" ], [ "Shih", "Yu-Lin", "" ], [ "Sun", "Cheng", "" ], [ "Wang", "Yu-Chiang Frank", "" ], [ "Chen", "Hwann-Tzong", "" ] ]
TITLE: Segment Anything, Even Occluded ABSTRACT: Amodal instance segmentation, which aims to detect and segment both visible and invisible parts of objects in images, plays a crucial role in various applications including autonomous driving, robotic manipulation, and scene understanding. While existing methods require training both front-end detectors and mask decoders jointly, this approach lacks flexibility and fails to leverage the strengths of pre-existing modal detectors. To address this limitation, we propose SAMEO, a novel framework that adapts the Segment Anything Model (SAM) as a versatile mask decoder capable of interfacing with various front-end detectors to enable mask prediction even for partially occluded objects. Acknowledging the constraints of limited amodal segmentation datasets, we introduce Amodal-LVIS, a large-scale synthetic dataset comprising 300K images derived from the modal LVIS and LVVIS datasets. This dataset significantly expands the training data available for amodal segmentation research. Our experimental results demonstrate that our approach, when trained on the newly extended dataset, including Amodal-LVIS, achieves remarkable zero-shot performance on both COCOA-cls and D2SA benchmarks, highlighting its potential for generalization to unseen scenarios.
new_dataset
0.961678
2503.06268
Shaobin Zhuang
Shaobin Zhuang, Zhipeng Huang, Binxin Yang, Ying Zhang, Fangyikang Wang, Canmiao Fu, Chong Sun, Zheng-Jun Zha, Chen Li, Yali Wang
Get In Video: Add Anything You Want to the Video
Project page:https://zhuangshaobin.github.io/GetInVideo-project/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video editing increasingly demands the ability to incorporate specific real-world instances into existing footage, yet current approaches fundamentally fail to capture the unique visual characteristics of particular subjects and ensure natural instance/scene interactions. We formalize this overlooked yet critical editing paradigm as "Get-In-Video Editing", where users provide reference images to precisely specify visual elements they wish to incorporate into videos. Addressing this task's dual challenges of severe training data scarcity and the technical difficulty of maintaining spatiotemporal coherence, we introduce three key contributions. First, we develop the GetIn-1M dataset, created through our automated Recognize-Track-Erase pipeline, which sequentially performs video captioning, salient instance identification, object detection, temporal tracking, and instance removal to generate high-quality video editing pairs with comprehensive annotations (reference image, tracking mask, instance prompt). Second, we present GetInVideo, a novel end-to-end framework that leverages a diffusion transformer architecture with 3D full attention to process reference images, condition videos, and masks simultaneously, maintaining temporal coherence, preserving visual identity, and ensuring natural scene interactions when integrating reference objects into videos. Finally, we establish GetInBench, the first comprehensive benchmark for the Get-In-Video Editing scenario, demonstrating our approach's superior performance through extensive evaluations. Our work enables accessible, high-quality incorporation of specific real-world subjects into videos, significantly advancing personalized video editing capabilities.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 16:27:53 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhuang", "Shaobin", "" ], [ "Huang", "Zhipeng", "" ], [ "Yang", "Binxin", "" ], [ "Zhang", "Ying", "" ], [ "Wang", "Fangyikang", "" ], [ "Fu", "Canmiao", "" ], [ "Sun", "Chong", "" ], [ "Zha", "Zheng-Jun", "" ], [ "Li", "Chen", "" ], [ "Wang", "Yali", "" ] ]
TITLE: Get In Video: Add Anything You Want to the Video ABSTRACT: Video editing increasingly demands the ability to incorporate specific real-world instances into existing footage, yet current approaches fundamentally fail to capture the unique visual characteristics of particular subjects and ensure natural instance/scene interactions. We formalize this overlooked yet critical editing paradigm as "Get-In-Video Editing", where users provide reference images to precisely specify visual elements they wish to incorporate into videos. Addressing this task's dual challenges of severe training data scarcity and the technical difficulty of maintaining spatiotemporal coherence, we introduce three key contributions. First, we develop the GetIn-1M dataset, created through our automated Recognize-Track-Erase pipeline, which sequentially performs video captioning, salient instance identification, object detection, temporal tracking, and instance removal to generate high-quality video editing pairs with comprehensive annotations (reference image, tracking mask, instance prompt). Second, we present GetInVideo, a novel end-to-end framework that leverages a diffusion transformer architecture with 3D full attention to process reference images, condition videos, and masks simultaneously, maintaining temporal coherence, preserving visual identity, and ensuring natural scene interactions when integrating reference objects into videos. Finally, we establish GetInBench, the first comprehensive benchmark for the Get-In-Video Editing scenario, demonstrating our approach's superior performance through extensive evaluations. Our work enables accessible, high-quality incorporation of specific real-world subjects into videos, significantly advancing personalized video editing capabilities.
new_dataset
0.966092
2503.06269
Thomas Winninger
Thomas Winninger, Boussad Addad, Katarzyna Kapusta
Using Mechanistic Interpretability to Craft Adversarial Attacks against Large Language Models
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Traditional white-box methods for creating adversarial perturbations against LLMs typically rely only on gradient computation from the targeted model, ignoring the internal mechanisms responsible for attack success or failure. Conversely, interpretability studies that analyze these internal mechanisms lack practical applications beyond runtime interventions. We bridge this gap by introducing a novel white-box approach that leverages mechanistic interpretability techniques to craft practical adversarial inputs. Specifically, we first identify acceptance subspaces - sets of feature vectors that do not trigger the model's refusal mechanisms - then use gradient-based optimization to reroute embeddings from refusal subspaces to acceptance subspaces, effectively achieving jailbreaks. This targeted approach significantly reduces computation cost, achieving attack success rates of 80-95\% on state-of-the-art models including Gemma2, Llama3.2, and Qwen2.5 within minutes or even seconds, compared to existing techniques that often fail or require hours of computation. We believe this approach opens a new direction for both attack research and defense development. Furthermore, it showcases a practical application of mechanistic interpretability where other methods are less efficient, which highlights its utility. The code and generated datasets are available at https://github.com/Sckathach/subspace-rerouting.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 16:29:45 GMT" } ]
2025-03-11T00:00:00
[ [ "Winninger", "Thomas", "" ], [ "Addad", "Boussad", "" ], [ "Kapusta", "Katarzyna", "" ] ]
TITLE: Using Mechanistic Interpretability to Craft Adversarial Attacks against Large Language Models ABSTRACT: Traditional white-box methods for creating adversarial perturbations against LLMs typically rely only on gradient computation from the targeted model, ignoring the internal mechanisms responsible for attack success or failure. Conversely, interpretability studies that analyze these internal mechanisms lack practical applications beyond runtime interventions. We bridge this gap by introducing a novel white-box approach that leverages mechanistic interpretability techniques to craft practical adversarial inputs. Specifically, we first identify acceptance subspaces - sets of feature vectors that do not trigger the model's refusal mechanisms - then use gradient-based optimization to reroute embeddings from refusal subspaces to acceptance subspaces, effectively achieving jailbreaks. This targeted approach significantly reduces computation cost, achieving attack success rates of 80-95\% on state-of-the-art models including Gemma2, Llama3.2, and Qwen2.5 within minutes or even seconds, compared to existing techniques that often fail or require hours of computation. We believe this approach opens a new direction for both attack research and defense development. Furthermore, it showcases a practical application of mechanistic interpretability where other methods are less efficient, which highlights its utility. The code and generated datasets are available at https://github.com/Sckathach/subspace-rerouting.
no_new_dataset
0.943712
2503.06272
Gubio Gome De Lima
Gubio Lima and Gustavo Miranda and Tiago de Souza Farias
Introdu\c{c}\~ao a rede neural para F\'isicos
26 pages, 19 figures
null
null
null
physics.class-ph physics.comp-ph
http://creativecommons.org/publicdomain/zero/1.0/
Machine learning techniques have emerged as powerful tools for tackling a wide range of challenges. The integration of these techniques with physics has led to innovative approaches for understanding, controlling, and simulating physical phenomena. This article aims to provide a practical introduction to neural networks and their fundamental concepts, highlighting recent perspectives on advances at the intersection of machine learning models and physical systems. In addition, we present practical material to guide the reader through their first steps in applying neural networks to solve physics problems. As an illustrative example, we provide four applications of increasing complexity for the simple pendulum problem, namely: fitting the parameters of the pendulum ODE in the small-angle approximation; Physics-Inspired Neural Networks (PINNs) for finding solutions of the pendulum ODE at small angles; autoencoders on a dataset of pendulum images to estimate the dimensionality of the parameter space of the physical problem; and the use of Sparse Identification of Non-Linear Dynamics (SINDy) architectures for discovering models and analytic expressions for the nonlinear pendulum problem (large angles).
[ { "version": "v1", "created": "Sat, 8 Mar 2025 16:33:52 GMT" } ]
2025-03-11T00:00:00
[ [ "Lima", "Gubio", "" ], [ "Miranda", "Gustavo", "" ], [ "Farias", "Tiago de Souza", "" ] ]
TITLE: Introdu\c{c}\~ao a rede neural para F\'isicos ABSTRACT: Machine learning techniques have emerged as powerful tools for tackling a wide range of challenges. The integration of these techniques with physics has led to innovative approaches for understanding, controlling, and simulating physical phenomena. This article aims to provide a practical introduction to neural networks and their fundamental concepts, highlighting recent perspectives on advances at the intersection of machine learning models and physical systems. In addition, we present practical material to guide the reader through their first steps in applying neural networks to solve physics problems. As an illustrative example, we provide four applications of increasing complexity for the simple pendulum problem, namely: fitting the parameters of the pendulum ODE in the small-angle approximation; Physics-Inspired Neural Networks (PINNs) for finding solutions of the pendulum ODE at small angles; autoencoders on a dataset of pendulum images to estimate the dimensionality of the parameter space of the physical problem; and the use of Sparse Identification of Non-Linear Dynamics (SINDy) architectures for discovering models and analytic expressions for the nonlinear pendulum problem (large angles).
no_new_dataset
0.947672
2503.06276
Songping Wang
Songping Wang, Xinquan Yue, Yueming Lyu, Caifeng Shan
Exploring Adversarial Transferability between Kolmogorov-Arnold Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kolmogorov-Arnold Networks (KANs) have emerged as a transformative model paradigm, significantly impacting various fields. However, their adversarial robustness remains underexplored, especially across different KAN architectures. To explore this critical safety issue, we conduct an analysis and find that, due to overfitting to their specific basis functions, KANs possess poor adversarial transferability among different KAN architectures. To tackle this challenge, we propose AdvKAN, the first transfer attack method for KANs. AdvKAN integrates two key components: 1) a Breakthrough-Defense Surrogate Model (BDSM), which employs a breakthrough-defense training strategy to mitigate overfitting to the specific structures of KANs; and 2) a Global-Local Interaction (GLI) technique, which promotes sufficient interaction between adversarial gradients at hierarchical levels, further smoothing the loss surfaces of KANs. The two components work together to enhance the strength of transfer attacks among different KANs. Extensive experimental results on various KANs and datasets demonstrate the effectiveness of AdvKAN, which possesses notably superior attack capabilities and deeply reveals the vulnerabilities of KANs. Code will be released upon acceptance.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 16:48:05 GMT" } ]
2025-03-11T00:00:00
[ [ "Wang", "Songping", "" ], [ "Yue", "Xinquan", "" ], [ "Lyu", "Yueming", "" ], [ "Shan", "Caifeng", "" ] ]
TITLE: Exploring Adversarial Transferability between Kolmogorov-Arnold Networks ABSTRACT: Kolmogorov-Arnold Networks (KANs) have emerged as a transformative model paradigm, significantly impacting various fields. However, their adversarial robustness remains underexplored, especially across different KAN architectures. To explore this critical safety issue, we conduct an analysis and find that, due to overfitting to their specific basis functions, KANs possess poor adversarial transferability among different KAN architectures. To tackle this challenge, we propose AdvKAN, the first transfer attack method for KANs. AdvKAN integrates two key components: 1) a Breakthrough-Defense Surrogate Model (BDSM), which employs a breakthrough-defense training strategy to mitigate overfitting to the specific structures of KANs; and 2) a Global-Local Interaction (GLI) technique, which promotes sufficient interaction between adversarial gradients at hierarchical levels, further smoothing the loss surfaces of KANs. The two components work together to enhance the strength of transfer attacks among different KANs. Extensive experimental results on various KANs and datasets demonstrate the effectiveness of AdvKAN, which possesses notably superior attack capabilities and deeply reveals the vulnerabilities of KANs. Code will be released upon acceptance.
no_new_dataset
0.945096
2503.06282
Shuangzhi Li
Shuangzhi Li, Junlong Shen, Lei Ma, and Xingyu Li
From Dataset to Real-world: General 3D Object Detection via Generalized Cross-domain Few-shot Learning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LiDAR-based 3D object detection datasets have been pivotal for autonomous driving, yet they cover a limited range of objects, restricting the model's generalization across diverse deployment environments. To address this, we introduce the first generalized cross-domain few-shot (GCFS) task in 3D object detection, which focuses on adapting a source-pretrained model for high performance on both common and novel classes in a target domain with few-shot samples. Our solution integrates multi-modal fusion and contrastive-enhanced prototype learning within one framework, holistically overcoming challenges related to data scarcity and domain adaptation in the GCFS setting. The multi-modal fusion module utilizes 2D vision-language models to extract rich, open-set semantic knowledge. To address biases in point distributions across varying structural complexities, we introduce a physically-aware box searching strategy that leverages laser imaging principles to generate high-quality 3D box proposals from 2D insights, enhancing object recall. To effectively capture domain-specific representations for each class from limited target data, we further propose contrastive-enhanced prototype learning, which strengthens the model's adaptability. We evaluate our approach with three GCFS benchmark settings, and extensive experiments demonstrate the effectiveness of our solution for GCFS tasks. The code will be publicly available.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 17:05:21 GMT" } ]
2025-03-11T00:00:00
[ [ "Li", "Shuangzhi", "" ], [ "Shen", "Junlong", "" ], [ "Ma", "Lei", "" ], [ "Li", "Xingyu", "" ] ]
TITLE: From Dataset to Real-world: General 3D Object Detection via Generalized Cross-domain Few-shot Learning ABSTRACT: LiDAR-based 3D object detection datasets have been pivotal for autonomous driving, yet they cover a limited range of objects, restricting the model's generalization across diverse deployment environments. To address this, we introduce the first generalized cross-domain few-shot (GCFS) task in 3D object detection, which focuses on adapting a source-pretrained model for high performance on both common and novel classes in a target domain with few-shot samples. Our solution integrates multi-modal fusion and contrastive-enhanced prototype learning within one framework, holistically overcoming challenges related to data scarcity and domain adaptation in the GCFS setting. The multi-modal fusion module utilizes 2D vision-language models to extract rich, open-set semantic knowledge. To address biases in point distributions across varying structural complexities, we introduce a physically-aware box searching strategy that leverages laser imaging principles to generate high-quality 3D box proposals from 2D insights, enhancing object recall. To effectively capture domain-specific representations for each class from limited target data, we further propose contrastive-enhanced prototype learning, which strengthens the model's adaptability. We evaluate our approach with three GCFS benchmark settings, and extensive experiments demonstrate the effectiveness of our solution for GCFS tasks. The code will be publicly available.
no_new_dataset
0.944536
2503.06296
Vinay Verma Kumar
Vinay Kumar Verma, Shreyas Sunil Kulkarni, Happy Mittal and Deepak Gupta
MoEMoE: Question Guided Dense and Scalable Sparse Mixture-of-Expert for Multi-source Multi-modal Answering
To appear at NAACL Industry Track
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Question Answering (QA) and Visual Question Answering (VQA) are well-studied problems in the language and vision domain. One challenging scenario involves multiple sources of information, each of a different modality, where the answer to the question may exist in one or more sources. This scenario contains richer information but is highly complex to handle. In this work, we formulate a novel question-answer generation (QAG) framework in an environment containing multi-source, multimodal information. The answer may belong to any or all sources; therefore, selecting the most prominent answer source or an optimal combination of all sources for a given question is challenging. To address this issue, we propose a question-guided attention mechanism that learns attention across multiple sources and decodes this information for robust and unbiased answer generation. To learn attention within each source, we introduce an explicit alignment between questions and various information sources, which facilitates identifying the most pertinent parts of the source information relative to the question. Scalability in handling diverse questions poses a challenge. We address this by extending our model to a sparse mixture-of-experts (sparse-MoE) framework, enabling it to handle thousands of question types. Experiments on T5 and Flan-T5 using three datasets demonstrate the model's efficacy, supported by ablation studies.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 18:09:13 GMT" } ]
2025-03-11T00:00:00
[ [ "Verma", "Vinay Kumar", "" ], [ "Kulkarni", "Shreyas Sunil", "" ], [ "Mittal", "Happy", "" ], [ "Gupta", "Deepak", "" ] ]
TITLE: MoEMoE: Question Guided Dense and Scalable Sparse Mixture-of-Expert for Multi-source Multi-modal Answering ABSTRACT: Question Answering (QA) and Visual Question Answering (VQA) are well-studied problems in the language and vision domain. One challenging scenario involves multiple sources of information, each of a different modality, where the answer to the question may exist in one or more sources. This scenario contains richer information but is highly complex to handle. In this work, we formulate a novel question-answer generation (QAG) framework in an environment containing multi-source, multimodal information. The answer may belong to any or all sources; therefore, selecting the most prominent answer source or an optimal combination of all sources for a given question is challenging. To address this issue, we propose a question-guided attention mechanism that learns attention across multiple sources and decodes this information for robust and unbiased answer generation. To learn attention within each source, we introduce an explicit alignment between questions and various information sources, which facilitates identifying the most pertinent parts of the source information relative to the question. Scalability in handling diverse questions poses a challenge. We address this by extending our model to a sparse mixture-of-experts (sparse-MoE) framework, enabling it to handle thousands of question types. Experiments on T5 and Flan-T5 using three datasets demonstrate the model's efficacy, supported by ablation studies.
no_new_dataset
0.941708
2503.06312
Zhitong Xiong
Zhitong Xiong, Yi Wang, Weikang Yu, Adam J Stewart, Jie Zhao, Nils Lehmann, Thomas Dujardin, Zhenghang Yuan, Pedram Ghamisi, Xiao Xiang Zhu
GeoLangBind: Unifying Earth Observation with Agglomerative Vision-Language Foundation Models
code & weights: https://github.com/xiong-zhitong/GeoLB-SigLIP
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Earth observation (EO) data, collected from diverse sensors with varying imaging principles, present significant challenges in creating unified analytical frameworks. We present GeoLangBind, a novel agglomerative vision--language foundation model that bridges the gap between heterogeneous EO data modalities using language as a unifying medium. Our approach aligns different EO data types into a shared language embedding space, enabling seamless integration and complementary feature learning from diverse sensor data. To achieve this, we construct a large-scale multimodal image--text dataset, GeoLangBind-2M, encompassing six data modalities. GeoLangBind leverages this dataset to develop a zero-shot foundation model capable of processing arbitrary numbers of EO data channels as input. Through our designed Modality-aware Knowledge Agglomeration (MaKA) module and progressive multimodal weight merging strategy, we create a powerful agglomerative foundation model that excels in both zero-shot vision--language comprehension and fine-grained visual understanding. Extensive evaluation across 23 datasets covering multiple tasks demonstrates GeoLangBind's superior performance and versatility in EO applications, offering a robust framework for various environmental monitoring and analysis tasks. The dataset and pretrained models will be publicly available.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 19:10:04 GMT" } ]
2025-03-11T00:00:00
[ [ "Xiong", "Zhitong", "" ], [ "Wang", "Yi", "" ], [ "Yu", "Weikang", "" ], [ "Stewart", "Adam J", "" ], [ "Zhao", "Jie", "" ], [ "Lehmann", "Nils", "" ], [ "Dujardin", "Thomas", "" ], [ "Yuan", "Zhenghang", "" ], [ "Ghamisi", "Pedram", "" ], [ "Zhu", "Xiao Xiang", "" ] ]
TITLE: GeoLangBind: Unifying Earth Observation with Agglomerative Vision-Language Foundation Models ABSTRACT: Earth observation (EO) data, collected from diverse sensors with varying imaging principles, present significant challenges in creating unified analytical frameworks. We present GeoLangBind, a novel agglomerative vision--language foundation model that bridges the gap between heterogeneous EO data modalities using language as a unifying medium. Our approach aligns different EO data types into a shared language embedding space, enabling seamless integration and complementary feature learning from diverse sensor data. To achieve this, we construct a large-scale multimodal image--text dataset, GeoLangBind-2M, encompassing six data modalities. GeoLangBind leverages this dataset to develop a zero-shot foundation model capable of processing arbitrary numbers of EO data channels as input. Through our designed Modality-aware Knowledge Agglomeration (MaKA) module and progressive multimodal weight merging strategy, we create a powerful agglomerative foundation model that excels in both zero-shot vision--language comprehension and fine-grained visual understanding. Extensive evaluation across 23 datasets covering multiple tasks demonstrates GeoLangBind's superior performance and versatility in EO applications, offering a robust framework for various environmental monitoring and analysis tasks. The dataset and pretrained models will be publicly available.
new_dataset
0.958693
2503.06313
Chandan Kumar Sah
Chandan Kumar Sah, Ankit Kumar Shaw, Xiaoli Lian, Arsalan Shahid Baig, Tuopu Wen, Kun Jiang, Mengmeng Yang, Diange Yang
Advancing Autonomous Vehicle Intelligence: Deep Learning and Multimodal LLM for Traffic Sign Recognition and Robust Lane Detection
11 pages, 9 figures
null
null
null
cs.CV cs.AI cs.CL cs.LG cs.RO
http://creativecommons.org/licenses/by/4.0/
Autonomous vehicles (AVs) require reliable traffic sign recognition and robust lane detection capabilities to ensure safe navigation in complex and dynamic environments. This paper introduces an integrated approach combining advanced deep learning techniques and Multimodal Large Language Models (MLLMs) for comprehensive road perception. For traffic sign recognition, we systematically evaluate ResNet-50, YOLOv8, and RT-DETR, achieving state-of-the-art accuracies of 99.8% with ResNet-50, 98.0% with YOLOv8, and 96.6% with RT-DETR, despite the latter's higher computational complexity. For lane detection, we propose a CNN-based segmentation method enhanced by polynomial curve fitting, which delivers high accuracy under favorable conditions. Furthermore, we introduce a lightweight multimodal LLM-based framework that directly undergoes instruction tuning using small yet diverse datasets, eliminating the need for initial pretraining. This framework effectively handles various lane types, complex intersections, and merging zones, significantly enhancing lane detection reliability by reasoning under adverse conditions. Despite constraints in available training resources, our multimodal approach demonstrates advanced reasoning capabilities, achieving a Frame Overall Accuracy (FRM) of 53.87%, a Question Overall Accuracy (QNS) of 82.83%, lane detection accuracies of 99.6% in clear conditions and 93.0% at night, and robust performance in reasoning about lane invisibility due to rain (88.4%) or road degradation (95.6%). The proposed comprehensive framework markedly enhances AV perception reliability, thus contributing significantly to safer autonomous driving across diverse and challenging road scenarios.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 19:12:36 GMT" } ]
2025-03-11T00:00:00
[ [ "Sah", "Chandan Kumar", "" ], [ "Shaw", "Ankit Kumar", "" ], [ "Lian", "Xiaoli", "" ], [ "Baig", "Arsalan Shahid", "" ], [ "Wen", "Tuopu", "" ], [ "Jiang", "Kun", "" ], [ "Yang", "Mengmeng", "" ], [ "Yang", "Diange", "" ] ]
TITLE: Advancing Autonomous Vehicle Intelligence: Deep Learning and Multimodal LLM for Traffic Sign Recognition and Robust Lane Detection ABSTRACT: Autonomous vehicles (AVs) require reliable traffic sign recognition and robust lane detection capabilities to ensure safe navigation in complex and dynamic environments. This paper introduces an integrated approach combining advanced deep learning techniques and Multimodal Large Language Models (MLLMs) for comprehensive road perception. For traffic sign recognition, we systematically evaluate ResNet-50, YOLOv8, and RT-DETR, achieving state-of-the-art accuracies of 99.8% with ResNet-50, 98.0% with YOLOv8, and 96.6% with RT-DETR, despite the latter's higher computational complexity. For lane detection, we propose a CNN-based segmentation method enhanced by polynomial curve fitting, which delivers high accuracy under favorable conditions. Furthermore, we introduce a lightweight multimodal LLM-based framework that directly undergoes instruction tuning using small yet diverse datasets, eliminating the need for initial pretraining. This framework effectively handles various lane types, complex intersections, and merging zones, significantly enhancing lane detection reliability by reasoning under adverse conditions. Despite constraints in available training resources, our multimodal approach demonstrates advanced reasoning capabilities, achieving a Frame Overall Accuracy (FRM) of 53.87%, a Question Overall Accuracy (QNS) of 82.83%, lane detection accuracies of 99.6% in clear conditions and 93.0% at night, and robust performance in reasoning about lane invisibility due to rain (88.4%) or road degradation (95.6%). The proposed comprehensive framework markedly enhances AV perception reliability, thus contributing significantly to safer autonomous driving across diverse and challenging road scenarios.
no_new_dataset
0.951188
2503.06317
Badhan Chandra Das
Badhan Chandra Das, M. Hadi Amini, Yanzhao Wu
Accurate and Efficient Two-Stage Gun Detection in Video
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Object detection in videos plays a crucial role in advancing applications such as public safety and anomaly detection. Existing methods have explored different techniques, including CNNs, deep learning, and Transformers, for object detection and video classification. However, detecting tiny objects, e.g., guns, in videos remains challenging due to their small scale and varying appearances in complex scenes. Moreover, existing video analysis models for classification or detection often perform poorly in real-world gun detection scenarios due to limited labeled video datasets for training. Thus, developing efficient methods for effectively capturing tiny object features and designing models capable of accurate gun detection in real-world videos is imperative. To address these challenges, we make three original contributions in this paper. First, we conduct an empirical study of several existing video classification and object detection methods to identify guns in videos. Our extensive analysis shows that these methods may not accurately detect guns in videos. Second, we propose a novel two-stage gun detection method. In stage 1, we train an image-augmented model to effectively classify ``Gun'' videos. To make the detection more precise and efficient, stage 2 employs an object detection model to locate the exact region of the gun within video frames for videos classified as ``Gun'' by stage 1. Third, our experimental results demonstrate that the proposed domain-specific method achieves significant performance improvements and enhances efficiency compared with existing techniques. We also discuss challenges and future research directions in gun detection tasks in computer vision.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 19:26:23 GMT" } ]
2025-03-11T00:00:00
[ [ "Das", "Badhan Chandra", "" ], [ "Amini", "M. Hadi", "" ], [ "Wu", "Yanzhao", "" ] ]
TITLE: Accurate and Efficient Two-Stage Gun Detection in Video ABSTRACT: Object detection in videos plays a crucial role in advancing applications such as public safety and anomaly detection. Existing methods have explored different techniques, including CNNs, deep learning, and Transformers, for object detection and video classification. However, detecting tiny objects, e.g., guns, in videos remains challenging due to their small scale and varying appearances in complex scenes. Moreover, existing video analysis models for classification or detection often perform poorly in real-world gun detection scenarios due to limited labeled video datasets for training. Thus, developing efficient methods for effectively capturing tiny object features and designing models capable of accurate gun detection in real-world videos is imperative. To address these challenges, we make three original contributions in this paper. First, we conduct an empirical study of several existing video classification and object detection methods to identify guns in videos. Our extensive analysis shows that these methods may not accurately detect guns in videos. Second, we propose a novel two-stage gun detection method. In stage 1, we train an image-augmented model to effectively classify ``Gun'' videos. To make the detection more precise and efficient, stage 2 employs an object detection model to locate the exact region of the gun within video frames for videos classified as ``Gun'' by stage 1. Third, our experimental results demonstrate that the proposed domain-specific method achieves significant performance improvements and enhances efficiency compared with existing techniques. We also discuss challenges and future research directions in gun detection tasks in computer vision.
no_new_dataset
0.949153
2503.06321
Rumman Ahmed Prodhan
Md Ohiduzzaman Ovi, Maliha Sanjana, Fahad Fahad, Mahjabin Runa, Zarin Tasnim Rothy, Tanmoy Sarkar Pias, A.M. Tayeful Islam, and Rumman Ahmed Prodhan
Enhanced Pediatric Dental Segmentation Using a Custom SegUNet with VGG19 Backbone on Panoramic Radiographs
null
null
null
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pediatric dental segmentation is critical in dental diagnostics, presenting unique challenges due to variations in dental structures and the limited number of pediatric X-ray images. This study proposes a custom SegUNet model with a VGG19 backbone, designed explicitly for pediatric dental segmentation and applied to the Children's Dental Panoramic Radiographs dataset. The SegUNet architecture with a VGG19 backbone has been employed on this dataset for the first time, achieving state-of-the-art performance. The model reached an accuracy of 97.53%, a dice coefficient of 92.49%, and an intersection over union (IOU) of 91.46%, setting a new benchmark for this dataset. These results demonstrate the effectiveness of the VGG19 backbone in enhancing feature extraction and improving segmentation precision. Comprehensive evaluations across metrics, including precision, recall, and specificity, indicate the robustness of this approach. The model's ability to generalize across diverse dental structures makes it a valuable tool for clinical applications in pediatric dental care. It offers a reliable and efficient solution for automated dental diagnostics.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 19:32:25 GMT" } ]
2025-03-11T00:00:00
[ [ "Ovi", "Md Ohiduzzaman", "" ], [ "Sanjana", "Maliha", "" ], [ "Fahad", "Fahad", "" ], [ "Runa", "Mahjabin", "" ], [ "Rothy", "Zarin Tasnim", "" ], [ "Pias", "Tanmoy Sarkar", "" ], [ "Islam", "A. M. Tayeful", "" ], [ "Prodhan", "Rumman Ahmed", "" ] ]
TITLE: Enhanced Pediatric Dental Segmentation Using a Custom SegUNet with VGG19 Backbone on Panoramic Radiographs ABSTRACT: Pediatric dental segmentation is critical in dental diagnostics, presenting unique challenges due to variations in dental structures and the limited number of pediatric X-ray images. This study proposes a custom SegUNet model with a VGG19 backbone, designed explicitly for pediatric dental segmentation and applied to the Children's Dental Panoramic Radiographs dataset. The SegUNet architecture with a VGG19 backbone has been employed on this dataset for the first time, achieving state-of-the-art performance. The model reached an accuracy of 97.53%, a dice coefficient of 92.49%, and an intersection over union (IOU) of 91.46%, setting a new benchmark for this dataset. These results demonstrate the effectiveness of the VGG19 backbone in enhancing feature extraction and improving segmentation precision. Comprehensive evaluations across metrics, including precision, recall, and specificity, indicate the robustness of this approach. The model's ability to generalize across diverse dental structures makes it a valuable tool for clinical applications in pediatric dental care. It offers a reliable and efficient solution for automated dental diagnostics.
new_dataset
0.964422
2503.06352
Xiao Yue
Xiao Yue, Guangzhi Qu, Lige Gan
GIN-Graph: A Generative Interpretation Network for Model-Level Explanation of Graph Neural Networks
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One significant challenge in applying Graph Neural Networks (GNNs) to real-life scenarios is that they are typically treated as black boxes, which creates a need for interpretability. Model-level interpretations explain which patterns maximize the probability of predicting a certain class. However, existing model-level interpretation methods suffer from several limitations, such as generating invalid explanation graphs and requiring extensive manual fine-tuning of hyperparameters. In this paper, we propose a new Generative Interpretation Network for Model-Level Explanation of Graph Neural Networks (GIN-Graph) to generate reliable model-level explanation graphs. Implicit and likelihood-free generative adversarial networks are exploited to construct explanation graphs similar to the original graphs, while maximizing the prediction probability for a certain class by adopting a novel objective function. Experimental results indicate that GIN-Graph can be easily applied to GNN models trained on a variety of graph datasets to create meaningful explanation graphs without requiring extensive fine-tuning of hyperparameters.
[ { "version": "v1", "created": "Sat, 8 Mar 2025 22:39:36 GMT" } ]
2025-03-11T00:00:00
[ [ "Yue", "Xiao", "" ], [ "Qu", "Guangzhi", "" ], [ "Gan", "Lige", "" ] ]
TITLE: GIN-Graph: A Generative Interpretation Network for Model-Level Explanation of Graph Neural Networks ABSTRACT: One significant challenge in applying Graph Neural Networks (GNNs) to real-life scenarios is that they are typically treated as black boxes, which creates a need for interpretability. Model-level interpretations explain which patterns maximize the probability of predicting a certain class. However, existing model-level interpretation methods suffer from several limitations, such as generating invalid explanation graphs and requiring extensive manual fine-tuning of hyperparameters. In this paper, we propose a new Generative Interpretation Network for Model-Level Explanation of Graph Neural Networks (GIN-Graph) to generate reliable model-level explanation graphs. Implicit and likelihood-free generative adversarial networks are exploited to construct explanation graphs similar to the original graphs, while maximizing the prediction probability for a certain class by adopting a novel objective function. Experimental results indicate that GIN-Graph can be easily applied to GNN models trained on a variety of graph datasets to create meaningful explanation graphs without requiring extensive fine-tuning of hyperparameters.
no_new_dataset
0.951188
2503.06362
Umberto Cappellazzo
Umberto Cappellazzo, Minsu Kim, Stavros Petridis
Adaptive Audio-Visual Speech Recognition via Matryoshka-Based Multimodal LLMs
null
null
null
null
cs.CV cs.MM cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Audio-Visual Speech Recognition (AVSR) leverages both audio and visual modalities to enhance speech recognition robustness, particularly in noisy environments. Recent advancements in Large Language Models (LLMs) have demonstrated their effectiveness in speech recognition, including AVSR. However, due to the significant length of speech representations, direct integration with LLMs imposes substantial computational costs. Prior approaches address this by compressing speech representations before feeding them into LLMs. However, higher compression ratios often lead to performance degradation, necessitating a trade-off between computational efficiency and recognition accuracy. To address this challenge, we propose Llama-MTSK, the first Matryoshka-based Multimodal LLM for AVSR, which enables flexible adaptation of the audio-visual token allocation based on specific computational constraints while preserving high performance. Our approach, inspired by Matryoshka Representation Learning, encodes audio-visual representations at multiple granularities within a single model, eliminating the need to train separate models for different compression levels. Moreover, to efficiently fine-tune the LLM, we introduce three LoRA-based Matryoshka strategies using global and scale-specific LoRA modules. Extensive evaluations on the two largest AVSR datasets demonstrate that Llama-MTSK achieves state-of-the-art results, matching or surpassing models trained independently at fixed compression levels.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 00:02:10 GMT" } ]
2025-03-11T00:00:00
[ [ "Cappellazzo", "Umberto", "" ], [ "Kim", "Minsu", "" ], [ "Petridis", "Stavros", "" ] ]
TITLE: Adaptive Audio-Visual Speech Recognition via Matryoshka-Based Multimodal LLMs ABSTRACT: Audio-Visual Speech Recognition (AVSR) leverages both audio and visual modalities to enhance speech recognition robustness, particularly in noisy environments. Recent advancements in Large Language Models (LLMs) have demonstrated their effectiveness in speech recognition, including AVSR. However, due to the significant length of speech representations, direct integration with LLMs imposes substantial computational costs. Prior approaches address this by compressing speech representations before feeding them into LLMs. However, higher compression ratios often lead to performance degradation, necessitating a trade-off between computational efficiency and recognition accuracy. To address this challenge, we propose Llama-MTSK, the first Matryoshka-based Multimodal LLM for AVSR, which enables flexible adaptation of the audio-visual token allocation based on specific computational constraints while preserving high performance. Our approach, inspired by Matryoshka Representation Learning, encodes audio-visual representations at multiple granularities within a single model, eliminating the need to train separate models for different compression levels. Moreover, to efficiently fine-tune the LLM, we introduce three LoRA-based Matryoshka strategies using global and scale-specific LoRA modules. Extensive evaluations on the two largest AVSR datasets demonstrate that Llama-MTSK achieves state-of-the-art results, matching or surpassing models trained independently at fixed compression levels.
no_new_dataset
0.943348
2503.06366
Henry Kvinge
Herman Chau, Helen Jenne, Davis Brown, Jesse He, Mark Raugas, Sara Billey, Henry Kvinge
Machine Learning meets Algebraic Combinatorics: A Suite of Datasets Capturing Research-level Conjecturing Ability in Pure Mathematics
26 pages, comments welcome
null
null
null
cs.LG cs.AI math.CO math.RT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With recent dramatic increases in AI system capabilities, there has been growing interest in utilizing machine learning for reasoning-heavy, quantitative tasks, particularly mathematics. While there are many resources capturing mathematics at the high-school, undergraduate, and graduate level, there are far fewer resources available that align with the level of difficulty and open-endedness encountered by professional mathematicians working on open problems. To address this, we introduce a new collection of datasets, the Algebraic Combinatorics Dataset Repository (ACD Repo), representing either foundational results or open problems in algebraic combinatorics, a subfield of mathematics that studies discrete structures arising from abstract algebra. Our dataset collection is further differentiated by its focus on the conjecturing process. Each dataset includes an open-ended research-level question and a large collection of examples (up to 10M in some cases) from which conjectures should be generated. We describe all nine datasets, the different ways machine learning models can be applied to them (e.g., training with narrow models followed by interpretability analysis or program synthesis with LLMs), and discuss some of the challenges involved in designing datasets like these.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 00:11:40 GMT" } ]
2025-03-11T00:00:00
[ [ "Chau", "Herman", "" ], [ "Jenne", "Helen", "" ], [ "Brown", "Davis", "" ], [ "He", "Jesse", "" ], [ "Raugas", "Mark", "" ], [ "Billey", "Sara", "" ], [ "Kvinge", "Henry", "" ] ]
TITLE: Machine Learning meets Algebraic Combinatorics: A Suite of Datasets Capturing Research-level Conjecturing Ability in Pure Mathematics ABSTRACT: With recent dramatic increases in AI system capabilities, there has been growing interest in utilizing machine learning for reasoning-heavy, quantitative tasks, particularly mathematics. While there are many resources capturing mathematics at the high-school, undergraduate, and graduate level, there are far fewer resources available that align with the level of difficulty and open-endedness encountered by professional mathematicians working on open problems. To address this, we introduce a new collection of datasets, the Algebraic Combinatorics Dataset Repository (ACD Repo), representing either foundational results or open problems in algebraic combinatorics, a subfield of mathematics that studies discrete structures arising from abstract algebra. A further differentiating feature of our dataset collection is its focus on the conjecturing process. Each dataset includes an open-ended research-level question and a large collection of examples (up to 10M in some cases) from which conjectures should be generated. We describe all nine datasets, the different ways machine learning models can be applied to them (e.g., training with narrow models followed by interpretability analysis or program synthesis with LLMs), and discuss some of the challenges involved in designing datasets like these.
new_dataset
0.969179
2503.06368
Leonardo Scabini
Leonardo Scabini, Kallil M. Zielinski, Emir Konuk, Ricardo T. Fares, Lucas C. Ribas, Kevin Smith, and Odemir M. Bruno
VORTEX: Challenging CNNs at Texture Recognition by using Vision Transformers with Orderless and Randomized Token Encodings
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Texture recognition has recently been dominated by ImageNet-pre-trained deep Convolutional Neural Networks (CNNs), with specialized modifications and feature engineering required to achieve state-of-the-art (SOTA) performance. However, although Vision Transformers (ViTs) were introduced a few years ago, little is known about their texture recognition ability. Therefore, in this work, we introduce VORTEX (ViTs with Orderless and Randomized Token Encodings for Texture Recognition), a novel method that enables the effective use of ViTs for texture analysis. VORTEX extracts multi-depth token embeddings from pre-trained ViT backbones and employs a lightweight module to aggregate hierarchical features and perform orderless encoding, obtaining a better image representation for texture recognition tasks. This approach allows seamless integration with any ViT following the common transformer architecture. Moreover, no fine-tuning of the backbones is performed, since they are used only as frozen feature extractors, and the features are fed to a linear SVM. We evaluate VORTEX on nine diverse texture datasets, demonstrating its ability to achieve or surpass SOTA performance in a variety of texture analysis scenarios. By bridging the gap between texture recognition with CNNs and transformer-based architectures, VORTEX paves the way for adopting emerging transformer foundation models. Furthermore, VORTEX demonstrates robust computational efficiency when coupled with ViT backbones compared to CNNs with similar costs. The method implementation and experimental scripts are publicly available in our online repository.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 00:36:02 GMT" } ]
2025-03-11T00:00:00
[ [ "Scabini", "Leonardo", "" ], [ "Zielinski", "Kallil M.", "" ], [ "Konuk", "Emir", "" ], [ "Fares", "Ricardo T.", "" ], [ "Ribas", "Lucas C.", "" ], [ "Smith", "Kevin", "" ], [ "Bruno", "Odemir M.", "" ] ]
TITLE: VORTEX: Challenging CNNs at Texture Recognition by using Vision Transformers with Orderless and Randomized Token Encodings ABSTRACT: Texture recognition has recently been dominated by ImageNet-pre-trained deep Convolutional Neural Networks (CNNs), with specialized modifications and feature engineering required to achieve state-of-the-art (SOTA) performance. However, although Vision Transformers (ViTs) were introduced a few years ago, little is known about their texture recognition ability. Therefore, in this work, we introduce VORTEX (ViTs with Orderless and Randomized Token Encodings for Texture Recognition), a novel method that enables the effective use of ViTs for texture analysis. VORTEX extracts multi-depth token embeddings from pre-trained ViT backbones and employs a lightweight module to aggregate hierarchical features and perform orderless encoding, obtaining a better image representation for texture recognition tasks. This approach allows seamless integration with any ViT following the common transformer architecture. Moreover, no fine-tuning of the backbones is performed, since they are used only as frozen feature extractors, and the features are fed to a linear SVM. We evaluate VORTEX on nine diverse texture datasets, demonstrating its ability to achieve or surpass SOTA performance in a variety of texture analysis scenarios. By bridging the gap between texture recognition with CNNs and transformer-based architectures, VORTEX paves the way for adopting emerging transformer foundation models. Furthermore, VORTEX demonstrates robust computational efficiency when coupled with ViT backbones compared to CNNs with similar costs. The method implementation and experimental scripts are publicly available in our online repository.
no_new_dataset
0.949529
2503.06370
Saba Sanami
Hesam Mosalli, Saba Sanami, Yu Yang, Hen-Geul Yeh, Amir G. Aghdam
Dynamic Load Balancing for EV Charging Stations Using Reinforcement Learning and Demand Prediction
19th Annual IEEE International Systems Conference (SysCon 2025)
null
null
null
eess.SY cs.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents a method for load balancing and dynamic pricing in electric vehicle (EV) charging networks, utilizing reinforcement learning (RL) to enhance network performance. The proposed framework integrates a pre-trained graph neural network to predict demand elasticity and inform pricing decisions. The spatio-temporal EV charging demand prediction (EVCDP) dataset from Shenzhen is utilized to capture the geographic and temporal characteristics of the charging stations. The RL model dynamically adjusts prices at individual stations based on occupancy, maximum station capacity, and demand forecasts, ensuring an equitable network load distribution while preventing station overloads. By leveraging spatially-aware demand predictions and a carefully designed reward function, the framework achieves efficient load balancing and adaptive pricing strategies that respond to localized demand and global network dynamics, ensuring improved network stability and user satisfaction. The efficacy of the approach is validated through simulations on the dataset, showing significant improvements in load balancing and reduced overload as the RL agent iteratively interacts with the environment and learns to dynamically adjust pricing strategies based on real-time demand patterns and station constraints. The findings highlight the potential of adaptive pricing and load-balancing strategies to address the complexities of EV infrastructure, paving the way for scalable and user-centric solutions.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 00:40:35 GMT" } ]
2025-03-11T00:00:00
[ [ "Mosalli", "Hesam", "" ], [ "Sanami", "Saba", "" ], [ "Yang", "Yu", "" ], [ "Yeh", "Hen-Geul", "" ], [ "Aghdam", "Amir G.", "" ] ]
TITLE: Dynamic Load Balancing for EV Charging Stations Using Reinforcement Learning and Demand Prediction ABSTRACT: This paper presents a method for load balancing and dynamic pricing in electric vehicle (EV) charging networks, utilizing reinforcement learning (RL) to enhance network performance. The proposed framework integrates a pre-trained graph neural network to predict demand elasticity and inform pricing decisions. The spatio-temporal EV charging demand prediction (EVCDP) dataset from Shenzhen is utilized to capture the geographic and temporal characteristics of the charging stations. The RL model dynamically adjusts prices at individual stations based on occupancy, maximum station capacity, and demand forecasts, ensuring an equitable network load distribution while preventing station overloads. By leveraging spatially-aware demand predictions and a carefully designed reward function, the framework achieves efficient load balancing and adaptive pricing strategies that respond to localized demand and global network dynamics, ensuring improved network stability and user satisfaction. The efficacy of the approach is validated through simulations on the dataset, showing significant improvements in load balancing and reduced overload as the RL agent iteratively interacts with the environment and learns to dynamically adjust pricing strategies based on real-time demand patterns and station constraints. The findings highlight the potential of adaptive pricing and load-balancing strategies to address the complexities of EV infrastructure, paving the way for scalable and user-centric solutions.
no_new_dataset
0.951729
2503.06382
Yuanhao Cai
Guofeng Zhang, Ruyi Zha, Hao He, Yixun Liang, Alan Yuille, Hongdong Li, Yuanhao Cai
X-LRM: X-ray Large Reconstruction Model for Extremely Sparse-View Computed Tomography Recovery in One Second
A large reconstruction model and the largest dataset (16K samples) for sparse-view CT recovery
null
null
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sparse-view 3D CT reconstruction aims to recover volumetric structures from a limited number of 2D X-ray projections. Existing feedforward methods are constrained by the limited capacity of CNN-based architectures and the scarcity of large-scale training datasets. In this paper, we propose an X-ray Large Reconstruction Model (X-LRM) for extremely sparse-view (<10 views) CT reconstruction. X-LRM consists of two key components: X-former and X-triplane. Our X-former can handle an arbitrary number of input views using an MLP-based image tokenizer and a Transformer-based encoder. The output tokens are then upsampled into our X-triplane representation, which models the 3D radiodensity as an implicit neural field. To support the training of X-LRM, we introduce Torso-16K, a large-scale dataset comprising over 16K volume-projection pairs of various torso organs. Extensive experiments demonstrate that X-LRM outperforms the state-of-the-art method by 1.5 dB and achieves 27x faster speed and better flexibility. Furthermore, the downstream evaluation of lung segmentation tasks also suggests the practical value of our approach. Our code, pre-trained models, and dataset will be released at https://github.com/caiyuanhao1998/X-LRM
[ { "version": "v1", "created": "Sun, 9 Mar 2025 01:39:59 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhang", "Guofeng", "" ], [ "Zha", "Ruyi", "" ], [ "He", "Hao", "" ], [ "Liang", "Yixun", "" ], [ "Yuille", "Alan", "" ], [ "Li", "Hongdong", "" ], [ "Cai", "Yuanhao", "" ] ]
TITLE: X-LRM: X-ray Large Reconstruction Model for Extremely Sparse-View Computed Tomography Recovery in One Second ABSTRACT: Sparse-view 3D CT reconstruction aims to recover volumetric structures from a limited number of 2D X-ray projections. Existing feedforward methods are constrained by the limited capacity of CNN-based architectures and the scarcity of large-scale training datasets. In this paper, we propose an X-ray Large Reconstruction Model (X-LRM) for extremely sparse-view (<10 views) CT reconstruction. X-LRM consists of two key components: X-former and X-triplane. Our X-former can handle an arbitrary number of input views using an MLP-based image tokenizer and a Transformer-based encoder. The output tokens are then upsampled into our X-triplane representation, which models the 3D radiodensity as an implicit neural field. To support the training of X-LRM, we introduce Torso-16K, a large-scale dataset comprising over 16K volume-projection pairs of various torso organs. Extensive experiments demonstrate that X-LRM outperforms the state-of-the-art method by 1.5 dB and achieves 27x faster speed and better flexibility. Furthermore, the downstream evaluation of lung segmentation tasks also suggests the practical value of our approach. Our code, pre-trained models, and dataset will be released at https://github.com/caiyuanhao1998/X-LRM
new_dataset
0.962036
2503.06387
Joseph Wilson
Anurag Swarnim Yadav and Joseph N. Wilson
R+R: Security Vulnerability Dataset Quality Is Critical
15 pages, 1 figure, 35 tables. To be published in Proceedings of the 2024 Annual Computer Security Applications Conference (ACSAC)
null
null
null
cs.SE cs.CR
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) are of great interest in vulnerability detection and repair. The effectiveness of these models hinges on the quality of the datasets used for both training and evaluation. Our investigation reveals that a number of studies featured in prominent software engineering conferences have employed datasets that are plagued by high duplication rates, questionable label accuracy, and incomplete samples. Using these datasets for experimentation will yield incorrect results that are significantly different from actual expected behavior. For example, the state-of-the-art VulRepair Model, which is reported to have 44% accuracy, on average yielded 9% accuracy when test-set duplicates were removed from its training set and 13% accuracy when training-set duplicates were removed from its test set. In an effort to tackle these data quality concerns, we have retrained models from several papers without duplicates and conducted an accuracy assessment of labels for the top ten most hazardous Common Weakness Enumerations (CWEs). Our findings indicate that 56% of the samples had incorrect labels and 44% comprised incomplete samples--only 31% were both accurate and complete. Finally, we employ transfer learning using a large deduplicated bugfix corpus to show that these models can exhibit better performance if given larger amounts of high-quality pre-training data, leading us to conclude that while previous studies have over-estimated performance due to poor dataset quality, this does not demonstrate that better performance is not possible.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 01:49:30 GMT" } ]
2025-03-11T00:00:00
[ [ "Yadav", "Anurag Swarnim", "" ], [ "Wilson", "Joseph N.", "" ] ]
TITLE: R+R: Security Vulnerability Dataset Quality Is Critical ABSTRACT: Large Language Models (LLMs) are of great interest in vulnerability detection and repair. The effectiveness of these models hinges on the quality of the datasets used for both training and evaluation. Our investigation reveals that a number of studies featured in prominent software engineering conferences have employed datasets that are plagued by high duplication rates, questionable label accuracy, and incomplete samples. Using these datasets for experimentation will yield incorrect results that are significantly different from actual expected behavior. For example, the state-of-the-art VulRepair Model, which is reported to have 44% accuracy, on average yielded 9% accuracy when test-set duplicates were removed from its training set and 13% accuracy when training-set duplicates were removed from its test set. In an effort to tackle these data quality concerns, we have retrained models from several papers without duplicates and conducted an accuracy assessment of labels for the top ten most hazardous Common Weakness Enumerations (CWEs). Our findings indicate that 56% of the samples had incorrect labels and 44% comprised incomplete samples--only 31% were both accurate and complete. Finally, we employ transfer learning using a large deduplicated bugfix corpus to show that these models can exhibit better performance if given larger amounts of high-quality pre-training data, leading us to conclude that while previous studies have over-estimated performance due to poor dataset quality, this does not demonstrate that better performance is not possible.
no_new_dataset
0.946349
2503.06392
Tao Feng
Tao Feng, Yunke Zhang, Huandong Wang, Yong Li
EPR-GAIL: An EPR-Enhanced Hierarchical Imitation Learning Framework to Simulate Complex User Consumption Behaviors
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
User consumption behavior data, which records individuals' online spending history at various types of stores, has been widely used in various applications, such as store recommendation, site selection, and sale forecasting. However, its value is limited by deficiencies in data comprehensiveness and changes in application scenarios. Thus, generating high-quality sequential consumption data by simulating complex user consumption behaviors is of great importance to real-world applications. Both branches of existing sequence generation methods are limited in quality. Model-based methods with simplified assumptions fail to model the complex decision process of user consumption, while data-driven methods that emulate real-world data are prone to noise, unobserved behaviors, and dynamic decision spaces. In this work, we propose to enhance the fidelity and trustworthiness of the data-driven Generative Adversarial Imitation Learning (GAIL) method by blending it with the Exploration and Preferential Return (EPR) model. The core idea of our EPR-GAIL framework is to model user consumption behaviors as a complex EPR decision process, which consists of purchase, exploration, and preference decisions. Specifically, we design the hierarchical policy function in the generator as a realization of the EPR decision process and employ the probability distributions of the EPR model to guide the reward function in the discriminator. Extensive experiments on two real-world datasets of user consumption behaviors on an online platform demonstrate that the EPR-GAIL framework outperforms the best state-of-the-art baseline by over 19% in terms of data fidelity. Furthermore, the generated consumption behavior data can improve the performance of sale prediction and location recommendation by up to 35.29% and 11.19%, respectively, validating its advantage for practical applications.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 01:56:42 GMT" } ]
2025-03-11T00:00:00
[ [ "Feng", "Tao", "" ], [ "Zhang", "Yunke", "" ], [ "Wang", "Huandong", "" ], [ "Li", "Yong", "" ] ]
TITLE: EPR-GAIL: An EPR-Enhanced Hierarchical Imitation Learning Framework to Simulate Complex User Consumption Behaviors ABSTRACT: User consumption behavior data, which records individuals' online spending history at various types of stores, has been widely used in various applications, such as store recommendation, site selection, and sale forecasting. However, its value is limited by deficiencies in data comprehensiveness and changes in application scenarios. Thus, generating high-quality sequential consumption data by simulating complex user consumption behaviors is of great importance to real-world applications. Both branches of existing sequence generation methods are limited in quality. Model-based methods with simplified assumptions fail to model the complex decision process of user consumption, while data-driven methods that emulate real-world data are prone to noise, unobserved behaviors, and dynamic decision spaces. In this work, we propose to enhance the fidelity and trustworthiness of the data-driven Generative Adversarial Imitation Learning (GAIL) method by blending it with the Exploration and Preferential Return (EPR) model. The core idea of our EPR-GAIL framework is to model user consumption behaviors as a complex EPR decision process, which consists of purchase, exploration, and preference decisions. Specifically, we design the hierarchical policy function in the generator as a realization of the EPR decision process and employ the probability distributions of the EPR model to guide the reward function in the discriminator. Extensive experiments on two real-world datasets of user consumption behaviors on an online platform demonstrate that the EPR-GAIL framework outperforms the best state-of-the-art baseline by over 19% in terms of data fidelity. Furthermore, the generated consumption behavior data can improve the performance of sale prediction and location recommendation by up to 35.29% and 11.19%, respectively, validating its advantage for practical applications.
no_new_dataset
0.946597
2503.06395
Tao Feng
Tao Feng, Yunke Zhang, Xiaochen Fan, Huandong Wang, Yong Li
Causal Discovery and Inference towards Urban Elements and Associated Factors
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
To uncover the city's fundamental functioning mechanisms, it is important to acquire a deep understanding of complicated relationships among citizens, location, and mobility behaviors. Previous research studies have applied direct correlation analysis to investigate such relationships. Nevertheless, due to the ubiquitous confounding effects, empirical correlation analysis may not accurately reflect underlying causal relationships among basic urban elements. In this paper, we propose a novel urban causal computing framework to comprehensively explore causalities and confounding effects among a variety of factors across different types of urban elements. In particular, we design a reinforcement learning algorithm to discover the potential causal graph, which depicts the causal relations between urban factors. The causal graph further serves as the guidance for estimating causal effects between pair-wise urban factors by propensity score matching. After removing the confounding effects from correlations, we leverage significance levels of causal effects in downstream urban mobility prediction tasks. Experimental studies on open-source urban datasets show that the discovered causal graph demonstrates a hierarchical structure, where citizens affect locations, and they both cause changes in urban mobility behaviors. Experimental results in urban mobility prediction tasks further show that the proposed method can effectively reduce confounding effects and enhance performance of urban computing tasks.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 02:15:04 GMT" } ]
2025-03-11T00:00:00
[ [ "Feng", "Tao", "" ], [ "Zhang", "Yunke", "" ], [ "Fan", "Xiaochen", "" ], [ "Wang", "Huandong", "" ], [ "Li", "Yong", "" ] ]
TITLE: Causal Discovery and Inference towards Urban Elements and Associated Factors ABSTRACT: To uncover the city's fundamental functioning mechanisms, it is important to acquire a deep understanding of complicated relationships among citizens, location, and mobility behaviors. Previous research studies have applied direct correlation analysis to investigate such relationships. Nevertheless, due to the ubiquitous confounding effects, empirical correlation analysis may not accurately reflect underlying causal relationships among basic urban elements. In this paper, we propose a novel urban causal computing framework to comprehensively explore causalities and confounding effects among a variety of factors across different types of urban elements. In particular, we design a reinforcement learning algorithm to discover the potential causal graph, which depicts the causal relations between urban factors. The causal graph further serves as the guidance for estimating causal effects between pair-wise urban factors by propensity score matching. After removing the confounding effects from correlations, we leverage significance levels of causal effects in downstream urban mobility prediction tasks. Experimental studies on open-source urban datasets show that the discovered causal graph demonstrates a hierarchical structure, where citizens affect locations, and they both cause changes in urban mobility behaviors. Experimental results in urban mobility prediction tasks further show that the proposed method can effectively reduce confounding effects and enhance performance of urban computing tasks.
no_new_dataset
0.946941
2503.06396
Qiqi Bao
Enqiang Zhu and Qiqi Bao and Yu Zhang and Chanjuan Liu
Optimizing Minimum Vertex Cover Solving via a GCN-assisted Heuristic Algorithm
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of finding a minimum vertex cover (MVC) in a graph is a well-known NP-hard problem with significant practical applications in optimization and scheduling. Its complexity, combined with the increasing scale of problems, underscores the need for efficient and effective algorithms. However, existing heuristic algorithms for MVC often rely on simplistic initialization strategies and overlook the impact of edge attributes and neighborhood information on vertex selection. In this paper, we introduce GCNIVC, a novel heuristic search algorithm designed to address the limitations of existing methods for solving MVC problems in large-scale graphs. Our approach features two main innovations. First, it utilizes a Graph Convolutional Network (GCN) to capture the global structure of graphs, which enables the generation of high-quality initial solutions that enhance the efficiency of the subsequent search process. Second, GCNIVC introduces a new heuristic that employs three containers and the concept of double-covered edges (dc-edges), improving search efficiency and providing greater flexibility for adding and removing operations based on edge attributes. Through extensive experiments on benchmark datasets, we demonstrate that GCNIVC outperforms state-of-the-art MVC algorithms in terms of both accuracy and efficiency. Our results highlight the effectiveness of GCNIVC's GCN-assisted initialization and its edge-informed search strategy. This study not only advances the understanding of MVC problem-solving but also contributes a new tool for addressing large-scale graph optimization challenges.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 02:31:03 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhu", "Enqiang", "" ], [ "Bao", "Qiqi", "" ], [ "Zhang", "Yu", "" ], [ "Liu", "Chanjuan", "" ] ]
TITLE: Optimizing Minimum Vertex Cover Solving via a GCN-assisted Heuristic Algorithm ABSTRACT: The problem of finding a minimum vertex cover (MVC) in a graph is a well-known NP-hard problem with significant practical applications in optimization and scheduling. Its complexity, combined with the increasing scale of problems, underscores the need for efficient and effective algorithms. However, existing heuristic algorithms for MVC often rely on simplistic initialization strategies and overlook the impact of edge attributes and neighborhood information on vertex selection. In this paper, we introduce GCNIVC, a novel heuristic search algorithm designed to address the limitations of existing methods for solving MVC problems in large-scale graphs. Our approach features two main innovations. First, it utilizes a Graph Convolutional Network (GCN) to capture the global structure of graphs, which enables the generation of high-quality initial solutions that enhance the efficiency of the subsequent search process. Second, GCNIVC introduces a new heuristic that employs three containers and the concept of double-covered edges (dc-edges), improving search efficiency and providing greater flexibility for adding and removing operations based on edge attributes. Through extensive experiments on benchmark datasets, we demonstrate that GCNIVC outperforms state-of-the-art MVC algorithms in terms of both accuracy and efficiency. Our results highlight the effectiveness of GCNIVC's GCN-assisted initialization and its edge-informed search strategy. This study not only advances the understanding of MVC problem-solving but also contributes a new tool for addressing large-scale graph optimization challenges.
no_new_dataset
0.944995
2503.06397
Yanyu Zhu
Yanyu Zhu, Licheng Bai, Jintao Xu, Jiwei Tang, Hai-tao Zheng
Removing Averaging: Personalized Lip-Sync Driven Characters Based on Identity Adapter
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent advances in diffusion-based lip-syncing generative models have demonstrated their ability to produce highly synchronized talking face videos for visual dubbing. Although these models excel at lip synchronization, they often struggle to maintain fine-grained control over facial details in generated images. In this work, we identify the "lip averaging" phenomenon, in which the model fails to preserve subtle facial details when dubbing unseen in-the-wild videos. This issue arises because the commonly used UNet backbone primarily integrates audio features into visual representations in the latent space via cross-attention mechanisms and multi-scale fusion, but it struggles to retain fine-grained lip details in the generated faces. To address this issue, we propose UnAvgLip, which extracts identity embeddings from reference videos to generate highly faithful facial sequences while maintaining accurate lip synchronization. Specifically, our method comprises two primary components: (1) an Identity Perceiver module that encodes facial embeddings to align with conditioned audio features; and (2) an ID-CrossAttn module that injects facial embeddings into the generation process, enhancing the model's capability for identity retention. Extensive experiments demonstrate that, at a modest training and inference cost, UnAvgLip effectively mitigates the "averaging" phenomenon in lip inpainting, significantly preserving unique facial characteristics while maintaining precise lip synchronization. Compared with the original approach, our method demonstrates significant improvements of 5% on the identity consistency metric and 2% on the SSIM metric across two benchmark datasets (HDTF and LRW).
[ { "version": "v1", "created": "Sun, 9 Mar 2025 02:36:31 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhu", "Yanyu", "" ], [ "Bai", "Licheng", "" ], [ "Xu", "Jintao", "" ], [ "Tang", "Jiwei", "" ], [ "Zheng", "Hai-tao", "" ] ]
TITLE: Removing Averaging: Personalized Lip-Sync Driven Characters Based on Identity Adapter ABSTRACT: Recent advances in diffusion-based lip-syncing generative models have demonstrated their ability to produce highly synchronized talking face videos for visual dubbing. Although these models excel at lip synchronization, they often struggle to maintain fine-grained control over facial details in generated images. In this work, we identify the "lip averaging" phenomenon, in which the model fails to preserve subtle facial details when dubbing unseen in-the-wild videos. This issue arises because the commonly used UNet backbone primarily integrates audio features into visual representations in the latent space via cross-attention mechanisms and multi-scale fusion, but it struggles to retain fine-grained lip details in the generated faces. To address this issue, we propose UnAvgLip, which extracts identity embeddings from reference videos to generate highly faithful facial sequences while maintaining accurate lip synchronization. Specifically, our method comprises two primary components: (1) an Identity Perceiver module that encodes facial embeddings to align with conditioned audio features; and (2) an ID-CrossAttn module that injects facial embeddings into the generation process, enhancing the model's capability for identity retention. Extensive experiments demonstrate that, at a modest training and inference cost, UnAvgLip effectively mitigates the "averaging" phenomenon in lip inpainting, significantly preserving unique facial characteristics while maintaining precise lip synchronization. Compared with the original approach, our method demonstrates significant improvements of 5% on the identity consistency metric and 2% on the SSIM metric across two benchmark datasets (HDTF and LRW).
no_new_dataset
0.946745
2503.06398
Tao Feng
Tao Feng, Yunke Zhang, Huandong Wang, Yong Li
Causality Enhanced Origin-Destination Flow Prediction in Data-Scarce Cities
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Accurate origin-destination (OD) flow prediction is of great importance to developing cities, as it can contribute to optimizing urban structures and layouts. However, with the common issues of missing regional features and lacking OD flow data, it is quite daunting to predict OD flow in developing cities. To address this challenge, we propose a novel Causality-Enhanced OD Flow Prediction (CE-OFP), a unified framework that aims to transfer urban knowledge between cities and achieve accuracy improvements in OD flow predictions across data-scarce cities. Specifically, we propose a novel reinforcement learning model to discover universal causalities among urban features in data-rich cities and build corresponding causal graphs. Then, we further build a Causality-Enhanced Variational Auto-Encoder (CE-VAE) to incorporate causal graphs for effective feature reconstruction in data-scarce cities. Finally, with the reconstructed features, we devise a knowledge distillation method with a graph attention network to migrate the OD prediction model from data-rich cities to data-scarce cities. Extensive experiments on two pairs of real-world datasets validate that the proposed CE-OFP remarkably outperforms state-of-the-art baselines, reducing the RMSE of OD flow prediction for data-scarce cities by up to 11%.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 02:36:36 GMT" } ]
2025-03-11T00:00:00
[ [ "Feng", "Tao", "" ], [ "Zhang", "Yunke", "" ], [ "Wang", "Huandong", "" ], [ "Li", "Yong", "" ] ]
TITLE: Causality Enhanced Origin-Destination Flow Prediction in Data-Scarce Cities ABSTRACT: Accurate origin-destination (OD) flow prediction is of great importance to developing cities, as it can contribute to optimizing urban structures and layouts. However, with the common issues of missing regional features and lacking OD flow data, it is quite daunting to predict OD flow in developing cities. To address this challenge, we propose a novel Causality-Enhanced OD Flow Prediction (CE-OFP), a unified framework that aims to transfer urban knowledge between cities and achieve accuracy improvements in OD flow predictions across data-scarce cities. Specifically, we propose a novel reinforcement learning model to discover universal causalities among urban features in data-rich cities and build corresponding causal graphs. Then, we further build a Causality-Enhanced Variational Auto-Encoder (CE-VAE) to incorporate causal graphs for effective feature reconstruction in data-scarce cities. Finally, with the reconstructed features, we devise a knowledge distillation method with a graph attention network to migrate the OD prediction model from data-rich cities to data-scarce cities. Extensive experiments on two pairs of real-world datasets validate that the proposed CE-OFP remarkably outperforms state-of-the-art baselines, reducing the RMSE of OD flow prediction for data-scarce cities by up to 11%.
no_new_dataset
0.946843
2503.06422
Yuteng Zhang
Lin Zhang, Yuteng Zhang, Dusit Niyato, Lei Ren, Pengfei Gu, Zhen Chen, Yuanjun Laili, Wentong Cai and Agostino Bruzzone
GenAI for Simulation Model in Model-Based Systems Engineering
This work has been submitted to the IEEE for possible publication
null
null
null
cs.SE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative AI (GenAI) has demonstrated remarkable capabilities in code generation, and its integration into complex product modeling and simulation code generation can significantly enhance the efficiency of the system design phase in Model-Based Systems Engineering (MBSE). In this study, we introduce a generative system design methodology framework for MBSE, offering a practical approach for the intelligent generation of simulation models for system physical properties. First, we employ inference techniques, generative models, and integrated modeling and simulation languages to construct simulation models for system physical properties based on product design documents. Subsequently, we fine-tune the language model used for simulation model generation on an existing library of simulation models and additional datasets generated through generative modeling. Finally, we introduce evaluation metrics for the generated simulation models for system physical properties. Our proposed approach to simulation model generation presents the innovative concept of scalable templates for simulation models. Using these templates, GenAI generates simulation models for system physical properties through code completion. The experimental results demonstrate that, for mainstream open-source Transformer-based models, the quality of the simulation model is significantly improved using the simulation model generation method proposed in this paper.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 03:33:25 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhang", "Lin", "" ], [ "Zhang", "Yuteng", "" ], [ "Niyato", "Dusit", "" ], [ "Ren", "Lei", "" ], [ "Gu", "Pengfei", "" ], [ "Chen", "Zhen", "" ], [ "Laili", "Yuanjun", "" ], [ "Cai", "Wentong", "" ], [ "Bruzzone", "Agostino", "" ] ]
TITLE: GenAI for Simulation Model in Model-Based Systems Engineering ABSTRACT: Generative AI (GenAI) has demonstrated remarkable capabilities in code generation, and its integration into complex product modeling and simulation code generation can significantly enhance the efficiency of the system design phase in Model-Based Systems Engineering (MBSE). In this study, we introduce a generative system design methodology framework for MBSE, offering a practical approach for the intelligent generation of simulation models for system physical properties. First, we employ inference techniques, generative models, and integrated modeling and simulation languages to construct simulation models for system physical properties based on product design documents. Subsequently, we fine-tune the language model used for simulation model generation on an existing library of simulation models and additional datasets generated through generative modeling. Finally, we introduce evaluation metrics for the generated simulation models for system physical properties. Our proposed approach to simulation model generation presents the innovative concept of scalable templates for simulation models. Using these templates, GenAI generates simulation models for system physical properties through code completion. The experimental results demonstrate that, for mainstream open-source Transformer-based models, the quality of the simulation model is significantly improved using the simulation model generation method proposed in this paper.
no_new_dataset
0.948489
2503.06428
Tianshu Huang
Tianshu Huang, Arjun Ramesh, Emily Ruppel, Nuno Pereira, Anthony Rowe, Carlee Joe-Wong
Interference-Aware Edge Runtime Prediction with Conformal Matrix Completion
To appear at MLSys 2025
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Accurately estimating workload runtime is a longstanding goal in computer systems, and plays a key role in efficient resource provisioning, latency minimization, and various other system management tasks. Runtime prediction is particularly important for managing increasingly complex distributed systems in which more sophisticated processing is pushed to the edge in search of better latency. Previous approaches for runtime prediction in edge systems suffer from poor data efficiency or require intensive instrumentation; these challenges are compounded in heterogeneous edge computing environments, where historical runtime data may be sparsely available and instrumentation is often challenging. Moreover, edge computing environments often feature multi-tenancy due to limited resources at the network edge, potentially leading to interference between workloads and further complicating the runtime prediction problem. Drawing from insights across machine learning and computer systems, we design a matrix factorization-inspired method that generates accurate interference-aware predictions with tight provably-guaranteed uncertainty bounds. We validate our method on a novel WebAssembly runtime dataset collected from 24 unique devices, achieving a prediction error of 5.2% -- 2x better than a naive application of existing methods.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 03:41:32 GMT" } ]
2025-03-11T00:00:00
[ [ "Huang", "Tianshu", "" ], [ "Ramesh", "Arjun", "" ], [ "Ruppel", "Emily", "" ], [ "Pereira", "Nuno", "" ], [ "Rowe", "Anthony", "" ], [ "Joe-Wong", "Carlee", "" ] ]
TITLE: Interference-Aware Edge Runtime Prediction with Conformal Matrix Completion ABSTRACT: Accurately estimating workload runtime is a longstanding goal in computer systems, and plays a key role in efficient resource provisioning, latency minimization, and various other system management tasks. Runtime prediction is particularly important for managing increasingly complex distributed systems in which more sophisticated processing is pushed to the edge in search of better latency. Previous approaches for runtime prediction in edge systems suffer from poor data efficiency or require intensive instrumentation; these challenges are compounded in heterogeneous edge computing environments, where historical runtime data may be sparsely available and instrumentation is often challenging. Moreover, edge computing environments often feature multi-tenancy due to limited resources at the network edge, potentially leading to interference between workloads and further complicating the runtime prediction problem. Drawing from insights across machine learning and computer systems, we design a matrix factorization-inspired method that generates accurate interference-aware predictions with tight provably-guaranteed uncertainty bounds. We validate our method on a novel WebAssembly runtime dataset collected from 24 unique devices, achieving a prediction error of 5.2% -- 2x better than a naive application of existing methods.
new_dataset
0.962356
2503.06430
Zhangchi Qiu
Zhangchi Qiu, Linhao Luo, Zicheng Zhao, Shirui Pan and Alan Wee-Chung Liew
Graph Retrieval-Augmented LLM for Conversational Recommendation Systems
Accepted by PAKDD 2025
null
null
null
cs.CL cs.AI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conversational Recommender Systems (CRSs) have emerged as a transformative paradigm for offering personalized recommendations through natural language dialogue. However, they face challenges with knowledge sparsity, as users often provide brief, incomplete preference statements. While recent methods have integrated external knowledge sources to mitigate this, they still struggle with semantic understanding and complex preference reasoning. Recent Large Language Models (LLMs) demonstrate promising capabilities in natural language understanding and reasoning, showing significant potential for CRSs. Nevertheless, due to the lack of domain knowledge, existing LLM-based CRSs either produce hallucinated recommendations or demand expensive domain-specific training, which largely limits their applicability. In this work, we present G-CRS (Graph Retrieval-Augmented Large Language Model for Conversational Recommender Systems), a novel training-free framework that combines graph retrieval-augmented generation and in-context learning to enhance LLMs' recommendation capabilities. Specifically, G-CRS employs a two-stage retrieve-and-recommend architecture, where a GNN-based graph reasoner first identifies candidate items, followed by Personalized PageRank exploration to jointly discover potential items and similar user interactions. These retrieved contexts are then transformed into structured prompts for LLM reasoning, enabling contextually grounded recommendations without task-specific training. Extensive experiments on two public datasets show that G-CRS achieves superior recommendation performance compared to existing methods without requiring task-specific training.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 03:56:22 GMT" } ]
2025-03-11T00:00:00
[ [ "Qiu", "Zhangchi", "" ], [ "Luo", "Linhao", "" ], [ "Zhao", "Zicheng", "" ], [ "Pan", "Shirui", "" ], [ "Liew", "Alan Wee-Chung", "" ] ]
TITLE: Graph Retrieval-Augmented LLM for Conversational Recommendation Systems ABSTRACT: Conversational Recommender Systems (CRSs) have emerged as a transformative paradigm for offering personalized recommendations through natural language dialogue. However, they face challenges with knowledge sparsity, as users often provide brief, incomplete preference statements. While recent methods have integrated external knowledge sources to mitigate this, they still struggle with semantic understanding and complex preference reasoning. Recent Large Language Models (LLMs) demonstrate promising capabilities in natural language understanding and reasoning, showing significant potential for CRSs. Nevertheless, due to the lack of domain knowledge, existing LLM-based CRSs either produce hallucinated recommendations or demand expensive domain-specific training, which largely limits their applicability. In this work, we present G-CRS (Graph Retrieval-Augmented Large Language Model for Conversational Recommender Systems), a novel training-free framework that combines graph retrieval-augmented generation and in-context learning to enhance LLMs' recommendation capabilities. Specifically, G-CRS employs a two-stage retrieve-and-recommend architecture, where a GNN-based graph reasoner first identifies candidate items, followed by Personalized PageRank exploration to jointly discover potential items and similar user interactions. These retrieved contexts are then transformed into structured prompts for LLM reasoning, enabling contextually grounded recommendations without task-specific training. Extensive experiments on two public datasets show that G-CRS achieves superior recommendation performance compared to existing methods without requiring task-specific training.
no_new_dataset
0.949153
2503.06435
Adrian Chow
Adrian Chow, Evelien Riddell, Yimu Wang, Sean Sedwards, Krzysztof Czarnecki
OV-SCAN: Semantically Consistent Alignment for Novel Object Discovery in Open-Vocabulary 3D Object Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Open-vocabulary 3D object detection for autonomous driving aims to detect novel objects beyond the predefined training label sets in point cloud scenes. Existing approaches achieve this by connecting traditional 3D object detectors with vision-language models (VLMs) to regress 3D bounding boxes for novel objects and perform open-vocabulary classification through cross-modal alignment between 3D and 2D features. However, achieving robust cross-modal alignment remains a challenge due to semantic inconsistencies when generating corresponding 3D and 2D feature pairs. To overcome this challenge, we present OV-SCAN, an Open-Vocabulary 3D framework that enforces Semantically Consistent Alignment for Novel object discovery. OV-SCAN employs two core strategies: discovering precise 3D annotations and filtering out low-quality or corrupted alignment pairs (arising from 3D annotation, occlusion-induced, or resolution-induced noise). Extensive experiments on the nuScenes dataset demonstrate that OV-SCAN achieves state-of-the-art performance.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 04:22:08 GMT" } ]
2025-03-11T00:00:00
[ [ "Chow", "Adrian", "" ], [ "Riddell", "Evelien", "" ], [ "Wang", "Yimu", "" ], [ "Sedwards", "Sean", "" ], [ "Czarnecki", "Krzysztof", "" ] ]
TITLE: OV-SCAN: Semantically Consistent Alignment for Novel Object Discovery in Open-Vocabulary 3D Object Detection ABSTRACT: Open-vocabulary 3D object detection for autonomous driving aims to detect novel objects beyond the predefined training label sets in point cloud scenes. Existing approaches achieve this by connecting traditional 3D object detectors with vision-language models (VLMs) to regress 3D bounding boxes for novel objects and perform open-vocabulary classification through cross-modal alignment between 3D and 2D features. However, achieving robust cross-modal alignment remains a challenge due to semantic inconsistencies when generating corresponding 3D and 2D feature pairs. To overcome this challenge, we present OV-SCAN, an Open-Vocabulary 3D framework that enforces Semantically Consistent Alignment for Novel object discovery. OV-SCAN employs two core strategies: discovering precise 3D annotations and filtering out low-quality or corrupted alignment pairs (arising from 3D annotation, occlusion-induced, or resolution-induced noise). Extensive experiments on the nuScenes dataset demonstrate that OV-SCAN achieves state-of-the-art performance.
no_new_dataset
0.942401
2503.06436
Fan Meng
Fan Meng
Physics-Informed Residual Neural Ordinary Differential Equations for Enhanced Tropical Cyclone Intensity Forecasting
14 pages, 9 figures
null
null
null
physics.ao-ph cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Accurate tropical cyclone (TC) intensity prediction is crucial for mitigating storm hazards, yet its complex dynamics pose challenges to traditional methods. Here, we introduce a Physics-Informed Residual Neural Ordinary Differential Equation (PIR-NODE) model to precisely forecast TC intensity evolution. This model leverages the powerful non-linear fitting capabilities of deep learning, integrates residual connections to enhance model depth and training stability, and explicitly models the continuous temporal evolution of TC intensity using Neural ODEs. Experimental results on the SHIPS dataset demonstrate that the PIR-NODE model achieves a significant improvement in 24-hour intensity prediction accuracy compared to traditional statistical models and benchmark deep learning methods, with a 25.2% reduction in the root mean square error (RMSE) and a 19.5% increase in R-squared (R2) relative to a neural network baseline. Crucially, the residual structure effectively preserves initial state information, and the model exhibits robust generalization capabilities. This study details the PIR-NODE model architecture, physics-informed integration strategies, and comprehensive experimental validation, revealing the substantial potential of deep learning techniques in predicting complex geophysical systems and laying the foundation for future refined TC forecasting research.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 04:23:07 GMT" } ]
2025-03-11T00:00:00
[ [ "Meng", "Fan", "" ] ]
TITLE: Physics-Informed Residual Neural Ordinary Differential Equations for Enhanced Tropical Cyclone Intensity Forecasting ABSTRACT: Accurate tropical cyclone (TC) intensity prediction is crucial for mitigating storm hazards, yet its complex dynamics pose challenges to traditional methods. Here, we introduce a Physics-Informed Residual Neural Ordinary Differential Equation (PIR-NODE) model to precisely forecast TC intensity evolution. This model leverages the powerful non-linear fitting capabilities of deep learning, integrates residual connections to enhance model depth and training stability, and explicitly models the continuous temporal evolution of TC intensity using Neural ODEs. Experimental results on the SHIPS dataset demonstrate that the PIR-NODE model achieves a significant improvement in 24-hour intensity prediction accuracy compared to traditional statistical models and benchmark deep learning methods, with a 25.2% reduction in the root mean square error (RMSE) and a 19.5% increase in R-squared (R2) relative to a neural network baseline. Crucially, the residual structure effectively preserves initial state information, and the model exhibits robust generalization capabilities. This study details the PIR-NODE model architecture, physics-informed integration strategies, and comprehensive experimental validation, revealing the substantial potential of deep learning techniques in predicting complex geophysical systems and laying the foundation for future refined TC forecasting research.
no_new_dataset
0.951549
2503.06439
Nuoa Lei
Nuoa Lei, Arman Shehabi, Jun Lu, Zhi Cao, Jonathan Koomey, Sarah Smith, Eric Masanet
Generalizable Machine Learning Models for Predicting Data Center Server Power, Efficiency, and Throughput
null
null
null
null
cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
In the rapidly evolving digital era, comprehending the intricate dynamics influencing server power consumption, efficiency, and performance is crucial for sustainable data center operations. However, existing models lack the ability to provide a detailed and reliable understanding of these intricate relationships. This study employs a machine learning-based approach, using the SPECPower_ssj2008 database, to facilitate user-friendly and generalizable server modeling. The resulting models demonstrate high accuracy, with errors falling within approximately 10% on the testing dataset, showcasing their practical utility and generalizability. Through meticulous analysis, predictive features related to hardware availability date, server workload level, and specifications are identified, providing insights into optimizing energy conservation, efficiency, and performance in server deployment and operation. By systematically measuring biases and uncertainties, the study underscores the need for caution when employing historical data for prospective server modeling, considering the dynamic nature of technology landscapes. Collectively, this work offers valuable insights into the sustainable deployment and operation of servers in data centers, paving the way for enhanced resource use efficiency and more environmentally conscious practices.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 04:39:53 GMT" } ]
2025-03-11T00:00:00
[ [ "Lei", "Nuoa", "" ], [ "Shehabi", "Arman", "" ], [ "Lu", "Jun", "" ], [ "Cao", "Zhi", "" ], [ "Koomey", "Jonathan", "" ], [ "Smith", "Sarah", "" ], [ "Masanet", "Eric", "" ] ]
TITLE: Generalizable Machine Learning Models for Predicting Data Center Server Power, Efficiency, and Throughput ABSTRACT: In the rapidly evolving digital era, comprehending the intricate dynamics influencing server power consumption, efficiency, and performance is crucial for sustainable data center operations. However, existing models lack the ability to provide a detailed and reliable understanding of these intricate relationships. This study employs a machine learning-based approach, using the SPECPower_ssj2008 database, to facilitate user-friendly and generalizable server modeling. The resulting models demonstrate high accuracy, with errors falling within approximately 10% on the testing dataset, showcasing their practical utility and generalizability. Through meticulous analysis, predictive features related to hardware availability date, server workload level, and specifications are identified, providing insights into optimizing energy conservation, efficiency, and performance in server deployment and operation. By systematically measuring biases and uncertainties, the study underscores the need for caution when employing historical data for prospective server modeling, considering the dynamic nature of technology landscapes. Collectively, this work offers valuable insights into the sustainable deployment and operation of servers in data centers, paving the way for enhanced resource use efficiency and more environmentally conscious practices.
no_new_dataset
0.948394
2503.06441
Huaming Du
Huaming Du, Lei Yuan, Qing Yang, Xingyan Chen, Yu Zhao, Han Ji, Fuzhen Zhuang, Carl Yang, Gang Kou
Identifying Evidence Subgraphs for Financial Risk Detection via Graph Counterfactual and Factual Reasoning
null
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Company financial risks pose a significant threat to personal wealth and national economic stability, stimulating increasing attention towards the development of efficient and timely methods for monitoring them. Current approaches tend to use graph neural networks (GNNs) to model the momentum spillover effect of risks. However, due to the black-box nature of GNNs, these methods leave much to be improved for precise and reliable explanations of company risks. In this paper, we propose CF3, a novel Counterfactual and Factual learning method for company Financial risk detection, which generates evidence subgraphs on company knowledge graphs to reliably detect and explain company financial risks. Specifically, we first propose a meta-path attribution process based on Granger causality, selecting the meta-paths most relevant to the target node labels to construct an attribution subgraph. Subsequently, we propose an edge-type-aware graph generator to identify important edges, and we also devise a layer-based feature masker to recognize crucial node features. Finally, we utilize counterfactual-factual reasoning and a loss function based on attribution subgraphs to jointly guide the learning of the graph generator and feature masker. Extensive experiments on three real-world datasets demonstrate the superior performance of our method compared to state-of-the-art approaches in the field of financial risk detection.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 04:45:39 GMT" } ]
2025-03-11T00:00:00
[ [ "Du", "Huaming", "" ], [ "Yuan", "Lei", "" ], [ "Yang", "Qing", "" ], [ "Chen", "Xingyan", "" ], [ "Zhao", "Yu", "" ], [ "Ji", "Han", "" ], [ "Zhuang", "Fuzhen", "" ], [ "Yang", "Carl", "" ], [ "Kou", "Gang", "" ] ]
TITLE: Identifying Evidence Subgraphs for Financial Risk Detection via Graph Counterfactual and Factual Reasoning ABSTRACT: Company financial risks pose a significant threat to personal wealth and national economic stability, stimulating increasing attention towards the development of efficient and timely methods for monitoring them. Current approaches tend to use graph neural networks (GNNs) to model the momentum spillover effect of risks. However, due to the black-box nature of GNNs, these methods leave much room for improvement in providing precise and reliable explanations of company risks. In this paper, we propose CF3, a novel Counterfactual and Factual learning method for company Financial risk detection, which generates evidence subgraphs on company knowledge graphs to reliably detect and explain company financial risks. Specifically, we first propose a meta-path attribution process based on Granger causality, selecting the meta-paths most relevant to the target node labels to construct an attribution subgraph. Subsequently, we propose an edge-type-aware graph generator to identify important edges, and we also devise a layer-based feature masker to recognize crucial node features. Finally, we utilize counterfactual-factual reasoning and a loss function based on attribution subgraphs to jointly guide the learning of the graph generator and feature masker. Extensive experiments on three real-world datasets demonstrate the superior performance of our method compared to state-of-the-art approaches in the field of financial risk detection.
no_new_dataset
0.944944
2503.06444
Zuqing Li
Zuqing Li, Jianzhong Qi, Junhao Gan
CtrTab: Tabular Data Synthesis with High-Dimensional and Limited Data
null
null
null
null
cs.LG cs.AI cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion-based tabular data synthesis models have yielded promising results. However, we observe that when the data dimensionality increases, existing models tend to degenerate and may perform even worse than simpler, non-diffusion-based models. This is because limited training samples in high-dimensional space often hinder generative models from capturing the distribution accurately. To address this issue, we propose CtrTab, a condition-controlled diffusion model for tabular data synthesis, to improve the performance of diffusion-based generative models in high-dimensional, low-data scenarios. Through CtrTab, we inject samples with added Laplace noise as control signals to improve data diversity and show its resemblance to L2 regularization, which enhances model robustness. Experimental results across multiple datasets show that CtrTab outperforms state-of-the-art models, with a performance gap in accuracy of over 80% on average. Our source code will be released upon paper publication.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 05:01:56 GMT" } ]
2025-03-11T00:00:00
[ [ "Li", "Zuqing", "" ], [ "Qi", "Jianzhong", "" ], [ "Gan", "Junhao", "" ] ]
TITLE: CtrTab: Tabular Data Synthesis with High-Dimensional and Limited Data ABSTRACT: Diffusion-based tabular data synthesis models have yielded promising results. However, we observe that when the data dimensionality increases, existing models tend to degenerate and may perform even worse than simpler, non-diffusion-based models. This is because limited training samples in high-dimensional space often hinder generative models from capturing the distribution accurately. To address this issue, we propose CtrTab, a condition-controlled diffusion model for tabular data synthesis, to improve the performance of diffusion-based generative models in high-dimensional, low-data scenarios. Through CtrTab, we inject samples with added Laplace noise as control signals to improve data diversity and show its resemblance to L2 regularization, which enhances model robustness. Experimental results across multiple datasets show that CtrTab outperforms state-of-the-art models, with a performance gap in accuracy of over 80% on average. Our source code will be released upon paper publication.
no_new_dataset
0.949576
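A minimal sketch of the control-signal mechanism named in the CtrTab abstract above: a Laplace-perturbed copy of the clean row is fed as the condition to a denoiser. The network shape and noise scale are illustrative assumptions, not the paper's configuration.

# Sketch of the Laplace-noise control signal from the CtrTab abstract:
# the condition fed to the denoiser is the clean row plus Laplace noise.
# Network size and noise scale are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        # Takes the noisy sample concatenated with its control signal.
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.SiLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x_noisy: torch.Tensor, control: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_noisy, control], dim=-1))

def make_control(x_clean: torch.Tensor, scale: float = 0.1) -> torch.Tensor:
    # Laplace-perturbed copy of the clean row serves as the control signal;
    # the abstract notes this resembles L2 regularization in effect.
    noise = torch.distributions.Laplace(0.0, scale).sample(x_clean.shape)
    return x_clean + noise

x = torch.randn(32, 16)                 # a batch of 16-dim tabular rows
model = ConditionedDenoiser(dim=16)
eps_pred = model(x + torch.randn_like(x), make_control(x))
print(eps_pred.shape)  # torch.Size([32, 16])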
2503.06445
Junlong Zhang
Shuai Pang, Junlong Zhang, and Lei Dong
Socioeconomic centers in cities worldwide
null
null
null
null
physics.soc-ph cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Urban centers serve as engines of regional development, yet accurately defining and identifying the socioeconomic centers of cities globally remains a major challenge. Existing mapping efforts are often limited to large cities in developed regions and rely on data sources that are unavailable in many developing countries. This data scarcity hinders the establishment of consistent urban indicators, such as accessibility, to assess progress towards the United Nations Sustainable Development Goals (SDGs). Here, we develop and validate a global map of the socioeconomic centers of cities for 2020 by integrating nighttime light and population density data within an advanced geospatial modeling framework. Our analysis reveals that monocentric cities -- the standard urban model -- still dominate our planet, accounting for over 80% of cities worldwide. However, these monocentric cities encompass only approximately 20% of the total urbanized area, urban population, and nighttime light intensity; this 80/20 pattern underscores significant disparities in urban development. Further analysis, combined with socioeconomic datasets, reveals a marked difference between developed and developing regions: high-income countries exhibit greater polycentricity than low-income countries, demonstrating a positive correlation between urban sprawl and economic growth. Our global dataset and findings provide critical insights into urban structure and development, with important implications for urban planning, policymaking, and the formulation of indicators for urban sustainability assessment.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 05:02:28 GMT" } ]
2025-03-11T00:00:00
[ [ "Pang", "Shuai", "" ], [ "Zhang", "Junlong", "" ], [ "Dong", "Lei", "" ] ]
TITLE: Socioeconomic centers in cities worldwide ABSTRACT: Urban centers serve as engines of regional development, yet accurately defining and identifying the socioeconomic centers of cities globally remains a major challenge. Existing mapping efforts are often limited to large cities in developed regions and rely on data sources that are unavailable in many developing countries. This data scarcity hinders the establishment of consistent urban indicators, such as accessibility, to assess progress towards the United Nations Sustainable Development Goals (SDGs). Here, we develop and validate a global map of the socioeconomic centers of cities for 2020 by integrating nighttime light and population density data within an advanced geospatial modeling framework. Our analysis reveals that monocentric cities -- the standard urban model -- still dominate our planet, accounting for over 80% of cities worldwide. However, these monocentric cities encompass only approximately 20% of the total urbanized area, urban population, and nighttime light intensity; this 80/20 pattern underscores significant disparities in urban development. Further analysis, combined with socioeconomic datasets, reveals a marked difference between developed and developing regions: high-income countries exhibit greater polycentricity than low-income countries, demonstrating a positive correlation between urban sprawl and economic growth. Our global dataset and findings provide critical insights into urban structure and development, with important implications for urban planning, policymaking, and the formulation of indicators for urban sustainability assessment.
no_new_dataset
0.538377
2503.06461
Hongsin Lee
Seungju Cho, Hongsin Lee, Changick Kim
Long-tailed Adversarial Training with Self-Distillation
ICLR 2025
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial training significantly enhances adversarial robustness, yet superior performance is predominantly achieved on balanced datasets. Addressing adversarial robustness in the context of unbalanced or long-tailed distributions is considerably more challenging, mainly due to the scarcity of tail data instances. Previous research on adversarial robustness within long-tailed distributions has primarily focused on combining traditional long-tailed natural training with existing adversarial robustness methods. In this study, we provide an in-depth analysis of why adversarial training struggles to achieve high performance on tail classes in long-tailed distributions. Furthermore, we propose a simple yet effective solution to advance adversarial robustness on long-tailed distributions through a novel self-distillation technique. Specifically, this approach leverages a balanced self-teacher model, which is trained using a balanced dataset sampled from the original long-tailed dataset. Our extensive experiments demonstrate state-of-the-art performance in both clean and robust accuracy for long-tailed adversarial robustness, with significant improvements in tail class performance on various datasets. We improve the accuracy against PGD attacks for tail classes by 20.3, 7.1, and 3.8 percentage points on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, while achieving the highest robust accuracy.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 05:39:36 GMT" } ]
2025-03-11T00:00:00
[ [ "Cho", "Seungju", "" ], [ "Lee", "Hongsin", "" ], [ "Kim", "Changick", "" ] ]
TITLE: Long-tailed Adversarial Training with Self-Distillation ABSTRACT: Adversarial training significantly enhances adversarial robustness, yet superior performance is predominantly achieved on balanced datasets. Addressing adversarial robustness in the context of unbalanced or long-tailed distributions is considerably more challenging, mainly due to the scarcity of tail data instances. Previous research on adversarial robustness within long-tailed distributions has primarily focused on combining traditional long-tailed natural training with existing adversarial robustness methods. In this study, we provide an in-depth analysis of why adversarial training struggles to achieve high performance on tail classes in long-tailed distributions. Furthermore, we propose a simple yet effective solution to advance adversarial robustness on long-tailed distributions through a novel self-distillation technique. Specifically, this approach leverages a balanced self-teacher model, which is trained using a balanced dataset sampled from the original long-tailed dataset. Our extensive experiments demonstrate state-of-the-art performance in both clean and robust accuracy for long-tailed adversarial robustness, with significant improvements in tail class performance on various datasets. We improve the accuracy against PGD attacks for tail classes by 20.3, 7.1, and 3.8 percentage points on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, while achieving the highest robust accuracy.
no_new_dataset
0.947769
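A hedged sketch of the self-distillation objective suggested by the abstract above: a student trained on adversarial inputs is guided by a balanced self-teacher through a temperature-scaled KL term. The loss weighting and temperature are assumptions for illustration, not the paper's settings.

# Sketch of a self-distillation loss with a balanced self-teacher:
# cross-entropy on (adversarial) inputs plus temperature-scaled KL to the
# teacher's predictions. Alpha and T are illustrative assumptions.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels,
                 alpha: float = 0.5, T: float = 2.0):
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return (1 - alpha) * ce + alpha * kl

s = torch.randn(8, 10, requires_grad=True)   # student logits on adversarial inputs
t = torch.randn(8, 10)                        # balanced self-teacher logits
y = torch.randint(0, 10, (8,))
print(distill_loss(s, t, y))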
2503.06467
Qiming Xia
Shijia Zhao, Qiming Xia, Xusheng Guo, Pufan Zou, Maoji Zheng, Hai Wu, Chenglu Wen, Cheng Wang
SP3D: Boosting Sparsely-Supervised 3D Object Detection via Accurate Cross-Modal Semantic Prompts
11 pages, 3 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, sparsely-supervised 3D object detection has gained great attention, achieving performance close to fully-supervised 3D detectors while requiring only a few annotated instances. Nevertheless, these methods struggle when accurate labels are extremely scarce. In this paper, we propose a boosting strategy, termed SP3D, explicitly utilizing the cross-modal semantic prompts generated from Large Multimodal Models (LMMs) to boost the 3D detector with robust feature discrimination capability under sparse annotation settings. Specifically, we first develop a Confident Points Semantic Transfer (CPST) module that generates accurate cross-modal semantic prompts through boundary-constrained center cluster selection. Based on these accurate semantic prompts, which we treat as seed points, we introduce a Dynamic Cluster Pseudo-label Generation (DCPG) module to yield pseudo-supervision signals from the geometric shape of multi-scale neighbor points. Additionally, we design a Distribution Shape score (DS score) that chooses high-quality supervision signals for the initial training of the 3D detector. Experiments on the KITTI dataset and Waymo Open Dataset (WOD) have validated that SP3D can enhance the performance of sparsely supervised detectors by a large margin under meager labeling conditions. Moreover, we verified SP3D in the zero-shot setting, where its performance exceeded that of the state-of-the-art methods. The code is available at https://github.com/xmuqimingxia/SP3D.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 06:08:04 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhao", "Shijia", "" ], [ "Xia", "Qiming", "" ], [ "Guo", "Xusheng", "" ], [ "Zou", "Pufan", "" ], [ "Zheng", "Maoji", "" ], [ "Wu", "Hai", "" ], [ "Wen", "Chenglu", "" ], [ "Wang", "Cheng", "" ] ]
TITLE: SP3D: Boosting Sparsely-Supervised 3D Object Detection via Accurate Cross-Modal Semantic Prompts ABSTRACT: Recently, sparsely-supervised 3D object detection has gained great attention, achieving performance close to fully-supervised 3D detectors while requiring only a few annotated instances. Nevertheless, these methods struggle when accurate labels are extremely scarce. In this paper, we propose a boosting strategy, termed SP3D, explicitly utilizing the cross-modal semantic prompts generated from Large Multimodal Models (LMMs) to boost the 3D detector with robust feature discrimination capability under sparse annotation settings. Specifically, we first develop a Confident Points Semantic Transfer (CPST) module that generates accurate cross-modal semantic prompts through boundary-constrained center cluster selection. Based on these accurate semantic prompts, which we treat as seed points, we introduce a Dynamic Cluster Pseudo-label Generation (DCPG) module to yield pseudo-supervision signals from the geometric shape of multi-scale neighbor points. Additionally, we design a Distribution Shape score (DS score) that chooses high-quality supervision signals for the initial training of the 3D detector. Experiments on the KITTI dataset and Waymo Open Dataset (WOD) have validated that SP3D can enhance the performance of sparsely supervised detectors by a large margin under meager labeling conditions. Moreover, we verified SP3D in the zero-shot setting, where its performance exceeded that of the state-of-the-art methods. The code is available at https://github.com/xmuqimingxia/SP3D.
no_new_dataset
0.947478
2503.06474
Kong Huanjun
Huanjun Kong, Zhefan Wang, Chenyang Wang, Zhe Ma, Nanqing Dong
HuixiangDou2: A Robustly Optimized GraphRAG Approach
11 pages
null
null
null
cs.IR cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) perform well on familiar queries but struggle with specialized or emerging topics. Graph-based Retrieval-Augmented Generation (GraphRAG) addresses this by structuring domain knowledge as a graph for dynamic retrieval. However, existing pipelines involve complex engineering workflows, making it difficult to isolate the impact of individual components. Evaluating retrieval effectiveness is also challenging due to dataset overlap with LLM pretraining data. In this work, we introduce HuixiangDou2, a robustly optimized GraphRAG framework. Specifically, we leverage the effectiveness of dual-level retrieval and optimize its performance in a 32k context for maximum precision, and compare logic-based retrieval and dual-level retrieval to enhance overall functionality. Our implementation includes comparative experiments on a test set, where Qwen2.5-7B-Instruct initially underperformed. With our approach, the score improved significantly from 60 to 74.5, as illustrated in the Figure. Experiments on domain-specific datasets reveal that dual-level retrieval enhances fuzzy matching, while logic-form retrieval improves structured reasoning. Furthermore, we propose a multi-stage verification mechanism to improve retrieval robustness without increasing computational cost. Empirical results show significant accuracy gains over baselines, highlighting the importance of adaptive retrieval. To support research and adoption, we release HuixiangDou2 as an open-source resource https://github.com/tpoisonooo/huixiangdou2.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 06:20:24 GMT" } ]
2025-03-11T00:00:00
[ [ "Kong", "Huanjun", "" ], [ "Wang", "Zhefan", "" ], [ "Wang", "Chenyang", "" ], [ "Ma", "Zhe", "" ], [ "Dong", "Nanqing", "" ] ]
TITLE: HuixiangDou2: A Robustly Optimized GraphRAG Approach ABSTRACT: Large Language Models (LLMs) perform well on familiar queries but struggle with specialized or emerging topics. Graph-based Retrieval-Augmented Generation (GraphRAG) addresses this by structuring domain knowledge as a graph for dynamic retrieval. However, existing pipelines involve complex engineering workflows, making it difficult to isolate the impact of individual components. Evaluating retrieval effectiveness is also challenging due to dataset overlap with LLM pretraining data. In this work, we introduce HuixiangDou2, a robustly optimized GraphRAG framework. Specifically, we leverage the effectiveness of dual-level retrieval and optimize its performance in a 32k context for maximum precision, and compare logic-based retrieval and dual-level retrieval to enhance overall functionality. Our implementation includes comparative experiments on a test set, where Qwen2.5-7B-Instruct initially underperformed. With our approach, the score improved significantly from 60 to 74.5, as illustrated in the Figure. Experiments on domain-specific datasets reveal that dual-level retrieval enhances fuzzy matching, while logic-form retrieval improves structured reasoning. Furthermore, we propose a multi-stage verification mechanism to improve retrieval robustness without increasing computational cost. Empirical results show significant accuracy gains over baselines, highlighting the importance of adaptive retrieval. To support research and adoption, we release HuixiangDou2 as an open-source resource https://github.com/tpoisonooo/huixiangdou2.
no_new_dataset
0.944638
2503.06477
Chuheng Wei
Chuheng Wei, Ziye Qin, Siyan Li, Ziyan Zhang, Xuanpeng Zhao, Amr Abdelraouf, Rohit Gupta, Kyungtae Han, Matthew J. Barth, Guoyuan Wu
PDB: Not All Drivers Are the Same -- A Personalized Dataset for Understanding Driving Behavior
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Driving behavior is inherently personal, influenced by individual habits, decision-making styles, and physiological states. However, most existing datasets treat all drivers as homogeneous, overlooking driver-specific variability. To address this gap, we introduce the Personalized Driving Behavior (PDB) dataset, a multi-modal dataset designed to capture personalization in driving behavior under naturalistic driving conditions. Unlike conventional datasets, PDB minimizes external influences by maintaining consistent routes, vehicles, and lighting conditions across sessions. It includes data from a 128-line LiDAR, front-facing camera video, GNSS, a 9-axis IMU, CAN bus data (throttle, brake, steering angle), and driver-specific signals such as facial video and heart rate. The dataset features 12 participants, approximately 270,000 LiDAR frames, 1.6 million images, and 6.6 TB of raw sensor data. The processed trajectory dataset consists of 1,669 segments, each spanning 10 seconds with a 0.2-second interval. By explicitly capturing drivers' behavior, PDB serves as a unique resource for human factor analysis, driver identification, and personalized mobility applications, contributing to the development of human-centric intelligent transportation systems.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 06:28:39 GMT" } ]
2025-03-11T00:00:00
[ [ "Wei", "Chuheng", "" ], [ "Qin", "Ziye", "" ], [ "Li", "Siyan", "" ], [ "Zhang", "Ziyan", "" ], [ "Zhao", "Xuanpeng", "" ], [ "Abdelraouf", "Amr", "" ], [ "Gupta", "Rohit", "" ], [ "Han", "Kyungtae", "" ], [ "Barth", "Matthew J.", "" ], [ "Wu", "Guoyuan", "" ] ]
TITLE: PDB: Not All Drivers Are the Same -- A Personalized Dataset for Understanding Driving Behavior ABSTRACT: Driving behavior is inherently personal, influenced by individual habits, decision-making styles, and physiological states. However, most existing datasets treat all drivers as homogeneous, overlooking driver-specific variability. To address this gap, we introduce the Personalized Driving Behavior (PDB) dataset, a multi-modal dataset designed to capture personalization in driving behavior under naturalistic driving conditions. Unlike conventional datasets, PDB minimizes external influences by maintaining consistent routes, vehicles, and lighting conditions across sessions. It includes data from a 128-line LiDAR, front-facing camera video, GNSS, a 9-axis IMU, CAN bus data (throttle, brake, steering angle), and driver-specific signals such as facial video and heart rate. The dataset features 12 participants, approximately 270,000 LiDAR frames, 1.6 million images, and 6.6 TB of raw sensor data. The processed trajectory dataset consists of 1,669 segments, each spanning 10 seconds with a 0.2-second interval. By explicitly capturing drivers' behavior, PDB serves as a unique resource for human factor analysis, driver identification, and personalized mobility applications, contributing to the development of human-centric intelligent transportation systems.
new_dataset
0.965867
2503.06479
Ali Sarabadani
Ali Sarabadani (1), Kheirolah Rahsepar Fard (2), and Hamid Dalvand (3) ((1) Department of Computer Engineering and Information Technology, University of Qom, Iran, (2) Department of Computer Engineering and Information Technology, University of Qom, Iran, (3) Department of Occupational Therapy, School of Rehabilitation, Tehran University of Medical Sciences, Iran)
ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The paper introduces ExKG-LLM, a framework designed to automate the expansion of cognitive neuroscience knowledge graphs (CNKG) using large language models (LLMs). It addresses limitations in existing tools by enhancing accuracy, completeness, and usefulness in CNKG. The framework leverages a large dataset of scientific papers and clinical reports, applying state-of-the-art LLMs to extract, optimize, and integrate new entities and relationships. Evaluation metrics include precision, recall, and graph density. Results show significant improvements: precision (0.80, +6.67%), recall (0.81, +15.71%), F1 score (0.805, +11.81%), and increased edge nodes (21.13% and 31.92%). Graph density slightly decreased, reflecting a broader but more fragmented structure. Engagement rates rose by 20%, while CNKG diameter increased to 15, indicating a more distributed structure. Time complexity improved to O(n log n), but space complexity rose to O(n^2), indicating higher memory usage. ExKG-LLM demonstrates potential for enhancing knowledge generation, semantic search, and clinical decision-making in cognitive neuroscience, adaptable to broader scientific fields.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 06:32:56 GMT" } ]
2025-03-11T00:00:00
[ [ "Sarabadani", "Ali", "" ], [ "Fard", "Kheirolah Rahsepar", "" ], [ "Dalvand", "Hamid", "" ] ]
TITLE: ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs ABSTRACT: The paper introduces ExKG-LLM, a framework designed to automate the expansion of cognitive neuroscience knowledge graphs (CNKG) using large language models (LLMs). It addresses limitations in existing tools by enhancing accuracy, completeness, and usefulness in CNKG. The framework leverages a large dataset of scientific papers and clinical reports, applying state-of-the-art LLMs to extract, optimize, and integrate new entities and relationships. Evaluation metrics include precision, recall, and graph density. Results show significant improvements: precision (0.80, +6.67%), recall (0.81, +15.71%), F1 score (0.805, +11.81%), and increased edge nodes (21.13% and 31.92%). Graph density slightly decreased, reflecting a broader but more fragmented structure. Engagement rates rose by 20%, while CNKG diameter increased to 15, indicating a more distributed structure. Time complexity improved to O(n log n), but space complexity rose to O(n^2), indicating higher memory usage. ExKG-LLM demonstrates potential for enhancing knowledge generation, semantic search, and clinical decision-making in cognitive neuroscience, adaptable to broader scientific fields.
no_new_dataset
0.952926
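The graph-level metrics reported in the ExKG-LLM abstract above (graph density, diameter) can be computed as in the following sketch; the toy triples are invented for illustration and are not from the paper's CNKG.

# Toy computation of the graph metrics named above using networkx.
# The edges are a made-up miniature knowledge graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("hippocampus", "memory"), ("memory", "prefrontal cortex"),
    ("prefrontal cortex", "working memory"), ("amygdala", "emotion"),
    ("emotion", "memory"),
])
print("density:", nx.density(G))    # 2m / (n(n-1)) for an undirected graph
print("diameter:", nx.diameter(G))  # longest shortest path (graph is connected)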
2503.06482
Honglin Li
Honglin Li, Zhongyi Shui, Yunlong Zhang, Chenglu Zhu, Lin Yang
PathVQ: Reforming Computational Pathology Foundation Model for Whole Slide Image Analysis via Vector Quantization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational pathology and whole-slide image (WSI) analysis are pivotal in cancer diagnosis and prognosis. However, the ultra-high resolution of WSIs presents significant modeling challenges. Recent advancements in pathology foundation models have improved performance, yet most approaches rely on the [CLS] token representation of a tile ViT as the slide-level input (a 16x16-pixel region is referred to as a patch and a 224x224-pixel region as a tile). This discards critical spatial details from patch tokens, limiting downstream WSI analysis tasks. We find that leveraging all spatial patch tokens benefits WSI analysis but incurs nearly 200x higher storage and training costs (e.g., 196 tokens in ViT$_{224}$). To address this, we introduce vector quantized (VQ) distillation on patch features, which efficiently compresses spatial patch tokens using discrete indices and a decoder. Our method reduces token dimensionality from 1024 to 16, achieving a 64x compression rate while preserving reconstruction fidelity. Furthermore, we employ a multi-scale VQ (MSVQ) strategy, which not only enhances VQ reconstruction performance but also serves as Self-supervised Learning (SSL) supervision for a seamless slide-level pretraining objective. Built upon the quantized patch features and supervision targets of tiles via MSVQ, we develop a progressive convolutional module and slide-level SSL to extract representations with rich spatial information for downstream WSI tasks. Extensive evaluations on multiple datasets demonstrate the effectiveness of our approach, achieving state-of-the-art performance in WSI analysis. Code will be available soon.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 06:51:08 GMT" } ]
2025-03-11T00:00:00
[ [ "Li", "Honglin", "" ], [ "Shui", "Zhongyi", "" ], [ "Zhang", "Yunlong", "" ], [ "Zhu", "Chenglu", "" ], [ "Yang", "Lin", "" ] ]
TITLE: PathVQ: Reforming Computational Pathology Foundation Model for Whole Slide Image Analysis via Vector Quantization ABSTRACT: Computational pathology and whole-slide image (WSI) analysis are pivotal in cancer diagnosis and prognosis. However, the ultra-high resolution of WSIs presents significant modeling challenges. Recent advancements in pathology foundation models have improved performance, yet most approaches rely on the [CLS] token representation of a tile ViT as the slide-level input (a 16x16-pixel region is referred to as a patch and a 224x224-pixel region as a tile). This discards critical spatial details from patch tokens, limiting downstream WSI analysis tasks. We find that leveraging all spatial patch tokens benefits WSI analysis but incurs nearly 200x higher storage and training costs (e.g., 196 tokens in ViT$_{224}$). To address this, we introduce vector quantized (VQ) distillation on patch features, which efficiently compresses spatial patch tokens using discrete indices and a decoder. Our method reduces token dimensionality from 1024 to 16, achieving a 64x compression rate while preserving reconstruction fidelity. Furthermore, we employ a multi-scale VQ (MSVQ) strategy, which not only enhances VQ reconstruction performance but also serves as Self-supervised Learning (SSL) supervision for a seamless slide-level pretraining objective. Built upon the quantized patch features and supervision targets of tiles via MSVQ, we develop a progressive convolutional module and slide-level SSL to extract representations with rich spatial information for downstream WSI tasks. Extensive evaluations on multiple datasets demonstrate the effectiveness of our approach, achieving state-of-the-art performance in WSI analysis. Code will be available soon.
no_new_dataset
0.947381
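A minimal sketch of the vector-quantization step described in the PathVQ abstract above: each patch feature is mapped to the index of its nearest codebook entry, with a straight-through estimator for gradients. The codebook size is an assumption; the 1024-dim features and 196-token tile layout come from the abstract, which also mentions a decoder not shown here.

# Nearest-codebook quantization of ViT patch features with a
# straight-through gradient. Codebook size is an illustrative assumption.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes: int = 8192, dim: int = 1024):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, tokens, dim); find the nearest code for each token.
        d = torch.cdist(z, self.codebook.weight[None])  # pairwise distances
        idx = d.argmin(dim=-1)          # discrete indices: the compressed form
        z_q = self.codebook(idx)
        z_q = z + (z_q - z).detach()    # straight-through gradient estimator
        return z_q, idx

vq = VectorQuantizer()
feats = torch.randn(2, 196, 1024)   # 196 spatial patch tokens per tile
recon, indices = vq(feats)
print(recon.shape, indices.shape)   # (2, 196, 1024), (2, 196)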
2503.06484
Xiao Wang
Xiao Wang, Yuehang Li, Fuling Wang, Bo Jiang, Yaowei Wang, Yonghong Tian, Jin Tang, Bin Luo
Sign Language Translation using Frame and Event Stream: Benchmark Dataset and Algorithms
In Peer Review
null
null
null
cs.CV cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate sign language understanding serves as a crucial communication channel for individuals with disabilities. Current sign language translation algorithms predominantly rely on RGB frames, which may be limited by fixed frame rates, variable lighting conditions, and motion blur caused by rapid hand movements. Inspired by the recent successful application of event cameras in other fields, we propose to leverage event streams to assist RGB cameras in capturing gesture data, addressing the various challenges mentioned above. Specifically, we first collect a large-scale RGB-Event sign language translation dataset using the DVS346 camera, termed VECSL, which contains 15,676 RGB-Event samples, 15,191 glosses, and covers 2,568 Chinese characters. These samples were gathered across a diverse range of indoor and outdoor environments, capturing multiple viewing angles, varying light intensities, and different camera motions. Due to the absence of benchmark algorithms for comparison in this new task, we retrained and evaluated multiple state-of-the-art SLT algorithms, and believe that this benchmark can effectively support subsequent related research. Additionally, we propose a novel RGB-Event sign language translation framework (i.e., M$^2$-SLT) that incorporates fine-grained micro-sign and coarse-grained macro-sign retrieval, achieving state-of-the-art results on the proposed dataset. Both the source code and dataset will be released on https://github.com/Event-AHU/OpenESL.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 06:55:46 GMT" } ]
2025-03-11T00:00:00
[ [ "Wang", "Xiao", "" ], [ "Li", "Yuehang", "" ], [ "Wang", "Fuling", "" ], [ "Jiang", "Bo", "" ], [ "Wang", "Yaowei", "" ], [ "Tian", "Yonghong", "" ], [ "Tang", "Jin", "" ], [ "Luo", "Bin", "" ] ]
TITLE: Sign Language Translation using Frame and Event Stream: Benchmark Dataset and Algorithms ABSTRACT: Accurate sign language understanding serves as a crucial communication channel for individuals with disabilities. Current sign language translation algorithms predominantly rely on RGB frames, which may be limited by fixed frame rates, variable lighting conditions, and motion blur caused by rapid hand movements. Inspired by the recent successful application of event cameras in other fields, we propose to leverage event streams to assist RGB cameras in capturing gesture data, addressing the various challenges mentioned above. Specifically, we first collect a large-scale RGB-Event sign language translation dataset using the DVS346 camera, termed VECSL, which contains 15,676 RGB-Event samples, 15,191 glosses, and covers 2,568 Chinese characters. These samples were gathered across a diverse range of indoor and outdoor environments, capturing multiple viewing angles, varying light intensities, and different camera motions. Due to the absence of benchmark algorithms for comparison in this new task, we retrained and evaluated multiple state-of-the-art SLT algorithms, and believe that this benchmark can effectively support subsequent related research. Additionally, we propose a novel RGB-Event sign language translation framework (i.e., M$^2$-SLT) that incorporates fine-grained micro-sign and coarse-grained macro-sign retrieval, achieving state-of-the-art results on the proposed dataset. Both the source code and dataset will be released on https://github.com/Event-AHU/OpenESL.
new_dataset
0.959269
2503.06487
Rina Mishra
Rina Mishra and Gaurav Varshney
A Study of Effectiveness of Brand Domain Identification Features for Phishing Detection in 2025
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phishing websites continue to pose a significant security challenge, making the development of robust detection mechanisms essential. Brand Domain Identification (BDI) serves as a crucial step in many phishing detection approaches. This study systematically evaluates the effectiveness of features employed over the past decade for BDI, focusing on their weighted importance in phishing detection as of 2025. The primary objective is to determine whether the identified brand domain matches the claimed domain, utilizing popular features for phishing detection. To validate feature importance and evaluate performance, we conducted two experiments on a dataset comprising 4,667 legitimate sites and 4,561 phishing sites. In Experiment 1, we used the Weka tool and its four attribute-ranking evaluators to identify optimized and important feature sets from five candidates: CN Information (CN), Logo Domain (LD), Form Action Domain (FAD), Most Common Link in Domain (MCLD), and Cookie Domain. The results revealed that none of the features were redundant, and Random Forest emerged as the best classifier, achieving an impressive accuracy of 99.7\% with an average response time of 0.08 seconds. In Experiment 2, we trained five machine learning models, including Random Forest, Decision Tree, Support Vector Machine, Multilayer Perceptron, and XGBoost, to assess the performance of individual BDI features and their combinations. The results demonstrated an accuracy of 99.8\%, achieved with combinations of only three features: {Most Common Link Domain, Logo Domain, Form Action Domain} and {Most Common Link Domain, CN Information, Logo Domain}, using Random Forest as the best classifier. This study underscores the importance of leveraging key domain features for efficient phishing detection and paves the way for the development of real-time, scalable detection systems.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 07:14:04 GMT" } ]
2025-03-11T00:00:00
[ [ "Mishra", "Rina", "" ], [ "Varshney", "Gaurav", "" ] ]
TITLE: A Study of Effectiveness of Brand Domain Identification Features for Phishing Detection in 2025 ABSTRACT: Phishing websites continue to pose a significant security challenge, making the development of robust detection mechanisms essential. Brand Domain Identification (BDI) serves as a crucial step in many phishing detection approaches. This study systematically evaluates the effectiveness of features employed over the past decade for BDI, focusing on their weighted importance in phishing detection as of 2025. The primary objective is to determine whether the identified brand domain matches the claimed domain, utilizing popular features for phishing detection. To validate feature importance and evaluate performance, we conducted two experiments on a dataset comprising 4,667 legitimate sites and 4,561 phishing sites. In Experiment 1, we used the Weka tool and its four attribute-ranking evaluators to identify optimized and important feature sets from five candidates: CN Information (CN), Logo Domain (LD), Form Action Domain (FAD), Most Common Link in Domain (MCLD), and Cookie Domain. The results revealed that none of the features were redundant, and Random Forest emerged as the best classifier, achieving an impressive accuracy of 99.7\% with an average response time of 0.08 seconds. In Experiment 2, we trained five machine learning models, including Random Forest, Decision Tree, Support Vector Machine, Multilayer Perceptron, and XGBoost, to assess the performance of individual BDI features and their combinations. The results demonstrated an accuracy of 99.8\%, achieved with combinations of only three features: {Most Common Link Domain, Logo Domain, Form Action Domain} and {Most Common Link Domain, CN Information, Logo Domain}, using Random Forest as the best classifier. This study underscores the importance of leveraging key domain features for efficient phishing detection and paves the way for the development of real-time, scalable detection systems.
no_new_dataset
0.950365
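An illustrative sketch of the second experiment described above: a random forest over a three-feature BDI combination. The binary feature encoding and synthetic labels are stand-ins; the study extracts these features from real web pages.

# Random forest over three hypothetical BDI match indicators. Data is
# synthetic; only the classifier choice mirrors the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 4000
# Binary indicators: does the most-common-link / logo / form-action domain
# match the claimed domain?
X = rng.integers(0, 2, size=(n, 3))
# Synthetic rule: label as phishing when most features disagree.
y = (X.sum(axis=1) < 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")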
2503.06488
Shijun Cheng
Shijun Cheng, Mohammad H. Taufik, and Tariq Alkhalifah
Seismic wavefield solutions via physics-guided generative neural operator
null
null
null
null
physics.geo-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current neural operators often struggle to generalize to complex, out-of-distribution conditions, limiting their ability to represent seismic wavefields. To address this, we propose a generative neural operator (GNO) that leverages generative diffusion models (GDMs) to learn the underlying statistical distribution of scattered wavefields while incorporating a physics-guided sampling process at each inference step. This physics guidance enforces wave equation-based constraints corresponding to specific velocity models, driving the iteratively generated wavefields toward physically consistent solutions. By training the diffusion model on wavefields corresponding to a diverse dataset of velocity models, frequencies, and source positions, our GNO can rapidly synthesize high-fidelity wavefields at inference time. Numerical experiments demonstrate that our GNO not only produces accurate wavefields matching numerical reference solutions, but also generalizes effectively to previously unseen velocity models and frequencies.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 07:19:12 GMT" } ]
2025-03-11T00:00:00
[ [ "Cheng", "Shijun", "" ], [ "Taufik", "Mohammad H.", "" ], [ "Alkhalifah", "Tariq", "" ] ]
TITLE: Seismic wavefield solutions via physics-guided generative neural operator ABSTRACT: Current neural operators often struggle to generalize to complex, out-of-distribution conditions, limiting their ability to represent seismic wavefields. To address this, we propose a generative neural operator (GNO) that leverages generative diffusion models (GDMs) to learn the underlying statistical distribution of scattered wavefields while incorporating a physics-guided sampling process at each inference step. This physics guidance enforces wave equation-based constraints corresponding to specific velocity models, driving the iteratively generated wavefields toward physically consistent solutions. By training the diffusion model on wavefields corresponding to a diverse dataset of velocity models, frequencies, and source positions, our GNO can rapidly synthesize high-fidelity wavefields at inference time. Numerical experiments demonstrate that our GNO not only produces accurate wavefields matching numerical reference solutions, but also generalizes effectively to previously unseen velocity models and frequencies.
no_new_dataset
0.955899
2503.06492
Yanling Wang
Yanling Wang, Yihan Zhao, Xiaodong Chen, Shasha Guo, Lixin Liu, Haoyang Li, Yong Xiao, Jing Zhang, Qi Li, Ke Xu
VisualSimpleQA: A Benchmark for Decoupled Evaluation of Large Vision-Language Models in Fact-Seeking Question Answering
null
null
null
null
cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Large vision-language models (LVLMs) have demonstrated remarkable achievements, yet the generation of non-factual responses remains prevalent in fact-seeking question answering (QA). Current multimodal fact-seeking benchmarks primarily focus on comparing model outputs to ground truth answers, providing limited insights into the performance of modality-specific modules. To bridge this gap, we introduce VisualSimpleQA, a multimodal fact-seeking benchmark with two key features. First, it enables streamlined and decoupled evaluation of LVLMs in visual and linguistic modalities. Second, it incorporates well-defined difficulty criteria to guide human annotation and facilitates the extraction of a challenging subset, VisualSimpleQA-hard. Experiments on 15 LVLMs show that even state-of-the-art models such as GPT-4o achieve merely 60%+ correctness in multimodal fact-seeking QA on VisualSimpleQA and 30%+ on VisualSimpleQA-hard. Furthermore, the decoupled evaluation across these models highlights substantial opportunities for improvement in both visual and linguistic modules. The dataset is available at https://huggingface.co/datasets/WYLing/VisualSimpleQA.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 07:25:32 GMT" } ]
2025-03-11T00:00:00
[ [ "Wang", "Yanling", "" ], [ "Zhao", "Yihan", "" ], [ "Chen", "Xiaodong", "" ], [ "Guo", "Shasha", "" ], [ "Liu", "Lixin", "" ], [ "Li", "Haoyang", "" ], [ "Xiao", "Yong", "" ], [ "Zhang", "Jing", "" ], [ "Li", "Qi", "" ], [ "Xu", "Ke", "" ] ]
TITLE: VisualSimpleQA: A Benchmark for Decoupled Evaluation of Large Vision-Language Models in Fact-Seeking Question Answering ABSTRACT: Large vision-language models (LVLMs) have demonstrated remarkable achievements, yet the generation of non-factual responses remains prevalent in fact-seeking question answering (QA). Current multimodal fact-seeking benchmarks primarily focus on comparing model outputs to ground truth answers, providing limited insights into the performance of modality-specific modules. To bridge this gap, we introduce VisualSimpleQA, a multimodal fact-seeking benchmark with two key features. First, it enables streamlined and decoupled evaluation of LVLMs in visual and linguistic modalities. Second, it incorporates well-defined difficulty criteria to guide human annotation and facilitates the extraction of a challenging subset, VisualSimpleQA-hard. Experiments on 15 LVLMs show that even state-of-the-art models such as GPT-4o achieve merely 60%+ correctness in multimodal fact-seeking QA on VisualSimpleQA and 30%+ on VisualSimpleQA-hard. Furthermore, the decoupled evaluation across these models highlights substantial opportunities for improvement in both visual and linguistic modules. The dataset is available at https://huggingface.co/datasets/WYLing/VisualSimpleQA.
new_dataset
0.950549
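The VisualSimpleQA abstract above links the benchmark's Hugging Face repository; a minimal loading snippet follows. The repo id comes from the abstract's URL, but the split and field layout are assumptions to verify on first use.

# Load the benchmark from the repository named in the abstract.
from datasets import load_dataset

ds = load_dataset("WYLing/VisualSimpleQA")  # repo id from the abstract's URL
print(ds)                                   # inspect the available splits/fields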
2503.06497
Enming Zhang
Enming Zhang, Peizhe Gong, Xingyuan Dai, Yisheng Lv, Qinghai Miao
Evaluation of Safety Cognition Capability in Vision-Language Models for Autonomous Driving
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assessing the safety of vision-language models (VLMs) in autonomous driving is particularly important; however, existing work mainly focuses on traditional benchmark evaluations. As interactive components within autonomous driving systems, VLMs must maintain strong safety cognition during interactions. From this perspective, we propose a novel evaluation method: Safety Cognitive Driving Benchmark (SCD-Bench). To address the large-scale annotation challenge for SCD-Bench, we develop the Autonomous Driving Image-Text Annotation System (ADA). Additionally, to ensure data quality in SCD-Bench, our dataset undergoes manual refinement by experts with professional knowledge in autonomous driving. We further develop an automated evaluation method based on large language models (LLMs). To verify its effectiveness, we compare its evaluation results with those of expert human evaluations, achieving a consistency rate of 99.74%. Preliminary experimental results indicate that existing open-source models still lack sufficient safety cognition, showing a significant gap compared to GPT-4o. Notably, lightweight models (1B-4B) demonstrate minimal safety cognition. However, since lightweight models are crucial for autonomous driving systems, this presents a significant challenge for integrating VLMs into the field.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 07:53:19 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhang", "Enming", "" ], [ "Gong", "Peizhe", "" ], [ "Dai", "Xingyuan", "" ], [ "Lv", "Yisheng", "" ], [ "Miao", "Qinghai", "" ] ]
TITLE: Evaluation of Safety Cognition Capability in Vision-Language Models for Autonomous Driving ABSTRACT: Assessing the safety of vision-language models (VLMs) in autonomous driving is particularly important; however, existing work mainly focuses on traditional benchmark evaluations. As interactive components within autonomous driving systems, VLMs must maintain strong safety cognition during interactions. From this perspective, we propose a novel evaluation method: Safety Cognitive Driving Benchmark (SCD-Bench). To address the large-scale annotation challenge for SCD-Bench, we develop the Autonomous Driving Image-Text Annotation System (ADA). Additionally, to ensure data quality in SCD-Bench, our dataset undergoes manual refinement by experts with professional knowledge in autonomous driving. We further develop an automated evaluation method based on large language models (LLMs). To verify its effectiveness, we compare its evaluation results with those of expert human evaluations, achieving a consistency rate of 99.74%. Preliminary experimental results indicate that existing open-source models still lack sufficient safety cognition, showing a significant gap compared to GPT-4o. Notably, lightweight models (1B-4B) demonstrate minimal safety cognition. However, since lightweight models are crucial for autonomous driving systems, this presents a significant challenge for integrating VLMs into the field.
new_dataset
0.608361
2503.06500
Yanwei Huang
Yanwei Huang, Yan Miao, Di Weng, Adam Perer, Yingcai Wu
StructVizor: Interactive Profiling of Semi-Structured Textual Data
Accepted for CHI 2025
null
10.1145/3706598.3713484
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Data profiling plays a critical role in understanding the structure of complex datasets and supporting numerous downstream tasks, such as social media analytics and financial fraud detection. While existing research predominantly focuses on structured data formats, a substantial portion of semi-structured textual data still requires ad-hoc and arduous manual profiling to extract and comprehend its internal structures. In this work, we propose StructVizor, an interactive profiling system that facilitates sensemaking and transformation of semi-structured textual data. Our tool mainly addresses two challenges: a) extracting and visualizing the diverse structural patterns within data, such as how information is organized or related, and b) enabling users to efficiently perform various wrangling operations on textual data. Through automatic data parsing and structure mining, StructVizor enables visual analytics of structural patterns, while incorporating novel interactions to enable profile-based data wrangling. A comparative user study involving 12 participants demonstrates the system's usability and its effectiveness in supporting exploratory data analysis and transformation tasks.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 08:03:32 GMT" } ]
2025-03-11T00:00:00
[ [ "Huang", "Yanwei", "" ], [ "Miao", "Yan", "" ], [ "Weng", "Di", "" ], [ "Perer", "Adam", "" ], [ "Wu", "Yingcai", "" ] ]
TITLE: StructVizor: Interactive Profiling of Semi-Structured Textual Data ABSTRACT: Data profiling plays a critical role in understanding the structure of complex datasets and supporting numerous downstream tasks, such as social media analytics and financial fraud detection. While existing research predominantly focuses on structured data formats, a substantial portion of semi-structured textual data still requires ad-hoc and arduous manual profiling to extract and comprehend its internal structures. In this work, we propose StructVizor, an interactive profiling system that facilitates sensemaking and transformation of semi-structured textual data. Our tool mainly addresses two challenges: a) extracting and visualizing the diverse structural patterns within data, such as how information is organized or related, and b) enabling users to efficiently perform various wrangling operations on textual data. Through automatic data parsing and structure mining, StructVizor enables visual analytics of structural patterns, while incorporating novel interactions to enable profile-based data wrangling. A comparative user study involving 12 participants demonstrates the system's usability and its effectiveness in supporting exploratory data analysis and transformation tasks.
no_new_dataset
0.948346
2503.06501
Huaqi Tao
Huaqi Tao, Bingxi Liu, Calvin Chen, Tingjun Huang, He Li, Jinqiang Cui, Hong Zhang
TextInPlace: Indoor Visual Place Recognition in Repetitive Structures with Scene Text Spotting and Verification
8 pages,5 figures
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Visual Place Recognition (VPR) is a crucial capability for long-term autonomous robots, enabling them to identify previously visited locations using visual information. However, existing methods remain limited in indoor settings due to the highly repetitive structures inherent in such environments. We observe that scene text typically appears in indoor spaces, serving to distinguish visually similar but different places. This inspires us to propose TextInPlace, a simple yet effective VPR framework that integrates Scene Text Spotting (STS) to mitigate visual perceptual ambiguity in repetitive indoor environments. Specifically, TextInPlace adopts a dual-branch architecture within a local parameter sharing network. The VPR branch employs attention-based aggregation to extract global descriptors for coarse-grained retrieval, while the STS branch utilizes a bridging text spotter to detect and recognize scene text. Finally, the discriminative text is filtered to compute text similarity and re-rank the top-K retrieved images. To bridge the gap between current text-based repetitive indoor scene datasets and the typical scenarios encountered in robot navigation, we establish an indoor VPR benchmark dataset, called Maze-with-Text. Extensive experiments on both custom and public datasets demonstrate that TextInPlace achieves superior performance over existing methods that rely solely on appearance information. The dataset, code, and trained models are publicly available at https://github.com/HqiTao/TextInPlace.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 08:03:41 GMT" } ]
2025-03-11T00:00:00
[ [ "Tao", "Huaqi", "" ], [ "Liu", "Bingxi", "" ], [ "Chen", "Calvin", "" ], [ "Huang", "Tingjun", "" ], [ "Li", "He", "" ], [ "Cui", "Jinqiang", "" ], [ "Zhang", "Hong", "" ] ]
TITLE: TextInPlace: Indoor Visual Place Recognition in Repetitive Structures with Scene Text Spotting and Verification ABSTRACT: Visual Place Recognition (VPR) is a crucial capability for long-term autonomous robots, enabling them to identify previously visited locations using visual information. However, existing methods remain limited in indoor settings due to the highly repetitive structures inherent in such environments. We observe that scene text typically appears in indoor spaces, serving to distinguish visually similar but different places. This inspires us to propose TextInPlace, a simple yet effective VPR framework that integrates Scene Text Spotting (STS) to mitigate visual perceptual ambiguity in repetitive indoor environments. Specifically, TextInPlace adopts a dual-branch architecture within a local parameter sharing network. The VPR branch employs attention-based aggregation to extract global descriptors for coarse-grained retrieval, while the STS branch utilizes a bridging text spotter to detect and recognize scene text. Finally, the discriminative text is filtered to compute text similarity and re-rank the top-K retrieved images. To bridge the gap between current text-based repetitive indoor scene datasets and the typical scenarios encountered in robot navigation, we establish an indoor VPR benchmark dataset, called Maze-with-Text. Extensive experiments on both custom and public datasets demonstrate that TextInPlace achieves superior performance over existing methods that rely solely on appearance information. The dataset, code, and trained models are publicly available at https://github.com/HqiTao/TextInPlace.
new_dataset
0.973418
2503.06505
Xirui Hu
Xirui Hu, Jiahao Wang, Hao Chen, Weizhan Zhang, Benqi Wang, Yikun Li, Haishun Nan
DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability
17 pages, 16 figures
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advancements in text-to-image generation have spurred interest in personalized human image generation, which aims to create novel images featuring the specific human identities indicated by reference images. Although existing methods achieve high-fidelity identity preservation, they often struggle with limited multi-ID usability and inadequate facial editability. We present DynamicID, a tuning-free framework supported by a dual-stage training paradigm that inherently facilitates both single-ID and multi-ID personalized generation with high fidelity and flexible facial editability. Our key innovations include: 1) Semantic-Activated Attention (SAA), which employs query-level activation gating to minimize disruption to the original model when injecting ID features and achieve multi-ID personalization without requiring multi-ID samples during training. 2) Identity-Motion Reconfigurator (IMR), which leverages contrastive learning to effectively disentangle and re-entangle facial motion and identity features, thereby enabling flexible facial editing. Additionally, we have developed a curated VariFace-10k facial dataset, comprising 10k unique individuals, each represented by 35 distinct facial images. Experimental results demonstrate that DynamicID outperforms state-of-the-art methods in identity fidelity, facial editability, and multi-ID personalization capability.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 08:16:19 GMT" } ]
2025-03-11T00:00:00
[ [ "Hu", "Xirui", "" ], [ "Wang", "Jiahao", "" ], [ "Chen", "Hao", "" ], [ "Zhang", "Weizhan", "" ], [ "Wang", "Benqi", "" ], [ "Li", "Yikun", "" ], [ "Nan", "Haishun", "" ] ]
TITLE: DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability ABSTRACT: Recent advancements in text-to-image generation have spurred interest in personalized human image generation, which aims to create novel images featuring the specific human identities indicated by reference images. Although existing methods achieve high-fidelity identity preservation, they often struggle with limited multi-ID usability and inadequate facial editability. We present DynamicID, a tuning-free framework supported by a dual-stage training paradigm that inherently facilitates both single-ID and multi-ID personalized generation with high fidelity and flexible facial editability. Our key innovations include: 1) Semantic-Activated Attention (SAA), which employs query-level activation gating to minimize disruption to the original model when injecting ID features and achieve multi-ID personalization without requiring multi-ID samples during training. 2) Identity-Motion Reconfigurator (IMR), which leverages contrastive learning to effectively disentangle and re-entangle facial motion and identity features, thereby enabling flexible facial editing. Additionally, we have developed a curated VariFace-10k facial dataset, comprising 10k unique individuals, each represented by 35 distinct facial images. Experimental results demonstrate that DynamicID outperforms state-of-the-art methods in identity fidelity, facial editability, and multi-ID personalization capability.
new_dataset
0.950503
2503.06511
Bohan Lin
Yiting Zheng, Bohan Lin, Jinqian Chen, Jihua Zhu
HFedCKD: Toward Robust Heterogeneous Federated Learning via Data-free Knowledge Distillation and Two-way Contrast
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most current federated learning frameworks are modeled as static processes, ignoring the dynamic characteristics of the learning system. Under the central server's limited communication budget, supporting flexible model architectures for the large number of clients participating in knowledge transfer forces a lower participation rate; active clients contribute unevenly, and the client scale seriously hinders the performance of FL. We consider a more general and practical federation scenario and propose a system-heterogeneous federation method based on data-free knowledge distillation and two-way contrast (HFedCKD). We apply the Inverse Probability Weighted Distillation (IPWD) strategy to the data-free knowledge transfer framework. The generator completes the data features of the non-participating clients. IPWD implements a dynamic evaluation of the prediction contribution of each client under different data distributions. Based on the anti-biased weighting of its prediction loss, the weight distribution of each client is effectively adjusted to fairly integrate the knowledge of participating clients. At the same time, the local model is split into a feature extractor and a classifier. Through differential contrast learning, the feature extractor is aligned with the global model in the feature space, while the classifier maintains personalized decision-making capabilities. HFedCKD effectively alleviates the knowledge offset caused by a low participation rate under data-free knowledge distillation and improves the performance and stability of the model. We conduct extensive experiments on image and IoT datasets to comprehensively evaluate and verify the generalization and robustness of the proposed HFedCKD framework.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 08:32:57 GMT" } ]
2025-03-11T00:00:00
[ [ "Zheng", "Yiting", "" ], [ "Lin", "Bohan", "" ], [ "Chen", "Jinqian", "" ], [ "Zhu", "Jihua", "" ] ]
TITLE: HFedCKD: Toward Robust Heterogeneous Federated Learning via Data-free Knowledge Distillation and Two-way Contrast ABSTRACT: Most current federated learning frameworks are modeled as static processes, ignoring the dynamic characteristics of the learning system. Under the central server's limited communication budget, supporting the flexible model architectures of a large number of clients participating in knowledge transfer requires a lower participation rate; moreover, active clients make uneven contributions, and the client scale seriously hinders the performance of FL. We consider a more general and practical federation scenario and propose a system-heterogeneous federation method based on data-free knowledge distillation and two-way contrast (HFedCKD). We apply the Inverse Probability Weighted Distillation (IPWD) strategy to the data-free knowledge transfer framework. A generator completes the data features of the non-participating clients. IPWD dynamically evaluates the prediction contribution of each client under different data distributions. Based on the anti-biased weighting of each client's prediction loss, the weight distribution across clients is effectively adjusted to fairly integrate the knowledge of participating clients. At the same time, the local model is split into a feature extractor and a classifier. Through differential contrastive learning, the feature extractor is aligned with the global model in the feature space, while the classifier maintains personalized decision-making capabilities. HFedCKD effectively alleviates the knowledge offset caused by a low participation rate under data-free knowledge distillation and improves the performance and stability of the model. We conduct extensive experiments on image and IoT datasets to comprehensively evaluate and verify the generalization and robustness of the proposed HFedCKD framework.
no_new_dataset
0.947527
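To make the inverse-probability weighting described above concrete, here is a minimal numerical sketch in which each client's logits are weighted by the inverse of its participation probability before aggregation. The variable names, the clipping, and the logit-ensemble view are illustrative assumptions, not HFedCKD's exact IPWD formulation.

import numpy as np

def ipw_aggregate(client_logits, participation_probs):
    """client_logits: (n_clients, n_classes); participation_probs: (n_clients,)."""
    weights = 1.0 / np.clip(participation_probs, 1e-6, None)  # inverse-probability weights
    weights /= weights.sum()                                   # normalize into a distribution
    return (weights[:, None] * client_logits).sum(axis=0)      # weighted ensemble of logits

logits = np.random.randn(5, 10)                # 5 clients, 10 classes (hypothetical)
probs = np.array([0.9, 0.5, 0.2, 0.7, 0.4])    # per-client participation probabilities
print(ipw_aggregate(logits, probs).shape)      # (10,)

Rarely participating clients receive larger weights, which is the anti-bias effect the abstract describes.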
2503.06519
Wenhui Zhang
Wenhui Zhang, Huiyu Xu, Zhibo Wang, Zeqing He, Ziqi Zhu, Kui Ren
Can Small Language Models Reliably Resist Jailbreak Attacks? A Comprehensive Evaluation
19 pages, 12 figures
null
null
null
cs.CR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Small language models (SLMs) have emerged as promising alternatives to large language models (LLMs) due to their low computational demands, enhanced privacy guarantees and comparable performance in specific domains through light-weight fine-tuning. Deploying SLMs on edge devices, such as smartphones and smart vehicles, has become a growing trend. However, the security implications of SLMs have received less attention than LLMs, particularly regarding jailbreak attacks, which are recognized as one of the top threats to LLMs by the OWASP. In this paper, we conduct the first large-scale empirical study of SLMs' vulnerabilities to jailbreak attacks. Through systematic evaluation of 63 SLMs from 15 mainstream SLM families against 8 state-of-the-art jailbreak methods, we demonstrate that 47.6% of evaluated SLMs show high susceptibility to jailbreak attacks (ASR > 40%) and 38.1% of them cannot even resist direct harmful queries (ASR > 50%). We further analyze the reasons behind the vulnerabilities and identify four key factors: model size, model architecture, training datasets and training techniques. Moreover, we assess the effectiveness of three prompt-level defense methods and find that none of them achieve perfect performance, with detection accuracy varying across different SLMs and attack methods. Notably, we point out that inherent security awareness plays a critical role in SLM security, and models with strong security awareness can promptly terminate unsafe responses with only a slight reminder. Building upon these findings, we highlight the urgent need for security-by-design approaches in SLM development and provide valuable insights for building a more trustworthy SLM ecosystem.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 08:47:16 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhang", "Wenhui", "" ], [ "Xu", "Huiyu", "" ], [ "Wang", "Zhibo", "" ], [ "He", "Zeqing", "" ], [ "Zhu", "Ziqi", "" ], [ "Ren", "Kui", "" ] ]
TITLE: Can Small Language Models Reliably Resist Jailbreak Attacks? A Comprehensive Evaluation ABSTRACT: Small language models (SLMs) have emerged as promising alternatives to large language models (LLMs) due to their low computational demands, enhanced privacy guarantees and comparable performance in specific domains through light-weight fine-tuning. Deploying SLMs on edge devices, such as smartphones and smart vehicles, has become a growing trend. However, the security implications of SLMs have received less attention than LLMs, particularly regarding jailbreak attacks, which are recognized as one of the top threats to LLMs by the OWASP. In this paper, we conduct the first large-scale empirical study of SLMs' vulnerabilities to jailbreak attacks. Through systematic evaluation of 63 SLMs from 15 mainstream SLM families against 8 state-of-the-art jailbreak methods, we demonstrate that 47.6% of evaluated SLMs show high susceptibility to jailbreak attacks (ASR > 40%) and 38.1% of them cannot even resist direct harmful queries (ASR > 50%). We further analyze the reasons behind the vulnerabilities and identify four key factors: model size, model architecture, training datasets and training techniques. Moreover, we assess the effectiveness of three prompt-level defense methods and find that none of them achieve perfect performance, with detection accuracy varying across different SLMs and attack methods. Notably, we point out that inherent security awareness plays a critical role in SLM security, and models with strong security awareness can promptly terminate unsafe responses with only a slight reminder. Building upon these findings, we highlight the urgent need for security-by-design approaches in SLM development and provide valuable insights for building a more trustworthy SLM ecosystem.
no_new_dataset
0.94256
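The susceptibility thresholds quoted above (ASR > 40%, ASR > 50%) reduce to a simple per-model statistic. A minimal sketch, with hypothetical per-attempt outcomes:

def attack_success_rate(outcomes):
    """outcomes: booleans, True where a jailbreak attempt elicited harmful output."""
    return sum(outcomes) / len(outcomes)

results = {"slm-a": [True] * 45 + [False] * 55,   # hypothetical per-attempt outcomes
           "slm-b": [True] * 20 + [False] * 80}
highly_susceptible = [m for m, o in results.items() if attack_success_rate(o) > 0.40]
print(highly_susceptible)  # ['slm-a']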
2503.06531
Jie He
Jie He, Yu Fu
MetaXCR: Reinforcement-Based Meta-Transfer Learning for Cross-Lingual Commonsense Reasoning
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Commonsense reasoning (CR) has been studied in many domains and has achieved great progress with the aid of large datasets. Unfortunately, most existing CR datasets are built in English, so most previous work focuses on English. Furthermore, as the annotation of commonsense reasoning is costly, it is impossible to build a large dataset for every novel task. Therefore, there are growing appeals for Cross-lingual Low-Resource Commonsense Reasoning, which aims to leverage diverse existing English datasets to help the model adapt to new cross-lingual target datasets with limited labeled data. In this paper, we propose a multi-source adapter for cross-lingual low-resource Commonsense Reasoning (MetaXCR). In this framework, we first extend meta-learning by incorporating multiple training datasets to learn generalized task adapters across different tasks. Then, we further introduce a reinforcement-based sampling strategy to help the model sample the source task that is the most helpful to the target task. Finally, we introduce two types of cross-lingual meta-adaptation methods to enhance the performance of models on target languages. Extensive experiments demonstrate that MetaXCR is superior to state-of-the-art methods, while being trained with fewer parameters than other approaches.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 09:27:57 GMT" } ]
2025-03-11T00:00:00
[ [ "He", "Jie", "" ], [ "Fu", "Yu", "" ] ]
TITLE: MetaXCR: Reinforcement-Based Meta-Transfer Learning for Cross-Lingual Commonsense Reasoning ABSTRACT: Commonsense reasoning (CR) has been studied in many domains and has achieved great progress with the aid of large datasets. Unfortunately, most existing CR datasets are built in English, so most previous work focuses on English. Furthermore, as the annotation of commonsense reasoning is costly, it is impossible to build a large dataset for every novel task. Therefore, there are growing appeals for Cross-lingual Low-Resource Commonsense Reasoning, which aims to leverage diverse existing English datasets to help the model adapt to new cross-lingual target datasets with limited labeled data. In this paper, we propose a multi-source adapter for cross-lingual low-resource Commonsense Reasoning (MetaXCR). In this framework, we first extend meta-learning by incorporating multiple training datasets to learn generalized task adapters across different tasks. Then, we further introduce a reinforcement-based sampling strategy to help the model sample the source task that is the most helpful to the target task. Finally, we introduce two types of cross-lingual meta-adaptation methods to enhance the performance of models on target languages. Extensive experiments demonstrate that MetaXCR is superior to state-of-the-art methods, while being trained with fewer parameters than other approaches.
no_new_dataset
0.947817
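The reinforcement-based source-task sampling can be viewed as a bandit over source tasks. The sketch below uses an EXP3-style importance-weighted update as an illustrative analogue; MetaXCR's actual policy and reward signal are not specified at this level of detail, so everything here is an assumption.

import numpy as np

rng = np.random.default_rng(0)
n_tasks, lr = 4, 0.1
logits = np.zeros(n_tasks)                        # sampling preferences over source tasks

for step in range(200):
    p = np.exp(logits - logits.max()); p /= p.sum()
    task = rng.choice(n_tasks, p=p)               # sample a source task to adapt on
    reward = rng.random() * (task + 1) / n_tasks  # stand-in for measured target-task gain
    logits[task] += lr * (reward / p[task])       # importance-weighted preference update

print(np.round(np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum(), 3))

Over time, the sampler concentrates on tasks that yield larger target-task gains, which is the behavior the abstract ascribes to the strategy.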
2503.06534
Xingwei Tan
Xingwei Tan, Chen Lyu, Hafiz Muhammad Umer, Sahrish Khan, Mahathi Parvatham, Lois Arthurs, Simon Cullen, Shelley Wilson, Arshad Jhumka, Gabriele Pergola
SafeSpeech: A Comprehensive and Interactive Tool for Analysing Sexist and Abusive Language in Conversations
NAACL 2025 system demonstration camera-ready
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Detecting toxic language, including sexism, harassment, and abusive behaviour, remains a critical challenge, particularly in its subtle and context-dependent forms. Existing approaches largely focus on isolated message-level classification, overlooking toxicity that emerges across conversational contexts. To promote and enable future research in this direction, we introduce SafeSpeech, a comprehensive platform for toxic content detection and analysis that bridges message-level and conversation-level insights. The platform integrates fine-tuned classifiers and large language models (LLMs) to enable multi-granularity detection, toxic-aware conversation summarization, and persona profiling. SafeSpeech also incorporates explainability mechanisms, such as perplexity gain analysis, to highlight the linguistic elements driving predictions. Evaluations on benchmark datasets, including EDOS, OffensEval, and HatEval, demonstrate the reproduction of state-of-the-art performance across multiple tasks, including fine-grained sexism detection.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 09:31:17 GMT" } ]
2025-03-11T00:00:00
[ [ "Tan", "Xingwei", "" ], [ "Lyu", "Chen", "" ], [ "Umer", "Hafiz Muhammad", "" ], [ "Khan", "Sahrish", "" ], [ "Parvatham", "Mahathi", "" ], [ "Arthurs", "Lois", "" ], [ "Cullen", "Simon", "" ], [ "Wilson", "Shelley", "" ], [ "Jhumka", "Arshad", "" ], [ "Pergola", "Gabriele", "" ] ]
TITLE: SafeSpeech: A Comprehensive and Interactive Tool for Analysing Sexist and Abusive Language in Conversations ABSTRACT: Detecting toxic language, including sexism, harassment, and abusive behaviour, remains a critical challenge, particularly in its subtle and context-dependent forms. Existing approaches largely focus on isolated message-level classification, overlooking toxicity that emerges across conversational contexts. To promote and enable future research in this direction, we introduce SafeSpeech, a comprehensive platform for toxic content detection and analysis that bridges message-level and conversation-level insights. The platform integrates fine-tuned classifiers and large language models (LLMs) to enable multi-granularity detection, toxic-aware conversation summarization, and persona profiling. SafeSpeech also incorporates explainability mechanisms, such as perplexity gain analysis, to highlight the linguistic elements driving predictions. Evaluations on benchmark datasets, including EDOS, OffensEval, and HatEval, demonstrate the reproduction of state-of-the-art performance across multiple tasks, including fine-grained sexism detection.
no_new_dataset
0.94428
2503.06537
Xiaoyang Liu
Xiaoyang Liu, Yuquan Wang, Zheng Chen, Jiezhang Cao, He Zhang, Yulun Zhang and Xiaokang Yang
One-Step Diffusion Model for Image Motion-Deblurring
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Currently, methods for single-image deblurring based on CNNs and transformers have demonstrated promising performance. However, these methods often suffer from perceptual limitations and poor generalization ability, and they struggle with heavy or complex blur. While diffusion-based methods can partially address these shortcomings, their multi-step denoising process limits their practical usage. In this paper, we conduct an in-depth exploration of diffusion models in deblurring and propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step, significantly improving inference efficiency while maintaining high fidelity. To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration. Additionally, we construct a high-quality synthetic deblurring dataset to mitigate perceptual collapse and design a dynamic dual-adapter (DDA) to enhance perceptual quality while preserving fidelity. Extensive experiments demonstrate that our method achieves strong performance on both full-reference and no-reference metrics. Our code and pre-trained model will be publicly available at https://github.com/xyLiu339/OSDD.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 09:39:57 GMT" } ]
2025-03-11T00:00:00
[ [ "Liu", "Xiaoyang", "" ], [ "Wang", "Yuquan", "" ], [ "Chen", "Zheng", "" ], [ "Cao", "Jiezhang", "" ], [ "Zhang", "He", "" ], [ "Zhang", "Yulun", "" ], [ "Yang", "Xiaokang", "" ] ]
TITLE: One-Step Diffusion Model for Image Motion-Deblurring ABSTRACT: Currently, methods for single-image deblurring based on CNNs and transformers have demonstrated promising performance. However, these methods often suffer from perceptual limitations and poor generalization ability, and they struggle with heavy or complex blur. While diffusion-based methods can partially address these shortcomings, their multi-step denoising process limits their practical usage. In this paper, we conduct an in-depth exploration of diffusion models in deblurring and propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step, significantly improving inference efficiency while maintaining high fidelity. To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration. Additionally, we construct a high-quality synthetic deblurring dataset to mitigate perceptual collapse and design a dynamic dual-adapter (DDA) to enhance perceptual quality while preserving fidelity. Extensive experiments demonstrate that our method achieves strong performance on both full-reference and no-reference metrics. Our code and pre-trained model will be publicly available at https://github.com/xyLiu339/OSDD.
new_dataset
0.961207
2503.06542
Yukang Feng
Jianwen Sun, Yukang Feng, Chuanhao Li, Fanrui Zhang, Zizhen Li, Jiaxin Ai, Sizhuo Zhou, Yu Dai, Shenglin Zhang, Kaipeng Zhang
ARMOR v0.1: Empowering Autoregressive Multimodal Understanding Model with Interleaved Multimodal Generation via Asymmetric Synergy
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Unified models (UniMs) for multimodal understanding and generation have recently received much attention in the area of vision and language. Existing UniMs are designed to simultaneously learn both multimodal understanding and generation capabilities, demanding substantial computational resources, and often struggle to generate interleaved text-image content. We present ARMOR, a resource-efficient and purely autoregressive framework that achieves both understanding and generation by fine-tuning existing multimodal large language models (MLLMs). Specifically, ARMOR extends existing MLLMs from three perspectives: (1) For the model architecture, an asymmetric encoder-decoder architecture with a forward-switching mechanism is introduced to unify the embedding space integrating textual and visual modalities, enabling natural text-image interleaved generation with minimal computational overhead. (2) For the training data, a meticulously curated, high-quality interleaved dataset is collected for fine-tuning MLLMs. (3) For the training algorithm, we propose a "what or how to generate" algorithm to empower existing MLLMs with multimodal generation capabilities while preserving their multimodal understanding capabilities, through three progressive training stages based on the collected dataset. Experimental results demonstrate that ARMOR upgrades existing MLLMs to UniMs with promising image generation capabilities, using limited training resources. Our code will be released soon at https://armor.github.io.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 10:15:39 GMT" } ]
2025-03-11T00:00:00
[ [ "Sun", "Jianwen", "" ], [ "Feng", "Yukang", "" ], [ "Li", "Chuanhao", "" ], [ "Zhang", "Fanrui", "" ], [ "Li", "Zizhen", "" ], [ "Ai", "Jiaxin", "" ], [ "Zhou", "Sizhuo", "" ], [ "Dai", "Yu", "" ], [ "Zhang", "Shenglin", "" ], [ "Zhang", "Kaipeng", "" ] ]
TITLE: ARMOR v0.1: Empowering Autoregressive Multimodal Understanding Model with Interleaved Multimodal Generation via Asymmetric Synergy ABSTRACT: Unified models (UniMs) for multimodal understanding and generation have recently received much attention in the area of vision and language. Existing UniMs are designed to simultaneously learn both multimodal understanding and generation capabilities, demanding substantial computational resources, and often struggle to generate interleaved text-image content. We present ARMOR, a resource-efficient and purely autoregressive framework that achieves both understanding and generation by fine-tuning existing multimodal large language models (MLLMs). Specifically, ARMOR extends existing MLLMs from three perspectives: (1) For the model architecture, an asymmetric encoder-decoder architecture with a forward-switching mechanism is introduced to unify the embedding space integrating textual and visual modalities, enabling natural text-image interleaved generation with minimal computational overhead. (2) For the training data, a meticulously curated, high-quality interleaved dataset is collected for fine-tuning MLLMs. (3) For the training algorithm, we propose a "what or how to generate" algorithm to empower existing MLLMs with multimodal generation capabilities while preserving their multimodal understanding capabilities, through three progressive training stages based on the collected dataset. Experimental results demonstrate that ARMOR upgrades existing MLLMs to UniMs with promising image generation capabilities, using limited training resources. Our code will be released soon at https://armor.github.io.
no_new_dataset
0.795817
2503.06550
Fan Yin
Fan Yin and Philippe Laban and Xiangyu Peng and Yilun Zhou and Yixin Mao and Vaibhav Vats and Linnea Ross and Divyansh Agarwal and Caiming Xiong and Chien-Sheng Wu
BingoGuard: LLM Content Moderation Tools with Risk Levels
10 pages, 4 figures, 4 tables. ICLR 2025 poster
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Malicious content generated by large language models (LLMs) can pose varying degrees of harm. Although existing LLM-based moderators can detect harmful content, they struggle to assess risk levels and may miss lower-risk outputs. Accurate risk assessment allows platforms with different safety thresholds to tailor content filtering and rejection. In this paper, we introduce per-topic severity rubrics for 11 harmful topics and build BingoGuard, an LLM-based moderation system designed to predict both binary safety labels and severity levels. To address the lack of annotations on levels of severity, we propose a scalable generate-then-filter framework that first generates responses across different severity levels and then filters out low-quality responses. Using this framework, we create BingoGuardTrain, a training dataset with 54,897 examples covering a variety of topics, response severities, and styles, and BingoGuardTest, a test set with 988 examples explicitly labeled based on our severity rubrics that enables fine-grained analysis of model behaviors at different severity levels. Our BingoGuard-8B, trained on BingoGuardTrain, achieves state-of-the-art performance on several moderation benchmarks, including WildGuardTest and HarmBench, as well as BingoGuardTest, outperforming the best public model, WildGuard, by 4.3\%. Our analysis demonstrates that incorporating severity levels into training significantly enhances detection performance and enables the model to effectively gauge the severity of harmful responses.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 10:43:09 GMT" } ]
2025-03-11T00:00:00
[ [ "Yin", "Fan", "" ], [ "Laban", "Philippe", "" ], [ "Peng", "Xiangyu", "" ], [ "Zhou", "Yilun", "" ], [ "Mao", "Yixin", "" ], [ "Vats", "Vaibhav", "" ], [ "Ross", "Linnea", "" ], [ "Agarwal", "Divyansh", "" ], [ "Xiong", "Caiming", "" ], [ "Wu", "Chien-Sheng", "" ] ]
TITLE: BingoGuard: LLM Content Moderation Tools with Risk Levels ABSTRACT: Malicious content generated by large language models (LLMs) can pose varying degrees of harm. Although existing LLM-based moderators can detect harmful content, they struggle to assess risk levels and may miss lower-risk outputs. Accurate risk assessment allows platforms with different safety thresholds to tailor content filtering and rejection. In this paper, we introduce per-topic severity rubrics for 11 harmful topics and build BingoGuard, an LLM-based moderation system designed to predict both binary safety labels and severity levels. To address the lack of annotations on levels of severity, we propose a scalable generate-then-filter framework that first generates responses across different severity levels and then filters out low-quality responses. Using this framework, we create BingoGuardTrain, a training dataset with 54,897 examples covering a variety of topics, response severities, and styles, and BingoGuardTest, a test set with 988 examples explicitly labeled based on our severity rubrics that enables fine-grained analysis of model behaviors at different severity levels. Our BingoGuard-8B, trained on BingoGuardTrain, achieves state-of-the-art performance on several moderation benchmarks, including WildGuardTest and HarmBench, as well as BingoGuardTest, outperforming the best public model, WildGuard, by 4.3\%. Our analysis demonstrates that incorporating severity levels into training significantly enhances detection performance and enables the model to effectively gauge the severity of harmful responses.
new_dataset
0.945901
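A skeleton of the generate-then-filter loop described above, where generate and quality_score are hypothetical stand-ins for the paper's generator and quality filter:

def generate_then_filter(prompt, severity_levels, generate, quality_score, threshold=0.7):
    """generate(prompt, level, n) -> list[str]; quality_score(text) -> float in [0, 1]."""
    kept = []
    for level in severity_levels:                          # cover every rubric severity level
        for candidate in generate(prompt, level, n=4):
            if quality_score(candidate) >= threshold:      # discard low-quality generations
                kept.append({"severity": level, "response": candidate})
    return kept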
2503.06553
Jiaxin Ai
Jiaxin Ai, Pengfei Zhou, Zhaopan Xu, Ming Li, Fanrui Zhang, Zizhen Li, Jianwen Sun, Yukang Feng, Baojin Huang, Zhongyuan Wang, Kaipeng Zhang
ProJudge: A Multi-Modal Multi-Discipline Benchmark and Instruction-Tuning Dataset for MLLM-based Process Judges
null
null
null
null
cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As multi-modal large language models (MLLMs) frequently exhibit errors when solving scientific problems, evaluating the validity of their reasoning processes is critical for ensuring reliability and uncovering fine-grained model weaknesses. Since human evaluation is laborious and costly, prompting MLLMs as automated process judges has become a common practice. However, the reliability of these model-based judges remains uncertain. To address this, we introduce ProJudgeBench, the first comprehensive benchmark specifically designed for evaluating abilities of MLLM-based process judges. ProJudgeBench comprises 2,400 test cases and 50,118 step-level labels, spanning four scientific disciplines with diverse difficulty levels and multi-modal content. In ProJudgeBench, each step is meticulously annotated by human experts for correctness, error type, and explanation, enabling a systematic evaluation of judges' capabilities to detect, classify and diagnose errors. Evaluation on ProJudgeBench reveals a significant performance gap between open-source and proprietary models. To bridge this gap, we further propose ProJudge-173k, a large-scale instruction-tuning dataset, and a Dynamic Dual-Phase fine-tuning strategy that encourages models to explicitly reason through problem-solving before assessing solutions. Both contributions significantly enhance the process evaluation capabilities of open-source models. All the resources will be released to foster future research of reliable multi-modal process evaluation.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 10:55:51 GMT" } ]
2025-03-11T00:00:00
[ [ "Ai", "Jiaxin", "" ], [ "Zhou", "Pengfei", "" ], [ "Xu", "Zhaopan", "" ], [ "Li", "Ming", "" ], [ "Zhang", "Fanrui", "" ], [ "Li", "Zizhen", "" ], [ "Sun", "Jianwen", "" ], [ "Feng", "Yukang", "" ], [ "Huang", "Baojin", "" ], [ "Wang", "Zhongyuan", "" ], [ "Zhang", "Kaipeng", "" ] ]
TITLE: ProJudge: A Multi-Modal Multi-Discipline Benchmark and Instruction-Tuning Dataset for MLLM-based Process Judges ABSTRACT: As multi-modal large language models (MLLMs) frequently exhibit errors when solving scientific problems, evaluating the validity of their reasoning processes is critical for ensuring reliability and uncovering fine-grained model weaknesses. Since human evaluation is laborious and costly, prompting MLLMs as automated process judges has become a common practice. However, the reliability of these model-based judges remains uncertain. To address this, we introduce ProJudgeBench, the first comprehensive benchmark specifically designed for evaluating abilities of MLLM-based process judges. ProJudgeBench comprises 2,400 test cases and 50,118 step-level labels, spanning four scientific disciplines with diverse difficulty levels and multi-modal content. In ProJudgeBench, each step is meticulously annotated by human experts for correctness, error type, and explanation, enabling a systematic evaluation of judges' capabilities to detect, classify and diagnose errors. Evaluation on ProJudgeBench reveals a significant performance gap between open-source and proprietary models. To bridge this gap, we further propose ProJudge-173k, a large-scale instruction-tuning dataset, and a Dynamic Dual-Phase fine-tuning strategy that encourages models to explicitly reason through problem-solving before assessing solutions. Both contributions significantly enhance the process evaluation capabilities of open-source models. All the resources will be released to foster future research of reliable multi-modal process evaluation.
new_dataset
0.959573
2503.06554
Chengcheng Zhu
Chengcheng Zhu, Jiale Zhang, Di Wu, Guodong Long
BDPFL: Backdoor Defense for Personalized Federated Learning via Explainable Distillation
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning is a distributed learning paradigm that facilitates the collaborative training of a global model across multiple clients while preserving the privacy of local datasets. To address inherent challenges related to data heterogeneity and satisfy personalized needs, a new direction within FL, known as personalized Federated Learning (pFL), has gradually evolved. Extensive attention has been directed toward developing novel frameworks and methods to enhance the performance of pFL. Regrettably, the aspect of security in pFL has been largely overlooked. Our objective is to fill this gap. Similar to FL, pFL is susceptible to backdoor attacks. However, existing backdoor defense strategies are primarily tailored to general FL frameworks, and pFL lacks robustness against backdoor attacks. We propose a novel, backdoor-robust pFL framework named BDPFL to address these challenges. First, BDPFL introduces layer-wise mutual distillation that enables clients to learn their personalized local models while mitigating potential backdoors. Then, BDPFL employs explanation heatmaps to learn high-quality intermediate representations and enhance the effect of eliminating deeper and more entrenched backdoors. Moreover, we perform empirical evaluations of BDPFL's performance on three datasets and compare BDPFL with four backdoor defense methods. The experiments demonstrate that BDPFL outperforms baseline methods and is effective under various settings.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 10:59:18 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhu", "Chengcheng", "" ], [ "Zhang", "Jiale", "" ], [ "Wu", "Di", "" ], [ "Long", "Guodong", "" ] ]
TITLE: BDPFL: Backdoor Defense for Personalized Federated Learning via Explainable Distillation ABSTRACT: Federated learning is a distributed learning paradigm that facilitates the collaborative training of a global model across multiple clients while preserving the privacy of local datasets. To address inherent challenges related to data heterogeneity and satisfy personalized needs, a new direction within FL, known as personalized Federated Learning (pFL), has gradually evolved. Extensive attention has been directed toward developing novel frameworks and methods to enhance the performance of pFL. Regrettably, the aspect of security in pFL has been largely overlooked. Our objective is to fill this gap. Similar to FL, pFL is susceptible to backdoor attacks. However, existing backdoor defense strategies are primarily tailored to general FL frameworks, and pFL lacks robustness against backdoor attacks. We propose a novel, backdoor-robust pFL framework named BDPFL to address these challenges. First, BDPFL introduces layer-wise mutual distillation that enables clients to learn their personalized local models while mitigating potential backdoors. Then, BDPFL employs explanation heatmaps to learn high-quality intermediate representations and enhance the effect of eliminating deeper and more entrenched backdoors. Moreover, we perform empirical evaluations of BDPFL's performance on three datasets and compare BDPFL with four backdoor defense methods. The experiments demonstrate that BDPFL outperforms baseline methods and is effective under various settings.
no_new_dataset
0.938237
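A hedged sketch of one direction of the layer-wise distillation described above, matching local (student) and global (teacher) features layer by layer with a temperature-scaled KL term; the feature hooks, temperature, and averaging are assumptions, and a mutual variant would apply the same term in both directions.

import torch
import torch.nn.functional as F

def layerwise_distill_loss(local_feats, global_feats, tau=2.0):
    """Both args: lists of matched per-layer feature tensors of shape (B, ...)."""
    loss = 0.0
    for f_l, f_g in zip(local_feats, global_feats):
        log_p = F.log_softmax(f_l.flatten(1) / tau, dim=1)   # local (student) distribution
        q = F.softmax(f_g.flatten(1) / tau, dim=1)           # global (teacher) distribution
        loss = loss + F.kl_div(log_p, q, reduction="batchmean") * tau * tau
    return loss / len(local_feats)

feats = [torch.randn(8, 64), torch.randn(8, 128)]
print(layerwise_distill_loss(feats, [f + 0.1 for f in feats]).item())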
2503.06563
Jiangdong Cai
Jiangdong Cai, Haotian Jiang, Zhenrong Shen, Yonghao Li, Honglin Xiong, Lichi Zhang, Qian Wang
LSA: Latent Style Augmentation Towards Stain-Agnostic Cervical Cancer Screening
null
null
null
null
eess.IV cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
The deployment of computer-aided diagnosis systems for cervical cancer screening using whole slide images (WSIs) faces critical challenges due to domain shifts caused by staining variations across different scanners and imaging environments. While existing stain augmentation methods improve patch-level robustness, they fail to scale to WSIs due to two key limitations: (1) inconsistent stain patterns when extending patch operations to gigapixel slides, and (2) prohibitive computational/storage costs from offline processing of augmented WSIs. To address this, we propose Latent Style Augmentation (LSA), a framework that performs efficient, online stain augmentation directly on WSI-level latent features. We first introduce WSAug, a WSI-level stain augmentation method ensuring consistent stain across patches within a WSI. Using offline-augmented WSIs by WSAug, we design and train Stain Transformer, which can simulate targeted style in the latent space, efficiently enhancing the robustness of the WSI-level classifier. We validate our method on a multi-scanner WSI dataset for cervical cancer diagnosis. Despite being trained on data from a single scanner, our approach achieves significant performance improvements on out-of-distribution data from other scanners. Code will be available at https://github.com/caijd2000/LSA.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 11:33:59 GMT" } ]
2025-03-11T00:00:00
[ [ "Cai", "Jiangdong", "" ], [ "Jiang", "Haotian", "" ], [ "Shen", "Zhenrong", "" ], [ "Li", "Yonghao", "" ], [ "Xiong", "Honglin", "" ], [ "Zhang", "Lichi", "" ], [ "Wang", "Qian", "" ] ]
TITLE: LSA: Latent Style Augmentation Towards Stain-Agnostic Cervical Cancer Screening ABSTRACT: The deployment of computer-aided diagnosis systems for cervical cancer screening using whole slide images (WSIs) faces critical challenges due to domain shifts caused by staining variations across different scanners and imaging environments. While existing stain augmentation methods improve patch-level robustness, they fail to scale to WSIs due to two key limitations: (1) inconsistent stain patterns when extending patch operations to gigapixel slides, and (2) prohibitive computational/storage costs from offline processing of augmented WSIs. To address this, we propose Latent Style Augmentation (LSA), a framework that performs efficient, online stain augmentation directly on WSI-level latent features. We first introduce WSAug, a WSI-level stain augmentation method ensuring consistent stain across patches within a WSI. Using offline-augmented WSIs by WSAug, we design and train Stain Transformer, which can simulate targeted style in the latent space, efficiently enhancing the robustness of the WSI-level classifier. We validate our method on a multi-scanner WSI dataset for cervical cancer diagnosis. Despite being trained on data from a single scanner, our approach achieves significant performance improvements on out-of-distribution data from other scanners. Code will be available at https://github.com/caijd2000/LSA.
no_new_dataset
0.949295
2503.06565
Shi-Jie Li
Shijie Li, Xun Xu, Si Yong Yeo, Xulei Yang
Future-Aware Interaction Network For Motion Forecasting
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Motion forecasting is a crucial component of autonomous driving systems, enabling the generation of accurate and smooth future trajectories to ensure safe navigation to the destination. In previous methods, potential future trajectories are often absent in the scene encoding stage, which may lead to suboptimal outcomes. Additionally, prior approaches typically employ transformer architectures for spatiotemporal modeling of trajectories and map information, which suffer from the quadratic scaling complexity of the transformer architecture. In this work, we propose an interaction-based method, named Future-Aware Interaction Network, that introduces potential future trajectories into scene encoding for a comprehensive traffic representation. Furthermore, a State Space Model (SSM), specifically Mamba, is introduced for both spatial and temporal modeling. To adapt Mamba for spatial interaction modeling, we propose an adaptive reordering strategy that transforms unordered data into a structured sequence. Additionally, Mamba is employed to refine generated future trajectories temporally, ensuring more consistent predictions. These enhancements not only improve model efficiency but also enhance the accuracy and diversity of predictions. We conduct comprehensive experiments on the widely used Argoverse 1 and Argoverse 2 datasets, demonstrating that the proposed method achieves superior performance compared to previous approaches in a more efficient way. The code will be released upon acceptance.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 11:38:34 GMT" } ]
2025-03-11T00:00:00
[ [ "Li", "Shijie", "" ], [ "Xu", "Xun", "" ], [ "Yeo", "Si Yong", "" ], [ "Yang", "Xulei", "" ] ]
TITLE: Future-Aware Interaction Network For Motion Forecasting ABSTRACT: Motion forecasting is a crucial component of autonomous driving systems, enabling the generation of accurate and smooth future trajectories to ensure safe navigation to the destination. In previous methods, potential future trajectories are often absent in the scene encoding stage, which may lead to suboptimal outcomes. Additionally, prior approaches typically employ transformer architectures for spatiotemporal modeling of trajectories and map information, which suffer from the quadratic scaling complexity of the transformer architecture. In this work, we propose an interaction-based method, named Future-Aware Interaction Network, that introduces potential future trajectories into scene encoding for a comprehensive traffic representation. Furthermore, a State Space Model (SSM), specifically Mamba, is introduced for both spatial and temporal modeling. To adapt Mamba for spatial interaction modeling, we propose an adaptive reordering strategy that transforms unordered data into a structured sequence. Additionally, Mamba is employed to refine generated future trajectories temporally, ensuring more consistent predictions. These enhancements not only improve model efficiency but also enhance the accuracy and diversity of predictions. We conduct comprehensive experiments on the widely used Argoverse 1 and Argoverse 2 datasets, demonstrating that the proposed method achieves superior performance compared to previous approaches in a more efficient way. The code will be released upon acceptance.
no_new_dataset
0.950411
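The adaptive reordering idea (turning unordered agent tokens into a structured sequence for an SSM such as Mamba) can be illustrated with a scalar sort key. The distance-to-ego key below is an assumption for illustration, not necessarily the paper's criterion.

import torch

def reorder_for_ssm(agent_feats, agent_pos, ego_pos):
    """agent_feats: (N, D); agent_pos: (N, 2); ego_pos: (2,). Returns ordered feats."""
    dist = torch.linalg.norm(agent_pos - ego_pos, dim=1)  # scalar ordering key per agent
    order = torch.argsort(dist)                           # near-to-far sequence for the SSM
    return agent_feats[order], order

feats, pos = torch.randn(6, 32), torch.randn(6, 2)
ordered, idx = reorder_for_ssm(feats, pos, torch.zeros(2))
print(ordered.shape, idx.tolist())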
2503.06567
Yao Cheng
Yao Cheng, Yibo Zhao, Jiapeng Zhu, Yao Liu, Xing Sun, Xiang Li
Human Cognition Inspired RAG with Knowledge Graph for Complex Problem Solving
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have demonstrated transformative potential across various domains, yet they face significant challenges in knowledge integration and complex problem reasoning, often leading to hallucinations and unreliable outputs. Retrieval-Augmented Generation (RAG) has emerged as a promising solution to enhance LLM accuracy by incorporating external knowledge. However, traditional RAG systems struggle with processing complex relational information and multi-step reasoning, limiting their effectiveness in advanced problem-solving tasks. To address these limitations, we propose CogGRAG, a cognition-inspired, graph-based RAG framework designed to improve LLM performance in Knowledge Graph Question Answering (KGQA). Inspired by the human cognitive process of decomposing complex problems and performing self-verification, our framework introduces a three-stage methodology: decomposition, retrieval, and reasoning with self-verification. By integrating these components, CogGRAG enhances the accuracy of LLMs in complex problem solving. We conduct systematic experiments with three LLM backbones on four benchmark datasets, where CogGRAG outperforms the baselines.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 11:50:39 GMT" } ]
2025-03-11T00:00:00
[ [ "Cheng", "Yao", "" ], [ "Zhao", "Yibo", "" ], [ "Zhu", "Jiapeng", "" ], [ "Liu", "Yao", "" ], [ "Sun", "Xing", "" ], [ "Li", "Xiang", "" ] ]
TITLE: Human Cognition Inspired RAG with Knowledge Graph for Complex Problem Solving ABSTRACT: Large language models (LLMs) have demonstrated transformative potential across various domains, yet they face significant challenges in knowledge integration and complex problem reasoning, often leading to hallucinations and unreliable outputs. Retrieval-Augmented Generation (RAG) has emerged as a promising solution to enhance LLM accuracy by incorporating external knowledge. However, traditional RAG systems struggle with processing complex relational information and multi-step reasoning, limiting their effectiveness in advanced problem-solving tasks. To address these limitations, we propose CogGRAG, a cognition-inspired, graph-based RAG framework designed to improve LLM performance in Knowledge Graph Question Answering (KGQA). Inspired by the human cognitive process of decomposing complex problems and performing self-verification, our framework introduces a three-stage methodology: decomposition, retrieval, and reasoning with self-verification. By integrating these components, CogGRAG enhances the accuracy of LLMs in complex problem solving. We conduct systematic experiments with three LLM backbones on four benchmark datasets, where CogGRAG outperforms the baselines.
no_new_dataset
0.941115
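A skeleton of the three-stage decompose / retrieve / reason-with-self-verification loop outlined above; llm and kg_retrieve are assumed interfaces, and the prompts and stopping rule are illustrative only.

def cograg_answer(question, llm, kg_retrieve, max_retries=2):
    """llm(text) -> str and kg_retrieve(query) -> list[str] are assumed interfaces."""
    subqs = [q for q in llm(f"Decompose into sub-questions: {question}").splitlines() if q]
    evidence = {q: kg_retrieve(q) for q in subqs}                 # stage 2: graph retrieval
    answer = ""
    for _ in range(max_retries + 1):
        answer = llm(f"Using evidence {evidence}, answer: {question}")  # stage 3: reasoning
        verdict = llm(f"Is this answer consistent with {evidence}? {answer}")
        if "yes" in verdict.lower():                              # self-verification gate
            break
    return answer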
2503.06569
Shi-Jie Li
Shijie Li, Zhongyao Cheng, Rong Li, Shuai Li, Juergen Gall, Xun Xu, Xulei Yang
Global-Aware Monocular Semantic Scene Completion with State Space Models
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Monocular Semantic Scene Completion (MonoSSC) reconstructs and interprets 3D environments from a single image, enabling diverse real-world applications. However, existing methods are often constrained by the local receptive field of Convolutional Neural Networks (CNNs), making it challenging to handle the non-uniform distribution of projected points (Fig. \ref{fig:perspective}) and effectively reconstruct missing information caused by the 3D-to-2D projection. In this work, we introduce GA-MonoSSC, a hybrid architecture for MonoSSC that effectively captures global context in both the 2D image domain and 3D space. Specifically, we propose a Dual-Head Multi-Modality Encoder, which leverages a Transformer architecture to capture spatial relationships across all features in the 2D image domain, enabling more comprehensive 2D feature extraction. Additionally, we introduce the Frustum Mamba Decoder, built on the State Space Model (SSM), to efficiently capture long-range dependencies in 3D space. Furthermore, we propose a frustum reordering strategy within the Frustum Mamba Decoder to mitigate feature discontinuities in the reordered voxel sequence, ensuring better alignment with the scan mechanism of the State Space Model (SSM) for improved 3D representation learning. We conduct extensive experiments on the widely used Occ-ScanNet and NYUv2 datasets, demonstrating that our proposed method achieves state-of-the-art performance, validating its effectiveness. The code will be released upon acceptance.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 11:55:40 GMT" } ]
2025-03-11T00:00:00
[ [ "Li", "Shijie", "" ], [ "Cheng", "Zhongyao", "" ], [ "Li", "Rong", "" ], [ "Li", "Shuai", "" ], [ "Gall", "Juergen", "" ], [ "Xu", "Xun", "" ], [ "Yang", "Xulei", "" ] ]
TITLE: Global-Aware Monocular Semantic Scene Completion with State Space Models ABSTRACT: Monocular Semantic Scene Completion (MonoSSC) reconstructs and interprets 3D environments from a single image, enabling diverse real-world applications. However, existing methods are often constrained by the local receptive field of Convolutional Neural Networks (CNNs), making it challenging to handle the non-uniform distribution of projected points (Fig. \ref{fig:perspective}) and effectively reconstruct missing information caused by the 3D-to-2D projection. In this work, we introduce GA-MonoSSC, a hybrid architecture for MonoSSC that effectively captures global context in both the 2D image domain and 3D space. Specifically, we propose a Dual-Head Multi-Modality Encoder, which leverages a Transformer architecture to capture spatial relationships across all features in the 2D image domain, enabling more comprehensive 2D feature extraction. Additionally, we introduce the Frustum Mamba Decoder, built on the State Space Model (SSM), to efficiently capture long-range dependencies in 3D space. Furthermore, we propose a frustum reordering strategy within the Frustum Mamba Decoder to mitigate feature discontinuities in the reordered voxel sequence, ensuring better alignment with the scan mechanism of the State Space Model (SSM) for improved 3D representation learning. We conduct extensive experiments on the widely used Occ-ScanNet and NYUv2 datasets, demonstrating that our proposed method achieves state-of-the-art performance, validating its effectiveness. The code will be released upon acceptance.
no_new_dataset
0.952442
2503.06573
Gili Lior
Gili Lior, Asaf Yehudai, Ariel Gera, Liat Ein-Dor
WildIFEval: Instruction Following in the Wild
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent LLMs have shown remarkable success in following user instructions, yet handling instructions with multiple constraints remains a significant challenge. In this work, we introduce WildIFEval - a large-scale dataset of 12K real user instructions with diverse, multi-constraint conditions. Unlike prior datasets, our collection spans a broad lexical and topical spectrum of constraints in natural user prompts. We categorize these constraints into eight high-level classes to capture their distribution and dynamics in real-world scenarios. Leveraging WildIFEval, we conduct extensive experiments to benchmark the instruction-following capabilities of leading LLMs. Our findings reveal that all evaluated models experience performance degradation with an increasing number of constraints. Thus, we show that all models have substantial room for improvement on such tasks. Moreover, we observe that the specific type of constraint plays a critical role in model performance. We release our dataset to promote further research on instruction-following under complex, realistic conditions.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 12:06:29 GMT" } ]
2025-03-11T00:00:00
[ [ "Lior", "Gili", "" ], [ "Yehudai", "Asaf", "" ], [ "Gera", "Ariel", "" ], [ "Ein-Dor", "Liat", "" ] ]
TITLE: WildIFEval: Instruction Following in the Wild ABSTRACT: Recent LLMs have shown remarkable success in following user instructions, yet handling instructions with multiple constraints remains a significant challenge. In this work, we introduce WildIFEval - a large-scale dataset of 12K real user instructions with diverse, multi-constraint conditions. Unlike prior datasets, our collection spans a broad lexical and topical spectrum of constraints in natural user prompts. We categorize these constraints into eight high-level classes to capture their distribution and dynamics in real-world scenarios. Leveraging WildIFEval, we conduct extensive experiments to benchmark the instruction-following capabilities of leading LLMs. Our findings reveal that all evaluated models experience performance degradation with an increasing number of constraints. Thus, we show that all models have substantial room for improvement on such tasks. Moreover, we observe that the specific type of constraint plays a critical role in model performance. We release our dataset to promote further research on instruction-following under complex, realistic conditions.
new_dataset
0.958109
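The reported degradation with constraint count can be tabulated with a simple bucketing pass over a WildIFEval-style dataset; the record fields and the judge callable are assumptions for illustration.

from collections import defaultdict

def degradation_by_constraint_count(records, judge):
    """records: dicts with 'constraints' (list[str]) and 'response' (str);
    judge(response, constraints) -> fraction of constraints satisfied, in [0, 1]."""
    buckets = defaultdict(list)
    for r in records:
        buckets[len(r["constraints"])].append(judge(r["response"], r["constraints"]))
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}  # mean score per count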
2503.06580
Yuxiang Zhang
Yuxiang Zhang, Yuqi Yang, Jiangming Shu, Xinyan Wen, Jitao Sang
Agent models: Internalizing Chain-of-Action Generation into Reasoning models
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Traditional agentic workflows rely on external prompts to manage interactions with tools and the environment, which limits the autonomy of reasoning models. We introduce \emph{Large Agent Models (LAMs)} that internalize the generation of \emph{Chain-of-Action (CoA)} sequences, enabling the model to autonomously decide when and how to use external tools. Our proposed AutoCoA framework combines supervised fine-tuning (SFT) and reinforcement learning (RL), allowing the model to seamlessly switch between reasoning and action while efficiently managing environment interactions. Main components include step-level action triggering, trajectory-level CoA optimization, and an internal world model to reduce real-environment interaction costs. Evaluations on open-domain QA tasks demonstrate that AutoCoA-trained agent models significantly outperform ReAct-based workflows in task completion, especially in tasks that require long-term reasoning and multi-step actions. Code and dataset are available at https://github.com/ADaM-BJTU/AutoCoA
[ { "version": "v1", "created": "Sun, 9 Mar 2025 12:19:47 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhang", "Yuxiang", "" ], [ "Yang", "Yuqi", "" ], [ "Shu", "Jiangming", "" ], [ "Wen", "Xinyan", "" ], [ "Sang", "Jitao", "" ] ]
TITLE: Agent models: Internalizing Chain-of-Action Generation into Reasoning models ABSTRACT: Traditional agentic workflows rely on external prompts to manage interactions with tools and the environment, which limits the autonomy of reasoning models. We introduce \emph{Large Agent Models (LAMs)} that internalize the generation of \emph{Chain-of-Action (CoA)} sequences, enabling the model to autonomously decide when and how to use external tools. Our proposed AutoCoA framework combines supervised fine-tuning (SFT) and reinforcement learning (RL), allowing the model to seamlessly switch between reasoning and action while efficiently managing environment interactions. Main components include step-level action triggering, trajectory-level CoA optimization, and an internal world model to reduce real-environment interaction costs. Evaluations on open-domain QA tasks demonstrate that AutoCoA-trained agent models significantly outperform ReAct-based workflows in task completion, especially in tasks that require long-term reasoning and multi-step actions. Code and dataset are available at https://github.com/ADaM-BJTU/AutoCoA
no_new_dataset
0.945399
2503.06587
Yang Zhou
Xiaoming Peng, Yixin Yang, Yang Zhou, Hui Huang
Introducing Unbiased Depth into 2D Gaussian Splatting for High-accuracy Surface Reconstruction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, 2D Gaussian Splatting (2DGS) has demonstrated superior geometry reconstruction quality compared to the popular 3DGS by using 2D surfels to approximate thin surfaces. However, it falls short when dealing with glossy surfaces, resulting in visible holes in these areas. We found that reflection discontinuity causes the issue. To fit the jump from diffuse to specular reflection at different viewing angles, depth bias is introduced in the optimized Gaussian primitives. To address that, we first replace the depth distortion loss in 2DGS with a novel depth convergence loss, which imposes a strong constraint on depth continuity. Then, we rectify the depth criterion used to determine the actual surface, so that it fully accounts for all the intersecting Gaussians along the ray. Qualitative and quantitative evaluations across various datasets reveal that our method significantly improves reconstruction quality, with more complete and accurate surfaces than 2DGS.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 12:38:01 GMT" } ]
2025-03-11T00:00:00
[ [ "Peng", "Xiaoming", "" ], [ "Yang", "Yixin", "" ], [ "Zhou", "Yang", "" ], [ "Huang", "Hui", "" ] ]
TITLE: Introducing Unbiased Depth into 2D Gaussian Splatting for High-accuracy Surface Reconstruction ABSTRACT: Recently, 2D Gaussian Splatting (2DGS) has demonstrated superior geometry reconstruction quality compared to the popular 3DGS by using 2D surfels to approximate thin surfaces. However, it falls short when dealing with glossy surfaces, resulting in visible holes in these areas. We found that reflection discontinuity causes the issue. To fit the jump from diffuse to specular reflection at different viewing angles, depth bias is introduced in the optimized Gaussian primitives. To address that, we first replace the depth distortion loss in 2DGS with a novel depth convergence loss, which imposes a strong constraint on depth continuity. Then, we rectify the depth criterion used to determine the actual surface, so that it fully accounts for all the intersecting Gaussians along the ray. Qualitative and quantitative evaluations across various datasets reveal that our method significantly improves reconstruction quality, with more complete and accurate surfaces than 2DGS.
no_new_dataset
0.947186
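The rectified depth criterion "fully accounts for all the intersecting Gaussians along the ray"; a standard way to do that numerically is front-to-back alpha compositing of per-primitive depths. The sketch below shows only that compositing, not the paper's exact criterion or its depth convergence loss.

import numpy as np

def composite_depth(depths, alphas):
    """depths, alphas: 1-D arrays for one ray, sorted front-to-back."""
    T = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))  # transmittance before each hit
    w = T * alphas                                              # per-Gaussian blending weight
    return float((w * depths).sum() / max(w.sum(), 1e-8))       # normalized expected depth

print(composite_depth(np.array([1.0, 1.2, 3.0]), np.array([0.4, 0.5, 0.9])))  # ~1.62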
2503.06588
Yaxuan Li
Yaxuan Li, Han Jiang, Yifei Ma, Shihua Qin, Fangxu Xing
Speech Audio Generation from dynamic MRI via a Knowledge Enhanced Conditional Variational Autoencoder
null
null
null
null
cs.SD cs.CV
http://creativecommons.org/licenses/by/4.0/
Dynamic Magnetic Resonance Imaging (MRI) of the vocal tract has become an increasingly adopted imaging modality for speech motor studies. Beyond image signals, systematic data loss, noise pollution, and audio file corruption can occur due to the unpredictability of the MRI acquisition environment. In such cases, generating audio from images is critical for data recovery in both clinical and research applications. However, this remains challenging due to hardware constraints, acoustic interference, and data corruption. Existing solutions, such as denoising and multi-stage synthesis methods, face limitations in audio fidelity and generalizability. To address these challenges, we propose a Knowledge Enhanced Conditional Variational Autoencoder (KE-CVAE), a novel two-step "knowledge enhancement + variational inference" framework for generating speech audio signals from cine dynamic MRI sequences. This approach introduces two key innovations: (1) integration of unlabeled MRI data for knowledge enhancement, and (2) a variational inference architecture to improve generative modeling capacity. To the best of our knowledge, this is one of the first attempts at synthesizing speech audio directly from dynamic MRI video sequences. The proposed method was trained and evaluated on an open-source dynamic vocal tract MRI dataset recorded during speech. Experimental results demonstrate its effectiveness in generating natural speech waveforms while addressing MRI-specific acoustic challenges, outperforming conventional deep learning-based synthesis approaches.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 12:40:16 GMT" } ]
2025-03-11T00:00:00
[ [ "Li", "Yaxuan", "" ], [ "Jiang", "Han", "" ], [ "Ma", "Yifei", "" ], [ "Qin", "Shihua", "" ], [ "Xing", "Fangxu", "" ] ]
TITLE: Speech Audio Generation from dynamic MRI via a Knowledge Enhanced Conditional Variational Autoencoder ABSTRACT: Dynamic Magnetic Resonance Imaging (MRI) of the vocal tract has become an increasingly adopted imaging modality for speech motor studies. Beyond image signals, systematic data loss, noise pollution, and audio file corruption can occur due to the unpredictability of the MRI acquisition environment. In such cases, generating audio from images is critical for data recovery in both clinical and research applications. However, this remains challenging due to hardware constraints, acoustic interference, and data corruption. Existing solutions, such as denoising and multi-stage synthesis methods, face limitations in audio fidelity and generalizability. To address these challenges, we propose a Knowledge Enhanced Conditional Variational Autoencoder (KE-CVAE), a novel two-step "knowledge enhancement + variational inference" framework for generating speech audio signals from cine dynamic MRI sequences. This approach introduces two key innovations: (1) integration of unlabeled MRI data for knowledge enhancement, and (2) a variational inference architecture to improve generative modeling capacity. To the best of our knowledge, this is one of the first attempts at synthesizing speech audio directly from dynamic MRI video sequences. The proposed method was trained and evaluated on an open-source dynamic vocal tract MRI dataset recorded during speech. Experimental results demonstrate its effectiveness in generating natural speech waveforms while addressing MRI-specific acoustic challenges, outperforming conventional deep learning-based synthesis approaches.
no_new_dataset
0.949153
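As background for the variational inference stage, a generic conditional-VAE training objective (reconstruction plus KL) is sketched below; the architecture, priors, and the knowledge-enhancement step of KE-CVAE are not modeled here, so treat this as an illustrative assumption.

import torch
import torch.nn.functional as F

def cvae_loss(recon_audio, target_audio, mu, logvar, beta=1.0):
    rec = F.mse_loss(recon_audio, target_audio)                    # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x,c) || N(0, I))
    return rec + beta * kl

z_mu, z_logvar = torch.zeros(4, 16), torch.zeros(4, 16)
print(cvae_loss(torch.randn(4, 8000), torch.randn(4, 8000), z_mu, z_logvar).item())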
2503.06594
Yingfeng Luo
Yingfeng Luo, Tong Zheng, Yongyu Mu, Bei Li, Qinghong Zhang, Yongqi Gao, Ziqiang Xu, Peinan Feng, Xiaoqian Liu, Tong Xiao, Jingbo Zhu
Beyond Decoder-only: Large Language Models Can be Good Encoders for Machine Translation
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The field of neural machine translation (NMT) has changed with the advent of large language models (LLMs). Much of the recent emphasis in natural language processing (NLP) has been on modeling machine translation and many other problems using a single pre-trained Transformer decoder, while encoder-decoder architectures, which were the standard in earlier NMT models, have received relatively less attention. In this paper, we explore translation models that are universal, efficient, and easy to optimize, by marrying the world of LLMs with the world of NMT. We apply LLMs to NMT encoding and leave the NMT decoder unchanged. We also develop methods for adapting LLMs to work better with the NMT decoder. Furthermore, we construct a new dataset involving multiple tasks to assess how well the machine translation system generalizes across various tasks. Evaluations on the WMT and our datasets show that results using our method match or surpass a range of baselines in terms of translation quality, but achieve $2.4 \sim 6.5 \times$ inference speedups and a $75\%$ reduction in the memory footprint of the KV cache. It also demonstrates strong generalization across a variety of translation-related tasks.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 12:54:05 GMT" } ]
2025-03-11T00:00:00
[ [ "Luo", "Yingfeng", "" ], [ "Zheng", "Tong", "" ], [ "Mu", "Yongyu", "" ], [ "Li", "Bei", "" ], [ "Zhang", "Qinghong", "" ], [ "Gao", "Yongqi", "" ], [ "Xu", "Ziqiang", "" ], [ "Feng", "Peinan", "" ], [ "Liu", "Xiaoqian", "" ], [ "Xiao", "Tong", "" ], [ "Zhu", "Jingbo", "" ] ]
TITLE: Beyond Decoder-only: Large Language Models Can be Good Encoders for Machine Translation ABSTRACT: The field of neural machine translation (NMT) has changed with the advent of large language models (LLMs). Much of the recent emphasis in natural language processing (NLP) has been on modeling machine translation and many other problems using a single pre-trained Transformer decoder, while encoder-decoder architectures, which were the standard in earlier NMT models, have received relatively less attention. In this paper, we explore translation models that are universal, efficient, and easy to optimize, by marrying the world of LLMs with the world of NMT. We apply LLMs to NMT encoding and leave the NMT decoder unchanged. We also develop methods for adapting LLMs to work better with the NMT decoder. Furthermore, we construct a new dataset involving multiple tasks to assess how well the machine translation system generalizes across various tasks. Evaluations on the WMT and our datasets show that results using our method match or surpass a range of baselines in terms of translation quality, but achieve $2.4 \sim 6.5 \times$ inference speedups and a $75\%$ reduction in the memory footprint of the KV cache. It also demonstrates strong generalization across a variety of translation-related tasks.
new_dataset
0.962497
2503.06598
Hao Xu
Hao Xu, Tengfei Xue, Dongnan Liu, Yuqian Chen, Fan Zhang, Carl-Fredrik Westin, Ron Kikinis, Lauren J. O'Donnell, Weidong Cai
MultiCo3D: Multi-Label Voxel Contrast for One-Shot Incremental Segmentation of 3D Neuroimages
13 pages, 6 figures, 6 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D neuroimages provide a comprehensive view of brain structure and function, aiding in precise localization and functional connectivity analysis. Segmentation of white matter (WM) tracts using 3D neuroimages is vital for understanding the brain's structural connectivity in both healthy and diseased states. One-shot Class Incremental Semantic Segmentation (OCIS) refers to effectively segmenting new (novel) classes using only a single sample while retaining knowledge of old (base) classes without forgetting. Voxel-contrastive OCIS methods adjust the feature space to alleviate the feature overlap problem between the base and novel classes. However, since WM tract segmentation is a multi-label segmentation task, existing single-label voxel contrastive-based methods may cause inherent contradictions. To address this, we propose a new multi-label voxel contrast framework called MultiCo3D for one-shot class incremental tract segmentation. Our method utilizes uncertainty distillation to preserve base tract segmentation knowledge while adjusting the feature space with multi-label voxel contrast to alleviate feature overlap when learning novel tracts and dynamically weighting multiple losses to balance the overall loss. We compare our method against several state-of-the-art (SOTA) approaches. The experimental results show that our method significantly enhances one-shot class incremental tract segmentation accuracy across five different experimental setups on HCP and Preto datasets.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 13:06:20 GMT" } ]
2025-03-11T00:00:00
[ [ "Xu", "Hao", "" ], [ "Xue", "Tengfei", "" ], [ "Liu", "Dongnan", "" ], [ "Chen", "Yuqian", "" ], [ "Zhang", "Fan", "" ], [ "Westin", "Carl-Fredrik", "" ], [ "Kikinis", "Ron", "" ], [ "O'Donnell", "Lauren J.", "" ], [ "Cai", "Weidong", "" ] ]
TITLE: MultiCo3D: Multi-Label Voxel Contrast for One-Shot Incremental Segmentation of 3D Neuroimages ABSTRACT: 3D neuroimages provide a comprehensive view of brain structure and function, aiding in precise localization and functional connectivity analysis. Segmentation of white matter (WM) tracts using 3D neuroimages is vital for understanding the brain's structural connectivity in both healthy and diseased states. One-shot Class Incremental Semantic Segmentation (OCIS) refers to effectively segmenting new (novel) classes using only a single sample while retaining knowledge of old (base) classes without forgetting. Voxel-contrastive OCIS methods adjust the feature space to alleviate the feature overlap problem between the base and novel classes. However, since WM tract segmentation is a multi-label segmentation task, existing single-label voxel contrastive-based methods may cause inherent contradictions. To address this, we propose a new multi-label voxel contrast framework called MultiCo3D for one-shot class incremental tract segmentation. Our method utilizes uncertainty distillation to preserve base tract segmentation knowledge while adjusting the feature space with multi-label voxel contrast to alleviate feature overlap when learning novel tracts and dynamically weighting multiple losses to balance the overall loss. We compare our method against several state-of-the-art (SOTA) approaches. The experimental results show that our method significantly enhances one-shot class incremental tract segmentation accuracy across five different experimental setups on HCP and Preto datasets.
no_new_dataset
0.951774
2503.06602
Pranoy Panda
Pranoy Panda, Siddharth Tandon, Vineeth N Balasubramanian
FW-Shapley: Real-time Estimation of Weighted Shapley Values
Accepted at ICASSP 2024
null
10.1109/ICASSP48485.2024.10446778
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Fair credit assignment is essential in various machine learning (ML) applications, and Shapley values have emerged as a valuable tool for this purpose. However, in critical ML applications such as data valuation and feature attribution, the uniform weighting of Shapley values across subset cardinalities leads to unintuitive credit assignments. To address this, weighted Shapley values were proposed as a generalization, allowing different weights for subsets with different cardinalities. Despite their advantages, similar to Shapley values, weighted Shapley values suffer from exponential compute costs, making them impractical for high-dimensional datasets. To tackle this issue, we present two key contributions. Firstly, we provide a weighted least squares characterization of weighted Shapley values. Next, using this characterization, we propose Fast Weighted Shapley (FW-Shapley), an amortized framework for efficiently computing weighted Shapley values using a learned estimator. We further show that our estimator's training procedure is theoretically valid even though we do not use ground-truth weighted Shapley values during training. On the feature attribution task, we outperform the learned estimator FastSHAP by $27\%$ (on average) in terms of Inclusion AUC. For data valuation, we are much faster (14 times) while being comparable to the state-of-the-art KNN Shapley.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 13:13:14 GMT" } ]
2025-03-11T00:00:00
[ [ "Panda", "Pranoy", "" ], [ "Tandon", "Siddharth", "" ], [ "Balasubramanian", "Vineeth N", "" ] ]
TITLE: FW-Shapley: Real-time Estimation of Weighted Shapley Values ABSTRACT: Fair credit assignment is essential in various machine learning (ML) applications, and Shapley values have emerged as a valuable tool for this purpose. However, in critical ML applications such as data valuation and feature attribution, the uniform weighting of Shapley values across subset cardinalities leads to unintuitive credit assignments. To address this, weighted Shapley values were proposed as a generalization, allowing different weights for subsets with different cardinalities. Despite their advantages, similar to Shapley values, weighted Shapley values suffer from exponential compute costs, making them impractical for high-dimensional datasets. To tackle this issue, we present two key contributions. Firstly, we provide a weighted least squares characterization of weighted Shapley values. Next, using this characterization, we propose Fast Weighted Shapley (FW-Shapley), an amortized framework for efficiently computing weighted Shapley values using a learned estimator. We further show that our estimator's training procedure is theoretically valid even though we do not use ground-truth weighted Shapley values during training. On the feature attribution task, we outperform the learned estimator FastSHAP by $27\%$ (on average) in terms of Inclusion AUC. For data valuation, we are much faster (14 times) while being comparable to the state-of-the-art KNN Shapley.
no_new_dataset
0.949435
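The FW-Shapley abstract above defines weighted Shapley values by cardinality-dependent subset weights but gives no implementation. Below is a minimal exact-enumeration sketch of that cardinality-weighted scheme, not the paper's amortized estimator; the toy game, the weight normalization, and all names are illustrative assumptions.

```python
import itertools
from typing import Callable, Dict, Sequence

def weighted_shapley(v: Callable[[frozenset], float],
                     players: Sequence[int],
                     size_weights: Sequence[float]) -> Dict[int, float]:
    """Exact cardinality-weighted Shapley values by enumeration.

    size_weights[k] is the (unnormalized) weight of coalitions of size k
    (0 <= k <= n-1); uniform weights recover ordinary Shapley values.
    Exponential in n, so this is only for small toy games.
    """
    n = len(players)
    z = sum(size_weights)  # normalize the mass placed on each coalition size
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        phi = 0.0
        for k in range(n):
            coalitions = list(itertools.combinations(others, k))
            # Mean marginal contribution over size-k coalitions,
            # weighted by how much mass size k receives.
            mc = sum(v(frozenset(s) | {i}) - v(frozenset(s))
                     for s in coalitions) / len(coalitions)
            phi += (size_weights[k] / z) * mc
        values[i] = phi
    return values

# Toy game: v(S) = (sum of member ids)^2 over players {1, 2, 3}.
game = lambda s: float(sum(s)) ** 2
print(weighted_shapley(game, [1, 2, 3], [1.0, 1.0, 1.0]))  # plain Shapley
print(weighted_shapley(game, [1, 2, 3], [0.1, 0.2, 0.7]))  # biased to large sets
```

With uniform size weights this averages the mean marginal contribution over coalition sizes, which is exactly the classical Shapley value, so the weighted variant is a strict generalization.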
2503.06604
Renhao Lu
Renhao Lu
Steerable Pyramid Weighted Loss: Multi-Scale Adaptive Weighting for Semantic Segmentation
9 pages, 4 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Semantic segmentation is a core task in computer vision with applications in biomedical imaging, remote sensing, and autonomous driving. While standard loss functions such as cross-entropy and Dice loss perform well in general cases, they often struggle with fine structures, particularly in tasks involving thin structures or closely packed objects. Various weight map-based loss functions have been proposed to address this issue by assigning higher loss weights to pixels prone to misclassification. However, these methods typically rely on precomputed or runtime-generated weight maps based on distance transforms, which impose significant computational costs and fail to adapt to evolving network predictions. In this paper, we propose a novel steerable pyramid-based weighted (SPW) loss function that efficiently generates adaptive weight maps. Unlike traditional boundary-aware losses that depend on static or iteratively updated distance maps, our method leverages steerable pyramids to dynamically emphasize regions across multiple frequency bands (capturing features at different scales) while maintaining computational efficiency. Additionally, by incorporating network predictions into the weight computation, our approach enables adaptive refinement during training. We evaluate our method on the SNEMI3D, GlaS, and DRIVE datasets, benchmarking it against 11 state-of-the-art loss functions. Our results demonstrate that the proposed SPW loss function achieves superior pixel precision and segmentation accuracy with minimal computational overhead. This work provides an effective and efficient solution for improving semantic segmentation, particularly for applications requiring multiscale feature representation. The code is available at https://anonymous.4open.science/r/SPW-0884
[ { "version": "v1", "created": "Sun, 9 Mar 2025 13:15:01 GMT" } ]
2025-03-11T00:00:00
[ [ "Lu", "Renhao", "" ] ]
TITLE: Steerable Pyramid Weighted Loss: Multi-Scale Adaptive Weighting for Semantic Segmentation ABSTRACT: Semantic segmentation is a core task in computer vision with applications in biomedical imaging, remote sensing, and autonomous driving. While standard loss functions such as cross-entropy and Dice loss perform well in general cases, they often struggle with fine structures, particularly in tasks involving thin structures or closely packed objects. Various weight map-based loss functions have been proposed to address this issue by assigning higher loss weights to pixels prone to misclassification. However, these methods typically rely on precomputed or runtime-generated weight maps based on distance transforms, which impose significant computational costs and fail to adapt to evolving network predictions. In this paper, we propose a novel steerable pyramid-based weighted (SPW) loss function that efficiently generates adaptive weight maps. Unlike traditional boundary-aware losses that depend on static or iteratively updated distance maps, our method leverages steerable pyramids to dynamically emphasize regions across multiple frequency bands (capturing features at different scales) while maintaining computational efficiency. Additionally, by incorporating network predictions into the weight computation, our approach enables adaptive refinement during training. We evaluate our method on the SNEMI3D, GlaS, and DRIVE datasets, benchmarking it against 11 state-of-the-art loss functions. Our results demonstrate that the proposed SPW loss function achieves superior pixel precision and segmentation accuracy with minimal computational overhead. This work provides an effective and efficient solution for improving semantic segmentation, particularly for applications requiring multiscale feature representation. The code is available at https://anonymous.4open.science/r/SPW-0884
no_new_dataset
0.949902
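The SPW abstract above describes prediction-aware, multi-scale weight maps without specifying their construction. The PyTorch sketch below illustrates the general weight-map-loss pattern under stated assumptions: a difference-of-Gaussians band-pass stands in for the steerable pyramid, and the error signal, level count, and normalization are invented for illustration.

```python
import torch
import torch.nn.functional as F

def multiscale_weight_map(err: torch.Tensor, n_levels: int = 3) -> torch.Tensor:
    """Build a per-pixel weight map from a (B,1,H,W) error signal by
    accumulating band-pass responses across scales. A Laplacian-style
    difference-of-Gaussians stands in for the paper's steerable pyramid."""
    weights = torch.zeros_like(err)
    current = err
    for _ in range(n_levels):
        blurred = F.avg_pool2d(current, 3, stride=1, padding=1)
        weights = weights + (current - blurred).abs()  # band-pass energy
        current = blurred
    # Shift to >= 1 so every pixel keeps at least the base loss weight.
    return 1.0 + weights / (weights.amax() + 1e-8)

def spw_style_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Weighted cross-entropy: logits (B,C,H,W), target (B,H,W) int64."""
    with torch.no_grad():
        probs = logits.softmax(dim=1)
        # Prediction-dependent error signal: 1 - prob of the true class.
        err = 1.0 - probs.gather(1, target.unsqueeze(1))
        w = multiscale_weight_map(err)
    ce = F.cross_entropy(logits, target, reduction="none")  # (B,H,W)
    return (w.squeeze(1) * ce).mean()

# Smoke test on random data.
logits = torch.randn(2, 4, 64, 64, requires_grad=True)
target = torch.randint(0, 4, (2, 64, 64))
loss = spw_style_loss(logits, target)
loss.backward()
print(float(loss))
```

Computing the weight map under `torch.no_grad()` keeps it a reweighting term rather than a differentiable target, which matches how weight-map losses are typically used.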
2503.06606
Pranoy Panda
Pranoy Panda, Kancheti Sai Srinivas, Vineeth N Balasubramanian, Gaurav Sinha
Interpretable Model Drift Detection
Accepted at CODS-COMAD 2024
null
10.1145/3632410.3632434
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Data in the real world often has an evolving distribution. Thus, machine learning models trained on such data get outdated over time. This phenomenon is called model drift. Knowledge of this drift serves two purposes: (i) retaining an accurate model and (ii) discovering knowledge or insights about changes in the relationship between input features and the output variable w.r.t. the model. Most existing works focus only on detecting model drift but offer no interpretability. In this work, we take a principled approach to study the problem of interpretable model drift detection from a risk perspective using a feature-interaction aware hypothesis testing framework, which enjoys guarantees on test power. The proposed framework is generic, i.e., it can be adapted to both classification and regression tasks. Experiments on several standard drift detection datasets show that our method is superior to existing interpretable methods (especially on real-world datasets) and on par with state-of-the-art black-box drift detection methods. We also quantitatively and qualitatively study the interpretability aspect, including a case study on the USENET2 dataset. We find that our method focuses on model- and drift-sensitive features compared to baseline interpretable drift detectors.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 13:19:06 GMT" } ]
2025-03-11T00:00:00
[ [ "Panda", "Pranoy", "" ], [ "Srinivas", "Kancheti Sai", "" ], [ "Balasubramanian", "Vineeth N", "" ], [ "Sinha", "Gaurav", "" ] ]
TITLE: Interpretable Model Drift Detection ABSTRACT: Data in the real world often has an evolving distribution. Thus, machine learning models trained on such data get outdated over time. This phenomenon is called model drift. Knowledge of this drift serves two purposes: (i) retaining an accurate model and (ii) discovering knowledge or insights about changes in the relationship between input features and the output variable w.r.t. the model. Most existing works focus only on detecting model drift but offer no interpretability. In this work, we take a principled approach to study the problem of interpretable model drift detection from a risk perspective using a feature-interaction aware hypothesis testing framework, which enjoys guarantees on test power. The proposed framework is generic, i.e., it can be adapted to both classification and regression tasks. Experiments on several standard drift detection datasets show that our method is superior to existing interpretable methods (especially on real-world datasets) and on par with state-of-the-art black-box drift detection methods. We also quantitatively and qualitatively study the interpretability aspect, including a case study on the USENET2 dataset. We find that our method focuses on model- and drift-sensitive features compared to baseline interpretable drift detectors.
no_new_dataset
0.944228
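The abstract above frames drift detection as hypothesis testing with per-feature interpretability. A plain two-sample permutation test is sketched below as a rough stand-in; it is not the paper's feature-interaction-aware framework and carries no power guarantee, and the toy data and all names are assumptions.

```python
import numpy as np

def permutation_pvalue(a, b, n_perm=1000, seed=0):
    """Two-sample permutation test on |difference of means|."""
    rng = np.random.default_rng(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        hits += abs(pooled[:a.size].mean() - pooled[a.size:].mean()) >= observed
    return (hits + 1) / (n_perm + 1)

def drift_report(predict, X_ref, y_ref, X_new, y_new):
    """Overall drift test on per-sample squared errors, plus per-feature
    two-sample tests that point at which inputs shifted."""
    e_ref = (predict(X_ref) - y_ref) ** 2
    e_new = (predict(X_new) - y_new) ** 2
    report = {"loss_p": permutation_pvalue(e_ref, e_new)}
    for j in range(X_ref.shape[1]):
        report[f"x{j}_p"] = permutation_pvalue(X_ref[:, j], X_new[:, j], seed=j + 1)
    return report

# Toy: a linear model fit when x0 ~ N(0,1) and y = x0^2; after deployment
# x0 shifts to N(1.5,1), so the loss drifts and x0 is flagged.
rng = np.random.default_rng(42)
X_ref = rng.normal(size=(400, 3)); y_ref = X_ref[:, 0] ** 2
coef = np.polyfit(X_ref[:, 0], y_ref, 1)
predict = lambda X: np.polyval(coef, X[:, 0])
X_new = rng.normal(size=(400, 3)); X_new[:, 0] += 1.5
y_new = X_new[:, 0] ** 2
print(drift_report(predict, X_ref, y_ref, X_new, y_new))
# Expect loss_p and x0_p tiny; x1_p and x2_p large.
```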
2503.06608
Ruchi Bhatt
Ruchi Bhatt, Shreya Bansal, Amanpreet Chander, Rupinder Kaur, Malya Singh, Mohan Kankanhalli, Abdulmotaleb El Saddik, Mukesh Kumar Saini
GroMo: Plant Growth Modeling with Multiview Images
7 pages, 5 Figures, 3 Tables
null
null
null
cs.CV cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding plant growth dynamics is essential for applications in agriculture and plant phenotyping. We present the Growth Modelling (GroMo) challenge, which is designed for two primary tasks: (1) plant age prediction and (2) leaf count estimation, both essential for crop monitoring and precision agriculture. For this challenge, we introduce GroMo25, a dataset with images of four crops: radish, okra, wheat, and mustard. Each crop consists of multiple plants (p1, p2, ..., pn) captured over different days (d1, d2, ..., dm) and categorized into five levels (L1, L2, L3, L4, L5). Each plant is captured from 24 different angles with a 15-degree gap between images. Participants are required to perform both tasks for all four crops with these multiview images. We propose a Multiview Vision Transformer (MVVT) model for the GroMo challenge and evaluate its crop-wise performance on GroMo25. MVVT reports an average MAE of 7.74 for age prediction and an MAE of 5.52 for leaf count. The GroMo Challenge aims to advance plant phenotyping research by encouraging innovative solutions for tracking and predicting plant growth. The GitHub repository is publicly available at https://github.com/mriglab/GroMo-Plant-Growth-Modeling-with-Multiview-Images.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 13:23:16 GMT" } ]
2025-03-11T00:00:00
[ [ "Bhatt", "Ruchi", "" ], [ "Bansal", "Shreya", "" ], [ "Chander", "Amanpreet", "" ], [ "Kaur", "Rupinder", "" ], [ "Singh", "Malya", "" ], [ "Kankanhalli", "Mohan", "" ], [ "Saddik", "Abdulmotaleb El", "" ], [ "Saini", "Mukesh Kumar", "" ] ]
TITLE: GroMo: Plant Growth Modeling with Multiview Images ABSTRACT: Understanding plant growth dynamics is essential for applications in agriculture and plant phenotyping. We present the Growth Modelling (GroMo) challenge, which is designed for two primary tasks: (1) plant age prediction and (2) leaf count estimation, both essential for crop monitoring and precision agriculture. For this challenge, we introduce GroMo25, a dataset with images of four crops: radish, okra, wheat, and mustard. Each crop consists of multiple plants (p1, p2, ..., pn) captured over different days (d1, d2, ..., dm) and categorized into five levels (L1, L2, L3, L4, L5). Each plant is captured from 24 different angles with a 15-degree gap between images. Participants are required to perform both tasks for all four crops with these multiview images. We propose a Multiview Vision Transformer (MVVT) model for the GroMo challenge and evaluate its crop-wise performance on GroMo25. MVVT reports an average MAE of 7.74 for age prediction and an MAE of 5.52 for leaf count. The GroMo Challenge aims to advance plant phenotyping research by encouraging innovative solutions for tracking and predicting plant growth. The GitHub repository is publicly available at https://github.com/mriglab/GroMo-Plant-Growth-Modeling-with-Multiview-Images.
new_dataset
0.959724
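The GroMo abstract above reports a Multiview Vision Transformer but no architecture details. The sketch below shows one plausible multiview fusion pattern — encode each view independently, mean-pool the embeddings, regress age and leaf count — with a tiny CNN standing in for the ViT backbone; every shape, size, and name is an assumption.

```python
import torch
import torch.nn as nn

class MultiviewRegressor(nn.Module):
    """Encode each of V views independently, mean-pool the embeddings,
    and regress plant age and leaf count from the pooled vector."""
    def __init__(self, emb: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb))
        self.age_head = nn.Linear(emb, 1)
        self.leaf_head = nn.Linear(emb, 1)

    def forward(self, views: torch.Tensor):
        b, v, c, h, w = views.shape
        z = self.encoder(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
        pooled = z.mean(dim=1)             # order-invariant view fusion
        return self.age_head(pooled), self.leaf_head(pooled)

model = MultiviewRegressor()
views = torch.randn(2, 24, 3, 96, 96)      # 24 angles at 15-degree gaps
age, leaves = model(views)
print(age.shape, leaves.shape)              # torch.Size([2, 1]) each
```

Mean pooling makes the fusion invariant to the order of the camera angles, which suits the dataset's fixed ring of viewpoints.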
2503.06611
Raghvendra Cowlagi
Alexandra E. Ballentine and Raghvendra V. Cowlagi
Inverse Reinforcement Learning for Minimum-Exposure Paths in Spatiotemporally Varying Scalar Fields
Joint submission to MECC-JAVS 2025
null
null
null
cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Performance and reliability analyses of autonomous vehicles (AVs) can benefit from tools that ``amplify'' small datasets to synthesize larger volumes of plausible samples of the AV's behavior. We consider a specific instance of this data synthesis problem that addresses minimizing the AV's exposure to adverse environmental conditions during travel to a fixed goal location. The environment is characterized by a threat field, which is a strictly positive scalar field with higher intensities corresponding to hazardous and unfavorable conditions for the AV. We address the problem of synthesizing datasets of minimum exposure paths that resemble a training dataset of such paths. The main contribution of this paper is an inverse reinforcement learning (IRL) model to solve this problem. We consider time-invariant (static) as well as time-varying (dynamic) threat fields. We find that the proposed IRL model provides excellent performance in synthesizing paths from initial conditions not seen in the training dataset, when the threat field is the same as that used for training. Furthermore, we evaluate model performance on unseen threat fields and find low error in that case as well. Finally, we demonstrate the model's ability to synthesize distinct datasets when trained on different datasets with distinct characteristics.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 13:30:11 GMT" } ]
2025-03-11T00:00:00
[ [ "Ballentine", "Alexandra E.", "" ], [ "Cowlagi", "Raghvendra V.", "" ] ]
TITLE: Inverse Reinforcement Learning for Minimum-Exposure Paths in Spatiotemporally Varying Scalar Fields ABSTRACT: Performance and reliability analyses of autonomous vehicles (AVs) can benefit from tools that ``amplify'' small datasets to synthesize larger volumes of plausible samples of the AV's behavior. We consider a specific instance of this data synthesis problem that addresses minimizing the AV's exposure to adverse environmental conditions during travel to a fixed goal location. The environment is characterized by a threat field, which is a strictly positive scalar field with higher intensities corresponding to hazardous and unfavorable conditions for the AV. We address the problem of synthesizing datasets of minimum exposure paths that resemble a training dataset of such paths. The main contribution of this paper is an inverse reinforcement learning (IRL) model to solve this problem. We consider time-invariant (static) as well as time-varying (dynamic) threat fields. We find that the proposed IRL model provides excellent performance in synthesizing paths from initial conditions not seen in the training dataset, when the threat field is the same as that used for training. Furthermore, we evaluate model performance on unseen threat fields and find low error in that case as well. Finally, we demonstrate the model's ability to synthesize distinct datasets when trained on different datasets with distinct characteristics.
no_new_dataset
0.946349
2503.06614
Qian Zeng
Qian Zeng, Xin Lin, Jingyi Gao and Yang Yu
Using Subgraph GNNs for Node Classification: An Overlooked Potential Approach
16 pages
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous studies have demonstrated the strong performance of Graph Neural Networks (GNNs) in node classification. However, most existing GNNs adopt a node-centric perspective and rely on global message passing, leading to high computational and memory costs that hinder scalability. To mitigate these challenges, subgraph-based methods have been introduced, leveraging local subgraphs as approximations of full computational trees. While this approach improves efficiency, it often suffers from performance degradation due to the loss of global contextual information, limiting its effectiveness compared to global GNNs. To address this trade-off between scalability and classification accuracy, we reformulate the node classification task as a subgraph classification problem and propose SubGND (Subgraph GNN for NoDe). This framework introduces a differentiated zero-padding strategy and an Ego-Alter subgraph representation method to resolve label conflicts while incorporating an Adaptive Feature Scaling Mechanism to dynamically adjust feature contributions based on dataset-specific dependencies. Experimental results on six benchmark datasets demonstrate that SubGND achieves performance comparable to or surpassing global message-passing GNNs, particularly in heterophilic settings, highlighting its effectiveness and scalability as a promising solution for node classification.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 13:37:38 GMT" } ]
2025-03-11T00:00:00
[ [ "Zeng", "Qian", "" ], [ "Lin", "Xin", "" ], [ "Gao", "Jingyi", "" ], [ "Yu", "Yang", "" ] ]
TITLE: Using Subgraph GNNs for Node Classification: An Overlooked Potential Approach ABSTRACT: Previous studies have demonstrated the strong performance of Graph Neural Networks (GNNs) in node classification. However, most existing GNNs adopt a node-centric perspective and rely on global message passing, leading to high computational and memory costs that hinder scalability. To mitigate these challenges, subgraph-based methods have been introduced, leveraging local subgraphs as approximations of full computational trees. While this approach improves efficiency, it often suffers from performance degradation due to the loss of global contextual information, limiting its effectiveness compared to global GNNs. To address this trade-off between scalability and classification accuracy, we reformulate the node classification task as a subgraph classification problem and propose SubGND (Subgraph GNN for NoDe). This framework introduces a differentiated zero-padding strategy and an Ego-Alter subgraph representation method to resolve label conflicts while incorporating an Adaptive Feature Scaling Mechanism to dynamically adjust feature contributions based on dataset-specific dependencies. Experimental results on six benchmark datasets demonstrate that SubGND achieves performance comparable to or surpassing global message-passing GNNs, particularly in heterophilic settings, highlighting its effectiveness and scalability as a promising solution for node classification.
no_new_dataset
0.943712
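The SubGND abstract above reformulates node classification as subgraph classification with zero-padding and an Ego-Alter representation, without implementation details. The sketch below illustrates only the generic ego-subgraph-plus-padding idea; the ordering heuristic, sizes, and names are assumptions, and the paper's differentiated padding and adaptive feature scaling are not reproduced.

```python
import networkx as nx
import numpy as np

def ego_subgraph_features(g: nx.Graph, feats: dict, center: int,
                          radius: int = 2, max_nodes: int = 16) -> np.ndarray:
    """Turn node classification into subgraph classification: extract the
    r-hop ego network of `center`, order nodes by hop distance (the ego
    first, alters after), and zero-pad the feature matrix to a fixed size
    so a standard classifier can consume it."""
    sub = nx.ego_graph(g, center, radius=radius)
    dist = nx.single_source_shortest_path_length(sub, center)
    ordered = sorted(sub.nodes, key=lambda n: (dist[n], n))[:max_nodes]
    dim = len(next(iter(feats.values())))
    x = np.zeros((max_nodes, dim), dtype=np.float32)   # zero padding
    for row, node in enumerate(ordered):
        x[row] = feats[node]
    return x

# Toy graph with 3-dim random node features.
g = nx.karate_club_graph()
rng = np.random.default_rng(0)
feats = {n: rng.normal(size=3) for n in g.nodes}
print(ego_subgraph_features(g, feats, center=0).shape)  # (16, 3)
```

Fixing the node order (ego first, then alters by hop distance) is what lets a downstream model tell the labeled center apart from its context, which is the crux of the ego-alter split.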
2503.06623
Feng Liu
Sijie Zhao, Feng Liu, Xueliang Zhang, Hao Chen, Tao Han, Junchao Gong, Ran Tao, Pengfeng Xiao, Lei Bai, Wanli Ouyang
Transforming Weather Data from Pixel to Latent Space
8 pages, 6 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing impact of climate change and extreme weather events has spurred growing interest in deep learning for weather research. However, existing studies often rely on weather data in pixel space, which presents several challenges, such as overly smooth model outputs, limited applicability to a single pressure-variable subset (PVS), and high data storage and computational costs. To address these challenges, we propose a novel Weather Latent Autoencoder (WLA) that transforms weather data from pixel space to latent space, enabling efficient weather task modeling. By decoupling weather reconstruction from downstream tasks, WLA improves the accuracy and sharpness of weather task model results. The incorporated Pressure-Variable Unified Module transforms multiple PVS into a unified representation, enhancing the adaptability of the model in multiple weather scenarios. Furthermore, weather tasks can be performed in the low-storage latent space of WLA rather than the high-storage pixel space, thus significantly reducing data storage and computational costs. Through extensive experimentation, we demonstrate its superior compression and reconstruction performance, enabling the creation of the ERA5-latent dataset with unified representations of multiple PVS from ERA5 data. The compressed full PVS in the ERA5-latent dataset reduces the original 244.34 TB of data to 0.43 TB. Downstream tasks further demonstrate that task models can be applied to multiple PVS at low data cost in latent space and achieve superior performance compared to models in pixel space. Code, ERA5-latent data, and pre-trained models are available at https://anonymous.4open.science/r/Weather-Latent-Autoencoder-8467.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 13:55:33 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhao", "Sijie", "" ], [ "Liu", "Feng", "" ], [ "Zhang", "Xueliang", "" ], [ "Chen", "Hao", "" ], [ "Han", "Tao", "" ], [ "Gong", "Junchao", "" ], [ "Tao", "Ran", "" ], [ "Xiao", "Pengfeng", "" ], [ "Bai", "Lei", "" ], [ "Ouyang", "Wanli", "" ] ]
TITLE: Transforming Weather Data from Pixel to Latent Space ABSTRACT: The increasing impact of climate change and extreme weather events has spurred growing interest in deep learning for weather research. However, existing studies often rely on weather data in pixel space, which presents several challenges, such as overly smooth model outputs, limited applicability to a single pressure-variable subset (PVS), and high data storage and computational costs. To address these challenges, we propose a novel Weather Latent Autoencoder (WLA) that transforms weather data from pixel space to latent space, enabling efficient weather task modeling. By decoupling weather reconstruction from downstream tasks, WLA improves the accuracy and sharpness of weather task model results. The incorporated Pressure-Variable Unified Module transforms multiple PVS into a unified representation, enhancing the adaptability of the model in multiple weather scenarios. Furthermore, weather tasks can be performed in the low-storage latent space of WLA rather than the high-storage pixel space, thus significantly reducing data storage and computational costs. Through extensive experimentation, we demonstrate its superior compression and reconstruction performance, enabling the creation of the ERA5-latent dataset with unified representations of multiple PVS from ERA5 data. The compressed full PVS in the ERA5-latent dataset reduces the original 244.34 TB of data to 0.43 TB. Downstream tasks further demonstrate that task models can be applied to multiple PVS at low data cost in latent space and achieve superior performance compared to models in pixel space. Code, ERA5-latent data, and pre-trained models are available at https://anonymous.4open.science/r/Weather-Latent-Autoencoder-8467.
no_new_dataset
0.901704
2503.06624
Nankai Lin
Meiyu Zeng, Xingming Liao, Canyu Chen, Nankai Lin, Zhuowei Wang, Chong Chen, Aimin Yang
Chameleon: On the Scene Diversity and Domain Variety of AI-Generated Videos Detection
17 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence generated content (AIGC), known as DeepFakes, has emerged as a growing concern because it is being utilized as a tool for spreading disinformation. While much research exists on identifying AI-generated text and images, research on detecting AI-generated videos is limited. Existing datasets for AI-generated video detection exhibit limitations in terms of diversity, complexity, and realism. To address these issues, this paper focuses on AI-generated video detection and constructs a diverse dataset named Chameleon. We generate videos through multiple generation tools and various real video sources. At the same time, we preserve the videos' real-world complexity, including scene switches and dynamic perspective changes, and expand beyond face-centered detection to include human actions and environment generation. Our work bridges the gap between AI-generated dataset construction and real-world forensic needs, offering a valuable benchmark to counteract the evolving threats of AI-generated content.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 13:58:43 GMT" } ]
2025-03-11T00:00:00
[ [ "Zeng", "Meiyu", "" ], [ "Liao", "Xingming", "" ], [ "Chen", "Canyu", "" ], [ "Lin", "Nankai", "" ], [ "Wang", "Zhuowei", "" ], [ "Chen", "Chong", "" ], [ "Yang", "Aimin", "" ] ]
TITLE: Chameleon: On the Scene Diversity and Domain Variety of AI-Generated Videos Detection ABSTRACT: Artificial intelligence generated content (AIGC), known as DeepFakes, has emerged as a growing concern because it is being utilized as a tool for spreading disinformation. While much research exists on identifying AI-generated text and images, research on detecting AI-generated videos is limited. Existing datasets for AI-generated video detection exhibit limitations in terms of diversity, complexity, and realism. To address these issues, this paper focuses on AI-generated video detection and constructs a diverse dataset named Chameleon. We generate videos through multiple generation tools and various real video sources. At the same time, we preserve the videos' real-world complexity, including scene switches and dynamic perspective changes, and expand beyond face-centered detection to include human actions and environment generation. Our work bridges the gap between AI-generated dataset construction and real-world forensic needs, offering a valuable benchmark to counteract the evolving threats of AI-generated content.
new_dataset
0.949059
2503.06629
Tomasz Kryjak
Hiroshi Nakano and Krzysztof Blachut and Kamil Jeziorek and Piotr Wzorek and Manon Dampfhoffer and Thomas Mesquida and Hiroaki Nishi and Tomasz Kryjak and Thomas Dalgaty
Hardware-Accelerated Event-Graph Neural Networks for Low-Latency Time-Series Classification on SoC FPGA
Paper accepted for the 21st International Symposium on Applied Reconfigurable Computing ARC 2025, Sevilla, Spain, April 9-11, 2025
null
null
null
cs.LG cs.AI eess.SP
http://creativecommons.org/licenses/by/4.0/
As the quantities of data recorded by embedded edge sensors grow, so too does the need for intelligent local processing. Such data often comes in the form of time-series signals, based on which real-time predictions can be made locally using an AI model. However, a hardware-software approach capable of making low-latency predictions with low power consumption is required. In this paper, we present a hardware implementation of an event-graph neural network for time-series classification. We leverage an artificial cochlea model to convert the input time-series signals into a sparse event-data format that allows the event-graph to drastically reduce the number of calculations relative to other AI methods. We implemented the design on a SoC FPGA and applied it to the real-time processing of the Spiking Heidelberg Digits (SHD) dataset to benchmark our approach against competitive solutions. Our method achieves a floating-point accuracy of 92.7% on the SHD dataset for the base model, which is only 2.4% and 2% less than the state-of-the-art models with over 10% and 67% fewer model parameters, respectively. It also outperforms FPGA-based spiking neural network implementations by 19.3% and 4.5%, achieving 92.3% accuracy for the quantised model while using fewer computational resources and reducing latency.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 14:08:46 GMT" } ]
2025-03-11T00:00:00
[ [ "Nakano", "Hiroshi", "" ], [ "Blachut", "Krzysztof", "" ], [ "Jeziorek", "Kamil", "" ], [ "Wzorek", "Piotr", "" ], [ "Dampfhoffer", "Manon", "" ], [ "Mesquida", "Thomas", "" ], [ "Nishi", "Hiroaki", "" ], [ "Kryjak", "Tomasz", "" ], [ "Dalgaty", "Thomas", "" ] ]
TITLE: Hardware-Accelerated Event-Graph Neural Networks for Low-Latency Time-Series Classification on SoC FPGA ABSTRACT: As the quantities of data recorded by embedded edge sensors grow, so too does the need for intelligent local processing. Such data often comes in the form of time-series signals, based on which real-time predictions can be made locally using an AI model. However, a hardware-software approach capable of making low-latency predictions with low power consumption is required. In this paper, we present a hardware implementation of an event-graph neural network for time-series classification. We leverage an artificial cochlea model to convert the input time-series signals into a sparse event-data format that allows the event-graph to drastically reduce the number of calculations relative to other AI methods. We implemented the design on a SoC FPGA and applied it to the real-time processing of the Spiking Heidelberg Digits (SHD) dataset to benchmark our approach against competitive solutions. Our method achieves a floating-point accuracy of 92.7% on the SHD dataset for the base model, which is only 2.4% and 2% less than the state-of-the-art models with over 10% and 67% fewer model parameters, respectively. It also outperforms FPGA-based spiking neural network implementations by 19.3% and 4.5%, achieving 92.3% accuracy for the quantised model while using fewer computational resources and reducing latency.
no_new_dataset
0.947575
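The abstract above converts time-series signals to sparse events via an artificial cochlea model before the event-graph network. The sketch below is a crude send-on-delta stand-in for that front end, not the paper's cochlea model or the FPGA pipeline; the band count, threshold, and framing are invented for illustration.

```python
import numpy as np

def cochlea_events(x: np.ndarray, fs: float, n_bands: int = 8,
                   threshold: float = 0.5, frame: int = 256):
    """Convert a 1-D signal into sparse (time, channel) events: split the
    spectrum into bands per frame and emit an event whenever a band's
    log-energy has moved by more than `threshold` since its last event."""
    events = []
    last = np.zeros(n_bands)
    for t in range(0, len(x) - frame, frame):
        spec = np.abs(np.fft.rfft(x[t:t + frame]))
        bands = np.array_split(spec, n_bands)
        energy = np.log1p(np.array([b.sum() for b in bands]))
        for ch in np.flatnonzero(np.abs(energy - last) > threshold):
            events.append((t / fs, int(ch)))
            last[ch] = energy[ch]
    return events

fs = 16_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) * (t > 0.5)   # tone switching on mid-way
ev = cochlea_events(x, fs)
print(len(ev), ev[:4])                         # events cluster at the onset
```

Because events fire only on change, a steady signal produces almost no output, which is exactly the sparsity the event-graph network exploits to cut computation.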
2503.06633
Yu Zhou
Yu Zhou and Bingyan Liu
BTFL: A Bayesian-based Test-Time Generalization Method for Internal and External Data Distributions in Federated learning
accepted as KDD 2025 research track paper
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated Learning (FL) enables multiple clients to collaboratively develop a global model while maintaining data privacy. However, online FL deployment faces challenges due to distribution shifts and evolving test samples. Personalized Federated Learning (PFL) tailors the global model to individual client distributions, but struggles with Out-Of-Distribution (OOD) samples during testing, leading to performance degradation. In real-world scenarios, balancing personalization and generalization during online testing is crucial, and existing methods primarily focus on training-phase generalization. To address the test-time trade-off, we introduce a new scenario: Test-time Generalization for Internal and External Distributions in Federated Learning (TGFL), which evaluates adaptability under Internal Distribution (IND) and External Distribution (EXD). We propose BTFL, a Bayesian-based test-time generalization method for TGFL, which balances generalization and personalization at the sample level during testing. BTFL employs a two-head architecture to store local and global knowledge, interpolating predictions via a dual-Bayesian framework that considers both historical test data and current sample characteristics, with theoretical guarantees and faster inference. Our experiments demonstrate that BTFL achieves improved performance across various datasets and models with less time cost. The source code is made publicly available at https://github.com/ZhouYuCS/BTFL.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 14:16:34 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhou", "Yu", "" ], [ "Liu", "Bingyan", "" ] ]
TITLE: BTFL: A Bayesian-based Test-Time Generalization Method for Internal and External Data Distributions in Federated learning ABSTRACT: Federated Learning (FL) enables multiple clients to collaboratively develop a global model while maintaining data privacy. However, online FL deployment faces challenges due to distribution shifts and evolving test samples. Personalized Federated Learning (PFL) tailors the global model to individual client distributions, but struggles with Out-Of-Distribution (OOD) samples during testing, leading to performance degradation. In real-world scenarios, balancing personalization and generalization during online testing is crucial, and existing methods primarily focus on training-phase generalization. To address the test-time trade-off, we introduce a new scenario: Test-time Generalization for Internal and External Distributions in Federated Learning (TGFL), which evaluates adaptability under Internal Distribution (IND) and External Distribution (EXD). We propose BTFL, a Bayesian-based test-time generalization method for TGFL, which balances generalization and personalization at the sample level during testing. BTFL employs a two-head architecture to store local and global knowledge, interpolating predictions via a dual-Bayesian framework that considers both historical test data and current sample characteristics, with theoretical guarantees and faster inference. Our experiments demonstrate that BTFL achieves improved performance across various datasets and models with less time cost. The source code is made publicly available at https://github.com/ZhouYuCS/BTFL.
no_new_dataset
0.947624
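The BTFL abstract above interpolates a local and a global head with a dual-Bayesian weight. The sketch below shows one toy way such an interpolation could be updated online, using a Beta-style pseudo-count posterior and max-probability as a likelihood proxy; it is far simpler than the paper's framework, and all names are assumptions.

```python
import numpy as np

class TwoHeadInterpolator:
    """Blend a personalized (local) head and a shared (global) head at
    test time. A Beta-style posterior over 'which head fits the stream'
    is updated from each head's confidence on incoming samples, so
    in-distribution streams drift toward the local head and
    out-of-distribution streams toward the global one."""
    def __init__(self, prior: float = 1.0):
        self.a = prior  # pseudo-counts favouring the local head
        self.b = prior  # pseudo-counts favouring the global head

    def predict(self, p_local: np.ndarray, p_global: np.ndarray) -> np.ndarray:
        w = self.a / (self.a + self.b)
        return w * p_local + (1.0 - w) * p_global

    def update(self, p_local: np.ndarray, p_global: np.ndarray) -> None:
        # Reward whichever head was more confident on this sample,
        # using max predicted probability as a crude likelihood proxy.
        self.a += float(p_local.max())
        self.b += float(p_global.max())

blender = TwoHeadInterpolator()
p_loc, p_glob = np.array([0.9, 0.1]), np.array([0.55, 0.45])
for _ in range(5):
    pred = blender.predict(p_loc, p_glob)
    blender.update(p_loc, p_glob)
print(pred)  # weight has drifted toward the sharper local head
```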
2503.06637
Lei Shi
Lei Shi, Andreas Bulling
CLAD: Constrained Latent Action Diffusion for Vision-Language Procedure Planning
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
We propose CLAD -- a Constrained Latent Action Diffusion model for vision-language procedure planning in instructional videos. Procedure planning is the challenging task of predicting intermediate actions given a visual observation of a start and a goal state. However, future interactive AI systems must also be able to plan procedures using multi-modal input, e.g., where visual observations are augmented with language descriptions. To tackle this vision-language procedure planning task, our method uses a Variational Autoencoder (VAE) to learn the latent representation of actions and observations as constraints and integrate them into the diffusion process. This approach exploits the fact that the latent space of diffusion models already has semantics that can be used. We use the latent constraints to steer the diffusion model to better generate actions. We report extensive experiments on the popular CrossTask, Coin, and NIV datasets and show that our method outperforms state-of-the-art methods by a large margin. By evaluating ablated versions of our method, we further show that the proposed integration of the action and observation representations learnt in the VAE latent space is key to these performance improvements.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 14:31:46 GMT" } ]
2025-03-11T00:00:00
[ [ "Shi", "Lei", "" ], [ "Bulling", "Andreas", "" ] ]
TITLE: CLAD: Constrained Latent Action Diffusion for Vision-Language Procedure Planning ABSTRACT: We propose CLAD -- a Constrained Latent Action Diffusion model for vision-language procedure planning in instructional videos. Procedure planning is the challenging task of predicting intermediate actions given a visual observation of a start and a goal state. However, future interactive AI systems must also be able to plan procedures using multi-modal input, e.g., where visual observations are augmented with language descriptions. To tackle this vision-language procedure planning task, our method uses a Variational Autoencoder (VAE) to learn the latent representation of actions and observations as constraints and integrate them into the diffusion process. This approach exploits the fact that the latent space of diffusion models already has semantics that can be used. We use the latent constraints to steer the diffusion model to better generate actions. We report extensive experiments on the popular CrossTask, Coin, and NIV datasets and show that our method outperforms state-of-the-art methods by a large margin. By evaluating ablated versions of our method, we further show that the proposed integration of the action and observation representations learnt in the VAE latent space is key to these performance improvements.
no_new_dataset
0.94366
2503.06647
Hassan Kazemi Tehrani
Hassan Kazemi Tehrani, Jun Cai, Abbas Yekanlou, and Sylvia Santosa
Personalized Class Incremental Context-Aware Food Classification for Food Intake Monitoring Systems
null
null
null
null
cs.CV cs.CE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate food intake monitoring is crucial for maintaining a healthy diet and preventing nutrition-related diseases. With the diverse range of foods consumed across various cultures, classic food classification models have limitations due to their reliance on fixed-sized food datasets. Studies show that people consume only a small range of the available foods, with each individual consuming a unique set. Existing class-incremental models have low accuracy for the new classes and lack personalization. This paper introduces a personalized, class-incremental food classification model designed to overcome these challenges and improve the performance of food intake monitoring systems. Our approach adapts itself to newly introduced food classes, maintaining applicability and accuracy for both new and existing classes through personalization. Our model's primary focus is personalization, which improves classification accuracy by prioritizing a subset of foods based on an individual's eating habits, including meal frequency, times, and locations. A modified version of DSN is utilized to expand the model when new food classes appear. Additionally, we propose a comprehensive framework that integrates this model into a food intake monitoring system. This system analyzes meal images provided by users, makes use of a smart scale to estimate food weight, utilizes a nutrient content database to calculate the amount of each macro-nutrient, and creates a dietary user profile through a mobile application. Finally, experimental evaluations on two new benchmark datasets, FOOD101-Personal and VFN-Personal, personalized versions of well-known datasets for food classification, are conducted to demonstrate the effectiveness of our model in improving the classification accuracy of both new and existing classes, addressing the limitations of both conventional and class-incremental food classification models.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 14:50:56 GMT" } ]
2025-03-11T00:00:00
[ [ "Tehrani", "Hassan Kazemi", "" ], [ "Cai", "Jun", "" ], [ "Yekanlou", "Abbas", "" ], [ "Santosa", "Sylvia", "" ] ]
TITLE: Personalized Class Incremental Context-Aware Food Classification for Food Intake Monitoring Systems ABSTRACT: Accurate food intake monitoring is crucial for maintaining a healthy diet and preventing nutrition-related diseases. With the diverse range of foods consumed across various cultures, classic food classification models have limitations due to their reliance on fixed-sized food datasets. Studies show that people consume only a small range of the available foods, with each individual consuming a unique set. Existing class-incremental models have low accuracy for the new classes and lack personalization. This paper introduces a personalized, class-incremental food classification model designed to overcome these challenges and improve the performance of food intake monitoring systems. Our approach adapts itself to newly introduced food classes, maintaining applicability and accuracy for both new and existing classes through personalization. Our model's primary focus is personalization, which improves classification accuracy by prioritizing a subset of foods based on an individual's eating habits, including meal frequency, times, and locations. A modified version of DSN is utilized to expand the model when new food classes appear. Additionally, we propose a comprehensive framework that integrates this model into a food intake monitoring system. This system analyzes meal images provided by users, makes use of a smart scale to estimate food weight, utilizes a nutrient content database to calculate the amount of each macro-nutrient, and creates a dietary user profile through a mobile application. Finally, experimental evaluations on two new benchmark datasets, FOOD101-Personal and VFN-Personal, personalized versions of well-known datasets for food classification, are conducted to demonstrate the effectiveness of our model in improving the classification accuracy of both new and existing classes, addressing the limitations of both conventional and class-incremental food classification models.
no_new_dataset
0.956063
2503.06648
Hender Lin
Hender Lin
Enhancing NLP Robustness and Generalization through LLM-Generated Contrast Sets: A Scalable Framework for Systematic Evaluation and Adversarial Training
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Standard NLP benchmarks often fail to capture vulnerabilities stemming from dataset artifacts and spurious correlations. Contrast sets address this gap by challenging models near decision boundaries but are traditionally labor-intensive to create and limited in diversity. This study leverages large language models to automate the generation of diverse contrast sets. Using the SNLI dataset, we created a 3,000-example contrast set to evaluate and improve model robustness. Fine-tuning on these contrast sets enhanced performance on systematically perturbed examples, maintained standard test accuracy, and modestly improved generalization to novel perturbations. This automated approach offers a scalable solution for evaluating and improving NLP models, addressing systematic generalization challenges, and advancing robustness in real-world applications.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 14:52:53 GMT" } ]
2025-03-11T00:00:00
[ [ "Lin", "Hender", "" ] ]
TITLE: Enhancing NLP Robustness and Generalization through LLM-Generated Contrast Sets: A Scalable Framework for Systematic Evaluation and Adversarial Training ABSTRACT: Standard NLP benchmarks often fail to capture vulnerabilities stemming from dataset artifacts and spurious correlations. Contrast sets address this gap by challenging models near decision boundaries but are traditionally labor-intensive to create and limited in diversity. This study leverages large language models to automate the generation of diverse contrast sets. Using the SNLI dataset, we created a 3,000-example contrast set to evaluate and improve model robustness. Fine-tuning on these contrast sets enhanced performance on systematically perturbed examples, maintained standard test accuracy, and modestly improved generalization to novel perturbations. This automated approach offers a scalable solution for evaluating and improving NLP models, addressing systematic generalization challenges, and advancing robustness in real-world applications.
no_new_dataset
0.926503
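The abstract above automates contrast-set creation with an LLM. A minimal generation loop is sketched below; the prompt template, label-flip targets, and the `llm` callable are all hypothetical stand-ins for a real completion API.

```python
from typing import Callable, Dict, List

PROMPT = (
    "Minimally edit the premise or hypothesis so the NLI label changes "
    "from {label} to {target}. Keep the edit small and fluent.\n"
    "Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Return only the edited pair."
)

def generate_contrast_set(examples: List[Dict], llm: Callable[[str], str],
                          targets: Dict[str, str]) -> List[Dict]:
    """For each (premise, hypothesis, label) triple, ask an LLM for a
    minimal perturbation that flips the gold label, yielding
    near-decision-boundary examples for evaluation or fine-tuning."""
    out = []
    for ex in examples:
        tgt = targets[ex["label"]]
        reply = llm(PROMPT.format(target=tgt, **ex))
        out.append({"text": reply, "label": tgt, "source": ex})
    return out

# `fake_llm` stands in for a real completion call.
fake_llm = lambda prompt: "edited premise / hypothesis pair"
seed = [{"premise": "A man plays guitar.", "hypothesis": "A man makes music.",
         "label": "entailment"}]
print(generate_contrast_set(seed, fake_llm,
                            targets={"entailment": "contradiction"}))
```

In practice the generated pairs would still need label verification (human or model-based) before being used for adversarial training, since LLM edits do not always achieve the requested flip.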
2503.06651
Tengjiao Wang
Tengjiao Wang, Zhenyu Kang, Ting Li, Zhihui Chen, Shaobo Wang, Yingpei Lin, Yan Wang, and Yichuan Yu
Electromagnetic Information Theory: Fundamentals, Paradigm Shifts, and Applications
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper explores the emerging research direction of electromagnetic information theory (EIT), which aims to integrate traditional Shannon-based methodologies with physical consistency, particularly the electromagnetic properties of communication channels. We propose an EIT-based multiple-input multiple-output (MIMO) paradigm that enhances conventional spatially-discrete MIMO models by incorporating the concepts of electromagnetic (EM) precoding and EM combining. This approach aims to improve the modeling of next-generation systems while remaining consistent with Shannon's theoretical foundations. We explore typical EIT applications, such as densely spaced MIMO, near-field communications, and tri-polarized antennas, and analyze their channel characteristics through theoretical simulations and measured datasets. The paper also discusses critical research challenges and opportunities for EIT applications from an industrial perspective, emphasizing the field's potential for practical applications.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 15:05:40 GMT" } ]
2025-03-11T00:00:00
[ [ "Wang", "Tengjiao", "" ], [ "Kang", "Zhenyu", "" ], [ "Li", "Ting", "" ], [ "Chen", "Zhihui", "" ], [ "Wang", "Shaobo", "" ], [ "Lin", "Yingpei", "" ], [ "Wang", "Yan", "" ], [ "Yu", "Yichuan", "" ] ]
TITLE: Electromagnetic Information Theory: Fundamentals, Paradigm Shifts, and Applications ABSTRACT: This paper explores the emerging research direction of electromagnetic information theory (EIT), which aims to integrate traditional Shannon-based methodologies with physical consistency, particularly the electromagnetic properties of communication channels. We propose an EIT-based multiple-input multiple-output (MIMO) paradigm that enhances conventional spatially-discrete MIMO models by incorporating the concepts of electromagnetic (EM) precoding and EM combining. This approach aims to improve the modeling of next-generation systems while remaining consistent with Shannon's theoretical foundations. We explore typical EIT applications, such as densely spaced MIMO, near-field communications, and tri-polarized antennas, and analyze their channel characteristics through theoretical simulations and measured datasets. The paper also discusses critical research challenges and opportunities for EIT applications from an industrial perspective, emphasizing the field's potential for practical applications.
no_new_dataset
0.945248
2503.06664
Tommaso Bendinelli
Tommaso Bendinelli, Artur Dox, Christian Holz
Exploring LLM Agents for Cleaning Tabular Machine Learning Datasets
14 pages, 1 main figure, 3 plots, Published at ICLR 2025 Workshop on Foundation Models in the Wild
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
High-quality, error-free datasets are a key ingredient in building reliable, accurate, and unbiased machine learning (ML) models. However, real-world datasets often suffer from errors due to sensor malfunctions, data entry mistakes, or improper data integration across multiple sources, which can severely degrade model performance. Detecting and correcting these issues typically requires tailor-made solutions and demands extensive domain expertise. Consequently, automation is challenging, rendering the process labor-intensive and tedious. In this study, we investigate whether Large Language Models (LLMs) can help alleviate the burden of manual data cleaning. We set up an experiment in which an LLM, paired with Python, is tasked with cleaning the training dataset to improve the performance of a learning algorithm without having the ability to modify the training pipeline or perform any feature engineering. We run this experiment on multiple Kaggle datasets that have been intentionally corrupted with errors. Our results show that LLMs can identify and correct erroneous entries, such as illogical values or outliers, by leveraging contextual information from other features within the same row, as well as feedback from previous iterations. However, they struggle to detect more complex errors that require understanding data distribution across multiple rows, such as trends and biases.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 15:29:46 GMT" } ]
2025-03-11T00:00:00
[ [ "Bendinelli", "Tommaso", "" ], [ "Dox", "Artur", "" ], [ "Holz", "Christian", "" ] ]
TITLE: Exploring LLM Agents for Cleaning Tabular Machine Learning Datasets ABSTRACT: High-quality, error-free datasets are a key ingredient in building reliable, accurate, and unbiased machine learning (ML) models. However, real-world datasets often suffer from errors due to sensor malfunctions, data entry mistakes, or improper data integration across multiple sources, which can severely degrade model performance. Detecting and correcting these issues typically requires tailor-made solutions and demands extensive domain expertise. Consequently, automation is challenging, rendering the process labor-intensive and tedious. In this study, we investigate whether Large Language Models (LLMs) can help alleviate the burden of manual data cleaning. We set up an experiment in which an LLM, paired with Python, is tasked with cleaning the training dataset to improve the performance of a learning algorithm without having the ability to modify the training pipeline or perform any feature engineering. We run this experiment on multiple Kaggle datasets that have been intentionally corrupted with errors. Our results show that LLMs can identify and correct erroneous entries, such as illogical values or outliers, by leveraging contextual information from other features within the same row, as well as feedback from previous iterations. However, they struggle to detect more complex errors that require understanding data distribution across multiple rows, such as trends and biases.
no_new_dataset
0.941654
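The abstract above pairs an LLM with Python to clean training data iteratively. The sketch below shows one possible accept-if-not-worse cleaning loop; `llm_fix_row` abstracts the real prompt-serialize/parse round trip, and the toy fixer and validation score are invented for illustration.

```python
from typing import Callable, Dict
import pandas as pd

def clean_dataset(df: pd.DataFrame,
                  llm_fix_row: Callable[[Dict], Dict],
                  score: Callable[[pd.DataFrame], float],
                  n_rounds: int = 2) -> pd.DataFrame:
    """Iteratively ask an LLM to repair rows one at a time; keep a round
    of edits only if a held-out validation score does not get worse."""
    for _ in range(n_rounds):
        cleaned = df.copy()
        for idx, row in df.iterrows():
            cleaned.loc[idx] = pd.Series(llm_fix_row(row.to_dict()))
        if score(cleaned) >= score(df):   # accept edits only if not worse
            df = cleaned
    return df

# Stand-in for the model: fix an obviously illogical negative age using
# context from the rest of the row.
def toy_fix(row: Dict) -> Dict:
    if row["age"] < 0:
        row = {**row, "age": abs(row["age"])}
    return row

data = pd.DataFrame({"age": [34, -29, 51], "income": [52_000, 48_000, 75_000]})
fewer_errors = lambda d: -float((d["age"] < 0).sum())  # toy "validation score"
print(clean_dataset(data, toy_fix, fewer_errors))
```

The accept-if-not-worse gate mirrors the feedback loop the abstract describes: edits that do not help the downstream learner are simply discarded at the end of a round.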
2503.06678
Hantao Zhou
Hantao Zhou, Rui Yang, Longxiang Tang, Guanyi Qin, Yan Zhang, Runze Hu, Xiu Li
Gamma: Toward Generic Image Assessment with Mixture of Assessment Experts
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image assessment aims to evaluate the quality and aesthetics of images and has been applied across various scenarios, such as natural and AIGC scenes. Existing methods mostly address these sub-tasks or scenes individually. While some works attempt to develop unified image assessment models, they have struggled to achieve satisfactory performance or cover a broad spectrum of assessment scenarios. In this paper, we present Gamma, a Generic imAge assessMent model using Mixture of Assessment Experts, which can effectively assess images from diverse scenes through mixed-dataset training. Achieving unified training in image assessment presents significant challenges due to annotation biases across different datasets. To address this issue, we first propose a Mixture of Assessment Experts (MoAE) module, which employs shared and adaptive experts to dynamically learn common and specific knowledge for different datasets, respectively. In addition, we introduce a Scene-based Differential Prompt (SDP) strategy, which uses scene-specific prompts to provide prior knowledge and guidance during the learning process, further boosting adaptation for various scenes. Our Gamma model is trained and evaluated on 12 datasets spanning 6 image assessment scenarios. Extensive experiments show that our unified Gamma outperforms other state-of-the-art mixed-training methods by significant margins while covering more scenes. Code: https://github.com/zht8506/Gamma.
[ { "version": "v1", "created": "Sun, 9 Mar 2025 16:07:58 GMT" } ]
2025-03-11T00:00:00
[ [ "Zhou", "Hantao", "" ], [ "Yang", "Rui", "" ], [ "Tang", "Longxiang", "" ], [ "Qin", "Guanyi", "" ], [ "Zhang", "Yan", "" ], [ "Hu", "Runze", "" ], [ "Li", "Xiu", "" ] ]
TITLE: Gamma: Toward Generic Image Assessment with Mixture of Assessment Experts ABSTRACT: Image assessment aims to evaluate the quality and aesthetics of images and has been applied across various scenarios, such as natural and AIGC scenes. Existing methods mostly address these sub-tasks or scenes individually. While some works attempt to develop unified image assessment models, they have struggled to achieve satisfactory performance or cover a broad spectrum of assessment scenarios. In this paper, we present Gamma, a Generic imAge assessMent model using Mixture of Assessment Experts, which can effectively assess images from diverse scenes through mixed-dataset training. Achieving unified training in image assessment presents significant challenges due to annotation biases across different datasets. To address this issue, we first propose a Mixture of Assessment Experts (MoAE) module, which employs shared and adaptive experts to dynamically learn common and specific knowledge for different datasets, respectively. In addition, we introduce a Scene-based Differential Prompt (SDP) strategy, which uses scene-specific prompts to provide prior knowledge and guidance during the learning process, further boosting adaptation for various scenes. Our Gamma model is trained and evaluated on 12 datasets spanning 6 image assessment scenarios. Extensive experiments show that our unified Gamma outperforms other state-of-the-art mixed-training methods by significant margins while covering more scenes. Code: https://github.com/zht8506/Gamma.
no_new_dataset
0.941975