Dataset schema (field, type, observed range):

  id               string        length 9 to 16
  submitter        string        length 3 to 64
  authors          string        length 5 to 6.63k
  title            string        length 7 to 245
  comments         string        length 1 to 482
  journal-ref      string        length 4 to 382
  doi              string        length 9 to 151
  report-no        string        984 distinct values
  categories       string        length 5 to 108
  license          string        9 distinct values
  abstract         string        length 83 to 3.41k
  versions         list          length 1 to 20
  update_date      timestamp[s]  2007-05-23 00:00:00 to 2025-04-11 00:00:00
  authors_parsed   list          length 1 to 427
  prompt           string        length 166 to 3.49k
  label            string        2 distinct values
  prob             float64       0.5 to 0.98
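The records that follow list their values in the field order above; the type annotations (stringlengths, stringclasses, listlengths) appear to match the column summaries shown by the Hugging Face dataset viewer. Below is a minimal sketch of how such a dataset could be loaded and filtered with the `datasets` library, assuming it is published on the Hugging Face Hub; the repository id is a placeholder, not the real location of this data.

```python
# Sketch only: assumes these records are published as a Hugging Face dataset.
# "someuser/arxiv-new-dataset-labels" is a placeholder repository id.
from datasets import load_dataset

ds = load_dataset("someuser/arxiv-new-dataset-labels", split="train")

print(ds.features)                        # field names and dtypes, as in the schema above
row = ds[0]
print(row["title"], "->", row["label"], row["prob"])

# Keep only rows whose label confidence is high (prob spans 0.5 to 0.98 in this split).
confident = ds.filter(lambda r: r["prob"] >= 0.9)
print(f"kept {len(confident)} of {len(ds)} rows")
```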
id: 2503.09153
submitter: Chaowei Zhang
authors: Chaowei Zhang, Zongling Feng, Zewei Zhang, Jipeng Qiang, Guandong Xu, and Yun Li
title: Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection
comments: 9 pages, 12 figures, conference
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The questionable responses caused by knowledge hallucination may lead to LLMs' unstable ability in decision-making. However, it has never been investigated whether the LLMs' hallucination is possibly usable to generate negative reasoning for facilitating the detection of fake news. This study proposes a novel supervised self-reinforced reasoning rectification approach - SR$^3$ that yields both common reasonable reasoning and wrong understandings (negative reasoning) for news via LLMs reflection for semantic consistency learning. Upon that, we construct a negative reasoning-based news learning model called - \emph{NRFE}, which leverages positive or negative news-reasoning pairs for learning the semantic consistency between them. To avoid the impact of label-implicated reasoning, we deploy a student model - \emph{NRFE-D} that only takes news content as input to inspect the performance of our method by distilling the knowledge from \emph{NRFE}. The experimental results verified on three popular fake news datasets demonstrate the superiority of our method compared with three kinds of baselines including prompting on LLMs, fine-tuning on pre-trained SLMs, and other representative fake news detection methods.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 08:29:59 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhang", "Chaowei", "" ], [ "Feng", "Zongling", "" ], [ "Zhang", "Zewei", "" ], [ "Qiang", "Jipeng", "" ], [ "Xu", "Guandong", "" ], [ "Li", "Yun", "" ] ]
TITLE: Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection ABSTRACT: The questionable responses caused by knowledge hallucination may lead to LLMs' unstable ability in decision-making. However, it has never been investigated whether the LLMs' hallucination is possibly usable to generate negative reasoning for facilitating the detection of fake news. This study proposes a novel supervised self-reinforced reasoning rectification approach - SR$^3$ that yields both common reasonable reasoning and wrong understandings (negative reasoning) for news via LLMs reflection for semantic consistency learning. Upon that, we construct a negative reasoning-based news learning model called - \emph{NRFE}, which leverages positive or negative news-reasoning pairs for learning the semantic consistency between them. To avoid the impact of label-implicated reasoning, we deploy a student model - \emph{NRFE-D} that only takes news content as input to inspect the performance of our method by distilling the knowledge from \emph{NRFE}. The experimental results verified on three popular fake news datasets demonstrate the superiority of our method compared with three kinds of baselines including prompting on LLMs, fine-tuning on pre-trained SLMs, and other representative fake news detection methods.
label: no_new_dataset
prob: 0.945551
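Comparing fields within the record above, the prompt value appears to be a plain concatenation of the title and abstract. A minimal sketch of that presumed construction follows; the helper name is illustrative, and exact whitespace may differ from the actual preprocessing.

```python
def build_prompt(record: dict) -> str:
    # Presumed template inferred from the rows in this file: "TITLE: <title> ABSTRACT: <abstract>".
    return f"TITLE: {record['title']} ABSTRACT: {record['abstract']}"

example = {
    "title": "Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection",
    "abstract": "The questionable responses caused by knowledge hallucination may lead to ...",
}
print(build_prompt(example))
```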
id: 2503.09154
submitter: Yunyang Ge
authors: Chengshu Zhao, Yunyang Ge, Xinhua Cheng, Bin Zhu, Yatian Pang, Bin Lin, Fan Yang, Feng Gao, Li Yuan
title: SwapAnyone: Consistent and Realistic Video Synthesis for Swapping Any Person into Any Video
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video body-swapping aims to replace the body in an existing video with a new body from arbitrary sources, which has garnered more attention in recent years. Existing methods treat video body-swapping as a composite of multiple tasks instead of an independent task and typically rely on various models to achieve video body-swapping sequentially. However, these methods fail to achieve end-to-end optimization for the video body-swapping which causes issues such as variations in luminance among frames, disorganized occlusion relationships, and the noticeable separation between bodies and background. In this work, we define video body-swapping as an independent task and propose three critical consistencies: identity consistency, motion consistency, and environment consistency. We introduce an end-to-end model named SwapAnyone, treating video body-swapping as a video inpainting task with reference fidelity and motion control. To improve the ability to maintain environmental harmony, particularly luminance harmony in the resulting video, we introduce a novel EnvHarmony strategy for training our model progressively. Additionally, we provide a dataset named HumanAction-32K covering various videos about human actions. Extensive experiments demonstrate that our method achieves State-Of-The-Art (SOTA) performance among open-source methods while approaching or surpassing closed-source models across multiple dimensions. All code, model weights, and the HumanAction-32K dataset will be open-sourced at https://github.com/PKU-YuanGroup/SwapAnyone.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 08:30:29 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhao", "Chengshu", "" ], [ "Ge", "Yunyang", "" ], [ "Cheng", "Xinhua", "" ], [ "Zhu", "Bin", "" ], [ "Pang", "Yatian", "" ], [ "Lin", "Bin", "" ], [ "Yang", "Fan", "" ], [ "Gao", "Feng", "" ], [ "Yuan", "Li", "" ] ]
TITLE: SwapAnyone: Consistent and Realistic Video Synthesis for Swapping Any Person into Any Video ABSTRACT: Video body-swapping aims to replace the body in an existing video with a new body from arbitrary sources, which has garnered more attention in recent years. Existing methods treat video body-swapping as a composite of multiple tasks instead of an independent task and typically rely on various models to achieve video body-swapping sequentially. However, these methods fail to achieve end-to-end optimization for the video body-swapping which causes issues such as variations in luminance among frames, disorganized occlusion relationships, and the noticeable separation between bodies and background. In this work, we define video body-swapping as an independent task and propose three critical consistencies: identity consistency, motion consistency, and environment consistency. We introduce an end-to-end model named SwapAnyone, treating video body-swapping as a video inpainting task with reference fidelity and motion control. To improve the ability to maintain environmental harmony, particularly luminance harmony in the resulting video, we introduce a novel EnvHarmony strategy for training our model progressively. Additionally, we provide a dataset named HumanAction-32K covering various videos about human actions. Extensive experiments demonstrate that our method achieves State-Of-The-Art (SOTA) performance among open-source methods while approaching or surpassing closed-source models across multiple dimensions. All code, model weights, and the HumanAction-32K dataset will be open-sourced at https://github.com/PKU-YuanGroup/SwapAnyone.
label: new_dataset
prob: 0.964456
id: 2503.09159
submitter: Andrej Tschalzev
authors: Andrej Tschalzev, Lennart Purucker, Stefan Lüdtke, Frank Hutter, Christian Bartelt, Heiner Stuckenschmidt
title: Unreflected Use of Tabular Data Repositories Can Undermine Research Quality
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://creativecommons.org/licenses/by/4.0/
Data repositories have accumulated a large number of tabular datasets from various domains. Machine Learning researchers are actively using these datasets to evaluate novel approaches. Consequently, data repositories have an important standing in tabular data research. They not only host datasets but also provide information on how to use them in supervised learning tasks. In this paper, we argue that, despite great achievements in usability, the unreflected usage of datasets from data repositories may have led to reduced research quality and scientific rigor. We present examples from prominent recent studies that illustrate the problematic use of datasets from OpenML, a large data repository for tabular data. Our illustrations help users of data repositories avoid falling into the traps of (1) using suboptimal model selection strategies, (2) overlooking strong baselines, and (3) inappropriate preprocessing. In response, we discuss possible solutions for how data repositories can prevent the inappropriate use of datasets and become the cornerstones for improved overall quality of empirical research studies.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 08:41:49 GMT" } ]
2025-03-13T00:00:00
[ [ "Tschalzev", "Andrej", "" ], [ "Purucker", "Lennart", "" ], [ "Lüdtke", "Stefan", "" ], [ "Hutter", "Frank", "" ], [ "Bartelt", "Christian", "" ], [ "Stuckenschmidt", "Heiner", "" ] ]
TITLE: Unreflected Use of Tabular Data Repositories Can Undermine Research Quality ABSTRACT: Data repositories have accumulated a large number of tabular datasets from various domains. Machine Learning researchers are actively using these datasets to evaluate novel approaches. Consequently, data repositories have an important standing in tabular data research. They not only host datasets but also provide information on how to use them in supervised learning tasks. In this paper, we argue that, despite great achievements in usability, the unreflected usage of datasets from data repositories may have led to reduced research quality and scientific rigor. We present examples from prominent recent studies that illustrate the problematic use of datasets from OpenML, a large data repository for tabular data. Our illustrations help users of data repositories avoid falling into the traps of (1) using suboptimal model selection strategies, (2) overlooking strong baselines, and (3) inappropriate preprocessing. In response, we discuss possible solutions for how data repositories can prevent the inappropriate use of datasets and become the cornerstones for improved overall quality of empirical research studies.
label: no_new_dataset
prob: 0.952706
id: 2503.09170
submitter: Aman Prakash
authors: Aman Prakash, Nikumani Choudhury, Anakhi Hazarika, Alekhya Gorrela
title: Effective Feature Selection for Predicting Spreading Factor with ML in Large LoRaWAN-based Mobile IoT Networks
comments: Accepted at 31st National Conference on Communications
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LoRaWAN is a low-power long-range protocol that enables reliable and robust communication. This paper addresses the challenge of predicting the spreading factor (SF) in LoRaWAN networks using machine learning (ML) techniques. Optimal SF allocation is crucial for optimizing data transmission in IoT-enabled mobile devices, yet it remains a challenging task due to the fluctuation in environment and network conditions. We evaluated ML model performance across a large publicly available dataset to explore the best feature across key LoRaWAN features such as RSSI, SNR, frequency, distance between end devices and gateways, and antenna height of the end device, further, we also experimented with 31 different combinations possible for 5 features. We trained and evaluated the model using k-nearest neighbors (k-NN), Decision Tree Classifier (DTC), Random Forest (RF), and Multinomial Logistic Regression (MLR) algorithms. The combination of RSSI and SNR was identified as the best feature set. The finding of this paper provides valuable information for reducing the overall cost of dataset collection for ML model training and extending the battery life of LoRaWAN devices. This work contributes to a more reliable LoRaWAN system by understanding the importance of specific feature sets for optimized SF allocation.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 08:58:28 GMT" } ]
2025-03-13T00:00:00
[ [ "Prakash", "Aman", "" ], [ "Choudhury", "Nikumani", "" ], [ "Hazarika", "Anakhi", "" ], [ "Gorrela", "Alekhya", "" ] ]
TITLE: Effective Feature Selection for Predicting Spreading Factor with ML in Large LoRaWAN-based Mobile IoT Networks ABSTRACT: LoRaWAN is a low-power long-range protocol that enables reliable and robust communication. This paper addresses the challenge of predicting the spreading factor (SF) in LoRaWAN networks using machine learning (ML) techniques. Optimal SF allocation is crucial for optimizing data transmission in IoT-enabled mobile devices, yet it remains a challenging task due to the fluctuation in environment and network conditions. We evaluated ML model performance across a large publicly available dataset to explore the best feature across key LoRaWAN features such as RSSI, SNR, frequency, distance between end devices and gateways, and antenna height of the end device, further, we also experimented with 31 different combinations possible for 5 features. We trained and evaluated the model using k-nearest neighbors (k-NN), Decision Tree Classifier (DTC), Random Forest (RF), and Multinomial Logistic Regression (MLR) algorithms. The combination of RSSI and SNR was identified as the best feature set. The finding of this paper provides valuable information for reducing the overall cost of dataset collection for ML model training and extending the battery life of LoRaWAN devices. This work contributes to a more reliable LoRaWAN system by understanding the importance of specific feature sets for optimized SF allocation.
label: no_new_dataset
prob: 0.953319
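The LoRaWAN record above describes evaluating all 31 non-empty combinations of 5 candidate features (RSSI, SNR, frequency, distance, antenna height) with four classifiers (k-NN, DTC, RF, MLR). The sketch below shows that kind of feature-subset sweep with scikit-learn on synthetic stand-in data; it is not the authors' code, and the column names and target are assumptions.

```python
# Sketch of a feature-subset sweep like the one described in the LoRaWAN abstract above.
# Synthetic stand-in data; column names and the "sf" target are assumptions.
from itertools import combinations

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "rssi": rng.normal(-100, 10, n),
    "snr": rng.normal(5, 3, n),
    "frequency": rng.choice([868.1, 868.3, 868.5], n),
    "distance": rng.uniform(0.1, 10.0, n),
    "antenna_height": rng.uniform(1.0, 30.0, n),
})
df["sf"] = rng.integers(7, 13, n)  # spreading factor SF7..SF12 (random placeholder labels)

features = ["rssi", "snr", "frequency", "distance", "antenna_height"]
models = {
    "k-NN": KNeighborsClassifier(),
    "DTC": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "MLR": LogisticRegression(max_iter=1000),  # multinomial for multi-class targets
}

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["sf"], test_size=0.3, random_state=0
)

results = []
for k in range(1, len(features) + 1):            # all 2^5 - 1 = 31 non-empty subsets
    for subset in combinations(features, k):
        for name, model in models.items():
            model.fit(X_train[list(subset)], y_train)
            acc = model.score(X_test[list(subset)], y_test)
            results.append((subset, name, acc))

best = max(results, key=lambda r: r[2])
print("best feature subset:", best[0], "model:", best[1], f"accuracy: {best[2]:.3f}")
```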
id: 2503.09181
submitter: Katsumi Takahashi
authors: Katsumi Takahashi, Koh Takeuchi, Hisashi Kashima
title: Dynamic Feature Selection from Variable Feature Sets Using Features of Features
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning models usually assume that a set of feature values used to obtain an output is fixed in advance. However, in many real-world problems, a cost is associated with measuring these features. To address the issue of reducing measurement costs, various methods have been proposed to dynamically select which features to measure, but existing methods assume that the set of measurable features remains constant, which makes them unsuitable for cases where the set of measurable features varies from instance to instance. To overcome this limitation, we define a new problem setting for Dynamic Feature Selection (DFS) with variable feature sets and propose a deep learning method that utilizes prior information about each feature, referred to as ''features of features''. Experimental results on several datasets demonstrate that the proposed method effectively selects features based on the prior information, even when the set of measurable features changes from instance to instance.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 09:13:21 GMT" } ]
2025-03-13T00:00:00
[ [ "Takahashi", "Katsumi", "" ], [ "Takeuchi", "Koh", "" ], [ "Kashima", "Hisashi", "" ] ]
TITLE: Dynamic Feature Selection from Variable Feature Sets Using Features of Features ABSTRACT: Machine learning models usually assume that a set of feature values used to obtain an output is fixed in advance. However, in many real-world problems, a cost is associated with measuring these features. To address the issue of reducing measurement costs, various methods have been proposed to dynamically select which features to measure, but existing methods assume that the set of measurable features remains constant, which makes them unsuitable for cases where the set of measurable features varies from instance to instance. To overcome this limitation, we define a new problem setting for Dynamic Feature Selection (DFS) with variable feature sets and propose a deep learning method that utilizes prior information about each feature, referred to as ''features of features''. Experimental results on several datasets demonstrate that the proposed method effectively selects features based on the prior information, even when the set of measurable features changes from instance to instance.
label: no_new_dataset
prob: 0.945901
id: 2503.09185
submitter: Yuanyang Zhang
authors: Yuanyang Zhang, Yijie Lin, Weiqing Yan, Li Yao, Xinhang Wan, Guangyuan Li, Chao Zhang, Guanzhou Ke, Jie Xu
title: Incomplete Multi-view Clustering via Diffusion Contrastive Generation
comments: null
journal-ref: AAAI 2025
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incomplete multi-view clustering (IMVC) has garnered increasing attention in recent years due to the common issue of missing data in multi-view datasets. The primary approach to address this challenge involves recovering the missing views before applying conventional multi-view clustering methods. Although imputation-based IMVC methods have achieved significant improvements, they still encounter notable limitations: 1) heavy reliance on paired data for training the data recovery module, which is impractical in real scenarios with high missing data rates; 2) the generated data often lacks diversity and discriminability, resulting in suboptimal clustering results. To address these shortcomings, we propose a novel IMVC method called Diffusion Contrastive Generation (DCG). Motivated by the consistency between the diffusion and clustering processes, DCG learns the distribution characteristics to enhance clustering by applying forward diffusion and reverse denoising processes to intra-view data. By performing contrastive learning on a limited set of paired multi-view samples, DCG can align the generated views with the real views, facilitating accurate recovery of views across arbitrary missing view scenarios. Additionally, DCG integrates instance-level and category-level interactive learning to exploit the consistent and complementary information available in multi-view data, achieving robust and end-to-end clustering. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 09:27:25 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhang", "Yuanyang", "" ], [ "Lin", "Yijie", "" ], [ "Yan", "Weiqing", "" ], [ "Yao", "Li", "" ], [ "Wan", "Xinhang", "" ], [ "Li", "Guangyuan", "" ], [ "Zhang", "Chao", "" ], [ "Ke", "Guanzhou", "" ], [ "Xu", "Jie", "" ] ]
TITLE: Incomplete Multi-view Clustering via Diffusion Contrastive Generation ABSTRACT: Incomplete multi-view clustering (IMVC) has garnered increasing attention in recent years due to the common issue of missing data in multi-view datasets. The primary approach to address this challenge involves recovering the missing views before applying conventional multi-view clustering methods. Although imputation-based IMVC methods have achieved significant improvements, they still encounter notable limitations: 1) heavy reliance on paired data for training the data recovery module, which is impractical in real scenarios with high missing data rates; 2) the generated data often lacks diversity and discriminability, resulting in suboptimal clustering results. To address these shortcomings, we propose a novel IMVC method called Diffusion Contrastive Generation (DCG). Motivated by the consistency between the diffusion and clustering processes, DCG learns the distribution characteristics to enhance clustering by applying forward diffusion and reverse denoising processes to intra-view data. By performing contrastive learning on a limited set of paired multi-view samples, DCG can align the generated views with the real views, facilitating accurate recovery of views across arbitrary missing view scenarios. Additionally, DCG integrates instance-level and category-level interactive learning to exploit the consistent and complementary information available in multi-view data, achieving robust and end-to-end clustering. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches.
label: no_new_dataset
prob: 0.948106
id: 2503.09186
submitter: Jian-Jian Jiang
authors: Jian-Jian Jiang, Xiao-Ming Wu, Yi-Xiang He, Ling-An Zeng, Yi-Lin Wei, Dandan Zhang, Wei-Shi Zheng
title: Rethinking Bimanual Robotic Manipulation: Learning with Decoupled Interaction Framework
comments: 14 pages, 8 figures
journal-ref: null
doi: null
report-no: null
categories: cs.RO cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bimanual robotic manipulation is an emerging and critical topic in the robotics community. Previous works primarily rely on integrated control models that take the perceptions and states of both arms as inputs to directly predict their actions. However, we think bimanual manipulation involves not only coordinated tasks but also various uncoordinated tasks that do not require explicit cooperation during execution, such as grasping objects with the closest hand, which integrated control frameworks ignore to consider due to their enforced cooperation in the early inputs. In this paper, we propose a novel decoupled interaction framework that considers the characteristics of different tasks in bimanual manipulation. The key insight of our framework is to assign an independent model to each arm to enhance the learning of uncoordinated tasks, while introducing a selective interaction module that adaptively learns weights from its own arm to improve the learning of coordinated tasks. Extensive experiments on seven tasks in the RoboTwin dataset demonstrate that: (1) Our framework achieves outstanding performance, with a 23.5% boost over the SOTA method. (2) Our framework is flexible and can be seamlessly integrated into existing methods. (3) Our framework can be effectively extended to multi-agent manipulation tasks, achieving a 28% boost over the integrated control SOTA. (4) The performance boost stems from the decoupled design itself, surpassing the SOTA by 16.5% in success rate with only 1/6 of the model size.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 09:28:41 GMT" } ]
2025-03-13T00:00:00
[ [ "Jiang", "Jian-Jian", "" ], [ "Wu", "Xiao-Ming", "" ], [ "He", "Yi-Xiang", "" ], [ "Zeng", "Ling-An", "" ], [ "Wei", "Yi-Lin", "" ], [ "Zhang", "Dandan", "" ], [ "Zheng", "Wei-Shi", "" ] ]
TITLE: Rethinking Bimanual Robotic Manipulation: Learning with Decoupled Interaction Framework ABSTRACT: Bimanual robotic manipulation is an emerging and critical topic in the robotics community. Previous works primarily rely on integrated control models that take the perceptions and states of both arms as inputs to directly predict their actions. However, we think bimanual manipulation involves not only coordinated tasks but also various uncoordinated tasks that do not require explicit cooperation during execution, such as grasping objects with the closest hand, which integrated control frameworks ignore to consider due to their enforced cooperation in the early inputs. In this paper, we propose a novel decoupled interaction framework that considers the characteristics of different tasks in bimanual manipulation. The key insight of our framework is to assign an independent model to each arm to enhance the learning of uncoordinated tasks, while introducing a selective interaction module that adaptively learns weights from its own arm to improve the learning of coordinated tasks. Extensive experiments on seven tasks in the RoboTwin dataset demonstrate that: (1) Our framework achieves outstanding performance, with a 23.5% boost over the SOTA method. (2) Our framework is flexible and can be seamlessly integrated into existing methods. (3) Our framework can be effectively extended to multi-agent manipulation tasks, achieving a 28% boost over the integrated control SOTA. (4) The performance boost stems from the decoupled design itself, surpassing the SOTA by 16.5% in success rate with only 1/6 of the model size.
label: no_new_dataset
prob: 0.9463
id: 2503.09187
submitter: Qipeng Mei
authors: Qipeng Mei, Dimitri Bulatov and Dorota Iwaszczuk
title: Polygonizing Roof Segments from High-Resolution Aerial Images Using Yolov8-Based Edge Detection
comments: 12 pages, 6 figures, conference paper (VISAPP 2025, part of the 20th International Joint Conference on Computer Vision, Imaging, and Computer Graphics Theory and Applications)
journal-ref: null
doi: 10.5220/0013130400003912
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
This study presents a novel approach for roof detail extraction and vectorization using remote sensing images. Unlike previous geometric-primitive-based methods that rely on the detection of corners, our method focuses on edge detection as the primary mechanism for roof reconstruction, while utilizing geometric relationships to define corners and faces. We adapt the YOLOv8 OBB model, originally designed for rotated object detection, to extract roof edges effectively. Our method demonstrates robustness against noise and occlusion, leading to precise vectorized representations of building roofs. Experiments conducted on the SGA and Melville datasets highlight the method's effectiveness. At the raster level, our model outperforms the state-of-the-art foundation segmentation model (SAM), achieving a mIoU between 0.85 and 1 for most samples and an ovIoU close to 0.97. At the vector level, evaluation using the Hausdorff distance, PolyS metric, and our raster-vector-metric demonstrates significant improvements after polygonization, with a close approximation to the reference data. The method successfully handles diverse roof structures and refines edge gaps, even on complex roof structures of new, excluded from training datasets. Our findings underscore the potential of this approach to address challenges in automatic roof structure vectorization, supporting various applications such as urban terrain reconstruction.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 09:29:10 GMT" } ]
2025-03-13T00:00:00
[ [ "Mei", "Qipeng", "" ], [ "Bulatov", "Dimitri", "" ], [ "Iwaszczuk", "Dorota", "" ] ]
TITLE: Polygonizing Roof Segments from High-Resolution Aerial Images Using Yolov8-Based Edge Detection ABSTRACT: This study presents a novel approach for roof detail extraction and vectorization using remote sensing images. Unlike previous geometric-primitive-based methods that rely on the detection of corners, our method focuses on edge detection as the primary mechanism for roof reconstruction, while utilizing geometric relationships to define corners and faces. We adapt the YOLOv8 OBB model, originally designed for rotated object detection, to extract roof edges effectively. Our method demonstrates robustness against noise and occlusion, leading to precise vectorized representations of building roofs. Experiments conducted on the SGA and Melville datasets highlight the method's effectiveness. At the raster level, our model outperforms the state-of-the-art foundation segmentation model (SAM), achieving a mIoU between 0.85 and 1 for most samples and an ovIoU close to 0.97. At the vector level, evaluation using the Hausdorff distance, PolyS metric, and our raster-vector-metric demonstrates significant improvements after polygonization, with a close approximation to the reference data. The method successfully handles diverse roof structures and refines edge gaps, even on complex roof structures of new, excluded from training datasets. Our findings underscore the potential of this approach to address challenges in automatic roof structure vectorization, supporting various applications such as urban terrain reconstruction.
label: no_new_dataset
prob: 0.949995
id: 2503.09191
submitter: Juana Valeria Hurtado
authors: Juana Valeria Hurtado, Sajad Marvi, Rohit Mohan, and Abhinav Valada
title: Learning Appearance and Motion Cues for Panoptic Tracking
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.RO
license: http://creativecommons.org/licenses/by/4.0/
Panoptic tracking enables pixel-level scene interpretation of videos by integrating instance tracking in panoptic segmentation. This provides robots with a spatio-temporal understanding of the environment, an essential attribute for their operation in dynamic environments. In this paper, we propose a novel approach for panoptic tracking that simultaneously captures general semantic information and instance-specific appearance and motion features. Unlike existing methods that overlook dynamic scene attributes, our approach leverages both appearance and motion cues through dedicated network heads. These interconnected heads employ multi-scale deformable convolutions that reason about scene motion offsets with semantic context and motion-enhanced appearance features to learn tracking embeddings. Furthermore, we introduce a novel two-step fusion module that integrates the outputs from both heads by first matching instances from the current time step with propagated instances from previous time steps and subsequently refines associations using motion-enhanced appearance embeddings, improving robustness in challenging scenarios. Extensive evaluations of our proposed \netname model on two benchmark datasets demonstrate that it achieves state-of-the-art performance in panoptic tracking accuracy, surpassing prior methods in maintaining object identities over time. To facilitate future research, we make the code available at http://panoptictracking.cs.uni-freiburg.de
[ { "version": "v1", "created": "Wed, 12 Mar 2025 09:32:29 GMT" } ]
2025-03-13T00:00:00
[ [ "Hurtado", "Juana Valeria", "" ], [ "Marvi", "Sajad", "" ], [ "Mohan", "Rohit", "" ], [ "Valada", "Abhinav", "" ] ]
TITLE: Learning Appearance and Motion Cues for Panoptic Tracking ABSTRACT: Panoptic tracking enables pixel-level scene interpretation of videos by integrating instance tracking in panoptic segmentation. This provides robots with a spatio-temporal understanding of the environment, an essential attribute for their operation in dynamic environments. In this paper, we propose a novel approach for panoptic tracking that simultaneously captures general semantic information and instance-specific appearance and motion features. Unlike existing methods that overlook dynamic scene attributes, our approach leverages both appearance and motion cues through dedicated network heads. These interconnected heads employ multi-scale deformable convolutions that reason about scene motion offsets with semantic context and motion-enhanced appearance features to learn tracking embeddings. Furthermore, we introduce a novel two-step fusion module that integrates the outputs from both heads by first matching instances from the current time step with propagated instances from previous time steps and subsequently refines associations using motion-enhanced appearance embeddings, improving robustness in challenging scenarios. Extensive evaluations of our proposed \netname model on two benchmark datasets demonstrate that it achieves state-of-the-art performance in panoptic tracking accuracy, surpassing prior methods in maintaining object identities over time. To facilitate future research, we make the code available at http://panoptictracking.cs.uni-freiburg.de
label: no_new_dataset
prob: 0.950915
id: 2503.09196
submitter: Sanggyu Chong
authors: Federico Grasselli, Sanggyu Chong, Venkat Kapil, Silvia Bonfanti and Kevin Rossi
title: Uncertainty in the era of machine learning for atomistic modeling
comments: 21 pages, 8 figures
journal-ref: null
doi: null
report-no: null
categories: physics.chem-ph
license: http://creativecommons.org/licenses/by/4.0/
The widespread adoption of machine learning surrogate models has significantly improved the scale and complexity of systems and processes that can be explored accurately and efficiently using atomistic modeling. However, the inherently data-driven nature of machine learning models introduces uncertainties that must be quantified, understood, and effectively managed to ensure reliable predictions and conclusions. Building upon these premises, in this Perspective, we first overview state-of-the-art uncertainty estimation methods, from Bayesian frameworks to ensembling techniques, and discuss their application in atomistic modeling. We then examine the interplay between model accuracy, uncertainty, training dataset composition, data acquisition strategies, model transferability, and robustness. In doing so, we synthesize insights from the existing literature and highlight areas of ongoing debate.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 09:39:18 GMT" } ]
2025-03-13T00:00:00
[ [ "Grasselli", "Federico", "" ], [ "Chong", "Sanggyu", "" ], [ "Kapil", "Venkat", "" ], [ "Bonfanti", "Silvia", "" ], [ "Rossi", "Kevin", "" ] ]
TITLE: Uncertainty in the era of machine learning for atomistic modeling ABSTRACT: The widespread adoption of machine learning surrogate models has significantly improved the scale and complexity of systems and processes that can be explored accurately and efficiently using atomistic modeling. However, the inherently data-driven nature of machine learning models introduces uncertainties that must be quantified, understood, and effectively managed to ensure reliable predictions and conclusions. Building upon these premises, in this Perspective, we first overview state-of-the-art uncertainty estimation methods, from Bayesian frameworks to ensembling techniques, and discuss their application in atomistic modeling. We then examine the interplay between model accuracy, uncertainty, training dataset composition, data acquisition strategies, model transferability, and robustness. In doing so, we synthesize insights from the existing literature and highlight areas of ongoing debate.
label: no_new_dataset
prob: 0.950088
id: 2503.09197
submitter: Zicheng Zhang
authors: Zicheng Zhang, Haoning Wu, Ziheng Jia, Weisi Lin, Guangtao Zhai
title: Teaching LMMs for Image Quality Scoring and Interpreting
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
Image quality scoring and interpreting are two fundamental components of Image Quality Assessment (IQA). The former quantifies image quality, while the latter enables descriptive question answering about image quality. Traditionally, these two tasks have been addressed independently. However, from the perspective of the Human Visual System (HVS) and the Perception-Decision Integration Model, they are inherently interconnected: interpreting serves as the foundation for scoring, while scoring provides an abstract summary of interpreting. Thus, unifying these capabilities within a single model is both intuitive and logically coherent. In this paper, we propose Q-SiT (Quality Scoring and Interpreting joint Teaching), a unified framework that enables large multimodal models (LMMs) to learn both image quality scoring and interpreting simultaneously. We achieve this by transforming conventional IQA datasets into learnable question-answering datasets and incorporating human-annotated quality interpreting data for training. Furthermore, we introduce an efficient scoring & interpreting balance strategy, which first determines the optimal data mix ratio on lightweight LMMs and then maps this ratio to primary LMMs for fine-tuning adjustment. This strategy not only mitigates task interference and enhances cross-task knowledge transfer but also significantly reduces computational costs compared to direct optimization on full-scale LMMs. With this joint learning framework and corresponding training strategy, we develop Q-SiT, the first model capable of simultaneously performing image quality scoring and interpreting tasks, along with its lightweight variant, Q-SiT-mini. Experimental results demonstrate that Q-SiT achieves strong performance in both tasks with superior generalization IQA abilities.Project page at https://github.com/Q-Future/Q-SiT.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 09:39:33 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhang", "Zicheng", "" ], [ "Wu", "Haoning", "" ], [ "Jia", "Ziheng", "" ], [ "Lin", "Weisi", "" ], [ "Zhai", "Guangtao", "" ] ]
TITLE: Teaching LMMs for Image Quality Scoring and Interpreting ABSTRACT: Image quality scoring and interpreting are two fundamental components of Image Quality Assessment (IQA). The former quantifies image quality, while the latter enables descriptive question answering about image quality. Traditionally, these two tasks have been addressed independently. However, from the perspective of the Human Visual System (HVS) and the Perception-Decision Integration Model, they are inherently interconnected: interpreting serves as the foundation for scoring, while scoring provides an abstract summary of interpreting. Thus, unifying these capabilities within a single model is both intuitive and logically coherent. In this paper, we propose Q-SiT (Quality Scoring and Interpreting joint Teaching), a unified framework that enables large multimodal models (LMMs) to learn both image quality scoring and interpreting simultaneously. We achieve this by transforming conventional IQA datasets into learnable question-answering datasets and incorporating human-annotated quality interpreting data for training. Furthermore, we introduce an efficient scoring & interpreting balance strategy, which first determines the optimal data mix ratio on lightweight LMMs and then maps this ratio to primary LMMs for fine-tuning adjustment. This strategy not only mitigates task interference and enhances cross-task knowledge transfer but also significantly reduces computational costs compared to direct optimization on full-scale LMMs. With this joint learning framework and corresponding training strategy, we develop Q-SiT, the first model capable of simultaneously performing image quality scoring and interpreting tasks, along with its lightweight variant, Q-SiT-mini. Experimental results demonstrate that Q-SiT achieves strong performance in both tasks with superior generalization IQA abilities.Project page at https://github.com/Q-Future/Q-SiT.
label: no_new_dataset
prob: 0.954351
id: 2503.09200
submitter: Chichun Zhou
authors: Lei Liu, Yuchao Lu, Ling An, Huajie Liang, Chichun Zhou, Zhenyu Zhang
title: Time-EAPCR: A Deep Learning-Based Novel Approach for Anomaly Detection Applied to the Environmental Field
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As human activities intensify, environmental systems such as aquatic ecosystems and water treatment systems face increasingly complex pressures, impacting ecological balance, public health, and sustainable development, making intelligent anomaly monitoring essential. However, traditional monitoring methods suffer from delayed responses, insufficient data processing capabilities, and weak generalisation, making them unsuitable for complex environmental monitoring needs. In recent years, machine learning has been widely applied to anomaly detection, but the multi-dimensional features and spatiotemporal dynamics of environmental ecological data, especially the long-term dependencies and strong variability in the time dimension, limit the effectiveness of traditional methods. Deep learning, with its ability to automatically learn features, captures complex nonlinear relationships, improving detection performance. However, its application in environmental monitoring is still in its early stages and requires further exploration. This paper introduces a new deep learning method, Time-EAPCR (Time-Embedding-Attention-Permutated CNN-Residual), and applies it to environmental science. The method uncovers feature correlations, captures temporal evolution patterns, and enables precise anomaly detection in environmental systems. We validated Time-EAPCR's high accuracy and robustness across four publicly available environmental datasets. Experimental results show that the method efficiently handles multi-source data, improves detection accuracy, and excels across various scenarios with strong adaptability and generalisation. Additionally, a real-world river monitoring dataset confirmed the feasibility of its deployment, providing reliable technical support for environmental monitoring.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 09:44:15 GMT" } ]
2025-03-13T00:00:00
[ [ "Liu", "Lei", "" ], [ "Lu", "Yuchao", "" ], [ "An", "Ling", "" ], [ "Liang", "Huajie", "" ], [ "Zhou", "Chichun", "" ], [ "Zhang", "Zhenyu", "" ] ]
TITLE: Time-EAPCR: A Deep Learning-Based Novel Approach for Anomaly Detection Applied to the Environmental Field ABSTRACT: As human activities intensify, environmental systems such as aquatic ecosystems and water treatment systems face increasingly complex pressures, impacting ecological balance, public health, and sustainable development, making intelligent anomaly monitoring essential. However, traditional monitoring methods suffer from delayed responses, insufficient data processing capabilities, and weak generalisation, making them unsuitable for complex environmental monitoring needs.In recent years, machine learning has been widely applied to anomaly detection, but the multi-dimensional features and spatiotemporal dynamics of environmental ecological data, especially the long-term dependencies and strong variability in the time dimension, limit the effectiveness of traditional methods.Deep learning, with its ability to automatically learn features, captures complex nonlinear relationships, improving detection performance. However, its application in environmental monitoring is still in its early stages and requires further exploration.This paper introduces a new deep learning method, Time-EAPCR (Time-Embedding-Attention-Permutated CNN-Residual), and applies it to environmental science. The method uncovers feature correlations, captures temporal evolution patterns, and enables precise anomaly detection in environmental systems.We validated Time-EAPCR's high accuracy and robustness across four publicly available environmental datasets. Experimental results show that the method efficiently handles multi-source data, improves detection accuracy, and excels across various scenarios with strong adaptability and generalisation. Additionally, a real-world river monitoring dataset confirmed the feasibility of its deployment, providing reliable technical support for environmental monitoring.
label: no_new_dataset
prob: 0.941223
id: 2503.09217
submitter: Fengjie Li
authors: Fengjie Li, Jiajun Jiang, Jiajun Sun, Hongyu Zhang
title: Evaluating the Generalizability of LLMs in Automated Program Repair
comments: 5 pages, 1 figure, to be published in ICSE2025-NIER
journal-ref: null
doi: null
report-no: null
categories: cs.SE cs.AI
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
LLM-based automated program repair methods have attracted significant attention for their state-of-the-art performance. However, they were primarily evaluated on a few well known datasets like Defects4J, raising questions about their effectiveness on new datasets. In this study, we evaluate 11 top-performing LLMs on DEFECTS4J-TRANS, a new dataset derived from transforming Defects4J while maintaining the original semantics. Results from experiments on both Defects4J and DEFECTS4J-TRANS show that all studied LLMs have limited generalizability in APR tasks, with the average number of correct and plausible patches decreasing by 49.48% and 42.90%, respectively, on DEFECTS4J-TRANS. Further investigation into incorporating additional repair-relevant information in repair prompts reveals that, although this information significantly enhances the LLMs' capabilities (increasing the number of correct and plausible patches by up to 136.67% and 121.82%, respectively), performance still falls short of their original results. This indicates that prompt engineering alone is insufficient to substantially enhance LLMs' repair capabilities. Based on our study, we also offer several recommendations for future research.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 10:03:58 GMT" } ]
2025-03-13T00:00:00
[ [ "Li", "Fengjie", "" ], [ "Jiang", "Jiajun", "" ], [ "Sun", "Jiajun", "" ], [ "Zhang", "Hongyu", "" ] ]
TITLE: Evaluating the Generalizability of LLMs in Automated Program Repair ABSTRACT: LLM-based automated program repair methods have attracted significant attention for their state-of-the-art performance. However, they were primarily evaluated on a few well known datasets like Defects4J, raising questions about their effectiveness on new datasets. In this study, we evaluate 11 top-performing LLMs on DEFECTS4J-TRANS, a new dataset derived from transforming Defects4J while maintaining the original semantics. Results from experiments on both Defects4J and DEFECTS4J-TRANS show that all studied LLMs have limited generalizability in APR tasks, with the average number of correct and plausible patches decreasing by 49.48% and 42.90%, respectively, on DEFECTS4J-TRANS. Further investigation into incorporating additional repair-relevant information in repair prompts reveals that, although this information significantly enhances the LLMs' capabilities (increasing the number of correct and plausible patches by up to 136.67% and 121.82%, respectively), performance still falls short of their original results. This indicates that prompt engineering alone is insufficient to substantially enhance LLMs' repair capabilities. Based on our study, we also offer several recommendations for future research.
label: no_new_dataset
prob: 0.647993
id: 2503.09218
submitter: Jie He
authors: Jie He, Simon Yu, Deyi Xiong, Víctor Gutiérrez-Basulto, Jeff Z. Pan
title: N2C2: Nearest Neighbor Enhanced Confidence Calibration for Cross-Lingual In-Context Learning
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent advancements of in-context learning (ICL) show language models can significantly improve their performance when demonstrations are provided. However, little attention has been paid to model calibration and prediction confidence of ICL in cross-lingual scenarios. To bridge this gap, we conduct a thorough analysis of ICL for cross-lingual sentiment classification. Our findings suggest that ICL performs poorly in cross-lingual scenarios, exhibiting low accuracy and presenting high calibration errors. In response, we propose a novel approach, N2C2, which employs a k-nearest neighbors augmented classifier for prediction confidence calibration. N2C2 narrows the prediction gap by leveraging a datastore of cached few-shot instances. Specifically, N2C2 integrates the predictions from the datastore and incorporates confidence-aware distribution, semantically consistent retrieval representation, and adaptive neighbor combination modules to effectively utilize the limited number of supporting instances. Evaluation on two multilingual sentiment classification datasets demonstrates that N2C2 outperforms traditional ICL. It surpasses fine-tuning, prompt tuning, and recent state-of-the-art methods in terms of accuracy and calibration errors.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 10:05:05 GMT" } ]
2025-03-13T00:00:00
[ [ "He", "Jie", "" ], [ "Yu", "Simon", "" ], [ "Xiong", "Deyi", "" ], [ "Gutiérrez-Basulto", "Víctor", "" ], [ "Pan", "Jeff Z.", "" ] ]
TITLE: N2C2: Nearest Neighbor Enhanced Confidence Calibration for Cross-Lingual In-Context Learning ABSTRACT: Recent advancements of in-context learning (ICL) show language models can significantly improve their performance when demonstrations are provided. However, little attention has been paid to model calibration and prediction confidence of ICL in cross-lingual scenarios. To bridge this gap, we conduct a thorough analysis of ICL for cross-lingual sentiment classification. Our findings suggest that ICL performs poorly in cross-lingual scenarios, exhibiting low accuracy and presenting high calibration errors. In response, we propose a novel approach, N2C2, which employs a -nearest neighbors augmented classifier for prediction confidence calibration. N2C2 narrows the prediction gap by leveraging a datastore of cached few-shot instances. Specifically, N2C2 integrates the predictions from the datastore and incorporates confidence-aware distribution, semantically consistent retrieval representation, and adaptive neighbor combination modules to effectively utilize the limited number of supporting instances. Evaluation on two multilingual sentiment classification datasets demonstrates that N2C2 outperforms traditional ICL. It surpasses fine tuning, prompt tuning and recent state-of-the-art methods in terms of accuracy and calibration errors.
label: no_new_dataset
prob: 0.943608
id: 2503.09219
submitter: Runzhe Zhan
authors: Xinyi Yang, Runzhe Zhan, Derek F. Wong, Shu Yang, Junchao Wu, Lidia S. Chao
title: Rethinking Prompt-based Debiasing in Large Language Models
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by/4.0/
Investigating bias in large language models (LLMs) is crucial for developing trustworthy AI. While prompt-based debiasing through prompt engineering is common, its effectiveness relies on the assumption that models inherently understand biases. Our study systematically analyzed this assumption using the BBQ and StereoSet benchmarks on both open-source models and a commercial GPT model. Experimental results indicate that prompt-based debiasing is often superficial; for instance, the Llama2-7B-Chat model misclassified over 90% of unbiased content as biased, despite achieving high accuracy in identifying bias issues on the BBQ dataset. Additionally, specific evaluation and question settings in bias benchmarks often lead LLMs to choose "evasive answers", disregarding the core of the question and the relevance of the response to the context. Moreover, the apparent success of previous methods may stem from flawed evaluation metrics. Our research highlights a potential "false prosperity" in prompt-based debiasing efforts and emphasizes the need to rethink bias metrics to ensure truly trustworthy AI.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 10:06:03 GMT" } ]
2025-03-13T00:00:00
[ [ "Yang", "Xinyi", "" ], [ "Zhan", "Runzhe", "" ], [ "Wong", "Derek F.", "" ], [ "Yang", "Shu", "" ], [ "Wu", "Junchao", "" ], [ "Chao", "Lidia S.", "" ] ]
TITLE: Rethinking Prompt-based Debiasing in Large Language Models ABSTRACT: Investigating bias in large language models (LLMs) is crucial for developing trustworthy AI. While prompt-based through prompt engineering is common, its effectiveness relies on the assumption that models inherently understand biases. Our study systematically analyzed this assumption using the BBQ and StereoSet benchmarks on both open-source models as well as commercial GPT model. Experimental results indicate that prompt-based is often superficial; for instance, the Llama2-7B-Chat model misclassified over 90% of unbiased content as biased, despite achieving high accuracy in identifying bias issues on the BBQ dataset. Additionally, specific evaluation and question settings in bias benchmarks often lead LLMs to choose "evasive answers", disregarding the core of the question and the relevance of the response to the context. Moreover, the apparent success of previous methods may stem from flawed evaluation metrics. Our research highlights a potential "false prosperity" in prompt-base efforts and emphasizes the need to rethink bias metrics to ensure truly trustworthy AI.
label: no_new_dataset
prob: 0.947381
id: 2503.09221
submitter: Hannah Kniesel
authors: Hannah Kniesel, Pedro Hermosilla, Timo Ropinski
title: Active Learning Inspired ControlNet Guidance for Augmenting Semantic Segmentation Datasets
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in conditional image generation from diffusion models have shown great potential in achieving impressive image quality while preserving the constraints introduced by the user. In particular, ControlNet enables precise alignment between ground truth segmentation masks and the generated image content, allowing the enhancement of training datasets in segmentation tasks. This raises a key question: Can ControlNet additionally be guided to generate the most informative synthetic samples for a specific task? Inspired by active learning, where the most informative real-world samples are selected based on sample difficulty or model uncertainty, we propose the first approach to integrate active learning-based selection metrics into the backward diffusion process for sample generation. Specifically, we explore uncertainty, query by committee, and expected model change, which are commonly used in active learning, and demonstrate their application for guiding the sample generation process through gradient approximation. Our method is training-free, modifying only the backward diffusion process, allowing it to be used on any pretrained ControlNet. Using this process, we show that segmentation models trained with guided synthetic data outperform those trained on non-guided synthetic data. Our work underscores the need for advanced control mechanisms for diffusion-based models, which are not only aligned with image content but additionally downstream task performance, highlighting the true potential of synthetic data generation.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 10:09:27 GMT" } ]
2025-03-13T00:00:00
[ [ "Kniesel", "Hannah", "" ], [ "Hermosilla", "Pedro", "" ], [ "Ropinski", "Timo", "" ] ]
TITLE: Active Learning Inspired ControlNet Guidance for Augmenting Semantic Segmentation Datasets ABSTRACT: Recent advances in conditional image generation from diffusion models have shown great potential in achieving impressive image quality while preserving the constraints introduced by the user. In particular, ControlNet enables precise alignment between ground truth segmentation masks and the generated image content, allowing the enhancement of training datasets in segmentation tasks. This raises a key question: Can ControlNet additionally be guided to generate the most informative synthetic samples for a specific task? Inspired by active learning, where the most informative real-world samples are selected based on sample difficulty or model uncertainty, we propose the first approach to integrate active learning-based selection metrics into the backward diffusion process for sample generation. Specifically, we explore uncertainty, query by committee, and expected model change, which are commonly used in active learning, and demonstrate their application for guiding the sample generation process through gradient approximation. Our method is training-free, modifying only the backward diffusion process, allowing it to be used on any pretrained ControlNet. Using this process, we show that segmentation models trained with guided synthetic data outperform those trained on non-guided synthetic data. Our work underscores the need for advanced control mechanisms for diffusion-based models, which are not only aligned with image content but additionally downstream task performance, highlighting the true potential of synthetic data generation.
label: no_new_dataset
prob: 0.953057
id: 2503.09223
submitter: Tian Tang
authors: Tian Tang, Zhixing Tian, Zhenyu Zhu, Chenyang Wang, Haiqing Hu, Guoyu Tang, Lin Liu, Sulong Xu
title: LREF: A Novel LLM-based Relevance Framework for E-commerce
comments: null
journal-ref: null
doi: 10.1145/3701716.3715246
report-no: null
categories: cs.IR cs.AI
license: http://creativecommons.org/licenses/by/4.0/
Query and product relevance prediction is a critical component for ensuring a smooth user experience in e-commerce search. Traditional studies mainly focus on BERT-based models to assess the semantic relevance between queries and products. However, the discriminative paradigm and limited knowledge capacity of these approaches restrict their ability to comprehend the relevance between queries and products fully. With the rapid advancement of Large Language Models (LLMs), recent research has begun to explore their application to industrial search systems, as LLMs provide extensive world knowledge and flexible optimization for reasoning processes. Nonetheless, directly leveraging LLMs for relevance prediction tasks introduces new challenges, including a high demand for data quality, the necessity for meticulous optimization of reasoning processes, and an optimistic bias that can result in over-recall. To overcome the above problems, this paper proposes a novel framework called the LLM-based RElevance Framework (LREF) aimed at enhancing e-commerce search relevance. The framework comprises three main stages: supervised fine-tuning (SFT) with Data Selection, Multiple Chain of Thought (Multi-CoT) tuning, and Direct Preference Optimization (DPO) for de-biasing. We evaluate the performance of the framework through a series of offline experiments on large-scale real-world datasets, as well as online A/B testing. The results indicate significant improvements in both offline and online metrics. Ultimately, the model was deployed in a well-known e-commerce application, yielding substantial commercial benefits.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 10:10:30 GMT" } ]
2025-03-13T00:00:00
[ [ "Tang", "Tian", "" ], [ "Tian", "Zhixing", "" ], [ "Zhu", "Zhenyu", "" ], [ "Wang", "Chenyang", "" ], [ "Hu", "Haiqing", "" ], [ "Tang", "Guoyu", "" ], [ "Liu", "Lin", "" ], [ "Xu", "Sulong", "" ] ]
TITLE: LREF: A Novel LLM-based Relevance Framework for E-commerce ABSTRACT: Query and product relevance prediction is a critical component for ensuring a smooth user experience in e-commerce search. Traditional studies mainly focus on BERT-based models to assess the semantic relevance between queries and products. However, the discriminative paradigm and limited knowledge capacity of these approaches restrict their ability to comprehend the relevance between queries and products fully. With the rapid advancement of Large Language Models (LLMs), recent research has begun to explore their application to industrial search systems, as LLMs provide extensive world knowledge and flexible optimization for reasoning processes. Nonetheless, directly leveraging LLMs for relevance prediction tasks introduces new challenges, including a high demand for data quality, the necessity for meticulous optimization of reasoning processes, and an optimistic bias that can result in over-recall. To overcome the above problems, this paper proposes a novel framework called the LLM-based RElevance Framework (LREF) aimed at enhancing e-commerce search relevance. The framework comprises three main stages: supervised fine-tuning (SFT) with Data Selection, Multiple Chain of Thought (Multi-CoT) tuning, and Direct Preference Optimization (DPO) for de-biasing. We evaluate the performance of the framework through a series of offline experiments on large-scale real-world datasets, as well as online A/B testing. The results indicate significant improvements in both offline and online metrics. Ultimately, the model was deployed in a well-known e-commerce application, yielding substantial commercial benefits.
no_new_dataset
0.946695
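Illustrative note on the LREF record above: the abstract mentions a Direct Preference Optimization (DPO) stage for de-biasing but includes no code, so the sketch below only shows the standard DPO loss under assumed names and a placeholder beta; it is not the authors' implementation.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Each argument: per-sequence log-probabilities summed over tokens.
    # "chosen" = preferred relevance judgment, "rejected" = dispreferred one
    # (e.g. an over-recalled positive); "ref" = frozen SFT reference model.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    logits = beta * (pi_logratios - ref_logratios)
    return -F.logsigmoid(logits).mean()
```

Minimizing this pushes the policy to prefer the calibrated judgment over the over-recalled one relative to the reference model, which is the usual mechanism a DPO-based de-biasing stage relies on.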
2503.09225
Xuxiang Sun
Xuxiang Sun, Xianglin Shan, Yilang Liu, Weiwei Zhang
Towards a Generalized SA Model: Symbolic Regression-Based Correction for Separated Flows
null
null
null
null
physics.flu-dyn physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study focuses on the numerical simulation of high Reynolds number separated flows and proposes a data-driven approach to improve the predictive capability of the SA turbulence model. First, data assimilation was performed on two typical airfoils with high angle-of-attack separated flows to obtain a high-fidelity flow field dataset. Based on this dataset, a white-box model was developed using symbolic regression to modify the production term of the SA model. To validate the effectiveness of the modified model, multiple representative airfoils and wings, such as the SC1095 airfoil, DU91-W2-250 airfoil, and ONERA-M6 wing, were selected as test cases. A wide range of flow conditions was considered, including subsonic to transonic regimes, Reynolds numbers ranging from hundreds of thousands to tens of millions, and angles of attack varying from small to large. The results indicate that the modified model significantly improves the prediction accuracy of separated flows while maintaining the predictive capability for attached flows. It notably enhances the reproduction of separated vortex structures and flow separation locations, reducing the mean relative error in lift prediction at stall angles by 69.2% and improving computational accuracy by more than three times. Furthermore, validation using a zero-pressure-gradient flat plate case confirms the modified model's ability to accurately predict the turbulent boundary layer velocity profile and skin friction coefficient distribution. The findings of this study provide new insights and methodologies for the numerical simulation of high Reynolds number separated flows, contributing to more accurate modeling of complex flow phenomena in engineering applications.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 10:16:05 GMT" } ]
2025-03-13T00:00:00
[ [ "Sun", "Xuxiang", "" ], [ "Shan", "Xianglin", "" ], [ "Liu", "Yilang", "" ], [ "Zhang", "Weiwei", "" ] ]
TITLE: Towards a Generalized SA Model: Symbolic Regression-Based Correction for Separated Flows ABSTRACT: This study focuses on the numerical simulation of high Reynolds number separated flows and proposes a data-driven approach to improve the predictive capability of the SA turbulence model. First, data assimilation was performed on two typical airfoils with high angle-of-attack separated flows to obtain a high-fidelity flow field dataset. Based on this dataset, a white-box model was developed using symbolic regression to modify the production term of the SA model. To validate the effectiveness of the modified model, multiple representative airfoils and wings, such as the SC1095 airfoil, DU91-W2-250 airfoil, and ONERA-M6 wing, were selected as test cases. A wide range of flow conditions was considered, including subsonic to transonic regimes, Reynolds numbers ranging from hundreds of thousands to tens of millions, and angles of attack varying from small to large. The results indicate that the modified model significantly improves the prediction accuracy of separated flows while maintaining the predictive capability for attached flows. It notably enhances the reproduction of separated vortex structures and flow separation locations, reducing the mean relative error in lift prediction at stall angles by 69.2% and improving computational accuracy by more than three times. Furthermore, validation using a zero-pressure-gradient flat plate case confirms the modified model's ability to accurately predict the turbulent boundary layer velocity profile and skin friction coefficient distribution. The findings of this study provide new insights and methodologies for the numerical simulation of high Reynolds number separated flows, contributing to more accurate modeling of complex flow phenomena in engineering applications.
no_new_dataset
0.919498
2503.09226
Omer Noy Klein
Omer Noy Klein, Alihan H\"uy\"uk, Ron Shamir, Uri Shalit, Mihaela van der Schaar
Towards Regulatory-Confirmed Adaptive Clinical Trials: Machine Learning Opportunities and Solutions
AISTATS 2025
null
null
null
stat.ML cs.LG
http://creativecommons.org/licenses/by/4.0/
Randomized Controlled Trials (RCTs) are the gold standard for evaluating the effect of new medical treatments. Treatments must pass stringent regulatory conditions in order to be approved for widespread use, yet even after the regulatory barriers are crossed, real-world challenges might arise: Who should get the treatment? What is its true clinical utility? Are there discrepancies in the treatment effectiveness across diverse and under-served populations? We introduce two new objectives for future clinical trials that integrate regulatory constraints and treatment policy value for both the entire population and under-served populations, thus answering some of the questions above in advance. Designed to meet these objectives, we formulate Randomize First Augment Next (RFAN), a new framework for designing Phase III clinical trials. Our framework consists of a standard randomized component followed by an adaptive one, jointly meant to efficiently and safely acquire and assign patients into treatment arms during the trial. Then, we propose strategies for implementing RFAN based on causal, deep Bayesian active learning. Finally, we empirically evaluate the performance of our framework using synthetic and real-world semi-synthetic datasets.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 10:17:54 GMT" } ]
2025-03-13T00:00:00
[ [ "Klein", "Omer Noy", "" ], [ "Hüyük", "Alihan", "" ], [ "Shamir", "Ron", "" ], [ "Shalit", "Uri", "" ], [ "van der Schaar", "Mihaela", "" ] ]
TITLE: Towards Regulatory-Confirmed Adaptive Clinical Trials: Machine Learning Opportunities and Solutions ABSTRACT: Randomized Controlled Trials (RCTs) are the gold standard for evaluating the effect of new medical treatments. Treatments must pass stringent regulatory conditions in order to be approved for widespread use, yet even after the regulatory barriers are crossed, real-world challenges might arise: Who should get the treatment? What is its true clinical utility? Are there discrepancies in the treatment effectiveness across diverse and under-served populations? We introduce two new objectives for future clinical trials that integrate regulatory constraints and treatment policy value for both the entire population and under-served populations, thus answering some of the questions above in advance. Designed to meet these objectives, we formulate Randomize First Augment Next (RFAN), a new framework for designing Phase III clinical trials. Our framework consists of a standard randomized component followed by an adaptive one, jointly meant to efficiently and safely acquire and assign patients into treatment arms during the trial. Then, we propose strategies for implementing RFAN based on causal, deep Bayesian active learning. Finally, we empirically evaluate the performance of our framework using synthetic and real-world semi-synthetic datasets.
no_new_dataset
0.943452
2503.09239
Umar Salman
Di Zhao, Umar Salman and Zongjie Wang
Risk Assessment of Distribution Networks Considering Climate Change and Vegetation Management Impacts
null
null
null
null
eess.SY cs.SY
http://creativecommons.org/licenses/by/4.0/
This paper presents a comprehensive risk assessment model for power distribution networks with a focus on the influence of climate conditions and vegetation management on outage risks. Using a dataset comprising outage records, meteorological indicators, and vegetation metrics, this paper develops a logistic regression model that outperformed several alternatives, effectively identifying risk factors in highly imbalanced data. Key features impacting outages include wind speed, vegetation density, quantified as the enhanced vegetation index (EVI), and snow type, with wet snow and autumn conditions exhibiting the strongest positive effects. The analysis also shows complex interactions, such as the combined effect of wind speed and EVI, suggesting that vegetation density can moderate the impact of high winds on outages. Simulation case studies, based on a test dataset of 618 samples, demonstrated that the model achieved an 80\% match rate with real-world data within an error tolerance of \(\pm 0.05\), showcasing the effectiveness and robustness of the proposed model while highlighting its potential to inform preventive strategies for mitigating outage risks in power distribution networks under high-risk environmental conditions. Future work will integrate vegetation height data from Lidar and explore alternative modeling approaches to capture potential non-linear relationships.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 10:34:31 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhao", "Di", "" ], [ "Salman", "Umar", "" ], [ "Wang", "Zongjie", "" ] ]
TITLE: Risk Assessment of Distribution Networks Considering Climate Change and Vegetation Management Impacts ABSTRACT: This paper presents a comprehensive risk assessment model for power distribution networks with a focus on the influence of climate conditions and vegetation management on outage risks. Using a dataset comprising outage records, meteorological indicators, and vegetation metrics, this paper develops a logistic regression model that outperformed several alternatives, effectively identifying risk factors in highly imbalanced data. Key features impacting outages include wind speed, vegetation density, quantified as the enhanced vegetation index (EVI), and snow type, with wet snow and autumn conditions exhibiting the strongest positive effects. The analysis also shows complex interactions, such as the combined effect of wind speed and EVI, suggesting that vegetation density can moderate the impact of high winds on outages. Simulation case studies, based on a test dataset of 618 samples, demonstrated that the model achieved an 80\% match rate with real-world data within an error tolerance of \(\pm 0.05\), showcasing the effectiveness and robustness of the proposed model while highlighting its potential to inform preventive strategies for mitigating outage risks in power distribution networks under high-risk environmental conditions. Future work will integrate vegetation height data from Lidar and explore alternative modeling approaches to capture potential non-linear relationships.
no_new_dataset
0.924279
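For the outage-risk record above, the abstract describes a logistic regression on highly imbalanced outage data with a wind-speed x EVI interaction; the sketch below is a generic, hypothetical reconstruction with synthetic data (feature names, data, and hyperparameters are placeholders, not the authors' pipeline).

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the outage dataset: a rare binary target plus
# weather and vegetation features, including the wind x EVI interaction.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "wind_speed": rng.gamma(2.0, 4.0, 5000),
    "evi": rng.uniform(0.1, 0.8, 5000),          # enhanced vegetation index
    "wet_snow": rng.binomial(1, 0.1, 5000),
    "outage": rng.binomial(1, 0.05, 5000),       # highly imbalanced label
})
df["wind_x_evi"] = df["wind_speed"] * df["evi"]

X = df[["wind_speed", "evi", "wet_snow", "wind_x_evi"]]
y = df["outage"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" is one common way to handle the class imbalance.
model = make_pipeline(StandardScaler(),
                      LogisticRegression(class_weight="balanced", max_iter=1000))
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))
```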
2503.09251
Yigang Chen
Yigang Chen, Xiang Ji, Ziyue Zhang, Yuming Zhou, Yang-Chi-Dung Lin, Hsi-Yuan Huang, Tao Zhang, Yi Lai, Ke Chen, Chang Su, Xingqiao Lin, Zihao Zhu, Yanggyi Zhang, Kangping Wei, Jiehui Fu, Yixian Huang, Shidong Cui, Shih-Chung Yen, Ariel Warshel, and Hsien-Da Huang
SCOPE-DTI: Semi-Inductive Dataset Construction and Framework Optimization for Practical Usability Enhancement in Deep Learning-Based Drug Target Interaction Prediction
null
null
null
null
cs.LG cs.AI q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Deep learning-based drug-target interaction (DTI) prediction methods have demonstrated strong performance; however, real-world applicability remains constrained by limited data diversity and modeling complexity. To address these challenges, we propose SCOPE-DTI, a unified framework combining a large-scale, balanced semi-inductive human DTI dataset with advanced deep learning modeling. Constructed from 13 public repositories, the SCOPE dataset expands data volume by up to 100-fold compared to common benchmarks such as the Human dataset. The SCOPE model integrates three-dimensional protein and compound representations, graph neural networks, and bilinear attention mechanisms to effectively capture cross domain interaction patterns, significantly outperforming state-of-the-art methods across various DTI prediction tasks. Additionally, SCOPE-DTI provides a user-friendly interface and database. We further validate its effectiveness by experimentally identifying anticancer targets of Ginsenoside Rh1. By offering comprehensive data, advanced modeling, and accessible tools, SCOPE-DTI accelerates drug discovery research.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 10:46:25 GMT" } ]
2025-03-13T00:00:00
[ [ "Chen", "Yigang", "" ], [ "Ji", "Xiang", "" ], [ "Zhang", "Ziyue", "" ], [ "Zhou", "Yuming", "" ], [ "Lin", "Yang-Chi-Dung", "" ], [ "Huang", "Hsi-Yuan", "" ], [ "Zhang", "Tao", "" ], [ "Lai", "Yi", "" ], [ "Chen", "Ke", "" ], [ "Su", "Chang", "" ], [ "Lin", "Xingqiao", "" ], [ "Zhu", "Zihao", "" ], [ "Zhang", "Yanggyi", "" ], [ "Wei", "Kangping", "" ], [ "Fu", "Jiehui", "" ], [ "Huang", "Yixian", "" ], [ "Cui", "Shidong", "" ], [ "Yen", "Shih-Chung", "" ], [ "Warshel", "Ariel", "" ], [ "Huang", "Hsien-Da", "" ] ]
TITLE: SCOPE-DTI: Semi-Inductive Dataset Construction and Framework Optimization for Practical Usability Enhancement in Deep Learning-Based Drug Target Interaction Prediction ABSTRACT: Deep learning-based drug-target interaction (DTI) prediction methods have demonstrated strong performance; however, real-world applicability remains constrained by limited data diversity and modeling complexity. To address these challenges, we propose SCOPE-DTI, a unified framework combining a large-scale, balanced semi-inductive human DTI dataset with advanced deep learning modeling. Constructed from 13 public repositories, the SCOPE dataset expands data volume by up to 100-fold compared to common benchmarks such as the Human dataset. The SCOPE model integrates three-dimensional protein and compound representations, graph neural networks, and bilinear attention mechanisms to effectively capture cross domain interaction patterns, significantly outperforming state-of-the-art methods across various DTI prediction tasks. Additionally, SCOPE-DTI provides a user-friendly interface and database. We further validate its effectiveness by experimentally identifying anticancer targets of Ginsenoside Rh1. By offering comprehensive data, advanced modeling, and accessible tools, SCOPE-DTI accelerates drug discovery research.
no_new_dataset
0.930899
2503.09260
Wei He
Wei He and Shangzhi Zhang and Chun-Guang Li and Xianbiao Qi and Rong Xiao and Jun Guo
Neural Normalized Cut: A Differential and Generalizable Approach for Spectral Clustering
5 figures, 8 tables, accepted by Pattern Recognition (2025-03-11)
null
10.1016/j.patcog.2025.111545
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Spectral clustering, as a popular tool for data clustering, requires an eigen-decomposition step on a given affinity to obtain the spectral embedding. Nevertheless, such a step suffers from the lack of generalizability and scalability. Moreover, the obtained spectral embeddings can hardly provide a good approximation to the ground-truth partition and thus a k-means step is adopted to quantize the embedding. In this paper, we propose a simple yet effective scalable and generalizable approach, called Neural Normalized Cut (NeuNcut), to learn the clustering membership for spectral clustering directly. In NeuNcut, we properly reparameterize the unknown cluster membership via a neural network, and train the neural network via stochastic gradient descent with a properly relaxed normalized cut loss. As a result, our NeuNcut enjoys a desired generalization ability to directly infer clustering membership for out-of-sample unseen data and hence brings us an efficient way to handle clustering task with ultra large-scale data. We conduct extensive experiments on both synthetic data and benchmark datasets and experimental results validate the effectiveness and the superiority of our approach. Our code is available at: https://github.com/hewei98/NeuNcut.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 11:00:16 GMT" } ]
2025-03-13T00:00:00
[ [ "He", "Wei", "" ], [ "Zhang", "Shangzhi", "" ], [ "Li", "Chun-Guang", "" ], [ "Qi", "Xianbiao", "" ], [ "Xiao", "Rong", "" ], [ "Guo", "Jun", "" ] ]
TITLE: Neural Normalized Cut: A Differential and Generalizable Approach for Spectral Clustering ABSTRACT: Spectral clustering, as a popular tool for data clustering, requires an eigen-decomposition step on a given affinity to obtain the spectral embedding. Nevertheless, such a step suffers from the lack of generalizability and scalability. Moreover, the obtained spectral embeddings can hardly provide a good approximation to the ground-truth partition and thus a k-means step is adopted to quantize the embedding. In this paper, we propose a simple yet effective scalable and generalizable approach, called Neural Normalized Cut (NeuNcut), to learn the clustering membership for spectral clustering directly. In NeuNcut, we properly reparameterize the unknown cluster membership via a neural network, and train the neural network via stochastic gradient descent with a properly relaxed normalized cut loss. As a result, our NeuNcut enjoys a desired generalization ability to directly infer clustering membership for out-of-sample unseen data and hence brings us an efficient way to handle clustering task with ultra large-scale data. We conduct extensive experiments on both synthetic data and benchmark datasets and experimental results validate the effectiveness and the superiority of our approach. Our code is available at: https://github.com/hewei98/NeuNcut.
no_new_dataset
0.94887
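The NeuNcut abstract above reparameterizes cluster membership with a neural network trained on a "properly relaxed normalized cut loss"; the exact relaxation is not given in this record, so the PyTorch sketch below shows only the usual soft K-way normalized-cut relaxation and should not be read as the paper's implementation.

```python
import torch

def relaxed_ncut_loss(assign_logits, W):
    # assign_logits: (n, K) network outputs; W: (n, n) symmetric affinity.
    P = torch.softmax(assign_logits, dim=1)        # soft cluster memberships
    d = W.sum(dim=1)                               # node degrees (diagonal of D)
    DP_minus_WP = d.unsqueeze(1) * P - W @ P       # (D - W) P without forming D
    cut = (P * DP_minus_WP).sum(dim=0)             # P_k^T (D - W) P_k per cluster
    assoc = (P * (d.unsqueeze(1) * P)).sum(dim=0)  # P_k^T D P_k per cluster
    return (cut / assoc.clamp_min(1e-8)).sum()
```

Because the loss depends only on the network outputs and the affinity, the trained network can assign clusters to unseen points directly, which is the generalization property the abstract emphasizes.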
2503.09269
Renato Portugal
Leandro C. Souza, Renato Portugal
Single-Qudit Quantum Neural Networks for Multiclass Classification
24 pages, 3 figures, 6 tables
null
null
null
quant-ph cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper proposes a single-qudit quantum neural network for multiclass classification by using the enhanced representational capacity of high-dimensional qudit states. Our design employs a $d$-dimensional unitary operator, where $d$ corresponds to the number of classes, constructed using the Cayley transform of a skew-symmetric matrix, to efficiently encode and process class information. This architecture enables a direct mapping between class labels and quantum measurement outcomes, reducing circuit depth and computational overhead. To optimize network parameters, we introduce a hybrid training approach that combines an extended activation function -- derived from a truncated multivariable Taylor series expansion -- with support vector machine optimization for weight determination. We evaluate our model on the MNIST and EMNIST datasets, demonstrating competitive accuracy while maintaining a compact single-qudit quantum circuit. Our findings highlight the potential of qudit-based QNNs as scalable alternatives to classical deep learning models, particularly for multiclass classification. However, practical implementation remains constrained by current quantum hardware limitations. This research advances quantum machine learning by demonstrating the feasibility of higher-dimensional quantum systems for efficient learning tasks.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 11:12:05 GMT" } ]
2025-03-13T00:00:00
[ [ "Souza", "Leandro C.", "" ], [ "Portugal", "Renato", "" ] ]
TITLE: Single-Qudit Quantum Neural Networks for Multiclass Classification ABSTRACT: This paper proposes a single-qudit quantum neural network for multiclass classification by using the enhanced representational capacity of high-dimensional qudit states. Our design employs a $d$-dimensional unitary operator, where $d$ corresponds to the number of classes, constructed using the Cayley transform of a skew-symmetric matrix, to efficiently encode and process class information. This architecture enables a direct mapping between class labels and quantum measurement outcomes, reducing circuit depth and computational overhead. To optimize network parameters, we introduce a hybrid training approach that combines an extended activation function -- derived from a truncated multivariable Taylor series expansion -- with support vector machine optimization for weight determination. We evaluate our model on the MNIST and EMNIST datasets, demonstrating competitive accuracy while maintaining a compact single-qudit quantum circuit. Our findings highlight the potential of qudit-based QNNs as scalable alternatives to classical deep learning models, particularly for multiclass classification. However, practical implementation remains constrained by current quantum hardware limitations. This research advances quantum machine learning by demonstrating the feasibility of higher-dimensional quantum systems for efficient learning tasks.
no_new_dataset
0.945751
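The single-qudit QNN record above builds its unitary from the Cayley transform of a skew-symmetric matrix; the small self-contained illustration below shows that construction in the real case, where the transform yields an orthogonal matrix (the paper's exact, possibly complex, parameterization is not reproduced here).

```python
import numpy as np

def cayley_unitary(A):
    # For skew-symmetric A (A.T == -A), U = (I - A) @ inv(I + A) is orthogonal;
    # I + A is always invertible because A has purely imaginary eigenvalues.
    I = np.eye(A.shape[0])
    return (I - A) @ np.linalg.inv(I + A)

d = 4                                   # d = number of classes
B = np.random.randn(d, d)
A = B - B.T                             # make it skew-symmetric
U = cayley_unitary(A)
print(np.allclose(U.T @ U, np.eye(d)))  # True: U preserves norms
```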
2503.09276
Linzhao Jia
Linzhao Jia, Changyong Qi, Yuang Wei, Han Sun, Xiaozhe Yang
Fine-Tuning Large Language Models for Educational Support: Leveraging Gagne's Nine Events of Instruction for Lesson Planning
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Effective lesson planning is crucial in the education process, serving as the cornerstone for high-quality teaching and the cultivation of a conducive learning atmosphere. This study investigates how large language models (LLMs) can enhance teacher preparation by integrating them with Gagne's Nine Events of Instruction, especially in the field of mathematics education in compulsory education. It examines two distinct methodologies: the development of Chain of Thought (CoT) prompts to direct LLMs in generating content that aligns with instructional events, and the application of fine-tuning approaches like Low-Rank Adaptation (LoRA) to enhance model performance. This research starts by creating a comprehensive dataset based on math curriculum standards and Gagne's instructional events. The first method involves crafting CoT-optimized prompts to generate detailed, logically coherent responses from LLMs, improving their ability to create educationally relevant content. The second method uses specialized datasets to fine-tune open-source models, enhancing their educational content generation and analysis capabilities. This study contributes to the evolving dialogue on the integration of AI in education, illustrating innovative strategies for leveraging LLMs to bolster teaching and learning processes.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 11:22:13 GMT" } ]
2025-03-13T00:00:00
[ [ "Jia", "Linzhao", "" ], [ "Qi", "Changyong", "" ], [ "Wei", "Yuang", "" ], [ "Sun", "Han", "" ], [ "Yang", "Xiaozhe", "" ] ]
TITLE: Fine-Tuning Large Language Models for Educational Support: Leveraging Gagne's Nine Events of Instruction for Lesson Planning ABSTRACT: Effective lesson planning is crucial in the education process, serving as the cornerstone for high-quality teaching and the cultivation of a conducive learning atmosphere. This study investigates how large language models (LLMs) can enhance teacher preparation by integrating them with Gagne's Nine Events of Instruction, especially in the field of mathematics education in compulsory education. It examines two distinct methodologies: the development of Chain of Thought (CoT) prompts to direct LLMs in generating content that aligns with instructional events, and the application of fine-tuning approaches like Low-Rank Adaptation (LoRA) to enhance model performance. This research starts by creating a comprehensive dataset based on math curriculum standards and Gagne's instructional events. The first method involves crafting CoT-optimized prompts to generate detailed, logically coherent responses from LLMs, improving their ability to create educationally relevant content. The second method uses specialized datasets to fine-tune open-source models, enhancing their educational content generation and analysis capabilities. This study contributes to the evolving dialogue on the integration of AI in education, illustrating innovative strategies for leveraging LLMs to bolster teaching and learning processes.
new_dataset
0.956472
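The lesson-planning record above mentions fine-tuning open-source models with LoRA; a minimal sketch using the Hugging Face peft library is given below. The base model identifier, rank, and target modules are placeholders chosen for illustration, not the authors' configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder model
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # adapt the attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()         # only the low-rank adapters are trained
```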
2503.09277
Haoxuan Wang
Haoxuan Wang, Jinlong Peng, Qingdong He, Hao Yang, Ying Jin, Jiafu Wu, Xiaobin Hu, Yanjie Pan, Zhenye Gan, Mingmin Chi, Bo Peng, Yabiao Wang
UniCombine: Unified Multi-Conditional Combination with Diffusion Transformer
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid development of diffusion models in image generation, the demand for more powerful and flexible controllable frameworks is increasing. Although existing methods can guide generation beyond text prompts, the challenge of effectively combining multiple conditional inputs while maintaining consistency with all of them remains unsolved. To address this, we introduce UniCombine, a DiT-based multi-conditional controllable generative framework capable of handling any combination of conditions, including but not limited to text prompts, spatial maps, and subject images. Specifically, we introduce a novel Conditional MMDiT Attention mechanism and incorporate a trainable LoRA module to build both the training-free and training-based versions. Additionally, we propose a new pipeline to construct SubjectSpatial200K, the first dataset designed for multi-conditional generative tasks covering both the subject-driven and spatially-aligned conditions. Extensive experimental results on multi-conditional generation demonstrate the outstanding universality and powerful capability of our approach with state-of-the-art performance.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 11:22:47 GMT" } ]
2025-03-13T00:00:00
[ [ "Wang", "Haoxuan", "" ], [ "Peng", "Jinlong", "" ], [ "He", "Qingdong", "" ], [ "Yang", "Hao", "" ], [ "Jin", "Ying", "" ], [ "Wu", "Jiafu", "" ], [ "Hu", "Xiaobin", "" ], [ "Pan", "Yanjie", "" ], [ "Gan", "Zhenye", "" ], [ "Chi", "Mingmin", "" ], [ "Peng", "Bo", "" ], [ "Wang", "Yabiao", "" ] ]
TITLE: UniCombine: Unified Multi-Conditional Combination with Diffusion Transformer ABSTRACT: With the rapid development of diffusion models in image generation, the demand for more powerful and flexible controllable frameworks is increasing. Although existing methods can guide generation beyond text prompts, the challenge of effectively combining multiple conditional inputs while maintaining consistency with all of them remains unsolved. To address this, we introduce UniCombine, a DiT-based multi-conditional controllable generative framework capable of handling any combination of conditions, including but not limited to text prompts, spatial maps, and subject images. Specifically, we introduce a novel Conditional MMDiT Attention mechanism and incorporate a trainable LoRA module to build both the training-free and training-based versions. Additionally, we propose a new pipeline to construct SubjectSpatial200K, the first dataset designed for multi-conditional generative tasks covering both the subject-driven and spatially-aligned conditions. Extensive experimental results on multi-conditional generation demonstrate the outstanding universality and powerful capability of our approach with state-of-the-art performance.
new_dataset
0.963092
2503.09279
Luozheng Qin
Luozheng Qin, Zhiyu Tan, Mengping Yang, Xiaomeng Yang, Hao Li
Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption
For more details, please refer to our project page: https://sais-fuxi.github.io/projects/cockatiel/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Video Detailed Captioning (VDC) is a crucial task for vision-language bridging, enabling fine-grained descriptions of complex video content. In this paper, we first comprehensively benchmark current state-of-the-art approaches and systematically identify two critical limitations: biased capability towards specific captioning aspects and misalignment with human preferences. To address these deficiencies, we propose Cockatiel, a novel three-stage training pipeline that ensembles synthetic and human-aligned training for improving VDC performance. In the first stage, we derive a scorer from a meticulously annotated dataset to select synthetic captions that perform well on certain fine-grained video-caption alignment dimensions and match human preferences, while disregarding others. Then, we train Cockatiel-13B, using this curated dataset to infuse it with assembled model strengths and human preferences. Finally, we further distill Cockatiel-8B from Cockatiel-13B for ease of use. Extensive quantitative and qualitative experiments reflect the effectiveness of our method, as we not only set new state-of-the-art performance on VDCSCORE in a dimension-balanced way but also surpass leading alternatives on human preference by a large margin, as shown by the human evaluation results.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 11:25:04 GMT" } ]
2025-03-13T00:00:00
[ [ "Qin", "Luozheng", "" ], [ "Tan", "Zhiyu", "" ], [ "Yang", "Mengping", "" ], [ "Yang", "Xiaomeng", "" ], [ "Li", "Hao", "" ] ]
TITLE: Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption ABSTRACT: Video Detailed Captioning (VDC) is a crucial task for vision-language bridging, enabling fine-grained descriptions of complex video content. In this paper, we first comprehensively benchmark current state-of-the-art approaches and systematically identify two critical limitations: biased capability towards specific captioning aspects and misalignment with human preferences. To address these deficiencies, we propose Cockatiel, a novel three-stage training pipeline that ensembles synthetic and human-aligned training for improving VDC performance. In the first stage, we derive a scorer from a meticulously annotated dataset to select synthetic captions that perform well on certain fine-grained video-caption alignment dimensions and match human preferences, while disregarding others. Then, we train Cockatiel-13B, using this curated dataset to infuse it with assembled model strengths and human preferences. Finally, we further distill Cockatiel-8B from Cockatiel-13B for ease of use. Extensive quantitative and qualitative experiments reflect the effectiveness of our method, as we not only set new state-of-the-art performance on VDCSCORE in a dimension-balanced way but also surpass leading alternatives on human preference by a large margin, as shown by the human evaluation results.
new_dataset
0.606994
2503.09283
Xiangbin Wei
Xiangbin Wei
Noise2Score3D: Tweedie's Approach for Unsupervised Point Cloud Denoising
arXiv admin note: substantial text overlap with arXiv:2502.16826
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building on recent advances in Bayesian statistics and image denoising, we propose Noise2Score3D, a fully unsupervised framework for point cloud denoising. Noise2Score3D learns the score function of the underlying point cloud distribution directly from noisy data, eliminating the need for clean data during training. Using Tweedie's formula, our method performs denoising in a single step, avoiding the iterative processes used in existing unsupervised methods, thus improving both accuracy and efficiency. Additionally, we introduce Total Variation for Point Clouds as a denoising quality metric, which allows for the estimation of unknown noise parameters. Experimental results demonstrate that Noise2Score3D achieves state-of-the-art performance on standard benchmarks among unsupervised learning methods in Chamfer distance and point-to-mesh metrics. Noise2Score3D also demonstrates strong generalization ability beyond training datasets. Our method, by addressing the generalization issue and challenge of the absence of clean data in learning-based methods, paves the way for learning-based point cloud denoising methods in real-world applications.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 11:28:04 GMT" } ]
2025-03-13T00:00:00
[ [ "Wei", "Xiangbin", "" ] ]
TITLE: Noise2Score3D: Tweedie's Approach for Unsupervised Point Cloud Denoising ABSTRACT: Building on recent advances in Bayesian statistics and image denoising, we propose Noise2Score3D, a fully unsupervised framework for point cloud denoising. Noise2Score3D learns the score function of the underlying point cloud distribution directly from noisy data, eliminating the need for clean data during training. Using Tweedie's formula, our method performs denoising in a single step, avoiding the iterative processes used in existing unsupervised methods, thus improving both accuracy and efficiency. Additionally, we introduce Total Variation for Point Clouds as a denoising quality metric, which allows for the estimation of unknown noise parameters. Experimental results demonstrate that Noise2Score3D achieves state-of-the-art performance on standard benchmarks among unsupervised learning methods in Chamfer distance and point-to-mesh metrics. Noise2Score3D also demonstrates strong generalization ability beyond training datasets. Our method, by addressing the generalization issue and challenge of the absence of clean data in learning-based methods, paves the way for learning-based point cloud denoising methods in real-world applications.
no_new_dataset
0.94801
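Noise2Score3D's single-step denoising in the record above rests on Tweedie's formula, E[x | y] = y + sigma^2 * grad_y log p(y), for Gaussian noise of scale sigma. A minimal sketch of that step follows; the score network is assumed to be already trained and is hypothetical here.

```python
import torch

def tweedie_denoise(noisy_points, score_net, sigma):
    # noisy_points: (n, 3) noisy point cloud; score_net approximates
    # the score grad_y log p(y) of the noisy-data distribution.
    with torch.no_grad():
        score = score_net(noisy_points)
    return noisy_points + (sigma ** 2) * score   # posterior mean in one step
```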
2503.09291
Xinjian Luo
Xinjian Luo, Ting Yu, Xiaokui Xiao
Prompt Inference Attack on Distributed Large Language Model Inference Frameworks
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The inference process of modern large language models (LLMs) demands prohibitive computational resources, rendering them infeasible for deployment on consumer-grade devices. To address this limitation, recent studies propose distributed LLM inference frameworks, which employ split learning principles to enable collaborative LLM inference on resource-constrained hardware. However, distributing LLM layers across participants requires the transmission of intermediate outputs, which may introduce privacy risks to the original input prompts - a critical issue that has yet to be thoroughly explored in the literature. In this paper, we rigorously examine the privacy vulnerabilities of distributed LLM inference frameworks by designing and evaluating three prompt inference attacks aimed at reconstructing input prompts from intermediate LLM outputs. These attacks are developed under various query and data constraints to reflect diverse real-world LLM service scenarios. Specifically, the first attack assumes an unlimited query budget and access to an auxiliary dataset sharing the same distribution as the target prompts. The second attack also leverages unlimited queries but uses an auxiliary dataset with a distribution differing from the target prompts. The third attack operates under the most restrictive scenario, with limited query budgets and no auxiliary dataset available. We evaluate these attacks on a range of LLMs, including state-of-the-art models such as Llama-3.2 and Phi-3.5, as well as widely-used models like GPT-2 and BERT for comparative analysis. Our experiments show that the first two attacks achieve reconstruction accuracies exceeding 90%, while the third achieves accuracies typically above 50%, even under stringent constraints. These findings highlight privacy risks in distributed LLM inference frameworks, issuing a strong alert on their deployment in real-world applications.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 11:36:29 GMT" } ]
2025-03-13T00:00:00
[ [ "Luo", "Xinjian", "" ], [ "Yu", "Ting", "" ], [ "Xiao", "Xiaokui", "" ] ]
TITLE: Prompt Inference Attack on Distributed Large Language Model Inference Frameworks ABSTRACT: The inference process of modern large language models (LLMs) demands prohibitive computational resources, rendering them infeasible for deployment on consumer-grade devices. To address this limitation, recent studies propose distributed LLM inference frameworks, which employ split learning principles to enable collaborative LLM inference on resource-constrained hardware. However, distributing LLM layers across participants requires the transmission of intermediate outputs, which may introduce privacy risks to the original input prompts - a critical issue that has yet to be thoroughly explored in the literature. In this paper, we rigorously examine the privacy vulnerabilities of distributed LLM inference frameworks by designing and evaluating three prompt inference attacks aimed at reconstructing input prompts from intermediate LLM outputs. These attacks are developed under various query and data constraints to reflect diverse real-world LLM service scenarios. Specifically, the first attack assumes an unlimited query budget and access to an auxiliary dataset sharing the same distribution as the target prompts. The second attack also leverages unlimited queries but uses an auxiliary dataset with a distribution differing from the target prompts. The third attack operates under the most restrictive scenario, with limited query budgets and no auxiliary dataset available. We evaluate these attacks on a range of LLMs, including state-of-the-art models such as Llama-3.2 and Phi-3.5, as well as widely-used models like GPT-2 and BERT for comparative analysis. Our experiments show that the first two attacks achieve reconstruction accuracies exceeding 90%, while the third achieves accuracies typically above 50%, even under stringent constraints. These findings highlight privacy risks in distributed LLM inference frameworks, issuing a strong alert on their deployment in real-world applications.
no_new_dataset
0.941439
2503.09294
Peng Hu
Peng Hu, Chunming He, Lei Xu, Jingduo Tian, Sina Farsiu, Yulun Zhang, Pei Liu and Xiu Li
IQPFR: An Image Quality Prior for Blind Face Restoration and Beyond
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blind Face Restoration (BFR) addresses the challenge of reconstructing degraded low-quality (LQ) facial images into high-quality (HQ) outputs. Conventional approaches predominantly rely on learning feature representations from ground-truth (GT) data; however, inherent imperfections in GT datasets constrain restoration performance to the mean quality level of the training data, rather than the maximum attainable visual quality. To overcome this limitation, we propose a novel framework that incorporates an Image Quality Prior (IQP) derived from No-Reference Image Quality Assessment (NR-IQA) models to guide the restoration process toward optimal HQ reconstructions. Our methodology synergizes this IQP with a learned codebook prior through two critical innovations: (1) During codebook learning, we devise a dual-branch codebook architecture that disentangles feature extraction into universal structural components and HQ-specific attributes, ensuring comprehensive representation of both common and high-quality facial characteristics. (2) In the codebook lookup stage, we implement a quality-conditioned Transformer-based framework. NR-IQA-derived quality scores act as dynamic conditioning signals to steer restoration toward the highest feasible quality standard. This score-conditioned paradigm enables plug-and-play enhancement of existing BFR architectures without modifying the original structure. We also formulate a discrete representation-based quality optimization strategy that circumvents over-optimization artifacts prevalent in continuous latent space approaches. Extensive experiments demonstrate that our method outperforms state-of-the-art techniques across multiple benchmarks. Moreover, our quality-conditioned framework demonstrates consistent performance improvements when integrated with prior BFR models. The code will be released.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 11:39:51 GMT" } ]
2025-03-13T00:00:00
[ [ "Hu", "Peng", "" ], [ "He", "Chunming", "" ], [ "Xu", "Lei", "" ], [ "Tian", "Jingduo", "" ], [ "Farsiu", "Sina", "" ], [ "Zhang", "Yulun", "" ], [ "Liu", "Pei", "" ], [ "Li", "Xiu", "" ] ]
TITLE: IQPFR: An Image Quality Prior for Blind Face Restoration and Beyond ABSTRACT: Blind Face Restoration (BFR) addresses the challenge of reconstructing degraded low-quality (LQ) facial images into high-quality (HQ) outputs. Conventional approaches predominantly rely on learning feature representations from ground-truth (GT) data; however, inherent imperfections in GT datasets constrain restoration performance to the mean quality level of the training data, rather than the maximum attainable visual quality. To overcome this limitation, we propose a novel framework that incorporates an Image Quality Prior (IQP) derived from No-Reference Image Quality Assessment (NR-IQA) models to guide the restoration process toward optimal HQ reconstructions. Our methodology synergizes this IQP with a learned codebook prior through two critical innovations: (1) During codebook learning, we devise a dual-branch codebook architecture that disentangles feature extraction into universal structural components and HQ-specific attributes, ensuring comprehensive representation of both common and high-quality facial characteristics. (2) In the codebook lookup stage, we implement a quality-conditioned Transformer-based framework. NR-IQA-derived quality scores act as dynamic conditioning signals to steer restoration toward the highest feasible quality standard. This score-conditioned paradigm enables plug-and-play enhancement of existing BFR architectures without modifying the original structure. We also formulate a discrete representation-based quality optimization strategy that circumvents over-optimization artifacts prevalent in continuous latent space approaches. Extensive experiments demonstrate that our method outperforms state-of-the-art techniques across multiple benchmarks. Moreover, our quality-conditioned framework demonstrates consistent performance improvements when integrated with prior BFR models. The code will be released.
no_new_dataset
0.945197
2503.09296
Bingzheng Jiang
Bingzheng Jiang, Jiayuan Wang, Han Ding, Lijun Zhu
MonoSLAM: Robust Monocular SLAM with Global Structure Optimization
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a robust monocular visual SLAM system that simultaneously utilizes point, line, and vanishing point features for accurate camera pose estimation and mapping. To address the critical challenge of achieving reliable localization in low-texture environments, where traditional point-based systems often fail due to insufficient visual features, we introduce a novel approach leveraging Global Primitives structural information to improve the system's robustness and accuracy performance. Our key innovation lies in constructing vanishing points from line features and proposing a weighted fusion strategy to build Global Primitives in the world coordinate system. This strategy associates multiple frames with non-overlapping regions and formulates a multi-frame reprojection error optimization, significantly improving tracking accuracy in texture-scarce scenarios. Evaluations on various datasets show that our system outperforms state-of-the-art methods in trajectory precision, particularly in challenging environments.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 11:43:10 GMT" } ]
2025-03-13T00:00:00
[ [ "Jiang", "Bingzheng", "" ], [ "Wang", "Jiayuan", "" ], [ "Ding", "Han", "" ], [ "Zhu", "Lijun", "" ] ]
TITLE: MonoSLAM: Robust Monocular SLAM with Global Structure Optimization ABSTRACT: This paper presents a robust monocular visual SLAM system that simultaneously utilizes point, line, and vanishing point features for accurate camera pose estimation and mapping. To address the critical challenge of achieving reliable localization in low-texture environments, where traditional point-based systems often fail due to insufficient visual features, we introduce a novel approach leveraging Global Primitives structural information to improve the system's robustness and accuracy performance. Our key innovation lies in constructing vanishing points from line features and proposing a weighted fusion strategy to build Global Primitives in the world coordinate system. This strategy associates multiple frames with non-overlapping regions and formulates a multi-frame reprojection error optimization, significantly improving tracking accuracy in texture-scarce scenarios. Evaluations on various datasets show that our system outperforms state-of-the-art methods in trajectory precision, particularly in challenging environments.
no_new_dataset
0.948537
2503.09302
Augustine O. Nwajana
Halima I. Kure, Pradipta Sarkar, Ahmed B. Ndanusa, and Augustine O. Nwajana
Detecting and Preventing Data Poisoning Attacks on AI Models
9 pages, 8 figures
null
null
null
cs.CR eess.IV
http://creativecommons.org/licenses/by/4.0/
This paper investigates the critical issue of data poisoning attacks on AI models, a growing concern in the ever-evolving landscape of artificial intelligence and cybersecurity. As advanced technology systems become increasingly prevalent across various sectors, the need for robust defence mechanisms against adversarial attacks becomes paramount. The study aims to develop and evaluate novel techniques for detecting and preventing data poisoning attacks, focusing on both theoretical frameworks and practical applications. Through a comprehensive literature review, experimental validation using the CIFAR-10 and Insurance Claims datasets, and the development of innovative algorithms, this paper seeks to enhance the resilience of AI models against malicious data manipulation. The study explores various methods, including anomaly detection, robust optimization strategies, and ensemble learning, to identify and mitigate the effects of poisoned data during model training. Experimental results indicate that data poisoning significantly degrades model performance, reducing classification accuracy by up to 27% in image recognition tasks (CIFAR-10) and 22% in fraud detection models (Insurance Claims dataset). The proposed defence mechanisms, including statistical anomaly detection and adversarial training, successfully mitigated poisoning effects, improving model robustness and restoring accuracy levels by an average of 15-20%. The findings further demonstrate that ensemble learning techniques provide an additional layer of resilience, reducing false positives and false negatives caused by adversarial data injections.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 11:55:01 GMT" } ]
2025-03-13T00:00:00
[ [ "Kure", "Halima I.", "" ], [ "Sarkar", "Pradipta", "" ], [ "Ndanusa", "Ahmed B.", "" ], [ "Nwajana", "Augustine O.", "" ] ]
TITLE: Detecting and Preventing Data Poisoning Attacks on AI Models ABSTRACT: This paper investigates the critical issue of data poisoning attacks on AI models, a growing concern in the ever-evolving landscape of artificial intelligence and cybersecurity. As advanced technology systems become increasingly prevalent across various sectors, the need for robust defence mechanisms against adversarial attacks becomes paramount. The study aims to develop and evaluate novel techniques for detecting and preventing data poisoning attacks, focusing on both theoretical frameworks and practical applications. Through a comprehensive literature review, experimental validation using the CIFAR-10 and Insurance Claims datasets, and the development of innovative algorithms, this paper seeks to enhance the resilience of AI models against malicious data manipulation. The study explores various methods, including anomaly detection, robust optimization strategies, and ensemble learning, to identify and mitigate the effects of poisoned data during model training. Experimental results indicate that data poisoning significantly degrades model performance, reducing classification accuracy by up to 27% in image recognition tasks (CIFAR-10) and 22% in fraud detection models (Insurance Claims dataset). The proposed defence mechanisms, including statistical anomaly detection and adversarial training, successfully mitigated poisoning effects, improving model robustness and restoring accuracy levels by an average of 15-20%. The findings further demonstrate that ensemble learning techniques provide an additional layer of resilience, reducing false positives and false negatives caused by adversarial data injections.
no_new_dataset
0.946151
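The data-poisoning study above lists statistical anomaly detection among its defences; the paper's concrete detector is not included in this record, so the sketch below only illustrates the generic idea with an Isolation Forest over hypothetical feature representations of the training set.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def flag_suspect_samples(features, contamination=0.05):
    # features: (n_samples, n_dims), e.g. penultimate-layer activations of a
    # model trained on the possibly poisoned set. Returns True for anomalies.
    iso = IsolationForest(contamination=contamination, random_state=0)
    return iso.fit_predict(features) == -1       # -1 = anomaly, 1 = inlier

feats = np.random.randn(1000, 64)                # stand-in features
mask = flag_suspect_samples(feats)
print(mask.sum(), "samples flagged for review before retraining")
```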
2503.09321
Gorjan Radevski
Gorjan Radevski, Teodora Popordanoska, Matthew B. Blaschko, Tinne Tuytelaars
DAVE: Diagnostic benchmark for Audio Visual Evaluation
First two authors contributed equally
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Audio-visual understanding is a rapidly evolving field that seeks to integrate and interpret information from both auditory and visual modalities. Despite recent advances in multi-modal learning, existing benchmarks often suffer from strong visual bias -- where answers can be inferred from visual data alone -- and provide only aggregate scores that conflate multiple sources of error. This makes it difficult to determine whether models struggle with visual understanding, audio interpretation, or audio-visual alignment. In this work, we introduce DAVE (Diagnostic Audio Visual Evaluation), a novel benchmark dataset designed to systematically evaluate audio-visual models across controlled challenges. DAVE alleviates existing limitations by (i) ensuring both modalities are necessary to answer correctly and (ii) decoupling evaluation into atomic subcategories. Our detailed analysis of state-of-the-art models reveals specific failure modes and provides targeted insights for improvement. By offering this standardized diagnostic framework, we aim to facilitate more robust development of audio-visual models. The dataset is released: https://github.com/gorjanradevski/dave
[ { "version": "v1", "created": "Wed, 12 Mar 2025 12:12:46 GMT" } ]
2025-03-13T00:00:00
[ [ "Radevski", "Gorjan", "" ], [ "Popordanoska", "Teodora", "" ], [ "Blaschko", "Matthew B.", "" ], [ "Tuytelaars", "Tinne", "" ] ]
TITLE: DAVE: Diagnostic benchmark for Audio Visual Evaluation ABSTRACT: Audio-visual understanding is a rapidly evolving field that seeks to integrate and interpret information from both auditory and visual modalities. Despite recent advances in multi-modal learning, existing benchmarks often suffer from strong visual bias -- where answers can be inferred from visual data alone -- and provide only aggregate scores that conflate multiple sources of error. This makes it difficult to determine whether models struggle with visual understanding, audio interpretation, or audio-visual alignment. In this work, we introduce DAVE (Diagnostic Audio Visual Evaluation), a novel benchmark dataset designed to systematically evaluate audio-visual models across controlled challenges. DAVE alleviates existing limitations by (i) ensuring both modalities are necessary to answer correctly and (ii) decoupling evaluation into atomic subcategories. Our detailed analysis of state-of-the-art models reveals specific failure modes and provides targeted insights for improvement. By offering this standardized diagnostic framework, we aim to facilitate more robust development of audio-visual models. The dataset is released: https://github.com/gorjanradevski/dave
new_dataset
0.957118
2503.09330
Thomas De Min
Thomas De Min, Subhankar Roy, St\'ephane Lathuili\`ere, Elisa Ricci, Massimiliano Mancini
Group-robust Machine Unlearning
Work in progress
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Machine unlearning is an emerging paradigm to remove the influence of specific training data (i.e., the forget set) from a model while preserving its knowledge of the rest of the data (i.e., the retain set). Previous approaches assume the forget data to be sampled uniformly from all training datapoints. However, if the data to unlearn is dominant in one group, we empirically show that performance for this group degrades, leading to fairness issues. This work tackles the overlooked problem of non-uniformly distributed forget sets, which we call group-robust machine unlearning, by presenting a simple, effective strategy that mitigates the performance loss in dominant groups via sample distribution reweighting. Moreover, we present MIU (Mutual Information-aware Machine Unlearning), the first approach for group robustness in approximate machine unlearning. MIU minimizes the mutual information between model features and group information, achieving unlearning while reducing performance degradation in the dominant group of the forget set. Additionally, MIU exploits sample distribution reweighting and mutual information calibration with the original model to preserve group robustness. We conduct experiments on three datasets and show that MIU outperforms standard methods, achieving unlearning without compromising model robustness. Source code available at https://github.com/tdemin16/group-robust_machine_unlearning.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 12:24:05 GMT" } ]
2025-03-13T00:00:00
[ [ "De Min", "Thomas", "" ], [ "Roy", "Subhankar", "" ], [ "Lathuilière", "Stéphane", "" ], [ "Ricci", "Elisa", "" ], [ "Mancini", "Massimiliano", "" ] ]
TITLE: Group-robust Machine Unlearning ABSTRACT: Machine unlearning is an emerging paradigm to remove the influence of specific training data (i.e., the forget set) from a model while preserving its knowledge of the rest of the data (i.e., the retain set). Previous approaches assume the forget data to be sampled uniformly from all training datapoints. However, if the data to unlearn is dominant in one group, we empirically show that performance for this group degrades, leading to fairness issues. This work tackles the overlooked problem of non-uniformly distributed forget sets, which we call group-robust machine unlearning, by presenting a simple, effective strategy that mitigates the performance loss in dominant groups via sample distribution reweighting. Moreover, we present MIU (Mutual Information-aware Machine Unlearning), the first approach for group robustness in approximate machine unlearning. MIU minimizes the mutual information between model features and group information, achieving unlearning while reducing performance degradation in the dominant group of the forget set. Additionally, MIU exploits sample distribution reweighting and mutual information calibration with the original model to preserve group robustness. We conduct experiments on three datasets and show that MIU outperforms standard methods, achieving unlearning without compromising model robustness. Source code available at https://github.com/tdemin16/group-robust_machine_unlearning.
no_new_dataset
0.948775
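The group-robust unlearning record above relies on sample distribution reweighting to keep dominant groups from being over-penalized; MIU's exact scheme is not given here, so the snippet below only sketches a generic inverse-frequency group reweighting as one plausible instantiation.

```python
import torch

def inverse_frequency_weights(group_ids):
    # group_ids: (n,) integer group label per forget-set sample.
    groups, counts = torch.unique(group_ids, return_counts=True)
    w_per_group = counts.sum().float() / (len(groups) * counts.float())
    lookup = {g.item(): w.item() for g, w in zip(groups, w_per_group)}
    return torch.tensor([lookup[g.item()] for g in group_ids])

weights = inverse_frequency_weights(torch.tensor([0, 0, 0, 1, 2, 2]))
print(weights)   # per-sample weights that would scale the unlearning loss terms
```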
2503.09332
Dai Sun
Dai Sun and Huhao Guan and Kun Zhang and Xike Xie and S. Kevin Zhou
SDD-4DGS: Static-Dynamic Aware Decoupling in Gaussian Splatting for 4D Scene Reconstruction
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Dynamic and static components in scenes often exhibit distinct properties, yet most 4D reconstruction methods treat them indiscriminately, leading to suboptimal performance in both cases. This work introduces SDD-4DGS, the first framework for static-dynamic decoupled 4D scene reconstruction based on Gaussian Splatting. Our approach is built upon a novel probabilistic dynamic perception coefficient that is naturally integrated into the Gaussian reconstruction pipeline, enabling adaptive separation of static and dynamic components. With carefully designed implementation strategies to realize this theoretical framework, our method effectively facilitates explicit learning of motion patterns for dynamic elements while maintaining geometric stability for static structures. Extensive experiments on five benchmark datasets demonstrate that SDD-4DGS consistently outperforms state-of-the-art methods in reconstruction fidelity, with enhanced detail restoration for static structures and precise modeling of dynamic motions. The code will be released.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 12:25:58 GMT" } ]
2025-03-13T00:00:00
[ [ "Sun", "Dai", "" ], [ "Guan", "Huhao", "" ], [ "Zhang", "Kun", "" ], [ "Xie", "Xike", "" ], [ "Zhou", "S. Kevin", "" ] ]
TITLE: SDD-4DGS: Static-Dynamic Aware Decoupling in Gaussian Splatting for 4D Scene Reconstruction ABSTRACT: Dynamic and static components in scenes often exhibit distinct properties, yet most 4D reconstruction methods treat them indiscriminately, leading to suboptimal performance in both cases. This work introduces SDD-4DGS, the first framework for static-dynamic decoupled 4D scene reconstruction based on Gaussian Splatting. Our approach is built upon a novel probabilistic dynamic perception coefficient that is naturally integrated into the Gaussian reconstruction pipeline, enabling adaptive separation of static and dynamic components. With carefully designed implementation strategies to realize this theoretical framework, our method effectively facilitates explicit learning of motion patterns for dynamic elements while maintaining geometric stability for static structures. Extensive experiments on five benchmark datasets demonstrate that SDD-4DGS consistently outperforms state-of-the-art methods in reconstruction fidelity, with enhanced detail restoration for static structures and precise modeling of dynamic motions. The code will be released.
no_new_dataset
0.943034
2503.09342
Aymen Mir
Aymen Mir, Arthur Moreau, Helisa Dhamo, Zhensong Zhang, Eduardo P\'erez-Pellitero
GASPACHO: Gaussian Splatting for Controllable Humans and Objects
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present GASPACHO: a method for generating photorealistic controllable renderings of human-object interactions. Given a set of multi-view RGB images of human-object interactions, our method reconstructs animatable templates of the human and object as separate sets of Gaussians simultaneously. Different from existing work, which focuses on human reconstruction and ignores objects as background, our method explicitly reconstructs both humans and objects, thereby allowing for controllable renderings of novel human-object interactions in different poses from novel camera viewpoints. During reconstruction, we constrain the Gaussians that generate rendered images to be a linear function of a set of canonical Gaussians. By simply changing the parameters of the linear deformation functions after training, our method can generate renderings of novel human-object interactions in novel poses from novel camera viewpoints. We learn the 3D Gaussian properties of the canonical Gaussians on the underlying 2D manifold of the canonical human and object templates. This in turn requires a canonical object template with a fixed UV unwrapping. To define such an object template, we use a feature-based representation to track the object across the multi-view sequence. We further propose an occlusion-aware photometric loss that allows for reconstructions under significant occlusions. Several experiments on two human-object datasets - BEHAVE and DNA-Rendering - demonstrate that our method allows for high-quality reconstruction of human and object templates under significant occlusion and the synthesis of controllable renderings of novel human-object interactions in novel human poses from novel camera views.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 12:38:48 GMT" } ]
2025-03-13T00:00:00
[ [ "Mir", "Aymen", "" ], [ "Moreau", "Arthur", "" ], [ "Dhamo", "Helisa", "" ], [ "Zhang", "Zhensong", "" ], [ "Pérez-Pellitero", "Eduardo", "" ] ]
TITLE: GASPACHO: Gaussian Splatting for Controllable Humans and Objects ABSTRACT: We present GASPACHO: a method for generating photorealistic controllable renderings of human-object interactions. Given a set of multi-view RGB images of human-object interactions, our method reconstructs animatable templates of the human and object as separate sets of Gaussians simultaneously. Different from existing work, which focuses on human reconstruction and ignores objects as background, our method explicitly reconstructs both humans and objects, thereby allowing for controllable renderings of novel human-object interactions in different poses from novel camera viewpoints. During reconstruction, we constrain the Gaussians that generate rendered images to be a linear function of a set of canonical Gaussians. By simply changing the parameters of the linear deformation functions after training, our method can generate renderings of novel human-object interactions in novel poses from novel camera viewpoints. We learn the 3D Gaussian properties of the canonical Gaussians on the underlying 2D manifold of the canonical human and object templates. This in turn requires a canonical object template with a fixed UV unwrapping. To define such an object template, we use a feature-based representation to track the object across the multi-view sequence. We further propose an occlusion-aware photometric loss that allows for reconstructions under significant occlusions. Several experiments on two human-object datasets - BEHAVE and DNA-Rendering - demonstrate that our method allows for high-quality reconstruction of human and object templates under significant occlusion and the synthesis of controllable renderings of novel human-object interactions in novel human poses from novel camera views.
no_new_dataset
0.954732
2503.09344
Lu Qi
Lehan Yang, Lu Qi, Xiangtai Li, Sheng Li, Varun Jampani, Ming-Hsuan Yang
Unified Dense Prediction of Video Diffusion
Accepted by CVPR2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a unified network for simultaneously generating videos and their corresponding entity segmentation and depth maps from text prompts. We utilize colormap to represent entity masks and depth maps, tightly integrating dense prediction with RGB video generation. Introducing dense prediction information improves video generation's consistency and motion smoothness without increasing computational costs. Incorporating learnable task embeddings brings multiple dense prediction tasks into a single model, enhancing flexibility and further boosting performance. We further propose a large-scale dense prediction video dataset~\datasetname, addressing the issue that existing datasets do not concurrently contain captions, videos, segmentation, or depth maps. Comprehensive experiments demonstrate the high efficiency of our method, surpassing the state-of-the-art in terms of video quality, consistency, and motion smoothness.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 12:41:02 GMT" } ]
2025-03-13T00:00:00
[ [ "Yang", "Lehan", "" ], [ "Qi", "Lu", "" ], [ "Li", "Xiangtai", "" ], [ "Li", "Sheng", "" ], [ "Jampani", "Varun", "" ], [ "Yang", "Ming-Hsuan", "" ] ]
TITLE: Unified Dense Prediction of Video Diffusion ABSTRACT: We present a unified network for simultaneously generating videos and their corresponding entity segmentation and depth maps from text prompts. We utilize colormap to represent entity masks and depth maps, tightly integrating dense prediction with RGB video generation. Introducing dense prediction information improves video generation's consistency and motion smoothness without increasing computational costs. Incorporating learnable task embeddings brings multiple dense prediction tasks into a single model, enhancing flexibility and further boosting performance. We further propose a large-scale dense prediction video dataset~\datasetname, addressing the issue that existing datasets do not concurrently contain captions, videos, segmentation, or depth maps. Comprehensive experiments demonstrate the high efficiency of our method, surpassing the state-of-the-art in terms of video quality, consistency, and motion smoothness.
new_dataset
0.945851
2503.09349
Simon Geirnaert
Simon Geirnaert, Jonas Vanthornhout, Tom Francart, Alexander Bertrand
Performance Modeling for Correlation-based Neural Decoding of Auditory Attention to Speech
null
null
null
null
eess.SP cs.SD eess.AS q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Correlation-based auditory attention decoding (AAD) algorithms exploit neural tracking mechanisms to determine listener attention among competing speech sources via, e.g., electroencephalography signals. The correlation coefficients between the decoded neural responses and encoded speech stimuli of the different speakers then serve as AAD decision variables. A critical trade-off exists between the temporal resolution (the decision window length used to compute these correlations) and the AAD accuracy. This trade-off is typically characterized by evaluating AAD accuracy across multiple window lengths, leading to the performance curve. We propose a novel method to model this trade-off curve using labeled correlations from only a single decision window length. Our approach models the (un)attended correlations with a normal distribution after applying the Fisher transformation, enabling accurate AAD accuracy prediction across different window lengths. We validate the method on two distinct AAD implementations: a linear decoder and the non-linear VLAAI deep neural network, evaluated on separate datasets. Results show consistently low modeling errors of approximately 2 percentage points, with 94% of true accuracies falling within estimated 95%-confidence intervals. The proposed method enables efficient performance curve modeling without extensive multi-window length evaluation, facilitating practical applications in, e.g., performance tracking in neuro-steered hearing devices to continuously adapt the system parameters over time.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 12:51:46 GMT" } ]
2025-03-13T00:00:00
[ [ "Geirnaert", "Simon", "" ], [ "Vanthornhout", "Jonas", "" ], [ "Francart", "Tom", "" ], [ "Bertrand", "Alexander", "" ] ]
TITLE: Performance Modeling for Correlation-based Neural Decoding of Auditory Attention to Speech ABSTRACT: Correlation-based auditory attention decoding (AAD) algorithms exploit neural tracking mechanisms to determine listener attention among competing speech sources via, e.g., electroencephalography signals. The correlation coefficients between the decoded neural responses and encoded speech stimuli of the different speakers then serve as AAD decision variables. A critical trade-off exists between the temporal resolution (the decision window length used to compute these correlations) and the AAD accuracy. This trade-off is typically characterized by evaluating AAD accuracy across multiple window lengths, leading to the performance curve. We propose a novel method to model this trade-off curve using labeled correlations from only a single decision window length. Our approach models the (un)attended correlations with a normal distribution after applying the Fisher transformation, enabling accurate AAD accuracy prediction across different window lengths. We validate the method on two distinct AAD implementations: a linear decoder and the non-linear VLAAI deep neural network, evaluated on separate datasets. Results show consistently low modeling errors of approximately 2 percentage points, with 94% of true accuracies falling within estimated 95%-confidence intervals. The proposed method enables efficient performance curve modeling without extensive multi-window length evaluation, facilitating practical applications in, e.g., performance tracking in neuro-steered hearing devices to continuously adapt the system parameters over time.
no_new_dataset
0.948394
2503.09354
Christoph Huber
Christoph Huber, Dino Knoll and Michael Guthe
Fully-Synthetic Training for Visual Quality Inspection in Automotive Production
Accepted for publication in Procedia CIRP
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Visual Quality Inspection plays a crucial role in modern manufacturing environments as it ensures customer safety and satisfaction. The introduction of Computer Vision (CV) has revolutionized visual quality inspection by improving the accuracy and efficiency of defect detection. However, traditional CV models heavily rely on extensive datasets for training, which can be costly, time-consuming, and error-prone. To overcome these challenges, synthetic images have emerged as a promising alternative. They offer a cost-effective solution with automatically generated labels. In this paper, we propose a pipeline for generating synthetic images using domain randomization. We evaluate our approach in three real inspection scenarios and demonstrate that an object detection model trained solely on synthetic data can outperform models trained on real images.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 12:58:30 GMT" } ]
2025-03-13T00:00:00
[ [ "Huber", "Christoph", "" ], [ "Knoll", "Dino", "" ], [ "Guthe", "Michael", "" ] ]
TITLE: Fully-Synthetic Training for Visual Quality Inspection in Automotive Production ABSTRACT: Visual Quality Inspection plays a crucial role in modern manufacturing environments as it ensures customer safety and satisfaction. The introduction of Computer Vision (CV) has revolutionized visual quality inspection by improving the accuracy and efficiency of defect detection. However, traditional CV models heavily rely on extensive datasets for training, which can be costly, time-consuming, and error-prone. To overcome these challenges, synthetic images have emerged as a promising alternative. They offer a cost-effective solution with automatically generated labels. In this paper, we propose a pipeline for generating synthetic images using domain randomization. We evaluate our approach in three real inspection scenarios and demonstrate that an object detection model trained solely on synthetic data can outperform models trained on real images.
no_new_dataset
0.953362
2503.09355
Lianyuan Yu
Lianyuan Yu, Xiuzhen Guo, Ji Shi, Hongxiao Wang, and Hongwei Li
GIGP: A Global Information Interacting and Geometric Priors Focusing Framework for Semi-supervised Medical Image Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semi-supervised learning enhances medical image segmentation by leveraging unlabeled data, reducing reliance on extensive labeled datasets. On the one hand, the distribution discrepancy between limited labeled data and abundant unlabeled data can hinder model generalization. Most existing methods rely on local similarity matching, which may introduce bias. In contrast, Mamba effectively models global context with linear complexity, learning more comprehensive data representations. On the other hand, medical images usually exhibit consistent anatomical structures defined by geometric features. Most existing methods fail to fully utilize global geometric priors, such as volumes, moments etc. In this work, we introduce a global information interaction and geometric priors focus framework (GIGP). Firstly, we present a Global Information Interaction Mamba module to reduce distribution discrepancy between labeled and unlabeled data. Secondly, we propose a Geometric Moment Attention Mechanism to extract richer global geometric features. Finally, we propose Global Geometric Perturbation Consistency to simulate organ dynamics and geometric variations, enhancing the ability of the model to learn generalized features. The superior performance on the NIH Pancreas and Left Atrium datasets demonstrates the effectiveness of our approach.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 12:59:38 GMT" } ]
2025-03-13T00:00:00
[ [ "Yu", "Lianyuan", "" ], [ "Guo", "Xiuzhen", "" ], [ "Shi", "Ji", "" ], [ "Wang", "Hongxiao", "" ], [ "Li", "Hongwei", "" ] ]
TITLE: GIGP: A Global Information Interacting and Geometric Priors Focusing Framework for Semi-supervised Medical Image Segmentation ABSTRACT: Semi-supervised learning enhances medical image segmentation by leveraging unlabeled data, reducing reliance on extensive labeled datasets. On the one hand, the distribution discrepancy between limited labeled data and abundant unlabeled data can hinder model generalization. Most existing methods rely on local similarity matching, which may introduce bias. In contrast, Mamba effectively models global context with linear complexity, learning more comprehensive data representations. On the other hand, medical images usually exhibit consistent anatomical structures defined by geometric features. Most existing methods fail to fully utilize global geometric priors, such as volumes, moments etc. In this work, we introduce a global information interaction and geometric priors focus framework (GIGP). Firstly, we present a Global Information Interaction Mamba module to reduce distribution discrepancy between labeled and unlabeled data. Secondly, we propose a Geometric Moment Attention Mechanism to extract richer global geometric features. Finally, we propose Global Geometric Perturbation Consistency to simulate organ dynamics and geometric variations, enhancing the ability of the model to learn generalized features. The superior performance on the NIH Pancreas and Left Atrium datasets demonstrates the effectiveness of our approach.
no_new_dataset
0.951774
2503.09358
Jiushen Cai
Jiushen Cai, Weihang Zhang, Hanruo Liu, Ningli Wang, and Huiqi Li
RetSTA: An LLM-Based Approach for Standardizing Clinical Fundus Image Reports
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Standardization of clinical reports is crucial for improving the quality of healthcare and facilitating data integration. The lack of unified standards, including format, terminology, and style, is a great challenge in clinical fundus diagnostic reports, which increases the difficulty for large language models (LLMs) to understand the data. To address this, we construct a bilingual standard terminology, containing fundus clinical terms and commonly used descriptions in clinical diagnosis. Then, we establish two models, RetSTA-7B-Zero and RetSTA-7B. RetSTA-7B-Zero, fine-tuned on an augmented dataset simulating clinical scenarios, demonstrates powerful standardization behaviors. However, it encounters a challenge of limitation to cover a wider range of diseases. To further enhance standardization performance, we build RetSTA-7B, which integrates a substantial amount of standardized data generated by RetSTA-7B-Zero along with corresponding English data, covering diverse complex clinical scenarios and achieving report-level standardization for the first time. Experimental results demonstrate that RetSTA-7B outperforms other compared LLMs in bilingual standardization task, which validates its superior performance and generalizability. The checkpoints are available at https://github.com/AB-Story/RetSTA-7B.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:00:57 GMT" } ]
2025-03-13T00:00:00
[ [ "Cai", "Jiushen", "" ], [ "Zhang", "Weihang", "" ], [ "Liu", "Hanruo", "" ], [ "Wang", "Ningli", "" ], [ "Li", "Huiqi", "" ] ]
TITLE: RetSTA: An LLM-Based Approach for Standardizing Clinical Fundus Image Reports ABSTRACT: Standardization of clinical reports is crucial for improving the quality of healthcare and facilitating data integration. The lack of unified standards, including format, terminology, and style, is a great challenge in clinical fundus diagnostic reports, which increases the difficulty for large language models (LLMs) to understand the data. To address this, we construct a bilingual standard terminology, containing fundus clinical terms and commonly used descriptions in clinical diagnosis. Then, we establish two models, RetSTA-7B-Zero and RetSTA-7B. RetSTA-7B-Zero, fine-tuned on an augmented dataset simulating clinical scenarios, demonstrates powerful standardization behaviors. However, it encounters a challenge of limitation to cover a wider range of diseases. To further enhance standardization performance, we build RetSTA-7B, which integrates a substantial amount of standardized data generated by RetSTA-7B-Zero along with corresponding English data, covering diverse complex clinical scenarios and achieving report-level standardization for the first time. Experimental results demonstrate that RetSTA-7B outperforms other compared LLMs in bilingual standardization task, which validates its superior performance and generalizability. The checkpoints are available at https://github.com/AB-Story/RetSTA-7B.
no_new_dataset
0.94079
2503.09361
Katharina Prasse
Katharina Prasse, Marcel Kleinmann, Inken Adam, Kerstin Beckersjuergen, Andreas Edte, Jona Frroku, Timotheus Gumpp, Steffen Jung, Isaac Bravo, Stefanie Walter, Margret Keuper
Deep Learning for Climate Action: Computer Vision Analysis of Visual Narratives on X
null
null
null
null
cs.CV cs.SI
http://creativecommons.org/licenses/by-sa/4.0/
Climate change is one of the most pressing challenges of the 21st century, sparking widespread discourse across social media platforms. Activists, policymakers, and researchers seek to understand public sentiment and narratives while access to social media data has become increasingly restricted in the post-API era. In this study, we analyze a dataset of climate change-related tweets from X (formerly Twitter) shared in 2019, containing 730k tweets along with the shared images. Our approach integrates statistical analysis, image classification, object detection, and sentiment analysis to explore visual narratives in climate discourse. Additionally, we introduce a graphical user interface (GUI) to facilitate interactive data exploration. Our findings reveal key themes in climate communication, highlight sentiment divergence between images and text, and underscore the strengths and limitations of foundation models in analyzing social media imagery. By releasing our code and tools, we aim to support future research on the intersection of climate change, social media, and computer vision.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:03:49 GMT" } ]
2025-03-13T00:00:00
[ [ "Prasse", "Katharina", "" ], [ "Kleinmann", "Marcel", "" ], [ "Adam", "Inken", "" ], [ "Beckersjuergen", "Kerstin", "" ], [ "Edte", "Andreas", "" ], [ "Frroku", "Jona", "" ], [ "Gumpp", "Timotheus", "" ], [ "Jung", "Steffen", "" ], [ "Bravo", "Isaac", "" ], [ "Walter", "Stefanie", "" ], [ "Keuper", "Margret", "" ] ]
TITLE: Deep Learning for Climate Action: Computer Vision Analysis of Visual Narratives on X ABSTRACT: Climate change is one of the most pressing challenges of the 21st century, sparking widespread discourse across social media platforms. Activists, policymakers, and researchers seek to understand public sentiment and narratives while access to social media data has become increasingly restricted in the post-API era. In this study, we analyze a dataset of climate change-related tweets from X (formerly Twitter) shared in 2019, containing 730k tweets along with the shared images. Our approach integrates statistical analysis, image classification, object detection, and sentiment analysis to explore visual narratives in climate discourse. Additionally, we introduce a graphical user interface (GUI) to facilitate interactive data exploration. Our findings reveal key themes in climate communication, highlight sentiment divergence between images and text, and underscore the strengths and limitations of foundation models in analyzing social media imagery. By releasing our code and tools, we aim to support future research on the intersection of climate change, social media, and computer vision.
no_new_dataset
0.878835
2503.09363
Yuxiang Wang
Yuxiang Wang, Wenqi Fan, Suhang Wang, Yao Ma
Towards Graph Foundation Models: A Transferability Perspective
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, Graph Foundation Models (GFMs) have gained significant attention for their potential to generalize across diverse graph domains and tasks. Some works focus on Domain-Specific GFMs, which are designed to address a variety of tasks within a specific domain, while others aim to create General-Purpose GFMs that extend the capabilities of domain-specific models to multiple domains. Regardless of the type, transferability is crucial for applying GFMs across different domains and tasks. However, achieving strong transferability is a major challenge due to the structural, feature, and distributional variations in graph data. To date, there has been no systematic research examining and analyzing GFMs from the perspective of transferability. To bridge the gap, we present the first comprehensive taxonomy that categorizes and analyzes existing GFMs through the lens of transferability, structuring GFMs around their application scope (domain-specific vs. general-purpose) and their approaches to knowledge acquisition and transfer. We provide a structured perspective on current progress and identify potential pathways for advancing GFM generalization across diverse graph datasets and tasks. We aim to shed light on the current landscape of GFMs and inspire future research directions in GFM development.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:04:05 GMT" } ]
2025-03-13T00:00:00
[ [ "Wang", "Yuxiang", "" ], [ "Fan", "Wenqi", "" ], [ "Wang", "Suhang", "" ], [ "Ma", "Yao", "" ] ]
TITLE: Towards Graph Foundation Models: A Transferability Perspective ABSTRACT: In recent years, Graph Foundation Models (GFMs) have gained significant attention for their potential to generalize across diverse graph domains and tasks. Some works focus on Domain-Specific GFMs, which are designed to address a variety of tasks within a specific domain, while others aim to create General-Purpose GFMs that extend the capabilities of domain-specific models to multiple domains. Regardless of the type, transferability is crucial for applying GFMs across different domains and tasks. However, achieving strong transferability is a major challenge due to the structural, feature, and distributional variations in graph data. To date, there has been no systematic research examining and analyzing GFMs from the perspective of transferability. To bridge the gap, we present the first comprehensive taxonomy that categorizes and analyzes existing GFMs through the lens of transferability, structuring GFMs around their application scope (domain-specific vs. general-purpose) and their approaches to knowledge acquisition and transfer. We provide a structured perspective on current progress and identify potential pathways for advancing GFM generalization across diverse graph datasets and tasks. We aim to shed light on the current landscape of GFMs and inspire future research directions in GFM development.
no_new_dataset
0.939748
2503.09366
Ziyi Huang
Ziyi Huang, Yang Li, Dushuai Li, Yao Mu, Hongmao Qin and Nan Zheng
Post-interactive Multimodal Trajectory Prediction for Autonomous Driving
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling the interactions among agents for trajectory prediction of autonomous driving has been challenging due to the inherent uncertainty in agents' behavior. The interactions involved in the predicted trajectories of agents, also called post-interactions, have rarely been considered in trajectory prediction models. To this end, we propose a coarse-to-fine Transformer for multimodal trajectory prediction, i.e., Pioformer, which explicitly extracts the post-interaction features to enhance the prediction accuracy. Specifically, we first build a Coarse Trajectory Network to generate coarse trajectories based on the observed trajectories and lane segments, in which the low-order interaction features are extracted with the graph neural networks. Next, we build a hypergraph neural network-based Trajectory Proposal Network to generate trajectory proposals, where the high-order interaction features are learned by the hypergraphs. Finally, the trajectory proposals are sent to the Proposal Refinement Network for further refinement. The observed trajectories and trajectory proposals are concatenated together as the inputs of the Proposal Refinement Network, in which the post-interaction features are learned by combining the previous interaction features and trajectory consistency features. Moreover, we propose a three-stage training scheme to facilitate the learning process. Extensive experiments on the Argoverse 1 dataset demonstrate the superiority of our method. Compared with the baseline HiVT-64, our model has reduced the prediction errors by 4.4%, 8.4%, 14.4%, 5.7% regarding metrics minADE6, minFDE6, MR6, and brier-minFDE6, respectively.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:10:09 GMT" } ]
2025-03-13T00:00:00
[ [ "Huang", "Ziyi", "" ], [ "Li", "Yang", "" ], [ "Li", "Dushuai", "" ], [ "Mu", "Yao", "" ], [ "Qin", "Hongmao", "" ], [ "Zheng", "Nan", "" ] ]
TITLE: Post-interactive Multimodal Trajectory Prediction for Autonomous Driving ABSTRACT: Modeling the interactions among agents for trajectory prediction of autonomous driving has been challenging due to the inherent uncertainty in agents' behavior. The interactions involved in the predicted trajectories of agents, also called post-interactions, have rarely been considered in trajectory prediction models. To this end, we propose a coarse-to-fine Transformer for multimodal trajectory prediction, i.e., Pioformer, which explicitly extracts the post-interaction features to enhance the prediction accuracy. Specifically, we first build a Coarse Trajectory Network to generate coarse trajectories based on the observed trajectories and lane segments, in which the low-order interaction features are extracted with the graph neural networks. Next, we build a hypergraph neural network-based Trajectory Proposal Network to generate trajectory proposals, where the high-order interaction features are learned by the hypergraphs. Finally, the trajectory proposals are sent to the Proposal Refinement Network for further refinement. The observed trajectories and trajectory proposals are concatenated together as the inputs of the Proposal Refinement Network, in which the post-interaction features are learned by combining the previous interaction features and trajectory consistency features. Moreover, we propose a three-stage training scheme to facilitate the learning process. Extensive experiments on the Argoverse 1 dataset demonstrate the superiority of our method. Compared with the baseline HiVT-64, our model has reduced the prediction errors by 4.4%, 8.4%, 14.4%, 5.7% regarding metrics minADE6, minFDE6, MR6, and brier-minFDE6, respectively.
no_new_dataset
0.951188
2503.09370
Yang Nan
Yang Nan, Huichi Zhou, Xiaodan Xing, Giorgos Papanastasiou, Lei Zhu, Zhifan Gao, Alejandro F Fangi, Guang Yang
Revisiting Medical Image Retrieval via Knowledge Consolidation
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As artificial intelligence and digital medicine increasingly permeate healthcare systems, robust governance frameworks are essential to ensure ethical, secure, and effective implementation. In this context, medical image retrieval becomes a critical component of clinical data management, playing a vital role in decision-making and safeguarding patient information. Existing methods usually learn hash functions using bottleneck features, which fail to produce representative hash codes from blended embeddings. Although contrastive hashing has shown superior performance, current approaches often treat image retrieval as a classification task, using category labels to create positive/negative pairs. Moreover, many methods fail to address the out-of-distribution (OOD) issue when models encounter external OOD queries or adversarial attacks. In this work, we propose a novel method to consolidate knowledge of hierarchical features and optimisation functions. We formulate the knowledge consolidation by introducing Depth-aware Representation Fusion (DaRF) and Structure-aware Contrastive Hashing (SCH). DaRF adaptively integrates shallow and deep representations into blended features, and SCH incorporates image fingerprints to enhance the adaptability of positive/negative pairings. These blended features further facilitate OOD detection and content-based recommendation, contributing to a secure AI-driven healthcare environment. Moreover, we present a content-guided ranking to improve the robustness and reproducibility of retrieval results. Our comprehensive assessments demonstrate that the proposed method could effectively recognise OOD samples and significantly outperform existing approaches in medical image retrieval (p<0.05). In particular, our method achieves a 5.6-38.9% improvement in mean Average Precision on the anatomical radiology dataset.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:16:42 GMT" } ]
2025-03-13T00:00:00
[ [ "Nan", "Yang", "" ], [ "Zhou", "Huichi", "" ], [ "Xing", "Xiaodan", "" ], [ "Papanastasiou", "Giorgos", "" ], [ "Zhu", "Lei", "" ], [ "Gao", "Zhifan", "" ], [ "Fangi", "Alejandro F", "" ], [ "Yang", "Guang", "" ] ]
TITLE: Revisiting Medical Image Retrieval via Knowledge Consolidation ABSTRACT: As artificial intelligence and digital medicine increasingly permeate healthcare systems, robust governance frameworks are essential to ensure ethical, secure, and effective implementation. In this context, medical image retrieval becomes a critical component of clinical data management, playing a vital role in decision-making and safeguarding patient information. Existing methods usually learn hash functions using bottleneck features, which fail to produce representative hash codes from blended embeddings. Although contrastive hashing has shown superior performance, current approaches often treat image retrieval as a classification task, using category labels to create positive/negative pairs. Moreover, many methods fail to address the out-of-distribution (OOD) issue when models encounter external OOD queries or adversarial attacks. In this work, we propose a novel method to consolidate knowledge of hierarchical features and optimisation functions. We formulate the knowledge consolidation by introducing Depth-aware Representation Fusion (DaRF) and Structure-aware Contrastive Hashing (SCH). DaRF adaptively integrates shallow and deep representations into blended features, and SCH incorporates image fingerprints to enhance the adaptability of positive/negative pairings. These blended features further facilitate OOD detection and content-based recommendation, contributing to a secure AI-driven healthcare environment. Moreover, we present a content-guided ranking to improve the robustness and reproducibility of retrieval results. Our comprehensive assessments demonstrate that the proposed method could effectively recognise OOD samples and significantly outperform existing approaches in medical image retrieval (p<0.05). In particular, our method achieves a 5.6-38.9% improvement in mean Average Precision on the anatomical radiology dataset.
no_new_dataset
0.9434
2503.09378
Fangzheng Qi
Fangzheng Qi, Zhenjie Hou, En Lin, Xing Li, iuzhen Liang, Xinwen Zhou
Pig behavior dataset and Spatial-temporal perception and enhancement networks based on the attention mechanism for pig behavior recognition
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recognition of pig behavior plays a crucial role in smart farming and welfare assurance for pigs. Currently, in the field of pig behavior recognition, the lack of publicly available behavioral datasets not only limits the development of innovative algorithms but also hampers model robustness and algorithm optimization. This paper proposes a dataset containing 13 pig behaviors that significantly impact welfare. Based on this dataset, this paper proposes spatial-temporal perception and enhancement networks based on the attention mechanism to model the spatiotemporal features of pig behaviors and their associated interaction areas in video data. The network is composed of a spatiotemporal perception network and a spatiotemporal feature enhancement network. The spatiotemporal perception network is responsible for establishing connections between the pigs and the key regions of their behaviors in the video data. The spatiotemporal feature enhancement network further strengthens the important spatial features of individual pigs and captures the long-term dependencies of the spatiotemporal features of individual behaviors by remodeling these connections, thereby enhancing the model's perception of spatiotemporal changes in pig behaviors. Experimental results demonstrate that on the dataset established in this paper, our proposed model achieves a MAP score of 75.92%, which is an 8.17% improvement over the best-performing traditional model. This study not only improves the accuracy and generalizability of individual pig behavior recognition but also provides new technological tools for modern smart farming. The dataset and related code will be made publicly available alongside this paper.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:27:29 GMT" } ]
2025-03-13T00:00:00
[ [ "Qi", "Fangzheng", "" ], [ "Hou", "Zhenjie", "" ], [ "Lin", "En", "" ], [ "Li", "Xing", "" ], [ "Liang", "iuzhen", "" ], [ "Zhou", "Xinwen", "" ] ]
TITLE: Pig behavior dataset and Spatial-temporal perception and enhancement networks based on the attention mechanism for pig behavior recognition ABSTRACT: The recognition of pig behavior plays a crucial role in smart farming and welfare assurance for pigs. Currently, in the field of pig behavior recognition, the lack of publicly available behavioral datasets not only limits the development of innovative algorithms but also hampers model robustness and algorithm optimization. This paper proposes a dataset containing 13 pig behaviors that significantly impact welfare. Based on this dataset, this paper proposes spatial-temporal perception and enhancement networks based on the attention mechanism to model the spatiotemporal features of pig behaviors and their associated interaction areas in video data. The network is composed of a spatiotemporal perception network and a spatiotemporal feature enhancement network. The spatiotemporal perception network is responsible for establishing connections between the pigs and the key regions of their behaviors in the video data. The spatiotemporal feature enhancement network further strengthens the important spatial features of individual pigs and captures the long-term dependencies of the spatiotemporal features of individual behaviors by remodeling these connections, thereby enhancing the model's perception of spatiotemporal changes in pig behaviors. Experimental results demonstrate that on the dataset established in this paper, our proposed model achieves a MAP score of 75.92%, which is an 8.17% improvement over the best-performing traditional model. This study not only improves the accuracy and generalizability of individual pig behavior recognition but also provides new technological tools for modern smart farming. The dataset and related code will be made publicly available alongside this paper.
new_dataset
0.951908
2503.09382
Jiani Huang
Jiani Huang, Shijie Wang, Liang-bo Ning, Wenqi Fan, Shuaiqiang Wang, Dawei Yin, Qing Li
Towards Next-Generation Recommender Systems: A Benchmark for Personalized Recommendation Assistant with LLMs
null
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommender systems (RecSys) are widely used across various modern digital platforms and have garnered significant attention. Traditional recommender systems usually focus only on fixed and simple recommendation scenarios, making it difficult to generalize to new and unseen recommendation tasks in an interactive paradigm. Recently, the advancement of large language models (LLMs) has revolutionized the foundational architecture of RecSys, driving their evolution into more intelligent and interactive personalized recommendation assistants. However, most existing studies rely on fixed task-specific prompt templates to generate recommendations and evaluate the performance of personalized assistants, which limits the comprehensive assessments of their capabilities. This is because commonly used datasets lack high-quality textual user queries that reflect real-world recommendation scenarios, making them unsuitable for evaluating LLM-based personalized recommendation assistants. To address this gap, we introduce RecBench+, a new dataset benchmark designed to assess LLMs' ability to handle intricate user recommendation needs in the era of LLMs. RecBench+ encompasses a diverse set of queries that span both hard conditions and soft preferences, with varying difficulty levels. We evaluated commonly used LLMs on RecBench+ and uncovered the following findings: 1) LLMs demonstrate preliminary abilities to act as recommendation assistants, 2) LLMs are better at handling queries with explicitly stated conditions, while facing challenges with queries that require reasoning or contain misleading information. Our dataset has been released at https://github.com/jiani-huang/RecBench.git.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:28:23 GMT" } ]
2025-03-13T00:00:00
[ [ "Huang", "Jiani", "" ], [ "Wang", "Shijie", "" ], [ "Ning", "Liang-bo", "" ], [ "Fan", "Wenqi", "" ], [ "Wang", "Shuaiqiang", "" ], [ "Yin", "Dawei", "" ], [ "Li", "Qing", "" ] ]
TITLE: Towards Next-Generation Recommender Systems: A Benchmark for Personalized Recommendation Assistant with LLMs ABSTRACT: Recommender systems (RecSys) are widely used across various modern digital platforms and have garnered significant attention. Traditional recommender systems usually focus only on fixed and simple recommendation scenarios, making it difficult to generalize to new and unseen recommendation tasks in an interactive paradigm. Recently, the advancement of large language models (LLMs) has revolutionized the foundational architecture of RecSys, driving their evolution into more intelligent and interactive personalized recommendation assistants. However, most existing studies rely on fixed task-specific prompt templates to generate recommendations and evaluate the performance of personalized assistants, which limits the comprehensive assessments of their capabilities. This is because commonly used datasets lack high-quality textual user queries that reflect real-world recommendation scenarios, making them unsuitable for evaluating LLM-based personalized recommendation assistants. To address this gap, we introduce RecBench+, a new dataset benchmark designed to assess LLMs' ability to handle intricate user recommendation needs in the era of LLMs. RecBench+ encompasses a diverse set of queries that span both hard conditions and soft preferences, with varying difficulty levels. We evaluated commonly used LLMs on RecBench+ and uncovered the following findings: 1) LLMs demonstrate preliminary abilities to act as recommendation assistants, 2) LLMs are better at handling queries with explicitly stated conditions, while facing challenges with queries that require reasoning or contain misleading information. Our dataset has been released at https://github.com/jiani-huang/RecBench.git.
new_dataset
0.964422
2503.09394
Qiao Xiaozhen
Xiaozhen Qiao, Peng Huang, Jiakang Yuan, Xianda Guo, Bowen Ye, Zhe Sun, Xuelong Li
Bidirectional Prototype-Reward co-Evolution for Test-Time Adaptation of Vision-Language Models
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Test-time adaptation (TTA) is crucial in maintaining Vision-Language Models (VLMs) performance when facing real-world distribution shifts, particularly when the source data or target labels are inaccessible. Existing TTA methods rely on CLIP's output probability distribution for feature evaluation, which can introduce biases under domain shifts. This misalignment may cause features to be misclassified due to text priors or incorrect textual associations. To address these limitations, we propose Bidirectional Prototype-Reward co-Evolution (BPRE), a novel TTA framework for VLMs that integrates feature quality assessment with prototype evolution through a synergistic feedback loop. BPRE first employs a Multi-Dimensional Quality-Aware Reward Module to evaluate feature quality and guide prototype refinement precisely. The continuous refinement of prototype quality through Prototype-Reward Interactive Evolution will subsequently enhance the computation of more robust Multi-Dimensional Quality-Aware Reward Scores. Through the bidirectional interaction, the precision of rewards and the evolution of prototypes mutually reinforce each other, forming a self-evolving cycle. Extensive experiments are conducted across 15 diverse recognition datasets encompassing natural distribution shifts and cross-dataset generalization scenarios. Results demonstrate that BPRE consistently achieves superior average performance compared to state-of-the-art methods across different model architectures, such as ResNet-50 and ViT-B/16. By emphasizing comprehensive feature evaluation and bidirectional knowledge refinement, BPRE advances VLM generalization capabilities, offering a new perspective on TTA.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:40:33 GMT" } ]
2025-03-13T00:00:00
[ [ "Qiao", "Xiaozhen", "" ], [ "Huang", "Peng", "" ], [ "Yuan", "Jiakang", "" ], [ "Guo", "Xianda", "" ], [ "Ye", "Bowen", "" ], [ "Sun", "Zhe", "" ], [ "Li", "Xuelong", "" ] ]
TITLE: Bidirectional Prototype-Reward co-Evolution for Test-Time Adaptation of Vision-Language Models ABSTRACT: Test-time adaptation (TTA) is crucial in maintaining Vision-Language Models (VLMs) performance when facing real-world distribution shifts, particularly when the source data or target labels are inaccessible. Existing TTA methods rely on CLIP's output probability distribution for feature evaluation, which can introduce biases under domain shifts. This misalignment may cause features to be misclassified due to text priors or incorrect textual associations. To address these limitations, we propose Bidirectional Prototype-Reward co-Evolution (BPRE), a novel TTA framework for VLMs that integrates feature quality assessment with prototype evolution through a synergistic feedback loop. BPRE first employs a Multi-Dimensional Quality-Aware Reward Module to evaluate feature quality and guide prototype refinement precisely. The continuous refinement of prototype quality through Prototype-Reward Interactive Evolution will subsequently enhance the computation of more robust Multi-Dimensional Quality-Aware Reward Scores. Through the bidirectional interaction, the precision of rewards and the evolution of prototypes mutually reinforce each other, forming a self-evolving cycle. Extensive experiments are conducted across 15 diverse recognition datasets encompassing natural distribution shifts and cross-dataset generalization scenarios. Results demonstrate that BPRE consistently achieves superior average performance compared to state-of-the-art methods across different model architectures, such as ResNet-50 and ViT-B/16. By emphasizing comprehensive feature evaluation and bidirectional knowledge refinement, BPRE advances VLM generalization capabilities, offering a new perspective on TTA.
no_new_dataset
0.950549
2503.09399
Tobias Nauen
Tobias Christian Nauen, Brian Moser, Federico Raue, Stanislav Frolov, Andreas Dengel
ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformers, particularly Vision Transformers (ViTs), have achieved state-of-the-art performance in large-scale image classification. However, they often require large amounts of data and can exhibit biases that limit their robustness and generalizability. This paper introduces ForAug, a novel data augmentation scheme that addresses these challenges and explicitly includes inductive biases, which commonly are part of the neural network architecture, into the training data. ForAug is constructed by using pretrained foundation models to separate and recombine foreground objects with different backgrounds, enabling fine-grained control over image composition during training. It thus increases the data diversity and effective number of training samples. We demonstrate that training on ForNet, the application of ForAug to ImageNet, significantly improves the accuracy of ViTs and other architectures by up to 4.5 percentage points (p.p.) on ImageNet and 7.3 p.p. on downstream tasks. Importantly, ForAug enables novel ways of analyzing model behavior and quantifying biases. Namely, we introduce metrics for background robustness, foreground focus, center bias, and size bias and show that training on ForNet substantially reduces these biases compared to training on ImageNet. In summary, ForAug provides a valuable tool for analyzing and mitigating biases, enabling the development of more robust and reliable computer vision models. Our code and dataset are publicly available at https://github.com/tobna/ForAug.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:49:45 GMT" } ]
2025-03-13T00:00:00
[ [ "Nauen", "Tobias Christian", "" ], [ "Moser", "Brian", "" ], [ "Raue", "Federico", "" ], [ "Frolov", "Stanislav", "" ], [ "Dengel", "Andreas", "" ] ]
TITLE: ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation ABSTRACT: Transformers, particularly Vision Transformers (ViTs), have achieved state-of-the-art performance in large-scale image classification. However, they often require large amounts of data and can exhibit biases that limit their robustness and generalizability. This paper introduces ForAug, a novel data augmentation scheme that addresses these challenges and explicitly includes inductive biases, which commonly are part of the neural network architecture, into the training data. ForAug is constructed by using pretrained foundation models to separate and recombine foreground objects with different backgrounds, enabling fine-grained control over image composition during training. It thus increases the data diversity and effective number of training samples. We demonstrate that training on ForNet, the application of ForAug to ImageNet, significantly improves the accuracy of ViTs and other architectures by up to 4.5 percentage points (p.p.) on ImageNet and 7.3 p.p. on downstream tasks. Importantly, ForAug enables novel ways of analyzing model behavior and quantifying biases. Namely, we introduce metrics for background robustness, foreground focus, center bias, and size bias and show that training on ForNet substantially reduces these biases compared to training on ImageNet. In summary, ForAug provides a valuable tool for analyzing and mitigating biases, enabling the development of more robust and reliable computer vision models. Our code and dataset are publicly available at https://github.com/tobna/ForAug.
no_new_dataset
0.943348
2503.09407
Elaine Zosa
Elaine Zosa and Ville Komulainen and Sampo Pyysalo
Got Compute, but No Data: Lessons From Post-training a Finnish LLM
7 pages
Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As LLMs gain more popularity as chatbots and general assistants, methods have been developed to enable LLMs to follow instructions and align with human preferences. These methods have found success in the field, but their effectiveness has not been demonstrated outside of high-resource languages. In this work, we discuss our experiences in post-training an LLM for instruction-following for English and Finnish. We use a multilingual LLM to translate instruction and preference datasets from English to Finnish. We perform instruction tuning and preference optimization in English and Finnish and evaluate the instruction-following capabilities of the model in both languages. Our results show that with a few hundred Finnish instruction samples we can obtain competitive performance in Finnish instruction-following. We also found that although preference optimization in English offers some cross-lingual benefits, we obtain our best results by using preference data from both languages. We release our model, datasets, and recipes under open licenses at https://huggingface.co/LumiOpen/Poro-34B-chat-OpenAssistant
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:58:43 GMT" } ]
2025-03-13T00:00:00
[ [ "Zosa", "Elaine", "" ], [ "Komulainen", "Ville", "" ], [ "Pyysalo", "Sampo", "" ] ]
TITLE: Got Compute, but No Data: Lessons From Post-training a Finnish LLM ABSTRACT: As LLMs gain more popularity as chatbots and general assistants, methods have been developed to enable LLMs to follow instructions and align with human preferences. These methods have found success in the field, but their effectiveness has not been demonstrated outside of high-resource languages. In this work, we discuss our experiences in post-training an LLM for instruction-following for English and Finnish. We use a multilingual LLM to translate instruction and preference datasets from English to Finnish. We perform instruction tuning and preference optimization in English and Finnish and evaluate the instruction-following capabilities of the model in both languages. Our results show that with a few hundred Finnish instruction samples we can obtain competitive performance in Finnish instruction-following. We also found that although preference optimization in English offers some cross-lingual benefits, we obtain our best results by using preference data from both languages. We release our model, datasets, and recipes under open licenses at https://huggingface.co/LumiOpen/Poro-34B-chat-OpenAssistant
no_new_dataset
0.634034
2503.09408
Xiuzhen Guo
Xiuzhen Guo, Lianyuan Yu, Ji Shi, Na Lei, Hongxiao Wang
Diff-CL: A Novel Cross Pseudo-Supervision Method for Semi-supervised Medical Image Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semi-supervised learning utilizes insights from unlabeled data to improve model generalization, thereby reducing reliance on large labeled datasets. Most existing studies focus on limited samples and fail to capture the overall data distribution. We contend that combining distributional information with detailed information is crucial for achieving more robust and accurate segmentation results. On the one hand, with their robust generative capabilities, diffusion models (DM) learn the data distribution effectively. However, they struggle with fine detail capture, leading to generated images with misleading details. Combining DM with convolutional neural networks (CNNs) enables the former to learn data distribution while the latter corrects fine details. However, capturing complete high-frequency details with CNNs requires substantial computational resources and is susceptible to local noise. On the other hand, given that both labeled and unlabeled data come from the same distribution, we believe that regions in unlabeled data that are similar to the overall class semantics of the labeled data are likely to belong to the same class, while regions with minimal similarity are less likely to. This work introduces a semi-supervised medical image segmentation framework from the distribution perspective (Diff-CL). Firstly, we propose a cross-pseudo-supervision learning mechanism between diffusion and convolution segmentation networks. Secondly, we design a high-frequency mamba module to capture boundary and detail information globally. Finally, we apply contrastive learning for label propagation from labeled to unlabeled data. Our method achieves state-of-the-art (SOTA) performance across three datasets, including left atrium, brain tumor, and NIH pancreas datasets.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 13:59:09 GMT" } ]
2025-03-13T00:00:00
[ [ "Guo", "Xiuzhen", "" ], [ "Yu", "Lianyuan", "" ], [ "Shi", "Ji", "" ], [ "Lei", "Na", "" ], [ "Wang", "Hongxiao", "" ] ]
TITLE: Diff-CL: A Novel Cross Pseudo-Supervision Method for Semi-supervised Medical Image Segmentation ABSTRACT: Semi-supervised learning utilizes insights from unlabeled data to improve model generalization, thereby reducing reliance on large labeled datasets. Most existing studies focus on limited samples and fail to capture the overall data distribution. We contend that combining distributional information with detailed information is crucial for achieving more robust and accurate segmentation results. On the one hand, with their robust generative capabilities, diffusion models (DM) learn the data distribution effectively. However, they struggle with fine detail capture, leading to generated images with misleading details. Combining DM with convolutional neural networks (CNNs) enables the former to learn data distribution while the latter corrects fine details. However, capturing complete high-frequency details with CNNs requires substantial computational resources and is susceptible to local noise. On the other hand, given that both labeled and unlabeled data come from the same distribution, we believe that regions in unlabeled data that are similar to the overall class semantics of the labeled data are likely to belong to the same class, while regions with minimal similarity are less likely to. This work introduces a semi-supervised medical image segmentation framework from the distribution perspective (Diff-CL). Firstly, we propose a cross-pseudo-supervision learning mechanism between diffusion and convolution segmentation networks. Secondly, we design a high-frequency mamba module to capture boundary and detail information globally. Finally, we apply contrastive learning for label propagation from labeled to unlabeled data. Our method achieves state-of-the-art (SOTA) performance across three datasets, including left atrium, brain tumor, and NIH pancreas datasets.
no_new_dataset
0.953966
2503.09410
Chen Zhao
Jiale Wang, Chen Zhao, Wei Ke, Tong Zhang
Monte Carlo Diffusion for Generalizable Learning-Based RANSAC
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Random Sample Consensus (RANSAC) is a fundamental approach for robustly estimating parametric models from noisy data. Existing learning-based RANSAC methods utilize deep learning to enhance the robustness of RANSAC against outliers. However, these approaches are trained and tested on the data generated by the same algorithms, leading to limited generalization to out-of-distribution data during inference. Therefore, in this paper, we introduce a novel diffusion-based paradigm that progressively injects noise into ground-truth data, simulating the noisy conditions for training learning-based RANSAC. To enhance data diversity, we incorporate Monte Carlo sampling into the diffusion paradigm, approximating diverse data distributions by introducing different types of randomness at multiple stages. We evaluate our approach in the context of feature matching through comprehensive experiments on the ScanNet and MegaDepth datasets. The experimental results demonstrate that our Monte Carlo diffusion mechanism significantly improves the generalization ability of learning-based RANSAC. We also develop extensive ablation studies that highlight the effectiveness of key components in our framework.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 14:01:18 GMT" } ]
2025-03-13T00:00:00
[ [ "Wang", "Jiale", "" ], [ "Zhao", "Chen", "" ], [ "Ke", "Wei", "" ], [ "Zhang", "Tong", "" ] ]
TITLE: Monte Carlo Diffusion for Generalizable Learning-Based RANSAC ABSTRACT: Random Sample Consensus (RANSAC) is a fundamental approach for robustly estimating parametric models from noisy data. Existing learning-based RANSAC methods utilize deep learning to enhance the robustness of RANSAC against outliers. However, these approaches are trained and tested on the data generated by the same algorithms, leading to limited generalization to out-of-distribution data during inference. Therefore, in this paper, we introduce a novel diffusion-based paradigm that progressively injects noise into ground-truth data, simulating the noisy conditions for training learning-based RANSAC. To enhance data diversity, we incorporate Monte Carlo sampling into the diffusion paradigm, approximating diverse data distributions by introducing different types of randomness at multiple stages. We evaluate our approach in the context of feature matching through comprehensive experiments on the ScanNet and MegaDepth datasets. The experimental results demonstrate that our Monte Carlo diffusion mechanism significantly improves the generalization ability of learning-based RANSAC. We also develop extensive ablation studies that highlight the effectiveness of key components in our framework.
no_new_dataset
0.948394
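The Monte Carlo Diffusion record above trains learning-based RANSAC on ground-truth data that is progressively corrupted with randomly sampled noise levels. The NumPy sketch below illustrates one plausible corruption step of that kind; the linear noise schedule, the uniform outlier model, and the 640-pixel coordinate range are assumptions, not the authors' pipeline.

```python
# Illustrative forward-noising of ground-truth 2D correspondences (assumed setup).
import numpy as np

rng = np.random.default_rng(0)

def corrupt_matches(pts, num_steps=100, outlier_ratio_max=0.5, sigma_max=5.0):
    """Sample a random noise level and corrupt clean matches accordingly."""
    t = rng.integers(1, num_steps + 1)               # Monte Carlo draw over noise levels
    frac = t / num_steps
    noisy = pts + rng.normal(0.0, sigma_max * frac, size=pts.shape)  # Gaussian perturbation
    # Replace a random subset with uniform outliers, more of them at higher t.
    n_out = int(outlier_ratio_max * frac * len(pts))
    idx = rng.choice(len(pts), size=n_out, replace=False)
    noisy[idx] = rng.uniform(0, 640, size=(n_out, 2))
    return noisy, t

clean = rng.uniform(0, 640, size=(200, 2))           # toy ground-truth keypoint locations
noisy, step = corrupt_matches(clean)
print(step, noisy.shape)
```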
2503.09416
WeiYing Xue
Qi Liu, Weiying Xue, Yuxiao Wang, Zhenao Wei
OpenVidVRD: Open-Vocabulary Video Visual Relation Detection via Prompt-Driven Semantic Space Alignment
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The video visual relation detection (VidVRD) task is to identify objects and their relationships in videos, which is challenging due to the dynamic content, high annotation costs, and long-tailed distribution of relations. Visual language models (VLMs) help explore open-vocabulary visual relation detection tasks, yet often overlook the connections between various visual regions and their relations. Moreover, using VLMs to directly identify visual relations in videos poses significant challenges because of the large disparity between images and videos. Therefore, we propose a novel open-vocabulary VidVRD framework, termed OpenVidVRD, which transfers VLMs' rich knowledge and powerful capabilities to improve VidVRD tasks through prompt learning. Specifically, we use a VLM to extract text representations from automatically generated region captions based on the video's regions. Next, we develop a spatiotemporal refiner module to derive object-level relationship representations in the video by integrating cross-modal spatiotemporal complementary information. Furthermore, a prompt-driven strategy to align semantic spaces is employed to harness the semantic understanding of VLMs, enhancing the overall generalization ability of OpenVidVRD. Extensive experiments conducted on the VidVRD and VidOR public datasets show that the proposed model outperforms existing methods.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 14:13:17 GMT" } ]
2025-03-13T00:00:00
[ [ "Liu", "Qi", "" ], [ "Xue", "Weiying", "" ], [ "Wang", "Yuxiao", "" ], [ "Wei", "Zhenao", "" ] ]
TITLE: OpenVidVRD: Open-Vocabulary Video Visual Relation Detection via Prompt-Driven Semantic Space Alignment ABSTRACT: The video visual relation detection (VidVRD) task is to identify objects and their relationships in videos, which is challenging due to the dynamic content, high annotation costs, and long-tailed distribution of relations. Visual language models (VLMs) help explore open-vocabulary visual relation detection tasks, yet often overlook the connections between various visual regions and their relations. Moreover, using VLMs to directly identify visual relations in videos poses significant challenges because of the large disparity between images and videos. Therefore, we propose a novel open-vocabulary VidVRD framework, termed OpenVidVRD, which transfers VLMs' rich knowledge and powerful capabilities to improve VidVRD tasks through prompt learning. Specifically, we use a VLM to extract text representations from automatically generated region captions based on the video's regions. Next, we develop a spatiotemporal refiner module to derive object-level relationship representations in the video by integrating cross-modal spatiotemporal complementary information. Furthermore, a prompt-driven strategy to align semantic spaces is employed to harness the semantic understanding of VLMs, enhancing the overall generalization ability of OpenVidVRD. Extensive experiments conducted on the VidVRD and VidOR public datasets show that the proposed model outperforms existing methods.
no_new_dataset
0.947137
2503.09427
Yaorui Shi
Yaorui Shi, Jiaqi Yang, Sihang Li, Junfeng Fang, Xiang Wang, Zhiyuan Liu, Yang Zhang
Multimodal Language Modeling for High-Accuracy Single Cell Transcriptomics Analysis and Generation
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-trained language models (PLMs) have revolutionized scientific research, yet their application to single-cell analysis remains limited. Text PLMs cannot process single-cell RNA sequencing data, while cell PLMs lack the ability to handle free text, restricting their use in multimodal tasks. Existing efforts to bridge these modalities often suffer from information loss or inadequate single-modal pre-training, leading to suboptimal performances. To address these challenges, we propose Single-Cell MultiModal Generative Pre-trained Transformer (scMMGPT), a unified PLM for joint cell and text modeling. scMMGPT effectively integrates the state-of-the-art cell and text PLMs, facilitating cross-modal knowledge sharing for improved performance. To bridge the text-cell modality gap, scMMGPT leverages dedicated cross-modal projectors, and undergoes extensive pre-training on 27 million cells -- the largest dataset for multimodal cell-text PLMs to date. This large-scale pre-training enables scMMGPT to excel in joint cell-text tasks, achieving an 84\% relative improvement of textual discrepancy for cell description generation, 20.5\% higher accuracy for cell type annotation, and 4\% improvement in $k$-NN accuracy for text-conditioned pseudo-cell generation, outperforming baselines.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 14:26:16 GMT" } ]
2025-03-13T00:00:00
[ [ "Shi", "Yaorui", "" ], [ "Yang", "Jiaqi", "" ], [ "Li", "Sihang", "" ], [ "Fang", "Junfeng", "" ], [ "Wang", "Xiang", "" ], [ "Liu", "Zhiyuan", "" ], [ "Zhang", "Yang", "" ] ]
TITLE: Multimodal Language Modeling for High-Accuracy Single Cell Transcriptomics Analysis and Generation ABSTRACT: Pre-trained language models (PLMs) have revolutionized scientific research, yet their application to single-cell analysis remains limited. Text PLMs cannot process single-cell RNA sequencing data, while cell PLMs lack the ability to handle free text, restricting their use in multimodal tasks. Existing efforts to bridge these modalities often suffer from information loss or inadequate single-modal pre-training, leading to suboptimal performances. To address these challenges, we propose Single-Cell MultiModal Generative Pre-trained Transformer (scMMGPT), a unified PLM for joint cell and text modeling. scMMGPT effectively integrates the state-of-the-art cell and text PLMs, facilitating cross-modal knowledge sharing for improved performance. To bridge the text-cell modality gap, scMMGPT leverages dedicated cross-modal projectors, and undergoes extensive pre-training on 27 million cells -- the largest dataset for multimodal cell-text PLMs to date. This large-scale pre-training enables scMMGPT to excel in joint cell-text tasks, achieving an 84\% relative improvement of textual discrepancy for cell description generation, 20.5\% higher accuracy for cell type annotation, and 4\% improvement in $k$-NN accuracy for text-conditioned pseudo-cell generation, outperforming baselines.
no_new_dataset
0.951051
2503.09439
Qijian Zhang
Qijian Zhang, Xiaozheng Jian, Xuan Zhang, Wenping Wang, Junhui Hou
SuperCarver: Texture-Consistent 3D Geometry Super-Resolution for High-Fidelity Surface Detail Generation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The traditional production workflow for high-precision 3D mesh assets necessitates a cumbersome and laborious process of manual sculpting by specialized modelers. Recent years have witnessed remarkable advances in AI-empowered 3D content creation. However, although the latest state-of-the-art methods are already capable of generating plausible structures and intricate appearances from images or text prompts, the actual mesh surfaces are typically over-smoothed and lack geometric details. This paper introduces SuperCarver, a 3D geometry super-resolution framework particularly tailored for adding texture-consistent surface details to given coarse meshes. Technically, we start by rendering the original textured mesh into the image domain from multiple viewpoints. To achieve geometric detail generation, we develop a deterministic prior-guided normal diffusion model fine-tuned on a carefully curated dataset of paired low-poly and high-poly normal renderings. To optimize mesh structures from potentially imperfect normal map predictions, we design a simple yet effective noise-resistant inverse rendering scheme based on distance field deformation. Extensive experiments show that SuperCarver generates realistic and expressive surface details as depicted by specific texture appearances, making it a powerful tool for automatically upgrading massive outdated low-quality assets and shortening the iteration cycle of high-quality mesh production in practical applications.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 14:38:45 GMT" } ]
2025-03-13T00:00:00
[ [ "Zhang", "Qijian", "" ], [ "Jian", "Xiaozheng", "" ], [ "Zhang", "Xuan", "" ], [ "Wang", "Wenping", "" ], [ "Hou", "Junhui", "" ] ]
TITLE: SuperCarver: Texture-Consistent 3D Geometry Super-Resolution for High-Fidelity Surface Detail Generation ABSTRACT: The traditional production workflow for high-precision 3D mesh assets necessitates a cumbersome and laborious process of manual sculpting by specialized modelers. Recent years have witnessed remarkable advances in AI-empowered 3D content creation. However, although the latest state-of-the-art methods are already capable of generating plausible structures and intricate appearances from images or text prompts, the actual mesh surfaces are typically over-smoothed and lack geometric details. This paper introduces SuperCarver, a 3D geometry super-resolution framework particularly tailored for adding texture-consistent surface details to given coarse meshes. Technically, we start by rendering the original textured mesh into the image domain from multiple viewpoints. To achieve geometric detail generation, we develop a deterministic prior-guided normal diffusion model fine-tuned on a carefully curated dataset of paired low-poly and high-poly normal renderings. To optimize mesh structures from potentially imperfect normal map predictions, we design a simple yet effective noise-resistant inverse rendering scheme based on distance field deformation. Extensive experiments show that SuperCarver generates realistic and expressive surface details as depicted by specific texture appearances, making it a powerful tool for automatically upgrading massive outdated low-quality assets and shortening the iteration cycle of high-quality mesh production in practical applications.
new_dataset
0.965119
2503.09443
Julian Spravil
Julian Spravil and Sebastian Houben and Sven Behnke
Florenz: Scaling Laws for Systematic Generalization in Vision-Language Models
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-lingual transfer enables vision-language models (VLMs) to perform vision tasks in various languages with training data only in one language. Current approaches rely on large pre-trained multilingual language models. However, they face the curse of multilinguality, sacrificing downstream task performance for multilingual capabilities, struggling with lexical ambiguities, and falling behind recent advances. In this work, we study the scaling laws of systematic generalization with monolingual VLMs for multilingual tasks, focusing on the impact of model size and seen training samples. We propose Florenz, a monolingual encoder-decoder VLM with 0.4B to 11.2B parameters combining the pre-trained VLM Florence-2 and the large language model Gemma-2. Florenz is trained with varying compute budgets on a synthetic dataset that features intentionally incomplete language coverage for image captioning, thus, testing generalization from the fully covered translation task. We show that not only does indirectly learning unseen task-language pairs adhere to a scaling law, but also that with our data generation pipeline and the proposed Florenz model family, image captioning abilities can emerge in a specific language even when only data for the translation task is available. Fine-tuning on a mix of downstream datasets yields competitive performance and demonstrates promising scaling trends in multimodal machine translation (Multi30K, CoMMuTE), lexical disambiguation (CoMMuTE), and image captioning (Multi30K, XM3600, COCO Karpathy).
[ { "version": "v1", "created": "Wed, 12 Mar 2025 14:41:10 GMT" } ]
2025-03-13T00:00:00
[ [ "Spravil", "Julian", "" ], [ "Houben", "Sebastian", "" ], [ "Behnke", "Sven", "" ] ]
TITLE: Florenz: Scaling Laws for Systematic Generalization in Vision-Language Models ABSTRACT: Cross-lingual transfer enables vision-language models (VLMs) to perform vision tasks in various languages with training data only in one language. Current approaches rely on large pre-trained multilingual language models. However, they face the curse of multilinguality, sacrificing downstream task performance for multilingual capabilities, struggling with lexical ambiguities, and falling behind recent advances. In this work, we study the scaling laws of systematic generalization with monolingual VLMs for multilingual tasks, focusing on the impact of model size and seen training samples. We propose Florenz, a monolingual encoder-decoder VLM with 0.4B to 11.2B parameters combining the pre-trained VLM Florence-2 and the large language model Gemma-2. Florenz is trained with varying compute budgets on a synthetic dataset that features intentionally incomplete language coverage for image captioning, thus, testing generalization from the fully covered translation task. We show that not only does indirectly learning unseen task-language pairs adhere to a scaling law, but also that with our data generation pipeline and the proposed Florenz model family, image captioning abilities can emerge in a specific language even when only data for the translation task is available. Fine-tuning on a mix of downstream datasets yields competitive performance and demonstrates promising scaling trends in multimodal machine translation (Multi30K, CoMMuTE), lexical disambiguation (CoMMuTE), and image captioning (Multi30K, XM3600, COCO Karpathy).
no_new_dataset
0.947186
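The Florenz record above reports scaling trends over model size and seen training samples. A standard way to summarize such trends is a power-law fit in log-log space; the sketch below uses invented (samples, loss) pairs purely to show the fitting mechanics, not numbers from the paper.

```python
# Fitting a power law loss(N) ~ a * N^(-b) to synthetic (samples, loss) pairs.
import numpy as np

samples = np.array([1e6, 3e6, 1e7, 3e7, 1e8])
loss = np.array([2.10, 1.85, 1.62, 1.44, 1.30])      # invented numbers, illustration only

# Linear regression in log-log space: log(loss) = log(a) - b * log(samples).
slope, intercept = np.polyfit(np.log(samples), np.log(loss), deg=1)
a, b = np.exp(intercept), -slope
print(f"fitted power law: loss ~= {a:.2f} * N^(-{b:.3f})")
```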
2503.09453
Xiangjian Jiang
Xiangjian Jiang, Nikola Simidjievski, Mateja Jamnik
How Well Does Your Tabular Generator Learn the Structure of Tabular Data?
Accepted by ICLR 2025 workshops (DeLTa and SynthData)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heterogeneous tabular data poses unique challenges in generative modelling due to its fundamentally different underlying data structure compared to homogeneous modalities, such as images and text. Although previous research has sought to adapt the successes of generative modelling in homogeneous modalities to the tabular domain, defining an effective generator for tabular data remains an open problem. One major reason is that the evaluation criteria inherited from other modalities often fail to adequately assess whether tabular generative models effectively capture or utilise the unique structural information encoded in tabular data. In this paper, we carefully examine the limitations of the prevailing evaluation framework and introduce $\textbf{TabStruct}$, a novel evaluation benchmark that positions structural fidelity as a core evaluation dimension. Specifically, TabStruct evaluates the alignment of causal structures in real and synthetic data, providing a direct measure of how effectively tabular generative models learn the structure of tabular data. Through extensive experiments using generators from eight categories on seven datasets with expert-validated causal graphical structures, we show that structural fidelity offers a task-independent, domain-agnostic evaluation dimension. Our findings highlight the importance of tabular data structure and offer practical guidance for developing more effective and robust tabular generative models. Code is available at https://github.com/SilenceX12138/TabStruct.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 14:54:58 GMT" } ]
2025-03-13T00:00:00
[ [ "Jiang", "Xiangjian", "" ], [ "Simidjievski", "Nikola", "" ], [ "Jamnik", "Mateja", "" ] ]
TITLE: How Well Does Your Tabular Generator Learn the Structure of Tabular Data? ABSTRACT: Heterogeneous tabular data poses unique challenges in generative modelling due to its fundamentally different underlying data structure compared to homogeneous modalities, such as images and text. Although previous research has sought to adapt the successes of generative modelling in homogeneous modalities to the tabular domain, defining an effective generator for tabular data remains an open problem. One major reason is that the evaluation criteria inherited from other modalities often fail to adequately assess whether tabular generative models effectively capture or utilise the unique structural information encoded in tabular data. In this paper, we carefully examine the limitations of the prevailing evaluation framework and introduce $\textbf{TabStruct}$, a novel evaluation benchmark that positions structural fidelity as a core evaluation dimension. Specifically, TabStruct evaluates the alignment of causal structures in real and synthetic data, providing a direct measure of how effectively tabular generative models learn the structure of tabular data. Through extensive experiments using generators from eight categories on seven datasets with expert-validated causal graphical structures, we show that structural fidelity offers a task-independent, domain-agnostic evaluation dimension. Our findings highlight the importance of tabular data structure and offer practical guidance for developing more effective and robust tabular generative models. Code is available at https://github.com/SilenceX12138/TabStruct.
no_new_dataset
0.925567
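The TabStruct record above scores structural fidelity by aligning causal structures recovered from real and synthetic tables. One common primitive for such a comparison is the structural Hamming distance between two directed adjacency matrices, sketched below; this is a generic metric under the usual count-reversals-once convention, not the benchmark's exact protocol.

```python
# Structural Hamming distance between two directed graphs given as 0/1 adjacency matrices.
import numpy as np

def shd(adj_true, adj_est):
    """Count edge insertions, deletions, and reversals needed to match adj_true."""
    diff = np.abs(adj_true - adj_est)
    # A reversed edge shows up as two mismatches (i->j and j->i); count it once.
    reversed_edges = ((adj_true == 1) & (adj_est.T == 1) & (adj_est == 0)).sum()
    return int(diff.sum() - reversed_edges)

A_real = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [0, 0, 0]])
A_synth = np.array([[0, 0, 0],
                    [1, 0, 1],
                    [0, 0, 0]])
print(shd(A_real, A_synth))  # edge 0->1 is reversed, everything else matches -> SHD of 1
```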
2503.09474
Mobarakol Islam
Jiayuan Huang, Runlong He, Danyal Z. Khan, Evangelos Mazomenos, Danail Stoyanov, Hani J. Marcus, Matthew J. Clarkson, Mobarakol Islam
SurgicalVLM-Agent: Towards an Interactive AI Co-Pilot for Pituitary Surgery
11 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image-guided surgery demands adaptive, real-time decision support, yet static AI models struggle with structured task planning and providing interactive guidance. Large vision-language models (VLMs) offer a promising solution by enabling dynamic task planning and predictive decision support. We introduce SurgicalVLM-Agent, an AI co-pilot for image-guided pituitary surgery, capable of conversation, planning, and task execution. The agent dynamically processes surgeon queries and plans the tasks such as MRI tumor segmentation, endoscope anatomy segmentation, overlaying preoperative imaging with intraoperative views, instrument tracking, and surgical visual question answering (VQA). To enable structured task planning, we develop the PitAgent dataset, a surgical context-aware dataset covering segmentation, overlaying, instrument localization, tool tracking, tool-tissue interactions, phase identification, and surgical activity recognition. Additionally, we propose FFT-GaLore, a fast Fourier transform (FFT)-based gradient projection technique for efficient low-rank adaptation, optimizing fine-tuning for LLaMA 3.2 in surgical environments. We validate SurgicalVLM-Agent by assessing task planning and prompt generation on our PitAgent dataset and evaluating zero-shot VQA using a public pituitary dataset. Results demonstrate state-of-the-art performance in task planning and query interpretation, with highly semantically meaningful VQA responses, advancing AI-driven surgical assistance.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 15:30:39 GMT" } ]
2025-03-13T00:00:00
[ [ "Huang", "Jiayuan", "" ], [ "He", "Runlong", "" ], [ "Khan", "Danyal Z.", "" ], [ "Mazomenos", "Evangelos", "" ], [ "Stoyanov", "Danail", "" ], [ "Marcus", "Hani J.", "" ], [ "Clarkson", "Matthew J.", "" ], [ "Islam", "Mobarakol", "" ] ]
TITLE: SurgicalVLM-Agent: Towards an Interactive AI Co-Pilot for Pituitary Surgery ABSTRACT: Image-guided surgery demands adaptive, real-time decision support, yet static AI models struggle with structured task planning and providing interactive guidance. Large vision-language models (VLMs) offer a promising solution by enabling dynamic task planning and predictive decision support. We introduce SurgicalVLM-Agent, an AI co-pilot for image-guided pituitary surgery, capable of conversation, planning, and task execution. The agent dynamically processes surgeon queries and plans the tasks such as MRI tumor segmentation, endoscope anatomy segmentation, overlaying preoperative imaging with intraoperative views, instrument tracking, and surgical visual question answering (VQA). To enable structured task planning, we develop the PitAgent dataset, a surgical context-aware dataset covering segmentation, overlaying, instrument localization, tool tracking, tool-tissue interactions, phase identification, and surgical activity recognition. Additionally, we propose FFT-GaLore, a fast Fourier transform (FFT)-based gradient projection technique for efficient low-rank adaptation, optimizing fine-tuning for LLaMA 3.2 in surgical environments. We validate SurgicalVLM-Agent by assessing task planning and prompt generation on our PitAgent dataset and evaluating zero-shot VQA using a public pituitary dataset. Results demonstrate state-of-the-art performance in task planning and query interpretation, with highly semantically meaningful VQA responses, advancing AI-driven surgical assistance.
new_dataset
0.961534
2503.09485
Kadir Ozcoban
Kadir \"Oz\c{c}oban, Murat Manguo\u{g}lu, Emrullah Fatih Yetkin
A Novel Approach for Intrinsic Dimension Estimation
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-life data have a complex and non-linear structure due to their nature. These non-linearities and the large number of features can usually cause problems such as the empty-space phenomenon and the well-known curse of dimensionality. Finding a nearly optimal representation of the dataset in a lower-dimensional space (i.e., dimensionality reduction) offers an applicable mechanism for improving the success of machine learning tasks. However, estimating the required data dimension for the nearly optimal representation (intrinsic dimension) can be very costly, particularly when dealing with big data. We propose a highly efficient and robust intrinsic dimension estimation approach for dimensionality reduction methods that relies only on matrix-vector products. An experimental study is also conducted to compare the performance of the proposed method with state-of-the-art approaches.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 15:42:39 GMT" } ]
2025-03-13T00:00:00
[ [ "Özçoban", "Kadir", "" ], [ "Manguoğlu", "Murat", "" ], [ "Yetkin", "Emrullah Fatih", "" ] ]
TITLE: A Novel Approach for Intrinsic Dimension Estimation ABSTRACT: Real-life data have a complex and non-linear structure due to their nature. These non-linearities and the large number of features can usually cause problems such as the empty-space phenomenon and the well-known curse of dimensionality. Finding a nearly optimal representation of the dataset in a lower-dimensional space (i.e., dimensionality reduction) offers an applicable mechanism for improving the success of machine learning tasks. However, estimating the required data dimension for the nearly optimal representation (intrinsic dimension) can be very costly, particularly when dealing with big data. We propose a highly efficient and robust intrinsic dimension estimation approach for dimensionality reduction methods that relies only on matrix-vector products. An experimental study is also conducted to compare the performance of the proposed method with state-of-the-art approaches.
no_new_dataset
0.952175
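The record above proposes an intrinsic dimension estimator that relies only on matrix-vector products. In the same spirit, a simple spectrum-based baseline can be built from an implicit covariance operator and an iterative eigensolver, as sketched below; the 99% explained-variance cut-off and the toy near-rank-5 data are assumptions, and this is not the paper's algorithm.

```python
# Spectrum-based intrinsic dimension estimate using only matrix-vector products.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(0)
n, d, k_true = 2000, 100, 5
X = rng.normal(size=(n, k_true)) @ rng.normal(size=(k_true, d))  # data near a 5-dim subspace
X = X + 0.01 * rng.normal(size=(n, d))                           # small noise floor
X = X - X.mean(axis=0)

# Covariance matvec C @ v = X.T @ (X @ v) / (n - 1), without ever forming C explicitly.
def cov_matvec(v):
    return X.T @ (X @ v) / (n - 1)

C = LinearOperator((d, d), matvec=cov_matvec, dtype=float)
eigvals = eigsh(C, k=20, return_eigenvectors=False)
eigvals = np.sort(eigvals)[::-1]

explained = np.cumsum(eigvals) / eigvals.sum()
intrinsic_dim = int(np.searchsorted(explained, 0.99) + 1)
print(intrinsic_dim)  # expected to be close to 5 for this toy data
```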
2503.09493
Romain Thoreau
Romain Thoreau, Valerio Marsocci and Dawa Derksen
Parameter-Efficient Adaptation of Geospatial Foundation Models through Embedding Deflection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As large-scale heterogeneous data sets become increasingly available, adapting foundation models at low cost has become a key issue. Seminal works in natural language processing, e.g. Low-Rank Adaptation (LoRA), leverage the low "intrinsic rank" of parameter updates during adaptation. In this paper, we argue that incorporating stronger inductive biases in both data and models can enhance the adaptation of Geospatial Foundation Models (GFMs), pretrained on RGB satellite images, to other types of optical satellite data. Specifically, the pretrained parameters of GFMs serve as a strong prior for the spatial structure of multispectral images. For this reason, we introduce DEFLECT (Deflecting Embeddings for Finetuning Latent representations for Earth and Climate Tasks), a novel strategy for adapting GFMs to multispectral satellite imagery with very few additional parameters. DEFLECT improves the representation capabilities of the extracted features, particularly enhancing spectral information, which is essential for geoscience and environmental-related tasks. We demonstrate the effectiveness of our method across three different GFMs and five diverse datasets, ranging from forest monitoring to marine environment segmentation. Compared to competing methods, DEFLECT achieves on-par or higher accuracy with 5-10$\times$ fewer parameters for classification and segmentation tasks. The code will be made publicly available.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 15:53:58 GMT" } ]
2025-03-13T00:00:00
[ [ "Thoreau", "Romain", "" ], [ "Marsocci", "Valerio", "" ], [ "Derksen", "Dawa", "" ] ]
TITLE: Parameter-Efficient Adaptation of Geospatial Foundation Models through Embedding Deflection ABSTRACT: As large-scale heterogeneous data sets become increasingly available, adapting foundation models at low cost has become a key issue. Seminal works in natural language processing, e.g. Low-Rank Adaptation (LoRA), leverage the low "intrinsic rank" of parameter updates during adaptation. In this paper, we argue that incorporating stronger inductive biases in both data and models can enhance the adaptation of Geospatial Foundation Models (GFMs), pretrained on RGB satellite images, to other types of optical satellite data. Specifically, the pretrained parameters of GFMs serve as a strong prior for the spatial structure of multispectral images. For this reason, we introduce DEFLECT (Deflecting Embeddings for Finetuning Latent representations for Earth and Climate Tasks), a novel strategy for adapting GFMs to multispectral satellite imagery with very few additional parameters. DEFLECT improves the representation capabilities of the extracted features, particularly enhancing spectral information, which is essential for geoscience and environmental-related tasks. We demonstrate the effectiveness of our method across three different GFMs and five diverse datasets, ranging from forest monitoring to marine environment segmentation. Compared to competing methods, DEFLECT achieves on-par or higher accuracy with 5-10$\times$ fewer parameters for classification and segmentation tasks. The code will be made publicly available.
no_new_dataset
0.951953
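The DEFLECT record above is positioned against Low-Rank Adaptation (LoRA). For readers who want that baseline made concrete, here is a minimal LoRA-style wrapper around a frozen linear layer in PyTorch; it shows only the standard low-rank update and says nothing about DEFLECT's embedding-deflection mechanism.

```python
# Minimal LoRA-style adapter: y = W x + (alpha/r) * B(A x), with W frozen.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # freeze the pretrained weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)           # start as a zero (identity) update
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)   # only the two low-rank matrices are trainable
```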
2503.09499
Daoyuan Chen
Zhe Xu, Daoyuan Chen, Zhenqing Ling, Yaliang Li, Ying Shen
MindGYM: Enhancing Vision-Language Models via Synthetic Self-Challenging Questions
16 pages
null
null
null
cs.CV cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large vision-language models (VLMs) face challenges in achieving robust, transferable reasoning abilities due to reliance on labor-intensive manual instruction datasets or computationally expensive self-supervised methods. To address these issues, we introduce MindGYM, a framework that enhances VLMs through synthetic self-challenging questions, consisting of three stages: (1) Seed Single-Hop Question Synthesis, generating cognitive questions across textual (e.g., logical deduction) and multimodal contexts (e.g., diagram-based queries) spanning eight semantic areas like ethical analysis; (2) Challenging Multi-Hop Question Synthesis, combining seed questions via diverse principles like bridging, visual-textual alignment, to create multi-step problems demanding deeper reasoning; and (3) Thinking-Induced Curriculum Fine-Tuning, a structured pipeline that progressively trains the model from scaffolded reasoning to standalone inference. By leveraging the model's self-synthesis capability, MindGYM achieves high data efficiency (e.g., +16% gains on MathVision-Mini with only 400 samples), computational efficiency (reducing both training and inference costs), and robust generalization across tasks. Extensive evaluations on seven benchmarks demonstrate superior performance over strong baselines, with notable improvements (+15.77% win rates) in reasoning depth and breadth validated via GPT-based scoring. MindGYM underscores the viability of self-challenging for refining VLM capabilities while minimizing human intervention and resource demands. Code and data are released to advance multimodal reasoning research.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 16:03:03 GMT" } ]
2025-03-13T00:00:00
[ [ "Xu", "Zhe", "" ], [ "Chen", "Daoyuan", "" ], [ "Ling", "Zhenqing", "" ], [ "Li", "Yaliang", "" ], [ "Shen", "Ying", "" ] ]
TITLE: MindGYM: Enhancing Vision-Language Models via Synthetic Self-Challenging Questions ABSTRACT: Large vision-language models (VLMs) face challenges in achieving robust, transferable reasoning abilities due to reliance on labor-intensive manual instruction datasets or computationally expensive self-supervised methods. To address these issues, we introduce MindGYM, a framework that enhances VLMs through synthetic self-challenging questions, consisting of three stages: (1) Seed Single-Hop Question Synthesis, generating cognitive questions across textual (e.g., logical deduction) and multimodal contexts (e.g., diagram-based queries) spanning eight semantic areas like ethical analysis; (2) Challenging Multi-Hop Question Synthesis, combining seed questions via diverse principles like bridging, visual-textual alignment, to create multi-step problems demanding deeper reasoning; and (3) Thinking-Induced Curriculum Fine-Tuning, a structured pipeline that progressively trains the model from scaffolded reasoning to standalone inference. By leveraging the model's self-synthesis capability, MindGYM achieves high data efficiency (e.g., +16% gains on MathVision-Mini with only 400 samples), computational efficiency (reducing both training and inference costs), and robust generalization across tasks. Extensive evaluations on seven benchmarks demonstrate superior performance over strong baselines, with notable improvements (+15.77% win rates) in reasoning depth and breadth validated via GPT-based scoring. MindGYM underscores the viability of self-challenging for refining VLM capabilities while minimizing human intervention and resource demands. Code and data are released to advance multimodal reasoning research.
no_new_dataset
0.949482
2503.09504
Bakary Badjie Mr
Bakary Badjie, Jos\'e Cec\'ilio, Ant\'onio Casimiro
Double-Stage Feature-Level Clustering-Based Mixture of Experts Framework
14 Pages, 1 Figure, and 3 Tables
null
null
null
cs.LG cs.AI cs.CV cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Mixture-of-Experts (MoE) model has succeeded in deep learning (DL). However, its complex architecture and advantages over dense models in image classification remain unclear. In previous studies, MoE performance has often been affected by noise and outliers in the input space. Some approaches incorporate input clustering for training MoE models, but most clustering algorithms lack access to labeled data, limiting their effectiveness. This paper introduces the Double-stage Feature-level Clustering and Pseudo-labeling-based Mixture of Experts (DFCP-MoE) framework, which consists of input feature extraction, feature-level clustering, and a computationally efficient pseudo-labeling strategy. This approach reduces the impact of noise and outliers while leveraging a small subset of labeled data to label a large portion of unlabeled inputs. We propose a conditional end-to-end joint training method that improves expert specialization by training the MoE model on well-labeled, clustered inputs. Unlike traditional MoE and dense models, the DFCP-MoE framework effectively captures input space diversity, leading to competitive inference results. We validate our approach on three benchmark datasets for multi-class classification tasks.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 16:13:50 GMT" } ]
2025-03-13T00:00:00
[ [ "Badjie", "Bakary", "" ], [ "Cecílio", "José", "" ], [ "Casimiro", "António", "" ] ]
TITLE: Double-Stage Feature-Level Clustering-Based Mixture of Experts Framework ABSTRACT: The Mixture-of-Experts (MoE) model has succeeded in deep learning (DL). However, its complex architecture and advantages over dense models in image classification remain unclear. In previous studies, MoE performance has often been affected by noise and outliers in the input space. Some approaches incorporate input clustering for training MoE models, but most clustering algorithms lack access to labeled data, limiting their effectiveness. This paper introduces the Double-stage Feature-level Clustering and Pseudo-labeling-based Mixture of Experts (DFCP-MoE) framework, which consists of input feature extraction, feature-level clustering, and a computationally efficient pseudo-labeling strategy. This approach reduces the impact of noise and outliers while leveraging a small subset of labeled data to label a large portion of unlabeled inputs. We propose a conditional end-to-end joint training method that improves expert specialization by training the MoE model on well-labeled, clustered inputs. Unlike traditional MoE and dense models, the DFCP-MoE framework effectively captures input space diversity, leading to competitive inference results. We validate our approach on three benchmark datasets for multi-class classification tasks.
no_new_dataset
0.950641
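The DFCP-MoE record above combines feature-level clustering with a pseudo-labeling strategy driven by a small labeled subset. A bare-bones version of that labeling step, clustering with scikit-learn and assigning each cluster the majority label of its few labeled members, is sketched below; the synthetic Gaussian features and the simple majority vote are illustrative assumptions, not the paper's full pipeline.

```python
# Cluster features, then label each cluster by majority vote of its few labeled members.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(c, 0.3, size=(200, 16)) for c in (0.0, 2.0, 4.0)])
true_labels = np.repeat([0, 1, 2], 200)

labeled_idx = rng.choice(len(features), size=30, replace=False)   # small labeled subset

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

pseudo = np.full(len(features), -1)
for c in np.unique(clusters):
    members = labeled_idx[clusters[labeled_idx] == c]
    if len(members) > 0:
        votes = true_labels[members]
        pseudo[clusters == c] = np.bincount(votes).argmax()        # majority vote

acc = (pseudo == true_labels).mean()
print(f"pseudo-label accuracy: {acc:.3f}")
```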
2503.09510
Rosalia Tufano
Rosalia Tufano, Gabriele Bavota
Automating Code Review: A Systematic Literature Review
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Code Review consists in assessing the code written by teammates with the goal of increasing code quality. Empirical studies documented the benefits brought by such a practice that, however, has its cost to pay in terms of developers' time. For this reason, researchers have proposed techniques and tools to automate code review tasks such as the reviewers selection (i.e., identifying suitable reviewers for a given code change) or the actual review of a given change (i.e., recommending improvements to the contributor as a human reviewer would do). Given the substantial amount of papers recently published on the topic, it may be challenging for researchers and practitioners to get a complete overview of the state-of-the-art. We present a systematic literature review (SLR) featuring 119 papers concerning the automation of code review tasks. We provide: (i) a categorization of the code review tasks automated in the literature; (ii) an overview of the under-the-hood techniques used for the automation, including the datasets used for training data-driven techniques; (iii) publicly available techniques and datasets used for their evaluation, with a description of the evaluation metrics usually adopted for each task. The SLR is concluded by a discussion of the current limitations of the state-of-the-art, with insights for future research directions.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 16:19:10 GMT" } ]
2025-03-13T00:00:00
[ [ "Tufano", "Rosalia", "" ], [ "Bavota", "Gabriele", "" ] ]
TITLE: Automating Code Review: A Systematic Literature Review ABSTRACT: Code Review consists in assessing the code written by teammates with the goal of increasing code quality. Empirical studies documented the benefits brought by such a practice that, however, has its cost to pay in terms of developers' time. For this reason, researchers have proposed techniques and tools to automate code review tasks such as the reviewers selection (i.e., identifying suitable reviewers for a given code change) or the actual review of a given change (i.e., recommending improvements to the contributor as a human reviewer would do). Given the substantial amount of papers recently published on the topic, it may be challenging for researchers and practitioners to get a complete overview of the state-of-the-art. We present a systematic literature review (SLR) featuring 119 papers concerning the automation of code review tasks. We provide: (i) a categorization of the code review tasks automated in the literature; (ii) an overview of the under-the-hood techniques used for the automation, including the datasets used for training data-driven techniques; (iii) publicly available techniques and datasets used for their evaluation, with a description of the evaluation metrics usually adopted for each task. The SLR is concluded by a discussion of the current limitations of the state-of-the-art, with insights for future research directions.
no_new_dataset
0.946892
2503.09514
Bin Hu
Bin Hu and Chenqiang Gao and Shurui Liu and Junjie Guo and Fang Chen and Fangcen Liu
CM-Diff: A Single Generative Network for Bidirectional Cross-Modality Translation Diffusion Model Between Infrared and Visible Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The image translation method represents a crucial approach for mitigating information deficiencies in the infrared and visible modalities, while also facilitating the enhancement of modality-specific datasets. However, existing methods for infrared and visible image translation either achieve unidirectional modality translation or rely on cycle consistency for bidirectional modality translation, which may result in suboptimal performance. In this work, we present the cross-modality translation diffusion model (CM-Diff) for simultaneously modeling data distributions in both the infrared and visible modalities. We address this challenge by combining translation direction labels for guidance during training with cross-modality feature control. Specifically, we view the establishment of the mapping relationship between the two modalities as the process of learning data distributions and understanding modality differences, achieved through a novel Bidirectional Diffusion Training (BDT) strategy. Additionally, we propose a Statistical Constraint Inference (SCI) strategy to ensure the generated image closely adheres to the data distribution of the target modality. Experimental results demonstrate the superiority of our CM-Diff over state-of-the-art methods, highlighting its potential for generating dual-modality datasets.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 16:25:18 GMT" } ]
2025-03-13T00:00:00
[ [ "Hu", "Bin", "" ], [ "Gao", "Chenqiang", "" ], [ "Liu", "Shurui", "" ], [ "Guo", "Junjie", "" ], [ "Chen", "Fang", "" ], [ "Liu", "Fangcen", "" ] ]
TITLE: CM-Diff: A Single Generative Network for Bidirectional Cross-Modality Translation Diffusion Model Between Infrared and Visible Images ABSTRACT: The image translation method represents a crucial approach for mitigating information deficiencies in the infrared and visible modalities, while also facilitating the enhancement of modality-specific datasets. However, existing methods for infrared and visible image translation either achieve unidirectional modality translation or rely on cycle consistency for bidirectional modality translation, which may result in suboptimal performance. In this work, we present the cross-modality translation diffusion model (CM-Diff) for simultaneously modeling data distributions in both the infrared and visible modalities. We address this challenge by combining translation direction labels for guidance during training with cross-modality feature control. Specifically, we view the establishment of the mapping relationship between the two modalities as the process of learning data distributions and understanding modality differences, achieved through a novel Bidirectional Diffusion Training (BDT) strategy. Additionally, we propose a Statistical Constraint Inference (SCI) strategy to ensure the generated image closely adheres to the data distribution of the target modality. Experimental results demonstrate the superiority of our CM-Diff over state-of-the-art methods, highlighting its potential for generating dual-modality datasets.
no_new_dataset
0.95253
2503.09527
Peng Chen
Peng Chen, Pi Bu, Yingyao Wang, Xinyi Wang, Ziming Wang, Jie Guo, Yingxiu Zhao, Qi Zhu, Jun Song, Siran Yang, Jiamang Wang, Bo Zheng
CombatVLA: An Efficient Vision-Language-Action Model for Combat Tasks in 3D Action Role-Playing Games
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent advances in Vision-Language-Action models (VLAs) have expanded the capabilities of embodied intelligence. However, significant challenges remain in real-time decision-making in complex 3D environments, which demand second-level responses, high-resolution perception, and tactical reasoning under dynamic conditions. To advance the field, we introduce CombatVLA, an efficient VLA model optimized for combat tasks in 3D action role-playing games (ARPGs). Specifically, our CombatVLA is a 3B model trained on video-action pairs collected by an action tracker, where the data is formatted as action-of-thought (AoT) sequences. Thereafter, CombatVLA seamlessly integrates into an action execution framework, allowing efficient inference through our truncated AoT strategy. Experimental results demonstrate that CombatVLA not only outperforms all existing models on the combat understanding benchmark but also achieves a 50-fold acceleration in game combat. Moreover, it has a higher task success rate than human players. We will open-source all resources, including the action tracker, dataset, benchmark, model weights, training code, and the implementation of the framework at https://combatvla.github.io/.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 16:42:26 GMT" } ]
2025-03-13T00:00:00
[ [ "Chen", "Peng", "" ], [ "Bu", "Pi", "" ], [ "Wang", "Yingyao", "" ], [ "Wang", "Xinyi", "" ], [ "Wang", "Ziming", "" ], [ "Guo", "Jie", "" ], [ "Zhao", "Yingxiu", "" ], [ "Zhu", "Qi", "" ], [ "Song", "Jun", "" ], [ "Yang", "Siran", "" ], [ "Wang", "Jiamang", "" ], [ "Zheng", "Bo", "" ] ]
TITLE: CombatVLA: An Efficient Vision-Language-Action Model for Combat Tasks in 3D Action Role-Playing Games ABSTRACT: Recent advances in Vision-Language-Action models (VLAs) have expanded the capabilities of embodied intelligence. However, significant challenges remain in real-time decision-making in complex 3D environments, which demand second-level responses, high-resolution perception, and tactical reasoning under dynamic conditions. To advance the field, we introduce CombatVLA, an efficient VLA model optimized for combat tasks in 3D action role-playing games (ARPGs). Specifically, our CombatVLA is a 3B model trained on video-action pairs collected by an action tracker, where the data is formatted as action-of-thought (AoT) sequences. Thereafter, CombatVLA seamlessly integrates into an action execution framework, allowing efficient inference through our truncated AoT strategy. Experimental results demonstrate that CombatVLA not only outperforms all existing models on the combat understanding benchmark but also achieves a 50-fold acceleration in game combat. Moreover, it has a higher task success rate than human players. We will open-source all resources, including the action tracker, dataset, benchmark, model weights, training code, and the implementation of the framework at https://combatvla.github.io/.
no_new_dataset
0.940735
2503.09535
Utku Ozbulak
Minjae Chung, Jong Bum Won, Ganghyun Kim, Yujin Kim, Utku Ozbulak
Evaluating Visual Explanations of Attention Maps for Transformer-based Medical Imaging
Accepted for publication in MICCAI 2024 Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC)
null
10.1007/978-3-031-77610-6_11
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although Vision Transformers (ViTs) have recently demonstrated superior performance in medical imaging problems, they face explainability issues similar to previous architectures such as convolutional neural networks. Recent research efforts suggest that attention maps, which are part of the decision-making process of ViTs, can potentially address the explainability issue by identifying regions influencing predictions, especially in models pretrained with self-supervised learning. In this work, we compare the visual explanations of attention maps to other commonly used methods for medical imaging problems. To do so, we employ four distinct medical imaging datasets that involve the identification of (1) colonic polyps, (2) breast tumors, (3) esophageal inflammation, and (4) bone fractures and hardware implants. Through large-scale experiments on the aforementioned datasets using various supervised and self-supervised pretrained ViTs, we find that although attention maps show promise under certain conditions and generally surpass GradCAM in explainability, they are outperformed by transformer-specific interpretability methods. Our findings indicate that the efficacy of attention maps as a method of interpretability is context-dependent and may be limited as they do not consistently provide the comprehensive insights required for robust medical decision-making.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 16:52:52 GMT" } ]
2025-03-13T00:00:00
[ [ "Chung", "Minjae", "" ], [ "Won", "Jong Bum", "" ], [ "Kim", "Ganghyun", "" ], [ "Kim", "Yujin", "" ], [ "Ozbulak", "Utku", "" ] ]
TITLE: Evaluating Visual Explanations of Attention Maps for Transformer-based Medical Imaging ABSTRACT: Although Vision Transformers (ViTs) have recently demonstrated superior performance in medical imaging problems, they face explainability issues similar to previous architectures such as convolutional neural networks. Recent research efforts suggest that attention maps, which are part of the decision-making process of ViTs, can potentially address the explainability issue by identifying regions influencing predictions, especially in models pretrained with self-supervised learning. In this work, we compare the visual explanations of attention maps to other commonly used methods for medical imaging problems. To do so, we employ four distinct medical imaging datasets that involve the identification of (1) colonic polyps, (2) breast tumors, (3) esophageal inflammation, and (4) bone fractures and hardware implants. Through large-scale experiments on the aforementioned datasets using various supervised and self-supervised pretrained ViTs, we find that although attention maps show promise under certain conditions and generally surpass GradCAM in explainability, they are outperformed by transformer-specific interpretability methods. Our findings indicate that the efficacy of attention maps as a method of interpretability is context-dependent and may be limited as they do not consistently provide the comprehensive insights required for robust medical decision-making.
no_new_dataset
0.945851
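The record above evaluates raw ViT attention maps against other explanation methods. A widely used way to aggregate attention across layers is attention rollout; the self-contained NumPy sketch below runs it on random attention tensors that stand in for a real model's per-layer attentions (ViT-B/16-like token counts are an assumption).

```python
# Attention rollout (Abnar & Zuidema, 2020) on synthetic per-layer attention matrices.
import numpy as np

rng = np.random.default_rng(0)
num_layers, num_heads, num_tokens = 12, 6, 197       # e.g. 196 patches + 1 CLS token
attn = rng.random((num_layers, num_heads, num_tokens, num_tokens))
attn = attn / attn.sum(axis=-1, keepdims=True)       # row-normalize like softmax outputs

rollout = np.eye(num_tokens)
for layer_attn in attn:
    a = layer_attn.mean(axis=0)                      # average over heads
    a = a + np.eye(num_tokens)                       # add the residual connection
    a = a / a.sum(axis=-1, keepdims=True)
    rollout = a @ rollout                            # compose layer by layer

cls_to_patches = rollout[0, 1:]                      # CLS token's attention to the 196 patches
saliency = cls_to_patches.reshape(14, 14)            # fold back into the patch grid
print(saliency.shape, float(saliency.sum()))
```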
2503.09537
Shuokang Huang
Shuokang Huang, Julie A. McCann
GenHPE: Generative Counterfactuals for 3D Human Pose Estimation with Radio Frequency Signals
null
null
null
null
cs.CV cs.AI cs.MM eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Human pose estimation (HPE) detects the positions of human body joints for various applications. Compared to using cameras, HPE using radio frequency (RF) signals is non-intrusive and more robust to adverse conditions, exploiting the signal variations caused by human interference. However, existing studies focus on single-domain HPE confined by domain-specific confounders, which cannot generalize to new domains and result in diminished HPE performance. Specifically, the signal variations caused by different human body parts are entangled, containing subject-specific confounders. RF signals are also intertwined with environmental noise, involving environment-specific confounders. In this paper, we propose GenHPE, a 3D HPE approach that generates counterfactual RF signals to eliminate domain-specific confounders. GenHPE trains generative models conditioned on human skeleton labels, learning how human body parts and confounders interfere with RF signals. We manipulate skeleton labels (i.e., removing body parts) as counterfactual conditions for generative models to synthesize counterfactual RF signals. The differences between counterfactual signals approximately eliminate domain-specific confounders and regularize an encoder-decoder model to learn domain-independent representations. Such representations help GenHPE generalize to new subjects/environments for cross-domain 3D HPE. We evaluate GenHPE on three public datasets from WiFi, ultra-wideband, and millimeter wave. Experimental results show that GenHPE outperforms state-of-the-art methods and reduces estimation errors by up to 52.2mm for cross-subject HPE and 10.6mm for cross-environment HPE.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 16:53:58 GMT" } ]
2025-03-13T00:00:00
[ [ "Huang", "Shuokang", "" ], [ "McCann", "Julie A.", "" ] ]
TITLE: GenHPE: Generative Counterfactuals for 3D Human Pose Estimation with Radio Frequency Signals ABSTRACT: Human pose estimation (HPE) detects the positions of human body joints for various applications. Compared to using cameras, HPE using radio frequency (RF) signals is non-intrusive and more robust to adverse conditions, exploiting the signal variations caused by human interference. However, existing studies focus on single-domain HPE confined by domain-specific confounders, which cannot generalize to new domains and result in diminished HPE performance. Specifically, the signal variations caused by different human body parts are entangled, containing subject-specific confounders. RF signals are also intertwined with environmental noise, involving environment-specific confounders. In this paper, we propose GenHPE, a 3D HPE approach that generates counterfactual RF signals to eliminate domain-specific confounders. GenHPE trains generative models conditioned on human skeleton labels, learning how human body parts and confounders interfere with RF signals. We manipulate skeleton labels (i.e., removing body parts) as counterfactual conditions for generative models to synthesize counterfactual RF signals. The differences between counterfactual signals approximately eliminate domain-specific confounders and regularize an encoder-decoder model to learn domain-independent representations. Such representations help GenHPE generalize to new subjects/environments for cross-domain 3D HPE. We evaluate GenHPE on three public datasets from WiFi, ultra-wideband, and millimeter wave. Experimental results show that GenHPE outperforms state-of-the-art methods and reduces estimation errors by up to 52.2mm for cross-subject HPE and 10.6mm for cross-environment HPE.
no_new_dataset
0.946597
2503.09551
Christopher Leon
Christopher Leon, Petr M. Anisimov, Nikolai Yampolsky, Alexander Scheinker
Using Convolutional Neural Networks to Accelerate 3D Coherent Synchrotron Radiation Computations
null
null
null
null
physics.acc-ph
http://creativecommons.org/licenses/by/4.0/
Calculating the effects of Coherent Synchrotron Radiation (CSR) is one of the most computationally expensive tasks in accelerator physics. Here, we use convolutional neural networks (CNNs), along with a latent conditional diffusion (LCD) model, trained on physics-based simulations to speed up calculations. Specifically, we produce the 3D CSR wakefields generated by electron bunches in circular orbit in the steady-state condition. Two datasets are used for training and testing the models: wakefields generated by three-dimensional Gaussian electron distributions and wakefields from a sum of up to 25 three-dimensional Gaussian distributions. The CNNs are able to accurately produce the 3D wakefields $\sim 250-1000$ times faster than the numerical calculations, while the LCD has a gain of a factor of $\sim 34$. We also test the extrapolation and out-of-distribution generalization ability of the models. They generalize well on distributions with larger spreads than what they were trained on, but struggle with smaller spreads.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 17:08:13 GMT" } ]
2025-03-13T00:00:00
[ [ "Leon", "Christopher", "" ], [ "Anisimov", "Petr M.", "" ], [ "Yampolsky", "Nikolai", "" ], [ "Scheinker", "Alexander", "" ] ]
TITLE: Using Convolutional Neural Networks to Accelerate 3D Coherent Synchrotron Radiation Computations ABSTRACT: Calculating the effects of Coherent Synchrotron Radiation (CSR) is one of the most computationally expensive tasks in accelerator physics. Here, we use convolutional neural networks (CNNs), along with a latent conditional diffusion (LCD) model, trained on physics-based simulations to speed up calculations. Specifically, we produce the 3D CSR wakefields generated by electron bunches in circular orbit in the steady-state condition. Two datasets are used for training and testing the models: wakefields generated by three-dimensional Gaussian electron distributions and wakefields from a sum of up to 25 three-dimensional Gaussian distributions. The CNNs are able to accurately produce the 3D wakefields $\sim 250-1000$ times faster than the numerical calculations, while the LCD has a gain of a factor of $\sim 34$. We also test the extrapolation and out-of-distribution generalization ability of the models. They generalize well on distributions with larger spreads than what they were trained on, but struggle with smaller spreads.
no_new_dataset
0.948106
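The CSR record above replaces an expensive 3D wakefield computation with learned surrogates. Purely to make the surrogate idea concrete, the toy PyTorch model below maps a 3D charge-density grid to a wakefield grid of the same shape; the architecture, grid size, and loss target are assumptions rather than the paper's setup.

```python
# Toy 3D CNN surrogate: charge-density grid in, wakefield component grid out.
import torch
import torch.nn as nn

surrogate = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=3, padding=1),
)

rho = torch.randn(8, 1, 32, 32, 32)                  # batch of 3D charge-density grids
wake_pred = surrogate(rho)                           # predicted wakefield on the same grid
loss = nn.functional.mse_loss(wake_pred, torch.randn_like(wake_pred))  # placeholder target
print(wake_pred.shape, float(loss))
```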
2503.09556
Tim B\"uchner
Tim B\"uchner, Christoph Anders, Orlando Guntinas-Lichius, Joachim Denzler
Electromyography-Informed Facial Expression Reconstruction for Physiological-Based Synthesis and Analysis
Accepted at CVPR 2025, 41 pages, 37 figures, 8 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The relationship between muscle activity and resulting facial expressions is crucial for various fields, including psychology, medicine, and entertainment. The synchronous recording of facial mimicry and muscular activity via surface electromyography (sEMG) provides a unique window into these complex dynamics. Unfortunately, existing methods for facial analysis cannot handle electrode occlusion, rendering them ineffective. Even with occlusion-free reference images of the same person, variations in expression intensity and execution are unmatchable. Our electromyography-informed facial expression reconstruction (EIFER) approach is a novel method to restore faces under sEMG occlusion faithfully in an adversarial manner. We decouple facial geometry and visual appearance (e.g., skin texture, lighting, electrodes) by combining a 3D Morphable Model (3DMM) with neural unpaired image-to-image translation via reference recordings. Then, EIFER learns a bidirectional mapping between 3DMM expression parameters and muscle activity, establishing correspondence between the two domains. We validate the effectiveness of our approach through experiments on a dataset of synchronized sEMG recordings and facial mimicry, demonstrating faithful geometry and appearance reconstruction. Further, we synthesize expressions based on muscle activity and show how observed expressions can predict dynamic muscle activity. Consequently, EIFER introduces a new paradigm for facial electromyography, which could be extended to other forms of multi-modal face recordings.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 17:21:10 GMT" } ]
2025-03-13T00:00:00
[ [ "Büchner", "Tim", "" ], [ "Anders", "Christoph", "" ], [ "Guntinas-Lichius", "Orlando", "" ], [ "Denzler", "Joachim", "" ] ]
TITLE: Electromyography-Informed Facial Expression Reconstruction for Physiological-Based Synthesis and Analysis ABSTRACT: The relationship between muscle activity and resulting facial expressions is crucial for various fields, including psychology, medicine, and entertainment. The synchronous recording of facial mimicry and muscular activity via surface electromyography (sEMG) provides a unique window into these complex dynamics. Unfortunately, existing methods for facial analysis cannot handle electrode occlusion, rendering them ineffective. Even with occlusion-free reference images of the same person, variations in expression intensity and execution are unmatchable. Our electromyography-informed facial expression reconstruction (EIFER) approach is a novel method to restore faces under sEMG occlusion faithfully in an adversarial manner. We decouple facial geometry and visual appearance (e.g., skin texture, lighting, electrodes) by combining a 3D Morphable Model (3DMM) with neural unpaired image-to-image translation via reference recordings. Then, EIFER learns a bidirectional mapping between 3DMM expression parameters and muscle activity, establishing correspondence between the two domains. We validate the effectiveness of our approach through experiments on a dataset of synchronized sEMG recordings and facial mimicry, demonstrating faithful geometry and appearance reconstruction. Further, we show how expressions can be synthesized from muscle activity and how observed expressions can predict dynamic muscle activity. Consequently, EIFER introduces a new paradigm for facial electromyography, which could be extended to other forms of multi-modal face recordings.
no_new_dataset
0.850282
2503.09565
Zixiang Chen
Zixiang Chen, Greg Yang, Qingyue Zhao, Quanquan Gu
Global Convergence and Rich Feature Learning in $L$-Layer Infinite-Width Neural Networks under $\mu$P Parametrization
29 pages, 5 figures, 2 tables
null
null
null
cs.LG cs.AI math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite deep neural networks' powerful representation learning capabilities, theoretical understanding of how networks can simultaneously achieve meaningful feature learning and global convergence remains elusive. Existing approaches like the neural tangent kernel (NTK) are limited because features stay close to their initialization in this parametrization, leaving open questions about feature properties during substantial evolution. In this paper, we investigate the training dynamics of infinitely wide, $L$-layer neural networks using the tensor program (TP) framework. Specifically, we show that, when trained with stochastic gradient descent (SGD) under the Maximal Update parametrization ($\mu$P) and mild conditions on the activation function, SGD enables these networks to learn linearly independent features that substantially deviate from their initial values. This rich feature space captures relevant data information and ensures that any convergent point of the training process is a global minimum. Our analysis leverages both the interactions among features across layers and the properties of Gaussian random variables, providing new insights into deep representation learning. We further validate our theoretical findings through experiments on real-world datasets.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 17:33:13 GMT" } ]
2025-03-13T00:00:00
[ [ "Chen", "Zixiang", "" ], [ "Yang", "Greg", "" ], [ "Zhao", "Qingyue", "" ], [ "Gu", "Quanquan", "" ] ]
TITLE: Global Convergence and Rich Feature Learning in $L$-Layer Infinite-Width Neural Networks under $\mu$P Parametrization ABSTRACT: Despite deep neural networks' powerful representation learning capabilities, theoretical understanding of how networks can simultaneously achieve meaningful feature learning and global convergence remains elusive. Existing approaches like the neural tangent kernel (NTK) are limited because features stay close to their initialization in this parametrization, leaving open questions about feature properties during substantial evolution. In this paper, we investigate the training dynamics of infinitely wide, $L$-layer neural networks using the tensor program (TP) framework. Specifically, we show that, when trained with stochastic gradient descent (SGD) under the Maximal Update parametrization ($\mu$P) and mild conditions on the activation function, SGD enables these networks to learn linearly independent features that substantially deviate from their initial values. This rich feature space captures relevant data information and ensures that any convergent point of the training process is a global minimum. Our analysis leverages both the interactions among features across layers and the properties of Gaussian random variables, providing new insights into deep representation learning. We further validate our theoretical findings through experiments on real-world datasets.
no_new_dataset
0.945951
2503.09576
Philippe Chlenski
Philippe Chlenski, Kaizhu Du, Dylan Satow, and Itsik Pe'er
Manify: A Python Library for Learning Non-Euclidean Representations
30 pages, 4 figures, 4 tables. Preprint
null
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
We present Manify, an open-source Python library for non-Euclidean representation learning. Leveraging manifold learning techniques, Manify provides tools for learning embeddings in (products of) non-Euclidean spaces, performing classification and regression with data that lives in such spaces, and estimating the curvature of a manifold. Manify aims to advance research and applications in machine learning by offering a comprehensive suite of tools for manifold-based data analysis. Our source code, examples, datasets, results, and documentation are available at https://github.com/pchlenski/manify
[ { "version": "v1", "created": "Wed, 12 Mar 2025 17:44:40 GMT" } ]
2025-03-13T00:00:00
[ [ "Chlenski", "Philippe", "" ], [ "Du", "Kaizhu", "" ], [ "Satow", "Dylan", "" ], [ "Pe'er", "Itsik", "" ] ]
TITLE: Manify: A Python Library for Learning Non-Euclidean Representations ABSTRACT: We present Manify, an open-source Python library for non-Euclidean representation learning. Leveraging manifold learning techniques, Manify provides tools for learning embeddings in (products of) non-Euclidean spaces, performing classification and regression with data that lives in such spaces, and estimating the curvature of a manifold. Manify aims to advance research and applications in machine learning by offering a comprehensive suite of tools for manifold-based data analysis. Our source code, examples, datasets, results, and documentation are available at https://github.com/pchlenski/manify
no_new_dataset
0.944177
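The Manify record above concerns learning in non-Euclidean spaces. To make the geometry concrete without guessing at Manify's own API (which is not shown here), the following generic NumPy function computes the hyperbolic distance in the Poincaré ball model; it illustrates the kind of primitive such libraries build on and is not a Manify call:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Hyperbolic distance in the Poincare ball (requires ||u||, ||v|| < 1).
    Generic formula for illustration; not part of the Manify library."""
    uu = np.dot(u, u)
    vv = np.dot(v, v)
    duv = np.dot(u - v, u - v)
    arg = 1.0 + 2.0 * duv / max((1.0 - uu) * (1.0 - vv), eps)
    return np.arccosh(arg)

u = np.array([0.1, 0.2])
v = np.array([-0.3, 0.4])
print(poincare_distance(u, v))  # modest value here; grows rapidly near the ball boundary
```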
2503.09587
Nannan Wu
Nannan Wu, Zhuo Kuang, Zengqiang Yan, Ping Wang, Li Yu
Fair Federated Medical Image Classification Against Quality Shift via Inter-Client Progressive State Matching
Preprint
null
null
null
eess.IV cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the potential of federated learning in medical applications, inconsistent imaging quality across institutions, stemming from lower-quality data from a minority of clients, biases federated models toward more common high-quality images. This raises significant fairness concerns. Existing fair federated learning methods have demonstrated some effectiveness in solving this problem by aligning a single 0th- or 1st-order state of convergence (e.g., training loss or sharpness). However, we argue in this work that fairness based on such a single state is still not an adequate surrogate for fairness during testing, as these single metrics fail to fully capture the convergence characteristics, making them suboptimal for guiding fair learning. To address this limitation, we develop a generalized framework. Specifically, we propose assessing convergence using multiple states, defined as sharpness or perturbed loss computed at varying search distances. Building on this comprehensive assessment, we propose promoting fairness for these states across clients to achieve our ultimate fairness objective. This is accomplished through the proposed method, FedISM+. In FedISM+, the search distance evolves over time, progressively focusing on different states. We then incorporate two components in local training and global aggregation to ensure cross-client fairness for each state. This gradually makes convergence equitable for all states, thereby improving fairness during testing. Our empirical evaluations, performed on the well-known RSNA ICH and ISIC 2019 datasets, demonstrate the superiority of FedISM+ over existing state-of-the-art methods for fair federated learning. The code is available at https://github.com/wnn2000/FFL4MIA.
[ { "version": "v1", "created": "Wed, 12 Mar 2025 17:56:28 GMT" } ]
2025-03-13T00:00:00
[ [ "Wu", "Nannan", "" ], [ "Kuang", "Zhuo", "" ], [ "Yan", "Zengqiang", "" ], [ "Wang", "Ping", "" ], [ "Yu", "Li", "" ] ]
TITLE: Fair Federated Medical Image Classification Against Quality Shift via Inter-Client Progressive State Matching ABSTRACT: Despite the potential of federated learning in medical applications, inconsistent imaging quality across institutions, stemming from lower-quality data from a minority of clients, biases federated models toward more common high-quality images. This raises significant fairness concerns. Existing fair federated learning methods have demonstrated some effectiveness in solving this problem by aligning a single 0th- or 1st-order state of convergence (e.g., training loss or sharpness). However, we argue in this work that fairness based on such a single state is still not an adequate surrogate for fairness during testing, as these single metrics fail to fully capture the convergence characteristics, making them suboptimal for guiding fair learning. To address this limitation, we develop a generalized framework. Specifically, we propose assessing convergence using multiple states, defined as sharpness or perturbed loss computed at varying search distances. Building on this comprehensive assessment, we propose promoting fairness for these states across clients to achieve our ultimate fairness objective. This is accomplished through the proposed method, FedISM+. In FedISM+, the search distance evolves over time, progressively focusing on different states. We then incorporate two components in local training and global aggregation to ensure cross-client fairness for each state. This gradually makes convergence equitable for all states, thereby improving fairness during testing. Our empirical evaluations, performed on the well-known RSNA ICH and ISIC 2019 datasets, demonstrate the superiority of FedISM+ over existing state-of-the-art methods for fair federated learning. The code is available at https://github.com/wnn2000/FFL4MIA.
no_new_dataset
0.947332
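The FedISM+ record above defines convergence "states" as sharpness or perturbed loss evaluated at varying search distances. A minimal PyTorch sketch of evaluating a perturbed loss at one search distance rho (a SAM-style gradient-ascent perturbation; the authors' exact procedure and hyperparameters are assumptions here) could be:

```python
import torch

def perturbed_loss(model, loss_fn, x, y, rho):
    """Loss at search distance rho along the normalized gradient direction.
    A generic sketch of one 'perturbed loss' state, not the authors' code."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params)
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    offsets = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            e = rho * g / grad_norm
            p.add_(e)                     # climb to the perturbed point
            offsets.append(e)
        loss_rho = loss_fn(model(x), y)   # state at search distance rho
        for p, e in zip(params, offsets):
            p.sub_(e)                     # restore original weights
    return loss_rho
```

Evaluating this at several values of rho would yield the multiple per-client states whose fairness is then promoted across clients.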
2103.01901
Shuxiao Chen
Shuxiao Chen, Qinqing Zheng, Qi Long, Weijie J. Su
Minimax Estimation for Personalized Federated Learning: An Alternative between FedAvg and Local Training?
JMLR published version
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A widely recognized difficulty in federated learning arises from the statistical heterogeneity among clients: local datasets often originate from distinct yet not entirely unrelated probability distributions, and personalization is, therefore, necessary to achieve optimal results from each individual's perspective. In this paper, we show how the excess risks of personalized federated learning using a smooth, strongly convex loss depend on data heterogeneity from a minimax point of view, with a focus on the FedAvg algorithm (McMahan et al., 2017) and pure local training (i.e., clients solve empirical risk minimization problems on their local datasets without any communication). Our main result reveals an approximate alternative between these two baseline algorithms for federated learning: the former algorithm is minimax rate optimal over a collection of instances when data heterogeneity is small, whereas the latter is minimax rate optimal when data heterogeneity is large, and the threshold is sharp up to a constant. As an implication, our results show that from a worst-case point of view, a dichotomous strategy that makes a choice between the two baseline algorithms is rate-optimal. Another implication is that the popular strategy of FedAvg followed by local fine-tuning is also minimax optimal under additional regularity conditions. Our analysis relies on a new notion of algorithmic stability that takes into account the nature of federated learning.
[ { "version": "v1", "created": "Tue, 2 Mar 2021 17:58:20 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 02:36:12 GMT" } ]
2025-03-12T00:00:00
[ [ "Chen", "Shuxiao", "" ], [ "Zheng", "Qinqing", "" ], [ "Long", "Qi", "" ], [ "Su", "Weijie J.", "" ] ]
TITLE: Minimax Estimation for Personalized Federated Learning: An Alternative between FedAvg and Local Training? ABSTRACT: A widely recognized difficulty in federated learning arises from the statistical heterogeneity among clients: local datasets often originate from distinct yet not entirely unrelated probability distributions, and personalization is, therefore, necessary to achieve optimal results from each individual's perspective. In this paper, we show how the excess risks of personalized federated learning using a smooth, strongly convex loss depend on data heterogeneity from a minimax point of view, with a focus on the FedAvg algorithm (McMahan et al., 2017) and pure local training (i.e., clients solve empirical risk minimization problems on their local datasets without any communication). Our main result reveals an approximate alternative between these two baseline algorithms for federated learning: the former algorithm is minimax rate optimal over a collection of instances when data heterogeneity is small, whereas the latter is minimax rate optimal when data heterogeneity is large, and the threshold is sharp up to a constant. As an implication, our results show that from a worst-case point of view, a dichotomous strategy that makes a choice between the two baseline algorithms is rate-optimal. Another implication is that the popular strategy of FedAvg followed by local fine-tuning is also minimax optimal under additional regularity conditions. Our analysis relies on a new notion of algorithmic stability that takes into account the nature of federated learning.
no_new_dataset
0.950595
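The record above establishes an approximate alternation between FedAvg and pure local training depending on data heterogeneity. A toy sketch of the implied dichotomous strategy, with a purely illustrative heterogeneity proxy and threshold (both assumptions, not the paper's sharp constant), might be:

```python
import numpy as np

def choose_strategy(local_optima, threshold):
    """Toy dichotomous rule: if estimated client heterogeneity is small, federate
    (FedAvg); otherwise train purely locally. 'local_optima' stacks each client's
    empirical risk minimizer; the threshold is problem-dependent and assumed here."""
    center = local_optima.mean(axis=0)
    heterogeneity = np.mean(np.linalg.norm(local_optima - center, axis=1))
    return "fedavg" if heterogeneity <= threshold else "local"

clients = np.random.randn(10, 5) * 0.05          # nearly homogeneous clients
print(choose_strategy(clients, threshold=0.5))   # -> "fedavg"
```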
2207.05195
Bohan Tang
Bohan Tang, Yiqi Zhong, Chenxin Xu, Wei-Tao Wu, Ulrich Neumann, Yanfeng Wang, Ya Zhang, and Siheng Chen
Collaborative Uncertainty Benefits Multi-Agent Multi-Modal Trajectory Forecasting
arXiv admin note: text overlap with arXiv:2110.13947
null
null
null
cs.CV stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
In multi-modal multi-agent trajectory forecasting, two major challenges have not been fully tackled: 1) how to measure the uncertainty brought by the interaction module that causes correlations among the predicted trajectories of multiple agents; 2) how to rank the multiple predictions and select the optimal predicted trajectory. In order to handle these challenges, this work first proposes a novel concept, collaborative uncertainty (CU), which models the uncertainty resulting from interaction modules. Then we build a general CU-aware regression framework with an original permutation-equivariant uncertainty estimator to do both tasks of regression and uncertainty estimation. Further, we apply the proposed framework to current SOTA multi-agent multi-modal forecasting systems as a plugin module, which enables the SOTA systems to 1) estimate the uncertainty in the multi-agent multi-modal trajectory forecasting task; 2) rank the multiple predictions and select the optimal one based on the estimated uncertainty. We conduct extensive experiments on a synthetic dataset and two public large-scale multi-agent trajectory forecasting benchmarks. Experiments show that: 1) on the synthetic dataset, the CU-aware regression framework allows the model to appropriately approximate the ground-truth Laplace distribution; 2) on the multi-agent trajectory forecasting benchmarks, the CU-aware regression framework steadily helps SOTA systems improve their performance. Specifically, the proposed framework helps VectorNet improve by 262 cm regarding the Final Displacement Error of the chosen optimal prediction on the nuScenes dataset; 3) for multi-agent multi-modal trajectory forecasting systems, prediction uncertainty is positively correlated with future stochasticity; and 4) the estimated CU values are highly related to the interactive information among agents.
[ { "version": "v1", "created": "Mon, 11 Jul 2022 21:17:41 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 16:10:29 GMT" } ]
2025-03-12T00:00:00
[ [ "Tang", "Bohan", "" ], [ "Zhong", "Yiqi", "" ], [ "Xu", "Chenxin", "" ], [ "Wu", "Wei-Tao", "" ], [ "Neumann", "Ulrich", "" ], [ "Wang", "Yanfeng", "" ], [ "Zhang", "Ya", "" ], [ "Chen", "Siheng", "" ] ]
TITLE: Collaborative Uncertainty Benefits Multi-Agent Multi-Modal Trajectory Forecasting ABSTRACT: In multi-modal multi-agent trajectory forecasting, two major challenges have not been fully tackled: 1) how to measure the uncertainty brought by the interaction module that causes correlations among the predicted trajectories of multiple agents; 2) how to rank the multiple predictions and select the optimal predicted trajectory. In order to handle these challenges, this work first proposes a novel concept, collaborative uncertainty (CU), which models the uncertainty resulting from interaction modules. Then we build a general CU-aware regression framework with an original permutation-equivariant uncertainty estimator to do both tasks of regression and uncertainty estimation. Further, we apply the proposed framework to current SOTA multi-agent multi-modal forecasting systems as a plugin module, which enables the SOTA systems to 1) estimate the uncertainty in the multi-agent multi-modal trajectory forecasting task; 2) rank the multiple predictions and select the optimal one based on the estimated uncertainty. We conduct extensive experiments on a synthetic dataset and two public large-scale multi-agent trajectory forecasting benchmarks. Experiments show that: 1) on the synthetic dataset, the CU-aware regression framework allows the model to appropriately approximate the ground-truth Laplace distribution; 2) on the multi-agent trajectory forecasting benchmarks, the CU-aware regression framework steadily helps SOTA systems improve their performance. Specifically, the proposed framework helps VectorNet improve by 262 cm regarding the Final Displacement Error of the chosen optimal prediction on the nuScenes dataset; 3) for multi-agent multi-modal trajectory forecasting systems, prediction uncertainty is positively correlated with future stochasticity; and 4) the estimated CU values are highly related to the interactive information among agents.
no_new_dataset
0.946941
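The record above models predictive uncertainty with a Laplace distribution for trajectory forecasting. A generic PyTorch Laplace negative log-likelihood, a common loss for this setting and only a stand-in for the paper's CU estimator, is:

```python
import torch

def laplace_nll(pred, scale, target):
    """Negative log-likelihood of a Laplace distribution for uncertainty-aware
    trajectory regression (generic sketch, not the collaborative-uncertainty model).
    pred/target: (..., 2) future positions; scale: (..., 2) positive scale b."""
    b = scale.clamp(min=1e-6)
    return (torch.log(2 * b) + (target - pred).abs() / b).mean()

pred = torch.zeros(4, 12, 2)            # 4 agents, 12 future steps, (x, y)
scale = torch.ones(4, 12, 2) * 0.5      # predicted per-step uncertainty
target = torch.randn(4, 12, 2)
print(laplace_nll(pred, scale, target))
```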
2211.01717
Bohan Tang
Bohan Tang, Siheng Chen, Xiaowen Dong
Learning Hypergraphs From Signals With Dual Smoothness Prior
null
null
null
null
cs.LG cs.SI eess.SP stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Hypergraph structure learning, which aims to learn the hypergraph structures from the observed signals to capture the intrinsic high-order relationships among the entities, becomes crucial when a hypergraph topology is not readily available in the datasets. There are two challenges that lie at the heart of this problem: 1) how to handle the huge search space of potential hyperedges, and 2) how to define meaningful criteria to measure the relationship between the signals observed on nodes and the hypergraph structure. In this paper, for the first challenge, we adopt the assumption that the ideal hypergraph structure can be derived from a learnable graph structure that captures the pairwise relations within signals. Further, we propose a hypergraph structure learning framework HGSL with a novel dual smoothness prior that reveals a mapping between the observed node signals and the hypergraph structure, whereby each hyperedge corresponds to a subgraph with both node signal smoothness and edge signal smoothness in the learnable graph structure. Finally, we conduct extensive experiments to evaluate HGSL on both synthetic and real world datasets. Experiments show that HGSL can efficiently infer meaningful hypergraph topologies from observed signals.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 11:13:02 GMT" }, { "version": "v2", "created": "Thu, 16 Feb 2023 10:44:48 GMT" }, { "version": "v3", "created": "Tue, 14 Mar 2023 10:21:56 GMT" }, { "version": "v4", "created": "Tue, 11 Mar 2025 16:08:58 GMT" } ]
2025-03-12T00:00:00
[ [ "Tang", "Bohan", "" ], [ "Chen", "Siheng", "" ], [ "Dong", "Xiaowen", "" ] ]
TITLE: Learning Hypergraphs From Signals With Dual Smoothness Prior ABSTRACT: Hypergraph structure learning, which aims to learn the hypergraph structures from the observed signals to capture the intrinsic high-order relationships among the entities, becomes crucial when a hypergraph topology is not readily available in the datasets. There are two challenges that lie at the heart of this problem: 1) how to handle the huge search space of potential hyperedges, and 2) how to define meaningful criteria to measure the relationship between the signals observed on nodes and the hypergraph structure. In this paper, for the first challenge, we adopt the assumption that the ideal hypergraph structure can be derived from a learnable graph structure that captures the pairwise relations within signals. Further, we propose a hypergraph structure learning framework HGSL with a novel dual smoothness prior that reveals a mapping between the observed node signals and the hypergraph structure, whereby each hyperedge corresponds to a subgraph with both node signal smoothness and edge signal smoothness in the learnable graph structure. Finally, we conduct extensive experiments to evaluate HGSL on both synthetic and real world datasets. Experiments show that HGSL can efficiently infer meaningful hypergraph topologies from observed signals.
no_new_dataset
0.949763
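The record above scores candidate hyperedges by signal smoothness on a learnable graph. A simplified NumPy sketch of the node-signal smoothness term (the Laplacian quadratic form; the companion edge-signal term of the dual prior is omitted here) is:

```python
import numpy as np

def smoothness(signals, adjacency):
    """Quadratic-form smoothness sum_i x_i^T L x_i of node signals on a (sub)graph.
    Lower values mean smoother signals; using this to score candidate hyperedges is
    a simplified sketch of the node-signal half of a dual smoothness prior."""
    degrees = np.diag(adjacency.sum(axis=1))
    laplacian = degrees - adjacency
    return float(np.trace(signals.T @ laplacian @ signals))

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # a 3-node candidate subgraph
X = np.array([[1.0], [1.1], [0.9]])      # nearly constant signal -> small score
print(smoothness(X, A))                  # 0.06 for this example
```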
2301.08995
Anoop Kadan
Anoop Kadan, Deepak P., Manjary P. Gangan, Savitha Sam Abraham, Lajish V. L
REDAffectiveLM: Leveraging Affect Enriched Embedding and Transformer-based Neural Language Model for Readers' Emotion Detection
null
null
10.1007/s10115-024-02194-4
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Technological advancements in web platforms allow people to express and share emotions towards textual write-ups written and shared by others. This brings about different interesting domains for analysis: emotion expressed by the writer and emotion elicited from the readers. In this paper, we propose a novel approach for Readers' Emotion Detection from short-text documents using a deep learning model called REDAffectiveLM. Within state-of-the-art NLP tasks, it is well understood that utilizing context-specific representations from transformer-based pre-trained language models helps achieve improved performance. Within this affective computing task, we explore how incorporating affective information can further enhance performance. Towards this, we leverage context-specific and affect enriched representations by using a transformer-based pre-trained language model in tandem with affect enriched Bi-LSTM+Attention. For empirical evaluation, we procure a new dataset REN-20k, besides using RENh-4k and SemEval-2007. We evaluate the performance of our REDAffectiveLM rigorously across these datasets, against a vast set of state-of-the-art baselines, where our model consistently outperforms baselines and obtains statistically significant results. Our results establish that utilizing affect enriched representation along with context-specific representation within a neural architecture can considerably enhance readers' emotion detection. Since the impact of affect enrichment specifically in readers' emotion detection isn't well explored, we conduct a detailed analysis over affect enriched Bi-LSTM+Attention using qualitative and quantitative model behavior evaluation techniques. We observe that compared to conventional semantic embedding, affect enriched embedding increases the ability of the network to effectively identify and assign weightage to key terms responsible for readers' emotion detection.
[ { "version": "v1", "created": "Sat, 21 Jan 2023 19:28:25 GMT" } ]
2025-03-12T00:00:00
[ [ "Kadan", "Anoop", "" ], [ "P.", "Deepak", "" ], [ "Gangan", "Manjary P.", "" ], [ "Abraham", "Savitha Sam", "" ], [ "L", "Lajish V.", "" ] ]
TITLE: REDAffectiveLM: Leveraging Affect Enriched Embedding and Transformer-based Neural Language Model for Readers' Emotion Detection ABSTRACT: Technological advancements in web platforms allow people to express and share emotions towards textual write-ups written and shared by others. This brings about different interesting domains for analysis: emotion expressed by the writer and emotion elicited from the readers. In this paper, we propose a novel approach for Readers' Emotion Detection from short-text documents using a deep learning model called REDAffectiveLM. Within state-of-the-art NLP tasks, it is well understood that utilizing context-specific representations from transformer-based pre-trained language models helps achieve improved performance. Within this affective computing task, we explore how incorporating affective information can further enhance performance. Towards this, we leverage context-specific and affect enriched representations by using a transformer-based pre-trained language model in tandem with affect enriched Bi-LSTM+Attention. For empirical evaluation, we procure a new dataset REN-20k, besides using RENh-4k and SemEval-2007. We evaluate the performance of our REDAffectiveLM rigorously across these datasets, against a vast set of state-of-the-art baselines, where our model consistently outperforms baselines and obtains statistically significant results. Our results establish that utilizing affect enriched representation along with context-specific representation within a neural architecture can considerably enhance readers' emotion detection. Since the impact of affect enrichment specifically in readers' emotion detection isn't well explored, we conduct a detailed analysis over affect enriched Bi-LSTM+Attention using qualitative and quantitative model behavior evaluation techniques. We observe that compared to conventional semantic embedding, affect enriched embedding increases the ability of the network to effectively identify and assign weightage to key terms responsible for readers' emotion detection.
new_dataset
0.966632
2305.18226
Christoforos Vasilatos
Christoforos Vasilatos, Manaar Alam, Talal Rahwan, Yasir Zaki and Michail Maniatakos
HowkGPT: Investigating the Detection of ChatGPT-generated University Student Homework through Context-Aware Perplexity Analysis
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the use of Large Language Models (LLMs) in text generation tasks proliferates, concerns arise over their potential to compromise academic integrity. The education sector currently tussles with distinguishing student-authored homework assignments from AI-generated ones. This paper addresses the challenge by introducing HowkGPT, designed to identify homework assignments generated by AI. HowkGPT is built upon a dataset of academic assignments and accompanying metadata [17] and employs a pretrained LLM to compute perplexity scores for student-authored and ChatGPT-generated responses. These scores then assist in establishing a threshold for discerning the origin of a submitted assignment. Given the specificity and contextual nature of academic work, HowkGPT further refines its analysis by defining category-specific thresholds derived from the metadata, enhancing the precision of the detection. This study emphasizes the critical need for effective strategies to uphold academic integrity amidst the growing influence of LLMs and provides an approach to ensuring fair and accurate grading in educational institutions.
[ { "version": "v1", "created": "Fri, 26 May 2023 11:07:25 GMT" }, { "version": "v2", "created": "Wed, 7 Jun 2023 11:43:44 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 08:08:05 GMT" } ]
2025-03-12T00:00:00
[ [ "Vasilatos", "Christoforos", "" ], [ "Alam", "Manaar", "" ], [ "Rahwan", "Talal", "" ], [ "Zaki", "Yasir", "" ], [ "Maniatakos", "Michail", "" ] ]
TITLE: HowkGPT: Investigating the Detection of ChatGPT-generated University Student Homework through Context-Aware Perplexity Analysis ABSTRACT: As the use of Large Language Models (LLMs) in text generation tasks proliferates, concerns arise over their potential to compromise academic integrity. The education sector currently tussles with distinguishing student-authored homework assignments from AI-generated ones. This paper addresses the challenge by introducing HowkGPT, designed to identify homework assignments generated by AI. HowkGPT is built upon a dataset of academic assignments and accompanying metadata [17] and employs a pretrained LLM to compute perplexity scores for student-authored and ChatGPT-generated responses. These scores then assist in establishing a threshold for discerning the origin of a submitted assignment. Given the specificity and contextual nature of academic work, HowkGPT further refines its analysis by defining category-specific thresholds derived from the metadata, enhancing the precision of the detection. This study emphasizes the critical need for effective strategies to uphold academic integrity amidst the growing influence of LLMs and provides an approach to ensuring fair and accurate grading in educational institutions.
new_dataset
0.96738
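The record above thresholds context-aware perplexity to flag AI-generated homework. A minimal, hedged sketch of perplexity scoring with an off-the-shelf GPT-2 (the actual HowkGPT model, metadata categories, and category-specific thresholds are not reproduced here) is:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Minimal perplexity scoring with a pretrained causal LM; the threshold below is an
# illustrative assumption, not a published HowkGPT cutoff.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean token cross-entropy
    return torch.exp(loss).item()

THRESHOLD = 40.0   # hypothetical category-specific cutoff
answer = "Photosynthesis converts light energy into chemical energy in plants."
label = "likely AI-generated" if perplexity(answer) < THRESHOLD else "likely human"
print(label)
```

In practice the threshold would be calibrated per assignment category from labeled student-authored and ChatGPT-generated responses, which is the refinement the record describes.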
2306.01740
John Cartlidge
Lawrence Clegg and John Cartlidge
Not feeling the buzz: Correction study of mispricing and inefficiency in online sportsbooks
24 pages, 2 figures. Revised argument; minor edits and funding acknowledgement; typos corrected. This paper is a replication and correction study and longer version of an accepted journal article. Replication code and data are available online: https://github.com/Faxulous/notFeelingTheBuzz
International Journal of Forecasting (2025), 41(2), pages 798-802
10.1016/j.ijforecast.2024.06.012
null
stat.AP cs.CE q-fin.GN q-fin.ST
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a replication and correction of a recent article (Ramirez, P., Reade, J.J., Singleton, C., Betting on a buzz: Mispricing and inefficiency in online sportsbooks, International Journal of Forecasting, 39:3, 2023, pp. 1413-1423, doi: 10.1016/j.ijforecast.2022.07.011). RRS measure profile page views on Wikipedia to generate a "buzz factor" metric for tennis players and show that it can be used to form a profitable gambling strategy by predicting bookmaker mispricing. Here, we use the same dataset as RRS to reproduce their results exactly, thus confirming the robustness of their mispricing claim. However, we discover that the published betting results are significantly affected by a single bet (the "Hercog" bet), which returns substantial outlier profits based on erroneously long odds. When this data quality issue is resolved, the majority of reported profits disappear and only one strategy, which bets on "competitive" matches, remains significantly profitable in the original out-of-sample period. While one profitable strategy offers weaker support than the original study, it still provides an indication that market inefficiencies may exist, as originally claimed by RRS. As an extension, we continue backtesting after 2020 on a cleaned dataset. Results show that (a) the "competitive" strategy generates no further profits, potentially suggesting markets have become more efficient, and (b) model coefficients estimated over this more recent period are no longer reliable predictors of bookmaker mispricing. We present this work as a case study demonstrating the importance of replication studies in sports forecasting, and the necessity to clean data. We open-source release comprehensive datasets and code.
[ { "version": "v1", "created": "Wed, 3 May 2023 19:25:33 GMT" }, { "version": "v2", "created": "Thu, 25 Jan 2024 16:33:33 GMT" }, { "version": "v3", "created": "Sat, 1 Jun 2024 09:03:30 GMT" }, { "version": "v4", "created": "Thu, 11 Jul 2024 14:50:38 GMT" } ]
2025-03-12T00:00:00
[ [ "Clegg", "Lawrence", "" ], [ "Cartlidge", "John", "" ] ]
TITLE: Not feeling the buzz: Correction study of mispricing and inefficiency in online sportsbooks ABSTRACT: We present a replication and correction of a recent article (Ramirez, P., Reade, J.J., Singleton, C., Betting on a buzz: Mispricing and inefficiency in online sportsbooks, International Journal of Forecasting, 39:3, 2023, pp. 1413-1423, doi: 10.1016/j.ijforecast.2022.07.011). RRS measure profile page views on Wikipedia to generate a "buzz factor" metric for tennis players and show that it can be used to form a profitable gambling strategy by predicting bookmaker mispricing. Here, we use the same dataset as RRS to reproduce their results exactly, thus confirming the robustness of their mispricing claim. However, we discover that the published betting results are significantly affected by a single bet (the "Hercog" bet), which returns substantial outlier profits based on erroneously long odds. When this data quality issue is resolved, the majority of reported profits disappear and only one strategy, which bets on "competitive" matches, remains significantly profitable in the original out-of-sample period. While one profitable strategy offers weaker support than the original study, it still provides an indication that market inefficiencies may exist, as originally claimed by RRS. As an extension, we continue backtesting after 2020 on a cleaned dataset. Results show that (a) the "competitive" strategy generates no further profits, potentially suggesting markets have become more efficient, and (b) model coefficients estimated over this more recent period are no longer reliable predictors of bookmaker mispricing. We present this work as a case study demonstrating the importance of replication studies in sports forecasting, and the necessity to clean data. We open-source release comprehensive datasets and code.
no_new_dataset
0.922552
2307.08359
Frederik Plahl
Andreas Zachariae and Julia Widera and Frederik Plahl and Bj\"orn Hein and Christian Wurll
Human Emergency Detection during Autonomous Hospital Transports
Preprint of the corresponding IAS18-2023 conference publication (Proceedings of the 18th International Conference IAS-18)
Lee, SG., An, J., Chong, N.Y., Strand, M., Kim, J.H. (eds) Intelligent Autonomous Systems 18. IAS 2023. Lecture Notes in Networks and Systems, vol 794. Springer, Cham
10.1007/978-3-031-44981-9_21
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Human transports in hospitals are labor-intensive and primarily performed in beds to save time. This transfer method does not promote the mobility or autonomy of the patient. To relieve the caregivers from this time-consuming task, a mobile robot is developed to autonomously transport humans around the hospital. It provides different transfer modes including walking and sitting in a wheelchair. The problem that this paper focuses on is to detect emergencies and ensure the well-being of the patient during the transport. For this purpose, the patient is tracked and monitored with a camera system. OpenPose is used for Human Pose Estimation and a trained classifier for emergency detection. We collected and published a dataset of 18,000 images in lab and hospital environments. It differs from related work because we have a moving robot with different transfer modes in a highly dynamic environment with multiple people in the scene using only RGB-D data. To improve the critical recall metric, we apply threshold moving and a time delay. We compare different models with an AutoML approach. This paper shows that emergencies while walking are best detected by a SVM with a recall of 95.8% on single frames. In the case of sitting transport, the best model achieves a recall of 62.2%. The contribution is to establish a baseline on this new dataset and to provide a proof of concept for the human emergency detection in this use case.
[ { "version": "v1", "created": "Mon, 17 Jul 2023 09:54:52 GMT" } ]
2025-03-12T00:00:00
[ [ "Zachariae", "Andreas", "" ], [ "Widera", "Julia", "" ], [ "Plahl", "Frederik", "" ], [ "Hein", "Björn", "" ], [ "Wurll", "Christian", "" ] ]
TITLE: Human Emergency Detection during Autonomous Hospital Transports ABSTRACT: Human transports in hospitals are labor-intensive and primarily performed in beds to save time. This transfer method does not promote the mobility or autonomy of the patient. To relieve the caregivers from this time-consuming task, a mobile robot is developed to autonomously transport humans around the hospital. It provides different transfer modes including walking and sitting in a wheelchair. The problem that this paper focuses on is to detect emergencies and ensure the well-being of the patient during the transport. For this purpose, the patient is tracked and monitored with a camera system. OpenPose is used for Human Pose Estimation and a trained classifier for emergency detection. We collected and published a dataset of 18,000 images in lab and hospital environments. It differs from related work because we have a moving robot with different transfer modes in a highly dynamic environment with multiple people in the scene using only RGB-D data. To improve the critical recall metric, we apply threshold moving and a time delay. We compare different models with an AutoML approach. This paper shows that emergencies while walking are best detected by a SVM with a recall of 95.8% on single frames. In the case of sitting transport, the best model achieves a recall of 62.2%. The contribution is to establish a baseline on this new dataset and to provide a proof of concept for the human emergency detection in this use case.
new_dataset
0.958499
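The record above applies threshold moving to raise the critical recall metric of an SVM-based emergency classifier. A self-contained scikit-learn sketch of the idea on synthetic toy data (not the published 18,000-image dataset) is:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Lower the decision threshold on the SVM's predicted probabilities so that fewer
# true emergencies are missed, trading precision for recall.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

threshold = 0.2                                 # moved below the default 0.5
pred = (proba >= threshold).astype(int)
recall = (pred[y_te == 1] == 1).mean()
print(f"recall at threshold {threshold}: {recall:.3f}")
```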
2308.11912
Soonwoo Kwon
Soonwoo Kwon, Sojung Kim, Seunghyun Lee, Jin-Young Kim, Suyeong An, and Kyuseok Kim
Addressing Selection Bias in Computerized Adaptive Testing: A User-Wise Aggregate Influence Function Approach
CIKM 2023
null
null
null
cs.LG cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computerized Adaptive Testing (CAT) is a widely used, efficient test mode that adapts to the examinee's proficiency level in the test domain. CAT requires pre-trained item profiles, since it iteratively assesses the student in real time based on the registered items' profiles and selects the next item to administer using candidate items' profiles. However, obtaining such item profiles is a costly process that involves gathering a large, dense item-response dataset and then training a diagnostic model on the collected data. In this paper, we explore the possibility of leveraging response data collected in the CAT service. We first show that this poses a unique challenge due to the inherent selection bias introduced by CAT, i.e., more proficient students will receive harder questions. Indeed, when naively training the diagnostic model using CAT response data, we observe that item profiles deviate significantly from the ground-truth. To tackle the selection bias issue, we propose the user-wise aggregate influence function method. Our intuition is to filter out users whose response data is heavily biased in an aggregate manner, as judged by how much perturbation the added data will introduce during parameter estimation. This way, we may enhance the performance of CAT while introducing minimal bias to the item profiles. We provide extensive experiments to demonstrate the superiority of our proposed method based on three public datasets and one dataset that contains real-world CAT response data.
[ { "version": "v1", "created": "Wed, 23 Aug 2023 04:57:21 GMT" } ]
2025-03-12T00:00:00
[ [ "Kwon", "Soonwoo", "" ], [ "Kim", "Sojung", "" ], [ "Lee", "Seunghyun", "" ], [ "Kim", "Jin-Young", "" ], [ "An", "Suyeong", "" ], [ "Kim", "Kyuseok", "" ] ]
TITLE: Addressing Selection Bias in Computerized Adaptive Testing: A User-Wise Aggregate Influence Function Approach ABSTRACT: Computerized Adaptive Testing (CAT) is a widely used, efficient test mode that adapts to the examinee's proficiency level in the test domain. CAT requires pre-trained item profiles, since it iteratively assesses the student in real time based on the registered items' profiles and selects the next item to administer using candidate items' profiles. However, obtaining such item profiles is a costly process that involves gathering a large, dense item-response dataset and then training a diagnostic model on the collected data. In this paper, we explore the possibility of leveraging response data collected in the CAT service. We first show that this poses a unique challenge due to the inherent selection bias introduced by CAT, i.e., more proficient students will receive harder questions. Indeed, when naively training the diagnostic model using CAT response data, we observe that item profiles deviate significantly from the ground-truth. To tackle the selection bias issue, we propose the user-wise aggregate influence function method. Our intuition is to filter out users whose response data is heavily biased in an aggregate manner, as judged by how much perturbation the added data will introduce during parameter estimation. This way, we may enhance the performance of CAT while introducing minimal bias to the item profiles. We provide extensive experiments to demonstrate the superiority of our proposed method based on three public datasets and one dataset that contains real-world CAT response data.
no_new_dataset
0.913599
2309.12862
Yuwei Sun
Yuwei Sun, Hideya Ochiai, Zhirong Wu, Stephen Lin, Ryota Kanai
Associative Transformer
Accepted for CVPR 2025
null
null
null
cs.LG cs.CV cs.NE
http://creativecommons.org/licenses/by/4.0/
Emerging from the pairwise attention in conventional Transformers, there is a growing interest in sparse attention mechanisms that align more closely with localized, contextual learning in the biological brain. Existing studies such as the Coordination method employ iterative cross-attention mechanisms with a bottleneck to enable the sparse association of inputs. However, these methods are parameter-inefficient and fail in more complex relational reasoning tasks. To this end, we propose Associative Transformer (AiT) to enhance the association among sparsely attended input tokens, improving parameter efficiency and performance in various vision tasks such as classification and relational reasoning. AiT leverages a learnable explicit memory comprising specialized priors that guide bottleneck attentions to facilitate the extraction of diverse localized tokens. Moreover, AiT employs an associative memory-based token reconstruction using a Hopfield energy function. The extensive empirical experiments demonstrate that AiT requires significantly fewer parameters and attention layers while outperforming a broad range of sparse Transformer models. Additionally, AiT outperforms the SOTA sparse Transformer models including the Coordination method on the Sort-of-CLEVR dataset.
[ { "version": "v1", "created": "Fri, 22 Sep 2023 13:37:10 GMT" }, { "version": "v2", "created": "Thu, 23 Nov 2023 07:26:55 GMT" }, { "version": "v3", "created": "Wed, 31 Jan 2024 01:05:14 GMT" }, { "version": "v4", "created": "Tue, 11 Mar 2025 09:04:22 GMT" } ]
2025-03-12T00:00:00
[ [ "Sun", "Yuwei", "" ], [ "Ochiai", "Hideya", "" ], [ "Wu", "Zhirong", "" ], [ "Lin", "Stephen", "" ], [ "Kanai", "Ryota", "" ] ]
TITLE: Associative Transformer ABSTRACT: Emerging from the pairwise attention in conventional Transformers, there is a growing interest in sparse attention mechanisms that align more closely with localized, contextual learning in the biological brain. Existing studies such as the Coordination method employ iterative cross-attention mechanisms with a bottleneck to enable the sparse association of inputs. However, these methods are parameter-inefficient and fail in more complex relational reasoning tasks. To this end, we propose Associative Transformer (AiT) to enhance the association among sparsely attended input tokens, improving parameter efficiency and performance in various vision tasks such as classification and relational reasoning. AiT leverages a learnable explicit memory comprising specialized priors that guide bottleneck attentions to facilitate the extraction of diverse localized tokens. Moreover, AiT employs an associative memory-based token reconstruction using a Hopfield energy function. The extensive empirical experiments demonstrate that AiT requires significantly fewer parameters and attention layers while outperforming a broad range of sparse Transformer models. Additionally, AiT outperforms the SOTA sparse Transformer models including the Coordination method on the Sort-of-CLEVR dataset.
no_new_dataset
0.945701
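The record above reconstructs tokens with an associative memory governed by a Hopfield energy function. A generic PyTorch sketch of one modern-Hopfield-style retrieval step (the mechanism in spirit, not the AiT implementation) is:

```python
import torch
import torch.nn.functional as F

def hopfield_retrieve(queries, memory, beta=8.0):
    """One step of modern-Hopfield-style associative retrieval: each query is
    replaced by a softmax-weighted combination of stored patterns."""
    scores = beta * queries @ memory.t()        # (n_queries, n_patterns)
    return F.softmax(scores, dim=-1) @ memory   # retrieved patterns

memory = torch.randn(64, 128)                          # 64 stored patterns of dim 128
tokens = memory[:4] + 0.1 * torch.randn(4, 128)        # noisy versions of stored patterns
retrieved = hopfield_retrieve(tokens, memory)
print(F.cosine_similarity(retrieved, memory[:4]).mean())  # close to 1: patterns recovered
```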
2310.03617
Yihong Tang
Yihong Tang, Zhan Zhao, Weipeng Deng, Shuyu Lei, Yuebing Liang, Zhenliang Ma
RouteKG: A knowledge graph-based framework for route prediction on road networks
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Short-term route prediction on road networks allows us to anticipate the future trajectories of road users, enabling various applications ranging from dynamic traffic control to personalized navigation. Despite recent advances in this area, existing methods focus primarily on learning sequential transition patterns, neglecting the inherent spatial relations in road networks that can affect human routing decisions. To fill this gap, this paper introduces RouteKG, a novel Knowledge Graph-based framework for route prediction. Specifically, we construct a Knowledge Graph on the road network to encode spatial relations, especially moving directions that are crucial for human navigation. Moreover, an n-ary tree-based algorithm is introduced to efficiently generate top-K routes in a batch mode, enhancing computational efficiency. To further optimize the prediction performance, a rank refinement module is incorporated to fine-tune the candidate route rankings. The model performance is evaluated using two real-world vehicle trajectory datasets from two Chinese cities under various practical scenarios. The results demonstrate a significant improvement in accuracy over baseline methods. We further validate the proposed method by utilizing the pre-trained model as a simulator for real-time traffic flow estimation at the link level. RouteKG holds great potential for transforming vehicle navigation, traffic management, and a variety of intelligent transportation tasks, playing a crucial role in advancing the core foundation of intelligent and connected urban systems.
[ { "version": "v1", "created": "Tue, 3 Oct 2023 10:40:35 GMT" }, { "version": "v2", "created": "Sat, 24 Feb 2024 20:03:32 GMT" }, { "version": "v3", "created": "Mon, 10 Mar 2025 21:11:50 GMT" } ]
2025-03-12T00:00:00
[ [ "Tang", "Yihong", "" ], [ "Zhao", "Zhan", "" ], [ "Deng", "Weipeng", "" ], [ "Lei", "Shuyu", "" ], [ "Liang", "Yuebing", "" ], [ "Ma", "Zhenliang", "" ] ]
TITLE: RouteKG: A knowledge graph-based framework for route prediction on road networks ABSTRACT: Short-term route prediction on road networks allows us to anticipate the future trajectories of road users, enabling various applications ranging from dynamic traffic control to personalized navigation. Despite recent advances in this area, existing methods focus primarily on learning sequential transition patterns, neglecting the inherent spatial relations in road networks that can affect human routing decisions. To fill this gap, this paper introduces RouteKG, a novel Knowledge Graph-based framework for route prediction. Specifically, we construct a Knowledge Graph on the road network to encode spatial relations, especially moving directions that are crucial for human navigation. Moreover, an n-ary tree-based algorithm is introduced to efficiently generate top-K routes in a batch mode, enhancing computational efficiency. To further optimize the prediction performance, a rank refinement module is incorporated to fine-tune the candidate route rankings. The model performance is evaluated using two real-world vehicle trajectory datasets from two Chinese cities under various practical scenarios. The results demonstrate a significant improvement in accuracy over baseline methods. We further validate the proposed method by utilizing the pre-trained model as a simulator for real-time traffic flow estimation at the link level. RouteKG holds great potential for transforming vehicle navigation, traffic management, and a variety of intelligent transportation tasks, playing a crucial role in advancing the core foundation of intelligent and connected urban systems.
no_new_dataset
0.942348
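The record above generates top-K candidate routes in batch on a road network. A plain best-first enumeration of the k lowest-cost simple routes, offered only as a generic stand-in for the paper's knowledge-graph-guided n-ary tree algorithm, is:

```python
import heapq

def top_k_routes(graph, origin, destination, k=3):
    """Enumerate the k lowest-cost simple routes on a road graph given as
    {node: [(neighbor, cost), ...]} using best-first search over partial paths."""
    heap = [(0.0, [origin])]
    routes = []
    while heap and len(routes) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == destination:
            routes.append((cost, path))
            continue
        for nbr, w in graph.get(node, []):
            if nbr not in path:                 # keep routes simple (no loops)
                heapq.heappush(heap, (cost + w, path + [nbr]))
    return routes

road = {"A": [("B", 1.0), ("C", 2.5)], "B": [("C", 1.0), ("D", 3.0)],
        "C": [("D", 1.0)], "D": []}
print(top_k_routes(road, "A", "D", k=3))
# [(3.0, ['A', 'B', 'C', 'D']), (3.5, ['A', 'C', 'D']), (4.0, ['A', 'B', 'D'])]
```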
2310.10982
Sepehr Samavi
Sepehr Samavi, James R. Han, Florian Shkurti, Angela P. Schoellig
SICNav: Safe and Interactive Crowd Navigation using Model Predictive Control and Bilevel Optimization
Published in the IEEE Transactions on Robotics (T-RO)
Published 22 October 2024
10.1109/TRO.2024.3484634
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robots need to predict and react to human motions to navigate through a crowd without collisions. Many existing methods decouple prediction from planning, which does not account for the interaction between robot and human motions and can lead to the robot getting stuck. We propose SICNav, a Model Predictive Control (MPC) method that jointly solves for robot motion and predicted crowd motion in closed-loop. We model each human in the crowd to be following an Optimal Reciprocal Collision Avoidance (ORCA) scheme and embed that model as a constraint in the robot's local planner, resulting in a bilevel nonlinear MPC optimization problem. We use a KKT-reformulation to cast the bilevel problem as a single level and use a nonlinear solver to optimize. Our MPC method can influence pedestrian motion while explicitly satisfying safety constraints in a single-robot multi-human environment. We analyze the performance of SICNav in two simulation environments and indoor experiments with a real robot to demonstrate safe robot motion that can influence the surrounding humans. We also validate the trajectory forecasting performance of ORCA on a human trajectory dataset.
[ { "version": "v1", "created": "Tue, 17 Oct 2023 04:07:08 GMT" }, { "version": "v2", "created": "Mon, 27 May 2024 22:06:39 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 10:09:54 GMT" } ]
2025-03-12T00:00:00
[ [ "Samavi", "Sepehr", "" ], [ "Han", "James R.", "" ], [ "Shkurti", "Florian", "" ], [ "Schoellig", "Angela P.", "" ] ]
TITLE: SICNav: Safe and Interactive Crowd Navigation using Model Predictive Control and Bilevel Optimization ABSTRACT: Robots need to predict and react to human motions to navigate through a crowd without collisions. Many existing methods decouple prediction from planning, which does not account for the interaction between robot and human motions and can lead to the robot getting stuck. We propose SICNav, a Model Predictive Control (MPC) method that jointly solves for robot motion and predicted crowd motion in closed-loop. We model each human in the crowd to be following an Optimal Reciprocal Collision Avoidance (ORCA) scheme and embed that model as a constraint in the robot's local planner, resulting in a bilevel nonlinear MPC optimization problem. We use a KKT-reformulation to cast the bilevel problem as a single level and use a nonlinear solver to optimize. Our MPC method can influence pedestrian motion while explicitly satisfying safety constraints in a single-robot multi-human environment. We analyze the performance of SICNav in two simulation environments and indoor experiments with a real robot to demonstrate safe robot motion that can influence the surrounding humans. We also validate the trajectory forecasting performance of ORCA on a human trajectory dataset.
no_new_dataset
0.942771
2311.09350
Wei-Di Chang
Wei-Di Chang, Francois Hogan, Scott Fujimoto, David Meger, and Gregory Dudek
Generalizable Imitation Learning Through Pre-Trained Representations
ICRA 2025 Version
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we leverage self-supervised vision transformer models and their emergent semantic abilities to improve the generalization abilities of imitation learning policies. We introduce DVK, an imitation learning algorithm that leverages rich pre-trained Visual Transformer patch-level embeddings to obtain better generalization when learning through demonstrations. Our learner sees the world by clustering appearance features into groups associated with semantic concepts, forming stable keypoints that generalize across a wide range of appearance variations and object types. We demonstrate how this representation enables generalized behaviour by evaluating imitation learning across a diverse dataset of object manipulation tasks. To facilitate further study of generalization in Imitation Learning, all of our code for the method and evaluation, as well as the dataset, is made available.
[ { "version": "v1", "created": "Wed, 15 Nov 2023 20:15:51 GMT" }, { "version": "v2", "created": "Mon, 10 Mar 2025 18:57:28 GMT" } ]
2025-03-12T00:00:00
[ [ "Chang", "Wei-Di", "" ], [ "Hogan", "Francois", "" ], [ "Fujimoto", "Scott", "" ], [ "Meger", "David", "" ], [ "Dudek", "Gregory", "" ] ]
TITLE: Generalizable Imitation Learning Through Pre-Trained Representations ABSTRACT: In this paper, we leverage self-supervised vision transformer models and their emergent semantic abilities to improve the generalization abilities of imitation learning policies. We introduce DVK, an imitation learning algorithm that leverages rich pre-trained Visual Transformer patch-level embeddings to obtain better generalization when learning through demonstrations. Our learner sees the world by clustering appearance features into groups associated with semantic concepts, forming stable keypoints that generalize across a wide range of appearance variations and object types. We demonstrate how this representation enables generalized behaviour by evaluating imitation learning across a diverse dataset of object manipulation tasks. To facilitate further study of generalization in Imitation Learning, all of our code for the method and evaluation, as well as the dataset, is made available.
no_new_dataset
0.940953
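The record above forms stable keypoints by clustering pre-trained ViT patch embeddings into semantic groups. A hedged sketch with k-means over stand-in patch features (real DINO/ViT embeddings would replace the random features, and the cluster count is an assumption) is:

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster patch-level features into semantic groups and take each cluster's mean
# image location as a candidate keypoint.
h = w = 14                                  # 14x14 patch grid of a 224x224 image
patch_feats = np.random.randn(h * w, 384)   # stand-in for per-patch ViT embeddings
ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(patch_feats)
keypoints = np.array([coords[labels == c].mean(axis=0) for c in range(8)])
print(keypoints)                            # one (row, col) keypoint per cluster
```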
2311.10248
Kan Yang
Sheldon C. Ebron, Meiying Zhang and Kan Yang
Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version)
Accepted to ACISP 2025. This is the full version
null
null
null
cs.LG cs.AI cs.CR cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Federated Learning (FL) enables multiple parties to train machine learning models collaboratively without sharing the raw training data. However, the federated nature of FL enables malicious clients to influence a trained model by injecting erroneous model updates via Byzantine or backdoor attacks. To detect malicious model updates, a typical approach is to measure the distance between each model update and a \textit{ground-truth model update}. To find such \textit{ground-truth model updates}, existing defenses either require a benign root dataset on the server (e.g., FLTrust) or simply use trimmed mean or median as the threshold for clipping (e.g., FLAME). However, such benign root datasets are impractical, and the trimmed mean or median may also eliminate contributions from underrepresented datasets. In this paper, we propose a generic solution, namely FedTruth, to defend against model poisoning attacks in FL, where the \textit{ground-truth model update} (i.e., the global model update) will be estimated among all the model updates with dynamic aggregation weights. Specifically, FedTruth does not have specific assumptions on the benign or malicious data distribution or access to a benign root dataset. Moreover, FedTruth considers the potential contributions from all benign clients. Our empirical results show that FedTruth can reduce the impacts of poisoned model updates against both Byzantine and backdoor attacks, and is also efficient in large-scale FL systems.
[ { "version": "v1", "created": "Fri, 17 Nov 2023 00:39:59 GMT" }, { "version": "v2", "created": "Mon, 10 Mar 2025 18:37:26 GMT" } ]
2025-03-12T00:00:00
[ [ "Ebron", "Sheldon C.", "" ], [ "Zhang", "Meiying", "" ], [ "Yang", "Kan", "" ] ]
TITLE: Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version) ABSTRACT: Federated Learning (FL) enables multiple parties to train machine learning models collaboratively without sharing the raw training data. However, the federated nature of FL enables malicious clients to influence a trained model by injecting error model updates via Byzantine or backdoor attacks. To detect malicious model updates, a typical approach is to measure the distance between each model update and a \textit{ground-truth model update}. To find such \textit{ground-truth model updates}, existing defenses either require a benign root dataset on the server (e.g., FLTrust) or simply use trimmed mean or median as the threshold for clipping (e.g., FLAME). However, such benign root datasets are impractical, and the trimmed mean or median may also eliminate contributions from these underrepresented datasets. In this paper, we propose a generic solution, namely FedTruth, to defend against model poisoning attacks in FL, where the \textit{ground-truth model update} (i.e., the global model update) will be estimated among all the model updates with dynamic aggregation weights. Specifically, FedTruth does not have specific assumptions on the benign or malicious data distribution or access to a benign root dataset. Moreover, FedTruth considers the potential contributions from all benign clients. Our empirical results show that FedTruth can reduce the impacts of poisoned model updates against both Byzantine and backdoor attacks, and is also efficient in large-scale FL systems.
no_new_dataset
0.944995
2311.14282
Zheng Chen
Zheng Chen, Yulun Zhang, Jinjin Gu, Xin Yuan, Linghe Kong, Guihai Chen, Xiaokang Yang
Image Super-Resolution with Text Prompt Diffusion
Code is available at https://github.com/zhengchen1999/PromptSR
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image super-resolution (SR) methods typically model degradation to improve reconstruction accuracy in complex and unknown degradation scenarios. However, extracting degradation information from low-resolution images is challenging, which limits the model performance. To boost image SR performance, one feasible approach is to introduce additional priors. Inspired by advancements in multi-modal methods and text prompt image processing, we introduce text prompts to image SR to provide degradation priors. Specifically, we first design a text-image generation pipeline to integrate text into the SR dataset through the text degradation representation and degradation model. By adopting a discrete design, the text representation is flexible and user-friendly. Meanwhile, we propose the PromptSR to realize the text prompt SR. The PromptSR leverages the latest multi-modal large language model (MLLM) to generate prompts from low-resolution images. It also utilizes the pre-trained language model (e.g., T5 or CLIP) to enhance text comprehension. We train the PromptSR on the text-image dataset. Extensive experiments indicate that introducing text prompts into SR yields impressive results on both synthetic and real-world images. Code: https://github.com/zhengchen1999/PromptSR.
[ { "version": "v1", "created": "Fri, 24 Nov 2023 05:11:35 GMT" }, { "version": "v2", "created": "Tue, 12 Mar 2024 12:14:51 GMT" }, { "version": "v3", "created": "Tue, 8 Oct 2024 10:30:00 GMT" }, { "version": "v4", "created": "Thu, 10 Oct 2024 05:47:46 GMT" }, { "version": "v5", "created": "Tue, 11 Mar 2025 02:20:58 GMT" } ]
2025-03-12T00:00:00
[ [ "Chen", "Zheng", "" ], [ "Zhang", "Yulun", "" ], [ "Gu", "Jinjin", "" ], [ "Yuan", "Xin", "" ], [ "Kong", "Linghe", "" ], [ "Chen", "Guihai", "" ], [ "Yang", "Xiaokang", "" ] ]
TITLE: Image Super-Resolution with Text Prompt Diffusion ABSTRACT: Image super-resolution (SR) methods typically model degradation to improve reconstruction accuracy in complex and unknown degradation scenarios. However, extracting degradation information from low-resolution images is challenging, which limits the model performance. To boost image SR performance, one feasible approach is to introduce additional priors. Inspired by advancements in multi-modal methods and text prompt image processing, we introduce text prompts to image SR to provide degradation priors. Specifically, we first design a text-image generation pipeline to integrate text into the SR dataset through the text degradation representation and degradation model. By adopting a discrete design, the text representation is flexible and user-friendly. Meanwhile, we propose the PromptSR to realize the text prompt SR. The PromptSR leverages the latest multi-modal large language model (MLLM) to generate prompts from low-resolution images. It also utilizes the pre-trained language model (e.g., T5 or CLIP) to enhance text comprehension. We train the PromptSR on the text-image dataset. Extensive experiments indicate that introducing text prompts into SR yields impressive results on both synthetic and real-world images. Code: https://github.com/zhengchen1999/PromptSR.
no_new_dataset
0.950365
2312.07896
Joshua Groen
Joshua Groen, Zixian Yang, Divyadharshini Muruganandham, Mauro Belgiovine, Lei Ying, Kaushik Chowdhury
From Classification to Optimization: Slicing and Resource Management with TRACTOR
Journal version of TRACTOR: Traffic Analysis and Classification Tool for Open RAN
null
null
null
eess.SY cs.NI cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
5G and beyond networks promise advancements in bandwidth, latency, and connectivity. The Open Radio Access Network (O-RAN) framework enhances flexibility through network slicing and closed-loop RAN control. Central to this evolution is integrating machine learning (ML) for dynamic network control. This paper presents a framework to optimize O-RAN operation. First, we build and share a robust O-RAN dataset from real-world traffic captured across diverse locations and mobility scenarios, replicated within a full-stack srsRAN-based O-RAN system using the Colosseum RF emulator. This dataset supports ML training and deployment. We then introduce a traffic classification approach leveraging various ML models, demonstrating rapid training, testing, and refinement to improve accuracy. With up to 99% offline accuracy and 92% online accuracy for specific slices, our framework adapts efficiently to different models and network conditions. Finally, we present a physical resource block (PRB) assignment optimization strategy using reinforcement learning to refine resource allocation. Our learned policy achieves a mean performance score (0.631), surpassing a manually configured expert policy (0.609) and a random baseline (0.588), demonstrating improved PRB utilization. More importantly, our approach exhibits lower variability, with the Coefficient of Variation (CV) reduced by up to an order of magnitude in three out of four cases, ensuring more consistent performance. Our contributions, including open-source tools and datasets, accelerate O-RAN and ML-driven network control research.
[ { "version": "v1", "created": "Wed, 13 Dec 2023 04:53:12 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 15:31:55 GMT" } ]
2025-03-12T00:00:00
[ [ "Groen", "Joshua", "" ], [ "Yang", "Zixian", "" ], [ "Muruganandham", "Divyadharshini", "" ], [ "Belgiovine", "Mauro", "" ], [ "Ying", "Lei", "" ], [ "Chowdhury", "Kaushik", "" ] ]
TITLE: From Classification to Optimization: Slicing and Resource Management with TRACTOR ABSTRACT: 5G and beyond networks promise advancements in bandwidth, latency, and connectivity. The Open Radio Access Network (O-RAN) framework enhances flexibility through network slicing and closed-loop RAN control. Central to this evolution is integrating machine learning (ML) for dynamic network control. This paper presents a framework to optimize O-RAN operation. First, we build and share a robust O-RAN dataset from real-world traffic captured across diverse locations and mobility scenarios, replicated within a full-stack srsRAN-based O-RAN system using the Colosseum RF emulator. This dataset supports ML training and deployment. We then introduce a traffic classification approach leveraging various ML models, demonstrating rapid training, testing, and refinement to improve accuracy. With up to 99% offline accuracy and 92% online accuracy for specific slices, our framework adapts efficiently to different models and network conditions. Finally, we present a physical resource block (PRB) assignment optimization strategy using reinforcement learning to refine resource allocation. Our learned policy achieves a mean performance score (0.631), surpassing a manually configured expert policy (0.609) and a random baseline (0.588), demonstrating improved PRB utilization. More importantly, our approach exhibits lower variability, with the Coefficient of Variation (CV) reduced by up to an order of magnitude in three out of four cases, ensuring more consistent performance. Our contributions, including open-source tools and datasets, accelerate O-RAN and ML-driven network control research.
no_new_dataset
0.932269
2312.08177
Peilin Cai
Peilin Cai
Advanced Image Segmentation Techniques for Neural Activity Detection via C-fos Immediate Early Gene Expression
The paper has major omissions, a factual error in the description of UNet in the illustration on the second page, and extra printing on the last page, which is terrible. The original document is also lost and cannot be modified
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the application of advanced image segmentation techniques to analyze C-fos immediate early gene expression, a crucial marker for neural activity. Due to the complexity and high variability of neural circuits, accurate segmentation of C-fos images is paramount for the development of new insights into neural function. Amidst this backdrop, this research aims to improve accuracy and minimize manual intervention in C-fos image segmentation by leveraging the capabilities of CNNs and the Unet model. We describe the development of a novel workflow for the segmentation process involving Convolutional Neural Networks (CNNs) and the Unet model, demonstrating their efficiency in various image segmentation tasks. Our workflow incorporates pre-processing steps such as cropping, image feature extraction, and clustering for the training dataset selection. We used an AutoEncoder model to extract features and implement constrained clustering to identify similarities and differences in image types. Additionally, we utilized manual and automatic labeling approaches to enhance the performance of our model. We demonstrated the effectiveness of our method in distinguishing areas with significant C-fos expression from normal tissue areas. Lastly, we implemented a modified Unet network for the detection of C-fos expressions. This research contributes to the development of more efficient and automated image segmentation methods, advancing the understanding of neural function in neuroscience research.
[ { "version": "v1", "created": "Wed, 13 Dec 2023 14:36:16 GMT" }, { "version": "v2", "created": "Mon, 10 Mar 2025 23:42:49 GMT" } ]
2025-03-12T00:00:00
[ [ "Cai", "Peilin", "" ] ]
TITLE: Advanced Image Segmentation Techniques for Neural Activity Detection via C-fos Immediate Early Gene Expression ABSTRACT: This paper investigates the application of advanced image segmentation techniques to analyze C-fos immediate early gene expression, a crucial marker for neural activity. Due to the complexity and high variability of neural circuits, accurate segmentation of C-fos images is paramount for the development of new insights into neural function. Amidst this backdrop, this research aims to improve accuracy and minimize manual intervention in C-fos image segmentation by leveraging the capabilities of CNNs and the Unet model. We describe the development of a novel workflow for the segmentation process involving Convolutional Neural Networks (CNNs) and the Unet model, demonstrating their efficiency in various image segmentation tasks. Our workflow incorporates pre-processing steps such as cropping, image feature extraction, and clustering for the training dataset selection. We used an AutoEncoder model to extract features and implement constrained clustering to identify similarities and differences in image types. Additionally, we utilized manual and automatic labeling approaches to enhance the performance of our model. We demonstrated the effectiveness of our method in distinguishing areas with significant C-fos expression from normal tissue areas. Lastly, we implemented a modified Unet network for the detection of C-fos expressions. This research contributes to the development of more efficient and automated image segmentation methods, advancing the understanding of neural function in neuroscience research.
no_new_dataset
0.952926
2401.00740
Zeke Zexi Hu
Zeke Zexi Hu, Xiaoming Chen, Vera Yuk Ying Chung, Yiran Shen
Beyond Subspace Isolation: Many-to-Many Transformer for Light Field Image Super-resolution
Accepted by IEEE Transactions on Multimedia
null
10.1109/TMM.2024.3521795
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effective extraction of spatial-angular features plays a crucial role in light field image super-resolution (LFSR) tasks, and the introduction of convolution and Transformers leads to significant improvement in this area. Nevertheless, due to the large 4D data volume of light field images, many existing methods opted to decompose the data into a number of lower-dimensional subspaces and perform Transformers in each sub-space individually. As a side effect, these methods inadvertently restrict the self-attention mechanisms to a One-to-One scheme accessing only a limited subset of LF data, explicitly preventing comprehensive optimization on all spatial and angular cues. In this paper, we identify this limitation as subspace isolation and introduce a novel Many-to-Many Transformer (M2MT) to address it. M2MT aggregates angular information in the spatial subspace before performing the self-attention mechanism. It enables complete access to all information across all sub-aperture images (SAIs) in a light field image. Consequently, M2MT is enabled to comprehensively capture long-range correlation dependencies. With M2MT as the pivotal component, we develop a simple yet effective M2MT network for LFSR. Our experimental results demonstrate that M2MT achieves state-of-the-art performance across various public datasets. We further conduct in-depth analysis using local attribution maps (LAM) to obtain visual interpretability, and the results validate that M2MT is empowered with a truly non-local context in both spatial and angular subspaces to mitigate subspace isolation and acquire effective spatial-angular representation.
[ { "version": "v1", "created": "Mon, 1 Jan 2024 12:48:23 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 12:54:24 GMT" } ]
2025-03-12T00:00:00
[ [ "Hu", "Zeke Zexi", "" ], [ "Chen", "Xiaoming", "" ], [ "Chung", "Vera Yuk Ying", "" ], [ "Shen", "Yiran", "" ] ]
TITLE: Beyond Subspace Isolation: Many-to-Many Transformer for Light Field Image Super-resolution ABSTRACT: The effective extraction of spatial-angular features plays a crucial role in light field image super-resolution (LFSR) tasks, and the introduction of convolution and Transformers leads to significant improvement in this area. Nevertheless, due to the large 4D data volume of light field images, many existing methods opted to decompose the data into a number of lower-dimensional subspaces and perform Transformers in each sub-space individually. As a side effect, these methods inadvertently restrict the self-attention mechanisms to a One-to-One scheme accessing only a limited subset of LF data, explicitly preventing comprehensive optimization on all spatial and angular cues. In this paper, we identify this limitation as subspace isolation and introduce a novel Many-to-Many Transformer (M2MT) to address it. M2MT aggregates angular information in the spatial subspace before performing the self-attention mechanism. It enables complete access to all information across all sub-aperture images (SAIs) in a light field image. Consequently, M2MT is enabled to comprehensively capture long-range correlation dependencies. With M2MT as the pivotal component, we develop a simple yet effective M2MT network for LFSR. Our experimental results demonstrate that M2MT achieves state-of-the-art performance across various public datasets. We further conduct in-depth analysis using local attribution maps (LAM) to obtain visual interpretability, and the results validate that M2MT is empowered with a truly non-local context in both spatial and angular subspaces to mitigate subspace isolation and acquire effective spatial-angular representation.
no_new_dataset
0.950457
2401.01759
Lin Bai
Lin Bai, Caiyan Jia, Ziying Song, and Chaoqun Cui
VGA: Vision and Graph Fused Attention Network for Rumor Detection
null
null
10.1145/3722225
null
cs.SI cs.CL cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of social media, rumors have spread widely on social media platforms, causing great harm to society. Besides textual information, many rumors also use manipulated images or conceal textual information within images to deceive people and avoid detection, making multimodal rumor detection a critical problem. The majority of multimodal rumor detection methods concentrate on extracting features of source claims and their corresponding images, while ignoring the comments on rumors and their propagation structures. These comments and structures reflect the wisdom of crowds and have proved crucial for debunking rumors. Moreover, these methods usually extract visual features only in a basic manner and seldom consider tampering or the textual information hidden in images. Therefore, in this study, we propose a novel Vision and Graph Fused Attention Network (VGA) for rumor detection that utilizes propagation structures among posts to obtain crowd opinions and further explores visual tampering features, as well as the textual information hidden in images. We conduct extensive experiments on three datasets, demonstrating that VGA can effectively detect multimodal rumors and significantly outperform state-of-the-art methods.
[ { "version": "v1", "created": "Wed, 3 Jan 2024 14:24:02 GMT" } ]
2025-03-12T00:00:00
[ [ "Bai", "Lin", "" ], [ "Jia", "Caiyan", "" ], [ "Song", "Ziying", "" ], [ "Cui", "Chaoqun", "" ] ]
TITLE: VGA: Vision and Graph Fused Attention Network for Rumor Detection ABSTRACT: With the development of social media, rumors have spread widely on social media platforms, causing great harm to society. Besides textual information, many rumors also use manipulated images or conceal textual information within images to deceive people and avoid detection, making multimodal rumor detection a critical problem. The majority of multimodal rumor detection methods concentrate on extracting features of source claims and their corresponding images, while ignoring the comments on rumors and their propagation structures. These comments and structures reflect the wisdom of crowds and have proved crucial for debunking rumors. Moreover, these methods usually extract visual features only in a basic manner and seldom consider tampering or the textual information hidden in images. Therefore, in this study, we propose a novel Vision and Graph Fused Attention Network (VGA) for rumor detection that utilizes propagation structures among posts to obtain crowd opinions and further explores visual tampering features, as well as the textual information hidden in images. We conduct extensive experiments on three datasets, demonstrating that VGA can effectively detect multimodal rumors and significantly outperform state-of-the-art methods.
no_new_dataset
0.94743
2402.01974
LIanhao Yin
Lianhao Yin, Yutong Ban, Jennifer Eckhoff, Ozanan Meireles, Daniela Rus, Guy Rosman
Hypergraph-Transformer (HGT) for Interactive Event Prediction in Laparoscopic and Robotic Surgery
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding and anticipating intraoperative events and actions is critical for intraoperative assistance and decision-making during minimally invasive surgery. Automated prediction of events, actions, and the following consequences is addressed through various computational approaches with the objective of augmenting surgeons' perception and decision-making capabilities. We propose a predictive neural network that is capable of understanding and predicting critical interactive aspects of surgical workflow from intra-abdominal video, while flexibly leveraging surgical knowledge graphs. The approach incorporates a hypergraph-transformer (HGT) structure that encodes expert knowledge into the network design and predicts the hidden embedding of the graph. We verify our approach on established surgical datasets and applications, including the detection and prediction of action triplets, and the achievement of the Critical View of Safety (CVS). Moreover, we address specific, safety-related tasks, such as predicting the clipping of cystic duct or artery without prior achievement of the CVS. Our results demonstrate the superiority of our approach compared to unstructured alternatives.
[ { "version": "v1", "created": "Sat, 3 Feb 2024 00:58:05 GMT" }, { "version": "v2", "created": "Mon, 10 Mar 2025 21:58:42 GMT" } ]
2025-03-12T00:00:00
[ [ "Yin", "Lianhao", "" ], [ "Ban", "Yutong", "" ], [ "Eckhoff", "Jennifer", "" ], [ "Meireles", "Ozanan", "" ], [ "Rus", "Daniela", "" ], [ "Rosman", "Guy", "" ] ]
TITLE: Hypergraph-Transformer (HGT) for Interactive Event Prediction in Laparoscopic and Robotic Surgery ABSTRACT: Understanding and anticipating intraoperative events and actions is critical for intraoperative assistance and decision-making during minimally invasive surgery. Automated prediction of events, actions, and the following consequences is addressed through various computational approaches with the objective of augmenting surgeons' perception and decision-making capabilities. We propose a predictive neural network that is capable of understanding and predicting critical interactive aspects of surgical workflow from intra-abdominal video, while flexibly leveraging surgical knowledge graphs. The approach incorporates a hypergraph-transformer (HGT) structure that encodes expert knowledge into the network design and predicts the hidden embedding of the graph. We verify our approach on established surgical datasets and applications, including the detection and prediction of action triplets, and the achievement of the Critical View of Safety (CVS). Moreover, we address specific, safety-related tasks, such as predicting the clipping of cystic duct or artery without prior achievement of the CVS. Our results demonstrate the superiority of our approach compared to unstructured alternatives.
no_new_dataset
0.944074
2402.04412
Andrew Stirn
Andrew A. Stirn and David A. Knowles
The VampPrior Mixture Model
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Widely used deep latent variable models (DLVMs), in particular Variational Autoencoders (VAEs), employ overly simplistic priors on the latent space. To achieve strong clustering performance, existing methods that replace the standard normal prior with a Gaussian mixture model (GMM) require defining the number of clusters to be close to the number of expected ground truth classes a-priori and are susceptible to poor initializations. We leverage VampPrior concepts (Tomczak and Welling, 2018) to fit a Bayesian GMM prior, resulting in the VampPrior Mixture Model (VMM), a novel prior for DLVMs. In a VAE, the VMM attains highly competitive clustering performance on benchmark datasets. Integrating the VMM into scVI (Lopez et al., 2018), a popular scRNA-seq integration method, significantly improves its performance and automatically arranges cells into clusters with similar biological characteristics.
[ { "version": "v1", "created": "Tue, 6 Feb 2024 21:18:34 GMT" }, { "version": "v2", "created": "Tue, 5 Mar 2024 03:52:18 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 03:56:24 GMT" } ]
2025-03-12T00:00:00
[ [ "Stirn", "Andrew A.", "" ], [ "Knowles", "David A.", "" ] ]
TITLE: The VampPrior Mixture Model ABSTRACT: Widely used deep latent variable models (DLVMs), in particular Variational Autoencoders (VAEs), employ overly simplistic priors on the latent space. To achieve strong clustering performance, existing methods that replace the standard normal prior with a Gaussian mixture model (GMM) require defining the number of clusters to be close to the number of expected ground truth classes a-priori and are susceptible to poor initializations. We leverage VampPrior concepts (Tomczak and Welling, 2018) to fit a Bayesian GMM prior, resulting in the VampPrior Mixture Model (VMM), a novel prior for DLVMs. In a VAE, the VMM attains highly competitive clustering performance on benchmark datasets. Integrating the VMM into scVI (Lopez et al., 2018), a popular scRNA-seq integration method, significantly improves its performance and automatically arranges cells into clusters with similar biological characteristics.
no_new_dataset
0.948775
2403.09905
Vishnu Sashank Dorbala
Vishnu Sashank Dorbala, Bhrij Patel, Amrit Singh Bedi, Dinesh Manocha
Right Place, Right Time! Dynamizing Topological Graphs for Embodied Navigation
18
null
null
null
cs.RO cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Embodied Navigation tasks often involve constructing topological graphs of a scene during exploration to facilitate high-level planning and decision-making for execution in continuous environments. Prior literature makes the assumption of static graphs with stationary targets, which does not hold in many real-world environments with moving objects. To address this, we present a novel formulation generalizing navigation to dynamic environments by introducing structured object transitions to dynamize static topological graphs called Object Transition Graphs (OTGs). OTGs simulate portable targets following structured routes inspired by human habits. We apply this technique to Matterport3D (MP3D), a popular simulator for evaluating embodied tasks. On these dynamized OTGs, we establish a navigation benchmark by evaluating Oracle-based, Reinforcement Learning, and Large Language Model (LLM)-based approaches on a multi-object finding task. Further, we quantify agent adaptability, and make key inferences such as agents employing learned decision-making strategies generalize better than those relying on privileged oracle knowledge. To the best of our knowledge, ours is the first work to introduce structured temporal dynamism on topological graphs for studying generalist embodied navigation policies. The code and dataset for our OTGs will be made publicly available to foster research on embodied navigation in dynamic scenes.
[ { "version": "v1", "created": "Thu, 14 Mar 2024 22:33:22 GMT" }, { "version": "v2", "created": "Sun, 1 Dec 2024 21:42:37 GMT" }, { "version": "v3", "created": "Mon, 10 Mar 2025 22:26:37 GMT" } ]
2025-03-12T00:00:00
[ [ "Dorbala", "Vishnu Sashank", "" ], [ "Patel", "Bhrij", "" ], [ "Bedi", "Amrit Singh", "" ], [ "Manocha", "Dinesh", "" ] ]
TITLE: Right Place, Right Time! Dynamizing Topological Graphs for Embodied Navigation ABSTRACT: Embodied Navigation tasks often involve constructing topological graphs of a scene during exploration to facilitate high-level planning and decision-making for execution in continuous environments. Prior literature makes the assumption of static graphs with stationary targets, which does not hold in many real-world environments with moving objects. To address this, we present a novel formulation generalizing navigation to dynamic environments by introducing structured object transitions to dynamize static topological graphs called Object Transition Graphs (OTGs). OTGs simulate portable targets following structured routes inspired by human habits. We apply this technique to Matterport3D (MP3D), a popular simulator for evaluating embodied tasks. On these dynamized OTGs, we establish a navigation benchmark by evaluating Oracle-based, Reinforcement Learning, and Large Language Model (LLM)-based approaches on a multi-object finding task. Further, we quantify agent adaptability, and make key inferences such as agents employing learned decision-making strategies generalize better than those relying on privileged oracle knowledge. To the best of our knowledge, ours is the first work to introduce structured temporal dynamism on topological graphs for studying generalist embodied navigation policies. The code and dataset for our OTGs will be made publicly available to foster research on embodied navigation in dynamic scenes.
no_new_dataset
0.871803
2404.02611
Ivan Sevillano-Garc\'ia
Iv\'an Sevillano-Garc\'ia, Juli\'an Luengo and Francisco Herrera
X-SHIELD: Regularization for eXplainable Artificial Intelligence
18 pages, 9 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
As artificial intelligence systems become integral across domains, the demand for explainability grows, giving rise to the field of eXplainable artificial intelligence (XAI). Existing efforts primarily focus on generating and evaluating explanations for black-box models, while a critical gap remains in using these evaluations to directly enhance the models themselves. It is important to consider the potential of this explanation process to improve model quality by feeding back into training as well. XAI may thus be used to improve model performance while boosting its explainability. Under this view, this paper introduces Transformation-Selective Hidden Input Evaluation for Learning Dynamics (T-SHIELD), a regularization family designed to improve model quality by hiding input features, forcing the model to generalize without those features. Within this family, we propose XAI-SHIELD (X-SHIELD), a regularization for explainable artificial intelligence, which uses explanations to select the specific features to hide. In contrast to conventional approaches, X-SHIELD regularization integrates seamlessly into the objective function, enhancing model explainability while also improving performance. Experimental validation on benchmark datasets underscores X-SHIELD's effectiveness in improving performance and overall explainability. The improvement is validated through experiments comparing models with and without the X-SHIELD regularization, with further analysis exploring the rationale behind its design choices. This establishes X-SHIELD regularization as a promising pathway toward reliable artificial intelligence.
[ { "version": "v1", "created": "Wed, 3 Apr 2024 09:56:38 GMT" }, { "version": "v2", "created": "Thu, 14 Nov 2024 22:53:12 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 12:24:01 GMT" } ]
2025-03-12T00:00:00
[ [ "Sevillano-García", "Iván", "" ], [ "Luengo", "Julián", "" ], [ "Herrera", "Francisco", "" ] ]
TITLE: X-SHIELD: Regularization for eXplainable Artificial Intelligence ABSTRACT: As artificial intelligence systems become integral across domains, the demand for explainability grows, giving rise to the field of eXplainable artificial intelligence (XAI). Existing efforts primarily focus on generating and evaluating explanations for black-box models, while a critical gap remains in using these evaluations to directly enhance the models themselves. It is important to consider the potential of this explanation process to improve model quality by feeding back into training as well. XAI may thus be used to improve model performance while boosting its explainability. Under this view, this paper introduces Transformation-Selective Hidden Input Evaluation for Learning Dynamics (T-SHIELD), a regularization family designed to improve model quality by hiding input features, forcing the model to generalize without those features. Within this family, we propose XAI-SHIELD (X-SHIELD), a regularization for explainable artificial intelligence, which uses explanations to select the specific features to hide. In contrast to conventional approaches, X-SHIELD regularization integrates seamlessly into the objective function, enhancing model explainability while also improving performance. Experimental validation on benchmark datasets underscores X-SHIELD's effectiveness in improving performance and overall explainability. The improvement is validated through experiments comparing models with and without the X-SHIELD regularization, with further analysis exploring the rationale behind its design choices. This establishes X-SHIELD regularization as a promising pathway toward reliable artificial intelligence.
no_new_dataset
0.941331
2404.10419
Matthieu Futeral
Matthieu Futeral, Andrea Agostinelli, Marco Tagliasacchi, Neil Zeghidour, Eugene Kharitonov
MAD Speech: Measures of Acoustic Diversity of Speech
NAACL 2025
null
null
null
eess.AS cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative spoken language models produce speech in a wide range of voices, prosody, and recording conditions, seemingly approaching the diversity of natural speech. However, the extent to which generated speech is acoustically diverse remains unclear due to a lack of appropriate metrics. We address this gap by developing lightweight metrics of acoustic diversity, which we collectively refer to as MAD Speech. We focus on measuring five facets of acoustic diversity: voice, gender, emotion, accent, and background noise. We construct the metrics as a composition of specialized, per-facet embedding models and an aggregation function that measures diversity within the embedding space. Next, we build a series of datasets with a priori known diversity preferences for each facet. Using these datasets, we demonstrate that our proposed metrics achieve a stronger agreement with the ground-truth diversity than baselines. Finally, we showcase the applicability of our proposed metrics across several real-life evaluation scenarios. MAD Speech is made publicly accessible.
[ { "version": "v1", "created": "Tue, 16 Apr 2024 09:35:27 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 12:02:06 GMT" } ]
2025-03-12T00:00:00
[ [ "Futeral", "Matthieu", "" ], [ "Agostinelli", "Andrea", "" ], [ "Tagliasacchi", "Marco", "" ], [ "Zeghidour", "Neil", "" ], [ "Kharitonov", "Eugene", "" ] ]
TITLE: MAD Speech: Measures of Acoustic Diversity of Speech ABSTRACT: Generative spoken language models produce speech in a wide range of voices, prosody, and recording conditions, seemingly approaching the diversity of natural speech. However, the extent to which generated speech is acoustically diverse remains unclear due to a lack of appropriate metrics. We address this gap by developing lightweight metrics of acoustic diversity, which we collectively refer to as MAD Speech. We focus on measuring five facets of acoustic diversity: voice, gender, emotion, accent, and background noise. We construct the metrics as a composition of specialized, per-facet embedding models and an aggregation function that measures diversity within the embedding space. Next, we build a series of datasets with a priori known diversity preferences for each facet. Using these datasets, we demonstrate that our proposed metrics achieve a stronger agreement with the ground-truth diversity than baselines. Finally, we showcase the applicability of our proposed metrics across several real-life evaluation scenarios. MAD Speech is made publicly accessible.
new_dataset
0.958577
2404.11868
Azad Singh
Vandan Gorade, Azad Singh, and Deepak Mishra
OTCXR: Rethinking Self-supervised Alignment using Optimal Transport for Chest X-ray Analysis
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-supervised learning (SSL) has emerged as a promising technique for analyzing medical modalities such as X-rays due to its ability to learn without annotations. However, conventional SSL methods face challenges in achieving semantic alignment and capturing subtle details, which limits their ability to accurately represent the underlying anatomical structures and pathological features. To address these limitations, we propose OTCXR, a novel SSL framework that leverages optimal transport (OT) to learn dense semantic invariance. By integrating OT with our innovative Cross-Viewpoint Semantics Infusion Module (CV-SIM), OTCXR enhances the model's ability to capture not only local spatial features but also global contextual dependencies across different viewpoints. This approach enriches the effectiveness of SSL in the context of chest radiographs. Furthermore, OTCXR incorporates variance and covariance regularizations within the OT framework to prioritize clinically relevant information while suppressing less informative features. This ensures that the learned representations are comprehensive and discriminative, particularly beneficial for tasks such as thoracic disease diagnosis. We validate OTCXR's efficacy through comprehensive experiments on three publicly available chest X-ray datasets. Our empirical results demonstrate the superiority of OTCXR over state-of-the-art methods across all evaluated tasks, confirming its capability to learn semantically rich representations.
[ { "version": "v1", "created": "Thu, 18 Apr 2024 02:59:48 GMT" }, { "version": "v2", "created": "Wed, 24 Apr 2024 06:05:33 GMT" }, { "version": "v3", "created": "Sun, 12 May 2024 03:15:07 GMT" }, { "version": "v4", "created": "Tue, 11 Mar 2025 10:09:11 GMT" } ]
2025-03-12T00:00:00
[ [ "Gorade", "Vandan", "" ], [ "Singh", "Azad", "" ], [ "Mishra", "Deepak", "" ] ]
TITLE: OTCXR: Rethinking Self-supervised Alignment using Optimal Transport for Chest X-ray Analysis ABSTRACT: Self-supervised learning (SSL) has emerged as a promising technique for analyzing medical modalities such as X-rays due to its ability to learn without annotations. However, conventional SSL methods face challenges in achieving semantic alignment and capturing subtle details, which limits their ability to accurately represent the underlying anatomical structures and pathological features. To address these limitations, we propose OTCXR, a novel SSL framework that leverages optimal transport (OT) to learn dense semantic invariance. By integrating OT with our innovative Cross-Viewpoint Semantics Infusion Module (CV-SIM), OTCXR enhances the model's ability to capture not only local spatial features but also global contextual dependencies across different viewpoints. This approach enriches the effectiveness of SSL in the context of chest radiographs. Furthermore, OTCXR incorporates variance and covariance regularizations within the OT framework to prioritize clinically relevant information while suppressing less informative features. This ensures that the learned representations are comprehensive and discriminative, particularly beneficial for tasks such as thoracic disease diagnosis. We validate OTCXR's efficacy through comprehensive experiments on three publicly available chest X-ray datasets. Our empirical results demonstrate the superiority of OTCXR over state-of-the-art methods across all evaluated tasks, confirming its capability to learn semantically rich representations.
no_new_dataset
0.946349
2404.17884
Rodrigo Abad\'ia Heredia
Rodrigo Abad\'ia-Heredia, Adri\'an Corrochano, Manuel Lopez-Martin, Soledad Le Clainche
Generalization capabilities and robustness of hybrid models grounded in physics compared to purely deep learning models
24 pages, two column, 26 figures and 11 tables
Physics of Fluids 1 March 2025; 37 (3): 035149
10.1063/5.0253876
null
physics.flu-dyn cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
This study investigates the generalization capabilities and robustness of purely deep learning (DL) models and hybrid models based on physical principles in fluid dynamics applications, specifically focusing on iteratively forecasting the temporal evolution of flow dynamics. Three autoregressive models were compared: a hybrid model (POD-DL) that combines proper orthogonal decomposition (POD) with a long short-term memory (LSTM) layer, a convolutional autoencoder combined with a convolutional LSTM (ConvLSTM) layer, and a variational autoencoder (VAE) combined with a ConvLSTM layer. These models were tested on two high-dimensional, nonlinear datasets representing the velocity field of flow past a circular cylinder in both laminar and turbulent regimes. The study used latent dimension methods, enabling a bijective reduction of high-dimensional dynamics into a lower-order space to facilitate future predictions. While the VAE and ConvLSTM models accurately predicted laminar flow, the hybrid POD-DL model outperformed the others across both laminar and turbulent flow regimes. This success is attributed to the model's ability to incorporate modal decomposition, which reduces the dimensionality of the data by a non-parametric method and simplifies the forecasting component. By leveraging POD, the model not only gained insight into the underlying physics, improving prediction accuracy with less training data, but also reduced the number of trainable parameters, as POD is non-parametric. The findings emphasize the potential of hybrid models, particularly those integrating modal decomposition and deep learning, in predicting complex flow dynamics.
[ { "version": "v1", "created": "Sat, 27 Apr 2024 12:43:02 GMT" }, { "version": "v2", "created": "Mon, 28 Oct 2024 15:31:11 GMT" }, { "version": "v3", "created": "Fri, 24 Jan 2025 19:50:59 GMT" }, { "version": "v4", "created": "Mon, 17 Feb 2025 15:37:58 GMT" } ]
2025-03-12T00:00:00
[ [ "Abadía-Heredia", "Rodrigo", "" ], [ "Corrochano", "Adrián", "" ], [ "Lopez-Martin", "Manuel", "" ], [ "Clainche", "Soledad Le", "" ] ]
TITLE: Generalization capabilities and robustness of hybrid models grounded in physics compared to purely deep learning models ABSTRACT: This study investigates the generalization capabilities and robustness of purely deep learning (DL) models and hybrid models based on physical principles in fluid dynamics applications, specifically focusing on iteratively forecasting the temporal evolution of flow dynamics. Three autoregressive models were compared: a hybrid model (POD-DL) that combines proper orthogonal decomposition (POD) with a long short-term memory (LSTM) layer, a convolutional autoencoder combined with a convolutional LSTM (ConvLSTM) layer, and a variational autoencoder (VAE) combined with a ConvLSTM layer. These models were tested on two high-dimensional, nonlinear datasets representing the velocity field of flow past a circular cylinder in both laminar and turbulent regimes. The study used latent dimension methods, enabling a bijective reduction of high-dimensional dynamics into a lower-order space to facilitate future predictions. While the VAE and ConvLSTM models accurately predicted laminar flow, the hybrid POD-DL model outperformed the others across both laminar and turbulent flow regimes. This success is attributed to the model's ability to incorporate modal decomposition, which reduces the dimensionality of the data by a non-parametric method and simplifies the forecasting component. By leveraging POD, the model not only gained insight into the underlying physics, improving prediction accuracy with less training data, but also reduced the number of trainable parameters, as POD is non-parametric. The findings emphasize the potential of hybrid models, particularly those integrating modal decomposition and deep learning, in predicting complex flow dynamics.
no_new_dataset
0.949482
2406.03486
Soonwoo Kwon
Soonwoo Kwon, Sojung Kim, Minju Park, Seunghyun Lee, Kyuseok Kim
BIPED: Pedagogically Informed Tutoring System for ESL Education
ACL 2024
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have a great potential to serve as readily available and cost-efficient Conversational Intelligent Tutoring Systems (CITS) for teaching L2 learners of English. Existing CITS, however, are designed to teach only simple concepts or lack the pedagogical depth necessary to address diverse learning strategies. To develop a more pedagogically informed CITS capable of teaching complex concepts, we construct a BIlingual PEDagogically-informed Tutoring Dataset (BIPED) of one-on-one, human-to-human English tutoring interactions. Through post-hoc analysis of the tutoring interactions, we come up with a lexicon of dialogue acts (34 tutor acts and 9 student acts), which we use to further annotate the collected dataset. Based on a two-step framework of first predicting the appropriate tutor act then generating the corresponding response, we implemented two CITS models using GPT-4 and SOLAR-KO, respectively. We experimentally demonstrate that the implemented models not only replicate the style of human teachers but also employ diverse and contextually appropriate pedagogical strategies.
[ { "version": "v1", "created": "Wed, 5 Jun 2024 17:49:24 GMT" } ]
2025-03-12T00:00:00
[ [ "Kwon", "Soonwoo", "" ], [ "Kim", "Sojung", "" ], [ "Park", "Minju", "" ], [ "Lee", "Seunghyun", "" ], [ "Kim", "Kyuseok", "" ] ]
TITLE: BIPED: Pedagogically Informed Tutoring System for ESL Education ABSTRACT: Large Language Models (LLMs) have a great potential to serve as readily available and cost-efficient Conversational Intelligent Tutoring Systems (CITS) for teaching L2 learners of English. Existing CITS, however, are designed to teach only simple concepts or lack the pedagogical depth necessary to address diverse learning strategies. To develop a more pedagogically informed CITS capable of teaching complex concepts, we construct a BIlingual PEDagogically-informed Tutoring Dataset (BIPED) of one-on-one, human-to-human English tutoring interactions. Through post-hoc analysis of the tutoring interactions, we come up with a lexicon of dialogue acts (34 tutor acts and 9 student acts), which we use to further annotate the collected dataset. Based on a two-step framework of first predicting the appropriate tutor act then generating the corresponding response, we implemented two CITS models using GPT-4 and SOLAR-KO, respectively. We experimentally demonstrate that the implemented models not only replicate the style of human teachers but also employ diverse and contextually appropriate pedagogical strategies.
new_dataset
0.95995
2406.06843
Jikai Wang
Jikai Wang, Qifan Zhang, Yu-Wei Chao, Bowen Wen, Xiaohu Guo, Yu Xiang
HO-Cap: A Capture System and Dataset for 3D Reconstruction and Pose Tracking of Hand-Object Interaction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a data capture system and a new dataset, HO-Cap, for 3D reconstruction and pose tracking of hands and objects in videos. The system leverages multiple RGBD cameras and a HoloLens headset for data collection, avoiding the use of expensive 3D scanners or mocap systems. We propose a semi-automatic method for annotating the shape and pose of hands and objects in the collected videos, significantly reducing the annotation time compared to manual labeling. With this system, we captured a video dataset of humans interacting with objects to perform various tasks, including simple pick-and-place actions, handovers between hands, and using objects according to their affordance, which can serve as human demonstrations for research in embodied AI and robot manipulation. Our data capture setup and annotation framework will be available for the community to use in reconstructing 3D shapes of objects and human hands and tracking their poses in videos.
[ { "version": "v1", "created": "Mon, 10 Jun 2024 23:25:19 GMT" }, { "version": "v2", "created": "Sun, 16 Jun 2024 20:51:53 GMT" }, { "version": "v3", "created": "Wed, 4 Dec 2024 21:05:00 GMT" }, { "version": "v4", "created": "Tue, 11 Mar 2025 16:48:26 GMT" } ]
2025-03-12T00:00:00
[ [ "Wang", "Jikai", "" ], [ "Zhang", "Qifan", "" ], [ "Chao", "Yu-Wei", "" ], [ "Wen", "Bowen", "" ], [ "Guo", "Xiaohu", "" ], [ "Xiang", "Yu", "" ] ]
TITLE: HO-Cap: A Capture System and Dataset for 3D Reconstruction and Pose Tracking of Hand-Object Interaction ABSTRACT: We introduce a data capture system and a new dataset, HO-Cap, for 3D reconstruction and pose tracking of hands and objects in videos. The system leverages multiple RGBD cameras and a HoloLens headset for data collection, avoiding the use of expensive 3D scanners or mocap systems. We propose a semi-automatic method for annotating the shape and pose of hands and objects in the collected videos, significantly reducing the annotation time compared to manual labeling. With this system, we captured a video dataset of humans interacting with objects to perform various tasks, including simple pick-and-place actions, handovers between hands, and using objects according to their affordance, which can serve as human demonstrations for research in embodied AI and robot manipulation. Our data capture setup and annotation framework will be available for the community to use in reconstructing 3D shapes of objects and human hands and tracking their poses in videos.
new_dataset
0.953535
2406.07451
Xiaoyan Hu
Xiaoyan Hu, Ho-fung Leung, and Farzan Farnia
A Multi-Armed Bandit Approach to Online Selection and Evaluation of Generative Models
arXiv version
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Existing frameworks for evaluating and comparing generative models consider an offline setting, where the evaluator has access to large batches of data produced by the models. However, in practical scenarios, the goal is often to identify and select the best model using the fewest possible generated samples to minimize the costs of querying data from the sub-optimal models. In this work, we propose an online evaluation and selection framework to find the generative model that maximizes a standard assessment score among a group of available models. We view the task as a multi-armed bandit (MAB) and propose upper confidence bound (UCB) bandit algorithms to identify the model producing data with the best evaluation score that quantifies the quality and diversity of generated data. Specifically, we develop the MAB-based selection of generative models considering the Fr\'echet Distance (FD) and Inception Score (IS) metrics, resulting in the FD-UCB and IS-UCB algorithms. We prove regret bounds for these algorithms and present numerical results on standard image datasets. Our empirical results suggest the efficacy of MAB approaches for the sample-efficient evaluation and selection of deep generative models. The project code is available at https://github.com/yannxiaoyanhu/dgm-online-eval.
[ { "version": "v1", "created": "Tue, 11 Jun 2024 16:57:48 GMT" }, { "version": "v2", "created": "Thu, 31 Oct 2024 16:48:40 GMT" }, { "version": "v3", "created": "Tue, 11 Mar 2025 10:55:52 GMT" } ]
2025-03-12T00:00:00
[ [ "Hu", "Xiaoyan", "" ], [ "Leung", "Ho-fung", "" ], [ "Farnia", "Farzan", "" ] ]
TITLE: A Multi-Armed Bandit Approach to Online Selection and Evaluation of Generative Models ABSTRACT: Existing frameworks for evaluating and comparing generative models consider an offline setting, where the evaluator has access to large batches of data produced by the models. However, in practical scenarios, the goal is often to identify and select the best model using the fewest possible generated samples to minimize the costs of querying data from the sub-optimal models. In this work, we propose an online evaluation and selection framework to find the generative model that maximizes a standard assessment score among a group of available models. We view the task as a multi-armed bandit (MAB) and propose upper confidence bound (UCB) bandit algorithms to identify the model producing data with the best evaluation score that quantifies the quality and diversity of generated data. Specifically, we develop the MAB-based selection of generative models considering the Fr\'echet Distance (FD) and Inception Score (IS) metrics, resulting in the FD-UCB and IS-UCB algorithms. We prove regret bounds for these algorithms and present numerical results on standard image datasets. Our empirical results suggest the efficacy of MAB approaches for the sample-efficient evaluation and selection of deep generative models. The project code is available at https://github.com/yannxiaoyanhu/dgm-online-eval.
no_new_dataset
0.949529
2406.10118
Holy Lovenia
Holy Lovenia, Rahmad Mahendra, Salsabil Maulana Akbar, Lester James V. Miranda, Jennifer Santoso, Elyanah Aco, Akhdan Fadhilah, Jonibek Mansurov, Joseph Marvin Imperial, Onno P. Kampman, Joel Ruben Antony Moniz, Muhammad Ravi Shulthan Habibi, Frederikus Hudi, Railey Montalan, Ryan Ignatius, Joanito Agili Lopo, William Nixon, B\"orje F. Karlsson, James Jaya, Ryandito Diandaru, Yuze Gao, Patrick Amadeus, Bin Wang, Jan Christian Blaise Cruz, Chenxi Whitehouse, Ivan Halim Parmonangan, Maria Khelli, Wenyu Zhang, Lucky Susanto, Reynard Adha Ryanda, Sonny Lazuardi Hermawan, Dan John Velasco, Muhammad Dehan Al Kautsar, Willy Fitra Hendria, Yasmin Moslem, Noah Flynn, Muhammad Farid Adilazuarda, Haochen Li, Johanes Lee, R. Damanhuri, Shuo Sun, Muhammad Reza Qorib, Amirbek Djanibekov, Wei Qi Leong, Quyet V. Do, Niklas Muennighoff, Tanrada Pansuwan, Ilham Firdausi Putra, Yan Xu, Ngee Chia Tai, Ayu Purwarianti, Sebastian Ruder, William Tjhi, Peerat Limkonchotiwat, Alham Fikri Aji, Sedrick Keh, Genta Indra Winata, Ruochen Zhang, Fajri Koto, Zheng-Xin Yong, Samuel Cahyawijaya
SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages
https://seacrowd.github.io/ Published in EMNLP 2024
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI models for SEA languages. Evaluating models for SEA languages is challenging due to the scarcity of high-quality datasets, compounded by the dominance of English training data, raising concerns about potential cultural misrepresentation. To address these challenges, we introduce SEACrowd, a collaborative initiative that consolidates a comprehensive resource hub that fills the resource gap by providing standardized corpora in nearly 1,000 SEA languages across three modalities. Through our SEACrowd benchmarks, we assess the quality of AI models on 36 indigenous languages across 13 tasks, offering valuable insights into the current AI landscape in SEA. Furthermore, we propose strategies to facilitate greater AI advancements, maximizing potential utility and resource equity for the future of AI in SEA.
[ { "version": "v1", "created": "Fri, 14 Jun 2024 15:23:39 GMT" }, { "version": "v2", "created": "Fri, 5 Jul 2024 05:28:20 GMT" }, { "version": "v3", "created": "Mon, 8 Jul 2024 07:49:40 GMT" }, { "version": "v4", "created": "Tue, 8 Oct 2024 14:35:36 GMT" }, { "version": "v5", "created": "Tue, 11 Mar 2025 02:04:36 GMT" } ]
2025-03-12T00:00:00
[ [ "Lovenia", "Holy", "" ], [ "Mahendra", "Rahmad", "" ], [ "Akbar", "Salsabil Maulana", "" ], [ "Miranda", "Lester James V.", "" ], [ "Santoso", "Jennifer", "" ], [ "Aco", "Elyanah", "" ], [ "Fadhilah", "Akhdan", "" ], [ "Mansurov", "Jonibek", "" ], [ "Imperial", "Joseph Marvin", "" ], [ "Kampman", "Onno P.", "" ], [ "Moniz", "Joel Ruben Antony", "" ], [ "Habibi", "Muhammad Ravi Shulthan", "" ], [ "Hudi", "Frederikus", "" ], [ "Montalan", "Railey", "" ], [ "Ignatius", "Ryan", "" ], [ "Lopo", "Joanito Agili", "" ], [ "Nixon", "William", "" ], [ "Karlsson", "Börje F.", "" ], [ "Jaya", "James", "" ], [ "Diandaru", "Ryandito", "" ], [ "Gao", "Yuze", "" ], [ "Amadeus", "Patrick", "" ], [ "Wang", "Bin", "" ], [ "Cruz", "Jan Christian Blaise", "" ], [ "Whitehouse", "Chenxi", "" ], [ "Parmonangan", "Ivan Halim", "" ], [ "Khelli", "Maria", "" ], [ "Zhang", "Wenyu", "" ], [ "Susanto", "Lucky", "" ], [ "Ryanda", "Reynard Adha", "" ], [ "Hermawan", "Sonny Lazuardi", "" ], [ "Velasco", "Dan John", "" ], [ "Kautsar", "Muhammad Dehan Al", "" ], [ "Hendria", "Willy Fitra", "" ], [ "Moslem", "Yasmin", "" ], [ "Flynn", "Noah", "" ], [ "Adilazuarda", "Muhammad Farid", "" ], [ "Li", "Haochen", "" ], [ "Lee", "Johanes", "" ], [ "Damanhuri", "R.", "" ], [ "Sun", "Shuo", "" ], [ "Qorib", "Muhammad Reza", "" ], [ "Djanibekov", "Amirbek", "" ], [ "Leong", "Wei Qi", "" ], [ "Do", "Quyet V.", "" ], [ "Muennighoff", "Niklas", "" ], [ "Pansuwan", "Tanrada", "" ], [ "Putra", "Ilham Firdausi", "" ], [ "Xu", "Yan", "" ], [ "Tai", "Ngee Chia", "" ], [ "Purwarianti", "Ayu", "" ], [ "Ruder", "Sebastian", "" ], [ "Tjhi", "William", "" ], [ "Limkonchotiwat", "Peerat", "" ], [ "Aji", "Alham Fikri", "" ], [ "Keh", "Sedrick", "" ], [ "Winata", "Genta Indra", "" ], [ "Zhang", "Ruochen", "" ], [ "Koto", "Fajri", "" ], [ "Yong", "Zheng-Xin", "" ], [ "Cahyawijaya", "Samuel", "" ] ]
TITLE: SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages ABSTRACT: Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI models for SEA languages. Evaluating models for SEA languages is challenging due to the scarcity of high-quality datasets, compounded by the dominance of English training data, raising concerns about potential cultural misrepresentation. To address these challenges, we introduce SEACrowd, a collaborative initiative that consolidates a comprehensive resource hub that fills the resource gap by providing standardized corpora in nearly 1,000 SEA languages across three modalities. Through our SEACrowd benchmarks, we assess the quality of AI models on 36 indigenous languages across 13 tasks, offering valuable insights into the current AI landscape in SEA. Furthermore, we propose strategies to facilitate greater AI advancements, maximizing potential utility and resource equity for the future of AI in SEA.
no_new_dataset
0.927888