id stringlengths 9-16 | submitter stringlengths 3-64 ⌀ | authors stringlengths 5-6.63k | title stringlengths 7-245 | comments stringlengths 1-482 ⌀ | journal-ref stringlengths 4-382 ⌀ | doi stringlengths 9-151 ⌀ | report-no stringclasses 984 values | categories stringlengths 5-108 | license stringclasses 9 values | abstract stringlengths 83-3.41k | versions listlengths 1-20 | update_date timestamp[s] 2007-05-23 to 2025-04-11 | authors_parsed listlengths 1-427 | prompt stringlengths 166-3.49k | label stringclasses 2 values | prob float64 0.5-0.98 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
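The rows below follow the schema above: each record pairs arXiv metadata and a TITLE/ABSTRACT prompt with a binary label (new_dataset / no_new_dataset) and a confidence score (prob). As a usage sketch only, the snippet below shows how such a dataset might be loaded and inspected with the Hugging Face `datasets` library; the repository id and the `train` split name are placeholders, not the dataset's real identifiers.

```python
# Minimal sketch, assuming the dataset is published on the Hugging Face Hub.
# "username/arxiv-new-dataset-detection" and the "train" split are hypothetical names.
from datasets import load_dataset

ds = load_dataset("username/arxiv-new-dataset-detection", split="train")

# Inspect a few records: id, predicted label, confidence, and the start of the prompt.
for row in ds.select(range(3)):
    print(row["id"], row["label"], round(row["prob"], 3))
    print(row["prompt"][:120].replace("\n", " "), "...")
```
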
2503.07216 | Seanie Lee | Sangwoo Park, Seanie Lee, Byungjoo Kim, Sung Ju Hwang | FedRand: Enhancing Privacy in Federated Learning with Randomized LoRA
Subparameter Updates | Preprint | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Federated Learning (FL) is a widely used framework for training models in a
decentralized manner, ensuring that the central server does not have direct
access to data from local clients. However, this approach may still fail to
fully preserve data privacy, as models from local clients are exposed to the
central server during the aggregation process. This issue becomes even more
critical when training vision-language models (VLMs) with FL, as VLMs can
easily memorize training data instances, making them vulnerable to membership
inference attacks (MIAs). To address this challenge, we propose the FedRand
framework, which avoids disclosing the full set of client parameters. In this
framework, each client randomly selects subparameters of Low-Rank Adaptation
(LoRA) from the server and keeps the remaining counterparts of the LoRA weights
as private parameters. After training both parameters on the client's private
dataset, only the non-private client parameters are sent back to the server for
aggregation. This approach mitigates the risk of exposing client-side VLM
parameters, thereby enhancing data privacy. We empirically validate that
FedRand improves robustness against MIAs compared to relevant baselines while
achieving accuracy comparable to methods that communicate full LoRA parameters
across several benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:55:50 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 12:49:15 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Park",
"Sangwoo",
""
],
[
"Lee",
"Seanie",
""
],
[
"Kim",
"Byungjoo",
""
],
[
"Hwang",
"Sung Ju",
""
]
]
| TITLE: FedRand: Enhancing Privacy in Federated Learning with Randomized LoRA
Subparameter Updates
ABSTRACT: Federated Learning (FL) is a widely used framework for training models in a
decentralized manner, ensuring that the central server does not have direct
access to data from local clients. However, this approach may still fail to
fully preserve data privacy, as models from local clients are exposed to the
central server during the aggregation process. This issue becomes even more
critical when training vision-language models (VLMs) with FL, as VLMs can
easily memorize training data instances, making them vulnerable to membership
inference attacks (MIAs). To address this challenge, we propose the FedRand
framework, which avoids disclosing the full set of client parameters. In this
framework, each client randomly selects subparameters of Low-Rank Adaptation
(LoRA) from the server and keeps the remaining counterparts of the LoRA weights
as private parameters. After training both parameters on the client's private
dataset, only the non-private client parameters are sent back to the server for
aggregation. This approach mitigates the risk of exposing client-side VLM
parameters, thereby enhancing data privacy. We empirically validate that
FedRand improves robustness against MIAs compared to relevant baselines while
achieving accuracy comparable to methods that communicate full LoRA parameters
across several benchmark datasets.
| no_new_dataset | 0.941007 |
2503.07232 | Chenglu Pan | Chenglu Pan, Xiaogang Xu, Ganggui Ding, Yunke Zhang, Wenbo Li, Jiarong
Xu, Qingbiao Wu | Boosting Diffusion-Based Text Image Super-Resolution Model Towards
Generalized Real-World Scenarios | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Restoring low-resolution text images presents a significant challenge, as it
requires maintaining both the fidelity and stylistic realism of the text in
restored images. Existing text image restoration methods often fall short in
hard situations, as the traditional super-resolution models cannot guarantee
clarity, while diffusion-based methods fail to maintain fidelity. In this
paper, we introduce a novel framework aimed at improving the generalization
ability of diffusion models for text image super-resolution (SR), especially
promoting fidelity. First, we propose a progressive data sampling strategy that
incorporates diverse image types at different stages of training, stabilizing
the convergence and improving the generalization. For the network architecture,
we leverage a pre-trained SR prior to provide robust spatial reasoning
capabilities, enhancing the model's ability to preserve textual information.
Additionally, we employ a cross-attention mechanism to better integrate textual
priors. To further reduce errors in textual priors, we utilize confidence
scores to dynamically adjust the importance of textual features during
training. Extensive experiments on real-world datasets demonstrate that our
approach not only produces text images with more realistic visual appearances
but also improves the accuracy of text structure.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 12:16:19 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 06:00:49 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Pan",
"Chenglu",
""
],
[
"Xu",
"Xiaogang",
""
],
[
"Ding",
"Ganggui",
""
],
[
"Zhang",
"Yunke",
""
],
[
"Li",
"Wenbo",
""
],
[
"Xu",
"Jiarong",
""
],
[
"Wu",
"Qingbiao",
""
]
]
| TITLE: Boosting Diffusion-Based Text Image Super-Resolution Model Towards
Generalized Real-World Scenarios
ABSTRACT: Restoring low-resolution text images presents a significant challenge, as it
requires maintaining both the fidelity and stylistic realism of the text in
restored images. Existing text image restoration methods often fall short in
hard situations, as the traditional super-resolution models cannot guarantee
clarity, while diffusion-based methods fail to maintain fidelity. In this
paper, we introduce a novel framework aimed at improving the generalization
ability of diffusion models for text image super-resolution (SR), especially
promoting fidelity. First, we propose a progressive data sampling strategy that
incorporates diverse image types at different stages of training, stabilizing
the convergence and improving the generalization. For the network architecture,
we leverage a pre-trained SR prior to provide robust spatial reasoning
capabilities, enhancing the model's ability to preserve textual information.
Additionally, we employ a cross-attention mechanism to better integrate textual
priors. To further reduce errors in textual priors, we utilize confidence
scores to dynamically adjust the importance of textual features during
training. Extensive experiments on real-world datasets demonstrate that our
approach not only produces text images with more realistic visual appearances
but also improves the accuracy of text structure.
| no_new_dataset | 0.94801 |
2503.07259 | Baiyu Chen | Baiyu Chen, Wilson Wongso, Zechen Li, Yonchanok Khaokaew, Hao Xue,
Flora Salim | COMODO: Cross-Modal Video-to-IMU Distillation for Efficient Egocentric
Human Activity Recognition | null | null | null | null | cs.CV cs.AI cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Egocentric video-based models capture rich semantic information and have
demonstrated strong performance in human activity recognition (HAR). However,
their high power consumption, privacy concerns, and dependence on lighting
conditions limit their feasibility for continuous on-device recognition. In
contrast, inertial measurement unit (IMU) sensors offer an energy-efficient and
privacy-preserving alternative, yet they suffer from limited large-scale
annotated datasets, leading to weaker generalization in downstream tasks. To
bridge this gap, we propose COMODO, a cross-modal self-supervised distillation
framework that transfers rich semantic knowledge from the video modality to the
IMU modality without requiring labeled annotations. COMODO leverages a
pretrained and frozen video encoder to construct a dynamic instance queue,
aligning the feature distributions of video and IMU embeddings. By distilling
knowledge from video representations, our approach enables the IMU encoder to
inherit rich semantic information from video while preserving its efficiency
for real-world applications. Experiments on multiple egocentric HAR datasets
demonstrate that COMODO consistently improves downstream classification
performance, achieving results comparable to or exceeding fully supervised
fine-tuned models. Moreover, COMODO exhibits strong cross-dataset
generalization. Benefiting from its simplicity, our method is also generally
applicable to various video and time-series pre-trained models, offering the
potential to leverage more powerful teacher and student foundation models in
future research. The code is available at https://github.com/Breezelled/COMODO .
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 12:43:51 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chen",
"Baiyu",
""
],
[
"Wongso",
"Wilson",
""
],
[
"Li",
"Zechen",
""
],
[
"Khaokaew",
"Yonchanok",
""
],
[
"Xue",
"Hao",
""
],
[
"Salim",
"Flora",
""
]
]
| TITLE: COMODO: Cross-Modal Video-to-IMU Distillation for Efficient Egocentric
Human Activity Recognition
ABSTRACT: Egocentric video-based models capture rich semantic information and have
demonstrated strong performance in human activity recognition (HAR). However,
their high power consumption, privacy concerns, and dependence on lighting
conditions limit their feasibility for continuous on-device recognition. In
contrast, inertial measurement unit (IMU) sensors offer an energy-efficient and
privacy-preserving alternative, yet they suffer from limited large-scale
annotated datasets, leading to weaker generalization in downstream tasks. To
bridge this gap, we propose COMODO, a cross-modal self-supervised distillation
framework that transfers rich semantic knowledge from the video modality to the
IMU modality without requiring labeled annotations. COMODO leverages a
pretrained and frozen video encoder to construct a dynamic instance queue,
aligning the feature distributions of video and IMU embeddings. By distilling
knowledge from video representations, our approach enables the IMU encoder to
inherit rich semantic information from video while preserving its efficiency
for real-world applications. Experiments on multiple egocentric HAR datasets
demonstrate that COMODO consistently improves downstream classification
performance, achieving results comparable to or exceeding fully supervised
fine-tuned models. Moreover, COMODO exhibits strong cross-dataset
generalization. Benefiting from its simplicity, our method is also generally
applicable to various video and time-series pre-trained models, offering the
potential to leverage more powerful teacher and student foundation models in
future research. The code is available at https://github.com/Breezelled/COMODO .
| no_new_dataset | 0.948585 |
2503.07499 | Calvin Yeung | Calvin Yeung, Tomohiro Suzuki, Ryota Tanaka, Zhuoer Yin, Keisuke Fujii | AthletePose3D: A Benchmark Dataset for 3D Human Pose Estimation and
Kinematic Validation in Athletic Movements | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Human pose estimation is a critical task in computer vision and sports
biomechanics, with applications spanning sports science, rehabilitation, and
biomechanical research. While significant progress has been made in monocular
3D pose estimation, current datasets often fail to capture the complex,
high-acceleration movements typical of competitive sports. In this work, we
introduce AthletePose3D, a novel dataset designed to address this gap.
AthletePose3D includes 12 types of sports motions across various disciplines,
with approximately 1.3 million frames and 165 thousand individual postures,
specifically capturing high-speed, high-acceleration athletic movements. We
evaluate state-of-the-art (SOTA) monocular 2D and 3D pose estimation models on
the dataset, revealing that models trained on conventional datasets perform
poorly on athletic motions. However, fine-tuning these models on AthletePose3D
notably reduces the SOTA model mean per joint position error (MPJPE) from 214mm
to 65mm-a reduction of over 69%. We also validate the kinematic accuracy of
monocular pose estimations through waveform analysis, highlighting strong
correlations in joint angle estimations but limitations in velocity estimation.
Our work provides a comprehensive evaluation of monocular pose estimation
models in the context of sports, contributing valuable insights for advancing
monocular pose estimation techniques in high-performance sports environments.
The dataset, code, and model checkpoints are available at:
https://github.com/calvinyeungck/AthletePose3D
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 16:16:02 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 16:51:19 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Yeung",
"Calvin",
""
],
[
"Suzuki",
"Tomohiro",
""
],
[
"Tanaka",
"Ryota",
""
],
[
"Yin",
"Zhuoer",
""
],
[
"Fujii",
"Keisuke",
""
]
]
| TITLE: AthletePose3D: A Benchmark Dataset for 3D Human Pose Estimation and
Kinematic Validation in Athletic Movements
ABSTRACT: Human pose estimation is a critical task in computer vision and sports
biomechanics, with applications spanning sports science, rehabilitation, and
biomechanical research. While significant progress has been made in monocular
3D pose estimation, current datasets often fail to capture the complex,
high-acceleration movements typical of competitive sports. In this work, we
introduce AthletePose3D, a novel dataset designed to address this gap.
AthletePose3D includes 12 types of sports motions across various disciplines,
with approximately 1.3 million frames and 165 thousand individual postures,
specifically capturing high-speed, high-acceleration athletic movements. We
evaluate state-of-the-art (SOTA) monocular 2D and 3D pose estimation models on
the dataset, revealing that models trained on conventional datasets perform
poorly on athletic motions. However, fine-tuning these models on AthletePose3D
notably reduces the SOTA model mean per joint position error (MPJPE) from 214mm
to 65mm-a reduction of over 69%. We also validate the kinematic accuracy of
monocular pose estimations through waveform analysis, highlighting strong
correlations in joint angle estimations but limitations in velocity estimation.
Our work provides a comprehensive evaluation of monocular pose estimation
models in the context of sports, contributing valuable insights for advancing
monocular pose estimation techniques in high-performance sports environments.
The dataset, code, and model checkpoints are available at:
https://github.com/calvinyeungck/AthletePose3D
| new_dataset | 0.965835 |
2503.07635 | Weixing Chen | Weixing Chen and Yang Liu and Binglin Chen and Jiandong Su and Yongsen
Zheng and Liang Lin | Cross-modal Causal Relation Alignment for Video Question Grounding | Accepted by CVPR 2025 | null | null | null | cs.LG cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video question grounding (VideoQG) requires models to answer the questions
and simultaneously infer the relevant video segments to support the answers.
However, existing VideoQG methods usually suffer from spurious cross-modal
correlations, leading to a failure to identify the dominant visual scenes that
align with the intended question. Moreover, vision-language models exhibit
unfaithful generalization performance and lack robustness on challenging
downstream tasks such as VideoQG. In this work, we propose a novel VideoQG
framework named Cross-modal Causal Relation Alignment (CRA), to eliminate
spurious correlations and improve the causal consistency between
question-answering and video temporal grounding. Our CRA involves three
essential components: i) Gaussian Smoothing Grounding (GSG) module for
estimating the time interval via cross-modal attention, which is de-noised by
an adaptive Gaussian filter, ii) Cross-Modal Alignment (CMA) enhances the
performance of weakly supervised VideoQG by leveraging bidirectional
contrastive learning between estimated video segments and QA features, iii)
Explicit Causal Intervention (ECI) module for multimodal deconfounding, which
involves front-door intervention for vision and back-door intervention for
language. Extensive experiments on two VideoQG datasets demonstrate the
superiority of our CRA in discovering visually grounded content and achieving
robust question reasoning. Codes are available at
https://github.com/WissingChen/CRA-GQA.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 01:36:32 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chen",
"Weixing",
""
],
[
"Liu",
"Yang",
""
],
[
"Chen",
"Binglin",
""
],
[
"Su",
"Jiandong",
""
],
[
"Zheng",
"Yongsen",
""
],
[
"Lin",
"Liang",
""
]
]
| TITLE: Cross-modal Causal Relation Alignment for Video Question Grounding
ABSTRACT: Video question grounding (VideoQG) requires models to answer the questions
and simultaneously infer the relevant video segments to support the answers.
However, existing VideoQG methods usually suffer from spurious cross-modal
correlations, leading to a failure to identify the dominant visual scenes that
align with the intended question. Moreover, vision-language models exhibit
unfaithful generalization performance and lack robustness on challenging
downstream tasks such as VideoQG. In this work, we propose a novel VideoQG
framework named Cross-modal Causal Relation Alignment (CRA), to eliminate
spurious correlations and improve the causal consistency between
question-answering and video temporal grounding. Our CRA involves three
essential components: i) Gaussian Smoothing Grounding (GSG) module for
estimating the time interval via cross-modal attention, which is de-noised by
an adaptive Gaussian filter, ii) Cross-Modal Alignment (CMA) enhances the
performance of weakly supervised VideoQG by leveraging bidirectional
contrastive learning between estimated video segments and QA features, iii)
Explicit Causal Intervention (ECI) module for multimodal deconfounding, which
involves front-door intervention for vision and back-door intervention for
language. Extensive experiments on two VideoQG datasets demonstrate the
superiority of our CRA in discovering visually grounded content and achieving
robust question reasoning. Codes are available at
https://github.com/WissingChen/CRA-GQA.
| no_new_dataset | 0.94545 |
2503.07642 | Mike Van Ness | Mike Van Ness, Madeleine Udell | dnamite: A Python Package for Neural Additive Models | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Additive models offer accurate and interpretable predictions for tabular
data, a critical tool for statistical modeling. Recent advances in Neural
Additive Models (NAMs) allow these models to handle complex machine learning
tasks, including feature selection and survival analysis, on large-scale data.
This paper introduces dnamite, a Python package that implements NAMs for these
advanced applications. dnamite provides a scikit-learn style interface to train
regression, classification, and survival analysis NAMs, with built-in support
for feature selection. We describe the methodology underlying dnamite, its
design principles, and its implementation. Through an application to the MIMIC
III clinical dataset, we demonstrate the utility of dnamite in a real-world
setting where feature selection and survival analysis are both important. The
package is publicly available via pip and documented at dnamite.readthedocs.io.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 00:24:54 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Van Ness",
"Mike",
""
],
[
"Udell",
"Madeleine",
""
]
]
| TITLE: dnamite: A Python Package for Neural Additive Models
ABSTRACT: Additive models offer accurate and interpretable predictions for tabular
data, a critical tool for statistical modeling. Recent advances in Neural
Additive Models (NAMs) allow these models to handle complex machine learning
tasks, including feature selection and survival analysis, on large-scale data.
This paper introduces dnamite, a Python package that implements NAMs for these
advanced applications. dnamite provides a scikit-learn style interface to train
regression, classification, and survival analysis NAMs, with built-in support
for feature selection. We describe the methodology underlying dnamite, its
design principles, and its implementation. Through an application to the MIMIC
III clinical dataset, we demonstrate the utility of dnamite in a real-world
setting where feature selection and survival analysis are both important. The
package is publicly available via pip and documented at dnamite.readthedocs.io.
| no_new_dataset | 0.939858 |
2503.07643 | Aidan Gao | Aidan Gao, Junhong Lin | ConstellationNet: Reinventing Spatial Clustering through GNNs | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Spatial clustering is a crucial field, finding universal use across
criminology, pathology, and urban planning. However, most spatial clustering
algorithms cannot pull information from nearby nodes and suffer performance
drops when dealing with higher dimensionality and large datasets, making them
suboptimal for large-scale and high-dimensional clustering. Due to modern data
growing in size and dimension, clustering algorithms become weaker when
addressing multifaceted issues. To improve upon this, we develop
ConstellationNet, a convolution neural network(CNN)-graph neural network(GNN)
framework that leverages the embedding power of a CNN, the neighbor aggregation
of a GNN, and a neural network's ability to deal with batched data to improve
spatial clustering and classification with graph augmented predictions.
ConstellationNet achieves state-of-the-art performance on both supervised
classification and unsupervised clustering across several datasets,
outperforming state-of-the-art classification and clustering while reducing
model size and training time by up to tenfold and improving baselines by 10
times. Because of its fast training and powerful nature, ConstellationNet holds
promise in fields like epidemiology and medical imaging, able to quickly train
on new data to develop robust responses.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 02:10:11 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Gao",
"Aidan",
""
],
[
"Lin",
"Junhong",
""
]
]
| TITLE: ConstellationNet: Reinventing Spatial Clustering through GNNs
ABSTRACT: Spatial clustering is a crucial field, finding universal use across
criminology, pathology, and urban planning. However, most spatial clustering
algorithms cannot pull information from nearby nodes and suffer performance
drops when dealing with higher dimensionality and large datasets, making them
suboptimal for large-scale and high-dimensional clustering. Due to modern data
growing in size and dimension, clustering algorithms become weaker when
addressing multifaceted issues. To improve upon this, we develop
ConstellationNet, a convolution neural network(CNN)-graph neural network(GNN)
framework that leverages the embedding power of a CNN, the neighbor aggregation
of a GNN, and a neural network's ability to deal with batched data to improve
spatial clustering and classification with graph augmented predictions.
ConstellationNet achieves state-of-the-art performance on both supervised
classification and unsupervised clustering across several datasets,
outperforming state-of-the-art classification and clustering while reducing
model size and training time by up to tenfold and improving baselines by 10
times. Because of its fast training and powerful nature, ConstellationNet holds
promise in fields like epidemiology and medical imaging, able to quickly train
on new data to develop robust responses.
| no_new_dataset | 0.950549 |
2503.07653 | Qasim Bin Saeed | Qasim Bin Saeed, Ijaz Ahmed | Early Detection of Mental Health Issues Using Social Media Posts | null | null | null | null | cs.LG cs.CL cs.SI | http://creativecommons.org/licenses/by/4.0/ | The increasing prevalence of mental health disorders, such as depression,
anxiety, and bipolar disorder, calls for immediate need in developing tools for
early detection and intervention. Social media platforms, like Reddit,
represent a rich source of user-generated content, reflecting emotional and
behavioral patterns. In this work, we propose a multi-modal deep learning
framework that integrates linguistic and temporal features for early detection
of mental health crises. Our approach is based on the method that utilizes a
BiLSTM network both for text and temporal feature analysis, modeling sequential
dependencies in a different manner, capturing contextual patterns quite well.
This work includes a cross-modal attention approach that allows fusion of such
outputs into context-aware classification of mental health conditions. The
model was then trained and evaluated on a dataset of labeled Reddit posts
preprocessed using text preprocessing, scaling of temporal features, and
encoding of labels. Experimental results indicate that the proposed
architecture performs better compared to traditional models with a validation
accuracy of 74.55% and F1-Score of 0.7376. This study presents the importance
of multi-modal learning for mental health detection and provides a baseline for
further improvements by using more advanced attention mechanisms and other data
modalities.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 23:08:08 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Saeed",
"Qasim Bin",
""
],
[
"Ahmed",
"Ijaz",
""
]
]
| TITLE: Early Detection of Mental Health Issues Using Social Media Posts
ABSTRACT: The increasing prevalence of mental health disorders, such as depression,
anxiety, and bipolar disorder, calls for immediate need in developing tools for
early detection and intervention. Social media platforms, like Reddit,
represent a rich source of user-generated content, reflecting emotional and
behavioral patterns. In this work, we propose a multi-modal deep learning
framework that integrates linguistic and temporal features for early detection
of mental health crises. Our approach is based on the method that utilizes a
BiLSTM network both for text and temporal feature analysis, modeling sequential
dependencies in a different manner, capturing contextual patterns quite well.
This work includes a cross-modal attention approach that allows fusion of such
outputs into context-aware classification of mental health conditions. The
model was then trained and evaluated on a dataset of labeled Reddit posts
preprocessed using text preprocessing, scaling of temporal features, and
encoding of labels. Experimental results indicate that the proposed
architecture performs better compared to traditional models with a validation
accuracy of 74.55% and F1-Score of 0.7376. This study presents the importance
of multi-modal learning for mental health detection and provides a baseline for
further improvements by using more advanced attention mechanisms and other data
modalities.
| no_new_dataset | 0.946448 |
2503.07657 | Jaewoo Song | Jaewoo Song and Fangzhen Lin | SplitQuantV2: Enhancing Low-Bit Quantization of LLMs Without GPUs | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The quantization of large language models (LLMs) is crucial for deploying
them on devices with limited computational resources. While advanced
quantization algorithms offer improved performance compared to the basic linear
quantization, they typically require high-end graphics processing units (GPUs),
are often restricted to specific deep neural network (DNN) frameworks, and
require calibration datasets. This limitation poses challenges for using such
algorithms on various neural processing units (NPUs) and edge AI devices, which
have diverse model formats and frameworks. In this paper, we show SplitQuantV2,
an innovative algorithm designed to enhance low-bit linear quantization of
LLMs, can achieve results comparable to those of advanced algorithms.
SplitQuantV2 preprocesses models by splitting linear and convolution layers
into functionally equivalent, quantization-friendly structures. The algorithm's
platform-agnostic, concise, and efficient nature allows for implementation
without the need for GPUs. Our evaluation on the Llama 3.2 1B Instruct model
using the AI2's Reasoning Challenge (ARC) dataset demonstrates that
SplitQuantV2 improves the accuracy of the INT4 quantization model by 11.76%p,
matching the performance of the original floating-point model. Remarkably,
SplitQuantV2 took only 2 minutes 6 seconds to preprocess the 1B model and
perform linear INT4 quantization using only an Apple M4 CPU. SplitQuantV2
provides a practical solution for low-bit quantization on LLMs, especially when
complex, computation-intensive algorithms are inaccessible due to hardware
limitations or framework incompatibilities.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 14:59:07 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Song",
"Jaewoo",
""
],
[
"Lin",
"Fangzhen",
""
]
]
| TITLE: SplitQuantV2: Enhancing Low-Bit Quantization of LLMs Without GPUs
ABSTRACT: The quantization of large language models (LLMs) is crucial for deploying
them on devices with limited computational resources. While advanced
quantization algorithms offer improved performance compared to the basic linear
quantization, they typically require high-end graphics processing units (GPUs),
are often restricted to specific deep neural network (DNN) frameworks, and
require calibration datasets. This limitation poses challenges for using such
algorithms on various neural processing units (NPUs) and edge AI devices, which
have diverse model formats and frameworks. In this paper, we show SplitQuantV2,
an innovative algorithm designed to enhance low-bit linear quantization of
LLMs, can achieve results comparable to those of advanced algorithms.
SplitQuantV2 preprocesses models by splitting linear and convolution layers
into functionally equivalent, quantization-friendly structures. The algorithm's
platform-agnostic, concise, and efficient nature allows for implementation
without the need for GPUs. Our evaluation on the Llama 3.2 1B Instruct model
using the AI2's Reasoning Challenge (ARC) dataset demonstrates that
SplitQuantV2 improves the accuracy of the INT4 quantization model by 11.76%p,
matching the performance of the original floating-point model. Remarkably,
SplitQuantV2 took only 2 minutes 6 seconds to preprocess the 1B model and
perform linear INT4 quantization using only an Apple M4 CPU. SplitQuantV2
provides a practical solution for low-bit quantization on LLMs, especially when
complex, computation-intensive algorithms are inaccessible due to hardware
limitations or framework incompatibilities.
| no_new_dataset | 0.944485 |
2503.07664 | Fateme Nateghi Haredasht | Fateme Nateghi Haredasht, Fatemeh Amrollahi, Manoj Maddali, Nicholas
Marshall, Stephen P. Ma, Lauren N. Cooper, Richard J. Medford, Sanjat
Kanjilal, Niaz Banaei, Stanley Deresinski, Mary K. Goldstein, Steven M. Asch,
Amy Chang, Jonathan H. Chen | Antibiotic Resistance Microbiology Dataset (ARMD): A De-identified
Resource for Studying Antimicrobial Resistance Using Electronic Health
Records | null | null | null | null | q-bio.QM cs.IR cs.LG stat.AP | http://creativecommons.org/licenses/by/4.0/ | The Antibiotic Resistance Microbiology Dataset (ARMD) is a de-identified
resource derived from electronic health records (EHR) that facilitates research
into antimicrobial resistance (AMR). ARMD encompasses data from adult patients,
focusing on microbiological cultures, antibiotic susceptibilities, and
associated clinical and demographic features. Key attributes include organism
identification, susceptibility patterns for 55 antibiotics, implied
susceptibility rules, and de-identified patient information. This dataset
supports studies on antimicrobial stewardship, causal inference, and clinical
decision-making. ARMD is designed to be reusable and interoperable, promoting
collaboration and innovation in combating AMR. This paper describes the
dataset's acquisition, structure, and utility while detailing its
de-identification process.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 21:28:12 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Haredasht",
"Fateme Nateghi",
""
],
[
"Amrollahi",
"Fatemeh",
""
],
[
"Maddali",
"Manoj",
""
],
[
"Marshall",
"Nicholas",
""
],
[
"Ma",
"Stephen P.",
""
],
[
"Cooper",
"Lauren N.",
""
],
[
"Medford",
"Richard J.",
""
],
[
"Kanjilal",
"Sanjat",
""
],
[
"Banaei",
"Niaz",
""
],
[
"Deresinski",
"Stanley",
""
],
[
"Goldstein",
"Mary K.",
""
],
[
"Asch",
"Steven M.",
""
],
[
"Chang",
"Amy",
""
],
[
"Chen",
"Jonathan H.",
""
]
]
| TITLE: Antibiotic Resistance Microbiology Dataset (ARMD): A De-identified
Resource for Studying Antimicrobial Resistance Using Electronic Health
Records
ABSTRACT: The Antibiotic Resistance Microbiology Dataset (ARMD) is a de-identified
resource derived from electronic health records (EHR) that facilitates research
into antimicrobial resistance (AMR). ARMD encompasses data from adult patients,
focusing on microbiological cultures, antibiotic susceptibilities, and
associated clinical and demographic features. Key attributes include organism
identification, susceptibility patterns for 55 antibiotics, implied
susceptibility rules, and de-identified patient information. This dataset
supports studies on antimicrobial stewardship, causal inference, and clinical
decision-making. ARMD is designed to be reusable and interoperable, promoting
collaboration and innovation in combating AMR. This paper describes the
dataset's acquisition, structure, and utility while detailing its
de-identification process.
| new_dataset | 0.932944 |
2503.07669 | Rong Li | Rong Li, Tao Deng, Siwei Feng, He Huang, Juncheng Jia, Di Yuan, and
Keqin Li | WECAR: An End-Edge Collaborative Inference and Training Framework for
WiFi-Based Continuous Human Activity Recognition | arXiv admin note: text overlap with arXiv:2502.17483 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | WiFi-based human activity recognition (HAR) holds significant promise for
ubiquitous sensing in smart environments. A critical challenge lies in enabling
systems to dynamically adapt to evolving scenarios, learning new activities
without catastrophic forgetting of prior knowledge, while adhering to the
stringent computational constraints of edge devices. Current approaches
struggle to reconcile these requirements due to prohibitive storage demands for
retaining historical data and inefficient parameter utilization. We propose
WECAR, an end-edge collaborative inference and training framework for
WiFi-based continuous HAR, which decouples computational workloads to overcome
these limitations. In this framework, edge devices handle model training,
lightweight optimization, and updates, while end devices perform efficient
inference. WECAR introduces two key innovations, i.e., dynamic continual
learning with parameter efficiency and hierarchical distillation for end
deployment. For the former, we propose a transformer-based architecture
enhanced by task-specific dynamic model expansion and stability-aware selective
retraining. For the latter, we propose a dual-phase distillation mechanism that
includes multi-head self-attention relation distillation and prefix relation
distillation. We implement WECAR based on heterogeneous hardware using Jetson
Nano as edge devices and the ESP32 as end devices, respectively. Our
experiments across three public WiFi datasets reveal that WECAR not only
outperforms several state-of-the-art methods in performance and parameter
efficiency, but also achieves a substantial reduction in the model's parameter
count post-optimization without sacrificing accuracy. This validates its
practicality for resource-constrained environments.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 03:40:27 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Li",
"Rong",
""
],
[
"Deng",
"Tao",
""
],
[
"Feng",
"Siwei",
""
],
[
"Huang",
"He",
""
],
[
"Jia",
"Juncheng",
""
],
[
"Yuan",
"Di",
""
],
[
"Li",
"Keqin",
""
]
]
| TITLE: WECAR: An End-Edge Collaborative Inference and Training Framework for
WiFi-Based Continuous Human Activity Recognition
ABSTRACT: WiFi-based human activity recognition (HAR) holds significant promise for
ubiquitous sensing in smart environments. A critical challenge lies in enabling
systems to dynamically adapt to evolving scenarios, learning new activities
without catastrophic forgetting of prior knowledge, while adhering to the
stringent computational constraints of edge devices. Current approaches
struggle to reconcile these requirements due to prohibitive storage demands for
retaining historical data and inefficient parameter utilization. We propose
WECAR, an end-edge collaborative inference and training framework for
WiFi-based continuous HAR, which decouples computational workloads to overcome
these limitations. In this framework, edge devices handle model training,
lightweight optimization, and updates, while end devices perform efficient
inference. WECAR introduces two key innovations, i.e., dynamic continual
learning with parameter efficiency and hierarchical distillation for end
deployment. For the former, we propose a transformer-based architecture
enhanced by task-specific dynamic model expansion and stability-aware selective
retraining. For the latter, we propose a dual-phase distillation mechanism that
includes multi-head self-attention relation distillation and prefix relation
distillation. We implement WECAR based on heterogeneous hardware using Jetson
Nano as edge devices and the ESP32 as end devices, respectively. Our
experiments across three public WiFi datasets reveal that WECAR not only
outperforms several state-of-the-art methods in performance and parameter
efficiency, but also achieves a substantial reduction in the model's parameter
count post-optimization without sacrificing accuracy. This validates its
practicality for resource-constrained environments.
| no_new_dataset | 0.942718 |
2503.07680 | Yao Yongqiang | Yongqiang Yao, Jingru Tan, Kaihuan Liang, Feizhao Zhang, Yazhe Niu,
Jiahao Hu, Ruihao Gong, Dahua Lin, Ningyi Xu | Hierarchical Balance Packing: Towards Efficient Supervised Fine-tuning
for Long-Context LLM | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Training Long-Context Large Language Models (LLMs) is challenging, as hybrid
training with long-context and short-context data often leads to workload
imbalances. Existing works mainly use data packing to alleviate this issue but
fail to consider imbalanced attention computation and wasted communication
overhead. This paper proposes Hierarchical Balance Packing (HBP), which designs
a novel batch-construction method and training recipe to address those
inefficiencies. In particular, the HBP constructs multi-level data packing
groups, each optimized with a distinct packing length. It assigns training
samples to their optimal groups and configures each group with the most
effective settings, including sequential parallelism degree and gradient
checkpointing configuration. To effectively utilize multi-level groups of data,
we design a dynamic training pipeline specifically tailored to HBP, including
curriculum learning, adaptive sequential parallelism, and stable loss. Our
extensive experiments demonstrate that our method significantly reduces
training time over multiple datasets and open-source models while maintaining
strong performance. For the largest DeepSeek-V2 (236B) MOE model, our method
speeds up the training by 2.4$\times$ with competitive performance.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 10:52:50 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Yao",
"Yongqiang",
""
],
[
"Tan",
"Jingru",
""
],
[
"Liang",
"Kaihuan",
""
],
[
"Zhang",
"Feizhao",
""
],
[
"Niu",
"Yazhe",
""
],
[
"Hu",
"Jiahao",
""
],
[
"Gong",
"Ruihao",
""
],
[
"Lin",
"Dahua",
""
],
[
"Xu",
"Ningyi",
""
]
]
| TITLE: Hierarchical Balance Packing: Towards Efficient Supervised Fine-tuning
for Long-Context LLM
ABSTRACT: Training Long-Context Large Language Models (LLMs) is challenging, as hybrid
training with long-context and short-context data often leads to workload
imbalances. Existing works mainly use data packing to alleviate this issue but
fail to consider imbalanced attention computation and wasted communication
overhead. This paper proposes Hierarchical Balance Packing (HBP), which designs
a novel batch-construction method and training recipe to address those
inefficiencies. In particular, the HBP constructs multi-level data packing
groups, each optimized with a distinct packing length. It assigns training
samples to their optimal groups and configures each group with the most
effective settings, including sequential parallelism degree and gradient
checkpointing configuration. To effectively utilize multi-level groups of data,
we design a dynamic training pipeline specifically tailored to HBP, including
curriculum learning, adaptive sequential parallelism, and stable loss. Our
extensive experiments demonstrate that our method significantly reduces
training time over multiple datasets and open-source models while maintaining
strong performance. For the largest DeepSeek-V2 (236B) MOE model, our method
speeds up the training by 2.4$\times$ with competitive performance.
| no_new_dataset | 0.947866 |
2503.07682 | Shule Hao | Shule Hao, Junpeng Bao, Chuncheng Lu | A Time Series Multitask Framework Integrating a Large Language Model,
Pre-Trained Time Series Model, and Knowledge Graph | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Time series analysis is crucial in fields like finance, transportation, and
industry. However, traditional models often focus solely on temporal features,
limiting their ability to capture underlying information. This paper proposes a
novel time series multitask framework, called LTM, which integrates temporal
features with textual descriptions to enhance analytical and predictive
capabilities. LTM combines pre-trained time series model, large language model
(LLM), and knowledge graph to tackle time series tasks, including forecasting,
imputation, and anomaly detection. LTM achieves improved performance with a few
trainable parameters. It is very efficient and practical. LTM encodes time
series data into patches and enriches user-provided prompts using knowledge
graphs to generate enhanced prompts. A novel feature fusion method embeds
prompts into each patch encoding, which is processed by a frozen LLM, followed
by a feature enhancement module and a time decoder module. During fine-tuning
stage, cosine similarity between prompts and temporal patches is integrated
into the loss function to boost performance. Experiments on benchmark datasets
show that LTM significantly outperforms existing methods. It provides a robust
and versatile solution for time series tasks.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:25:01 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Hao",
"Shule",
""
],
[
"Bao",
"Junpeng",
""
],
[
"Lu",
"Chuncheng",
""
]
]
| TITLE: A Time Series Multitask Framework Integrating a Large Language Model,
Pre-Trained Time Series Model, and Knowledge Graph
ABSTRACT: Time series analysis is crucial in fields like finance, transportation, and
industry. However, traditional models often focus solely on temporal features,
limiting their ability to capture underlying information. This paper proposes a
novel time series multitask framework, called LTM, which integrates temporal
features with textual descriptions to enhance analytical and predictive
capabilities. LTM combines pre-trained time series model, large language model
(LLM), and knowledge graph to tackle time series tasks, including forecasting,
imputation, and anomaly detection. LTM achieves improved performance with a few
trainable parameters. It is very efficient and practical. LTM encodes time
series data into patches and enriches user-provided prompts using knowledge
graphs to generate enhanced prompts. A novel feature fusion method embeds
prompts into each patch encoding, which is processed by a frozen LLM, followed
by a feature enhancement module and a time decoder module. During fine-tuning
stage, cosine similarity between prompts and temporal patches is integrated
into the loss function to boost performance. Experiments on benchmark datasets
show that LTM significantly outperforms existing methods. It provides a robust
and versatile solution for time series tasks.
| no_new_dataset | 0.945651 |
2503.07687 | Samuel Gruffaz | Axel Roques, Samuel Gruffaz, Kyurae Kim, Alain Oliviero-Durmus,
Laurent Oudre | Personalized Convolutional Dictionary Learning of Physiological Time
Series | null | AISTATS 2025 | null | null | stat.ML cs.LG math.ST stat.TH | http://creativecommons.org/licenses/by/4.0/ | Human physiological signals tend to exhibit both global and local structures:
the former are shared across a population, while the latter reflect
inter-individual variability. For instance, kinetic measurements of the gait
cycle during locomotion present common characteristics, although idiosyncrasies
may be observed due to biomechanical disposition or pathology. To better
represent datasets with local-global structure, this work extends Convolutional
Dictionary Learning (CDL), a popular method for learning interpretable
representations, or dictionaries, of time-series data. In particular, we
propose Personalized CDL (PerCDL), in which a local dictionary models local
information as a personalized spatiotemporal transformation of a global
dictionary. The transformation is learnable and can combine operations such as
time warping and rotation. Formal computational and statistical guarantees for
PerCDL are provided and its effectiveness on synthetic and real human
locomotion data is demonstrated.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:27:21 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Roques",
"Axel",
""
],
[
"Gruffaz",
"Samuel",
""
],
[
"Kim",
"Kyurae",
""
],
[
"Oliviero-Durmus",
"Alain",
""
],
[
"Oudre",
"Laurent",
""
]
]
| TITLE: Personalized Convolutional Dictionary Learning of Physiological Time
Series
ABSTRACT: Human physiological signals tend to exhibit both global and local structures:
the former are shared across a population, while the latter reflect
inter-individual variability. For instance, kinetic measurements of the gait
cycle during locomotion present common characteristics, although idiosyncrasies
may be observed due to biomechanical disposition or pathology. To better
represent datasets with local-global structure, this work extends Convolutional
Dictionary Learning (CDL), a popular method for learning interpretable
representations, or dictionaries, of time-series data. In particular, we
propose Personalized CDL (PerCDL), in which a local dictionary models local
information as a personalized spatiotemporal transformation of a global
dictionary. The transformation is learnable and can combine operations such as
time warping and rotation. Formal computational and statistical guarantees for
PerCDL are provided and its effectiveness on synthetic and real human
locomotion data is demonstrated.
| no_new_dataset | 0.95018 |
2503.07691 | Thibaud Leteno | Thibaud Leteno, Michael Perrot, Charlotte Laclau, Antoine Gourru,
Christophe Gravier | Fair Text Classification via Transferable Representations | arXiv admin note: text overlap with arXiv:2311.12689 | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Group fairness is a central research topic in text classification, where
reaching fair treatment between sensitive groups (e.g., women and men) remains
an open challenge. We propose an approach that extends the use of the
Wasserstein Dependency Measure for learning unbiased neural text classifiers.
Given the challenge of distinguishing fair from unfair information in a text
encoder, we draw inspiration from adversarial training by inducing independence
between representations learned for the target label and those for a sensitive
attribute. We further show that Domain Adaptation can be efficiently leveraged
to remove the need for access to the sensitive attributes in the dataset we
cure. We provide both theoretical and empirical evidence that our approach is
well-founded.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 16:52:45 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Leteno",
"Thibaud",
""
],
[
"Perrot",
"Michael",
""
],
[
"Laclau",
"Charlotte",
""
],
[
"Gourru",
"Antoine",
""
],
[
"Gravier",
"Christophe",
""
]
]
| TITLE: Fair Text Classification via Transferable Representations
ABSTRACT: Group fairness is a central research topic in text classification, where
reaching fair treatment between sensitive groups (e.g., women and men) remains
an open challenge. We propose an approach that extends the use of the
Wasserstein Dependency Measure for learning unbiased neural text classifiers.
Given the challenge of distinguishing fair from unfair information in a text
encoder, we draw inspiration from adversarial training by inducing independence
between representations learned for the target label and those for a sensitive
attribute. We further show that Domain Adaptation can be efficiently leveraged
to remove the need for access to the sensitive attributes in the dataset we
cure. We provide both theoretical and empirical evidence that our approach is
well-founded.
| no_new_dataset | 0.947186 |
2503.07698 | Paul Boniol | Paul Boniol, Donato Tiano, Angela Bonifati, Themis Palpanas | Graphint: Graph-based Time Series Clustering Visualisation Tool | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | With the exponential growth of time series data across diverse domains, there
is a pressing need for effective analysis tools. Time series clustering is
important for identifying patterns in these datasets. However, prevailing
methods often encounter obstacles in maintaining data relationships and
ensuring interpretability. We present Graphint, an innovative system based on
the $k$-Graph methodology that addresses these challenges. Graphint integrates
a robust time series clustering algorithm with an interactive tool for
comparison and interpretation. More precisely, our system allows users to
compare results against competing approaches, identify discriminative
subsequences within specified datasets, and visualize the critical information
utilized by $k$-Graph to generate outputs. Overall, Graphint offers a
comprehensive solution for extracting actionable insights from complex temporal
datasets.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:20:02 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Boniol",
"Paul",
""
],
[
"Tiano",
"Donato",
""
],
[
"Bonifati",
"Angela",
""
],
[
"Palpanas",
"Themis",
""
]
]
| TITLE: Graphint: Graph-based Time Series Clustering Visualisation Tool
ABSTRACT: With the exponential growth of time series data across diverse domains, there
is a pressing need for effective analysis tools. Time series clustering is
important for identifying patterns in these datasets. However, prevailing
methods often encounter obstacles in maintaining data relationships and
ensuring interpretability. We present Graphint, an innovative system based on
the $k$-Graph methodology that addresses these challenges. Graphint integrates
a robust time series clustering algorithm with an interactive tool for
comparison and interpretation. More precisely, our system allows users to
compare results against competing approaches, identify discriminative
subsequences within specified datasets, and visualize the critical information
utilized by $k$-Graph to generate outputs. Overall, Graphint offers a
comprehensive solution for extracting actionable insights from complex temporal
datasets.
| no_new_dataset | 0.951997 |
2503.07701 | Mark Niklas M\"uller | Konstantinos Vergopoulos, Mark Niklas M\"uller, Martin Vechev | Automated Benchmark Generation for Repository-Level Coding Tasks | Accepted at DL4C@ICLR'25 and FMWild@ICLR'25 | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Code Agent development is an extremely active research area, where a reliable
performance metric is critical for tracking progress and guiding new
developments. This demand is underscored by the meteoric rise in popularity of
SWE-Bench. This benchmark challenges code agents to generate patches addressing
GitHub issues given the full repository as context. The correctness of
generated patches is then evaluated by executing a human-written test suite
extracted from the repository after the issue's resolution. However,
constructing benchmarks like SWE-Bench requires substantial manual effort to
set up historically accurate execution environments for testing. Crucially,
this severely limits the number of considered repositories, e.g., just 12 for
SWE-Bench. Considering so few repositories, selected for their popularity runs
the risk of leading to a distributional mismatch, i.e., the measured
performance may not be representative of real-world scenarios potentially
misguiding development efforts. In this work, we address this challenge and
introduce SetUpAgent, a fully automated system capable of historically accurate
dependency setup, test execution, and result parsing. Using SetUpAgent, we
generate two new datasets: (i) SWEE-Bench an extended version of SWE-Bench
encompassing hundreds of repositories, and (ii) SWA-Bench a benchmark focusing
on applications rather than libraries. Comparing these datasets to SWE-Bench
with respect to their characteristics and code agent performance, we find
significant distributional differences, including lower issue description
quality and detail level, higher fix complexity, and most importantly up to 40%
lower agent success rates.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:42:49 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Vergopoulos",
"Konstantinos",
""
],
[
"Müller",
"Mark Niklas",
""
],
[
"Vechev",
"Martin",
""
]
]
| TITLE: Automated Benchmark Generation for Repository-Level Coding Tasks
ABSTRACT: Code Agent development is an extremely active research area, where a reliable
performance metric is critical for tracking progress and guiding new
developments. This demand is underscored by the meteoric rise in popularity of
SWE-Bench. This benchmark challenges code agents to generate patches addressing
GitHub issues given the full repository as context. The correctness of
generated patches is then evaluated by executing a human-written test suite
extracted from the repository after the issue's resolution. However,
constructing benchmarks like SWE-Bench requires substantial manual effort to
set up historically accurate execution environments for testing. Crucially,
this severely limits the number of considered repositories, e.g., just 12 for
SWE-Bench. Considering so few repositories, selected for their popularity runs
the risk of leading to a distributional mismatch, i.e., the measured
performance may not be representative of real-world scenarios potentially
misguiding development efforts. In this work, we address this challenge and
introduce SetUpAgent, a fully automated system capable of historically accurate
dependency setup, test execution, and result parsing. Using SetUpAgent, we
generate two new datasets: (i) SWEE-Bench an extended version of SWE-Bench
encompassing hundreds of repositories, and (ii) SWA-Bench a benchmark focusing
on applications rather than libraries. Comparing these datasets to SWE-Bench
with respect to their characteristics and code agent performance, we find
significant distributional differences, including lower issue description
quality and detail level, higher fix complexity, and most importantly up to 40%
lower agent success rates.
| no_new_dataset | 0.869493 |
2503.07739 | Cameron Smith | Cameron Smith, Basile Van Hoorick, Vitor Guizilini, Yue Wang | SIRE: SE(3) Intrinsic Rigidity Embeddings | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motion serves as a powerful cue for scene perception and understanding by
separating independently moving surfaces and organizing the physical world into
distinct entities. We introduce SIRE, a self-supervised method for motion
discovery of objects and dynamic scene reconstruction from casual scenes by
learning intrinsic rigidity embeddings from videos. Our method trains an image
encoder to estimate scene rigidity and geometry, supervised by a simple 4D
reconstruction loss: a least-squares solver uses the estimated geometry and
rigidity to lift 2D point track trajectories into SE(3) tracks, which are
simply re-projected back to 2D and compared against the original 2D
trajectories for supervision. Crucially, our framework is fully end-to-end
differentiable and can be optimized either on video datasets to learn
generalizable image priors, or even on a single video to capture scene-specific
structure - highlighting strong data efficiency. We demonstrate the
effectiveness of our rigidity embeddings and geometry across multiple settings,
including downstream object segmentation, SE(3) rigid motion estimation, and
self-supervised depth estimation. Our findings suggest that SIRE can learn
strong geometry and motion rigidity priors from video data, with minimal
supervision.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 18:00:30 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Smith",
"Cameron",
""
],
[
"Van Hoorick",
"Basile",
""
],
[
"Guizilini",
"Vitor",
""
],
[
"Wang",
"Yue",
""
]
]
| TITLE: SIRE: SE(3) Intrinsic Rigidity Embeddings
ABSTRACT: Motion serves as a powerful cue for scene perception and understanding by
separating independently moving surfaces and organizing the physical world into
distinct entities. We introduce SIRE, a self-supervised method for motion
discovery of objects and dynamic scene reconstruction from casual scenes by
learning intrinsic rigidity embeddings from videos. Our method trains an image
encoder to estimate scene rigidity and geometry, supervised by a simple 4D
reconstruction loss: a least-squares solver uses the estimated geometry and
rigidity to lift 2D point track trajectories into SE(3) tracks, which are
simply re-projected back to 2D and compared against the original 2D
trajectories for supervision. Crucially, our framework is fully end-to-end
differentiable and can be optimized either on video datasets to learn
generalizable image priors, or even on a single video to capture scene-specific
structure - highlighting strong data efficiency. We demonstrate the
effectiveness of our rigidity embeddings and geometry across multiple settings,
including downstream object segmentation, SE(3) rigid motion estimation, and
self-supervised depth estimation. Our findings suggest that SIRE can learn
strong geometry and motion rigidity priors from video data, with minimal
supervision.
| no_new_dataset | 0.94887 |
2503.07743 | Jo\~ao Carlos Virgolino Soares | Michael Adlerstein, Jo\~ao Carlos Virgolino Soares, Angelo Bratta,
Claudio Semini | SANDRO: a Robust Solver with a Splitting Strategy for Point Cloud
Registration | Accepted to the IEEE International Conference on Robotics and
Automation (ICRA) 2025 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Point cloud registration is a critical problem in computer vision and
robotics, especially in the field of navigation. Current methods often fail
when faced with high outlier rates or take a long time to converge to a
suitable solution. In this work, we introduce a novel algorithm for point cloud
registration called SANDRO (Splitting strategy for point cloud Alignment using
Non-convex anD Robust Optimization), which combines an Iteratively Reweighted
Least Squares (IRLS) framework with a robust loss function with graduated
non-convexity. This approach is further enhanced by a splitting strategy
designed to handle high outlier rates and skewed distributions of outliers.
SANDRO is capable of addressing important limitations of existing methods, such
as in challenging scenarios where the presence of high outlier rates and point
cloud symmetries significantly hinder convergence. SANDRO achieves superior
performance in terms of success rate when compared to the state-of-the-art
methods, demonstrating a 20% improvement over the current state of the art when
tested on the Redwood real dataset and a 60% improvement when tested on synthetic
data.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 18:00:47 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Adlerstein",
"Michael",
""
],
[
"Soares",
"João Carlos Virgolino",
""
],
[
"Bratta",
"Angelo",
""
],
[
"Semini",
"Claudio",
""
]
]
| TITLE: SANDRO: a Robust Solver with a Splitting Strategy for Point Cloud
Registration
ABSTRACT: Point cloud registration is a critical problem in computer vision and
robotics, especially in the field of navigation. Current methods often fail
when faced with high outlier rates or take a long time to converge to a
suitable solution. In this work, we introduce a novel algorithm for point cloud
registration called SANDRO (Splitting strategy for point cloud Alignment using
Non-convex anD Robust Optimization), which combines an Iteratively Reweighted
Least Squares (IRLS) framework with a robust loss function with graduated
non-convexity. This approach is further enhanced by a splitting strategy
designed to handle high outlier rates and skewed distributions of outliers.
SANDRO is capable of addressing important limitations of existing methods, such
as in challenging scenarios where the presence of high outlier rates and point
cloud symmetries significantly hinder convergence. SANDRO achieves superior
performance in terms of success rate when compared to the state-of-the-art
methods, demonstrating a 20% improvement over the current state of the art when
tested on the Redwood real dataset and a 60% improvement when tested on synthetic
data.
| no_new_dataset | 0.951233 |
2503.07766 | Badhan Kumar Das | Badhan Kumar Das, Ajay Singh, Saahil Islam, Gengyan Zhao, Andreas
Maier | SegResMamba: An Efficient Architecture for 3D Medical Image Segmentation | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The Transformer architecture has opened a new paradigm in the domain of deep
learning with its ability to model long-range dependencies and capture global
context and has outpaced traditional Convolutional Neural Networks (CNNs) in
many aspects. However, applying Transformer models to 3D medical image datasets
presents significant challenges due to their high training time and memory
requirements, which not only hinder scalability but also contribute to an elevated
CO$_2$ footprint. This has led to an exploration of alternative models that can
maintain or even improve performance while being more efficient and
environmentally sustainable. Recent advancements in Structured State Space
Models (SSMs) effectively address some of the inherent limitations of
Transformers, particularly their high memory and computational demands.
Inspired by these advancements, we propose an efficient 3D segmentation model
for medical imaging called SegResMamba, designed to reduce computation
complexity, memory usage, training time, and environmental impact while
maintaining high performance. Our model uses less than half the memory during
training compared to other state-of-the-art (SOTA) architectures, achieving
comparable performance with significantly reduced resource demands.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 18:40:28 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Das",
"Badhan Kumar",
""
],
[
"Singh",
"Ajay",
""
],
[
"Islam",
"Saahil",
""
],
[
"Zhao",
"Gengyan",
""
],
[
"Maier",
"Andreas",
""
]
]
| TITLE: SegResMamba: An Efficient Architecture for 3D Medical Image Segmentation
ABSTRACT: The Transformer architecture has opened a new paradigm in the domain of deep
learning with its ability to model long-range dependencies and capture global
context and has outpaced traditional Convolutional Neural Networks (CNNs) in
many aspects. However, applying Transformer models to 3D medical image datasets
presents significant challenges due to their high training time and memory
requirements, which not only hinder scalability but also contribute to an elevated
CO$_2$ footprint. This has led to an exploration of alternative models that can
maintain or even improve performance while being more efficient and
environmentally sustainable. Recent advancements in Structured State Space
Models (SSMs) effectively address some of the inherent limitations of
Transformers, particularly their high memory and computational demands.
Inspired by these advancements, we propose an efficient 3D segmentation model
for medical imaging called SegResMamba, designed to reduce computation
complexity, memory usage, training time, and environmental impact while
maintaining high performance. Our model uses less than half the memory during
training compared to other state-of-the-art (SOTA) architectures, achieving
comparable performance with significantly reduced resource demands.
| no_new_dataset | 0.951188 |
2503.07770 | Miguel Silva | Jos\'e Gon\c{c}alves, Miguel Silva, Bernardo Cabral, Tiago Dias, Eva
Maia, Isabel Pra\c{c}a, Ricardo Severino, Lu\'is Lino Ferreira | Evaluating LLaMA 3.2 for Software Vulnerability Detection | 14 pages, 4 tables, EICC 2025: European Interdisciplinary
Cybersecurity Conference 2025 | null | null | null | cs.LG cs.AI cs.CR cs.SE | http://creativecommons.org/licenses/by/4.0/ | Deep Learning (DL) has emerged as a powerful tool for vulnerability
detection, often outperforming traditional solutions. However, developing
effective DL models requires large amounts of real-world data, which can be
difficult to obtain in sufficient quantities. To address this challenge, the
DiverseVul dataset has been curated as the largest dataset of vulnerable and
non-vulnerable C/C++ functions extracted exclusively from real-world projects.
Its goal is to provide high-quality, large-scale samples for training DL
models. However, during our study several inconsistencies were identified in
the raw dataset while applying pre-processing techniques, highlighting the need
for a refined version. In this work, we present a refined version of the
DiverseVul dataset, which is used to fine-tune a large language model, LLaMA 3.2, for
vulnerability detection. Experimental results show that the use of
pre-processing techniques led to an improvement in performance, with the model
achieving an F1-Score of 66%, a competitive result when compared to our
baseline, which achieved a 47% F1-Score in software vulnerability detection.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 18:47:41 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Gonçalves",
"José",
""
],
[
"Silva",
"Miguel",
""
],
[
"Cabral",
"Bernardo",
""
],
[
"Dias",
"Tiago",
""
],
[
"Maia",
"Eva",
""
],
[
"Praça",
"Isabel",
""
],
[
"Severino",
"Ricardo",
""
],
[
"Ferreira",
"Luís Lino",
""
]
]
| TITLE: Evaluating LLaMA 3.2 for Software Vulnerability Detection
ABSTRACT: Deep Learning (DL) has emerged as a powerful tool for vulnerability
detection, often outperforming traditional solutions. However, developing
effective DL models requires large amounts of real-world data, which can be
difficult to obtain in sufficient quantities. To address this challenge, the
DiverseVul dataset has been curated as the largest dataset of vulnerable and
non-vulnerable C/C++ functions extracted exclusively from real-world projects.
Its goal is to provide high-quality, large-scale samples for training DL
models. However, during our study several inconsistencies were identified in
the raw dataset while applying pre-processing techniques, highlighting the need
for a refined version. In this work, we present a refined version of the
DiverseVul dataset, which is used to fine-tune a large language model, LLaMA 3.2, for
vulnerability detection. Experimental results show that the use of
pre-processing techniques led to an improvement in performance, with the model
achieving an F1-Score of 66%, a competitive result when compared to our
baseline, which achieved a 47% F1-Score in software vulnerability detection.
| new_dataset | 0.961134 |
2503.07772 | Liwei Che | Liwei Che, Tony Qingze Liu, Jing Jia, Weiyi Qin, Ruixiang Tang,
Vladimir Pavlovic | EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory
Image Tokens | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Despite their remarkable potential, Large Vision-Language Models (LVLMs)
still face challenges with object hallucination, a problem where their
generated outputs mistakenly incorporate objects that do not actually exist.
Although most works focus on addressing this issue within the language-model
backbone, our work shifts the focus to the image input source, investigating
how specific image tokens contribute to hallucinations. Our analysis reveals a
striking finding: a small subset of image tokens with high attention scores are
the primary drivers of object hallucination. By removing these hallucinatory
image tokens (only 1.5% of all image tokens), the issue can be effectively
mitigated. This finding holds consistently across different models and
datasets. Building on this insight, we introduce EAZY, a novel, training-free
method that automatically identifies and Eliminates hAllucinations by Zeroing
out hallucinatorY image tokens. We utilize EAZY for unsupervised object
hallucination detection, achieving 15% improvement compared to previous
methods. Additionally, EAZY demonstrates remarkable effectiveness in mitigating
hallucinations while preserving model utility and seamlessly adapting to
various LVLM architectures.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 18:53:39 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Che",
"Liwei",
""
],
[
"Liu",
"Tony Qingze",
""
],
[
"Jia",
"Jing",
""
],
[
"Qin",
"Weiyi",
""
],
[
"Tang",
"Ruixiang",
""
],
[
"Pavlovic",
"Vladimir",
""
]
]
| TITLE: EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory
Image Tokens
ABSTRACT: Despite their remarkable potential, Large Vision-Language Models (LVLMs)
still face challenges with object hallucination, a problem where their
generated outputs mistakenly incorporate objects that do not actually exist.
Although most works focus on addressing this issue within the language-model
backbone, our work shifts the focus to the image input source, investigating
how specific image tokens contribute to hallucinations. Our analysis reveals a
striking finding: a small subset of image tokens with high attention scores are
the primary drivers of object hallucination. By removing these hallucinatory
image tokens (only 1.5% of all image tokens), the issue can be effectively
mitigated. This finding holds consistently across different models and
datasets. Building on this insight, we introduce EAZY, a novel, training-free
method that automatically identifies and Eliminates hAllucinations by Zeroing
out hallucinatorY image tokens. We utilize EAZY for unsupervised object
hallucination detection, achieving 15% improvement compared to previous
methods. Additionally, EAZY demonstrates remarkable effectiveness in mitigating
hallucinations while preserving model utility and seamlessly adapting to
various LVLM architectures.
| no_new_dataset | 0.942929 |
2503.07775 | Debabrota Basu | Debabrota Basu, Debarshi Chanda | Sublinear Algorithms for Wasserstein and Total Variation Distances:
Applications to Fairness and Privacy Auditing | null | null | null | null | cs.LG cs.CY cs.DS stat.CO | http://creativecommons.org/licenses/by/4.0/ | Resource-efficiently computing representations of probability distributions
and the distances between them while only having access to the samples is a
fundamental and useful problem across mathematical sciences. In this paper, we
propose a generic algorithmic framework to estimate the PDF and CDF of any
sub-Gaussian distribution while the samples from them arrive in a stream. We
compute mergeable summaries of distributions from the stream of samples that
require sublinear space w.r.t. the number of observed samples. This allows us
to estimate Wasserstein and Total Variation (TV) distances between any two
sub-Gaussian distributions while samples arrive in streams and from multiple
sources (e.g., federated learning). Our algorithms significantly improve on
existing methods for distance estimation, which incur super-linear time and linear
space complexities. In addition, we use the proposed estimators of Wasserstein
and TV distances to audit the fairness and privacy of the ML algorithms. We
empirically demonstrate the efficiency of the algorithms for estimating these
distances and auditing using both synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 18:57:48 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Basu",
"Debabrota",
""
],
[
"Chanda",
"Debarshi",
""
]
]
| TITLE: Sublinear Algorithms for Wasserstein and Total Variation Distances:
Applications to Fairness and Privacy Auditing
ABSTRACT: Resource-efficiently computing representations of probability distributions
and the distances between them while only having access to the samples is a
fundamental and useful problem across mathematical sciences. In this paper, we
propose a generic algorithmic framework to estimate the PDF and CDF of any
sub-Gaussian distribution while the samples from them arrive in a stream. We
compute mergeable summaries of distributions from the stream of samples that
require sublinear space w.r.t. the number of observed samples. This allows us
to estimate Wasserstein and Total Variation (TV) distances between any two
sub-Gaussian distributions while samples arrive in streams and from multiple
sources (e.g., federated learning). Our algorithms significantly improve on
existing methods for distance estimation, which incur super-linear time and linear
space complexities. In addition, we use the proposed estimators of Wasserstein
and TV distances to audit the fairness and privacy of the ML algorithms. We
empirically demonstrate the efficiency of the algorithms for estimating these
distances and auditing using both synthetic and real-world datasets.
| no_new_dataset | 0.948298 |
2503.07799 | Pramit Saha | Pramit Saha, Divyanshu Mishra, Netzahualcoyotl Hernandez-Cruz, Olga
Patey, Aris Papageorghiou, Yuki M. Asano, J. Alison Noble | Self-supervised Normality Learning and Divergence Vector-guided Model
Merging for Zero-shot Congenital Heart Disease Detection in Fetal Ultrasound
Videos | null | null | null | null | cs.CV cs.AI cs.ET cs.LG | http://creativecommons.org/licenses/by/4.0/ | Congenital Heart Disease (CHD) is one of the leading causes of fetal
mortality, yet the scarcity of labeled CHD data and strict privacy regulations
surrounding fetal ultrasound (US) imaging present significant challenges for
the development of deep learning-based models for CHD detection. Centralised
collection of large real-world datasets for rare conditions, such as CHD, from
large populations requires significant co-ordination and resource. In addition,
data governance rules increasingly prevent data sharing between sites. To
address these challenges, we introduce, for the first time, a novel
privacy-preserving, zero-shot CHD detection framework that formulates CHD
detection as a normality modeling problem integrated with model merging. In our
framework dubbed Sparse Tube Ultrasound Distillation (STUD), each hospital site
first trains a sparse video tube-based self-supervised video anomaly detection
(VAD) model on normal fetal heart US clips with self-distillation loss. This
enables site-specific models to independently learn the distribution of healthy
cases. To aggregate knowledge across the decentralized models while maintaining
privacy, we propose a Divergence Vector-Guided Model Merging approach,
DivMerge, that combines site-specific models into a single VAD model without
data exchange. Our approach preserves domain-agnostic rich spatio-temporal
representations, ensuring generalization to unseen CHD cases. We evaluated our
approach on real-world fetal US data collected from 5 hospital sites. Our
merged model outperformed site-specific models by 23.77% and 30.13% in accuracy
and F1-score respectively on external test sets.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 19:27:15 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Saha",
"Pramit",
""
],
[
"Mishra",
"Divyanshu",
""
],
[
"Hernandez-Cruz",
"Netzahualcoyotl",
""
],
[
"Patey",
"Olga",
""
],
[
"Papageorghiou",
"Aris",
""
],
[
"Asano",
"Yuki M.",
""
],
[
"Noble",
"J. Alison",
""
]
]
| TITLE: Self-supervised Normality Learning and Divergence Vector-guided Model
Merging for Zero-shot Congenital Heart Disease Detection in Fetal Ultrasound
Videos
ABSTRACT: Congenital Heart Disease (CHD) is one of the leading causes of fetal
mortality, yet the scarcity of labeled CHD data and strict privacy regulations
surrounding fetal ultrasound (US) imaging present significant challenges for
the development of deep learning-based models for CHD detection. Centralised
collection of large real-world datasets for rare conditions, such as CHD, from
large populations requires significant co-ordination and resource. In addition,
data governance rules increasingly prevent data sharing between sites. To
address these challenges, we introduce, for the first time, a novel
privacy-preserving, zero-shot CHD detection framework that formulates CHD
detection as a normality modeling problem integrated with model merging. In our
framework dubbed Sparse Tube Ultrasound Distillation (STUD), each hospital site
first trains a sparse video tube-based self-supervised video anomaly detection
(VAD) model on normal fetal heart US clips with self-distillation loss. This
enables site-specific models to independently learn the distribution of healthy
cases. To aggregate knowledge across the decentralized models while maintaining
privacy, we propose a Divergence Vector-Guided Model Merging approach,
DivMerge, that combines site-specific models into a single VAD model without
data exchange. Our approach preserves domain-agnostic rich spatio-temporal
representations, ensuring generalization to unseen CHD cases. We evaluated our
approach on real-world fetal US data collected from 5 hospital sites. Our
merged model outperformed site-specific models by 23.77% and 30.13% in accuracy
and F1-score respectively on external test sets.
| no_new_dataset | 0.950732 |
2503.07813 | Mozhgan Hadadi | Elvis Kimara, Mozhgan Hadadi, Jackson Godbersen, Aditya Balu, Talukder
Jubery, Yawei Li, Adarsh Krishnamurthy, Patrick S. Schnable, and Baskar
Ganapathysubramanian | AgriField3D: A Curated 3D Point Cloud and Procedural Model Dataset of
Field-Grown Maize from a Diversity Panel | Elvis Kimara and Mozhgan Hadadi contributed equally to this work | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The application of artificial intelligence (AI) in three-dimensional (3D)
agricultural research, particularly for maize, has been limited by the scarcity
of large-scale, diverse datasets. While 2D image datasets are abundant, they
fail to capture essential structural details such as leaf architecture, plant
volume, and spatial arrangements that 3D data provide. To address this
limitation, we present AgriField3D
(https://baskargroup.github.io/AgriField3D/), a curated dataset of 3D point
clouds of field-grown maize plants from a diverse genetic panel, designed to be
AI-ready for advancing agricultural research. Our dataset comprises over 1,000
high-quality point clouds collected using a Terrestrial Laser Scanner,
complemented by procedural models that provide structured, parametric
representations of maize plants. These procedural models, generated using
Non-Uniform Rational B-Splines (NURBS) and optimized via a two-step process
combining Particle Swarm Optimization (PSO) and differentiable programming,
enable precise, scalable reconstructions of leaf surfaces and plant
architectures. To enhance usability, we performed graph-based segmentation to
isolate individual leaves and stalks, ensuring consistent labeling across all
samples. We also conducted rigorous manual quality control on all datasets,
correcting errors in segmentation, ensuring accurate leaf ordering, and
validating metadata annotations. The dataset further includes metadata
detailing plant morphology and quality, alongside multi-resolution subsampled
versions (100k, 50k, 10k points) optimized for various computational needs. By
integrating point cloud data of field-grown plants with high-fidelity
procedural models and ensuring meticulous manual validation, AgriField3D
provides a comprehensive foundation for AI-driven phenotyping, plant structural
analysis, and 3D applications in agricultural research.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 19:53:20 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Kimara",
"Elvis",
""
],
[
"Hadadi",
"Mozhgan",
""
],
[
"Godbersen",
"Jackson",
""
],
[
"Balu",
"Aditya",
""
],
[
"Jubery",
"Talukder",
""
],
[
"Li",
"Yawei",
""
],
[
"Krishnamurthy",
"Adarsh",
""
],
[
"Schnable",
"Patrick S.",
""
],
[
"Ganapathysubramanian",
"Baskar",
""
]
]
| TITLE: AgriField3D: A Curated 3D Point Cloud and Procedural Model Dataset of
Field-Grown Maize from a Diversity Panel
ABSTRACT: The application of artificial intelligence (AI) in three-dimensional (3D)
agricultural research, particularly for maize, has been limited by the scarcity
of large-scale, diverse datasets. While 2D image datasets are abundant, they
fail to capture essential structural details such as leaf architecture, plant
volume, and spatial arrangements that 3D data provide. To address this
limitation, we present AgriField3D
(https://baskargroup.github.io/AgriField3D/), a curated dataset of 3D point
clouds of field-grown maize plants from a diverse genetic panel, designed to be
AI-ready for advancing agricultural research. Our dataset comprises over 1,000
high-quality point clouds collected using a Terrestrial Laser Scanner,
complemented by procedural models that provide structured, parametric
representations of maize plants. These procedural models, generated using
Non-Uniform Rational B-Splines (NURBS) and optimized via a two-step process
combining Particle Swarm Optimization (PSO) and differentiable programming,
enable precise, scalable reconstructions of leaf surfaces and plant
architectures. To enhance usability, we performed graph-based segmentation to
isolate individual leaves and stalks, ensuring consistent labeling across all
samples. We also conducted rigorous manual quality control on all datasets,
correcting errors in segmentation, ensuring accurate leaf ordering, and
validating metadata annotations. The dataset further includes metadata
detailing plant morphology and quality, alongside multi-resolution subsampled
versions (100k, 50k, 10k points) optimized for various computational needs. By
integrating point cloud data of field-grown plants with high-fidelity
procedural models and ensuring meticulous manual validation, AgriField3D
provides a comprehensive foundation for AI-driven phenotyping, plant structural
analysis, and 3D applications in agricultural research.
| no_new_dataset | 0.847968 |
2503.07821 | Anh Kiet Duong | Anh-Kiet Duong | Elderly Activity Recognition in the Wild: Results from the EAR Challenge | 2 pages, EAR-CV4Smalls@WACV2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | This paper presents our solution for the Elderly Action Recognition (EAR)
Challenge, part of the Computer Vision for Smalls Workshop at WACV 2025. The
competition focuses on recognizing Activities of Daily Living (ADLs) performed
by the elderly, covering six action categories with a diverse dataset. Our
approach builds upon a state-of-the-art action recognition model, fine-tuned
through transfer learning on elderly-specific datasets to enhance adaptability.
To improve generalization and mitigate dataset bias, we carefully curated
training data from multiple publicly available sources and applied targeted
pre-processing techniques. Our solution currently achieves 0.81455 accuracy on
the public leaderboard, highlighting its effectiveness in classifying elderly
activities. Source codes are publicly available at
https://github.com/ffyyytt/EAR-WACV25-DAKiet-TSM.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 20:07:05 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Duong",
"Anh-Kiet",
""
]
]
| TITLE: Elderly Activity Recognition in the Wild: Results from the EAR Challenge
ABSTRACT: This paper presents our solution for the Elderly Action Recognition (EAR)
Challenge, part of the Computer Vision for Smalls Workshop at WACV 2025. The
competition focuses on recognizing Activities of Daily Living (ADLs) performed
by the elderly, covering six action categories with a diverse dataset. Our
approach builds upon a state-of-the-art action recognition model, fine-tuned
through transfer learning on elderly-specific datasets to enhance adaptability.
To improve generalization and mitigate dataset bias, we carefully curated
training data from multiple publicly available sources and applied targeted
pre-processing techniques. Our solution currently achieves 0.81455 accuracy on
the public leaderboard, highlighting its effectiveness in classifying elderly
activities. Source codes are publicly available at
https://github.com/ffyyytt/EAR-WACV25-DAKiet-TSM.
| no_new_dataset | 0.94545 |
2503.07823 | Maurizio Ferrari Dacrema | Maurizio Ferrari Dacrema, Michael Benigni and Nicola Ferro | Reproducibility and Artifact Consistency of the SIGIR 2022 Recommender
Systems Papers Based on Message Passing | null | null | null | null | cs.IR cs.DL cs.LG cs.NE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Graph-based techniques relying on neural networks and embeddings have gained
attention as a way to develop Recommender Systems (RS), with several papers on
the topic presented at SIGIR 2022 and 2023. Given the importance of ensuring
that published research is methodologically sound and reproducible, in this
paper we analyze 10 graph-based RS papers, most of which were published at
SIGIR 2022, and assess their impact on subsequent work published in SIGIR 2023.
Our analysis reveals several critical points that require attention: (i) the
prevalence of bad practices, such as erroneous data splits or information
leakage between training and testing data, which call into question the
validity of the results; (ii) frequent inconsistencies between the provided
artifacts (source code and data) and their descriptions in the paper, causing
uncertainty about what is actually being evaluated; and (iii) the preference
for new or complex baselines that are weaker compared to simpler ones, creating
the impression of continuous improvement even when, particularly for the
Amazon-Book dataset, the state-of-the-art has significantly worsened. Due to
these issues, we are unable to confirm the claims made in most of the papers we
examined and attempted to reproduce.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 20:09:04 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Dacrema",
"Maurizio Ferrari",
""
],
[
"Benigni",
"Michael",
""
],
[
"Ferro",
"Nicola",
""
]
]
| TITLE: Reproducibility and Artifact Consistency of the SIGIR 2022 Recommender
Systems Papers Based on Message Passing
ABSTRACT: Graph-based techniques relying on neural networks and embeddings have gained
attention as a way to develop Recommender Systems (RS), with several papers on
the topic presented at SIGIR 2022 and 2023. Given the importance of ensuring
that published research is methodologically sound and reproducible, in this
paper we analyze 10 graph-based RS papers, most of which were published at
SIGIR 2022, and assess their impact on subsequent work published in SIGIR 2023.
Our analysis reveals several critical points that require attention: (i) the
prevalence of bad practices, such as erroneous data splits or information
leakage between training and testing data, which call into question the
validity of the results; (ii) frequent inconsistencies between the provided
artifacts (source code and data) and their descriptions in the paper, causing
uncertainty about what is actually being evaluated; and (iii) the preference
for new or complex baselines that are weaker compared to simpler ones, creating
the impression of continuous improvement even when, particularly for the
Amazon-Book dataset, the state-of-the-art has significantly worsened. Due to
these issues, we are unable to confirm the claims made in most of the papers we
examined and attempted to reproduce.
| no_new_dataset | 0.949012 |
2503.07825 | Prarthana Bhattacharyya | Prarthana Bhattacharyya, Joshua Mitton, Ryan Page, Owen Morgan, Oliver
Powell, Benjamin Menzies, Gabriel Homewood, Kemi Jacobs, Paolo Baesso, Taru
Muhonen, Richard Vigars and Louis Berridge | Helios 2.0: A Robust, Ultra-Low Power Gesture Recognition System
Optimised for Event-Sensor based Wearables | 15 pages, 17 figures. Prarthana Bhattacharyya, Joshua Mitton, Ryan
Page, Owen Morgan, and Oliver Powell contributed equally to this paper | null | null | null | cs.HC cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present an advance in wearable technology: a mobile-optimized, real-time,
ultra-low-power event camera system that enables natural hand gesture control
for smart glasses, dramatically improving user experience. While hand gesture
recognition in computer vision has advanced significantly, critical challenges
remain in creating systems that are intuitive, adaptable across diverse users
and environments, and energy-efficient enough for practical wearable
applications. Our approach tackles these challenges through carefully selected
microgestures: lateral thumb swipes across the index finger (in both
directions) and a double pinch between thumb and index fingertips. These
human-centered interactions leverage natural hand movements, ensuring intuitive
usability without requiring users to learn complex command sequences. To
overcome variability in users and environments, we developed a novel simulation
methodology that enables comprehensive domain sampling without extensive
real-world data collection. Our power-optimised architecture maintains
exceptional performance, achieving F1 scores above 80\% on benchmark datasets
featuring diverse users and environments. The resulting models operate at just
6-8 mW when exploiting the Qualcomm Snapdragon Hexagon DSP, with our 2-channel
implementation exceeding 70\% F1 accuracy and our 6-channel model surpassing
80\% F1 accuracy across all gesture classes in user studies. These results were
achieved using only synthetic training data. This improves on the
state-of-the-art for F1 accuracy by 20\% with a 25x power reduction when using
the DSP. This advancement brings the deployment of ultra-low-power vision systems
in wearable devices closer and opens new possibilities for seamless human-computer
interaction.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 20:12:06 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Bhattacharyya",
"Prarthana",
""
],
[
"Mitton",
"Joshua",
""
],
[
"Page",
"Ryan",
""
],
[
"Morgan",
"Owen",
""
],
[
"Powell",
"Oliver",
""
],
[
"Menzies",
"Benjamin",
""
],
[
"Homewood",
"Gabriel",
""
],
[
"Jacobs",
"Kemi",
""
],
[
"Baesso",
"Paolo",
""
],
[
"Muhonen",
"Taru",
""
],
[
"Vigars",
"Richard",
""
],
[
"Berridge",
"Louis",
""
]
]
| TITLE: Helios 2.0: A Robust, Ultra-Low Power Gesture Recognition System
Optimised for Event-Sensor based Wearables
ABSTRACT: We present an advance in wearable technology: a mobile-optimized, real-time,
ultra-low-power event camera system that enables natural hand gesture control
for smart glasses, dramatically improving user experience. While hand gesture
recognition in computer vision has advanced significantly, critical challenges
remain in creating systems that are intuitive, adaptable across diverse users
and environments, and energy-efficient enough for practical wearable
applications. Our approach tackles these challenges through carefully selected
microgestures: lateral thumb swipes across the index finger (in both
directions) and a double pinch between thumb and index fingertips. These
human-centered interactions leverage natural hand movements, ensuring intuitive
usability without requiring users to learn complex command sequences. To
overcome variability in users and environments, we developed a novel simulation
methodology that enables comprehensive domain sampling without extensive
real-world data collection. Our power-optimised architecture maintains
exceptional performance, achieving F1 scores above 80\% on benchmark datasets
featuring diverse users and environments. The resulting models operate at just
6-8 mW when exploiting the Qualcomm Snapdragon Hexagon DSP, with our 2-channel
implementation exceeding 70\% F1 accuracy and our 6-channel model surpassing
80\% F1 accuracy across all gesture classes in user studies. These results were
achieved using only synthetic training data. This improves on the
state-of-the-art for F1 accuracy by 20\% with a 25x power reduction when using
the DSP. This advancement brings the deployment of ultra-low-power vision systems
in wearable devices closer and opens new possibilities for seamless human-computer
interaction.
| no_new_dataset | 0.949576 |
2503.07833 | Samir Abdaljalil | Samir Abdaljalil, Hasan Kurban, Erchin Serpedin | HalluVerse25: Fine-grained Multilingual Benchmark Dataset for LLM
Hallucinations | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are increasingly used in various contexts, yet
remain prone to generating non-factual content, commonly referred to as
"hallucinations". The literature categorizes hallucinations into several types,
including entity-level, relation-level, and sentence-level hallucinations.
However, existing hallucination datasets often fail to capture fine-grained
hallucinations in multilingual settings. In this work, we introduce
HalluVerse25, a multilingual LLM hallucination dataset that categorizes
fine-grained hallucinations in English, Arabic, and Turkish. Our dataset
construction pipeline uses an LLM to inject hallucinations into factual
biographical sentences, followed by a rigorous human annotation process to
ensure data quality. We evaluate several LLMs on HalluVerse25, providing
valuable insights into how proprietary models perform in detecting
LLM-generated hallucinations across different contexts.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 20:24:07 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Abdaljalil",
"Samir",
""
],
[
"Kurban",
"Hasan",
""
],
[
"Serpedin",
"Erchin",
""
]
]
| TITLE: HalluVerse25: Fine-grained Multilingual Benchmark Dataset for LLM
Hallucinations
ABSTRACT: Large Language Models (LLMs) are increasingly used in various contexts, yet
remain prone to generating non-factual content, commonly referred to as
"hallucinations". The literature categorizes hallucinations into several types,
including entity-level, relation-level, and sentence-level hallucinations.
However, existing hallucination datasets often fail to capture fine-grained
hallucinations in multilingual settings. In this work, we introduce
HalluVerse25, a multilingual LLM hallucination dataset that categorizes
fine-grained hallucinations in English, Arabic, and Turkish. Our dataset
construction pipeline uses an LLM to inject hallucinations into factual
biographical sentences, followed by a rigorous human annotation process to
ensure data quality. We evaluate several LLMs on HalluVerse25, providing
valuable insights into how proprietary models perform in detecting
LLM-generated hallucinations across different contexts.
| new_dataset | 0.958809 |
2503.07839 | Jose Mendoza-Cortes | Austin Rodriguez and Justin S. Smith and Jose L. Mendoza-Cortes | Does Hessian Data Improve the Performance of Machine Learning
Potentials? | null | null | null | null | physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | Integrating machine learning into reactive chemistry, materials discovery,
and drug design is revolutionizing the development of novel molecules and
materials. Machine Learning Interatomic Potentials (MLIPs) accurately predict
energies and forces at quantum chemistry levels, surpassing traditional
methods. Incorporating force fitting into MLIP training significantly improves
the representation of potential-energy surfaces (PES), enhancing model
transferability and reliability. This study introduces and evaluates
incorporating Hessian matrix training into MLIPs, capturing second-order
curvature information of PES. Our analysis specifically examines MLIPs trained
solely on stable molecular geometries, assessing their extrapolation
capabilities to non-equilibrium configurations. We show that integrating
Hessian information substantially improves MLIP performance in predicting
energies, forces, and Hessians for non-equilibrium structures. Hessian-trained
MLIPs notably enhance reaction pathway modeling, transition state
identification, and vibrational spectra accuracy, benefiting molecular dynamics
simulations and Nudged Elastic Band (NEB) calculations. By comparing models
trained with various combinations of energy, force, and Hessian data on a
small-molecule reactive dataset, we demonstrate Hessian inclusion leads to
improved accuracy in reaction modeling and vibrational analyses while
simultaneously reducing the total data needed for effective training. The
primary trade-off is increased computational expense, as Hessian training
demands more resources than conventional methods. Our results offer
comprehensive insights into the strengths and limitations of Hessian
integration in MLIP training, enabling practitioners in computational chemistry
to make informed decisions aligned with their research goals and available
computational resources.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 20:36:17 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Rodriguez",
"Austin",
""
],
[
"Smith",
"Justin S.",
""
],
[
"Mendoza-Cortes",
"Jose L.",
""
]
]
| TITLE: Does Hessian Data Improve the Performance of Machine Learning
Potentials?
ABSTRACT: Integrating machine learning into reactive chemistry, materials discovery,
and drug design is revolutionizing the development of novel molecules and
materials. Machine Learning Interatomic Potentials (MLIPs) accurately predict
energies and forces at quantum chemistry levels, surpassing traditional
methods. Incorporating force fitting into MLIP training significantly improves
the representation of potential-energy surfaces (PES), enhancing model
transferability and reliability. This study introduces and evaluates
incorporating Hessian matrix training into MLIPs, capturing second-order
curvature information of PES. Our analysis specifically examines MLIPs trained
solely on stable molecular geometries, assessing their extrapolation
capabilities to non-equilibrium configurations. We show that integrating
Hessian information substantially improves MLIP performance in predicting
energies, forces, and Hessians for non-equilibrium structures. Hessian-trained
MLIPs notably enhance reaction pathway modeling, transition state
identification, and vibrational spectra accuracy, benefiting molecular dynamics
simulations and Nudged Elastic Band (NEB) calculations. By comparing models
trained with various combinations of energy, force, and Hessian data on a
small-molecule reactive dataset, we demonstrate Hessian inclusion leads to
improved accuracy in reaction modeling and vibrational analyses while
simultaneously reducing the total data needed for effective training. The
primary trade-off is increased computational expense, as Hessian training
demands more resources than conventional methods. Our results offer
comprehensive insights into the strengths and limitations of Hessian
integration in MLIP training, enabling practitioners in computational chemistry
to make informed decisions aligned with their research goals and available
computational resources.
| no_new_dataset | 0.950824 |
2503.07851 | Guillaume Qu\'etant | Guillaume Qu\'etant, Pavlo Molchanov, Slava Voloshynovskiy | TwinTURBO: Semi-Supervised Fine-Tuning of Foundation Models via Mutual
Information Decompositions for Downstream Task and Latent Spaces | null | null | null | null | cs.LG cs.CV cs.IT math.IT stat.ML | http://creativecommons.org/licenses/by/4.0/ | We present a semi-supervised fine-tuning framework for foundation models that
utilises mutual information decomposition to address the challenges of training
with a limited amount of labelled data. Our approach derives two distinct lower
bounds: i) for the downstream task space, such as classification, optimised
using conditional and marginal cross-entropy alongside Kullback-Leibler
divergence, and ii) for the latent space representation, regularised and
aligned using a contrastive-like decomposition. This fine-tuning strategy
retains the pre-trained structure of the foundation model, modifying only a
specialised projector module comprising a small transformer and a token
aggregation technique. Experiments on several datasets demonstrate significant
improvements in classification tasks under extremely low-labelled conditions by
effectively leveraging unlabelled data.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 20:56:54 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Quétant",
"Guillaume",
""
],
[
"Molchanov",
"Pavlo",
""
],
[
"Voloshynovskiy",
"Slava",
""
]
]
| TITLE: TwinTURBO: Semi-Supervised Fine-Tuning of Foundation Models via Mutual
Information Decompositions for Downstream Task and Latent Spaces
ABSTRACT: We present a semi-supervised fine-tuning framework for foundation models that
utilises mutual information decomposition to address the challenges of training
with a limited amount of labelled data. Our approach derives two distinct lower
bounds: i) for the downstream task space, such as classification, optimised
using conditional and marginal cross-entropy alongside Kullback-Leibler
divergence, and ii) for the latent space representation, regularised and
aligned using a contrastive-like decomposition. This fine-tuning strategy
retains the pre-trained structure of the foundation model, modifying only a
specialised projector module comprising a small transformer and a token
aggregation technique. Experiments on several datasets demonstrate significant
improvements in classification tasks under extremely low-labelled conditions by
effectively leveraging unlabelled data.
| no_new_dataset | 0.947137 |
2503.07853 | Depanshu Sani | Depanshu Sani and Saket Anand | Learning and Evaluating Hierarchical Feature Representations | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchy-aware representations ensure that the semantically closer classes
are mapped closer in the feature space, thereby reducing the severity of
mistakes while enabling consistent coarse-level class predictions. Towards this
end, we propose a novel framework, Hierarchical Composition of Orthogonal
Subspaces (Hier-COS), which learns to map deep feature embeddings into a vector
space that is, by design, consistent with the structure of a given taxonomy
tree. Our approach augments neural network backbones with a simple
transformation module that maps learned discriminative features to subspaces
defined using a fixed orthogonal frame. This construction naturally improves
the severity of mistakes and promotes hierarchical consistency. Furthermore, we
highlight the fundamental limitations of existing hierarchical evaluation
metrics popularly used by the vision community and introduce a preference-based
metric, Hierarchically Ordered Preference Score (HOPS), to overcome these
limitations. We benchmark our method on multiple large and challenging datasets
having deep label hierarchies (ranging from 3 - 12 levels) and compare with
several baselines and SOTA. Through extensive experiments, we demonstrate that
Hier-COS achieves state-of-the-art hierarchical performance across all the
datasets while simultaneously beating top-1 accuracy in all but one case. We
also demonstrate the performance of a Vision Transformer (ViT) backbone and
show that learning a transformation module alone can map the learned features
from a pre-trained ViT to Hier-COS and yield substantial performance benefits.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 20:59:41 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Sani",
"Depanshu",
""
],
[
"Anand",
"Saket",
""
]
]
| TITLE: Learning and Evaluating Hierarchical Feature Representations
ABSTRACT: Hierarchy-aware representations ensure that the semantically closer classes
are mapped closer in the feature space, thereby reducing the severity of
mistakes while enabling consistent coarse-level class predictions. Towards this
end, we propose a novel framework, Hierarchical Composition of Orthogonal
Subspaces (Hier-COS), which learns to map deep feature embeddings into a vector
space that is, by design, consistent with the structure of a given taxonomy
tree. Our approach augments neural network backbones with a simple
transformation module that maps learned discriminative features to subspaces
defined using a fixed orthogonal frame. This construction naturally improves
the severity of mistakes and promotes hierarchical consistency. Furthermore, we
highlight the fundamental limitations of existing hierarchical evaluation
metrics popularly used by the vision community and introduce a preference-based
metric, Hierarchically Ordered Preference Score (HOPS), to overcome these
limitations. We benchmark our method on multiple large and challenging datasets
having deep label hierarchies (ranging from 3 - 12 levels) and compare with
several baselines and SOTA. Through extensive experiments, we demonstrate that
Hier-COS achieves state-of-the-art hierarchical performance across all the
datasets while simultaneously beating top-1 accuracy in all but one case. We
also demonstrate the performance of a Vision Transformer (ViT) backbone and
show that learning a transformation module alone can map the learned features
from a pre-trained ViT to Hier-COS and yield substantial performance benefits.
| no_new_dataset | 0.95018 |
2503.07856 | Qiang Zhu | Qiang Zhu, Yuxuan Jiang, Shuyuan Zhu, Fan Zhang, David Bull, Bing Zeng | Blind Video Super-Resolution based on Implicit Kernels | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Blind video super-resolution (BVSR) is a low-level vision task which aims to
generate high-resolution videos from low-resolution counterparts in unknown
degradation scenarios. Existing approaches typically predict blur kernels that
are spatially invariant in each video frame or even the entire video. These
methods do not consider potential spatio-temporal varying degradations in
videos, resulting in suboptimal BVSR performance. In this context, we propose a
novel BVSR model based on Implicit Kernels, BVSR-IK, which constructs a
multi-scale kernel dictionary parameterized by implicit neural representations.
It also employs a newly designed recurrent Transformer to predict the
coefficient weights for accurate filtering in both frame correction and feature
alignment. Experimental results have demonstrated the effectiveness of the
proposed BVSR-IK, when compared with four state-of-the-art BVSR models on three
commonly used datasets, with BVSR-IK outperforming the second best approach,
FMA-Net, by up to 0.59 dB in PSNR. Source code will be available at
https://github.com.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 21:01:32 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhu",
"Qiang",
""
],
[
"Jiang",
"Yuxuan",
""
],
[
"Zhu",
"Shuyuan",
""
],
[
"Zhang",
"Fan",
""
],
[
"Bull",
"David",
""
],
[
"Zeng",
"Bing",
""
]
]
| TITLE: Blind Video Super-Resolution based on Implicit Kernels
ABSTRACT: Blind video super-resolution (BVSR) is a low-level vision task which aims to
generate high-resolution videos from low-resolution counterparts in unknown
degradation scenarios. Existing approaches typically predict blur kernels that
are spatially invariant in each video frame or even the entire video. These
methods do not consider potential spatio-temporal varying degradations in
videos, resulting in suboptimal BVSR performance. In this context, we propose a
novel BVSR model based on Implicit Kernels, BVSR-IK, which constructs a
multi-scale kernel dictionary parameterized by implicit neural representations.
It also employs a newly designed recurrent Transformer to predict the
coefficient weights for accurate filtering in both frame correction and feature
alignment. Experimental results have demonstrated the effectiveness of the
proposed BVSR-IK, when compared with four state-of-the-art BVSR models on three
commonly used datasets, with BVSR-IK outperforming the second best approach,
FMA-Net, by up to 0.59 dB in PSNR. Source code will be available at
https://github.com.
| no_new_dataset | 0.944536 |
2503.07860 | James Burgess | James Burgess, Xiaohan Wang, Yuhui Zhang, Anita Rau, Alejandro Lozano,
Lisa Dunlap, Trevor Darrell, Serena Yeung-Levy | Video Action Differencing | ICLR 2025 (International Conference on Learning Representations)
Project page: http://jmhb0.github.io/viddiff Benchmark:
https://huggingface.co/datasets/jmhb/VidDiffBench | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | How do two individuals differ when performing the same action? In this work,
we introduce Video Action Differencing (VidDiff), the novel task of identifying
subtle differences between videos of the same action, which has many
applications, such as coaching and skill learning. To enable development on
this new task, we first create VidDiffBench, a benchmark dataset containing 549
video pairs, with human annotations of 4,469 fine-grained action differences
and 2,075 localization timestamps indicating where these differences occur. Our
experiments demonstrate that VidDiffBench poses a significant challenge for
state-of-the-art large multimodal models (LMMs), such as GPT-4o and Qwen2-VL.
By analyzing failure cases of LMMs on VidDiffBench, we highlight two key
challenges for this task: localizing relevant sub-actions over two videos and
fine-grained frame comparison. To overcome these, we propose the VidDiff
method, an agentic workflow that breaks the task into three stages: action
difference proposal, keyframe localization, and frame differencing, each stage
utilizing specialized foundation models. To encourage future research in this
new task, we release the benchmark at
https://huggingface.co/datasets/jmhb/VidDiffBench and code at
http://jmhb0.github.io/viddiff.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 21:18:32 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Burgess",
"James",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Zhang",
"Yuhui",
""
],
[
"Rau",
"Anita",
""
],
[
"Lozano",
"Alejandro",
""
],
[
"Dunlap",
"Lisa",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Yeung-Levy",
"Serena",
""
]
]
| TITLE: Video Action Differencing
ABSTRACT: How do two individuals differ when performing the same action? In this work,
we introduce Video Action Differencing (VidDiff), the novel task of identifying
subtle differences between videos of the same action, which has many
applications, such as coaching and skill learning. To enable development on
this new task, we first create VidDiffBench, a benchmark dataset containing 549
video pairs, with human annotations of 4,469 fine-grained action differences
and 2,075 localization timestamps indicating where these differences occur. Our
experiments demonstrate that VidDiffBench poses a significant challenge for
state-of-the-art large multimodal models (LMMs), such as GPT-4o and Qwen2-VL.
By analyzing failure cases of LMMs on VidDiffBench, we highlight two key
challenges for this task: localizing relevant sub-actions over two videos and
fine-grained frame comparison. To overcome these, we propose the VidDiff
method, an agentic workflow that breaks the task into three stages: action
difference proposal, keyframe localization, and frame differencing, each stage
utilizing specialized foundation models. To encourage future research in this
new task, we release the benchmark at
https://huggingface.co/datasets/jmhb/VidDiffBench and code at
http://jmhb0.github.io/viddiff.
| new_dataset | 0.955693 |
2503.07870 | Antonio Vitale | Antonio Vitale and Emanuela Guglielmi and Rocco Oliveto and Simone
Scalabrino | Personalized Code Readability Assessment: Are We There Yet? | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Unreadable code could be a breeding ground for errors. Thus, previous work
defined approaches based on machine learning to automatically assess code
readability that can warn developers when some code artifacts (e.g., classes)
become unreadable. Given datasets of code snippets manually evaluated by
several developers in terms of their perceived readability, such approaches (i)
establish a snippet-level ground truth, and (ii) train a binary
(readable/unreadable) or a ternary (readable/neutral/unreadable) code
readability classifier. Given this procedure, all existing approaches neglect
the subjectiveness of code readability, i.e., the possible different
developer-specific nuances in the code readability perception. In this paper,
we aim to understand to what extent it is possible to assess code readability
as subjectively perceived by developers through a personalized code readability
assessment approach. This problem is significantly more challenging than the
snippet-level classification problem: We assume that, in a realistic scenario,
a given developer is keen to provide only a few code readability evaluations,
thus less data is available. For this reason, we adopt an LLM with few-shot
learning to achieve our goal. Our results, however, show that such an approach
achieves worse results than a state-of-the-art feature-based model that is
trained to work at the snippet-level. We tried to understand why this happens
by looking more closely at the quality of the available code readability
datasets and assessed the consistency of the inter-developer evaluations. We
observed that up to a third of the evaluations are self-contradictory. Our
negative results call for new and more reliable code readability datasets.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 21:37:15 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Vitale",
"Antonio",
""
],
[
"Guglielmi",
"Emanuela",
""
],
[
"Oliveto",
"Rocco",
""
],
[
"Scalabrino",
"Simone",
""
]
]
| TITLE: Personalized Code Readability Assessment: Are We There Yet?
ABSTRACT: Unreadable code could be a breeding ground for errors. Thus, previous work
defined approaches based on machine learning to automatically assess code
readability that can warn developers when some code artifacts (e.g., classes)
become unreadable. Given datasets of code snippets manually evaluated by
several developers in terms of their perceived readability, such approaches (i)
establish a snippet-level ground truth, and (ii) train a binary
(readable/unreadable) or a ternary (readable/neutral/unreadable) code
readability classifier. Given this procedure, all existing approaches neglect
the subjectiveness of code readability, i.e., the possible different
developer-specific nuances in the code readability perception. In this paper,
we aim to understand to what extent it is possible to assess code readability
as subjectively perceived by developers through a personalized code readability
assessment approach. This problem is significantly more challenging than the
snippet-level classification problem: We assume that, in a realistic scenario,
a given developer is keen to provide only a few code readability evaluations,
thus less data is available. For this reason, we adopt an LLM with few-shot
learning to achieve our goal. Our results, however, show that such an approach
achieves worse results than a state-of-the-art feature-based model that is
trained to work at the snippet-level. We tried to understand why this happens
by looking more closely at the quality of the available code readability
datasets and assessed the consistency of the inter-developer evaluations. We
observed that up to a third of the evaluations are self-contradictory. Our
negative results call for new and more reliable code readability datasets.
| no_new_dataset | 0.939526 |
2503.07871 | Zekun Li | Zekun Li, Malcolm Grossman, Eric (Ehsan) Qasemi, Mihir Kulkarni, Muhao
Chen, Yao-Yi Chiang | MapQA: Open-domain Geospatial Question Answering on Map Data | null | null | null | null | cs.CL cs.AI cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Geospatial question answering (QA) is a fundamental task in navigation and
point of interest (POI) searches. While existing geospatial QA datasets exist,
they are limited in both scale and diversity, often relying solely on textual
descriptions of geo-entities without considering their geometries. A major
challenge in scaling geospatial QA datasets for reasoning lies in the
complexity of geospatial relationships, which require integrating spatial
structures, topological dependencies, and multi-hop reasoning capabilities that
most text-based QA datasets lack. To address these limitations, we introduce
MapQA, a novel dataset that not only provides question-answer pairs but also
includes the geometries of geo-entities referenced in the questions. MapQA is
constructed using SQL query templates to extract question-answer pairs from
OpenStreetMap (OSM) for two study regions: Southern California and Illinois. It
consists of 3,154 QA pairs spanning nine question types that require geospatial
reasoning, such as neighborhood inference and geo-entity type identification.
Compared to existing datasets, MapQA expands both the number and diversity of
geospatial question types. We explore two approaches to tackle this challenge:
(1) a retrieval-based language model that ranks candidate geo-entities by
embedding similarity, and (2) a large language model (LLM) that generates SQL
queries from natural language questions and geo-entity attributes, which are
then executed against an OSM database. Our findings indicate that
retrieval-based methods effectively capture concepts like closeness and
direction but struggle with questions that require explicit computations (e.g.,
distance calculations). LLMs (e.g., GPT and Gemini) excel at generating SQL
queries for one-hop reasoning but face challenges with multi-hop reasoning,
highlighting a key bottleneck in advancing geospatial QA systems.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 21:37:22 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Li",
"Zekun",
"",
"Ehsan"
],
[
"Grossman",
"Malcolm",
"",
"Ehsan"
],
[
"Eric",
"",
"",
"Ehsan"
],
[
"Qasemi",
"",
""
],
[
"Kulkarni",
"Mihir",
""
],
[
"Chen",
"Muhao",
""
],
[
"Chiang",
"Yao-Yi",
""
]
]
| TITLE: MapQA: Open-domain Geospatial Question Answering on Map Data
ABSTRACT: Geospatial question answering (QA) is a fundamental task in navigation and
point of interest (POI) searches. While existing geospatial QA datasets exist,
they are limited in both scale and diversity, often relying solely on textual
descriptions of geo-entities without considering their geometries. A major
challenge in scaling geospatial QA datasets for reasoning lies in the
complexity of geospatial relationships, which require integrating spatial
structures, topological dependencies, and multi-hop reasoning capabilities that
most text-based QA datasets lack. To address these limitations, we introduce
MapQA, a novel dataset that not only provides question-answer pairs but also
includes the geometries of geo-entities referenced in the questions. MapQA is
constructed using SQL query templates to extract question-answer pairs from
OpenStreetMap (OSM) for two study regions: Southern California and Illinois. It
consists of 3,154 QA pairs spanning nine question types that require geospatial
reasoning, such as neighborhood inference and geo-entity type identification.
Compared to existing datasets, MapQA expands both the number and diversity of
geospatial question types. We explore two approaches to tackle this challenge:
(1) a retrieval-based language model that ranks candidate geo-entities by
embedding similarity, and (2) a large language model (LLM) that generates SQL
queries from natural language questions and geo-entity attributes, which are
then executed against an OSM database. Our findings indicate that
retrieval-based methods effectively capture concepts like closeness and
direction but struggle with questions that require explicit computations (e.g.,
distance calculations). LLMs (e.g., GPT and Gemini) excel at generating SQL
queries for one-hop reasoning but face challenges with multi-hop reasoning,
highlighting a key bottleneck in advancing geospatial QA systems.
| new_dataset | 0.959649 |
2503.07874 | Chenyu Zhang | Chenyu Zhang, Yihao Luo, Yinzhe Wu, Choon Hwai Yap, Guang Yang | Topology-Preserving Loss for Accurate and Anatomically Consistent
Cardiac Mesh Reconstruction | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate cardiac mesh reconstruction from volumetric data is essential for
personalized cardiac modeling and clinical analysis. However, existing
deformation-based approaches are prone to topological inconsistencies,
particularly membrane penetration, which undermines the anatomical plausibility
of the reconstructed mesh. To address this issue, we introduce
Topology-Preserving Mesh Loss (TPM Loss), a novel loss function that explicitly
enforces topological constraints during mesh deformation. By identifying
topology-violating points, TPM Loss ensures spatially consistent
reconstructions. Extensive experiments on CT and MRI datasets show that TPM
Loss reduces topology violations by up to 93.1% while maintaining high
segmentation accuracy (DSC: 89.1%-92.9%) and improving mesh fidelity (Chamfer
Distance reduction up to 0.26 mm). These results demonstrate that TPM Loss
effectively prevents membrane penetration and significantly improves cardiac
mesh quality, enabling more accurate and anatomically consistent cardiac
reconstructions.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 21:46:57 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Chenyu",
""
],
[
"Luo",
"Yihao",
""
],
[
"Wu",
"Yinzhe",
""
],
[
"Yap",
"Choon Hwai",
""
],
[
"Yang",
"Guang",
""
]
]
| TITLE: Topology-Preserving Loss for Accurate and Anatomically Consistent
Cardiac Mesh Reconstruction
ABSTRACT: Accurate cardiac mesh reconstruction from volumetric data is essential for
personalized cardiac modeling and clinical analysis. However, existing
deformation-based approaches are prone to topological inconsistencies,
particularly membrane penetration, which undermines the anatomical plausibility
of the reconstructed mesh. To address this issue, we introduce
Topology-Preserving Mesh Loss (TPM Loss), a novel loss function that explicitly
enforces topological constraints during mesh deformation. By identifying
topology-violating points, TPM Loss ensures spatially consistent
reconstructions. Extensive experiments on CT and MRI datasets show that TPM
Loss reduces topology violations by up to 93.1% while maintaining high
segmentation accuracy (DSC: 89.1%-92.9%) and improving mesh fidelity (Chamfer
Distance reduction up to 0.26 mm). These results demonstrate that TPM Loss
effectively prevents membrane penetration and significantly improves cardiac
mesh quality, enabling more accurate and anatomically consistent cardiac
reconstructions.
| no_new_dataset | 0.954223 |
2503.07879 | Alex Fang | Alex Fang, Hadi Pouransari, Matt Jordan, Alexander Toshev, Vaishaal
Shankar, Ludwig Schmidt, Tom Gunter | Datasets, Documents, and Repetitions: The Practicalities of Unequal Data
Quality | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Data filtering has become a powerful tool for improving model performance
while reducing computational cost. However, as large language model compute
budgets continue to grow, the limited data volume provided by heavily filtered
and deduplicated datasets will become a practical constraint. In efforts to
better understand how to proceed, we study model performance at various compute
budgets and across multiple pre-training datasets created through data
filtering and deduplication. We find that, given appropriate modifications to
the training recipe, repeating existing aggressively filtered datasets for up
to ten epochs can outperform training on the ten times larger superset for a
single epoch across multiple compute budget orders of magnitude. While this
finding relies on repeating the dataset for many epochs, we also investigate
repeats within these datasets at the document level. We find that not all
documents within a dataset are equal, and we can create better datasets
relative to a token budget by explicitly manipulating the counts of individual
documents. We conclude by arguing that even as large language models scale,
data filtering remains an important direction of research.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 21:51:17 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Fang",
"Alex",
""
],
[
"Pouransari",
"Hadi",
""
],
[
"Jordan",
"Matt",
""
],
[
"Toshev",
"Alexander",
""
],
[
"Shankar",
"Vaishaal",
""
],
[
"Schmidt",
"Ludwig",
""
],
[
"Gunter",
"Tom",
""
]
]
| TITLE: Datasets, Documents, and Repetitions: The Practicalities of Unequal Data
Quality
ABSTRACT: Data filtering has become a powerful tool for improving model performance
while reducing computational cost. However, as large language model compute
budgets continue to grow, the limited data volume provided by heavily filtered
and deduplicated datasets will become a practical constraint. In efforts to
better understand how to proceed, we study model performance at various compute
budgets and across multiple pre-training datasets created through data
filtering and deduplication. We find that, given appropriate modifications to
the training recipe, repeating existing aggressively filtered datasets for up
to ten epochs can outperform training on the ten times larger superset for a
single epoch across multiple compute budget orders of magnitude. While this
finding relies on repeating the dataset for many epochs, we also investigate
repeats within these datasets at the document level. We find that not all
documents within a dataset are equal, and we can create better datasets
relative to a token budget by explicitly manipulating the counts of individual
documents. We conclude by arguing that even as large language models scale,
data filtering remains an important direction of research.
| no_new_dataset | 0.949809 |
2503.07882 | Onat Gungor | Cagla Ipek Kocal, Onat Gungor, Aaron Tartz, Tajana Rosing, Baris
Aksanli | ReLATE: Resilient Learner Selection for Multivariate Time-Series
Classification Against Adversarial Attacks | Accepted by the AAAI-25 Workshop on Artificial Intelligence for Time
Series Analysis (AI4TS) | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Minimizing computational overhead in time-series classification, particularly
in deep learning models, presents a significant challenge. This challenge is
further compounded by adversarial attacks, emphasizing the need for resilient
methods that ensure robust performance and efficient model selection. We
introduce ReLATE, a framework that identifies robust learners based on dataset
similarity, reduces computational overhead, and enhances resilience. ReLATE
maintains multiple deep learning models in well-known adversarial attack
scenarios, capturing model performance. ReLATE identifies the most analogous
dataset to a given target using a similarity metric, then applies the optimal
model from the most similar dataset. ReLATE reduces computational overhead by
an average of 81.2%, enhancing adversarial resilience and streamlining robust
model selection, all without sacrificing performance, within 4.2% of Oracle.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 21:55:50 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Kocal",
"Cagla Ipek",
""
],
[
"Gungor",
"Onat",
""
],
[
"Tartz",
"Aaron",
""
],
[
"Rosing",
"Tajana",
""
],
[
"Aksanli",
"Baris",
""
]
]
| TITLE: ReLATE: Resilient Learner Selection for Multivariate Time-Series
Classification Against Adversarial Attacks
ABSTRACT: Minimizing computational overhead in time-series classification, particularly
in deep learning models, presents a significant challenge. This challenge is
further compounded by adversarial attacks, emphasizing the need for resilient
methods that ensure robust performance and efficient model selection. We
introduce ReLATE, a framework that identifies robust learners based on dataset
similarity, reduces computational overhead, and enhances resilience. ReLATE
maintains multiple deep learning models in well-known adversarial attack
scenarios, capturing model performance. ReLATE identifies the most analogous
dataset to a given target using a similarity metric, then applies the optimal
model from the most similar dataset. ReLATE reduces computational overhead by
an average of 81.2%, enhancing adversarial resilience and streamlining robust
model selection, all without sacrificing performance, within 4.2% of Oracle.
| no_new_dataset | 0.951097 |
2503.07911 | Xing Zi | Xing Zi, Kairui Jin, Xian Tao, Jun Li, Ali Braytee, Rajiv Ratn Shah
and Mukesh Prasad | Visual and Text Prompt Segmentation: A Novel Multi-Model Framework for
Remote Sensing | Under Review - IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing | null | null | null | cs.MM cs.AI cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pixel-level segmentation is essential in remote sensing, where foundational
vision models like CLIP and Segment Anything Model(SAM) have demonstrated
significant capabilities in zero-shot segmentation tasks. Despite their
advances, challenges specific to remote sensing remain substantial. Firstly,
SAM, without clear prompt constraints, often generates redundant masks,
making post-processing more complex. Secondly, the CLIP model, mainly designed
for global feature alignment in foundational models, often overlooks local
objects crucial to remote sensing. This oversight leads to inaccurate
recognition or misplaced focus in multi-target remote sensing imagery. Thirdly,
both models have not been pre-trained on multi-scale aerial views, increasing
the likelihood of detection failures. To tackle these challenges, we introduce
the innovative VTPSeg pipeline, utilizing the strengths of Grounding DINO,
CLIP, and SAM for enhanced open-vocabulary image segmentation. The Grounding
DINO+(GD+) module generates initial candidate bounding boxes, while the CLIP
Filter++(CLIP++) module uses a combination of visual and textual prompts to
refine and filter out irrelevant object bounding boxes, ensuring that only
pertinent objects are considered. Subsequently, these refined bounding boxes
serve as specific prompts for the FastSAM model, which executes precise
segmentation. Our VTPSeg is validated by experimental and ablation study
results on five popular remote sensing image segmentation datasets.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 23:15:57 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zi",
"Xing",
""
],
[
"Jin",
"Kairui",
""
],
[
"Tao",
"Xian",
""
],
[
"Li",
"Jun",
""
],
[
"Braytee",
"Ali",
""
],
[
"Shah",
"Rajiv Ratn",
""
],
[
"Prasad",
"Mukesh",
""
]
]
| TITLE: Visual and Text Prompt Segmentation: A Novel Multi-Model Framework for
Remote Sensing
ABSTRACT: Pixel-level segmentation is essential in remote sensing, where foundational
vision models like CLIP and Segment Anything Model(SAM) have demonstrated
significant capabilities in zero-shot segmentation tasks. Despite their
advances, challenges specific to remote sensing remain substantial. Firstly,
SAM, without clear prompt constraints, often generates redundant masks,
making post-processing more complex. Secondly, the CLIP model, mainly designed
for global feature alignment in foundational models, often overlooks local
objects crucial to remote sensing. This oversight leads to inaccurate
recognition or misplaced focus in multi-target remote sensing imagery. Thirdly,
both models have not been pre-trained on multi-scale aerial views, increasing
the likelihood of detection failures. To tackle these challenges, we introduce
the innovative VTPSeg pipeline, utilizing the strengths of Grounding DINO,
CLIP, and SAM for enhanced open-vocabulary image segmentation. The Grounding
DINO+(GD+) module generates initial candidate bounding boxes, while the CLIP
Filter++(CLIP++) module uses a combination of visual and textual prompts to
refine and filter out irrelevant object bounding boxes, ensuring that only
pertinent objects are considered. Subsequently, these refined bounding boxes
serve as specific prompts for the FastSAM model, which executes precise
segmentation. Our VTPSeg is validated by experimental and ablation study
results on five popular remote sensing image segmentation datasets.
| no_new_dataset | 0.950273 |
2503.07917 | Jorge Hermosillo Valadez | Mauricio Toledo-Acosta and Luis \'Angel Ramos-Garc\'ia and Jorge
Hermosillo-Valadez | Hyperoctant Search Clustering: A Method for Clustering Data in
High-Dimensional Hyperspheres | 22 pages, 9 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Clustering of high-dimensional data sets is a growing need in artificial
intelligence, machine learning and pattern recognition. In this paper, we
propose a new clustering method based on a combinatorial-topological approach
applied to regions of space defined by signs of coordinates (hyperoctants). In
high-dimensional spaces, this approach often reduces the size of the dataset
while preserving sufficient topological features. According to a density
criterion, the method builds clusters of data points based on the partitioning
of a graph, whose vertices represent hyperoctants, and whose edges connect
neighboring hyperoctants under the Levenshtein distance. We call this method
HyperOctant Search Clustering. We prove some mathematical properties of the
method. In order to assess its performance, we choose the application of
topic detection, which is an important task in text mining. Our results suggest
that our method is more stable under variations of the main hyperparameter, and
remarkably, it is not only a clustering method, but also a tool to explore the
dataset from a topological perspective, as it directly provides information
about the number of hyperoctants where there are data points. We also discuss
the possible connections between our clustering method and other research
fields.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 23:41:44 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Toledo-Acosta",
"Mauricio",
""
],
[
"Ramos-García",
"Luis Ángel",
""
],
[
"Hermosillo-Valadez",
"Jorge",
""
]
]
| TITLE: Hyperoctant Search Clustering: A Method for Clustering Data in
High-Dimensional Hyperspheres
ABSTRACT: Clustering of high-dimensional data sets is a growing need in artificial
intelligence, machine learning and pattern recognition. In this paper, we
propose a new clustering method based on a combinatorial-topological approach
applied to regions of space defined by signs of coordinates (hyperoctants). In
high-dimensional spaces, this approach often reduces the size of the dataset
while preserving sufficient topological features. According to a density
criterion, the method builds clusters of data points based on the partitioning
of a graph, whose vertices represent hyperoctants, and whose edges connect
neighboring hyperoctants under the Levenshtein distance. We call this method
HyperOctant Search Clustering. We prove some mathematical properties of the
method. In order to assess its performance, we choose the application of
topic detection, which is an important task in text mining. Our results suggest
that our method is more stable under variations of the main hyperparameter, and
remarkably, it is not only a clustering method, but also a tool to explore the
dataset from a topological perspective, as it directly provides information
about the number of hyperoctants where there are data points. We also discuss
the possible connections between our clustering method and other research
fields.
| no_new_dataset | 0.948632 |
2503.07926 | Ken Nakahara | Ken Nakahara, Roberto Calandra | Learning Gentle Grasping Using Vision, Sound, and Touch | 8 pages | null | null | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In our daily life, we often encounter objects that are fragile and can be
damaged by excessive grasping force, such as fruits. For these objects, it is
paramount to grasp gently -- not using the maximum amount of force possible,
but rather the minimum amount of force necessary. This paper proposes using
visual, tactile, and auditory signals to learn to grasp and regrasp objects
stably and gently. Specifically, we use audio signals as an indicator of
gentleness during the grasping, and then train end-to-end an action-conditional
model from raw visuo-tactile inputs that predicts both the stability and the
gentleness of future grasping candidates, thus allowing the selection and
execution of the most promising action. Experimental results on a
multi-fingered hand over 1,500 grasping trials demonstrated that our model is
useful for gentle grasping by validating the predictive performance (3.27\%
higher accuracy than the vision-only variant) and providing interpretations of
their behavior. Finally, real-world experiments confirmed that the grasping
performance with the trained multi-modal model outperformed other baselines
(17\% higher rate for stable and gentle grasps than vision-only). Our approach
requires neither tactile sensor calibration nor analytical force modeling,
drastically reducing the engineering effort to grasp fragile objects. Dataset
and videos are available at https://lasr.org/research/gentle-grasping.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 00:12:25 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Nakahara",
"Ken",
""
],
[
"Calandra",
"Roberto",
""
]
]
| TITLE: Learning Gentle Grasping Using Vision, Sound, and Touch
ABSTRACT: In our daily life, we often encounter objects that are fragile and can be
damaged by excessive grasping force, such as fruits. For these objects, it is
paramount to grasp gently -- not using the maximum amount of force possible,
but rather the minimum amount of force necessary. This paper proposes using
visual, tactile, and auditory signals to learn to grasp and regrasp objects
stably and gently. Specifically, we use audio signals as an indicator of
gentleness during the grasping, and then train end-to-end an action-conditional
model from raw visuo-tactile inputs that predicts both the stability and the
gentleness of future grasping candidates, thus allowing the selection and
execution of the most promising action. Experimental results on a
multi-fingered hand over 1,500 grasping trials demonstrated that our model is
useful for gentle grasping by validating the predictive performance (3.27\%
higher accuracy than the vision-only variant) and providing interpretations of
their behavior. Finally, real-world experiments confirmed that the grasping
performance with the trained multi-modal model outperformed other baselines
(17\% higher rate for stable and gentle grasps than vision-only). Our approach
requires neither tactile sensor calibration nor analytical force modeling,
drastically reducing the engineering effort to grasp fragile objects. Dataset
and videos are available at https://lasr.org/research/gentle-grasping.
| no_new_dataset | 0.952042 |
2503.07927 | Xia Li | Xia Li, Allen Kim | A Study to Evaluate the Impact of LoRA Fine-tuning on the Performance of
Non-functional Requirements Classification | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Classifying Non-Functional Requirements (NFRs) in software development life
cycle is critical. Inspired by the theory of transfer learning, researchers
apply powerful pre-trained models for NFR classification. However, full
fine-tuning by updating all parameters of the pre-trained models is often
impractical due to the huge number of parameters involved (e.g., 175 billion
trainable parameters in GPT-3). In this paper, we apply the Low-Rank Adaptation
(LoRA) fine-tuning approach to NFR classification based on prompt-based
learning to investigate its impact. The experiments show that LoRA can
significantly reduce the execution cost (up to 68% reduction) without too much
loss of effectiveness in classification (only 2%-3% decrease). The results show
that LoRA can be practical in more complicated classification cases with larger
datasets and pre-trained models.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 00:16:12 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Li",
"Xia",
""
],
[
"Kim",
"Allen",
""
]
]
| TITLE: A Study to Evaluate the Impact of LoRA Fine-tuning on the Performance of
Non-functional Requirements Classification
ABSTRACT: Classifying Non-Functional Requirements (NFRs) in software development life
cycle is critical. Inspired by the theory of transfer learning, researchers
apply powerful pre-trained models for NFR classification. However, full
fine-tuning by updating all parameters of the pre-trained models is often
impractical due to the huge number of parameters involved (e.g., 175 billion
trainable parameters in GPT-3). In this paper, we apply the Low-Rank Adaptation
(LoRA) fine-tuning approach to NFR classification based on prompt-based
learning to investigate its impact. The experiments show that LoRA can
significantly reduce the execution cost (up to 68% reduction) without too much
loss of effectiveness in classification (only 2%-3% decrease). The results show
that LoRA can be practical in more complicated classification cases with larger
datasets and pre-trained models.
| no_new_dataset | 0.946399 |
2503.07928 | Hunter McNichols | Hunter McNichols, Andrew Lan | The StudyChat Dataset: Student Dialogues With ChatGPT in an Artificial
Intelligence Course | Pre-print | null | null | null | cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | The widespread availability of large language models (LLMs), such as ChatGPT,
has significantly impacted education, raising both opportunities and
challenges. Students can frequently interact with LLM-powered, interactive
learning tools, but their usage patterns need to be analyzed to ensure ethical
usage of these tools. To better understand how students interact with LLMs in
an academic setting, we introduce \textbf{StudyChat}, a publicly available
dataset capturing real-world student interactions with an LLM-powered tutoring
chatbot in a semester-long, university-level artificial intelligence (AI)
course. We deploy a web application that replicates ChatGPT's core
functionalities, and use it to log student interactions with the LLM while
working on programming assignments. We collect 1,197 conversations, which we
annotate using a dialogue act labeling schema inspired by observed interaction
patterns and prior research. Additionally, we analyze these interactions,
highlight behavioral trends, and analyze how specific usage patterns relate to
course outcomes. \textbf{StudyChat} provides a rich resource for the learning
sciences and AI in education communities, enabling further research into the
evolving role of LLMs in education.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 00:17:07 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"McNichols",
"Hunter",
""
],
[
"Lan",
"Andrew",
""
]
]
| TITLE: The StudyChat Dataset: Student Dialogues With ChatGPT in an Artificial
Intelligence Course
ABSTRACT: The widespread availability of large language models (LLMs), such as ChatGPT,
has significantly impacted education, raising both opportunities and
challenges. Students can frequently interact with LLM-powered, interactive
learning tools, but their usage patterns need to be analyzed to ensure ethical
usage of these tools. To better understand how students interact with LLMs in
an academic setting, we introduce \textbf{StudyChat}, a publicly available
dataset capturing real-world student interactions with an LLM-powered tutoring
chatbot in a semester-long, university-level artificial intelligence (AI)
course. We deploy a web application that replicates ChatGPT's core
functionalities, and use it to log student interactions with the LLM while
working on programming assignments. We collect 1,197 conversations, which we
annotate using a dialogue act labeling schema inspired by observed interaction
patterns and prior research. Additionally, we analyze these interactions,
highlight behavioral trends, and analyze how specific usage patterns relate to
course outcomes. \textbf{StudyChat} provides a rich resource for the learning
sciences and AI in education communities, enabling further research into the
evolving role of LLMs in education.
| new_dataset | 0.959116 |
2503.07934 | Erfaun Noorani | Erfaun Noorani, Pasan Dissanayake, Faisal Hamman, Sanghamitra Dutta | Counterfactual Explanations for Model Ensembles Using Entropic Risk
Measures | null | null | null | null | cs.LG cs.CY cs.SY eess.SY stat.ME stat.ML | http://creativecommons.org/licenses/by/4.0/ | Counterfactual explanations indicate the smallest change in input that can
translate to a different outcome for a machine learning model. Counterfactuals
have generated immense interest in high-stakes applications such as finance,
education, hiring, etc. In several use-cases, the decision-making process often
relies on an ensemble of models rather than just one. Despite significant
research on counterfactuals for one model, the problem of generating a single
counterfactual explanation for an ensemble of models has received limited
interest. Each individual model might lead to a different counterfactual,
whereas trying to find a counterfactual accepted by all models might
significantly increase cost (effort). We propose a novel strategy to find the
counterfactual for an ensemble of models using the perspective of entropic risk
measure. Entropic risk is a convex risk measure that satisfies several
desirable properties. We incorporate our proposed risk measure into a novel
constrained optimization to generate counterfactuals for ensembles that stay
valid for several models. The main significance of our measure is that it
provides a knob that allows for the generation of counterfactuals that stay
valid under an adjustable fraction of the models. We also show that a limiting
case of our entropic-risk-based strategy yields a counterfactual valid for all
models in the ensemble (worst-case min-max approach). We study the trade-off
between the cost (effort) for the counterfactual and its validity for an
ensemble by varying degrees of risk aversion, as determined by our risk
parameter knob. We validate our performance on real-world datasets.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 00:25:28 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Noorani",
"Erfaun",
""
],
[
"Dissanayake",
"Pasan",
""
],
[
"Hamman",
"Faisal",
""
],
[
"Dutta",
"Sanghamitra",
""
]
]
| TITLE: Counterfactual Explanations for Model Ensembles Using Entropic Risk
Measures
ABSTRACT: Counterfactual explanations indicate the smallest change in input that can
translate to a different outcome for a machine learning model. Counterfactuals
have generated immense interest in high-stakes applications such as finance,
education, hiring, etc. In several use-cases, the decision-making process often
relies on an ensemble of models rather than just one. Despite significant
research on counterfactuals for one model, the problem of generating a single
counterfactual explanation for an ensemble of models has received limited
interest. Each individual model might lead to a different counterfactual,
whereas trying to find a counterfactual accepted by all models might
significantly increase cost (effort). We propose a novel strategy to find the
counterfactual for an ensemble of models using the perspective of entropic risk
measure. Entropic risk is a convex risk measure that satisfies several
desirable properties. We incorporate our proposed risk measure into a novel
constrained optimization to generate counterfactuals for ensembles that stay
valid for several models. The main significance of our measure is that it
provides a knob that allows for the generation of counterfactuals that stay
valid under an adjustable fraction of the models. We also show that a limiting
case of our entropic-risk-based strategy yields a counterfactual valid for all
models in the ensemble (worst-case min-max approach). We study the trade-off
between the cost (effort) for the counterfactual and its validity for an
ensemble by varying degrees of risk aversion, as determined by our risk
parameter knob. We validate our performance on real-world datasets.
| no_new_dataset | 0.949201 |
2503.07938 | Xi Xiao | Chenrui Ma, Rongchang Zhao, Xi Xiao, Hongyang Xie, Tianyang Wang, Xiao
Wang, Hao Zhang, Yanning Shen | CAD-VAE: Leveraging Correlation-Aware Latents for Comprehensive Fair
Disentanglement | null | null | null | null | cs.LG cs.CV stat.ME | http://creativecommons.org/licenses/by/4.0/ | While deep generative models have significantly advanced representation
learning, they may inherit or amplify biases and fairness issues by encoding
sensitive attributes alongside predictive features. Enforcing strict
independence in disentanglement is often unrealistic when target and sensitive
factors are naturally correlated. To address this challenge, we propose CAD-VAE
(Correlation-Aware Disentangled VAE), which introduces a correlated latent code
to capture the shared information between target and sensitive attributes.
Given this correlated latent, our method effectively separates overlapping
factors without extra domain knowledge by directly minimizing the conditional
mutual information between target and sensitive codes. A relevance-driven
optimization strategy refines the correlated code by efficiently capturing
essential correlated features and eliminating redundancy. Extensive experiments
on benchmark datasets demonstrate that CAD-VAE produces fairer representations,
realistic counterfactuals, and improved fairness-aware image editing.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 00:32:56 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Ma",
"Chenrui",
""
],
[
"Zhao",
"Rongchang",
""
],
[
"Xiao",
"Xi",
""
],
[
"Xie",
"Hongyang",
""
],
[
"Wang",
"Tianyang",
""
],
[
"Wang",
"Xiao",
""
],
[
"Zhang",
"Hao",
""
],
[
"Shen",
"Yanning",
""
]
]
| TITLE: CAD-VAE: Leveraging Correlation-Aware Latents for Comprehensive Fair
Disentanglement
ABSTRACT: While deep generative models have significantly advanced representation
learning, they may inherit or amplify biases and fairness issues by encoding
sensitive attributes alongside predictive features. Enforcing strict
independence in disentanglement is often unrealistic when target and sensitive
factors are naturally correlated. To address this challenge, we propose CAD-VAE
(Correlation-Aware Disentangled VAE), which introduces a correlated latent code
to capture the shared information between target and sensitive attributes.
Given this correlated latent, our method effectively separates overlapping
factors without extra domain knowledge by directly minimizing the conditional
mutual information between target and sensitive codes. A relevance-driven
optimization strategy refines the correlated code by efficiently capturing
essential correlated features and eliminating redundancy. Extensive experiments
on benchmark datasets demonstrate that CAD-VAE produces fairer representations,
realistic counterfactuals, and improved fairness-aware image editing.
| no_new_dataset | 0.944177 |
2503.07940 | Hyungtae Lim | Minkyun Seo and Hyungtae Lim and Kanghee Lee and Luca Carlone and
Jaesik Park | BUFFER-X: Towards Zero-Shot Point Cloud Registration in Diverse Scenes | 20 pages, 14 figures | null | null | null | cs.CV cs.RO eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent advances in deep learning-based point cloud registration have improved
generalization, yet most methods still require retraining or manual parameter
tuning for each new environment. In this paper, we identify three key factors
limiting generalization: (a) reliance on environment-specific voxel size and
search radius, (b) poor out-of-domain robustness of learning-based keypoint
detectors, and (c) raw coordinate usage, which exacerbates scale discrepancies.
To address these issues, we present a zero-shot registration pipeline called
BUFFER-X by (a) adaptively determining voxel size/search radii, (b) using
farthest point sampling to bypass learned detectors, and (c) leveraging
patch-wise scale normalization for consistent coordinate bounds. In particular,
we present a multi-scale patch-based descriptor generation and a hierarchical
inlier search across scales to improve robustness in diverse scenes. We also
propose a novel generalizability benchmark using 11 datasets that cover various
indoor/outdoor scenarios and sensor modalities, demonstrating that BUFFER-X
achieves substantial generalization without prior information or manual
parameter tuning for the test datasets. Our code is available at
https://github.com/MIT-SPARK/BUFFER-X.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 00:40:45 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Seo",
"Minkyun",
""
],
[
"Lim",
"Hyungtae",
""
],
[
"Lee",
"Kanghee",
""
],
[
"Carlone",
"Luca",
""
],
[
"Park",
"Jaesik",
""
]
]
| TITLE: BUFFER-X: Towards Zero-Shot Point Cloud Registration in Diverse Scenes
ABSTRACT: Recent advances in deep learning-based point cloud registration have improved
generalization, yet most methods still require retraining or manual parameter
tuning for each new environment. In this paper, we identify three key factors
limiting generalization: (a) reliance on environment-specific voxel size and
search radius, (b) poor out-of-domain robustness of learning-based keypoint
detectors, and (c) raw coordinate usage, which exacerbates scale discrepancies.
To address these issues, we present a zero-shot registration pipeline called
BUFFER-X by (a) adaptively determining voxel size/search radii, (b) using
farthest point sampling to bypass learned detectors, and (c) leveraging
patch-wise scale normalization for consistent coordinate bounds. In particular,
we present a multi-scale patch-based descriptor generation and a hierarchical
inlier search across scales to improve robustness in diverse scenes. We also
propose a novel generalizability benchmark using 11 datasets that cover various
indoor/outdoor scenarios and sensor modalities, demonstrating that BUFFER-X
achieves substantial generalization without prior information or manual
parameter tuning for the test datasets. Our code is available at
https://github.com/MIT-SPARK/BUFFER-X.
| no_new_dataset | 0.946151 |
2503.07943 | Kunal Chaturvedi | Taoxu Zhao, Meisi Li, Kehao Chen, Liye Wang, Xucheng Zhou, Kunal
Chaturvedi, Mukesh Prasad, Ali Anaissi, Ali Braytee | Enhancing Sentiment Analysis through Multimodal Fusion: A BERT-DINOv2
Approach | 12 pages | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Multimodal sentiment analysis enhances conventional sentiment analysis, which
traditionally relies solely on text, by incorporating information from
different modalities such as images, text, and audio. This paper proposes a
novel multimodal sentiment analysis architecture that integrates text and image
data to provide a more comprehensive understanding of sentiments. For text
feature extraction, we utilize BERT, a natural language processing model. For
image feature extraction, we employ DINOv2, a vision-transformer-based model.
The textual and visual latent features are integrated using proposed fusion
techniques, namely the Basic Fusion Model, Self Attention Fusion Model, and
Dual Attention Fusion Model. Experiments on three datasets, Memotion 7k
dataset, MVSA single dataset, and MVSA multi dataset, demonstrate the viability
and practicality of the proposed multimodal architecture.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 00:53:45 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhao",
"Taoxu",
""
],
[
"Li",
"Meisi",
""
],
[
"Chen",
"Kehao",
""
],
[
"Wang",
"Liye",
""
],
[
"Zhou",
"Xucheng",
""
],
[
"Chaturvedi",
"Kunal",
""
],
[
"Prasad",
"Mukesh",
""
],
[
"Anaissi",
"Ali",
""
],
[
"Braytee",
"Ali",
""
]
]
| TITLE: Enhancing Sentiment Analysis through Multimodal Fusion: A BERT-DINOv2
Approach
ABSTRACT: Multimodal sentiment analysis enhances conventional sentiment analysis, which
traditionally relies solely on text, by incorporating information from
different modalities such as images, text, and audio. This paper proposes a
novel multimodal sentiment analysis architecture that integrates text and image
data to provide a more comprehensive understanding of sentiments. For text
feature extraction, we utilize BERT, a natural language processing model. For
image feature extraction, we employ DINOv2, a vision-transformer-based model.
The textual and visual latent features are integrated using proposed fusion
techniques, namely the Basic Fusion Model, Self Attention Fusion Model, and
Dual Attention Fusion Model. Experiments on three datasets, Memotion 7k
dataset, MVSA single dataset, and MVSA multi dataset, demonstrate the viability
and practicality of the proposed multimodal architecture.
| no_new_dataset | 0.946843 |
2503.07950 | Deng Yifei | Yifei Deng, Zhengyu Chen, Ziheng Xu, Chenglong Li, Jin Tang | Text-RGBT Person Retrieval: Multilevel Global-Local Cross-Modal
Alignment and A High-quality Benchmark | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The performance of traditional text-image person retrieval task is easily
affected by lighting variations due to imaging limitations of visible spectrum
sensors. In this work, we design a novel task called text-RGBT person retrieval
that integrates complementary benefits from thermal and visible modalities for
robust person retrieval in challenging environments. Aligning text and
multi-modal visual representations is the key issue in text-RGBT person
retrieval, but the heterogeneity between visible and thermal modalities may
interfere with the alignment of visual and text modalities. To handle this
problem, we propose a Multi-level Global-local cross-modal Alignment Network
(MGANet), which sufficiently mines the relationships of modality-specific and
modality-collaborative visual representations with the text, for text-RGBT person
retrieval. To promote the research and development of this field, we create a
high-quality text-RGBT person retrieval dataset, RGBT-PEDES. RGBT-PEDES
contains 1,822 identities from different age groups and genders with 4,723
pairs of calibrated RGB and thermal images, and covers highly diverse scenes from
both daytime and nighttime with a variety of challenges such as occlusion, weak
alignment and adverse lighting conditions. Additionally, we carefully annotate
7,987 fine-grained textual descriptions for all RGBT person image pairs.
Extensive experiments on RGBT-PEDES demonstrate that our method outperforms
existing text-image person retrieval methods. The code and dataset will be
released upon acceptance.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:19:45 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Deng",
"Yifei",
""
],
[
"Chen",
"Zhengyu",
""
],
[
"Xu",
"Ziheng",
""
],
[
"Li",
"Chenglong",
""
],
[
"Tang",
"Jin",
""
]
]
| TITLE: Text-RGBT Person Retrieval: Multilevel Global-Local Cross-Modal
Alignment and A High-quality Benchmark
ABSTRACT: The performance of traditional text-image person retrieval task is easily
affected by lighting variations due to imaging limitations of visible spectrum
sensors. In this work, we design a novel task called text-RGBT person retrieval
that integrates complementary benefits from thermal and visible modalities for
robust person retrieval in challenging environments. Aligning text and
multi-modal visual representations is the key issue in text-RGBT person
retrieval, but the heterogeneity between visible and thermal modalities may
interfere with the alignment of visual and text modalities. To handle this
problem, we propose a Multi-level Global-local cross-modal Alignment Network
(MGANet), which sufficiently mines the relationships of modality-specific and
modality-collaborative visual representations with the text, for text-RGBT person
retrieval. To promote the research and development of this field, we create a
high-quality text-RGBT person retrieval dataset, RGBT-PEDES. RGBT-PEDES
contains 1,822 identities from different age groups and genders with 4,723
pairs of calibrated RGB and thermal images, and covers highly diverse scenes from
both daytime and nighttime with a variety of challenges such as occlusion, weak
alignment and adverse lighting conditions. Additionally, we carefully annotate
7,987 fine-grained textual descriptions for all RGBT person image pairs.
Extensive experiments on RGBT-PEDES demonstrate that our method outperforms
existing text-image person retrieval methods. The code and dataset will be
released upon acceptance.
| new_dataset | 0.965053 |
2503.07952 | Yanyu Zhang | Yanyu Zhang, Dongming Wang, Jie Xu, Mengyuan Liu, Pengxiang Zhu, Wei
Ren | NeRF-VIO: Map-Based Visual-Inertial Odometry with Initialization
Leveraging Neural Radiance Fields | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A prior map serves as a foundational reference for localization in
context-aware applications such as augmented reality (AR). Providing valuable
contextual information about the environment, the prior map is a vital tool for
mitigating drift. In this paper, we propose a map-based visual-inertial
localization algorithm (NeRF-VIO) with initialization using neural radiance
fields (NeRF). Our algorithm utilizes a multilayer perceptron model and
redefines the loss function as the geodesic distance on \(SE(3)\), ensuring the
invariance of the initialization model under a frame change within
\(\mathfrak{se}(3)\). The evaluation demonstrates that our model outperforms
existing NeRF-based initialization solutions in both accuracy and efficiency. By
integrating a two-stage update mechanism within a multi-state constraint Kalman
filter (MSCKF) framework, the state of NeRF-VIO is constrained by both captured
images from an onboard camera and rendered images from a pre-trained NeRF
model. The proposed algorithm is validated using a real-world AR dataset, and the
results indicate that our two-stage update pipeline outperforms MSCKF across
all data sequences.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:23:22 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Yanyu",
""
],
[
"Wang",
"Dongming",
""
],
[
"Xu",
"Jie",
""
],
[
"Liu",
"Mengyuan",
""
],
[
"Zhu",
"Pengxiang",
""
],
[
"Ren",
"Wei",
""
]
]
| TITLE: NeRF-VIO: Map-Based Visual-Inertial Odometry with Initialization
Leveraging Neural Radiance Fields
ABSTRACT: A prior map serves as a foundational reference for localization in
context-aware applications such as augmented reality (AR). Providing valuable
contextual information about the environment, the prior map is a vital tool for
mitigating drift. In this paper, we propose a map-based visual-inertial
localization algorithm (NeRF-VIO) with initialization using neural radiance
fields (NeRF). Our algorithm utilizes a multilayer perceptron model and
redefines the loss function as the geodesic distance on \(SE(3)\), ensuring the
invariance of the initialization model under a frame change within
\(\mathfrak{se}(3)\). The evaluation demonstrates that our model outperforms
existing NeRF-based initialization solutions in both accuracy and efficiency. By
integrating a two-stage update mechanism within a multi-state constraint Kalman
filter (MSCKF) framework, the state of NeRF-VIO is constrained by both captured
images from an onboard camera and rendered images from a pre-trained NeRF
model. The proposed algorithm is validated using a real-world AR dataset, and the
results indicate that our two-stage update pipeline outperforms MSCKF across
all data sequences.
| no_new_dataset | 0.95096 |
2503.07955 | Yanyu Zhang | Yanyu Zhang, Jie Xu, Wei Ren | PLK-Calib: Single-shot and Target-less LiDAR-Camera Extrinsic
Calibration using Pl\"ucker Lines | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate LiDAR-Camera (LC) calibration is challenging but crucial for
autonomous systems and robotics. In this paper, we propose two single-shot and
target-less algorithms to estimate the calibration parameters between LiDAR and
camera using line features. The first algorithm constructs line-to-line
constraints by defining points-to-line projection errors and minimizes the
projection error. The second algorithm (PLK-Calib) utilizes the
co-perpendicular and co-parallel geometric properties of lines in Pl\"ucker
(PLK) coordinate, and decouples the rotation and translation into two
constraints, enabling more accurate estimates. Our degenerate analysis and
Monte Carlo simulation indicate that three nonparallel line pairs are the
minimal requirements to estimate the extrinsic parameters. Furthermore, we
collect an LC calibration dataset with varying extrinsics under three different
scenarios and use it to evaluate the performance of our proposed algorithms.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:28:47 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Yanyu",
""
],
[
"Xu",
"Jie",
""
],
[
"Ren",
"Wei",
""
]
]
| TITLE: PLK-Calib: Single-shot and Target-less LiDAR-Camera Extrinsic
Calibration using Pl\"ucker Lines
ABSTRACT: Accurate LiDAR-Camera (LC) calibration is challenging but crucial for
autonomous systems and robotics. In this paper, we propose two single-shot and
target-less algorithms to estimate the calibration parameters between LiDAR and
camera using line features. The first algorithm constructs line-to-line
constraints by defining points-to-line projection errors and minimizes the
projection error. The second algorithm (PLK-Calib) utilizes the
co-perpendicular and co-parallel geometric properties of lines in Pl\"ucker
(PLK) coordinate, and decouples the rotation and translation into two
constraints, enabling more accurate estimates. Our degenerate analysis and
Monte Carlo simulation indicate that three nonparallel line pairs are the
minimal requirements to estimate the extrinsic parameters. Furthermore, we
collect an LC calibration dataset with varying extrinsics under three different
scenarios and use it to evaluate the performance of our proposed algorithms.
| no_new_dataset | 0.951504 |
2503.07961 | Xin-Jian Xu | Murong Yang, Shihui Ying, Xin-Jian Xu | Overlap-aware meta-learning attention to enhance hypergraph neural
networks for node classification | latex, 45 pages, 5 figures, 3 tables | null | null | null | cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although hypergraph neural networks (HGNNs) have emerged as a powerful
framework for analyzing complex datasets, their practical performance often
remains limited. On one hand, existing networks typically employ a single type
of attention mechanism, focusing on either structural or feature similarities
during message passing. On the other hand, assuming that all nodes in current
hypergraph models have the same level of overlap may lead to suboptimal
generalization. To overcome these limitations, we propose a novel framework,
overlap-aware meta-learning attention for hypergraph neural networks
(OMA-HGNN). First, we introduce a hypergraph attention mechanism that
integrates both structural and feature similarities. Specifically, we linearly
combine their respective losses with weighted factors for the HGNN model.
Second, we partition nodes into different tasks based on their diverse overlap
levels and develop a multi-task Meta-Weight-Net (MWN) to determine the
corresponding weighted factors. Third, we jointly train the internal MWN model
with the losses from the external HGNN model and train the external model with
the weighted factors from the internal model. To evaluate the effectiveness of
OMA-HGNN, we conducted experiments on six real-world datasets and benchmarked
its performance against nine state-of-the-art methods for node classification.
The results demonstrate that OMA-HGNN excels in learning superior node
representations and outperforms these baselines.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:38:39 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Yang",
"Murong",
""
],
[
"Ying",
"Shihui",
""
],
[
"Xu",
"Xin-Jian",
""
]
]
| TITLE: Overlap-aware meta-learning attention to enhance hypergraph neural
networks for node classification
ABSTRACT: Although hypergraph neural networks (HGNNs) have emerged as a powerful
framework for analyzing complex datasets, their practical performance often
remains limited. On one hand, existing networks typically employ a single type
of attention mechanism, focusing on either structural or feature similarities
during message passing. On the other hand, assuming that all nodes in current
hypergraph models have the same level of overlap may lead to suboptimal
generalization. To overcome these limitations, we propose a novel framework,
overlap-aware meta-learning attention for hypergraph neural networks
(OMA-HGNN). First, we introduce a hypergraph attention mechanism that
integrates both structural and feature similarities. Specifically, we linearly
combine their respective losses with weighted factors for the HGNN model.
Second, we partition nodes into different tasks based on their diverse overlap
levels and develop a multi-task Meta-Weight-Net (MWN) to determine the
corresponding weighted factors. Third, we jointly train the internal MWN model
with the losses from the external HGNN model and train the external model with
the weighted factors from the internal model. To evaluate the effectiveness of
OMA-HGNN, we conducted experiments on six real-world datasets and benchmarked
its performance against nine state-of-the-art methods for node classification.
The results demonstrate that OMA-HGNN excels in learning superior node
representations and outperforms these baselines.
| no_new_dataset | 0.951549 |
2503.07962 | Sascha Diefenbacher | Benjamin Sluijter, Sascha Diefenbacher, Wahid Bhimji, Benjamin Nachman | Discriminative versus Generative Approaches to Simulation-based
Inference | 11 pages, 8 figures | null | null | null | hep-ph cs.LG hep-ex | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most of the fundamental, emergent, and phenomenological parameters of
particle and nuclear physics are determined through parametric template fits.
Simulations are used to populate histograms which are then matched to data.
This approach is inherently lossy, since histograms are binned and
low-dimensional. Deep learning has enabled unbinned and high-dimensional
parameter estimation through neural likelihood(-ratio) estimation. We compare
two approaches for neural simulation-based inference (NSBI): one based on
discriminative learning (classification) and one based on generative modeling.
These two approaches are directly evaluated on the same datasets, with a
similar level of hyperparameter optimization in both cases. In addition to a
Gaussian dataset, we study NSBI using a Higgs boson dataset from the FAIR
Universe Challenge. We find that both the direct likelihood and likelihood
ratio estimation are able to effectively extract parameters with reasonable
uncertainties. For the numerical examples and within the set of hyperparameters
studied, we found that the likelihood ratio method is more accurate and/or
precise. Both methods have a significant spread from the network training and
would require ensembling or other mitigation strategies in practice.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:38:54 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Sluijter",
"Benjamin",
""
],
[
"Diefenbacher",
"Sascha",
""
],
[
"Bhimji",
"Wahid",
""
],
[
"Nachman",
"Benjamin",
""
]
]
| TITLE: Discriminative versus Generative Approaches to Simulation-based
Inference
ABSTRACT: Most of the fundamental, emergent, and phenomenological parameters of
particle and nuclear physics are determined through parametric template fits.
Simulations are used to populate histograms which are then matched to data.
This approach is inherently lossy, since histograms are binned and
low-dimensional. Deep learning has enabled unbinned and high-dimensional
parameter estimation through neural likelihood(-ratio) estimation. We compare
two approaches for neural simulation-based inference (NSBI): one based on
discriminative learning (classification) and one based on generative modeling.
These two approaches are directly evaluated on the same datasets, with a
similar level of hyperparameter optimization in both cases. In addition to a
Gaussian dataset, we study NSBI using a Higgs boson dataset from the FAIR
Universe Challenge. We find that both the direct likelihood and likelihood
ratio estimation are able to effectively extract parameters with reasonable
uncertainties. For the numerical examples and within the set of hyperparameters
studied, we found that the likelihood ratio method is more accurate and/or
precise. Both methods have a significant spread from the network training and
would require ensembling or other mitigation strategies in practice.
| no_new_dataset | 0.946448 |
2503.07968 | Bo-Wen Zhang | Yan Yan, Junyuan Liu and Bo-Wen Zhang | LabelCoRank: Revolutionizing Long Tail Multi-Label Classification with
Co-Occurrence Reranking | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Motivation: Despite recent advancements in semantic representation driven by
pre-trained and large-scale language models, addressing long tail challenges in
multi-label text classification remains a significant issue. Long tail
challenges have persistently posed difficulties in accurately classifying less
frequent labels. Current approaches often focus on improving text semantics
while neglecting the crucial role of label relationships. Results: This paper
introduces LabelCoRank, a novel approach inspired by ranking principles.
LabelCoRank leverages label co-occurrence relationships to refine initial label
classifications through a dual-stage reranking process. The first stage uses
initial classification results to form a preliminary ranking. In the second
stage, a label co-occurrence matrix is utilized to rerank the preliminary
results, enhancing the accuracy and relevance of the final classifications. By
integrating the reranked label representations as additional text features,
LabelCoRank effectively mitigates long tail issues in multi-label text
classification. Experimental evaluations on popular datasets including MAG-CS,
PubMed, and AAPD demonstrate the effectiveness and robustness of LabelCoRank.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:52:39 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Yan",
"Yan",
""
],
[
"Liu",
"Junyuan",
""
],
[
"Zhang",
"Bo-Wen",
""
]
]
| TITLE: LabelCoRank: Revolutionizing Long Tail Multi-Label Classification with
Co-Occurrence Reranking
ABSTRACT: Motivation: Despite recent advancements in semantic representation driven by
pre-trained and large-scale language models, addressing long tail challenges in
multi-label text classification remains a significant issue. Long tail
challenges have persistently posed difficulties in accurately classifying less
frequent labels. Current approaches often focus on improving text semantics
while neglecting the crucial role of label relationships. Results: This paper
introduces LabelCoRank, a novel approach inspired by ranking principles.
LabelCoRank leverages label co-occurrence relationships to refine initial label
classifications through a dual-stage reranking process. The first stage uses
initial classification results to form a preliminary ranking. In the second
stage, a label co-occurrence matrix is utilized to rerank the preliminary
results, enhancing the accuracy and relevance of the final classifications. By
integrating the reranked label representations as additional text features,
LabelCoRank effectively mitigates long tail issues in multi-label text
classification. Experimental evaluations on popular datasets including MAG-CS,
PubMed, and AAPD demonstrate the effectiveness and robustness of LabelCoRank.
| no_new_dataset | 0.946597 |
2503.07969 | Chen Liu | Chen Liu, Feng Qiu, Wei Zhang, Lincheng Li, Dadong Wang, Xin Yu | 7ABAW-Compound Expression Recognition via Curriculum Learning | Accepted by ECCVWorkshop as the report of the first place in 7th ABAW
Track2 Competition | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | With the advent of deep learning, expression recognition has made significant
advancements. However, due to the limited availability of annotated compound
expression datasets and the subtle variations of compound expressions, Compound
Emotion Recognition (CE) still holds considerable potential for exploration. To
advance this task, the 7th Affective Behavior Analysis in-the-wild (ABAW)
competition introduces the Compound Expression Challenge based on C-EXPR-DB, a
limited dataset without labels. In this paper, we present a curriculum
learning-based framework that initially trains the model on single-expression
tasks and subsequently incorporates multi-expression data. This design ensures
that our model first masters the fundamental features of basic expressions
before being exposed to the complexities of compound emotions. Specifically,
our designs can be summarized as follows: 1) Single-Expression Pre-training:
The model is first trained on datasets containing single expressions to learn
the foundational facial features associated with basic emotions. 2) Dynamic
Compound Expression Generation: Given the scarcity of annotated compound
expression datasets, we employ CutMix and Mixup techniques on the original
single-expression images to create hybrid images exhibiting characteristics of
multiple basic emotions. 3) Incremental Multi-Expression Integration: After
performing well on single-expression tasks, the model is progressively exposed
to multi-expression data, allowing the model to adapt to the complexity and
variability of compound expressions. The official results indicate that our
method achieves the \textbf{best} performance in this competition track with an
F-score of 0.6063. Our code is released at https://github.com/YenanLiu/ABAW7th.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:53:34 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Liu",
"Chen",
""
],
[
"Qiu",
"Feng",
""
],
[
"Zhang",
"Wei",
""
],
[
"Li",
"Lincheng",
""
],
[
"Wang",
"Dadong",
""
],
[
"Yu",
"Xin",
""
]
]
| TITLE: 7ABAW-Compound Expression Recognition via Curriculum Learning
ABSTRACT: With the advent of deep learning, expression recognition has made significant
advancements. However, due to the limited availability of annotated compound
expression datasets and the subtle variations of compound expressions, Compound
Emotion Recognition (CE) still holds considerable potential for exploration. To
advance this task, the 7th Affective Behavior Analysis in-the-wild (ABAW)
competition introduces the Compound Expression Challenge based on C-EXPR-DB, a
limited dataset without labels. In this paper, we present a curriculum
learning-based framework that initially trains the model on single-expression
tasks and subsequently incorporates multi-expression data. This design ensures
that our model first masters the fundamental features of basic expressions
before being exposed to the complexities of compound emotions. Specifically,
our designs can be summarized as follows: 1) Single-Expression Pre-training:
The model is first trained on datasets containing single expressions to learn
the foundational facial features associated with basic emotions. 2) Dynamic
Compound Expression Generation: Given the scarcity of annotated compound
expression datasets, we employ CutMix and Mixup techniques on the original
single-expression images to create hybrid images exhibiting characteristics of
multiple basic emotions. 3) Incremental Multi-Expression Integration: After
performing well on single-expression tasks, the model is progressively exposed
to multi-expression data, allowing the model to adapt to the complexity and
variability of compound expressions. The official results indicate that our
method achieves the \textbf{best} performance in this competition track with an
F-score of 0.6063. Our code is released at https://github.com/YenanLiu/ABAW7th.
| no_new_dataset | 0.947088 |
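The CutMix and Mixup augmentations named in the 7ABAW abstract above are standard, publicly documented techniques; the snippet below is a minimal Mixup sketch in PyTorch, not the authors' pipeline, and the batch size, image resolution, and seven-class label layout are assumptions made only for illustration.

```python
import torch

def mixup(images: torch.Tensor, labels: torch.Tensor, alpha: float = 0.4):
    """Blend random pairs of images and their soft labels (standard Mixup)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))               # random pairing within the batch
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
    return mixed_images, mixed_labels

# Toy usage: two basic-expression one-hot labels blend into a compound-like soft target.
imgs = torch.rand(8, 3, 112, 112)                       # assumed batch of face crops
lbls = torch.eye(7)[torch.randint(0, 7, (8,))]          # assumed 7 basic-expression classes
mixed_imgs, mixed_lbls = mixup(imgs, lbls)
```

Mixing two one-hot basic-expression labels yields a soft target spanning multiple emotions, which is the intuition behind the dynamic compound expression generation step described in the abstract.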
2503.07982 | Ziseok Lee | Sanghyun Jo, Ziseok Lee, Wooyeol Lee, Kyungsu Kim | DiffEGG: Diffusion-Driven Edge Generation as a Pixel-Annotation-Free
Alternative for Instance Annotation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Achieving precise panoptic segmentation relies on pixel-wise instance
annotations, but obtaining such datasets is costly. Unsupervised instance
segmentation (UIS) eliminates annotation requirements but struggles with
adjacent instance merging and single-instance fragmentation, largely due to the
limitations of DINO-based backbones which lack strong instance separation cues.
Weakly-supervised panoptic segmentation (WPS) reduces annotation costs using
sparse labels (e.g., points, boxes), yet these annotations remain expensive and
introduce human bias and boundary errors. To address these challenges, we
propose DiffEGG (Diffusion-Driven EdGe Generation), a fully annotation-free
method that extracts instance-aware features from pretrained diffusion models
to generate precise instance edge maps. Unlike DINO-based UIS methods,
diffusion models inherently capture fine-grained, instance-aware features,
enabling more precise boundary delineation. For WPS, DiffEGG eliminates
annotation costs and human bias by operating without any form of manual
supervision, addressing the key limitations of prior best methods.
Additionally, we introduce RIP, a post-processing technique that fuses
DiffEGG's edge maps with segmentation masks in a task-agnostic manner. RIP
allows DiffEGG to be seamlessly integrated into various segmentation
frameworks. When applied to UIS, DiffEGG and RIP achieve an average $+4.4\text{
AP}$ improvement over prior best UIS methods. When combined with
weakly-supervised semantic segmentation (WSS), DiffEGG enables WPS without
instance annotations, outperforming prior best point-supervised WPS methods by
$+1.7\text{ PQ}$. These results demonstrate that DiffEGG's edge maps serve as a
cost-effective, annotation-free alternative to instance annotations,
significantly improving segmentation without human intervention. Code is
available at https://github.com/shjo-april/DiffEGG.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 02:34:33 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Jo",
"Sanghyun",
""
],
[
"Lee",
"Ziseok",
""
],
[
"Lee",
"Wooyeol",
""
],
[
"Kim",
"Kyungsu",
""
]
]
| TITLE: DiffEGG: Diffusion-Driven Edge Generation as a Pixel-Annotation-Free
Alternative for Instance Annotation
ABSTRACT: Achieving precise panoptic segmentation relies on pixel-wise instance
annotations, but obtaining such datasets is costly. Unsupervised instance
segmentation (UIS) eliminates annotation requirements but struggles with
adjacent instance merging and single-instance fragmentation, largely due to the
limitations of DINO-based backbones which lack strong instance separation cues.
Weakly-supervised panoptic segmentation (WPS) reduces annotation costs using
sparse labels (e.g., points, boxes), yet these annotations remain expensive and
introduce human bias and boundary errors. To address these challenges, we
propose DiffEGG (Diffusion-Driven EdGe Generation), a fully annotation-free
method that extracts instance-aware features from pretrained diffusion models
to generate precise instance edge maps. Unlike DINO-based UIS methods,
diffusion models inherently capture fine-grained, instance-aware features,
enabling more precise boundary delineation. For WPS, DiffEGG eliminates
annotation costs and human bias by operating without any form of manual
supervision, addressing the key limitations of prior best methods.
Additionally, we introduce RIP, a post-processing technique that fuses
DiffEGG's edge maps with segmentation masks in a task-agnostic manner. RIP
allows DiffEGG to be seamlessly integrated into various segmentation
frameworks. When applied to UIS, DiffEGG and RIP achieve an average $+4.4\text{
AP}$ improvement over prior best UIS methods. When combined with
weakly-supervised semantic segmentation (WSS), DiffEGG enables WPS without
instance annotations, outperforming prior best point-supervised WPS methods by
$+1.7\text{ PQ}$. These results demonstrate that DiffEGG's edge maps serve as a
cost-effective, annotation-free alternative to instance annotations,
significantly improving segmentation without human intervention. Code is
available at https://github.com/shjo-april/DiffEGG.
| no_new_dataset | 0.945751 |
2503.07988 | Dongruo Zhou | Zhiyong Wang, Chen Yang, John C.S. Lui, Dongruo Zhou | Provable Zero-Shot Generalization in Offline Reinforcement Learning | 30 pages, 1 figure, 1 table | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work, we study offline reinforcement learning (RL) with zero-shot
generalization property (ZSG), where the agent has access to an offline dataset
including experiences from different environments, and the goal of the agent is
to train a policy over the training environments which performs well on test
environments without further interaction. Existing work showed that classical
offline RL fails to generalize to new, unseen environments. We propose
pessimistic empirical risk minimization (PERM) and pessimistic proximal policy
optimization (PPPO), which leverage pessimistic policy evaluation to guide
policy learning and enhance generalization. We show that both PERM and PPPO are
capable of finding a near-optimal policy with ZSG. Our result serves as a first
step in understanding the foundation of the generalization phenomenon in
offline reinforcement learning.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 02:44:32 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wang",
"Zhiyong",
""
],
[
"Yang",
"Chen",
""
],
[
"Lui",
"John C. S.",
""
],
[
"Zhou",
"Dongruo",
""
]
]
| TITLE: Provable Zero-Shot Generalization in Offline Reinforcement Learning
ABSTRACT: In this work, we study offline reinforcement learning (RL) with zero-shot
generalization property (ZSG), where the agent has access to an offline dataset
including experiences from different environments, and the goal of the agent is
to train a policy over the training environments which performs well on test
environments without further interaction. Existing work showed that classical
offline RL fails to generalize to new, unseen environments. We propose
pessimistic empirical risk minimization (PERM) and pessimistic proximal policy
optimization (PPPO), which leverage pessimistic policy evaluation to guide
policy learning and enhance generalization. We show that both PERM and PPPO are
capable of finding a near-optimal policy with ZSG. Our result serves as a first
step in understanding the foundation of the generalization phenomenon in
offline reinforcement learning.
| no_new_dataset | 0.947721 |
2503.07990 | Katherine Xie | Katherine Xie, Nitya Babbar, Vicky Chen, Yoanna Turura | Enhancing Multilingual Language Models for Code-Switched Input Data | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Code-switching, or alternating between languages within a single
conversation, presents challenges for multilingual language models on NLP
tasks. This research investigates if pre-training Multilingual BERT (mBERT) on
code-switched datasets improves the model's performance on critical NLP tasks
such as part of speech tagging, sentiment analysis, named entity recognition,
and language identification. We use a dataset of Spanglish tweets for
pre-training and evaluate the pre-trained model against a baseline model.
Our findings show that our pre-trained mBERT model outperforms or matches the
baseline model in the given tasks, with the most significant improvements seen
for parts of speech tagging. Additionally, our latent analysis uncovers more
homogeneous English and Spanish embeddings for language identification tasks,
providing insights for future modeling work.
This research highlights the potential of adapting multilingual LMs to
code-switched input data to achieve advanced utility in globalized and
multilingual contexts. Future work includes extending experiments to other
language pairs, incorporating multiform data, and exploring methods for better
understanding context-dependent code-switches.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 02:49:41 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Xie",
"Katherine",
""
],
[
"Babbar",
"Nitya",
""
],
[
"Chen",
"Vicky",
""
],
[
"Turura",
"Yoanna",
""
]
]
| TITLE: Enhancing Multilingual Language Models for Code-Switched Input Data
ABSTRACT: Code-switching, or alternating between languages within a single
conversation, presents challenges for multilingual language models on NLP
tasks. This research investigates if pre-training Multilingual BERT (mBERT) on
code-switched datasets improves the model's performance on critical NLP tasks
such as part of speech tagging, sentiment analysis, named entity recognition,
and language identification. We use a dataset of Spanglish tweets for
pre-training and evaluate the pre-trained model against a baseline model.
Our findings show that our pre-trained mBERT model outperforms or matches the
baseline model in the given tasks, with the most significant improvements seen
for parts of speech tagging. Additionally, our latent analysis uncovers more
homogeneous English and Spanish embeddings for language identification tasks,
providing insights for future modeling work.
This research highlights the potential of adapting multilingual LMs to
code-switched input data to achieve advanced utility in globalized and
multilingual contexts. Future work includes extending experiments to other
language pairs, incorporating multiform data, and exploring methods for better
understanding context-dependent code-switches.
| no_new_dataset | 0.914901 |
2503.07998 | Hangyang Kong | Hangyang Kong, Wenbo Zhou, Xuxiang He, Xiaotong Tu, Xinghao Ding | Efficient Dataset Distillation through Low-Rank Space Sampling | 9 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A huge amount of data is key to the success of deep learning; however,
redundant information impairs the generalization ability of the model and
increases the burden of calculation. Dataset Distillation (DD) compresses the
original dataset into a smaller but representative subset for high-quality data
and efficient training strategies. Existing works for DD generate synthetic
images by treating each image as an independent entity, thereby overlooking the
common features among data. This paper proposes a dataset distillation method
based on Matching Training Trajectories with Low-rank Space Sampling (MTT-LSS),
which uses low-rank approximations to capture multiple low-dimensional manifold
subspaces of the original data. The synthetic data is represented by basis
vectors and shared dimension mappers from these subspaces, reducing the cost of
generating individual data points while effectively minimizing information
redundancy. The proposed method is tested on CIFAR-10, CIFAR-100, and SVHN
datasets, and outperforms the baseline methods by an average of 9.9%.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 02:59:17 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Kong",
"Hangyang",
""
],
[
"Zhou",
"Wenbo",
""
],
[
"He",
"Xuxiang",
""
],
[
"Tu",
"Xiaotong",
""
],
[
"Ding",
"Xinghao",
""
]
]
| TITLE: Efficient Dataset Distillation through Low-Rank Space Sampling
ABSTRACT: A huge amount of data is key to the success of deep learning; however,
redundant information impairs the generalization ability of the model and
increases the burden of calculation. Dataset Distillation (DD) compresses the
original dataset into a smaller but representative subset for high-quality data
and efficient training strategies. Existing works for DD generate synthetic
images by treating each image as an independent entity, thereby overlooking the
common features among data. This paper proposes a dataset distillation method
based on Matching Training Trajectories with Low-rank Space Sampling (MTT-LSS),
which uses low-rank approximations to capture multiple low-dimensional manifold
subspaces of the original data. The synthetic data is represented by basis
vectors and shared dimension mappers from these subspaces, reducing the cost of
generating individual data points while effectively minimizing information
redundancy. The proposed method is tested on CIFAR-10, CIFAR-100, and SVHN
datasets, and outperforms the baseline methods by an average of 9.9%.
| no_new_dataset | 0.953708 |
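The MTT-LSS abstract above states that synthetic data are represented by basis vectors and shared dimension mappers, but gives no concrete parameterization; the following is a hedged sketch of one plausible low-rank factorization in PyTorch, where the class name, rank, and CIFAR-sized image shape are assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class LowRankSyntheticSet(nn.Module):
    """Hypothetical parameterization: each synthetic image is a rank-r coefficient
    vector projected to pixel space through a shared basis ("dimension mapper")."""

    def __init__(self, num_images: int, rank: int = 32, img_shape=(3, 32, 32)):
        super().__init__()
        img_dim = img_shape[0] * img_shape[1] * img_shape[2]
        self.img_shape = img_shape
        self.coeffs = nn.Parameter(0.01 * torch.randn(num_images, rank))   # per-image codes
        self.basis = nn.Parameter(0.01 * torch.randn(rank, img_dim))       # shared subspace basis

    def forward(self) -> torch.Tensor:
        flat = self.coeffs @ self.basis                  # (num_images, img_dim)
        return flat.view(-1, *self.img_shape)            # reshape to image tensors

syn = LowRankSyntheticSet(num_images=100)
images = syn()   # differentiable in both factors, so usable inside a trajectory-matching loss
```

Sharing one basis across many synthetic images is what reduces the per-image parameter cost relative to storing each distilled image independently.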
2503.08002 | Yi Ding | Meghna Roy Chowdhury, Wei Xuan, Shreyas Sen, Yixue Zhao, Yi Ding | Predicting and Understanding College Student Mental Health with
Interpretable Machine Learning | 12 pages, 10 figures, ACM/IEEE International Conference on Connected
Health: Applications, Systems and Engineering Technologies (CHASE '25), June
24--26, 2025, New York, NY, USA | null | 10.1145/3721201.3721372 | null | cs.LG cs.CY | http://creativecommons.org/licenses/by/4.0/ | Mental health issues among college students have reached critical levels,
significantly impacting academic performance and overall wellbeing. Predicting
and understanding mental health status among college students is challenging
due to three main factors: the necessity for large-scale longitudinal datasets,
the prevalence of black-box machine learning models lacking transparency, and
the tendency of existing approaches to provide aggregated insights at the
population level rather than individualized understanding.
To tackle these challenges, this paper presents I-HOPE, the first
Interpretable Hierarchical mOdel for Personalized mEntal health prediction.
I-HOPE is a two-stage hierarchical model, validated on the College Experience
Study, the longest longitudinal mobile sensing dataset. This dataset spans five
years and captures data from both pre-pandemic periods and the COVID-19
pandemic. I-HOPE connects raw behavioral features to mental health status
through five defined behavioral categories as interaction labels. This approach
achieves a prediction accuracy of 91%, significantly surpassing the 60-70%
accuracy of baseline methods. In addition, our model distills complex patterns
into interpretable and individualized insights, enabling the future development
of tailored interventions and improving mental health support. The code is
available at https://github.com/roycmeghna/I-HOPE.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 03:07:37 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chowdhury",
"Meghna Roy",
""
],
[
"Xuan",
"Wei",
""
],
[
"Sen",
"Shreyas",
""
],
[
"Zhao",
"Yixue",
""
],
[
"Ding",
"Yi",
""
]
]
| TITLE: Predicting and Understanding College Student Mental Health with
Interpretable Machine Learning
ABSTRACT: Mental health issues among college students have reached critical levels,
significantly impacting academic performance and overall wellbeing. Predicting
and understanding mental health status among college students is challenging
due to three main factors: the necessity for large-scale longitudinal datasets,
the prevalence of black-box machine learning models lacking transparency, and
the tendency of existing approaches to provide aggregated insights at the
population level rather than individualized understanding.
To tackle these challenges, this paper presents I-HOPE, the first
Interpretable Hierarchical mOdel for Personalized mEntal health prediction.
I-HOPE is a two-stage hierarchical model, validated on the College Experience
Study, the longest longitudinal mobile sensing dataset. This dataset spans five
years and captures data from both pre-pandemic periods and the COVID-19
pandemic. I-HOPE connects raw behavioral features to mental health status
through five defined behavioral categories as interaction labels. This approach
achieves a prediction accuracy of 91%, significantly surpassing the 60-70%
accuracy of baseline methods. In addition, our model distills complex patterns
into interpretable and individualized insights, enabling the future development
of tailored interventions and improving mental health support. The code is
available at https://github.com/roycmeghna/I-HOPE.
| new_dataset | 0.875308 |
2503.08008 | Fei Wang | Fei Wang, Tingting Zhang, Bintong Zhao, Libao Xing, Tiantian Wang, Han
Ding, Tony Xiao Han | A Survey on Wi-Fi Sensing Generalizability: Taxonomy, Techniques,
Datasets, and Future Research Prospects | 38 pages, 318 references | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wi-Fi sensing has emerged as a transformative technology that leverages
ubiquitous wireless signals to enable a variety of applications ranging from
activity and gesture recognition to indoor localization and health monitoring.
However, the inherent dependency of Wi-Fi signals on environmental conditions
introduces significant generalization challenges: variations in surroundings,
human positions, and orientations often lead to inconsistent signal features,
impeding robust action recognition. In this survey, we review over 200 studies
on Wi-Fi sensing generalization, categorizing them along the entire sensing
pipeline: device deployment, signal processing, feature learning, and model
deployment. We systematically analyze state-of-the-art techniques, which are
employed to mitigate the adverse effects of environmental variability.
Moreover, we provide a comprehensive overview of open-source datasets such as
Widar3.0, XRF55, and XRFv2, highlighting their unique characteristics and
applicability for multimodal fusion and cross-modal tasks. Finally, we discuss
emerging research directions, such as multimodal approaches and the integration
of large language models, to inspire future advancements in this rapidly
evolving field. Our survey aims to serve as a valuable resource for
researchers, offering insights into current methodologies, available datasets,
and promising avenues for further investigation.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 03:18:20 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wang",
"Fei",
""
],
[
"Zhang",
"Tingting",
""
],
[
"Zhao",
"Bintong",
""
],
[
"Xing",
"Libao",
""
],
[
"Wang",
"Tiantian",
""
],
[
"Ding",
"Han",
""
],
[
"Han",
"Tony Xiao",
""
]
]
| TITLE: A Survey on Wi-Fi Sensing Generalizability: Taxonomy, Techniques,
Datasets, and Future Research Prospects
ABSTRACT: Wi-Fi sensing has emerged as a transformative technology that leverages
ubiquitous wireless signals to enable a variety of applications ranging from
activity and gesture recognition to indoor localization and health monitoring.
However, the inherent dependency of Wi-Fi signals on environmental conditions
introduces significant generalization challenges: variations in surroundings,
human positions, and orientations often lead to inconsistent signal features,
impeding robust action recognition. In this survey, we review over 200 studies
on Wi-Fi sensing generalization, categorizing them along the entire sensing
pipeline: device deployment, signal processing, feature learning, and model
deployment. We systematically analyze state-of-the-art techniques, which are
employed to mitigate the adverse effects of environmental variability.
Moreover, we provide a comprehensive overview of open-source datasets such as
Widar3.0, XRF55, and XRFv2, highlighting their unique characteristics and
applicability for multimodal fusion and cross-modal tasks. Finally, we discuss
emerging research directions, such as multimodal approaches and the integration
of large language models, to inspire future advancements in this rapidly
evolving field. Our survey aims to serve as a valuable resource for
researchers, offering insights into current methodologies, available datasets,
and promising avenues for further investigation.
| no_new_dataset | 0.946151 |
2503.08010 | Chen-Yi Lu | Chen Yi Lu, Md Mehrab Tanjim, Ishita Dasgupta, Somdeb Sarkhel, Gang
Wu, Saayan Mitra, Somali Chaterji | SKALD: Learning-Based Shot Assembly for Coherent Multi-Shot Video
Creation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present SKALD, a multi-shot video assembly method that constructs coherent
video sequences from candidate shots with minimal reliance on text. Central to
our approach is the Learned Clip Assembly (LCA) score, a learning-based metric
that measures temporal and semantic relationships between shots to quantify
narrative coherence. We tackle the exponential complexity of combining multiple
shots with an efficient beam-search algorithm guided by the LCA score. To train
our model effectively with limited human annotations, we propose two tasks for
the LCA encoder: Shot Coherence Learning, which uses contrastive learning to
distinguish coherent and incoherent sequences, and Feature Regression, which
converts these learned representations into a real-valued coherence score. We
develop two variants: a base SKALD model that relies solely on visual coherence
and SKALD-text, which integrates auxiliary text information when available.
Experiments on the VSPD and our curated MSV3C datasets show that SKALD achieves
an improvement of up to 48.6% in IoU and a 43% speedup over the
state-of-the-art methods. A user study further validates our approach, with 45%
of participants favoring SKALD-assembled videos, compared to 22% preferring
text-based assembly methods.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 03:25:44 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Lu",
"Chen Yi",
""
],
[
"Tanjim",
"Md Mehrab",
""
],
[
"Dasgupta",
"Ishita",
""
],
[
"Sarkhel",
"Somdeb",
""
],
[
"Wu",
"Gang",
""
],
[
"Mitra",
"Saayan",
""
],
[
"Chaterji",
"Somali",
""
]
]
| TITLE: SKALD: Learning-Based Shot Assembly for Coherent Multi-Shot Video
Creation
ABSTRACT: We present SKALD, a multi-shot video assembly method that constructs coherent
video sequences from candidate shots with minimal reliance on text. Central to
our approach is the Learned Clip Assembly (LCA) score, a learning-based metric
that measures temporal and semantic relationships between shots to quantify
narrative coherence. We tackle the exponential complexity of combining multiple
shots with an efficient beam-search algorithm guided by the LCA score. To train
our model effectively with limited human annotations, we propose two tasks for
the LCA encoder: Shot Coherence Learning, which uses contrastive learning to
distinguish coherent and incoherent sequences, and Feature Regression, which
converts these learned representations into a real-valued coherence score. We
develop two variants: a base SKALD model that relies solely on visual coherence
and SKALD-text, which integrates auxiliary text information when available.
Experiments on the VSPD and our curated MSV3C datasets show that SKALD achieves
an improvement of up to 48.6% in IoU and a 43% speedup over the
state-of-the-art methods. A user study further validates our approach, with 45%
of participants favoring SKALD-assembled videos, compared to 22% preferring
text-based assembly methods.
| no_new_dataset | 0.944485 |
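SKALD's beam search over candidate shots is described above only at a high level; the sketch below is generic beam search with a pluggable sequence-scoring function standing in for the learned LCA score. The `score_fn` signature, the one-use-per-shot constraint, and the toy scorer are assumptions, not the paper's implementation.

```python
from typing import Callable, List, Sequence, Tuple

def beam_search_assembly(candidates: Sequence[str], seq_len: int,
                         score_fn: Callable[[List[str]], float],
                         beam_width: int = 5) -> List[str]:
    """Keep only the `beam_width` highest-scoring partial shot sequences at each step."""
    beams: List[Tuple[List[str], float]] = [([], 0.0)]
    for _ in range(seq_len):
        expanded = []
        for seq, _ in beams:
            for shot in candidates:
                if shot in seq:                          # assume each shot is used at most once
                    continue
                new_seq = seq + [shot]
                expanded.append((new_seq, score_fn(new_seq)))
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]

# Toy scorer: penalize out-of-order shot ids as a stand-in for a learned coherence score.
best = beam_search_assembly(["s1", "s2", "s3", "s4"], seq_len=3,
                            score_fn=lambda s: -sum(s[i] > s[i + 1] for i in range(len(s) - 1)))
```

Pruning to a fixed beam width is what keeps the otherwise exponential number of shot orderings tractable.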
2503.08015 | Saurabh Kataria | Zhaoliang Chen, Cheng Ding, Saurabh Kataria, Runze Yan, Minxiao Wang,
Randall Lee, Xiao Hu | GPT-PPG: A GPT-based Foundation Model for Photoplethysmography Signals | null | null | null | null | cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study introduces a novel application of a Generative Pre-trained
Transformer (GPT) model tailored for photoplethysmography (PPG) signals,
serving as a foundation model for various downstream tasks. Adapting the
standard GPT architecture to suit the continuous characteristics of PPG
signals, our approach demonstrates promising results. Our models are
pre-trained on our extensive dataset that contains more than 200 million 30s
PPG samples. We explored different supervised fine-tuning techniques to adapt
our model to downstream tasks, resulting in performance comparable to or
surpassing current state-of-the-art (SOTA) methods in tasks like atrial
fibrillation detection. A standout feature of our GPT model is its inherent
capability to perform generative tasks such as signal denoising effectively,
without the need for further fine-tuning. This success is attributed to the
generative nature of the GPT framework.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 03:45:31 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chen",
"Zhaoliang",
""
],
[
"Ding",
"Cheng",
""
],
[
"Kataria",
"Saurabh",
""
],
[
"Yan",
"Runze",
""
],
[
"Wang",
"Minxiao",
""
],
[
"Lee",
"Randall",
""
],
[
"Hu",
"Xiao",
""
]
]
| TITLE: GPT-PPG: A GPT-based Foundation Model for Photoplethysmography Signals
ABSTRACT: This study introduces a novel application of a Generative Pre-trained
Transformer (GPT) model tailored for photoplethysmography (PPG) signals,
serving as a foundation model for various downstream tasks. Adapting the
standard GPT architecture to suit the continuous characteristics of PPG
signals, our approach demonstrates promising results. Our models are
pre-trained on our extensive dataset that contains more than 200 million 30s
PPG samples. We explored different supervised fine-tuning techniques to adapt
our model to downstream tasks, resulting in performance comparable to or
surpassing current state-of-the-art (SOTA) methods in tasks like atrial
fibrillation detection. A standout feature of our GPT model is its inherent
capability to perform generative tasks such as signal denoising effectively,
without the need for further fine-tuning. This success is attributed to the
generative nature of the GPT framework.
| no_new_dataset | 0.511747 |
2503.08016 | Akshat Ghiya | Akshat Ghiya, Ali K. AlShami, Jugal Kalita | SGNetPose+: Stepwise Goal-Driven Networks with Pose Information for
Trajectory Prediction in Autonomous Driving | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting pedestrian trajectories is essential for autonomous driving
systems, as it significantly enhances safety and supports informed
decision-making. Accurate predictions enable the prevention of collisions,
anticipation of crossing intent, and improved overall system efficiency. In
this study, we present SGNetPose+, an enhancement of the SGNet architecture
designed to integrate skeleton information or body segment angles with bounding
boxes to predict pedestrian trajectories from video data to avoid hazards in
autonomous driving. Skeleton information was extracted using a pose estimation
model, and joint angles were computed based on the extracted joint data. We
also apply temporal data augmentation by horizontally flipping video frames to
increase the dataset size and improve performance. Our approach achieves
state-of-the-art results on the JAAD and PIE datasets using pose data with the
bounding boxes, outperforming the SGNet model. Code is available on Github:
SGNetPose+.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 03:45:51 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Ghiya",
"Akshat",
""
],
[
"AlShami",
"Ali K.",
""
],
[
"Kalita",
"Jugal",
""
]
]
| TITLE: SGNetPose+: Stepwise Goal-Driven Networks with Pose Information for
Trajectory Prediction in Autonomous Driving
ABSTRACT: Predicting pedestrian trajectories is essential for autonomous driving
systems, as it significantly enhances safety and supports informed
decision-making. Accurate predictions enable the prevention of collisions,
anticipation of crossing intent, and improved overall system efficiency. In
this study, we present SGNetPose+, an enhancement of the SGNet architecture
designed to integrate skeleton information or body segment angles with bounding
boxes to predict pedestrian trajectories from video data to avoid hazards in
autonomous driving. Skeleton information was extracted using a pose estimation
model, and joint angles were computed based on the extracted joint data. We
also apply temporal data augmentation by horizontally flipping video frames to
increase the dataset size and improve performance. Our approach achieves
state-of-the-art results on the JAAD and PIE datasets using pose data with the
bounding boxes, outperforming the SGNet model. Code is available on Github:
SGNetPose+.
| no_new_dataset | 0.947672 |
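The SGNetPose+ abstract mentions computing joint angles from extracted skeleton keypoints without specifying the formula; one common definition is the angle at a joint between its two adjacent body segments, sketched below with NumPy. The keypoint naming and the 2D pixel-coordinate setting are assumptions for illustration only.

```python
import numpy as np

def joint_angle(parent: np.ndarray, joint: np.ndarray, child: np.ndarray) -> float:
    """Angle in degrees at `joint`, formed by the segments joint->parent and joint->child."""
    v1, v2 = parent - joint, child - joint
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example: elbow angle from shoulder, elbow, and wrist pixel coordinates.
print(joint_angle(np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # 90.0
```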
2503.08023 | Sudarshan Regmi | Sudarshan Regmi | AdaSCALE: Adaptive Scaling for OOD Detection | https://github.com/sudarshanregmi/AdaSCALE/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The ability of the deep learning model to recognize when a sample falls
outside its learned distribution is critical for safe and reliable deployment.
Recent state-of-the-art out-of-distribution (OOD) detection methods leverage
activation shaping to improve the separation between in-distribution (ID) and
OOD inputs. These approaches resort to sample-specific scaling but apply a
static percentile threshold across all samples regardless of their nature,
resulting in suboptimal ID-OOD separability. In this work, we propose
\textbf{AdaSCALE}, an adaptive scaling procedure that dynamically adjusts the
percentile threshold based on a sample's estimated OOD likelihood. This
estimation leverages our key observation: OOD samples exhibit significantly
more pronounced activation shifts at high-magnitude activations under minor
perturbation compared to ID samples. AdaSCALE enables stronger scaling for
likely ID samples and weaker scaling for likely OOD samples, yielding highly
separable energy scores. Our approach achieves state-of-the-art OOD detection
performance, outperforming the latest rival OptFS by 14.94 in near-OOD and
21.67 in far-OOD datasets in average FPR@95 metric on the ImageNet-1k benchmark
across eight diverse architectures. The code is available at:
https://github.com/sudarshanregmi/AdaSCALE/
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 04:10:06 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Regmi",
"Sudarshan",
""
]
]
| TITLE: AdaSCALE: Adaptive Scaling for OOD Detection
ABSTRACT: The ability of the deep learning model to recognize when a sample falls
outside its learned distribution is critical for safe and reliable deployment.
Recent state-of-the-art out-of-distribution (OOD) detection methods leverage
activation shaping to improve the separation between in-distribution (ID) and
OOD inputs. These approaches resort to sample-specific scaling but apply a
static percentile threshold across all samples regardless of their nature,
resulting in suboptimal ID-OOD separability. In this work, we propose
\textbf{AdaSCALE}, an adaptive scaling procedure that dynamically adjusts the
percentile threshold based on a sample's estimated OOD likelihood. This
estimation leverages our key observation: OOD samples exhibit significantly
more pronounced activation shifts at high-magnitude activations under minor
perturbation compared to ID samples. AdaSCALE enables stronger scaling for
likely ID samples and weaker scaling for likely OOD samples, yielding highly
separable energy scores. Our approach achieves state-of-the-art OOD detection
performance, outperforming the latest rival OptFS by 14.94 in near-OOD and
21.67 in far-OOD datasets in average FPR@95 metric on the ImageNet-1k benchmark
across eight diverse architectures. The code is available at:
https://github.com/sudarshanregmi/AdaSCALE/
| no_new_dataset | 0.949342 |
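AdaSCALE's adaptive percentile rule is not reproduced above; as background for the "energy scores" the abstract refers to, the snippet below computes the standard energy-based OOD score from classifier logits. The temperature value and batch shape are assumptions, and this is the generic score rather than AdaSCALE itself.

```python
import numpy as np

def energy_score(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Standard energy OOD score E(x) = -T * logsumexp(logits / T); lower energy is
    conventionally treated as more in-distribution."""
    z = logits / temperature
    m = z.max(axis=1, keepdims=True)                     # numerically stable logsumexp
    lse = m.squeeze(1) + np.log(np.exp(z - m).sum(axis=1))
    return -temperature * lse

scores = energy_score(np.random.randn(4, 1000))          # e.g. a small batch of ImageNet-1k logits
```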
2503.08026 | Jun Yan | Zhen Tan, Jun Yan, I-Hung Hsu, Rujun Han, Zifeng Wang, Long T. Le,
Yiwen Song, Yanfei Chen, Hamid Palangi, George Lee, Anand Iyer, Tianlong
Chen, Huan Liu, Chen-Yu Lee, Tomas Pfister | In Prospect and Retrospect: Reflective Memory Management for Long-term
Personalized Dialogue Agents | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have made significant progress in open-ended
dialogue, yet their inability to retain and retrieve relevant information from
long-term interactions limits their effectiveness in applications requiring
sustained personalization. External memory mechanisms have been proposed to
address this limitation, enabling LLMs to maintain conversational continuity.
However, existing approaches struggle with two key challenges. First, rigid
memory granularity fails to capture the natural semantic structure of
conversations, leading to fragmented and incomplete representations. Second,
fixed retrieval mechanisms cannot adapt to diverse dialogue contexts and user
interaction patterns. In this work, we propose Reflective Memory Management
(RMM), a novel mechanism for long-term dialogue agents, integrating forward-
and backward-looking reflections: (1) Prospective Reflection, which dynamically
summarizes interactions across granularities-utterances, turns, and
sessions-into a personalized memory bank for effective future retrieval, and
(2) Retrospective Reflection, which iteratively refines the retrieval in an
online reinforcement learning (RL) manner based on LLMs' cited evidence.
Experiments show that RMM demonstrates consistent improvement across various
metrics and benchmarks. For example, RMM shows more than 10% accuracy
improvement over the baseline without memory management on the LongMemEval
dataset.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 04:15:52 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Tan",
"Zhen",
""
],
[
"Yan",
"Jun",
""
],
[
"Hsu",
"I-Hung",
""
],
[
"Han",
"Rujun",
""
],
[
"Wang",
"Zifeng",
""
],
[
"Le",
"Long T.",
""
],
[
"Song",
"Yiwen",
""
],
[
"Chen",
"Yanfei",
""
],
[
"Palangi",
"Hamid",
""
],
[
"Lee",
"George",
""
],
[
"Iyer",
"Anand",
""
],
[
"Chen",
"Tianlong",
""
],
[
"Liu",
"Huan",
""
],
[
"Lee",
"Chen-Yu",
""
],
[
"Pfister",
"Tomas",
""
]
]
| TITLE: In Prospect and Retrospect: Reflective Memory Management for Long-term
Personalized Dialogue Agents
ABSTRACT: Large Language Models (LLMs) have made significant progress in open-ended
dialogue, yet their inability to retain and retrieve relevant information from
long-term interactions limits their effectiveness in applications requiring
sustained personalization. External memory mechanisms have been proposed to
address this limitation, enabling LLMs to maintain conversational continuity.
However, existing approaches struggle with two key challenges. First, rigid
memory granularity fails to capture the natural semantic structure of
conversations, leading to fragmented and incomplete representations. Second,
fixed retrieval mechanisms cannot adapt to diverse dialogue contexts and user
interaction patterns. In this work, we propose Reflective Memory Management
(RMM), a novel mechanism for long-term dialogue agents, integrating forward-
and backward-looking reflections: (1) Prospective Reflection, which dynamically
summarizes interactions across granularities (utterances, turns, and
sessions) into a personalized memory bank for effective future retrieval, and
(2) Retrospective Reflection, which iteratively refines the retrieval in an
online reinforcement learning (RL) manner based on LLMs' cited evidence.
Experiments show that RMM demonstrates consistent improvement across various
metrics and benchmarks. For example, RMM shows more than 10% accuracy
improvement over the baseline without memory management on the LongMemEval
dataset.
| no_new_dataset | 0.946448 |
2503.08030 | Xiang Gao | Xiang Gao, Ankita Sinha, Kamalika Das | Learning to Search Effective Example Sequences for In-Context Learning | Accepted to appear at NAACL 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) demonstrate impressive few-shot learning
capabilities, but their performance varies widely based on the sequence of
in-context examples. Key factors influencing this include the sequence's
length, composition, and arrangement, as well as its relation to the specific
query. Existing methods often tackle these factors in isolation, overlooking
their interdependencies. Moreover, the extensive search space for selecting
optimal sequences complicates the development of a holistic approach. In this
work, we introduce Beam Search-based Example Sequence Constructor (BESC), a
novel method for learning to construct optimal example sequences. BESC
addresses all key factors involved in sequence selection by considering them
jointly during inference, while incrementally building the sequence. This
design enables the use of beam search to significantly reduce the complexity of
the search space. Experiments across various datasets and language models show
notable improvements in performance.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 04:24:59 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Gao",
"Xiang",
""
],
[
"Sinha",
"Ankita",
""
],
[
"Das",
"Kamalika",
""
]
]
| TITLE: Learning to Search Effective Example Sequences for In-Context Learning
ABSTRACT: Large language models (LLMs) demonstrate impressive few-shot learning
capabilities, but their performance varies widely based on the sequence of
in-context examples. Key factors influencing this include the sequence's
length, composition, and arrangement, as well as its relation to the specific
query. Existing methods often tackle these factors in isolation, overlooking
their interdependencies. Moreover, the extensive search space for selecting
optimal sequences complicates the development of a holistic approach. In this
work, we introduce Beam Search-based Example Sequence Constructor (BESC), a
novel method for learning to construct optimal example sequences. BESC
addresses all key factors involved in sequence selection by considering them
jointly during inference, while incrementally building the sequence. This
design enables the use of beam search to significantly reduce the complexity of
the search space. Experiments across various datasets and language models show
notable improvements in performance.
| no_new_dataset | 0.94887 |
2503.08038 | Cui Jiequan | Jiequan Cui, Beier Zhu, Qingshan Xu, Zhuotao Tian, Xiaojuan Qi, Bei
Yu, Hanwang Zhang, Richang Hong | Generalized Kullback-Leibler Divergence Loss | extension of our NeurIPS paper "Decoupled Kullback-Leibler Divergence
Loss". arXiv admin note: substantial text overlap with arXiv:2305.13948 | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence loss
and mathematically prove that it is equivalent to the Decoupled
Kullback-Leibler (DKL) Divergence loss that consists of (1) a weighted Mean
Square Error (wMSE) loss and (2) a Cross-Entropy loss incorporating soft
labels. Thanks to the decoupled structure of DKL loss, we have identified two
areas for improvement. Firstly, we address the limitation of KL loss in
scenarios like knowledge distillation by breaking its asymmetric optimization
property along with a smoother weight function. This modification effectively
alleviates convergence challenges in optimization, particularly for classes
with high predicted scores in soft labels. Secondly, we introduce class-wise
global information into KL/DKL to reduce bias arising from individual samples.
With these two enhancements, we derive the Generalized Kullback-Leibler (GKL)
Divergence loss and evaluate its effectiveness by conducting experiments on
CIFAR-10/100, ImageNet, and vision-language datasets, focusing on adversarial
training, and knowledge distillation tasks. Specifically, we achieve new
state-of-the-art adversarial robustness on the public leaderboard --
RobustBench and competitive knowledge distillation performance across
CIFAR/ImageNet models and CLIP models, demonstrating the substantial practical
merits. Our code is available at https://github.com/jiequancui/DKL.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 04:43:33 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Cui",
"Jiequan",
""
],
[
"Zhu",
"Beier",
""
],
[
"Xu",
"Qingshan",
""
],
[
"Tian",
"Zhuotao",
""
],
[
"Qi",
"Xiaojuan",
""
],
[
"Yu",
"Bei",
""
],
[
"Zhang",
"Hanwang",
""
],
[
"Hong",
"Richang",
""
]
]
| TITLE: Generalized Kullback-Leibler Divergence Loss
ABSTRACT: In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence loss
and mathematically prove that it is equivalent to the Decoupled
Kullback-Leibler (DKL) Divergence loss that consists of (1) a weighted Mean
Square Error (wMSE) loss and (2) a Cross-Entropy loss incorporating soft
labels. Thanks to the decoupled structure of DKL loss, we have identified two
areas for improvement. Firstly, we address the limitation of KL loss in
scenarios like knowledge distillation by breaking its asymmetric optimization
property along with a smoother weight function. This modification effectively
alleviates convergence challenges in optimization, particularly for classes
with high predicted scores in soft labels. Secondly, we introduce class-wise
global information into KL/DKL to reduce bias arising from individual samples.
With these two enhancements, we derive the Generalized Kullback-Leibler (GKL)
Divergence loss and evaluate its effectiveness by conducting experiments on
CIFAR-10/100, ImageNet, and vision-language datasets, focusing on adversarial
training, and knowledge distillation tasks. Specifically, we achieve new
state-of-the-art adversarial robustness on the public leaderboard --
RobustBench and competitive knowledge distillation performance across
CIFAR/ImageNet models and CLIP models, demonstrating the substantial practical
merits. Our code is available at https://github.com/jiequancui/DKL.
| no_new_dataset | 0.943138 |
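The GKL abstract above builds on the standard Kullback-Leibler distillation loss; for reference, the snippet below is that standard temperature-scaled KD loss in PyTorch, not the paper's decoupled (DKL) or generalized (GKL) variants, and the temperature and logit shapes are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def kd_kl_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
               temperature: float = 4.0) -> torch.Tensor:
    """Temperature-scaled KL(teacher || student) on softened class distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # `batchmean` reduction matches the per-sample mathematical definition of KL divergence.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

loss = kd_kl_loss(torch.randn(8, 100), torch.randn(8, 100))   # e.g. CIFAR-100-sized logits
```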
2503.08042 | Naomi Baes | Naomi Baes, Rapha\"el Merx, Nick Haslam, Ekaterina Vylomova, Haim
Dubossarsky | A General Framework to Evaluate Methods for Assessing Dimensions of
Lexical Semantic Change Using LLM-Generated Synthetic Data | 36 pages, under review | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Lexical Semantic Change (LSC) offers insights into cultural and social
dynamics. Yet, the validity of methods for measuring kinds of LSC has yet to be
established due to the absence of historical benchmark datasets. To address
this gap, we develop a novel three-stage evaluation framework that involves: 1)
creating a scalable, domain-general methodology for generating synthetic
datasets that simulate theory-driven LSC across time, leveraging In-Context
Learning and a lexical database; 2) using these datasets to evaluate the
effectiveness of various methods; and 3) assessing their suitability for
specific dimensions and domains. We apply this framework to simulate changes
across key dimensions of LSC (SIB: Sentiment, Intensity, and Breadth) using
examples from psychology, and evaluate the sensitivity of selected methods to
detect these artificially induced changes. Our findings support the utility of
the synthetic data approach, validate the efficacy of tailored methods for
detecting synthetic changes in SIB, and reveal that a state-of-the-art LSC
model faces challenges in detecting affective dimensions of LSC. This framework
provides a valuable tool for dimension- and domain-specific benchmarking and
evaluation of LSC methods, with particular benefits for the social sciences.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 04:48:22 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Baes",
"Naomi",
""
],
[
"Merx",
"Raphaël",
""
],
[
"Haslam",
"Nick",
""
],
[
"Vylomova",
"Ekaterina",
""
],
[
"Dubossarsky",
"Haim",
""
]
]
| TITLE: A General Framework to Evaluate Methods for Assessing Dimensions of
Lexical Semantic Change Using LLM-Generated Synthetic Data
ABSTRACT: Lexical Semantic Change (LSC) offers insights into cultural and social
dynamics. Yet, the validity of methods for measuring kinds of LSC has yet to be
established due to the absence of historical benchmark datasets. To address
this gap, we develop a novel three-stage evaluation framework that involves: 1)
creating a scalable, domain-general methodology for generating synthetic
datasets that simulate theory-driven LSC across time, leveraging In-Context
Learning and a lexical database; 2) using these datasets to evaluate the
effectiveness of various methods; and 3) assessing their suitability for
specific dimensions and domains. We apply this framework to simulate changes
across key dimensions of LSC (SIB: Sentiment, Intensity, and Breadth) using
examples from psychology, and evaluate the sensitivity of selected methods to
detect these artificially induced changes. Our findings support the utility of
the synthetic data approach, validate the efficacy of tailored methods for
detecting synthetic changes in SIB, and reveal that a state-of-the-art LSC
model faces challenges in detecting affective dimensions of LSC. This framework
provides a valuable tool for dimension- and domain-specific benchmarking and
evaluation of LSC methods, with particular benefits for the social sciences.
| new_dataset | 0.900004 |
2503.08045 | Zhu Jiawen | Ying Fu Lim, Jiawen Zhu, Guansong Pang | Adapting Large Language Models for Parameter-Efficient Log Anomaly
Detection | 12 pages, 5 figures, accepted by PAKDD 2025 special session | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Log Anomaly Detection (LAD) seeks to identify atypical patterns in log data
that are crucial to assessing the security and condition of systems. Although
Large Language Models (LLMs) have shown tremendous success in various fields,
the use of LLMs in enabling the detection of log anomalies is largely
unexplored. This work aims to fill this gap. Due to the prohibitive costs
involved in fully fine-tuning LLMs, we explore the use of parameter-efficient
fine-tuning techniques (PEFTs) for adapting LLMs to LAD. To have an in-depth
exploration of the potential of LLM-driven LAD, we present a comprehensive
investigation of leveraging two of the most popular PEFTs -- Low-Rank
Adaptation (LoRA) and Representation Fine-tuning (ReFT) -- to tap into three
prominent LLMs of varying size, including RoBERTa, GPT-2, and Llama-3, for
parameter-efficient LAD. Comprehensive experiments on four public log datasets
are performed to reveal important insights into effective LLM-driven LAD in
several key perspectives, including the efficacy of these PEFT-based LLM-driven
LAD methods, their stability, sample efficiency, robustness w.r.t. unstable
logs, and cross-dataset generalization. Code is available at
https://github.com/mala-lab/LogADReft.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 05:00:19 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Lim",
"Ying Fu",
""
],
[
"Zhu",
"Jiawen",
""
],
[
"Pang",
"Guansong",
""
]
]
| TITLE: Adapting Large Language Models for Parameter-Efficient Log Anomaly
Detection
ABSTRACT: Log Anomaly Detection (LAD) seeks to identify atypical patterns in log data
that are crucial to assessing the security and condition of systems. Although
Large Language Models (LLMs) have shown tremendous success in various fields,
the use of LLMs in enabling the detection of log anomalies is largely
unexplored. This work aims to fill this gap. Due to the prohibitive costs
involved in fully fine-tuning LLMs, we explore the use of parameter-efficient
fine-tuning techniques (PEFTs) for adapting LLMs to LAD. To have an in-depth
exploration of the potential of LLM-driven LAD, we present a comprehensive
investigation of leveraging two of the most popular PEFTs -- Low-Rank
Adaptation (LoRA) and Representation Fine-tuning (ReFT) -- to tap into three
prominent LLMs of varying size, including RoBERTa, GPT-2, and Llama-3, for
parameter-efficient LAD. Comprehensive experiments on four public log datasets
are performed to reveal important insights into effective LLM-driven LAD in
several key perspectives, including the efficacy of these PEFT-based LLM-driven
LAD methods, their stability, sample efficiency, robustness w.r.t. unstable
logs, and cross-dataset generalization. Code is available at
https://github.com/mala-lab/LogADReft.
| no_new_dataset | 0.947962 |
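The record above describes adapting LLMs to log anomaly detection with parameter-efficient fine-tuning. Below is a minimal sketch of what a LoRA-based setup could look like using the Hugging Face transformers and peft libraries; the backbone choice, rank, target modules, and the example log line are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch (assumptions throughout): LoRA adapters on RoBERTa for
# binary log anomaly classification; hyperparameters are illustrative only.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "roberta-base"  # assumed backbone; the paper also studies GPT-2 and Llama-3
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                 # low-rank update dimension
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],   # adapt only the attention projections
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()       # only a small fraction of weights are trainable

# Score a single (hypothetical) log line: index 1 = anomalous, 0 = normal.
inputs = tokenizer("Received block blk_123 of size 67108864", return_tensors="pt")
scores = model(**inputs).logits.softmax(dim=-1)
```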
2503.08055 | Nadarasar Bahavan | Nadarasar Bahavan, Sanjay Saha, Ken Chen, Sachith Seneviratne, Sanka
Rasnayaka, Saman Halgamuge | Unmasking the Unknown: Facial Deepfake Detection in the Open-Set
Paradigm | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Facial forgery methods such as deepfakes can be misused for identity
manipulation and spreading misinformation. They have evolved alongside
advancements in generative AI, leading to new and more sophisticated forgery
techniques that diverge from existing 'known' methods. Conventional deepfake
detection methods use the closed-set paradigm, thus limiting their applicability
to detecting forgeries created using methods that are not part of the training
dataset. In this paper, we propose a shift from the closed-set paradigm for
deepfake detection. In the open-set paradigm, models are designed not only to
identify images created by known facial forgery methods but also to identify
and flag those produced by previously unknown methods as 'unknown' and not as
unforged/real/unmanipulated. In this paper, we propose an open-set deepfake
classification algorithm based on supervised contrastive learning. The open-set
paradigm used in our model allows it to function as a more robust tool capable
of handling emerging and unseen deepfake techniques, enhancing reliability and
confidence, and complementing forensic analysis. In the open-set paradigm, we
identify three groups, including the "unknown" group, which is considered
neither a known deepfake nor real. We investigate deepfake open-set
classification across
three scenarios, classifying deepfakes from unknown methods not as real,
distinguishing real images from deepfakes, and classifying deepfakes from known
methods, using the FaceForensics++ dataset as a benchmark. Our method achieves
state-of-the-art results in the first two tasks and competitive results in the
third task.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 05:23:07 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Bahavan",
"Nadarasar",
""
],
[
"Saha",
"Sanjay",
""
],
[
"Chen",
"Ken",
""
],
[
"Seneviratne",
"Sachith",
""
],
[
"Rasnayaka",
"Sanka",
""
],
[
"Halgamuge",
"Saman",
""
]
]
| TITLE: Unmasking the Unknown: Facial Deepfake Detection in the Open-Set
Paradigm
ABSTRACT: Facial forgery methods such as deepfakes can be misused for identity
manipulation and spreading misinformation. They have evolved alongside
advancements in generative AI, leading to new and more sophisticated forgery
techniques that diverge from existing 'known' methods. Conventional deepfake
detection methods use the closed-set paradigm, thus limiting their applicability
to detecting forgeries created using methods that are not part of the training
dataset. In this paper, we propose a shift from the closed-set paradigm for
deepfake detection. In the open-set paradigm, models are designed not only to
identify images created by known facial forgery methods but also to identify
and flag those produced by previously unknown methods as 'unknown' and not as
unforged/real/unmanipulated. In this paper, we propose an open-set deepfake
classification algorithm based on supervised contrastive learning. The open-set
paradigm used in our model allows it to function as a more robust tool capable
of handling emerging and unseen deepfake techniques, enhancing reliability and
confidence, and complementing forensic analysis. In the open-set paradigm, we
identify three groups, including the "unknown" group, which is considered
neither a known deepfake nor real. We investigate deepfake open-set
classification across
three scenarios, classifying deepfakes from unknown methods not as real,
distinguishing real images from deepfakes, and classifying deepfakes from known
methods, using the FaceForensics++ dataset as a benchmark. Our method achieves
state-of-the-art results in the first two tasks and competitive results in the
third task.
| no_new_dataset | 0.949902 |
2503.08056 | Zewei Zhan | Zhongyu Mai, Zewei Zhan, Hanyu Guo, Yulang Huang, Weifeng Su | DDO-IN: Dual Domains Optimization for Implicit Neural Network to
Eliminate Motion Artifact in Magnetic Resonance Imaging | 10 pages, 2 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Magnetic resonance imaging (MRI) motion artifacts can seriously affect
clinical diagnostics, making it challenging to interpret images accurately.
Existing methods for eliminating motion artifacts struggle to retain fine
structural details and simultaneously lack the necessary vividness and
sharpness. In this study, we present a novel dual-domain optimization (DDO)
approach that integrates information from the pixel and frequency domains,
guiding the recovery of clean magnetic resonance images through implicit neural
representations (INRs). Specifically, our approach leverages the low-frequency
components in k-space as a reference to capture accurate tissue textures,
while high-frequency and pixel information contribute to recovering details.
Furthermore, we design complementary masks and a dynamic loss weighting scheme
that transitions from global to local attention, effectively suppressing
artifacts while retaining useful details for reconstruction. Experimental
results on the NYU fastMRI dataset demonstrate that our method outperforms
existing approaches in multiple evaluation metrics. Our code is available at
https://anonymous.4open.science/r/DDO-IN-A73B.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 05:26:03 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Mai",
"Zhongyu",
""
],
[
"Zhan",
"Zewei",
""
],
[
"Guo",
"Hanyu",
""
],
[
"Huang",
"Yulang",
""
],
[
"Su",
"Weifeng",
""
]
]
| TITLE: DDO-IN: Dual Domains Optimization for Implicit Neural Network to
Eliminate Motion Artifact in Magnetic Resonance Imaging
ABSTRACT: Magnetic resonance imaging (MRI) motion artifacts can seriously affect
clinical diagnostics, making it challenging to interpret images accurately.
Existing methods for eliminating motion artifacts struggle to retain fine
structural details and simultaneously lack the necessary vividness and
sharpness. In this study, we present a novel dual-domain optimization (DDO)
approach that integrates information from the pixel and frequency domains,
guiding the recovery of clean magnetic resonance images through implicit neural
representations (INRs). Specifically, our approach leverages the low-frequency
components in k-space as a reference to capture accurate tissue textures,
while high-frequency and pixel information contribute to recovering details.
Furthermore, we design complementary masks and a dynamic loss weighting scheme
that transitions from global to local attention, effectively suppressing
artifacts while retaining useful details for reconstruction. Experimental
results on the NYU fastMRI dataset demonstrate that our method outperforms
existing approaches in multiple evaluation metrics. Our code is available at
https://anonymous.4open.science/r/DDO-IN-A73B.
| no_new_dataset | 0.952706 |
2503.08057 | Wen Luo | Wen Luo, Feifan Song, Wei Li, Guangyue Peng, Shaohang Wei, Houfeng
Wang | Odysseus Navigates the Sirens' Song: Dynamic Focus Decoding for Factual
and Diverse Open-Ended Text Generation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) are increasingly required to generate text that
is both factually accurate and diverse across various open-ended applications.
However, current stochastic decoding methods struggle to balance such
objectives. We introduce Dynamic Focus Decoding (DFD), a novel plug-and-play
stochastic approach that resolves this trade-off without requiring additional
data, knowledge, or models. DFD adaptively adjusts the decoding focus based on
distributional differences across layers, leveraging the modular and
hierarchical nature of factual knowledge within LLMs. This dynamic adjustment
improves factuality in knowledge-intensive decoding steps and promotes
diversity in less knowledge-reliant steps. DFD can be easily integrated with
existing decoding methods, enhancing both factuality and diversity with minimal
computational overhead. Extensive experiments across seven datasets demonstrate
that DFD significantly improves performance, providing a scalable and efficient
solution for open-ended text generation.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 05:27:28 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Luo",
"Wen",
""
],
[
"Song",
"Feifan",
""
],
[
"Li",
"Wei",
""
],
[
"Peng",
"Guangyue",
""
],
[
"Wei",
"Shaohang",
""
],
[
"Wang",
"Houfeng",
""
]
]
| TITLE: Odysseus Navigates the Sirens' Song: Dynamic Focus Decoding for Factual
and Diverse Open-Ended Text Generation
ABSTRACT: Large Language Models (LLMs) are increasingly required to generate text that
is both factually accurate and diverse across various open-ended applications.
However, current stochastic decoding methods struggle to balance such
objectives. We introduce Dynamic Focus Decoding (DFD), a novel plug-and-play
stochastic approach that resolves this trade-off without requiring additional
data, knowledge, or models. DFD adaptively adjusts the decoding focus based on
distributional differences across layers, leveraging the modular and
hierarchical nature of factual knowledge within LLMs. This dynamic adjustment
improves factuality in knowledge-intensive decoding steps and promotes
diversity in less knowledge-reliant steps. DFD can be easily integrated with
existing decoding methods, enhancing both factuality and diversity with minimal
computational overhead. Extensive experiments across seven datasets demonstrate
that DFD significantly improves performance, providing a scalable and efficient
solution for open-ended text generation.
| no_new_dataset | 0.947235 |
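The decoding idea in the record above (adjusting the sampling focus based on distributional differences across layers) can be illustrated with a small sketch; the KL-based disagreement measure and the temperature schedule below are assumptions chosen for clarity, not the exact DFD formulation.

```python
# Sketch: modulate sampling temperature by cross-layer disagreement of the
# next-token distribution. The KL measure and tanh schedule are assumptions.
import torch
import torch.nn.functional as F

def layer_disagreement(hidden_states, lm_head):
    """Mean KL divergence between the final layer's next-token distribution
    and those implied by earlier layers (early-exit style)."""
    final_logp = F.log_softmax(lm_head(hidden_states[-1][:, -1]), dim=-1)
    kls = []
    for h in hidden_states[:-1]:
        early_logp = F.log_softmax(lm_head(h[:, -1]), dim=-1)
        kls.append(F.kl_div(early_logp, final_logp, reduction="batchmean", log_target=True))
    return torch.stack(kls).mean()

def focus_sample(final_logits, disagreement, t_min=0.3, t_max=1.0):
    # Large disagreement ~ knowledge-intensive step: sample more sharply (factuality);
    # small disagreement ~ knowledge-light step: keep a higher temperature (diversity).
    temperature = t_max - (t_max - t_min) * torch.tanh(disagreement)
    probs = F.softmax(final_logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```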
2503.08064 | Hyundong Jin | Hyundong Jin and Eunwoo Kim | Continual Learning for Multiple Modalities | 14 pages, 7 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continual learning aims to learn knowledge of tasks observed in sequential
time steps while mitigating the forgetting of previously learned knowledge.
Existing methods were proposed under the assumption of learning a single
modality (e.g., image) over time, which limits their applicability in scenarios
involving multiple modalities. In this work, we propose a novel continual
learning framework that accommodates multiple modalities (image, video, audio,
depth, and text). We train a model to align various modalities with text,
leveraging its rich semantic information. However, this increases the risk of
forgetting previously learned knowledge, exacerbated by the differing input
traits of each task. To alleviate the overwriting of the previous knowledge of
modalities, we propose a method for aggregating knowledge within and across
modalities. The aggregated knowledge is obtained by assimilating new
information through self-regularization within each modality and associating
knowledge between modalities by prioritizing contributions from relevant
modalities. Furthermore, we propose a strategy that re-aligns the embeddings of
modalities to resolve biased alignment between modalities. We evaluate the
proposed method in a wide range of continual learning scenarios using multiple
datasets with different modalities. Extensive experiments demonstrate that our
method outperforms existing methods in these scenarios, regardless of whether the
identity of the modality is given.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 05:50:13 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Jin",
"Hyundong",
""
],
[
"Kim",
"Eunwoo",
""
]
]
| TITLE: Continual Learning for Multiple Modalities
ABSTRACT: Continual learning aims to learn knowledge of tasks observed in sequential
time steps while mitigating the forgetting of previously learned knowledge.
Existing methods were proposed under the assumption of learning a single
modality (e.g., image) over time, which limits their applicability in scenarios
involving multiple modalities. In this work, we propose a novel continual
learning framework that accommodates multiple modalities (image, video, audio,
depth, and text). We train a model to align various modalities with text,
leveraging its rich semantic information. However, this increases the risk of
forgetting previously learned knowledge, exacerbated by the differing input
traits of each task. To alleviate the overwriting of the previous knowledge of
modalities, we propose a method for aggregating knowledge within and across
modalities. The aggregated knowledge is obtained by assimilating new
information through self-regularization within each modality and associating
knowledge between modalities by prioritizing contributions from relevant
modalities. Furthermore, we propose a strategy that re-aligns the embeddings of
modalities to resolve biased alignment between modalities. We evaluate the
proposed method in a wide range of continual learning scenarios using multiple
datasets with different modalities. Extensive experiments demonstrate that our
method outperforms existing methods in these scenarios, regardless of whether the
identity of the modality is given.
| no_new_dataset | 0.943191 |
2503.08067 | Amir Mansurian | Ali Veisi, Amir Mansourian | Context-aware Biases for Length Extrapolation | 11 pages, 8 figures, 1 table | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Transformers' ability to generalize to longer sequences than they have been
trained on, known as length extrapolation, degrades as sequence length
increases. Most Relative Positional Encoding (RPE) methods address this
problem by either adding constant linear biases or learning general biases,
lacking the ability to specialize for different sequences. In this work,
inspired by ALiBi, we propose Context-aware Biases for Length Extrapolation
(Cable), that learns token-specific biases for each head in decoder-based
transformers. Cable learns adaptive, context-aware biases, overcoming the
limitations of fixed patterns by adding dynamic biases specific to each token
in the sequence. Results show that when tested on a sequence length of 1024, a
GPT-3 Medium (334M parameters) with our positional encoding, trained on a
sequence length of 512, achieves better perplexity (-0.65) than a similar
network with sinusoidal positional encoding trained on a sequence length of
1024. This is achieved with 48% lower memory usage, and only 3.5% higher
training time. Furthermore, our method notably improves the extrapolation
ability of existing RPE methods on the Edu-FineWeb10B and WikiText-103
datasets. Code is available at: https://github.com/axiomlab/Cable
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 05:54:58 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Veisi",
"Ali",
""
],
[
"Mansourian",
"Amir",
""
]
]
| TITLE: Context-aware Biases for Length Extrapolation
ABSTRACT: Transformers' ability to generalize to longer sequences than they have been
trained on, known as length extrapolation, degrades as sequence length
increases. Most Relative Positional Encoding (RPE) methods address this
problem by either adding constant linear biases or learning general biases,
lacking the ability to specialize for different sequences. In this work,
inspired by ALiBi, we propose Context-aware Biases for Length Extrapolation
(Cable), that learns token-specific biases for each head in decoder-based
transformers. Cable learns adaptive, context-aware biases, overcoming the
limitations of fixed patterns by adding dynamic biases specific to each token
in the sequence. Results show that when tested on a sequence length of 1024, a
GPT-3 Medium (334M parameters) with our positional encoding, trained on a
sequence length of 512, achieves better perplexity (-0.65) than a similar
network with sinusoidal positional encoding trained on a sequence length of
1024. This is achieved with 48% lower memory usage, and only 3.5% higher
training time. Furthermore, our method notably improves the extrapolation
ability of existing RPE methods on the Edu-FineWeb10B and WikiText-103
datasets. Code is available at: https://github.com/axiomlab/Cable
| no_new_dataset | 0.955277 |
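As a rough illustration of the idea in the record above (learned, token-specific attention biases in place of ALiBi's fixed linear slopes), a sketch follows; producing the per-token bias with a single linear projection and scaling it by causal distance is an assumption for illustration, not the exact Cable parameterization.

```python
# Sketch (assumed parameterization): per-token, per-head biases scaled by
# causal distance and added to attention scores before the softmax.
import torch
import torch.nn as nn

class ContextAwareBias(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.proj = nn.Linear(d_model, n_heads)  # one scalar per token per head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token representations
        b, t, _ = x.shape
        token_bias = self.proj(x).permute(0, 2, 1)          # (b, heads, t)
        pos = torch.arange(t, device=x.device)
        dist = (pos[:, None] - pos[None, :]).clamp(min=0)   # causal distance i - j
        # Broadcast to (b, heads, t, t): distance weighted by the learned key-token bias.
        return -dist[None, None].float() * token_bias[:, :, None, :]

# Usage sketch: scores = q @ k.transpose(-2, -1) / d_head**0.5 + ContextAwareBias(d, h)(x)
```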
2503.08068 | Peili Song | Peili Song, Dezhen Song, Yifan Yang, Enfan Lan, and Jingtai Liu | Simulating Automotive Radar with Lidar and Camera Inputs | submitted to IROS 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Low-cost millimeter-wave automotive radar has received increasing attention due
to its ability to handle adverse weather and lighting conditions in autonomous
driving. However, the lack of quality datasets hinders research and
development. We report a new method that is able to simulate 4D millimeter wave
radar signals including pitch, yaw, range, and Doppler velocity along with
radar signal strength (RSS) using camera image, light detection and ranging
(lidar) point cloud, and ego-velocity. The method is based on two new neural
networks: 1) DIS-Net, which estimates the spatial distribution and number of
radar signals, and 2) RSS-Net, which predicts the RSS of the signal based on
appearance and geometric information. We have implemented and tested our method
using open datasets from 3 different models of commercial automotive radar. The
experimental results show that our method can successfully generate
high-fidelity radar signals. Moreover, we have trained a popular object
detection neural network with data augmented by our synthesized radar. The
network outperforms the counterpart trained only on raw radar data, a promising
result to facilitate future radar-based research and development.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 05:59:43 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Song",
"Peili",
""
],
[
"Song",
"Dezhen",
""
],
[
"Yang",
"Yifan",
""
],
[
"Lan",
"Enfan",
""
],
[
"Liu",
"Jingtai",
""
]
]
| TITLE: Simulating Automotive Radar with Lidar and Camera Inputs
ABSTRACT: Low-cost millimeter-wave automotive radar has received increasing attention due
to its ability to handle adverse weather and lighting conditions in autonomous
driving. However, the lack of quality datasets hinders research and
development. We report a new method that is able to simulate 4D millimeter wave
radar signals including pitch, yaw, range, and Doppler velocity along with
radar signal strength (RSS) using camera image, light detection and ranging
(lidar) point cloud, and ego-velocity. The method is based on two new neural
networks: 1) DIS-Net, which estimates the spatial distribution and number of
radar signals, and 2) RSS-Net, which predicts the RSS of the signal based on
appearance and geometric information. We have implemented and tested our method
using open datasets from 3 different models of commercial automotive radar. The
experimental results show that our method can successfully generate
high-fidelity radar signals. Moreover, we have trained a popular object
detection neural network with data augmented by our synthesized radar. The
network outperforms the counterpart trained only on raw radar data, a promising
result to facilitate future radar-based research and development.
| no_new_dataset | 0.950869 |
2503.08071 | Kai Deng | Kai Deng, Jian Yang, Shenlong Wang, Jin Xie | GigaSLAM: Large-Scale Monocular SLAM with Hierarchical Gaussian Splats | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Tracking and mapping in large-scale, unbounded outdoor environments using
only monocular RGB input presents substantial challenges for existing SLAM
systems. Traditional Neural Radiance Fields (NeRF) and 3D Gaussian Splatting
(3DGS) SLAM methods are typically limited to small, bounded indoor settings. To
overcome these challenges, we introduce GigaSLAM, the first NeRF/3DGS-based
SLAM framework for kilometer-scale outdoor environments, as demonstrated on the
KITTI and KITTI 360 datasets. Our approach employs a hierarchical sparse voxel
map representation, where Gaussians are decoded by neural networks at multiple
levels of detail. This design enables efficient, scalable mapping and
high-fidelity viewpoint rendering across expansive, unbounded scenes. For
front-end tracking, GigaSLAM utilizes a metric depth model combined with
epipolar geometry and PnP algorithms to accurately estimate poses, while
incorporating a Bag-of-Words-based loop closure mechanism to maintain robust
alignment over long trajectories. Consequently, GigaSLAM delivers
high-precision tracking and visually faithful rendering on urban outdoor
benchmarks, establishing a robust SLAM solution for large-scale, long-term
scenarios, and significantly extending the applicability of Gaussian Splatting
SLAM systems to unbounded outdoor environments.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 06:05:15 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Deng",
"Kai",
""
],
[
"Yang",
"Jian",
""
],
[
"Wang",
"Shenlong",
""
],
[
"Xie",
"Jin",
""
]
]
| TITLE: GigaSLAM: Large-Scale Monocular SLAM with Hierarchical Gaussian Splats
ABSTRACT: Tracking and mapping in large-scale, unbounded outdoor environments using
only monocular RGB input presents substantial challenges for existing SLAM
systems. Traditional Neural Radiance Fields (NeRF) and 3D Gaussian Splatting
(3DGS) SLAM methods are typically limited to small, bounded indoor settings. To
overcome these challenges, we introduce GigaSLAM, the first NeRF/3DGS-based
SLAM framework for kilometer-scale outdoor environments, as demonstrated on the
KITTI and KITTI 360 datasets. Our approach employs a hierarchical sparse voxel
map representation, where Gaussians are decoded by neural networks at multiple
levels of detail. This design enables efficient, scalable mapping and
high-fidelity viewpoint rendering across expansive, unbounded scenes. For
front-end tracking, GigaSLAM utilizes a metric depth model combined with
epipolar geometry and PnP algorithms to accurately estimate poses, while
incorporating a Bag-of-Words-based loop closure mechanism to maintain robust
alignment over long trajectories. Consequently, GigaSLAM delivers
high-precision tracking and visually faithful rendering on urban outdoor
benchmarks, establishing a robust SLAM solution for large-scale, long-term
scenarios, and significantly extending the applicability of Gaussian Splatting
SLAM systems to unbounded outdoor environments.
| no_new_dataset | 0.947866 |
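The front-end tracking described in the record above combines a metric depth model with epipolar geometry and PnP. A small sketch of the PnP step (back-projecting matched keypoints with a depth map and solving PnP with RANSAC via OpenCV) is shown below; the variable names and the simple nearest-pixel depth lookup are assumptions, not GigaSLAM's implementation.

```python
# Sketch: pose from 2D-2D matches using a metric depth map and PnP + RANSAC.
# Nearest-pixel depth lookup and pinhole back-projection are simplifying assumptions.
import cv2
import numpy as np

def estimate_relative_pose(kpts_prev, kpts_curr, depth_prev, K):
    """kpts_*: (N, 2) matched pixel coordinates; depth_prev: HxW metric depth (m);
    K: 3x3 camera intrinsics. Returns rotation, translation, and RANSAC inliers."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = kpts_prev[:, 0], kpts_prev[:, 1]
    z = depth_prev[v.round().astype(int), u.round().astype(int)]
    pts3d = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)  # back-project
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), kpts_curr.astype(np.float32), K.astype(np.float32), None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec, inliers
```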
2503.08075 | Haji Gul | Haji Gul, Abdul Ghani Naim, Ajaz Ahmad Bhat | MuCoS: Efficient Drug Target Discovery via Multi Context Aware Sampling
in Knowledge Graphs | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Accurate prediction of drug target interactions is critical for accelerating
drug discovery and elucidating complex biological mechanisms. In this work, we
frame drug target prediction as a link prediction task on heterogeneous
biomedical knowledge graphs (KG) that integrate drugs, proteins, diseases,
pathways, and other relevant entities. Conventional KG embedding methods such
as TransE and ComplEx SE are hindered by their reliance on computationally
intensive negative sampling and their limited generalization to unseen drug
target pairs. To address these challenges, we propose Multi Context Aware
Sampling (MuCoS), a novel framework that prioritizes high-density neighbours to
capture salient structural patterns and integrates these with contextual
embeddings derived from BERT. By unifying structural and textual modalities and
selectively sampling highly informative patterns, MuCoS circumvents the need
for negative sampling, significantly reducing computational overhead while
enhancing predictive accuracy for novel drug target associations and drug
targets. Extensive experiments on the KEGG50k dataset demonstrate that MuCoS
outperforms state-of-the-art baselines, achieving up to a 13\% improvement in
mean reciprocal rank (MRR) in predicting any relation in the dataset and a 6\%
improvement in dedicated drug target relation prediction.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 06:08:42 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Gul",
"Haji",
""
],
[
"Naim",
"Abdul Ghani",
""
],
[
"Bhat",
"Ajaz Ahmad",
""
]
]
| TITLE: MuCoS: Efficient Drug Target Discovery via Multi Context Aware Sampling
in Knowledge Graphs
ABSTRACT: Accurate prediction of drug target interactions is critical for accelerating
drug discovery and elucidating complex biological mechanisms. In this work, we
frame drug target prediction as a link prediction task on heterogeneous
biomedical knowledge graphs (KG) that integrate drugs, proteins, diseases,
pathways, and other relevant entities. Conventional KG embedding methods such
as TransE and ComplEx SE are hindered by their reliance on computationally
intensive negative sampling and their limited generalization to unseen drug
target pairs. To address these challenges, we propose Multi Context Aware
Sampling (MuCoS), a novel framework that prioritizes high-density neighbours to
capture salient structural patterns and integrates these with contextual
embeddings derived from BERT. By unifying structural and textual modalities and
selectively sampling highly informative patterns, MuCoS circumvents the need
for negative sampling, significantly reducing computational overhead while
enhancing predictive accuracy for novel drug target associations and drug
targets. Extensive experiments on the KEGG50k dataset demonstrate that MuCoS
outperforms state-of-the-art baselines, achieving up to a 13\% improvement in
mean reciprocal rank (MRR) in predicting any relation in the dataset and a 6\%
improvement in dedicated drug target relation prediction.
| no_new_dataset | 0.9462 |
2503.08078 | Yingjie Chen | Yingjie Chen, Jiarui Zhang, Tao Wang, Yun Liang | Trend-Aware Supervision: On Learning Invariance for Semi-Supervised
Facial Action Unit Intensity Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing need for facial behavior analysis, semi-supervised AU
intensity estimation using only keyframe annotations has emerged as a practical
and effective solution to relieve the burden of annotation. However, the lack
of annotations makes the spurious correlation problem caused by AU
co-occurrences and subject variation much more prominent, leading to non-robust
intensity estimation that is entangled among AUs and biased among subjects. We
observe that trend information inherent in keyframe annotations can act as
extra supervision, and that raising awareness of AU-specific facial appearance
change trends during training is key to learning invariant AU-specific
features. To this end, we propose \textbf{T}rend-\textbf{A}ware
\textbf{S}upervision (TAS), which pursues three kinds of trend awareness,
including intra-trend ranking awareness, intra-trend speed awareness, and
inter-trend subject awareness. TAS alleviates the spurious correlation problem
by raising trend awareness during training to learn AU-specific features that
represent the corresponding facial appearance changes, to achieve intensity
estimation invariance. Experiments conducted on two commonly used AU benchmark
datasets, BP4D and DISFA, show the effectiveness of each kind of awareness. And
under trend-aware supervision, the performance can be improved without extra
computational or storage costs during inference.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 06:21:09 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chen",
"Yingjie",
""
],
[
"Zhang",
"Jiarui",
""
],
[
"Wang",
"Tao",
""
],
[
"Liang",
"Yun",
""
]
]
| TITLE: Trend-Aware Supervision: On Learning Invariance for Semi-Supervised
Facial Action Unit Intensity Estimation
ABSTRACT: With the increasing need for facial behavior analysis, semi-supervised AU
intensity estimation using only keyframe annotations has emerged as a practical
and effective solution to relieve the burden of annotation. However, the lack
of annotations makes the spurious correlation problem caused by AU
co-occurrences and subject variation much more prominent, leading to non-robust
intensity estimation that is entangled among AUs and biased among subjects. We
observe that trend information inherent in keyframe annotations can act as
extra supervision, and that raising awareness of AU-specific facial appearance
change trends during training is key to learning invariant AU-specific
features. To this end, we propose \textbf{T}rend-\textbf{A}ware
\textbf{S}upervision (TAS), which pursues three kinds of trend awareness,
including intra-trend ranking awareness, intra-trend speed awareness, and
inter-trend subject awareness. TAS alleviates the spurious correlation problem
by raising trend awareness during training to learn AU-specific features that
represent the corresponding facial appearance changes, to achieve intensity
estimation invariance. Experiments conducted on two commonly used AU benchmark
datasets, BP4D and DISFA, show the effectiveness of each kind of awareness. And
under trend-aware supervision, the performance can be improved without extra
computational or storage costs during inference.
| no_new_dataset | 0.957557 |
2503.08083 | Jie-Chung Chen | J. C. Chen | Degradation Self-Supervised Learning for Lithium-ion Battery Health
Diagnostics | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Health evaluation for lithium-ion batteries (LIBs) typically relies on
constant charging/discharging protocols, often neglecting scenarios involving
dynamic current profiles prevalent in electric vehicles. Conventional health
indicators for LIBs also depend on the uniformity of measured data, restricting
their adaptability to non-uniform conditions. In this study, a novel training
strategy for estimating LIB health based on the paradigm of self-supervised
learning is proposed. A multiresolution analysis technique, empirical wavelet
transform, is utilized to decompose non-stationary voltage signals in the
frequency domain. This allows the removal of ineffective components for the
health evaluation model. The transformer neural network serves as the model
backbone, and a loss function is designed to describe the capacity degradation
behavior with the assumption that the degradation in LIBs across most operating
conditions is inevitable and irreversible. The results show that the model can
learn the aging characteristics by analyzing sequences of voltage and current
profiles obtained at various time intervals from the same LIB cell. The
proposed method is successfully applied to the Stanford University LIB aging
dataset, derived from electric vehicle real driving profiles. Notably, this
approach achieves an average correlation coefficient of 0.9 between the
evaluated health index and the degradation of actual capacity, demonstrating
its efficacy in capturing LIB health degradation. This research highlights the
feasibility of training deep neural networks using unlabeled LIB data, offering
cost-efficient means and unleashing the potential of the measured information.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 06:29:13 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chen",
"J. C.",
""
]
]
| TITLE: Degradation Self-Supervised Learning for Lithium-ion Battery Health
Diagnostics
ABSTRACT: Health evaluation for lithium-ion batteries (LIBs) typically relies on
constant charging/discharging protocols, often neglecting scenarios involving
dynamic current profiles prevalent in electric vehicles. Conventional health
indicators for LIBs also depend on the uniformity of measured data, restricting
their adaptability to non-uniform conditions. In this study, a novel training
strategy for estimating LIB health based on the paradigm of self-supervised
learning is proposed. A multiresolution analysis technique, empirical wavelet
transform, is utilized to decompose non-stationary voltage signals in the
frequency domain. This allows the removal of ineffective components for the
health evaluation model. The transformer neural network serves as the model
backbone, and a loss function is designed to describe the capacity degradation
behavior with the assumption that the degradation in LIBs across most operating
conditions is inevitable and irreversible. The results show that the model can
learn the aging characteristics by analyzing sequences of voltage and current
profiles obtained at various time intervals from the same LIB cell. The
proposed method is successfully applied to the Stanford University LIB aging
dataset, derived from electric vehicle real driving profiles. Notably, this
approach achieves an average correlation coefficient of 0.9 between the
evaluated health index and the degradation of actual capacity, demonstrating
its efficacy in capturing LIB health degradation. This research highlights the
feasibility of training deep neural networks using unlabeled LIB data, offering
cost-efficient means and unleashing the potential of the measured information.
| no_new_dataset | 0.950915 |
2503.08091 | Hao Zhang | Hao Zhang, Fuhui Zhou, Hongyang Du, Qihui Wu, Chau Yuen | Revolution of Wireless Signal Recognition for 6G: Recent Advances,
Challenges and Future Directions | submitted to IEEE Communications Surveys & Tutorials | null | null | null | eess.SP cs.AI | http://creativecommons.org/licenses/by/4.0/ | Wireless signal recognition (WSR) is a crucial technique for intelligent
communications and spectrum sharing in the upcoming sixth-generation (6G) wireless
communication networks. It can be utilized to enhance network performance and
efficiency, improve quality of service (QoS), and improve network security and
reliability. Additionally, WSR can be applied for military applications such as
signal interception, signal race, and signal abduction. In the past decades,
great efforts have been made for the research of WSR. Earlier works mainly
focus on model-based methods, including likelihood-based (LB) and feature-based
(FB) methods, which have taken the leading position for many years. With the
emergence of artificial intelligence (AI), intelligent methods including
machine learning-based (ML-based) and deep learning-based (DL-based) methods
have been developed to extract the features of the received signals and perform
the classification. In this work, we provide a comprehensive review of WSR from
the view of applications, main tasks, recent advances, datasets and evaluation
metrics, challenges, and future directions. Specifically, intelligent WSR
methods are introduced from the perspective of model, data, learning and
implementation. Moreover, we analyze the challenges for WSR from the view of
complex, dynamic, and open 6G wireless environments and discuss the future
directions for WSR. This survey is expected to provide a comprehensive overview
of the state-of-the-art WSR techniques and inspire new research directions for
WSR in 6G networks.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 06:47:27 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Hao",
""
],
[
"Zhou",
"Fuhui",
""
],
[
"Du",
"Hongyang",
""
],
[
"Wu",
"Qihui",
""
],
[
"Yuen",
"Chau",
""
]
]
| TITLE: Revolution of Wireless Signal Recognition for 6G: Recent Advances,
Challenges and Future Directions
ABSTRACT: Wireless signal recognition (WSR) is a crucial technique for intelligent
communications and spectrum sharing in the upcoming sixth-generation (6G) wireless
communication networks. It can be utilized to enhance network performance and
efficiency, improve quality of service (QoS), and improve network security and
reliability. Additionally, WSR can be applied for military applications such as
signal interception, signal race, and signal abduction. In the past decades,
great efforts have been made for the research of WSR. Earlier works mainly
focus on model-based methods, including likelihood-based (LB) and feature-based
(FB) methods, which have taken the leading position for many years. With the
emergence of artificial intelligence (AI), intelligent methods including
machine learning-based (ML-based) and deep learning-based (DL-based) methods
have been developed to extract the features of the received signals and perform
the classification. In this work, we provide a comprehensive review of WSR from
the view of applications, main tasks, recent advances, datasets and evaluation
metrics, challenges, and future directions. Specifically, intelligent WSR
methods are introduced from the perspective of model, data, learning and
implementation. Moreover, we analyze the challenges for WSR from the view of
complex, dynamic, and open 6G wireless environments and discuss the future
directions for WSR. This survey is expected to provide a comprehensive overview
of the state-of-the-art WSR techniques and inspire new research directions for
WSR in 6G networks.
| no_new_dataset | 0.940134 |
2503.08094 | Arghya Pal | Arghya Pal, Sailaja Rajanala, CheeMing Ting, Raphael Phan | Denoising via Repainting: an image denoising method using layer wise
medical image repainting | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Medical image denoising is essential for improving the reliability of
clinical diagnosis and guiding subsequent image-based tasks. In this paper, we
propose a multi-scale approach that integrates anisotropic Gaussian filtering
with progressive Bezier-path redrawing. Our method constructs a scale-space
pyramid to mitigate noise while preserving critical structural details.
Starting at the coarsest scale, we segment partially denoised images into
coherent components and redraw each using a parametric Bezier path with
representative color. Through iterative refinements at finer scales, small and
intricate structures are accurately reconstructed, while large homogeneous
regions remain robustly smoothed. We employ both mean square error and
self-intersection constraints to maintain shape coherence during path
optimization. Empirical results on multiple MRI datasets demonstrate consistent
improvements in PSNR and SSIM over competing methods. This coarse-to-fine
framework offers a robust, data-efficient solution for cross-domain denoising,
reinforcing its potential clinical utility and versatility. Future work extends
this technique to three-dimensional data.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 06:54:37 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Pal",
"Arghya",
""
],
[
"Rajanala",
"Sailaja",
""
],
[
"Ting",
"CheeMing",
""
],
[
"Phan",
"Raphael",
""
]
]
| TITLE: Denoising via Repainting: an image denoising method using layer wise
medical image repainting
ABSTRACT: Medical image denoising is essential for improving the reliability of
clinical diagnosis and guiding subsequent image-based tasks. In this paper, we
propose a multi-scale approach that integrates anisotropic Gaussian filtering
with progressive Bezier-path redrawing. Our method constructs a scale-space
pyramid to mitigate noise while preserving critical structural details.
Starting at the coarsest scale, we segment partially denoised images into
coherent components and redraw each using a parametric Bezier path with
representative color. Through iterative refinements at finer scales, small and
intricate structures are accurately reconstructed, while large homogeneous
regions remain robustly smoothed. We employ both mean square error and
self-intersection constraints to maintain shape coherence during path
optimization. Empirical results on multiple MRI datasets demonstrate consistent
improvements in PSNR and SSIM over competing methods. This coarse-to-fine
framework offers a robust, data-efficient solution for cross-domain denoising,
reinforcing its potential clinical utility and versatility. Future work extends
this technique to three-dimensional data.
| no_new_dataset | 0.946941 |
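To make the coarse-to-fine structure described in the record above concrete, here is a small sketch of building a scale-space pyramid with (axis-wise) anisotropic Gaussian filtering using SciPy; the sigma schedule and the axis-aligned anisotropy are assumptions, and the Bezier-path redrawing stage is represented only by a placeholder hook.

```python
# Sketch: coarse-to-fine scale-space pyramid via anisotropic Gaussian smoothing.
# Sigma schedule and axis-aligned anisotropy are illustrative assumptions; the
# Bezier redrawing step of the method is represented by a placeholder hook.
import numpy as np
from scipy.ndimage import gaussian_filter

def build_scale_space(image, sigmas=((4.0, 2.0), (2.0, 1.0), (1.0, 0.5))):
    """Return smoothed copies of `image` from coarsest to finest scale.
    Each sigma pair applies different smoothing along rows and columns."""
    return [gaussian_filter(image, sigma=s) for s in sigmas]

def coarse_to_fine_denoise(image, redraw_fn):
    """Iteratively refine: start from the coarsest level and let `redraw_fn`
    (e.g., segmentation + Bezier-path repainting) refine each level."""
    result = None
    for level in build_scale_space(image):
        result = redraw_fn(level, previous=result)
    return result

# Example with a trivial redraw hook that just averages with the previous estimate.
noisy = np.random.rand(128, 128)
denoised = coarse_to_fine_denoise(
    noisy,
    redraw_fn=lambda level, previous: level if previous is None else 0.5 * (level + previous))
```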
2503.08133 | Taehyeon Eum | Taehyeon Eum, Jieun Choi, Tae-Kyun Kim | MGHanD: Multi-modal Guidance for authentic Hand Diffusion | 8 pages, 7 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion-based methods have achieved significant successes in T2I
generation, providing realistic images from text prompts. Despite their
capabilities, these models face persistent challenges in generating realistic
human hands, often producing images with incorrect finger counts and
structurally deformed hands. MGHanD addresses this challenge by applying
multi-modal guidance during the inference process. For visual guidance, we
employ a discriminator trained on a dataset comprising paired real and
generated images with captions, derived from various hand-in-the-wild datasets.
We also employ textual guidance with a LoRA adapter, which learns the direction
from `hands' towards more detailed prompts such as `natural hands', and
`anatomically correct fingers' at the latent level. A cumulative hand mask
which is gradually enlarged in the assigned time step is applied to the added
guidance, allowing the hand to be refined while maintaining the rich generative
capabilities of the pre-trained model. In the experiments, our method achieves
superior hand generation qualities, without any specific conditions or priors.
We carry out both quantitative and qualitative evaluations, along with user
studies, to showcase the benefits of our approach in producing high-quality
hand images.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 07:51:47 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Eum",
"Taehyeon",
""
],
[
"Choi",
"Jieun",
""
],
[
"Kim",
"Tae-Kyun",
""
]
]
| TITLE: MGHanD: Multi-modal Guidance for authentic Hand Diffusion
ABSTRACT: Diffusion-based methods have achieved significant successes in T2I
generation, providing realistic images from text prompts. Despite their
capabilities, these models face persistent challenges in generating realistic
human hands, often producing images with incorrect finger counts and
structurally deformed hands. MGHanD addresses this challenge by applying
multi-modal guidance during the inference process. For visual guidance, we
employ a discriminator trained on a dataset comprising paired real and
generated images with captions, derived from various hand-in-the-wild datasets.
We also employ textual guidance with a LoRA adapter, which learns the direction
from `hands' towards more detailed prompts such as `natural hands', and
`anatomically correct fingers' at the latent level. A cumulative hand mask
which is gradually enlarged in the assigned time step is applied to the added
guidance, allowing the hand to be refined while maintaining the rich generative
capabilities of the pre-trained model. In the experiments, our method achieves
superior hand generation qualities, without any specific conditions or priors.
We carry out both quantitative and qualitative evaluations, along with user
studies, to showcase the benefits of our approach in producing high-quality
hand images.
| no_new_dataset | 0.9357 |
2503.08141 | Jonas Seng | Jonas Seng, Florian Peter Busch, Pooja Prasad, Devendra Singh Dhami,
Martin Mundt, Kristian Kersting | Scaling Probabilistic Circuits via Data Partitioning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Probabilistic circuits (PCs) enable us to learn joint distributions over a
set of random variables and to perform various probabilistic queries in a
tractable fashion. Though the tractability property allows PCs to scale beyond
non-tractable models such as Bayesian Networks, scaling training and inference
of PCs to larger, real-world datasets remains challenging. To remedy the
situation, we show how PCs can be learned across multiple machines by
recursively partitioning a distributed dataset, thereby unveiling a deep
connection between PCs and federated learning (FL). This leads to federated
circuits (FCs) -- a novel and flexible federated learning (FL) framework that
(1) allows one to scale PCs in distributed learning environments, (2) trains
PCs faster, and (3) unifies for the first time horizontal, vertical, and hybrid FL
in one framework by re-framing FL as a density estimation problem over
distributed datasets. We demonstrate FC's capability to scale PCs on various
large-scale datasets. Also, we show FC's versatility in handling horizontal,
vertical, and hybrid FL within a unified framework on multiple classification
tasks.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 07:59:56 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Seng",
"Jonas",
""
],
[
"Busch",
"Florian Peter",
""
],
[
"Prasad",
"Pooja",
""
],
[
"Dhami",
"Devendra Singh",
""
],
[
"Mundt",
"Martin",
""
],
[
"Kersting",
"Kristian",
""
]
]
| TITLE: Scaling Probabilistic Circuits via Data Partitioning
ABSTRACT: Probabilistic circuits (PCs) enable us to learn joint distributions over a
set of random variables and to perform various probabilistic queries in a
tractable fashion. Though the tractability property allows PCs to scale beyond
non-tractable models such as Bayesian Networks, scaling training and inference
of PCs to larger, real-world datasets remains challenging. To remedy the
situation, we show how PCs can be learned across multiple machines by
recursively partitioning a distributed dataset, thereby unveiling a deep
connection between PCs and federated learning (FL). This leads to federated
circuits (FCs) -- a novel and flexible federated learning (FL) framework that
(1) allows one to scale PCs in distributed learning environments, (2) trains
PCs faster, and (3) unifies for the first time horizontal, vertical, and hybrid FL
in one framework by re-framing FL as a density estimation problem over
distributed datasets. We demonstrate FC's capability to scale PCs on various
large-scale datasets. Also, we show FC's versatility in handling horizontal,
vertical, and hybrid FL within a unified framework on multiple classification
tasks.
| no_new_dataset | 0.945901 |
2503.08147 | Qile He | Zhifeng Xie, Qile He, Youjia Zhu, Qiwei He, Mengtian Li | FilmComposer: LLM-Driven Music Production for Silent Film Clips | Project page: https://apple-jun.github.io/FilmComposer.github.io/ | null | null | null | cs.CV cs.MM cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we implement music production for silent film clips using
an LLM-driven method. Given the strong professional demands of film music
production, we propose the FilmComposer, simulating the actual workflows of
professional musicians. FilmComposer is the first to combine large generative
models with a multi-agent approach, leveraging the advantages of both waveform
music and symbolic music generation. Additionally, FilmComposer is the first to
focus on the three core elements of music production for film (audio quality,
musicality, and musical development) and introduces various controls, such as
rhythm, semantics, and visuals, to enhance these key aspects. Specifically,
FilmComposer consists of the visual processing module, rhythm-controllable
MusicGen, and multi-agent assessment, arrangement and mix. In addition, our
framework can seamlessly integrate into the actual music production pipeline
and allows user intervention in every step, providing strong interactivity and
a high degree of creative freedom. Furthermore, we propose MusicPro-7k, which
includes 7,418 film clips with music, descriptions, rhythm spots, and main
melodies, addressing the lack of a professional, high-quality film music dataset.
Finally, both the standard metrics and the new specialized metrics we propose
demonstrate that the music generated by our model achieves state-of-the-art
performance in terms of quality, consistency with video, diversity, musicality,
and musical development. Project page:
https://apple-jun.github.io/FilmComposer.github.io/
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:05:11 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Xie",
"Zhifeng",
""
],
[
"He",
"Qile",
""
],
[
"Zhu",
"Youjia",
""
],
[
"He",
"Qiwei",
""
],
[
"Li",
"Mengtian",
""
]
]
| TITLE: FilmComposer: LLM-Driven Music Production for Silent Film Clips
ABSTRACT: In this work, we implement music production for silent film clips using
an LLM-driven method. Given the strong professional demands of film music
production, we propose the FilmComposer, simulating the actual workflows of
professional musicians. FilmComposer is the first to combine large generative
models with a multi-agent approach, leveraging the advantages of both waveform
music and symbolic music generation. Additionally, FilmComposer is the first to
focus on the three core elements of music production for film (audio quality,
musicality, and musical development) and introduces various controls, such as
rhythm, semantics, and visuals, to enhance these key aspects. Specifically,
FilmComposer consists of the visual processing module, rhythm-controllable
MusicGen, and multi-agent assessment, arrangement and mix. In addition, our
framework can seamlessly integrate into the actual music production pipeline
and allows user intervention in every step, providing strong interactivity and
a high degree of creative freedom. Furthermore, we propose MusicPro-7k, which
includes 7,418 film clips with music, descriptions, rhythm spots, and main
melodies, addressing the lack of a professional, high-quality film music dataset.
Finally, both the standard metrics and the new specialized metrics we propose
demonstrate that the music generated by our model achieves state-of-the-art
performance in terms of quality, consistency with video, diversity, musicality,
and musical development. Project page:
https://apple-jun.github.io/FilmComposer.github.io/
| no_new_dataset | 0.929184 |
2503.08152 | Chengzhi Ma | Chengzhi Ma, Kunqian Li, Shuaixin Liu, and Han Mei | Depth-Assisted Network for Indiscernible Marine Object Counting with
Adaptive Motion-Differentiated Feature Encoding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Indiscernible marine object counting encounters numerous challenges,
including limited visibility in underwater scenes, mutual occlusion and overlap
among objects, and the dynamic similarity in appearance, color, and texture
between the background and foreground. These factors significantly complicate
the counting process. To address the scarcity of video-based indiscernible
object counting datasets, we have developed a novel dataset comprising 50
videos, from which approximately 800 frames have been extracted and annotated
with around 40,800 point-wise object labels. This dataset accurately represents
real underwater environments where indiscernible marine objects are intricately
integrated with their surroundings, thereby comprehensively illustrating the
aforementioned challenges in object counting. To address these challenges, we
propose a depth-assisted network with adaptive motion-differentiated feature
encoding. The network consists of a backbone encoding module and three
branches: a depth-assisting branch, a density estimation branch, and a motion
weight generation branch. Depth-aware features extracted by the depth-assisting
branch are enhanced via a depth-enhanced encoder to improve object
representation. Meanwhile, weights from the motion weight generation branch
refine multi-scale perception features in the adaptive flow estimation module.
Experimental results demonstrate that our method not only achieves
state-of-the-art performance on the proposed dataset but also yields
competitive results on three additional video-based crowd counting datasets.
The pre-trained model, code, and dataset are publicly available at
https://github.com/OUCVisionGroup/VIMOC-Net.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:08:04 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Ma",
"Chengzhi",
""
],
[
"Li",
"Kunqian",
""
],
[
"Liu",
"Shuaixin",
""
],
[
"Mei",
"Han",
""
]
]
| TITLE: Depth-Assisted Network for Indiscernible Marine Object Counting with
Adaptive Motion-Differentiated Feature Encoding
ABSTRACT: Indiscernible marine object counting encounters numerous challenges,
including limited visibility in underwater scenes, mutual occlusion and overlap
among objects, and the dynamic similarity in appearance, color, and texture
between the background and foreground. These factors significantly complicate
the counting process. To address the scarcity of video-based indiscernible
object counting datasets, we have developed a novel dataset comprising 50
videos, from which approximately 800 frames have been extracted and annotated
with around 40,800 point-wise object labels. This dataset accurately represents
real underwater environments where indiscernible marine objects are intricately
integrated with their surroundings, thereby comprehensively illustrating the
aforementioned challenges in object counting. To address these challenges, we
propose a depth-assisted network with adaptive motion-differentiated feature
encoding. The network consists of a backbone encoding module and three
branches: a depth-assisting branch, a density estimation branch, and a motion
weight generation branch. Depth-aware features extracted by the depth-assisting
branch are enhanced via a depth-enhanced encoder to improve object
representation. Meanwhile, weights from the motion weight generation branch
refine multi-scale perception features in the adaptive flow estimation module.
Experimental results demonstrate that our method not only achieves
state-of-the-art performance on the proposed dataset but also yields
competitive results on three additional video-based crowd counting datasets.
The pre-trained model, code, and dataset are publicly available at
https://github.com/OUCVisionGroup/VIMOC-Net.
| new_dataset | 0.962462 |
2503.08153 | Jing Wang | Jing Wang, Ao Ma, Ke Cao, Jun Zheng, Zhanjie Zhang, Jiasong Feng,
Shanyuan Liu, Yuhang Ma, Bo Cheng, Dawei Leng, Yuhui Yin, Xiaodan Liang | WISA: World Simulator Assistant for Physics-Aware Text-to-Video
Generation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent rapid advancements in text-to-video (T2V) generation, such as SoRA and
Kling, have shown great potential for building world simulators. However,
current T2V models struggle to grasp abstract physical principles and generate
videos that adhere to physical laws. This challenge arises primarily from a
lack of clear guidance on physical information due to a significant gap between
abstract physical principles and generation models. To this end, we introduce
the World Simulator Assistant (WISA), an effective framework for decomposing
and incorporating physical principles into T2V models. Specifically, WISA
decomposes physical principles into textual physical descriptions, qualitative
physical categories, and quantitative physical properties. To effectively embed
these physical attributes into the generation process, WISA incorporates
several key designs, including Mixture-of-Physical-Experts Attention (MoPA) and
a Physical Classifier, enhancing the model's physics awareness. Furthermore,
most existing datasets feature videos where physical phenomena are either
weakly represented or entangled with multiple co-occurring processes, limiting
their suitability as dedicated resources for learning explicit physical
principles. We propose a novel video dataset, WISA-32K, collected based on
qualitative physical categories. It consists of 32,000 videos, representing 17
physical laws across three domains of physics: dynamics, thermodynamics, and
optics. Experimental results demonstrate that WISA can effectively enhance the
compatibility of T2V models with real-world physical laws, achieving a
considerable improvement on the VideoPhy benchmark. The visual exhibitions of
WISA and WISA-32K are available at https://360cvgroup.github.io/WISA/.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:10:03 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wang",
"Jing",
""
],
[
"Ma",
"Ao",
""
],
[
"Cao",
"Ke",
""
],
[
"Zheng",
"Jun",
""
],
[
"Zhang",
"Zhanjie",
""
],
[
"Feng",
"Jiasong",
""
],
[
"Liu",
"Shanyuan",
""
],
[
"Ma",
"Yuhang",
""
],
[
"Cheng",
"Bo",
""
],
[
"Leng",
"Dawei",
""
],
[
"Yin",
"Yuhui",
""
],
[
"Liang",
"Xiaodan",
""
]
]
| TITLE: WISA: World Simulator Assistant for Physics-Aware Text-to-Video
Generation
ABSTRACT: Recent rapid advancements in text-to-video (T2V) generation, such as SoRA and
Kling, have shown great potential for building world simulators. However,
current T2V models struggle to grasp abstract physical principles and generate
videos that adhere to physical laws. This challenge arises primarily from a
lack of clear guidance on physical information due to a significant gap between
abstract physical principles and generation models. To this end, we introduce
the World Simulator Assistant (WISA), an effective framework for decomposing
and incorporating physical principles into T2V models. Specifically, WISA
decomposes physical principles into textual physical descriptions, qualitative
physical categories, and quantitative physical properties. To effectively embed
these physical attributes into the generation process, WISA incorporates
several key designs, including Mixture-of-Physical-Experts Attention (MoPA) and
a Physical Classifier, enhancing the model's physics awareness. Furthermore,
most existing datasets feature videos where physical phenomena are either
weakly represented or entangled with multiple co-occurring processes, limiting
their suitability as dedicated resources for learning explicit physical
principles. We propose a novel video dataset, WISA-32K, collected based on
qualitative physical categories. It consists of 32,000 videos, representing 17
physical laws across three domains of physics: dynamics, thermodynamics, and
optics. Experimental results demonstrate that WISA can effectively enhance the
compatibility of T2V models with real-world physical laws, achieving a
considerable improvement on the VideoPhy benchmark. The visual exhibitions of
WISA and WISA-32K are available at https://360cvgroup.github.io/WISA/.
| new_dataset | 0.962778 |
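The WISA record above names a Mixture-of-Physical-Experts Attention (MoPA) block. One plausible reading of that idea — several attention experts mixed by a gate conditioned on a qualitative physics category — is sketched below in PyTorch. The gating design, expert count, and dimensions are invented for illustration and do not reflect the paper's implementation.

import torch
import torch.nn as nn

class MoPAttentionSketch(nn.Module):
    # Video tokens are processed by several attention "experts"; a gate conditioned on a
    # qualitative physical-category id mixes their outputs.
    def __init__(self, dim: int = 256, heads: int = 4, n_experts: int = 3, n_categories: int = 17):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(n_experts)
        )
        self.category_embed = nn.Embedding(n_categories, dim)
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, tokens: torch.Tensor, category: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T, dim) video latents; category: (B,) physics-class ids
        weights = torch.softmax(self.gate(self.category_embed(category)), dim=-1)        # (B, E)
        outs = torch.stack([attn(tokens, tokens, tokens)[0] for attn in self.experts], dim=1)
        return (weights[:, :, None, None] * outs).sum(dim=1)                              # (B, T, dim)

if __name__ == "__main__":
    m = MoPAttentionSketch()
    y = m(torch.randn(2, 16, 256), torch.tensor([0, 5]))
    print(y.shape)  # torch.Size([2, 16, 256])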
2503.08156 | Yufan Chen | Yufan Chen, Ching Ting Leung, Jianwei Sun, Yong Huang, Linyan Li, Hao
Chen, Hanyu Gao | Towards Large-scale Chemical Reaction Image Parsing via a Multimodal
Large Language Model | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Artificial intelligence (AI) has demonstrated significant promise in
advancing organic chemistry research; however, its effectiveness depends on the
availability of high-quality chemical reaction data. Currently, most published
chemical reactions are not available in machine-readable form, limiting the
broader application of AI in this field. The extraction of published chemical
reactions into structured databases still relies heavily on manual curation,
and robust automatic parsing of chemical reaction images into machine-readable
data remains a significant challenge. To address this, we introduce the
Reaction Image Multimodal large language model (RxnIM), the first multimodal
large language model specifically designed to parse chemical reaction images
into machine-readable reaction data. RxnIM not only extracts key chemical
components from reaction images but also interprets the textual content that
describes reaction conditions. Together with a specially designed large-scale
dataset generation method to support model training, our approach achieves
excellent performance, with an average F1 score of 88% on various benchmarks,
surpassing literature methods by 5%. This represents a crucial step toward the
automatic construction of large databases of machine-readable reaction data
parsed from images in the chemistry literature, providing essential data
resources for AI research in chemistry. The source code, model checkpoints, and
datasets developed in this work are released under permissive licenses. An
instance of the RxnIM web application can be accessed at
https://huggingface.co/spaces/CYF200127/RxnIM.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:11:23 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Chen",
"Yufan",
""
],
[
"Leung",
"Ching Ting",
""
],
[
"Sun",
"Jianwei",
""
],
[
"Huang",
"Yong",
""
],
[
"Li",
"Linyan",
""
],
[
"Chen",
"Hao",
""
],
[
"Gao",
"Hanyu",
""
]
]
| TITLE: Towards Large-scale Chemical Reaction Image Parsing via a Multimodal
Large Language Model
ABSTRACT: Artificial intelligence (AI) has demonstrated significant promise in
advancing organic chemistry research; however, its effectiveness depends on the
availability of high-quality chemical reaction data. Currently, most published
chemical reactions are not available in machine-readable form, limiting the
broader application of AI in this field. The extraction of published chemical
reactions into structured databases still relies heavily on manual curation,
and robust automatic parsing of chemical reaction images into machine-readable
data remains a significant challenge. To address this, we introduce the
Reaction Image Multimodal large language model (RxnIM), the first multimodal
large language model specifically designed to parse chemical reaction images
into machine-readable reaction data. RxnIM not only extracts key chemical
components from reaction images but also interprets the textual content that
describes reaction conditions. Together with a specially designed large-scale
dataset generation method to support model training, our approach achieves
excellent performance, with an average F1 score of 88% on various benchmarks,
surpassing literature methods by 5%. This represents a crucial step toward the
automatic construction of large databases of machine-readable reaction data
parsed from images in the chemistry literature, providing essential data
resources for AI research in chemistry. The source code, model checkpoints, and
datasets developed in this work are released under permissive licenses. An
instance of the RxnIM web application can be accessed at
https://huggingface.co/spaces/CYF200127/RxnIM.
| new_dataset | 0.52109 |
2503.08157 | Zhanjie Zhang | Zhanjie Zhang, Ao Ma, Ke Cao, Jing Wang, Shanyuan Liu, Yuhang Ma, Bo
Cheng, Dawei Leng and Yuhui Yin | U-StyDiT: Ultra-high Quality Artistic Style Transfer Using Diffusion
Transformers | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Ultra-high quality artistic style transfer refers to repainting an ultra-high
quality content image using the style information learned from the style image.
Existing artistic style transfer methods can be categorized into style
reconstruction-based and content-style disentanglement-based style transfer
approaches. Although these methods can generate some artistic stylized images,
they still exhibit obvious artifacts and disharmonious patterns, which hinder
their ability to produce ultra-high quality artistic stylized images. To
address these issues, we propose a novel artistic image style transfer method,
U-StyDiT, which is built on transformer-based diffusion (DiT) and learns
content-style disentanglement, generating ultra-high quality artistic stylized
images. Specifically, we first design a Multi-view Style Modulator (MSM) to
learn style information from a style image from local and global perspectives,
conditioning U-StyDiT to generate stylized images with the learned style
information. Then, we introduce a StyDiT Block to learn content and style
conditions simultaneously from a style image. Additionally, we propose an
ultra-high quality artistic image dataset, Aes4M, comprising 10 categories,
each containing 400,000 style images. This dataset effectively solves the
problem that existing style transfer methods cannot produce high-quality
artistic stylized images due to the limited size and image quality of existing
datasets. Finally, the extensive qualitative and quantitative
experiments validate that our U-StyDiT can create higher quality stylized
images compared to state-of-the-art artistic style transfer methods. To our
knowledge, our proposed method is the first to address the generation of
ultra-high quality stylized images using transformer-based diffusion.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:12:38 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Zhanjie",
""
],
[
"Ma",
"Ao",
""
],
[
"Cao",
"Ke",
""
],
[
"Wang",
"Jing",
""
],
[
"Liu",
"Shanyuan",
""
],
[
"Ma",
"Yuhang",
""
],
[
"Cheng",
"Bo",
""
],
[
"Leng",
"Dawei",
""
],
[
"Yin",
"Yuhui",
""
]
]
| TITLE: U-StyDiT: Ultra-high Quality Artistic Style Transfer Using Diffusion
Transformers
ABSTRACT: Ultra-high quality artistic style transfer refers to repainting an ultra-high
quality content image using the style information learned from the style image.
Existing artistic style transfer methods can be categorized into style
reconstruction-based and content-style disentanglement-based style transfer
approaches. Although these methods can generate some artistic stylized images,
they still exhibit obvious artifacts and disharmonious patterns, which hinder
their ability to produce ultra-high quality artistic stylized images. To
address these issues, we propose a novel artistic image style transfer method,
U-StyDiT, which is built on transformer-based diffusion (DiT) and learns
content-style disentanglement, generating ultra-high quality artistic stylized
images. Specifically, we first design a Multi-view Style Modulator (MSM) to
learn style information from a style image from local and global perspectives,
conditioning U-StyDiT to generate stylized images with the learned style
information. Then, we introduce a StyDiT Block to learn content and style
conditions simultaneously from a style image. Additionally, we propose an
ultra-high quality artistic image dataset, Aes4M, comprising 10 categories,
each containing 400,000 style images. This dataset effectively solves the
problem that existing style transfer methods cannot produce high-quality
artistic stylized images due to the limited size and image quality of existing
datasets. Finally, the extensive qualitative and quantitative
experiments validate that our U-StyDiT can create higher quality stylized
images compared to state-of-the-art artistic style transfer methods. To our
knowledge, our proposed method is the first to address the generation of
ultra-high quality stylized images using transformer-based diffusion.
| new_dataset | 0.964855 |
2503.08162 | Kangan Qian | Kangan Qian and Ziang Luo and Sicong Jiang and Zilin Huang and Jinyu
Miao and Zhikun Ma and Tianze Zhu and Jiayin Li and Yangfan He and Zheng Fu
and Yining Shi and Boyue Wang and Hezhe Lin and Ziyu Chen and Jiangbo Yu and
Xinyu Jiao and Mengmeng Yang and Kun Jiang and Diange Yang | FASIONAD++ : Integrating High-Level Instruction and Information
Bottleneck in FAst-Slow fusION Systems for Enhanced Safety in Autonomous
Driving with Adaptive Feedback | 8 pages, 4 figures | null | null | null | cs.RO cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring safe, comfortable, and efficient planning is crucial for autonomous
driving systems. While end-to-end models trained on large datasets perform well
in standard driving scenarios, they struggle with complex low-frequency events.
Recent advancements in Large Language Models (LLMs) and Vision Language Models
(VLMs) offer enhanced reasoning but suffer from computational
inefficiency. Inspired by the dual-process cognitive model "Thinking, Fast and
Slow", we propose $\textbf{FASIONAD}$ -- a novel dual-system framework that
synergizes a fast end-to-end planner with a VLM-based reasoning module. The
fast system leverages end-to-end learning to achieve real-time trajectory
generation in common scenarios, while the slow system activates through
uncertainty estimation to perform contextual analysis and complex scenario
resolution. Our architecture introduces three key innovations: (1) A dynamic
switching mechanism enabling slow system intervention based on real-time
uncertainty assessment; (2) An information bottleneck with high-level plan
feedback that optimizes the slow system's guidance capability; (3) A
bidirectional knowledge exchange where visual prompts enhance the slow system's
reasoning while its feedback refines the fast planner's decision-making. To
strengthen VLM reasoning, we develop a question-answering mechanism coupled
with a reward-instruct training strategy. In open-loop experiments, FASIONAD
achieves a $6.7\%$ reduction in average $L2$ trajectory error and $28.1\%$
lower collision rate.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:27:01 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Qian",
"Kangan",
""
],
[
"Luo",
"Ziang",
""
],
[
"Jiang",
"Sicong",
""
],
[
"Huang",
"Zilin",
""
],
[
"Miao",
"Jinyu",
""
],
[
"Ma",
"Zhikun",
""
],
[
"Zhu",
"Tianze",
""
],
[
"Li",
"Jiayin",
""
],
[
"He",
"Yangfan",
""
],
[
"Fu",
"Zheng",
""
],
[
"Shi",
"Yining",
""
],
[
"Wang",
"Boyue",
""
],
[
"Lin",
"Hezhe",
""
],
[
"Chen",
"Ziyu",
""
],
[
"Yu",
"Jiangbo",
""
],
[
"Jiao",
"Xinyu",
""
],
[
"Yang",
"Mengmeng",
""
],
[
"Jiang",
"Kun",
""
],
[
"Yang",
"Diange",
""
]
]
| TITLE: FASIONAD++ : Integrating High-Level Instruction and Information
Bottleneck in FAst-Slow fusION Systems for Enhanced Safety in Autonomous
Driving with Adaptive Feedback
ABSTRACT: Ensuring safe, comfortable, and efficient planning is crucial for autonomous
driving systems. While end-to-end models trained on large datasets perform well
in standard driving scenarios, they struggle with complex low-frequency events.
Recent advancements in Large Language Models (LLMs) and Vision Language Models
(VLMs) offer enhanced reasoning but suffer from computational
inefficiency. Inspired by the dual-process cognitive model "Thinking, Fast and
Slow", we propose $\textbf{FASIONAD}$ -- a novel dual-system framework that
synergizes a fast end-to-end planner with a VLM-based reasoning module. The
fast system leverages end-to-end learning to achieve real-time trajectory
generation in common scenarios, while the slow system activates through
uncertainty estimation to perform contextual analysis and complex scenario
resolution. Our architecture introduces three key innovations: (1) A dynamic
switching mechanism enabling slow system intervention based on real-time
uncertainty assessment; (2) An information bottleneck with high-level plan
feedback that optimizes the slow system's guidance capability; (3) A
bidirectional knowledge exchange where visual prompts enhance the slow system's
reasoning while its feedback refines the fast planner's decision-making. To
strengthen VLM reasoning, we develop a question-answering mechanism coupled
with a reward-instruct training strategy. In open-loop experiments, FASIONAD
achieves a $6.7\%$ reduction in average $L2$ trajectory error and $28.1\%$
lower collision rate.
| no_new_dataset | 0.946892 |
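The FASIONAD++ record above hinges on uncertainty-gated switching between a fast end-to-end planner and a slow VLM-based reasoner. The pure-Python sketch below shows only that dispatch logic; the planner, reasoner, and 0.5 threshold are placeholders, not the authors' components.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Plan:
    trajectory: List[Tuple[float, float]]  # (x, y) waypoints
    uncertainty: float                     # scalar uncertainty estimate in [0, 1]

def fast_planner(observation: dict) -> Plan:
    # Placeholder for an end-to-end planner that also reports its own uncertainty.
    return Plan(trajectory=[(0.0, 0.0), (1.0, 0.1)], uncertainty=observation.get("novelty", 0.2))

def slow_reasoner(observation: dict, draft: Plan) -> Plan:
    # Placeholder for a VLM-based module that refines the draft with high-level guidance.
    refined = [(x, y * 0.5) for x, y in draft.trajectory]  # e.g. a more conservative trajectory
    return Plan(trajectory=refined, uncertainty=0.0)

def plan(observation: dict, threshold: float = 0.5) -> Plan:
    # Run the fast system always; invoke the slow system only when uncertainty is high.
    draft = fast_planner(observation)
    if draft.uncertainty > threshold:
        return slow_reasoner(observation, draft)
    return draft

if __name__ == "__main__":
    print(plan({"novelty": 0.8}).trajectory)  # slow path
    print(plan({"novelty": 0.1}).trajectory)  # fast path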
2503.08165 | Xinhang Liu | Xinhang Liu, Yu-Wing Tai, Chi-Keung Tang | Multimodal Generation of Animatable 3D Human Models with AvatarForge | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We introduce AvatarForge, a framework for generating animatable 3D human
avatars from text or image inputs using AI-driven procedural generation. While
diffusion-based methods have made strides in general 3D object generation, they
struggle with high-quality, customizable human avatars due to the complexity
and diversity of human body shapes, poses, exacerbated by the scarcity of
high-quality data. Additionally, animating these avatars remains a significant
challenge for existing methods. AvatarForge overcomes these limitations by
combining LLM-based commonsense reasoning with off-the-shelf 3D human
generators, enabling fine-grained control over body and facial details. Unlike
diffusion models which often rely on pre-trained datasets lacking precise
control over individual human features, AvatarForge offers a more flexible
approach, bringing humans into the iterative design and modeling loop, with its
auto-verification system allowing for continuous refinement of the generated
avatars, and thus promoting high accuracy and customization. Our evaluations
show that AvatarForge outperforms state-of-the-art methods in both text- and
image-to-avatar generation, making it a versatile tool for artistic creation
and animation.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:29:18 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Liu",
"Xinhang",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Tang",
"Chi-Keung",
""
]
]
| TITLE: Multimodal Generation of Animatable 3D Human Models with AvatarForge
ABSTRACT: We introduce AvatarForge, a framework for generating animatable 3D human
avatars from text or image inputs using AI-driven procedural generation. While
diffusion-based methods have made strides in general 3D object generation, they
struggle with high-quality, customizable human avatars due to the complexity
and diversity of human body shapes and poses, exacerbated by the scarcity of
high-quality data. Additionally, animating these avatars remains a significant
challenge for existing methods. AvatarForge overcomes these limitations by
combining LLM-based commonsense reasoning with off-the-shelf 3D human
generators, enabling fine-grained control over body and facial details. Unlike
diffusion models which often rely on pre-trained datasets lacking precise
control over individual human features, AvatarForge offers a more flexible
approach, bringing humans into the iterative design and modeling loop, with its
auto-verification system allowing for continuous refinement of the generated
avatars, and thus promoting high accuracy and customization. Our evaluations
show that AvatarForge outperforms state-of-the-art methods in both text- and
image-to-avatar generation, making it a versatile tool for artistic creation
and animation.
| no_new_dataset | 0.945045 |
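The AvatarForge record above relies on an auto-verification loop that keeps refining a generated avatar until it passes checks. Below is a minimal, framework-agnostic sketch of such a generate-verify-refine loop; the generator, verifier, and feedback format are placeholders rather than the paper's actual modules.

from typing import Callable, Tuple

def refine_until_valid(
    generate: Callable[[str, str], dict],        # (prompt, feedback) -> candidate avatar spec
    verify: Callable[[dict], Tuple[bool, str]],  # candidate -> (passes?, textual feedback)
    prompt: str,
    max_rounds: int = 5,
) -> dict:
    # Iterative generate -> verify -> refine loop; stops on success or after max_rounds.
    feedback = ""
    candidate: dict = {}
    for _ in range(max_rounds):
        candidate = generate(prompt, feedback)
        ok, feedback = verify(candidate)
        if ok:
            break
    return candidate

if __name__ == "__main__":
    # Toy generator/verifier pair: the verifier demands a "rig" key, the generator adds it on retry.
    def gen(prompt: str, fb: str) -> dict:
        spec = {"prompt": prompt}
        if "missing rig" in fb:
            spec["rig"] = "skeleton-v1"
        return spec

    def ver(spec: dict) -> Tuple[bool, str]:
        return ("rig" in spec, "" if "rig" in spec else "missing rig")

    print(refine_until_valid(gen, ver, "a tall knight"))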
2503.08166 | JiaXuan Zhu | Jiaxuan Zhu, Hao Tang | Dynamic Scene Reconstruction: Recent Advance in Real-time Rendering and
Streaming | 20 pages, 6 figures | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Representing and rendering dynamic scenes from 2D images is a fundamental yet
challenging problem in computer vision and graphics. This survey provides a
comprehensive review of the evolution and advancements in dynamic scene
representation and rendering, with a particular emphasis on recent progress in
Neural Radiance Fields based and 3D Gaussian Splatting based reconstruction
methods. We systematically summarize existing approaches, categorize them
according to their core principles, compile relevant datasets, compare the
performance of various methods on these benchmarks, and explore the challenges
and future research directions in this rapidly evolving field. In total, we
review over 170 relevant papers, offering a broad perspective on the state of
the art in this domain.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:29:41 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhu",
"Jiaxuan",
""
],
[
"Tang",
"Hao",
""
]
]
| TITLE: Dynamic Scene Reconstruction: Recent Advance in Real-time Rendering and
Streaming
ABSTRACT: Representing and rendering dynamic scenes from 2D images is a fundamental yet
challenging problem in computer vision and graphics. This survey provides a
comprehensive review of the evolution and advancements in dynamic scene
representation and rendering, with a particular emphasis on recent progress in
Neural Radiance Fields based and 3D Gaussian Splatting based reconstruction
methods. We systematically summarize existing approaches, categorize them
according to their core principles, compile relevant datasets, compare the
performance of various methods on these benchmarks, and explore the challenges
and future research directions in this rapidly evolving field. In total, we
review over 170 relevant papers, offering a broad perspective on the state of
the art in this domain.
| no_new_dataset | 0.941439 |
2503.08168 | Miao Zhang | Miao Zhang, Jun Yin, Pengyu Zeng, Yiqing Shen, Shuai Lu, Xueqian Wang | TSCnet: A Text-driven Semantic-level Controllable Framework for
Customized Low-Light Image Enhancement | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning-based image enhancement methods show significant advantages in
reducing noise and improving visibility in low-light conditions. These methods
are typically based on one-to-one mapping, where the model learns a direct
transformation from low light to specific enhanced images. Therefore, these
methods are inflexible as they do not allow highly personalized mapping, even
though an individual's lighting preferences are inherently personalized. To
overcome these limitations, we propose a new light enhancement task and a new
framework that provides customized lighting control through prompt-driven,
semantic-level, and quantitative brightness adjustments. The framework begins
by leveraging a Large Language Model (LLM) to understand natural language
prompts, enabling it to identify target objects for brightness adjustments. To
localize these target objects, the Retinex-based Reasoning Segment (RRS) module
generates precise target localization masks using reflection images.
Subsequently, the Text-based Brightness Controllable (TBC) module adjusts
brightness levels based on the generated illumination map. Finally, an Adaptive
Contextual Compensation (ACC) module integrates multi-modal inputs and controls
a conditional diffusion model to adjust the lighting, ensuring seamless and
precise enhancements. Experimental results on benchmark datasets
demonstrate our framework's superior performance in increasing visibility,
maintaining natural color balance, and amplifying fine details without creating
artifacts. Furthermore, its robust generalization capabilities enable complex
semantic-level lighting adjustments in diverse open-world environments through
natural language interactions.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:30:50 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Zhang",
"Miao",
""
],
[
"Yin",
"Jun",
""
],
[
"Zeng",
"Pengyu",
""
],
[
"Shen",
"Yiqing",
""
],
[
"Lu",
"Shuai",
""
],
[
"Wang",
"Xueqian",
""
]
]
| TITLE: TSCnet: A Text-driven Semantic-level Controllable Framework for
Customized Low-Light Image Enhancement
ABSTRACT: Deep learning-based image enhancement methods show significant advantages in
reducing noise and improving visibility in low-light conditions. These methods
are typically based on one-to-one mapping, where the model learns a direct
transformation from low light to specific enhanced images. Therefore, these
methods are inflexible as they do not allow highly personalized mapping, even
though an individual's lighting preferences are inherently personalized. To
overcome these limitations, we propose a new light enhancement task and a new
framework that provides customized lighting control through prompt-driven,
semantic-level, and quantitative brightness adjustments. The framework begins
by leveraging a Large Language Model (LLM) to understand natural language
prompts, enabling it to identify target objects for brightness adjustments. To
localize these target objects, the Retinex-based Reasoning Segment (RRS) module
generates precise target localization masks using reflection images.
Subsequently, the Text-based Brightness Controllable (TBC) module adjusts
brightness levels based on the generated illumination map. Finally, an Adaptive
Contextual Compensation (ACC) module integrates multi-modal inputs and controls
a conditional diffusion model to adjust the lighting, ensuring seamless and
precise enhancements. Experimental results on benchmark datasets
demonstrate our framework's superior performance in increasing visibility,
maintaining natural color balance, and amplifying fine details without creating
artifacts. Furthermore, its robust generalization capabilities enable complex
semantic-level lighting adjustments in diverse open-world environments through
natural language interactions.
| no_new_dataset | 0.950595 |
2503.08170 | Dongyue Li | Dongyue Li and Daisuke Deguchi and Hiroshi Murase | CQVPR: Landmark-aware Contextual Queries for Visual Place Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Visual Place Recognition (VPR) aims to estimate the location of the given
query image within a database of geo-tagged images. To identify the exact
location in an image, detecting landmarks is crucial. However, in some
scenarios, such as urban environments, there are numerous landmarks, such as
various modern buildings, and the landmarks in different cities often exhibit
high visual similarity. Therefore, it is essential not only to leverage the
landmarks but also to consider the contextual information surrounding them,
such as whether there are trees, roads, or other features around the landmarks.
We propose the Contextual Query VPR (CQVPR), which integrates contextual
information with detailed pixel-level visual features. By leveraging a set of
learnable contextual queries, our method automatically learns the high-level
contexts with respect to landmarks and their surrounding areas. Heatmaps
depicting regions that each query attends to serve as context-aware features,
offering cues that could enhance the understanding of each scene. We further
propose a query matching loss to supervise the extraction process of contextual
queries. Extensive experiments on several datasets demonstrate that the
proposed method outperforms other state-of-the-art methods, especially in
challenging scenarios.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:32:50 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Li",
"Dongyue",
""
],
[
"Deguchi",
"Daisuke",
""
],
[
"Murase",
"Hiroshi",
""
]
]
| TITLE: CQVPR: Landmark-aware Contextual Queries for Visual Place Recognition
ABSTRACT: Visual Place Recognition (VPR) aims to estimate the location of the given
query image within a database of geo-tagged images. To identify the exact
location in an image, detecting landmarks is crucial. However, in some
scenarios, such as urban environments, there are numerous landmarks, such as
various modern buildings, and the landmarks in different cities often exhibit
high visual similarity. Therefore, it is essential not only to leverage the
landmarks but also to consider the contextual information surrounding them,
such as whether there are trees, roads, or other features around the landmarks.
We propose the Contextual Query VPR (CQVPR), which integrates contextual
information with detailed pixel-level visual features. By leveraging a set of
learnable contextual queries, our method automatically learns the high-level
contexts with respect to landmarks and their surrounding areas. Heatmaps
depicting regions that each query attends to serve as context-aware features,
offering cues that could enhance the understanding of each scene. We further
propose a query matching loss to supervise the extraction process of contextual
queries. Extensive experiments on several datasets demonstrate that the
proposed method outperforms other state-of-the-art methods, especially in
challenging scenarios.
| no_new_dataset | 0.941815 |
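The CQVPR record above describes learnable contextual queries whose attention maps serve as heatmap-like, context-aware features. A minimal PyTorch sketch of that mechanism — learnable queries cross-attending to flattened backbone features — is given below; sizes and pooling choices are illustrative assumptions, not the paper's design.

import torch
import torch.nn as nn

class ContextualQueryPooling(nn.Module):
    # Learnable queries cross-attend to pixel-level features to form context-aware descriptors.
    def __init__(self, dim: int = 256, num_queries: int = 8, num_heads: int = 4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor):
        # feats: (B, H*W, dim) flattened spatial features from a backbone
        b = feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)    # (B, Q, dim)
        ctx, attn_weights = self.attn(q, feats, feats)      # ctx: (B, Q, dim)
        # attn_weights: (B, Q, H*W) -- reshaped, these act like per-query heatmaps
        return ctx, attn_weights

if __name__ == "__main__":
    model = ContextualQueryPooling()
    feats = torch.randn(2, 14 * 14, 256)
    ctx, heat = model(feats)
    print(ctx.shape, heat.shape)  # torch.Size([2, 8, 256]) torch.Size([2, 8, 196])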
2503.08173 | Yuan Tian | Yuan Tian, Kaiyuan Ji, Rongzhao Zhang, Yankai Jiang, Chunyi Li,
Xiaosong Wang, Guangtao Zhai | Towards All-in-One Medical Image Re-Identification | Accepted to CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical image re-identification (MedReID) is under-explored so far, despite
its critical applications in personalized healthcare and privacy protection. In
this paper, we introduce a thorough benchmark and a unified model for this
problem. First, to handle various medical modalities, we propose a novel
Continuous Modality-based Parameter Adapter (ComPA). ComPA condenses medical
content into a continuous modality representation and dynamically adjusts the
modality-agnostic model with modality-specific parameters at runtime. This
allows a single model to adaptively learn and process diverse modality data.
Furthermore, we integrate medical priors into our model by aligning it with a
bag of pre-trained medical foundation models, in terms of the differential
features. Compared to single-image features, modeling the inter-image difference
better fits the re-identification problem, which involves discriminating
multiple images. We evaluate the proposed model against 25 foundation models
and 8 large multi-modal language models across 11 image datasets, demonstrating
consistently superior performance. Additionally, we deploy the proposed MedReID
technique to two real-world applications, i.e., history-augmented personalized
diagnosis and medical privacy protection. Code and models are available at
\href{https://github.com/tianyuan168326/All-in-One-MedReID-Pytorch}{https://github.com/tianyuan168326/All-in-One-MedReID-Pytorch}.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:35:00 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Tian",
"Yuan",
""
],
[
"Ji",
"Kaiyuan",
""
],
[
"Zhang",
"Rongzhao",
""
],
[
"Jiang",
"Yankai",
""
],
[
"Li",
"Chunyi",
""
],
[
"Wang",
"Xiaosong",
""
],
[
"Zhai",
"Guangtao",
""
]
]
| TITLE: Towards All-in-One Medical Image Re-Identification
ABSTRACT: Medical image re-identification (MedReID) is under-explored so far, despite
its critical applications in personalized healthcare and privacy protection. In
this paper, we introduce a thorough benchmark and a unified model for this
problem. First, to handle various medical modalities, we propose a novel
Continuous Modality-based Parameter Adapter (ComPA). ComPA condenses medical
content into a continuous modality representation and dynamically adjusts the
modality-agnostic model with modality-specific parameters at runtime. This
allows a single model to adaptively learn and process diverse modality data.
Furthermore, we integrate medical priors into our model by aligning it with a
bag of pre-trained medical foundation models, in terms of the differential
features. Compared to single-image features, modeling the inter-image difference
better fits the re-identification problem, which involves discriminating
multiple images. We evaluate the proposed model against 25 foundation models
and 8 large multi-modal language models across 11 image datasets, demonstrating
consistently superior performance. Additionally, we deploy the proposed MedReID
technique to two real-world applications, i.e., history-augmented personalized
diagnosis and medical privacy protection. Code and models are available at
\href{https://github.com/tianyuan168326/All-in-One-MedReID-Pytorch}{https://github.com/tianyuan168326/All-in-One-MedReID-Pytorch}.
| no_new_dataset | 0.950457 |
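The MedReID record above mentions a Continuous Modality-based Parameter Adapter that adjusts a modality-agnostic model with modality-specific parameters at runtime. The abstract does not specify the mechanism, so the sketch below shows one simple hypernetwork-style realization (a modality vector predicting per-channel scale/shift); it is an assumption for illustration only.

import torch
import torch.nn as nn

class ModalityConditionedAdapter(nn.Module):
    # A continuous modality representation predicts per-channel scale/shift that modulates
    # backbone features at runtime, so one model can adapt to many modalities.
    def __init__(self, feat_dim: int = 256, modality_dim: int = 64):
        super().__init__()
        self.to_scale_shift = nn.Linear(modality_dim, 2 * feat_dim)

    def forward(self, feats: torch.Tensor, modality_vec: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, feat_dim) tokens; modality_vec: (B, modality_dim)
        scale, shift = self.to_scale_shift(modality_vec).chunk(2, dim=-1)
        return feats * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

if __name__ == "__main__":
    adapter = ModalityConditionedAdapter()
    out = adapter(torch.randn(2, 49, 256), torch.randn(2, 64))
    print(out.shape)  # torch.Size([2, 49, 256])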
2503.08175 | Zitong Shi | Zitong Shi, Guancheng Wan, Wenke Huang, Guibin Zhang, Jiawei Shao,
Mang Ye, Carl Yang | Privacy-Enhancing Paradigms within Federated Multi-Agent Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LLM-based Multi-Agent Systems (MAS) have proven highly effective in solving
complex problems by integrating multiple agents, each performing different
roles. However, in sensitive domains, they face emerging privacy protection
challenges. In this paper, we introduce the concept of Federated MAS,
highlighting the fundamental differences between Federated MAS and traditional
FL. We then identify key challenges in developing Federated MAS, including: 1)
heterogeneous privacy protocols among agents, 2) structural differences in
multi-party conversations, and 3) dynamic conversational network structures. To
address these challenges, we propose Embedded Privacy-Enhancing Agents
(EPEAgent), an innovative solution that integrates seamlessly into the
Retrieval-Augmented Generation (RAG) phase and the context retrieval stage.
This solution minimizes data flows, ensuring that only task-relevant,
agent-specific information is shared. Additionally, we design and generate a
comprehensive dataset to evaluate the proposed paradigm. Extensive experiments
demonstrate that EPEAgent effectively enhances privacy protection while
maintaining strong system performance. The code will be available at
https://github.com/ZitongShi/EPEAgent
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:38:45 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Shi",
"Zitong",
""
],
[
"Wan",
"Guancheng",
""
],
[
"Huang",
"Wenke",
""
],
[
"Zhang",
"Guibin",
""
],
[
"Shao",
"Jiawei",
""
],
[
"Ye",
"Mang",
""
],
[
"Yang",
"Carl",
""
]
]
| TITLE: Privacy-Enhancing Paradigms within Federated Multi-Agent Systems
ABSTRACT: LLM-based Multi-Agent Systems (MAS) have proven highly effective in solving
complex problems by integrating multiple agents, each performing different
roles. However, in sensitive domains, they face emerging privacy protection
challenges. In this paper, we introduce the concept of Federated MAS,
highlighting the fundamental differences between Federated MAS and traditional
FL. We then identify key challenges in developing Federated MAS, including: 1)
heterogeneous privacy protocols among agents, 2) structural differences in
multi-party conversations, and 3) dynamic conversational network structures. To
address these challenges, we propose Embedded Privacy-Enhancing Agents
(EPEAgent), an innovative solution that integrates seamlessly into the
Retrieval-Augmented Generation (RAG) phase and the context retrieval stage.
This solution minimizes data flows, ensuring that only task-relevant,
agent-specific information is shared. Additionally, we design and generate a
comprehensive dataset to evaluate the proposed paradigm. Extensive experiments
demonstrate that EPEAgent effectively enhances privacy protection while
maintaining strong system performance. The code will be available at
https://github.com/ZitongShi/EPEAgent
| new_dataset | 0.958265 |
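The EPEAgent record above is built around minimizing data flows so that only task-relevant, agent-specific information is shared. The sketch below illustrates that principle with a per-recipient field allowlist; the policy table, field names, and agent names are invented for the example and are not EPEAgent's actual interface.

from typing import Any, Dict, Set

# Per-recipient allowlists: only these fields ever leave the local agent.
SHARE_POLICY: Dict[str, Set[str]] = {
    "scheduler_agent": {"task_id", "deadline"},
    "retrieval_agent": {"task_id", "query"},
}

def minimize_payload(record: Dict[str, Any], recipient: str) -> Dict[str, Any]:
    # Share only task-relevant, recipient-specific fields; everything else stays local.
    allowed = SHARE_POLICY.get(recipient, set())
    return {k: v for k, v in record.items() if k in allowed}

if __name__ == "__main__":
    record = {"task_id": 7, "query": "lab results", "patient_name": "KEPT-LOCAL", "deadline": "2025-04-01"}
    print(minimize_payload(record, "retrieval_agent"))   # {'task_id': 7, 'query': 'lab results'}
    print(minimize_payload(record, "scheduler_agent"))   # {'task_id': 7, 'deadline': '2025-04-01'}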
2503.08189 | Xinyan Wang | Xinyan Wang, Jinshuo Liu, Cheng Bi, Kaijian Xie, Meng Wang, Juan Deng
and Jeff Pan | SoTCKGE:Continual Knowledge Graph Embedding Based on Spatial Offset
Transformation | 9 pages, 5 figures | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current Continual Knowledge Graph Embedding (CKGE) methods primarily rely on
translation-based embedding methods, leveraging previously acquired knowledge
to initialize new facts. To enhance learning efficiency, these methods often
integrate fine-tuning or continual learning strategies. However, this
compromises the model's prediction accuracy and the translation-based methods
lack support for complex relational structures (multi-hop relations). To tackle
this challenge, we propose a novel CKGE framework SoTCKGE grounded in Spatial
Offset Transformation. Within this framework, entity positions are defined as
being jointly determined by base position vectors and offset vectors. This not
only enhances the model's ability to represent complex relational structures
but also allows for the embedding update of both new and old knowledge through
simple spatial offset transformations, without the need for continual learning
methods. Furthermore, we introduce a hierarchical update strategy and a
balanced embedding method to refine the parameter update process, effectively
minimizing training costs and augmenting model accuracy. To comprehensively
assess the performance of our model, we have conducted extensive experiments
on four publicly accessible datasets and a new dataset constructed by us.
Experimental results demonstrate the advantage of our model in enhancing
multi-hop relationship learning and further improving prediction accuracy.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:54:03 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wang",
"Xinyan",
""
],
[
"Liu",
"Jinshuo",
""
],
[
"Bi",
"Cheng",
""
],
[
"Xie",
"Kaijian",
""
],
[
"Wang",
"Meng",
""
],
[
"Deng",
"Juan",
""
],
[
"Pan",
"Jeff",
""
]
]
| TITLE: SoTCKGE:Continual Knowledge Graph Embedding Based on Spatial Offset
Transformation
ABSTRACT: Current Continual Knowledge Graph Embedding (CKGE) methods primarily rely on
translation-based embedding methods, leveraging previously acquired knowledge
to initialize new facts. To enhance learning efficiency, these methods often
integrate fine-tuning or continual learning strategies. However, this
compromises the model's prediction accuracy and the translation-based methods
lack support for complex relational structures (multi-hop relations). To tackle
this challenge, we propose a novel CKGE framework SoTCKGE grounded in Spatial
Offset Transformation. Within this framework, entity positions are defined as
being jointly determined by base position vectors and offset vectors. This not
only enhances the model's ability to represent complex relational structures
but also allows for the embedding update of both new and old knowledge through
simple spatial offset transformations, without the need for continual learning
methods. Furthermore, we introduce a hierarchical update strategy and a
balanced embedding method to refine the parameter update process, effectively
minimizing training costs and augmenting model accuracy. To comprehensively
assess the performance of our model, we have conducted extensive experiments
on four publicly accessible datasets and a new dataset constructed by us.
Experimental results demonstrate the advantage of our model in enhancing
multi-hop relationship learning and further improving prediction accuracy.
| new_dataset | 0.967747 |
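The SoTCKGE record above defines entity positions as a base vector plus an offset vector, with new knowledge absorbed through offset updates. The PyTorch sketch below combines that representation with a TransE-style distance score; the scoring function and initialization are illustrative assumptions, since the abstract does not fix them.

import torch
import torch.nn as nn

class OffsetKGE(nn.Module):
    # Entities are a base position plus a learnable offset; new knowledge can be absorbed
    # by updating offsets while base positions stay fixed.
    def __init__(self, n_entities: int, n_relations: int, dim: int = 128):
        super().__init__()
        self.base = nn.Embedding(n_entities, dim)
        self.offset = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        nn.init.xavier_uniform_(self.base.weight)
        nn.init.zeros_(self.offset.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def entity(self, idx: torch.Tensor) -> torch.Tensor:
        return self.base(idx) + self.offset(idx)

    def score(self, h: torch.Tensor, r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # TransE-style plausibility: smaller translation error => higher score
        return -(self.entity(h) + self.rel(r) - self.entity(t)).norm(p=1, dim=-1)

if __name__ == "__main__":
    m = OffsetKGE(1000, 50)
    h, r, t = torch.tensor([1]), torch.tensor([3]), torch.tensor([7])
    print(m.score(h, r, t))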
2503.08201 | Xuanhan Wang | Xuanhan Wang, Huimin Deng, Lianli Gao, Jingkuan Song | Scale-Aware Pre-Training for Human-Centric Visual Perception: Enabling
Lightweight and Generalizable Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Human-centric visual perception (HVP) has recently achieved remarkable
progress due to advancements in large-scale self-supervised pretraining (SSP).
However, existing HVP models face limitations in adapting to real-world
applications, which require general visual patterns for downstream tasks while
maintaining computationally sustainable costs to ensure compatibility with edge
devices. These limitations primarily arise from two issues: 1) the pretraining
objectives focus solely on specific visual patterns, limiting the
generalizability of the learned patterns for diverse downstream tasks; and 2)
HVP models often exhibit excessively large model sizes, making them
incompatible with real-world applications. To address these limitations, we
introduce Scale-Aware Image Pretraining (SAIP), a novel SSP framework enabling
lightweight vision models to acquire general patterns for HVP. Specifically,
SAIP incorporates three learning objectives based on the principle of
cross-scale consistency: 1) Cross-scale Matching (CSM) which contrastively
learns image-level invariant patterns from multi-scale single-person images; 2)
Cross-scale Reconstruction (CSR) which learns pixel-level consistent visual
structures from multi-scale masked single-person images; and 3) Cross-scale
Search (CSS) which learns to capture diverse patterns from multi-scale
multi-person images. Three objectives complement one another, enabling
lightweight models to learn multi-scale generalizable patterns essential for
HVP downstream tasks. Extensive experiments conducted across 12 HVP datasets
demonstrate that SAIP exhibits remarkable generalization capabilities across 9
human-centric vision tasks. Moreover, it achieves significant performance
improvements over existing methods, with gains of 3%-13% in single-person
discrimination tasks, 1%-11% in dense prediction tasks, and 1%-6% in
multi-person visual understanding tasks.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 09:12:51 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Wang",
"Xuanhan",
""
],
[
"Deng",
"Huimin",
""
],
[
"Gao",
"Lianli",
""
],
[
"Song",
"Jingkuan",
""
]
]
| TITLE: Scale-Aware Pre-Training for Human-Centric Visual Perception: Enabling
Lightweight and Generalizable Models
ABSTRACT: Human-centric visual perception (HVP) has recently achieved remarkable
progress due to advancements in large-scale self-supervised pretraining (SSP).
However, existing HVP models face limitations in adapting to real-world
applications, which require general visual patterns for downstream tasks while
maintaining computationally sustainable costs to ensure compatibility with edge
devices. These limitations primarily arise from two issues: 1) the pretraining
objectives focus solely on specific visual patterns, limiting the
generalizability of the learned patterns for diverse downstream tasks; and 2)
HVP models often exhibit excessively large model sizes, making them
incompatible with real-world applications. To address these limitations, we
introduce Scale-Aware Image Pretraining (SAIP), a novel SSP framework enabling
lightweight vision models to acquire general patterns for HVP. Specifically,
SAIP incorporates three learning objectives based on the principle of
cross-scale consistency: 1) Cross-scale Matching (CSM) which contrastively
learns image-level invariant patterns from multi-scale single-person images; 2)
Cross-scale Reconstruction (CSR) which learns pixel-level consistent visual
structures from multi-scale masked single-person images; and 3) Cross-scale
Search (CSS) which learns to capture diverse patterns from multi-scale
multi-person images. Three objectives complement one another, enabling
lightweight models to learn multi-scale generalizable patterns essential for
HVP downstream tasks. Extensive experiments conducted across 12 HVP datasets
demonstrate that SAIP exhibits remarkable generalization capabilities across 9
human-centric vision tasks. Moreover, it achieves significant performance
improvements over existing methods, with gains of 3%-13% in single-person
discrimination tasks, 1%-11% in dense prediction tasks, and 1%-6% in
multi-person visual understanding tasks.
| no_new_dataset | 0.947962 |
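The SAIP record above lists Cross-scale Matching, which contrastively aligns images of the same person at different scales. A minimal cross-scale InfoNCE loss in that spirit is sketched below; it is a generic contrastive formulation, not the paper's exact objective.

import torch
import torch.nn.functional as F

def cross_scale_matching_loss(z_small: torch.Tensor, z_large: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # z_small / z_large: (N, d) embeddings of the same N persons rendered at two scales.
    # Each sample's positive is its own counterpart at the other scale (InfoNCE over the batch).
    z1 = F.normalize(z_small, dim=1)
    z2 = F.normalize(z_large, dim=1)
    logits = z1 @ z2.t() / tau                      # (N, N) cross-scale similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    a, b = torch.randn(8, 128), torch.randn(8, 128)
    print(cross_scale_matching_loss(a, b).item())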
2503.08203 | Chungpa Lee | Chungpa Lee, Jeongheon Oh, Kibok Lee, Jy-yong Sohn | A Theoretical Framework for Preventing Class Collapse in Supervised
Contrastive Learning | null | Proceedings of the 28th International Conference on Artificial
Intelligence and Statistics (AISTATS) 2025 | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Supervised contrastive learning (SupCL) has emerged as a prominent approach
in representation learning, leveraging both supervised and self-supervised
losses. However, achieving an optimal balance between these losses is
challenging; failing to do so can lead to class collapse, reducing
discrimination among individual embeddings in the same class. In this paper, we
present theoretically grounded guidelines for SupCL to prevent class collapse
in learned representations. Specifically, we introduce the Simplex-to-Simplex
Embedding Model (SSEM), a theoretical framework that models various embedding
structures, including all embeddings that minimize the supervised contrastive
loss. Through SSEM, we analyze how hyperparameters affect learned
representations, offering practical guidelines for hyperparameter selection to
mitigate the risk of class collapse. Our theoretical findings are supported by
empirical results across synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 09:17:58 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Lee",
"Chungpa",
""
],
[
"Oh",
"Jeongheon",
""
],
[
"Lee",
"Kibok",
""
],
[
"Sohn",
"Jy-yong",
""
]
]
| TITLE: A Theoretical Framework for Preventing Class Collapse in Supervised
Contrastive Learning
ABSTRACT: Supervised contrastive learning (SupCL) has emerged as a prominent approach
in representation learning, leveraging both supervised and self-supervised
losses. However, achieving an optimal balance between these losses is
challenging; failing to do so can lead to class collapse, reducing
discrimination among individual embeddings in the same class. In this paper, we
present theoretically grounded guidelines for SupCL to prevent class collapse
in learned representations. Specifically, we introduce the Simplex-to-Simplex
Embedding Model (SSEM), a theoretical framework that models various embedding
structures, including all embeddings that minimize the supervised contrastive
loss. Through SSEM, we analyze how hyperparameters affect learned
representations, offering practical guidelines for hyperparameter selection to
mitigate the risk of class collapse. Our theoretical findings are supported by
empirical results across synthetic and real-world datasets.
| no_new_dataset | 0.949949 |
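The class-collapse record above is about balancing supervised and self-supervised contrastive terms. The sketch below writes both terms with one supervised-contrastive routine — class labels for the supervised term, instance (view-pair) labels for the self-supervised term — and exposes the balance weight alpha; this weighting scheme is an illustrative assumption, not the paper's SSEM analysis.

import torch
import torch.nn.functional as F

def sup_con_loss(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # Generic supervised-contrastive loss over L2-normalized embeddings z: (N, d).
    z = F.normalize(z, dim=1)
    sim = (z @ z.t()) / tau
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def supcl_objective(z1: torch.Tensor, z2: torch.Tensor, labels: torch.Tensor,
                    alpha: float = 0.5, tau: float = 0.1) -> torch.Tensor:
    # Balance a class-level (supervised) term against an instance-level (self-supervised) term;
    # alpha is the kind of hyperparameter whose choice governs class collapse.
    z = torch.cat([z1, z2], dim=0)
    class_labels = torch.cat([labels, labels], dim=0)
    instance_labels = torch.arange(z1.size(0), device=z1.device).repeat(2)
    return (1 - alpha) * sup_con_loss(z, class_labels, tau) + alpha * sup_con_loss(z, instance_labels, tau)

if __name__ == "__main__":
    z1, z2 = torch.randn(16, 64), torch.randn(16, 64)
    y = torch.randint(0, 4, (16,))
    print(supcl_objective(z1, z2, y).item())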
2503.08205 | Yiheng Yu | Yiheng Yu, Sheng Liu, Yuan Feng, Min Xu, Zhelun Jin, Xuhua Yang | OLMD: Orientation-aware Long-term Motion Decoupling for Continuous Sign
Language Recognition | null | null | null | null | cs.CV cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | The primary challenge in continuous sign language recognition (CSLR) mainly
stems from the presence of multi-orientational and long-term motions. However,
current research overlooks these crucial aspects, significantly impacting
accuracy. To tackle these issues, we propose a novel CSLR framework:
Orientation-aware Long-term Motion Decoupling (OLMD), which efficiently
aggregates long-term motions and decouples multi-orientational signals into
easily interpretable components. Specifically, our innovative Long-term Motion
Aggregation (LMA) module filters out static redundancy while adaptively
capturing abundant features of long-term motions. We further enhance
orientation awareness by decoupling complex movements into horizontal and
vertical components, allowing for motion purification in both orientations.
Additionally, two coupling mechanisms are proposed: stage and cross-stage
coupling, which together enrich multi-scale features and improve the
generalization capabilities of the model. Experimentally, OLMD shows SOTA
performance on three large-scale datasets: PHOENIX14, PHOENIX14-T, and
CSL-Daily. Notably, we improved the word error rate (WER) on PHOENIX14 by an
absolute 1.6% compared to the previous SOTA.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 09:20:06 GMT"
}
]
| 2025-03-12T00:00:00 | [
[
"Yu",
"Yiheng",
""
],
[
"Liu",
"Sheng",
""
],
[
"Feng",
"Yuan",
""
],
[
"Xu",
"Min",
""
],
[
"Jin",
"Zhelun",
""
],
[
"Yang",
"Xuhua",
""
]
]
| TITLE: OLMD: Orientation-aware Long-term Motion Decoupling for Continuous Sign
Language Recognition
ABSTRACT: The primary challenge in continuous sign language recognition (CSLR) mainly
stems from the presence of multi-orientational and long-term motions. However,
current research overlooks these crucial aspects, significantly impacting
accuracy. To tackle these issues, we propose a novel CSLR framework:
Orientation-aware Long-term Motion Decoupling (OLMD), which efficiently
aggregates long-term motions and decouples multi-orientational signals into
easily interpretable components. Specifically, our innovative Long-term Motion
Aggregation (LMA) module filters out static redundancy while adaptively
capturing abundant features of long-term motions. We further enhance
orientation awareness by decoupling complex movements into horizontal and
vertical components, allowing for motion purification in both orientations.
Additionally, two coupling mechanisms are proposed: stage and cross-stage
coupling, which together enrich multi-scale features and improve the
generalization capabilities of the model. Experimentally, OLMD shows SOTA
performance on three large-scale datasets: PHOENIX14, PHOENIX14-T, and
CSL-Daily. Notably, we improved the word error rate (WER) on PHOENIX14 by an
absolute 1.6% compared to the previous SOTA.
| no_new_dataset | 0.947137 |
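The OLMD record above decouples motion into horizontal and vertical components. The paper operates on learned features, so the sketch below is only a crude pixel-level analogue: frame differencing removes static content, then the motion signal is collapsed along each spatial axis to give orientation-specific profiles.

import torch

def decouple_motion(frames: torch.Tensor):
    # frames: (B, T, C, H, W). Temporal differencing suppresses static content; collapsing the
    # motion signal along one spatial axis at a time yields horizontal / vertical profiles.
    diff = frames[:, 1:] - frames[:, :-1]        # (B, T-1, C, H, W) frame-to-frame motion
    horizontal = diff.abs().mean(dim=-2)         # (B, T-1, C, W): variation along width
    vertical = diff.abs().mean(dim=-1)           # (B, T-1, C, H): variation along height
    return horizontal, vertical

if __name__ == "__main__":
    clip = torch.randn(2, 8, 3, 32, 32)
    h, v = decouple_motion(clip)
    print(h.shape, v.shape)  # torch.Size([2, 7, 3, 32]) torch.Size([2, 7, 3, 32])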