id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.11738 | Elena Ballante | Elena Ballante, Pietro Muliere, Silvia Figini | Generalized Bayesian Ensemble Survival Tree (GBEST) model | null | null | null | null | stat.ME cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a new class of predictive models for survival analysis
called Generalized Bayesian Ensemble Survival Tree (GBEST). It is well known
that survival analysis poses many different challenges, in particular when
applied to small data or censoring mechanisms. Our contribution is the proposal
of an ensemble approach that uses Bayesian bootstrap and beta Stacy bootstrap
methods to improve the outcome in survival applications with a special focus on
small datasets. More precisely, a novel approach to integrate Beta Stacy
Bayesian bootstrap in bagging tree models for censored data is proposed in this
paper. Empirical evidence achieved on simulated and real data underlines that
our approach performs better in terms of predictive performances and stability
of the results compared with classical survival models available in the
literature. In terms of methodology our novel contribution considers the
adaptation of recent Bayesian ensemble approaches to survival data, providing a
new model called Generalized Bayesian Ensemble Survival Tree (GBEST). A further
result in terms of computational novelty is the implementation in R of GBEST,
available in a public GitHub repository.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 15:40:18 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ballante",
"Elena",
""
],
[
"Muliere",
"Pietro",
""
],
[
"Figini",
"Silvia",
""
]
] | TITLE: Generalized Bayesian Ensemble Survival Tree (GBEST) model
ABSTRACT: This paper proposes a new class of predictive models for survival analysis
called Generalized Bayesian Ensemble Survival Tree (GBEST). It is well known
that survival analysis poses many different challenges, in particular when
applied to small data or censoring mechanisms. Our contribution is the proposal
of an ensemble approach that uses Bayesian bootstrap and beta Stacy bootstrap
methods to improve the outcome in survival applications with a special focus on
small datasets. More precisely, a novel approach to integrate Beta Stacy
Bayesian bootstrap in bagging tree models for censored data is proposed in this
paper. Empirical evidence achieved on simulated and real data underlines that
our approach performs better in terms of predictive performances and stability
of the results compared with classical survival models available in the
literature. In terms of methodology our novel contribution considers the
adaptation of recent Bayesian ensemble approaches to survival data, providing a
new model called Generalized Bayesian Ensemble Survival Tree (GBEST). A further
result in terms of computational novelty is the implementation in R of GBEST,
available in a public GitHub repository.
|
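As an illustration of the Bayesian-bootstrap bagging idea described in the GBEST abstract above, the following Python sketch draws Dirichlet observation weights (Rubin's Bayesian bootstrap) and averages weighted base learners. It is only a toy under stated assumptions: a plain `DecisionTreeRegressor` stands in for a survival tree, and the function names are hypothetical placeholders rather than part of the authors' released R implementation.

```python
import numpy as np

def bayesian_bootstrap_weights(n, rng):
    # Rubin's Bayesian bootstrap: observation weights drawn from a flat Dirichlet
    return rng.dirichlet(np.ones(n))

def bagged_predictions(X, y, base_fit, base_predict, n_estimators=50, seed=0):
    """Average predictions of base learners fit on Bayesian-bootstrap-weighted data."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_estimators):
        w = bayesian_bootstrap_weights(len(y), rng)
        model = base_fit(X, y, sample_weight=w)   # any learner that accepts sample weights
        preds.append(base_predict(model, X))
    return np.mean(preds, axis=0)

if __name__ == "__main__":
    from sklearn.tree import DecisionTreeRegressor
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3))
    y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=100)
    fit = lambda X, y, sample_weight: DecisionTreeRegressor(max_depth=3).fit(X, y, sample_weight=sample_weight)
    predict = lambda m, X: m.predict(X)
    print(bagged_predictions(X, y, fit, predict)[:5])
```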
2503.11739 | Zirui Yuan | Zirui Yuan, Siqi Lai, Hao Liu | CoLLMLight: Cooperative Large Language Model Agents for Network-Wide
Traffic Signal Control | Under review, 14 pages | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic Signal Control (TSC) plays a critical role in urban traffic
management by optimizing traffic flow and mitigating congestion. While Large
Language Models (LLMs) have recently emerged as promising tools for TSC due to
their exceptional problem-solving and generalization capabilities, existing
approaches fail to address the essential need for inter-agent coordination,
limiting their effectiveness in achieving network-wide optimization. To bridge
this gap, we propose CoLLMLight, a cooperative LLM agent framework for TSC.
Specifically, we first construct a structured spatiotemporal graph to capture
real-time traffic dynamics and spatial relationships among neighboring
intersections, enabling the LLM to reason about complex traffic interactions.
Moreover, we introduce a complexity-aware reasoning mechanism that dynamically
adapts reasoning depth based on real-time traffic conditions, ensuring optimal
computational efficiency without sacrificing decision quality. In addition, we
propose a fine-tuning strategy that leverages iterative simulation-driven data
collection and environmental feedback to build a lightweight LLM tailored for
cooperative TSC. Extensive experiments on both synthetic and real-world
datasets demonstrate that CoLLMLight outperforms state-of-the-art methods in
diverse traffic scenarios, showcasing its effectiveness, scalability, and
robustness.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 15:40:39 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yuan",
"Zirui",
""
],
[
"Lai",
"Siqi",
""
],
[
"Liu",
"Hao",
""
]
] | TITLE: CoLLMLight: Cooperative Large Language Model Agents for Network-Wide
Traffic Signal Control
ABSTRACT: Traffic Signal Control (TSC) plays a critical role in urban traffic
management by optimizing traffic flow and mitigating congestion. While Large
Language Models (LLMs) have recently emerged as promising tools for TSC due to
their exceptional problem-solving and generalization capabilities, existing
approaches fail to address the essential need for inter-agent coordination,
limiting their effectiveness in achieving network-wide optimization. To bridge
this gap, we propose CoLLMLight, a cooperative LLM agent framework for TSC.
Specifically, we first construct a structured spatiotemporal graph to capture
real-time traffic dynamics and spatial relationships among neighboring
intersections, enabling the LLM to reason about complex traffic interactions.
Moreover, we introduce a complexity-aware reasoning mechanism that dynamically
adapts reasoning depth based on real-time traffic conditions, ensuring optimal
computational efficiency without sacrificing decision quality. In addition, we
propose a fine-tuning strategy that leverages iterative simulation-driven data
collection and environmental feedback to build a lightweight LLM tailored for
cooperative TSC. Extensive experiments on both synthetic and real-world
datasets demonstrate that CoLLMLight outperforms state-of-the-art methods in
diverse traffic scenarios, showcasing its effectiveness, scalability, and
robustness.
|
2503.11742 | Moreno D'Inc\`a | Moreno D'Inc\`a, Elia Peruzzo, Xingqian Xu, Humphrey Shi, Nicu Sebe,
Massimiliano Mancini | Safe Vision-Language Models via Unsafe Weights Manipulation | Work in progress | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Vision-language models (VLMs) often inherit the biases and unsafe
associations present within their large-scale training dataset. While recent
approaches mitigate unsafe behaviors, their evaluation focuses on how safe the
model is on unsafe inputs, ignoring potential shortcomings on safe ones. In
this paper, we first revise safety evaluation by introducing SafeGround, a new
set of metrics that evaluate safety at different levels of granularity. With
these metrics, we uncover a surprising issue of training-based methods: they make
the model less safe on safe inputs. From this finding, we take a different
direction and explore whether it is possible to make a model safer without
training, introducing Unsafe Weights Manipulation (UWM). UWM uses a calibration
set of safe and unsafe instances to compare activations between safe and unsafe
content, identifying the most important parameters for processing the latter.
Their values are then manipulated via negation. Experiments show that UWM
achieves the best tradeoff between safety and knowledge preservation,
consistently improving VLMs on unsafe queries while outperforming even
training-based state-of-the-art methods on safe ones.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:00:22 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"D'Incà",
"Moreno",
""
],
[
"Peruzzo",
"Elia",
""
],
[
"Xu",
"Xingqian",
""
],
[
"Shi",
"Humphrey",
""
],
[
"Sebe",
"Nicu",
""
],
[
"Mancini",
"Massimiliano",
""
]
] | TITLE: Safe Vision-Language Models via Unsafe Weights Manipulation
ABSTRACT: Vision-language models (VLMs) often inherit the biases and unsafe
associations present within their large-scale training dataset. While recent
approaches mitigate unsafe behaviors, their evaluation focuses on how safe the
model is on unsafe inputs, ignoring potential shortcomings on safe ones. In
this paper, we first revise safety evaluation by introducing SafeGround, a new
set of metrics that evaluate safety at different levels of granularity. With
these metrics, we uncover a surprising issue of training-based methods: they make
the model less safe on safe inputs. From this finding, we take a different
direction and explore whether it is possible to make a model safer without
training, introducing Unsafe Weights Manipulation (UWM). UWM uses a calibration
set of safe and unsafe instances to compare activations between safe and unsafe
content, identifying the most important parameters for processing the latter.
Their values are then manipulated via negation. Experiments show that UWM
achieves the best tradeoff between safety and knowledge preservation,
consistently improving VLMs on unsafe queries while outperforming even
training-based state-of-the-art methods on safe ones.
|
2503.11743 | Tianliang Xu | Tianliang Xu, Eva Maxfield Brown, Dustin Dwyer, Sabina Tomkins | PUBLICSPEAK: Hearing the Public with a Probabilistic Framework in Local
Government | 10 pages, 3 figures, in the 39th Annual AAAI Conference on Artificial
Intelligence | null | null | null | cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Local governments around the world are making consequential decisions on
behalf of their constituents, and these constituents are responding with
requests, advice, and assessments of their officials at public meetings. So
many small meetings cannot be covered by traditional newsrooms at scale. We
propose PUBLICSPEAK, a probabilistic framework which can utilize meeting
structure, domain knowledge, and linguistic information to discover public
remarks in local government meetings. We then use our approach to inspect the
issues raised by constituents in 7 cities across the United States. We evaluate
our approach on a novel dataset of local government meetings and find that
PUBLICSPEAK improves over state-of-the-art by 10% on average, and by up to 40%.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:04:36 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Xu",
"Tianliang",
""
],
[
"Brown",
"Eva Maxfield",
""
],
[
"Dwyer",
"Dustin",
""
],
[
"Tomkins",
"Sabina",
""
]
] | TITLE: PUBLICSPEAK: Hearing the Public with a Probabilistic Framework in Local
Government
ABSTRACT: Local governments around the world are making consequential decisions on
behalf of their constituents, and these constituents are responding with
requests, advice, and assessments of their officials at public meetings. So
many small meetings cannot be covered by traditional newsrooms at scale. We
propose PUBLICSPEAK, a probabilistic framework which can utilize meeting
structure, domain knowledge, and linguistic information to discover public
remarks in local government meetings. We then use our approach to inspect the
issues raised by constituents in 7 cities across the United States. We evaluate
our approach on a novel dataset of local government meetings and find that
PUBLICSPEAK improves over state-of-the-art by 10% on average, and by up to 40%.
|
2503.11774 | Zhixuan Lian | Zhixuan Lian, Shangyu Li, Qixuan Huang, Zijian Huang, Haifei Liu,
Jianan Qiu, Puyu Yang, Laifa Tao | UBMF: Uncertainty-Aware Bayesian Meta-Learning Framework for Fault
Diagnosis with Imbalanced Industrial Data | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fault diagnosis of mechanical equipment involves data collection, feature
extraction, and pattern recognition but is often hindered by the imbalanced
nature of industrial data, introducing significant uncertainty and reducing
diagnostic reliability. To address these challenges, this study proposes the
Uncertainty-Aware Bayesian Meta-Learning Framework (UBMF), which integrates
four key modules: data perturbation injection for enhancing feature robustness,
cross-task self-supervised feature extraction for improving transferability,
uncertainty-based sample filtering for robust out-of-domain generalization, and
Bayesian meta-knowledge integration for fine-grained classification.
Experimental results on ten open-source datasets under various imbalanced
conditions, including cross-task, small-sample, and unseen-sample scenarios,
demonstrate the superiority of UBMF, achieving an average improvement of 42.22%
across ten Any-way 1-5-shot diagnostic tasks. This integrated framework
effectively enhances diagnostic accuracy, generalization, and adaptability,
providing a reliable solution for complex industrial fault diagnosis.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 18:05:18 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lian",
"Zhixuan",
""
],
[
"Li",
"Shangyu",
""
],
[
"Huang",
"Qixuan",
""
],
[
"Huang",
"Zijian",
""
],
[
"Liu",
"Haifei",
""
],
[
"Qiu",
"Jianan",
""
],
[
"Yang",
"Puyu",
""
],
[
"Tao",
"Laifa",
""
]
] | TITLE: UBMF: Uncertainty-Aware Bayesian Meta-Learning Framework for Fault
Diagnosis with Imbalanced Industrial Data
ABSTRACT: Fault diagnosis of mechanical equipment involves data collection, feature
extraction, and pattern recognition but is often hindered by the imbalanced
nature of industrial data, introducing significant uncertainty and reducing
diagnostic reliability. To address these challenges, this study proposes the
Uncertainty-Aware Bayesian Meta-Learning Framework (UBMF), which integrates
four key modules: data perturbation injection for enhancing feature robustness,
cross-task self-supervised feature extraction for improving transferability,
uncertainty-based sample filtering for robust out-of-domain generalization, and
Bayesian meta-knowledge integration for fine-grained classification.
Experimental results on ten open-source datasets under various imbalanced
conditions, including cross-task, small-sample, and unseen-sample scenarios,
demonstrate the superiority of UBMF, achieving an average improvement of 42.22%
across ten Any-way 1-5-shot diagnostic tasks. This integrated framework
effectively enhances diagnostic accuracy, generalization, and adaptability,
providing a reliable solution for complex industrial fault diagnosis.
|
2503.11780 | Tanyi Zhao | Tianyi Zhao, Boyang Liu, Yanglei Gao, Yiming Sun, Maoxun Yuan,
Xingxing Wei | Rethinking Multi-modal Object Detection from the Perspective of
Mono-Modality Feature Learning | 10 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-Modal Object Detection (MMOD), due to its stronger adaptability to
various complex environments, has been widely applied in many applications.
Extensive research is dedicated to RGB-IR object detection, primarily
focusing on how to integrate complementary features from RGB-IR modalities.
However, they neglect the insufficient mono-modality learning problem, i.e., the
decreased feature extraction capability of each modality in multi-modal joint learning. This
leads to an unreasonable but prevalent phenomenon--Fusion Degradation, which
hinders the performance improvement of the MMOD model. Motivated by this, in
this paper, we introduce linear probing evaluation to the multi-modal detectors
and rethink the multi-modal object detection task from the mono-modality
learning perspective. Therefore, we construct a novel framework called
M$^2$D-LIF, which consists of the Mono-Modality Distillation (M$^2$D) method
and the Local Illumination-aware Fusion (LIF) module. The M$^2$D-LIF framework
facilitates the sufficient learning of mono-modality during multi-modal joint
training and explores a lightweight yet effective feature fusion manner to
achieve superior object detection performance. Extensive experiments conducted
on three MMOD datasets demonstrate that our M$^2$D-LIF effectively mitigates
the Fusion Degradation phenomenon and outperforms the previous SOTA detectors.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 18:15:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhao",
"Tianyi",
""
],
[
"Liu",
"Boyang",
""
],
[
"Gao",
"Yanglei",
""
],
[
"Sun",
"Yiming",
""
],
[
"Yuan",
"Maoxun",
""
],
[
"Wei",
"Xingxing",
""
]
] | TITLE: Rethinking Multi-modal Object Detection from the Perspective of
Mono-Modality Feature Learning
ABSTRACT: Multi-Modal Object Detection (MMOD), due to its stronger adaptability to
various complex environments, has been widely applied in many applications.
Extensive research is dedicated to RGB-IR object detection, primarily
focusing on how to integrate complementary features from RGB-IR modalities.
However, they neglect the insufficient mono-modality learning problem, i.e., the
decreased feature extraction capability of each modality in multi-modal joint learning. This
leads to an unreasonable but prevalent phenomenon--Fusion Degradation, which
hinders the performance improvement of the MMOD model. Motivated by this, in
this paper, we introduce linear probing evaluation to the multi-modal detectors
and rethink the multi-modal object detection task from the mono-modality
learning perspective. Therefore, we construct a novel framework called
M$^2$D-LIF, which consists of the Mono-Modality Distillation (M$^2$D) method
and the Local Illumination-aware Fusion (LIF) module. The M$^2$D-LIF framework
facilitates the sufficient learning of mono-modality during multi-modal joint
training and explores a lightweight yet effective feature fusion manner to
achieve superior object detection performance. Extensive experiments conducted
on three MMOD datasets demonstrate that our M$^2$D-LIF effectively mitigates
the Fusion Degradation phenomenon and outperforms the previous SOTA detectors.
|
2503.11781 | Georgy Perevozchikov | Artem Nikonorov, Georgy Perevozchikov, Andrei Korepanov, Nancy Mehta,
Mahmoud Afifi, Egor Ershov, and Radu Timofte | Color Matching Using Hypernetwork-Based Kolmogorov-Arnold Networks | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present cmKAN, a versatile framework for color matching. Given an input
image with colors from a source color distribution, our method effectively and
accurately maps these colors to match a target color distribution in both
supervised and unsupervised settings. Our framework leverages the spline
capabilities of Kolmogorov-Arnold Networks (KANs) to model the color matching
between source and target distributions. Specifically, we developed a
hypernetwork that generates spatially varying weight maps to control the
nonlinear splines of a KAN, enabling accurate color matching. As part of this
work, we introduce a first large-scale dataset of paired images captured by two
distinct cameras and evaluate the efficacy of our and existing methods in
matching colors. We evaluated our approach across various color-matching tasks,
including: (1) raw-to-raw mapping, where the source color distribution is in
one camera's raw color space and the target in another camera's raw space; (2)
raw-to-sRGB mapping, where the source color distribution is in a camera's raw
space and the target is in the display sRGB space, emulating the color
rendering of a camera ISP; and (3) sRGB-to-sRGB mapping, where the goal is to
transfer colors from a source sRGB space (e.g., produced by a source camera
ISP) to a target sRGB space (e.g., from a different camera ISP). The results
show that our method outperforms existing approaches by 37.3% on average for
supervised and unsupervised cases while remaining lightweight compared to other
methods. The codes, dataset, and pre-trained models are available at:
https://github.com/gosha20777/cmKAN
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 18:17:19 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Nikonorov",
"Artem",
""
],
[
"Perevozchikov",
"Georgy",
""
],
[
"Korepanov",
"Andrei",
""
],
[
"Mehta",
"Nancy",
""
],
[
"Afifi",
"Mahmoud",
""
],
[
"Ershov",
"Egor",
""
],
[
"Timofte",
"Radu",
""
]
] | TITLE: Color Matching Using Hypernetwork-Based Kolmogorov-Arnold Networks
ABSTRACT: We present cmKAN, a versatile framework for color matching. Given an input
image with colors from a source color distribution, our method effectively and
accurately maps these colors to match a target color distribution in both
supervised and unsupervised settings. Our framework leverages the spline
capabilities of Kolmogorov-Arnold Networks (KANs) to model the color matching
between source and target distributions. Specifically, we developed a
hypernetwork that generates spatially varying weight maps to control the
nonlinear splines of a KAN, enabling accurate color matching. As part of this
work, we introduce a first large-scale dataset of paired images captured by two
distinct cameras and evaluate the efficacy of our and existing methods in
matching colors. We evaluated our approach across various color-matching tasks,
including: (1) raw-to-raw mapping, where the source color distribution is in
one camera's raw color space and the target in another camera's raw space; (2)
raw-to-sRGB mapping, where the source color distribution is in a camera's raw
space and the target is in the display sRGB space, emulating the color
rendering of a camera ISP; and (3) sRGB-to-sRGB mapping, where the goal is to
transfer colors from a source sRGB space (e.g., produced by a source camera
ISP) to a target sRGB space (e.g., from a different camera ISP). The results
show that our method outperforms existing approaches by 37.3% on average for
supervised and unsupervised cases while remaining lightweight compared to other
methods. The codes, dataset, and pre-trained models are available at:
https://github.com/gosha20777/cmKAN
|
2503.11792 | Peizhi Yan | Peizhi Yan, Rabab K. Ward, Dan Wang, Qiang Tang, Shan Du | StyleMorpheus: A Style-Based 3D-Aware Morphable Face Model | 13 pages, work was completed in 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | For 3D face modeling, the recently developed 3D-aware neural rendering
methods are able to render photorealistic face images with arbitrary viewing
directions. The training of the parametric controllable 3D-aware face models,
however, still relies on a large-scale dataset that is lab-collected. To
address this issue, this paper introduces "StyleMorpheus", the first
style-based neural 3D Morphable Face Model (3DMM) that is trained on
in-the-wild images. It inherits 3DMM's disentangled controllability (over face
identity, expression, and appearance) but without the need for accurately
reconstructed explicit 3D shapes. StyleMorpheus employs an auto-encoder
structure. The encoder aims at learning a representative disentangled
parametric code space and the decoder improves the disentanglement using shape
and appearance-related style codes in the different sub-modules of the network.
Furthermore, we fine-tune the decoder through style-based generative
adversarial learning to achieve photorealistic 3D rendering quality. The
proposed style-based design enables StyleMorpheus to achieve state-of-the-art
3D-aware face reconstruction results, while also allowing disentangled control
of the reconstructed face. Our model achieves real-time rendering speed,
allowing its use in virtual reality applications. We also demonstrate the
capability of the proposed style-based design in face editing applications such
as style mixing and color editing. Project homepage:
https://github.com/ubc-3d-vision-lab/StyleMorpheus.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 18:32:02 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yan",
"Peizhi",
""
],
[
"Ward",
"Rabab K.",
""
],
[
"Wang",
"Dan",
""
],
[
"Tang",
"Qiang",
""
],
[
"Du",
"Shan",
""
]
] | TITLE: StyleMorpheus: A Style-Based 3D-Aware Morphable Face Model
ABSTRACT: For 3D face modeling, the recently developed 3D-aware neural rendering
methods are able to render photorealistic face images with arbitrary viewing
directions. The training of the parametric controllable 3D-aware face models,
however, still relies on a large-scale dataset that is lab-collected. To
address this issue, this paper introduces "StyleMorpheus", the first
style-based neural 3D Morphable Face Model (3DMM) that is trained on
in-the-wild images. It inherits 3DMM's disentangled controllability (over face
identity, expression, and appearance) but without the need for accurately
reconstructed explicit 3D shapes. StyleMorpheus employs an auto-encoder
structure. The encoder aims at learning a representative disentangled
parametric code space and the decoder improves the disentanglement using shape
and appearance-related style codes in the different sub-modules of the network.
Furthermore, we fine-tune the decoder through style-based generative
adversarial learning to achieve photorealistic 3D rendering quality. The
proposed style-based design enables StyleMorpheus to achieve state-of-the-art
3D-aware face reconstruction results, while also allowing disentangled control
of the reconstructed face. Our model achieves real-time rendering speed,
allowing its use in virtual reality applications. We also demonstrate the
capability of the proposed style-based design in face editing applications such
as style mixing and color editing. Project homepage:
https://github.com/ubc-3d-vision-lab/StyleMorpheus.
|
2503.11828 | Israat Haque | Chengyan Jiang and Jiamin Fan and Talal Halabi and Israat Haque | Performance Analysis of Decentralized Federated Learning Deployments | null | null | null | null | cs.LG cs.DC cs.NI | http://creativecommons.org/licenses/by/4.0/ | The widespread adoption of smartphones and smart wearable devices has led to
the widespread use of Centralized Federated Learning (CFL) for training
powerful machine learning models while preserving data privacy. However, CFL
faces limitations due to its overreliance on a central server, which impacts
latency and system robustness. Decentralized Federated Learning (DFL) is
introduced to address these challenges. It facilitates direct collaboration
among participating devices without relying on a central server. Each device
can independently connect with other devices and share model parameters. This
work explores crucial factors influencing the convergence and generalization
capacity of DFL models, emphasizing network topologies, non-IID data
distribution, and training strategies. We first derive the convergence rate of
different DFL model deployment strategies. Then, we comprehensively analyze
various network topologies (e.g., linear, ring, star, and mesh) with different
degrees of non-IID data and evaluate them over widely adopted machine learning
models (e.g., classical, deep neural networks, and Large Language Models) and
real-world datasets. The results reveal that models converge to the optimal one
for IID data. However, the convergence rate is inversely proportional to the
degree of non-IID data distribution. Our findings will serve as valuable
guidelines for designing effective DFL model deployments in practical
applications.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 19:37:13 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jiang",
"Chengyan",
""
],
[
"Fan",
"Jiamin",
""
],
[
"Halabi",
"Talal",
""
],
[
"Haque",
"Israat",
""
]
] | TITLE: Performance Analysis of Decentralized Federated Learning Deployments
ABSTRACT: The widespread adoption of smartphones and smart wearable devices has led to
the widespread use of Centralized Federated Learning (CFL) for training
powerful machine learning models while preserving data privacy. However, CFL
faces limitations due to its overreliance on a central server, which impacts
latency and system robustness. Decentralized Federated Learning (DFL) is
introduced to address these challenges. It facilitates direct collaboration
among participating devices without relying on a central server. Each device
can independently connect with other devices and share model parameters. This
work explores crucial factors influencing the convergence and generalization
capacity of DFL models, emphasizing network topologies, non-IID data
distribution, and training strategies. We first derive the convergence rate of
different DFL model deployment strategies. Then, we comprehensively analyze
various network topologies (e.g., linear, ring, star, and mesh) with different
degrees of non-IID data and evaluate them over widely adopted machine learning
models (e.g., classical, deep neural networks, and Large Language Models) and
real-world datasets. The results reveal that models converge to the optimal one
for IID data. However, the convergence rate is inversely proportional to the
degree of non-IID data distribution. Our findings will serve as valuable
guidelines for designing effective DFL model deployments in practical
applications.
|
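To make the decentralized averaging setting in the abstract above concrete, here is a minimal Python sketch of gossip-style rounds over a ring topology, where each node takes a local gradient step and then averages its parameter vector with its immediate neighbors. This is an assumed toy illustration of DFL model averaging, not the authors' experimental code; the topology, mixing weights, and quadratic loss are placeholders.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring: each node averages itself and its two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def decentralized_round(params, grads, W, lr=0.1):
    """One DFL round: local SGD step on every node, followed by neighbor averaging (gossip)."""
    updated = params - lr * grads          # local update; rows are per-node parameter vectors
    return W @ updated                     # exchange and average with ring neighbors

if __name__ == "__main__":
    n_nodes, dim = 8, 4
    rng = np.random.default_rng(0)
    params = rng.normal(size=(n_nodes, dim))     # one parameter vector per node
    W = ring_mixing_matrix(n_nodes)
    for _ in range(100):
        grads = params                            # gradient of the toy loss ||theta||^2 / 2
        params = decentralized_round(params, grads, W)
    print(np.round(params, 4))                    # nodes drift toward a common optimum
```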
2503.11832 | Yiwei Chen | Yiwei Chen, Yuguang Yao, Yihua Zhang, Bingquan Shen, Gaowen Liu, Sijia
Liu | Safety Mirage: How Spurious Correlations Undermine VLM Safety
Fine-tuning | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent vision-language models (VLMs) have made remarkable strides in
generative modeling with multimodal inputs, particularly text and images.
However, their susceptibility to generating harmful content when exposed to
unsafe queries raises critical safety concerns. While current alignment
strategies primarily rely on supervised safety fine-tuning with curated
datasets, we identify a fundamental limitation we call the "safety mirage"
where supervised fine-tuning inadvertently reinforces spurious correlations
between superficial textual patterns and safety responses, rather than
fostering deep, intrinsic mitigation of harm. We show that these spurious
correlations leave fine-tuned VLMs vulnerable even to a simple one-word
modification-based attack, where substituting a single word in text queries
with a spurious correlation-inducing alternative can effectively bypass
safeguards. Additionally, these correlations contribute to over-prudence,
causing fine-tuned VLMs to refuse benign queries unnecessarily. To address this
issue, we show machine unlearning (MU) as a powerful alternative to supervised
safety fine-tuning as it avoids biased feature-label mappings and directly
removes harmful knowledge from VLMs while preserving their general
capabilities. Extensive evaluations across safety benchmarks show that under
one-word attacks, MU-based alignment reduces the attack success rate by up to
60.17% and cuts unnecessary rejections by over 84.20%. Codes are available at
https://github.com/OPTML-Group/VLM-Safety-MU. WARNING: There exist AI
generations that may be offensive in nature.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 19:52:08 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Yiwei",
""
],
[
"Yao",
"Yuguang",
""
],
[
"Zhang",
"Yihua",
""
],
[
"Shen",
"Bingquan",
""
],
[
"Liu",
"Gaowen",
""
],
[
"Liu",
"Sijia",
""
]
] | TITLE: Safety Mirage: How Spurious Correlations Undermine VLM Safety
Fine-tuning
ABSTRACT: Recent vision-language models (VLMs) have made remarkable strides in
generative modeling with multimodal inputs, particularly text and images.
However, their susceptibility to generating harmful content when exposed to
unsafe queries raises critical safety concerns. While current alignment
strategies primarily rely on supervised safety fine-tuning with curated
datasets, we identify a fundamental limitation we call the "safety mirage"
where supervised fine-tuning inadvertently reinforces spurious correlations
between superficial textual patterns and safety responses, rather than
fostering deep, intrinsic mitigation of harm. We show that these spurious
correlations leave fine-tuned VLMs vulnerable even to a simple one-word
modification-based attack, where substituting a single word in text queries
with a spurious correlation-inducing alternative can effectively bypass
safeguards. Additionally, these correlations contribute to over-prudence,
causing fine-tuned VLMs to refuse benign queries unnecessarily. To address this
issue, we show machine unlearning (MU) as a powerful alternative to supervised
safety fine-tuning as it avoids biased feature-label mappings and directly
removes harmful knowledge from VLMs while preserving their general
capabilities. Extensive evaluations across safety benchmarks show that under
one-word attacks, MU-based alignment reduces the attack success rate by up to
60.17% and cuts unnecessary rejections by over 84.20%. Codes are available at
https://github.com/OPTML-Group/VLM-Safety-MU. WARNING: There exist AI
generations that may be offensive in nature.
|
2503.11836 | Oscar Morris | Oscar Morris | Transfer Learning for Automated Feedback Generation on Small Datasets | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feedback is a very important part of the learning process. However, it is
challenging to make this feedback both timely and accurate when relying on
human markers. This is the challenge that Automated Feedback Generation
attempts to address. In this paper, a technique to train such a system on a
very small dataset with very long sequences is presented. Both of these
attributes make this a very challenging task; however, by using a three-stage
transfer learning pipeline, state-of-the-art results can be achieved with
qualitatively accurate but unhuman-sounding results. The use of both Automated
Essay Scoring and Automated Feedback Generation systems in the real world is
also discussed.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 19:57:54 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Morris",
"Oscar",
""
]
] | TITLE: Transfer Learning for Automated Feedback Generation on Small Datasets
ABSTRACT: Feedback is a very important part of the learning process. However, it is
challenging to make this feedback both timely and accurate when relying on
human markers. This is the challenge that Automated Feedback Generation
attempts to address. In this paper, a technique to train such a system on a
very small dataset with very long sequences is presented. Both of these
attributes make this a very challenging task; however, by using a three-stage
transfer learning pipeline, state-of-the-art results can be achieved with
qualitatively accurate but unhuman-sounding results. The use of both Automated
Essay Scoring and Automated Feedback Generation systems in the real world is
also discussed.
|
2503.11838 | Ximing Wen | Ximing Wen, Rezvaneh Rezapour | A Transformer and Prototype-based Interpretable Model for Contextual
Sarcasm Detection | 8 pages, 2 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Sarcasm detection, with its figurative nature, poses unique challenges for
affective systems designed to perform sentiment analysis. While these systems
typically perform well at identifying direct expressions of emotion, they
struggle with sarcasm's inherent contradiction between literal and intended
sentiment. Since transformer-based language models (LMs) are known for their
efficient ability to capture contextual meanings, we propose a method that
leverages LMs and prototype-based networks, enhanced by sentiment embeddings to
conduct interpretable sarcasm detection. Our approach is intrinsically
interpretable without extra post-hoc interpretability techniques. We test our
model on three public benchmark datasets and show that our model outperforms
the current state-of-the-art. At the same time, the prototypical layer enhances
the model's inherent interpretability by generating explanations through
similar examples at inference time. Furthermore, we demonstrate the
effectiveness of incongruity loss in the ablation study, which we construct
using sentiment prototypes.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 19:58:43 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wen",
"Ximing",
""
],
[
"Rezapour",
"Rezvaneh",
""
]
] | TITLE: A Transformer and Prototype-based Interpretable Model for Contextual
Sarcasm Detection
ABSTRACT: Sarcasm detection, with its figurative nature, poses unique challenges for
affective systems designed to perform sentiment analysis. While these systems
typically perform well at identifying direct expressions of emotion, they
struggle with sarcasm's inherent contradiction between literal and intended
sentiment. Since transformer-based language models (LMs) are known for their
efficient ability to capture contextual meanings, we propose a method that
leverages LMs and prototype-based networks, enhanced by sentiment embeddings to
conduct interpretable sarcasm detection. Our approach is intrinsically
interpretable without extra post-hoc interpretability techniques. We test our
model on three public benchmark datasets and show that our model outperforms
the current state-of-the-art. At the same time, the prototypical layer enhances
the model's inherent interpretability by generating explanations through
similar examples at inference time. Furthermore, we demonstrate the
effectiveness of incongruity loss in the ablation study, which we construct
using sentiment prototypes.
|
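The prototype-layer idea in the abstract above — explaining a prediction by its similarity to learned prototype examples — can be sketched in a few lines of Python. The embeddings, prototypes, and class assignments below are made-up placeholders for illustration; the actual model couples such a layer to a transformer encoder and sentiment embeddings.

```python
import numpy as np

def prototype_logits(z, prototypes, proto_class, n_classes):
    """Similarity of an encoded input z to each prototype, aggregated per class."""
    d2 = np.sum((prototypes - z) ** 2, axis=1)      # squared distances to prototypes
    sim = np.log((d2 + 1.0) / (d2 + 1e-4))          # ProtoPNet-style similarity score
    logits = np.zeros(n_classes)
    for s, c in zip(sim, proto_class):
        logits[c] += s
    return logits, sim

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prototypes = rng.normal(size=(6, 16))            # six learned prototype vectors
    proto_class = np.array([0, 0, 0, 1, 1, 1])       # three per class (non-sarcastic / sarcastic)
    z = prototypes[4] + 0.05 * rng.normal(size=16)   # an input close to a "sarcastic" prototype
    logits, sim = prototype_logits(z, prototypes, proto_class, n_classes=2)
    print("predicted class:", int(np.argmax(logits)))
    print("most similar prototype (the explanation):", int(np.argmax(sim)))
```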
2503.11841 | Tianwei Lan | Tianwei Lan, Luca Demetrio, Farid Nait-Abdesselam, Yufei Han, Simone
Aonzo | Trust Under Siege: Label Spoofing Attacks against Machine Learning for
Android Malware Detection | null | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning (ML) malware detectors rely heavily on crowd-sourced
AntiVirus (AV) labels, with platforms like VirusTotal serving as a trusted
source of malware annotations. But what if attackers could manipulate these
labels to classify benign software as malicious? We introduce label spoofing
attacks, a new threat that contaminates crowd-sourced datasets by embedding
minimal and undetectable malicious patterns into benign samples. These patterns
coerce AV engines into misclassifying legitimate files as harmful, enabling
poisoning attacks against ML-based malware classifiers trained on those data.
We demonstrate this scenario by developing AndroVenom, a methodology for
polluting realistic data sources, causing consequent poisoning attacks against
ML malware detectors. Experiments show that not only are state-of-the-art feature
extractors unable to filter such injections, but various ML models also
experience denial of service with as little as 1% poisoned samples. Additionally,
attackers can flip decisions on specific unaltered benign samples by modifying
only 0.015% of the training data, threatening their reputation and market share
while evading anomaly detectors applied to the training data. We
conclude our manuscript by raising the alarm on the trustworthiness of the
training process based on AV annotations, requiring further investigation on
how to produce proper labels for ML malware detectors.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 20:05:56 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lan",
"Tianwei",
""
],
[
"Demetrio",
"Luca",
""
],
[
"Nait-Abdesselam",
"Farid",
""
],
[
"Han",
"Yufei",
""
],
[
"Aonzo",
"Simone",
""
]
] | TITLE: Trust Under Siege: Label Spoofing Attacks against Machine Learning for
Android Malware Detection
ABSTRACT: Machine learning (ML) malware detectors rely heavily on crowd-sourced
AntiVirus (AV) labels, with platforms like VirusTotal serving as a trusted
source of malware annotations. But what if attackers could manipulate these
labels to classify benign software as malicious? We introduce label spoofing
attacks, a new threat that contaminates crowd-sourced datasets by embedding
minimal and undetectable malicious patterns into benign samples. These patterns
coerce AV engines into misclassifying legitimate files as harmful, enabling
poisoning attacks against ML-based malware classifiers trained on those data.
We demonstrate this scenario by developing AndroVenom, a methodology for
polluting realistic data sources, causing consequent poisoning attacks against
ML malware detectors. Experiments show that not only are state-of-the-art feature
extractors unable to filter such injections, but various ML models also
experience denial of service with as little as 1% poisoned samples. Additionally,
attackers can flip decisions on specific unaltered benign samples by modifying
only 0.015% of the training data, threatening their reputation and market share
while evading anomaly detectors applied to the training data. We
conclude our manuscript by raising the alarm on the trustworthiness of the
training process based on AV annotations, requiring further investigation on
how to produce proper labels for ML malware detectors.
|
2503.11855 | Xiaobin Zhang | Jingzong Zhou, Yuhan Zhu, Xiaobin Zhang, Sunil Agrawal, and
Konstantinos Karydis | Learning-based Estimation of Forward Kinematics for an Orthotic Parallel
Robotic Mechanism | null | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a 3D parallel robot with three identical
five-degree-of-freedom chains connected to a circular brace end-effector, aimed
to serve as an assistive device for patients with cervical spondylosis. The
inverse kinematics of the system is solved analytically, whereas learning-based
methods are deployed to solve the forward kinematics. The methods considered
herein include a Koopman operator-based approach as well as a neural
network-based approach. The task is to predict the position and orientation of
end-effector trajectories. The dataset used to train these methods is based on
the analytical solutions derived via inverse kinematics. The methods are tested
both in simulation and via physical hardware experiments with the developed
robot. Results validate the suitability of deploying learning-based methods for
studying parallel mechanism forward kinematics that are generally hard to
resolve analytically.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 20:37:06 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhou",
"Jingzong",
""
],
[
"Zhu",
"Yuhan",
""
],
[
"Zhang",
"Xiaobin",
""
],
[
"Agrawal",
"Sunil",
""
],
[
"Karydis",
"Konstantinos",
""
]
] | TITLE: Learning-based Estimation of Forward Kinematics for an Orthotic Parallel
Robotic Mechanism
ABSTRACT: This paper introduces a 3D parallel robot with three identical
five-degree-of-freedom chains connected to a circular brace end-effector, aimed
to serve as an assistive device for patients with cervical spondylosis. The
inverse kinematics of the system is solved analytically, whereas learning-based
methods are deployed to solve the forward kinematics. The methods considered
herein include a Koopman operator-based approach as well as a neural
network-based approach. The task is to predict the position and orientation of
end-effector trajectories. The dataset used to train these methods is based on
the analytical solutions derived via inverse kinematics. The methods are tested
both in simulation and via physical hardware experiments with the developed
robot. Results validate the suitability of deploying learning-based methods for
studying parallel mechanism forward kinematics that are generally hard to
resolve analytically.
|
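The workflow in the abstract above — generate training pairs from analytical kinematics, then regress the forward map with a learned model — can be mimicked with a toy planar two-link arm, which has closed-form kinematics. The arm geometry and the scikit-learn regressor below are assumptions for illustration only, not the parallel orthotic mechanism or the Koopman/neural models used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def forward_kinematics(q, l1=1.0, l2=0.7):
    """Analytical FK of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])
    y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q_train = rng.uniform(-np.pi / 2, np.pi / 2, size=(5000, 2))   # sampled configurations
    p_train = forward_kinematics(q_train)                          # analytically generated labels
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
    model.fit(q_train, p_train)                                    # learn the forward map

    q_test = rng.uniform(-np.pi / 2, np.pi / 2, size=(200, 2))
    err = np.linalg.norm(model.predict(q_test) - forward_kinematics(q_test), axis=1)
    print("mean position error:", err.mean())
```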
2503.11893 | Md Jahidul Islam Dr | Md Abu Bakr Siddique, Junliang Liu, Piyush Singh, and Md Jahidul Islam | UStyle: Waterbody Style Transfer of Underwater Scenes by Depth-Guided
Feature Synthesis | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | The concept of waterbody style transfer remains largely unexplored in the
underwater imaging and vision literature. Traditional image style transfer
(STx) methods primarily focus on artistic and photorealistic blending, often
failing to preserve object and scene geometry in images captured in
high-scattering mediums such as underwater. The wavelength-dependent nonlinear
attenuation and depth-dependent backscattering artifacts further complicate
learning underwater image STx from unpaired data. This paper introduces UStyle,
the first data-driven learning framework for transferring waterbody styles
across underwater images without requiring prior reference images or scene
information. We propose a novel depth-aware whitening and coloring transform
(DA-WCT) mechanism that integrates physics-based waterbody synthesis to ensure
perceptually consistent stylization while preserving scene structure. To
enhance style transfer quality, we incorporate carefully designed loss
functions that guide UStyle to maintain colorfulness, lightness, structural
integrity, and frequency-domain characteristics, as well as high-level content
in VGG and CLIP (contrastive language-image pretraining) feature spaces. By
addressing domain-specific challenges, UStyle provides a robust framework for
no-reference underwater image STx, surpassing state-of-the-art (SOTA) methods
that rely solely on end-to-end reconstruction loss. Furthermore, we introduce
the UF7D dataset, a curated collection of high-resolution underwater images
spanning seven distinct waterbody styles, establishing a benchmark to support
future research in underwater image STx. The UStyle inference pipeline and UF7D
dataset are released at: https://github.com/uf-robopi/UStyle.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 21:49:40 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Siddique",
"Md Abu Bakr",
""
],
[
"Liu",
"Junliang",
""
],
[
"Singh",
"Piyush",
""
],
[
"Islam",
"Md Jahidul",
""
]
] | TITLE: UStyle: Waterbody Style Transfer of Underwater Scenes by Depth-Guided
Feature Synthesis
ABSTRACT: The concept of waterbody style transfer remains largely unexplored in the
underwater imaging and vision literature. Traditional image style transfer
(STx) methods primarily focus on artistic and photorealistic blending, often
failing to preserve object and scene geometry in images captured in
high-scattering mediums such as underwater. The wavelength-dependent nonlinear
attenuation and depth-dependent backscattering artifacts further complicate
learning underwater image STx from unpaired data. This paper introduces UStyle,
the first data-driven learning framework for transferring waterbody styles
across underwater images without requiring prior reference images or scene
information. We propose a novel depth-aware whitening and coloring transform
(DA-WCT) mechanism that integrates physics-based waterbody synthesis to ensure
perceptually consistent stylization while preserving scene structure. To
enhance style transfer quality, we incorporate carefully designed loss
functions that guide UStyle to maintain colorfulness, lightness, structural
integrity, and frequency-domain characteristics, as well as high-level content
in VGG and CLIP (contrastive language-image pretraining) feature spaces. By
addressing domain-specific challenges, UStyle provides a robust framework for
no-reference underwater image STx, surpassing state-of-the-art (SOTA) methods
that rely solely on end-to-end reconstruction loss. Furthermore, we introduce
the UF7D dataset, a curated collection of high-resolution underwater images
spanning seven distinct waterbody styles, establishing a benchmark to support
future research in underwater image STx. The UStyle inference pipeline and UF7D
dataset are released at: https://github.com/uf-robopi/UStyle.
|
2503.11895 | Bhiman Kumar Baghel | Bhiman Kumar Baghel, Scott M. Jordan, Zheyuan Ryan Shi, Xiang Lorraine
Li | Resolving UnderEdit & OverEdit with Iterative & Neighbor-Assisted Model
Editing | Under Review @ ACL'25 | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are used in various downstream language tasks,
making it crucial to keep their knowledge up-to-date, but both retraining and
fine-tuning the model can be costly. Model editing offers an efficient and
effective alternative by a single update to only a key subset of model
parameters. While being efficient, these methods are not perfect. Sometimes
knowledge edits are unsuccessful, i.e., UnderEdit, or the edit contaminates
neighboring knowledge that should remain unchanged, i.e., OverEdit. To address
these limitations, we propose iterative model editing, based on our hypothesis
that a single parameter update is often insufficient, to mitigate UnderEdit,
and neighbor-assisted model editing, which incorporates neighboring knowledge
during editing to minimize OverEdit. Extensive experiments demonstrate that our
methods effectively reduce UnderEdit up to 38 percentage points and OverEdit up
to 6 percentage points across multiple model editing algorithms, LLMs, and
benchmark datasets.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 21:53:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Baghel",
"Bhiman Kumar",
""
],
[
"Jordan",
"Scott M.",
""
],
[
"Shi",
"Zheyuan Ryan",
""
],
[
"Li",
"Xiang Lorraine",
""
]
] | TITLE: Resolving UnderEdit & OverEdit with Iterative & Neighbor-Assisted Model
Editing
ABSTRACT: Large Language Models (LLMs) are used in various downstream language tasks,
making it crucial to keep their knowledge up-to-date, but both retraining and
fine-tuning the model can be costly. Model editing offers an efficient and
effective alternative by a single update to only a key subset of model
parameters. While being efficient, these methods are not perfect. Sometimes
knowledge edits are unsuccessful, i.e., UnderEdit, or the edit contaminates
neighboring knowledge that should remain unchanged, i.e., OverEdit. To address
these limitations, we propose iterative model editing, based on our hypothesis
that a single parameter update is often insufficient, to mitigate UnderEdit,
and neighbor-assisted model editing, which incorporates neighboring knowledge
during editing to minimize OverEdit. Extensive experiments demonstrate that our
methods effectively reduce UnderEdit up to 38 percentage points and OverEdit up
to 6 percentage points across multiple model editing algorithms, LLMs, and
benchmark datasets.
|
2503.11899 | Zhe Bai | Da Long, Shandian Zhe, Samuel Williams, Leonid Oliker, Zhe Bai | Spatio-temporal Fourier Transformer (StFT) for Long-term Dynamics
Prediction | 16 pages, 10 figures | null | null | null | cs.LG eess.SP | http://creativecommons.org/licenses/by/4.0/ | Simulating the long-term dynamics of multi-scale and multi-physics systems
poses a significant challenge in understanding complex phenomena across science
and engineering. The complexity arises from the intricate interactions between
scales and the interplay of diverse physical processes. Neural operators have
emerged as promising models for predicting such dynamics due to their
flexibility and computational efficiency. However, they often fail to
effectively capture multi-scale interactions or quantify the uncertainties
inherent in the predictions. These limitations lead to rapid error
accumulation, particularly in long-term forecasting of systems characterized by
complex and coupled dynamics. To address these challenges, we propose a
spatio-temporal Fourier transformer (StFT), in which each transformer block is
designed to learn dynamics at a specific scale. By leveraging a structured
hierarchy of StFT blocks, the model explicitly captures dynamics across both
macro- and micro- spatial scales. Furthermore, a generative residual correction
mechanism is integrated to estimate and mitigate predictive uncertainties,
enhancing both the accuracy and reliability of long-term forecasts. Evaluations
conducted on three benchmark datasets (plasma, fluid, and atmospheric dynamics)
demonstrate the advantages of our approach over state-of-the-art ML methods.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 22:04:03 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Long",
"Da",
""
],
[
"Zhe",
"Shandian",
""
],
[
"Williams",
"Samuel",
""
],
[
"Oliker",
"Leonid",
""
],
[
"Bai",
"Zhe",
""
]
] | TITLE: Spatio-temporal Fourier Transformer (StFT) for Long-term Dynamics
Prediction
ABSTRACT: Simulating the long-term dynamics of multi-scale and multi-physics systems
poses a significant challenge in understanding complex phenomena across science
and engineering. The complexity arises from the intricate interactions between
scales and the interplay of diverse physical processes. Neural operators have
emerged as promising models for predicting such dynamics due to their
flexibility and computational efficiency. However, they often fail to
effectively capture multi-scale interactions or quantify the uncertainties
inherent in the predictions. These limitations lead to rapid error
accumulation, particularly in long-term forecasting of systems characterized by
complex and coupled dynamics. To address these challenges, we propose a
spatio-temporal Fourier transformer (StFT), in which each transformer block is
designed to learn dynamics at a specific scale. By leveraging a structured
hierarchy of StFT blocks, the model explicitly captures dynamics across both
macro- and micro- spatial scales. Furthermore, a generative residual correction
mechanism is integrated to estimate and mitigate predictive uncertainties,
enhancing both the accuracy and reliability of long-term forecasts. Evaluations
conducted on three benchmark datasets (plasma, fluid, and atmospheric dynamics)
demonstrate the advantages of our approach over state-of-the-art ML methods.
|
2503.11900 | Lauren Harrell | Lauren Harrell, Christine Kaeser-Chen, Burcu Karagol Ayan, Keith
Anderson, Michelangelo Conserva, Elise Kleeman, Maxim Neumann, Matt Overlan,
Melissa Chapman, Drew Purves | Heterogeneous graph neural networks for species distribution modeling | 11 pages, 3 figures | null | null | null | cs.LG q-bio.PE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Species distribution models (SDMs) are necessary for measuring and predicting
occurrences and habitat suitability of species and their relationship with
environmental factors. We introduce a novel presence-only SDM with graph neural
networks (GNN). In our model, species and locations are treated as two distinct
node sets, and the learning task is predicting detection records as the edges
that connect locations to species. Using GNN for SDM allows us to model
fine-grained interactions between species and the environment. We evaluate the
potential of this methodology on the six-region dataset compiled by the National
Center for Ecological Analysis and Synthesis (NCEAS) for benchmarking SDMs. For
each of the regions, the heterogeneous GNN model is comparable to or
outperforms previously-benchmarked single-species SDMs as well as a
feed-forward neural network baseline model.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 22:08:30 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Harrell",
"Lauren",
""
],
[
"Kaeser-Chen",
"Christine",
""
],
[
"Ayan",
"Burcu Karagol",
""
],
[
"Anderson",
"Keith",
""
],
[
"Conserva",
"Michelangelo",
""
],
[
"Kleeman",
"Elise",
""
],
[
"Neumann",
"Maxim",
""
],
[
"Overlan",
"Matt",
""
],
[
"Chapman",
"Melissa",
""
],
[
"Purves",
"Drew",
""
]
] | TITLE: Heterogeneous graph neural networks for species distribution modeling
ABSTRACT: Species distribution models (SDMs) are necessary for measuring and predicting
occurrences and habitat suitability of species and their relationship with
environmental factors. We introduce a novel presence-only SDM with graph neural
networks (GNN). In our model, species and locations are treated as two distinct
node sets, and the learning task is predicting detection records as the edges
that connect locations to species. Using GNN for SDM allows us to model
fine-grained interactions between species and the environment. We evaluate the
potential of this methodology on the six-region dataset compiled by the National
Center for Ecological Analysis and Synthesis (NCEAS) for benchmarking SDMs. For
each of the regions, the heterogeneous GNN model is comparable to or
outperforms previously-benchmarked single-species SDMs as well as a
feed-forward neural network baseline model.
|
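A bare-bones version of the edge-prediction formulation in the abstract above — species and locations as two node sets, detections as edges scored from node embeddings — is sketched below in Python. The embedding dimension, the random "detection" matrix, and the plain gradient loop are illustrative assumptions; the actual model uses heterogeneous GNN message passing over environmental features rather than a simple embedding factorization.

```python
import numpy as np

def train_bipartite_embeddings(A, dim=8, lr=0.05, steps=500, seed=0):
    """Factorize a species x location detection matrix A with logistic edge scores."""
    rng = np.random.default_rng(seed)
    n_species, n_locations = A.shape
    S = 0.1 * rng.normal(size=(n_species, dim))     # species embeddings
    L = 0.1 * rng.normal(size=(n_locations, dim))   # location embeddings
    for _ in range(steps):
        P = 1.0 / (1.0 + np.exp(-S @ L.T))          # predicted detection probabilities
        G = P - A                                   # gradient of binary cross-entropy w.r.t. logits
        S, L = S - lr * (G @ L), L - lr * (G.T @ S)
    return S, L

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = (rng.random((20, 30)) < 0.2).astype(float)  # toy presence-only detection records
    S, L = train_bipartite_embeddings(A)
    scores = 1.0 / (1.0 + np.exp(-S @ L.T))
    print("mean score on detected edges:", scores[A == 1].mean())
    print("mean score on non-detections:", scores[A == 0].mean())
```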
2503.11906 | Ch Muhammad Awais | Ch Muhammad Awais and Marco Reggiannini and Davide Moroni and Emanuele
Salerno | A Survey on SAR ship classification using Deep Learning | Submitted to JSTARS journal | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep learning (DL) has emerged as a powerful tool for Synthetic Aperture
Radar (SAR) ship classification. This survey comprehensively analyzes the
diverse DL techniques employed in this domain. We identify critical trends and
challenges, highlighting the importance of integrating handcrafted features,
utilizing public datasets, data augmentation, fine-tuning, explainability
techniques, and fostering interdisciplinary collaborations to improve DL model
performance. This survey establishes a first-of-its-kind taxonomy for
categorizing relevant research based on DL models, handcrafted feature use, SAR
attribute utilization, and the impact of fine-tuning. We discuss the
methodologies used in SAR ship classification tasks and the impact of different
techniques. Finally, the survey explores potential avenues for future research,
including addressing data scarcity, exploring novel DL architectures,
incorporating interpretability techniques, and establishing standardized
performance metrics. By addressing these challenges and leveraging advancements
in DL, researchers can contribute to developing more accurate and efficient
ship classification systems, ultimately enhancing maritime surveillance and
related applications.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 22:19:24 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Awais",
"Ch Muhammad",
""
],
[
"Reggiannini",
"Marco",
""
],
[
"Moroni",
"Davide",
""
],
[
"Salerno",
"Emanuele",
""
]
] | TITLE: A Survey on SAR ship classification using Deep Learning
ABSTRACT: Deep learning (DL) has emerged as a powerful tool for Synthetic Aperture
Radar (SAR) ship classification. This survey comprehensively analyzes the
diverse DL techniques employed in this domain. We identify critical trends and
challenges, highlighting the importance of integrating handcrafted features,
utilizing public datasets, data augmentation, fine-tuning, explainability
techniques, and fostering interdisciplinary collaborations to improve DL model
performance. This survey establishes a first-of-its-kind taxonomy for
categorizing relevant research based on DL models, handcrafted feature use, SAR
attribute utilization, and the impact of fine-tuning. We discuss the
methodologies used in SAR ship classification tasks and the impact of different
techniques. Finally, the survey explores potential avenues for future research,
including addressing data scarcity, exploring novel DL architectures,
incorporating interpretability techniques, and establishing standardized
performance metrics. By addressing these challenges and leveraging advancements
in DL, researchers can contribute to developing more accurate and efficient
ship classification systems, ultimately enhancing maritime surveillance and
related applications.
|
2503.11910 | Eduard Tulchinskii | Eduard Tulchinskii, Daria Voronkova, Ilya Trofimov, Evgeny Burnaev,
Serguei Barannikov | RTD-Lite: Scalable Topological Analysis for Comparing Weighted Graphs in
Learning Tasks | Accepted for AISTATS 2025 | null | null | null | cs.LG cs.AI math.SG | http://creativecommons.org/licenses/by/4.0/ | Topological methods for comparing weighted graphs are valuable in various
learning tasks but often suffer from computational inefficiency on large
datasets. We introduce RTD-Lite, a scalable algorithm that efficiently compares
topological features, specifically connectivity or cluster structures at
arbitrary scales, of two weighted graphs with one-to-one correspondence between
vertices. Using minimal spanning trees in auxiliary graphs, RTD-Lite captures
topological discrepancies with $O(n^2)$ time and memory complexity. This
efficiency enables its application in tasks like dimensionality reduction and
neural network training. Experiments on synthetic and real-world datasets
demonstrate that RTD-Lite effectively identifies topological differences while
significantly reducing computation time compared to existing methods. Moreover,
integrating RTD-Lite into neural network training as a loss function component
enhances the preservation of topological structures in learned representations.
Our code is publicly available at https://github.com/ArGintum/RTD-Lite
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 22:42:13 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Tulchinskii",
"Eduard",
""
],
[
"Voronkova",
"Daria",
""
],
[
"Trofimov",
"Ilya",
""
],
[
"Burnaev",
"Evgeny",
""
],
[
"Barannikov",
"Serguei",
""
]
] | TITLE: RTD-Lite: Scalable Topological Analysis for Comparing Weighted Graphs in
Learning Tasks
ABSTRACT: Topological methods for comparing weighted graphs are valuable in various
learning tasks but often suffer from computational inefficiency on large
datasets. We introduce RTD-Lite, a scalable algorithm that efficiently compares
topological features, specifically connectivity or cluster structures at
arbitrary scales, of two weighted graphs with one-to-one correspondence between
vertices. Using minimal spanning trees in auxiliary graphs, RTD-Lite captures
topological discrepancies with $O(n^2)$ time and memory complexity. This
efficiency enables its application in tasks like dimensionality reduction and
neural network training. Experiments on synthetic and real-world datasets
demonstrate that RTD-Lite effectively identifies topological differences while
significantly reducing computation time compared to existing methods. Moreover,
integrating RTD-Lite into neural network training as a loss function component
enhances the preservation of topological structures in learned representations.
Our code is publicly available at https://github.com/ArGintum/RTD-Lite
|
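The record above (2503.11910) builds on minimum spanning trees of auxiliary graphs to compare connectivity structure at all scales. The SciPy sketch below illustrates only that basic ingredient (sorted MST edge weights as single-linkage merge scales); it is not the released RTD-Lite algorithm, which is available at the repository linked in the abstract.

# Crude illustration of comparing the connectivity structure of two weighted
# graphs (same vertex set) via minimum spanning trees, the ingredient RTD-Lite
# builds on. NOT the released RTD-Lite algorithm.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edge_weights(dist_matrix):
    # MST of a dense symmetric distance matrix; the sorted edge weights are the
    # scales at which clusters merge (single-linkage merge scales).
    mst = minimum_spanning_tree(dist_matrix).toarray()
    return np.sort(mst[mst > 0])

def mst_discrepancy(dist_a, dist_b):
    # Compare the merge-scale profiles of the two graphs.
    wa, wb = mst_edge_weights(dist_a), mst_edge_weights(dist_b)
    return float(np.abs(wa - wb).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 8))
y = x + 0.05 * rng.normal(size=(100, 8))  # slightly perturbed copy of the point cloud
da = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
db = np.linalg.norm(y[:, None] - y[None, :], axis=-1)
print(mst_discrepancy(da, db))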
2503.11924 | Krishna Sayana | Kun Su, Krishna Sayana, Hubert Pham, James Pine, Yuri Vasilevski,
Raghavendra Vasudeva, Marialena Kyriakidi, Liam Hebert, Ambarish Jash,
Anushya Subbiah, Sukhdeep Sodhi | REGEN: A Dataset and Benchmarks with Natural Language Critiques and
Narratives | null | null | null | null | cs.CL cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a novel dataset REGEN (Reviews Enhanced with GEnerative
Narratives), designed to benchmark the conversational capabilities of
recommender Large Language Models (LLMs), addressing the limitations of
existing datasets that primarily focus on sequential item prediction. REGEN
extends the Amazon Product Reviews dataset by inpainting two key natural
language features: (1) user critiques, representing user "steering" queries
that lead to the selection of a subsequent item, and (2) narratives, rich
textual outputs associated with each recommended item taking into account prior
context. The narratives include product endorsements, purchase explanations,
and summaries of user preferences.
Further, we establish an end-to-end modeling benchmark for the task of
conversational recommendation, where models are trained to generate both
recommendations and corresponding narratives conditioned on user history (items
and critiques). For this joint task, we introduce a modeling framework LUMEN
(LLM-based Unified Multi-task Model with Critiques, Recommendations, and
Narratives) which uses an LLM as a backbone for critiquing, retrieval and
generation. We also evaluate the dataset's quality using standard auto-rating
techniques and benchmark it by training both traditional and LLM-based
recommender models. Our results demonstrate that incorporating critiques
enhances recommendation quality by enabling the recommender to learn language
understanding and integrate it with recommendation signals. Furthermore, LLMs
trained on our dataset effectively generate both recommendations and contextual
narratives, achieving performance comparable to state-of-the-art recommenders
and language models.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 23:47:46 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Su",
"Kun",
""
],
[
"Sayana",
"Krishna",
""
],
[
"Pham",
"Hubert",
""
],
[
"Pine",
"James",
""
],
[
"Vasilevski",
"Yuri",
""
],
[
"Vasudeva",
"Raghavendra",
""
],
[
"Kyriakidi",
"Marialena",
""
],
[
"Hebert",
"Liam",
""
],
[
"Jash",
"Ambarish",
""
],
[
"Subbiah",
"Anushya",
""
],
[
"Sodhi",
"Sukhdeep",
""
]
] | TITLE: REGEN: A Dataset and Benchmarks with Natural Language Critiques and
Narratives
ABSTRACT: This paper introduces a novel dataset REGEN (Reviews Enhanced with GEnerative
Narratives), designed to benchmark the conversational capabilities of
recommender Large Language Models (LLMs), addressing the limitations of
existing datasets that primarily focus on sequential item prediction. REGEN
extends the Amazon Product Reviews dataset by inpainting two key natural
language features: (1) user critiques, representing user "steering" queries
that lead to the selection of a subsequent item, and (2) narratives, rich
textual outputs associated with each recommended item taking into account prior
context. The narratives include product endorsements, purchase explanations,
and summaries of user preferences.
Further, we establish an end-to-end modeling benchmark for the task of
conversational recommendation, where models are trained to generate both
recommendations and corresponding narratives conditioned on user history (items
and critiques). For this joint task, we introduce a modeling framework LUMEN
(LLM-based Unified Multi-task Model with Critiques, Recommendations, and
Narratives) which uses an LLM as a backbone for critiquing, retrieval and
generation. We also evaluate the dataset's quality using standard auto-rating
techniques and benchmark it by training both traditional and LLM-based
recommender models. Our results demonstrate that incorporating critiques
enhances recommendation quality by enabling the recommender to learn language
understanding and integrate it with recommendation signals. Furthermore, LLMs
trained on our dataset effectively generate both recommendations and contextual
narratives, achieving performance comparable to state-of-the-art recommenders
and language models.
|
2503.11932 | Badri Vishal Kasuba | Dhruv Kudale, Badri Vishal Kasuba, Venkatapathy Subramanian, Parag
Chaudhuri, Ganesh Ramakrishnan | SPRINT: Script-agnostic Structure Recognition in Tables | Accepted at ICDAR 2024 | null | 10.1007/978-3-031-70549-6_21 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Table Structure Recognition (TSR) is vital for various downstream tasks like
information retrieval, table reconstruction, and document understanding. While
most state-of-the-art (SOTA) research predominantly focuses on TSR in English
documents, the need for similar capabilities in other languages is evident,
considering the global diversity of data. Moreover, creating substantial
labeled data in non-English languages and training these SOTA models from
scratch is costly and time-consuming. We propose treating TSR as a
language-agnostic cell arrangement prediction task and introduce SPRINT,
Script-agnostic Structure
Recognition in Tables. SPRINT uses recently introduced Optimized Table
Structure Language (OTSL) sequences to predict table structures. We show that
when coupled with a pre-trained table grid estimator, SPRINT can improve the
overall tree edit distance-based similarity structure scores of tables even for
non-English documents. We experimentally evaluate our performance across
benchmark TSR datasets including PubTabNet, FinTabNet, and PubTables-1M. Our
findings reveal that SPRINT not only matches SOTA models in performance on
standard datasets but also demonstrates lower latency. Additionally, SPRINT
excels in accurately identifying table structures in non-English documents,
surpassing current leading models by showing an absolute average increase of
11.12%. We also present an algorithm for converting valid OTSL predictions into
a widely used HTML-based table representation. To encourage further research,
we release our code and Multilingual Scanned and Scene Table Structure
Recognition Dataset, MUSTARD labeled with OTSL sequences for 1428 tables in
thirteen languages encompassing several scripts at
https://github.com/IITB-LEAP-OCR/SPRINT
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 00:43:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kudale",
"Dhruv",
""
],
[
"Kasuba",
"Badri Vishal",
""
],
[
"Subramanian",
"Venkatapathy",
""
],
[
"Chaudhuri",
"Parag",
""
],
[
"Ramakrishnan",
"Ganesh",
""
]
] | TITLE: SPRINT: Script-agnostic Structure Recognition in Tables
ABSTRACT: Table Structure Recognition (TSR) is vital for various downstream tasks like
information retrieval, table reconstruction, and document understanding. While
most state-of-the-art (SOTA) research predominantly focuses on TSR in English
documents, the need for similar capabilities in other languages is evident,
considering the global diversity of data. Moreover, creating substantial
labeled data in non-English languages and training these SOTA models from
scratch is costly and time-consuming. We propose treating TSR as a
language-agnostic cell arrangement prediction task and introduce SPRINT,
Script-agnostic Structure
Recognition in Tables. SPRINT uses recently introduced Optimized Table
Structure Language (OTSL) sequences to predict table structures. We show that
when coupled with a pre-trained table grid estimator, SPRINT can improve the
overall tree edit distance-based similarity structure scores of tables even for
non-English documents. We experimentally evaluate our performance across
benchmark TSR datasets including PubTabNet, FinTabNet, and PubTables-1M. Our
findings reveal that SPRINT not only matches SOTA models in performance on
standard datasets but also demonstrates lower latency. Additionally, SPRINT
excels in accurately identifying table structures in non-English documents,
surpassing current leading models by showing an absolute average increase of
11.12%. We also present an algorithm for converting valid OTSL predictions into
a widely used HTML-based table representation. To encourage further research,
we release our code and Multilingual Scanned and Scene Table Structure
Recognition Dataset, MUSTARD labeled with OTSL sequences for 1428 tables in
thirteen languages encompassing several scripts at
https://github.com/IITB-LEAP-OCR/SPRINT
|
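The record above (2503.11932) predicts table structure as OTSL token sequences and converts valid predictions into HTML. The toy converter below handles only a simplified subset of OTSL (a "C" data-cell token and an "NL" row break) and deliberately ignores the span tokens of the full language; it is not the conversion algorithm released with SPRINT.

# Toy OTSL-to-HTML conversion for a simplified subset of the Optimized Table
# Structure Language: only "C" (a data cell) and "NL" (end of row) are handled,
# and span tokens are rejected. NOT the algorithm released with SPRINT.
def otsl_subset_to_html(tokens, cell_texts):
    rows, current, texts = [], [], iter(cell_texts)
    for tok in tokens:
        if tok == "C":
            current.append(next(texts, ""))
        elif tok == "NL":
            rows.append(current)
            current = []
        else:
            raise ValueError(f"span token {tok!r} not handled in this toy example")
    if current:
        rows.append(current)
    body = "".join(
        "<tr>" + "".join(f"<td>{c}</td>" for c in row) + "</tr>" for row in rows
    )
    return f"<table>{body}</table>"

print(otsl_subset_to_html(["C", "C", "NL", "C", "C", "NL"],
                          ["Name", "Score", "Alice", "0.91"]))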
2503.11946 | Zhishu Shen | Ye Zhang, Zhishu Shen, Dawen Jiang, Xiangrui Liu, Qiushi Zheng, Jiong
Jin | CCRSat: A Collaborative Computation Reuse Framework for Satellite Edge
Computing Networks | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In satellite computing applications, such as remote sensing, tasks often
involve similar or identical input data, leading to the same processing
results. Computation reuse is an emerging paradigm that leverages the execution
results of previous tasks to enhance the utilization of computational
resources. While this paradigm has been extensively studied in terrestrial
networks with abundant computing and caching resources, such as named data
networking (NDN), it is essential to develop a framework appropriate for
resource-constrained satellite networks, which are expected to have longer task
completion times. In this paper, we propose CCRSat, a collaborative computation
reuse framework for satellite edge computing networks. CCRSat initially
implements local computation reuse on an independent satellite, utilizing a
satellite reuse state (SRS) to assess the efficiency of computation reuse.
Additionally, an inter-satellite computation reuse algorithm is introduced,
which utilizes the collaborative sharing of similarity in previously processed
data among multiple satellites. Evaluation results on real-world datasets
demonstrate that, compared with the comparison scenarios, our proposed CCRSat
can significantly reduce task completion time by up to 62.1% and
computational resource consumption by up to 28.8%.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 01:35:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Ye",
""
],
[
"Shen",
"Zhishu",
""
],
[
"Jiang",
"Dawen",
""
],
[
"Liu",
"Xiangrui",
""
],
[
"Zheng",
"Qiushi",
""
],
[
"Jin",
"Jiong",
""
]
] | TITLE: CCRSat: A Collaborative Computation Reuse Framework for Satellite Edge
Computing Networks
ABSTRACT: In satellite computing applications, such as remote sensing, tasks often
involve similar or identical input data, leading to the same processing
results. Computation reuse is an emerging paradigm that leverages the execution
results of previous tasks to enhance the utilization of computational
resources. While this paradigm has been extensively studied in terrestrial
networks with abundant computing and caching resources, such as named data
networking (NDN), it is essential to develop a framework appropriate for
resource-constrained satellite networks, which are expected to have longer task
completion times. In this paper, we propose CCRSat, a collaborative computation
reuse framework for satellite edge computing networks. CCRSat initially
implements local computation reuse on an independent satellite, utilizing a
satellite reuse state (SRS) to assess the efficiency of computation reuse.
Additionally, an inter-satellite computation reuse algorithm is introduced,
which utilizes the collaborative sharing of similarity in previously processed
data among multiple satellites. Evaluation results on real-world datasets
demonstrate that, compared with the comparison scenarios, our proposed CCRSat
can significantly reduce task completion time by up to 62.1% and
computational resource consumption by up to 28.8%.
|
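The record above (2503.11946) relies on computation reuse: a task whose input has already been processed should not be re-executed. The sketch below shows only that basic idea with a content-hash cache; the satellite reuse state (SRS) and the inter-satellite similarity sharing of the actual framework are not modeled, and all names here are illustrative.

# Generic computation-reuse cache keyed by a content hash. Illustrates the basic
# reuse idea only; CCRSat's SRS and inter-satellite sharing are not modeled.
import hashlib
from typing import Callable, Dict

class ReuseCache:
    def __init__(self) -> None:
        self._store: Dict[str, bytes] = {}
        self.hits = 0
        self.misses = 0

    def run(self, task: Callable[[bytes], bytes], payload: bytes) -> bytes:
        key = hashlib.sha256(payload).hexdigest()
        if key in self._store:          # identical input seen before: reuse result
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = task(payload)
        self._store[key] = result
        return result

def heavy_task(data: bytes) -> bytes:   # stand-in for an onboard processing job
    return hashlib.md5(data).digest()

cache = ReuseCache()
for frame in [b"tile-A", b"tile-B", b"tile-A"]:
    cache.run(heavy_task, frame)
print(cache.hits, cache.misses)         # -> 1 2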
2503.11948 | Thivya Thogesan Miss | Thivya Thogesan, Anupiya Nugaliyadde, Kok Wai Wong | Integration of Explainable AI Techniques with Large Language Models for
Enhanced Interpretability for Sentiment Analysis | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Interpretability remains a key difficulty in sentiment analysis with Large
Language Models (LLMs), particularly in high-stakes applications where it is
crucial to comprehend the rationale behind predictions. This research addresses
this by introducing a technique that applies SHAP (Shapley Additive
Explanations), breaking down LLMs into components such as the embedding layer,
encoder, decoder, and attention layer to provide layer-by-layer insight into
sentiment prediction. The approach offers a clearer overview of how the model
interprets and categorises sentiment by decomposing LLMs into these parts. The
method is evaluated using the Stanford Sentiment Treebank (SST-2) dataset,
which shows how different sentences affect different layers. The effectiveness
of layer-wise SHAP analysis in clarifying sentiment-specific token attributions
is demonstrated by experimental evaluations, which provide a notable
enhancement over current whole-model explainability techniques. These results
highlight how the suggested approach could improve the reliability and
transparency of LLM-based sentiment analysis in crucial applications.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 01:37:54 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Thogesan",
"Thivya",
""
],
[
"Nugaliyadde",
"Anupiya",
""
],
[
"Wong",
"Kok Wai",
""
]
] | TITLE: Integration of Explainable AI Techniques with Large Language Models for
Enhanced Interpretability for Sentiment Analysis
ABSTRACT: Interpretability remains a key difficulty in sentiment analysis with Large
Language Models (LLMs), particularly in high-stakes applications where it is
crucial to comprehend the rationale behind predictions. This research addresses
this by introducing a technique that applies SHAP (Shapley Additive
Explanations), breaking down LLMs into components such as the embedding layer,
encoder, decoder, and attention layer to provide layer-by-layer insight into
sentiment prediction. The approach offers a clearer overview of how the model
interprets and categorises sentiment by decomposing LLMs into these parts. The
method is evaluated using the Stanford Sentiment Treebank (SST-2) dataset,
which shows how different sentences affect different layers. The effectiveness
of layer-wise SHAP analysis in clarifying sentiment-specific token attributions
is demonstrated by experimental evaluations, which provide a notable
enhancement over current whole-model explainability techniques. These results
highlight how the suggested approach could improve the reliability and
transparency of LLM-based sentiment analysis in crucial applications.
|
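The record above (2503.11948) decomposes SHAP attributions layer by layer inside an LLM; that decomposition is custom to the paper. The snippet below shows only the standard whole-model SHAP usage on an SST-2 sentiment classifier, roughly the kind of baseline the paper compares against. The model name and the exact shap/transformers argument names are assumptions that may vary across library versions.

# Standard whole-model SHAP explanation of a sentiment classifier. This is a
# baseline-style usage, NOT the paper's layer-wise decomposition.
import shap
from transformers import pipeline

clf = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,  # return scores for both classes (older versions: return_all_scores=True)
)
explainer = shap.Explainer(clf)      # SHAP wraps the text pipeline with a text masker
shap_values = explainer(["This movie was surprisingly touching and well acted."])
print(shap_values[:, :, "POSITIVE"].values)  # per-token attributions toward POSITIVE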
2503.11953 | Priyanka Mandikal | Priyanka Mandikal, Tushar Nagarajan, Alex Stoken, Zihui Xue, Kristen
Grauman | SPOC: Spatially-Progressing Object State Change Segmentation in Video | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Object state changes in video reveal critical information about human and
agent activity. However, existing methods are limited to temporal localization
of when the object is in its initial state (e.g., the unchopped avocado) versus
when it has completed a state change (e.g., the chopped avocado), which limits
applicability for any task requiring detailed information about the progress of
the actions and its spatial localization. We propose to deepen the problem by
introducing the spatially-progressing object state change segmentation task.
The goal is to segment at the pixel-level those regions of an object that are
actionable and those that are transformed. We introduce the first model to
address this task, designing a VLM-based pseudo-labeling approach, state-change
dynamics constraints, and a novel WhereToChange benchmark built on in-the-wild
Internet videos. Experiments on two datasets validate both the challenge of the
new task as well as the promise of our model for localizing exactly where and
how fast objects are changing in video. We further demonstrate useful
implications for tracking activity progress to benefit robotic agents. Project
page: https://vision.cs.utexas.edu/projects/spoc-spatially-progressing-osc
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 01:48:54 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Mandikal",
"Priyanka",
""
],
[
"Nagarajan",
"Tushar",
""
],
[
"Stoken",
"Alex",
""
],
[
"Xue",
"Zihui",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: SPOC: Spatially-Progressing Object State Change Segmentation in Video
ABSTRACT: Object state changes in video reveal critical information about human and
agent activity. However, existing methods are limited to temporal localization
of when the object is in its initial state (e.g., the unchopped avocado) versus
when it has completed a state change (e.g., the chopped avocado), which limits
applicability for any task requiring detailed information about the progress of
the actions and its spatial localization. We propose to deepen the problem by
introducing the spatially-progressing object state change segmentation task.
The goal is to segment at the pixel-level those regions of an object that are
actionable and those that are transformed. We introduce the first model to
address this task, designing a VLM-based pseudo-labeling approach, state-change
dynamics constraints, and a novel WhereToChange benchmark built on in-the-wild
Internet videos. Experiments on two datasets validate both the challenge of the
new task as well as the promise of our model for localizing exactly where and
how fast objects are changing in video. We further demonstrate useful
implications for tracking activity progress to benefit robotic agents. Project
page: https://vision.cs.utexas.edu/projects/spoc-spatially-progressing-osc
|
2503.11954 | Ahcen Aliouat | Ahcen Aliouat, Elsa Dupraz | Goal-Oriented Source Coding using LDPC Codes for Compressed-Domain Image
Classification | 11 pages, 13 figures, Submitted to IEEE Transactions on
Communications (Under Review) | null | null | null | eess.IV cs.AI cs.CV cs.IT cs.LG math.IT | http://creativecommons.org/licenses/by/4.0/ | In the emerging field of goal-oriented communications, the focus has shifted
from reconstructing data to directly performing specific learning tasks, such
as classification, segmentation, or pattern recognition, on the received coded
data. In the commonly studied scenario of classification from compressed
images, a key objective is to enable learning directly on entropy-coded data,
thereby bypassing the computationally intensive step of data reconstruction.
Conventional entropy-coding methods, such as Huffman and Arithmetic coding, are
effective for compression but disrupt the data structure, making them less
suitable for direct learning without decoding. This paper investigates the use
of low-density parity-check (LDPC) codes -- originally designed for channel
coding -- as an alternative entropy-coding approach. It is hypothesized that
the structured nature of LDPC codes can be leveraged more effectively by deep
learning models for tasks like classification. At the receiver side, gated
recurrent unit (GRU) models are trained to perform image classification
directly on LDPC-coded data. Experiments on datasets like MNIST, Fashion-MNIST,
and CIFAR show that LDPC codes outperform Huffman and Arithmetic coding in
classification tasks, while requiring significantly smaller learning models.
Furthermore, the paper analyzes why LDPC codes preserve data structure more
effectively than traditional entropy-coding techniques and explores the impact
of key code parameters on classification performance. These results suggest
that LDPC-based entropy coding offers an optimal balance between learning
efficiency and model complexity, eliminating the need for prior decoding.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 01:52:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Aliouat",
"Ahcen",
""
],
[
"Dupraz",
"Elsa",
""
]
] | TITLE: Goal-Oriented Source Coding using LDPC Codes for Compressed-Domain Image
Classification
ABSTRACT: In the emerging field of goal-oriented communications, the focus has shifted
from reconstructing data to directly performing specific learning tasks, such
as classification, segmentation, or pattern recognition, on the received coded
data. In the commonly studied scenario of classification from compressed
images, a key objective is to enable learning directly on entropy-coded data,
thereby bypassing the computationally intensive step of data reconstruction.
Conventional entropy-coding methods, such as Huffman and Arithmetic coding, are
effective for compression but disrupt the data structure, making them less
suitable for direct learning without decoding. This paper investigates the use
of low-density parity-check (LDPC) codes -- originally designed for channel
coding -- as an alternative entropy-coding approach. It is hypothesized that
the structured nature of LDPC codes can be leveraged more effectively by deep
learning models for tasks like classification. At the receiver side, gated
recurrent unit (GRU) models are trained to perform image classification
directly on LDPC-coded data. Experiments on datasets like MNIST, Fashion-MNIST,
and CIFAR show that LDPC codes outperform Huffman and Arithmetic coding in
classification tasks, while requiring significantly smaller learning models.
Furthermore, the paper analyzes why LDPC codes preserve data structure more
effectively than traditional entropy-coding techniques and explores the impact
of key code parameters on classification performance. These results suggest
that LDPC-based entropy coding offers an optimal balance between learning
efficiency and model complexity, eliminating the need for prior decoding.
|
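The record above (2503.11954) classifies directly on LDPC-coded bits with a GRU. The toy sketch below only reproduces the shape of that pipeline: source bit-vectors are multiplied by a random sparse binary matrix modulo 2 (a crude stand-in for an actual LDPC parity-check matrix) and the coded bits are fed, one per time step, to a GRU classifier. It is not the authors' codes, datasets, or models.

# Toy "classify on coded bits" pipeline: random sparse binary coding matrix plus
# a GRU classifier. NOT the authors' LDPC construction or model.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_bits, n_coded, n_classes = 256, 128, 10
H = (torch.rand(n_coded, n_bits) < 0.05).float()   # sparse random binary matrix

def encode(x_bits):                                 # x_bits: (batch, n_bits) in {0, 1}
    return (x_bits @ H.T) % 2                       # syndrome-style coded bits

class GRUClassifier(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, coded):                       # coded: (batch, n_coded)
        _, h = self.gru(coded.unsqueeze(-1))        # one coded bit per time step
        return self.head(h[-1])

model = GRUClassifier()
x = (torch.rand(8, n_bits) < 0.5).float()
logits = model(encode(x))
loss = nn.functional.cross_entropy(logits, torch.randint(0, n_classes, (8,)))
loss.backward()
print(logits.shape, float(loss))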
2503.11958 | Zheyuan Hu | Chong Su, Yingbin Fu, Zheyuan Hu, Jing Yang, Param Hanji, Shaojun
Wang, Xuan Zhao, Cengiz \"Oztireli, Fangcheng Zhong | CHOrD: Generation of Collision-Free, House-Scale, and Organized Digital
Twins for 3D Indoor Scenes with Controllable Floor Plans and Optimal Layouts | Chong Su and Yingbin Fu contributed equally to this work | null | null | null | cs.CV cs.AI cs.LG cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce CHOrD, a novel framework for scalable synthesis of 3D indoor
scenes, designed to create house-scale, collision-free, and hierarchically
structured indoor digital twins. In contrast to existing methods that directly
synthesize the scene layout as a scene graph or object list, CHOrD incorporates
a 2D image-based intermediate layout representation, enabling effective
prevention of collision artifacts by successfully capturing them as
out-of-distribution (OOD) scenarios during generation. Furthermore, unlike
existing methods, CHOrD is capable of generating scene layouts that adhere to
complex floor plans with multi-modal controls, enabling the creation of
coherent, house-wide layouts robust to both geometric and semantic variations
in room structures. Additionally, we propose a novel dataset with expanded
coverage of household items and room configurations, as well as significantly
improved data quality. CHOrD demonstrates state-of-the-art performance on both
the 3D-FRONT and our proposed datasets, delivering photorealistic, spatially
coherent indoor scene synthesis adaptable to arbitrary floor plan variations.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 02:05:10 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Su",
"Chong",
""
],
[
"Fu",
"Yingbin",
""
],
[
"Hu",
"Zheyuan",
""
],
[
"Yang",
"Jing",
""
],
[
"Hanji",
"Param",
""
],
[
"Wang",
"Shaojun",
""
],
[
"Zhao",
"Xuan",
""
],
[
"Öztireli",
"Cengiz",
""
],
[
"Zhong",
"Fangcheng",
""
]
] | TITLE: CHOrD: Generation of Collision-Free, House-Scale, and Organized Digital
Twins for 3D Indoor Scenes with Controllable Floor Plans and Optimal Layouts
ABSTRACT: We introduce CHOrD, a novel framework for scalable synthesis of 3D indoor
scenes, designed to create house-scale, collision-free, and hierarchically
structured indoor digital twins. In contrast to existing methods that directly
synthesize the scene layout as a scene graph or object list, CHOrD incorporates
a 2D image-based intermediate layout representation, enabling effective
prevention of collision artifacts by successfully capturing them as
out-of-distribution (OOD) scenarios during generation. Furthermore, unlike
existing methods, CHOrD is capable of generating scene layouts that adhere to
complex floor plans with multi-modal controls, enabling the creation of
coherent, house-wide layouts robust to both geometric and semantic variations
in room structures. Additionally, we propose a novel dataset with expanded
coverage of household items and room configurations, as well as significantly
improved data quality. CHOrD demonstrates state-of-the-art performance on both
the 3D-FRONT and our proposed datasets, delivering photorealistic, spatially
coherent indoor scene synthesis adaptable to arbitrary floor plan variations.
|
2503.11973 | Haonan Pan | Haonan Pan, Shuheng Chen, Elham Pishgar, Kamiar Alaei, Greg Placencia,
Maryam Pishgar | Machine Learning-Based Model for Postoperative Stroke Prediction in
Coronary Artery Disease | 19 pages, 7 figures, submitted to PLOS One. The study employs machine
learning techniques, particularly Support Vector Machines, to predict
postoperative stroke risk in coronary artery disease patients undergoing
revascularization. It utilizes the MIMIC-IV v3.1 database and incorporates
SHapley Additive Properties analysis for model interpretation | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Coronary artery disease remains one of the leading causes of mortality
globally. Despite advances in revascularization treatments like PCI and CABG,
postoperative stroke remains an unavoidable complication. This study aims to develop and evaluate a
sophisticated machine learning prediction model to assess postoperative stroke
risk in coronary revascularization patients. This research employed data from
the MIMIC-IV database, consisting of a cohort of 7023 individuals. Study data
included clinical, laboratory, and comorbidity variables. To reduce
multicollinearity, variables with over 30% missing values and features with a
correlation coefficient larger than 0.9 were deleted. The dataset was split
into 70% training and 30% test sets. A Random Forest model was used to impute
the remaining missing values. Numerical values were normalized, whereas categorical
variables were one-hot encoded. LASSO regularization selected features, and
grid search found model hyperparameters. Finally, Logistic Regression, XGBoost,
SVM, and CatBoost were employed for predictive modeling, and SHAP analysis
assessed each variable's contribution to stroke risk. An AUC of 0.855
(0.829-0.878) showed that the SVM model outperformed the logistic regression
and CatBoost models reported in prior research. SHAP analysis showed that the
Charlson Comorbidity Index (CCI),
diabetes, chronic kidney disease, and heart failure are significant prognostic
factors for postoperative stroke. This study shows that improved machine
learning reduces overfitting and improves model predictive accuracy. Models
using the CCI alone cannot predict postoperative stroke risk as accurately as
those using independent comorbidity variables. The suggested technique provides
a more thorough and individualized risk assessment by encompassing a wider
range of clinically relevant characteristics, making it a better reference for
preoperative risk assessments and targeted intervention.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 02:50:32 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Pan",
"Haonan",
""
],
[
"Chen",
"Shuheng",
""
],
[
"Pishgar",
"Elham",
""
],
[
"Alaei",
"Kamiar",
""
],
[
"Placencia",
"Greg",
""
],
[
"Pishgar",
"Maryam",
""
]
] | TITLE: Machine Learning-Based Model for Postoperative Stroke Prediction in
Coronary Artery Disease
ABSTRACT: Coronary artery disease remains one of the leading causes of mortality
globally. Despite advances in revascularization treatments like PCI and CABG,
postoperative stroke remains an unavoidable complication. This study aims to develop and evaluate a
sophisticated machine learning prediction model to assess postoperative stroke
risk in coronary revascularization patients. This research employed data from
the MIMIC-IV database, consisting of a cohort of 7023 individuals. Study data
included clinical, laboratory, and comorbidity variables. To reduce
multicollinearity, variables with over 30% missing values and features with a
correlation coefficient larger than 0.9 were deleted. The dataset was split
into 70% training and 30% test sets. A Random Forest model was used to impute
the remaining missing values. Numerical values were normalized, whereas categorical
variables were one-hot encoded. LASSO regularization selected features, and
grid search found model hyperparameters. Finally, Logistic Regression, XGBoost,
SVM, and CatBoost were employed for predictive modeling, and SHAP analysis
assessed each variable's contribution to stroke risk. An AUC of 0.855
(0.829-0.878) showed that the SVM model outperformed the logistic regression
and CatBoost models reported in prior research. SHAP analysis showed that the
Charlson Comorbidity Index (CCI),
diabetes, chronic kidney disease, and heart failure are significant prognostic
factors for postoperative stroke. This study shows that improved machine
learning reduces overfitting and improves model predictive accuracy. Models
using the CCI alone cannot predict postoperative stroke risk as accurately as
those using independent comorbidity variables. The suggested technique provides
a more thorough and individualized risk assessment by encompassing a wider
range of clinically relevant characteristics, making it a better reference for
preoperative risk assessments and targeted intervention.
|
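The record above (2503.11973) describes a fairly standard tabular workflow: scaling, L1-based feature selection, a grid-searched SVM with probability outputs, and ROC-AUC on a 70/30 split. The scikit-learn sketch below reproduces only that pipeline shape on synthetic data; it is not the study's code and does not use MIMIC-IV.

# Generic scikit-learn sketch of the described pipeline shape. Synthetic data
# stands in for MIMIC-IV; NOT the study's actual code.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=40, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear"))),
    ("svm", SVC(probability=True)),
])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.01]},
                    scoring="roc_auc", cv=5)
grid.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1]))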
2503.11979 | Runfa Li | Runfa Blark Li, Mahdi Shaghaghi, Keito Suzuki, Xinshuang Liu, Varun
Moparthi, Bang Du, Walker Curtis, Martin Renschler, Ki Myung Brian Lee,
Nikolay Atanasov, Truong Nguyen | DynaGSLAM: Real-Time Gaussian-Splatting SLAM for Online Rendering,
Tracking, Motion Predictions of Moving Objects in Dynamic Scenes | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Simultaneous Localization and Mapping (SLAM) is one of the most important
environment-perception and navigation algorithms for computer vision, robotics,
and autonomous cars/drones. Hence, high quality and fast mapping becomes a
fundamental problem. With the advent of 3D Gaussian Splatting (3DGS) as an
explicit representation with excellent rendering quality and speed,
state-of-the-art (SOTA) works introduce GS to SLAM. Compared to classical
pointcloud-SLAM, GS-SLAM generates photometric information by learning from
input camera views and synthesizes unseen views with high-quality textures.
However, these GS-SLAM methods fail when moving objects occupy the scene and
violate the static assumption of bundle adjustment. The failed updates of moving
GS affect the static GS and contaminate the full map over long frames. Although
some efforts have been made by concurrent works to consider moving objects for
GS-SLAM, they simply detect and remove the moving regions from GS rendering
("anti'' dynamic GS-SLAM), where only the static background could benefit from
GS. To this end, we propose the first real-time GS-SLAM, "DynaGSLAM'', that
achieves high-quality online GS rendering, tracking, motion predictions of
moving objects in dynamic scenes while jointly estimating accurate ego motion.
Our DynaGSLAM outperforms SOTA static & "Anti'' dynamic GS-SLAM on three
dynamic real datasets, while keeping speed and memory efficiency in practice.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 03:20:14 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Runfa Blark",
""
],
[
"Shaghaghi",
"Mahdi",
""
],
[
"Suzuki",
"Keito",
""
],
[
"Liu",
"Xinshuang",
""
],
[
"Moparthi",
"Varun",
""
],
[
"Du",
"Bang",
""
],
[
"Curtis",
"Walker",
""
],
[
"Renschler",
"Martin",
""
],
[
"Lee",
"Ki Myung Brian",
""
],
[
"Atanasov",
"Nikolay",
""
],
[
"Nguyen",
"Truong",
""
]
] | TITLE: DynaGSLAM: Real-Time Gaussian-Splatting SLAM for Online Rendering,
Tracking, Motion Predictions of Moving Objects in Dynamic Scenes
ABSTRACT: Simultaneous Localization and Mapping (SLAM) is one of the most important
environment-perception and navigation algorithms for computer vision, robotics,
and autonomous cars/drones. Hence, high quality and fast mapping becomes a
fundamental problem. With the advent of 3D Gaussian Splatting (3DGS) as an
explicit representation with excellent rendering quality and speed,
state-of-the-art (SOTA) works introduce GS to SLAM. Compared to classical
pointcloud-SLAM, GS-SLAM generates photometric information by learning from
input camera views and synthesizes unseen views with high-quality textures.
However, these GS-SLAM methods fail when moving objects occupy the scene and
violate the static assumption of bundle adjustment. The failed updates of moving
GS affect the static GS and contaminate the full map over long frames. Although
some efforts have been made by concurrent works to consider moving objects for
GS-SLAM, they simply detect and remove the moving regions from GS rendering
("anti'' dynamic GS-SLAM), where only the static background could benefit from
GS. To this end, we propose the first real-time GS-SLAM, "DynaGSLAM'', that
achieves high-quality online GS rendering, tracking, motion predictions of
moving objects in dynamic scenes while jointly estimating accurate ego motion.
Our DynaGSLAM outperforms SOTA static & "Anti'' dynamic GS-SLAM on three
dynamic real datasets, while keeping speed and memory efficiency in practice.
|
2503.11984 | Xinyu Liu | Xinyu Liu and Shuyu Shen and Boyan Li and Nan Tang and Yuyu Luo | NL2SQL-BUGs: A Benchmark for Detecting Semantic Errors in NL2SQL
Translation | 12 pages, 6 figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural Language to SQL (i.e., NL2SQL) translation is crucial for
democratizing database access, but even state-of-the-art models frequently
generate semantically incorrect SQL queries, hindering the widespread adoption
of these techniques by database vendors. While existing NL2SQL benchmarks
primarily focus on correct query translation, we argue that a benchmark
dedicated to identifying common errors in NL2SQL translations is equally
important, as accurately detecting these errors is a prerequisite for any
subsequent correction, whether performed by humans or models. To address this
gap, we propose NL2SQL-BUGs, the first benchmark dedicated to detecting and
categorizing semantic errors in NL2SQL translation. NL2SQL-BUGs adopts a
two-level taxonomy to systematically classify semantic errors, covering 9 main
categories and 31 subcategories. The benchmark consists of 2018
expert-annotated instances, each containing a natural language query, database
schema, and SQL query, with detailed error annotations for semantically
incorrect queries. Through comprehensive experiments, we demonstrate that
current large language models exhibit significant limitations in semantic error
detection, achieving an average detection accuracy of only 75.16%. Despite
this, the models were able to successfully detect 106 errors (accounting for
6.91%) in the widely-used NL2SQL dataset, BIRD, which were previously
annotation errors in the benchmark. This highlights the importance of semantic
error detection in NL2SQL systems.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 03:54:10 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Xinyu",
""
],
[
"Shen",
"Shuyu",
""
],
[
"Li",
"Boyan",
""
],
[
"Tang",
"Nan",
""
],
[
"Luo",
"Yuyu",
""
]
] | TITLE: NL2SQL-BUGs: A Benchmark for Detecting Semantic Errors in NL2SQL
Translation
ABSTRACT: Natural Language to SQL (i.e., NL2SQL) translation is crucial for
democratizing database access, but even state-of-the-art models frequently
generate semantically incorrect SQL queries, hindering the widespread adoption
of these techniques by database vendors. While existing NL2SQL benchmarks
primarily focus on correct query translation, we argue that a benchmark
dedicated to identifying common errors in NL2SQL translations is equally
important, as accurately detecting these errors is a prerequisite for any
subsequent correction, whether performed by humans or models. To address this
gap, we propose NL2SQL-BUGs, the first benchmark dedicated to detecting and
categorizing semantic errors in NL2SQL translation. NL2SQL-BUGs adopts a
two-level taxonomy to systematically classify semantic errors, covering 9 main
categories and 31 subcategories. The benchmark consists of 2018
expert-annotated instances, each containing a natural language query, database
schema, and SQL query, with detailed error annotations for semantically
incorrect queries. Through comprehensive experiments, we demonstrate that
current large language models exhibit significant limitations in semantic error
detection, achieving an average detection accuracy of only 75.16%. Despite
this, the models were able to successfully detect 106 errors (accounting for
6.91%) in the widely-used NL2SQL dataset, BIRD, which were previously
annotation errors in the benchmark. This highlights the importance of semantic
error detection in NL2SQL systems.
|
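The record above (2503.11984) targets detection of semantic errors in NL2SQL output. A much weaker but common baseline is an execution-based check: run the predicted and gold SQL on the same database and compare result multisets. The sketch below shows only that baseline on a toy SQLite schema; it is not the benchmark's LLM-based detection and misses errors that happen to return identical results on a given database instance.

# Naive execution-based equivalence check between predicted and gold SQL.
# Baseline illustration only; NOT NL2SQL-BUGs' semantic error detection.
import sqlite3
from collections import Counter

def same_result(db_path: str, predicted_sql: str, gold_sql: str) -> bool:
    with sqlite3.connect(db_path) as conn:
        pred = Counter(map(tuple, conn.execute(predicted_sql).fetchall()))
        gold = Counter(map(tuple, conn.execute(gold_sql).fetchall()))
    return pred == gold

# Toy schema and queries (illustrative only).
conn = sqlite3.connect("toy.db")
conn.executescript("""
    DROP TABLE IF EXISTS orders;
    CREATE TABLE orders(id INTEGER, amount REAL, status TEXT);
    INSERT INTO orders VALUES (1, 10.0, 'paid'), (2, 25.0, 'open'), (3, 5.0, 'paid');
""")
conn.commit(); conn.close()
print(same_result("toy.db",
                  "SELECT SUM(amount) FROM orders WHERE status = 'paid'",
                  "SELECT SUM(amount) FROM orders WHERE status = 'open'"))  # False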
2503.12006 | Lei Zhou | Zhe Shan, Yang Liu, Lei Zhou, Cheng Yan, Heng Wang, Xia Xie | ROS-SAM: High-Quality Interactive Segmentation for Remote Sensing Moving
Object | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The availability of large-scale remote sensing video data underscores the
importance of high-quality interactive segmentation. However, challenges such
as small object sizes, ambiguous features, and limited generalization make it
difficult for current methods to achieve this goal. In this work, we propose
ROS-SAM, a method designed to achieve high-quality interactive segmentation
while preserving generalization across diverse remote sensing data. The ROS-SAM
is built upon three key innovations: 1) LoRA-based fine-tuning, which enables
efficient domain adaptation while maintaining SAM's generalization ability, 2)
Enhancement of deep network layers to improve the discriminability of extracted
features, thereby reducing misclassifications, and 3) Integration of global
context with local boundary details in the mask decoder to generate
high-quality segmentation masks. Additionally, we design the data pipeline to
ensure the model learns to better handle objects at varying scales during
training while focusing on high-quality predictions during inference.
Experiments on remote sensing video datasets show that the redesigned data
pipeline boosts the IoU by 6%, while ROS-SAM increases the IoU by 13%. Finally,
when evaluated on existing remote sensing object tracking datasets, ROS-SAM
demonstrates impressive zero-shot capabilities, generating masks that closely
resemble manual annotations. These results confirm ROS-SAM as a powerful tool
for fine-grained segmentation in remote sensing applications. Code is available
at https://github.com/ShanZard/ROS-SAM.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 06:10:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Shan",
"Zhe",
""
],
[
"Liu",
"Yang",
""
],
[
"Zhou",
"Lei",
""
],
[
"Yan",
"Cheng",
""
],
[
"Wang",
"Heng",
""
],
[
"Xie",
"Xia",
""
]
] | TITLE: ROS-SAM: High-Quality Interactive Segmentation for Remote Sensing Moving
Object
ABSTRACT: The availability of large-scale remote sensing video data underscores the
importance of high-quality interactive segmentation. However, challenges such
as small object sizes, ambiguous features, and limited generalization make it
difficult for current methods to achieve this goal. In this work, we propose
ROS-SAM, a method designed to achieve high-quality interactive segmentation
while preserving generalization across diverse remote sensing data. The ROS-SAM
is built upon three key innovations: 1) LoRA-based fine-tuning, which enables
efficient domain adaptation while maintaining SAM's generalization ability, 2)
Enhancement of deep network layers to improve the discriminability of extracted
features, thereby reducing misclassifications, and 3) Integration of global
context with local boundary details in the mask decoder to generate
high-quality segmentation masks. Additionally, we design the data pipeline to
ensure the model learns to better handle objects at varying scales during
training while focusing on high-quality predictions during inference.
Experiments on remote sensing video datasets show that the redesigned data
pipeline boosts the IoU by 6%, while ROS-SAM increases the IoU by 13%. Finally,
when evaluated on existing remote sensing object tracking datasets, ROS-SAM
demonstrates impressive zero-shot capabilities, generating masks that closely
resemble manual annotations. These results confirm ROS-SAM as a powerful tool
for fine-grained segmentation in remote sensing applications. Code is available
at https://github.com/ShanZard/ROS-SAM.
|
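The record above (2503.12006) lists LoRA-based fine-tuning as one of its key ingredients. The snippet below is a generic low-rank adaptation wrapper around a frozen linear layer, shown only to illustrate that ingredient; it is not the ROS-SAM code nor its integration with SAM, and the rank and scaling values are arbitrary.

# Generic LoRA (low-rank adaptation) wrapper around a frozen linear layer.
# Illustrates the fine-tuning ingredient only; NOT the ROS-SAM implementation.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # frozen pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base output plus a trainable low-rank update.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(256, 256))
out = layer(torch.randn(4, 256))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # only the low-rank A/B matrices are trainable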
2503.12012 | Qingshi Sun | Qingshi Sun, Nathan Justin, Andres Gomez, Phebe Vayanos | Mixed-feature Logistic Regression Robust to Distribution Shifts | The 28th International Conference on Artificial Intelligence and
Statistics (AISTATS), 2025 | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logistic regression models are widely used in the social and behavioral
sciences and in high-stakes domains, due to their simplicity and
interpretability properties. At the same time, such domains are permeated by
distribution shifts, where the distribution generating the data changes between
training and deployment. In this paper, we study a distributionally robust
logistic regression problem that seeks the model that will perform best against
adversarial realizations of the data distribution drawn from a suitably
constructed Wasserstein ambiguity set. Our model and solution approach differ
from prior work in that we can capture settings where the likelihood of
distribution shifts can vary across features, significantly broadening the
applicability of our model relative to the state-of-the-art. We propose a
graph-based solution approach that can be integrated into off-the-shelf
optimization solvers. We evaluate the performance of our model and algorithms
on numerous publicly available datasets. Our solution achieves a 408x speed-up
relative to the state-of-the-art. Additionally, compared to the
state-of-the-art, our model reduces average calibration error by up to 36.19%
and worst-case calibration error by up to 41.70%, while increasing the average
area under the ROC curve (AUC) by up to 18.02% and worst-case AUC by up to
48.37%.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 06:31:16 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sun",
"Qingshi",
""
],
[
"Justin",
"Nathan",
""
],
[
"Gomez",
"Andres",
""
],
[
"Vayanos",
"Phebe",
""
]
] | TITLE: Mixed-feature Logistic Regression Robust to Distribution Shifts
ABSTRACT: Logistic regression models are widely used in the social and behavioral
sciences and in high-stakes domains, due to their simplicity and
interpretability properties. At the same time, such domains are permeated by
distribution shifts, where the distribution generating the data changes between
training and deployment. In this paper, we study a distributionally robust
logistic regression problem that seeks the model that will perform best against
adversarial realizations of the data distribution drawn from a suitably
constructed Wasserstein ambiguity set. Our model and solution approach differ
from prior work in that we can capture settings where the likelihood of
distribution shifts can vary across features, significantly broadening the
applicability of our model relative to the state-of-the-art. We propose a
graph-based solution approach that can be integrated into off-the-shelf
optimization solvers. We evaluate the performance of our model and algorithms
on numerous publicly available datasets. Our solution achieves a 408x speed-up
relative to the state-of-the-art. Additionally, compared to the
state-of-the-art, our model reduces average calibration error by up to 36.19%
and worst-case calibration error by up to 41.70%, while increasing the average
area under the ROC curve (AUC) by up to 18.02% and worst-case AUC by up to
48.37%.
|
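For reference, the record above (2503.12012) builds on the generic Wasserstein distributionally robust logistic regression formulation, which in standard notation (not the paper's feature-dependent extension) reads
\[
\min_{\beta}\ \sup_{\mathbb{Q}\,\in\,\mathcal{B}_{\varepsilon}(\hat{\mathbb{P}}_N)} \ \mathbb{E}_{\mathbb{Q}}\!\left[\log\!\left(1+\exp\!\left(-y\,\beta^{\top}x\right)\right)\right],
\]
where $\hat{\mathbb{P}}_N$ is the empirical distribution of the training data and $\mathcal{B}_{\varepsilon}(\cdot)$ is a Wasserstein ball of radius $\varepsilon$ around it. The paper's contribution, allowing the likelihood of shifts to vary across features, modifies the transportation cost defining this ball and is not captured by this generic form.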
2503.12014 | Shun Zou | Shun Zou, Yi Zou, Mingya Zhang, Shipeng Luo, Guangwei Gao, Guojun Qi | Learning Dual-Domain Multi-Scale Representations for Single Image
Deraining | 6 pages, 5 figures, code: https://zs1314.github.io/DMSR | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing image deraining methods typically rely on single-input,
single-output, and single-scale architectures, which overlook the joint
multi-scale information between external and internal features. Furthermore,
single-domain representations are often too restrictive, limiting their ability
to handle the complexities of real-world rain scenarios. To address these
challenges, we propose a novel Dual-Domain Multi-Scale Representation Network
(DMSR). The key idea is to exploit joint multi-scale representations from both
external and internal domains in parallel while leveraging the strengths of
both spatial and frequency domains to capture more comprehensive properties.
Specifically, our method consists of two main components: the Multi-Scale
Progressive Spatial Refinement Module (MPSRM) and the Frequency Domain Scale
Mixer (FDSM). The MPSRM enables the interaction and coupling of multi-scale
expert information within the internal domain using a hierarchical modulation
and fusion strategy. The FDSM extracts multi-scale local information in the
spatial domain, while also modeling global dependencies in the frequency
domain. Extensive experiments show that our model achieves state-of-the-art
performance across six benchmark datasets.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 06:45:33 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zou",
"Shun",
""
],
[
"Zou",
"Yi",
""
],
[
"Zhang",
"Mingya",
""
],
[
"Luo",
"Shipeng",
""
],
[
"Gao",
"Guangwei",
""
],
[
"Qi",
"Guojun",
""
]
] | TITLE: Learning Dual-Domain Multi-Scale Representations for Single Image
Deraining
ABSTRACT: Existing image deraining methods typically rely on single-input,
single-output, and single-scale architectures, which overlook the joint
multi-scale information between external and internal features. Furthermore,
single-domain representations are often too restrictive, limiting their ability
to handle the complexities of real-world rain scenarios. To address these
challenges, we propose a novel Dual-Domain Multi-Scale Representation Network
(DMSR). The key idea is to exploit joint multi-scale representations from both
external and internal domains in parallel while leveraging the strengths of
both spatial and frequency domains to capture more comprehensive properties.
Specifically, our method consists of two main components: the Multi-Scale
Progressive Spatial Refinement Module (MPSRM) and the Frequency Domain Scale
Mixer (FDSM). The MPSRM enables the interaction and coupling of multi-scale
expert information within the internal domain using a hierarchical modulation
and fusion strategy. The FDSM extracts multi-scale local information in the
spatial domain, while also modeling global dependencies in the frequency
domain. Extensive experiments show that our model achieves state-of-the-art
performance across six benchmark datasets.
|
2503.12018 | Zhe Jin | Zhe Jin, Tat-Seng Chua | Compose Your Aesthetics: Empowering Text-to-Image Models with the
Principles of Art | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Text-to-Image (T2I) diffusion models (DM) have garnered widespread adoption
due to their capability in generating high-fidelity outputs and accessibility
to anyone able to put imagination into words. However, DMs are often
predisposed to generate unappealing outputs, much like the random images on the
internet they were trained on. Existing approaches to address this are founded
on the implicit premise that visual aesthetics is universal, which is limiting.
Aesthetics in the T2I context should be about personalization and we propose
the novel task of aesthetics alignment which seeks to align user-specified
aesthetics with the T2I generation output. Inspired by how artworks provide an
invaluable perspective to approach aesthetics, we codify visual aesthetics
using the compositional framework artists employ, known as the Principles of
Art (PoA). To facilitate this study, we introduce CompArt, a large-scale
compositional art dataset building on top of WikiArt with PoA analysis
annotated by a capable Multimodal LLM. Leveraging the expressive power of LLMs
and training a lightweight and transferrable adapter, we demonstrate that T2I
DMs can effectively offer 10 compositional controls through user-specified PoA
conditions. Additionally, we design an appropriate evaluation framework to
assess the efficacy of our approach.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 06:58:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jin",
"Zhe",
""
],
[
"Chua",
"Tat-Seng",
""
]
] | TITLE: Compose Your Aesthetics: Empowering Text-to-Image Models with the
Principles of Art
ABSTRACT: Text-to-Image (T2I) diffusion models (DM) have garnered widespread adoption
due to their capability in generating high-fidelity outputs and accessibility
to anyone able to put imagination into words. However, DMs are often
predisposed to generate unappealing outputs, much like the random images on the
internet they were trained on. Existing approaches to address this are founded
on the implicit premise that visual aesthetics is universal, which is limiting.
Aesthetics in the T2I context should be about personalization and we propose
the novel task of aesthetics alignment which seeks to align user-specified
aesthetics with the T2I generation output. Inspired by how artworks provide an
invaluable perspective to approach aesthetics, we codify visual aesthetics
using the compositional framework artists employ, known as the Principles of
Art (PoA). To facilitate this study, we introduce CompArt, a large-scale
compositional art dataset building on top of WikiArt with PoA analysis
annotated by a capable Multimodal LLM. Leveraging the expressive power of LLMs
and training a lightweight and transferrable adapter, we demonstrate that T2I
DMs can effectively offer 10 compositional controls through user-specified PoA
conditions. Additionally, we design an appropriate evaluation framework to
assess the efficacy of our approach.
|
2503.12030 | Zhenxin Li | Zhenxin Li, Shihao Wang, Shiyi Lan, Zhiding Yu, Zuxuan Wu, Jose M.
Alvarez | Hydra-NeXt: Robust Closed-Loop Driving with Open-Loop Training | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | End-to-end autonomous driving research currently faces a critical challenge
in bridging the gap between open-loop training and closed-loop deployment.
Current approaches are trained to predict trajectories in an open-loop
environment; they struggle to react quickly to other agents in closed-loop
environments and risk generating kinematically infeasible plans due to the gap
between open-loop training and closed-loop driving. In this paper, we introduce
Hydra-NeXt, a novel multi-branch planning framework that unifies trajectory
prediction, control prediction, and a trajectory refinement network in one
model. Unlike current open-loop trajectory prediction models that only handle
general-case planning, Hydra-NeXt further utilizes a control decoder to focus
on short-term actions, which enables faster responses to dynamic situations and
reactive agents. Moreover, we propose the Trajectory Refinement module to
augment and refine the planning decisions by effectively adhering to kinematic
constraints in closed-loop environments. This unified approach bridges the gap
between open-loop training and closed-loop driving, demonstrating superior
performance of 65.89 Driving Score (DS) and 48.20% Success Rate (SR) on the
Bench2Drive dataset without relying on external experts for data collection.
Hydra-NeXt surpasses the previous state-of-the-art by 22.98 DS and 17.49 SR,
marking a significant advancement in autonomous driving. Code will be available
at https://github.com/woxihuanjiangguo/Hydra-NeXt.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 07:42:27 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Zhenxin",
""
],
[
"Wang",
"Shihao",
""
],
[
"Lan",
"Shiyi",
""
],
[
"Yu",
"Zhiding",
""
],
[
"Wu",
"Zuxuan",
""
],
[
"Alvarez",
"Jose M.",
""
]
] | TITLE: Hydra-NeXt: Robust Closed-Loop Driving with Open-Loop Training
ABSTRACT: End-to-end autonomous driving research currently faces a critical challenge
in bridging the gap between open-loop training and closed-loop deployment.
Current approaches, trained to predict trajectories in an open-loop
environment, struggle to react quickly to other agents in closed-loop
environments and risk generating kinematically infeasible plans due to the gap
between open-loop training and closed-loop driving. In this paper, we introduce
Hydra-NeXt, a novel multi-branch planning framework that unifies trajectory
prediction, control prediction, and a trajectory refinement network in one
model. Unlike current open-loop trajectory prediction models that only handle
general-case planning, Hydra-NeXt further utilizes a control decoder to focus
on short-term actions, which enables faster responses to dynamic situations and
reactive agents. Moreover, we propose the Trajectory Refinement module to
augment and refine the planning decisions by effectively adhering to kinematic
constraints in closed-loop environments. This unified approach bridges the gap
between open-loop training and closed-loop driving, demonstrating superior
performance of 65.89 Driving Score (DS) and 48.20% Success Rate (SR) on the
Bench2Drive dataset without relying on external experts for data collection.
Hydra-NeXt surpasses the previous state-of-the-art by 22.98 DS and 17.49 SR,
marking a significant advancement in autonomous driving. Code will be available
at https://github.com/woxihuanjiangguo/Hydra-NeXt.
|
2503.12034 | Enes Erdogan | Enes Erdogan, Eren Erdal Aksoy and Sanem Sariel | Real-Time Manipulation Action Recognition with a Factorized Graph
Sequence Encoder | 8 pages, 3 figures, 7 tables | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognition of human manipulation actions in real-time is essential for safe
and effective human-robot interaction and collaboration. The challenge lies in
developing a model that is both lightweight enough for real-time execution and
capable of generalization. While some existing methods in the literature can
run in real-time, they struggle with temporal scalability, i.e., they fail to
adapt to long-duration manipulations effectively. To address this, leveraging
the generalizable scene graph representations, we propose a new Factorized
Graph Sequence Encoder network that not only runs in real-time but also scales
effectively in the temporal dimension, thanks to its factorized encoder
architecture. Additionally, we introduce the Hand Pooling operation, a simple
pooling operation for more focused extraction of the graph-level embeddings.
Our model outperforms the previous state-of-the-art real-time approach,
achieving a 14.3\% and 5.6\% improvement in F1-macro score on the KIT Bimanual
Action (Bimacs) Dataset and Collaborative Action (CoAx) Dataset, respectively.
Moreover, we conduct an extensive ablation study to validate our network design
choices. Finally, we compare our model with its architecturally similar
RGB-based model on the Bimacs dataset and show the limitations of this model in
contrast to ours on such an object-centric manipulation dataset.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 07:58:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Erdogan",
"Enes",
""
],
[
"Aksoy",
"Eren Erdal",
""
],
[
"Sariel",
"Sanem",
""
]
] | TITLE: Real-Time Manipulation Action Recognition with a Factorized Graph
Sequence Encoder
ABSTRACT: Recognition of human manipulation actions in real-time is essential for safe
and effective human-robot interaction and collaboration. The challenge lies in
developing a model that is both lightweight enough for real-time execution and
capable of generalization. While some existing methods in the literature can
run in real-time, they struggle with temporal scalability, i.e., they fail to
adapt to long-duration manipulations effectively. To address this, leveraging
the generalizable scene graph representations, we propose a new Factorized
Graph Sequence Encoder network that not only runs in real-time but also scales
effectively in the temporal dimension, thanks to its factorized encoder
architecture. Additionally, we introduce the Hand Pooling operation, a simple
pooling operation for more focused extraction of the graph-level embeddings.
Our model outperforms the previous state-of-the-art real-time approach,
achieving a 14.3\% and 5.6\% improvement in F1-macro score on the KIT Bimanual
Action (Bimacs) Dataset and Collaborative Action (CoAx) Dataset, respectively.
Moreover, we conduct an extensive ablation study to validate our network design
choices. Finally, we compare our model with its architecturally similar
RGB-based model on the Bimacs dataset and show the limitations of this model in
contrast to ours on such an object-centric manipulation dataset.
|
2503.12037 | Hang Ni | Hang Ni, Jindong Han, Nengjun Zhu, Hao Liu | Unsupervised Graph Anomaly Detection via Multi-Hypersphere Heterophilic
Graph Learning | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Anomaly Detection (GAD) plays a vital role in various data mining
applications such as e-commerce fraud prevention and malicious user detection.
Recently, Graph Neural Network (GNN)-based approaches have demonstrated great
effectiveness in GAD by first encoding graph data into low-dimensional
representations and then identifying anomalies under the guidance of supervised
or unsupervised signals. However, existing GNN-based approaches implicitly
follow the homophily principle (i.e., the "like attracts like" phenomenon) and
fail to learn discriminative embeddings for anomalies that connect a vast
number of normal nodes. Moreover, such approaches identify anomalies from a
unified global
perspective but overlook diversified abnormal patterns conditioned on local
graph context, leading to suboptimal performance. To overcome the
aforementioned limitations, in this paper, we propose a Multi-hypersphere
Heterophilic Graph Learning (MHetGL) framework for unsupervised GAD.
Specifically, we first devise a Heterophilic Graph Encoding (HGE) module to
learn distinguishable representations for potential anomalies by purifying and
augmenting their neighborhood in a fully unsupervised manner. Then, we propose
a Multi-Hypersphere Learning (MHL) module to enhance the detection capability
for context-dependent anomalies by jointly incorporating critical patterns from
both global and local perspectives. Extensive experiments on ten real-world
datasets show that MHetGL outperforms 14 baselines. Our code is publicly
available at https://github.com/KennyNH/MHetGL.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 08:08:13 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ni",
"Hang",
""
],
[
"Han",
"Jindong",
""
],
[
"Zhu",
"Nengjun",
""
],
[
"Liu",
"Hao",
""
]
] | TITLE: Unsupervised Graph Anomaly Detection via Multi-Hypersphere Heterophilic
Graph Learning
ABSTRACT: Graph Anomaly Detection (GAD) plays a vital role in various data mining
applications such as e-commerce fraud prevention and malicious user detection.
Recently, Graph Neural Network (GNN)-based approaches have demonstrated great
effectiveness in GAD by first encoding graph data into low-dimensional
representations and then identifying anomalies under the guidance of supervised
or unsupervised signals. However, existing GNN-based approaches implicitly
follow the homophily principle (i.e., the "like attracts like" phenomenon) and
fail to learn discriminative embeddings for anomalies that connect a vast
number of normal nodes. Moreover, such approaches identify anomalies from a
unified global
perspective but overlook diversified abnormal patterns conditioned on local
graph context, leading to suboptimal performance. To overcome the
aforementioned limitations, in this paper, we propose a Multi-hypersphere
Heterophilic Graph Learning (MHetGL) framework for unsupervised GAD.
Specifically, we first devise a Heterophilic Graph Encoding (HGE) module to
learn distinguishable representations for potential anomalies by purifying and
augmenting their neighborhood in a fully unsupervised manner. Then, we propose
a Multi-Hypersphere Learning (MHL) module to enhance the detection capability
for context-dependent anomalies by jointly incorporating critical patterns from
both global and local perspectives. Extensive experiments on ten real-world
datasets show that MHetGL outperforms 14 baselines. Our code is publicly
available at https://github.com/KennyNH/MHetGL.
|
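The multi-hypersphere idea in the abstract above (2503.12037) can be pictured
with a small sketch in the spirit of Deep SVDD: embeddings are pulled toward
their nearest hypersphere center, and the anomaly score is the distance to the
closest center. The random embeddings standing in for GNN outputs, the number
of centers, and the scoring rule below are illustrative assumptions, not the
actual HGE/MHL modules of MHetGL.

    import torch

    def multi_hypersphere_loss(z, centers):
        """z: (N, d) node embeddings; centers: (K, d) hypersphere centers.
        Each node is assigned to its nearest center and pulled toward it."""
        d2 = torch.cdist(z, centers) ** 2       # (N, K) squared distances
        return d2.min(dim=1).values.mean()

    def anomaly_score(z, centers):
        # Larger distance to every center means more anomalous.
        return (torch.cdist(z, centers) ** 2).min(dim=1).values

    # toy usage with random embeddings standing in for GNN outputs
    z = torch.randn(100, 16)
    centers = torch.randn(4, 16, requires_grad=True)
    loss = multi_hypersphere_loss(z, centers)
    scores = anomaly_score(z, centers.detach())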
2503.12047 | Hangrui Xu | Hangrui Xu, Chuanrui Zhang, Zhengxian Wu, Peng Jiao, Haoqian Wang | PSGait: Multimodal Gait Recognition using Parsing Skeleton | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gait recognition has emerged as a robust biometric modality due to its
non-intrusive nature and resilience to occlusion. Conventional gait recognition
methods typically rely on silhouettes or skeletons. Despite their success in
gait recognition for controlled laboratory environments, they usually fail in
real-world scenarios due to their limited information entropy for gait
representations. To achieve accurate gait recognition in the wild, we propose a
novel gait representation, named Parsing Skeleton. This representation
innovatively introduces the skeleton-guided human parsing method to capture
fine-grained body dynamics, so it has much higher information entropy to
encode the shapes and dynamics of fine-grained human parts during walking.
Moreover, to effectively explore the capability of the parsing skeleton
representation, we propose a novel parsing skeleton-based gait recognition
framework, named PSGait, which takes parsing skeletons and silhouettes as
input. By fusing these two modalities, the resulting image sequences are fed
into gait recognition models for enhanced individual differentiation. We
conduct comprehensive benchmarks on various datasets to evaluate our model.
PSGait outperforms existing state-of-the-art multimodal methods. Furthermore,
as a plug-and-play method, PSGait leads to a maximum improvement of 10.9% in
Rank-1 accuracy across various gait recognition models. These results
demonstrate the effectiveness and versatility of parsing skeletons for gait
recognition in the wild, establishing PSGait as a new state-of-the-art approach
for multimodal gait recognition.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 08:38:47 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Xu",
"Hangrui",
""
],
[
"Zhang",
"Chuanrui",
""
],
[
"Wu",
"Zhengxian",
""
],
[
"Jiao",
"Peng",
""
],
[
"Wang",
"Haoqian",
""
]
] | TITLE: PSGait: Multimodal Gait Recognition using Parsing Skeleton
ABSTRACT: Gait recognition has emerged as a robust biometric modality due to its
non-intrusive nature and resilience to occlusion. Conventional gait recognition
methods typically rely on silhouettes or skeletons. Despite their success in
gait recognition for controlled laboratory environments, they usually fail in
real-world scenarios due to their limited information entropy for gait
representations. To achieve accurate gait recognition in the wild, we propose a
novel gait representation, named Parsing Skeleton. This representation
innovatively introduces the skeleton-guided human parsing method to capture
fine-grained body dynamics, so it has much higher information entropy to
encode the shapes and dynamics of fine-grained human parts during walking.
Moreover, to effectively explore the capability of the parsing skeleton
representation, we propose a novel parsing skeleton-based gait recognition
framework, named PSGait, which takes parsing skeletons and silhouettes as
input. By fusing these two modalities, the resulting image sequences are fed
into gait recognition models for enhanced individual differentiation. We
conduct comprehensive benchmarks on various datasets to evaluate our model.
PSGait outperforms existing state-of-the-art multimodal methods. Furthermore,
as a plug-and-play method, PSGait leads to a maximum improvement of 10.9% in
Rank-1 accuracy across various gait recognition models. These results
demonstrate the effectiveness and versatility of parsing skeletons for gait
recognition in the wild, establishing PSGait as a new state-of-the-art approach
for multimodal gait recognition.
|
2503.12049 | Ruijie Lu | Ruijie Lu, Yixin Chen, Yu Liu, Jiaxiang Tang, Junfeng Ni, Diwen Wan,
Gang Zeng, Siyuan Huang | TACO: Taming Diffusion for in-the-wild Video Amodal Completion | Project page: https://jason-aplp.github.io/TACO | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans can infer complete shapes and appearances of objects from limited
visual cues, relying on extensive prior knowledge of the physical world.
However, completing partially observable objects while ensuring consistency
across video frames remains challenging for existing models, especially for
unstructured, in-the-wild videos. This paper tackles the task of Video Amodal
Completion (VAC), which aims to generate the complete object consistently
throughout the video given a visual prompt specifying the object of interest.
Leveraging the rich, consistent manifolds learned by pre-trained video
diffusion models, we propose a conditional diffusion model, TACO, that
repurposes these manifolds for VAC. To enable its effective and robust
generalization to challenging in-the-wild scenarios, we curate a large-scale
synthetic dataset with multiple difficulty levels by systematically imposing
occlusions onto un-occluded videos. Building on this, we devise a progressive
fine-tuning paradigm that starts with simpler recovery tasks and gradually
advances to more complex ones. We demonstrate TACO's versatility on a wide
range of in-the-wild videos from the Internet, as well as on diverse, unseen
datasets commonly used in autonomous driving, robotic manipulation, and scene
understanding. Moreover, we show that TACO can be effectively applied to
various downstream tasks like object reconstruction and pose estimation,
highlighting its potential to facilitate physical world understanding and
reasoning. Our project page is available at https://jason-aplp.github.io/TACO.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 08:47:45 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lu",
"Ruijie",
""
],
[
"Chen",
"Yixin",
""
],
[
"Liu",
"Yu",
""
],
[
"Tang",
"Jiaxiang",
""
],
[
"Ni",
"Junfeng",
""
],
[
"Wan",
"Diwen",
""
],
[
"Zeng",
"Gang",
""
],
[
"Huang",
"Siyuan",
""
]
] | TITLE: TACO: Taming Diffusion for in-the-wild Video Amodal Completion
ABSTRACT: Humans can infer complete shapes and appearances of objects from limited
visual cues, relying on extensive prior knowledge of the physical world.
However, completing partially observable objects while ensuring consistency
across video frames remains challenging for existing models, especially for
unstructured, in-the-wild videos. This paper tackles the task of Video Amodal
Completion (VAC), which aims to generate the complete object consistently
throughout the video given a visual prompt specifying the object of interest.
Leveraging the rich, consistent manifolds learned by pre-trained video
diffusion models, we propose a conditional diffusion model, TACO, that
repurposes these manifolds for VAC. To enable its effective and robust
generalization to challenging in-the-wild scenarios, we curate a large-scale
synthetic dataset with multiple difficulty levels by systematically imposing
occlusions onto un-occluded videos. Building on this, we devise a progressive
fine-tuning paradigm that starts with simpler recovery tasks and gradually
advances to more complex ones. We demonstrate TACO's versatility on a wide
range of in-the-wild videos from the Internet, as well as on diverse, unseen
datasets commonly used in autonomous driving, robotic manipulation, and scene
understanding. Moreover, we show that TACO can be effectively applied to
various downstream tasks like object reconstruction and pose estimation,
highlighting its potential to facilitate physical world understanding and
reasoning. Our project page is available at https://jason-aplp.github.io/TACO.
|
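The data-curation step described in the abstract above (2503.12049), imposing
occlusions onto un-occluded videos to create training pairs at several
difficulty levels, can be sketched as follows. The rectangular drifting
occluder and the coverage-based difficulty (assumed below 1.0) are assumptions
for illustration, not TACO's exact recipe.

    import numpy as np

    def occlude_video(frames, difficulty=0.3, seed=0):
        """frames: (T, H, W, C) uint8 array. Returns an occluded copy and masks;
        difficulty (< 1) scales the occluder's side length relative to the frame."""
        rng = np.random.default_rng(seed)
        T, H, W, _ = frames.shape
        occluded, masks = frames.copy(), np.zeros((T, H, W), dtype=bool)
        h, w = int(H * difficulty), int(W * difficulty)
        y, x = rng.integers(0, H - h), rng.integers(0, W - w)
        dy, dx = rng.integers(-2, 3, size=2)            # slow drift across frames
        for t in range(T):
            yt = int(np.clip(y + t * dy, 0, H - h))
            xt = int(np.clip(x + t * dx, 0, W - w))
            occluded[t, yt:yt + h, xt:xt + w] = 0       # opaque black occluder
            masks[t, yt:yt + h, xt:xt + w] = True
        return occluded, masks

    video = np.random.randint(0, 255, (8, 64, 64, 3), dtype=np.uint8)
    occluded, masks = occlude_video(video, difficulty=0.4)

A curriculum in the spirit of the abstract's progressive fine-tuning would
start training at small difficulty values and increase them over time.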
2503.12055 | Zhenhao Wang | Chuancheng Zhang, Zhenhao Wang, Jiangcheng Wang, Kun Su, Qiang Lv, Bin
Jiang, Kunkun Hao, Wenyu Wang | Generative Modeling of Adversarial Lane-Change Scenario | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision-making in long-tail scenarios is crucial to autonomous driving
development, with realistic and challenging simulations playing a pivotal role
in testing safety-critical situations. However, the current open-source
datasets do not systematically include long-tail distributed scenario data,
making acquiring such scenarios a formidable task. To address this problem, a
data mining framework is proposed, which performs in-depth analysis on two
widely-used datasets, NGSIM and INTERACTION, to pinpoint data with hazardous
behavioral traits, aiming to bridge the gap in these overlooked scenarios. The
approach utilizes Generative Adversarial Imitation Learning (GAIL) based on an
enhanced Proximal Policy Optimization (PPO) model, integrated with the
vehicle's environmental analysis, to iteratively refine and represent the newly
generated vehicle trajectory. Innovatively, the solution optimizes the
generation of adversarial scenario data from the perspectives of sensitivity
and adversarial reasonableness. It is demonstrated through experiments that,
compared to the unfiltered data and baseline models, the approach exhibits more
adversarial yet natural behavior regarding collision rate, acceleration, and
lane changes, thereby validating its suitability for generating scenario data
and providing constructive insights for the development of future scenarios and
subsequent decision training.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 09:05:04 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Chuancheng",
""
],
[
"Wang",
"Zhenhao",
""
],
[
"Wang",
"Jiangcheng",
""
],
[
"Su",
"Kun",
""
],
[
"Lv",
"Qiang",
""
],
[
"Jiang",
"Bin",
""
],
[
"Hao",
"Kunkun",
""
],
[
"Wang",
"Wenyu",
""
]
] | TITLE: Generative Modeling of Adversarial Lane-Change Scenario
ABSTRACT: Decision-making in long-tail scenarios is crucial to autonomous driving
development, with realistic and challenging simulations playing a pivotal role
in testing safety-critical situations. However, the current open-source
datasets do not systematically include long-tail distributed scenario data,
making acquiring such scenarios a formidable task. To address this problem, a
data mining framework is proposed, which performs in-depth analysis on two
widely-used datasets, NGSIM and INTERACTION, to pinpoint data with hazardous
behavioral traits, aiming to bridge the gap in these overlooked scenarios. The
approach utilizes Generative Adversarial Imitation Learning (GAIL) based on an
enhanced Proximal Policy Optimization (PPO) model, integrated with the
vehicle's environmental analysis, to iteratively refine and represent the newly
generated vehicle trajectory. Innovatively, the solution optimizes the
generation of adversarial scenario data from the perspectives of sensitivity
and adversarial reasonableness. It is demonstrated through experiments that,
compared to the unfiltered data and baseline models, the approach exhibits more
adversarial yet natural behavior regarding collision rate, acceleration, and
lane changes, thereby validating its suitability for generating scenario data
and providing constructive insights for the development of future scenarios and
subsequent decision training.
|
2503.12058 | Chenyang Zhao | Chenhao Lin, Chenyang Zhao, Shiwei Wang, Longtian Wang, Chao Shen,
Zhengyu Zhao | Revisiting Training-Inference Trigger Intensity in Backdoor Attacks | To Appear in the 34th USENIX Security Symposium (USENIX Security 25) | null | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Backdoor attacks typically place a specific trigger on certain training data,
such that the model makes prediction errors on inputs with that trigger during
inference. Despite the core role of the trigger, existing studies have commonly
believed that a perfect match between training and inference triggers is optimal. In
this paper, for the first time, we systematically explore the
training-inference trigger relation, particularly focusing on their mismatch,
based on a Training-Inference Trigger Intensity Manipulation (TITIM) workflow.
TITIM specifically investigates the training-inference trigger intensity, such
as the size or the opacity of a trigger, and reveals new insights into trigger
generalization and overfitting.
These new insights challenge the above common belief by demonstrating that
the training-inference trigger mismatch can facilitate attacks in two practical
scenarios, posing more significant security threats than previously thought.
First, when the inference trigger is fixed, using training triggers with mixed
intensities leads to stronger attacks than using any single intensity. For
example, on CIFAR-10 with ResNet-18, mixing training triggers with 1.0 and 0.1
opacities improves the worst-case attack success rate (ASR) (over different
testing opacities) of the best single-opacity attack from 10.61\% to 92.77\%.
Second, intentionally using certain mismatched training-inference triggers can
improve the attack stealthiness, i.e., better bypassing defenses. For example,
compared to the training/inference intensity of 1.0/1.0, using 1.0/0.7
decreases the area under the curve (AUC) of the Scale-Up defense from 0.96 to
0.62, while maintaining a high attack ASR (99.65\% vs. 91.62\%). The above new
insights are validated to be generalizable across different backdoor attacks,
models, datasets, tasks, and (digital/physical) domains.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 09:07:00 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lin",
"Chenhao",
""
],
[
"Zhao",
"Chenyang",
""
],
[
"Wang",
"Shiwei",
""
],
[
"Wang",
"Longtian",
""
],
[
"Shen",
"Chao",
""
],
[
"Zhao",
"Zhengyu",
""
]
] | TITLE: Revisiting Training-Inference Trigger Intensity in Backdoor Attacks
ABSTRACT: Backdoor attacks typically place a specific trigger on certain training data,
such that the model makes prediction errors on inputs with that trigger during
inference. Despite the core role of the trigger, existing studies have commonly
believed that a perfect match between training and inference triggers is optimal. In
this paper, for the first time, we systematically explore the
training-inference trigger relation, particularly focusing on their mismatch,
based on a Training-Inference Trigger Intensity Manipulation (TITIM) workflow.
TITIM specifically investigates the training-inference trigger intensity, such
as the size or the opacity of a trigger, and reveals new insights into trigger
generalization and overfitting.
These new insights challenge the above common belief by demonstrating that
the training-inference trigger mismatch can facilitate attacks in two practical
scenarios, posing more significant security threats than previously thought.
First, when the inference trigger is fixed, using training triggers with mixed
intensities leads to stronger attacks than using any single intensity. For
example, on CIFAR-10 with ResNet-18, mixing training triggers with 1.0 and 0.1
opacities improves the worst-case attack success rate (ASR) (over different
testing opacities) of the best single-opacity attack from 10.61\% to 92.77\%.
Second, intentionally using certain mismatched training-inference triggers can
improve the attack stealthiness, i.e., better bypassing defenses. For example,
compared to the training/inference intensity of 1.0/1.0, using 1.0/0.7
decreases the area under the curve (AUC) of the Scale-Up defense from 0.96 to
0.62, while maintaining a high attack ASR (99.65\% vs. 91.62\%). The above new
insights are validated to be generalizable across different backdoor attacks,
models, datasets, tasks, and (digital/physical) domains.
|
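The trigger-intensity notion in the abstract above (2503.12058), where opacity
controls how strongly a trigger is blended into an image and training can mix
several opacities, can be illustrated with a minimal sketch. The 4x4 corner
patch, the 10% poison rate, and the (1.0, 0.1) opacity mix are illustrative
assumptions rather than the full TITIM workflow.

    import numpy as np

    def apply_trigger(image, trigger, mask, opacity):
        """Blend a trigger into the masked region: x' = (1 - a*m) * x + a*m * t."""
        a = opacity * mask
        return ((1.0 - a) * image + a * trigger).astype(image.dtype)

    def poison_dataset(images, labels, trigger, mask, target_label,
                       opacities=(1.0, 0.1), poison_rate=0.1, seed=0):
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), int(poison_rate * len(images)), replace=False)
        for i in idx:
            a = opacities[rng.integers(len(opacities))]   # mix training intensities
            images[i] = apply_trigger(images[i], trigger, mask, a)
            labels[i] = target_label
        return images, labels

    # toy CIFAR-like data with a white 4x4 corner patch as the trigger
    x = np.random.rand(1000, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, 1000)
    trigger = np.ones((32, 32, 3), dtype=np.float32)
    mask = np.zeros((32, 32, 1), dtype=np.float32)
    mask[-4:, -4:] = 1.0
    x_poisoned, y_poisoned = poison_dataset(x, y, trigger, mask, target_label=0)

At inference time the same apply_trigger call with a different opacity gives
the mismatched training/inference intensities studied in the abstract.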
2503.12061 | Yuqing Yan | Yuqing Yan, Yirui Wu | EHNet: An Efficient Hybrid Network for Crowd Counting and Localization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, crowd counting and localization have become crucial
techniques in computer vision, with applications spanning various domains. The
presence of multi-scale crowd distributions within a single image remains a
fundamental challenge in crowd counting tasks. To address these challenges, we
introduce the Efficient Hybrid Network (EHNet), a novel framework for efficient
crowd counting and localization. By reformulating crowd counting into a point
regression framework, EHNet leverages the Spatial-Position Attention Module
(SPAM) to capture comprehensive spatial contexts and long-range dependencies.
Additionally, we develop an Adaptive Feature Aggregation Module (AFAM) to
effectively fuse and harmonize multi-scale feature representations. Building
upon these, we introduce the Multi-Scale Attentive Decoder (MSAD). Experimental
results on four benchmark datasets demonstrate that EHNet achieves competitive
performance with reduced computational overhead, outperforming existing methods
on ShanghaiTech Part \_A, ShanghaiTech Part \_B, UCF-CC-50, and UCF-QNRF. Our
code is available at https://anonymous.4open.science/r/EHNet.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 09:18:47 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yan",
"Yuqing",
""
],
[
"Wu",
"Yirui",
""
]
] | TITLE: EHNet: An Efficient Hybrid Network for Crowd Counting and Localization
ABSTRACT: In recent years, crowd counting and localization have become crucial
techniques in computer vision, with applications spanning various domains. The
presence of multi-scale crowd distributions within a single image remains a
fundamental challenge in crowd counting tasks. To address these challenges, we
introduce the Efficient Hybrid Network (EHNet), a novel framework for efficient
crowd counting and localization. By reformulating crowd counting into a point
regression framework, EHNet leverages the Spatial-Position Attention Module
(SPAM) to capture comprehensive spatial contexts and long-range dependencies.
Additionally, we develop an Adaptive Feature Aggregation Module (AFAM) to
effectively fuse and harmonize multi-scale feature representations. Building
upon these, we introduce the Multi-Scale Attentive Decoder (MSAD). Experimental
results on four benchmark datasets demonstrate that EHNet achieves competitive
performance with reduced computational overhead, outperforming existing methods
on ShanghaiTech Part \_A, ShanghaiTech Part \_B, UCF-CC-50, and UCF-QNRF. Our
code is available at https://anonymous.4open.science/r/EHNet.
|
2503.12062 | Vineet Kumar | Vineet Kumar, Ronald Tony, Darshita Rathore, Vipasha Rana, Bhuvanesh
Mandora, Kanishka, Chetna Bansal, and Anindya Moitra | Genicious: Contextual Few-shot Prompting for Insights Discovery | 5 pages, 3 figures, CODS-COMAD Dec 24, Jodhpur, India | null | 10.1145/3703323.3704274 | null | cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data and insights discovery is critical for decision-making in modern
organizations. We present Genicious, an LLM-aided interface that enables users
to interact with tabular datasets and ask complex queries in natural language.
By benchmarking various prompting strategies and language models, we have
developed an end-to-end tool that leverages contextual few-shot prompting,
achieving superior performance in terms of latency, accuracy, and scalability.
Genicious empowers stakeholders to explore, analyze and visualize their
datasets efficiently while ensuring data security through role-based access
control and a Text-to-SQL approach.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 09:27:59 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kumar",
"Vineet",
""
],
[
"Tony",
"Ronald",
""
],
[
"Rathore",
"Darshita",
""
],
[
"Rana",
"Vipasha",
""
],
[
"Mandora",
"Bhuvanesh",
""
],
[
"Kanishka",
"",
""
],
[
"Bansal",
"Chetna",
""
],
[
"Moitra",
"Anindya",
""
]
] | TITLE: Genicious: Contextual Few-shot Prompting for Insights Discovery
ABSTRACT: Data and insights discovery is critical for decision-making in modern
organizations. We present Genicious, an LLM-aided interface that enables users
to interact with tabular datasets and ask complex queries in natural language.
By benchmarking various prompting strategies and language models, we have
developed an end-to-end tool that leverages contextual few-shot prompting,
achieving superior performance in terms of latency, accuracy, and scalability.
Genicious empowers stakeholders to explore, analyze and visualize their
datasets efficiently while ensuring data security through role-based access
control and a Text-to-SQL approach.
|
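Contextual few-shot prompting for Text-to-SQL, as used by Genicious above
(2503.12062), can be sketched as selecting the most relevant exemplars for a
question and packing them into the prompt together with the table schema. The
token-overlap selection, the exemplar pool, and the prompt layout below are
assumptions for illustration, not the tool's exact strategy.

    EXEMPLARS = [
        ("How many orders were placed in 2024?",
         "SELECT COUNT(*) FROM orders WHERE strftime('%Y', order_date) = '2024';"),
        ("List the top 5 customers by total spend.",
         "SELECT customer_id, SUM(amount) AS spend FROM orders "
         "GROUP BY customer_id ORDER BY spend DESC LIMIT 5;"),
    ]

    def pick_exemplars(question, exemplars, k=2):
        # Rank exemplars by word overlap with the incoming question.
        q = set(question.lower().split())
        scored = sorted(exemplars,
                        key=lambda ex: len(q & set(ex[0].lower().split())),
                        reverse=True)
        return scored[:k]

    def build_prompt(schema, question, k=2):
        shots = pick_exemplars(question, EXEMPLARS, k)
        lines = ["You translate questions into SQL for the schema below.",
                 "Schema: " + schema, ""]
        for q, sql in shots:
            lines += ["Q: " + q, "SQL: " + sql, ""]
        lines += ["Q: " + question, "SQL:"]
        return "\n".join(lines)

    print(build_prompt("orders(order_id, customer_id, order_date, amount)",
                       "What is the total spend per customer in 2024?"))

The resulting string would then be sent to the chosen language model, and the
returned SQL executed under role-based access control as the abstract notes.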
2503.12063 | Yuqing Yan | Yuqing Yan, Yirui Wu | DLA-Count: Dynamic Label Assignment Network for Dense Cell Distribution
Counting | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell counting remains a fundamental yet challenging task in medical and
biological research due to the diverse morphology of cells, their dense
distribution, and variations in image quality. We present DLA-Count, a
breakthrough approach to cell counting that introduces three key innovations:
(1) K-adjacent Hungarian Matching (KHM), which dramatically improves cell
matching in dense regions, (2) Multi-scale Deformable Gaussian Convolution
(MDGC), which adapts to varying cell morphologies, and (3) Gaussian-enhanced
Feature Decoder (GFD) for efficient multi-scale feature fusion. Our extensive
experiments on four challenging cell counting datasets (ADI, MBM, VGG, and DCC)
demonstrate that our method outperforms previous methods across diverse
datasets, with improvements in Mean Absolute Error of up to 46.7\% on ADI and
42.5\% on MBM datasets. Our code is available at
https://anonymous.4open.science/r/DLA-Count.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 09:32:42 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yan",
"Yuqing",
""
],
[
"Wu",
"Yirui",
""
]
] | TITLE: DLA-Count: Dynamic Label Assignment Network for Dense Cell Distribution
Counting
ABSTRACT: Cell counting remains a fundamental yet challenging task in medical and
biological research due to the diverse morphology of cells, their dense
distribution, and variations in image quality. We present DLA-Count, a
breakthrough approach to cell counting that introduces three key innovations:
(1) K-adjacent Hungarian Matching (KHM), which dramatically improves cell
matching in dense regions, (2) Multi-scale Deformable Gaussian Convolution
(MDGC), which adapts to varying cell morphologies, and (3) Gaussian-enhanced
Feature Decoder (GFD) for efficient multi-scale feature fusion. Our extensive
experiments on four challenging cell counting datasets (ADI, MBM, VGG, and DCC)
demonstrate that our method outperforms previous methods across diverse
datasets, with improvements in Mean Absolute Error of up to 46.7\% on ADI and
42.5\% on MBM datasets. Our code is available at
https://anonymous.4open.science/r/DLA-Count.
|
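One plausible reading of the K-adjacent Hungarian Matching (KHM) mentioned
above (2503.12063) is a Hungarian assignment whose cost matrix only admits each
ground-truth point's K nearest predictions. The sketch below implements that
reading and is not necessarily the paper's exact formulation.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def k_adjacent_match(pred_pts, gt_pts, k=3, big=1e6):
        """pred_pts: (P, 2); gt_pts: (G, 2). Returns matched (pred_idx, gt_idx)."""
        cost = cdist(gt_pts, pred_pts)                  # (G, P) distances
        masked = np.full_like(cost, big)
        nearest = np.argsort(cost, axis=1)[:, :k]       # K nearest preds per GT
        rows = np.arange(len(gt_pts))[:, None]
        masked[rows, nearest] = cost[rows, nearest]     # only K-adjacent pairs allowed
        gt_idx, pred_idx = linear_sum_assignment(masked)
        keep = masked[gt_idx, pred_idx] < big           # drop infeasible assignments
        return pred_idx[keep], gt_idx[keep]

    preds = np.random.rand(40, 2)
    gts = np.random.rand(30, 2)
    pred_idx, gt_idx = k_adjacent_match(preds, gts, k=3)

Restricting candidates to the K nearest predictions keeps the assignment local,
which is the property the abstract credits for better matching in dense regions.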
2503.12066 | Yuetong Yu | Yuetong Yu, Ruiyang Ge, Ilker Hacihaliloglu, Alexander Rauscher, Roger
Tam, Sophia Frangou | Impact of Data Patterns on Biotype identification Using Machine Learning | null | null | null | null | cs.LG q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Background: Patient stratification in brain disorders remains a significant
challenge, despite advances in machine learning and multimodal neuroimaging.
Automated machine learning algorithms have been widely applied for identifying
patient subtypes (biotypes), but results have been inconsistent across studies.
These inconsistencies are often attributed to algorithmic limitations, yet an
overlooked factor may be the statistical properties of the input data. This
study investigates the contribution of data patterns to algorithm performance
by leveraging synthetic brain morphometry data as an exemplar.
Methods: Four widely used algorithms (SuStaIn, HYDRA, SmileGAN, and SurrealGAN)
were evaluated using multiple synthetic pseudo-patient datasets designed to
include varying numbers and sizes of clusters and degrees of complexity of
morphometric changes. Ground truth, representing predefined clusters, allowed
for the evaluation of performance accuracy across algorithms and datasets.
Results: SuStaIn failed to process datasets with more than 17 variables,
highlighting computational inefficiencies. HYDRA was able to perform
individual-level classification in multiple datasets with no clear pattern
explaining failures. SmileGAN and SurrealGAN outperformed other algorithms in
identifying variable-based disease patterns, but these patterns were not able
to provide individual-level classification.
Conclusions: Dataset characteristics significantly influence algorithm
performance, often more than algorithmic design. The findings emphasize the
need for rigorous validation using synthetic data before real-world application
and highlight the limitations of current clustering approaches in capturing the
heterogeneity of brain disorders. These insights extend beyond neuroimaging and
have implications for machine learning applications in biomedical research.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 09:44:00 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yu",
"Yuetong",
""
],
[
"Ge",
"Ruiyang",
""
],
[
"Hacihaliloglu",
"Ilker",
""
],
[
"Rauscher",
"Alexander",
""
],
[
"Tam",
"Roger",
""
],
[
"Frangou",
"Sophia",
""
]
] | TITLE: Impact of Data Patterns on Biotype identification Using Machine Learning
ABSTRACT: Background: Patient stratification in brain disorders remains a significant
challenge, despite advances in machine learning and multimodal neuroimaging.
Automated machine learning algorithms have been widely applied for identifying
patient subtypes (biotypes), but results have been inconsistent across studies.
These inconsistencies are often attributed to algorithmic limitations, yet an
overlooked factor may be the statistical properties of the input data. This
study investigates the contribution of data patterns to algorithm performance
by leveraging synthetic brain morphometry data as an exemplar.
Methods: Four widely used algorithms (SuStaIn, HYDRA, SmileGAN, and SurrealGAN)
were evaluated using multiple synthetic pseudo-patient datasets designed to
include varying numbers and sizes of clusters and degrees of complexity of
morphometric changes. Ground truth, representing predefined clusters, allowed
for the evaluation of performance accuracy across algorithms and datasets.
Results: SuStaIn failed to process datasets with more than 17 variables,
highlighting computational inefficiencies. HYDRA was able to perform
individual-level classification in multiple datasets with no clear pattern
explaining failures. SmileGAN and SurrealGAN outperformed other algorithms in
identifying variable-based disease patterns, but these patterns were not able
to provide individual-level classification.
Conclusions: Dataset characteristics significantly influence algorithm
performance, often more than algorithmic design. The findings emphasize the
need for rigorous validation using synthetic data before real-world application
and highlight the limitations of current clustering approaches in capturing the
heterogeneity of brain disorders. These insights extend beyond neuroimaging and
have implications for machine learning applications in biomedical research.
|
2503.12068 | Qingchen Tang | Qingchen Tang, Lei Fan, Maurice Pagnucco, Yang Song | Prototype-Based Image Prompting for Weakly Supervised Histopathological
Image Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weakly supervised image segmentation with image-level labels has drawn
attention due to the high cost of pixel-level annotations. Traditional methods
using Class Activation Maps (CAMs) often highlight only the most discriminative
regions, leading to incomplete masks. Recent approaches that introduce textual
information struggle with histopathological images due to inter-class
homogeneity and intra-class heterogeneity. In this paper, we propose a
prototype-based image prompting framework for histopathological image
segmentation. It constructs an image bank from the training set using
clustering, extracting multiple prototype features per class to capture
intra-class heterogeneity. By designing a matching loss between input features
and class-specific prototypes using contrastive learning, our method addresses
inter-class homogeneity and guides the model to generate more accurate CAMs.
Experiments on four datasets (LUAD-HistoSeg, BCSS-WSSS, GCSS, and BCSS) show
that our method outperforms existing weakly supervised segmentation approaches,
setting new benchmarks in histopathological image segmentation.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 09:55:31 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Tang",
"Qingchen",
""
],
[
"Fan",
"Lei",
""
],
[
"Pagnucco",
"Maurice",
""
],
[
"Song",
"Yang",
""
]
] | TITLE: Prototype-Based Image Prompting for Weakly Supervised Histopathological
Image Segmentation
ABSTRACT: Weakly supervised image segmentation with image-level labels has drawn
attention due to the high cost of pixel-level annotations. Traditional methods
using Class Activation Maps (CAMs) often highlight only the most discriminative
regions, leading to incomplete masks. Recent approaches that introduce textual
information struggle with histopathological images due to inter-class
homogeneity and intra-class heterogeneity. In this paper, we propose a
prototype-based image prompting framework for histopathological image
segmentation. It constructs an image bank from the training set using
clustering, extracting multiple prototype features per class to capture
intra-class heterogeneity. By designing a matching loss between input features
and class-specific prototypes using contrastive learning, our method addresses
inter-class homogeneity and guides the model to generate more accurate CAMs.
Experiments on four datasets (LUAD-HistoSeg, BCSS-WSSS, GCSS, and BCSS) show
that our method outperforms existing weakly supervised segmentation approaches,
setting new benchmarks in histopathological image segmentation.
|
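The prototype bank and matching loss described above (2503.12068) can be
sketched as class-wise clustering of training features followed by an
InfoNCE-style term that pulls an image feature toward its own class prototypes
and away from the others. Feature dimensionality, the number of prototypes per
class, and the temperature are illustrative assumptions.

    import numpy as np
    import torch
    import torch.nn.functional as F
    from sklearn.cluster import KMeans

    def build_prototype_bank(features, labels, protos_per_class=3):
        """features: (N, d) array; labels: (N,). Returns dict class -> (P, d)."""
        bank = {}
        for c in np.unique(labels):
            feats_c = features[labels == c]
            k = min(protos_per_class, len(feats_c))
            bank[int(c)] = KMeans(n_clusters=k, n_init=10).fit(feats_c).cluster_centers_
        return bank

    def prototype_matching_loss(feat, label, bank, tau=0.1):
        """Pull feat toward its own class prototypes, push it from the rest."""
        protos = torch.cat([torch.tensor(bank[c], dtype=torch.float32)
                            for c in sorted(bank)], dim=0)
        owner = torch.cat([torch.full((len(bank[c]),), c) for c in sorted(bank)])
        sim = F.cosine_similarity(feat.expand_as(protos), protos, dim=1) / tau
        pos = torch.logsumexp(sim[owner == label], dim=0)
        return -(pos - torch.logsumexp(sim, dim=0))

    feats = np.random.randn(200, 64).astype(np.float32)
    labels = np.random.randint(0, 4, 200)
    bank = build_prototype_bank(feats, labels)
    loss = prototype_matching_loss(torch.randn(64), int(labels[0]), bank)

Multiple prototypes per class are what let the bank represent intra-class
heterogeneity, as the abstract emphasizes for histopathological tissue.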
2503.12069 | Wei Lai | Wei Lai, Tianyu Ding, ren dongdong, Lei Wang, Jing Huo, Yang Gao,
Wenbin Li | Robust Dataset Distillation by Matching Adversarial Trajectories | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dataset distillation synthesizes compact datasets that enable models to
achieve performance comparable to training on the original large-scale
datasets. However, existing distillation methods overlook the robustness of the
model, resulting in models that are vulnerable to adversarial attacks when
trained on distilled data. To address this limitation, we introduce the task of
``robust dataset distillation'', a novel paradigm that embeds adversarial
robustness into the synthetic datasets during the distillation process. We
propose Matching Adversarial Trajectories (MAT), a method that integrates
adversarial training into trajectory-based dataset distillation. MAT
incorporates adversarial samples during trajectory generation to obtain robust
training trajectories, which are then used to guide the distillation process.
As experimentally demonstrated, even through natural training on our distilled
dataset, models can achieve enhanced adversarial robustness while maintaining
competitive accuracy compared to existing distillation methods. Our work
highlights robust dataset distillation as a new and important research
direction and provides a strong baseline for future research to bridge the gap
between efficient training and adversarial robustness.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 10:02:38 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lai",
"Wei",
""
],
[
"Ding",
"Tianyu",
""
],
[
"dongdong",
"ren",
""
],
[
"Wang",
"Lei",
""
],
[
"Huo",
"Jing",
""
],
[
"Gao",
"Yang",
""
],
[
"Li",
"Wenbin",
""
]
] | TITLE: Robust Dataset Distillation by Matching Adversarial Trajectories
ABSTRACT: Dataset distillation synthesizes compact datasets that enable models to
achieve performance comparable to training on the original large-scale
datasets. However, existing distillation methods overlook the robustness of the
model, resulting in models that are vulnerable to adversarial attacks when
trained on distilled data. To address this limitation, we introduce the task of
``robust dataset distillation'', a novel paradigm that embeds adversarial
robustness into the synthetic datasets during the distillation process. We
propose Matching Adversarial Trajectories (MAT), a method that integrates
adversarial training into trajectory-based dataset distillation. MAT
incorporates adversarial samples during trajectory generation to obtain robust
training trajectories, which are then used to guide the distillation process.
As experimentally demonstrated, even through natural training on our distilled
dataset, models can achieve enhanced adversarial robustness while maintaining
competitive accuracy compared to existing distillation methods. Our work
highlights robust dataset distillation as a new and important research
direction and provides a strong baseline for future research to bridge the gap
between efficient training and adversarial robustness.
|
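Matching Adversarial Trajectories, as summarized above (2503.12069), requires
expert trajectories recorded under adversarial training. A minimal sketch of
that recording step is given below, using FGSM perturbations and a tiny
classifier as stand-ins; the distillation stage that later matches these
parameter snapshots is omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.03):
        """One-step adversarial example: x + eps * sign(grad_x loss)."""
        x = x.clone().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).detach()

    def record_adversarial_trajectory(model, loader, epochs=2, lr=0.01, eps=0.03):
        """Returns parameter snapshots taken while training on FGSM samples."""
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        trajectory = [[p.detach().clone() for p in model.parameters()]]
        for _ in range(epochs):
            for x, y in loader:
                x_adv = fgsm(model, x, y, eps)            # adversarial counterpart
                opt.zero_grad()
                F.cross_entropy(model(x_adv), y).backward()
                opt.step()
            trajectory.append([p.detach().clone() for p in model.parameters()])
        return trajectory

    model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
    batches = [(torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,)))
               for _ in range(4)]
    trajectory = record_adversarial_trajectory(model, batches)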
2503.12087 | Gino Jansen | Gino E. Jansen, Mark J. Schuuring, Berto J. Bouma, Ivana I\v{s}gum | Temporally Consistent Mitral Annulus Measurements from Sparse
Annotations in Echocardiographic Videos | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This work presents a novel approach to achieving temporally consistent mitral
annulus landmark localization in echocardiography videos using sparse
annotations. Our method introduces a self-supervised loss term that enforces
temporal consistency between neighboring frames, which smooths the position of
landmarks and enhances measurement accuracy over time. Additionally, we
incorporate realistic field-of-view augmentations to improve the recognition of
missing anatomical landmarks. We evaluate our approach on both a public and
private dataset, and demonstrate significant improvements in Mitral Annular
Plane Systolic Excursion (MAPSE) calculations and overall landmark tracking
stability. The method achieves a mean absolute MAPSE error of 1.81 $\pm$ 0.14
mm, an annulus size error of 2.46 $\pm$ 0.31 mm, and a landmark localization
error of 2.48 $\pm$ 0.07 mm. Finally, it achieves a 0.99 ROC-AUC for
recognition of missing landmarks.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 11:26:44 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jansen",
"Gino E.",
""
],
[
"Schuuring",
"Mark J.",
""
],
[
"Bouma",
"Berto J.",
""
],
[
"Išgum",
"Ivana",
""
]
] | TITLE: Temporally Consistent Mitral Annulus Measurements from Sparse
Annotations in Echocardiographic Videos
ABSTRACT: This work presents a novel approach to achieving temporally consistent mitral
annulus landmark localization in echocardiography videos using sparse
annotations. Our method introduces a self-supervised loss term that enforces
temporal consistency between neighboring frames, which smooths the position of
landmarks and enhances measurement accuracy over time. Additionally, we
incorporate realistic field-of-view augmentations to improve the recognition of
missing anatomical landmarks. We evaluate our approach on both a public and
private dataset, and demonstrate significant improvements in Mitral Annular
Plane Systolic Excursion (MAPSE) calculations and overall landmark tracking
stability. The method achieves a mean absolute MAPSE error of 1.81 $\pm$ 0.14
mm, an annulus size error of 2.46 $\pm$ 0.31 mm, and a landmark localization
error of 2.48 $\pm$ 0.07 mm. Finally, it achieves a 0.99 ROC-AUC for
recognition of missing landmarks.
|
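The self-supervised temporal-consistency term described above (2503.12087) can
be sketched as a penalty on frame-to-frame landmark displacement, combined with
a supervised term on the sparsely annotated frames. The quadratic penalty and
the weighting factor are illustrative assumptions.

    import torch

    def temporal_consistency_loss(pred):
        """pred: (T, L, 2) landmark coordinates over T consecutive frames.
        Penalizes frame-to-frame jumps to smooth the tracked positions."""
        return (pred[1:] - pred[:-1]).pow(2).sum(dim=-1).mean()

    def total_loss(pred, gt, annotated, lam=0.1):
        """gt: (T, L, 2); annotated: (T,) bool mask of frames with labels."""
        supervised = (pred[annotated] - gt[annotated]).pow(2).mean()
        return supervised + lam * temporal_consistency_loss(pred)

    pred = torch.randn(10, 2, 2, requires_grad=True)   # 10 frames, 2 landmarks
    gt = torch.randn(10, 2, 2)
    annotated = torch.zeros(10, dtype=torch.bool)
    annotated[::5] = True                              # sparse annotations
    total_loss(pred, gt, annotated).backward()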
2503.12093 | Oren Shrout | Oren Shrout and Ayellet Tal | SFMNet: Sparse Focal Modulation for 3D Object Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose SFMNet, a novel 3D sparse detector that combines the efficiency of
sparse convolutions with the ability to model long-range dependencies. While
traditional sparse convolution techniques efficiently capture local structures,
they struggle with modeling long-range relationships. However, capturing
long-range dependencies is fundamental for 3D object detection. In contrast,
transformers are designed to capture these long-range dependencies through
attention mechanisms. However, they come with high computational costs due to
their quadratic query-key-value interactions. Furthermore, directly applying
attention to non-empty voxels is inefficient due to the sparse nature of 3D
scenes. Our SFMNet is built on a novel Sparse Focal Modulation (SFM) module,
which integrates short- and long-range contexts with linear complexity by
leveraging a new hierarchical sparse convolution design. This approach enables
SFMNet to achieve high detection performance with improved efficiency, making
it well-suited for large-scale LiDAR scenes. We show that our detector achieves
state-of-the-art performance on autonomous driving datasets.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 11:40:58 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Shrout",
"Oren",
""
],
[
"Tal",
"Ayellet",
""
]
] | TITLE: SFMNet: Sparse Focal Modulation for 3D Object Detection
ABSTRACT: We propose SFMNet, a novel 3D sparse detector that combines the efficiency of
sparse convolutions with the ability to model long-range dependencies. While
traditional sparse convolution techniques efficiently capture local structures,
they struggle with modeling long-range relationships. However, capturing
long-range dependencies is fundamental for 3D object detection. In contrast,
transformers are designed to capture these long-range dependencies through
attention mechanisms. However, they come with high computational costs due to
their quadratic query-key-value interactions. Furthermore, directly applying
attention to non-empty voxels is inefficient due to the sparse nature of 3D
scenes. Our SFMNet is built on a novel Sparse Focal Modulation (SFM) module,
which integrates short- and long-range contexts with linear complexity by
leveraging a new hierarchical sparse convolution design. This approach enables
SFMNet to achieve high detection performance with improved efficiency, making
it well-suited for large-scale LiDAR scenes. We show that our detector achieves
state-of-the-art performance on autonomous driving datasets.
|
2503.12094 | Weiming Zhang | Weiming Zhang, Dingwen Xiao, Lei Chen, Lin Wang | E-SAM: Training-Free Segment Every Entity Model | Under review | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Entity Segmentation (ES) aims at identifying and segmenting distinct entities
within an image without the need for predefined class labels. This
characteristic makes ES well-suited to open-world applications with adaptation
to diverse and dynamically changing environments, where new and previously
unseen entities may appear frequently. Existing ES methods either require large
annotated datasets or high training costs, limiting their scalability and
adaptability. Recently, the Segment Anything Model (SAM), especially in its
Automatic Mask Generation (AMG) mode, has shown potential for holistic image
segmentation. However, it struggles with over-segmentation and
under-segmentation, making it less effective for ES. In this paper, we
introduce E-SAM, a novel training-free framework that exhibits exceptional ES
capability. Specifically, we first propose Multi-level Mask Generation (MMG)
that hierarchically processes SAM's AMG outputs to generate reliable
object-level masks while preserving fine details at other levels. Entity-level
Mask Refinement (EMR) then refines these object-level masks into accurate
entity-level masks. That is, it separates overlapping masks to address the
redundancy issues inherent in SAM's outputs and merges similar masks by
evaluating entity-level consistency. Lastly, Under-Segmentation Refinement
(USR) addresses under-segmentation by generating additional high-confidence
masks fused with EMR outputs to produce the final ES map. These three modules
are seamlessly optimized to achieve the best ES without additional training
overhead. Extensive experiments demonstrate that E-SAM achieves
state-of-the-art performance compared to prior ES methods, demonstrating a
significant improvement of +30.1 on benchmark metrics.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 11:41:33 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Weiming",
""
],
[
"Xiao",
"Dingwen",
""
],
[
"Chen",
"Lei",
""
],
[
"Wang",
"Lin",
""
]
] | TITLE: E-SAM: Training-Free Segment Every Entity Model
ABSTRACT: Entity Segmentation (ES) aims at identifying and segmenting distinct entities
within an image without the need for predefined class labels. This
characteristic makes ES well-suited to open-world applications with adaptation
to diverse and dynamically changing environments, where new and previously
unseen entities may appear frequently. Existing ES methods either require large
annotated datasets or high training costs, limiting their scalability and
adaptability. Recently, the Segment Anything Model (SAM), especially in its
Automatic Mask Generation (AMG) mode, has shown potential for holistic image
segmentation. However, it struggles with over-segmentation and
under-segmentation, making it less effective for ES. In this paper, we
introduce E-SAM, a novel training-free framework that exhibits exceptional ES
capability. Specifically, we first propose Multi-level Mask Generation (MMG)
that hierarchically processes SAM's AMG outputs to generate reliable
object-level masks while preserving fine details at other levels. Entity-level
Mask Refinement (EMR) then refines these object-level masks into accurate
entity-level masks. That is, it separates overlapping masks to address the
redundancy issues inherent in SAM's outputs and merges similar masks by
evaluating entity-level consistency. Lastly, Under-Segmentation Refinement
(USR) addresses under-segmentation by generating additional high-confidence
masks fused with EMR outputs to produce the final ES map. These three modules
are seamlessly optimized to achieve the best ES without additional training
overhead. Extensive experiments demonstrate that E-SAM achieves
state-of-the-art performance compared to prior ES methods, demonstrating a
significant improvement of +30.1 on benchmark metrics.
|
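Merging similar masks by evaluating their consistency, as the Entity-level Mask
Refinement above (2503.12094) does for redundant SAM outputs, can be
approximated by a greedy IoU-based union; the 0.8 threshold and the greedy
strategy are assumptions for illustration, not E-SAM's EMR procedure.

    import numpy as np

    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0

    def merge_similar_masks(masks, thr=0.8):
        """masks: list of (H, W) bool arrays. Greedily unions masks whose IoU
        with an existing group exceeds thr."""
        merged = []
        for m in masks:
            for i, g in enumerate(merged):
                if iou(m, g) > thr:
                    merged[i] = np.logical_or(g, m)
                    break
            else:
                merged.append(m.copy())
        return merged

    masks = [np.random.rand(64, 64) > 0.5 for _ in range(5)]
    print(len(merge_similar_masks(masks)))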
2503.12095 | Walter Zimmer | Walter Zimmer, Ross Greer, Daniel Lehmberg, Marc Pavel, Holger Caesar,
Xingcheng Zhou, Ahmed Ghita, Mohan Trivedi, Rui Song, Hu Cao, Akshay
Gopalkrishnan, Alois C. Knoll | Towards Vision Zero: The Accid3nD Dataset | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Even though a significant amount of work has been done to increase the safety
of transportation networks, accidents still occur regularly. They must be
understood as unavoidable and sporadic outcomes of traffic networks. No public
dataset contains 3D annotations of real-world accidents recorded from roadside
sensors. We present the Accid3nD dataset, a collection of real-world highway
accidents in different weather and lighting conditions. It contains vehicle
crashes during high-speed driving, with 2,634,233 labeled 2D bounding boxes,
instance masks, and 3D bounding boxes with track IDs. In total, the dataset
contains 111,945 labeled frames recorded from four roadside cameras and LiDARs
at 25 Hz. The dataset contains six object classes and is provided in the
OpenLABEL format. We propose an accident detection model that combines a
rule-based approach with a learning-based one. Experiments and ablation studies
on our dataset show the robustness of our proposed method. The dataset, model,
and code are available on our website: https://accident-dataset.github.io.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 11:42:16 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zimmer",
"Walter",
""
],
[
"Greer",
"Ross",
""
],
[
"Lehmberg",
"Daniel",
""
],
[
"Pavel",
"Marc",
""
],
[
"Caesar",
"Holger",
""
],
[
"Zhou",
"Xingcheng",
""
],
[
"Ghita",
"Ahmed",
""
],
[
"Trivedi",
"Mohan",
""
],
[
"Song",
"Rui",
""
],
[
"Cao",
"Hu",
""
],
[
"Gopalkrishnan",
"Akshay",
""
],
[
"Knoll",
"Alois C.",
""
]
] | TITLE: Towards Vision Zero: The Accid3nD Dataset
ABSTRACT: Even though a significant amount of work has been done to increase the safety
of transportation networks, accidents still occur regularly. They must be
understood as unavoidable and sporadic outcomes of traffic networks. No public
dataset contains 3D annotations of real-world accidents recorded from roadside
sensors. We present the Accid3nD dataset, a collection of real-world highway
accidents in different weather and lighting conditions. It contains vehicle
crashes during high-speed driving, with 2,634,233 labeled 2D bounding boxes,
instance masks, and 3D bounding boxes with track IDs. In total, the dataset
contains 111,945 labeled frames recorded from four roadside cameras and LiDARs
at 25 Hz. The dataset contains six object classes and is provided in the
OpenLABEL format. We propose an accident detection model that combines a
rule-based approach with a learning-based one. Experiments and ablation studies
on our dataset show the robustness of our proposed method. The dataset, model,
and code are available on our website: https://accident-dataset.github.io.
|
2503.12096 | Ashshak Sharifdeen | Ashshak Sharifdeen, Muhammad Akhtar Munir, Sanoojan Baliah, Salman
Khan, Muhammad Haris Khan | O-TPT: Orthogonality Constraints for Calibrating Test-time Prompt Tuning
in Vision-Language Models | Accepted at CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Test-time prompt tuning for vision-language models (VLMs) is getting
attention because of their ability to learn with unlabeled data without
fine-tuning. Although test-time prompt tuning methods for VLMs can boost
accuracy, the resulting models tend to demonstrate poor calibration, which
casts doubts on the reliability and trustworthiness of these models. Notably,
more attention needs to be devoted to calibrating the test-time prompt tuning
in vision-language models. To this end, we propose a new approach, called O-TPT
that introduces orthogonality constraints on the textual features corresponding
to the learnable prompts for calibrating test-time prompt tuning in VLMs.
Towards introducing orthogonality constraints, we make the following
contributions. First, we uncover new insights behind the suboptimal calibration
performance of existing methods relying on textual feature dispersion. Second,
we show that imposing a simple orthogonalization of textual features is a more
effective approach towards obtaining textual dispersion. We conduct extensive
experiments on various datasets with different backbones and baselines. The
results indicate that our method consistently outperforms the prior state of
the art in significantly reducing the overall average calibration error. Also,
our method surpasses the zero-shot calibration performance on fine-grained
classification tasks.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 11:45:54 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sharifdeen",
"Ashshak",
""
],
[
"Munir",
"Muhammad Akhtar",
""
],
[
"Baliah",
"Sanoojan",
""
],
[
"Khan",
"Salman",
""
],
[
"Khan",
"Muhammad Haris",
""
]
] | TITLE: O-TPT: Orthogonality Constraints for Calibrating Test-time Prompt Tuning
in Vision-Language Models
ABSTRACT: Test-time prompt tuning for vision-language models (VLMs) is getting
attention because of their ability to learn with unlabeled data without
fine-tuning. Although test-time prompt tuning methods for VLMs can boost
accuracy, the resulting models tend to demonstrate poor calibration, which
casts doubts on the reliability and trustworthiness of these models. Notably,
more attention needs to be devoted to calibrating the test-time prompt tuning
in vision-language models. To this end, we propose a new approach, called O-TPT,
that introduces orthogonality constraints on the textual features corresponding
to the learnable prompts for calibrating test-time prompt tuning in VLMs.
Towards introducing orthogonality constraints, we make the following
contributions. First, we uncover new insights behind the suboptimal calibration
performance of existing methods relying on textual feature dispersion. Second,
we show that imposing a simple orthogonalization of textual features is a more
effective approach towards obtaining textual dispersion. We conduct extensive
experiments on various datasets with different backbones and baselines. The
results indicate that our method consistently outperforms the prior state of
the art in significantly reducing the overall average calibration error. Also,
our method surpasses the zero-shot calibration performance on fine-grained
classification tasks.
|
2503.12100 | Jakub Klikowski | Arkadiusz Bry{\l}kowski and Jakub Klikowski | Large Language Models in Legislative Content Analysis: A Dataset from
the Polish Parliament | 15 pages, 4 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large language models (LLMs) are among the best methods for processing
natural language, partly due to their versatility. At the same time,
domain-specific LLMs are more practical in real-life applications. This work
introduces a novel natural language dataset created from data acquired from
official legislative authorities' websites. The study focuses on formulating
three natural language processing (NLP) tasks to evaluate the effectiveness of
LLMs on legislative content analysis within the context of the Polish legal
system. Key findings highlight the potential of LLMs in automating and
enhancing legislative content analysis while emphasizing specific challenges,
such as understanding legal context. The research contributes to the
advancement of NLP in the legal field, particularly in the Polish language. It
has been demonstrated that even commonly accessible data can be practically
utilized for legislative content analysis.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 12:10:20 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Bryłkowski",
"Arkadiusz",
""
],
[
"Klikowski",
"Jakub",
""
]
] | TITLE: Large Language Models in Legislative Content Analysis: A Dataset from
the Polish Parliament
ABSTRACT: Large language models (LLMs) are among the best methods for processing
natural language, partly due to their versatility. At the same time,
domain-specific LLMs are more practical in real-life applications. This work
introduces a novel natural language dataset created from data acquired from
official legislative authorities' websites. The study focuses on formulating
three natural language processing (NLP) tasks to evaluate the effectiveness of
LLMs on legislative content analysis within the context of the Polish legal
system. Key findings highlight the potential of LLMs in automating and
enhancing legislative content analysis while emphasizing specific challenges,
such as understanding legal context. The research contributes to the
advancement of NLP in the legal field, particularly in the Polish language. It
has been demonstrated that even commonly accessible data can be practically
utilized for legislative content analysis.
|
2503.12107 | Pedro Mercado | Sebastian Pineda Arango, Pedro Mercado, Shubham Kapoor, Abdul Fatir
Ansari, Lorenzo Stella, Huibin Shen, Hugo Senetaire, Caner Turkmen, Oleksandr
Shchur, Danielle C. Maddix, Michael Bohlke-Schneider, Yuyang Wang, Syama
Sundar Rangapuram | ChronosX: Adapting Pretrained Time Series Models with Exogenous
Variables | Accepted at the 28th International Conference on Artificial
Intelligence and Statistics (AISTATS), 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Covariates provide valuable information on external factors that influence
time series and are critical in many real-world time series forecasting tasks.
For example, in retail, covariates may indicate promotions or peak dates such
as holiday seasons that heavily influence demand forecasts. Recent advances in
pretraining large language model architectures for time series forecasting have
led to highly accurate forecasters. However, the majority of these models do
not readily use covariates as they are often specific to a certain task or
domain. This paper introduces a new method to incorporate covariates into
pretrained time series forecasting models. Our proposed approach incorporates
covariate information into pretrained forecasting models through modular blocks
that inject past and future covariate information, without necessarily
modifying the pretrained model in consideration. In order to evaluate our
approach, we introduce a benchmark composed of 32 different synthetic datasets
with varying dynamics to evaluate the effectiveness of forecasting models with
covariates. Extensive evaluations on both synthetic and real datasets show that
our approach effectively incorporates covariate information into pretrained
models, outperforming existing baselines.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 12:34:19 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Arango",
"Sebastian Pineda",
""
],
[
"Mercado",
"Pedro",
""
],
[
"Kapoor",
"Shubham",
""
],
[
"Ansari",
"Abdul Fatir",
""
],
[
"Stella",
"Lorenzo",
""
],
[
"Shen",
"Huibin",
""
],
[
"Senetaire",
"Hugo",
""
],
[
"Turkmen",
"Caner",
""
],
[
"Shchur",
"Oleksandr",
""
],
[
"Maddix",
"Danielle C.",
""
],
[
"Bohlke-Schneider",
"Michael",
""
],
[
"Wang",
"Yuyang",
""
],
[
"Rangapuram",
"Syama Sundar",
""
]
] | TITLE: ChronosX: Adapting Pretrained Time Series Models with Exogenous
Variables
ABSTRACT: Covariates provide valuable information on external factors that influence
time series and are critical in many real-world time series forecasting tasks.
For example, in retail, covariates may indicate promotions or peak dates such
as holiday seasons that heavily influence demand forecasts. Recent advances in
pretraining large language model architectures for time series forecasting have
led to highly accurate forecasters. However, the majority of these models do
not readily use covariates as they are often specific to a certain task or
domain. This paper introduces a new method to incorporate covariates into
pretrained time series forecasting models. Our proposed approach incorporates
covariate information into pretrained forecasting models through modular blocks
that inject past and future covariate information, without necessarily
modifying the pretrained model in consideration. In order to evaluate our
approach, we introduce a benchmark composed of 32 different synthetic datasets
with varying dynamics to evaluate the effectiveness of forecasting models with
covariates. Extensive evaluations on both synthetic and real datasets show that
our approach effectively incorporates covariate information into pretrained
models, outperforming existing baselines.
|
2503.12115 | Xue Jiang | Xue Jiang, Xiulian Peng, Yuan Zhang, Yan Lu | Universal Speech Token Learning via Low-Bitrate Neural Codec and
Pretrained Representations | Accepted by IEEE Journal of Selected Topics in Signal
Processing(JSTSP) | null | 10.1109/JSTSP.2024.3488557 | null | cs.SD cs.AI eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current large speech language models are mainly based on semantic tokens from
discretization of self-supervised learned representations and acoustic tokens
from a neural codec, following a semantic-modeling and acoustic-synthesis
paradigm. However, semantic tokens discard paralinguistic attributes of
speakers that are important for natural spoken communication, while prompt-based
acoustic synthesis from semantic tokens has limits in recovering paralinguistic
details and suffers from robustness issues, especially when there are domain
gaps between the prompt and the target. This paper unifies two types of tokens
and proposes the UniCodec, a universal speech token learning that encapsulates
all semantics of speech, including linguistic and paralinguistic information,
into a compact and semantically-disentangled unified token. Such a unified
token can not only benefit speech language models in understanding with
paralinguistic hints but also help speech generation with high-quality output.
A low-bitrate neural codec is leveraged to learn such disentangled discrete
representations at global and local scales, with knowledge distilled from
self-supervised learned features. Extensive evaluations on multilingual
datasets demonstrate its effectiveness in generating natural, expressive and
long-term consistent output quality with paralinguistic attributes well
preserved in several speech processing tasks.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 12:50:43 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jiang",
"Xue",
""
],
[
"Peng",
"Xiulian",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Lu",
"Yan",
""
]
] | TITLE: Universal Speech Token Learning via Low-Bitrate Neural Codec and
Pretrained Representations
ABSTRACT: Current large speech language models are mainly based on semantic tokens from
discretization of self-supervised learned representations and acoustic tokens
from a neural codec, following a semantic-modeling and acoustic-synthesis
paradigm. However, semantic tokens discard paralinguistic attributes of
speakers that are important for natural spoken communication, while prompt-based
acoustic synthesis from semantic tokens has limits in recovering paralinguistic
details and suffers from robustness issues, especially when there are domain
gaps between the prompt and the target. This paper unifies two types of tokens
and proposes the UniCodec, a universal speech token learning that encapsulates
all semantics of speech, including linguistic and paralinguistic information,
into a compact and semantically-disentangled unified token. Such a unified
token can not only benefit speech language models in understanding with
paralinguistic hints but also help speech generation with high-quality output.
A low-bitrate neural codec is leveraged to learn such disentangled discrete
representations at global and local scales, with knowledge distilled from
self-supervised learned features. Extensive evaluations on multilingual
datasets demonstrate its effectiveness in generating natural, expressive and
long-term consistent output quality with paralinguistic attributes well
preserved in several speech processing tasks.
|
2503.12125 | Hun Kang | Hun Kang, Kyoungok Kim | Robust Isolation Forest using Soft Sparse Random Projection and Valley
Emphasis Method | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Isolation Forest (iForest) is an unsupervised anomaly detection algorithm
designed to effectively detect anomalies under the assumption that anomalies
are ``few and different''. Various studies have aimed to enhance iForest, but
the resulting algorithms often exhibited significant performance disparities
across datasets. Additionally, the challenge of isolating rare and widely
distributed anomalies persisted in research focused on improving splits. To
address these challenges, we introduce Robust iForest (RiForest). RiForest
leverages both existing features and random hyperplanes obtained through soft
sparse random projection to identify superior split features for anomaly
detection, independent of datasets. It utilizes the underutilized valley
emphasis method for optimal split point determination and incorporates sparsity
randomization in soft sparse random projection for enhanced anomaly detection
robustness. Across 24 benchmark datasets, experiments demonstrate RiForest's
consistent outperformance of existing algorithms in anomaly detection,
emphasizing stability and robustness to noise variables.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 13:08:50 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kang",
"Hun",
""
],
[
"Kim",
"Kyoungok",
""
]
] | TITLE: Robust Isolation Forest using Soft Sparse Random Projection and Valley
Emphasis Method
ABSTRACT: Isolation Forest (iForest) is an unsupervised anomaly detection algorithm
designed to effectively detect anomalies under the assumption that anomalies
are ``few and different''. Various studies have aimed to enhance iForest, but
the resulting algorithms often exhibited significant performance disparities
across datasets. Additionally, the challenge of isolating rare and widely
distributed anomalies persisted in research focused on improving splits. To
address these challenges, we introduce Robust iForest (RiForest). RiForest
leverages both existing features and random hyperplanes obtained through soft
sparse random projection to identify superior split features for anomaly
detection, independent of datasets. It utilizes the underutilized valley
emphasis method for optimal split point determination and incorporates sparsity
randomization in soft sparse random projection for enhanced anomaly detection
robustness. Across 24 benchmark datasets, experiments demonstrate RiForest's
consistent outperformance of existing algorithms in anomaly detection,
emphasizing stability and robustness to noise variables.
|
2503.12131 | Shentong Mo | Shentong Mo, Zehua Chen, Fan Bao, Jun Zhu | DiffGAP: A Lightweight Diffusion Module in Contrastive Space for
Bridging Cross-Model Gap | null | null | null | null | cs.CV cs.AI cs.LG cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent works in cross-modal understanding and generation, notably through
models like CLAP (Contrastive Language-Audio Pretraining) and CAVP (Contrastive
Audio-Visual Pretraining), have significantly enhanced the alignment of text,
video, and audio embeddings via a single contrastive loss. However, these
methods often overlook the bidirectional interactions and inherent noises
present in each modality, which can crucially impact the quality and efficacy
of cross-modal integration. To address this limitation, we introduce DiffGAP, a
novel approach incorporating a lightweight generative module within the
contrastive space. Specifically, our DiffGAP employs a bidirectional diffusion
process tailored to bridge the cross-modal gap more effectively. This involves
a denoising process on text and video embeddings conditioned on audio
embeddings and vice versa, thus facilitating a more nuanced and robust
cross-modal interaction. Our experimental results on VGGSound and AudioCaps
datasets demonstrate that DiffGAP significantly improves performance in
video/text-audio generation and retrieval tasks, confirming its effectiveness
in enhancing cross-modal understanding and generation capabilities.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 13:24:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Mo",
"Shentong",
""
],
[
"Chen",
"Zehua",
""
],
[
"Bao",
"Fan",
""
],
[
"Zhu",
"Jun",
""
]
] | TITLE: DiffGAP: A Lightweight Diffusion Module in Contrastive Space for
Bridging Cross-Model Gap
ABSTRACT: Recent works in cross-modal understanding and generation, notably through
models like CLAP (Contrastive Language-Audio Pretraining) and CAVP (Contrastive
Audio-Visual Pretraining), have significantly enhanced the alignment of text,
video, and audio embeddings via a single contrastive loss. However, these
methods often overlook the bidirectional interactions and inherent noises
present in each modality, which can crucially impact the quality and efficacy
of cross-modal integration. To address this limitation, we introduce DiffGAP, a
novel approach incorporating a lightweight generative module within the
contrastive space. Specifically, our DiffGAP employs a bidirectional diffusion
process tailored to bridge the cross-modal gap more effectively. This involves
a denoising process on text and video embeddings conditioned on audio
embeddings and vice versa, thus facilitating a more nuanced and robust
cross-modal interaction. Our experimental results on VGGSound and AudioCaps
datasets demonstrate that DiffGAP significantly improves performance in
video/text-audio generation and retrieval tasks, confirming its effectiveness
in enhancing cross-modal understanding and generation capabilities.
|
2503.12137 | Ertu\u{g}rul Ke\c{c}eci | Ertu\u{g}rul Ke\c{c}eci, M\"ujde G\"uzelkaya, Tufan Kumbasar | A State Alignment-Centric Approach to Federated System Identification:
The FedAlign Framework | null | null | null | null | cs.LG cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents FedAlign, a Federated Learning (FL) framework
particularly designed for System Identification (SYSID) tasks by aligning state
representations. Local workers can learn State-Space Models (SSMs) with
equivalent representations but different dynamics. We demonstrate that directly
aggregating these local SSMs via FedAvg results in a global model with altered
system dynamics. FedAlign overcomes this problem by employing similarity
transformation matrices to align state representations of local SSMs, thereby
establishing a common parameter basin that retains the dynamics of local SSMs.
FedAlign computes similarity transformation matrices via two distinct
approaches: FedAlign-A and FedAlign-O. In FedAlign-A, we represent the global
SSM in controllable canonical form (CCF). We apply control theory to
analytically derive similarity transformation matrices that convert each local
SSM into this form. Yet, establishing global SSM in CCF brings additional
alignment challenges in multi-input multi-output SYSID, as the CCF representation
is not unique, unlike in single-input single-output SYSID. In FedAlign-O, we
address these alignment challenges by reformulating the local parameter basin
alignment problem as an optimization task. We determine the parameter basin of
a local worker as the common parameter basin and solve least squares problems to
obtain similarity transformation matrices needed to align the remaining local
SSMs. Through the experiments conducted on synthetic and real-world datasets,
we show that FedAlign outperforms FedAvg, converges faster, and provides
improved stability of the global SSM thanks to the efficient alignment of local
parameter basins.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 13:43:54 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Keçeci",
"Ertuğrul",
""
],
[
"Güzelkaya",
"Müjde",
""
],
[
"Kumbasar",
"Tufan",
""
]
] | TITLE: A State Alignment-Centric Approach to Federated System Identification:
The FedAlign Framework
ABSTRACT: This paper presents FedAlign, a Federated Learning (FL) framework
particularly designed for System Identification (SYSID) tasks by aligning state
representations. Local workers can learn State-Space Models (SSMs) with
equivalent representations but different dynamics. We demonstrate that directly
aggregating these local SSMs via FedAvg results in a global model with altered
system dynamics. FedAlign overcomes this problem by employing similarity
transformation matrices to align state representations of local SSMs, thereby
establishing a common parameter basin that retains the dynamics of local SSMs.
FedAlign computes similarity transformation matrices via two distinct
approaches: FedAlign-A and FedAlign-O. In FedAlign-A, we represent the global
SSM in controllable canonical form (CCF). We apply control theory to
analytically derive similarity transformation matrices that convert each local
SSM into this form. Yet, establishing global SSM in CCF brings additional
alignment challenges in multi-input multi-output SYSID, as the CCF representation
is not unique, unlike in single-input single-output SYSID. In FedAlign-O, we
address these alignment challenges by reformulating the local parameter basin
alignment problem as an optimization task. We determine the parameter basin of
a local worker as the common parameter basin and solve least squares problems to
obtain similarity transformation matrices needed to align the remaining local
SSMs. Through the experiments conducted on synthetic and real-world datasets,
we show that FedAlign outperforms FedAvg, converges faster, and provides
improved stability of the global SSM thanks to the efficient alignment of local
parameter basins.
|
2503.12141 | Shayan Rokhva | Shayan Rokhva, Babak Teimourpour, Romina Babaei | Enhanced Sentiment Analysis of Iranian Restaurant Reviews Utilizing
Sentiment Intensity Analyzer & Fuzzy Logic | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | This research presents an advanced sentiment analysis framework studied on
Iranian restaurant reviews, combining fuzzy logic with conventional sentiment
analysis techniques to assess both sentiment polarity and intensity. A dataset
of 1266 reviews, alongside corresponding star ratings, was compiled and
preprocessed for analysis. Initial sentiment analysis was conducted using the
Sentiment Intensity Analyzer (VADER), a rule-based tool that assigns sentiment
scores across positive, negative, and neutral categories. However, a noticeable
bias toward neutrality often led to an inaccurate representation of sentiment
intensity. To mitigate this issue, based on a fuzzy perspective, two refinement
techniques were introduced, applying square-root and fourth-root
transformations to amplify positive and negative sentiment scores while
maintaining neutrality. This led to three distinct methodologies: Approach 1,
utilizing unaltered VADER scores; Approach 2, modifying sentiment values using
the square root; and Approach 3, applying the fourth root for further
refinement. A Fuzzy Inference System incorporating comprehensive fuzzy rules
was then developed to process these refined scores and generate a single,
continuous sentiment value for each review based on each approach. Comparative
analysis, including human supervision and alignment with customer star ratings,
revealed that the refined approaches significantly improved sentiment analysis
by reducing neutrality bias and better capturing sentiment intensity. Despite
these advancements, minor over-amplification and persistent neutrality in
domain-specific cases were identified, leading us to propose several future
studies to tackle these occasional barriers. The study's methodology and
outcomes offer valuable insights for businesses seeking a more precise
understanding of consumer sentiment, enhancing sentiment analysis across
various industries.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 13:55:23 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Rokhva",
"Shayan",
""
],
[
"Teimourpour",
"Babak",
""
],
[
"Babaei",
"Romina",
""
]
] | TITLE: Enhanced Sentiment Analysis of Iranian Restaurant Reviews Utilizing
Sentiment Intensity Analyzer & Fuzzy Logic
ABSTRACT: This research presents an advanced sentiment analysis framework studied on
Iranian restaurant reviews, combining fuzzy logic with conventional sentiment
analysis techniques to assess both sentiment polarity and intensity. A dataset
of 1266 reviews, alongside corresponding star ratings, was compiled and
preprocessed for analysis. Initial sentiment analysis was conducted using the
Sentiment Intensity Analyzer (VADER), a rule-based tool that assigns sentiment
scores across positive, negative, and neutral categories. However, a noticeable
bias toward neutrality often led to an inaccurate representation of sentiment
intensity. To mitigate this issue, based on a fuzzy perspective, two refinement
techniques were introduced, applying square-root and fourth-root
transformations to amplify positive and negative sentiment scores while
maintaining neutrality. This led to three distinct methodologies: Approach 1,
utilizing unaltered VADER scores; Approach 2, modifying sentiment values using
the square root; and Approach 3, applying the fourth root for further
refinement. A Fuzzy Inference System incorporating comprehensive fuzzy rules
was then developed to process these refined scores and generate a single,
continuous sentiment value for each review based on each approach. Comparative
analysis, including human supervision and alignment with customer star ratings,
revealed that the refined approaches significantly improved sentiment analysis
by reducing neutrality bias and better capturing sentiment intensity. Despite
these advancements, minor over-amplification and persistent neutrality in
domain-specific cases were identified, leading us to propose several future
studies to tackle these occasional barriers. The study's methodology and
outcomes offer valuable insights for businesses seeking a more precise
understanding of consumer sentiment, enhancing sentiment analysis across
various industries.
|
2503.12143 | Maryam Daniali | Maryam Daniali, Shivaram Karandikar, Dabriel Zimmerman, J. Eric
Schmitt, Matthew J. Buczek, Benjamin Jung, Laura Mercedes, Jakob Seidlitz,
Vanessa Troiani, Lena Dorfschmidt, Eren Kafadar, Remo Williams, Susan
Sotardi, Arastoo Vosough, Scott Haag, Jenna M. Schabdach, Aaron
Alexander-Bloch | Language Models for Automated Classification of Brain MRI Reports and
Growth Chart Generation | null | null | null | null | eess.IV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Clinically acquired brain MRIs and radiology reports are valuable but
underutilized resources due to the challenges of manual analysis and data
heterogeneity. We developed fine-tuned language models (LMs) to classify brain
MRI reports as normal (reports with limited pathology) or abnormal, fine-tuning
BERT, BioBERT, ClinicalBERT, and RadBERT on 44,661 reports. We also explored
the reasoning capabilities of a leading LM, Gemini 1.5-Pro, for normal report
categorization. Automated image processing and modeling generated brain growth
charts from LM-classified normal scans, comparing them to human-derived charts.
Fine-tuned LMs achieved high classification performance (F1-Score >97%), with
unbalanced training mitigating class imbalance. Performance was robust on
out-of-distribution data, with full text outperforming summary (impression)
sections. Gemini 1.5-Pro showed a promising categorization performance,
especially with clinical inference. LM-derived brain growth charts were nearly
identical to human-annotated charts (r = 0.99, p < 2.2e-16). Our LMs offer
scalable analysis of radiology reports, enabling automated classification of
brain MRIs in large datasets. One application is automated generation of brain
growth charts for benchmarking quantitative image features. Further research is
needed to address data heterogeneity and optimize LM reasoning.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 13:59:44 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Daniali",
"Maryam",
""
],
[
"Karandikar",
"Shivaram",
""
],
[
"Zimmerman",
"Dabriel",
""
],
[
"Schmitt",
"J. Eric",
""
],
[
"Buczek",
"Matthew J.",
""
],
[
"Jung",
"Benjamin",
""
],
[
"Mercedes",
"Laura",
""
],
[
"Seidlitz",
"Jakob",
""
],
[
"Troiani",
"Vanessa",
""
],
[
"Dorfschmidt",
"Lena",
""
],
[
"Kafadar",
"Eren",
""
],
[
"Williams",
"Remo",
""
],
[
"Sotardi",
"Susan",
""
],
[
"Vosough",
"Arastoo",
""
],
[
"Haag",
"Scott",
""
],
[
"Schabdach",
"Jenna M.",
""
],
[
"Alexander-Bloch",
"Aaron",
""
]
] | TITLE: Language Models for Automated Classification of Brain MRI Reports and
Growth Chart Generation
ABSTRACT: Clinically acquired brain MRIs and radiology reports are valuable but
underutilized resources due to the challenges of manual analysis and data
heterogeneity. We developed fine-tuned language models (LMs) to classify brain
MRI reports as normal (reports with limited pathology) or abnormal, fine-tuning
BERT, BioBERT, ClinicalBERT, and RadBERT on 44,661 reports. We also explored
the reasoning capabilities of a leading LM, Gemini 1.5-Pro, for normal report
categorization. Automated image processing and modeling generated brain growth
charts from LM-classified normal scans, comparing them to human-derived charts.
Fine-tuned LMs achieved high classification performance (F1-Score >97%), with
unbalanced training mitigating class imbalance. Performance was robust on
out-of-distribution data, with full text outperforming summary (impression)
sections. Gemini 1.5-Pro showed a promising categorization performance,
especially with clinical inference. LM-derived brain growth charts were nearly
identical to human-annotated charts (r = 0.99, p < 2.2e-16). Our LMs offer
scalable analysis of radiology reports, enabling automated classification of
brain MRIs in large datasets. One application is automated generation of brain
growth charts for benchmarking quantitative image features. Further research is
needed to address data heterogeneity and optimize LM reasoning.
|
2503.12149 | Junjie Chen | Junjie Chen and Xuyang Liu and Subin Huang and Linfeng Zhang and Hang
Yu | Seeing Sarcasm Through Different Eyes: Analyzing Multimodal Sarcasm
Perception in Large Vision-Language Models | null | null | null | null | cs.CL cs.MM cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advent of large vision-language models (LVLMs) demonstrating
increasingly human-like abilities, a pivotal question emerges: do different
LVLMs interpret multimodal sarcasm differently, and can a single model grasp
sarcasm from multiple perspectives like humans? To explore this, we introduce
an analytical framework using systematically designed prompts on existing
multimodal sarcasm datasets. Evaluating 12 state-of-the-art LVLMs over 2,409
samples, we examine interpretive variations within and across models, focusing
on confidence levels, alignment with dataset labels, and recognition of
ambiguous "neutral" cases. Our findings reveal notable discrepancies -- across
LVLMs and within the same model under varied prompts. While
classification-oriented prompts yield higher internal consistency, models
diverge markedly when tasked with interpretive reasoning. These results
challenge binary labeling paradigms by highlighting sarcasm's subjectivity. We
advocate moving beyond rigid annotation schemes toward multi-perspective,
uncertainty-aware modeling, offering deeper insights into multimodal sarcasm
comprehension. Our code and data are available at:
https://github.com/CoderChen01/LVLMSarcasmAnalysis
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 14:10:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chen",
"Junjie",
""
],
[
"Liu",
"Xuyang",
""
],
[
"Huang",
"Subin",
""
],
[
"Zhang",
"Linfeng",
""
],
[
"Yu",
"Hang",
""
]
] | TITLE: Seeing Sarcasm Through Different Eyes: Analyzing Multimodal Sarcasm
Perception in Large Vision-Language Models
ABSTRACT: With the advent of large vision-language models (LVLMs) demonstrating
increasingly human-like abilities, a pivotal question emerges: do different
LVLMs interpret multimodal sarcasm differently, and can a single model grasp
sarcasm from multiple perspectives like humans? To explore this, we introduce
an analytical framework using systematically designed prompts on existing
multimodal sarcasm datasets. Evaluating 12 state-of-the-art LVLMs over 2,409
samples, we examine interpretive variations within and across models, focusing
on confidence levels, alignment with dataset labels, and recognition of
ambiguous "neutral" cases. Our findings reveal notable discrepancies -- across
LVLMs and within the same model under varied prompts. While
classification-oriented prompts yield higher internal consistency, models
diverge markedly when tasked with interpretive reasoning. These results
challenge binary labeling paradigms by highlighting sarcasm's subjectivity. We
advocate moving beyond rigid annotation schemes toward multi-perspective,
uncertainty-aware modeling, offering deeper insights into multimodal sarcasm
comprehension. Our code and data are available at:
https://github.com/CoderChen01/LVLMSarcasmAnalysis
|
2503.12156 | Yunbo Long | Yunbo Long, Liming Xu, Alexandra Brintrup | Efficient and Privacy-Preserved Link Prediction via Condensed Graphs | null | null | null | null | cs.LG cs.SI | http://creativecommons.org/licenses/by/4.0/ | Link prediction is crucial for uncovering hidden connections within complex
networks, enabling applications such as identifying potential customers and
products. However, this research faces significant challenges, including
concerns about data privacy, as well as high computational and storage costs,
especially when dealing with large-scale networks. Condensed graphs, which are
much smaller than the original graphs while retaining essential information,
have become an effective solution to both maintain data utility and preserve
privacy. Existing methods, however, initialize synthetic graphs through random
node selection without considering node connectivity, and are mainly designed
for node classification tasks. As a result, their potential for
privacy-preserving link prediction remains largely unexplored. We introduce
HyDRO\textsuperscript{+}, a graph condensation method guided by algebraic
Jaccard similarity, which leverages local connectivity information to optimize
condensed graph structures. Extensive experiments on four real-world networks
show that our method outperforms state-of-the-art methods and even the original
networks in balancing link prediction accuracy and privacy preservation.
Moreover, our method achieves nearly 20x faster training and reduces storage
requirements by 452x, as demonstrated on the Computers dataset, compared to
link prediction on the original networks. This work represents the first
attempt to leverage condensed graphs for privacy-preserving link prediction
information sharing in real-world complex networks. It offers a promising
pathway for preserving link prediction information while safeguarding privacy,
advancing the use of graph condensation in large-scale networks with privacy
concerns.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 14:54:04 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Long",
"Yunbo",
""
],
[
"Xu",
"Liming",
""
],
[
"Brintrup",
"Alexandra",
""
]
] | TITLE: Efficient and Privacy-Preserved Link Prediction via Condensed Graphs
ABSTRACT: Link prediction is crucial for uncovering hidden connections within complex
networks, enabling applications such as identifying potential customers and
products. However, this research faces significant challenges, including
concerns about data privacy, as well as high computational and storage costs,
especially when dealing with large-scale networks. Condensed graphs, which are
much smaller than the original graphs while retaining essential information,
have become an effective solution to both maintain data utility and preserve
privacy. Existing methods, however, initialize synthetic graphs through random
node selection without considering node connectivity, and are mainly designed
for node classification tasks. As a result, their potential for
privacy-preserving link prediction remains largely unexplored. We introduce
HyDRO\textsuperscript{+}, a graph condensation method guided by algebraic
Jaccard similarity, which leverages local connectivity information to optimize
condensed graph structures. Extensive experiments on four real-world networks
show that our method outperforms state-of-the-art methods and even the original
networks in balancing link prediction accuracy and privacy preservation.
Moreover, our method achieves nearly 20x faster training and reduces storage
requirements by 452x, as demonstrated on the Computers dataset, compared to
link prediction on the original networks. This work represents the first
attempt to leverage condensed graphs for privacy-preserving link prediction
information sharing in real-world complex networks. It offers a promising
pathway for preserving link prediction information while safeguarding privacy,
advancing the use of graph condensation in large-scale networks with privacy
concerns.
|
2503.12163 | Ruwei Pan | Ruwei Pan, Hongyu Zhang, Zhonghao Jiang, Ran Hou | AgentDroid: A Multi-Agent Framework for Detecting Fraudulent Android
Applications | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing prevalence of fraudulent Android applications such as
fake and malicious applications, it is crucial to detect them with high
accuracy and adaptability. This paper introduces AgentDroid, a novel framework
for Android fraudulent application detection based on multi-modal analysis and
multi-agent systems. AgentDroid overcomes the limitations of traditional
detection methods such as the inability to handle multimodal data and high
false alarm rates. It processes Android applications and extracts a series of
multi-modal data for analysis. Multiple LLM-based agents with specialized roles
analyze the relevant data and collaborate to detect complex fraud effectively.
We constructed a dataset containing various categories of fraudulent
applications and legitimate applications and validated our framework on this
dataset. Experimental results indicate that our multi-agent framework based on
GPT-4o achieves an accuracy of 91.7% and an F1-Score of 91.68%, showing
improved detection accuracy over the baseline methods.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 15:07:43 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Pan",
"Ruwei",
""
],
[
"Zhang",
"Hongyu",
""
],
[
"Jiang",
"Zhonghao",
""
],
[
"Hou",
"Ran",
""
]
] | TITLE: AgentDroid: A Multi-Agent Framework for Detecting Fraudulent Android
Applications
ABSTRACT: With the increasing prevalence of fraudulent Android applications such as
fake and malicious applications, it is crucial to detect them with high
accuracy and adaptability. This paper introduces AgentDroid, a novel framework
for Android fraudulent application detection based on multi-modal analysis and
multi-agent systems. AgentDroid overcomes the limitations of traditional
detection methods such as the inability to handle multimodal data and high
false alarm rates. It processes Android applications and extracts a series of
multi-modal data for analysis. Multiple LLM-based agents with specialized roles
analyze the relevant data and collaborate to detect complex fraud effectively.
We constructed a dataset containing various categories of fraudulent
applications and legitimate applications and validated our framework on this
dataset. Experimental results indicate that our multi-agent framework based on
GPT-4o achieves an accuracy of 91.7% and an F1-Score of 91.68%, showing
improved detection accuracy over the baseline methods.
|
2503.12165 | Guanbin Li | Zijian He, Yuwei Ning, Yipeng Qin, Wangrun Wang, Sibei Yang, Liang
Lin, Guanbin Li | VTON 360: High-Fidelity Virtual Try-On from Any Viewing Direction | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virtual Try-On (VTON) is a transformative technology in e-commerce and
fashion design, enabling realistic digital visualization of clothing on
individuals. In this work, we propose VTON 360, a novel 3D VTON method that
addresses the open challenge of achieving high-fidelity VTON that supports
any-view rendering. Specifically, we leverage the equivalence between a 3D
model and its rendered multi-view 2D images, and reformulate 3D VTON as an
extension of 2D VTON that ensures 3D consistent results across multiple views.
To achieve this, we extend 2D VTON models to include multi-view garments and
clothing-agnostic human body images as input, and propose several novel
techniques to enhance them, including: i) a pseudo-3D pose representation using
normal maps derived from the SMPL-X 3D human model, ii) a multi-view spatial
attention mechanism that models the correlations between features from
different viewing angles, and iii) a multi-view CLIP embedding that enhances
the garment CLIP features used in 2D VTON with camera information. Extensive
experiments on large-scale real datasets and clothing images from e-commerce
platforms demonstrate the effectiveness of our approach. Project page:
https://scnuhealthy.github.io/VTON360.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 15:08:48 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"He",
"Zijian",
""
],
[
"Ning",
"Yuwei",
""
],
[
"Qin",
"Yipeng",
""
],
[
"Wang",
"Wangrun",
""
],
[
"Yang",
"Sibei",
""
],
[
"Lin",
"Liang",
""
],
[
"Li",
"Guanbin",
""
]
] | TITLE: VTON 360: High-Fidelity Virtual Try-On from Any Viewing Direction
ABSTRACT: Virtual Try-On (VTON) is a transformative technology in e-commerce and
fashion design, enabling realistic digital visualization of clothing on
individuals. In this work, we propose VTON 360, a novel 3D VTON method that
addresses the open challenge of achieving high-fidelity VTON that supports
any-view rendering. Specifically, we leverage the equivalence between a 3D
model and its rendered multi-view 2D images, and reformulate 3D VTON as an
extension of 2D VTON that ensures 3D consistent results across multiple views.
To achieve this, we extend 2D VTON models to include multi-view garments and
clothing-agnostic human body images as input, and propose several novel
techniques to enhance them, including: i) a pseudo-3D pose representation using
normal maps derived from the SMPL-X 3D human model, ii) a multi-view spatial
attention mechanism that models the correlations between features from
different viewing angles, and iii) a multi-view CLIP embedding that enhances
the garment CLIP features used in 2D VTON with camera information. Extensive
experiments on large-scale real datasets and clothing images from e-commerce
platforms demonstrate the effectiveness of our approach. Project page:
https://scnuhealthy.github.io/VTON360.
|
2503.12180 | Jiangtao Gong | Yuhang Peng, Sidong Wang, Jihaoyu Yang, Shilong Li, Han Wang and
Jiangtao Gong | Bench2FreeAD: A Benchmark for Vision-based End-to-end Navigation in
Unstructured Robotic Environments | 7 pages, 9 figures | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Most current end-to-end (E2E) autonomous driving algorithms are built on
standard vehicles in structured transportation scenarios, lacking exploration
of robot navigation for unstructured scenarios such as auxiliary roads, campus
roads, and indoor settings. This paper investigates E2E robot navigation in
unstructured road environments. First, we introduce two data collection
pipelines - one for real-world robot data and another for synthetic data
generated using the Isaac Sim simulator, which together produce an unstructured
robotics navigation dataset -- FreeWorld Dataset. Second, we fine-tuned an
efficient E2E autonomous driving model -- VAD -- using our datasets to validate
the performance and adaptability of E2E autonomous driving models in these
environments. Results demonstrate that fine-tuning through our datasets
significantly enhances the navigation potential of E2E autonomous driving
models in unstructured robotic environments. Thus, this paper presents the
first dataset targeting E2E robot navigation tasks in unstructured scenarios,
and provides a benchmark based on vision-based E2E autonomous driving
algorithms to facilitate the development of E2E navigation technology for
logistics and service robots. The project is available on Github.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 15:46:49 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Peng",
"Yuhang",
""
],
[
"Wang",
"Sidong",
""
],
[
"Yang",
"Jihaoyu",
""
],
[
"Li",
"Shilong",
""
],
[
"Wang",
"Han",
""
],
[
"Gong",
"Jiangtao",
""
]
] | TITLE: Bench2FreeAD: A Benchmark for Vision-based End-to-end Navigation in
Unstructured Robotic Environments
ABSTRACT: Most current end-to-end (E2E) autonomous driving algorithms are built on
standard vehicles in structured transportation scenarios, lacking exploration
of robot navigation for unstructured scenarios such as auxiliary roads, campus
roads, and indoor settings. This paper investigates E2E robot navigation in
unstructured road environments. First, we introduce two data collection
pipelines - one for real-world robot data and another for synthetic data
generated using the Isaac Sim simulator, which together produce an unstructured
robotics navigation dataset -- FreeWorld Dataset. Second, we fine-tuned an
efficient E2E autonomous driving model -- VAD -- using our datasets to validate
the performance and adaptability of E2E autonomous driving models in these
environments. Results demonstrate that fine-tuning through our datasets
significantly enhances the navigation potential of E2E autonomous driving
models in unstructured robotic environments. Thus, this paper presents the
first dataset targeting E2E robot navigation tasks in unstructured scenarios,
and provides a benchmark based on vision-based E2E autonomous driving
algorithms to facilitate the development of E2E navigation technology for
logistics and service robots. The project is available on Github.
|
2503.12183 | Enze Liu | Enze Liu, Bowen Zheng, Wayne Xin Zhao, Ji-Rong Wen | Bridging Textual-Collaborative Gap through Semantic Codes for Sequential
Recommendation | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, substantial research efforts have been devoted to enhancing
sequential recommender systems by integrating abundant side information with
ID-based collaborative information. This study specifically focuses on
leveraging the textual metadata (e.g., titles and brands) associated with
items. While existing methods have achieved notable success by combining text
and ID representations, they often struggle to strike a balance between textual
information embedded in text representations and collaborative information from
sequential patterns of user behavior. In light of this, we propose CoCoRec, a
novel Code-based textual and Collaborative semantic fusion method for
sequential Recommendation. The key idea behind our approach is to bridge the
gap between textual and collaborative information using semantic codes.
Specifically, we generate fine-grained semantic codes from multi-view text
embeddings through vector quantization techniques. Subsequently, we develop a
code-guided semantic-fusion module based on the cross-attention mechanism to
flexibly extract and integrate relevant information from text representations.
In order to further enhance the fusion of textual and collaborative semantics,
we introduce an optimization strategy that employs code masking with two
specific objectives: masked code modeling and masked sequence alignment. The
merit of these objectives lies in leveraging mask prediction tasks and
augmented item representations to capture code correlations within individual
items and enhance the sequence modeling of the recommendation backbone.
Extensive experiments conducted on four public datasets demonstrate the
superiority of CoCoRec, showing significant improvements over various
sequential recommendation models. Our code is available at
https://anonymous.4open.science/r/CoCoRec-6E41.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 15:54:44 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Enze",
""
],
[
"Zheng",
"Bowen",
""
],
[
"Zhao",
"Wayne Xin",
""
],
[
"Wen",
"Ji-Rong",
""
]
] | TITLE: Bridging Textual-Collaborative Gap through Semantic Codes for Sequential
Recommendation
ABSTRACT: In recent years, substantial research efforts have been devoted to enhancing
sequential recommender systems by integrating abundant side information with
ID-based collaborative information. This study specifically focuses on
leveraging the textual metadata (e.g., titles and brands) associated with
items. While existing methods have achieved notable success by combining text
and ID representations, they often struggle to strike a balance between textual
information embedded in text representations and collaborative information from
sequential patterns of user behavior. In light of this, we propose CoCoRec, a
novel Code-based textual and Collaborative semantic fusion method for
sequential Recommendation. The key idea behind our approach is to bridge the
gap between textual and collaborative information using semantic codes.
Specifically, we generate fine-grained semantic codes from multi-view text
embeddings through vector quantization techniques. Subsequently, we develop a
code-guided semantic-fusion module based on the cross-attention mechanism to
flexibly extract and integrate relevant information from text representations.
In order to further enhance the fusion of textual and collaborative semantics,
we introduce an optimization strategy that employs code masking with two
specific objectives: masked code modeling and masked sequence alignment. The
merit of these objectives lies in leveraging mask prediction tasks and
augmented item representations to capture code correlations within individual
items and enhance the sequence modeling of the recommendation backbone.
Extensive experiments conducted on four public datasets demonstrate the
superiority of CoCoRec, showing significant improvements over various
sequential recommendation models. Our code is available at
https://anonymous.4open.science/r/CoCoRec-6E41.
|
2503.12185 | Xiaoyu Chu | S\'andor Battaglini-Fischer, Nishanthi Srinivasan, B\'alint L\'aszl\'o
Szarvas, Xiaoyu Chu, Alexandru Iosup | FAILS: A Framework for Automated Collection and Analysis of LLM Service
Incidents | null | HotCloudPerf 2025 | 10.1145/3680256.3721320 | null | cs.PF cs.DC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Model (LLM) services such as ChatGPT, DALLE, and Cursor have
quickly become essential for society, businesses, and individuals, empowering
applications such as chatbots, image generation, and code assistance. The
complexity of LLM systems makes them prone to failures and affects their
reliability and availability, yet their failure patterns are not fully
understood, making it an emerging problem. However, there are limited datasets
and studies in this area, particularly lacking an open-access tool for
analyzing LLM service failures based on incident reports. Addressing these
problems, in this work we propose FAILS, the first open-sourced framework for
incident reports collection and analysis on different LLM services and
providers. FAILS provides comprehensive data collection, analysis, and
visualization capabilities, including: (1) It can automatically collect, clean,
and update incident data through its data scraper and processing components; (2)
It provides 17 types of failure analysis, allowing users to explore temporal
trends of incidents, analyze service reliability metrics, such as Mean Time to
Recovery (MTTR) and Mean Time Between Failures (MTBF); (3) It leverages advanced
LLM tools to assist in data analysis and interpretation, enabling users to gain
observations and insights efficiently. All functions are integrated in the
backend, allowing users to easily access them through a web-based frontend
interface. FAILS supports researchers, engineers, and general users to
understand failure patterns and further mitigate operational incidents and
outages in LLM services. The framework is publicly available on
https://github.com/atlarge-research/FAILS.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 16:06:16 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Battaglini-Fischer",
"Sándor",
""
],
[
"Srinivasan",
"Nishanthi",
""
],
[
"Szarvas",
"Bálint László",
""
],
[
"Chu",
"Xiaoyu",
""
],
[
"Iosup",
"Alexandru",
""
]
] | TITLE: FAILS: A Framework for Automated Collection and Analysis of LLM Service
Incidents
ABSTRACT: Large Language Model (LLM) services such as ChatGPT, DALLE, and Cursor have
quickly become essential for society, businesses, and individuals, empowering
applications such as chatbots, image generation, and code assistance. The
complexity of LLM systems makes them prone to failures and affects their
reliability and availability, yet their failure patterns are not fully
understood, making it an emerging problem. However, there are limited datasets
and studies in this area, particularly lacking an open-access tool for
analyzing LLM service failures based on incident reports. Addressing these
problems, in this work we propose FAILS, the first open-sourced framework for
incident reports collection and analysis on different LLM services and
providers. FAILS provides comprehensive data collection, analysis, and
visualization capabilities, including: (1) It can automatically collect, clean,
and update incident data through its data scraper and processing components; (2)
It provides 17 types of failure analysis, allowing users to explore temporal
trends of incidents, analyze service reliability metrics, such as Mean Time to
Recovery (MTTR) and Mean Time Between Failures (MTBF); (3) It leverages advanced
LLM tools to assist in data analysis and interpretation, enabling users to gain
observations and insights efficiently. All functions are integrated in the
backend, allowing users to easily access them through a web-based frontend
interface. FAILS supports researchers, engineers, and general users to
understand failure patterns and further mitigate operational incidents and
outages in LLM services. The framework is publicly available on
https://github.com/atlarge-research/FAILS.
|
2503.12191 | Runlong Cao | Ying Zang, Yuncan Gao, Jiangi Zhang, Yuangi Hu, Runlong Cao, Lanyun
Zhu, Qi Zhu, Deyi Ji, Renjun Xu, Tianrun Chen | Breaking the Box: Enhancing Remote Sensing Image Segmentation with
Freehand Sketches | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work advances zero-shot interactive segmentation for remote sensing
imagery through three key contributions. First, we propose a novel sketch-based
prompting method, enabling users to intuitively outline objects, surpassing
traditional point or box prompts. Second, we introduce LTL-Sensing, the first
dataset pairing human sketches with remote sensing imagery, setting a benchmark
for future research. Third, we present LTL-Net, a model featuring a multi-input
prompting transport module tailored for freehand sketches. Extensive
experiments show our approach significantly improves segmentation accuracy and
robustness over state-of-the-art methods like SAM, fostering more intuitive
human-AI collaboration in remote sensing analysis and enhancing its
applications.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 16:21:37 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zang",
"Ying",
""
],
[
"Gao",
"Yuncan",
""
],
[
"Zhang",
"Jiangi",
""
],
[
"Hu",
"Yuangi",
""
],
[
"Cao",
"Runlong",
""
],
[
"Zhu",
"Lanyun",
""
],
[
"Zhu",
"Qi",
""
],
[
"Ji",
"Deyi",
""
],
[
"Xu",
"Renjun",
""
],
[
"Chen",
"Tianrun",
""
]
] | TITLE: Breaking the Box: Enhancing Remote Sensing Image Segmentation with
Freehand Sketches
ABSTRACT: This work advances zero-shot interactive segmentation for remote sensing
imagery through three key contributions. First, we propose a novel sketch-based
prompting method, enabling users to intuitively outline objects, surpassing
traditional point or box prompts. Second, we introduce LTL-Sensing, the first
dataset pairing human sketches with remote sensing imagery, setting a benchmark
for future research. Third, we present LTL-Net, a model featuring a multi-input
prompting transport module tailored for freehand sketches. Extensive
experiments show our approach significantly improves segmentation accuracy and
robustness over state-of-the-art methods like SAM, fostering more intuitive
human-AI collaboration in remote sensing analysis and enhancing its
applications.
|
2503.12193 | Sai Sriram Talasu | S Balasubramanian, Yedu Krishna P, Talasu Sai Sriram, M Sai
Subramaniam, Manepalli Pranav Phanindra Sai, Darshan Gera | S2IL: Structurally Stable Incremental Learning | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature Distillation (FD) strategies are proven to be effective in mitigating
Catastrophic Forgetting (CF) seen in Class Incremental Learning (CIL). However,
current FD approaches enforce strict alignment of feature magnitudes and
directions across incremental steps, limiting the model's ability to adapt to
new knowledge. In this paper we propose Structurally Stable Incremental
Learning (S2IL), an FD method for CIL that mitigates CF by focusing on
preserving the overall spatial patterns of features which promote flexible
(plasticity) yet stable representations that preserve old knowledge
(stability). We also demonstrate that our proposed method S2IL achieves strong
incremental accuracy and outperforms other FD methods on SOTA benchmark
datasets CIFAR-100, ImageNet-100 and ImageNet-1K. Notably, S2IL outperforms
other methods by a significant margin in scenarios that have a large number of
incremental tasks.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 16:24:57 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Balasubramanian",
"S",
""
],
[
"P",
"Yedu Krishna",
""
],
[
"Sriram",
"Talasu Sai",
""
],
[
"Subramaniam",
"M Sai",
""
],
[
"Sai",
"Manepalli Pranav Phanindra",
""
],
[
"Gera",
"Darshan",
""
]
] | TITLE: S2IL: Structurally Stable Incremental Learning
ABSTRACT: Feature Distillation (FD) strategies are proven to be effective in mitigating
Catastrophic Forgetting (CF) seen in Class Incremental Learning (CIL). However,
current FD approaches enforce strict alignment of feature magnitudes and
directions across incremental steps, limiting the model's ability to adapt to
new knowledge. In this paper we propose Structurally Stable Incremental
Learning (S2IL), an FD method for CIL that mitigates CF by focusing on
preserving the overall spatial patterns of features which promote flexible
(plasticity) yet stable representations that preserve old knowledge
(stability). We also demonstrate that our proposed method S2IL achieves strong
incremental accuracy and outperforms other FD methods on SOTA benchmark
datasets CIFAR-100, ImageNet-100 and ImageNet-1K. Notably, S2IL outperforms
other methods by a significant margin in scenarios that have a large number of
incremental tasks.
|
2503.12206 | Ans Munir | Ans Munir, Faisal Z. Qureshi, Muhammad Haris Khan, and Mohsen Ali | TLAC: Two-stage LMM Augmented CLIP for Zero-Shot Classification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Contrastive Language-Image Pretraining (CLIP) has shown impressive zero-shot
performance on image classification. However, state-of-the-art methods often
rely on fine-tuning techniques like prompt learning and adapter-based tuning to
optimize CLIP's performance. The necessity for fine-tuning significantly limits
CLIP's adaptability to novel datasets and domains. This requirement mandates
substantial time and computational resources for each new dataset. To overcome
this limitation, we introduce simple yet effective training-free approaches,
Single-stage LMM Augmented CLIP (SLAC) and Two-stage LMM Augmented CLIP (TLAC),
that leverage powerful Large Multimodal Models (LMMs), such as Gemini, for
image classification. The proposed methods leverage the capabilities of
pre-trained LMMs, allowing for seamless adaptation to diverse datasets and
domains without the need for additional training. Our approaches involve
prompting the LMM to identify objects within an image. Subsequently, the CLIP
text encoder determines the image class by identifying the dataset class with
the highest semantic similarity to the LLM predicted object. We evaluated our
models on 11 base-to-novel datasets and they achieved superior accuracy on 9 of
these, including benchmarks like ImageNet, SUN397 and Caltech101, while
maintaining a strictly training-free paradigm. Our overall accuracy of 83.44%
surpasses the previous state-of-the-art few-shot methods by a margin of 6.75%.
Our method achieved 83.6% average accuracy across 13 datasets, a 9.7%
improvement over the previous 73.9% state-of-the-art for training-free
approaches. Our method improves domain generalization, with a 3.6% gain on
ImageNetV2, 16.96% on ImageNet-S, and 12.59% on ImageNet-R, over prior few-shot
methods.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 17:11:41 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Munir",
"Ans",
""
],
[
"Qureshi",
"Faisal Z.",
""
],
[
"Khan",
"Muhammad Haris",
""
],
[
"Ali",
"Mohsen",
""
]
] | TITLE: TLAC: Two-stage LMM Augmented CLIP for Zero-Shot Classification
ABSTRACT: Contrastive Language-Image Pretraining (CLIP) has shown impressive zero-shot
performance on image classification. However, state-of-the-art methods often
rely on fine-tuning techniques like prompt learning and adapter-based tuning to
optimize CLIP's performance. The necessity for fine-tuning significantly limits
CLIP's adaptability to novel datasets and domains. This requirement mandates
substantial time and computational resources for each new dataset. To overcome
this limitation, we introduce simple yet effective training-free approaches,
Single-stage LMM Augmented CLIP (SLAC) and Two-stage LMM Augmented CLIP (TLAC),
that leverage powerful Large Multimodal Models (LMMs), such as Gemini, for
image classification. The proposed methods leverage the capabilities of
pre-trained LMMs, allowing for seamless adaptation to diverse datasets and
domains without the need for additional training. Our approaches involve
prompting the LMM to identify objects within an image. Subsequently, the CLIP
text encoder determines the image class by identifying the dataset class with
the highest semantic similarity to the LLM predicted object. We evaluated our
models on 11 base-to-novel datasets and they achieved superior accuracy on 9 of
these, including benchmarks like ImageNet, SUN397 and Caltech101, while
maintaining a strictly training-free paradigm. Our overall accuracy of 83.44%
surpasses the previous state-of-the-art few-shot methods by a margin of 6.75%.
Our method achieved 83.6% average accuracy across 13 datasets, a 9.7%
improvement over the previous 73.9% state-of-the-art for training-free
approaches. Our method improves domain generalization, with a 3.6% gain on
ImageNetV2, 16.96% on ImageNet-S, and 12.59% on ImageNet-R, over prior few-shot
methods.
|
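As a rough illustration of the class-selection step described in the abstract above (an LMM names the object in the image, and a CLIP-style text encoder then picks the dataset class whose text embedding is most similar), here is a minimal Python sketch. It works on precomputed embeddings only; the random vectors, class names, and helper functions are illustrative placeholders, not the authors' implementation.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between one vector and each row of a matrix.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b @ a

def pick_class(object_embedding, class_embeddings, class_names):
    # Choose the dataset class whose text embedding is closest to the
    # embedding of the object name predicted by the LMM.
    sims = cosine_similarity(object_embedding, class_embeddings)
    return class_names[int(np.argmax(sims))]

# Toy demo with random vectors standing in for CLIP text features.
rng = np.random.default_rng(0)
class_names = ["golden retriever", "tabby cat", "school bus"]
class_embeddings = rng.normal(size=(3, 512))
object_embedding = class_embeddings[1] + 0.05 * rng.normal(size=512)
print(pick_class(object_embedding, class_embeddings, class_names))  # "tabby cat"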
2503.12211 | Omri Weinstein | Nir Ailon, Akhiad Bercovich, Omri Weinstein | Changing Base Without Losing Pace: A GPU-Efficient Alternative to MatMul
in DNNs | null | null | null | null | cs.LG cs.AI cs.DS | http://creativecommons.org/licenses/by/4.0/ | We propose a cheaper alternative bilinear operator to matrix-multiplication
in deep neural networks (DNNs). Unlike many stubborn attempts to accelerate
MatMuls in DNN inference, this operator is supported by capabilities of
existing GPU hardware, most notably NVIDIA TensorCores. To our knowledge, this
is the first GPU-native acceleration technique which \emph{does not decrease}
(in fact, increases) the number of trainable parameters of the network,
mitigating the accuracy-loss of compression-based techniques. Hence, this
operator is at the same time more expressive than MatMul, yet requires
substantially \emph{fewer} FLOPs to evaluate. We term this new operator
\emph{Strassen-Tile} (STL).
The main idea behind STL$(X,W)$ is a \emph{local} change-of-basis (learnable
encoder) on weights and activation \emph{tiles}, after which we perform batched
\emph{elementwise} products between tiles, and a final decoding transformation
(inspired by algebraic pipelines from fast matrix and polynomial
multiplication).
We compare STL against two benchmarks. The first one is SoTA T2T-ViT on
Imagenet-1K. Here we show that replacing \emph{all} linear layers with STL and
training from scratch, results in factor x2.7 reduction in FLOPs with a 0.5
\emph{accuracy improvement}. Our second speed-accuracy comparison benchmark for
pretrained LLMs is the most practical GPU-acceleration technique, 2:4
structured sparsity. Finetuning TinyLlama \cite{tinyllama24} with STL layers on
the Slim Pajama dataset, achieves similar accuracy to 2:4, with x2.2 FLOP
speedup compared to x1.7 of the latter.
Finally, we discuss a group-theoretic approach for discovering
\emph{universal} encoders for STL, which could lead to fast \emph{black-box}
acceleration via approximate matrix-multiplication (AMM).
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 17:31:36 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ailon",
"Nir",
""
],
[
"Bercovich",
"Akhiad",
""
],
[
"Weinstein",
"Omri",
""
]
] | TITLE: Changing Base Without Losing Pace: A GPU-Efficient Alternative to MatMul
in DNNs
ABSTRACT: We propose a cheaper alternative bilinear operator to matrix-multiplication
in deep neural networks (DNNs). Unlike many stubborn attempts to accelerate
MatMuls in DNN inference, this operator is supported by capabilities of
existing GPU hardware, most notably NVIDIA TensorCores. To our knowledge, this
is the first GPU-native acceleration technique which \emph{does not decrease}
(in fact, increases) the number of trainable parameters of the network,
mitigating the accuracy-loss of compression-based techniques. Hence, this
operator is at the same time more expressive than MatMul, yet requires
substantially \emph{fewer} FLOPs to evaluate. We term this new operator
\emph{Strassen-Tile} (STL).
The main idea behind STL$(X,W)$ is a \emph{local} change-of-basis (learnable
encoder) on weights and activation \emph{tiles}, after which we perform batched
\emph{elementwise} products between tiles, and a final decoding transformation
(inspired by algebraic pipelines from fast matrix and polynomial
multiplication).
We compare STL against two benchmarks. The first one is SoTA T2T-ViT on
Imagenet-1K. Here we show that replacing \emph{all} linear layers with STL and
training from scratch, results in factor x2.7 reduction in FLOPs with a 0.5
\emph{accuracy improvement}. Our second speed-accuracy comparison benchmark for
pretrained LLMs is the most practical GPU-acceleration technique, 2:4
structured sparsity. Finetuning TinyLlama \cite{tinyllama24} with STL layers on
the Slim Pajama dataset, achieves similar accuracy to 2:4, with x2.2 FLOP
speedup compared to x1.7 of the latter.
Finally, we discuss a group-theoretic approach for discovering
\emph{universal} encoders for STL, which could lead to fast \emph{black-box}
acceleration via approximate matrix-multiplication (AMM).
|
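To make the tile-encode / elementwise-multiply / decode shape described above more concrete, the following toy numpy sketch applies linear encoders to activation and weight tiles, takes elementwise products of the codes, accumulates over the shared dimension, and decodes each output tile. The tile size, code dimension, and random matrices are illustrative assumptions; the actual STL encoders and decoder are learned and chosen for expressiveness, so this is not the paper's operator.

import numpy as np

rng = np.random.default_rng(0)
t, r = 4, 16          # tile size and encoded (code) dimension -- illustrative
n, k, m = 8, 8, 8     # X is (n, k), W is (k, m); all divisible by t

# Learnable linear maps in a real model; random here.
E_x = rng.normal(size=(r, t * t))   # encoder for activation tiles
E_w = rng.normal(size=(r, t * t))   # encoder for weight tiles
D   = rng.normal(size=(t * t, r))   # decoder back to a t x t output tile

def stl_like(X, W):
    # Tile X and W, encode each tile, take elementwise products of the codes,
    # accumulate over the shared dimension, and decode each output tile.
    Y = np.zeros((X.shape[0], W.shape[1]))
    for i in range(0, X.shape[0], t):
        for j in range(0, W.shape[1], t):
            acc = np.zeros(r)
            for l in range(0, X.shape[1], t):
                cx = E_x @ X[i:i+t, l:l+t].ravel()
                cw = E_w @ W[l:l+t, j:j+t].ravel()
                acc += cx * cw                      # elementwise product of codes
            Y[i:i+t, j:j+t] = (D @ acc).reshape(t, t)
    return Y

X, W = rng.normal(size=(n, k)), rng.normal(size=(k, m))
print(stl_like(X, W).shape)   # (8, 8)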
2503.12215 | Amulya Reddy Maligireddy | Amulya Reddy Maligireddy, Manohar Reddy Uppula, Nidhi Rastogi,
Yaswanth Reddy Parla | Gun Detection Using Combined Human Pose and Weapon Appearance | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The increasing frequency of firearm-related incidents has necessitated
advancements in security and surveillance systems, particularly in firearm
detection within public spaces. Traditional gun detection methods rely on
manual inspections and continuous human monitoring of CCTV footage, which are
labor-intensive and prone to high false positive and negative rates. To address
these limitations, we propose a novel approach that integrates human pose
estimation with weapon appearance recognition using deep learning techniques.
Unlike prior studies that focus on either body pose estimation or firearm
detection in isolation, our method jointly analyzes posture and weapon presence
to enhance detection accuracy in real-world, dynamic environments. To train our
model, we curated a diverse dataset comprising images from open-source
repositories such as IMFDB and Monash Guns, supplemented with AI-generated and
manually collected images from web sources. This dataset ensures robust
generalization and realistic performance evaluation under various surveillance
conditions. Our research aims to improve the precision and reliability of
firearm detection systems, contributing to enhanced public safety and threat
mitigation in high-risk areas.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 17:57:35 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Maligireddy",
"Amulya Reddy",
""
],
[
"Uppula",
"Manohar Reddy",
""
],
[
"Rastogi",
"Nidhi",
""
],
[
"Parla",
"Yaswanth Reddy",
""
]
] | TITLE: Gun Detection Using Combined Human Pose and Weapon Appearance
ABSTRACT: The increasing frequency of firearm-related incidents has necessitated
advancements in security and surveillance systems, particularly in firearm
detection within public spaces. Traditional gun detection methods rely on
manual inspections and continuous human monitoring of CCTV footage, which are
labor-intensive and prone to high false positive and negative rates. To address
these limitations, we propose a novel approach that integrates human pose
estimation with weapon appearance recognition using deep learning techniques.
Unlike prior studies that focus on either body pose estimation or firearm
detection in isolation, our method jointly analyzes posture and weapon presence
to enhance detection accuracy in real-world, dynamic environments. To train our
model, we curated a diverse dataset comprising images from open-source
repositories such as IMFDB and Monash Guns, supplemented with AI-generated and
manually collected images from web sources. This dataset ensures robust
generalization and realistic performance evaluation under various surveillance
conditions. Our research aims to improve the precision and reliability of
firearm detection systems, contributing to enhanced public safety and threat
mitigation in high-risk areas.
|
2503.12218 | Chengxuan Qian | Chengxuan Qian, Kai Han, Siqi Ma, Chongwen Lyu, Zhenlong Yuan, Jun
Chen, Zhe Liu | Adaptive Label Correction for Robust Medical Image Segmentation with
Noisy Labels | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has shown remarkable success in medical image analysis, but its
reliance on large volumes of high-quality labeled data limits its
applicability. While noisy labeled data are easier to obtain, directly
incorporating them into training can degrade model performance. To address this
challenge, we propose a Mean Teacher-based Adaptive Label Correction (ALC)
self-ensemble framework for robust medical image segmentation with noisy
labels. The framework leverages the Mean Teacher architecture to ensure
consistent learning under noise perturbations. It includes an adaptive label
refinement mechanism that dynamically captures and weights differences across
multiple disturbance versions to enhance the quality of noisy labels.
Additionally, a sample-level uncertainty-based label selection algorithm is
introduced to prioritize high-confidence samples for network updates,
mitigating the impact of noisy annotations. Consistency learning is integrated
to align the predictions of the student and teacher networks, further enhancing
model robustness. Extensive experiments on two public datasets demonstrate the
effectiveness of the proposed framework, showing significant improvements in
segmentation performance. By fully exploiting the strengths of the Mean Teacher
structure, the ALC framework effectively processes noisy labels, adapts to
challenging scenarios, and achieves competitive results compared to
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 18:03:01 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Qian",
"Chengxuan",
""
],
[
"Han",
"Kai",
""
],
[
"Ma",
"Siqi",
""
],
[
"Lyu",
"Chongwen",
""
],
[
"Yuan",
"Zhenlong",
""
],
[
"Chen",
"Jun",
""
],
[
"Liu",
"Zhe",
""
]
] | TITLE: Adaptive Label Correction for Robust Medical Image Segmentation with
Noisy Labels
ABSTRACT: Deep learning has shown remarkable success in medical image analysis, but its
reliance on large volumes of high-quality labeled data limits its
applicability. While noisy labeled data are easier to obtain, directly
incorporating them into training can degrade model performance. To address this
challenge, we propose a Mean Teacher-based Adaptive Label Correction (ALC)
self-ensemble framework for robust medical image segmentation with noisy
labels. The framework leverages the Mean Teacher architecture to ensure
consistent learning under noise perturbations. It includes an adaptive label
refinement mechanism that dynamically captures and weights differences across
multiple disturbance versions to enhance the quality of noisy labels.
Additionally, a sample-level uncertainty-based label selection algorithm is
introduced to prioritize high-confidence samples for network updates,
mitigating the impact of noisy annotations. Consistency learning is integrated
to align the predictions of the student and teacher networks, further enhancing
model robustness. Extensive experiments on two public datasets demonstrate the
effectiveness of the proposed framework, showing significant improvements in
segmentation performance. By fully exploiting the strengths of the Mean Teacher
structure, the ALC framework effectively processes noisy labels, adapts to
challenging scenarios, and achieves competitive results compared to
state-of-the-art methods.
|
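Two generic ingredients named in the abstract above, the Mean Teacher EMA weight update and a student-teacher consistency loss, can be sketched in a few lines of PyTorch. The tiny convolutional heads, the noise perturbation, and the loss form are illustrative assumptions; the adaptive label refinement and uncertainty-based sample selection from the paper are omitted.

import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # Exponential moving average of student weights into the teacher.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def consistency_loss(student_logits, teacher_logits):
    # Mean squared error between softmax predictions of student and teacher.
    return F.mse_loss(student_logits.softmax(dim=1),
                      teacher_logits.softmax(dim=1).detach())

# Toy usage with identical tiny segmentation heads.
student = torch.nn.Conv2d(3, 2, kernel_size=1)
teacher = torch.nn.Conv2d(3, 2, kernel_size=1)
teacher.load_state_dict(student.state_dict())
x = torch.randn(4, 3, 32, 32)
loss = consistency_loss(student(x), teacher(x + 0.1 * torch.randn_like(x)))
loss.backward()
ema_update(teacher, student)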
2503.12222 | Giovanni Montana | Natinael Solomon Neggatu, Jeremie Houssineau, Giovanni Montana | Evaluation-Time Policy Switching for Offline Reinforcement Learning | Proc. of the 24th International Conference on Autonomous Agents and
Multiagent Systems (AAMAS 2025) | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Offline reinforcement learning (RL) looks at learning how to optimally solve
tasks using a fixed dataset of interactions from the environment. Many
off-policy algorithms developed for online learning struggle in the offline
setting as they tend to over-estimate the behaviour of out of distributions
actions. Existing offline RL algorithms adapt off-policy algorithms, employing
techniques such as constraining the policy or modifying the value function to
achieve good performance on individual datasets but struggle to adapt to
different tasks or datasets of different qualities without tuning
hyper-parameters. We introduce a policy switching technique that dynamically
combines the behaviour of a pure off-policy RL agent, for improving behaviour,
and a behavioural cloning (BC) agent, for staying close to the data. We achieve
this by using a combination of epistemic uncertainty, quantified by our RL
model, and a metric for aleatoric uncertainty extracted from the dataset. We
show empirically that our policy switching technique can outperform not only
the individual algorithms used in the switching process but also compete with
state-of-the-art methods on numerous benchmarks. Our use of epistemic
uncertainty for policy switching also allows us to naturally extend our method
to the domain of offline to online fine-tuning allowing our model to adapt
quickly and safely from online data, either matching or exceeding the
performance of current methods that typically require additional modification
or hyper-parameter fine-tuning.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 18:12:16 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Neggatu",
"Natinael Solomon",
""
],
[
"Houssineau",
"Jeremie",
""
],
[
"Montana",
"Giovanni",
""
]
] | TITLE: Evaluation-Time Policy Switching for Offline Reinforcement Learning
ABSTRACT: Offline reinforcement learning (RL) looks at learning how to optimally solve
tasks using a fixed dataset of interactions from the environment. Many
off-policy algorithms developed for online learning struggle in the offline
setting as they tend to over-estimate the behaviour of out-of-distribution
actions. Existing offline RL algorithms adapt off-policy algorithms, employing
techniques such as constraining the policy or modifying the value function to
achieve good performance on individual datasets but struggle to adapt to
different tasks or datasets of different qualities without tuning
hyper-parameters. We introduce a policy switching technique that dynamically
combines the behaviour of a pure off-policy RL agent, for improving behaviour,
and a behavioural cloning (BC) agent, for staying close to the data. We achieve
this by using a combination of epistemic uncertainty, quantified by our RL
model, and a metric for aleatoric uncertainty extracted from the dataset. We
show empirically that our policy switching technique can outperform not only
the individual algorithms used in the switching process but also compete with
state-of-the-art methods on numerous benchmarks. Our use of epistemic
uncertainty for policy switching also allows us to naturally extend our method
to the domain of offline to online fine-tuning allowing our model to adapt
quickly and safely from online data, either matching or exceeding the
performance of current methods that typically require additional modification
or hyper-parameter fine-tuning.
|
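A minimal Python sketch of an evaluation-time switching rule of the kind described above follows, using the spread of a Q-ensemble as an epistemic-uncertainty proxy and a dataset-level noise estimate as the aleatoric term. The combination rule, threshold, and all quantities here are toy assumptions rather than the paper's exact formulation.

import numpy as np

def switch_action(q_ensemble_values, rl_action, bc_action, dataset_noise, k=1.0):
    # Epistemic uncertainty: disagreement (std) across the Q-ensemble for the
    # RL agent's proposed action. If it exceeds a threshold scaled by a
    # dataset-level aleatoric estimate, fall back to the behavioural-cloning action.
    epistemic = float(np.std(q_ensemble_values))
    threshold = k * dataset_noise
    return bc_action if epistemic > threshold else rl_action

# Toy usage: five ensemble members evaluating the RL agent's action.
rng = np.random.default_rng(0)
q_vals = rng.normal(loc=10.0, scale=2.0, size=5)
action = switch_action(q_vals, rl_action=np.array([0.3, -0.1]),
                       bc_action=np.array([0.1, 0.0]), dataset_noise=1.0)
print(action)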
2503.12230 | Sven Behnke | Yihao Wang and Raphael Memmesheimer and Sven Behnke | LIAM: Multimodal Transformer for Language Instructions, Images, Actions
and Semantic Maps | null | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The availability of large language models and open-vocabulary object
perception methods enables more flexibility for domestic service robots. The
large variability of domestic tasks can be addressed without implementing each
task individually by providing the robot with a task description along with
appropriate environment information. In this work, we propose LIAM - an
end-to-end model that predicts action transcripts based on language, image,
action, and map inputs. Language and image inputs are encoded with a CLIP
backbone, for which we designed two pre-training tasks to fine-tune its weights
and pre-align the latent spaces. We evaluate our method on the ALFRED dataset,
a simulator-generated benchmark for domestic tasks. Our results demonstrate the
importance of pre-aligning embedding spaces from different modalities and the
efficacy of incorporating semantic maps.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 18:54:06 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Yihao",
""
],
[
"Memmesheimer",
"Raphael",
""
],
[
"Behnke",
"Sven",
""
]
] | TITLE: LIAM: Multimodal Transformer for Language Instructions, Images, Actions
and Semantic Maps
ABSTRACT: The availability of large language models and open-vocabulary object
perception methods enables more flexibility for domestic service robots. The
large variability of domestic tasks can be addressed without implementing each
task individually by providing the robot with a task description along with
appropriate environment information. In this work, we propose LIAM - an
end-to-end model that predicts action transcripts based on language, image,
action, and map inputs. Language and image inputs are encoded with a CLIP
backbone, for which we designed two pre-training tasks to fine-tune its weights
and pre-align the latent spaces. We evaluate our method on the ALFRED dataset,
a simulator-generated benchmark for domestic tasks. Our results demonstrate the
importance of pre-aligning embedding spaces from different modalities and the
efficacy of incorporating semantic maps.
|
2503.12239 | Sahraoui Dhelim Dr | Soufiane Bacha, Huansheng Ning, Belarbi Mostefa, Doreen Sebastian
Sarwatt, Sahraoui Dhelim | A Novel Double Pruning method for Imbalanced Data using Information
Entropy and Roulette Wheel Selection for Breast Cancer Diagnosis | null | null | null | null | cs.LG cs.AI cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate illness diagnosis is vital for effective treatment and patient
safety. Machine learning models are widely used for cancer diagnosis based on
historical medical data. However, data imbalance remains a major challenge,
leading to hindering classifier performance and reliability. The SMOTEBoost
method addresses this issue by generating synthetic data to balance the
dataset, but it may overlook crucial overlapping regions near the decision
boundary and can produce noisy samples. This paper proposes RE-SMOTEBoost, an
enhanced version of SMOTEBoost, designed to overcome these limitations.
Firstly, RE-SMOTEBoost focuses on generating synthetic samples in overlapping
regions to better capture the decision boundary using roulette wheel selection.
Secondly, it incorporates a filtering mechanism based on information entropy to
reduce noise and borderline cases and to improve the quality of generated data.
Thirdly, we introduce a double regularization penalty to control the synthetic
samples' proximity to the decision boundary and avoid class overlap. These
enhancements enable higher-quality oversampling of the minority class,
resulting in a more balanced and effective training dataset. The proposed
method outperforms existing state-of-the-art techniques when evaluated on
imbalanced datasets. Compared to the top-performing sampling algorithms,
RE-SMOTEBoost demonstrates a notable improvement of 3.22\% in accuracy and a
variance reduction of 88.8\%. These results indicate that the proposed model
offers a solid solution for medical settings, effectively overcoming data
scarcity and severe imbalance caused by limited samples, data collection
difficulties, and privacy constraints.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 19:34:15 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Bacha",
"Soufiane",
""
],
[
"Ning",
"Huansheng",
""
],
[
"Mostefa",
"Belarbi",
""
],
[
"Sarwatt",
"Doreen Sebastian",
""
],
[
"Dhelim",
"Sahraoui",
""
]
] | TITLE: A Novel Double Pruning method for Imbalanced Data using Information
Entropy and Roulette Wheel Selection for Breast Cancer Diagnosis
ABSTRACT: Accurate illness diagnosis is vital for effective treatment and patient
safety. Machine learning models are widely used for cancer diagnosis based on
historical medical data. However, data imbalance remains a major challenge,
leading to hindering classifier performance and reliability. The SMOTEBoost
method addresses this issue by generating synthetic data to balance the
dataset, but it may overlook crucial overlapping regions near the decision
boundary and can produce noisy samples. This paper proposes RE-SMOTEBoost, an
enhanced version of SMOTEBoost, designed to overcome these limitations.
Firstly, RE-SMOTEBoost focuses on generating synthetic samples in overlapping
regions to better capture the decision boundary using roulette wheel selection.
Secondly, it incorporates a filtering mechanism based on information entropy to
reduce noise and borderline cases and to improve the quality of generated data.
Thirdly, we introduce a double regularization penalty to control the synthetic
samples' proximity to the decision boundary and avoid class overlap. These
enhancements enable higher-quality oversampling of the minority class,
resulting in a more balanced and effective training dataset. The proposed
method outperforms existing state-of-the-art techniques when evaluated on
imbalanced datasets. Compared to the top-performing sampling algorithms,
RE-SMOTEBoost demonstrates a notable improvement of 3.22\% in accuracy and a
variance reduction of 88.8\%. These results indicate that the proposed model
offers a solid solution for medical settings, effectively overcoming data
scarcity and severe imbalance caused by limited samples, data collection
difficulties, and privacy constraints.
|
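The two generation ingredients named above, roulette-wheel (fitness-proportional) selection of minority seeds and SMOTE-style interpolation, can be sketched as follows in numpy. The weighting by inverse distance to the nearest majority sample is a stand-in for boundary proximity, and the entropy filter and regularization penalties from the paper are omitted, so this is only a rough illustration.

import numpy as np

def roulette_smote(minority, majority, n_new, rng):
    # Weight each minority sample by inverse distance to the nearest majority
    # sample, so points near the class boundary are selected more often.
    d = np.linalg.norm(minority[:, None, :] - majority[None, :, :], axis=2).min(axis=1)
    weights = 1.0 / (d + 1e-8)
    probs = weights / weights.sum()

    synthetic = []
    for _ in range(n_new):
        i = rng.choice(len(minority), p=probs)   # roulette wheel selection
        j = rng.choice(len(minority))            # random minority neighbour
        lam = rng.uniform()
        synthetic.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(synthetic)

rng = np.random.default_rng(0)
minority = rng.normal(loc=0.0, size=(20, 2))
majority = rng.normal(loc=2.0, size=(200, 2))
print(roulette_smote(minority, majority, n_new=10, rng=rng).shape)  # (10, 2)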
2503.12258 | Myisha Ahmed Chowdhury | Myisha A. Chowdhury, Gift Modekwe, and Qiugang Lu | Lithium-ion Battery Capacity Prediction via Conditional Recurrent
Generative Adversarial Network-based Time-Series Regeneration | 7 pages, 6 figures | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by/4.0/ | Accurate capacity prediction is essential for the safe and reliable operation
of batteries by anticipating potential failures beforehand. The performance of
state-of-the-art capacity prediction methods is significantly hindered by the
limited availability of training data, primarily attributed to the expensive
experimentation and data sharing restrictions. To tackle this issue, this paper
presents a recurrent conditional generative adversarial network (RCGAN) scheme
to enrich the limited battery data by adding high-fidelity synthetic ones to
improve the capacity prediction. The proposed RCGAN scheme consists of a
generator network to generate synthetic samples that closely resemble the true
data and a discriminator network to differentiate real and synthetic samples.
Long short-term memory (LSTM)-based generator and discriminator are leveraged to
learn the temporal and spatial distributions in the multivariate time-series
battery data. Moreover, the generator is conditioned on the capacity value to
account for changes in battery dynamics due to the degradation over usage
cycles. The effectiveness of the RCGAN is evaluated across six batteries from
two benchmark datasets (NASA and MIT). The raw data is then augmented with
synthetic samples from the RCGAN to train LSTM and gated recurrent unit (GRU)
models for capacity prediction. Simulation results show that the models trained
with augmented datasets significantly outperform those trained with the
original datasets in capacity prediction.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 20:52:15 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chowdhury",
"Myisha A.",
""
],
[
"Modekwe",
"Gift",
""
],
[
"Lu",
"Qiugang",
""
]
] | TITLE: Lithium-ion Battery Capacity Prediction via Conditional Recurrent
Generative Adversarial Network-based Time-Series Regeneration
ABSTRACT: Accurate capacity prediction is essential for the safe and reliable operation
of batteries by anticipating potential failures beforehand. The performance of
state-of-the-art capacity prediction methods is significantly hindered by the
limited availability of training data, primarily attributed to the expensive
experimentation and data sharing restrictions. To tackle this issue, this paper
presents a recurrent conditional generative adversarial network (RCGAN) scheme
to enrich the limited battery data by adding high-fidelity synthetic ones to
improve the capacity prediction. The proposed RCGAN scheme consists of a
generator network to generate synthetic samples that closely resemble the true
data and a discriminator network to differentiate real and synthetic samples.
Long short-term memory (LSTM)-based generator and discriminator are leveraged to
learn the temporal and spatial distributions in the multivariate time-series
battery data. Moreover, the generator is conditioned on the capacity value to
account for changes in battery dynamics due to the degradation over usage
cycles. The effectiveness of the RCGAN is evaluated across six batteries from
two benchmark datasets (NASA and MIT). The raw data is then augmented with
synthetic samples from the RCGAN to train LSTM and gated recurrent unit (GRU)
models for capacity prediction. Simulation results show that the models trained
with augmented datasets significantly outperform those trained with the
original datasets in capacity prediction.
|
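A minimal PyTorch sketch of a capacity-conditioned LSTM generator of the general kind described above: noise sequences are concatenated with the conditioning capacity value at every time step and mapped to a synthetic multivariate battery sequence. Layer sizes and the conditioning scheme are assumptions, and the discriminator and adversarial training loop are omitted.

import torch
import torch.nn as nn

class ConditionalLSTMGenerator(nn.Module):
    def __init__(self, noise_dim=16, cond_dim=1, hidden_dim=64, out_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(noise_dim + cond_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, z, capacity):
        # z: (batch, seq_len, noise_dim); capacity: (batch, 1) condition.
        cond = capacity.unsqueeze(1).expand(-1, z.size(1), -1)
        h, _ = self.lstm(torch.cat([z, cond], dim=-1))
        return self.head(h)   # synthetic (batch, seq_len, out_dim) sequence

gen = ConditionalLSTMGenerator()
z = torch.randn(8, 50, 16)     # 8 noise sequences of 50 steps
capacity = torch.rand(8, 1)    # normalised capacity condition
print(gen(z, capacity).shape)  # torch.Size([8, 50, 3])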
2503.12267 | Mohamed Ali Zormati | Aziz Amari, Mariem Makni, Wissal Fnaich, Akram Lahmar, Fedi Koubaa,
Oumayma Charrad, Mohamed Ali Zormati, Rabaa Youssef Douss | An Efficient Deep Learning-Based Approach to Automating Invoice Document
Validation | null | 2024 IEEE/ACS 21st International Conference on Computer Systems
and Applications (AICCSA) | 10.1109/AICCSA63423.2024.10912544 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In large organizations, the number of financial transactions can grow
rapidly, driving the need for fast and accurate multi-criteria invoice
validation. Manual processing remains error-prone and time-consuming, while
current automated solutions are limited by their inability to support a variety
of constraints, such as documents that are partially handwritten or
photographed with a mobile phone. In this paper, we propose to automate the
validation of machine written invoices using document layout analysis and
object detection techniques based on recent deep learning (DL) models. We
introduce a novel dataset consisting of manually annotated real-world invoices
and a multi-criteria validation process. We fine-tune and benchmark the most
relevant DL models on our dataset. Experimental results show the effectiveness
of the proposed pipeline and selected DL models in terms of achieving fast and
accurate validation of invoices.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 21:33:00 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Amari",
"Aziz",
""
],
[
"Makni",
"Mariem",
""
],
[
"Fnaich",
"Wissal",
""
],
[
"Lahmar",
"Akram",
""
],
[
"Koubaa",
"Fedi",
""
],
[
"Charrad",
"Oumayma",
""
],
[
"Zormati",
"Mohamed Ali",
""
],
[
"Douss",
"Rabaa Youssef",
""
]
] | TITLE: An Efficient Deep Learning-Based Approach to Automating Invoice Document
Validation
ABSTRACT: In large organizations, the number of financial transactions can grow
rapidly, driving the need for fast and accurate multi-criteria invoice
validation. Manual processing remains error-prone and time-consuming, while
current automated solutions are limited by their inability to support a variety
of constraints, such as documents that are partially handwritten or
photographed with a mobile phone. In this paper, we propose to automate the
validation of machine written invoices using document layout analysis and
object detection techniques based on recent deep learning (DL) models. We
introduce a novel dataset consisting of manually annotated real-world invoices
and a multi-criteria validation process. We fine-tune and benchmark the most
relevant DL models on our dataset. Experimental results show the effectiveness
of the proposed pipeline and selected DL models in terms of achieving fast and
accurate validation of invoices.
|
2503.12273 | Siddharth Rout | Siddharth Rout, Eldad Haber, St\'ephane Gaudreault | Probabilistic Forecasting for Dynamical Systems with Missing or
Imperfect Data | null | null | null | null | physics.comp-ph cs.LG math.DS physics.ao-ph | http://creativecommons.org/licenses/by/4.0/ | The modeling of dynamical systems is essential in many fields, but applying
machine learning techniques is often challenging due to incomplete or noisy
data. This study introduces a variant of stochastic interpolation (SI) for
probabilistic forecasting, estimating future states as distributions rather
than single-point predictions. We explore its mathematical foundations and
demonstrate its effectiveness on various dynamical systems, including the
challenging WeatherBench dataset.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 22:09:39 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Rout",
"Siddharth",
""
],
[
"Haber",
"Eldad",
""
],
[
"Gaudreault",
"Stéphane",
""
]
] | TITLE: Probabilistic Forecasting for Dynamical Systems with Missing or
Imperfect Data
ABSTRACT: The modeling of dynamical systems is essential in many fields, but applying
machine learning techniques is often challenging due to incomplete or noisy
data. This study introduces a variant of stochastic interpolation (SI) for
probabilistic forecasting, estimating future states as distributions rather
than single-point predictions. We explore its mathematical foundations and
demonstrate its effectiveness on various dynamical systems, including the
challenging WeatherBench dataset.
|
2503.12281 | Paola Natalia Ca\~nas Rodriguez | Paola Natalia Ca\~nas, Marcos Nieto, Oihana Otaegui, and Igor
Rodr\'iguez | Exploration of VLMs for Driver Monitoring Systems Applications | Accepted in 16th ITS European Congress, Seville, Spain, 19-21 May
2025 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, we have witnessed significant progress in emerging deep
learning models, particularly Large Language Models (LLMs) and Vision-Language
Models (VLMs). These models have demonstrated promising results, indicating a
new era of Artificial Intelligence (AI) that surpasses previous methodologies.
Their extensive knowledge and zero-shot capabilities suggest a paradigm shift
in developing deep learning solutions, moving from data capturing and algorithm
training to just writing appropriate prompts. While the application of these
technologies has been explored across various industries, including automotive,
there is a notable gap in the scientific literature regarding their use in
Driver Monitoring Systems (DMS). This paper presents our initial approach to
implementing VLMs in this domain, utilising the Driver Monitoring Dataset to
evaluate their performance and discussing their advantages and challenges when
implemented in real-world scenarios.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 22:37:36 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Cañas",
"Paola Natalia",
""
],
[
"Nieto",
"Marcos",
""
],
[
"Otaegui",
"Oihana",
""
],
[
"Rodríguez",
"Igor",
""
]
] | TITLE: Exploration of VLMs for Driver Monitoring Systems Applications
ABSTRACT: In recent years, we have witnessed significant progress in emerging deep
learning models, particularly Large Language Models (LLMs) and Vision-Language
Models (VLMs). These models have demonstrated promising results, indicating a
new era of Artificial Intelligence (AI) that surpasses previous methodologies.
Their extensive knowledge and zero-shot capabilities suggest a paradigm shift
in developing deep learning solutions, moving from data capturing and algorithm
training to just writing appropriate prompts. While the application of these
technologies has been explored across various industries, including automotive,
there is a notable gap in the scientific literature regarding their use in
Driver Monitoring Systems (DMS). This paper presents our initial approach to
implementing VLMs in this domain, utilising the Driver Monitoring Dataset to
evaluate their performance and discussing their advantages and challenges when
implemented in real-world scenarios.
|
2503.12286 | Da Wu | Da Wu, Zhanliang Wang, Quan Nguyen, Kai Wang | Integrating Chain-of-Thought and Retrieval Augmented Generation Enhances
Rare Disease Diagnosis from Clinical Notes | 31 pages, 3 figures | null | null | null | cs.CL cs.AI q-bio.GN q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Background: Several studies show that large language models (LLMs) struggle
with phenotype-driven gene prioritization for rare diseases. These studies
typically use Human Phenotype Ontology (HPO) terms to prompt foundation models
like GPT and LLaMA to predict candidate genes. However, in real-world settings,
foundation models are not optimized for domain-specific tasks like clinical
diagnosis, yet inputs are unstructured clinical notes rather than standardized
terms. How LLMs can be instructed to predict candidate genes or disease
diagnosis from unstructured clinical notes remains a major challenge. Methods:
We introduce RAG-driven CoT and CoT-driven RAG, two methods that combine
Chain-of-Thought (CoT) and Retrieval Augmented Generation (RAG) to analyze
clinical notes. A five-question CoT protocol mimics expert reasoning, while RAG
retrieves data from sources like HPO and OMIM (Online Mendelian Inheritance in
Man). We evaluated these approaches on rare disease datasets, including 5,980
Phenopacket-derived notes, 255 literature-based narratives, and 220 in-house
clinical notes from Children's Hospital of Philadelphia. Results: We found that
recent foundation models, including Llama 3.3-70B-Instruct and
DeepSeek-R1-Distill-Llama-70B, outperformed earlier versions such as Llama 2
and GPT-3.5. We also showed that RAG-driven CoT and CoT-driven RAG both
outperform foundation models in candidate gene prioritization from clinical
notes; in particular, both methods with DeepSeek backbone resulted in a top-10
gene accuracy of over 40% on Phenopacket-derived clinical notes. RAG-driven CoT
works better for high-quality notes, where early retrieval can anchor the
subsequent reasoning steps in domain-specific evidence, while CoT-driven RAG
has an advantage when processing lengthy and noisy notes.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 22:57:31 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wu",
"Da",
""
],
[
"Wang",
"Zhanliang",
""
],
[
"Nguyen",
"Quan",
""
],
[
"Wang",
"Kai",
""
]
] | TITLE: Integrating Chain-of-Thought and Retrieval Augmented Generation Enhances
Rare Disease Diagnosis from Clinical Notes
ABSTRACT: Background: Several studies show that large language models (LLMs) struggle
with phenotype-driven gene prioritization for rare diseases. These studies
typically use Human Phenotype Ontology (HPO) terms to prompt foundation models
like GPT and LLaMA to predict candidate genes. However, in real-world settings,
foundation models are not optimized for domain-specific tasks like clinical
diagnosis, yet inputs are unstructured clinical notes rather than standardized
terms. How LLMs can be instructed to predict candidate genes or disease
diagnosis from unstructured clinical notes remains a major challenge. Methods:
We introduce RAG-driven CoT and CoT-driven RAG, two methods that combine
Chain-of-Thought (CoT) and Retrieval Augmented Generation (RAG) to analyze
clinical notes. A five-question CoT protocol mimics expert reasoning, while RAG
retrieves data from sources like HPO and OMIM (Online Mendelian Inheritance in
Man). We evaluated these approaches on rare disease datasets, including 5,980
Phenopacket-derived notes, 255 literature-based narratives, and 220 in-house
clinical notes from Children's Hospital of Philadelphia. Results: We found that
recent foundation models, including Llama 3.3-70B-Instruct and
DeepSeek-R1-Distill-Llama-70B, outperformed earlier versions such as Llama 2
and GPT-3.5. We also showed that RAG-driven CoT and CoT-driven RAG both
outperform foundation models in candidate gene prioritization from clinical
notes; in particular, both methods with DeepSeek backbone resulted in a top-10
gene accuracy of over 40% on Phenopacket-derived clinical notes. RAG-driven CoT
works better for high-quality notes, where early retrieval can anchor the
subsequent reasoning steps in domain-specific evidence, while CoT-driven RAG
has an advantage when processing lengthy and noisy notes.
|
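A toy Python sketch contrasting the two orderings described above: RAG-driven CoT retrieves evidence first and conditions the reasoning prompt on it, while CoT-driven RAG reasons first and uses the draft reasoning to target retrieval. The retrieve and llm functions are local stand-ins defined inline, not real APIs, and the knowledge snippets are invented for illustration only.

def retrieve(query, top_k=3):
    # Stand-in retriever; a real system would query HPO/OMIM-backed indexes.
    knowledge_base = {"short stature": "Consider SHOX-related disorders.",
                      "coarse facial features": "Consider mucopolysaccharidoses."}
    return [v for k, v in knowledge_base.items() if k in query][:top_k]

def llm(prompt):
    # Stand-in language model call; echoes the prompt tail for illustration.
    return "ANSWER based on: " + prompt[-120:]

def rag_driven_cot(note):
    # Retrieve first, then reason over the note plus retrieved evidence.
    evidence = retrieve(note)
    return llm(f"Evidence: {evidence}\nNote: {note}\nReason step by step, then rank genes.")

def cot_driven_rag(note):
    # Reason first, then use the draft reasoning to drive retrieval.
    draft = llm(f"Note: {note}\nSummarise key phenotypes step by step.")
    evidence = retrieve(draft)
    return llm(f"Draft: {draft}\nEvidence: {evidence}\nRank candidate genes.")

note = "Patient with short stature and coarse facial features."
print(rag_driven_cot(note))
print(cot_driven_rag(note))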
2503.12287 | Yansong Wu | Yansong Wu, Xiao Chen, Yu Chen, Hamid Sadeghian, Fan Wu, Zhenshan
Bing, Sami Haddadin, Alexander K\"onig, Alois Knoll | SharedAssembly: A Data Collection Approach via Shared Tele-Assembly | 7 pages, 6 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Assembly is a fundamental skill for robots in both modern manufacturing and
service robotics. Existing datasets aim to address the data bottleneck in
training general-purpose robot models, falling short of capturing contact-rich
assembly tasks. To bridge this gap, we introduce SharedAssembly, a novel
bilateral teleoperation approach with shared autonomy for scalable assembly
execution and data collection. User studies demonstrate that the proposed
approach enhances both success rates and efficiency, achieving a 97.0% success
rate across various sub-millimeter-level assembly tasks. Notably, novice and
intermediate users achieve performance comparable to experts using baseline
teleoperation methods, significantly enhancing large-scale data collection.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 23:00:22 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wu",
"Yansong",
""
],
[
"Chen",
"Xiao",
""
],
[
"Chen",
"Yu",
""
],
[
"Sadeghian",
"Hamid",
""
],
[
"Wu",
"Fan",
""
],
[
"Bing",
"Zhenshan",
""
],
[
"Haddadin",
"Sami",
""
],
[
"König",
"Alexander",
""
],
[
"Knoll",
"Alois",
""
]
] | TITLE: SharedAssembly: A Data Collection Approach via Shared Tele-Assembly
ABSTRACT: Assembly is a fundamental skill for robots in both modern manufacturing and
service robotics. Existing datasets aim to address the data bottleneck in
training general-purpose robot models, falling short of capturing contact-rich
assembly tasks. To bridge this gap, we introduce SharedAssembly, a novel
bilateral teleoperation approach with shared autonomy for scalable assembly
execution and data collection. User studies demonstrate that the proposed
approach enhances both success rates and efficiency, achieving a 97.0% success
rate across various sub-millimeter-level assembly tasks. Notably, novice and
intermediate users achieve performance comparable to experts using baseline
teleoperation methods, significantly enhancing large-scale data collection.
|
2503.12293 | Averi Bates | Averi Bates, Ryan Vavricka, Shane Carleton, Ruosi Shao, Chongle Pan | Unified Modeling Language Code Generation from Diagram Images Using
Multimodal Large Language Models | Number of pages: 32, Number of figures: 23, Number of tables: 7,
Submitted to the Journal of Machine Learning with Applications, Author
Contributions: Averi Bates: Methodology, Development, Analysis, Data
Curation, Drafting, Review. Ryan Vavricka: Data Curation, Development,
Review. Shane Carleton: Supervision, Funding. Ruosi Shao: Review. Chongle
Pan: Supervision, Review | null | null | null | cs.SE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Unified Modeling Language is a standardized visual language widely used
for modeling and documenting the design of software systems. Although many
tools generate UML diagrams from UML code, generating executable UML code from
image-based UML diagrams remains challenging. This paper proposes a new
approach to generate UML code using a large multimodal language model
automatically. Synthetic UML activity and sequence diagram datasets were
created to train and test the model. We compared standard fine-tuning with LoRA
techniques to optimize base models. The experiments measured code generation
accuracy across different model sizes and training strategies. These results
demonstrated that domain-adapted MM-LLMs are effective for automated UML code
generation, with the best model achieving BLEU and SSIM scores of
0.779 and 0.942 on sequence diagrams. This will enable the modernization of
legacy systems and decrease the manual effort in software development
workflows.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 23:20:26 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Bates",
"Averi",
""
],
[
"Vavricka",
"Ryan",
""
],
[
"Carleton",
"Shane",
""
],
[
"Shao",
"Ruosi",
""
],
[
"Pan",
"Chongle",
""
]
] | TITLE: Unified Modeling Language Code Generation from Diagram Images Using
Multimodal Large Language Models
ABSTRACT: The Unified Modeling Language is a standardized visual language widely used
for modeling and documenting the design of software systems. Although many
tools generate UML diagrams from UML code, generating executable UML code from
image-based UML diagrams remains challenging. This paper proposes a new
approach to generate UML code using a large multimodal language model
automatically. Synthetic UML activity and sequence diagram datasets were
created to train and test the model. We compared standard fine-tuning with LoRA
techniques to optimize base models. The experiments measured code generation
accuracy across different model sizes and training strategies. These results
demonstrated that domain-adapted MM-LLMs are effective for automated UML code
generation, with the best model achieving BLEU and SSIM scores of
0.779 and 0.942 on sequence diagrams. This will enable the modernization of
legacy systems and decrease the manual effort in software development
workflows.
|
2503.12294 | Julie Hunter | Olivier Gouvert, Julie Hunter, J\'er\^ome Louradour, Christophe
Cerisara, Evan Dufraisse, Yaya Sy, Laura Rivi\`ere, Jean-Pierre Lorr\'e,
OpenLLM-France community | The Lucie-7B LLM and the Lucie Training Dataset: Open resources for
multilingual language generation | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | We present both the Lucie Training Dataset and the Lucie-7B foundation model.
The Lucie Training Dataset is a multilingual collection of textual corpora
centered around French and designed to offset anglo-centric biases found in
many datasets for large language model pretraining. Its French data is pulled
not only from traditional web sources, but also from French cultural heritage
documents, filling an important gap in modern datasets. Beyond French, which
makes up the largest share of the data, we added documents to support several
other European languages, including English, Spanish, German, and Italian.
Apart from its value as a resource for French language and culture, an
important feature of this dataset is that it prioritizes data rights by
minimizing copyrighted material. In addition, building on the philosophy of
past open projects, it is redistributed in the form used for training and its
processing is described on Hugging Face and GitHub. The Lucie-7B foundation
model is trained on equal amounts of data in French and English -- roughly 33%
each -- in an effort to better represent cultural aspects of French-speaking
communities. We also describe two instruction fine-tuned models,
Lucie-7B-Instruct-v1.1 and Lucie-7B-Instruct-human-data, which we release as
demonstrations of Lucie-7B in use. These models achieve promising results
compared to state-of-the-art models, demonstrating that an open approach
prioritizing data rights can still deliver strong performance. We see these
models as an initial step toward developing more performant, aligned models in
the near future. Model weights for Lucie-7B and the Lucie instruct models,
along with intermediate checkpoints for the former, are published on Hugging
Face, while model training and data preparation code is available on GitHub.
This makes Lucie-7B one of the first OSI compliant language models according to
the new OSI definition.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 23:20:45 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Gouvert",
"Olivier",
""
],
[
"Hunter",
"Julie",
""
],
[
"Louradour",
"Jérôme",
""
],
[
"Cerisara",
"Christophe",
""
],
[
"Dufraisse",
"Evan",
""
],
[
"Sy",
"Yaya",
""
],
[
"Rivière",
"Laura",
""
],
[
"Lorré",
"Jean-Pierre",
""
],
[
"community",
"OpenLLM-France",
""
]
] | TITLE: The Lucie-7B LLM and the Lucie Training Dataset: Open resources for
multilingual language generation
ABSTRACT: We present both the Lucie Training Dataset and the Lucie-7B foundation model.
The Lucie Training Dataset is a multilingual collection of textual corpora
centered around French and designed to offset anglo-centric biases found in
many datasets for large language model pretraining. Its French data is pulled
not only from traditional web sources, but also from French cultural heritage
documents, filling an important gap in modern datasets. Beyond French, which
makes up the largest share of the data, we added documents to support several
other European languages, including English, Spanish, German, and Italian.
Apart from its value as a resource for French language and culture, an
important feature of this dataset is that it prioritizes data rights by
minimizing copyrighted material. In addition, building on the philosophy of
past open projects, it is redistributed in the form used for training and its
processing is described on Hugging Face and GitHub. The Lucie-7B foundation
model is trained on equal amounts of data in French and English -- roughly 33%
each -- in an effort to better represent cultural aspects of French-speaking
communities. We also describe two instruction fine-tuned models,
Lucie-7B-Instruct-v1.1 and Lucie-7B-Instruct-human-data, which we release as
demonstrations of Lucie-7B in use. These models achieve promising results
compared to state-of-the-art models, demonstrating that an open approach
prioritizing data rights can still deliver strong performance. We see these
models as an initial step toward developing more performant, aligned models in
the near future. Model weights for Lucie-7B and the Lucie instruct models,
along with intermediate checkpoints for the former, are published on Hugging
Face, while model training and data preparation code is available on GitHub.
This makes Lucie-7B one of the first OSI compliant language models according to
the new OSI definition.
|
2503.12297 | Gagan Khandate | Gagan Khandate, Boxuan Wang, Sarah Park, Weizhe Ni, Jaoquin Palacious,
Kate Lampo, Philippe Wu, Rosh Ho, Eric Chang, Matei Ciocarlie | Train Robots in a JIF: Joint Inverse and Forward Dynamics with Human and
Robot Demonstrations | 9 pages, 8 figures, submission to RSS 2025 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-training on large datasets of robot demonstrations is a powerful
technique for learning diverse manipulation skills but is often limited by the
high cost and complexity of collecting robot-centric data, especially for tasks
requiring tactile feedback. This work addresses these challenges by introducing
a novel method for pre-training with multi-modal human demonstrations. Our
approach jointly learns inverse and forward dynamics to extract latent state
representations, towards learning manipulation-specific representations. This
enables efficient fine-tuning with only a small number of robot demonstrations,
significantly improving data efficiency. Furthermore, our method allows for the
use of multi-modal data, such as a combination of vision and touch for
manipulation. By leveraging latent dynamics modeling and tactile sensing, this
approach paves the way for scalable robot manipulation learning based on human
demonstrations.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 23:37:15 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Khandate",
"Gagan",
""
],
[
"Wang",
"Boxuan",
""
],
[
"Park",
"Sarah",
""
],
[
"Ni",
"Weizhe",
""
],
[
"Palacious",
"Jaoquin",
""
],
[
"Lampo",
"Kate",
""
],
[
"Wu",
"Philippe",
""
],
[
"Ho",
"Rosh",
""
],
[
"Chang",
"Eric",
""
],
[
"Ciocarlie",
"Matei",
""
]
] | TITLE: Train Robots in a JIF: Joint Inverse and Forward Dynamics with Human and
Robot Demonstrations
ABSTRACT: Pre-training on large datasets of robot demonstrations is a powerful
technique for learning diverse manipulation skills but is often limited by the
high cost and complexity of collecting robot-centric data, especially for tasks
requiring tactile feedback. This work addresses these challenges by introducing
a novel method for pre-training with multi-modal human demonstrations. Our
approach jointly learns inverse and forward dynamics to extract latent state
representations, towards learning manipulation-specific representations. This
enables efficient fine-tuning with only a small number of robot demonstrations,
significantly improving data efficiency. Furthermore, our method allows for the
use of multi-modal data, such as a combination of vision and touch for
manipulation. By leveraging latent dynamics modeling and tactile sensing, this
approach paves the way for scalable robot manipulation learning based on human
demonstrations.
|
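A minimal PyTorch sketch of a joint inverse-and-forward dynamics objective of the general kind described above: an encoder maps observations to latents, the inverse model predicts the action between consecutive latents, and the forward model predicts the next latent from the current latent and action. Architectures, dimensions, and the equal loss weighting are illustrative assumptions, not the paper's exact training recipe.

import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, lat_dim = 32, 4, 16
encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, lat_dim))
inverse = nn.Sequential(nn.Linear(2 * lat_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
forward_model = nn.Sequential(nn.Linear(lat_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, lat_dim))

def jif_loss(obs_t, obs_t1, action):
    # Encode both observations, then score inverse and forward predictions.
    z_t, z_t1 = encoder(obs_t), encoder(obs_t1)
    pred_action = inverse(torch.cat([z_t, z_t1], dim=-1))        # inverse dynamics
    pred_z_t1 = forward_model(torch.cat([z_t, action], dim=-1))  # forward dynamics
    return F.mse_loss(pred_action, action) + F.mse_loss(pred_z_t1, z_t1.detach())

obs_t, obs_t1 = torch.randn(8, obs_dim), torch.randn(8, obs_dim)
action = torch.randn(8, act_dim)
print(jif_loss(obs_t, obs_t1, action).item())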
2503.12301 | Amirabbas Afzali | Amirabbas Afzali, Amirhossein Afsharrad, Seyed Shahabeddin Mousavi,
Sanjay Lall | One Goal, Many Challenges: Robust Preference Optimization Amid
Content-Aware and Multi-Source Noise | null | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have made significant strides in generating
human-like responses, largely due to preference alignment techniques. However,
these methods often assume unbiased human feedback, which is rarely the case in
real-world scenarios. This paper introduces Content-Aware Noise-Resilient
Preference Optimization (CNRPO), a novel framework that addresses multiple
sources of content-dependent noise in preference learning. CNRPO employs a
multi-objective optimization approach to separate true preferences from
content-aware noises, effectively mitigating their impact. We leverage backdoor
attack mechanisms to efficiently learn and control various noise sources within
a single model. Theoretical analysis and extensive experiments on different
synthetic noisy datasets demonstrate that CNRPO significantly improves
alignment with primary human preferences while controlling for secondary noises
and biases, such as response length and harmfulness.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 00:22:00 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Afzali",
"Amirabbas",
""
],
[
"Afsharrad",
"Amirhossein",
""
],
[
"Mousavi",
"Seyed Shahabeddin",
""
],
[
"Lall",
"Sanjay",
""
]
] | TITLE: One Goal, Many Challenges: Robust Preference Optimization Amid
Content-Aware and Multi-Source Noise
ABSTRACT: Large Language Models (LLMs) have made significant strides in generating
human-like responses, largely due to preference alignment techniques. However,
these methods often assume unbiased human feedback, which is rarely the case in
real-world scenarios. This paper introduces Content-Aware Noise-Resilient
Preference Optimization (CNRPO), a novel framework that addresses multiple
sources of content-dependent noise in preference learning. CNRPO employs a
multi-objective optimization approach to separate true preferences from
content-aware noises, effectively mitigating their impact. We leverage backdoor
attack mechanisms to efficiently learn and control various noise sources within
a single model. Theoretical analysis and extensive experiments on different
synthetic noisy datasets demonstrate that CNRPO significantly improves
alignment with primary human preferences while controlling for secondary noises
and biases, such as response length and harmfulness.
|
2503.12307 | Jiahao Wu | Jiahao Wu, Rui Peng, Zhiyan Wang, Lu Xiao, Luyang Tang, Jinbo Yan,
Kaiqiang Xiong, Ronggang Wang | Swift4D:Adaptive divide-and-conquer Gaussian Splatting for compact and
efficient reconstruction of dynamic scene | ICLR 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Novel view synthesis has long been a practical but challenging task. Although
numerous methods have been introduced to solve this problem, even those
combining advanced representations like 3D Gaussian Splatting still struggle to
recover high-quality results and often consume too much storage memory and
training time. In this paper, we propose Swift4D, a divide-and-conquer 3D
Gaussian Splatting method that can handle static and dynamic primitives
separately, achieving a good trade-off between rendering quality and
efficiency, motivated by the fact that most of the scene is the static
primitive and does not require additional dynamic properties. Concretely, we
focus on modeling dynamic transformations only for the dynamic primitives which
benefits both efficiency and quality. We first employ a learnable decomposition
strategy to separate the primitives, which relies on an additional parameter to
classify primitives as static or dynamic. For the dynamic primitives, we employ
a compact multi-resolution 4D Hash mapper to transform these primitives from
canonical space into deformation space at each timestamp, and then mix the
static and dynamic primitives to produce the final output. This
divide-and-conquer method facilitates efficient training and reduces storage
redundancy. Our method not only achieves state-of-the-art rendering quality
but is also 20X faster in training than previous SOTA methods, with a minimum
storage requirement of only 30MB on real-world datasets. Code is available at
https://github.com/WuJH2001/swift4d.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 01:13:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wu",
"Jiahao",
""
],
[
"Peng",
"Rui",
""
],
[
"Wang",
"Zhiyan",
""
],
[
"Xiao",
"Lu",
""
],
[
"Tang",
"Luyang",
""
],
[
"Yan",
"Jinbo",
""
],
[
"Xiong",
"Kaiqiang",
""
],
[
"Wang",
"Ronggang",
""
]
] | TITLE: Swift4D:Adaptive divide-and-conquer Gaussian Splatting for compact and
efficient reconstruction of dynamic scene
ABSTRACT: Novel view synthesis has long been a practical but challenging task. Although
numerous methods have been introduced to solve this problem, even those
combining advanced representations like 3D Gaussian Splatting still struggle to
recover high-quality results and often consume too much storage memory and
training time. In this paper, we propose Swift4D, a divide-and-conquer 3D
Gaussian Splatting method that can handle static and dynamic primitives
separately, achieving a good trade-off between rendering quality and
efficiency, motivated by the fact that most of the scene is the static
primitive and does not require additional dynamic properties. Concretely, we
focus on modeling dynamic transformations only for the dynamic primitives which
benefits both efficiency and quality. We first employ a learnable decomposition
strategy to separate the primitives, which relies on an additional parameter to
classify primitives as static or dynamic. For the dynamic primitives, we employ
a compact multi-resolution 4D Hash mapper to transform these primitives from
canonical space into deformation space at each timestamp, and then mix the
static and dynamic primitives to produce the final output. This
divide-and-conquer method facilitates efficient training and reduces storage
redundancy. Our method not only achieves state-of-the-art rendering quality
but is also 20X faster in training than previous SOTA methods, with a minimum
storage requirement of only 30MB on real-world datasets. Code is available at
https://github.com/WuJH2001/swift4d.
|
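A minimal sketch of the learnable static/dynamic decomposition idea described in the Swift4D abstract above. The sine offset is a toy stand-in for the paper's multi-resolution 4D hash mapper, and all names and sizes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedPrimitives(nn.Module):
    """Toy static/dynamic decomposition: each primitive carries a learnable
    logit; primitives gated towards "dynamic" receive a time-dependent offset,
    while "static" ones stay near their canonical position."""
    def __init__(self, n_primitives: int):
        super().__init__()
        self.canonical_xyz = nn.Parameter(torch.randn(n_primitives, 3))
        self.dyn_logit = nn.Parameter(torch.zeros(n_primitives))  # static/dynamic score

    def forward(self, t: float):
        gate = torch.sigmoid(self.dyn_logit).unsqueeze(-1)   # (N, 1), ~1 means dynamic
        # Stand-in deformation; the paper uses a 4D hash mapper here (assumption).
        offset = 0.1 * torch.sin(t + self.canonical_xyz)
        return self.canonical_xyz + gate * offset             # mixed static + dynamic output

prims = GatedPrimitives(n_primitives=1000)
print(prims(t=0.5).shape)   # torch.Size([1000, 3])
```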
2503.12312 | Henri A\"idasso | Henri A\"idasso | FlakeRanker: Automated Identification and Prioritization of Flaky Job
Failure Categories | Artifact awarded the Reusable badge at the 47th International
Conference on Software Engineering - ICSE 2025 Artifact Evaluation Track | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This document presents the artifact associated with the ICSE SEIP 25 paper
titled On the Diagnosis of Flaky Job Failures: Understanding and Prioritizing
Failure Categories. The original paper identifies and analyzes 46 distinct
categories of flaky job failures that developers encounter, using Recency (R),
Frequency (F), and Monetary (M) measures. In addition, it uses an RFM
clustering model to identify and prioritize the most wasteful and persistent categories.
The original paper only discusses the rankings and evolution of the top 20
categories in the results. This artifact contains (1) the regex and scripts
used to automate the labeling process for RQ1, (2) complete analysis results,
including the ranking of all 46 categories by cost in RQ2 and the evolution of
these categories over time in RQ3, and (3) the RFM dataset and scripts used to
create the RFM clustering model for prioritization in RQ4. In addition, we
engineered the labeling tool and the RFM-based prioritization methodology in a
command-line interface (CLI) called FLAKERANKER to facilitate reuse and
repurposing in future studies.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 01:37:31 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Aïdasso",
"Henri",
""
]
] | TITLE: FlakeRanker: Automated Identification and Prioritization of Flaky Job
Failure Categories
ABSTRACT: This document presents the artifact associated with the ICSE SEIP 25 paper
titled On the Diagnosis of Flaky Job Failures: Understanding and Prioritizing
Failure Categories. The original paper identifies and analyzes 46 distinct
categories of flaky job failures that developers encounter, using Recency (R),
Frequency (F), and Monetary (M) measures. In addition, it uses an RFM
clustering model to identify and prioritize the most wasteful and persistent categories.
The original paper only discusses the rankings and evolution of the top 20
categories in the results. This artifact contains (1) the regex and scripts
used to automate the labeling process for RQ1, (2) complete analysis results,
including the ranking of all 46 categories by cost in RQ2 and the evolution of
these categories over time in RQ3, and (3) the RFM dataset and scripts used to
create the RFM clustering model for prioritization in RQ4. In addition, we
engineered the labeling tool and the RFM-based prioritization methodology in a
command-line interface (CLI) called FLAKERANKER to facilitate reuse and
repurposing in future studies.
|
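As a hedged sketch of the RFM-and-clustering idea in the FlakeRanker record above: compute Recency, Frequency, and Monetary features per failure category and cluster them to surface the most wasteful groups. The category names, numbers, and clustering setup below are made up for illustration and are not taken from the artifact.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-category RFM table: days since last occurrence (Recency),
# number of flaky job failures (Frequency), and estimated wasted cost (Monetary).
rfm = pd.DataFrame({
    "category": ["timeout", "network", "oom", "infra", "flaky-test"],
    "recency_days": [2, 30, 5, 90, 1],
    "frequency": [340, 45, 120, 12, 510],
    "monetary_cost": [5200.0, 800.0, 2300.0, 150.0, 7900.0],
})

# Standardize the three RFM features, then cluster categories into priority groups.
X = StandardScaler().fit_transform(rfm[["recency_days", "frequency", "monetary_cost"]])
rfm["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(rfm.sort_values("monetary_cost", ascending=False))
```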
2503.12332 | Yunze Liu | Yunze Liu, Peiran Wu, Cheng Liang, Junxiao Shen, Limin Wang, Li Yi | VideoMAP: Toward Scalable Mamba-based Video Autoregressive Pretraining | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent Mamba-based architectures for video understanding demonstrate
promising computational efficiency and competitive performance, yet struggle
with overfitting issues that hinder their scalability. To overcome this
challenge, we introduce VideoMAP, a Hybrid Mamba-Transformer framework
featuring a novel pre-training approach. VideoMAP uses a 4:1
Mamba-to-Transformer ratio, effectively balancing computational cost and model
capacity. This architecture, combined with our proposed frame-wise masked
autoregressive pre-training strategy, delivers significant performance gains
when scaling to larger models. Additionally, VideoMAP exhibits impressive
sample efficiency, significantly outperforming existing methods with less
training data. Experiments show that VideoMAP outperforms existing models
across various datasets, including Kinetics-400, Something-Something V2,
Breakfast, and COIN. Furthermore, we demonstrate the potential of VideoMAP as a
visual encoder for multimodal large language models, highlighting its ability
to reduce memory usage and enable the processing of longer video sequences. The
code is open-source at https://github.com/yunzeliu/MAP
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 03:01:07 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Yunze",
""
],
[
"Wu",
"Peiran",
""
],
[
"Liang",
"Cheng",
""
],
[
"Shen",
"Junxiao",
""
],
[
"Wang",
"Limin",
""
],
[
"Yi",
"Li",
""
]
] | TITLE: VideoMAP: Toward Scalable Mamba-based Video Autoregressive Pretraining
ABSTRACT: Recent Mamba-based architectures for video understanding demonstrate
promising computational efficiency and competitive performance, yet struggle
with overfitting issues that hinder their scalability. To overcome this
challenge, we introduce VideoMAP, a Hybrid Mamba-Transformer framework
featuring a novel pre-training approach. VideoMAP uses a 4:1
Mamba-to-Transformer ratio, effectively balancing computational cost and model
capacity. This architecture, combined with our proposed frame-wise masked
autoregressive pre-training strategy, delivers significant performance gains
when scaling to larger models. Additionally, VideoMAP exhibits impressive
sample efficiency, significantly outperforming existing methods with less
training data. Experiments show that VideoMAP outperforms existing models
across various datasets, including Kinetics-400, Something-Something V2,
Breakfast, and COIN. Furthermore, we demonstrate the potential of VideoMAP as a
visual encoder for multimodal large language models, highlighting its ability
to reduce memory usage and enable the processing of longer video sequences. The
code is open-source at https://github.com/yunzeliu/MAP
|
2503.12340 | Xin Wang | Xin Wang, Samiul Alam, Zhongwei Wan, Hui Shen, Mi Zhang | SVD-LLM V2: Optimizing Singular Value Truncation for Large Language
Model Compression | NAACL 2025; Code available at
https://github.com/AIoT-MLSys-Lab/SVD-LLM | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite significant advancements, the practical deployment of Large Language
Models (LLMs) is often hampered by their immense sizes, highlighting the need
for effective compression techniques. Singular Value Decomposition (SVD) is a
promising LLM compression technique. However, existing SVD-based compression
methods fall short in reducing truncation losses, leading to less competitive
performance in compressed models. In this work, we introduce SVD-LLM V2, an
SVD-based LLM compression method that optimizes singular value truncation in
SVD compression with two techniques. First, SVD-LLM V2 proposes to use
theoretical truncation loss of weight matrices to assign a unique compression
ratio to each weight matrix at different layers to accommodate weight
redundancy heterogeneity. Second, SVD-LLM V2 proposes loss-optimized weight
truncation to ensure that the truncated singular values result in a lower and
more stable truncation loss in practice. We evaluate SVD-LLM V2 on ten datasets
and five LLMs at various scales. Our results show SVD-LLM V2 outperforms
state-of-the-art SVD-based LLM compression methods. Our code is available at
https://github.com/AIoT-MLSys-Lab/SVD-LLM
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 03:27:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Xin",
""
],
[
"Alam",
"Samiul",
""
],
[
"Wan",
"Zhongwei",
""
],
[
"Shen",
"Hui",
""
],
[
"Zhang",
"Mi",
""
]
] | TITLE: SVD-LLM V2: Optimizing Singular Value Truncation for Large Language
Model Compression
ABSTRACT: Despite significant advancements, the practical deployment of Large Language
Models (LLMs) is often hampered by their immense sizes, highlighting the need
for effective compression techniques. Singular Value Decomposition (SVD) is a
promising LLM compression technique. However, existing SVD-based compression
methods fall short in reducing truncation losses, leading to less competitive
performance in compressed models. In this work, we introduce SVD-LLM V2, an
SVD-based LLM compression method that optimizes singular value truncation in
SVD compression with two techniques. First, SVD-LLM V2 proposes to use
theoretical truncation loss of weight matrices to assign a unique compression
ratio to each weight matrix at different layers to accommodate weight
redundancy heterogeneity. Second, SVD-LLM V2 proposes loss-optimized weight
truncation to ensure that the truncated singular values result in a lower and
more stable truncation loss in practice. We evaluate SVD-LLM V2 on ten datasets
and five LLMs at various scales. Our results show SVD-LLM V2 outperforms
state-of-the-art SVD-based LLM compression methods. Our code is available at
https://github.com/AIoT-MLSys-Lab/SVD-LLM
|
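The SVD-LLM V2 record above compresses weight matrices by truncating singular values. The sketch below shows plain truncated-SVD compression of one matrix and its Frobenius truncation loss; it is only an illustration of the underlying operation and does not implement the paper's loss-optimized truncation or per-layer ratio assignment.

```python
import numpy as np

def svd_truncate(weight: np.ndarray, rank: int):
    """Low-rank approximation of a weight matrix via truncated SVD."""
    U, S, Vt = np.linalg.svd(weight, full_matrices=False)
    # Keep only the top-`rank` singular values/vectors.
    U_r, S_r, Vt_r = U[:, :rank], S[:rank], Vt[:rank, :]
    # The (squared Frobenius) truncation loss is the energy in the discarded values.
    trunc_loss = float(np.sum(S[rank:] ** 2))
    # Store two thin factors instead of the full matrix to save parameters.
    A = U_r * S_r          # shape (out, rank)
    B = Vt_r               # shape (rank, in)
    return A, B, trunc_loss

W = np.random.randn(512, 2048)
A, B, loss = svd_truncate(W, rank=128)
print(A.shape, B.shape, loss)   # compressed factors and truncation loss
```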
2503.12343 | Xiaoyu Xiong | Xiaoyu Xiong, Changyu Hu, Chunru Lin, Pingchuan Ma, Chuang Gan, Tao Du | TopoGaussian: Inferring Internal Topology Structures from Visual Clues | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present TopoGaussian, a holistic, particle-based pipeline for inferring
the interior structure of an opaque object from easily accessible photos and
videos as input. Traditional mesh-based approaches require a tedious and
error-prone mesh filling and fixing process, while typically outputting a rough
boundary surface. Our pipeline combines Gaussian Splatting with a novel,
versatile particle-based differentiable simulator that simultaneously
accommodates constitutive model, actuator, and collision, without interference
with mesh. Based on the gradients from this simulator, we provide flexible
choice of topology representation for optimization, including particle, neural
implicit surface, and quadratic surface. The resultant pipeline takes easily
accessible photos and videos as input and outputs the topology that matches the
physical characteristics of the input. We demonstrate the efficacy of our
pipeline on a synthetic dataset and four real-world tasks with 3D-printed
prototypes. Compared with existing mesh-based methods, our pipeline is 5.26x
faster on average with improved shape quality. These results highlight the
potential of our pipeline in 3D vision, soft robotics, and manufacturing
applications.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 03:47:42 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Xiong",
"Xiaoyu",
""
],
[
"Hu",
"Changyu",
""
],
[
"Lin",
"Chunru",
""
],
[
"Ma",
"Pingchuan",
""
],
[
"Gan",
"Chuang",
""
],
[
"Du",
"Tao",
""
]
] | TITLE: TopoGaussian: Inferring Internal Topology Structures from Visual Clues
ABSTRACT: We present TopoGaussian, a holistic, particle-based pipeline for inferring
the interior structure of an opaque object from easily accessible photos and
videos as input. Traditional mesh-based approaches require a tedious and
error-prone mesh filling and fixing process, while typically outputting a rough
boundary surface. Our pipeline combines Gaussian Splatting with a novel,
versatile particle-based differentiable simulator that simultaneously
accommodates constitutive model, actuator, and collision, without interference
with mesh. Based on the gradients from this simulator, we provide flexible
choice of topology representation for optimization, including particle, neural
implicit surface, and quadratic surface. The resultant pipeline takes easily
accessible photos and videos as input and outputs the topology that matches the
physical characteristics of the input. We demonstrate the efficacy of our
pipeline on a synthetic dataset and four real-world tasks with 3D-printed
prototypes. Compared with existing mesh-based methods, our pipeline is 5.26x
faster on average with improved shape quality. These results highlight the
potential of our pipeline in 3D vision, soft robotics, and manufacturing
applications.
|
2503.12345 | Zhongyuan Wang | Zhongyuan Wang, Richong Zhang, Zhijie Nie | General Table Question Answering via Answer-Formula Joint Generation | work in progress | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advanced table question answering (TableQA) methods prompt large language
models (LLMs) to generate answer text, SQL query, Python code, or custom
operations, which impressively improve the complex reasoning problems in the
TableQA task. However, these methods lack the versatility to cope with specific
question types or table structures. In contrast, the Spreadsheet Formula, the
widely-used and well-defined operation language for tabular data, has not been
thoroughly explored to solve TableQA. In this paper, we first attempt to use
Formula as the logical form for solving complex reasoning on the tables with
different structures. Specifically, we construct a large Formula-annotated
TableQA dataset \texttt{FromulaQA} from existing datasets. In addition, we
propose \texttt{TabAF}, a general table answering framework to solve multiple
types of tasks over multiple types of tables simultaneously. Unlike existing
methods, \texttt{TabAF} decodes answers and Formulas with a single LLM
backbone, demonstrating great versatility and generalization. \texttt{TabAF}
based on Llama3.1-70B achieves new state-of-the-art performance on the
WikiTableQuestion, HiTab and TabFact.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 03:51:06 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Zhongyuan",
""
],
[
"Zhang",
"Richong",
""
],
[
"Nie",
"Zhijie",
""
]
] | TITLE: General Table Question Answering via Answer-Formula Joint Generation
ABSTRACT: Advanced table question answering (TableQA) methods prompt large language
models (LLMs) to generate answer text, SQL query, Python code, or custom
operations, which impressively improve the complex reasoning problems in the
TableQA task. However, these methods lack the versatility to cope with specific
question types or table structures. In contrast, the Spreadsheet Formula, the
widely-used and well-defined operation language for tabular data, has not been
thoroughly explored to solve TableQA. In this paper, we first attempt to use
Formula as the logical form for solving complex reasoning on the tables with
different structures. Specifically, we construct a large Formula-annotated
TableQA dataset \texttt{FromulaQA} from existing datasets. In addition, we
propose \texttt{TabAF}, a general table answering framework to solve multiple
types of tasks over multiple types of tables simultaneously. Unlike existing
methods, \texttt{TabAF} decodes answers and Formulas with a single LLM
backbone, demonstrating great versatility and generalization. \texttt{TabAF}
based on Llama3.1-70B achieves new state-of-the-art performance on the
WikiTableQuestion, HiTab and TabFact.
|
2503.12348 | Mo Zhou | Mo Zhou, Jianwei Wang, Xuanmeng Zhang, Dylan Campbell, Kai Wang, Long
Yuan, Wenjie Zhang, Xuemin Lin | ProbDiffFlow: An Efficient Learning-Free Framework for Probabilistic
Single-Image Optical Flow Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies optical flow estimation, a critical task in motion
analysis with applications in autonomous navigation, action recognition, and
film production. Traditional optical flow methods require consecutive frames,
which are often unavailable due to limitations in data acquisition or
real-world scene disruptions. Thus, single-frame optical flow estimation is
emerging in the literature. However, existing single-frame approaches suffer
from two major limitations: (1) they rely on labeled training data, making them
task-specific, and (2) they produce deterministic predictions, failing to
capture motion uncertainty. To overcome these challenges, we propose
ProbDiffFlow, a training-free framework that estimates optical flow
distributions from a single image. Instead of directly predicting motion,
ProbDiffFlow follows an estimation-by-synthesis paradigm: it first generates
diverse plausible future frames using a diffusion-based model, then estimates
motion from these synthesized samples using a pre-trained optical flow model,
and finally aggregates the results into a probabilistic flow distribution. This
design eliminates the need for task-specific training while capturing multiple
plausible motions. Experiments on both synthetic and real-world datasets
demonstrate that ProbDiffFlow achieves superior accuracy, diversity, and
efficiency, outperforming existing single-image and two-frame baselines.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 04:07:51 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhou",
"Mo",
""
],
[
"Wang",
"Jianwei",
""
],
[
"Zhang",
"Xuanmeng",
""
],
[
"Campbell",
"Dylan",
""
],
[
"Wang",
"Kai",
""
],
[
"Yuan",
"Long",
""
],
[
"Zhang",
"Wenjie",
""
],
[
"Lin",
"Xuemin",
""
]
] | TITLE: ProbDiffFlow: An Efficient Learning-Free Framework for Probabilistic
Single-Image Optical Flow Estimation
ABSTRACT: This paper studies optical flow estimation, a critical task in motion
analysis with applications in autonomous navigation, action recognition, and
film production. Traditional optical flow methods require consecutive frames,
which are often unavailable due to limitations in data acquisition or
real-world scene disruptions. Thus, single-frame optical flow estimation is
emerging in the literature. However, existing single-frame approaches suffer
from two major limitations: (1) they rely on labeled training data, making them
task-specific, and (2) they produce deterministic predictions, failing to
capture motion uncertainty. To overcome these challenges, we propose
ProbDiffFlow, a training-free framework that estimates optical flow
distributions from a single image. Instead of directly predicting motion,
ProbDiffFlow follows an estimation-by-synthesis paradigm: it first generates
diverse plausible future frames using a diffusion-based model, then estimates
motion from these synthesized samples using a pre-trained optical flow model,
and finally aggregates the results into a probabilistic flow distribution. This
design eliminates the need for task-specific training while capturing multiple
plausible motions. Experiments on both synthetic and real-world datasets
demonstrate that ProbDiffFlow achieves superior accuracy, diversity, and
efficiency, outperforming existing single-image and two-frame baselines.
|
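The ProbDiffFlow record above follows an estimation-by-synthesis recipe: sample plausible future frames, estimate flow for each, then aggregate. A minimal sketch of the aggregation step, where `generate_future_frames` and `estimate_flow` are hypothetical stand-ins for the diffusion sampler and the pretrained flow model:

```python
import numpy as np

def probabilistic_flow(image, generate_future_frames, estimate_flow, n_samples=8):
    """Estimation-by-synthesis sketch: sample futures, estimate flow per sample,
    then summarize the samples as a per-pixel mean and variance."""
    flows = []
    for frame in generate_future_frames(image, n_samples):   # hypothetical diffusion sampler
        flows.append(estimate_flow(image, frame))             # hypothetical pretrained flow model
    flows = np.stack(flows)                                    # (n_samples, H, W, 2)
    return flows.mean(axis=0), flows.var(axis=0)

# Toy stand-ins so the sketch runs end to end.
H, W = 64, 64
img = np.zeros((H, W, 3))
fake_sampler = lambda im, n: [np.zeros_like(im) for _ in range(n)]
fake_flow = lambda a, b: np.random.randn(H, W, 2) * 0.1
mean_flow, var_flow = probabilistic_flow(img, fake_sampler, fake_flow)
print(mean_flow.shape, var_flow.shape)
```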
2503.12350 | Wenqing Kuang | Wenqing Kuang (1), Xiongwei Zhao (2), Yehui Shen (1), Congcong Wen
(3), Huimin Lu (1), Zongtan Zhou (1), Xieyuanli Chen (1) ((1) National
University of Defense Technology, (2) Harbin Institute of Technology, (3) New
York University Abu Dhabi) | ResLPR: A LiDAR Data Restoration Network and Benchmark for Robust Place
Recognition Against Weather Corruptions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LiDAR-based place recognition (LPR) is a key component for autonomous
driving, and its resilience to environmental corruption is critical for safety
in high-stakes applications. While state-of-the-art (SOTA) LPR methods perform
well in clean weather, they still struggle with weather-induced corruption
commonly encountered in driving scenarios. To tackle this, we propose
ResLPRNet, a novel LiDAR data restoration network that largely enhances LPR
performance under adverse weather by restoring corrupted LiDAR scans using a
wavelet transform-based network. ResLPRNet is efficient, lightweight and can be
integrated plug-and-play with pretrained LPR models without substantial
additional computational cost. Given the lack of LPR datasets under adverse
weather, we introduce ResLPR, a novel benchmark that examines SOTA LPR methods
under a wide range of LiDAR distortions induced by severe snow, fog, and rain
conditions. Experiments on our proposed WeatherKITTI and WeatherNCLT datasets
demonstrate the resilience and notable gains achieved by using our restoration
method with multiple LPR approaches in challenging weather scenarios. Our code
and benchmark are publicly available here:
https://github.com/nubot-nudt/ResLPR.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 04:14:20 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kuang",
"Wenqing",
""
],
[
"Zhao",
"Xiongwei",
""
],
[
"Shen",
"Yehui",
""
],
[
"Wen",
"Congcong",
""
],
[
"Lu",
"Huimin",
""
],
[
"Zhou",
"Zongtan",
""
],
[
"Chen",
"Xieyuanli",
""
]
] | TITLE: ResLPR: A LiDAR Data Restoration Network and Benchmark for Robust Place
Recognition Against Weather Corruptions
ABSTRACT: LiDAR-based place recognition (LPR) is a key component for autonomous
driving, and its resilience to environmental corruption is critical for safety
in high-stakes applications. While state-of-the-art (SOTA) LPR methods perform
well in clean weather, they still struggle with weather-induced corruption
commonly encountered in driving scenarios. To tackle this, we propose
ResLPRNet, a novel LiDAR data restoration network that largely enhances LPR
performance under adverse weather by restoring corrupted LiDAR scans using a
wavelet transform-based network. ResLPRNet is efficient, lightweight and can be
integrated plug-and-play with pretrained LPR models without substantial
additional computational cost. Given the lack of LPR datasets under adverse
weather, we introduce ResLPR, a novel benchmark that examines SOTA LPR methods
under a wide range of LiDAR distortions induced by severe snow, fog, and rain
conditions. Experiments on our proposed WeatherKITTI and WeatherNCLT datasets
demonstrate the resilience and notable gains achieved by using our restoration
method with multiple LPR approaches in challenging weather scenarios. Our code
and benchmark are publicly available here:
https://github.com/nubot-nudt/ResLPR.
|
2503.12357 | Krishna Chaitanya Polavaram | Krishna Chaitanya Polavaram | Numerical Words and Linguistic Loops: The Perpetual Four-Letter Routine | 9 pages, 3 figures, 2 tables | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study presents a fascinating linguistic property related to the number
of letters in words and their corresponding numerical values. By selecting any
arbitrary word, counting its constituent letters, and subsequently spelling out
the resulting count and tallying the letters anew, an unanticipated pattern is
observed. Remarkably, this iterative sequence, conducted on a dataset of
100,000 random words, invariably converges to the numeral four (4), termed the
Linguistic Loop (LL) constant. Examining 73 languages utilizing the Latin
alphabet, this research reveals distinctive patterns. Among them, 28 languages
exhibit LL-positive behavior adhering to the established property, while 31
languages deviate as LL-negative. Additionally, 13 languages display nuanced
tendencies: eight feature two LL constants (bi-positivity), and five feature
three constants (tri-positivity). This discovery highlights a linguistic quirk
within Latin alphabet-based language number-word representations, uncovering an
intriguing facet across diverse alphabetic systems. It also raises questions
about the underlying linguistic and cognitive mechanisms responsible for this
phenomenon.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 04:53:23 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Polavaram",
"Krishna Chaitanya",
""
]
] | TITLE: Numerical Words and Linguistic Loops: The Perpetual Four-Letter Routine
ABSTRACT: This study presents a fascinating linguistic property related to the number
of letters in words and their corresponding numerical values. By selecting any
arbitrary word, counting its constituent letters, and subsequently spelling out
the resulting count and tallying the letters anew, an unanticipated pattern is
observed. Remarkably, this iterative sequence, conducted on a dataset of
100,000 random words, invariably converges to the numeral four (4), termed the
Linguistic Loop (LL) constant. Examining 73 languages utilizing the Latin
alphabet, this research reveals distinctive patterns. Among them, 28 languages
exhibit LL-positive behavior adhering to the established property, while 31
languages deviate as LL-negative. Additionally, 13 languages display nuanced
tendencies: eight feature two LL constants (bi-positivity), and five feature
three constants (tri-positivity). This discovery highlights a linguistic quirk
within Latin alphabet-based language number-word representations, uncovering an
intriguing facet across diverse alphabetic systems. It also raises questions
about the underlying linguistic and cognitive mechanisms responsible for this
phenomenon.
|
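The iteration described in the record above is straightforward to reproduce. A small sketch, assuming English number names only up to twenty (enough for ordinary word lengths):

```python
# English number names for letter counts up to twenty.
NUM_WORDS = {
    1: "one", 2: "two", 3: "three", 4: "four", 5: "five", 6: "six", 7: "seven",
    8: "eight", 9: "nine", 10: "ten", 11: "eleven", 12: "twelve", 13: "thirteen",
    14: "fourteen", 15: "fifteen", 16: "sixteen", 17: "seventeen", 18: "eighteen",
    19: "nineteen", 20: "twenty",
}

def letter_count(word: str) -> int:
    return sum(ch.isalpha() for ch in word)

def linguistic_loop(word: str, max_steps: int = 20):
    """Repeatedly spell out the letter count; in English this settles on 'four'."""
    trail = [word]
    for _ in range(max_steps):
        nxt = NUM_WORDS[letter_count(trail[-1])]
        trail.append(nxt)
        if nxt == trail[-2]:   # fixed point reached
            break
    return trail

print(linguistic_loop("representation"))
# ['representation', 'fourteen', 'eight', 'five', 'four', 'four']
```

Since "four" has four letters, it is the fixed point that English counts converge to, matching the LL constant reported above.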
2503.12365 | Boying Wang | Xiangfei Fang, Boying Wang, Chengying Huan, Shaonan Ma, Heng Zhang,
Chen Zhao | HyperKAN: Hypergraph Representation Learning with Kolmogorov-Arnold
Networks | Accepted by ICASSP2025 | null | null | null | cs.LG cs.CV cs.SI | http://creativecommons.org/licenses/by/4.0/ | Hypergraph representation learning has garnered increasing attention across
various domains due to its capability to model high-order relationships.
Traditional methods often rely on hypergraph neural networks (HNNs) employing
message passing mechanisms to aggregate vertex and hyperedge features. However,
these methods are constrained by their dependence on hypergraph topology,
leading to the challenge of imbalanced information aggregation, where
high-degree vertices tend to aggregate redundant features, while low-degree
vertices often struggle to capture sufficient structural features. To overcome
the above challenges, we introduce HyperKAN, a novel framework for hypergraph
representation learning that transcends the limitations of message-passing
techniques. HyperKAN begins by encoding features for each vertex and then
leverages Kolmogorov-Arnold Networks (KANs) to capture complex nonlinear
relationships. By adjusting structural features based on similarity, our
approach generates refined vertex representations that effectively address
the challenge of imbalanced information aggregation. Experiments conducted on
the real-world datasets demonstrate that HyperKAN significantly outperforms
state-of-the-art HNN methods, achieving nearly a 9% performance improvement on
the Senate dataset.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 05:39:52 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Fang",
"Xiangfei",
""
],
[
"Wang",
"Boying",
""
],
[
"Huan",
"Chengying",
""
],
[
"Ma",
"Shaonan",
""
],
[
"Zhang",
"Heng",
""
],
[
"Zhao",
"Chen",
""
]
] | TITLE: HyperKAN: Hypergraph Representation Learning with Kolmogorov-Arnold
Networks
ABSTRACT: Hypergraph representation learning has garnered increasing attention across
various domains due to its capability to model high-order relationships.
Traditional methods often rely on hypergraph neural networks (HNNs) employing
message passing mechanisms to aggregate vertex and hyperedge features. However,
these methods are constrained by their dependence on hypergraph topology,
leading to the challenge of imbalanced information aggregation, where
high-degree vertices tend to aggregate redundant features, while low-degree
vertices often struggle to capture sufficient structural features. To overcome
the above challenges, we introduce HyperKAN, a novel framework for hypergraph
representation learning that transcends the limitations of message-passing
techniques. HyperKAN begins by encoding features for each vertex and then
leverages Kolmogorov-Arnold Networks (KANs) to capture complex nonlinear
relationships. By adjusting structural features based on similarity, our
approach generates refined vertex representations that effectively address
the challenge of imbalanced information aggregation. Experiments conducted on
the real-world datasets demonstrate that HyperKAN significantly outperforms
state-of-the-art HNN methods, achieving nearly a 9% performance improvement on
the Senate dataset.
|
2503.12366 | Suchanuch Piriyasatit | Suchanuch Piriyasatit, Chaohao Yuan, Ercan Engin Kuruoglu | ASD Classification on Dynamic Brain Connectome using Temporal Random
Walk with Transformer-based Dynamic Network Embedding | null | null | null | null | cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autism Spectrum Disorder (ASD) is a complex neurological condition
characterized by varied developmental impairments, especially in communication
and social interaction. Accurate and early diagnosis of ASD is crucial for
effective intervention, which is enhanced by richer representations of brain
activity. The brain functional connectome, which refers to the statistical
relationships between different brain regions measured through neuroimaging,
provides crucial insights into brain function. Traditional static methods often
fail to capture the dynamic nature of brain activity; in contrast, dynamic
brain connectome analysis provides a more comprehensive view by capturing the
temporal variations in the brain. We propose BrainTWT, a novel dynamic network
embedding approach that captures the temporal evolution of brain connectivity
over time and also considers the dynamics between different temporal network
snapshots. BrainTWT employs temporal random walks to capture dynamics across
different temporal network snapshots and leverages the Transformer's ability to
model long term dependencies in sequential data to learn the discriminative
embeddings from these temporal sequences using temporal structure prediction
tasks. The experimental evaluation, utilizing the Autism Brain Imaging Data
Exchange (ABIDE) dataset, demonstrates that BrainTWT outperforms baseline
methods in ASD classification.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 05:44:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Piriyasatit",
"Suchanuch",
""
],
[
"Yuan",
"Chaohao",
""
],
[
"Kuruoglu",
"Ercan Engin",
""
]
] | TITLE: ASD Classification on Dynamic Brain Connectome using Temporal Random
Walk with Transformer-based Dynamic Network Embedding
ABSTRACT: Autism Spectrum Disorder (ASD) is a complex neurological condition
characterized by varied developmental impairments, especially in communication
and social interaction. Accurate and early diagnosis of ASD is crucial for
effective intervention, which is enhanced by richer representations of brain
activity. The brain functional connectome, which refers to the statistical
relationships between different brain regions measured through neuroimaging,
provides crucial insights into brain function. Traditional static methods often
fail to capture the dynamic nature of brain activity; in contrast, dynamic
brain connectome analysis provides a more comprehensive view by capturing the
temporal variations in the brain. We propose BrainTWT, a novel dynamic network
embedding approach that captures the temporal evolution of brain connectivity
over time and also considers the dynamics between different temporal network
snapshots. BrainTWT employs temporal random walks to capture dynamics across
different temporal network snapshots and leverages the Transformer's ability to
model long term dependencies in sequential data to learn the discriminative
embeddings from these temporal sequences using temporal structure prediction
tasks. The experimental evaluation, utilizing the Autism Brain Imaging Data
Exchange (ABIDE) dataset, demonstrates that BrainTWT outperforms baseline
methods in ASD classification.
|
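The BrainTWT record above builds training sequences from temporal random walks over connectome snapshots. One simple, hedged reading of such a walk on toy adjacency lists follows; the paper's exact sampling scheme may differ.

```python
import random

def temporal_random_walk(snapshots, start, seed=0):
    """Walk across a sequence of graph snapshots: at each step move to a random
    neighbour in the current snapshot, then advance to the next snapshot."""
    rng = random.Random(seed)
    walk, node = [start], start
    for adj in snapshots:
        neighbours = adj.get(node, [])
        if neighbours:
            node = rng.choice(neighbours)
        # Otherwise the walker stays in place for this snapshot (dead end).
        walk.append(node)
    return walk   # a token sequence a Transformer could consume

# Three toy snapshots of a four-region brain graph (adjacency lists per snapshot).
snaps = [
    {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]},
    {0: [3], 1: [2], 2: [1], 3: [0]},
    {0: [1], 1: [0, 3], 2: [], 3: [1]},
]
print(temporal_random_walk(snaps, start=0))
```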
2503.12370 | Rupak Sarkar | Rupak Sarkar, Neha Srikanth, Taylor Hudson, Rachel Rudinger, Claire
Bonial, Philip Resnik | Understanding Common Ground Misalignment in Goal-Oriented Dialog: A
Case-Study with Ubuntu Chat Logs | 8 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | While it is commonly accepted that maintaining common ground plays a role in
conversational success, little prior research exists connecting conversational
grounding to success in task-oriented conversations. We study failures of
grounding in the Ubuntu IRC dataset, where participants use text-only
communication to resolve technical issues. We find that disruptions in
conversational flow often stem from a misalignment in common ground, driven by
a divergence in beliefs and assumptions held by participants. These
disruptions, which we call conversational friction, significantly correlate
with task success. We find that although LLMs can identify overt cases of
conversational friction, they struggle with subtler and more context-dependent
instances requiring pragmatic or domain-specific reasoning.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 06:19:44 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sarkar",
"Rupak",
""
],
[
"Srikanth",
"Neha",
""
],
[
"Hudson",
"Taylor",
""
],
[
"Rudinger",
"Rachel",
""
],
[
"Bonial",
"Claire",
""
],
[
"Resnik",
"Philip",
""
]
] | TITLE: Understanding Common Ground Misalignment in Goal-Oriented Dialog: A
Case-Study with Ubuntu Chat Logs
ABSTRACT: While it is commonly accepted that maintaining common ground plays a role in
conversational success, little prior research exists connecting conversational
grounding to success in task-oriented conversations. We study failures of
grounding in the Ubuntu IRC dataset, where participants use text-only
communication to resolve technical issues. We find that disruptions in
conversational flow often stem from a misalignment in common ground, driven by
a divergence in beliefs and assumptions held by participants. These
disruptions, which we call conversational friction, significantly correlate
with task success. We find that although LLMs can identify overt cases of
conversational friction, they struggle with subtler and more context-dependent
instances requiring pragmatic or domain-specific reasoning.
|
2503.12377 | Jonas Ferrao | Jonas Chris Ferrao, Dickson Dias, Sweta Morajkar, and Manisha Gokuldas
Fal Dessai | GCBLANE: A graph-enhanced convolutional BiLSTM attention network for
improved transcription factor binding site prediction | null | null | null | null | cs.LG q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying transcription factor binding sites (TFBS) is crucial for
understanding gene regulation, as these sites enable transcription factors
(TFs) to bind to DNA and modulate gene expression. Despite advances in
high-throughput sequencing, accurately identifying TFBS remains challenging due
to the vast genomic data and complex binding patterns. GCBLANE, a
graph-enhanced convolutional bidirectional Long Short-Term Memory (LSTM)
attention network, is introduced to address this issue. It integrates
convolutional, multi-head attention, and recurrent layers with a graph neural
network to detect key features for TFBS prediction. On 690 ENCODE ChIP-Seq
datasets, GCBLANE achieved an average AUC of 0.943, and on 165 ENCODE datasets,
it reached an AUC of 0.9495, outperforming advanced models that utilize
multimodal approaches, including DNA shape information. This result underscores
GCBLANE's effectiveness compared to other methods. By combining graph-based
learning with sequence analysis, GCBLANE significantly advances TFBS
prediction.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 06:52:03 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ferrao",
"Jonas Chris",
""
],
[
"Dias",
"Dickson",
""
],
[
"Morajkar",
"Sweta",
""
],
[
"Dessai",
"Manisha Gokuldas Fal",
""
]
] | TITLE: GCBLANE: A graph-enhanced convolutional BiLSTM attention network for
improved transcription factor binding site prediction
ABSTRACT: Identifying transcription factor binding sites (TFBS) is crucial for
understanding gene regulation, as these sites enable transcription factors
(TFs) to bind to DNA and modulate gene expression. Despite advances in
high-throughput sequencing, accurately identifying TFBS remains challenging due
to the vast genomic data and complex binding patterns. GCBLANE, a
graph-enhanced convolutional bidirectional Long Short-Term Memory (LSTM)
attention network, is introduced to address this issue. It integrates
convolutional, multi-head attention, and recurrent layers with a graph neural
network to detect key features for TFBS prediction. On 690 ENCODE ChIP-Seq
datasets, GCBLANE achieved an average AUC of 0.943, and on 165 ENCODE datasets,
it reached an AUC of 0.9495, outperforming advanced models that utilize
multimodal approaches, including DNA shape information. This result underscores
GCBLANE's effectiveness compared to other methods. By combining graph-based
learning with sequence analysis, GCBLANE significantly advances TFBS
prediction.
|
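The GCBLANE record above stacks convolutional, recurrent, and attention layers over DNA sequences. Below is a minimal PyTorch sketch of a conv + BiLSTM + multi-head-attention classifier on one-hot windows; the graph branch is omitted and every hyperparameter is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ConvBiLSTMAttn(nn.Module):
    """Sketch of a conv + BiLSTM + multi-head-attention sequence classifier for
    one-hot DNA input of shape (batch, 4, seq_len). GCBLANE's graph branch is
    omitted here."""
    def __init__(self, channels=64, lstm_hidden=32, heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, channels, kernel_size=8, padding="same"),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(channels, lstm_hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.MultiheadAttention(embed_dim=2 * lstm_hidden,
                                          num_heads=heads, batch_first=True)
        self.head = nn.Linear(2 * lstm_hidden, 1)

    def forward(self, x):                      # x: (B, 4, L)
        h = self.conv(x).transpose(1, 2)       # (B, L/2, channels)
        h, _ = self.bilstm(h)                  # (B, L/2, 2*hidden)
        h, _ = self.attn(h, h, h)              # self-attention over positions
        return torch.sigmoid(self.head(h.mean(dim=1)))   # binding-site probability

model = ConvBiLSTMAttn()
print(model(torch.randn(2, 4, 101)).shape)     # torch.Size([2, 1])
```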
2503.12381 | Rudresh Dwivedi | Ruchika Sharma, Rudresh Dwivedi | Deepfake Detection with Optimized Hybrid Model: EAR Biometric Descriptor
via Improved RCNN | Submiited to journal | null | null | null | cs.CV cs.MM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deepfake is a widely used technology employed in recent years to create
pernicious content such as fake news, movies, and rumors by altering and
substituting facial information from various sources. Given the ongoing
evolution of deepfakes investigation of continuous identification and
prevention is crucial. Due to recent technological advancements in AI
(Artificial Intelligence) distinguishing deepfakes and artificially altered
images has become challenging. This approach introduces the robust detection of
subtle ear movements and shape changes to generate ear descriptors. Further, we
also propose a novel optimized hybrid deepfake detection model that considers
the ear biometric descriptors via enhanced RCNN (Region-Based Convolutional
Neural Network). Initially, the input video is converted into frames and
preprocessed through resizing, normalization, grayscale conversion, and
filtering processes followed by face detection using the Viola-Jones technique.
Next, a hybrid model comprising DBN (Deep Belief Network) and Bi-GRU
(Bidirectional Gated Recurrent Unit) is utilized for deepfake detection based
on ear descriptors. The output from the detection phase is determined through
improved score-level fusion. To enhance the performance, the weights of both
detection models are optimally tuned using the SU-JFO (Self-Upgraded Jellyfish
Optimization method). Experimentation is conducted based on four scenarios:
compression, noise, rotation, pose, and illumination on three different
datasets. The performance results affirm that our proposed method outperforms
traditional models such as CNN (Convolution Neural Network), SqueezeNet, LeNet,
LinkNet, LSTM (Long Short-Term Memory), DFP (Deepfake Predictor) [1], and
ResNext+CNN+LSTM [2] in terms of various performance metrics viz. accuracy,
specificity, and precision.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 07:01:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sharma",
"Ruchika",
""
],
[
"Dwivedi",
"Rudresh",
""
]
] | TITLE: Deepfake Detection with Optimized Hybrid Model: EAR Biometric Descriptor
via Improved RCNN
ABSTRACT: Deepfake is a widely used technology employed in recent years to create
pernicious content such as fake news, movies, and rumors by altering and
substituting facial information from various sources. Given the ongoing
evolution of deepfakes investigation of continuous identification and
prevention is crucial. Due to recent technological advancements in AI
(Artificial Intelligence) distinguishing deepfakes and artificially altered
images has become challenging. This approach introduces the robust detection of
subtle ear movements and shape changes to generate ear descriptors. Further, we
also propose a novel optimized hybrid deepfake detection model that considers
the ear biometric descriptors via enhanced RCNN (Region-Based Convolutional
Neural Network). Initially, the input video is converted into frames and
preprocessed through resizing, normalization, grayscale conversion, and
filtering processes followed by face detection using the Viola-Jones technique.
Next, a hybrid model comprising DBN (Deep Belief Network) and Bi-GRU
(Bidirectional Gated Recurrent Unit) is utilized for deepfake detection based
on ear descriptors. The output from the detection phase is determined through
improved score-level fusion. To enhance the performance, the weights of both
detection models are optimally tuned using the SU-JFO (Self-Upgraded Jellyfish
Optimization method). Experimentation is conducted based on four scenarios:
compression, noise, rotation, pose, and illumination on three different
datasets. The performance results affirm that our proposed method outperforms
traditional models such as CNN (Convolution Neural Network), SqueezeNet, LeNet,
LinkNet, LSTM (Long Short-Term Memory), DFP (Deepfake Predictor) [1], and
ResNext+CNN+LSTM [2] in terms of various performance metrics viz. accuracy,
specificity, and precision.
|
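The deepfake-detection record above fuses DBN and Bi-GRU scores with optimally tuned weights. A hedged sketch of weighted score-level fusion in which a plain grid search stands in for the SU-JFO weight tuning; all scores and labels are toy values.

```python
import numpy as np

def fused_score(scores_a, scores_b, w):
    """Weighted score-level fusion of two detector outputs (w in [0, 1])."""
    return w * np.asarray(scores_a) + (1 - w) * np.asarray(scores_b)

def accuracy(scores, labels, thr=0.5):
    return float(np.mean((scores >= thr) == labels))

# Toy validation scores from the two branches and ground-truth labels.
dbn_scores = np.array([0.9, 0.2, 0.7, 0.4, 0.8])
bigru_scores = np.array([0.6, 0.3, 0.9, 0.1, 0.7])
labels = np.array([1, 0, 1, 0, 1])

# A plain grid search stands in for the jellyfish-optimization weight tuning.
best_w = max(np.linspace(0, 1, 21),
             key=lambda w: accuracy(fused_score(dbn_scores, bigru_scores, w), labels))
print(best_w, accuracy(fused_score(dbn_scores, bigru_scores, best_w), labels))
```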