id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
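A minimal sketch of parsing records that follow the schema above, assuming the rows have been exported as JSON Lines (the file name `arxiv_metadata.jsonl` is a placeholder, not the dataset's actual distribution format):

```python
# Parse arXiv-metadata records matching the schema above.
# Assumes one JSON object per line; the file name is hypothetical.
import json

with open("arxiv_metadata.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "versions" is a list of {"version", "created"} dicts;
        # "authors_parsed" is a sequence of [last, first, suffix] triples.
        latest = record["versions"][-1]["version"]
        authors = ", ".join(
            f"{first} {last}".strip()
            for last, first, _suffix in record["authors_parsed"]
        )
        print(f'{record["id"]} ({latest}): {record["title"]} -- {authors}')
```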
2503.24378 | Michael Katz | Harsha Kokel, Michael Katz, Kavitha Srinivas, Shirin Sohrabi | ACPBench Hard: Unrestrained Reasoning about Action, Change, and Planning | Accepted to LM4Plan@AAAI 2025 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The ACPBench dataset provides atomic reasoning tasks required for efficient
planning. The dataset is aimed at distilling the complex plan generation task
into separate atomic reasoning tasks in their easiest possible form, boolean or
multiple-choice questions, where the model has to choose the right answer from
the provided options. While the aim of ACPBench is to test the simplest form of
reasoning about action and change, when tasked with planning, a model does not
typically have options to choose from and thus the reasoning required for
planning dictates an open-ended, generative form for these tasks. To that end,
we introduce ACPBench Hard, a generative version of ACPBench, with open-ended
questions which the model needs to answer. Models that perform well on these
tasks could in principle be integrated into a planner or be used directly as a
policy. We discuss the complexity of these tasks as well as the complexity of
validating the correctness of their answers and present validation algorithms
for each task. Equipped with these validators, we test the performance of a
variety of models on our tasks and find that for most of these tasks the
performance of even the largest models is still subpar. Our experiments show
that no single model consistently outperforms the others on these tasks and
that, with a few exceptions, all tested language models score below 65%,
indicating that even the current
frontier language models have a long way to go before they can reliably reason
about planning. In fact, even the so-called reasoning models struggle with
solving these reasoning tasks. The ACPBench Hard collection is available at the
following link: https://ibm.github.io/ACPBench
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:58:25 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kokel",
"Harsha",
""
],
[
"Katz",
"Michael",
""
],
[
"Srinivas",
"Kavitha",
""
],
[
"Sohrabi",
"Shirin",
""
]
] | TITLE: ACPBench Hard: Unrestrained Reasoning about Action, Change, and Planning
ABSTRACT: The ACPBench dataset provides atomic reasoning tasks required for efficient
planning. The dataset is aimed at distilling the complex plan generation task
into separate atomic reasoning tasks in their easiest possible form, boolean or
multiple-choice questions, where the model has to choose the right answer from
the provided options. While the aim of ACPBench is to test the simplest form of
reasoning about action and change, when tasked with planning, a model does not
typically have options to choose from and thus the reasoning required for
planning dictates an open-ended, generative form for these tasks. To that end,
we introduce ACPBench Hard, a generative version of ACPBench, with open-ended
questions which the model needs to answer. Models that perform well on these
tasks could in principle be integrated into a planner or be used directly as a
policy. We discuss the complexity of these tasks as well as the complexity of
validating the correctness of their answers and present validation algorithms
for each task. Equipped with these validators, we test the performance of a
variety of models on our tasks and find that for most of these tasks the
performance of even the largest models is still subpar. Our experiments show
that no single model consistently outperforms the others on these tasks and
that, with a few exceptions, all tested language models score below 65%,
indicating that even the current
frontier language models have a long way to go before they can reliably reason
about planning. In fact, even the so-called reasoning models struggle with
solving these reasoning tasks. The ACPBench Hard collection is available at the
following link: https://ibm.github.io/ACPBench
|
2503.24379 | Shengqiong Wu | Shengqiong Wu and Weicai Ye and Jiahao Wang and Quande Liu and Xintao
Wang and Pengfei Wan and Di Zhang and Kun Gai and Shuicheng Yan and Hao Fei
and Tat-Seng Chua | Any2Caption: Interpreting Any Condition to Caption for Controllable Video
Generation | Project Page: https://sqwu.top/Any2Cap/ | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | To address the bottleneck of accurate user intent interpretation within the
current video generation community, we present Any2Caption, a novel framework
for controllable video generation under any condition. The key idea is to
decouple various condition interpretation steps from the video synthesis step.
By leveraging modern multimodal large language models (MLLMs), Any2Caption
interprets diverse inputs--text, images, videos, and specialized cues such as
region, motion, and camera poses--into dense, structured captions that provide
backbone video generators with better guidance. We also introduce Any2CapIns, a
large-scale dataset with 337K instances and 407K conditions for
any-condition-to-caption instruction tuning. Comprehensive evaluations
demonstrate significant improvements of our system in controllability and video
quality across various aspects of existing video generation models. Project
Page: https://sqwu.top/Any2Cap/
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:59:01 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wu",
"Shengqiong",
""
],
[
"Ye",
"Weicai",
""
],
[
"Wang",
"Jiahao",
""
],
[
"Liu",
"Quande",
""
],
[
"Wang",
"Xintao",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Zhang",
"Di",
""
],
[
"Gai",
"Kun",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Fei",
"Hao",
""
],
[
"Chua",
"Tat-Seng",
""
]
] | TITLE: Any2Caption: Interpreting Any Condition to Caption for Controllable Video
Generation
ABSTRACT: To address the bottleneck of accurate user intent interpretation within the
current video generation community, we present Any2Caption, a novel framework
for controllable video generation under any condition. The key idea is to
decouple various condition interpretation steps from the video synthesis step.
By leveraging modern multimodal large language models (MLLMs), Any2Caption
interprets diverse inputs--text, images, videos, and specialized cues such as
region, motion, and camera poses--into dense, structured captions that provide
backbone video generators with better guidance. We also introduce Any2CapIns, a
large-scale dataset with 337K instances and 407K conditions for
any-condition-to-caption instruction tuning. Comprehensive evaluations
demonstrate significant improvements of our system in controllability and video
quality across various aspects of existing video generation models. Project
Page: https://sqwu.top/Any2Cap/
|
2503.24381 | Jiachen Li | Yuping Wang and Xiangyu Huang and Xiaokang Sun and Mingxuan Yan and
Shuo Xing and Zhengzhong Tu and Jiachen Li | UniOcc: A Unified Benchmark for Occupancy Forecasting and Prediction in
Autonomous Driving | 14 pages; Dataset: https://huggingface.co/datasets/tasl-lab/uniocc;
Code: https://github.com/tasl-lab/UniOcc | null | null | null | cs.CV cs.AI cs.LG cs.MA cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce UniOcc, a comprehensive, unified benchmark for occupancy
forecasting (i.e., predicting future occupancies based on historical
information) and current-frame occupancy prediction from camera images. UniOcc
unifies data from multiple real-world datasets (e.g., nuScenes, Waymo) and
high-fidelity driving simulators (e.g., CARLA, OpenCOOD), and provides 2D/3D
occupancy labels with per-voxel flow annotations and support for cooperative
autonomous driving. Unlike existing studies that rely on suboptimal pseudo
labels for evaluation, UniOcc incorporates novel metrics
that do not depend on ground-truth occupancy, enabling robust assessment of
additional aspects of occupancy quality. Through extensive experiments on
state-of-the-art models, we demonstrate that large-scale, diverse training data
and explicit flow information significantly enhance occupancy prediction and
forecasting performance.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:59:24 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Yuping",
""
],
[
"Huang",
"Xiangyu",
""
],
[
"Sun",
"Xiaokang",
""
],
[
"Yan",
"Mingxuan",
""
],
[
"Xing",
"Shuo",
""
],
[
"Tu",
"Zhengzhong",
""
],
[
"Li",
"Jiachen",
""
]
] | TITLE: UniOcc: A Unified Benchmark for Occupancy Forecasting and Prediction in
Autonomous Driving
ABSTRACT: We introduce UniOcc, a comprehensive, unified benchmark for occupancy
forecasting (i.e., predicting future occupancies based on historical
information) and current-frame occupancy prediction from camera images. UniOcc
unifies data from multiple real-world datasets (e.g., nuScenes, Waymo) and
high-fidelity driving simulators (e.g., CARLA, OpenCOOD), and provides 2D/3D
occupancy labels with per-voxel flow annotations and support for cooperative
autonomous driving. Unlike existing studies that rely on suboptimal pseudo
labels for evaluation, UniOcc incorporates novel metrics
that do not depend on ground-truth occupancy, enabling robust assessment of
additional aspects of occupancy quality. Through extensive experiments on
state-of-the-art models, we demonstrate that large-scale, diverse training data
and explicit flow information significantly enhance occupancy prediction and
forecasting performance.
|
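The comments field of the UniOcc record above points to the dataset's Hugging Face repository. A minimal sketch of fetching it, assuming only that the repo ID `tasl-lab/uniocc` is valid; `snapshot_download` retrieves the raw files without assuming any particular loader or file layout:

```python
# Fetch the UniOcc dataset files listed in the record's comments field.
# The local layout of the downloaded files is not specified here and
# would need to be inspected after download.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="tasl-lab/uniocc",
    repo_type="dataset",
)
print("UniOcc files downloaded to:", local_dir)
```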
2503.24389 | Chenyang Li | Chenyang Li, Wenxuan Liu, Guoqiang Gong, Xiaobo Ding, Xian Zhong | SU-YOLO: Spiking Neural Network for Efficient Underwater Object
Detection | null | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Underwater object detection is critical for oceanic research and industrial
safety inspections. However, the complex optical environment and the limited
resources of underwater equipment pose significant challenges to achieving high
accuracy and low power consumption. To address these issues, we propose Spiking
Underwater YOLO (SU-YOLO), a Spiking Neural Network (SNN) model. Leveraging the
lightweight and energy-efficient properties of SNNs, SU-YOLO incorporates a
novel spike-based underwater image denoising method based solely on integer
addition, which enhances the quality of feature maps with minimal computational
overhead. In addition, we introduce Separated Batch Normalization (SeBN), a
technique that normalizes feature maps independently across multiple time steps
and is optimized for integration with residual structures to capture the
temporal dynamics of SNNs more effectively. The redesigned spiking residual
blocks integrate the Cross Stage Partial Network (CSPNet) with the YOLO
architecture to mitigate spike degradation and enhance the model's feature
extraction capabilities. Experimental results on the URPC2019 underwater dataset
demonstrate that SU-YOLO achieves an mAP of 78.8% with 6.97M parameters and an
energy consumption of 2.98 mJ, surpassing mainstream SNN models in both
detection accuracy and computational efficiency. These results underscore the
potential of SNNs for engineering applications. The code is available at
https://github.com/lwxfight/snn-underwater.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:59:52 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Chenyang",
""
],
[
"Liu",
"Wenxuan",
""
],
[
"Gong",
"Guoqiang",
""
],
[
"Ding",
"Xiaobo",
""
],
[
"Zhong",
"Xian",
""
]
] | TITLE: SU-YOLO: Spiking Neural Network for Efficient Underwater Object
Detection
ABSTRACT: Underwater object detection is critical for oceanic research and industrial
safety inspections. However, the complex optical environment and the limited
resources of underwater equipment pose significant challenges to achieving high
accuracy and low power consumption. To address these issues, we propose Spiking
Underwater YOLO (SU-YOLO), a Spiking Neural Network (SNN) model. Leveraging the
lightweight and energy-efficient properties of SNNs, SU-YOLO incorporates a
novel spike-based underwater image denoising method based solely on integer
addition, which enhances the quality of feature maps with minimal computational
overhead. In addition, we introduce Separated Batch Normalization (SeBN), a
technique that normalizes feature maps independently across multiple time steps
and is optimized for integration with residual structures to capture the
temporal dynamics of SNNs more effectively. The redesigned spiking residual
blocks integrate the Cross Stage Partial Network (CSPNet) with the YOLO
architecture to mitigate spike degradation and enhance the model's feature
extraction capabilities. Experimental results on the URPC2019 underwater dataset
demonstrate that SU-YOLO achieves an mAP of 78.8% with 6.97M parameters and an
energy consumption of 2.98 mJ, surpassing mainstream SNN models in both
detection accuracy and computational efficiency. These results underscore the
potential of SNNs for engineering applications. The code is available at
https://github.com/lwxfight/snn-underwater.
|
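The SU-YOLO abstract describes Separated Batch Normalization (SeBN) as normalizing feature maps independently across SNN time steps. A minimal PyTorch sketch of that core idea, assuming a `[T, B, C, H, W]` spike-tensor layout; the paper's integration with residual structures is not reproduced here:

```python
# Illustrative sketch of per-time-step normalization (the SeBN idea),
# not the paper's implementation.
import torch
import torch.nn as nn

class SeparatedBatchNorm(nn.Module):
    def __init__(self, num_features: int, num_steps: int):
        super().__init__()
        # One BatchNorm2d per time step, so statistics are tracked
        # independently along the temporal dimension.
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_steps)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [T, B, C, H, W] -- spike feature maps over T time steps.
        return torch.stack([bn(x[t]) for t, bn in enumerate(self.bns)])

# Usage: normalize a 4-step batch of 64-channel feature maps.
x = torch.randn(4, 2, 64, 32, 32)
y = SeparatedBatchNorm(64, num_steps=4)(x)
print(y.shape)  # torch.Size([4, 2, 64, 32, 32])
```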
2503.24391 | Xingyu Chen | Xingyu Chen, Yue Chen, Yuliang Xiu, Andreas Geiger, Anpei Chen | Easi3R: Estimating Disentangled Motion from DUSt3R Without Training | Page: https://easi3r.github.io/ Code:
https://github.com/Inception3D/Easi3R | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in DUSt3R have enabled robust estimation of dense point
clouds and camera parameters of static scenes, leveraging Transformer network
architectures and direct supervision on large-scale 3D datasets. In contrast,
the limited scale and diversity of available 4D datasets present a major
bottleneck for training a highly generalizable 4D model. This constraint has
driven conventional 4D methods to fine-tune 3D models on scalable dynamic video
data with additional geometric priors such as optical flow and depth. In this
work, we take an opposite path and introduce Easi3R, a simple yet efficient
training-free method for 4D reconstruction. Our approach applies attention
adaptation during inference, eliminating the need for from-scratch pre-training
or network fine-tuning. We find that the attention layers in DUSt3R inherently
encode rich information about camera and object motion. By carefully
disentangling these attention maps, we achieve accurate dynamic region
segmentation, camera pose estimation, and 4D dense point map reconstruction.
Extensive experiments on real-world dynamic videos demonstrate that our
lightweight attention adaptation significantly outperforms previous
state-of-the-art methods that are trained or fine-tuned on extensive dynamic
datasets. Our code is publicly available for research purposes at
https://easi3r.github.io/
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 17:59:58 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Xingyu",
""
],
[
"Chen",
"Yue",
""
],
[
"Xiu",
"Yuliang",
""
],
[
"Geiger",
"Andreas",
""
],
[
"Chen",
"Anpei",
""
]
] | TITLE: Easi3R: Estimating Disentangled Motion from DUSt3R Without Training
ABSTRACT: Recent advances in DUSt3R have enabled robust estimation of dense point
clouds and camera parameters of static scenes, leveraging Transformer network
architectures and direct supervision on large-scale 3D datasets. In contrast,
the limited scale and diversity of available 4D datasets present a major
bottleneck for training a highly generalizable 4D model. This constraint has
driven conventional 4D methods to fine-tune 3D models on scalable dynamic video
data with additional geometric priors such as optical flow and depth. In this
work, we take an opposite path and introduce Easi3R, a simple yet efficient
training-free method for 4D reconstruction. Our approach applies attention
adaptation during inference, eliminating the need for from-scratch pre-training
or network fine-tuning. We find that the attention layers in DUSt3R inherently
encode rich information about camera and object motion. By carefully
disentangling these attention maps, we achieve accurate dynamic region
segmentation, camera pose estimation, and 4D dense point map reconstruction.
Extensive experiments on real-world dynamic videos demonstrate that our
lightweight attention adaptation significantly outperforms previous
state-of-the-art methods that are trained or fine-tuned on extensive dynamic
datasets. Our code is publicly available for research purposes at
https://easi3r.github.io/
|
1811.04661 | Namita Jain Mrs | Namita Jain, Susmita Ghosh, C. A. Murthy | RelDenClu: A Relative Density based Biclustering Method for identifying
non-linear feature relations | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The existing biclustering algorithms for finding feature relation based
biclusters often depend on assumptions like monotonicity or linearity. Though a
few algorithms overcome this problem by using density-based methods, they tend
to miss out many biclusters because they use global criteria for identifying
dense regions. The proposed method, RelDenClu, uses the local variations in
marginal and joint densities for each pair of features to find the subset of
observations that forms the basis of the relation between them. It then finds
the set of features connected by a common set of observations, resulting in a
bicluster.
To show the effectiveness of the proposed methodology, experimentation has
been carried out on fifteen types of simulated datasets. Further, it has been
applied to six real-life datasets. For three of these real-life datasets, the
proposed method is used for unsupervised learning, while for the other three
real-life datasets it is used as an aid to supervised learning. For all the
datasets, the performance of the proposed method is compared with that of seven
different state-of-the-art algorithms, and the proposed algorithm is seen to
produce better results. The efficacy of the proposed algorithm is also shown by
its use on a COVID-19 dataset for identifying features (genetic, demographic,
and others) that are likely to affect the spread of COVID-19.
| [
{
"version": "v1",
"created": "Mon, 12 Nov 2018 11:11:26 GMT"
},
{
"version": "v2",
"created": "Thu, 2 May 2019 10:26:25 GMT"
},
{
"version": "v3",
"created": "Mon, 25 May 2020 17:39:50 GMT"
},
{
"version": "v4",
"created": "Thu, 28 May 2020 09:54:59 GMT"
},
{
"version": "v5",
"created": "Tue, 11 May 2021 11:32:37 GMT"
},
{
"version": "v6",
"created": "Fri, 28 Mar 2025 17:02:28 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Jain",
"Namita",
""
],
[
"Ghosh",
"Susmita",
""
],
[
"Murthy",
"C. A.",
""
]
] | TITLE: RelDenClu: A Relative Density based Biclustering Method for identifying
non-linear feature relations
ABSTRACT: The existing biclustering algorithms for finding feature relation based
biclusters often depend on assumptions like monotonicity or linearity. Though a
few algorithms overcome this problem by using density-based methods, they tend
to miss out many biclusters because they use global criteria for identifying
dense regions. The proposed method, RelDenClu, uses the local variations in
marginal and joint densities for each pair of features to find the subset of
observations that forms the basis of the relation between them. It then finds
the set of features connected by a common set of observations, resulting in a
bicluster.
To show the effectiveness of the proposed methodology, experimentation has
been carried out on fifteen types of simulated datasets. Further, it has been
applied to six real-life datasets. For three of these real-life datasets, the
proposed method is used for unsupervised learning, while for the other three
real-life datasets it is used as an aid to supervised learning. For all the
datasets, the performance of the proposed method is compared with that of seven
different state-of-the-art algorithms, and the proposed algorithm is seen to
produce better results. The efficacy of the proposed algorithm is also shown by
its use on a COVID-19 dataset for identifying features (genetic, demographic,
and others) that are likely to affect the spread of COVID-19.
|
2103.10584 | Chaojian Li | Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran
You, Qixuan Yu, Yue Wang, Yingyan Celine Lin | HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark | Accepted at ICLR 2021 (Spotlight) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | HardWare-aware Neural Architecture Search (HW-NAS) has recently gained
tremendous attention for automating the design of DNNs deployed in more
resource-constrained daily life devices. Despite its promising performance,
developing optimal HW-NAS solutions can be prohibitively challenging as it
requires cross-disciplinary knowledge in the algorithm, micro-architecture, and
device-specific compilation. First, to determine the hardware-cost to be
incorporated into the NAS process, existing works mostly adopt either
pre-collected hardware-cost look-up tables or device-specific hardware-cost
models. Both of them limit the development of HW-NAS innovations and impose a
barrier to entry for non-hardware experts. Second, similar to generic NAS, it
can be notoriously difficult to benchmark HW-NAS algorithms due to their
significant required computational resources and the differences in adopted
search spaces, hyperparameters, and hardware devices. To this end, we develop
HW-NAS-Bench, the first public dataset for HW-NAS research which aims to
democratize HW-NAS research to non-hardware experts and make HW-NAS research
more reproducible and accessible. To design HW-NAS-Bench, we carefully
collected the measured/estimated hardware performance of all the networks in
the search spaces of both NAS-Bench-201 and FBNet, on six hardware devices that
fall into three categories (i.e., commercial edge devices, FPGA, and ASIC).
Furthermore, we provide a comprehensive analysis of the collected measurements
in HW-NAS-Bench to provide insights for HW-NAS research. Finally, we
demonstrate exemplary use cases to (1) show that HW-NAS-Bench allows
non-hardware experts to perform HW-NAS by simply querying it and (2) verify
that dedicated device-specific HW-NAS can indeed lead to optimal accuracy-cost
trade-offs. The code and all collected data are available at
https://github.com/RICE-EIC/HW-NAS-Bench.
| [
{
"version": "v1",
"created": "Fri, 19 Mar 2021 01:24:49 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 00:06:06 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Chaojian",
""
],
[
"Yu",
"Zhongzhi",
""
],
[
"Fu",
"Yonggan",
""
],
[
"Zhang",
"Yongan",
""
],
[
"Zhao",
"Yang",
""
],
[
"You",
"Haoran",
""
],
[
"Yu",
"Qixuan",
""
],
[
"Wang",
"Yue",
""
],
[
"Lin",
"Yingyan Celine",
""
]
] | TITLE: HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark
ABSTRACT: HardWare-aware Neural Architecture Search (HW-NAS) has recently gained
tremendous attention for automating the design of DNNs deployed in more
resource-constrained daily life devices. Despite its promising performance,
developing optimal HW-NAS solutions can be prohibitively challenging as it
requires cross-disciplinary knowledge in the algorithm, micro-architecture, and
device-specific compilation. First, to determine the hardware-cost to be
incorporated into the NAS process, existing works mostly adopt either
pre-collected hardware-cost look-up tables or device-specific hardware-cost
models. Both of them limit the development of HW-NAS innovations and impose a
barrier to entry for non-hardware experts. Second, similar to generic NAS, it
can be notoriously difficult to benchmark HW-NAS algorithms due to their
significant required computational resources and the differences in adopted
search spaces, hyperparameters, and hardware devices. To this end, we develop
HW-NAS-Bench, the first public dataset for HW-NAS research which aims to
democratize HW-NAS research to non-hardware experts and make HW-NAS research
more reproducible and accessible. To design HW-NAS-Bench, we carefully
collected the measured/estimated hardware performance of all the networks in
the search spaces of both NAS-Bench-201 and FBNet, on six hardware devices that
fall into three categories (i.e., commercial edge devices, FPGA, and ASIC).
Furthermore, we provide a comprehensive analysis of the collected measurements
in HW-NAS-Bench to provide insights for HW-NAS research. Finally, we
demonstrate exemplary use cases to (1) show that HW-NAS-Bench allows
non-hardware experts to perform HW-NAS by simply querying it and (2) verify
that dedicated device-specific HW-NAS can indeed lead to optimal accuracy-cost
trade-offs. The code and all collected data are available at
https://github.com/RICE-EIC/HW-NAS-Bench.
|
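The HW-NAS-Bench abstract describes performing HW-NAS "by simply querying" pre-collected hardware costs. An illustrative sketch of that usage pattern follows; the table contents, architecture IDs, and function below are hypothetical, and HW-NAS-Bench's real API lives in the linked repository:

```python
# Toy hardware-cost lookup, illustrating the query pattern the abstract
# describes. Numbers and keys are made up for illustration only.
from typing import Dict, Tuple

# (architecture id, device) -> measured latency in ms.
HW_COST_TABLE: Dict[Tuple[str, str], float] = {
    ("arch_0001", "edge_gpu"): 12.4,
    ("arch_0001", "fpga"): 8.9,
    ("arch_0002", "edge_gpu"): 15.1,
}

def query_latency(arch_id: str, device: str) -> float:
    """Return the pre-measured latency, as a NAS search loop would."""
    return HW_COST_TABLE[(arch_id, device)]

# A NAS algorithm can rank candidates by cost without touching hardware.
print(query_latency("arch_0001", "fpga"))  # 8.9
```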
2107.07706 | Chaojian Li | Chaojian Li, Wuyang Chen, Yuchen Gu, Tianlong Chen, Yonggan Fu,
Zhangyang Wang, Yingyan Celine Lin | DANCE: DAta-Network Co-optimization for Efficient Segmentation Model
Training and Inference | 16 pages, 6 figures | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation for scene understanding is nowadays widely demanded,
raising significant challenges for algorithm efficiency, especially for
applications on resource-limited platforms. Current segmentation models are
trained and evaluated on massive high-resolution scene images ("data level")
and suffer from the expensive computation arising from the required multi-scale
aggregation ("network level"). In both respects, the computational and energy
costs in training and inference are notable due to the often desired large input
resolutions and heavy computational burden of segmentation models. To this end,
we propose DANCE, a general automated DAta-Network Co-optimization for Efficient
segmentation model training and inference. Distinct from existing efficient
segmentation approaches that focus merely on light-weight network design, DANCE
distinguishes itself as an automated simultaneous data-network co-optimization
via both input data manipulation and network architecture slimming.
Specifically, DANCE integrates automated data slimming, which adaptively
downsamples/drops input images and controls their corresponding contribution to
the training loss guided by the images' spatial complexity. Such a downsampling
operation, in addition to slimming down the cost associated with the input size
directly, also shrinks the dynamic range of input object and context scales,
therefore motivating us to also adaptively slim the network to match the
downsampled data. Extensive experiments and ablation studies (on four SOTA
segmentation models with three popular segmentation datasets under two training
settings) demonstrate that DANCE can achieve an "all-win" in efficient
segmentation (reduced training cost, less expensive inference, and better mean
Intersection-over-Union (mIoU)).
| [
{
"version": "v1",
"created": "Fri, 16 Jul 2021 04:58:58 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 00:40:56 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Chaojian",
""
],
[
"Chen",
"Wuyang",
""
],
[
"Gu",
"Yuchen",
""
],
[
"Chen",
"Tianlong",
""
],
[
"Fu",
"Yonggan",
""
],
[
"Wang",
"Zhangyang",
""
],
[
"Lin",
"Yingyan Celine",
""
]
] | TITLE: DANCE: DAta-Network Co-optimization for Efficient Segmentation Model
Training and Inference
ABSTRACT: Semantic segmentation for scene understanding is nowadays widely demanded,
raising significant challenges for algorithm efficiency, especially for
applications on resource-limited platforms. Current segmentation models are
trained and evaluated on massive high-resolution scene images ("data level")
and suffer from the expensive computation arising from the required multi-scale
aggregation ("network level"). In both respects, the computational and energy
costs in training and inference are notable due to the often desired large input
resolutions and heavy computational burden of segmentation models. To this end,
we propose DANCE, a general automated DAta-Network Co-optimization for Efficient
segmentation model training and inference. Distinct from existing efficient
segmentation approaches that focus merely on light-weight network design, DANCE
distinguishes itself as an automated simultaneous data-network co-optimization
via both input data manipulation and network architecture slimming.
Specifically, DANCE integrates automated data slimming, which adaptively
downsamples/drops input images and controls their corresponding contribution to
the training loss guided by the images' spatial complexity. Such a downsampling
operation, in addition to slimming down the cost associated with the input size
directly, also shrinks the dynamic range of input object and context scales,
therefore motivating us to also adaptively slim the network to match the
downsampled data. Extensive experiments and ablation studies (on four SOTA
segmentation models with three popular segmentation datasets under two training
settings) demonstrate that DANCE can achieve an "all-win" in efficient
segmentation (reduced training cost, less expensive inference, and better mean
Intersection-over-Union (mIoU)).
|
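The DANCE abstract describes data slimming that downsamples inputs and reweights their loss contribution based on spatial complexity. A hedged sketch of that idea follows; the complexity proxy (Laplacian variance) and the thresholds are assumptions for illustration, not the paper's actual criterion:

```python
# Sketch: complexity-guided input downsampling with a matching loss weight.
# Laplacian-variance proxy and cutoff values are illustrative assumptions.
import torch
import torch.nn.functional as F

def slim_input(img: torch.Tensor) -> tuple[torch.Tensor, float]:
    # img: [B, C, H, W] float tensor. Estimate spatial complexity with a
    # depthwise Laplacian filter.
    lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
    kernel = lap.view(1, 1, 3, 3).repeat(img.shape[1], 1, 1, 1)
    response = F.conv2d(img, kernel, groups=img.shape[1], padding=1)
    complexity = response.var().item()

    # Simple images tolerate aggressive downsampling and contribute less loss.
    scale = 0.5 if complexity < 0.01 else 1.0
    loss_weight = 0.5 if complexity < 0.01 else 1.0
    if scale < 1.0:
        img = F.interpolate(img, scale_factor=scale, mode="bilinear",
                            align_corners=False)
    return img, loss_weight
```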
2209.10368 | Brian Hsuan-Cheng Liao | Brian Hsuan-Cheng Liao, Chih-Hong Cheng, Hasan Esen, Alois Knoll | USC: Uncompromising Spatial Constraints for Safety-Oriented 3D Object
Detectors in Autonomous Driving | Accepted by ITSC 2024, 8 pages (IEEE double column format), 7
figures, 2 tables | null | 10.1109/ITSC58415.2024.10919937 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we consider the safety-oriented performance of 3D object
detectors in autonomous driving contexts. Specifically, despite impressive
results shown in the vast literature, developers often find it hard to ensure
the safe deployment of these learning-based perception models. Attributing the
challenge to the lack of safety-oriented metrics, we hereby present
uncompromising spatial constraints (USC), which characterize a simple yet
important localization requirement demanding the predictions to fully cover the
objects when seen from the autonomous vehicle. The constraints, which we formulate
using the perspective and bird's-eye views, can be naturally reflected by
quantitative measures, such that having an object detector with a higher score
implies a lower risk of collision. Finally, beyond model evaluation, we
incorporate the quantitative measures into common loss functions to enable
safety-oriented fine-tuning for existing models. With experiments using the
nuScenes dataset and a closed-loop simulation, our work demonstrates that such
considerations of safety notions at the perception level not only improve model
performance beyond accuracy but also allow for a more direct linkage to actual
system safety.
| [
{
"version": "v1",
"created": "Wed, 21 Sep 2022 14:03:08 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2023 16:21:02 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Mar 2023 12:26:16 GMT"
},
{
"version": "v4",
"created": "Thu, 2 May 2024 15:46:28 GMT"
},
{
"version": "v5",
"created": "Fri, 28 Mar 2025 16:42:03 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Liao",
"Brian Hsuan-Cheng",
""
],
[
"Cheng",
"Chih-Hong",
""
],
[
"Esen",
"Hasan",
""
],
[
"Knoll",
"Alois",
""
]
] | TITLE: USC: Uncompromising Spatial Constraints for Safety-Oriented 3D Object
Detectors in Autonomous Driving
ABSTRACT: In this work, we consider the safety-oriented performance of 3D object
detectors in autonomous driving contexts. Specifically, despite impressive
results shown in the vast literature, developers often find it hard to ensure
the safe deployment of these learning-based perception models. Attributing the
challenge to the lack of safety-oriented metrics, we hereby present
uncompromising spatial constraints (USC), which characterize a simple yet
important localization requirement demanding the predictions to fully cover the
objects when seen from the autonomous vehicle. The constraints, which we formulate
using the perspective and bird's-eye views, can be naturally reflected by
quantitative measures, such that having an object detector with a higher score
implies a lower risk of collision. Finally, beyond model evaluation, we
incorporate the quantitative measures into common loss functions to enable
safety-oriented fine-tuning for existing models. With experiments using the
nuScenes dataset and a closed-loop simulation, our work demonstrates that such
considerations of safety notions at the perception level not only improve model
performance beyond accuracy but also allow for a more direct linkage to actual
system safety.
|
2210.07072 | Zhendi Gong | Zhendi Gong, Andrew P. French, Guoping Qiu, Xin Chen | ConvTransSeg: A Multi-resolution Convolution-Transformer Network for
Medical Image Segmentation | 12 pages, 5 figures, 4 tables, also submitted to IEEE-TMI | In 2024 IEEE International Symposium on Biomedical Imaging (ISBI)
(pp. 1-5). IEEE (2024) | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Convolutional neural networks (CNNs) have achieved state-of-the-art
performance in medical image segmentation due to their ability to extract
highly complex feature representations. However, it is argued in recent studies
that traditional CNNs lack the capacity to capture long-range dependencies
between different image regions. Following the success of applying Transformer
models on natural language processing tasks, the medical image segmentation
field has also witnessed growing interest in utilizing Transformers, due to
their ability to capture long-range contextual information. However, unlike
CNNs, Transformers lack the ability to learn local feature representations.
Thus, to fully utilize the advantages of both CNNs and Transformers, we propose
a hybrid encoder-decoder segmentation model (ConvTransSeg). It consists of a
multi-layer CNN as the encoder for feature learning and the corresponding
multi-level Transformer as the decoder for segmentation prediction. The encoder
and decoder are interconnected in a multi-resolution manner. We compared our
method with many other state-of-the-art hybrid CNN and Transformer segmentation
models on binary and multi-class image segmentation tasks using several
public medical image datasets, including skin lesion, polyp, cell and brain
tissue. The experimental results show that our method achieves overall the best
performance in terms of Dice coefficient and average symmetric surface distance
measures with low model complexity and memory consumption. In contrast to most
Transformer-based methods that we compared, our method does not require the use
of pre-trained models to achieve similar or better performance. The code is
freely available for research purposes on Github: (the link will be added upon
acceptance).
| [
{
"version": "v1",
"created": "Thu, 13 Oct 2022 14:59:23 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Gong",
"Zhendi",
""
],
[
"French",
"Andrew P.",
""
],
[
"Qiu",
"Guoping",
""
],
[
"Chen",
"Xin",
""
]
] | TITLE: ConvTransSeg: A Multi-resolution Convolution-Transformer Network for
Medical Image Segmentation
ABSTRACT: Convolutional neural networks (CNNs) have achieved state-of-the-art
performance in medical image segmentation due to their ability to extract
highly complex feature representations. However, it is argued in recent studies
that traditional CNNs lack the capacity to capture long-range dependencies
between different image regions. Following the success of applying Transformer
models on natural language processing tasks, the medical image segmentation
field has also witnessed growing interest in utilizing Transformers, due to
their ability to capture long-range contextual information. However, unlike
CNNs, Transformers lack the ability to learn local feature representations.
Thus, to fully utilize the advantages of both CNNs and Transformers, we propose
a hybrid encoder-decoder segmentation model (ConvTransSeg). It consists of a
multi-layer CNN as the encoder for feature learning and the corresponding
multi-level Transformer as the decoder for segmentation prediction. The encoder
and decoder are interconnected in a multi-resolution manner. We compared our
method with many other state-of-the-art hybrid CNN and Transformer segmentation
models on binary and multi-class image segmentation tasks using several
public medical image datasets, including skin lesion, polyp, cell and brain
tissue. The experimental results show that our method achieves overall the best
performance in terms of Dice coefficient and average symmetric surface distance
measures with low model complexity and memory consumption. In contrast to most
Transformer-based methods that we compared, our method does not require the use
of pre-trained models to achieve similar or better performance. The code is
freely available for research purposes on Github: (the link will be added upon
acceptance).
|
2211.09810 | Yuan Xiao | Yuan Xiao, Yuchen Chen, Shiqing Ma, Chunrong Fang, Tongtong Bai,
Mingzheng Gu, Yuxin Cheng, Yanwei Chen, Zhenyu Chen | Tightening Robustness Verification of MaxPool-based Neural Networks via
Minimizing the Over-Approximation Zone | Accepted to CVPR 2025. Code Link:
https://github.com/xiaoyuanpigo/Ti-Lin-Hybrid-Lin | null | null | null | cs.LG cs.AI cs.CR | http://creativecommons.org/licenses/by/4.0/ | The robustness of neural network classifiers is important in the
safety-critical domain and can be quantified by robustness verification. At
present, efficient and scalable verification techniques are always sound but
incomplete, and thus, the improvement of verified robustness results is the key
criterion to evaluate the performance of incomplete verification approaches.
The multi-variate function MaxPool is widely adopted yet challenging to verify.
In this paper, we present Ti-Lin, a robustness verifier for MaxPool-based CNNs
with Tight Linear Approximation. In pursuit of minimizing the
over-approximation zone of the non-linear functions of CNNs, we are the first to
propose the provably neuron-wise tightest linear bounds for the MaxPool
function. By our proposed linear bounds, we can certify larger robustness
results for CNNs. We evaluate the effectiveness of Ti-Lin on different
verification frameworks with open-sourced benchmarks, including LeNet,
PointNet, and networks trained on the MNIST, CIFAR-10, Tiny ImageNet and
ModelNet40 datasets. Experimental results show that Ti-Lin significantly
outperforms the state-of-the-art methods across all networks, with up to 78.6%
improvement in certified accuracy at almost the same time cost as the fastest
tool. Our code is available at
https://github.com/xiaoyuanpigo/Ti-Lin-Hybrid-Lin.
| [
{
"version": "v1",
"created": "Sun, 13 Nov 2022 08:37:13 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 08:45:35 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Xiao",
"Yuan",
""
],
[
"Chen",
"Yuchen",
""
],
[
"Ma",
"Shiqing",
""
],
[
"Fang",
"Chunrong",
""
],
[
"Bai",
"Tongtong",
""
],
[
"Gu",
"Mingzheng",
""
],
[
"Cheng",
"Yuxin",
""
],
[
"Chen",
"Yanwei",
""
],
[
"Chen",
"Zhenyu",
""
]
] | TITLE: Tightening Robustness Verification of MaxPool-based Neural Networks via
Minimizing the Over-Approximation Zone
ABSTRACT: The robustness of neural network classifiers is important in the
safety-critical domain and can be quantified by robustness verification. At
present, efficient and scalable verification techniques are always sound but
incomplete, and thus, the improvement of verified robustness results is the key
criterion to evaluate the performance of incomplete verification approaches.
The multi-variate function MaxPool is widely adopted yet challenging to verify.
In this paper, we present Ti-Lin, a robustness verifier for MaxPool-based CNNs
with Tight Linear Approximation. In pursuit of minimizing the
over-approximation zone of the non-linear functions of CNNs, we are the first to
propose the provably neuron-wise tightest linear bounds for the MaxPool
function. By our proposed linear bounds, we can certify larger robustness
results for CNNs. We evaluate the effectiveness of Ti-Lin on different
verification frameworks with open-sourced benchmarks, including LeNet,
PointNet, and networks trained on the MNIST, CIFAR-10, Tiny ImageNet and
ModelNet40 datasets. Experimental results show that Ti-Lin significantly
outperforms the state-of-the-art methods across all networks, with up to 78.6%
improvement in certified accuracy at almost the same time cost as the fastest
tool. Our code is available at
https://github.com/xiaoyuanpigo/Ti-Lin-Hybrid-Lin.
|
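For context on the Ti-Lin abstract, this is the generic linear relaxation such verifiers compute for a MaxPool neuron; it is the standard form in this line of work, and Ti-Lin's specific coefficients (proved neuron-wise tightest in the paper) are not reproduced here:

```latex
% A MaxPool neuron y = \max_i x_i with pre-activation bounds l_i \le x_i \le u_i
% is sandwiched between two linear functions; the verifier's over-approximation
% zone is the gap between the lower and upper planes.
\[
  \underline{y}(x) = \sum_i a^{L}_{i} x_i + b^{L}
  \;\le\; \max_i x_i \;\le\;
  \sum_i a^{U}_{i} x_i + b^{U} = \overline{y}(x),
  \qquad l_i \le x_i \le u_i .
\]
```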
2212.01120 | Chaojian Li | Chaojian Li, Sixu Li, Yang Zhao, Wenbo Zhu, Yingyan Celine Lin | RT-NeRF: Real-Time On-Device Neural Radiance Fields Towards Immersive
AR/VR Rendering | Accepted to ICCAD 2022 | null | null | null | cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural Radiance Field (NeRF) based rendering has attracted growing attention
thanks to its state-of-the-art (SOTA) rendering quality and wide applications
in Augmented and Virtual Reality (AR/VR). However, interactions enabled by
immersive, real-time (>30 FPS) NeRF-based rendering are still limited due to
the low achievable throughput on AR/VR devices. To this end, we first profile SOTA
efficient NeRF algorithms on commercial devices and identify two primary causes
of the aforementioned inefficiency: (1) the uniform point sampling and (2) the
dense accesses and computations of the required embeddings in NeRF.
Furthermore, we propose RT-NeRF, which to the best of our knowledge is the
first algorithm-hardware co-design acceleration of NeRF. Specifically, on the
algorithm level, RT-NeRF integrates an efficient rendering pipeline for largely
alleviating the inefficiency due to the commonly adopted uniform point sampling
method in NeRF by directly computing the geometry of pre-existing points.
Additionally, RT-NeRF leverages a coarse-grained view-dependent computing
ordering scheme for eliminating the (unnecessary) processing of invisible
points. On the hardware level, our proposed RT-NeRF accelerator (1) adopts a
hybrid encoding scheme to adaptively switch between a bitmap- or
coordinate-based sparsity encoding format for NeRF's sparse embeddings, aiming
to maximize the storage savings and thus reduce the required DRAM accesses
while supporting efficient NeRF decoding; and (2) integrates both a
dual-purpose bi-direction adder & search tree and a high-density sparse search
unit to coordinate the two aforementioned encoding formats. Extensive
experiments on eight datasets consistently validate the effectiveness of
RT-NeRF, achieving a large throughput improvement (e.g., 9.7x - 3,201x) while
maintaining the rendering quality as compared with SOTA efficient NeRF
solutions.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2022 12:08:42 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 01:09:01 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Chaojian",
""
],
[
"Li",
"Sixu",
""
],
[
"Zhao",
"Yang",
""
],
[
"Zhu",
"Wenbo",
""
],
[
"Lin",
"Yingyan Celine",
""
]
] | TITLE: RT-NeRF: Real-Time On-Device Neural Radiance Fields Towards Immersive
AR/VR Rendering
ABSTRACT: Neural Radiance Field (NeRF) based rendering has attracted growing attention
thanks to its state-of-the-art (SOTA) rendering quality and wide applications
in Augmented and Virtual Reality (AR/VR). However, interactions enabled by
immersive, real-time (>30 FPS) NeRF-based rendering are still limited due to
the low achievable throughput on AR/VR devices. To this end, we first profile SOTA
efficient NeRF algorithms on commercial devices and identify two primary causes
of the aforementioned inefficiency: (1) the uniform point sampling and (2) the
dense accesses and computations of the required embeddings in NeRF.
Furthermore, we propose RT-NeRF, which to the best of our knowledge is the
first algorithm-hardware co-design acceleration of NeRF. Specifically, on the
algorithm level, RT-NeRF integrates an efficient rendering pipeline for largely
alleviating the inefficiency due to the commonly adopted uniform point sampling
method in NeRF by directly computing the geometry of pre-existing points.
Additionally, RT-NeRF leverages a coarse-grained view-dependent computing
ordering scheme for eliminating the (unnecessary) processing of invisible
points. On the hardware level, our proposed RT-NeRF accelerator (1) adopts a
hybrid encoding scheme to adaptively switch between a bitmap- or
coordinate-based sparsity encoding format for NeRF's sparse embeddings, aiming
to maximize the storage savings and thus reduce the required DRAM accesses
while supporting efficient NeRF decoding; and (2) integrates both a
dual-purpose bi-direction adder & search tree and a high-density sparse search
unit to coordinate the two aforementioned encoding formats. Extensive
experiments on eight datasets consistently validate the effectiveness of
RT-NeRF, achieving a large throughput improvement (e.g., 9.7x - 3,201x) while
maintaining the rendering quality as compared with SOTA efficient NeRF
solutions.
|
2212.01959 | Chaojian Li | Chaojian Li, Bichen Wu, Albert Pumarola, Peizhao Zhang, Yingyan Celine
Lin, and Peter Vajda | INGeo: Accelerating Instant Neural Scene Reconstruction with Noisy
Geometry Priors | Accepted by Computer Vision for Metaverse Workshop @ ECCV'22 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method that accelerates reconstruction of 3D scenes and objects,
aiming to enable instant reconstruction on edge devices such as mobile phones
and AR/VR headsets. While recent works have accelerated scene reconstruction
training to minute/second-level on high-end GPUs, there is still a large gap to
the goal of instant training on edge devices, which is highly desired in
many emerging applications such as immersive AR/VR. To this end, this work aims
to further accelerate training by leveraging geometry priors of the target
scene. We propose strategies to alleviate the noise of the imperfect
geometry priors and thereby accelerate training on top of the highly optimized
Instant-NGP. On the NeRF Synthetic dataset, our work uses half of the training
iterations to reach an average test PSNR of >30.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2022 00:19:59 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 01:11:34 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Chaojian",
""
],
[
"Wu",
"Bichen",
""
],
[
"Pumarola",
"Albert",
""
],
[
"Zhang",
"Peizhao",
""
],
[
"Lin",
"Yingyan Celine",
""
],
[
"Vajda",
"Peter",
""
]
] | TITLE: INGeo: Accelerating Instant Neural Scene Reconstruction with Noisy
Geometry Priors
ABSTRACT: We present a method that accelerates reconstruction of 3D scenes and objects,
aiming to enable instant reconstruction on edge devices such as mobile phones
and AR/VR headsets. While recent works have accelerated scene reconstruction
training to minute/second-level on high-end GPUs, there is still a large gap to
the goal of instant training on edge devices, which is highly desired in
many emerging applications such as immersive AR/VR. To this end, this work aims
to further accelerate training by leveraging geometry priors of the target
scene. We propose strategies to alleviate the noise of the imperfect
geometry priors and thereby accelerate training on top of the highly optimized
Instant-NGP. On the NeRF Synthetic dataset, our work uses half of the training
iterations to reach an average test PSNR of >30.
|
2303.10727 | Chaojian Li | Chaojian Li, Wenwan Chen, Jiayi Yuan, Yingyan Celine Lin, Ashutosh
Sabharwal | ERSAM: Neural Architecture Search For Energy-Efficient and Real-Time
Social Ambiance Measurement | Accepted by ICASSP'23 | null | null | null | cs.LG cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social ambiance describes the context in which social interactions happen,
and can be measured using speech audio by counting the number of concurrent
speakers. This measurement has enabled various mental health tracking and
human-centric IoT applications. While on-device Social Ambiance Measure (SAM) is
highly desirable to ensure user privacy and thus facilitate wide adoption of
the aforementioned applications, the required computational complexity of
state-of-the-art deep neural network (DNN)-powered SAM solutions stands at
odds with the often constrained resources on mobile devices. Furthermore, only
limited labeled data is available or practical when it comes to SAM under
clinical settings due to various privacy constraints and the required human
effort, further challenging the achievable accuracy of on-device SAM solutions.
To this end, we propose a dedicated neural architecture search framework for
Energy-efficient and Real-time SAM (ERSAM). Specifically, our ERSAM framework
can automatically search for DNNs that push forward the achievable accuracy vs.
hardware efficiency frontier of mobile SAM solutions. For example,
ERSAM-delivered DNNs consume only 40 mW x 12 h of energy and incur 0.05 seconds
of processing latency for a 5-second audio segment on a Pixel 3 phone, while
achieving an error rate of only 14.3% on a social ambiance dataset generated by
LibriSpeech. We expect that our ERSAM framework can pave the way for
ubiquitous on-device SAM solutions, which are in growing demand.
| [
{
"version": "v1",
"created": "Sun, 19 Mar 2023 18:08:18 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 05:45:38 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 04:03:48 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Chaojian",
""
],
[
"Chen",
"Wenwan",
""
],
[
"Yuan",
"Jiayi",
""
],
[
"Lin",
"Yingyan Celine",
""
],
[
"Sabharwal",
"Ashutosh",
""
]
] | TITLE: ERSAM: Neural Architecture Search For Energy-Efficient and Real-Time
Social Ambiance Measurement
ABSTRACT: Social ambiance describes the context in which social interactions happen,
and can be measured using speech audio by counting the number of concurrent
speakers. This measurement has enabled various mental health tracking and
human-centric IoT applications. While on-device Social Ambiance Measure (SAM) is
highly desirable to ensure user privacy and thus facilitate wide adoption of
the aforementioned applications, the required computational complexity of
state-of-the-art deep neural network (DNN)-powered SAM solutions stands at
odds with the often constrained resources on mobile devices. Furthermore, only
limited labeled data is available or practical when it comes to SAM under
clinical settings due to various privacy constraints and the required human
effort, further challenging the achievable accuracy of on-device SAM solutions.
To this end, we propose a dedicated neural architecture search framework for
Energy-efficient and Real-time SAM (ERSAM). Specifically, our ERSAM framework
can automatically search for DNNs that push forward the achievable accuracy vs.
hardware efficiency frontier of mobile SAM solutions. For example,
ERSAM-delivered DNNs consume only 40 mW x 12 h of energy and incur 0.05 seconds
of processing latency for a 5-second audio segment on a Pixel 3 phone, while
achieving an error rate of only 14.3% on a social ambiance dataset generated by
LibriSpeech. We expect that our ERSAM framework can pave the way for
ubiquitous on-device SAM solutions, which are in growing demand.
|
2309.07663 | Yuma Ichikawa | Yuma Ichikawa and Koji Hukushima | High-dimensional Asymptotics of VAEs: Threshold of Posterior Collapse
and Dataset-Size Dependence of Rate-Distortion Curve | 25 pages, 7 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In variational autoencoders (VAEs), the variational posterior often collapses
to the prior, known as posterior collapse, which leads to poor representation
learning quality. An adjustable hyperparameter beta has been introduced in VAEs
to address this issue. This study sharply evaluates the conditions under which
the posterior collapse occurs with respect to beta and dataset size by
analyzing a minimal VAE in a high-dimensional limit. Additionally, this setting
enables the evaluation of the rate-distortion curve of the VAE. Our results
show that, unlike typical regularization parameters, VAEs face "inevitable
posterior collapse" beyond a certain beta threshold, regardless of dataset
size. Moreover, the dataset-size dependence of the derived rate-distortion
curve suggests that relatively large datasets are required to achieve a
rate-distortion curve with high rates. These findings robustly explain
generalization behavior observed in various real datasets with highly
non-linear VAEs.
| [
{
"version": "v1",
"created": "Thu, 14 Sep 2023 12:27:17 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 09:12:46 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Ichikawa",
"Yuma",
""
],
[
"Hukushima",
"Koji",
""
]
] | TITLE: High-dimensional Asymptotics of VAEs: Threshold of Posterior Collapse
and Dataset-Size Dependence of Rate-Distortion Curve
ABSTRACT: In variational autoencoders (VAEs), the variational posterior often collapses
to the prior, known as posterior collapse, which leads to poor representation
learning quality. An adjustable hyperparameter beta has been introduced in VAEs
to address this issue. This study sharply evaluates the conditions under which
the posterior collapse occurs with respect to beta and dataset size by
analyzing a minimal VAE in a high-dimensional limit. Additionally, this setting
enables the evaluation of the rate-distortion curve of the VAE. Our results
show that, unlike typical regularization parameters, VAEs face "inevitable
posterior collapse" beyond a certain beta threshold, regardless of dataset
size. Moreover, the dataset-size dependence of the derived rate-distortion
curve suggests that relatively large datasets are required to achieve a
rate-distortion curve with high rates. These findings robustly explain
generalization behavior observed in various real datasets with highly
non-linear VAEs.
|
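The "adjustable hyperparameter beta" referenced in the abstract above is the KL weight in the standard beta-VAE objective, shown here for reference; the paper's minimal-VAE, high-dimensional analysis setup is not reproduced:

```latex
% Standard beta-VAE objective; posterior collapse corresponds to
% q_\phi(z|x) \approx p(z), at which point the KL term vanishes and the
% latent code carries no information about x.
\[
  \mathcal{L}_{\beta}(\theta, \phi; x)
  = \mathbb{E}_{q_{\phi}(z \mid x)}\!\left[ \log p_{\theta}(x \mid z) \right]
  - \beta \, D_{\mathrm{KL}}\!\left( q_{\phi}(z \mid x) \,\|\, p(z) \right).
\]
```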
2312.11841 | Chaojian Li | Chaojian Li, Bichen Wu, Peter Vajda, Yingyan Celine Lin | MixRT: Mixed Neural Representations For Real-Time NeRF Rendering | Accepted by 3DV'24. Project Page: https://licj15.github.io/MixRT/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural Radiance Field (NeRF) has emerged as a leading technique for novel
view synthesis, owing to its impressive photorealistic reconstruction and
rendering capability. Nevertheless, achieving real-time NeRF rendering in
large-scale scenes has presented challenges, often leading to the adoption of
either intricate baked mesh representations with a substantial number of
triangles or resource-intensive ray marching in baked representations. We
challenge these conventions, observing that high-quality geometry, represented
by meshes with substantial triangles, is not necessary for achieving
photorealistic rendering quality. Consequently, we propose MixRT, a novel NeRF
representation that includes a low-quality mesh, a view-dependent displacement
map, and a compressed NeRF model. This design effectively harnesses the
capabilities of existing graphics hardware, thus enabling real-time NeRF
rendering on edge devices. Leveraging a highly-optimized WebGL-based rendering
framework, our proposed MixRT attains real-time rendering speeds on edge
devices (over 30 FPS at a resolution of 1280 x 720 on a MacBook M1 Pro laptop),
better rendering quality (0.2 PSNR higher in indoor scenes of the Unbounded-360
datasets), and a smaller storage size (less than 80% compared to
state-of-the-art methods).
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 04:14:11 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Dec 2023 03:14:40 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Jan 2024 03:38:54 GMT"
},
{
"version": "v4",
"created": "Mon, 22 Jan 2024 14:59:20 GMT"
},
{
"version": "v5",
"created": "Fri, 28 Mar 2025 04:07:01 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Chaojian",
""
],
[
"Wu",
"Bichen",
""
],
[
"Vajda",
"Peter",
""
],
[
"Lin",
"Yingyan Celine",
""
]
] | TITLE: MixRT: Mixed Neural Representations For Real-Time NeRF Rendering
ABSTRACT: Neural Radiance Field (NeRF) has emerged as a leading technique for novel
view synthesis, owing to its impressive photorealistic reconstruction and
rendering capability. Nevertheless, achieving real-time NeRF rendering in
large-scale scenes has presented challenges, often leading to the adoption of
either intricate baked mesh representations with a substantial number of
triangles or resource-intensive ray marching in baked representations. We
challenge these conventions, observing that high-quality geometry, represented
by meshes with substantial triangles, is not necessary for achieving
photorealistic rendering quality. Consequently, we propose MixRT, a novel NeRF
representation that includes a low-quality mesh, a view-dependent displacement
map, and a compressed NeRF model. This design effectively harnesses the
capabilities of existing graphics hardware, thus enabling real-time NeRF
rendering on edge devices. Leveraging a highly-optimized WebGL-based rendering
framework, our proposed MixRT attains real-time rendering speeds on edge
devices (over 30 FPS at a resolution of 1280 x 720 on a MacBook M1 Pro laptop),
better rendering quality (0.2 PSNR higher in indoor scenes of the Unbounded-360
datasets), and a smaller storage size (less than 80% compared to
state-of-the-art methods).
|
2402.07338 | Joshua Krinsky | Joshua Krinsky, Alan Bettis, Qiuyu Tang, Daniel Moreira, Aparna
Bharati | Exploring Saliency Bias in Manipulation Detection | Published in: 2024 IEEE International Conference on Image Processing
(ICIP) | null | 10.1109/ICIP51287.2024.10648063 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The social media-fuelled explosion of fake news and misinformation supported
by tampered images has led to growth in the development of models and datasets
for image manipulation detection. However, existing detection methods mostly
treat media objects in isolation, without considering the impact of specific
manipulations on viewer perception. Forensic datasets are usually analyzed
based on the manipulation operations and corresponding pixel-based masks, but
not on the semantics of the manipulation, i.e., type of scene, objects, and
viewers' attention to scene content. The semantics of the manipulation play an
important role in spreading misinformation through manipulated images. In an
attempt to encourage further development of semantic-aware forensic approaches
to understand visual misinformation, we propose a framework to analyze the
trends of visual and semantic saliency in popular image manipulation datasets
and their impact on detection.
| [
{
"version": "v1",
"created": "Mon, 12 Feb 2024 00:08:51 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Feb 2024 21:47:47 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Aug 2024 18:20:44 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Mar 2025 16:53:29 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Krinsky",
"Joshua",
""
],
[
"Bettis",
"Alan",
""
],
[
"Tang",
"Qiuyu",
""
],
[
"Moreira",
"Daniel",
""
],
[
"Bharati",
"Aparna",
""
]
] | TITLE: Exploring Saliency Bias in Manipulation Detection
ABSTRACT: The social media-fuelled explosion of fake news and misinformation supported
by tampered images has led to growth in the development of models and datasets
for image manipulation detection. However, existing detection methods mostly
treat media objects in isolation, without considering the impact of specific
manipulations on viewer perception. Forensic datasets are usually analyzed
based on the manipulation operations and corresponding pixel-based masks, but
not on the semantics of the manipulation, i.e., type of scene, objects, and
viewers' attention to scene content. The semantics of the manipulation play an
important role in spreading misinformation through manipulated images. In an
attempt to encourage further development of semantic-aware forensic approaches
to understand visual misinformation, we propose a framework to analyze the
trends of visual and semantic saliency in popular image manipulation datasets
and their impact on detection.
|
2402.07877 | Yangxinyu Xie | Yangxinyu Xie, Bowen Jiang, Tanwi Mallick, Joshua David Bergerson,
John K. Hutchison, Duane R. Verner, Jordan Branham, M. Ross Alexander, Robert
B. Ross, Yan Feng, Leslie-Anne Levy, Weijie Su, Camillo J. Taylor | A RAG-Based Multi-Agent LLM System for Natural Hazard Resilience and
Adaptation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large language models (LLMs) are a transformational capability at the
frontier of artificial intelligence and machine learning that can support
decision-makers in addressing pressing societal challenges such as extreme
natural hazard events. As generalized models, LLMs often struggle to provide
context-specific information, particularly in areas requiring specialized
knowledge. In this work, we propose a Retrieval-Augmented Generation
(RAG)-based multi-agent LLM system to support analysis and decision-making in
the context of natural hazards and extreme weather events. As a proof of
concept, we present WildfireGPT, a specialized system focused on wildfire
scenarios. The architecture employs a user-centered, multi-agent design to
deliver tailored risk insights across diverse stakeholder groups. By
integrating domain-specific projection data, observational datasets, and
scientific literature through a RAG framework, the system ensures both accuracy
and contextual relevance of the information it provides. Evaluation across ten
expert-led case studies demonstrates that WildfireGPT significantly outperforms
existing LLM-based solutions for decision support in natural hazard and extreme
weather contexts.
| [
{
"version": "v1",
"created": "Mon, 12 Feb 2024 18:41:55 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Aug 2024 19:01:23 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 17:14:39 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Xie",
"Yangxinyu",
""
],
[
"Jiang",
"Bowen",
""
],
[
"Mallick",
"Tanwi",
""
],
[
"Bergerson",
"Joshua David",
""
],
[
"Hutchison",
"John K.",
""
],
[
"Verner",
"Duane R.",
""
],
[
"Branham",
"Jordan",
""
],
[
"Alexander",
"M. Ross",
""
],
[
"Ross",
"Robert B.",
""
],
[
"Feng",
"Yan",
""
],
[
"Levy",
"Leslie-Anne",
""
],
[
"Su",
"Weijie",
""
],
[
"Taylor",
"Camillo J.",
""
]
] | TITLE: A RAG-Based Multi-Agent LLM System for Natural Hazard Resilience and
Adaptation
ABSTRACT: Large language models (LLMs) are a transformational capability at the
frontier of artificial intelligence and machine learning that can support
decision-makers in addressing pressing societal challenges such as extreme
natural hazard events. As generalized models, LLMs often struggle to provide
context-specific information, particularly in areas requiring specialized
knowledge. In this work, we propose a Retrieval-Augmented Generation
(RAG)-based multi-agent LLM system to support analysis and decision-making in
the context of natural hazards and extreme weather events. As a proof of
concept, we present WildfireGPT, a specialized system focused on wildfire
scenarios. The architecture employs a user-centered, multi-agent design to
deliver tailored risk insights across diverse stakeholder groups. By
integrating domain-specific projection data, observational datasets, and
scientific literature through a RAG framework, the system ensures both accuracy
and contextual relevance of the information it provides. Evaluation across ten
expert-led case studies demonstrates that WildfireGPT significantly outperforms
existing LLM-based solutions for decision support in natural hazard and extreme
weather contexts.
|
2402.17398 | Nishikanta Mohanty | Nishikanta Mohanty, Bikash K. Behera, Christopher Ferrie and Pravat
Dash | A Quantum Approach to Synthetic Minority Oversampling Technique (SMOTE) | 42 Pages, 23 Figures, 2 Tables | Quantum Mach. Intell. 7, 38 (2025) | 10.1007/s42484-025-00248-6 | null | quant-ph cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The paper proposes the Quantum-SMOTE method, a novel solution that uses
quantum computing techniques to solve the prevalent problem of class imbalance
in machine learning datasets. Quantum-SMOTE, inspired by the Synthetic Minority
Oversampling Technique (SMOTE), generates synthetic data points using quantum
processes such as swap tests and quantum rotation. The process varies from the
conventional SMOTE algorithm's usage of K-Nearest Neighbors (KNN) and Euclidean
distances, enabling synthetic instances to be generated from minority class
data points without relying on neighbor proximity. The algorithm asserts
greater control over the synthetic data generation process by introducing
hyperparameters such as rotation angle, minority percentage, and splitting
factor, which allow for customization to specific dataset requirements. Due to
the use of a compact swap test, the algorithm can accommodate a large number of
features. Furthermore, the approach is tested on a public dataset of Telecom
Churn and evaluated alongside two prominent classification algorithms, Random
Forest and Logistic Regression, to determine its impact along with varying
proportions of synthetic data.
| [
{
"version": "v1",
"created": "Tue, 27 Feb 2024 10:46:36 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Feb 2024 22:33:55 GMT"
},
{
"version": "v3",
"created": "Thu, 4 Jul 2024 10:06:23 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Mohanty",
"Nishikanta",
""
],
[
"Behera",
"Bikash K.",
""
],
[
"Ferrie",
"Christopher",
""
],
[
"Dash",
"Pravat",
""
]
] | TITLE: A Quantum Approach to Synthetic Minority Oversampling Technique (SMOTE)
ABSTRACT: The paper proposes the Quantum-SMOTE method, a novel solution that uses
quantum computing techniques to solve the prevalent problem of class imbalance
in machine learning datasets. Quantum-SMOTE, inspired by the Synthetic Minority
Oversampling Technique (SMOTE), generates synthetic data points using quantum
processes such as swap tests and quantum rotation. The process varies from the
conventional SMOTE algorithm's usage of K-Nearest Neighbors (KNN) and Euclidean
distances, enabling synthetic instances to be generated from minority class
data points without relying on neighbor proximity. The algorithm asserts
greater control over the synthetic data generation process by introducing
hyperparameters such as rotation angle, minority percentage, and splitting
factor, which allow for customization to specific dataset requirements. Due to
the use of a compact swap test, the algorithm can accommodate a large number of
features. Furthermore, the approach is tested on a public dataset of Telecom
Churn and evaluated alongside two prominent classification algorithms, Random
Forest and Logistic Regression, to determine its impact along with varying
proportions of synthetic data.
|
2403.02177 | Zirui Wu | Zirui Wu and Yansong Feng | ProTrix: Building Models for Planning and Reasoning over Tables with
Sentence Context | EMNLP 2024 Findings | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Tables play a crucial role in conveying information in various domains. We
propose a Plan-then-Reason framework to answer different types of user queries
over tables with sentence context. The framework first plans the reasoning
paths over the context, then assigns each step to program-based or textual
reasoning to reach the final answer. This framework enhances the table
reasoning abilities for both in-context learning and fine-tuning methods.
GPT-3.5-Turbo following Plan-then-Reason framework surpasses other prompting
baselines without self-consistency while using less API calls and in-context
demonstrations. We also construct an instruction tuning set TrixInstruct to
evaluate the effectiveness of fine-tuning with this framework. We present
ProTrix model family by finetuning models on TrixInstruct. Our experiments show
that ProTrix family generalizes to diverse unseen tabular tasks with only 6k
training instances. We further demonstrate that ProTrix can generate accurate
and faithful explanations to answer complex free-form questions. Our work
underscores the importance of the planning and reasoning abilities towards a
model over tabular tasks with generalizability and interpretability. We
open-source our dataset and models at https://github.com/WilliamZR/ProTrix.
| [
{
"version": "v1",
"created": "Mon, 4 Mar 2024 16:21:19 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Jul 2024 11:31:21 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 07:03:38 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wu",
"Zirui",
""
],
[
"Feng",
"Yansong",
""
]
] | TITLE: ProTrix: Building Models for Planning and Reasoning over Tables with
Sentence Context
ABSTRACT: Tables play a crucial role in conveying information in various domains. We
propose a Plan-then-Reason framework to answer different types of user queries
over tables with sentence context. The framework first plans the reasoning
paths over the context, then assigns each step to program-based or textual
reasoning to reach the final answer. This framework enhances the table
reasoning abilities for both in-context learning and fine-tuning methods.
GPT-3.5-Turbo following Plan-then-Reason framework surpasses other prompting
baselines without self-consistency while using less API calls and in-context
demonstrations. We also construct an instruction tuning set TrixInstruct to
evaluate the effectiveness of fine-tuning with this framework. We present
ProTrix model family by finetuning models on TrixInstruct. Our experiments show
that ProTrix family generalizes to diverse unseen tabular tasks with only 6k
training instances. We further demonstrate that ProTrix can generate accurate
and faithful explanations to answer complex free-form questions. Our work
underscores the importance of the planning and reasoning abilities towards a
model over tabular tasks with generalizability and interpretability. We
open-source our dataset and models at https://github.com/WilliamZR/ProTrix.
|
2403.19444 | Amy Rafferty | Amy Rafferty and Rishi Ramaesh and Ajitha Rajan | Leveraging Expert Input for Robust and Explainable AI-Assisted Lung
Cancer Detection in Chest X-rays | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning models show significant potential for advancing AI-assisted
medical diagnostics, particularly in detecting lung cancer through medical
image modalities such as chest X-rays. However, the black-box nature of these
models poses challenges to their interpretability and trustworthiness, limiting
their adoption in clinical practice. This study examines both the
interpretability and robustness of a high-performing lung cancer detection
model based on InceptionV3, utilizing a public dataset of chest X-rays and
radiological reports. We evaluate the clinical utility of multiple explainable
AI (XAI) techniques, including both post-hoc and ante-hoc approaches, and find
that existing methods often fail to provide clinically relevant explanations,
displaying inconsistencies and divergence from expert radiologist assessments.
To address these limitations, we collaborated with a radiologist to define
diagnosis-specific clinical concepts and developed ClinicXAI, an expert-driven
approach leveraging the concept bottleneck methodology. ClinicXAI generated
clinically meaningful explanations which closely aligned with the practical
requirements of clinicians while maintaining high diagnostic accuracy. We also
assess the robustness of ClinicXAI in comparison to the original InceptionV3
model by subjecting both to a series of widely utilized adversarial attacks.
Our analysis demonstrates that ClinicXAI exhibits significantly greater
resilience to adversarial perturbations. These findings underscore the
importance of incorporating domain expertise into the design of interpretable
and robust AI systems for medical diagnostics, paving the way for more
trustworthy and effective AI solutions in healthcare.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2024 14:15:13 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 15:32:18 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Rafferty",
"Amy",
""
],
[
"Ramaesh",
"Rishi",
""
],
[
"Rajan",
"Ajitha",
""
]
] | TITLE: Leveraging Expert Input for Robust and Explainable AI-Assisted Lung
Cancer Detection in Chest X-rays
ABSTRACT: Deep learning models show significant potential for advancing AI-assisted
medical diagnostics, particularly in detecting lung cancer through medical
image modalities such as chest X-rays. However, the black-box nature of these
models poses challenges to their interpretability and trustworthiness, limiting
their adoption in clinical practice. This study examines both the
interpretability and robustness of a high-performing lung cancer detection
model based on InceptionV3, utilizing a public dataset of chest X-rays and
radiological reports. We evaluate the clinical utility of multiple explainable
AI (XAI) techniques, including both post-hoc and ante-hoc approaches, and find
that existing methods often fail to provide clinically relevant explanations,
displaying inconsistencies and divergence from expert radiologist assessments.
To address these limitations, we collaborated with a radiologist to define
diagnosis-specific clinical concepts and developed ClinicXAI, an expert-driven
approach leveraging the concept bottleneck methodology. ClinicXAI generated
clinically meaningful explanations which closely aligned with the practical
requirements of clinicians while maintaining high diagnostic accuracy. We also
assess the robustness of ClinicXAI in comparison to the original InceptionV3
model by subjecting both to a series of widely utilized adversarial attacks.
Our analysis demonstrates that ClinicXAI exhibits significantly greater
resilience to adversarial perturbations. These findings underscore the
importance of incorporating domain expertise into the design of interpretable
and robust AI systems for medical diagnostics, paving the way for more
trustworthy and effective AI solutions in healthcare.
|
2405.06116 | Hongwei Ren | Hongwei Ren, Yue Zhou, Jiadong Zhu, Haotian Fu, Yulong Huang, Xiaopeng
Lin, Yuetong Fang, Fei Ma, Hao Yu, and Bojun Cheng | Rethinking Efficient and Effective Point-based Networks for Event Camera
Classification and Regression: EventMamba | Accepted by TPAMI | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event cameras draw inspiration from biological systems, boasting low latency
and high dynamic range while consuming minimal power. The most current approach
to processing Event Cloud often involves converting it into frame-based
representations, which neglects the sparsity of events, loses fine-grained
temporal information, and increases the computational burden. In contrast,
Point Cloud is a popular representation for processing 3-dimensional data and
serves as an alternative method to exploit local and global spatial features.
Nevertheless, previous point-based methods show an unsatisfactory performance
compared to the frame-based method in dealing with spatio-temporal event
streams. In order to bridge the gap, we propose EventMamba, an efficient and
effective framework based on Point Cloud representation by rethinking the
distinction between Event Cloud and Point Cloud, emphasizing vital temporal
information. The Event Cloud is subsequently fed into a hierarchical structure
with staged modules to process both implicit and explicit temporal features.
Specifically, we redesign the global extractor to enhance explicit temporal
extraction among a long sequence of events with temporal aggregation and State
Space Model (SSM) based Mamba. Our model consumes minimal computational
resources in the experiments and still exhibits SOTA point-based performance on
six different scales of action recognition datasets. It even outperformed all
frame-based methods on both Camera Pose Relocalization (CPR) and eye-tracking
regression tasks. Our code is available at:
https://github.com/rhwxmx/EventMamba.
| [
{
"version": "v1",
"created": "Thu, 9 May 2024 21:47:46 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jun 2024 08:59:51 GMT"
},
{
"version": "v3",
"created": "Wed, 3 Jul 2024 03:17:06 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Mar 2025 14:25:05 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Ren",
"Hongwei",
""
],
[
"Zhou",
"Yue",
""
],
[
"Zhu",
"Jiadong",
""
],
[
"Fu",
"Haotian",
""
],
[
"Huang",
"Yulong",
""
],
[
"Lin",
"Xiaopeng",
""
],
[
"Fang",
"Yuetong",
""
],
[
"Ma",
"Fei",
""
],
[
"Yu",
"Hao",
""
],
[
"Cheng",
"Bojun",
""
]
] | TITLE: Rethinking Efficient and Effective Point-based Networks for Event Camera
Classification and Regression: EventMamba
ABSTRACT: Event cameras draw inspiration from biological systems, boasting low latency
and high dynamic range while consuming minimal power. The most current approach
to processing Event Cloud often involves converting it into frame-based
representations, which neglects the sparsity of events, loses fine-grained
temporal information, and increases the computational burden. In contrast,
Point Cloud is a popular representation for processing 3-dimensional data and
serves as an alternative method to exploit local and global spatial features.
Nevertheless, previous point-based methods show an unsatisfactory performance
compared to the frame-based method in dealing with spatio-temporal event
streams. In order to bridge the gap, we propose EventMamba, an efficient and
effective framework based on Point Cloud representation by rethinking the
distinction between Event Cloud and Point Cloud, emphasizing vital temporal
information. The Event Cloud is subsequently fed into a hierarchical structure
with staged modules to process both implicit and explicit temporal features.
Specifically, we redesign the global extractor to enhance explicit temporal
extraction among a long sequence of events with temporal aggregation and State
Space Model (SSM) based Mamba. Our model consumes minimal computational
resources in the experiments and still exhibits SOTA point-based performance on
six different scales of action recognition datasets. It even outperformed all
frame-based methods on both Camera Pose Relocalization (CPR) and eye-tracking
regression tasks. Our code is available at:
https://github.com/rhwxmx/EventMamba.
|
2406.03044 | Geeling Chau | Geeling Chau, Christopher Wang, Sabera Talukder, Vighnesh Subramaniam,
Saraswati Soedarmadji, Yisong Yue, Boris Katz, and Andrei Barbu | Population Transformer: Learning Population-level Representations of
Neural Activity | ICLR 2025, Project page
https://glchau.github.io/population-transformer/ | null | null | null | cs.LG q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | We present a self-supervised framework that learns population-level codes for
arbitrary ensembles of neural recordings at scale. We address key challenges in
scaling models with neural time-series data, namely, sparse and variable
electrode distribution across subjects and datasets. The Population Transformer
(PopT) stacks on top of pretrained temporal embeddings and enhances downstream
decoding by enabling learned aggregation of multiple spatially-sparse data
channels. The pretrained PopT lowers the amount of data required for downstream
decoding experiments, while increasing accuracy, even on held-out subjects and
tasks. Compared to end-to-end methods, this approach is computationally
lightweight, while achieving similar or better decoding performance. We further
show how our framework is generalizable to multiple time-series embeddings and
neural data modalities. Beyond decoding, we interpret the pretrained and
fine-tuned PopT models to show how they can be used to extract neuroscience
insights from large amounts of data. We release our code as well as a
pretrained PopT to enable off-the-shelf improvements in multi-channel
intracranial data decoding and interpretability. Code is available at
https://github.com/czlwang/PopulationTransformer.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2024 08:15:09 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Oct 2024 17:07:27 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 17:58:10 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Mar 2025 06:43:28 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Chau",
"Geeling",
""
],
[
"Wang",
"Christopher",
""
],
[
"Talukder",
"Sabera",
""
],
[
"Subramaniam",
"Vighnesh",
""
],
[
"Soedarmadji",
"Saraswati",
""
],
[
"Yue",
"Yisong",
""
],
[
"Katz",
"Boris",
""
],
[
"Barbu",
"Andrei",
""
]
] | TITLE: Population Transformer: Learning Population-level Representations of
Neural Activity
ABSTRACT: We present a self-supervised framework that learns population-level codes for
arbitrary ensembles of neural recordings at scale. We address key challenges in
scaling models with neural time-series data, namely, sparse and variable
electrode distribution across subjects and datasets. The Population Transformer
(PopT) stacks on top of pretrained temporal embeddings and enhances downstream
decoding by enabling learned aggregation of multiple spatially-sparse data
channels. The pretrained PopT lowers the amount of data required for downstream
decoding experiments, while increasing accuracy, even on held-out subjects and
tasks. Compared to end-to-end methods, this approach is computationally
lightweight, while achieving similar or better decoding performance. We further
show how our framework is generalizable to multiple time-series embeddings and
neural data modalities. Beyond decoding, we interpret the pretrained and
fine-tuned PopT models to show how they can be used to extract neuroscience
insights from large amounts of data. We release our code as well as a
pretrained PopT to enable off-the-shelf improvements in multi-channel
intracranial data decoding and interpretability. Code is available at
https://github.com/czlwang/PopulationTransformer.
|
2406.14455 | Nizhuan Wang | Luhui Cai, Weiming Zeng, Hongyu Chen, Hua Zhang, Yueyang Li, Yu Feng,
Hongjie Yan, Lingbin Bian, Wai Ting Siok, Nizhuan Wang | MM-GTUNets: Unified Multi-Modal Graph Deep Learning for Brain Disorders
Prediction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph deep learning (GDL) has demonstrated impressive performance in
predicting population-based brain disorders (BDs) through the integration of
both imaging and non-imaging data. However, the effectiveness of GDL based
methods heavily depends on the quality of modeling the multi-modal population
graphs and tends to degrade as the graph scale increases. Furthermore, these
methods often constrain interactions between imaging and non-imaging data to
node-edge interactions within the graph, overlooking complex inter-modal
correlations, leading to suboptimal outcomes. To overcome these challenges, we
propose MM-GTUNets, an end-to-end graph transformer based multi-modal graph
deep learning (MMGDL) framework designed for brain disorders prediction at
large scale. Specifically, to effectively leverage rich multi-modal information
related to diseases, we introduce Modality Reward Representation Learning
(MRRL) which adaptively constructs population graphs using a reward system.
Additionally, we employ variational autoencoder to reconstruct latent
representations of non-imaging features aligned with imaging features. Based on
this, we propose Adaptive Cross-Modal Graph Learning (ACMGL), which captures
critical modality-specific and modality-shared features through a unified
GTUNet encoder taking advantages of Graph UNet and Graph Transformer, and
feature fusion module. We validated our method on two public multi-modal
datasets ABIDE and ADHD-200, demonstrating its superior performance in
diagnosing BDs. Our code is available at https://github.com/NZWANG/MM-GTUNets.
| [
{
"version": "v1",
"created": "Thu, 20 Jun 2024 16:14:43 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Jan 2025 07:28:44 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 05:27:15 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Cai",
"Luhui",
""
],
[
"Zeng",
"Weiming",
""
],
[
"Chen",
"Hongyu",
""
],
[
"Zhang",
"Hua",
""
],
[
"Li",
"Yueyang",
""
],
[
"Feng",
"Yu",
""
],
[
"Yan",
"Hongjie",
""
],
[
"Bian",
"Lingbin",
""
],
[
"Siok",
"Wai Ting",
""
],
[
"Wang",
"Nizhuan",
""
]
] | TITLE: MM-GTUNets: Unified Multi-Modal Graph Deep Learning for Brain Disorders
Prediction
ABSTRACT: Graph deep learning (GDL) has demonstrated impressive performance in
predicting population-based brain disorders (BDs) through the integration of
both imaging and non-imaging data. However, the effectiveness of GDL based
methods heavily depends on the quality of modeling the multi-modal population
graphs and tends to degrade as the graph scale increases. Furthermore, these
methods often constrain interactions between imaging and non-imaging data to
node-edge interactions within the graph, overlooking complex inter-modal
correlations, leading to suboptimal outcomes. To overcome these challenges, we
propose MM-GTUNets, an end-to-end graph transformer based multi-modal graph
deep learning (MMGDL) framework designed for brain disorders prediction at
large scale. Specifically, to effectively leverage rich multi-modal information
related to diseases, we introduce Modality Reward Representation Learning
(MRRL) which adaptively constructs population graphs using a reward system.
Additionally, we employ variational autoencoder to reconstruct latent
representations of non-imaging features aligned with imaging features. Based on
this, we propose Adaptive Cross-Modal Graph Learning (ACMGL), which captures
critical modality-specific and modality-shared features through a unified
GTUNet encoder taking advantages of Graph UNet and Graph Transformer, and
feature fusion module. We validated our method on two public multi-modal
datasets ABIDE and ADHD-200, demonstrating its superior performance in
diagnosing BDs. Our code is available at https://github.com/NZWANG/MM-GTUNets.
|
2407.03434 | Simon M\"uller | Simon M\"uller, Thomas Nevolianis, Miquel Garcia-Rat\'es, Christoph
Riplinger, Kai Leonhard, Irina Smirnova | Predicting solvation free energies for neutral molecules in any solvent
with openCOSMO-RS | null | null | 10.1016/j.fluid.2024.114250 | null | physics.chem-ph cond-mat.soft physics.comp-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The accurate prediction of solvation free energies is critical for
understanding various phenomena in the liquid phase, including reaction rates,
equilibrium constants, activity coefficients, and partition coefficients.
Despite extensive research, precise prediction of solvation free energies
remains challenging. In this study, we introduce openCOSMO-RS 24a, an improved
version of the open-source COSMO-RS model, capable of predicting solvation free
energies alongside other liquid-phase properties. We parameterize openCOSMO-RS
24a using quantum chemical calculations from ORCA 6.0, leveraging a
comprehensive dataset that includes solvation free energies, partition
coefficients, and infinite dilution activity coefficients for various solutes
and solvents at 25 {\deg}C. Additionally, we develop a Quantitative
Structure-Property Relationships model to predict molar volumes of the
solvents, an essential requirement for predicting solvation free energies from
structure alone. Our results show that openCOSMO-RS 24a achieves an average
absolute deviation of 0.45 kcal/mol for solvation free energies, 0.76 for
partition coefficients, and 0.51 for infinite dilution activity coefficients,
demonstrating improvements over the previous openCOSMO-RS 22 parameterization
and comparable results to COSMOtherm 24 TZVP. A new command line interface for
openCOSMO-RS 24a was developed which allows easy acces to the solvation energy
model directly from within ORCA 6.0. This represents a significant advancement
in the predictive modeling of solvation free energies and other solution-phase
properties, providing researchers with a robust tool for applications in
chemical and materials science.
| [
{
"version": "v1",
"created": "Wed, 3 Jul 2024 18:27:18 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Müller",
"Simon",
""
],
[
"Nevolianis",
"Thomas",
""
],
[
"Garcia-Ratés",
"Miquel",
""
],
[
"Riplinger",
"Christoph",
""
],
[
"Leonhard",
"Kai",
""
],
[
"Smirnova",
"Irina",
""
]
] | TITLE: Predicting solvation free energies for neutral molecules in any solvent
with openCOSMO-RS
ABSTRACT: The accurate prediction of solvation free energies is critical for
understanding various phenomena in the liquid phase, including reaction rates,
equilibrium constants, activity coefficients, and partition coefficients.
Despite extensive research, precise prediction of solvation free energies
remains challenging. In this study, we introduce openCOSMO-RS 24a, an improved
version of the open-source COSMO-RS model, capable of predicting solvation free
energies alongside other liquid-phase properties. We parameterize openCOSMO-RS
24a using quantum chemical calculations from ORCA 6.0, leveraging a
comprehensive dataset that includes solvation free energies, partition
coefficients, and infinite dilution activity coefficients for various solutes
and solvents at 25 {\deg}C. Additionally, we develop a Quantitative
Structure-Property Relationships model to predict molar volumes of the
solvents, an essential requirement for predicting solvation free energies from
structure alone. Our results show that openCOSMO-RS 24a achieves an average
absolute deviation of 0.45 kcal/mol for solvation free energies, 0.76 for
partition coefficients, and 0.51 for infinite dilution activity coefficients,
demonstrating improvements over the previous openCOSMO-RS 22 parameterization
and comparable results to COSMOtherm 24 TZVP. A new command line interface for
openCOSMO-RS 24a was developed which allows easy acces to the solvation energy
model directly from within ORCA 6.0. This represents a significant advancement
in the predictive modeling of solvation free energies and other solution-phase
properties, providing researchers with a robust tool for applications in
chemical and materials science.
|
2407.12838 | Laura Manrique-G\'omez | Laura Manrique-G\'omez and Tony Montes and Arturo Rodr\'iguez-Herrera
and Rub\'en Manrique | Historical Ink: 19th Century Latin American Spanish Newspaper Corpus
with LLM OCR Correction | null | ACL, Proceedings of the 4th International Conference on Natural
Language Processing for Digital Humanities, pages 132-139, 2024 | 10.18653/v1/2024.nlp4dh-1.13 | 2024.nlp4dh-1.13 | cs.CL cs.DL | http://creativecommons.org/licenses/by/4.0/ | This paper presents two significant contributions: First, it introduces a
novel dataset of 19th-century Latin American newspaper texts, addressing a
critical gap in specialized corpora for historical and linguistic analysis in
this region. Second, it develops a flexible framework that utilizes a Large
Language Model for OCR error correction and linguistic surface form detection
in digitized corpora. This semi-automated framework is adaptable to various
contexts and datasets and is applied to the newly created dataset.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2024 02:10:18 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Oct 2024 18:43:47 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Manrique-Gómez",
"Laura",
""
],
[
"Montes",
"Tony",
""
],
[
"Rodríguez-Herrera",
"Arturo",
""
],
[
"Manrique",
"Rubén",
""
]
] | TITLE: Historical Ink: 19th Century Latin American Spanish Newspaper Corpus
with LLM OCR Correction
ABSTRACT: This paper presents two significant contributions: First, it introduces a
novel dataset of 19th-century Latin American newspaper texts, addressing a
critical gap in specialized corpora for historical and linguistic analysis in
this region. Second, it develops a flexible framework that utilizes a Large
Language Model for OCR error correction and linguistic surface form detection
in digitized corpora. This semi-automated framework is adaptable to various
contexts and datasets and is applied to the newly created dataset.
|
2407.18943 | Patricia Martinkova | Patr\'icia Martinkov\'a, Jan Net\'ik, Ad\'ela Hladk\'a | Enhancing Psychometric Analysis with Interactive ShinyItemAnalysis
Modules | null | null | null | null | cs.HC stat.AP | http://creativecommons.org/licenses/by/4.0/ | ShinyItemAnalysis (SIA) is an R package and shiny application for an
interactive presentation of psychometric methods and analysis of multi-item
measurements in psychology, education, and social sciences in general. In this
article, we present a new feature introduced in the recent version of the
package, called "SIA modules", which allows researchers and practitioners to
offer new analytical methods for broader use via add-on extensions. We describe
how to build the add-on modules with the support of the new SIAtools package
and demonstrate the concepts using sample modules from the newly introduced
SIAmodules package. SIA modules are designed to integrate with and build upon
the SIA interactive application, enabling them to leverage the existing
infrastructure for tasks such as data uploading and processing. They can access
a range of outputs from various analyses, including item response theory
models, exploratory factor analysis, or differential item functioning models.
Because SIA modules come in R packages (or extend the existing ones), they may
come bundled with their datasets, use object-oriented systems, or even compiled
code. We discuss the possibility of broader use of the concept of SIA modules
in other areas and the importance of freely available interactive psychometric
software for methods dissemination.
| [
{
"version": "v1",
"created": "Wed, 10 Jul 2024 20:44:18 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 16:36:36 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Martinková",
"Patrícia",
""
],
[
"Netík",
"Jan",
""
],
[
"Hladká",
"Adéla",
""
]
] | TITLE: Enhancing Psychometric Analysis with Interactive ShinyItemAnalysis
Modules
ABSTRACT: ShinyItemAnalysis (SIA) is an R package and shiny application for an
interactive presentation of psychometric methods and analysis of multi-item
measurements in psychology, education, and social sciences in general. In this
article, we present a new feature introduced in the recent version of the
package, called "SIA modules", which allows researchers and practitioners to
offer new analytical methods for broader use via add-on extensions. We describe
how to build the add-on modules with the support of the new SIAtools package
and demonstrate the concepts using sample modules from the newly introduced
SIAmodules package. SIA modules are designed to integrate with and build upon
the SIA interactive application, enabling them to leverage the existing
infrastructure for tasks such as data uploading and processing. They can access
a range of outputs from various analyses, including item response theory
models, exploratory factor analysis, or differential item functioning models.
Because SIA modules come in R packages (or extend the existing ones), they may
come bundled with their datasets, use object-oriented systems, or even compiled
code. We discuss the possibility of broader use of the concept of SIA modules
in other areas and the importance of freely available interactive psychometric
software for methods dissemination.
|
2408.11965 | Theo Di Piazza | Theo Di Piazza, Carole Lazarus, Olivier Nempont and Loic Boussel | CT-AGRG: Automated Abnormality-Guided Report Generation from 3D Chest CT
Volumes | Paper accepted to ISBI 2025 | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | The rapid increase of computed tomography (CT) scans and their time-consuming
manual analysis have created an urgent need for robust automated analysis
techniques in clinical settings. These aim to assist radiologists and help them
managing their growing workload. Existing methods typically generate entire
reports directly from 3D CT images, without explicitly focusing on observed
abnormalities. This unguided approach often results in repetitive content or
incomplete reports, failing to prioritize anomaly-specific descriptions. We
propose a new anomaly-guided report generation model, which first predicts
abnormalities and then generates targeted descriptions for each. Evaluation on
a public dataset demonstrates significant improvements in report quality and
clinical relevance. We extend our work by conducting an ablation study to
demonstrate its effectiveness.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 19:36:27 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Sep 2024 08:35:32 GMT"
},
{
"version": "v3",
"created": "Sat, 28 Sep 2024 06:46:37 GMT"
},
{
"version": "v4",
"created": "Wed, 30 Oct 2024 13:22:45 GMT"
},
{
"version": "v5",
"created": "Thu, 2 Jan 2025 20:10:51 GMT"
},
{
"version": "v6",
"created": "Fri, 7 Feb 2025 14:26:51 GMT"
},
{
"version": "v7",
"created": "Fri, 28 Mar 2025 11:14:10 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Di Piazza",
"Theo",
""
],
[
"Lazarus",
"Carole",
""
],
[
"Nempont",
"Olivier",
""
],
[
"Boussel",
"Loic",
""
]
] | TITLE: CT-AGRG: Automated Abnormality-Guided Report Generation from 3D Chest CT
Volumes
ABSTRACT: The rapid increase of computed tomography (CT) scans and their time-consuming
manual analysis have created an urgent need for robust automated analysis
techniques in clinical settings. These aim to assist radiologists and help them
managing their growing workload. Existing methods typically generate entire
reports directly from 3D CT images, without explicitly focusing on observed
abnormalities. This unguided approach often results in repetitive content or
incomplete reports, failing to prioritize anomaly-specific descriptions. We
propose a new anomaly-guided report generation model, which first predicts
abnormalities and then generates targeted descriptions for each. Evaluation on
a public dataset demonstrates significant improvements in report quality and
clinical relevance. We extend our work by conducting an ablation study to
demonstrate its effectiveness.
|
2408.12139 | Haoyuan Shi | Haoyuan Shi, Tao Xu, Xiaodi Li, Qian Gao, Zhiwei Xiong, Junfeng Xia,
Zhenyu Yue | DRExplainer: Quantifiable Interpretability in Drug Response Prediction
with Directed Graph Convolutional Network | null | null | 10.1016/j.artmed.2025.103101 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the response of a cancer cell line to a therapeutic drug is
pivotal for personalized medicine. Despite numerous deep learning methods that
have been developed for drug response prediction, integrating diverse
information about biological entities and predicting the directional response
remain major challenges. Here, we propose a novel interpretable predictive
model, DRExplainer, which leverages a directed graph convolutional network to
enhance the prediction in a directed bipartite network framework. DRExplainer
constructs a directed bipartite network integrating multi-omics profiles of
cell lines, the chemical structure of drugs and known drug response to achieve
directed prediction. Then, DRExplainer identifies the most relevant subgraph to
each prediction in this directed bipartite network by learning a mask,
facilitating critical medical decision-making. Additionally, we introduce a
quantifiable method for model interpretability that leverages a ground truth
benchmark dataset curated from biological features. In computational
experiments, DRExplainer outperforms state-of-the-art predictive methods and
another graph-based explanation method under the same experimental setting.
Finally, the case studies further validate the interpretability and the
effectiveness of DRExplainer in predictive novel drug response. Our code is
available at: https://github.com/vshy-dream/DRExplainer.
| [
{
"version": "v1",
"created": "Thu, 22 Aug 2024 05:45:48 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 02:50:29 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Shi",
"Haoyuan",
""
],
[
"Xu",
"Tao",
""
],
[
"Li",
"Xiaodi",
""
],
[
"Gao",
"Qian",
""
],
[
"Xiong",
"Zhiwei",
""
],
[
"Xia",
"Junfeng",
""
],
[
"Yue",
"Zhenyu",
""
]
] | TITLE: DRExplainer: Quantifiable Interpretability in Drug Response Prediction
with Directed Graph Convolutional Network
ABSTRACT: Predicting the response of a cancer cell line to a therapeutic drug is
pivotal for personalized medicine. Despite numerous deep learning methods that
have been developed for drug response prediction, integrating diverse
information about biological entities and predicting the directional response
remain major challenges. Here, we propose a novel interpretable predictive
model, DRExplainer, which leverages a directed graph convolutional network to
enhance the prediction in a directed bipartite network framework. DRExplainer
constructs a directed bipartite network integrating multi-omics profiles of
cell lines, the chemical structure of drugs and known drug response to achieve
directed prediction. Then, DRExplainer identifies the most relevant subgraph to
each prediction in this directed bipartite network by learning a mask,
facilitating critical medical decision-making. Additionally, we introduce a
quantifiable method for model interpretability that leverages a ground truth
benchmark dataset curated from biological features. In computational
experiments, DRExplainer outperforms state-of-the-art predictive methods and
another graph-based explanation method under the same experimental setting.
Finally, the case studies further validate the interpretability and the
effectiveness of DRExplainer in predictive novel drug response. Our code is
available at: https://github.com/vshy-dream/DRExplainer.
|
2408.15270 | Yinhuai Wang | Yinhuai Wang, Qihan Zhao, Runyi Yu, Hok Wai Tsui, Ailing Zeng, Jing
Lin, Zhengyi Luo, Jiwen Yu, Xiu Li, Qifeng Chen, Jian Zhang, Lei Zhang, Ping
Tan | SkillMimic: Learning Basketball Interaction Skills from Demonstrations | null | null | null | null | cs.CV cs.GR cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional reinforcement learning methods for human-object interaction (HOI)
rely on labor-intensive, manually designed skill rewards that do not generalize
well across different interactions. We introduce SkillMimic, a unified
data-driven framework that fundamentally changes how agents learn interaction
skills by eliminating the need for skill-specific rewards. Our key insight is
that a unified HOI imitation reward can effectively capture the essence of
diverse interaction patterns from HOI datasets. This enables SkillMimic to
learn a single policy that not only masters multiple interaction skills but
also facilitates skill transitions, with both diversity and generalization
improving as the HOI dataset grows. For evaluation, we collect and introduce
two basketball datasets containing approximately 35 minutes of diverse
basketball skills. Extensive experiments show that SkillMimic successfully
masters a wide range of basketball skills including stylistic variations in
dribbling, layup, and shooting. Moreover, these learned skills can be
effectively composed by a high-level controller to accomplish complex and
long-horizon tasks such as consecutive scoring, opening new possibilities for
scalable and generalizable interaction skill learning. Project page:
https://ingrid789.github.io/SkillMimic/
| [
{
"version": "v1",
"created": "Mon, 12 Aug 2024 15:19:04 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 08:56:06 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wang",
"Yinhuai",
""
],
[
"Zhao",
"Qihan",
""
],
[
"Yu",
"Runyi",
""
],
[
"Tsui",
"Hok Wai",
""
],
[
"Zeng",
"Ailing",
""
],
[
"Lin",
"Jing",
""
],
[
"Luo",
"Zhengyi",
""
],
[
"Yu",
"Jiwen",
""
],
[
"Li",
"Xiu",
""
],
[
"Chen",
"Qifeng",
""
],
[
"Zhang",
"Jian",
""
],
[
"Zhang",
"Lei",
""
],
[
"Tan",
"Ping",
""
]
] | TITLE: SkillMimic: Learning Basketball Interaction Skills from Demonstrations
ABSTRACT: Traditional reinforcement learning methods for human-object interaction (HOI)
rely on labor-intensive, manually designed skill rewards that do not generalize
well across different interactions. We introduce SkillMimic, a unified
data-driven framework that fundamentally changes how agents learn interaction
skills by eliminating the need for skill-specific rewards. Our key insight is
that a unified HOI imitation reward can effectively capture the essence of
diverse interaction patterns from HOI datasets. This enables SkillMimic to
learn a single policy that not only masters multiple interaction skills but
also facilitates skill transitions, with both diversity and generalization
improving as the HOI dataset grows. For evaluation, we collect and introduce
two basketball datasets containing approximately 35 minutes of diverse
basketball skills. Extensive experiments show that SkillMimic successfully
masters a wide range of basketball skills including stylistic variations in
dribbling, layup, and shooting. Moreover, these learned skills can be
effectively composed by a high-level controller to accomplish complex and
long-horizon tasks such as consecutive scoring, opening new possibilities for
scalable and generalizable interaction skill learning. Project page:
https://ingrid789.github.io/SkillMimic/
|
2409.03542 | Carlos Echegoyen | Aritz P\'erez, Carlos Echegoyen and Guzm\'an Santaf\'e | Risk-based Calibration for Generative Classifiers | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Generative classifiers are constructed on the basis of a joint probability
distribution and are typically learned using closed-form procedures that rely
on data statistics and maximize scores related to data fitting. However, these
scores are not directly linked to supervised classification metrics such as the
error, i.e., the expected 0-1 loss. To address this limitation, we propose a
learning procedure called risk-based calibration (RC) that iteratively refines
the generative classifier by adjusting its joint probability distribution
according to the 0-1 loss in training samples. This is achieved by reinforcing
data statistics associated with the true classes while weakening those of
incorrect classes. As a result, the classifier progressively assigns higher
probability to the correct labels, improving its training error. Results on 20
heterogeneous datasets using both na\"ive Bayes and quadratic discriminant
analysis show that RC significantly outperforms closed-form learning procedures
in terms of both training error and generalization error. In this way, RC
bridges the gap between traditional generative approaches and learning
procedures guided by performance measures, ensuring a closer alignment with
supervised classification objectives.
| [
{
"version": "v1",
"created": "Thu, 5 Sep 2024 14:06:56 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 10:04:24 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Pérez",
"Aritz",
""
],
[
"Echegoyen",
"Carlos",
""
],
[
"Santafé",
"Guzmán",
""
]
] | TITLE: Risk-based Calibration for Generative Classifiers
ABSTRACT: Generative classifiers are constructed on the basis of a joint probability
distribution and are typically learned using closed-form procedures that rely
on data statistics and maximize scores related to data fitting. However, these
scores are not directly linked to supervised classification metrics such as the
error, i.e., the expected 0-1 loss. To address this limitation, we propose a
learning procedure called risk-based calibration (RC) that iteratively refines
the generative classifier by adjusting its joint probability distribution
according to the 0-1 loss in training samples. This is achieved by reinforcing
data statistics associated with the true classes while weakening those of
incorrect classes. As a result, the classifier progressively assigns higher
probability to the correct labels, improving its training error. Results on 20
heterogeneous datasets using both na\"ive Bayes and quadratic discriminant
analysis show that RC significantly outperforms closed-form learning procedures
in terms of both training error and generalization error. In this way, RC
bridges the gap between traditional generative approaches and learning
procedures guided by performance measures, ensuring a closer alignment with
supervised classification objectives.
|
2409.07067 | Jingfan Yang | Jingfan Yang, Hu Gao, Ying Zhang, Bowen Ma and Depeng Dang | Structure Modeling Activation Free Fourier Network for Spacecraft Image
Denoising | Published in Neurocomputing, 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spacecraft image denoising is a crucial fundamental technology closely
related to aerospace research. However, the existing deep learning-based image
denoising methods are primarily designed for natural image and fail to
adequately consider the characteristics of spacecraft image(e.g. low-light
conditions, repetitive periodic structures), resulting in suboptimal
performance in the spacecraft image denoising task. To address the
aforementioned problems, we propose a Structure modeling Activation Free
Fourier Network (SAFFN), which is an efficient spacecraft image denoising
method including Structure Modeling Block (SMB) and Activation Free Fourier
Block (AFFB). We present SMB to effectively extract edge information and model
the structure for better identification of spacecraft components from dark
regions in spacecraft noise image. We present AFFB and utilize an improved Fast
Fourier block to extract repetitive periodic features and long-range
information in noisy spacecraft image. Extensive experimental results
demonstrate that our SAFFN performs competitively compared to the
state-of-the-art methods on spacecraft noise image datasets. The codes are
available at: https://github.com/shenduke/SAFFN.
| [
{
"version": "v1",
"created": "Wed, 11 Sep 2024 07:35:02 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 09:57:17 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 07:35:56 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Yang",
"Jingfan",
""
],
[
"Gao",
"Hu",
""
],
[
"Zhang",
"Ying",
""
],
[
"Ma",
"Bowen",
""
],
[
"Dang",
"Depeng",
""
]
] | TITLE: Structure Modeling Activation Free Fourier Network for Spacecraft Image
Denoising
ABSTRACT: Spacecraft image denoising is a crucial fundamental technology closely
related to aerospace research. However, the existing deep learning-based image
denoising methods are primarily designed for natural image and fail to
adequately consider the characteristics of spacecraft image(e.g. low-light
conditions, repetitive periodic structures), resulting in suboptimal
performance in the spacecraft image denoising task. To address the
aforementioned problems, we propose a Structure modeling Activation Free
Fourier Network (SAFFN), which is an efficient spacecraft image denoising
method including Structure Modeling Block (SMB) and Activation Free Fourier
Block (AFFB). We present SMB to effectively extract edge information and model
the structure for better identification of spacecraft components from dark
regions in spacecraft noise image. We present AFFB and utilize an improved Fast
Fourier block to extract repetitive periodic features and long-range
information in noisy spacecraft image. Extensive experimental results
demonstrate that our SAFFN performs competitively compared to the
state-of-the-art methods on spacecraft noise image datasets. The codes are
available at: https://github.com/shenduke/SAFFN.
|
2409.08839 | Alejandro Lancho | Alejandro Lancho, Amir Weiss, Gary C.F. Lee, Tejas Jayashankar, Binoy
Kurien, Yury Polyanskiy and Gregory W. Wornell | RF Challenge: The Data-Driven Radio Frequency Signal Separation
Challenge | 17 pages, 16 figures, to appear in the IEEE Open Journal of the
Communications Society | null | null | null | eess.SP cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the critical problem of interference rejection in radio-frequency
(RF) signals using a data-driven approach that leverages deep-learning methods.
A primary contribution of this paper is the introduction of the RF Challenge,
which is a publicly available, diverse RF signal dataset for data-driven
analyses of RF signal problems. Specifically, we adopt a simplified signal
model for developing and analyzing interference rejection algorithms. For this
signal model, we introduce a set of carefully chosen deep learning
architectures, incorporating key domain-informed modifications alongside
traditional benchmark solutions to establish baseline performance metrics for
this intricate, ubiquitous problem. Through extensive simulations involving
eight different signal mixture types, we demonstrate the superior performance
(in some cases, by two orders of magnitude) of architectures such as UNet and
WaveNet over traditional methods like matched filtering and linear minimum mean
square error estimation. Our findings suggest that the data-driven approach can
yield scalable solutions, in the sense that the same architectures may be
similarly trained and deployed for different types of signals. Moreover, these
findings further corroborate the promising potential of deep learning
algorithms for enhancing communication systems, particularly via interference
mitigation. This work also includes results from an open competition based on
the RF Challenge, hosted at the 2024 IEEE International Conference on
Acoustics, Speech, and Signal Processing (ICASSP'24).
| [
{
"version": "v1",
"created": "Fri, 13 Sep 2024 13:53:41 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 18:16:54 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Lancho",
"Alejandro",
""
],
[
"Weiss",
"Amir",
""
],
[
"Lee",
"Gary C. F.",
""
],
[
"Jayashankar",
"Tejas",
""
],
[
"Kurien",
"Binoy",
""
],
[
"Polyanskiy",
"Yury",
""
],
[
"Wornell",
"Gregory W.",
""
]
] | TITLE: RF Challenge: The Data-Driven Radio Frequency Signal Separation
Challenge
ABSTRACT: We address the critical problem of interference rejection in radio-frequency
(RF) signals using a data-driven approach that leverages deep-learning methods.
A primary contribution of this paper is the introduction of the RF Challenge,
which is a publicly available, diverse RF signal dataset for data-driven
analyses of RF signal problems. Specifically, we adopt a simplified signal
model for developing and analyzing interference rejection algorithms. For this
signal model, we introduce a set of carefully chosen deep learning
architectures, incorporating key domain-informed modifications alongside
traditional benchmark solutions to establish baseline performance metrics for
this intricate, ubiquitous problem. Through extensive simulations involving
eight different signal mixture types, we demonstrate the superior performance
(in some cases, by two orders of magnitude) of architectures such as UNet and
WaveNet over traditional methods like matched filtering and linear minimum mean
square error estimation. Our findings suggest that the data-driven approach can
yield scalable solutions, in the sense that the same architectures may be
similarly trained and deployed for different types of signals. Moreover, these
findings further corroborate the promising potential of deep learning
algorithms for enhancing communication systems, particularly via interference
mitigation. This work also includes results from an open competition based on
the RF Challenge, hosted at the 2024 IEEE International Conference on
Acoustics, Speech, and Signal Processing (ICASSP'24).
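For context, the matched-filtering baseline the deep models are compared against can be sketched in a few lines of NumPy. The pulse, noise level, and array sizes below are toy assumptions; this is a generic matched filter, not the RF Challenge reference implementation.

# Classical matched-filter baseline -- a sketch, not the challenge code.
import numpy as np


def matched_filter(received: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Correlate the received signal with a conjugated, time-reversed
    template; peaks indicate where the signal of interest is present."""
    kernel = np.conj(template[::-1])
    return np.convolve(received, kernel, mode="same")


# Toy example: a known pulse buried in noise-like interference.
rng = np.random.default_rng(0)
pulse = np.sin(2 * np.pi * 0.1 * np.arange(32))
signal = np.zeros(256)
signal[100:132] += pulse
received = signal + 0.5 * rng.standard_normal(256)

response = matched_filter(received, pulse)
print("estimated pulse location:", int(np.argmax(np.abs(response))))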
|
2409.09221 | Yiwen Guan | Yiwen Guan, Viet Anh Trinh, Vivek Voleti, Jacob Whitehill | Multi-modal Speech Transformer Decoders: When Do Multiple Modalities
Improve Accuracy? | null | null | null | null | cs.CV cs.CL cs.MM cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Decoder-only discrete-token language models have recently achieved
significant success in automatic speech recognition. However, systematic
analyses of how different modalities impact performance in specific scenarios
remain limited. In this paper, we investigate the effects of multiple
modalities on recognition accuracy on both synthetic and real-world datasets.
Our experiments suggest that: (1) Integrating more modalities can increase
accuracy; in particular, our paper is, to our best knowledge, the first to show
the benefit of combining audio, image context, and lip information; (2) Images
as a supplementary modality for speech recognition provide the greatest benefit
at moderate noise levels; moreover, they exhibit a different trend compared to
inherently synchronized modalities like lip movements; (3) Performance improves
on both synthetic and real-world datasets when the most relevant visual
information is filtered as a preprocessing step.
| [
{
"version": "v1",
"created": "Fri, 13 Sep 2024 22:18:45 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 00:48:26 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Guan",
"Yiwen",
""
],
[
"Trinh",
"Viet Anh",
""
],
[
"Voleti",
"Vivek",
""
],
[
"Whitehill",
"Jacob",
""
]
] | TITLE: Multi-modal Speech Transformer Decoders: When Do Multiple Modalities
Improve Accuracy?
ABSTRACT: Decoder-only discrete-token language models have recently achieved
significant success in automatic speech recognition. However, systematic
analyses of how different modalities impact performance in specific scenarios
remain limited. In this paper, we investigate the effects of multiple
modalities on recognition accuracy on both synthetic and real-world datasets.
Our experiments suggest that: (1) Integrating more modalities can increase
accuracy; in particular, our paper is, to our best knowledge, the first to show
the benefit of combining audio, image context, and lip information; (2) Images
as a supplementary modality for speech recognition provide the greatest benefit
at moderate noise levels; moreover, they exhibit a different trend compared to
inherently synchronized modalities like lip movements; (3) Performance improves
on both synthetic and real-world datasets when the most relevant visual
information is filtered as a preprocessing step.
|
2409.10524 | Mat\'u\v{s} Dopiriak | Mat\'u\v{s} \v{C}\'avojsk\'y, Eugen \v{S}lapak, Mat\'u\v{s} Dopiriak,
Gabriel Bug\'ar, Juraj Gazda | 3CSim: CARLA Corner Case Simulation for Control Assessment in Autonomous
Driving | null | null | 10.1109/CICT64037.2024.10899666 | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the CARLA corner case simulation (3CSim) for evaluating autonomous
driving (AD) systems within the CARLA simulator. This framework is designed to
address the limitations of traditional AD model training by focusing on
non-standard, rare, and cognitively challenging scenarios. These corner cases
are crucial for ensuring vehicle safety and reliability, as they test advanced
control capabilities under unusual conditions. Our approach introduces a
taxonomy of corner cases categorized into state anomalies, behavior anomalies,
and evidence-based anomalies. We implement 32 unique corner cases with
adjustable parameters, including 9 predefined weather conditions, timing, and
traffic density. The framework enables repeatable and modifiable scenario
evaluations, facilitating the creation of a comprehensive dataset for further
analysis.
| [
{
"version": "v1",
"created": "Fri, 30 Aug 2024 12:38:22 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Čávojský",
"Matúš",
""
],
[
"Šlapak",
"Eugen",
""
],
[
"Dopiriak",
"Matúš",
""
],
[
"Bugár",
"Gabriel",
""
],
[
"Gazda",
"Juraj",
""
]
] | TITLE: 3CSim: CARLA Corner Case Simulation for Control Assessment in Autonomous
Driving
ABSTRACT: We present the CARLA corner case simulation (3CSim) for evaluating autonomous
driving (AD) systems within the CARLA simulator. This framework is designed to
address the limitations of traditional AD model training by focusing on
non-standard, rare, and cognitively challenging scenarios. These corner cases
are crucial for ensuring vehicle safety and reliability, as they test advanced
control capabilities under unusual conditions. Our approach introduces a
taxonomy of corner cases categorized into state anomalies, behavior anomalies,
and evidence-based anomalies. We implement 32 unique corner cases with
adjustable parameters, including 9 predefined weather conditions, timing, and
traffic density. The framework enables repeatable and modifiable scenario
evaluations, facilitating the creation of a comprehensive dataset for further
analysis.
|
2409.17524 | Chao Li | Chao Li, Chen Jiang, Xiaolong Liu, Jun Zhao, Guoxin Wang | JoyType: A Robust Design for Multilingual Visual Text Creation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating images with accurately represented text, especially in non-Latin
languages, poses a significant challenge for diffusion models. Existing
approaches, such as the integration of hint condition diagrams via auxiliary
networks (e.g., ControlNet), have made strides towards addressing this issue.
However, diffusion models often fall short in tasks requiring controlled text
generation, such as specifying particular fonts or producing text in small
fonts. In this paper, we introduce a novel approach for multilingual visual
text creation, named JoyType, designed to maintain the font style of text
during the image generation process. Our methodology begins with assembling a
training dataset, JoyType-1M, comprising 1 million pairs of data. Each pair
includes an image, its description, and glyph instructions corresponding to the
font style within the image. We then developed a text control network, Font
ControlNet, tasked with extracting font style information to steer the image
generation. To further enhance our model's ability to maintain font style,
notably in generating small-font text, we incorporated a multi-layer OCR-aware
loss into the diffusion process. This enhancement allows JoyType to direct text
rendering using low-level descriptors. Our evaluations, based on both visual
and accuracy metrics, demonstrate that JoyType significantly outperforms
existing state-of-the-art methods. Additionally, JoyType can function as a
plugin, facilitating the creation of varied image styles in conjunction with
other stable diffusion models on HuggingFace and CivitAI. Our project is
open-sourced on https://jdh-algo.github.io/JoyType/.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2024 04:23:17 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 03:27:40 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Chao",
""
],
[
"Jiang",
"Chen",
""
],
[
"Liu",
"Xiaolong",
""
],
[
"Zhao",
"Jun",
""
],
[
"Wang",
"Guoxin",
""
]
] | TITLE: JoyType: A Robust Design for Multilingual Visual Text Creation
ABSTRACT: Generating images with accurately represented text, especially in non-Latin
languages, poses a significant challenge for diffusion models. Existing
approaches, such as the integration of hint condition diagrams via auxiliary
networks (e.g., ControlNet), have made strides towards addressing this issue.
However, diffusion models often fall short in tasks requiring controlled text
generation, such as specifying particular fonts or producing text in small
fonts. In this paper, we introduce a novel approach for multilingual visual
text creation, named JoyType, designed to maintain the font style of text
during the image generation process. Our methodology begins with assembling a
training dataset, JoyType-1M, comprising 1 million pairs of data. Each pair
includes an image, its description, and glyph instructions corresponding to the
font style within the image. We then developed a text control network, Font
ControlNet, tasked with extracting font style information to steer the image
generation. To further enhance our model's ability to maintain font style,
notably in generating small-font text, we incorporated a multi-layer OCR-aware
loss into the diffusion process. This enhancement allows JoyType to direct text
rendering using low-level descriptors. Our evaluations, based on both visual
and accuracy metrics, demonstrate that JoyType significantly outperforms
existing state-of-the-art methods. Additionally, JoyType can function as a
plugin, facilitating the creation of varied image styles in conjunction with
other stable diffusion models on HuggingFace and CivitAI. Our project is
open-sourced on https://jdh-algo.github.io/JoyType/.
|
2410.05346 | Jiaming Zhang | Jiaming Zhang, Junhong Ye, Xingjun Ma, Yige Li, Yunfan Yang, Yunhao
Chen, Jitao Sang, Dit-Yan Yeung | AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on
Vision-language Models | CVPR 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to their multimodal capabilities, Vision-Language Models (VLMs) have
found numerous impactful applications in real-world scenarios. However, recent
studies have revealed that VLMs are vulnerable to image-based adversarial
attacks. Traditional targeted adversarial attacks require specific targets and
labels, limiting their real-world impact. We present AnyAttack, a
self-supervised framework that transcends the limitations of conventional
attacks through a novel foundation model approach. By pre-training on the
massive LAION-400M dataset without label supervision, AnyAttack achieves
unprecedented flexibility - enabling any image to be transformed into an attack
vector targeting any desired output across different VLMs. This approach
fundamentally changes the threat landscape, making adversarial capabilities
accessible at an unprecedented scale. Our extensive validation across five
open-source VLMs (CLIP, BLIP, BLIP2, InstructBLIP, and MiniGPT-4) demonstrates
AnyAttack's effectiveness across diverse multimodal tasks. Most concerning,
AnyAttack seamlessly transfers to commercial systems including Google Gemini,
Claude Sonnet, Microsoft Copilot and OpenAI GPT, revealing a systemic
vulnerability requiring immediate attention.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 09:45:18 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Dec 2024 15:32:04 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 02:55:23 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhang",
"Jiaming",
""
],
[
"Ye",
"Junhong",
""
],
[
"Ma",
"Xingjun",
""
],
[
"Li",
"Yige",
""
],
[
"Yang",
"Yunfan",
""
],
[
"Chen",
"Yunhao",
""
],
[
"Sang",
"Jitao",
""
],
[
"Yeung",
"Dit-Yan",
""
]
] | TITLE: AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on
Vision-language Models
ABSTRACT: Due to their multimodal capabilities, Vision-Language Models (VLMs) have
found numerous impactful applications in real-world scenarios. However, recent
studies have revealed that VLMs are vulnerable to image-based adversarial
attacks. Traditional targeted adversarial attacks require specific targets and
labels, limiting their real-world impact. We present AnyAttack, a
self-supervised framework that transcends the limitations of conventional
attacks through a novel foundation model approach. By pre-training on the
massive LAION-400M dataset without label supervision, AnyAttack achieves
unprecedented flexibility - enabling any image to be transformed into an attack
vector targeting any desired output across different VLMs. This approach
fundamentally changes the threat landscape, making adversarial capabilities
accessible at an unprecedented scale. Our extensive validation across five
open-source VLMs (CLIP, BLIP, BLIP2, InstructBLIP, and MiniGPT-4) demonstrates
AnyAttack's effectiveness across diverse multimodal tasks. Most concerning,
AnyAttack seamlessly transfers to commercial systems including Google Gemini,
Claude Sonnet, Microsoft Copilot and OpenAI GPT, revealing a systemic
vulnerability requiring immediate attention.
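To make the threat model concrete, here is a generic sketch of a targeted embedding-space attack: a perturbation is optimized so an encoder's output matches a chosen target embedding. The encoder is a random stand-in, and AnyAttack itself trains a generator on LAION-400M rather than optimizing per image, so treat this only as background illustration.

# Generic targeted embedding attack sketch (not the AnyAttack method).
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # stand-in
image = torch.rand(1, 3, 32, 32)
with torch.no_grad():
    target = encoder(torch.rand(1, 3, 32, 32))  # embedding of the desired output

delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 8 / 255  # perturbation budget (illustrative)

for _ in range(200):
    adv = (image + delta).clamp(0, 1)
    loss = 1 - F.cosine_similarity(encoder(adv), target).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # keep the attack small

print("final cosine gap:", float(loss))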
|
2410.09681 | Christopher Diehl | Christopher Diehl, Peter Karkus, Sushant Veer, Marco Pavone, Torsten
Bertram | LoRD: Adapting Differentiable Driving Policies to Distribution Shifts | IEEE International Conference on Robotics & Automation, ICRA 2025 | null | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distribution shifts between operational domains can severely affect the
performance of learned models in self-driving vehicles (SDVs). While this is a
well-established problem, prior work has mostly explored naive solutions such
as fine-tuning, focusing on the motion prediction task. In this work, we
explore novel adaptation strategies for differentiable autonomy stacks
consisting of prediction, planning, and control, perform evaluation in
closed-loop, and investigate the often-overlooked issue of catastrophic
forgetting. Specifically, we introduce two simple yet effective techniques: a
low-rank residual decoder (LoRD) and multi-task fine-tuning. Through
experiments across three models conducted on two real-world autonomous driving
datasets (nuPlan, exiD), we demonstrate the effectiveness of our methods and
highlight a significant performance gap between open-loop and closed-loop
evaluation in prior approaches. Our approach improves forgetting by up to
23.33% and the closed-loop OOD driving score by 9.93% in comparison to standard
fine-tuning.
| [
{
"version": "v1",
"created": "Sun, 13 Oct 2024 00:36:11 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Oct 2024 17:38:26 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 14:35:43 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Diehl",
"Christopher",
""
],
[
"Karkus",
"Peter",
""
],
[
"Veer",
"Sushant",
""
],
[
"Pavone",
"Marco",
""
],
[
"Bertram",
"Torsten",
""
]
] | TITLE: LoRD: Adapting Differentiable Driving Policies to Distribution Shifts
ABSTRACT: Distribution shifts between operational domains can severely affect the
performance of learned models in self-driving vehicles (SDVs). While this is a
well-established problem, prior work has mostly explored naive solutions such
as fine-tuning, focusing on the motion prediction task. In this work, we
explore novel adaptation strategies for differentiable autonomy stacks
consisting of prediction, planning, and control, perform evaluation in
closed-loop, and investigate the often-overlooked issue of catastrophic
forgetting. Specifically, we introduce two simple yet effective techniques: a
low-rank residual decoder (LoRD) and multi-task fine-tuning. Through
experiments across three models conducted on two real-world autonomous driving
datasets (nuPlan, exiD), we demonstrate the effectiveness of our methods and
highlight a significant performance gap between open-loop and closed-loop
evaluation in prior approaches. Our approach improves forgetting by up to
23.33% and the closed-loop OOD driving score by 9.93% in comparison to standard
fine-tuning.
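The low-rank residual idea can be illustrated with a short PyTorch sketch: the pretrained decoder is frozen and only a small low-rank correction is trained on the new domain. The base decoder and dimensions below are placeholders, not the paper's architecture.

# Sketch of a low-rank residual adapter around a frozen decoder, in the
# spirit of LoRD. Base decoder and sizes are illustrative placeholders.
import torch
import torch.nn as nn


class LowRankResidualDecoder(nn.Module):
    def __init__(self, base_decoder: nn.Module, dim_in: int, dim_out: int, rank: int = 4):
        super().__init__()
        self.base = base_decoder
        for p in self.base.parameters():
            p.requires_grad = False  # keep source-domain knowledge frozen
        # Low-rank residual: only these two projections train on the new domain.
        self.down = nn.Linear(dim_in, rank, bias=False)
        self.up = nn.Linear(rank, dim_out, bias=False)
        nn.init.zeros_(self.up.weight)  # residual starts at zero -> base output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.up(self.down(x))


base = nn.Linear(64, 32)  # stand-in for a real trajectory decoder
model = LowRankResidualDecoder(base, 64, 32, rank=4)
out = model(torch.randn(8, 64))
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(out.shape, "trainable params:", trainable)  # far fewer than full fine-tuning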
|
2410.12673 | Zihan You | Zihan You, Ni Wang, Hao Wang, Qichao Zhao and Jinxiang Wang | MambaBEV: An efficient 3D detection model with Mamba2 | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate 3D object detection in autonomous driving relies on Bird's Eye View
(BEV) perception and effective temporal fusion. However, existing fusion
strategies based on convolutional layers or deformable self-attention struggle
with global context modeling in BEV space, leading to lower accuracy for large
objects. To address this, we introduce MambaBEV, a novel BEV-based 3D object
detection model that leverages Mamba2, an advanced state space model (SSM)
optimized for long-sequence processing. Our key contribution is TemporalMamba, a
temporal fusion module that enhances global awareness by introducing a BEV
feature discrete rearrangement mechanism tailored for Mamba's sequential
processing. Additionally, we propose a Mamba-based DETR as the detection head to
improve multi-object representation. Evaluations on the nuScenes dataset
demonstrate that MambaBEV-base achieves an NDS of 51.7\% and an mAP of
42.7\%. Furthermore, an end-to-end autonomous driving paradigm validates its
effectiveness in motion forecasting and planning. Our results highlight the
potential of SSMs in autonomous driving perception, particularly in enhancing
global context understanding and large object detection.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 15:37:29 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 03:22:25 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"You",
"Zihan",
""
],
[
"Wang",
"Ni",
""
],
[
"Wang",
"Hao",
""
],
[
"Zhao",
"Qichao",
""
],
[
"Wang",
"Jinxiang",
""
]
] | TITLE: MambaBEV: An efficient 3D detection model with Mamba2
ABSTRACT: Accurate 3D object detection in autonomous driving relies on Bird's Eye View
(BEV) perception and effective temporal fusion. However, existing fusion
strategies based on convolutional layers or deformable self-attention struggle
with global context modeling in BEV space, leading to lower accuracy for large
objects. To address this, we introduce MambaBEV, a novel BEV-based 3D object
detection model that leverages Mamba2, an advanced state space model (SSM)
optimized for long-sequence processing. Our key contribution is TemporalMamba, a
temporal fusion module that enhances global awareness by introducing a BEV
feature discrete rearrangement mechanism tailored for Mamba's sequential
processing. Additionally, we propose a Mamba-based DETR as the detection head to
improve multi-object representation. Evaluations on the nuScenes dataset
demonstrate that MambaBEV-base achieves an NDS of 51.7\% and an mAP of
42.7\%. Furthermore, an end-to-end autonomous driving paradigm validates its
effectiveness in motion forecasting and planning. Our results highlight the
potential of SSMs in autonomous driving perception, particularly in enhancing
global context understanding and large object detection.
|
2410.13360 | Haoran Hao | Haoran Hao, Jiaming Han, Changsheng Li, Yu-Feng Li, Xiangyu Yue | RAP: Retrieval-Augmented Personalization for Multimodal Large Language
Models | Accepted by CVPR 2025. Code: https://github.com/Hoar012/RAP-MLLM | null | null | null | cs.CV cs.AI cs.CL cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of large language models (LLMs) has significantly enhanced
the capabilities of multimodal LLMs (MLLMs) as general assistants. However,
a lack of user-specific knowledge still restricts their application in users'
daily lives. In this paper, we introduce the Retrieval Augmented Personalization
(RAP) framework for MLLMs' personalization. Starting from a general MLLM, we
turn it into a personalized assistant in three steps. (a) Remember: We design a
key-value database to store user-related information, e.g., user's name, avatar
and other attributes. (b) Retrieve: When the user initiates a conversation, RAP
will retrieve relevant information from the database using a multimodal
retriever. (c) Generate: The input query and retrieved concepts' information
are fed into MLLMs to generate personalized, knowledge-augmented responses.
Unlike previous methods, RAP allows real-time concept editing via updating the
external database. To further improve generation quality and alignment with
user-specific information, we design a pipeline for data collection and create
a specialized dataset for personalized training of MLLMs. Based on the dataset,
we train a series of MLLMs as personalized multimodal assistants. By
pretraining on a large-scale dataset, RAP-MLLMs can generalize to infinite visual
concepts without additional finetuning. Our models demonstrate outstanding
flexibility and generation quality across a variety of tasks, such as
personalized image captioning, question answering and visual recognition. The
code, data and models are available at https://hoar012.github.io/RAP-Project/.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 09:10:26 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Nov 2024 15:35:14 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 17:28:44 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Hao",
"Haoran",
""
],
[
"Han",
"Jiaming",
""
],
[
"Li",
"Changsheng",
""
],
[
"Li",
"Yu-Feng",
""
],
[
"Yue",
"Xiangyu",
""
]
] | TITLE: RAP: Retrieval-Augmented Personalization for Multimodal Large Language
Models
ABSTRACT: The development of large language models (LLMs) has significantly enhanced
the capabilities of multimodal LLMs (MLLMs) as general assistants. However,
a lack of user-specific knowledge still restricts their application in users'
daily lives. In this paper, we introduce the Retrieval Augmented Personalization
(RAP) framework for MLLMs' personalization. Starting from a general MLLM, we
turn it into a personalized assistant in three steps. (a) Remember: We design a
key-value database to store user-related information, e.g., user's name, avatar
and other attributes. (b) Retrieve: When the user initiates a conversation, RAP
will retrieve relevant information from the database using a multimodal
retriever. (c) Generate: The input query and retrieved concepts' information
are fed into MLLMs to generate personalized, knowledge-augmented responses.
Unlike previous methods, RAP allows real-time concept editing via updating the
external database. To further improve generation quality and alignment with
user-specific information, we design a pipeline for data collection and create
a specialized dataset for personalized training of MLLMs. Based on the dataset,
we train a series of MLLMs as personalized multimodal assistants. By
pretraining on a large-scale dataset, RAP-MLLMs can generalize to infinite visual
concepts without additional finetuning. Our models demonstrate outstanding
flexibility and generation quality across a variety of tasks, such as
personalized image captioning, question answering and visual recognition. The
code, data and models are available at https://hoar012.github.io/RAP-Project/.
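A minimal sketch of the Remember/Retrieve steps follows, assuming a hypothetical embedding space: random vectors stand in for a real multimodal retriever, and the prompt construction is illustrative.

# Minimal sketch of key-value concept retrieval for personalization.
import numpy as np

rng = np.random.default_rng(0)
DIM = 128

# (a) Remember: a tiny key-value database of user-specific concepts.
database = {
    "my dog Rex": rng.standard_normal(DIM),
    "my car": rng.standard_normal(DIM),
    "my office": rng.standard_normal(DIM),
}

def retrieve(query_emb: np.ndarray, k: int = 1) -> list[str]:
    """Return the k concepts whose embeddings are most similar to the query."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(database, key=lambda name: cos(query_emb, database[name]),
                    reverse=True)
    return ranked[:k]

# (b) Retrieve + (c) Generate: look up concepts, splice them into the prompt.
query = database["my dog Rex"] + 0.1 * rng.standard_normal(DIM)  # noisy match
concepts = retrieve(query, k=1)
prompt = f"Known user concepts: {concepts}. Answer the user's question."
print(prompt)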
|
2410.19265 | Kexin Zhang | Kexin Zhang, Shuhan Liu, Song Wang, Weili Shi, Chen Chen, Pan Li,
Sheng Li, Jundong Li and Kaize Ding | A Survey of Deep Graph Learning under Distribution Shifts: from Graph
Out-of-Distribution Generalization to Adaptation | 19 pages, 3 figures. arXiv admin note: text overlap with
arXiv:2402.11153 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distribution shifts on graphs -- the discrepancies in data distribution
between training and deploying a graph machine learning model -- are ubiquitous
and often unavoidable in real-world scenarios. These shifts may severely
deteriorate model performance, posing significant challenges for reliable graph
machine learning. Consequently, there has been a surge in research on graph
machine learning under distribution shifts, aiming to train models to achieve
satisfactory performance on out-of-distribution (OOD) test data. In our survey,
we provide an up-to-date and forward-looking review of deep graph learning
under distribution shifts. Specifically, we cover three primary scenarios:
graph OOD generalization, training-time graph OOD adaptation, and test-time
graph OOD adaptation. We begin by formally formulating the problems and
discussing various types of distribution shifts that can affect graph learning,
such as covariate shifts and concept shifts. To provide a better understanding
of the literature, we introduce a systematic taxonomy that classifies existing
methods into model-centric and data-centric approaches, investigating the
techniques used in each category. We also summarize commonly used datasets in
this research area to facilitate further investigation. Finally, we point out
promising research directions and the corresponding challenges to encourage
further study in this vital domain. We also provide a continuously updated
reading list at https://github.com/kaize0409/Awesome-Graph-OOD.
| [
{
"version": "v1",
"created": "Fri, 25 Oct 2024 02:39:56 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 03:42:17 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhang",
"Kexin",
""
],
[
"Liu",
"Shuhan",
""
],
[
"Wang",
"Song",
""
],
[
"Shi",
"Weili",
""
],
[
"Chen",
"Chen",
""
],
[
"Li",
"Pan",
""
],
[
"Li",
"Sheng",
""
],
[
"Li",
"Jundong",
""
],
[
"Ding",
"Kaize",
""
]
] | TITLE: A Survey of Deep Graph Learning under Distribution Shifts: from Graph
Out-of-Distribution Generalization to Adaptation
ABSTRACT: Distribution shifts on graphs -- the discrepancies in data distribution
between training and deploying a graph machine learning model -- are ubiquitous
and often unavoidable in real-world scenarios. These shifts may severely
deteriorate model performance, posing significant challenges for reliable graph
machine learning. Consequently, there has been a surge in research on graph
machine learning under distribution shifts, aiming to train models to achieve
satisfactory performance on out-of-distribution (OOD) test data. In our survey,
we provide an up-to-date and forward-looking review of deep graph learning
under distribution shifts. Specifically, we cover three primary scenarios:
graph OOD generalization, training-time graph OOD adaptation, and test-time
graph OOD adaptation. We begin by formally formulating the problems and
discussing various types of distribution shifts that can affect graph learning,
such as covariate shifts and concept shifts. To provide a better understanding
of the literature, we introduce a systematic taxonomy that classifies existing
methods into model-centric and data-centric approaches, investigating the
techniques used in each category. We also summarize commonly used datasets in
this research area to facilitate further investigation. Finally, we point out
promising research directions and the corresponding challenges to encourage
further study in this vital domain. We also provide a continuously updated
reading list at https://github.com/kaize0409/Awesome-Graph-OOD.
|
2410.22598 | Seung Hyun Cheon | Seung Hyun Cheon, Anneke Wernerfelt, Sorelle A. Friedler, Berk Ustun | Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse | 10 pages, 9 figures in body, ICLR 2025 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning models routinely automate decisions in applications like
lending and hiring. In such settings, consumer protection rules require
companies that deploy models to explain predictions to decision subjects. These
rules are motivated, in part, by the belief that explanations can promote
recourse by revealing information that individuals can use to contest or
improve their outcomes. In practice, many companies comply with these rules by
providing individuals with a list of the most important features for their
prediction, which they identify based on feature importance scores from feature
attribution methods such as SHAP or LIME. In this work, we show how these
practices can undermine consumers by highlighting features that would not lead
to an improved outcome and by explaining predictions that cannot be changed. We
propose to address these issues by highlighting features based on their
responsiveness score -- i.e., the probability that an individual can attain a
target prediction by changing a specific feature. We develop efficient methods
to compute responsiveness scores for any model and any dataset. We conduct an
extensive empirical study on the responsiveness of explanations in lending. Our
results show that standard practices in consumer finance can backfire by
presenting consumers with reasons without recourse, and demonstrate how our
approach improves consumer protection by highlighting responsive features and
identifying fixed predictions.
| [
{
"version": "v1",
"created": "Tue, 29 Oct 2024 23:37:49 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 09:09:11 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Cheon",
"Seung Hyun",
""
],
[
"Wernerfelt",
"Anneke",
""
],
[
"Friedler",
"Sorelle A.",
""
],
[
"Ustun",
"Berk",
""
]
] | TITLE: Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse
ABSTRACT: Machine learning models routinely automate decisions in applications like
lending and hiring. In such settings, consumer protection rules require
companies that deploy models to explain predictions to decision subjects. These
rules are motivated, in part, by the belief that explanations can promote
recourse by revealing information that individuals can use to contest or
improve their outcomes. In practice, many companies comply with these rules by
providing individuals with a list of the most important features for their
prediction, which they identify based on feature importance scores from feature
attribution methods such as SHAP or LIME. In this work, we show how these
practices can undermine consumers by highlighting features that would not lead
to an improved outcome and by explaining predictions that cannot be changed. We
propose to address these issues by highlighting features based on their
responsiveness score -- i.e., the probability that an individual can attain a
target prediction by changing a specific feature. We develop efficient methods
to compute responsiveness scores for any model and any dataset. We conduct an
extensive empirical study on the responsiveness of explanations in lending. Our
results show that standard practices in consumer finance can backfire by
presenting consumers with reasons without recourse, and demonstrate how our
approach improves consumer protection by highlighting responsive features and
identifying fixed predictions.
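A responsiveness score of this kind can be estimated directly, as in the following sketch: enumerate feasible values of one feature and count how often the model reaches the target prediction. The toy lending model and value grids are illustrative assumptions, not the paper's experimental setup.

# Sketch of a responsiveness score: the fraction of feasible values of one
# feature that flip the model to the target prediction, holding the rest fixed.
import numpy as np


def responsiveness(predict, x: np.ndarray, feature: int,
                   feasible_values: np.ndarray, target: int = 1) -> float:
    hits = 0
    for v in feasible_values:
        x_mod = x.copy()
        x_mod[feature] = v
        hits += int(predict(x_mod) == target)
    return hits / len(feasible_values)


# Toy model: approve (1) when income minus debt exceeds a threshold.
predict = lambda x: int(x[0] - x[1] > 50)

applicant = np.array([60.0, 40.0])        # income, debt -> currently denied
income_grid = np.linspace(0, 200, 101)    # actionable income values
debt_grid = np.linspace(0, 100, 101)      # actionable debt values

print("income responsiveness:", responsiveness(predict, applicant, 0, income_grid))
print("debt responsiveness:  ", responsiveness(predict, applicant, 1, debt_grid))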
|
2411.13550 | Ziqi Ma | Ziqi Ma, Yisong Yue, Georgia Gkioxari | Find Any Part in 3D | Project website: https://ziqi-ma.github.io/find3dsite/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Why don't we have foundation models in 3D yet? A key limitation is data
scarcity. For 3D object part segmentation, existing datasets are small in size
and lack diversity. We show that it is possible to break this data barrier by
building a data engine powered by 2D foundation models. Our data engine
automatically annotates any number of object parts: 1755x more unique part
types than existing datasets combined. By training on our annotated data with a
simple contrastive objective, we obtain an open-world model that generalizes to
any part in any object based on any text query. Even when evaluated zero-shot,
we outperform existing methods on the datasets they train on. We achieve 260%
improvement in mIoU and boost speed by 6x to 300x. Our scaling analysis
confirms that this generalization stems from the data scale, which underscores
the impact of our data engine. Finally, to advance general-category open-world
3D part segmentation, we release a benchmark covering a wide range of objects
and parts. Project website: https://ziqi-ma.github.io/find3dsite/
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 18:59:01 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 04:36:55 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Ma",
"Ziqi",
""
],
[
"Yue",
"Yisong",
""
],
[
"Gkioxari",
"Georgia",
""
]
] | TITLE: Find Any Part in 3D
ABSTRACT: Why don't we have foundation models in 3D yet? A key limitation is data
scarcity. For 3D object part segmentation, existing datasets are small in size
and lack diversity. We show that it is possible to break this data barrier by
building a data engine powered by 2D foundation models. Our data engine
automatically annotates any number of object parts: 1755x more unique part
types than existing datasets combined. By training on our annotated data with a
simple contrastive objective, we obtain an open-world model that generalizes to
any part in any object based on any text query. Even when evaluated zero-shot,
we outperform existing methods on the datasets they train on. We achieve 260%
improvement in mIoU and boost speed by 6x to 300x. Our scaling analysis
confirms that this generalization stems from the data scale, which underscores
the impact of our data engine. Finally, to advance general-category open-world
3D part segmentation, we release a benchmark covering a wide range of objects
and parts. Project website: https://ziqi-ma.github.io/find3dsite/
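The "simple contrastive objective" can be illustrated with a standard InfoNCE-style loss pairing part features with text-query embeddings. The sketch below uses random embeddings and a symmetric cross-entropy, which may differ in detail from the paper's exact loss.

# InfoNCE-style contrastive loss between 3D part features and text queries.
import torch
import torch.nn.functional as F


def contrastive_loss(part_feats: torch.Tensor, text_feats: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """part_feats[i] should match text_feats[i]; all other pairs are negatives."""
    part_feats = F.normalize(part_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = part_feats @ text_feats.t() / temperature
    targets = torch.arange(len(part_feats))
    # Symmetric cross-entropy over both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


parts = torch.randn(16, 256, requires_grad=True)  # pooled part embeddings
texts = torch.randn(16, 256)                      # embeddings of part names
loss = contrastive_loss(parts, texts)
loss.backward()
print(float(loss))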
|
2411.15216 | Guangkun Nie | Guangkun Nie, Gongzheng Tang, Shenda Hong | Dist Loss: Enhancing Regression in Few-Shot Region through Distribution
Distance Constraint | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imbalanced data distributions are prevalent in real-world scenarios, posing
significant challenges in both imbalanced classification and imbalanced
regression tasks. They often cause deep learning models to overfit in areas of
high sample density (many-shot regions) while underperforming in areas of low
sample density (few-shot regions). This characteristic restricts the utility of
deep learning models in various sectors, notably healthcare, where areas with
few-shot data hold greater clinical relevance. While recent studies have shown
the benefits of incorporating distribution information in imbalanced
classification tasks, such strategies are rarely explored in imbalanced
regression. In this paper, we address this issue by introducing a novel loss
function, termed Dist Loss, designed to minimize the distribution distance
between the model's predictions and the target labels in a differentiable
manner, effectively integrating distribution information into model training.
Dist Loss enables deep learning models to regularize their output distribution
during training, effectively enhancing their focus on few-shot regions. We have
conducted extensive experiments across three datasets spanning computer vision
and healthcare: IMDB-WIKI-DIR, AgeDB-DIR, and ECG-Ka-DIR. The results
demonstrate that Dist Loss effectively mitigates the negative impact of
imbalanced data distribution on model performance, achieving state-of-the-art
results in sparse data regions. Furthermore, Dist Loss is easy to integrate,
complementing existing methods.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 16:17:40 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 10:23:51 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 02:57:57 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Nie",
"Guangkun",
""
],
[
"Tang",
"Gongzheng",
""
],
[
"Hong",
"Shenda",
""
]
] | TITLE: Dist Loss: Enhancing Regression in Few-Shot Region through Distribution
Distance Constraint
ABSTRACT: Imbalanced data distributions are prevalent in real-world scenarios, posing
significant challenges in both imbalanced classification and imbalanced
regression tasks. They often cause deep learning models to overfit in areas of
high sample density (many-shot regions) while underperforming in areas of low
sample density (few-shot regions). This characteristic restricts the utility of
deep learning models in various sectors, notably healthcare, where areas with
few-shot data hold greater clinical relevance. While recent studies have shown
the benefits of incorporating distribution information in imbalanced
classification tasks, such strategies are rarely explored in imbalanced
regression. In this paper, we address this issue by introducing a novel loss
function, termed Dist Loss, designed to minimize the distribution distance
between the model's predictions and the target labels in a differentiable
manner, effectively integrating distribution information into model training.
Dist Loss enables deep learning models to regularize their output distribution
during training, effectively enhancing their focus on few-shot regions. We have
conducted extensive experiments across three datasets spanning computer vision
and healthcare: IMDB-WIKI-DIR, AgeDB-DIR, and ECG-Ka-DIR. The results
demonstrate that Dist Loss effectively mitigates the negative impact of
imbalanced data distribution on model performance, achieving state-of-the-art
results in sparse data regions. Furthermore, Dist Loss is easy to integrate,
complementing existing methods.
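One simple way to realize a differentiable distribution-distance penalty is to compare sorted predictions against sorted labels (a 1D Wasserstein-style distance), as sketched below. This illustrates the general idea of matching output and label distributions; it is not the paper's exact Dist Loss.

# Simplified sketch of a differentiable distribution-distance penalty.
import torch
import torch.nn.functional as F


def distribution_distance(preds: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Sorting is differentiable w.r.t. the values (gradients flow through
    # the permuted elements), so this term can be trained end to end.
    return F.l1_loss(preds.sort().values, labels.sort().values)


def total_loss(preds, labels, alpha: float = 1.0):
    return F.l1_loss(preds, labels) + alpha * distribution_distance(preds, labels)


preds = torch.randn(256, requires_grad=True)
labels = torch.randn(256).abs() * 20 + 40  # imbalanced, e.g. an age distribution
loss = total_loss(preds, labels)
loss.backward()
print(float(loss))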
|
2411.15556 | Anxhelo Diko Dr | Anxhelo Diko, Tinghuai Wang, Wassim Swaileh, Shiyan Sun, Ioannis
Patras | ReWind: Understanding Long Videos with Instructed Learnable Memory | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Vision-Language Models (VLMs) are crucial for applications requiring
integrated understanding of textual and visual information. However, existing VLMs
struggle with long videos due to computational inefficiency, memory
limitations, and difficulties in maintaining coherent understanding across
extended sequences. To address these challenges, we introduce ReWind, a novel
memory-based VLM designed for efficient long video understanding while
preserving temporal fidelity. ReWind operates in a two-stage framework. In the
first stage, ReWind maintains a dynamic learnable memory module with a novel
\textbf{read-perceive-write} cycle that stores and updates instruction-relevant
visual information as the video unfolds. This module utilizes learnable queries
and cross-attentions between memory contents and the input stream, ensuring low
memory requirements by scaling linearly with the number of tokens. In the
second stage, we propose an adaptive frame selection mechanism guided by the
memory content to identify instruction-relevant key moments. It enriches the
memory representations with detailed spatial information by selecting a few
high-resolution frames, which are then combined with the memory contents and
fed into a Large Language Model (LLM) to generate the final answer. We
empirically demonstrate ReWind's superior performance in visual question
answering (VQA) and temporal grounding tasks, surpassing previous methods on
long video benchmarks. Notably, ReWind achieves a +13\% score gain and a +12\%
accuracy improvement on the MovieChat-1K VQA dataset and an +8\% mIoU increase
on Charades-STA for temporal grounding.
| [
{
"version": "v1",
"created": "Sat, 23 Nov 2024 13:23:22 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 23:05:29 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Diko",
"Anxhelo",
""
],
[
"Wang",
"Tinghuai",
""
],
[
"Swaileh",
"Wassim",
""
],
[
"Sun",
"Shiyan",
""
],
[
"Patras",
"Ioannis",
""
]
] | TITLE: ReWind: Understanding Long Videos with Instructed Learnable Memory
ABSTRACT: Vision-Language Models (VLMs) are crucial for applications requiring
integrated understanding of textual and visual information. However, existing VLMs
struggle with long videos due to computational inefficiency, memory
limitations, and difficulties in maintaining coherent understanding across
extended sequences. To address these challenges, we introduce ReWind, a novel
memory-based VLM designed for efficient long video understanding while
preserving temporal fidelity. ReWind operates in a two-stage framework. In the
first stage, ReWind maintains a dynamic learnable memory module with a novel
\textbf{read-perceive-write} cycle that stores and updates instruction-relevant
visual information as the video unfolds. This module utilizes learnable queries
and cross-attentions between memory contents and the input stream, ensuring low
memory requirements by scaling linearly with the number of tokens. In the
second stage, we propose an adaptive frame selection mechanism guided by the
memory content to identify instruction-relevant key moments. It enriches the
memory representations with detailed spatial information by selecting a few
high-resolution frames, which are then combined with the memory contents and
fed into a Large Language Model (LLM) to generate the final answer. We
empirically demonstrate ReWind's superior performance in visual question
answering (VQA) and temporal grounding tasks, surpassing previous methods on
long video benchmarks. Notably, ReWind achieves a +13\% score gain and a +12\%
accuracy improvement on the MovieChat-1K VQA dataset and an +8\% mIoU increase
on Charades-STA for temporal grounding.
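The read/write memory idea can be sketched as a fixed set of learnable slots updated by cross-attention over each incoming chunk of frame tokens, so peak memory stays constant in video length. The sizes and the single-attention update below are assumptions for illustration, not ReWind's actual module.

# Sketch of a learnable memory updated by cross-attention over frame tokens.
import torch
import torch.nn as nn


class MemoryModule(nn.Module):
    def __init__(self, dim: int = 256, slots: int = 64, heads: int = 4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(1, slots, dim) * 0.02)
        self.read = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video_chunks: list[torch.Tensor]) -> torch.Tensor:
        """Each chunk: (batch, tokens, dim). Memory is updated chunk by chunk,
        so peak memory does not grow with video length."""
        mem = self.memory.expand(video_chunks[0].size(0), -1, -1)
        for chunk in video_chunks:
            # Read: memory slots attend to the new frame tokens...
            update, _ = self.read(query=mem, key=chunk, value=chunk)
            # ...Write: residual update keeps earlier content accessible.
            mem = self.norm(mem + update)
        return mem  # condensed video representation, fixed size


chunks = [torch.randn(2, 128, 256) for _ in range(10)]  # 10 chunks of a video
print(MemoryModule()(chunks).shape)  # torch.Size([2, 64, 256])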
|
2411.15557 | Anxhelo Diko Dr | Anxhelo Diko, Antonino Furnari, Luigi Cinque, Giovanni Maria Farinella | LAGUNA: LAnguage Guided UNsupervised Adaptation with structured spaces | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Unsupervised domain adaptation remains a critical challenge in enabling the
knowledge transfer of models across unseen domains. Existing methods struggle
to balance the need for domain-invariant representations with preserving
domain-specific features, which is often due to alignment approaches that
impose the projection of samples with similar semantics close in the latent
space despite their drastic domain differences. We introduce LAGUNA - LAnguage
Guided UNsupervised Adaptation with structured spaces, a novel approach that
shifts the focus from aligning representations in absolute coordinates to
aligning the relative positioning of equivalent concepts in latent spaces.
LAGUNA defines a domain-agnostic structure upon the semantic/geometric
relationships between class labels in language space and guides adaptation,
ensuring that the organization of samples in visual space reflects reference
inter-class relationships while preserving domain-specific characteristics. We
empirically demonstrate LAGUNA's superiority in domain adaptation tasks across
four diverse image and video datasets. Remarkably, LAGUNA surpasses previous
works in 18 different adaptation scenarios across four diverse image and video
datasets with average accuracy improvements of +3.32% on DomainNet, +5.75% in
GeoPlaces, +4.77% on GeoImnet, and +1.94% mean class accuracy improvement on
EgoExo4D.
| [
{
"version": "v1",
"created": "Sat, 23 Nov 2024 13:26:53 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 11:01:33 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Mar 2025 22:59:47 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Diko",
"Anxhelo",
""
],
[
"Furnari",
"Antonino",
""
],
[
"Cinque",
"Luigi",
""
],
[
"Farinella",
"Giovanni Maria",
""
]
] | TITLE: LAGUNA: LAnguage Guided UNsupervised Adaptation with structured spaces
ABSTRACT: Unsupervised domain adaptation remains a critical challenge in enabling the
knowledge transfer of models across unseen domains. Existing methods struggle
to balance the need for domain-invariant representations with preserving
domain-specific features, which is often due to alignment approaches that
impose the projection of samples with similar semantics close in the latent
space despite their drastic domain differences. We introduce LAGUNA - LAnguage
Guided UNsupervised Adaptation with structured spaces, a novel approach that
shifts the focus from aligning representations in absolute coordinates to
aligning the relative positioning of equivalent concepts in latent spaces.
LAGUNA defines a domain-agnostic structure upon the semantic/geometric
relationships between class labels in language space and guides adaptation,
ensuring that the organization of samples in visual space reflects reference
inter-class relationships while preserving domain-specific characteristics. We
empirically demonstrate LAGUNA's superiority in domain adaptation tasks across
four diverse image and video datasets. Remarkably, LAGUNA surpasses previous
works in 18 different adaptation scenarios across four diverse image and video
datasets with average accuracy improvements of +3.32% on DomainNet, +5.75% in
GeoPlaces, +4.77% on GeoImnet, and +1.94% mean class accuracy improvement on
EgoExo4D.
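A simplified reading of "aligning relative positioning rather than absolute coordinates" is to match pairwise-similarity structures between domains, as in this sketch; the full LAGUNA objective is richer than this single term.

# Sketch: align inter-class similarity structure, not absolute embeddings.
import torch
import torch.nn.functional as F


def relational_alignment_loss(visual_protos, text_embs):
    v = F.normalize(visual_protos, dim=-1)
    t = F.normalize(text_embs, dim=-1)
    # Compare pairwise-similarity patterns rather than the vectors themselves.
    return F.mse_loss(v @ v.t(), t @ t.t())


num_classes, dim = 10, 256
visual = torch.randn(num_classes, dim, requires_grad=True)  # target-domain prototypes
text = torch.randn(num_classes, dim)                        # frozen label embeddings
loss = relational_alignment_loss(visual, text)
loss.backward()
print(float(loss))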
|
2411.17067 | Kaiwen Jiang | Kaiwen Jiang, Venkataram Sivaram, Cheng Peng, Ravi Ramamoorthi | Geometry Field Splatting with Gaussian Surfels | null | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Geometric reconstruction of opaque surfaces from images is a longstanding
challenge in computer vision, with renewed interest from volumetric view
synthesis algorithms using radiance fields. We leverage the geometry field
proposed in recent work for stochastic opaque surfaces, which can then be
converted to volume densities. We adapt Gaussian kernels or surfels to splat
the geometry field rather than the volume, enabling precise reconstruction of
opaque solids. Our first contribution is to derive an efficient and almost
exact differentiable rendering algorithm for geometry fields parameterized by
Gaussian surfels, removing current approximations such as Taylor-series
expansions and the assumption of no self-attenuation. Next, we address the
discontinuous loss landscape when
surfels cluster near geometry, showing how to guarantee that the rendered color
is a continuous function of the colors of the kernels, irrespective of
ordering. Finally, we use latent representations with spherical harmonics
encoded reflection vectors rather than spherical harmonics encoded colors to
better address specular surfaces. We demonstrate significant improvement in the
quality of reconstructed 3D surfaces on widely-used datasets.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 03:07:05 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 20:22:49 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Jiang",
"Kaiwen",
""
],
[
"Sivaram",
"Venkataram",
""
],
[
"Peng",
"Cheng",
""
],
[
"Ramamoorthi",
"Ravi",
""
]
] | TITLE: Geometry Field Splatting with Gaussian Surfels
ABSTRACT: Geometric reconstruction of opaque surfaces from images is a longstanding
challenge in computer vision, with renewed interest from volumetric view
synthesis algorithms using radiance fields. We leverage the geometry field
proposed in recent work for stochastic opaque surfaces, which can then be
converted to volume densities. We adapt Gaussian kernels or surfels to splat
the geometry field rather than the volume, enabling precise reconstruction of
opaque solids. Our first contribution is to derive an efficient and almost
exact differentiable rendering algorithm for geometry fields parameterized by
Gaussian surfels, removing current approximations such as Taylor-series
expansions and the assumption of no self-attenuation. Next, we address the
discontinuous loss landscape when
surfels cluster near geometry, showing how to guarantee that the rendered color
is a continuous function of the colors of the kernels, irrespective of
ordering. Finally, we use latent representations with spherical harmonics
encoded reflection vectors rather than spherical harmonics encoded colors to
better address specular surfaces. We demonstrate significant improvement in the
quality of reconstructed 3D surfaces on widely-used datasets.
|
2411.17556 | Tariq Khan Dr | Tariq M Khan, Dawn Lin, Shahzaib Iqbal, Erik Meijering | TAFM-Net: A Novel Approach to Skin Lesion Segmentation Using Transformer
Attention and Focal Modulation | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Incorporating modern computer vision techniques into clinical protocols shows
promise in improving skin lesion segmentation. The U-Net architecture has been
a key model in this area, iteratively improved to address challenges arising
from the heterogeneity of dermatologic images due to varying clinical settings,
lighting, patient attributes, and hair density. To further improve skin lesion
segmentation, we developed TAFM-Net, an innovative model leveraging
self-adaptive transformer attention (TA) coupled with focal modulation (FM).
Our model integrates an EfficientNetV2B1 encoder, which employs TA to enhance
spatial and channel-related saliency, while a densely connected decoder
integrates FM within skip connections, enhancing feature emphasis, segmentation
performance, and interpretability crucial for medical image analysis. A novel
dynamic loss function amalgamates region and boundary information, guiding
effective model training. Our model achieves competitive performance, with
Jaccard coefficients of 93.64\%, 86.88\% and 92.88\% in the ISIC2016, ISIC2017
and ISIC2018 datasets, respectively, demonstrating its potential in real-world
scenarios.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 16:18:48 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Khan",
"Tariq M",
""
],
[
"Lin",
"Dawn",
""
],
[
"Iqbal",
"Shahzaib",
""
],
[
"Meijering",
"Erik",
""
]
] | TITLE: TAFM-Net: A Novel Approach to Skin Lesion Segmentation Using Transformer
Attention and Focal Modulation
ABSTRACT: Incorporating modern computer vision techniques into clinical protocols shows
promise in improving skin lesion segmentation. The U-Net architecture has been
a key model in this area, iteratively improved to address challenges arising
from the heterogeneity of dermatologic images due to varying clinical settings,
lighting, patient attributes, and hair density. To further improve skin lesion
segmentation, we developed TAFM-Net, an innovative model leveraging
self-adaptive transformer attention (TA) coupled with focal modulation (FM).
Our model integrates an EfficientNetV2B1 encoder, which employs TA to enhance
spatial and channel-related saliency, while a densely connected decoder
integrates FM within skip connections, enhancing feature emphasis, segmentation
performance, and interpretability crucial for medical image analysis. A novel
dynamic loss function amalgamates region and boundary information, guiding
effective model training. Our model achieves competitive performance, with
Jaccard coefficients of 93.64\%, 86.88\% and 92.88\% in the ISIC2016, ISIC2017
and ISIC2018 datasets, respectively, demonstrating its potential in real-world
scenarios.
|
2412.00175 | Stefan Smeu | Stefan Smeu, Dragos-Alexandru Boldisor, Dan Oneata, Elisabeta Oneata | Circumventing shortcuts in audio-visual deepfake detection datasets with
unsupervised learning | null | null | null | null | cs.CV cs.LG cs.SD eess.AS eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Good datasets are essential for developing and benchmarking any machine
learning system. Their importance is even more extreme for safety critical
applications such as deepfake detection - the focus of this paper. Here we
reveal that two of the most widely used audio-video deepfake datasets suffer
from a previously unidentified spurious feature: the leading silence. Fake
videos start with a very brief moment of silence and based on this feature
alone, we can separate the real and fake samples almost perfectly. As such,
previous audio-only and audio-video models exploit the presence of silence in
the fake videos and consequently perform worse when the leading silence is
removed. To avoid latching onto such an unwanted artifact, and possibly other
unrevealed ones, we propose a shift from supervised to unsupervised learning by
training models exclusively on real data. We show that by aligning
self-supervised audio-video representations we remove the risk of relying on
dataset-specific biases and improve robustness in deepfake detection.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2024 18:58:20 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 09:59:45 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Smeu",
"Stefan",
""
],
[
"Boldisor",
"Dragos-Alexandru",
""
],
[
"Oneata",
"Dan",
""
],
[
"Oneata",
"Elisabeta",
""
]
] | TITLE: Circumventing shortcuts in audio-visual deepfake detection datasets with
unsupervised learning
ABSTRACT: Good datasets are essential for developing and benchmarking any machine
learning system. Their importance is even more extreme for safety critical
applications such as deepfake detection - the focus of this paper. Here we
reveal that two of the most widely used audio-video deepfake datasets suffer
from a previously unidentified spurious feature: the leading silence. Fake
videos start with a very brief moment of silence and based on this feature
alone, we can separate the real and fake samples almost perfectly. As such,
previous audio-only and audio-video models exploit the presence of silence in
the fake videos and consequently perform worse when the leading silence is
removed. To avoid latching onto such an unwanted artifact, and possibly other
unrevealed ones, we propose a shift from supervised to unsupervised learning by
training models exclusively on real data. We show that by aligning
self-supervised audio-video representations we remove the risk of relying on
dataset-specific biases and improve robustness in deepfake detection.
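The severity of the shortcut is easy to appreciate with a sketch: a "detector" that only thresholds the leading-silence duration and never looks at manipulation artifacts at all. The amplitude and duration thresholds below are illustrative, not values taken from the datasets.

# Sketch of exploiting the leading-silence shortcut.
import numpy as np


def leading_silence(audio: np.ndarray, sample_rate: int,
                    amp_threshold: float = 1e-3) -> float:
    """Seconds of near-zero signal before the first audible sample."""
    audible = np.flatnonzero(np.abs(audio) > amp_threshold)
    first = audible[0] if audible.size else len(audio)
    return first / sample_rate


def shortcut_classifier(audio, sample_rate, min_silence: float = 0.05) -> str:
    # A "detector" that ignores manipulation artifacts entirely.
    return "fake" if leading_silence(audio, sample_rate) > min_silence else "real"


sr = 16_000
real = np.random.default_rng(0).standard_normal(sr).astype(np.float32) * 0.1
fake = np.concatenate([np.zeros(int(0.2 * sr), np.float32), real])  # leading silence
print(shortcut_classifier(real, sr), shortcut_classifier(fake, sr))  # real fake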
|
2412.02545 | Mingjia Li | Jin Hu, Mingjia Li, Xiaojie Guo | ShadowHack: Hacking Shadows via Luminance-Color Divide and Conquer | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Shadows introduce challenges such as reduced brightness, texture
deterioration, and color distortion in images, complicating a holistic
solution. This study presents \textbf{ShadowHack}, a divide-and-conquer
strategy that tackles these complexities by decomposing the original task into
luminance recovery and color remedy. To brighten shadow regions and repair the
corrupted textures in the luminance space, we customize LRNet, a U-shaped
network with a rectified attention module, to enhance information interaction
and recalibrate contaminated attention maps. With luminance recovered, CRNet
then leverages cross-attention mechanisms to revive vibrant colors, producing
visually compelling results. Extensive experiments on multiple datasets are
conducted to demonstrate the superiority of ShadowHack over existing
state-of-the-art solutions both quantitatively and qualitatively, highlighting
the effectiveness of our design. Our code will be made publicly available.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 16:37:23 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Dec 2024 07:46:47 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 13:23:12 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Hu",
"Jin",
""
],
[
"Li",
"Mingjia",
""
],
[
"Guo",
"Xiaojie",
""
]
] | TITLE: ShadowHack: Hacking Shadows via Luminance-Color Divide and Conquer
ABSTRACT: Shadows introduce challenges such as reduced brightness, texture
deterioration, and color distortion in images, complicating a holistic
solution. This study presents \textbf{ShadowHack}, a divide-and-conquer
strategy that tackles these complexities by decomposing the original task into
luminance recovery and color remedy. To brighten shadow regions and repair the
corrupted textures in the luminance space, we customize LRNet, a U-shaped
network with a rectified attention module, to enhance information interaction
and recalibrate contaminated attention maps. With luminance recovered, CRNet
then leverages cross-attention mechanisms to revive vibrant colors, producing
visually compelling results. Extensive experiments on multiple datasets are
conducted to demonstrate the superiority of ShadowHack over existing
state-of-the-art solutions both quantitatively and qualitatively, highlighting
the effectiveness of our design. Our code will be made publicly available.
|
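A sketch of the luminance-color separation behind the divide-and-conquer strategy above, assuming a CIELAB decomposition (the paper's exact color space is not specified here); the luminance plane would feed a recovery network such as LRNet and the color planes a color-remedy network such as CRNet:

import cv2
import numpy as np

def split_luminance_color(bgr: np.ndarray):
    """Decompose a BGR image into a luminance plane and two color planes."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)
    return L, np.dstack([a, b])

def merge_luminance_color(L: np.ndarray, ab: np.ndarray) -> np.ndarray:
    """Recombine (possibly restored) luminance and color into BGR."""
    lab = np.dstack([L, ab]).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)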
2412.02901 | Shibo Zhao | Shibo Zhao, Honghao Zhu, Yuanjun Gao, Beomsoo Kim, Yuheng Qiu, Aaron
M. Johnson, Sebastian Scherer | SuperLoc: The Key to Robust LiDAR-Inertial Localization Lies in
Predicting Alignment Risks | 7 pages, 6 figures, accepted at ICRA 2025 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Map-based LiDAR localization, while widely used in autonomous systems, faces
significant challenges in degraded environments due to lacking distinct
geometric features. This paper introduces SuperLoc, a robust LiDAR localization
package that addresses key limitations in existing methods. SuperLoc features a
novel predictive alignment risk assessment technique, enabling early detection
and mitigation of potential failures before optimization. This approach
significantly improves performance in challenging scenarios such as corridors,
tunnels, and caves. Unlike existing degeneracy mitigation algorithms that rely
on post-optimization analysis and heuristic thresholds, SuperLoc evaluates the
localizability of raw sensor measurements. Experimental results demonstrate
significant performance improvements over state-of-the-art methods across
various degraded environments. Our approach achieves a 54% increase in accuracy
and exhibits the highest robustness. To facilitate further research, we release
our implementation along with datasets from eight challenging scenarios.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 23:07:51 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 02:28:13 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhao",
"Shibo",
""
],
[
"Zhu",
"Honghao",
""
],
[
"Gao",
"Yuanjun",
""
],
[
"Kim",
"Beomsoo",
""
],
[
"Qiu",
"Yuheng",
""
],
[
"Johnson",
"Aaron M.",
""
],
[
"Scherer",
"Sebastian",
""
]
] | TITLE: SuperLoc: The Key to Robust LiDAR-Inertial Localization Lies in
Predicting Alignment Risks
ABSTRACT: Map-based LiDAR localization, while widely used in autonomous systems, faces
significant challenges in degraded environments due to lacking distinct
geometric features. This paper introduces SuperLoc, a robust LiDAR localization
package that addresses key limitations in existing methods. SuperLoc features a
novel predictive alignment risk assessment technique, enabling early detection
and mitigation of potential failures before optimization. This approach
significantly improves performance in challenging scenarios such as corridors,
tunnels, and caves. Unlike existing degeneracy mitigation algorithms that rely
on post-optimization analysis and heuristic thresholds, SuperLoc evaluates the
localizability of raw sensor measurements. Experimental results demonstrate
significant performance improvements over state-of-the-art methods across
various degraded environments. Our approach achieves a 54% increase in accuracy
and exhibits the highest robustness. To facilitate further research, we release
our implementation along with datasets from eight challenging scenarios.
|
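SuperLoc's predictive alignment-risk assessment is the paper's own contribution; as a rough illustration of judging localizability from raw scans, a common proxy is the eigenvalue spectrum of the covariance of surface normals (a sketch under that assumption, not SuperLoc's method):

import numpy as np

def localizability_spectrum(normals: np.ndarray) -> np.ndarray:
    """Sorted eigenvalues of the 3x3 covariance of unit surface normals;
    a near-zero smallest eigenvalue flags a direction (e.g., along a
    tunnel axis) in which scan alignment is poorly constrained."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cov = n.T @ n / len(n)
    return np.sort(np.linalg.eigvalsh(cov))

A ratio of smallest to largest eigenvalue close to zero would be an early warning sign before the optimizer is ever run, in the spirit of evaluating raw measurements rather than post-optimization residuals.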
2412.11620 | Wenxiao Fan | Wenxiao Fan, Kan Li | Combating Semantic Contamination in Learning with Label Noise | AAAI2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Noisy labels can negatively impact the performance of deep neural networks.
One common solution is label refurbishment, which involves reconstructing noisy
labels through predictions and distributions. However, these methods may
introduce problematic semantic associations, a phenomenon that we identify as
Semantic Contamination. Through an analysis of Robust LR, a representative
label refurbishment method, we found that utilizing the logits of views for
refurbishment does not adequately balance the semantic information of
individual classes. Conversely, using the logits of models fails to maintain
consistent semantic relationships across models, which explains why label
refurbishment methods frequently encounter issues related to Semantic
Contamination. To address this issue, we propose a novel method called
Collaborative Cross Learning, which utilizes semi-supervised learning on
refurbished labels to extract appropriate semantic associations from embeddings
across views and models. Experimental results show that our method outperforms
existing approaches on both synthetic and real-world noisy datasets,
effectively mitigating the impact of label noise and Semantic Contamination.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 10:07:15 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Dec 2024 04:26:08 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 09:47:41 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Fan",
"Wenxiao",
""
],
[
"Li",
"Kan",
""
]
] | TITLE: Combating Semantic Contamination in Learning with Label Noise
ABSTRACT: Noisy labels can negatively impact the performance of deep neural networks.
One common solution is label refurbishment, which involves reconstructing noisy
labels through predictions and distributions. However, these methods may
introduce problematic semantic associations, a phenomenon that we identify as
Semantic Contamination. Through an analysis of Robust LR, a representative
label refurbishment method, we found that utilizing the logits of views for
refurbishment does not adequately balance the semantic information of
individual classes. Conversely, using the logits of models fails to maintain
consistent semantic relationships across models, which explains why label
refurbishment methods frequently encounter issues related to Semantic
Contamination. To address this issue, we propose a novel method called
Collaborative Cross Learning, which utilizes semi-supervised learning on
refurbished labels to extract appropriate semantic associations from embeddings
across views and models. Experimental results show that our method outperforms
existing approaches on both synthetic and real-world noisy datasets,
effectively mitigating the impact of label noise and Semantic Contamination.
|
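For context on the label refurbishment family analyzed above, a generic refurbishment step mixes the given noisy label with the model's prediction; the per-sample clean-probability weight is a placeholder, and Collaborative Cross Learning itself additionally aligns semantics across views and models:

import numpy as np

def refurbish_labels(noisy_onehot: np.ndarray, probs: np.ndarray,
                     clean_conf: np.ndarray) -> np.ndarray:
    """Convex mix of the (possibly noisy) one-hot label, shape (N, C), and
    the model's softmax output, weighted by an estimated clean probability."""
    w = np.clip(clean_conf, 0.0, 1.0)[:, None]  # shape (N, 1)
    return w * noisy_onehot + (1.0 - w) * probs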
2412.12223 | Xiaozhe Li | Xiaozhe Li, Kai WU, Siyi Yang, YiZhan Qu, Guohua.Zhang, Zhiyu Chen,
Jiayao Li, Jiangchuan Mu, Xiaobin Hu, Wen Fang, Mingliang Xiong, Hao Deng,
Qingwen Liu, Gang Li, Bin He | Can video generation replace cinematographers? Research on the cinematic
language of generated video | 10 pages | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in text-to-video (T2V) generation have leveraged
diffusion models to enhance visual coherence in videos synthesized from textual
descriptions. However, existing research primarily focuses on object motion,
often overlooking cinematic language, which is crucial for conveying emotion
and narrative pacing in cinematography. To address this, we propose a threefold
approach to improve cinematic control in T2V models. First, we introduce a
meticulously annotated cinematic language dataset with twenty subcategories,
covering shot framing, shot angles, and camera movements, enabling models to
learn diverse cinematic styles. Second, we present CameraDiff, which employs
LoRA for precise and stable cinematic control, ensuring flexible shot
generation. Third, we propose CameraCLIP, designed to evaluate cinematic
alignment and guide multi-shot composition. Building on CameraCLIP, we
introduce CLIPLoRA, a CLIP-guided dynamic LoRA composition method that
adaptively fuses multiple pre-trained cinematic LoRAs, enabling smooth
transitions and seamless style blending. Experimental results demonstrate that
CameraDiff ensures stable and precise cinematic control, CameraCLIP achieves an
R@1 score of 0.83, and CLIPLoRA significantly enhances multi-shot composition
within a single video, bridging the gap between automated video generation and
professional cinematography.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 09:02:24 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 03:50:25 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Xiaozhe",
""
],
[
"WU",
"Kai",
""
],
[
"Yang",
"Siyi",
""
],
[
"Qu",
"YiZhan",
""
],
[
"Zhang",
"Guohua.",
""
],
[
"Chen",
"Zhiyu",
""
],
[
"Li",
"Jiayao",
""
],
[
"Mu",
"Jiangchuan",
""
],
[
"Hu",
"Xiaobin",
""
],
[
"Fang",
"Wen",
""
],
[
"Xiong",
"Mingliang",
""
],
[
"Deng",
"Hao",
""
],
[
"Liu",
"Qingwen",
""
],
[
"Li",
"Gang",
""
],
[
"He",
"Bin",
""
]
] | TITLE: Can video generation replace cinematographers? Research on the cinematic
language of generated video
ABSTRACT: Recent advancements in text-to-video (T2V) generation have leveraged
diffusion models to enhance visual coherence in videos synthesized from textual
descriptions. However, existing research primarily focuses on object motion,
often overlooking cinematic language, which is crucial for conveying emotion
and narrative pacing in cinematography. To address this, we propose a threefold
approach to improve cinematic control in T2V models. First, we introduce a
meticulously annotated cinematic language dataset with twenty subcategories,
covering shot framing, shot angles, and camera movements, enabling models to
learn diverse cinematic styles. Second, we present CameraDiff, which employs
LoRA for precise and stable cinematic control, ensuring flexible shot
generation. Third, we propose CameraCLIP, designed to evaluate cinematic
alignment and guide multi-shot composition. Building on CameraCLIP, we
introduce CLIPLoRA, a CLIP-guided dynamic LoRA composition method that
adaptively fuses multiple pre-trained cinematic LoRAs, enabling smooth
transitions and seamless style blending. Experimental results demonstrate that
CameraDiff ensures stable and precise cinematic control, CameraCLIP achieves an
R@1 score of 0.83, and CLIPLoRA significantly enhances multi-shot composition
within a single video, bridging the gap between automated video generation and
professional cinematography.
|
2412.16867 | Maida Wang | Maida Wang, Jinyang Jiang, and Peter V. Coveney | A Parameter-Efficient Quantum Anomaly Detection Method on a
Superconducting Quantum Processor | 22 pages, 10 figures | null | null | null | quant-ph cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum machine learning has gained attention for its potential to address
computational challenges. However, whether those algorithms can effectively
solve practical problems and outperform their classical counterparts,
especially on current quantum hardware, remains a critical question. In this
work, we propose a novel quantum machine learning method, called
Parameter-Efficient Quantum Anomaly Detection (PEQAD), for practical image
anomaly detection, which aims to achieve both parameter efficiency and superior
accuracy compared to classical models. Emulation results indicate that PEQAD
demonstrates favourable recognition capabilities compared to classical
baselines, achieving an average accuracy of over 90% on benchmarks with
significantly fewer trainable parameters. Theoretical analysis confirms that
PEQAD has a comparable expressivity to classical counterparts while requiring
only a fraction of the parameters. Furthermore, we demonstrate the first
implementation of a quantum anomaly detection method for general image datasets
on a superconducting quantum processor. Specifically, we achieve an accuracy of
over 80% with only 16 parameters on the device, providing initial evidence of
PEQAD's practical viability in the noisy intermediate-scale quantum era and
highlighting its significant reduction in parameter requirements.
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2024 05:36:51 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jan 2025 22:01:02 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 10:57:32 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wang",
"Maida",
""
],
[
"Jiang",
"Jinyang",
""
],
[
"Coveney",
"Peter V.",
""
]
] | TITLE: A Parameter-Efficient Quantum Anomaly Detection Method on a
Superconducting Quantum Processor
ABSTRACT: Quantum machine learning has gained attention for its potential to address
computational challenges. However, whether those algorithms can effectively
solve practical problems and outperform their classical counterparts,
especially on current quantum hardware, remains a critical question. In this
work, we propose a novel quantum machine learning method, called
Parameter-Efficient Quantum Anomaly Detection (PEQAD), for practical image
anomaly detection, which aims to achieve both parameter efficiency and superior
accuracy compared to classical models. Emulation results indicate that PEQAD
demonstrates favourable recognition capabilities compared to classical
baselines, achieving an average accuracy of over 90% on benchmarks with
significantly fewer trainable parameters. Theoretical analysis confirms that
PEQAD has a comparable expressivity to classical counterparts while requiring
only a fraction of the parameters. Furthermore, we demonstrate the first
implementation of a quantum anomaly detection method for general image datasets
on a superconducting quantum processor. Specifically, we achieve an accuracy of
over 80% with only 16 parameters on the device, providing initial evidence of
PEQAD's practical viability in the noisy intermediate-scale quantum era and
highlighting its significant reduction in parameter requirements.
|
2412.17726 | Yuchi Wang | Yuchi Wang, Junliang Guo, Xinyi Xie, Tianyu He, Xu Sun, Jiang Bian | VidTwin: Video VAE with Decoupled Structure and Dynamics | Accepted by CVPR 2025; Project page: https://vidtwin.github.io/;
Code: https://github.com/microsoft/VidTok/tree/main/vidtwin | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in video autoencoders (Video AEs) have significantly
improved the quality and efficiency of video generation. In this paper, we
propose a novel and compact video autoencoder, VidTwin, that decouples video
into two distinct latent spaces: Structure latent vectors, which capture
overall content and global movement, and Dynamics latent vectors, which
represent fine-grained details and rapid movements. Specifically, our approach
leverages an Encoder-Decoder backbone, augmented with two submodules for
extracting these latent spaces, respectively. The first submodule employs a
Q-Former to extract low-frequency motion trends, followed by downsampling
blocks to remove redundant content details. The second averages the latent
vectors along the spatial dimension to capture rapid motion. Extensive
experiments show that VidTwin achieves a high compression rate of 0.20% with
high reconstruction quality (PSNR of 28.14 on the MCL-JCV dataset), and
performs efficiently and effectively in downstream generative tasks. Moreover,
our model demonstrates explainability and scalability, paving the way for
future research in video latent representation and generation. Check our
project page for more details: https://vidtwin.github.io/.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 17:16:58 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 17:32:31 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wang",
"Yuchi",
""
],
[
"Guo",
"Junliang",
""
],
[
"Xie",
"Xinyi",
""
],
[
"He",
"Tianyu",
""
],
[
"Sun",
"Xu",
""
],
[
"Bian",
"Jiang",
""
]
] | TITLE: VidTwin: Video VAE with Decoupled Structure and Dynamics
ABSTRACT: Recent advancements in video autoencoders (Video AEs) have significantly
improved the quality and efficiency of video generation. In this paper, we
propose a novel and compact video autoencoder, VidTwin, that decouples video
into two distinct latent spaces: Structure latent vectors, which capture
overall content and global movement, and Dynamics latent vectors, which
represent fine-grained details and rapid movements. Specifically, our approach
leverages an Encoder-Decoder backbone, augmented with two submodules for
extracting these latent spaces, respectively. The first submodule employs a
Q-Former to extract low-frequency motion trends, followed by downsampling
blocks to remove redundant content details. The second averages the latent
vectors along the spatial dimension to capture rapid motion. Extensive
experiments show that VidTwin achieves a high compression rate of 0.20% with
high reconstruction quality (PSNR of 28.14 on the MCL-JCV dataset), and
performs efficiently and effectively in downstream generative tasks. Moreover,
our model demonstrates explainability and scalability, paving the way for
future research in video latent representation and generation. Check our
project page for more details: https://vidtwin.github.io/.
|
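The PSNR figure quoted above follows the standard definition; a minimal sketch for 8-bit frames:

import numpy as np

def psnr(ref: np.ndarray, rec: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between reference and reconstruction."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)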
2501.05226 | Ludwig Leonard | Ludwic Leonard, Nils Thuerey and Ruediger Westermann | Light Transport-aware Diffusion Posterior Sampling for Single-View
Reconstruction of 3D Volumes | CVPR 2025 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a single-view reconstruction technique for volumetric fields in
which multiple light scattering effects are omnipresent, such as in clouds. We
model the unknown distribution of volumetric fields using an unconditional
diffusion model trained on a novel benchmark dataset comprising 1,000
synthetically simulated volumetric density fields. The neural diffusion model
is trained on the latent codes of a novel, diffusion-friendly, monoplanar
representation. The generative model is used to incorporate a tailored
parametric diffusion posterior sampling technique into different reconstruction
tasks. A physically-based differentiable volume renderer is employed to provide
gradients with respect to light transport in the latent space. This stands in
contrast to classic NeRF approaches and makes the reconstructions better
aligned with observed data. Through various experiments, we demonstrate
single-view reconstruction of volumetric clouds at a previously unattainable
quality.
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2025 13:29:54 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jan 2025 15:30:39 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 09:28:16 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Leonard",
"Ludwic",
""
],
[
"Thuerey",
"Nils",
""
],
[
"Westermann",
"Ruediger",
""
]
] | TITLE: Light Transport-aware Diffusion Posterior Sampling for Single-View
Reconstruction of 3D Volumes
ABSTRACT: We introduce a single-view reconstruction technique for volumetric fields in
which multiple light scattering effects are omnipresent, such as in clouds. We
model the unknown distribution of volumetric fields using an unconditional
diffusion model trained on a novel benchmark dataset comprising 1,000
synthetically simulated volumetric density fields. The neural diffusion model
is trained on the latent codes of a novel, diffusion-friendly, monoplanar
representation. The generative model is used to incorporate a tailored
parametric diffusion posterior sampling technique into different reconstruction
tasks. A physically-based differentiable volume renderer is employed to provide
gradients with respect to light transport in the latent space. This stands in
contrast to classic NeRF approaches and makes the reconstructions better
aligned with observed data. Through various experiments, we demonstrate
single-view reconstruction of volumetric clouds at a previously unattainable
quality.
|
2501.08096 | Guizhe Jin | Guizhe Jin, Zhuoren Li, Bo Leng, Wei Han, Lu Xiong, and Chen Sun | Hybrid Action Based Reinforcement Learning for Multi-Objective
Compatible Autonomous Driving | 12 pages, 9 figures, 5 tables | null | null | null | cs.RO cs.AI cs.ET cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement Learning (RL) has shown excellent performance in solving
decision-making and control problems of autonomous driving, which is
increasingly applied in diverse driving scenarios. However, driving is a
multi-attribute problem, leading to challenges in achieving multi-objective
compatibility for current RL methods, especially in both policy execution and
policy iteration. On the one hand, the common action space structure with
single action type limits driving flexibility or results in large behavior
fluctuations during policy execution. On the other hand, the multi-attribute
weighted single reward function results in the agent's disproportionate
attention to certain objectives during policy iterations. To this end, we
propose a Multi-objective Ensemble-Critic reinforcement learning method with
Hybrid Parametrized Action for multi-objective compatible autonomous driving.
Specifically, a parameterized action space is constructed to generate hybrid
driving actions, combining both abstract guidance and concrete control
commands. A multi-objective critics architecture is constructed considering
multiple attribute rewards, ensuring simultaneous focus on different
driving objectives. Additionally, an uncertainty-based exploration strategy is
introduced to help the agent approach a viable driving policy faster. The
experimental results in both the simulated traffic environment and the HighD
dataset demonstrate that our method can achieve multi-objective compatible
autonomous driving in terms of driving efficiency, action consistency, and
safety. It enhances general driving performance while significantly
increasing training efficiency.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2025 13:10:13 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 14:49:25 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Jin",
"Guizhe",
""
],
[
"Li",
"Zhuoren",
""
],
[
"Leng",
"Bo",
""
],
[
"Han",
"Wei",
""
],
[
"Xiong",
"Lu",
""
],
[
"Sun",
"Chen",
""
]
] | TITLE: Hybrid Action Based Reinforcement Learning for Multi-Objective
Compatible Autonomous Driving
ABSTRACT: Reinforcement Learning (RL) has shown excellent performance in solving
decision-making and control problems of autonomous driving, which is
increasingly applied in diverse driving scenarios. However, driving is a
multi-attribute problem, leading to challenges in achieving multi-objective
compatibility for current RL methods, especially in both policy execution and
policy iteration. On the one hand, the common action space structure with
single action type limits driving flexibility or results in large behavior
fluctuations during policy execution. On the other hand, the multi-attribute
weighted single reward function results in the agent's disproportionate
attention to certain objectives during policy iterations. To this end, we
propose a Multi-objective Ensemble-Critic reinforcement learning method with
Hybrid Parametrized Action for multi-objective compatible autonomous driving.
Specifically, a parameterized action space is constructed to generate hybrid
driving actions, combining both abstract guidance and concrete control
commands. A multi-objective critics architecture is constructed considering
multiple attribute rewards, ensuring simultaneous focus on different
driving objectives. Additionally, an uncertainty-based exploration strategy is
introduced to help the agent approach a viable driving policy faster. The
experimental results in both the simulated traffic environment and the HighD
dataset demonstrate that our method can achieve multi-objective compatible
autonomous driving in terms of driving efficiency, action consistency, and
safety. It enhances general driving performance while significantly
increasing training efficiency.
|
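A sketch of what a hybrid (parameterized) driving action looks like: a discrete abstract-guidance choice paired with continuous control parameters drawn for that choice. The shapes and Gaussian parameterization are assumptions for illustration, not the paper's architecture:

import numpy as np

rng = np.random.default_rng(0)

def sample_hybrid_action(logits: np.ndarray, means: np.ndarray,
                         stds: np.ndarray):
    """logits: (K,) scores over K abstract options; means/stds: (K, D)
    Gaussian parameters for each option's D continuous controls."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    k = int(rng.choice(len(p), p=p))        # discrete guidance choice
    params = rng.normal(means[k], stds[k])  # e.g., target speed, steering
    return k, params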
2501.10542 | Asif Samir | Asif Mohammed Samir, Mohammad Masudur Rahman | Improved IR-based Bug Localization with Intelligent Relevance Feedback | 13 pages, 5 figures | null | null | null | cs.SE cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Software bugs pose a significant challenge during development and
maintenance, and practitioners spend nearly 50% of their time dealing with
bugs. Many existing techniques adopt Information Retrieval (IR) to localize a
reported bug using textual and semantic relevance between bug reports and
source code. However, they often struggle to bridge a critical gap between bug
reports and code that requires in-depth contextual understanding, which goes
beyond textual or semantic relevance. In this paper, we present a novel
technique for bug localization - BRaIn - that addresses the contextual gaps by
assessing the relevance between bug reports and code with Large Language Models
(LLM). It then leverages the LLM's feedback (a.k.a., Intelligent Relevance
Feedback) to reformulate queries and re-rank source documents, improving bug
localization. We evaluate BRaIn using a benchmark dataset, Bench4BL, and three
performance metrics and compare it against six baseline techniques from the
literature. Our experimental results show that BRaIn outperforms baselines by
87.6%, 89.5%, and 48.8% margins in MAP, MRR, and HIT@K, respectively.
Additionally, it can localize approximately 52% of bugs that cannot be
localized by the baseline techniques due to the poor quality of corresponding
bug reports. By addressing the contextual gaps and introducing Intelligent
Relevance Feedback, BRaIn advances not only theory but also improves IR-based
bug localization.
| [
{
"version": "v1",
"created": "Fri, 17 Jan 2025 20:29:38 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 23:51:49 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Samir",
"Asif Mohammed",
""
],
[
"Rahman",
"Mohammad Masudur",
""
]
] | TITLE: Improved IR-based Bug Localization with Intelligent Relevance Feedback
ABSTRACT: Software bugs pose a significant challenge during development and
maintenance, and practitioners spend nearly 50% of their time dealing with
bugs. Many existing techniques adopt Information Retrieval (IR) to localize a
reported bug using textual and semantic relevance between bug reports and
source code. However, they often struggle to bridge a critical gap between bug
reports and code that requires in-depth contextual understanding, which goes
beyond textual or semantic relevance. In this paper, we present a novel
technique for bug localization - BRaIn - that addresses the contextual gaps by
assessing the relevance between bug reports and code with Large Language Models
(LLM). It then leverages the LLM's feedback (a.k.a., Intelligent Relevance
Feedback) to reformulate queries and re-rank source documents, improving bug
localization. We evaluate BRaIn using a benchmark dataset, Bench4BL, and three
performance metrics and compare it against six baseline techniques from the
literature. Our experimental results show that BRaIn outperforms baselines by
87.6%, 89.5%, and 48.8% margins in MAP, MRR, and HIT@K, respectively.
Additionally, it can localize approximately 52% of bugs that cannot be
localized by the baseline techniques due to the poor quality of corresponding
bug reports. By addressing the contextual gaps and introducing Intelligent
Relevance Feedback, BRaIn not only advances theory but also improves IR-based
bug localization.
|
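The three metrics above are the usual retrieval measures; minimal sketches of their per-query forms, where relevance is a 0/1 list over the ranked source documents:

def average_precision(relevance):
    """AP for one query; averaged over queries this yields MAP."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            score += hits / rank
    return score / max(hits, 1)

def reciprocal_rank(relevance):
    """RR for one query; averaged over queries this yields MRR."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def hit_at_k(relevance, k=10):
    """1 if any relevant document appears in the top k, else 0."""
    return 1.0 if any(relevance[:k]) else 0.0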
2501.12086 | Hu Cui | Hu Cui, Renjing Huang, Ruoyu Zhang, Tessai Hayama | DSTSA-GCN: Advancing Skeleton-Based Gesture Recognition with
Semantic-Aware Spatio-Temporal Topology Modeling | submit to Neurocomputing | Neurocomputing, 2025, 130066, ISSN 0925-2312, | 10.1016/j.neucom.2025.130066 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph convolutional networks (GCNs) have emerged as a powerful tool for
skeleton-based action and gesture recognition, thanks to their ability to model
spatial and temporal dependencies in skeleton data. However, existing GCN-based
methods face critical limitations: (1) they lack effective spatio-temporal
topology modeling that captures dynamic variations in skeletal motion, and (2)
they struggle to model multiscale structural relationships beyond local joint
connectivity. To address these issues, we propose a novel framework called
Dynamic Spatial-Temporal Semantic Awareness Graph Convolutional Network
(DSTSA-GCN). DSTSA-GCN introduces three key modules: Group Channel-wise Graph
Convolution (GC-GC), Group Temporal-wise Graph Convolution (GT-GC), and
Multi-Scale Temporal Convolution (MS-TCN). GC-GC and GT-GC operate in parallel
to independently model channel-specific and frame-specific correlations,
enabling robust topology learning that accounts for temporal variations.
Additionally, both modules employ a grouping strategy to adaptively capture
multiscale structural relationships. Complementing this, MS-TCN enhances
temporal modeling through group-wise temporal convolutions with diverse
receptive fields. Extensive experiments demonstrate that DSTSA-GCN
significantly improves the topology modeling capabilities of GCNs, achieving
state-of-the-art performance on benchmark datasets for gesture and action
recognition, including SHREC17 Track, DHG-14/28, NTU-RGB+D, and NTU-RGB+D-120.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 12:28:36 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Cui",
"Hu",
""
],
[
"Huang",
"Renjing",
""
],
[
"Zhang",
"Ruoyu",
""
],
[
"Hayama",
"Tessai",
""
]
] | TITLE: DSTSA-GCN: Advancing Skeleton-Based Gesture Recognition with
Semantic-Aware Spatio-Temporal Topology Modeling
ABSTRACT: Graph convolutional networks (GCNs) have emerged as a powerful tool for
skeleton-based action and gesture recognition, thanks to their ability to model
spatial and temporal dependencies in skeleton data. However, existing GCN-based
methods face critical limitations: (1) they lack effective spatio-temporal
topology modeling that captures dynamic variations in skeletal motion, and (2)
they struggle to model multiscale structural relationships beyond local joint
connectivity. To address these issues, we propose a novel framework called
Dynamic Spatial-Temporal Semantic Awareness Graph Convolutional Network
(DSTSA-GCN). DSTSA-GCN introduces three key modules: Group Channel-wise Graph
Convolution (GC-GC), Group Temporal-wise Graph Convolution (GT-GC), and
Multi-Scale Temporal Convolution (MS-TCN). GC-GC and GT-GC operate in parallel
to independently model channel-specific and frame-specific correlations,
enabling robust topology learning that accounts for temporal variations.
Additionally, both modules employ a grouping strategy to adaptively capture
multiscale structural relationships. Complementing this, MS-TCN enhances
temporal modeling through group-wise temporal convolutions with diverse
receptive fields. Extensive experiments demonstrate that DSTSA-GCN
significantly improves the topology modeling capabilities of GCNs, achieving
state-of-the-art performance on benchmark datasets for gesture and action
recognition, including SHREC17 Track, DHG-14/28, NTU-RGB+D, and NTU-RGB+D-120.
|
2501.13796 | Changhao Wang | Changhao Wang, Guanwen Zhang, Zhengyun Cheng, Wei Zhou | PromptMono: Cross Prompting Attention for Self-Supervised Monocular
Depth Estimation in Challenging Environments | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considerable efforts have been made to improve monocular depth estimation
under ideal conditions. However, in challenging environments, monocular depth
estimation still faces difficulties. In this paper, we introduce visual prompt
learning for predicting depth across different environments within a unified
model, and present a self-supervised learning framework called PromptMono. It
employs a set of learnable parameters as visual prompts to capture
domain-specific knowledge. To integrate prompting information into image
representations, a novel gated cross prompting attention (GCPA) module is
proposed, which enhances the depth estimation in diverse conditions. We
evaluate the proposed PromptMono on the Oxford Robotcar dataset and the
nuScenes dataset. Experimental results demonstrate the superior performance of
the proposed method.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 16:14:02 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 06:53:05 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wang",
"Changhao",
""
],
[
"Zhang",
"Guanwen",
""
],
[
"Cheng",
"Zhengyun",
""
],
[
"Zhou",
"Wei",
""
]
] | TITLE: PromptMono: Cross Prompting Attention for Self-Supervised Monocular
Depth Estimation in Challenging Environments
ABSTRACT: Considerable efforts have been made to improve monocular depth estimation
under ideal conditions. However, in challenging environments, monocular depth
estimation still faces difficulties. In this paper, we introduce visual prompt
learning for predicting depth across different environments within a unified
model, and present a self-supervised learning framework called PromptMono. It
employs a set of learnable parameters as visual prompts to capture
domain-specific knowledge. To integrate prompting information into image
representations, a novel gated cross prompting attention (GCPA) module is
proposed, which enhances the depth estimation in diverse conditions. We
evaluate the proposed PromptMono on the Oxford Robotcar dataset and the
nuScenes dataset. Experimental results demonstrate the superior performance of
the proposed method.
|
2502.05214 | Amy Rafferty | Amy Rafferty, Rishi Ramaesh, Ajitha Rajan | CoRPA: Adversarial Image Generation for Chest X-rays Using Concept
Vector Perturbations and Generative Models | null | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning models for medical image classification tasks are becoming
widely implemented in AI-assisted diagnostic tools, aiming to enhance
diagnostic accuracy, reduce clinician workloads, and improve patient outcomes.
However, their vulnerability to adversarial attacks poses significant risks to
patient safety. Current attack methodologies use general techniques such as
model querying or pixel value perturbations to generate adversarial examples
designed to fool a model. These approaches may not adequately address the
unique characteristics of clinical errors stemming from missed or incorrectly
identified clinical features. We propose the Concept-based Report Perturbation
Attack (CoRPA), a clinically-focused black-box adversarial attack framework
tailored to the medical imaging domain. CoRPA leverages clinical concepts to
generate adversarial radiological reports and images that closely mirror
realistic clinical misdiagnosis scenarios. We demonstrate the utility of CoRPA
using the MIMIC-CXR-JPG dataset of chest X-rays and radiological reports. Our
evaluation reveals that deep learning models exhibiting strong resilience to
conventional adversarial attacks are significantly less robust when subjected
to CoRPA's clinically-focused perturbations. This underscores the importance of
addressing domain-specific vulnerabilities in medical AI systems. By
introducing a specialized adversarial attack framework, this study provides a
foundation for developing robust, real-world-ready AI models in healthcare,
ensuring their safe and reliable deployment in high-stakes clinical
environments.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 17:14:31 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 15:34:58 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Rafferty",
"Amy",
""
],
[
"Ramaesh",
"Rishi",
""
],
[
"Rajan",
"Ajitha",
""
]
] | TITLE: CoRPA: Adversarial Image Generation for Chest X-rays Using Concept
Vector Perturbations and Generative Models
ABSTRACT: Deep learning models for medical image classification tasks are becoming
widely implemented in AI-assisted diagnostic tools, aiming to enhance
diagnostic accuracy, reduce clinician workloads, and improve patient outcomes.
However, their vulnerability to adversarial attacks poses significant risks to
patient safety. Current attack methodologies use general techniques such as
model querying or pixel value perturbations to generate adversarial examples
designed to fool a model. These approaches may not adequately address the
unique characteristics of clinical errors stemming from missed or incorrectly
identified clinical features. We propose the Concept-based Report Perturbation
Attack (CoRPA), a clinically-focused black-box adversarial attack framework
tailored to the medical imaging domain. CoRPA leverages clinical concepts to
generate adversarial radiological reports and images that closely mirror
realistic clinical misdiagnosis scenarios. We demonstrate the utility of CoRPA
using the MIMIC-CXR-JPG dataset of chest X-rays and radiological reports. Our
evaluation reveals that deep learning models exhibiting strong resilience to
conventional adversarial attacks are significantly less robust when subjected
to CoRPA's clinically-focused perturbations. This underscores the importance of
addressing domain-specific vulnerabilities in medical AI systems. By
introducing a specialized adversarial attack framework, this study provides a
foundation for developing robust, real-world-ready AI models in healthcare,
ensuring their safe and reliable deployment in high-stakes clinical
environments.
|
2502.08127 | Qianqian Xie | Lingfei Qian and Weipeng Zhou and Yan Wang and Xueqing Peng and Han Yi
and Jimin Huang and Qianqian Xie and Jianyun Nie | Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance | 13 pages, 2 figures, 3 Tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | While large language models (LLMs) have shown strong general reasoning
capabilities, their effectiveness in financial reasoning, which is crucial for
real-world financial applications, remains underexplored. In this study, we
conduct a comprehensive evaluation of 24 state-of-the-art general and
reasoning-focused LLMs across four complex financial reasoning tasks involving
financial text, tabular data, and equations. We assess key capabilities such as
numerical reasoning, tabular interpretation, financial terminology
comprehension, long-context understanding, and equation-based problem solving.
Our analysis reveals that while data quality and pretraining contribute to
performance, general techniques like chain-of-thought (CoT) fine-tuning offer
limited gains in financial tasks. To address this, we propose two
domain-adapted models, Fino1-8B and Fino1-14B, trained with CoT fine-tuning and
reinforcement learning using domain-specific reasoning paths. Our models are
trained on a carefully curated dataset integrating high-quality examples from
diverse sources, covering financial reports, tables, equations, and structured
XBRL texts. Despite limited training data, they achieve a 7-9% performance
improvement, outperforming several advanced LLMs, including GPT-o1,
GPT-o3-mini, and GPT-4.5, and are comparable with DeepSeek models (V3 and R1),
demonstrating strong practical value in resource-constrained scenarios. Our
findings highlight the need for domain-specific adaptations in financial
reasoning, and we release all datasets, models, and code for future research.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 05:13:04 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 08:33:36 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Qian",
"Lingfei",
""
],
[
"Zhou",
"Weipeng",
""
],
[
"Wang",
"Yan",
""
],
[
"Peng",
"Xueqing",
""
],
[
"Yi",
"Han",
""
],
[
"Huang",
"Jimin",
""
],
[
"Xie",
"Qianqian",
""
],
[
"Nie",
"Jianyun",
""
]
] | TITLE: Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance
ABSTRACT: While large language models (LLMs) have shown strong general reasoning
capabilities, their effectiveness in financial reasoning, which is crucial for
real-world financial applications, remains underexplored. In this study, we
conduct a comprehensive evaluation of 24 state-of-the-art general and
reasoning-focused LLMs across four complex financial reasoning tasks involving
financial text, tabular data, and equations. We assess key capabilities such as
numerical reasoning, tabular interpretation, financial terminology
comprehension, long-context understanding, and equation-based problem solving.
Our analysis reveals that while data quality and pretraining contribute to
performance, general techniques like chain-of-thought (CoT) fine-tuning offer
limited gains in financial tasks. To address this, we propose two
domain-adapted models, Fino1-8B and Fino1-14B, trained with CoT fine-tuning and
reinforcement learning using domain-specific reasoning paths. Our models are
trained on a carefully curated dataset integrating high-quality examples from
diverse sources, covering financial reports, tables, equations, and structured
XBRL texts. Despite limited training data, they achieve a 7-9% performance
improvement, outperforming several advanced LLMs, including GPT-o1,
GPT-o3-mini, and GPT-4.5, and are comparable with DeepSeek models (V3 and R1),
demonstrating strong practical value in resource-constrained scenarios. Our
findings highlight the need for domain-specific adaptations in financial
reasoning, and we release all datasets, models, and code for future research.
|
2502.18516 | Runze Jiang | Runze Jiang and Pengjian Shang | Gradient entropy (GradEn): The two dimensional version of slope entropy
for image analysis | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information theory and Shannon entropy are essential for quantifying
irregularity in complex systems or signals. Recently, two-dimensional entropy
methods, such as two-dimensional sample entropy, distribution entropy, and
permutation entropy, have been proposed for analyzing 2D texture or image data.
This paper introduces Gradient entropy (GradEn), an extension of slope entropy
to 2D, which considers both symbolic patterns and amplitude information,
enabling better feature extraction from image data. We evaluate GradEn with
simulated data, including 2D colored noise, 2D mixed processes, and the
logistic map. Results show the ability of GradEn to distinguish images with
various characteristics while maintaining low computational cost. Real-world
datasets, consist of texture, fault gear, and railway corrugation signals,
demonstrate the superior performance of GradEn in classification tasks compared
to other 2D entropy methods. In conclusion, GradEn is an effective tool for
image characterization, offering a novel approach for image processing and
recognition.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2025 02:05:01 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 10:09:37 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Jiang",
"Runze",
""
],
[
"Shang",
"Pengjian",
""
]
] | TITLE: Gradient entropy (GradEn): The two dimensional version of slope entropy
for image analysis
ABSTRACT: Information theory and Shannon entropy are essential for quantifying
irregularity in complex systems or signals. Recently, two-dimensional entropy
methods, such as two-dimensional sample entropy, distribution entropy, and
permutation entropy, have been proposed for analyzing 2D texture or image data.
This paper introduces Gradient entropy (GradEn), an extension of slope entropy
to 2D, which considers both symbolic patterns and amplitude information,
enabling better feature extraction from image data. We evaluate GradEn with
simulated data, including 2D colored noise, 2D mixed processes, and the
logistic map. Results show the ability of GradEn to distinguish images with
various characteristics while maintaining low computational cost. Real-world
datasets, consisting of texture, fault gear, and railway corrugation signals,
demonstrate the superior performance of GradEn in classification tasks compared
to other 2D entropy methods. In conclusion, GradEn is an effective tool for
image characterization, offering a novel approach for image processing and
recognition.
|
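GradEn's exact construction is given in the paper; as a simplified stand-in, the sketch below takes the Shannon entropy of quantized gradient magnitudes, which captures the amplitude side of the idea but not the full symbolic slope patterns:

import numpy as np

def gradient_magnitude_entropy(img: np.ndarray, n_bins: int = 8) -> float:
    """Shannon entropy of binned gradient magnitudes of a 2D array."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy).ravel()
    hist, _ = np.histogram(mag, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())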
2503.00359 | Yunhan Zhao | Qianqian Shen, Yunhan Zhao, Nahyun Kwon, Jeeeun Kim, Yanan Li, Shu
Kong | Solving Instance Detection from an Open-World Perspective | Accepted at CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Instance detection (InsDet) aims to localize specific object instances within
novel scene imagery based on given visual references. Technically, it
requires proposal detection to identify all possible object instances, followed
by instance-level matching to pinpoint the ones of interest. Its open-world
nature supports its broad applications from robotics to AR/VR but also presents
significant challenges: methods must generalize to unknown testing data
distributions because (1) the testing scene imagery is unseen during training,
and (2) there are domain gaps between visual references and detected proposals.
Existing methods tackle these challenges by synthesizing diverse training
examples or utilizing off-the-shelf foundation models (FMs). However, they only
partially capitalize on the available open-world information. In contrast, we
approach InsDet from an Open-World perspective, introducing our method IDOW. We
find that, while pretrained FMs yield high recall in instance detection, they
are not specifically optimized for instance-level feature matching. Therefore,
we adapt pretrained FMs for improved instance-level matching using open-world
data. Our approach incorporates metric learning along with novel data
augmentations, which sample distractors as negative examples and synthesize
novel-view instances to enrich the visual references. Extensive experiments
demonstrate that our method significantly outperforms prior works, achieving
>10 AP over previous results on two recently released challenging benchmark
datasets in both conventional and novel instance detection settings.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 05:56:58 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 07:26:47 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Shen",
"Qianqian",
""
],
[
"Zhao",
"Yunhan",
""
],
[
"Kwon",
"Nahyun",
""
],
[
"Kim",
"Jeeeun",
""
],
[
"Li",
"Yanan",
""
],
[
"Kong",
"Shu",
""
]
] | TITLE: Solving Instance Detection from an Open-World Perspective
ABSTRACT: Instance detection (InsDet) aims to localize specific object instances within
novel scene imagery based on given visual references. Technically, it
requires proposal detection to identify all possible object instances, followed
by instance-level matching to pinpoint the ones of interest. Its open-world
nature supports its broad applications from robotics to AR/VR but also presents
significant challenges: methods must generalize to unknown testing data
distributions because (1) the testing scene imagery is unseen during training,
and (2) there are domain gaps between visual references and detected proposals.
Existing methods tackle these challenges by synthesizing diverse training
examples or utilizing off-the-shelf foundation models (FMs). However, they only
partially capitalize on the available open-world information. In contrast, we
approach InsDet from an Open-World perspective, introducing our method IDOW. We
find that, while pretrained FMs yield high recall in instance detection, they
are not specifically optimized for instance-level feature matching. Therefore,
we adapt pretrained FMs for improved instance-level matching using open-world
data. Our approach incorporates metric learning along with novel data
augmentations, which sample distractors as negative examples and synthesize
novel-view instances to enrich the visual references. Extensive experiments
demonstrate that our method significantly outperforms prior works, achieving
>10 AP over previous results on two recently released challenging benchmark
datasets in both conventional and novel instance detection settings.
|
2503.00599 | Atul Anand Gopalakrishnan | Atul Anand Gopalakrishnan and Jakir Hossain and Tugrulcan Elmas and
Ahmet Erdem Sariyuce | Large Engagement Networks for Classifying Coordinated Campaigns and
Organic Twitter Trends | 14 Pages | ICWSM 2025 | null | null | cs.SI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Social media users and inauthentic accounts, such as bots, may coordinate in
promoting their topics. Such topics may give the impression that they are
organically popular among the public, even though they are astroturfing
campaigns that are centrally managed. It is challenging to predict if a topic
is organic or a coordinated campaign due to the lack of reliable ground truth.
In this paper, we create such ground truth by detecting the campaigns promoted
by ephemeral astroturfing attacks. These attacks push any topic to Twitter's
(X) trends list by employing bots that tweet in a coordinated manner in a short
period and then immediately delete their tweets. We manually curate a dataset
of organic Twitter trends. We then create engagement networks out of these
datasets which can serve as a challenging testbed for graph classification task
to distinguish between campaigns and organic trends. Engagement networks
consist of users as nodes and engagements as edges (retweets, replies, and
quotes) between users. We release the engagement networks for 179 campaigns and
135 non-campaigns, and also provide finer-grained labels to characterize the type
of the campaigns and non-campaigns. Our dataset, LEN (Large Engagement
Networks), is available in the URL below. In comparison to traditional graph
classification datasets, which are small with tens of nodes and hundreds of
edges at most, graphs in LEN are larger. The average graph in LEN has ~11K
nodes and ~23K edges. We show that state-of-the-art GNN methods give only
mediocre results for campaign vs. non-campaign and campaign type classification
on LEN. LEN offers a unique and challenging playfield for the graph
classification problem. We believe that LEN will help advance the frontiers of
graph classification techniques on large networks and also provide an
interesting use case in terms of distinguishing coordinated campaigns and
organic trends.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 19:50:32 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 14:54:05 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Gopalakrishnan",
"Atul Anand",
""
],
[
"Hossain",
"Jakir",
""
],
[
"Elmas",
"Tugrulcan",
""
],
[
"Sariyuce",
"Ahmet Erdem",
""
]
] | TITLE: Large Engagement Networks for Classifying Coordinated Campaigns and
Organic Twitter Trends
ABSTRACT: Social media users and inauthentic accounts, such as bots, may coordinate in
promoting their topics. Such topics may give the impression that they are
organically popular among the public, even though they are astroturfing
campaigns that are centrally managed. It is challenging to predict if a topic
is organic or a coordinated campaign due to the lack of reliable ground truth.
In this paper, we create such ground truth by detecting the campaigns promoted
by ephemeral astroturfing attacks. These attacks push any topic to Twitter's
(X) trends list by employing bots that tweet in a coordinated manner in a short
period and then immediately delete their tweets. We manually curate a dataset
of organic Twitter trends. We then create engagement networks out of these
datasets which can serve as a challenging testbed for graph classification task
to distinguish between campaigns and organic trends. Engagement networks
consist of users as nodes and engagements as edges (retweets, replies, and
quotes) between users. We release the engagement networks for 179 campaigns and
135 non-campaigns, and also provide finer-grained labels to characterize the type
of the campaigns and non-campaigns. Our dataset, LEN (Large Engagement
Networks), is available in the URL below. In comparison to traditional graph
classification datasets, which are small with tens of nodes and hundreds of
edges at most, graphs in LEN are larger. The average graph in LEN has ~11K
nodes and ~23K edges. We show that state-of-the-art GNN methods give only
mediocre results for campaign vs. non-campaign and campaign type classification
on LEN. LEN offers a unique and challenging playfield for the graph
classification problem. We believe that LEN will help advance the frontiers of
graph classification techniques on large networks and also provide an
interesting use case in terms of distinguishing coordinated campaigns and
organic trends.
|
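A sketch of the engagement-network construction described above, with users as nodes and typed engagement edges; the tuple schema is assumed for illustration and is not LEN's actual file format:

import networkx as nx

def build_engagement_network(engagements):
    """engagements: iterable of (source_user, target_user, kind) with kind
    in {'retweet', 'reply', 'quote'}."""
    g = nx.MultiDiGraph()
    for src, dst, kind in engagements:
        g.add_edge(src, dst, kind=kind)
    return g

g = build_engagement_network([("u1", "u2", "retweet"), ("u3", "u2", "reply")])
print(g.number_of_nodes(), g.number_of_edges())  # 3 2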
2503.05834 | Felipe Olivares F. Olivares | Felipe Olivares and Massimiliano Zanin | Quantifying deviations from Gaussianity with application to flight
delays distributions | null | Entropy 2025 | 10.3390/e27040354 | 27(4) 354 | physics.soc-ph physics.data-an | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose a novel approach for quantifying deviations from Gaussianity by
leveraging the permutation Jensen-Shannon distance. Using stable distributions
as a flexible framework, we analyze the effects of skewness and heavy tails in
synthetic sequences. We employ phase-randomized surrogates as Gaussian
references to systematically evaluate the statistical distance between this
reference and stable distributions. Our methodology is validated using real
flight delay datasets from major airports in Europe and the United States,
revealing significant deviations from Gaussianity, particularly at high-traffic
airports. These results highlight systematic differences in air traffic
management strategies between the two geographic regions.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 12:59:16 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Olivares",
"Felipe",
""
],
[
"Zanin",
"Massimiliano",
""
]
] | TITLE: Quantifying deviations from Gaussianity with application to flight
delays distributions
ABSTRACT: We propose a novel approach for quantifying deviations from Gaussianity by
leveraging the permutation Jensen-Shannon distance. Using stable distributions
as a flexible framework, we analyze the effects of skewness and heavy tails in
synthetic sequences. We employ phase-randomized surrogates as Gaussian
references to systematically evaluate the statistical distance between this
reference and stable distributions. Our methodology is validated using real
flight delay datasets from major airports in Europe and the United States,
revealing significant deviations from Gaussianity, particularly at high-traffic
airports. These results highlight systematic differences in air traffic
management strategies between the two geographic regions.
|
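A compact sketch of the ingredients named above: ordinal-pattern distributions, the Jensen-Shannon distance between them, and an FFT phase-randomized surrogate as the Gaussian reference. This is a simplified reading of the method, with an arbitrary pattern order:

import numpy as np
from itertools import permutations

def ordinal_dist(x, d=3):
    """Empirical distribution of order-d permutation patterns of a series."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(np.argsort(x[i:i + d]))] += 1
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

def js_distance(p, q):
    """Square root of the Jensen-Shannon divergence (base-2 entropy)."""
    H = lambda v: -(v[v > 0] * np.log2(v[v > 0])).sum()
    m = 0.5 * (p + q)
    return np.sqrt(H(m) - 0.5 * H(p) - 0.5 * H(q))

def phase_surrogate(x, rng=np.random.default_rng(0)):
    """Randomize Fourier phases while keeping the amplitude spectrum."""
    f = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(f))
    phases[0] = phases[-1] = 0.0  # keep mean and Nyquist components real
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=len(x))

x = np.random.default_rng(1).standard_normal(2048)
print(js_distance(ordinal_dist(x), ordinal_dist(phase_surrogate(x))))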
2503.06100 | Xianjie Liu | Xianjie Liu, Keren Fu and Qijun Zhao | Patch-Depth Fusion: Dichotomous Image Segmentation via Fine-Grained
Patch Strategy and Depth Integrity-Prior | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dichotomous Image Segmentation (DIS) is a high-precision object segmentation
task for high-resolution natural images. The current mainstream methods focus
on the optimization of local details but overlook the fundamental challenge of
modeling the integrity of objects. We have found that the depth integrity-prior
implicit in the pseudo-depth maps generated by Depth Anything Model v2 and
the local detail features of image patches can jointly address the above
dilemmas. Based on the above findings, we have designed a novel Patch-Depth
Fusion Network (PDFNet) for high-precision dichotomous image segmentation. The
core of PDFNet consists of three aspects. Firstly, the object perception is
enhanced through multi-modal input fusion. By utilizing the patch fine-grained
strategy, coupled with patch selection and enhancement, the sensitivity to
details is improved. Secondly, by leveraging the depth integrity-prior
distributed in the depth maps, we propose an integrity-prior loss to enhance
the uniformity of the segmentation results in the depth maps. Finally, we
utilize the features of the shared encoder and, through a simple depth
refinement decoder, improve the ability of the shared encoder to capture subtle
depth-related information in the images. Experiments on the DIS-5K dataset show
that PDFNet significantly outperforms state-of-the-art non-diffusion methods.
Due to the incorporation of the depth integrity-prior, PDFNet achieves or even
surpasses the performance of the latest diffusion-based methods while using
less than 11% of their parameters. The source code is available at
https://github.com/Tennine2077/PDFNet
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 07:02:28 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 13:04:29 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 14:47:24 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Liu",
"Xianjie",
""
],
[
"Fu",
"Keren",
""
],
[
"Zhao",
"Qijun",
""
]
] | TITLE: Patch-Depth Fusion: Dichotomous Image Segmentation via Fine-Grained
Patch Strategy and Depth Integrity-Prior
ABSTRACT: Dichotomous Image Segmentation (DIS) is a high-precision object segmentation
task for high-resolution natural images. The current mainstream methods focus
on the optimization of local details but overlook the fundamental challenge of
modeling the integrity of objects. We have found that the depth integrity-prior
implicit in the pseudo-depth maps generated by Depth Anything Model v2 and
the local detail features of image patches can jointly address the above
dilemmas. Based on the above findings, we have designed a novel Patch-Depth
Fusion Network (PDFNet) for high-precision dichotomous image segmentation. The
core of PDFNet consists of three aspects. Firstly, the object perception is
enhanced through multi-modal input fusion. By utilizing the patch fine-grained
strategy, coupled with patch selection and enhancement, the sensitivity to
details is improved. Secondly, by leveraging the depth integrity-prior
distributed in the depth maps, we propose an integrity-prior loss to enhance
the uniformity of the segmentation results in the depth maps. Finally, we
utilize the features of the shared encoder and, through a simple depth
refinement decoder, improve the ability of the shared encoder to capture subtle
depth-related information in the images. Experiments on the DIS-5K dataset show
that PDFNet significantly outperforms state-of-the-art non-diffusion methods.
Due to the incorporation of the depth integrity-prior, PDFNet matches or even
surpasses the performance of the latest diffusion-based methods while using
less than 11% of their parameters. The source code is available at
https://github.com/Tennine2077/PDFNet
|
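The abstract above does not give the loss in closed form, so the following is only a hedged sketch of what an integrity-prior style objective could look like under our own assumptions: it penalizes the spread of pseudo-depth values inside the predicted object mask, encouraging the segmentation to cover depth-coherent regions as wholes.

```python
import torch

def depth_uniformity_loss(pred_mask, depth, eps=1e-6):
    """pred_mask: (B,1,H,W) soft mask in [0,1]; depth: (B,1,H,W) pseudo-depth.
    Returns the mean mask-weighted variance of depth, a proxy for how
    depth-coherent the segmented region is."""
    w = pred_mask / (pred_mask.sum(dim=(2, 3), keepdim=True) + eps)
    mean_d = (w * depth).sum(dim=(2, 3), keepdim=True)   # masked mean depth
    var_d = (w * (depth - mean_d) ** 2).sum(dim=(2, 3))  # masked variance
    return var_d.mean()

mask = torch.rand(2, 1, 64, 64, requires_grad=True)
depth = torch.rand(2, 1, 64, 64)
loss = depth_uniformity_loss(mask, depth)
loss.backward()
print(float(loss))
```

A real implementation would combine such a term with the usual segmentation losses rather than use it alone.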
2503.09257 | Haixing Gong | Haixing Gong, Hui Zou, Xingzhou Liang, Shiyuan Meng, Pinlong Cai,
Xingcheng Xu, Jingjing Qu | DeepInnovation AI: A Global Dataset Mapping the AI innovation from
Academic Research to Industrial Patents | 32 pages and 8 figures | null | null | null | cs.DB cs.AI cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the rapidly evolving field of artificial intelligence (AI), mapping
innovation patterns and understanding effective technology transfer from
research to applications are essential for economic growth. However, existing
data infrastructures suffer from fragmentation, incomplete coverage, and
insufficient evaluative capacity. Here, we present DeepInnovationAI, a
comprehensive global dataset containing three structured files.
DeepPatentAI.csv: Contains 2,356,204 patent records with 8 field-specific
attributes. DeepDiveAI.csv: Encompasses 3,511,929 academic publications with 13
metadata fields. These two datasets leverage large language models,
multilingual text analysis and dual-layer BERT classifiers to accurately
identify AI-related content, while utilizing hypergraph analysis to create
robust innovation metrics. Additionally, DeepCosineAI.csv: By applying semantic
vector proximity analysis, this file presents approximately one hundred million
calculated paper-patent similarity pairs to enhance understanding of how
theoretical advancements translate into commercial technologies.
DeepInnovationAI enables researchers, policymakers, and industry leaders to
anticipate trends and identify collaboration opportunities. With extensive
temporal and geographical scope, it supports detailed analysis of technological
development patterns and international competition dynamics, establishing a
foundation for modeling AI innovation and technology transfer processes.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 10:56:02 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 05:53:58 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Mar 2025 15:25:46 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Mar 2025 08:22:52 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Gong",
"Haixing",
""
],
[
"Zou",
"Hui",
""
],
[
"Liang",
"Xingzhou",
""
],
[
"Meng",
"Shiyuan",
""
],
[
"Cai",
"Pinlong",
""
],
[
"Xu",
"Xingcheng",
""
],
[
"Qu",
"Jingjing",
""
]
] | TITLE: DeepInnovation AI: A Global Dataset Mapping the AI innovation from
Academic Research to Industrial Patents
ABSTRACT: In the rapidly evolving field of artificial intelligence (AI), mapping
innovation patterns and understanding effective technology transfer from
research to applications are essential for economic growth. However, existing
data infrastructures suffer from fragmentation, incomplete coverage, and
insufficient evaluative capacity. Here, we present DeepInnovationAI, a
comprehensive global dataset containing three structured files.
DeepPatentAI.csv: Contains 2,356,204 patent records with 8 field-specific
attributes. DeepDiveAI.csv: Encompasses 3,511,929 academic publications with 13
metadata fields. These two datasets leverage large language models,
multilingual text analysis and dual-layer BERT classifiers to accurately
identify AI-related content, while utilizing hypergraph analysis to create
robust innovation metrics. Additionally, DeepCosineAI.csv: By applying semantic
vector proximity analysis, this file presents approximately one hundred million
calculated paper-patent similarity pairs to enhance understanding of how
theoretical advancements translate into commercial technologies.
DeepInnovationAI enables researchers, policymakers, and industry leaders to
anticipate trends and identify collaboration opportunities. With extensive
temporal and geographical scope, it supports detailed analysis of technological
development patterns and international competition dynamics, establishing a
foundation for modeling AI innovation and technology transfer processes.
|
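As a hedged illustration of how paper-patent similarity pairs like those in DeepCosineAI.csv can be produced, the snippet below embeds texts and scores every pair by cosine similarity; TF-IDF stands in here for the semantic vectors actually used, and the texts are toy examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = ["graph neural networks for molecule property prediction",
          "transformer language models for code generation"]
patents = ["method for predicting chemical properties using neural graphs",
           "system for automatic program synthesis from natural language"]

# fit one shared vocabulary, then score all paper-patent pairs at once
vec = TfidfVectorizer().fit(papers + patents)
scores = cosine_similarity(vec.transform(papers), vec.transform(patents))
for i, row in enumerate(scores):
    for j, s in enumerate(row):
        print(f"paper {i} ~ patent {j}: {s:.3f}")
```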
2503.11022 | Hai Huang | Hai Huang, Ziteng Xu, Qi Xin, and Zhaoyu Zhang | Towards Efficient PCSEL Design: A Fully AI-driven Approach | null | null | null | null | physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a fully AI-driven design framework for photonic crystals (PhCs),
engineered to achieve high efficiency in photonic crystal surface-emitting
lasers (PCSELs). By discretizing the PhC structure into a grid, where the edges
of the holes are represented by the cross-sections of two-dimensional Gaussian
surfaces, we achieve high-degree-of-freedom and fabrication-friendly hole
design. Coupled-wave theory (CWT) generates a dataset by evaluating
surface-emitting efficiency ($SEE$) and quality factor ($Q$) of PhC designs,
while a multi-layered neural network (NN) learns and extracts essential
features from these designs. Finally, black-box optimization (BBO) is employed
to fine-tune the photonic crystal structure, enabling a fully AI-driven design
process. The model achieves high prediction accuracy, with Pearson correlation
coefficients of 0.780 for $SEE$ and 0.887 for the log-transformed $Q$.
Additionally, we perform Shapley value analysis to identify the most important
Fourier coefficients, providing insights into the factors that impact the
performance of PCSEL designs. Our approach accelerates the design process by
over 1,000,000 times compared to traditional FDTD simulations, reducing
parameter optimization from two weeks to just one second, and enables efficient
optimization of high-performance PCSELs, driving the development of fully
photonic design automation (PDA).
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 02:40:30 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 10:14:24 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 06:27:45 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Mar 2025 05:19:10 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Huang",
"Hai",
""
],
[
"Xu",
"Ziteng",
""
],
[
"Xin",
"Qi",
""
],
[
"Zhang",
"Zhaoyu",
""
]
] | TITLE: Towards Efficient PCSEL Design: A Fully AI-driven Approach
ABSTRACT: We present a fully AI-driven design framework for photonic crystals (PhCs),
engineered to achieve high efficiency in photonic crystal surface-emitting
lasers (PCSELs). By discretizing the PhC structure into a grid, where the edges
of the holes are represented by the cross-sections of two-dimensional Gaussian
surfaces, we achieve high-degree-of-freedom and fabrication-friendly hole
design. Coupled-wave theory (CWT) generates a dataset by evaluating
surface-emitting efficiency ($SEE$) and quality factor ($Q$) of PhC designs,
while a multi-layered neural network (NN) learns and extracts essential
features from these designs. Finally, black-box optimization (BBO) is employed
to fine-tune the photonic crystal structure, enabling a fully AI-driven design
process. The model achieves high prediction accuracy, with Pearson correlation
coefficients of 0.780 for $SEE$ and 0.887 for the log-transformed $Q$.
Additionally, we perform Shapley value analysis to identify the most important
Fourier coefficients, providing insights into the factors that impact the
performance of PCSEL designs. Our approach accelerates the design process by
over 1,000,000 times compared to traditional FDTD simulations, reducing
parameter optimization from two weeks to just one second, and enables efficient
optimization of high-performance PCSELs, driving the development of fully
photonic design automation (PDA).
|
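A small numpy sketch of our reading of the hole representation above (illustrative only, not the authors' code): each hole is a two-dimensional Gaussian bump on a grid, and the hole edge is the cross-section of that surface at a fixed height, yielding smooth boundaries from a handful of parameters.

```python
import numpy as np

x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))

def gaussian_surface(cx, cy, sx, sy, amp=1.0):
    """2D Gaussian bump centered at (cx, cy) with widths (sx, sy)."""
    return amp * np.exp(-((x - cx) ** 2 / (2 * sx**2)
                          + (y - cy) ** 2 / (2 * sy**2)))

# two overlapping bumps; the level set at height 0.5 defines the hole edges
surface = (gaussian_surface(0.5, 0.5, 0.12, 0.07)
           + gaussian_surface(0.25, 0.3, 0.05, 0.05))
hole_mask = surface > 0.5
print(hole_mask.mean())  # fraction of the unit cell occupied by holes
```

Summing several such bumps and taking one level set gives free-form hole shapes with high degrees of freedom while keeping boundaries smooth and fabrication-friendly.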
2503.14536 | Anandakumar D | Praveen Shastry, Sowmya Chowdary Muthulur, Naveen Kumarasami,
Anandakumar D, Mounigasri M, Keerthana R, Kishore Prasath Venkatesh, Bargava
Subramanian, Kalyan Sivasailam, Revathi Ezhumalai, Abitha Marimuthu | Advancing Chronic Tuberculosis Diagnostics Using Vision-Language Models:
A Multi modal Framework for Precision Analysis | 10 pages, 3 figures | null | null | null | eess.IV cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: This study proposes a Vision-Language Model (VLM) leveraging the
SIGLIP encoder and Gemma-3b transformer decoder to enhance automated chronic
tuberculosis (TB) screening. By integrating chest X-ray images with clinical
data, the model addresses the challenges of manual interpretation, improving
diagnostic consistency and accessibility, particularly in resource-constrained
settings.
Methods: The VLM architecture combines a Vision Transformer (ViT) for visual
encoding and a transformer-based text encoder to process clinical context, such
as patient histories and treatment records. Cross-modal attention mechanisms
align radiographic features with textual information, while the Gemma-3b
decoder generates comprehensive diagnostic reports. The model was pre-trained
on 5 million paired medical images and texts and fine-tuned using 100,000
chronic TB-specific chest X-rays.
Results: The model demonstrated high precision (94 percent) and recall (94
percent) for detecting key chronic TB pathologies, including fibrosis,
calcified granulomas, and bronchiectasis. Area Under the Curve (AUC) scores
exceeded 0.93, and Intersection over Union (IoU) values were above 0.91,
validating its effectiveness in detecting and localizing TB-related
abnormalities.
Conclusion: The VLM offers a robust and scalable solution for automated
chronic TB diagnosis, integrating radiographic and clinical data to deliver
actionable and context-aware insights. Future work will address subtle
pathologies and dataset biases to enhance the model's generalizability,
ensuring equitable performance across diverse populations and healthcare
settings.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 13:49:29 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 11:00:46 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Shastry",
"Praveen",
""
],
[
"Muthulur",
"Sowmya Chowdary",
""
],
[
"Kumarasami",
"Naveen",
""
],
[
"D",
"Anandakumar",
""
],
[
"M",
"Mounigasri",
""
],
[
"R",
"Keerthana",
""
],
[
"Venkatesh",
"Kishore Prasath",
""
],
[
"Subramanian",
"Bargava",
""
],
[
"Sivasailam",
"Kalyan",
""
],
[
"Ezhumalai",
"Revathi",
""
],
[
"Marimuthu",
"Abitha",
""
]
] | TITLE: Advancing Chronic Tuberculosis Diagnostics Using Vision-Language Models:
A Multi modal Framework for Precision Analysis
ABSTRACT: Background: This study proposes a Vision-Language Model (VLM) leveraging the
SIGLIP encoder and Gemma-3b transformer decoder to enhance automated chronic
tuberculosis (TB) screening. By integrating chest X-ray images with clinical
data, the model addresses the challenges of manual interpretation, improving
diagnostic consistency and accessibility, particularly in resource-constrained
settings.
Methods: The VLM architecture combines a Vision Transformer (ViT) for visual
encoding and a transformer-based text encoder to process clinical context, such
as patient histories and treatment records. Cross-modal attention mechanisms
align radiographic features with textual information, while the Gemma-3b
decoder generates comprehensive diagnostic reports. The model was pre-trained
on 5 million paired medical images and texts and fine-tuned using 100,000
chronic TB-specific chest X-rays.
Results: The model demonstrated high precision (94 percent) and recall (94
percent) for detecting key chronic TB pathologies, including fibrosis,
calcified granulomas, and bronchiectasis. Area Under the Curve (AUC) scores
exceeded 0.93, and Intersection over Union (IoU) values were above 0.91,
validating its effectiveness in detecting and localizing TB-related
abnormalities.
Conclusion: The VLM offers a robust and scalable solution for automated
chronic TB diagnosis, integrating radiographic and clinical data to deliver
actionable and context-aware insights. Future work will address subtle
pathologies and dataset biases to enhance the model's generalizability,
ensuring equitable performance across diverse populations and healthcare
settings.
|
2503.15111 | Changlong Shi | Changlong Shi, Jinmeng Li, He Zhao, Dandan Guo, Yi Chang | FedLWS: Federated Learning with Adaptive Layer-wise Weight Shrinking | Accepted in ICLR 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Federated Learning (FL), weighted aggregation of local models is conducted
to generate a new global model, and the aggregation weights are typically
normalized to 1. A recent study identifies the global weight shrinking effect
in FL, indicating an enhancement in the global model's generalization when the
sum of weights (i.e., the shrinking factor) is smaller than 1, making how to
learn the shrinking factor a crucial question. However, principled approaches
to learning this factor, with adequate consideration of privacy concerns and
layer-wise distinctions, have not been carefully studied. To this end, we
propose a
novel model aggregation strategy, Federated Learning with Adaptive Layer-wise
Weight Shrinking (FedLWS), which adaptively designs the shrinking factor in a
layer-wise manner and avoids optimizing the shrinking factors on a proxy
dataset. We initially explored the factors affecting the shrinking factor
during the training process. Then we calculate the layer-wise shrinking factors
by considering the distinctions among each layer of the global model. FedLWS
can be easily incorporated with various existing methods due to its
flexibility. Extensive experiments under diverse scenarios demonstrate the
superiority of our method over several state-of-the-art approaches, providing a
promising tool for enhancing the global model in FL.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:10:28 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 07:37:16 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Shi",
"Changlong",
""
],
[
"Li",
"Jinmeng",
""
],
[
"Zhao",
"He",
""
],
[
"Guo",
"Dandan",
""
],
[
"Chang",
"Yi",
""
]
] | TITLE: FedLWS: Federated Learning with Adaptive Layer-wise Weight Shrinking
ABSTRACT: In Federated Learning (FL), weighted aggregation of local models is conducted
to generate a new global model, and the aggregation weights are typically
normalized to 1. A recent study identifies the global weight shrinking effect
in FL, indicating an enhancement in the global model's generalization when the
sum of weights (i.e., the shrinking factor) is smaller than 1, making how to
learn the shrinking factor a crucial question. However, principled approaches
to learning this factor, with adequate consideration of privacy concerns and
layer-wise distinctions, have not been carefully studied. To this end, we
propose a
novel model aggregation strategy, Federated Learning with Adaptive Layer-wise
Weight Shrinking (FedLWS), which adaptively designs the shrinking factor in a
layer-wise manner and avoids optimizing the shrinking factors on a proxy
dataset. We first explore the factors affecting the shrinking factor during
the training process, and then calculate the layer-wise shrinking factors
by considering the distinctions among each layer of the global model. FedLWS
can be easily incorporated with various existing methods due to its
flexibility. Extensive experiments under diverse scenarios demonstrate the
superiority of our method over several state-of-the-art approaches, providing a
promising tool for enhancing the global model in FL.
|
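A minimal PyTorch sketch of layer-wise weight shrinking at aggregation time, under our own assumptions: the weighted average of the clients' parameters for each layer is scaled by a per-layer factor gamma < 1. The constant factors used below are placeholders; FedLWS computes them adaptively.

```python
import torch

def aggregate_with_shrinking(client_states, client_weights, gammas):
    """client_states: list of state_dicts; client_weights sum to 1;
    gammas: per-layer shrinking factors in (0, 1]."""
    global_state = {}
    for name in client_states[0]:
        avg = sum(w * s[name] for w, s in zip(client_weights, client_states))
        global_state[name] = gammas.get(name, 1.0) * avg
    return global_state

# toy example: two "clients" each holding one linear layer
layers = [torch.nn.Linear(4, 2).state_dict() for _ in range(2)]
gammas = {name: 0.95 for name in layers[0]}  # hypothetical constant factors
new_global = aggregate_with_shrinking(layers, [0.5, 0.5], gammas)
print({k: v.shape for k, v in new_global.items()})
```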
2503.16081 | Yuting Zhang | Zhiyuan Liu, Yuting Zhang, Feng Liu, Changwang Zhang, Ying Sun, Jun
Wang | OThink-MR1: Stimulating multimodal generalized reasoning capabilities
via dynamic reinforcement learning | null | null | null | null | cs.LG cs.IR | http://creativecommons.org/licenses/by/4.0/ | Multimodal Large Language Models (MLLMs) have gained significant traction for
their ability to process diverse input data types and generate coherent,
contextually relevant outputs across various applications. While supervised
fine-tuning (SFT) has been the predominant approach to enhance MLLM
capabilities in task-specific optimization, it often falls short in fostering
crucial generalized reasoning abilities. Although reinforcement learning (RL)
holds great promise in overcoming these limitations, it encounters two
significant challenges: (1) its generalized capacities in multimodal tasks
remain largely unexplored, and (2) its training constraints, including the
constant Kullback-Leibler divergence or the clamp strategy, often result in
suboptimal bottlenecks. To address these challenges, we propose OThink-MR1, an
advanced MLLM equipped with profound comprehension and reasoning capabilities
across multimodal tasks. Specifically, we introduce Group Relative Policy
Optimization with a dynamic Kullback-Leibler strategy (GRPO-D), which markedly
enhances reinforcement learning (RL) performance. For Qwen2-VL-2B-Instruct,
GRPO-D achieves a relative improvement of more than 5.72% over SFT and more
than 13.59% over GRPO in same-task evaluation on two adapted datasets.
Furthermore, GRPO-D demonstrates remarkable cross-task generalization
capabilities, with an average relative improvement of more than 61.63% over SFT
in cross-task evaluation. These results highlight that the MLLM trained with
GRPO-D on one multimodal task can be effectively transferred to another task,
underscoring the superior generalized reasoning capabilities of our proposed
OThink-MR1 model.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 12:22:18 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 11:19:21 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Liu",
"Zhiyuan",
""
],
[
"Zhang",
"Yuting",
""
],
[
"Liu",
"Feng",
""
],
[
"Zhang",
"Changwang",
""
],
[
"Sun",
"Ying",
""
],
[
"Wang",
"Jun",
""
]
] | TITLE: OThink-MR1: Stimulating multimodal generalized reasoning capabilities
via dynamic reinforcement learning
ABSTRACT: Multimodal Large Language Models (MLLMs) have gained significant traction for
their ability to process diverse input data types and generate coherent,
contextually relevant outputs across various applications. While supervised
fine-tuning (SFT) has been the predominant approach to enhance MLLM
capabilities in task-specific optimization, it often falls short in fostering
crucial generalized reasoning abilities. Although reinforcement learning (RL)
holds great promise in overcoming these limitations, it encounters two
significant challenges: (1) its generalized capacities in multimodal tasks
remain largely unexplored, and (2) its training constraints, including the
constant Kullback-Leibler divergence or the clamp strategy, often result in
suboptimal bottlenecks. To address these challenges, we propose OThink-MR1, an
advanced MLLM equipped with profound comprehension and reasoning capabilities
across multimodal tasks. Specifically, we introduce Group Relative Policy
Optimization with a dynamic Kullback-Leibler strategy (GRPO-D), which markedly
enhances reinforcement learning (RL) performance. For Qwen2-VL-2B-Instruct,
GRPO-D achieves a relative improvement of more than 5.72% over SFT and more
than 13.59% over GRPO in same-task evaluation on two adapted datasets.
Furthermore, GRPO-D demonstrates remarkable cross-task generalization
capabilities, with an average relative improvement of more than 61.63% over SFT
in cross-task evaluation. These results highlight that the MLLM trained with
GRPO-D on one multimodal task can be effectively transferred to another task,
underscoring the superior generalized reasoning capabilities of our proposed
OThink-MR1 model.
|
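To make the idea of a dynamic Kullback-Leibler term concrete, here is an illustrative PyTorch sketch of a GRPO-style objective in which the KL coefficient decays over training instead of staying constant. The linear decay schedule and the simple KL estimate are our assumptions, not the paper's exact formulation.

```python
import torch

def grpo_d_loss(logp_new, logp_ref, rewards, step, total_steps,
                beta_max=0.1, beta_min=0.01):
    # group-relative advantages: standardize rewards within the sampled group
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # dynamic KL weight: linear decay from beta_max to beta_min
    beta = beta_max - (beta_max - beta_min) * (step / total_steps)
    ratio = torch.exp(logp_new - logp_ref.detach())
    policy_term = -(ratio * adv).mean()
    kl_term = (logp_new - logp_ref.detach()).mean()  # crude KL estimate
    return policy_term + beta * kl_term

logp_new = torch.randn(8, requires_grad=True)  # stand-in policy log-probs
loss = grpo_d_loss(logp_new, torch.randn(8), torch.randn(8),
                   step=100, total_steps=1000)
loss.backward()
print(float(loss))
```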
2503.18172 | Zixin Chen | Zixin Chen, Sicheng Song, Kashun Shum, Yanna Lin, Rui Sheng, Huamin Qu | Unmasking Deceptive Visuals: Benchmarking Multimodal Large Language
Models on Misleading Chart Question Answering | 31 pages in total. Under Review For ARR | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Misleading chart visualizations, which intentionally manipulate data
representations to support specific claims, can distort perceptions and lead to
incorrect conclusions. Despite decades of research, misleading visualizations
remain a widespread and pressing issue. Recent advances in multimodal large
language models (MLLMs) have demonstrated strong chart comprehension
capabilities, yet no existing work has systematically evaluated their ability
to detect and interpret misleading charts. This paper introduces the Misleading
Chart Question Answering (Misleading ChartQA) Benchmark, a large-scale
multimodal dataset designed to assess MLLMs in identifying and reasoning about
misleading charts. It contains over 3,000 curated examples, covering 21 types
of misleaders and 10 chart types. Each example includes standardized chart
code, CSV data, and multiple-choice questions with labeled explanations,
validated through multi-round MLLM checks and exhausted expert human review. We
benchmark 16 state-of-the-art MLLMs on our dataset, revealing their limitations
in identifying visually deceptive practices. We also propose a novel pipeline
that detects and localizes misleaders, enhancing MLLMs' accuracy in misleading
chart interpretation. Our work establishes a foundation for advancing
MLLM-driven misleading chart comprehension. We publicly release the sample
dataset to support further research in this critical area.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 18:56:33 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 17:24:41 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Chen",
"Zixin",
""
],
[
"Song",
"Sicheng",
""
],
[
"Shum",
"Kashun",
""
],
[
"Lin",
"Yanna",
""
],
[
"Sheng",
"Rui",
""
],
[
"Qu",
"Huamin",
""
]
] | TITLE: Unmasking Deceptive Visuals: Benchmarking Multimodal Large Language
Models on Misleading Chart Question Answering
ABSTRACT: Misleading chart visualizations, which intentionally manipulate data
representations to support specific claims, can distort perceptions and lead to
incorrect conclusions. Despite decades of research, misleading visualizations
remain a widespread and pressing issue. Recent advances in multimodal large
language models (MLLMs) have demonstrated strong chart comprehension
capabilities, yet no existing work has systematically evaluated their ability
to detect and interpret misleading charts. This paper introduces the Misleading
Chart Question Answering (Misleading ChartQA) Benchmark, a large-scale
multimodal dataset designed to assess MLLMs in identifying and reasoning about
misleading charts. It contains over 3,000 curated examples, covering 21 types
of misleaders and 10 chart types. Each example includes standardized chart
code, CSV data, and multiple-choice questions with labeled explanations,
validated through multi-round MLLM checks and exhaustive expert human review. We
benchmark 16 state-of-the-art MLLMs on our dataset, revealing their limitations
in identifying visually deceptive practices. We also propose a novel pipeline
that detects and localizes misleaders, enhancing MLLMs' accuracy in misleading
chart interpretation. Our work establishes a foundation for advancing
MLLM-driven misleading chart comprehension. We publicly release the sample
dataset to support further research in this critical area.
|
2503.18288 | Cheng Huang | Cheng Huang and Fan Gao and Nyima Tashi and Yutong Liu and Xiangxiang
Wang and Thupten Tsering and Ban Ma-bao and Renzeg Duojie and Gadeng Luosang
and Rinchen Dongrub and Dorje Tashi and Xiao Feng and Yongbin Yu | Sun-Shine: A Large Language Model for Tibetan Culture | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Tibetan, a minority language in China, features a highly intricate
grammatical structure, characterized by four verb tenses and a tense system
with frequent irregularities, contributing to its extensive inflectional
diversity. Recently, advances in Large Language Models (LLMs) have transformed
the paradigm in many domains. Despite the success in other fields, current LLMs
often fall short in catering to the needs of domain experts like Tibetans, and
the potential of LLMs for Tibetan culture is under-explored. The intrinsic
reasons are the immense and intricate nature of Tibetan culture as well as the
necessity for higher granularity and richness in knowledge. Simultaneously, the
complexity and uniqueness of its grammatical structure, coupled with its status
as a minority ethnic language, contribute to data scarcity, which remains a
fundamental challenge. To alleviate these issues, we introduce Llama-Sunshine
(Sun-Shine), the first large language model for Tibetan culture, which is
expert in various Tibetan language processing tasks. Sun-Shine incorporates
state-of-the-art model architectures optimized for Tibetan's linguistic
features. We also propose TIB-STC, a comprehensive dataset comprising diverse
Tibetan texts such as literature, religious scripts, news, and conversational
data, which is also the first large-scale dataset for Tibetan culture. Through
comprehensive experiments, Sun-Shine not only demonstrates a higher level of
knowledge expertise for Tibetan culture but also gains preliminary embodied
intelligence capabilities in Tibetan language processing tasks, like language
modeling, text classification, machine translation, and syntactic analysis.
Moreover, it excels in low-resource scenarios, showcasing strong generalization
capabilities.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 02:17:41 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 03:35:17 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Huang",
"Cheng",
""
],
[
"Gao",
"Fan",
""
],
[
"Tashi",
"Nyima",
""
],
[
"Liu",
"Yutong",
""
],
[
"Wang",
"Xiangxiang",
""
],
[
"Tsering",
"Thupten",
""
],
[
"Ma-bao",
"Ban",
""
],
[
"Duojie",
"Renzeg",
""
],
[
"Luosang",
"Gadeng",
""
],
[
"Dongrub",
"Rinchen",
""
],
[
"Tashi",
"Dorje",
""
],
[
"Feng",
"Xiao",
""
],
[
"Yu",
"Yongbin",
""
]
] | TITLE: Sun-Shine: A Large Language Model for Tibetan Culture
ABSTRACT: Tibetan, a minority language in China, features a highly intricate
grammatical structure, characterized by four verb tenses and a tense system
with frequent irregularities, contributing to its extensive inflectional
diversity. Recently, advances in Large Language Models (LLMs) have transformed
the paradigm in many domains. Despite the success in other fields, current LLMs
often fall short in catering to the needs of domain experts like Tibetans, and
the potential of LLMs for Tibetan culture is under-explored. The intrinsic
reasons are the immense and intricate nature of Tibetan culture as well as the
necessity for higher granularity and richness in knowledge. Simultaneously, the
complexity and uniqueness of its grammatical structure, coupled with its status
as a minority ethnic language, contribute to data scarcity, which remains a
fundamental challenge. To alleviate these issues, we introduce Llama-Sunshine
(Sun-Shine), the first large language model for Tibetan culture, which is
expert in various Tibetan language processing tasks. Sun-Shine incorporates
state-of-the-art model architectures optimized for Tibetan's linguistic
features. We also propose TIB-STC, a comprehensive dataset comprising diverse
Tibetan texts such as literature, religious scripts, news, and conversational
data, which is also the first large-scale dataset for Tibetan culture. Through
comprehensive experiments, Sun-Shine not only demonstrates a higher level of
knowledge expertise for Tibetan culture but also gains preliminary embodied
intelligence capabilities in Tibetan language processing tasks, like language
modeling, text classification, machine translation, and syntactic analysis.
Moreover, it excels in low-resource scenarios, showcasing strong generalization
capabilities.
|
2503.18352 | Jinjin Zhang | Jinjin Zhang, Qiuyu Huang, Junjie Liu, Xiefan Guo, Di Huang | Diffusion-4K: Ultra-High-Resolution Image Synthesis with Latent
Diffusion Models | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present Diffusion-4K, a novel framework for direct
ultra-high-resolution image synthesis using text-to-image diffusion models. The
core advancements include: (1) Aesthetic-4K Benchmark: addressing the absence
of a publicly available 4K image synthesis dataset, we construct Aesthetic-4K,
a comprehensive benchmark for ultra-high-resolution image generation. We
curated a high-quality 4K dataset with carefully selected images and captions
generated by GPT-4o. Additionally, we introduce GLCM Score and Compression
Ratio metrics to evaluate fine details, combined with holistic measures such as
FID, Aesthetics and CLIPScore for a comprehensive assessment of
ultra-high-resolution images. (2) Wavelet-based Fine-tuning: we propose a
wavelet-based fine-tuning approach for direct training with photorealistic 4K
images, applicable to various latent diffusion models, demonstrating its
effectiveness in synthesizing highly detailed 4K images. Consequently,
Diffusion-4K achieves impressive performance in high-quality image synthesis
and text prompt adherence, especially when powered by modern large-scale
diffusion models (e.g., SD3-2B and Flux-12B). Extensive experimental results
from our benchmark demonstrate the superiority of Diffusion-4K in
ultra-high-resolution image synthesis.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 05:25:07 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 04:51:44 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhang",
"Jinjin",
""
],
[
"Huang",
"Qiuyu",
""
],
[
"Liu",
"Junjie",
""
],
[
"Guo",
"Xiefan",
""
],
[
"Huang",
"Di",
""
]
] | TITLE: Diffusion-4K: Ultra-High-Resolution Image Synthesis with Latent
Diffusion Models
ABSTRACT: In this paper, we present Diffusion-4K, a novel framework for direct
ultra-high-resolution image synthesis using text-to-image diffusion models. The
core advancements include: (1) Aesthetic-4K Benchmark: addressing the absence
of a publicly available 4K image synthesis dataset, we construct Aesthetic-4K,
a comprehensive benchmark for ultra-high-resolution image generation. We
curated a high-quality 4K dataset with carefully selected images and captions
generated by GPT-4o. Additionally, we introduce GLCM Score and Compression
Ratio metrics to evaluate fine details, combined with holistic measures such as
FID, Aesthetics and CLIPScore for a comprehensive assessment of
ultra-high-resolution images. (2) Wavelet-based Fine-tuning: we propose a
wavelet-based fine-tuning approach for direct training with photorealistic 4K
images, applicable to various latent diffusion models, demonstrating its
effectiveness in synthesizing highly detailed 4K images. Consequently,
Diffusion-4K achieves impressive performance in high-quality image synthesis
and text prompt adherence, especially when powered by modern large-scale
diffusion models (e.g., SD3-2B and Flux-12B). Extensive experimental results
from our benchmark demonstrate the superiority of Diffusion-4K in
ultra-high-resolution image synthesis.
|
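As a hedged sketch of a GLCM-style detail metric (the paper's exact GLCM Score may weight different texture properties), the snippet below scores local gray-level variation via the contrast of the gray-level co-occurrence matrix, using scikit-image.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_detail_score(image_u8):
    """image_u8: 2D uint8 grayscale image. Higher contrast means more
    local gray-level variation, i.e., finer high-frequency detail."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return float(graycoprops(glcm, "contrast").mean())

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
flat = np.full((128, 128), 128, dtype=np.uint8)
print(glcm_detail_score(noisy), glcm_detail_score(flat))  # noisy >> flat
```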
2503.18485 | Dawit Ketema Gete | Dawit Ketema Gete, Bedru Yimam Ahmed, Tadesse Destaw Belay, Yohannes
Ayana Ejigu, Sukairaj Hafiz Imam, Alemu Belay Tessema, Mohammed Oumer Adem,
Tadesse Amare Belay, Robert Geislinger, Umma Aliyu Musa, Martin Semmann,
Shamsuddeen Hassan Muhammad, Henning Schreiber, Seid Muhie Yimam | Whispering in Amharic: Fine-tuning Whisper for Low-resource Language | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | This work explores fine-tuning OpenAI's Whisper automatic speech recognition
(ASR) model for Amharic, a low-resource language, to improve transcription
accuracy. While the foundational Whisper model struggles with Amharic due to
limited representation in its training data, we fine-tune it using datasets
like Mozilla Common Voice, FLEURS, and the BDU-speech dataset. The
best-performing model, Whispersmall-am, significantly improves when fine-tuned
on a mix of existing FLEURS data and new, unseen Amharic datasets. Training
solely on new data leads to poor performance, but combining it with FLEURS data
reinforces the model, enabling better specialization in Amharic. We also
demonstrate that normalizing Amharic homophones significantly improves Word
Error Rate (WER) and Bilingual Evaluation Understudy (BLEU) scores. This study
underscores the importance of fine-tuning strategies and dataset composition
for improving ASR in low-resource languages, providing insights for future
Amharic speech recognition research.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:39:41 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 13:26:11 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Gete",
"Dawit Ketema",
""
],
[
"Ahmed",
"Bedru Yimam",
""
],
[
"Belay",
"Tadesse Destaw",
""
],
[
"Ejigu",
"Yohannes Ayana",
""
],
[
"Imam",
"Sukairaj Hafiz",
""
],
[
"Tessema",
"Alemu Belay",
""
],
[
"Adem",
"Mohammed Oumer",
""
],
[
"Belay",
"Tadesse Amare",
""
],
[
"Geislinger",
"Robert",
""
],
[
"Musa",
"Umma Aliyu",
""
],
[
"Semmann",
"Martin",
""
],
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Schreiber",
"Henning",
""
],
[
"Yimam",
"Seid Muhie",
""
]
] | TITLE: Whispering in Amharic: Fine-tuning Whisper for Low-resource Language
ABSTRACT: This work explores fine-tuning OpenAI's Whisper automatic speech recognition
(ASR) model for Amharic, a low-resource language, to improve transcription
accuracy. While the foundational Whisper model struggles with Amharic due to
limited representation in its training data, we fine-tune it using datasets
like Mozilla Common Voice, FLEURS, and the BDU-speech dataset. The
best-performing model, Whispersmall-am, significantly improves when fine-tuned
on a mix of existing FLEURS data and new, unseen Amharic datasets. Training
solely on new data leads to poor performance, but combining it with FLEURS data
reinforces the model, enabling better specialization in Amharic. We also
demonstrate that normalizing Amharic homophones significantly improves Word
Error Rate (WER) and Bilingual Evaluation Understudy (BLEU) scores. This study
underscores the importance of fine-tuning strategies and dataset composition
for improving ASR in low-resource languages, providing insights for future
Amharic speech recognition research.
|
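The homophone normalization step can be illustrated with a short sketch: characters that share a pronunciation are mapped to one canonical form before scoring, so spelling variants stop counting as word errors. The mapping below covers a few common Amharic homophone groups and is illustrative, not the paper's full table; the jiwer package is assumed for WER.

```python
from jiwer import wer

# map homophonous Ethiopic characters onto one canonical form
HOMOPHONES = str.maketrans({
    "ሐ": "ሀ", "ኀ": "ሀ",  # h-series variants  -> ሀ
    "ሠ": "ሰ",            # s-series variant   -> ሰ
    "ፀ": "ጸ",            # ts-series variant  -> ጸ
    "ዐ": "አ",            # glottal variant    -> አ
})

def normalize(text):
    return text.translate(HOMOPHONES)

ref, hyp = "ሰላም ለሀገሩ", "ሠላም ለሐገሩ"     # same pronunciation, variant spelling
print(wer(ref, hyp))                        # 1.0: both words count as errors
print(wer(normalize(ref), normalize(hyp)))  # 0.0 after normalization
```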
2503.19316 | Zhiping Xiao | Zhiping Xiao, Xinyu Wang, Yifang Qin, Zijie Huang, Mason A. Porter,
Yizhou Sun | A Social Dynamical System for Twitter Analysis | will be submitted to a journal soon | null | null | null | cs.SI | http://creativecommons.org/licenses/by/4.0/ | Understanding the evolution of public opinion is crucial for informed
decision-making in various domains, particularly public affairs. The rapid
growth of social networks, such as Twitter (now rebranded as X), provides an
unprecedented opportunity to analyze public opinion at scale without relying on
traditional surveys. With the rise of deep learning, Graph Neural Networks
(GNNs) have shown great promise in modeling online opinion dynamics. Notably,
classical opinion dynamics models, such as DeGroot, can be reformulated within
a GNN framework.
We introduce Latent Social Dynamical System (LSDS), a novel framework for
modeling the latent dynamics of social media users' opinions based on textual
content. Since expressed opinions may not fully reflect underlying beliefs,
LSDS first encodes post content into latent representations. It then leverages
a GraphODE framework, using a GNN-based ODE function to predict future
opinions. A decoder subsequently utilizes these predicted latent opinions to
perform downstream tasks, such as interaction prediction, which serve as
benchmarks for model evaluation. Our framework is highly flexible, supporting
various opinion dynamics models as ODE functions, provided they can be adapted
into a GNN-based form. It also accommodates different encoder architectures and
is compatible with diverse downstream tasks.
To validate our approach, we constructed dynamic datasets from Twitter data.
Experimental results demonstrate the effectiveness of LSDS, highlighting its
potential for future applications. We plan to publicly release our dataset and
code upon the publication of this paper.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 03:25:07 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 20:17:10 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 03:26:47 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Xiao",
"Zhiping",
""
],
[
"Wang",
"Xinyu",
""
],
[
"Qin",
"Yifang",
""
],
[
"Huang",
"Zijie",
""
],
[
"Porter",
"Mason A.",
""
],
[
"Sun",
"Yizhou",
""
]
] | TITLE: A Social Dynamical System for Twitter Analysis
ABSTRACT: Understanding the evolution of public opinion is crucial for informed
decision-making in various domains, particularly public affairs. The rapid
growth of social networks, such as Twitter (now rebranded as X), provides an
unprecedented opportunity to analyze public opinion at scale without relying on
traditional surveys. With the rise of deep learning, Graph Neural Networks
(GNNs) have shown great promise in modeling online opinion dynamics. Notably,
classical opinion dynamics models, such as DeGroot, can be reformulated within
a GNN framework.
We introduce Latent Social Dynamical System (LSDS), a novel framework for
modeling the latent dynamics of social media users' opinions based on textual
content. Since expressed opinions may not fully reflect underlying beliefs,
LSDS first encodes post content into latent representations. It then leverages
a GraphODE framework, using a GNN-based ODE function to predict future
opinions. A decoder subsequently utilizes these predicted latent opinions to
perform downstream tasks, such as interaction prediction, which serve as
benchmarks for model evaluation. Our framework is highly flexible, supporting
various opinion dynamics models as ODE functions, provided they can be adapted
into a GNN-based form. It also accommodates different encoder architectures and
is compatible with diverse downstream tasks.
To validate our approach, we constructed dynamic datasets from Twitter data.
Experimental results demonstrate the effectiveness of LSDS, highlighting its
potential for future applications. We plan to publicly release our dataset and
code upon the publication of this paper.
|
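A minimal GraphODE sketch under our own assumptions (torchdiffeq assumed as the solver): latent opinions z(t) evolve under an ODE whose right-hand side is one GNN message-passing step, here a drift toward neighbor consensus in the spirit of a continuous-time DeGroot model.

```python
import torch
from torchdiffeq import odeint

class GNNODEFunc(torch.nn.Module):
    def __init__(self, adj, dim):
        super().__init__()
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        self.register_buffer("A", adj / deg)  # row-normalized adjacency
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.Tanh(),
            torch.nn.Linear(dim, dim))

    def forward(self, t, z):
        # drift toward neighbor consensus, modulated by a small MLP
        return self.mlp(self.A @ z) - z

n_users, dim = 5, 8
adj = (torch.rand(n_users, n_users) < 0.4).float()
func = GNNODEFunc(adj, dim)
z0 = torch.randn(n_users, dim)                       # encoded opinions at t=0
zs = odeint(func, z0, torch.linspace(0.0, 1.0, 6))   # latent trajectory
print(zs.shape)  # (6, 5, 8): 6 time points, 5 users, 8-dim latent opinions
```

A decoder applied to `zs` would then supply predictions for downstream tasks such as interaction prediction.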
2503.19469 | Fred Philippy | Fred Philippy, Siwen Guo, Cedric Lothritz, Jacques Klein, Tegawend\'e
F. Bissyand\'e | Enhancing Small Language Models for Cross-Lingual Generalized Zero-Shot
Classification with Soft Prompt Tuning | Workshop on Language Models for Underserved Communities (co-located
with NAACL 2025) | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | In NLP, Zero-Shot Classification (ZSC) has become essential for enabling
models to classify text into categories unseen during training, particularly in
low-resource languages and domains where labeled data is scarce. While
pretrained language models (PLMs) have shown promise in ZSC, they often rely on
large training datasets or external knowledge, limiting their applicability in
multilingual and low-resource scenarios. Recent approaches leveraging natural
language prompts reduce the dependence on large training datasets but struggle
to effectively incorporate available labeled data from related classification
tasks, especially when these datasets originate from different languages or
distributions. Moreover, existing prompt-based methods typically rely on
manually crafted prompts in a specific language, limiting their adaptability
and effectiveness in cross-lingual settings. To address these challenges, we
introduce RoSPrompt, a lightweight and data-efficient approach for training
soft prompts that enhance cross-lingual ZSC while ensuring robust
generalization across data distribution shifts. RoSPrompt is designed for small
multilingual PLMs, enabling them to leverage high-resource languages to improve
performance in low-resource settings without requiring extensive fine-tuning or
high computational costs. We evaluate our approach on multiple multilingual
PLMs across datasets covering 106 languages, demonstrating strong cross-lingual
transfer performance and robust generalization capabilities over unseen
classes.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 09:00:25 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 09:23:44 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Philippy",
"Fred",
""
],
[
"Guo",
"Siwen",
""
],
[
"Lothritz",
"Cedric",
""
],
[
"Klein",
"Jacques",
""
],
[
"Bissyandé",
"Tegawendé F.",
""
]
] | TITLE: Enhancing Small Language Models for Cross-Lingual Generalized Zero-Shot
Classification with Soft Prompt Tuning
ABSTRACT: In NLP, Zero-Shot Classification (ZSC) has become essential for enabling
models to classify text into categories unseen during training, particularly in
low-resource languages and domains where labeled data is scarce. While
pretrained language models (PLMs) have shown promise in ZSC, they often rely on
large training datasets or external knowledge, limiting their applicability in
multilingual and low-resource scenarios. Recent approaches leveraging natural
language prompts reduce the dependence on large training datasets but struggle
to effectively incorporate available labeled data from related classification
tasks, especially when these datasets originate from different languages or
distributions. Moreover, existing prompt-based methods typically rely on
manually crafted prompts in a specific language, limiting their adaptability
and effectiveness in cross-lingual settings. To address these challenges, we
introduce RoSPrompt, a lightweight and data-efficient approach for training
soft prompts that enhance cross-lingual ZSC while ensuring robust
generalization across data distribution shifts. RoSPrompt is designed for small
multilingual PLMs, enabling them to leverage high-resource languages to improve
performance in low-resource settings without requiring extensive fine-tuning or
high computational costs. We evaluate our approach on multiple multilingual
PLMs across datasets covering 106 languages, demonstrating strong cross-lingual
transfer performance and robust generalization capabilities over unseen
classes.
|
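Soft prompt tuning itself is easy to sketch (the snippet is generic and not RoSPrompt's training procedure): a small matrix of trainable prompt embeddings is prepended to the frozen model's token embeddings, and only those prompt parameters receive gradients.

```python
import torch

class SoftPrompt(torch.nn.Module):
    def __init__(self, n_tokens, dim):
        super().__init__()
        self.prompt = torch.nn.Parameter(torch.randn(n_tokens, dim) * 0.02)

    def forward(self, token_embeds):  # token_embeds: (batch, seq, dim)
        batch = token_embeds.size(0)
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, token_embeds], dim=1)  # (batch, n_tokens+seq, dim)

dim = 32
soft = SoftPrompt(n_tokens=8, dim=dim)
frozen_embeds = torch.randn(4, 16, dim)  # stand-in for frozen PLM embeddings
out = soft(frozen_embeds)
print(out.shape)  # torch.Size([4, 24, 32])
# In training, only soft.parameters() go to the optimizer; the PLM stays frozen.
```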
2503.19619 | Yeasir Rayhan | Yeasir Rayhan and Walid G. Aref | Exploring Next Token Prediction For Optimizing Databases | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | The Next Token Prediction paradigm (NTP, for short) lies at the forefront of
modern large foundational models that are pre-trained on diverse and large
datasets. These models generalize effectively and have proven to be very
successful in Natural Language Processing (NLP). Inspired by the generalization
capabilities of Large Language Models (LLMs), we investigate whether the same
NTP paradigm can also be applied to DBMS design and optimization tasks.
Adopting NTP directly for database optimization is non-trivial due to the
fundamental differences between the domains. In this paper, we present a
framework termed Probe and Learn (PoLe) for applying NTP to optimize database
systems. PoLe leverages Decision Transformers and hardware-generated tokens to
effectively incorporate NTP into database systems. Preliminary results from the
main-memory index scheduling task demonstrate that adopting NTP can improve
both performance and generalizability.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 13:08:26 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 14:52:31 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Rayhan",
"Yeasir",
""
],
[
"Aref",
"Walid G.",
""
]
] | TITLE: Exploring Next Token Prediction For Optimizing Databases
ABSTRACT: The Next Token Prediction paradigm (NTP, for short) lies at the forefront of
modern large foundational models that are pre-trained on diverse and large
datasets. These models generalize effectively and have proven to be very
successful in Natural Language Processing (NLP). Inspired by the generalization
capabilities of Large Language Models (LLMs), we investigate whether the same
NTP paradigm can also be applied to DBMS design and optimization tasks.
Adopting NTP directly for database optimization is non-trivial due to the
fundamental differences between the domains. In this paper, we present a
framework termed Probe and Learn (PoLe) for applying NTP to optimize database
systems. PoLe leverages Decision Transformers and hardware-generated tokens to
effectively incorporate NTP into database systems. Preliminary results from the
main-memory index scheduling task demonstrate that adopting NTP can improve
both performance and generalizability.
|
2503.20258 | Jiaheng Zhou | Jiaheng Zhou, Yanfeng Zhou, Wei Fang, Yuxing Tang, Le Lu, Ge Yang | Mamba-3D as Masked Autoencoders for Accurate and Data-Efficient Analysis
of Medical Ultrasound Videos | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Ultrasound videos are an important form of clinical imaging data, and deep
learning-based automated analysis can improve diagnostic accuracy and clinical
efficiency. However, the scarcity of labeled data and the inherent challenges
of video analysis have impeded the advancement of related methods. In this
work, we introduce E-ViM$^3$, a data-efficient Vision Mamba network that
preserves the 3D structure of video data, enhancing long-range dependencies and
inductive biases to better model space-time correlations. With our design of
Enclosure Global Tokens (EGT), the model captures and aggregates global
features more effectively than competing methods. To further improve data
efficiency, we employ masked video modeling for self-supervised pre-training,
with the proposed Spatial-Temporal Chained (STC) masking strategy designed to
adapt to various video scenarios. Experiments demonstrate that E-ViM$^3$
performs as the state-of-the-art in two high-level semantic analysis tasks
across four datasets of varying sizes: EchoNet-Dynamic, CAMUS, MICCAI-BUV, and
WHBUS. Furthermore, our model achieves competitive performance with limited
labels, highlighting its potential impact on real-world clinical applications.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 05:54:13 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhou",
"Jiaheng",
""
],
[
"Zhou",
"Yanfeng",
""
],
[
"Fang",
"Wei",
""
],
[
"Tang",
"Yuxing",
""
],
[
"Lu",
"Le",
""
],
[
"Yang",
"Ge",
""
]
] | TITLE: Mamba-3D as Masked Autoencoders for Accurate and Data-Efficient Analysis
of Medical Ultrasound Videos
ABSTRACT: Ultrasound videos are an important form of clinical imaging data, and deep
learning-based automated analysis can improve diagnostic accuracy and clinical
efficiency. However, the scarcity of labeled data and the inherent challenges
of video analysis have impeded the advancement of related methods. In this
work, we introduce E-ViM$^3$, a data-efficient Vision Mamba network that
preserves the 3D structure of video data, enhancing long-range dependencies and
inductive biases to better model space-time correlations. With our design of
Enclosure Global Tokens (EGT), the model captures and aggregates global
features more effectively than competing methods. To further improve data
efficiency, we employ masked video modeling for self-supervised pre-training,
with the proposed Spatial-Temporal Chained (STC) masking strategy designed to
adapt to various video scenarios. Experiments demonstrate that E-ViM$^3$
achieves state-of-the-art performance in two high-level semantic analysis tasks
across four datasets of varying sizes: EchoNet-Dynamic, CAMUS, MICCAI-BUV, and
WHBUS. Furthermore, our model achieves competitive performance with limited
labels, highlighting its potential impact on real-world clinical applications.
|
2503.20316 | Anandakumar D | Bargava Subramanian, Naveen Kumarasami, Praveen Shastry, Raghotham
Sripadraj, Kalyan Sivasailam, Anandakumar D, Abinaya Ramachandran, Sudhir MP,
Gunakutti G, Kishore Prasath Venkatesh | AI-Driven MRI Spine Pathology Detection: A Comprehensive Deep Learning
Approach for Automated Diagnosis in Diverse Clinical Settings | 20 pages, 3 figures | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Study Design: This study presents the development of an autonomous AI system
for MRI spine pathology detection, trained on a dataset of 2 million MRI spine
scans sourced from diverse healthcare facilities across India. The AI system
integrates advanced architectures, including Vision Transformers, U-Net with
cross-attention, MedSAM, and Cascade R-CNN, enabling comprehensive
classification, segmentation, and detection of 43 distinct spinal pathologies.
The dataset is balanced across age groups, genders, and scanner manufacturers
to ensure robustness and adaptability. Subgroup analyses were conducted to
validate the model's performance across different patient demographics, imaging
conditions, and equipment types.
Performance: The AI system achieved up to 97.9 percent multi-pathology
detection, demonstrating consistent performance across age, gender, and
manufacturer subgroups. The normal vs. abnormal classification achieved 98.0
percent accuracy, and the system was deployed across 13 major healthcare
enterprises in India, encompassing diagnostic centers, large hospitals, and
government facilities. During deployment, it processed approximately 100,000
plus MRI spine scans, leading to reduced reporting times and increased
diagnostic efficiency by automating the identification of common spinal
conditions.
Conclusion: The AI system's high precision and recall validate its capability
as a reliable tool for autonomous normal/abnormal classification, pathology
segmentation, and detection. Its scalability and adaptability address critical
diagnostic gaps, optimize radiology workflows, and improve patient care across
varied healthcare environments in India.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:33:03 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 11:08:02 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Subramanian",
"Bargava",
""
],
[
"Kumarasami",
"Naveen",
""
],
[
"Shastry",
"Praveen",
""
],
[
"Sripadraj",
"Raghotham",
""
],
[
"Sivasailam",
"Kalyan",
""
],
[
"D",
"Anandakumar",
""
],
[
"Ramachandran",
"Abinaya",
""
],
[
"MP",
"Sudhir",
""
],
[
"G",
"Gunakutti",
""
],
[
"Venkatesh",
"Kishore Prasath",
""
]
] | TITLE: AI-Driven MRI Spine Pathology Detection: A Comprehensive Deep Learning
Approach for Automated Diagnosis in Diverse Clinical Settings
ABSTRACT: Study Design: This study presents the development of an autonomous AI system
for MRI spine pathology detection, trained on a dataset of 2 million MRI spine
scans sourced from diverse healthcare facilities across India. The AI system
integrates advanced architectures, including Vision Transformers, U-Net with
cross-attention, MedSAM, and Cascade R-CNN, enabling comprehensive
classification, segmentation, and detection of 43 distinct spinal pathologies.
The dataset is balanced across age groups, genders, and scanner manufacturers
to ensure robustness and adaptability. Subgroup analyses were conducted to
validate the model's performance across different patient demographics, imaging
conditions, and equipment types.
Performance: The AI system achieved up to 97.9 percent multi-pathology
detection, demonstrating consistent performance across age, gender, and
manufacturer subgroups. The normal vs. abnormal classification achieved 98.0
percent accuracy, and the system was deployed across 13 major healthcare
enterprises in India, encompassing diagnostic centers, large hospitals, and
government facilities. During deployment, it processed approximately 100,000
plus MRI spine scans, leading to reduced reporting times and increased
diagnostic efficiency by automating the identification of common spinal
conditions.
Conclusion: The AI system's high precision and recall validate its capability
as a reliable tool for autonomous normal/abnormal classification, pathology
segmentation, and detection. Its scalability and adaptability address critical
diagnostic gaps, optimize radiology workflows, and improve patient care across
varied healthcare environments in India.
|
2503.20578 | Mia Mohammad Imran | Alif Al Hasan, Subarna Saha, Mia Mohammad Imran, Tarannum Shaila Zaman | LLPut: Investigating Large Language Models for Bug Report-Based Input
Generation | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Failure-inducing inputs play a crucial role in diagnosing and analyzing
software bugs. Bug reports typically contain these inputs, which developers
extract to facilitate debugging. Since bug reports are written in natural
language, prior research has leveraged various Natural Language Processing
(NLP) techniques for automated input extraction. With the advent of Large
Language Models (LLMs), an important research question arises: how effectively
can generative LLMs extract failure-inducing inputs from bug reports? In this
paper, we propose LLPut, a technique to empirically evaluate the performance of
three open-source generative LLMs -- LLaMA, Qwen, and Qwen-Coder -- in
extracting relevant inputs from bug reports. We conduct an experimental
evaluation on a dataset of 206 bug reports to assess the accuracy and
effectiveness of these models. Our findings provide insights into the
capabilities and limitations of generative LLMs in automated bug diagnosis.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 14:25:01 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 10:35:05 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 02:53:43 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Hasan",
"Alif Al",
""
],
[
"Saha",
"Subarna",
""
],
[
"Imran",
"Mia Mohammad",
""
],
[
"Zaman",
"Tarannum Shaila",
""
]
] | TITLE: LLPut: Investigating Large Language Models for Bug Report-Based Input
Generation
ABSTRACT: Failure-inducing inputs play a crucial role in diagnosing and analyzing
software bugs. Bug reports typically contain these inputs, which developers
extract to facilitate debugging. Since bug reports are written in natural
language, prior research has leveraged various Natural Language Processing
(NLP) techniques for automated input extraction. With the advent of Large
Language Models (LLMs), an important research question arises: how effectively
can generative LLMs extract failure-inducing inputs from bug reports? In this
paper, we propose LLPut, a technique to empirically evaluate the performance of
three open-source generative LLMs -- LLaMA, Qwen, and Qwen-Coder -- in
extracting relevant inputs from bug reports. We conduct an experimental
evaluation on a dataset of 206 bug reports to assess the accuracy and
effectiveness of these models. Our findings provide insights into the
capabilities and limitations of generative LLMs in automated bug diagnosis.
|
2503.20776 | Shijie Zhou | Shijie Zhou, Hui Ren, Yijia Weng, Shuwang Zhang, Zhen Wang, Dejia Xu,
Zhiwen Fan, Suya You, Zhangyang Wang, Leonidas Guibas, Achuta Kadambi | Feature4X: Bridging Any Monocular Video to 4D Agentic AI with Versatile
Gaussian Feature Fields | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in 2D and multimodal models have achieved remarkable
success by leveraging large-scale training on extensive datasets. However,
extending these achievements to enable free-form interactions and high-level
semantic operations with complex 3D/4D scenes remains challenging. This
difficulty stems from the limited availability of large-scale, annotated 3D/4D
or multi-view datasets, which are crucial for generalizable vision and language
tasks such as open-vocabulary and prompt-based segmentation, language-guided
editing, and visual question answering (VQA). In this paper, we introduce
Feature4X, a universal framework designed to extend any functionality from 2D
vision foundation models into the 4D realm, using only monocular video input,
which is widely available from user-generated content. The "X" in Feature4X
represents its versatility, enabling any task through adaptable,
model-conditioned 4D feature field distillation. At the core of our framework
is a dynamic optimization strategy that unifies multiple model capabilities
into a single representation. Additionally, to the best of our knowledge,
Feature4X is the first method to distill and lift the features of video
foundation models (e.g., SAM2, InternVideo2) into an explicit 4D feature field
using Gaussian Splatting. Our experiments showcase novel view segment anything,
geometric and appearance scene editing, and free-form VQA across all time
steps, empowered by LLMs in feedback loops. These advancements broaden the
scope of agentic AI applications by providing a foundation for scalable,
contextually and spatiotemporally aware systems capable of immersive dynamic 4D
scene interaction.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:56:16 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 04:48:48 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhou",
"Shijie",
""
],
[
"Ren",
"Hui",
""
],
[
"Weng",
"Yijia",
""
],
[
"Zhang",
"Shuwang",
""
],
[
"Wang",
"Zhen",
""
],
[
"Xu",
"Dejia",
""
],
[
"Fan",
"Zhiwen",
""
],
[
"You",
"Suya",
""
],
[
"Wang",
"Zhangyang",
""
],
[
"Guibas",
"Leonidas",
""
],
[
"Kadambi",
"Achuta",
""
]
] | TITLE: Feature4X: Bridging Any Monocular Video to 4D Agentic AI with Versatile
Gaussian Feature Fields
ABSTRACT: Recent advancements in 2D and multimodal models have achieved remarkable
success by leveraging large-scale training on extensive datasets. However,
extending these achievements to enable free-form interactions and high-level
semantic operations with complex 3D/4D scenes remains challenging. This
difficulty stems from the limited availability of large-scale, annotated 3D/4D
or multi-view datasets, which are crucial for generalizable vision and language
tasks such as open-vocabulary and prompt-based segmentation, language-guided
editing, and visual question answering (VQA). In this paper, we introduce
Feature4X, a universal framework designed to extend any functionality from 2D
vision foundation models into the 4D realm, using only monocular video input,
which is widely available from user-generated content. The "X" in Feature4X
represents its versatility, enabling any task through adaptable,
model-conditioned 4D feature field distillation. At the core of our framework
is a dynamic optimization strategy that unifies multiple model capabilities
into a single representation. Additionally, to the best of our knowledge,
Feature4X is the first method to distill and lift the features of video
foundation models (e.g., SAM2, InternVideo2) into an explicit 4D feature field
using Gaussian Splatting. Our experiments showcase novel view segment anything,
geometric and appearance scene editing, and free-form VQA across all time
steps, empowered by LLMs in feedback loops. These advancements broaden the
scope of agentic AI applications by providing a foundation for scalable,
contextually and spatiotemporally aware systems capable of immersive dynamic 4D
scene interaction.
|
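The feature-field distillation step described above can be pictured as regressing rendered features onto a frozen 2D teacher. A minimal sketch, assuming a plain per-pixel MSE objective (the paper's actual losses and rasterization are more involved):

```python
import torch
import torch.nn.functional as F

def feature_distillation_loss(rendered, teacher):
    # rendered: (H, W, C) features rasterized from the 4D Gaussian field
    # teacher:  (H, W, C) features from a frozen 2D foundation model for
    # the same view; plain MSE is an assumed stand-in objective.
    return F.mse_loss(rendered, teacher)

loss = feature_distillation_loss(torch.randn(64, 64, 16), torch.randn(64, 64, 16))
print(loss.item())
```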
2503.20849 | Francisco Coelho | Francisco Coelho, Bruno Dinis, Dietmar Seipel, Salvador Abreu | An Algebraic Approach to Weighted Answer-set Programming | null | null | null | null | cs.LO cs.PL cs.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logic programs, more specifically, Answer-set programs, can be annotated with
probabilities on facts to express uncertainty. We address the problem of
propagating weight annotations on facts (e.g. probabilities) of an ASP to its
standard models, and from there to events (defined as sets of atoms) in a
dataset over the program's domain. We propose a novel approach which is
algebraic in the sense that it relies on an equivalence relation over the set
of events. Uncertainty is then described as polynomial expressions over
variables. We propagate the weight function in the space of models and events,
rather than doing so within the syntax of the program. As evidence that our
approach is sound, we show that certain facts behave as expected. Our approach
allows us to investigate weight annotated programs and to determine how
suitable a given one is for modeling a given dataset containing events.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 16:21:34 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Mar 2025 10:05:27 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Coelho",
"Francisco",
""
],
[
"Dinis",
"Bruno",
""
],
[
"Seipel",
"Dietmar",
""
],
[
"Abreu",
"Salvador",
""
]
] | TITLE: An Algebraic Approach to Weighted Answer-set Programming
ABSTRACT: Logic programs, more specifically, Answer-set programs, can be annotated with
probabilities on facts to express uncertainty. We address the problem of
propagating weight annotations on facts (e.g. probabilities) of an ASP to its
standard models, and from there to events (defined as sets of atoms) in a
dataset over the program's domain. We propose a novel approach which is
algebraic in the sense that it relies on an equivalence relation over the set
of events. Uncertainty is then described as polynomial expressions over
variables. We propagate the weight function in the space of models and events,
rather than doing so within the syntax of the program. As evidence that our
approach is sound, we show that certain facts behave as expected. Our approach
allows us to investigate weight annotated programs and to determine how
suitable a given one is for modeling a given dataset containing events.
|
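One way to picture the weight propagation from annotated facts to standard models is as a product over facts. A minimal sketch, assuming real-valued weights rather than the paper's polynomial expressions and equivalence classes:

```python
def model_weight(model, fact_probs):
    # Weight of a standard model: product over annotated facts of p(f) if
    # the fact holds in the model, else 1 - p(f). Simplified: the paper
    # keeps these as polynomial expressions over variables instead.
    w = 1.0
    for fact, p in fact_probs.items():
        w *= p if fact in model else 1.0 - p
    return w

fact_probs = {"rain": 0.3, "sprinkler": 0.5}             # annotated facts
models = [{"rain", "wet"}, {"sprinkler", "wet"}, set()]  # toy stable models
for m in models:
    print(sorted(m), model_weight(m, fact_probs))
```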
2503.20919 | Yupei Li | Yupei Li, Qiyang Sun, Sunil Munthumoduku Krishna Murthy, Emran
Alturki, and Bj\"orn W. Schuller | GatedxLSTM: A Multimodal Affective Computing Approach for Emotion
Recognition in Conversations | null | null | null | null | cs.CL cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Affective Computing (AC) is essential for advancing Artificial General
Intelligence (AGI), with emotion recognition serving as a key component.
However, human emotions are inherently dynamic, influenced not only by an
individual's expressions but also by interactions with others, and
single-modality approaches often fail to capture their full dynamics.
Multimodal Emotion Recognition (MER) leverages multiple signals but
traditionally relies on utterance-level analysis, overlooking the dynamic
nature of emotions in conversations. Emotion Recognition in Conversation (ERC)
addresses this limitation, yet existing methods struggle to align multimodal
features and explain why emotions evolve within dialogues. To bridge this gap,
we propose GatedxLSTM, a novel speech-text multimodal ERC model that explicitly
considers voice and transcripts of both the speaker and their conversational
partner(s) to identify the most influential sentences driving emotional shifts.
By integrating Contrastive Language-Audio Pretraining (CLAP) for improved
cross-modal alignment and employing a gating mechanism to emphasise emotionally
impactful utterances, GatedxLSTM enhances both interpretability and
performance. Additionally, the Dialogical Emotion Decoder (DED) refines emotion
predictions by modelling contextual dependencies. Experiments on the IEMOCAP
dataset demonstrate that GatedxLSTM achieves state-of-the-art (SOTA)
performance among open-source methods in four-class emotion classification.
These results validate its effectiveness for ERC applications and provide an
interpretability analysis from a psychological perspective.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 18:46:18 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Yupei",
""
],
[
"Sun",
"Qiyang",
""
],
[
"Murthy",
"Sunil Munthumoduku Krishna",
""
],
[
"Alturki",
"Emran",
""
],
[
"Schuller",
"Björn W.",
""
]
] | TITLE: GatedxLSTM: A Multimodal Affective Computing Approach for Emotion
Recognition in Conversations
ABSTRACT: Affective Computing (AC) is essential for advancing Artificial General
Intelligence (AGI), with emotion recognition serving as a key component.
However, human emotions are inherently dynamic, influenced not only by an
individual's expressions but also by interactions with others, and
single-modality approaches often fail to capture their full dynamics.
Multimodal Emotion Recognition (MER) leverages multiple signals but
traditionally relies on utterance-level analysis, overlooking the dynamic
nature of emotions in conversations. Emotion Recognition in Conversation (ERC)
addresses this limitation, yet existing methods struggle to align multimodal
features and explain why emotions evolve within dialogues. To bridge this gap,
we propose GatedxLSTM, a novel speech-text multimodal ERC model that explicitly
considers voice and transcripts of both the speaker and their conversational
partner(s) to identify the most influential sentences driving emotional shifts.
By integrating Contrastive Language-Audio Pretraining (CLAP) for improved
cross-modal alignment and employing a gating mechanism to emphasise emotionally
impactful utterances, GatedxLSTM enhances both interpretability and
performance. Additionally, the Dialogical Emotion Decoder (DED) refines emotion
predictions by modelling contextual dependencies. Experiments on the IEMOCAP
dataset demonstrate that GatedxLSTM achieves state-of-the-art (SOTA)
performance among open-source methods in four-class emotion classification.
These results validate its effectiveness for ERC applications and provide an
interpretability analysis from a psychological perspective.
|
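The gating mechanism the abstract mentions can be sketched as a learned per-utterance scalar that scales fused speech-text features. Dimensions and gate design below are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # Weights each utterance's fused audio-text features by an estimated
    # emotional salience before downstream classification.
    def __init__(self, d_audio=512, d_text=512, d_out=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(d_audio + d_text, 1), nn.Sigmoid())
        self.proj = nn.Linear(d_audio + d_text, d_out)

    def forward(self, a, t):          # a: (B, T, d_audio), t: (B, T, d_text)
        x = torch.cat([a, t], dim=-1)
        g = self.gate(x)              # (B, T, 1): per-utterance importance
        return self.proj(g * x)       # emphasise emotionally impactful turns
```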
2503.20956 | Aniruddh Vashisth | Yiwen Zheng, Agni K. Biswal, Yaqi Guo, Prakash Thakolkaran, Yash
Kokane, Vikas Varshney, Siddhant Kumar, Aniruddh Vashisth | Toward Sustainable Polymer Design: A Molecular Dynamics-Informed Machine
Learning Approach for Vitrimers | null | null | null | null | cond-mat.mtrl-sci physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vitrimer is an emerging class of sustainable polymers with self-healing
capabilities enabled by dynamic covalent adaptive networks. However, their
limited molecular diversity constrains their property space and potential
applications. Recent developments in machine learning (ML) techniques
accelerate polymer design by predicting properties and virtually screening
candidates, yet the scarcity of available experimental vitrimer data poses
challenges in training ML models. To address this, we leverage molecular
dynamics (MD) data generated by our previous work to train and benchmark seven
ML models covering six feature representations for glass transition temperature
(Tg) prediction. By averaging predicted Tg from different models, the model
ensemble approach outperforms individual models, allowing for accurate and
efficient property prediction on unlabeled datasets. Two novel vitrimers are
identified and synthesized, exhibiting experimentally validated higher Tg than
existing bifunctional transesterification vitrimers, along with demonstrated
healability. This work explores the possibility of using MD data to train ML
models in the absence of sufficient experimental data, enabling the discovery
of novel, synthesizable polymer chemistries with superior properties. The
integrated MD-ML approach offers polymer chemists an efficient tool for
designing polymers tailored to diverse applications.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 19:43:13 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zheng",
"Yiwen",
""
],
[
"Biswal",
"Agni K.",
""
],
[
"Guo",
"Yaqi",
""
],
[
"Thakolkaran",
"Prakash",
""
],
[
"Kokane",
"Yash",
""
],
[
"Varshney",
"Vikas",
""
],
[
"Kumar",
"Siddhant",
""
],
[
"Vashisth",
"Aniruddh",
""
]
] | TITLE: Toward Sustainable Polymer Design: A Molecular Dynamics-Informed Machine
Learning Approach for Vitrimers
ABSTRACT: Vitrimer is an emerging class of sustainable polymers with self-healing
capabilities enabled by dynamic covalent adaptive networks. However, their
limited molecular diversity constrains their property space and potential
applications. Recent developments in machine learning (ML) techniques
accelerate polymer design by predicting properties and virtually screening
candidates, yet the scarcity of available experimental vitrimer data poses
challenges in training ML models. To address this, we leverage molecular
dynamics (MD) data generated by our previous work to train and benchmark seven
ML models covering six feature representations for glass transition temperature
(Tg) prediction. By averaging predicted Tg from different models, the model
ensemble approach outperforms individual models, allowing for accurate and
efficient property prediction on unlabeled datasets. Two novel vitrimers are
identified and synthesized, exhibiting experimentally validated higher Tg than
existing bifunctional transesterification vitrimers, along with demonstrated
healability. This work explores the possibility of using MD data to train ML
models in the absence of sufficient experimental data, enabling the discovery
of novel, synthesizable polymer chemistries with superior properties. The
integrated MD-ML approach offers polymer chemists an efficient tool for
designing polymers tailored to diverse applications.
|
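The model-ensemble idea, averaging Tg predictions from several regressors trained on MD-derived features, can be sketched as follows; the features, targets, and model choices here are synthetic stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                  # stand-in molecular features
y = X[:, 0] * 30 + 400 + rng.normal(size=200)   # stand-in MD-computed Tg (K)

models = [RandomForestRegressor(random_state=0),
          GradientBoostingRegressor(random_state=0),
          Ridge()]
for m in models:
    m.fit(X[:150], y[:150])

# Ensemble prediction: mean over the individual models' outputs.
ensemble_pred = np.mean([m.predict(X[150:]) for m in models], axis=0)
print(ensemble_pred[:5])
```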
2503.21510 | Samuel Bilson | Samuel Bilson and Anna Pustogvar | Uncertainty-aware Bayesian machine learning modelling of land cover
classification | 31 pages, 10 figures | null | null | null | cs.LG cs.CV stat.ML | http://creativecommons.org/licenses/by/4.0/ | Land cover classification involves the production of land cover maps, which
determine the type of land through remote sensing imagery. Over recent years,
such classification has increasingly been performed by machine learning classification
models, which can give highly accurate predictions on land cover per pixel
using large quantities of input training data. However, such models do not
currently take account of input measurement uncertainty, which is vital for
traceability in metrology. In this work we propose a Bayesian classification
framework using generative modelling to take account of input measurement
uncertainty. We take the specific case of Bayesian quadratic discriminant
analysis, and apply it to land cover datasets from Copernicus Sentinel-2 in
2020 and 2021. We benchmark the performance of the model against more popular
classification models used in land cover maps such as random forests and neural
networks. We find that such Bayesian models are more trustworthy, in the sense
that they are more interpretable, explicitly model the input measurement
uncertainty, and maintain predictive performance of class probability outputs
across datasets of different years and sizes, whilst also being computationally
efficient.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 13:59:19 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Bilson",
"Samuel",
""
],
[
"Pustogvar",
"Anna",
""
]
] | TITLE: Uncertainty-aware Bayesian machine learning modelling of land cover
classification
ABSTRACT: Land cover classification involves the production of land cover maps, which
determine the type of land through remote sensing imagery. Over recent years,
such classification has increasingly been performed by machine learning classification
models, which can give highly accurate predictions on land cover per pixel
using large quantities of input training data. However, such models do not
currently take account of input measurement uncertainty, which is vital for
traceability in metrology. In this work we propose a Bayesian classification
framework using generative modelling to take account of input measurement
uncertainty. We take the specific case of Bayesian quadratic discriminant
analysis, and apply it to land cover datasets from Copernicus Sentinel-2 in
2020 and 2021. We benchmark the performance of the model against more popular
classification models used in land cover maps such as random forests and neural
networks. We find that such Bayesian models are more trustworthy, in the sense
that they are more interpretable, explicitly model the input measurement
uncertainty, and maintain predictive performance of class probability outputs
across datasets of different years and sizes, whilst also being computationally
efficient.
|
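For reference, standard quadratic discriminant analysis on synthetic two-class data looks like the sketch below; the paper's Bayesian variant additionally propagates input measurement uncertainty, which this baseline omits:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(1)
# Synthetic stand-in for per-pixel spectra of two land cover classes.
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 0.5, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

qda = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y)
print(qda.predict_proba(X[:3]))   # per-class probability outputs
```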
2503.21536 | J. Quetzalcoatl Toledo-Marin | J. Quetzalc\'oatl Toledo-Marin, Anindita Maiti, Geoffrey C. Fox, Roger
G. Melko | Exploring the Energy Landscape of RBMs: Reciprocal Space Insights into
Bosons, Hierarchical Learning and Symmetry Breaking | 19 pages, 8 figures, research article | null | null | null | cs.LG cond-mat.dis-nn stat.ML | http://creativecommons.org/licenses/by/4.0/ | Deep generative models have become ubiquitous due to their ability to learn
and sample from complex distributions. Despite the proliferation of various
frameworks, the relationships among these models remain largely unexplored, a
gap that hinders the development of a unified theory of AI learning. We address
two central challenges: clarifying the connections between different deep
generative models and deepening our understanding of their learning mechanisms.
We focus on Restricted Boltzmann Machines (RBMs), known for their universal
approximation capabilities for discrete distributions. By introducing a
reciprocal space formulation, we reveal a connection between RBMs, diffusion
processes, and coupled Bosons. We show that at initialization, the RBM operates
at a saddle point, where the local curvature is determined by the singular
values, whose distribution follows the Marchenko-Pastur law and exhibits
rotational symmetry. During training, this rotational symmetry is broken due to
hierarchical learning, where different degrees of freedom progressively capture
features at multiple levels of abstraction. This leads to a symmetry breaking
in the energy landscape, reminiscent of Landau theory. This symmetry breaking
in the energy landscape is characterized by the singular values and the weight
matrix eigenvector matrix. We derive the corresponding free energy in a
mean-field approximation. We show that in the limit of an infinite-size RBM, the
reciprocal variables are Gaussian distributed. Our findings indicate that in
this regime, there will be some modes for which the diffusion process will not
converge to the Boltzmann distribution. To illustrate our results, we trained
replicas of RBMs with different hidden layer sizes using the MNIST dataset. Our
findings bridge the gap between disparate generative frameworks and also shed
light on the processes underpinning learning in generative models.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 14:28:37 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Toledo-Marin",
"J. Quetzalcóatl",
""
],
[
"Maiti",
"Anindita",
""
],
[
"Fox",
"Geoffrey C.",
""
],
[
"Melko",
"Roger G.",
""
]
] | TITLE: Exploring the Energy Landscape of RBMs: Reciprocal Space Insights into
Bosons, Hierarchical Learning and Symmetry Breaking
ABSTRACT: Deep generative models have become ubiquitous due to their ability to learn
and sample from complex distributions. Despite the proliferation of various
frameworks, the relationships among these models remain largely unexplored, a
gap that hinders the development of a unified theory of AI learning. We address
two central challenges: clarifying the connections between different deep
generative models and deepening our understanding of their learning mechanisms.
We focus on Restricted Boltzmann Machines (RBMs), known for their universal
approximation capabilities for discrete distributions. By introducing a
reciprocal space formulation, we reveal a connection between RBMs, diffusion
processes, and coupled Bosons. We show that at initialization, the RBM operates
at a saddle point, where the local curvature is determined by the singular
values, whose distribution follows the Marchenko-Pastur law and exhibits
rotational symmetry. During training, this rotational symmetry is broken due to
hierarchical learning, where different degrees of freedom progressively capture
features at multiple levels of abstraction. This leads to a symmetry breaking
in the energy landscape, reminiscent of Landau theory. This symmetry breaking
in the energy landscape is characterized by the singular values and the weight
matrix eigenvector matrix. We derive the corresponding free energy in a
mean-field approximation. We show that in the limit of an infinite-size RBM, the
reciprocal variables are Gaussian distributed. Our findings indicate that in
this regime, there will be some modes for which the diffusion process will not
converge to the Boltzmann distribution. To illustrate our results, we trained
replicas of RBMs with different hidden layer sizes using the MNIST dataset. Our
findings bridge the gap between disparate generative frameworks and also shed
light on the processes underpinning learning in generative models.
|
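The initialization claim, that the singular-value spectrum of a random weight matrix follows the Marchenko-Pastur law, is easy to check numerically. A sketch with arbitrary layer sizes:

```python
import numpy as np

n_visible, n_hidden = 784, 400
# Gaussian init with variance 1/n_visible, so eigenvalues of W^T W land
# on the Marchenko-Pastur support for ratio q = n_hidden / n_visible.
W = np.random.default_rng(0).normal(0, 1 / np.sqrt(n_visible),
                                    size=(n_visible, n_hidden))
s = np.linalg.svd(W, compute_uv=False)
lam = s**2
q = n_hidden / n_visible
lam_min, lam_max = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2
print(lam.min(), lam.max(), "MP support:", lam_min, lam_max)
```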
2503.21617 | Mohammad Hasan Dr. | Ahatsham Hayat, Bilal Khan, Mohammad Rashedul Hasan | Leveraging Language Models for Analyzing Longitudinal Experiential Data
in Education | null | null | 10.1109/ICMLA61862.2024.00082 | null | cs.LG cs.CY | http://creativecommons.org/licenses/by/4.0/ | We propose a novel approach to leveraging pre-trained language models (LMs)
for early forecasting of academic trajectories in STEM students using
high-dimensional longitudinal experiential data. This data, which captures
students' study-related activities, behaviors, and psychological states, offers
valuable insights for forecasting-based interventions. Key challenges in
handling such data include high rates of missing values, limited dataset size
due to costly data collection, and complex temporal variability across
modalities. Our approach addresses these issues through a comprehensive data
enrichment process, integrating strategies for managing missing values,
augmenting data, and embedding task-specific instructions and contextual cues
to enhance the models' capacity for learning temporal patterns. Through
extensive experiments on a curated student learning dataset, we evaluate both
encoder-decoder and decoder-only LMs. While our findings show that LMs
effectively integrate data across modalities and exhibit resilience to missing
data, they primarily rely on high-level statistical patterns rather than
demonstrating a deeper understanding of temporal dynamics. Furthermore, their
ability to interpret explicit temporal information remains limited. This work
advances educational data science by highlighting both the potential and
limitations of LMs in modeling student trajectories for early intervention
based on longitudinal experiential data.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 15:37:23 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Hayat",
"Ahatsham",
""
],
[
"Khan",
"Bilal",
""
],
[
"Hasan",
"Mohammad Rashedul",
""
]
] | TITLE: Leveraging Language Models for Analyzing Longitudinal Experiential Data
in Education
ABSTRACT: We propose a novel approach to leveraging pre-trained language models (LMs)
for early forecasting of academic trajectories in STEM students using
high-dimensional longitudinal experiential data. This data, which captures
students' study-related activities, behaviors, and psychological states, offers
valuable insights for forecasting-based interventions. Key challenges in
handling such data include high rates of missing values, limited dataset size
due to costly data collection, and complex temporal variability across
modalities. Our approach addresses these issues through a comprehensive data
enrichment process, integrating strategies for managing missing values,
augmenting data, and embedding task-specific instructions and contextual cues
to enhance the models' capacity for learning temporal patterns. Through
extensive experiments on a curated student learning dataset, we evaluate both
encoder-decoder and decoder-only LMs. While our findings show that LMs
effectively integrate data across modalities and exhibit resilience to missing
data, they primarily rely on high-level statistical patterns rather than
demonstrating a deeper understanding of temporal dynamics. Furthermore, their
ability to interpret explicit temporal information remains limited. This work
advances educational data science by highlighting both the potential and
limitations of LMs in modeling student trajectories for early intervention
based on longitudinal experiential data.
|
2503.21785 | Guanjie Huang | Guanjie Huang, Danny Hin Kwok Tsang and Li Liu | Lend a Hand: Semi Training-Free Cued Speech Recognition via MLLM-Driven
Hand Modeling for Barrier-free Communication | null | null | null | null | eess.AS cs.SD | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cued Speech (CS) is an innovative visual communication system that integrates
lip-reading with hand coding, designed to enhance effective communication for
individuals with hearing impairments. Automatic CS Recognition (ACSR) refers to
the AI-driven process of automatically recognizing hand gestures and lip
movements in CS, converting them into text. However, previous work often relies
on complex fusion modules and training techniques. Additionally, due to the
limited amount of data in CS, the extraction of hand features, as well as
recognition modeling, has consistently been subpar, significantly limiting the
effectiveness of ACSR. To address this issue, we have innovatively explored the
capabilities of Multimodal large language models (MLLMs) in recognizing hand
shapes and positions in CS. More precisely, we propose a new Semi Training-Free
paradigm for ACSR, named STF-ACSR. This approach leverages zero-shot
recognition of hand movements through the Chinese CS Prompt Module (CCSPM),
which equipped a training-free keyframe filtering and customized prompt
engineering based on MLLM. It then integrates the recognition results into the
lip-reading model using a Minimalist Fusion Module (MFM), effectively achieving
superior recognition results. Furthermore, specifically for this study, we have
supplemented the existing dataset of 6 normal hearing CS cuers by recording
additional data from 8 cuers with hearing impairments, resulting in a new mixed
dataset. Extensive experiments have demonstrated that STF-ACSR significantly
outperforms previous methods on both normal and hearing-impaired data.
Implementation and checkpoints are available at
https://github.com/DennisHgj/STF_ACSR.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 18:18:03 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Huang",
"Guanjie",
""
],
[
"Tsang",
"Danny Hin Kwok",
""
],
[
"Liu",
"Li",
""
]
] | TITLE: Lend a Hand: Semi Training-Free Cued Speech Recognition via MLLM-Driven
Hand Modeling for Barrier-free Communication
ABSTRACT: Cued Speech (CS) is an innovative visual communication system that integrates
lip-reading with hand coding, designed to enhance effective communication for
individuals with hearing impairments. Automatic CS Recognition (ACSR) refers to
the AI-driven process of automatically recognizing hand gestures and lip
movements in CS, converting them into text. However, previous work often relies
on complex fusion modules and training techniques. Additionally, due to the
limited amount of data in CS, the extraction of hand features, as well as
recognition modeling, has consistently been subpar, significantly limiting the
effectiveness of ACSR. To address this issue, we have innovatively explored the
capabilities of Multimodal large language models (MLLMs) in recognizing hand
shapes and positions in CS. More precisely, we propose a new Semi Training-Free
paradigm for ACSR, named STF-ACSR. This approach leverages zero-shot
recognition of hand movements through the Chinese CS Prompt Module (CCSPM),
which is equipped with training-free keyframe filtering and customized prompt
engineering based on an MLLM. It then integrates the recognition results into the
lip-reading model using a Minimalist Fusion Module (MFM), effectively achieving
superior recognition results. Furthermore, specifically for this study, we have
supplemented the existing dataset of 6 normal hearing CS cuers by recording
additional data from 8 cuers with hearing impairments, resulting in a new mixed
dataset. Extensive experiments have demonstrated that STF-ACSR significantly
outperforms previous methods on both normal and hearing-impaired data.
Implementation and checkpoints are available at
https://github.com/DennisHgj/STF_ACSR.
|
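The training-free keyframe filtering in CCSPM can be pictured as keeping only frames that differ enough from the last kept frame, so the MLLM sees distinct hand shapes and positions. The difference measure and threshold below are assumptions:

```python
import numpy as np

def filter_keyframes(frames, thresh=12.0):
    # Keep frame 0, then any frame whose mean absolute pixel difference
    # from the last kept frame exceeds the threshold (an assumed rule).
    kept = [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[kept[-1]].astype(float))
        if diff.mean() > thresh:
            kept.append(i)
    return kept

frames = np.random.default_rng(0).integers(0, 255, size=(30, 64, 64))
print(filter_keyframes(frames))
```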
2503.21791 | Shuang Wang | Shuang Wang, Fei Deng, Peifan Jiang, Zezheng Ni and Bin Wang | SeisRDT: Latent Diffusion Model Based On Representation Learning For
Seismic Data Interpolation And Reconstruction | Submitted to Geophysics | null | null | null | physics.geo-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to limitations such as geographic, physical, or economic factors,
collected seismic data often have missing traces. Traditional seismic data
reconstruction methods face the challenge of selecting numerous empirical
parameters and struggle to handle large-scale continuous missing traces. With
the advancement of deep learning, various diffusion models have demonstrated
strong reconstruction capabilities. However, these UNet-based diffusion models
require significant computational resources and struggle to learn the
correlation between different traces in seismic data. To address the complex
and irregular missing situations in seismic data, we propose a latent diffusion
transformer utilizing representation learning for seismic data reconstruction.
By employing a mask modeling scheme based on representation learning, the
representation module uses the token sequence of known data to infer the token
sequence of unknown data, enabling the reconstructed data from the diffusion
model to have a more consistent data distribution and better correlation and
accuracy with the known data. We propose the Representation Diffusion
Transformer architecture, and a relative positional bias is added when
calculating attention, enabling the diffusion model to achieve global modeling
capability for seismic data. A pre-trained data compression model compresses
the training and inference processes of the diffusion model into a
latent space, which, compared to other diffusion model-based reconstruction
methods, reduces computational and inference costs. Reconstruction experiments
on field and synthetic datasets indicate that our method achieves higher
reconstruction accuracy than existing methods and can handle various complex
missing scenarios.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 10:16:35 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wang",
"Shuang",
""
],
[
"Deng",
"Fei",
""
],
[
"Jiang",
"Peifan",
""
],
[
"Ni",
"Zezheng",
""
],
[
"Wang",
"Bin",
""
]
] | TITLE: SeisRDT: Latent Diffusion Model Based On Representation Learning For
Seismic Data Interpolation And Reconstruction
ABSTRACT: Due to limitations such as geographic, physical, or economic factors,
collected seismic data often have missing traces. Traditional seismic data
reconstruction methods face the challenge of selecting numerous empirical
parameters and struggle to handle large-scale continuous missing traces. With
the advancement of deep learning, various diffusion models have demonstrated
strong reconstruction capabilities. However, these UNet-based diffusion models
require significant computational resources and struggle to learn the
correlation between different traces in seismic data. To address the complex
and irregular missing situations in seismic data, we propose a latent diffusion
transformer utilizing representation learning for seismic data reconstruction.
By employing a mask modeling scheme based on representation learning, the
representation module uses the token sequence of known data to infer the token
sequence of unknown data, enabling the reconstructed data from the diffusion
model to have a more consistent data distribution and better correlation and
accuracy with the known data. We propose the Representation Diffusion
Transformer architecture, and a relative positional bias is added when
calculating attention, enabling the diffusion model to achieve global modeling
capability for seismic data. A pre-trained data compression model compresses
the training and inference processes of the diffusion model into a
latent space, which, compared to other diffusion model-based reconstruction
methods, reduces computational and inference costs. Reconstruction experiments
on field and synthetic datasets indicate that our method achieves higher
reconstruction accuracy than existing methods and can handle various complex
missing scenarios.
|
2503.21802 | Jingyao Sun | Jingyao Sun, Qilu Zhang, Di Ma, Tianyu Jia, Shijie Jia, Xiaoxue Zhai,
Ruimou Xie, Ping-Ju Lin, Zhibin Li, Yu Pan, Linhong Ji, Chong Li | Structured and sparse partial least squares coherence for multivariate
cortico-muscular analysis | This work has been submitted to the IEEE for possible publication | null | null | null | stat.AP cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Multivariate cortico-muscular analysis has recently emerged as a promising
approach for evaluating the corticospinal neural pathway. However, current
multivariate approaches encounter challenges such as high dimensionality and
limited sample sizes, thus restricting their further applications. In this
paper, we propose a structured and sparse partial least squares coherence
algorithm (ssPLSC) to extract shared latent space representations related to
cortico-muscular interactions. Our approach leverages an embedded optimization
framework by integrating a partial least squares (PLS)-based objective
function, a sparsity constraint and a connectivity-based structured constraint,
addressing the generalizability, interpretability and spatial structure. To
solve the optimization problem, we develop an efficient alternating iterative
algorithm within a unified framework and demonstrate its convergence experimentally.
Extensive experimental results from one synthetic and several real-world
datasets have demonstrated that ssPLSC can achieve competitive or better
performance over some representative multivariate cortico-muscular fusion
methods, particularly in scenarios characterized by limited sample sizes and
high noise levels. This study provides a novel multivariate fusion method for
cortico-muscular analysis, offering a transformative tool for the evaluation of
corticospinal pathway integrity in neurological disorders.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 01:56:11 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Sun",
"Jingyao",
""
],
[
"Zhang",
"Qilu",
""
],
[
"Ma",
"Di",
""
],
[
"Jia",
"Tianyu",
""
],
[
"Jia",
"Shijie",
""
],
[
"Zhai",
"Xiaoxue",
""
],
[
"Xie",
"Ruimou",
""
],
[
"Lin",
"Ping-Ju",
""
],
[
"Li",
"Zhibin",
""
],
[
"Pan",
"Yu",
""
],
[
"Ji",
"Linhong",
""
],
[
"Li",
"Chong",
""
]
] | TITLE: Structured and sparse partial least squares coherence for multivariate
cortico-muscular analysis
ABSTRACT: Multivariate cortico-muscular analysis has recently emerged as a promising
approach for evaluating the corticospinal neural pathway. However, current
multivariate approaches encounter challenges such as high dimensionality and
limited sample sizes, thus restricting their further applications. In this
paper, we propose a structured and sparse partial least squares coherence
algorithm (ssPLSC) to extract shared latent space representations related to
cortico-muscular interactions. Our approach leverages an embedded optimization
framework by integrating a partial least squares (PLS)-based objective
function, a sparsity constraint and a connectivity-based structured constraint,
addressing the generalizability, interpretability and spatial structure. To
solve the optimization problem, we develop an efficient alternating iterative
algorithm within a unified framework and demonstrate its convergence experimentally.
Extensive experimental results from one synthetic and several real-world
datasets have demonstrated that ssPLSC can achieve competitive or better
performance over some representative multivariate cortico-muscular fusion
methods, particularly in scenarios characterized by limited sample sizes and
high noise levels. This study provides a novel multivariate fusion method for
cortico-muscular analysis, offering a transformative tool for the evaluation of
corticospinal pathway integrity in neurological disorders.
|
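The core sparse-PLS update can be sketched as alternating soft-thresholded power iterations on the cross-covariance; the connectivity-based structured constraint of ssPLSC is omitted here:

```python
import numpy as np

def soft(v, lam):
    # Soft-thresholding operator (proximal map of the l1 norm).
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pls(X, Y, lam=0.1, iters=50):
    # Alternating updates for one pair of sparse weight vectors on the
    # cross-covariance between the two signal blocks.
    C = X.T @ Y
    v = np.ones(C.shape[1]) / np.sqrt(C.shape[1])
    for _ in range(iters):
        u = soft(C @ v, lam); u /= np.linalg.norm(u) + 1e-12
        v = soft(C.T @ u, lam); v /= np.linalg.norm(v) + 1e-12
    return u, v

rng = np.random.default_rng(0)
u, v = sparse_pls(rng.normal(size=(500, 32)),   # EEG-like block
                  rng.normal(size=(500, 8)))    # EMG-like block
```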
2503.21806 | Heqing Zou | Heqing Zou, Fengmao Lv, Desheng Zheng, Eng Siong Chng and Deepu Rajan | Large Language Models Meet Contrastive Learning: Zero-Shot Emotion
Recognition Across Languages | Accepted to ICME 2025 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multilingual speech emotion recognition aims to estimate a speaker's
emotional state using a contactless method across different languages. However,
variability in voice characteristics and linguistic diversity poses significant
challenges for zero-shot speech emotion recognition, especially with
multilingual datasets. In this paper, we propose leveraging contrastive
learning to refine multilingual speech features and extend large language
models for zero-shot multilingual speech emotion estimation. Specifically, we
employ a novel two-stage training framework to align speech signals with
linguistic features in the emotional space, capturing both emotion-aware and
language-agnostic speech representations. To advance research in this field, we
introduce a large-scale synthetic multilingual speech emotion dataset, M5SER.
Our experiments demonstrate the effectiveness of the proposed method in both
speech emotion recognition and zero-shot multilingual speech emotion
recognition, including previously unseen datasets and languages.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 05:58:18 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zou",
"Heqing",
""
],
[
"Lv",
"Fengmao",
""
],
[
"Zheng",
"Desheng",
""
],
[
"Chng",
"Eng Siong",
""
],
[
"Rajan",
"Deepu",
""
]
] | TITLE: Large Language Models Meet Contrastive Learning: Zero-Shot Emotion
Recognition Across Languages
ABSTRACT: Multilingual speech emotion recognition aims to estimate a speaker's
emotional state using a contactless method across different languages. However,
variability in voice characteristics and linguistic diversity pose significant
challenges for zero-shot speech emotion recognition, especially with
multilingual datasets. In this paper, we propose leveraging contrastive
learning to refine multilingual speech features and extend large language
models for zero-shot multilingual speech emotion estimation. Specifically, we
employ a novel two-stage training framework to align speech signals with
linguistic features in the emotional space, capturing both emotion-aware and
language-agnostic speech representations. To advance research in this field, we
introduce a large-scale synthetic multilingual speech emotion dataset, M5SER.
Our experiments demonstrate the effectiveness of the proposed method in both
speech emotion recognition and zero-shot multilingual speech emotion
recognition, including previously unseen datasets and languages.
|
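The contrastive alignment stage can be sketched as a symmetric InfoNCE loss between speech and text embeddings in a shared emotion space; this is a generic CLAP-style objective, not the paper's full two-stage recipe:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(speech_emb, text_emb, tau=0.07):
    # Symmetric InfoNCE: matched speech/text pairs along the diagonal are
    # pulled together, all other in-batch pairs pushed apart.
    s = F.normalize(speech_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = s @ t.T / tau
    labels = torch.arange(len(s))
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

loss = contrastive_loss(torch.randn(16, 256), torch.randn(16, 256))
```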
2503.21810 | Zhenyu Wu | Zhenyu Wu, Jiaoyan Chen, Norman W. Paton | Taxonomy Inference for Tabular Data Using Large Language Models | null | null | null | null | cs.DB cs.AI cs.CL cs.IR | http://creativecommons.org/licenses/by/4.0/ | Taxonomy inference for tabular data is a critical task of schema inference,
aiming at discovering entity types (i.e., concepts) of the tables and building
their hierarchy. It can play an important role in data management, data
exploration, ontology learning, and many data-centric applications. Existing
schema inference systems focus more on XML, JSON or RDF data, and often rely on
lexical formats and structures of the data for calculating similarities, with
limited exploitation of the semantics of the text across a table. Motivated by
recent works on taxonomy completion and construction using Large Language
Models (LLMs), this paper presents two LLM-based methods for taxonomy inference
for tables: (i) EmTT which embeds columns by fine-tuning with contrastive
learning encoder-only LLMs like BERT and utilises clustering for hierarchy
construction, and (ii) GeTT which generates table entity types and their
hierarchy by iterative prompting using a decoder-only LLM like GPT-4.
Extensive evaluation on three real-world datasets with six metrics covering
different aspects of the output taxonomies has demonstrated that EmTT and GeTT
can both produce taxonomies with strong consistency relative to the Ground
Truth.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 16:26:05 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wu",
"Zhenyu",
""
],
[
"Chen",
"Jiaoyan",
""
],
[
"Paton",
"Norman W.",
""
]
] | TITLE: Taxonomy Inference for Tabular Data Using Large Language Models
ABSTRACT: Taxonomy inference for tabular data is a critical task of schema inference,
aiming at discovering entity types (i.e., concepts) of the tables and building
their hierarchy. It can play an important role in data management, data
exploration, ontology learning, and many data-centric applications. Existing
schema inference systems focus more on XML, JSON or RDF data, and often rely on
lexical formats and structures of the data for calculating similarities, with
limited exploitation of the semantics of the text across a table. Motivated by
recent works on taxonomy completion and construction using Large Language
Models (LLMs), this paper presents two LLM-based methods for taxonomy inference
for tables: (i) EmTT which embeds columns by fine-tuning with contrastive
learning encoder-only LLMs like BERT and utilises clustering for hierarchy
construction, and (ii) GeTT which generates table entity types and their
hierarchy by iterative prompting using a decoder-only LLM like GPT-4.
Extensive evaluation on three real-world datasets with six metrics covering
different aspects of the output taxonomies has demonstrated that EmTT and GeTT
can both produce taxonomies with strong consistency relative to the Ground
Truth.
|
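The clustering step of an EmTT-style pipeline can be sketched with agglomerative clustering over column embeddings, whose merge tree serves as a candidate type hierarchy; the random embeddings below stand in for fine-tuned BERT outputs:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Stand-in column embeddings; in the pipeline these would come from a
# contrastively fine-tuned encoder.
emb = np.random.default_rng(0).normal(size=(10, 32))

# distance_threshold=0 with n_clusters=None builds the full merge tree.
agg = AgglomerativeClustering(distance_threshold=0.0, n_clusters=None).fit(emb)
print(agg.children_)   # each row merges two nodes, encoding the hierarchy
```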
2503.21812 | Jianping Ye | Jianping Ye, Michel Wedel, Kunpeng Zhang | IPGO: Indirect Prompt Gradient Optimization on Text-to-Image Generative
Models with High Data Efficiency | 8 pages, 4 figures, 1 table | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Text-to-Image Diffusion models excel at generating images from text prompts
but often lack optimal alignment with content semantics, aesthetics, and human
preferences. To address these issues, in this study we introduce a novel
framework, Indirect Prompt Gradient Optimization (IPGO), for prompt-level
fine-tuning. IPGO enhances prompt embeddings by injecting continuously
differentiable tokens at the beginning and end of the prompt embeddings, while
exploiting low-rank benefits and flexibility from rotations. It allows for
gradient-based optimization of injected tokens while enforcing value,
orthonormality, and conformity constraints, facilitating continuous updates and
empowering computational efficiency. To evaluate the performance of IPGO, we
conduct prompt-wise and prompt-batch training with three reward models
targeting image aesthetics, image-text alignment, and human preferences under
three datasets of different complexity. The results show that IPGO consistently
matches or outperforms cutting-edge benchmarks, including stable diffusion v1.5
with raw prompts, training-based approaches (DRaFT and DDPO), and training-free
methods (DPO-Diffusion, Promptist, and ChatGPT-4o). Furthermore, we demonstrate
IPGO's effectiveness in enhancing image generation quality while requiring
minimal training data and limited computational resources.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 18:14:42 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Ye",
"Jianping",
""
],
[
"Wedel",
"Michel",
""
],
[
"Zhang",
"Kunpeng",
""
]
] | TITLE: IPGO: Indirect Prompt Gradient Optimization on Text-to-Image Generative
Models with High Data Efficiency
ABSTRACT: Text-to-Image Diffusion models excel at generating images from text prompts
but often lack optimal alignment with content semantics, aesthetics, and human
preferences. To address these issues, in this study we introduce a novel
framework, Indirect Prompt Gradient Optimization (IPGO), for prompt-level
fine-tuning. IPGO enhances prompt embeddings by injecting continuously
differentiable tokens at the beginning and end of the prompt embeddings, while
exploiting low-rank benefits and flexibility from rotations. It allows for
gradient-based optimization of injected tokens while enforcing value,
orthonormality, and conformity constraints, facilitating continuous updates and
empowering computational efficiency. To evaluate the performance of IPGO, we
conduct prompt-wise and prompt-batch training with three reward models
targeting image aesthetics, image-text alignment, and human preferences under
three datasets of different complexity. The results show that IPGO consistently
matches or outperforms cutting-edge benchmarks, including stable diffusion v1.5
with raw prompts, training-based approaches (DRaFT and DDPO), and training-free
methods (DPO-Diffusion, Promptist, and ChatGPT-4o). Furthermore, we demonstrate
IPGO's effectiveness in enhancing image generation quality while requiring
minimal training data and limited computational resources.
|
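The token-injection idea can be sketched as learnable prefix and suffix embeddings concatenated around frozen prompt embeddings; token counts, dimensions, and the omitted value/orthonormality/conformity constraints are assumptions:

```python
import torch
import torch.nn as nn

class PromptWrapper(nn.Module):
    # Only the injected prefix/suffix tokens receive gradients; the
    # original prompt embeddings and the diffusion model stay frozen.
    def __init__(self, d=768, n_prefix=4, n_suffix=4):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(n_prefix, d) * 0.02)
        self.suffix = nn.Parameter(torch.randn(n_suffix, d) * 0.02)

    def forward(self, prompt_emb):      # prompt_emb: (L, d), frozen
        return torch.cat([self.prefix, prompt_emb, self.suffix], dim=0)

wrapped = PromptWrapper()(torch.randn(77, 768))
print(wrapped.shape)                    # (85, 768)
```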
2503.21813 | Zhangcheng Qiang | Zhangcheng Qiang | OAEI-LLM-T: A TBox Benchmark Dataset for Understanding LLM
Hallucinations in Ontology Matching Systems | 10 pages, 4 figures, 3 tables, 2 prompt templates | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hallucinations are inevitable in downstream tasks using large language models
(LLMs). While addressing hallucinations becomes a substantial challenge for
LLM-based ontology matching (OM) systems, we introduce a new benchmark dataset
called OAEI-LLM-T. The dataset evolves from the TBox (i.e. schema-matching)
datasets in the Ontology Alignment Evaluation Initiative (OAEI), capturing
hallucinations of different LLMs performing OM tasks. These OM-specific
hallucinations are carefully classified into two primary categories and six
sub-categories. We showcase the usefulness of the dataset in constructing the
LLM leaderboard and fine-tuning foundational LLMs for LLM-based OM systems.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 18:20:04 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Qiang",
"Zhangcheng",
""
]
] | TITLE: OAEI-LLM-T: A TBox Benchmark Dataset for Understanding LLM
Hallucinations in Ontology Matching Systems
ABSTRACT: Hallucinations are inevitable in downstream tasks using large language models
(LLMs). While addressing hallucinations becomes a substantial challenge for
LLM-based ontology matching (OM) systems, we introduce a new benchmark dataset
called OAEI-LLM-T. The dataset evolves from the TBox (i.e. schema-matching)
datasets in the Ontology Alignment Evaluation Initiative (OAEI), capturing
hallucinations of different LLMs performing OM tasks. These OM-specific
hallucinations are carefully classified into two primary categories and six
sub-categories. We showcase the usefulness of the dataset in constructing the
LLM leaderboard and fine-tuning foundational LLMs for LLM-based OM systems.
|
2503.21815 | Mohamed Afane | Mohamed Afane, Gabrielle Ebbrecht, Ying Wang, Juntao Chen, Junaid
Farooq | ATP: Adaptive Threshold Pruning for Efficient Data Encoding in Quantum
Neural Networks | Accepted at the IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), 2025 | null | null | null | quant-ph cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Quantum Neural Networks (QNNs) offer promising capabilities for complex data
tasks, but are often constrained by limited qubit resources and high
entanglement, which can hinder scalability and efficiency. In this paper, we
introduce Adaptive Threshold Pruning (ATP), an encoding method that reduces
entanglement and optimizes data complexity for efficient computations in QNNs.
ATP dynamically prunes non-essential features in the data based on adaptive
thresholds, effectively reducing quantum circuit requirements while preserving
high performance. Extensive experiments across multiple datasets demonstrate
that ATP reduces entanglement entropy and improves adversarial robustness when
combined with adversarial training methods like FGSM. Our results highlight
ATP's ability to balance computational efficiency and model resilience,
achieving significant performance improvements with fewer resources, which will
help make QNNs more feasible in practical, resource-constrained settings.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 01:14:26 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Afane",
"Mohamed",
""
],
[
"Ebbrecht",
"Gabrielle",
""
],
[
"Wang",
"Ying",
""
],
[
"Chen",
"Juntao",
""
],
[
"Farooq",
"Junaid",
""
]
] | TITLE: ATP: Adaptive Threshold Pruning for Efficient Data Encoding in Quantum
Neural Networks
ABSTRACT: Quantum Neural Networks (QNNs) offer promising capabilities for complex data
tasks, but are often constrained by limited qubit resources and high
entanglement, which can hinder scalability and efficiency. In this paper, we
introduce Adaptive Threshold Pruning (ATP), an encoding method that reduces
entanglement and optimizes data complexity for efficient computations in QNNs.
ATP dynamically prunes non-essential features in the data based on adaptive
thresholds, effectively reducing quantum circuit requirements while preserving
high performance. Extensive experiments across multiple datasets demonstrate
that ATP reduces entanglement entropy and improves adversarial robustness when
combined with adversarial training methods like FGSM. Our results highlight
ATP's ability to balance computational efficiency and model resilience,
achieving significant performance improvements with fewer resources, which will
help make QNNs more feasible in practical, resource-constrained settings.
|
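Adaptive threshold pruning can be pictured as zeroing features whose magnitude falls below a per-sample threshold before quantum encoding, so fewer qubits and rotations are needed; the quantile rule below is an assumption:

```python
import numpy as np

def adaptive_threshold_prune(x, keep=0.5):
    # Zero out features below a per-sample magnitude quantile; only the
    # surviving features would be encoded into the quantum circuit.
    thresh = np.quantile(np.abs(x), 1.0 - keep)
    return np.where(np.abs(x) >= thresh, x, 0.0)

x = np.random.default_rng(0).normal(size=8)
print(adaptive_threshold_prune(x, keep=0.5))
```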
2503.21816 | Jiahe Li | Jiahe Li, Feiyu Wang, Xiaochao Qu, Chengjing Wu, Luoqi Liu, Ting Liu | EVPGS: Enhanced View Prior Guidance for Splatting-based Extrapolated
View Synthesis | Accepted by CVPR2025 | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian Splatting (GS)-based methods rely on sufficient training view
coverage and perform synthesis on interpolated views. In this work, we tackle
the more challenging and underexplored Extrapolated View Synthesis (EVS) task.
Here we enable GS-based models trained with limited view coverage to generalize
well to extrapolated views. To achieve our goal, we propose a view augmentation
framework to guide training through a coarse-to-fine process. At the coarse
stage, we reduce rendering artifacts due to insufficient view coverage by
introducing a regularization strategy at both appearance and geometry levels.
At the fine stage, we generate reliable view priors to provide further training
guidance. To this end, we incorporate an occlusion awareness into the view
prior generation process, and refine the view priors with the aid of coarse
stage output. We call our framework Enhanced View Prior Guidance for Splatting
(EVPGS). To comprehensively evaluate EVPGS on the EVS task, we collect a
real-world dataset called Merchandise3D dedicated to the EVS scenario.
Experiments on three datasets including both real and synthetic demonstrate
EVPGS achieves state-of-the-art performance, while improving synthesis quality
at extrapolated views for GS-based methods both qualitatively and
quantitatively. We will make our code, dataset, and models public.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 01:53:36 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Jiahe",
""
],
[
"Wang",
"Feiyu",
""
],
[
"Qu",
"Xiaochao",
""
],
[
"Wu",
"Chengjing",
""
],
[
"Liu",
"Luoqi",
""
],
[
"Liu",
"Ting",
""
]
] | TITLE: EVPGS: Enhanced View Prior Guidance for Splatting-based Extrapolated
View Synthesis
ABSTRACT: Gaussian Splatting (GS)-based methods rely on sufficient training view
coverage and perform synthesis on interpolated views. In this work, we tackle
the more challenging and underexplored Extrapolated View Synthesis (EVS) task.
Here we enable GS-based models trained with limited view coverage to generalize
well to extrapolated views. To achieve our goal, we propose a view augmentation
framework to guide training through a coarse-to-fine process. At the coarse
stage, we reduce rendering artifacts due to insufficient view coverage by
introducing a regularization strategy at both appearance and geometry levels.
At the fine stage, we generate reliable view priors to provide further training
guidance. To this end, we incorporate an occlusion awareness into the view
prior generation process, and refine the view priors with the aid of coarse
stage output. We call our framework Enhanced View Prior Guidance for Splatting
(EVPGS). To comprehensively evaluate EVPGS on the EVS task, we collect a
real-world dataset called Merchandise3D dedicated to the EVS scenario.
Experiments on three datasets including both real and synthetic demonstrate
EVPGS achieves state-of-the-art performance, while improving synthesis quality
at extrapolated views for GS-based methods both qualitatively and
quantitatively. We will make our code, dataset, and models public.
|
2503.21818 | Jiangbo Pei | Tianqi Tu, Hui Wang, Jiangbo Pei, Xiaojuan Yu, Aidong Men, Suxia Wang,
Qingchao Chen, Ying Tan, Feng Yu, Minghui Zhao | Deep Learning-Based Quantitative Assessment of Renal Chronicity Indices
in Lupus Nephritis | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Renal chronicity indices (CI) have been identified as strong
predictors of long-term outcomes in lupus nephritis (LN) patients. However,
assessment by pathologists is hindered by challenges such as substantial time
requirements, high interobserver variation, and susceptibility to fatigue. This
study aims to develop an effective deep learning (DL) pipeline that automates
the assessment of CI and provides valuable prognostic insights from a
disease-specific perspective. Methods: We curated a dataset comprising 282
slides obtained from 141 patients across two independent cohorts with a
complete 10-years follow-up. Our DL pipeline was developed on 60 slides (22,410
patch images) from 30 patients in the training cohort and evaluated on both an
internal testing set (148 slides, 77,605 patch images) and an external testing
set (74 slides, 27,522 patch images). Results: The study included two cohorts
with slight demographic differences, particularly in age and hemoglobin levels.
The DL pipeline showed high segmentation performance across tissue compartments
and histopathologic lesions, outperforming state-of-the-art methods. The DL
pipeline also demonstrated a strong correlation with pathologists in assessing
CI, significantly improving interobserver agreement. Additionally, the DL
pipeline enhanced prognostic accuracy, particularly in outcome prediction, when
combined with clinical parameters and pathologist-assessed CIs. Conclusions: The
DL pipeline demonstrated accuracy and efficiency in assessing CI in LN, showing
promise in improving interobserver agreement among pathologists. It also
exhibited significant value in prognostic analysis and enhancing outcome
prediction in LN patients, offering a valuable tool for clinical
decision-making.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:20:59 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Tu",
"Tianqi",
""
],
[
"Wang",
"Hui",
""
],
[
"Pei",
"Jiangbo",
""
],
[
"Yu",
"Xiaojuan",
""
],
[
"Men",
"Aidong",
""
],
[
"Wang",
"Suxia",
""
],
[
"Chen",
"Qingchao",
""
],
[
"Tan",
"Ying",
""
],
[
"Yu",
"Feng",
""
],
[
"Zhao",
"Minghui",
""
]
] | TITLE: Deep Learning-Based Quantitative Assessment of Renal Chronicity Indices
in Lupus Nephritis
ABSTRACT: Background: Renal chronicity indices (CI) have been identified as strong
predictors of long-term outcomes in lupus nephritis (LN) patients. However,
assessment by pathologists is hindered by challenges such as substantial time
requirements, high interobserver variation, and susceptibility to fatigue. This
study aims to develop an effective deep learning (DL) pipeline that automates
the assessment of CI and provides valuable prognostic insights from a
disease-specific perspective. Methods: We curated a dataset comprising 282
slides obtained from 141 patients across two independent cohorts with a
complete 10-year follow-up. Our DL pipeline was developed on 60 slides (22,410
patch images) from 30 patients in the training cohort and evaluated on both an
internal testing set (148 slides, 77,605 patch images) and an external testing
set (74 slides, 27,522 patch images). Results: The study included two cohorts
with slight demographic differences, particularly in age and hemoglobin levels.
The DL pipeline showed high segmentation performance across tissue compartments
and histopathologic lesions, outperforming state-of-the-art methods. The DL
pipeline also demonstrated a strong correlation with pathologists in assessing
CI, significantly improving interobserver agreement. Additionally, the DL
pipeline enhanced prognostic accuracy, particularly in outcome prediction, when
combined with clinical parameters and pathologist-assessed CIs. Conclusions: The
DL pipeline demonstrated accuracy and efficiency in assessing CI in LN, showing
promise in improving interobserver agreement among pathologists. It also
exhibited significant value in prognostic analysis and enhancing outcome
prediction in LN patients, offering a valuable tool for clinical
decision-making.
|
2503.21820 | Yun Liao | Yide Di, Yun Liao, Hao Zhou, Kaijun Zhu, Qing Duan, Junhui Liu, Mingyu
Lu | UFM: Unified Feature Matching Pre-training with Multi-Modal Image
Assistants | 34 pages, 13 figures | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Image feature matching, a foundational task in computer vision, remains
challenging for multimodal image applications, often necessitating intricate
training on specific datasets. In this paper, we introduce a Unified Feature
Matching pre-trained model (UFM) designed to address feature matching
challenges across a wide spectrum of modal images. We present Multimodal Image
Assistant (MIA) transformers, finely tunable structures adept at handling
diverse feature matching problems. UFM exhibits versatility in addressing both
feature matching tasks within the same modal and those across different modals.
Additionally, we propose a data augmentation algorithm and a staged
pre-training strategy to effectively tackle challenges arising from sparse data
in specific modals and imbalanced modal datasets. Experimental results
demonstrate that UFM excels in generalization and performance across various
feature matching tasks. The code will be released
at: https://github.com/LiaoYun0x0/UFM.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 06:20:52 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Di",
"Yide",
""
],
[
"Liao",
"Yun",
""
],
[
"Zhou",
"Hao",
""
],
[
"Zhu",
"Kaijun",
""
],
[
"Duan",
"Qing",
""
],
[
"Liu",
"Junhui",
""
],
[
"Lu",
"Mingyu",
""
]
] | TITLE: UFM: Unified Feature Matching Pre-training with Multi-Modal Image
Assistants
ABSTRACT: Image feature matching, a foundational task in computer vision, remains
challenging for multimodal image applications, often necessitating intricate
training on specific datasets. In this paper, we introduce a Unified Feature
Matching pre-trained model (UFM) designed to address feature matching
challenges across a wide spectrum of modal images. We present Multimodal Image
Assistant (MIA) transformers, finely tunable structures adept at handling
diverse feature matching problems. UFM exhibits versatility in addressing both
feature matching tasks within the same modal and those across different modals.
Additionally, we propose a data augmentation algorithm and a staged
pre-training strategy to effectively tackle challenges arising from sparse data
in specific modals and imbalanced modal datasets. Experimental results
demonstrate that UFM excels in generalization and performance across various
feature matching tasks. The code will be released
at: https://github.com/LiaoYun0x0/UFM.
|