id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2411.16870 | Nazia Tasnim | Nazia Tasnim and Bryan A. Plummer | RECAST: Reparameterized, Compact weight Adaptation for Sequential Tasks | Accepted as a conference paper in ICLR, 2025 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Incremental learning aims to adapt to new sets of categories over time with
minimal computational overhead. Prior work often addresses this task by
training efficient task-specific adaptors that modify frozen layer weights or
features to capture relevant information without affecting predictions on
previously learned categories. While these adaptors are generally more
efficient than finetuning the entire network, they still require tens to
hundreds of thousands of task-specific trainable parameters even for relatively
small networks, making it challenging to operate on resource-constrained
environments with high communication costs like edge devices or mobile phones.
Thus, we propose Reparameterized, Compact weight Adaptation for Sequential
Tasks (RECAST), a novel method that dramatically reduces task-specific
trainable parameters to fewer than 50, several orders of magnitude fewer than
competing methods like LoRA. RECAST accomplishes this efficiency by learning to
decompose layer weights into a soft parameter-sharing framework consisting of
shared weight templates and very few module-specific scaling factors or
coefficients. This soft parameter-sharing framework allows for effective
task-wise reparameterization by tuning only these coefficients while keeping
templates frozen. A key innovation of RECAST is the novel weight reconstruction
pipeline called Neural Mimicry, which eliminates the need for pretraining from
scratch. This allows for high-fidelity emulation of existing pretrained weights
within our framework and provides quick adaptability to any model scale and
architecture. Extensive experiments across six datasets demonstrate RECAST
outperforms the state-of-the-art by up to 3% across various scales,
architectures, and parameter spaces. Moreover, we show that RECAST's
architecture-agnostic nature allows for seamless integration with existing
methods, further boosting performance.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 19:08:38 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 07:36:26 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Tasnim",
"Nazia",
""
],
[
"Plummer",
"Bryan A.",
""
]
] | TITLE: RECAST: Reparameterized, Compact weight Adaptation for Sequential Tasks
ABSTRACT: Incremental learning aims to adapt to new sets of categories over time with
minimal computational overhead. Prior work often addresses this task by
training efficient task-specific adaptors that modify frozen layer weights or
features to capture relevant information without affecting predictions on
previously learned categories. While these adaptors are generally more
efficient than finetuning the entire network, they still require tens to
hundreds of thousands of task-specific trainable parameters even for relatively
small networks, making it challenging to operate on resource-constrained
environments with high communication costs like edge devices or mobile phones.
Thus, we propose Reparameterized, Compact weight Adaptation for Sequential
Tasks (RECAST), a novel method that dramatically reduces task-specific
trainable parameters to fewer than 50, several orders of magnitude fewer than
competing methods like LoRA. RECAST accomplishes this efficiency by learning to
decompose layer weights into a soft parameter-sharing framework consisting of
shared weight templates and very few module-specific scaling factors or
coefficients. This soft parameter-sharing framework allows for effective
task-wise reparameterization by tuning only these coefficients while keeping
templates frozen. A key innovation of RECAST is the novel weight reconstruction
pipeline called Neural Mimicry, which eliminates the need for pretraining from
scratch. This allows for high-fidelity emulation of existing pretrained weights
within our framework and provides quick adaptability to any model scale and
architecture. Extensive experiments across six datasets demonstrate RECAST
outperforms the state-of-the-art by up to 3% across various scales,
architectures, and parameter spaces. Moreover, we show that RECAST's
architecture-agnostic nature allows for seamless integration with existing
methods, further boosting performance.
|
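To make the soft parameter-sharing idea above concrete, here is a minimal, hedged sketch of a linear layer whose weight is reconstructed from a small bank of frozen shared templates mixed by a handful of trainable coefficients. The template count, initialization, and class/parameter names are illustrative assumptions, not RECAST's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemplateLinear(nn.Module):
    """Linear layer whose weight is a learned mixture of frozen shared templates."""

    def __init__(self, in_features, out_features, num_templates=4):
        super().__init__()
        # Shared weight templates: kept frozen; in RECAST these would be obtained
        # by reconstructing an existing pretrained weight (Neural Mimicry).
        self.templates = nn.Parameter(
            torch.randn(num_templates, out_features, in_features) * 0.02,
            requires_grad=False,
        )
        # Per-module coefficients: the only task-specific trainable parameters here.
        self.coefficients = nn.Parameter(torch.full((num_templates,), 1.0 / num_templates))
        self.bias = nn.Parameter(torch.zeros(out_features), requires_grad=False)

    def forward(self, x):
        # Reconstruct the effective weight as a linear combination of the templates.
        weight = torch.einsum("t,toi->oi", self.coefficients, self.templates)
        return F.linear(x, weight, self.bias)

layer = TemplateLinear(128, 64)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 4 -- only the mixing coefficients are tuned per task
```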
2411.18092 | Mingxing Rao | Mingxing Rao, Bohan Jiang, Daniel Moyer | Training Noise Token Pruning | 25 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the present work we present Training Noise Token (TNT) Pruning for vision
transformers. Our method relaxes the discrete token dropping condition to
continuous additive noise, providing smooth optimization in training, while
retaining discrete dropping computational gains in deployment settings. We
provide theoretical connections to Rate-Distortion literature, and empirical
evaluations on the ImageNet dataset using ViT and DeiT architectures,
demonstrating TNT's advantages over previous pruning methods.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 07:04:00 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 16:12:10 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Rao",
"Mingxing",
""
],
[
"Jiang",
"Bohan",
""
],
[
"Moyer",
"Daniel",
""
]
] | TITLE: Training Noise Token Pruning
ABSTRACT: In the present work we present Training Noise Token (TNT) Pruning for vision
transformers. Our method relaxes the discrete token dropping condition to
continuous additive noise, providing smooth optimization in training, while
retaining discrete dropping computational gains in deployment settings. We
provide theoretical connections to Rate-Distortion literature, and empirical
evaluations on the ImageNet dataset using ViT and DeiT architectures,
demonstrating TNT's advantages over previous pruning methods.
|
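As a rough illustration of the relaxation described above, the sketch below perturbs tokens with additive Gaussian noise scaled by a per-token drop probability during training and performs hard top-k token dropping at inference. The saliency proxy, noise scale, and function name are assumptions for illustration, not the paper's method.

```python
import torch

def tnt_tokens(tokens, scores=None, keep_ratio=0.5, noise_std=1.0, training=True):
    """tokens: (batch, num_tokens, dim); scores: optional (batch, num_tokens) saliency."""
    if scores is None:
        scores = tokens.norm(dim=-1)                      # crude saliency proxy
    drop_prob = 1.0 - torch.sigmoid(scores)               # low saliency -> more noise
    if training:
        # Soft, differentiable surrogate: additive noise instead of hard dropping.
        noise = torch.randn_like(tokens) * noise_std
        return tokens + drop_prob.unsqueeze(-1) * noise
    # Deployment: hard top-k token dropping recovers the usual compute savings.
    k = max(1, int(keep_ratio * tokens.shape[1]))
    idx = scores.topk(k, dim=1).indices
    return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))

x = torch.randn(2, 16, 8)
print(tnt_tokens(x, training=True).shape)   # torch.Size([2, 16, 8])
print(tnt_tokens(x, training=False).shape)  # torch.Size([2, 8, 8])
```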
2412.00671 | Yunpeng Bai | Yunpeng Bai, Qixing Huang | FiffDepth: Feed-forward Transformation of Diffusion-Based Generators for
Detailed Depth Estimation | 8 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monocular Depth Estimation (MDE) is a fundamental 3D vision problem with
numerous applications such as 3D scene reconstruction, autonomous navigation,
and AI content creation. However, robust and generalizable MDE remains
challenging due to limited real-world labeled data and distribution gaps
between synthetic datasets and real data. Existing methods often struggle with
real-world test data, exhibiting low efficiency, reduced accuracy, and a lack of detail.
To address these issues, we propose an efficient MDE approach named FiffDepth.
The key feature of FiffDepth is its use of diffusion priors. It transforms
diffusion-based image generators into a feed-forward architecture for detailed
depth estimation. FiffDepth preserves key generative features and integrates
the strong generalization capabilities of models like DINOv2. Through benchmark
evaluations, we demonstrate that FiffDepth achieves exceptional accuracy,
stability, and fine-grained detail, offering significant improvements in MDE
performance against state-of-the-art MDE approaches. The paper's source code is
available here: https://yunpeng1998.github.io/FiffDepth/
| [
{
"version": "v1",
"created": "Sun, 1 Dec 2024 04:59:34 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 20:20:58 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Bai",
"Yunpeng",
""
],
[
"Huang",
"Qixing",
""
]
] | TITLE: FiffDepth: Feed-forward Transformation of Diffusion-Based Generators for
Detailed Depth Estimation
ABSTRACT: Monocular Depth Estimation (MDE) is a fundamental 3D vision problem with
numerous applications such as 3D scene reconstruction, autonomous navigation,
and AI content creation. However, robust and generalizable MDE remains
challenging due to limited real-world labeled data and distribution gaps
between synthetic datasets and real data. Existing methods often struggle with
real-world test data, exhibiting low efficiency, reduced accuracy, and a lack of detail.
To address these issues, we propose an efficient MDE approach named FiffDepth.
The key feature of FiffDepth is its use of diffusion priors. It transforms
diffusion-based image generators into a feed-forward architecture for detailed
depth estimation. FiffDepth preserves key generative features and integrates
the strong generalization capabilities of models like DINOv2. Through benchmark
evaluations, we demonstrate that FiffDepth achieves exceptional accuracy,
stability, and fine-grained detail, offering significant improvements in MDE
performance against state-of-the-art MDE approaches. The paper's source code is
available here: https://yunpeng1998.github.io/FiffDepth/
|
2412.01812 | Zewei Zhou | Zewei Zhou, Hao Xiang, Zhaoliang Zheng, Seth Z. Zhao, Mingyue Lei, Yun
Zhang, Tianhui Cai, Xinyi Liu, Johnson Liu, Maheswari Bajji, Xin Xia, Zhiyu
Huang, Bolei Zhou, Jiaqi Ma | V2XPnP: Vehicle-to-Everything Spatio-Temporal Fusion for Multi-Agent
Perception and Prediction | Website link: https://mobility-lab.seas.ucla.edu/v2xpnp/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Vehicle-to-everything (V2X) technologies offer a promising paradigm to
mitigate the limitations of constrained observability in single-vehicle
systems. Prior work primarily focuses on single-frame cooperative perception,
which fuses agents' information across different spatial locations but ignores
temporal cues and temporal tasks (e.g., temporal perception and prediction). In
this paper, we focus on the spatio-temporal fusion in V2X scenarios and design
one-step and multi-step communication strategies (when to transmit) as well as
examine their integration with three fusion strategies - early, late, and
intermediate (what to transmit), providing comprehensive benchmarks with 11
fusion models (how to fuse). Furthermore, we propose V2XPnP, a novel
intermediate fusion framework within one-step communication for end-to-end
perception and prediction. Our framework employs a unified Transformer-based
architecture to effectively model complex spatio-temporal relationships across
multiple agents, frames, and high-definition maps. Moreover, we introduce the
V2XPnP Sequential Dataset that supports all V2X collaboration modes and
addresses the limitations of existing real-world datasets, which are restricted
to single-frame or single-mode cooperation. Extensive experiments demonstrate
our framework outperforms state-of-the-art methods in both perception and
prediction tasks. The codebase and dataset will be released to facilitate
future V2X research.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 18:55:34 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 23:42:25 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhou",
"Zewei",
""
],
[
"Xiang",
"Hao",
""
],
[
"Zheng",
"Zhaoliang",
""
],
[
"Zhao",
"Seth Z.",
""
],
[
"Lei",
"Mingyue",
""
],
[
"Zhang",
"Yun",
""
],
[
"Cai",
"Tianhui",
""
],
[
"Liu",
"Xinyi",
""
],
[
"Liu",
"Johnson",
""
],
[
"Bajji",
"Maheswari",
""
],
[
"Xia",
"Xin",
""
],
[
"Huang",
"Zhiyu",
""
],
[
"Zhou",
"Bolei",
""
],
[
"Ma",
"Jiaqi",
""
]
] | TITLE: V2XPnP: Vehicle-to-Everything Spatio-Temporal Fusion for Multi-Agent
Perception and Prediction
ABSTRACT: Vehicle-to-everything (V2X) technologies offer a promising paradigm to
mitigate the limitations of constrained observability in single-vehicle
systems. Prior work primarily focuses on single-frame cooperative perception,
which fuses agents' information across different spatial locations but ignores
temporal cues and temporal tasks (e.g., temporal perception and prediction). In
this paper, we focus on the spatio-temporal fusion in V2X scenarios and design
one-step and multi-step communication strategies (when to transmit) as well as
examine their integration with three fusion strategies - early, late, and
intermediate (what to transmit), providing comprehensive benchmarks with 11
fusion models (how to fuse). Furthermore, we propose V2XPnP, a novel
intermediate fusion framework within one-step communication for end-to-end
perception and prediction. Our framework employs a unified Transformer-based
architecture to effectively model complex spatio-temporal relationships across
multiple agents, frames, and high-definition maps. Moreover, we introduce the
V2XPnP Sequential Dataset that supports all V2X collaboration modes and
addresses the limitations of existing real-world datasets, which are restricted
to single-frame or single-mode cooperation. Extensive experiments demonstrate
our framework outperforms state-of-the-art methods in both perception and
prediction tasks. The codebase and dataset will be released to facilitate
future V2X research.
|
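The sketch below shows, under heavy assumptions, what a generic intermediate spatio-temporal fusion step could look like: per-agent, per-frame BEV feature tokens are flattened and passed through a standard Transformer encoder. The tensor layout, dimensions, and class name are invented for illustration and do not reproduce V2XPnP's architecture.

```python
import torch
import torch.nn as nn

class SpatioTemporalFusion(nn.Module):
    """Joint attention over agent/frame/BEV-cell tokens as a stand-in for intermediate fusion."""

    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, feats):
        # feats: (batch, num_agents, num_frames, num_bev_cells, dim)
        b, a, t, c, d = feats.shape
        tokens = feats.reshape(b, a * t * c, d)   # one token per agent/frame/cell
        fused = self.encoder(tokens)
        return fused.reshape(b, a, t, c, d)

fusion = SpatioTemporalFusion()
x = torch.randn(1, 3, 2, 16, 128)                # 3 agents, 2 frames, 16 BEV cells
print(fusion(x).shape)                            # torch.Size([1, 3, 2, 16, 128])
```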
2412.02097 | Mingming Zhang | Mingming Zhang, Jiahao Hu, Pengfei Shi, Ningtao Wang, Ruizhe Gao,
Guandong Sun, Feng Zhao, Yulin kang, Xing Fu, Weiqiang Wang, Junbo Zhao | Beyond Tree Models: A Hybrid Model of KAN and gMLP for Large-Scale
Financial Tabular Data | the paper has mistakes in section 3.1 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Tabular data plays a critical role in real-world financial scenarios.
Traditionally, tree models have dominated in handling tabular data. However,
financial datasets in the industry often encounter some challenges, such as
data heterogeneity, the predominance of numerical features and the large scale
of the data, which can range from tens of millions to hundreds of millions of
records. These challenges can lead to significant memory and computational
issues when using tree-based models. Consequently, there is a growing need for
neural network-based solutions that can outperform these models. In this paper,
we introduce TKGMLP, a hybrid network for tabular data that combines shallow
Kolmogorov-Arnold Networks with a Gated Multilayer Perceptron. This model
leverages the strengths of both architectures to improve performance and
scalability. We validate TKGMLP on a real-world credit scoring dataset, where
it achieves state-of-the-art results and outperforms current benchmarks.
Furthermore, our findings demonstrate that the model continues to improve as
the dataset size increases, making it highly scalable. Additionally, we propose
a novel feature encoding method for numerical data, specifically designed to
address the predominance of numerical features in financial datasets. The
integration of this feature encoding method within TKGMLP significantly
improves prediction accuracy. This research not only advances table prediction
technology but also offers a practical and effective solution for handling
large-scale numerical tabular data in various industrial applications.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 02:38:07 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 05:39:35 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Mar 2025 10:13:20 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Mingming",
""
],
[
"Hu",
"Jiahao",
""
],
[
"Shi",
"Pengfei",
""
],
[
"Wang",
"Ningtao",
""
],
[
"Gao",
"Ruizhe",
""
],
[
"Sun",
"Guandong",
""
],
[
"Zhao",
"Feng",
""
],
[
"kang",
"Yulin",
""
],
[
"Fu",
"Xing",
""
],
[
"Wang",
"Weiqiang",
""
],
[
"Zhao",
"Junbo",
""
]
] | TITLE: Beyond Tree Models: A Hybrid Model of KAN and gMLP for Large-Scale
Financial Tabular Data
ABSTRACT: Tabular data plays a critical role in real-world financial scenarios.
Traditionally, tree models have dominated in handling tabular data. However,
financial datasets in the industry often encounter some challenges, such as
data heterogeneity, the predominance of numerical features and the large scale
of the data, which can range from tens of millions to hundreds of millions of
records. These challenges can lead to significant memory and computational
issues when using tree-based models. Consequently, there is a growing need for
neural network-based solutions that can outperform these models. In this paper,
we introduce TKGMLP, a hybrid network for tabular data that combines shallow
Kolmogorov-Arnold Networks with a Gated Multilayer Perceptron. This model
leverages the strengths of both architectures to improve performance and
scalability. We validate TKGMLP on a real-world credit scoring dataset, where
it achieves state-of-the-art results and outperforms current benchmarks.
Furthermore, our findings demonstrate that the model continues to improve as
the dataset size increases, making it highly scalable. Additionally, we propose
a novel feature encoding method for numerical data, specifically designed to
address the predominance of numerical features in financial datasets. The
integration of this feature encoding method within TKGMLP significantly
improves prediction accuracy. This research not only advances table prediction
technology but also offers a practical and effective solution for handling
large-scale numerical tabular data in various industrial applications.
|
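To ground one ingredient of the hybrid model described above, here is a small gated-MLP (gMLP-style) block applied over embedded feature tokens. The Kolmogorov-Arnold component and the paper's numerical feature encoding are not reproduced here, and all layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    def __init__(self, dim, num_tokens):
        super().__init__()
        self.norm = nn.LayerNorm(dim // 2)
        self.proj = nn.Linear(num_tokens, num_tokens)   # mixes information across feature tokens

    def forward(self, x):                               # x: (batch, num_tokens, dim)
        u, v = x.chunk(2, dim=-1)
        v = self.norm(v)
        v = self.proj(v.transpose(1, 2)).transpose(1, 2)
        return u * v                                    # multiplicative gating

class GMLPBlock(nn.Module):
    def __init__(self, dim, num_tokens, expansion=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fc_in = nn.Linear(dim, dim * expansion)
        self.sgu = SpatialGatingUnit(dim * expansion, num_tokens)
        self.fc_out = nn.Linear(dim * expansion // 2, dim)

    def forward(self, x):
        y = F.gelu(self.fc_in(self.norm(x)))
        return x + self.fc_out(self.sgu(y))             # residual connection

block = GMLPBlock(dim=16, num_tokens=10)                # e.g. 10 embedded tabular features
print(block(torch.randn(4, 10, 16)).shape)              # torch.Size([4, 10, 16])
```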
2412.03859 | Hui Zhang | Hui Zhang, Dexiang Hong, Yitong Wang, Jie Shao, Xinglong Wu, Zuxuan
Wu, Yu-Gang Jiang | CreatiLayout: Siamese Multimodal Diffusion Transformer for Creative
Layout-to-Image Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion models have been recognized for their ability to generate images
that are not only visually appealing but also of high artistic quality. As a
result, Layout-to-Image (L2I) generation has been proposed to leverage
region-specific positions and descriptions to enable more precise and
controllable generation. However, previous methods primarily focus on
UNet-based models (e.g., SD1.5 and SDXL), and limited effort has explored
Multimodal Diffusion Transformers (MM-DiTs), which have demonstrated powerful
image generation capabilities. Enabling MM-DiT for layout-to-image generation
seems straightforward but is challenging due to the complexity of how layout is
introduced, integrated, and balanced among multiple modalities. To this end, we
explore various network variants to efficiently incorporate layout guidance
into MM-DiT, and ultimately present SiamLayout. To inherit the advantages of
MM-DiT, we use a separate set of network weights to process the layout,
treating it as equally important as the image and text modalities. Meanwhile,
to alleviate the competition among modalities, we decouple the image-layout
interaction into a siamese branch alongside the image-text one and fuse them in
the later stage. Moreover, we contribute a large-scale layout dataset, named
LayoutSAM, which includes 2.7 million image-text pairs and 10.7 million
entities. Each entity is annotated with a bounding box and a detailed
description. We further construct the LayoutSAM-Eval benchmark as a
comprehensive tool for evaluating the L2I generation quality. Finally, we
introduce the Layout Designer, which taps into the potential of large language
models in layout planning, transforming them into experts in layout generation
and optimization. Our code, model, and dataset will be available at
https://creatilayout.github.io.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 04:09:47 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 11:16:30 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Hui",
""
],
[
"Hong",
"Dexiang",
""
],
[
"Wang",
"Yitong",
""
],
[
"Shao",
"Jie",
""
],
[
"Wu",
"Xinglong",
""
],
[
"Wu",
"Zuxuan",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] | TITLE: CreatiLayout: Siamese Multimodal Diffusion Transformer for Creative
Layout-to-Image Generation
ABSTRACT: Diffusion models have been recognized for their ability to generate images
that are not only visually appealing but also of high artistic quality. As a
result, Layout-to-Image (L2I) generation has been proposed to leverage
region-specific positions and descriptions to enable more precise and
controllable generation. However, previous methods primarily focus on
UNet-based models (e.g., SD1.5 and SDXL), and limited effort has explored
Multimodal Diffusion Transformers (MM-DiTs), which have demonstrated powerful
image generation capabilities. Enabling MM-DiT for layout-to-image generation
seems straightforward but is challenging due to the complexity of how layout is
introduced, integrated, and balanced among multiple modalities. To this end, we
explore various network variants to efficiently incorporate layout guidance
into MM-DiT, and ultimately present SiamLayout. To inherit the advantages of
MM-DiT, we use a separate set of network weights to process the layout,
treating it as equally important as the image and text modalities. Meanwhile,
to alleviate the competition among modalities, we decouple the image-layout
interaction into a siamese branch alongside the image-text one and fuse them in
the later stage. Moreover, we contribute a large-scale layout dataset, named
LayoutSAM, which includes 2.7 million image-text pairs and 10.7 million
entities. Each entity is annotated with a bounding box and a detailed
description. We further construct the LayoutSAM-Eval benchmark as a
comprehensive tool for evaluating the L2I generation quality. Finally, we
introduce the Layout Designer, which taps into the potential of large language
models in layout planning, transforming them into experts in layout generation
and optimization. Our code, model, and dataset will be available at
https://creatilayout.github.io.
|
2412.05548 | Ruida Zhang | Ruida Zhang, Chengxi Li, Chenyangguang Zhang, Xingyu Liu, Haili Yuan,
Yanyan Li, Xiangyang Ji, Gim Hee Lee | Street Gaussians without 3D Object Tracker | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Realistic scene reconstruction in driving scenarios poses significant
challenges due to fast-moving objects. Most existing methods rely on
labor-intensive manual labeling of object poses to reconstruct dynamic objects
in canonical space and move them based on these poses during rendering. While
some approaches attempt to use 3D object trackers to replace manual
annotations, the limited generalization of 3D trackers -- caused by the
scarcity of large-scale 3D datasets -- results in inferior reconstructions in
real-world settings. In contrast, 2D foundation models demonstrate strong
generalization capabilities. To eliminate the reliance on 3D trackers and
enhance robustness across diverse environments, we propose a stable object
tracking module by leveraging associations from 2D deep trackers within a 3D
object fusion strategy. We address inevitable tracking errors by further
introducing a motion learning strategy in an implicit feature space that
autonomously corrects trajectory errors and recovers missed detections.
Experimental results on Waymo-NOTR and KITTI show that our method outperforms
existing approaches. Our code will be made publicly available.
| [
{
"version": "v1",
"created": "Sat, 7 Dec 2024 05:49:42 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 02:40:41 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Ruida",
""
],
[
"Li",
"Chengxi",
""
],
[
"Zhang",
"Chenyangguang",
""
],
[
"Liu",
"Xingyu",
""
],
[
"Yuan",
"Haili",
""
],
[
"Li",
"Yanyan",
""
],
[
"Ji",
"Xiangyang",
""
],
[
"Lee",
"Gim Hee",
""
]
] | TITLE: Street Gaussians without 3D Object Tracker
ABSTRACT: Realistic scene reconstruction in driving scenarios poses significant
challenges due to fast-moving objects. Most existing methods rely on
labor-intensive manual labeling of object poses to reconstruct dynamic objects
in canonical space and move them based on these poses during rendering. While
some approaches attempt to use 3D object trackers to replace manual
annotations, the limited generalization of 3D trackers -- caused by the
scarcity of large-scale 3D datasets -- results in inferior reconstructions in
real-world settings. In contrast, 2D foundation models demonstrate strong
generalization capabilities. To eliminate the reliance on 3D trackers and
enhance robustness across diverse environments, we propose a stable object
tracking module by leveraging associations from 2D deep trackers within a 3D
object fusion strategy. We address inevitable tracking errors by further
introducing a motion learning strategy in an implicit feature space that
autonomously corrects trajectory errors and recovers missed detections.
Experimental results on Waymo-NOTR and KITTI show that our method outperforms
existing approaches. Our code will be made publicly available.
|
2412.06146 | Xinpeng Liu | Xinpeng Liu, Junxuan Liang, Chenshuo Zhang, Zixuan Cai, Cewu Lu,
Yong-Lu Li | Homogeneous Dynamics Space for Heterogeneous Humans | Accepted by CVPR 2025. Cewu Lu and Yong-Lu Li are the corresponding
authors | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Analyses of human motion kinematics have achieved tremendous advances.
However, the production mechanism, known as human dynamics, is still
underexplored. In this paper, we aim to push data-driven human dynamics
understanding forward. We identify a major obstacle to this as the
heterogeneity of existing human motion understanding efforts. Specifically,
heterogeneity exists in not only the diverse kinematics representations and
hierarchical dynamics representations but also in the data from different
domains, namely biomechanics and reinforcement learning. With an in-depth
analysis of the existing heterogeneity, we propose to emphasize the underlying
homogeneity: all of them represent the homogeneous fact of human motion, though
from different perspectives. Given this, we propose Homogeneous Dynamics Space
(HDyS) as a fundamental space for human dynamics by aggregating heterogeneous
data and training a homogeneous latent space with inspiration from the
inverse-forward dynamics procedure. Leveraging the heterogeneous
representations and datasets, HDyS achieves decent mapping between human
kinematics and dynamics. We demonstrate the feasibility of HDyS with extensive
experiments and applications. The project page is
https://foruck.github.io/HDyS.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 01:59:40 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 08:10:18 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Liu",
"Xinpeng",
""
],
[
"Liang",
"Junxuan",
""
],
[
"Zhang",
"Chenshuo",
""
],
[
"Cai",
"Zixuan",
""
],
[
"Lu",
"Cewu",
""
],
[
"Li",
"Yong-Lu",
""
]
] | TITLE: Homogeneous Dynamics Space for Heterogeneous Humans
ABSTRACT: Analyses of human motion kinematics have achieved tremendous advances.
However, the production mechanism, known as human dynamics, is still
underexplored. In this paper, we aim to push data-driven human dynamics
understanding forward. We identify a major obstacle to this as the
heterogeneity of existing human motion understanding efforts. Specifically,
heterogeneity exists in not only the diverse kinematics representations and
hierarchical dynamics representations but also in the data from different
domains, namely biomechanics and reinforcement learning. With an in-depth
analysis of the existing heterogeneity, we propose to emphasize the underlying
homogeneity: all of them represent the homogeneous fact of human motion, though
from different perspectives. Given this, we propose Homogeneous Dynamics Space
(HDyS) as a fundamental space for human dynamics by aggregating heterogeneous
data and training a homogeneous latent space with inspiration from the
inverse-forward dynamics procedure. Leveraging the heterogeneous
representations and datasets, HDyS achieves decent mapping between human
kinematics and dynamics. We demonstrate the feasibility of HDyS with extensive
experiments and applications. The project page is
https://foruck.github.io/HDyS.
|
2412.18381 | Qiuyi Gu | Qiuyi Gu, Zhaocheng Ye, Jincheng Yu, Jiahao Tang, Tinghao Yi, Yuhan
Dong, Jian Wang, Jinqiang Cui, Xinlei Chen, Yu Wang | MR-COGraphs: Communication-efficient Multi-Robot Open-vocabulary Mapping
System via 3D Scene Graphs | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative perception in unknown environments is crucial for multi-robot
systems. With the emergence of foundation models, robots can now not only
perceive geometric information but also achieve open-vocabulary scene
understanding. However, existing map representations that support
open-vocabulary queries often involve large data volumes, which becomes a
bottleneck for multi-robot transmission in communication-limited environments.
To address this challenge, we develop a method to construct a graph-structured
3D representation called COGraph, where nodes represent objects with semantic
features and edges capture their spatial adjacency relationships. Before
transmission, a data-driven feature encoder is applied to compress the feature
dimensions of the COGraph. Upon receiving COGraphs from other robots, the
semantic features of each node are recovered using a decoder. We also propose a
feature-based approach for place recognition and translation estimation,
enabling the merging of local COGraphs into a unified global map. We validate
our framework on two realistic datasets and a real-world environment. The
results demonstrate that, compared to existing baselines for open-vocabulary
map construction, our framework reduces the data volume by over 80\% while
maintaining mapping and query performance without compromise. For more details,
please visit our website at https://github.com/efc-robot/MR-COGraphs.
| [
{
"version": "v1",
"created": "Tue, 24 Dec 2024 12:14:01 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 04:37:33 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Gu",
"Qiuyi",
""
],
[
"Ye",
"Zhaocheng",
""
],
[
"Yu",
"Jincheng",
""
],
[
"Tang",
"Jiahao",
""
],
[
"Yi",
"Tinghao",
""
],
[
"Dong",
"Yuhan",
""
],
[
"Wang",
"Jian",
""
],
[
"Cui",
"Jinqiang",
""
],
[
"Chen",
"Xinlei",
""
],
[
"Wang",
"Yu",
""
]
] | TITLE: MR-COGraphs: Communication-efficient Multi-Robot Open-vocabulary Mapping
System via 3D Scene Graphs
ABSTRACT: Collaborative perception in unknown environments is crucial for multi-robot
systems. With the emergence of foundation models, robots can now not only
perceive geometric information but also achieve open-vocabulary scene
understanding. However, existing map representations that support
open-vocabulary queries often involve large data volumes, which becomes a
bottleneck for multi-robot transmission in communication-limited environments.
To address this challenge, we develop a method to construct a graph-structured
3D representation called COGraph, where nodes represent objects with semantic
features and edges capture their spatial adjacency relationships. Before
transmission, a data-driven feature encoder is applied to compress the feature
dimensions of the COGraph. Upon receiving COGraphs from other robots, the
semantic features of each node are recovered using a decoder. We also propose a
feature-based approach for place recognition and translation estimation,
enabling the merging of local COGraphs into a unified global map. We validate
our framework on two realistic datasets and a real-world environment. The
results demonstrate that, compared to existing baselines for open-vocabulary
map construction, our framework reduces the data volume by over 80\% while
maintaining mapping and query performance without compromise. For more details,
please visit our website at https://github.com/efc-robot/MR-COGraphs.
|
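As a hedged illustration of the compression step described above, the sketch below uses a tiny encoder/decoder to shrink per-node semantic feature vectors before transmission and recover them on the receiving robot. The feature dimensions and architecture are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class FeatureCodec(nn.Module):
    """Compress per-node semantic features before transmission; recover them on receipt."""

    def __init__(self, feat_dim=512, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

    def compress(self, node_feats):   # (num_nodes, feat_dim) -> (num_nodes, code_dim)
        return self.encoder(node_feats)

    def recover(self, codes):         # (num_nodes, code_dim) -> (num_nodes, feat_dim)
        return self.decoder(codes)

codec = FeatureCodec()
feats = torch.randn(20, 512)                 # semantic features of 20 object nodes
codes = codec.compress(feats)
print(feats.numel(), "->", codes.numel())    # 10240 -> 1280, an 8x smaller payload
print(codec.recover(codes).shape)            # torch.Size([20, 512])
```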
2412.20622 | Ashish Seth | Ashish Seth, Dinesh Manocha, Chirag Agarwal | Towards a Systematic Evaluation of Hallucinations in Large-Vision
Language Models | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Vision-Language Models (LVLMs) have demonstrated remarkable performance
in complex multimodal tasks. However, these models still suffer from
hallucinations, particularly when required to implicitly recognize or infer
diverse visual entities from images for complex vision-language tasks. To
address this challenge, we propose HALLUCINOGEN, a novel visual question
answering (VQA) benchmark that employs contextual reasoning prompts as
hallucination attacks to evaluate the extent of hallucination in
state-of-the-art LVLMs. Our benchmark provides a comprehensive study of the
implicit reasoning capabilities of these models by first categorizing visual
entities based on the ease of recognition in an image as either salient
(prominent, visibly recognizable objects such as a car) or latent entities
(such as identifying a disease from a chest X-ray), which are not readily
visible and require domain knowledge or contextual reasoning for accurate
inference. Next, we design hallucination attacks for both types of entities to
assess hallucinations in LVLMs while performing various vision-language tasks,
such as locating or reasoning about specific entities within an image, where
models must perform implicit reasoning by verifying the existence of the
queried entity within the image before generating responses. Finally, our
extensive evaluations of eleven LVLMs, including powerful open-source models
(like LLaMA-3.2 and DeepSeek-V2), commercial models like Gemini, and two
hallucination mitigation strategies across multiple datasets, demonstrate that
current LVLMs remain susceptible to hallucination attacks.
| [
{
"version": "v1",
"created": "Sun, 29 Dec 2024 23:56:01 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 23:10:24 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Seth",
"Ashish",
""
],
[
"Manocha",
"Dinesh",
""
],
[
"Agarwal",
"Chirag",
""
]
] | TITLE: Towards a Systematic Evaluation of Hallucinations in Large-Vision
Language Models
ABSTRACT: Large Vision-Language Models (LVLMs) have demonstrated remarkable performance
in complex multimodal tasks. However, these models still suffer from
hallucinations, particularly when required to implicitly recognize or infer
diverse visual entities from images for complex vision-language tasks. To
address this challenge, we propose HALLUCINOGEN, a novel visual question
answering (VQA) benchmark that employs contextual reasoning prompts as
hallucination attacks to evaluate the extent of hallucination in
state-of-the-art LVLMs. Our benchmark provides a comprehensive study of the
implicit reasoning capabilities of these models by first categorizing visual
entities based on the ease of recognition in an image as either salient
(prominent, visibly recognizable objects such as a car) or latent entities
(such as identifying a disease from a chest X-ray), which are not readily
visible and require domain knowledge or contextual reasoning for accurate
inference. Next, we design hallucination attacks for both types of entities to
assess hallucinations in LVLMs while performing various vision-language tasks,
such as locating or reasoning about specific entities within an image, where
models must perform implicit reasoning by verifying the existence of the
queried entity within the image before generating responses. Finally, our
extensive evaluations of eleven LVLMs, including powerful open-source models
(like LLaMA-3.2 and DeepSeek-V2), commercial models like Gemini, and two
hallucination mitigation strategies across multiple datasets, demonstrate that
current LVLMs remain susceptible to hallucination attacks.
|
2412.20796 | Yuanchang Zhou | Yuanchang Zhou, Siyu Hu, Chen Wang, Lin-Wang Wang, Guangming Tan,
Weile Jia | FastCHGNet: Training one Universal Interatomic Potential to 1.5 Hours
with 32 GPUs | null | null | null | null | cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural network universal interatomic potentials (GNN-UIPs) have
demonstrated remarkable generalization and transfer capabilities in material
discovery and property prediction. These models can accelerate molecular
dynamics (MD) simulation by several orders of magnitude while maintaining
\textit{ab initio} accuracy, making them a promising new paradigm in material
simulations. One notable example is Crystal Hamiltonian Graph Neural Network
(CHGNet), pretrained on the energies, forces, stresses, and magnetic moments
from the MPtrj dataset, representing a state-of-the-art GNN-UIP model for
charge-informed MD simulations. However, training the CHGNet model is
time-consuming (8.3 days on one A100 GPU) for three reasons: (i) it requires
multi-layer propagation to reach information from more distant atoms, (ii) it
requires second-order derivative calculations to update the weights, and (iii)
the reference CHGNet implementation does not fully leverage the available computational
capabilities. This paper introduces FastCHGNet, an optimized CHGNet, with three
contributions: Firstly, we design innovative Force/Stress Readout modules to
decompose Force/Stress prediction. Secondly, we adopt massive optimizations
such as kernel fusion, redundancy bypass, etc, to exploit GPU computation power
sufficiently. Finally, we extend CHGNet to support multiple GPUs and propose a
load-balancing technique to enhance GPU utilization. Numerical results show
that FastCHGNet reduces memory footprint by a factor of 3.59. The final
training time of FastCHGNet can be decreased to \textbf{1.53 hours} on 32 GPUs
without sacrificing model accuracy.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2024 08:38:09 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 08:01:35 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhou",
"Yuanchang",
""
],
[
"Hu",
"Siyu",
""
],
[
"Wang",
"Chen",
""
],
[
"Wang",
"Lin-Wang",
""
],
[
"Tan",
"Guangming",
""
],
[
"Jia",
"Weile",
""
]
] | TITLE: FastCHGNet: Training one Universal Interatomic Potential to 1.5 Hours
with 32 GPUs
ABSTRACT: Graph neural network universal interatomic potentials (GNN-UIPs) have
demonstrated remarkable generalization and transfer capabilities in material
discovery and property prediction. These models can accelerate molecular
dynamics (MD) simulation by several orders of magnitude while maintaining
\textit{ab initio} accuracy, making them a promising new paradigm in material
simulations. One notable example is Crystal Hamiltonian Graph Neural Network
(CHGNet), pretrained on the energies, forces, stresses, and magnetic moments
from the MPtrj dataset, representing a state-of-the-art GNN-UIP model for
charge-informed MD simulations. However, training the CHGNet model is
time-consuming (8.3 days on one A100 GPU) for three reasons: (i) it requires
multi-layer propagation to reach information from more distant atoms, (ii) it
requires second-order derivative calculations to update the weights, and (iii)
the reference CHGNet implementation does not fully leverage the available computational
capabilities. This paper introduces FastCHGNet, an optimized CHGNet, with three
contributions: Firstly, we design innovative Force/Stress Readout modules to
decompose Force/Stress prediction. Secondly, we adopt massive optimizations
such as kernel fusion, redundancy bypass, etc, to exploit GPU computation power
sufficiently. Finally, we extend CHGNet to support multiple GPUs and propose a
load-balancing technique to enhance GPU utilization. Numerical results show
that FastCHGNet reduces memory footprint by a factor of 3.59. The final
training time of FastCHGNet can be decreased to \textbf{1.53 hours} on 32 GPUs
without sacrificing model accuracy.
|
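One idea mentioned above, balancing work across GPUs, can be illustrated with a generic largest-first greedy heuristic that assigns structures to the currently least-loaded GPU, using atom count as a cost proxy. This is a textbook heuristic sketch with made-up numbers, not FastCHGNet's implementation.

```python
import heapq

def balance(atom_counts, num_gpus):
    """Largest-first greedy assignment of structures onto the least-loaded GPU."""
    heap = [(0, g) for g in range(num_gpus)]         # (current load, gpu id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_gpus)]
    for idx in sorted(range(len(atom_counts)), key=lambda i: -atom_counts[i]):
        load, gpu = heapq.heappop(heap)
        assignment[gpu].append(idx)
        heapq.heappush(heap, (load + atom_counts[idx], gpu))
    return assignment

counts = [120, 30, 75, 200, 50, 90, 60, 10]          # atoms per structure (toy numbers)
buckets = balance(counts, num_gpus=4)
print(buckets)
print([sum(counts[i] for i in b) for b in buckets])  # per-GPU work is roughly even
```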
2501.07496 | Wenping Jin | Wenping Jin and Li Zhu and Jing Sun | Aligning First, Then Fusing: A Novel Weakly Supervised Multimodal
Violence Detection Method | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weakly supervised violence detection refers to the technique of training
models to identify violent segments in videos using only video-level labels.
Among these approaches, multimodal violence detection, which integrates
modalities such as audio and optical flow, holds great potential. Existing
methods in this domain primarily focus on designing multimodal fusion models to
address modality discrepancies. In contrast, we take a different approach:
leveraging the inherent discrepancies across modalities in violence event
representation to propose a novel multimodal semantic feature alignment method.
This method sparsely maps the semantic features of local, transient, and less
informative modalities (such as audio and optical flow) into the more
informative RGB semantic feature space. Through an iterative process, the
method identifies a suitable non-zero feature matching subspace and aligns the
modality-specific event representations based on this subspace, enabling the
full exploitation of information from all modalities during the subsequent
modality fusion stage. Building on this, we design a new weakly supervised
violence detection framework that consists of unimodal multiple-instance
learning for extracting unimodal semantic features, multimodal alignment,
multimodal fusion, and final detection. Experimental results on benchmark
datasets demonstrate the effectiveness of our method, achieving an average
precision (AP) of 86.07% on the XD-Violence dataset. Our code is available at
https://github.com/xjpp2016/MAVD.
| [
{
"version": "v1",
"created": "Mon, 13 Jan 2025 17:14:25 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 14:22:02 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Jin",
"Wenping",
""
],
[
"Zhu",
"Li",
""
],
[
"Sun",
"Jing",
""
]
] | TITLE: Aligning First, Then Fusing: A Novel Weakly Supervised Multimodal
Violence Detection Method
ABSTRACT: Weakly supervised violence detection refers to the technique of training
models to identify violent segments in videos using only video-level labels.
Among these approaches, multimodal violence detection, which integrates
modalities such as audio and optical flow, holds great potential. Existing
methods in this domain primarily focus on designing multimodal fusion models to
address modality discrepancies. In contrast, we take a different approach:
leveraging the inherent discrepancies across modalities in violence event
representation to propose a novel multimodal semantic feature alignment method.
This method sparsely maps the semantic features of local, transient, and less
informative modalities (such as audio and optical flow) into the more
informative RGB semantic feature space. Through an iterative process, the
method identifies a suitable non-zero feature matching subspace and aligns the
modality-specific event representations based on this subspace, enabling the
full exploitation of information from all modalities during the subsequent
modality fusion stage. Building on this, we design a new weakly supervised
violence detection framework that consists of unimodal multiple-instance
learning for extracting unimodal semantic features, multimodal alignment,
multimodal fusion, and final detection. Experimental results on benchmark
datasets demonstrate the effectiveness of our method, achieving an average
precision (AP) of 86.07% on the XD-Violence dataset. Our code is available at
https://github.com/xjpp2016/MAVD.
|
2501.08983 | Haozhe Xie | Haozhe Xie, Zhaoxi Chen, Fangzhou Hong, Ziwei Liu | Compositional Generative Model of Unbounded 4D Cities | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D scene generation has garnered growing attention in recent years and has
made significant progress. Generating 4D cities is more challenging than 3D
scenes due to the presence of structurally complex, visually diverse objects
like buildings and vehicles, and heightened human sensitivity to distortions in
urban environments. To tackle these issues, we propose CityDreamer4D, a
compositional generative model specifically tailored for generating unbounded
4D cities. Our main insights are 1) 4D city generation should separate dynamic
objects (e.g., vehicles) from static scenes (e.g., buildings and roads), and 2)
all objects in the 4D scene should be composed of different types of neural
fields for buildings, vehicles, and background stuff. Specifically, we propose
Traffic Scenario Generator and Unbounded Layout Generator to produce dynamic
traffic scenarios and static city layouts using a highly compact BEV
representation. Objects in 4D cities are generated by combining stuff-oriented
and instance-oriented neural fields for background stuff, buildings, and
vehicles. To suit the distinct characteristics of background stuff and
instances, the neural fields employ customized generative hash grids and
periodic positional embeddings as scene parameterizations. Furthermore, we
offer a comprehensive suite of datasets for city generation, including OSM,
GoogleEarth, and CityTopia. The OSM dataset provides a variety of real-world
city layouts, while the Google Earth and CityTopia datasets deliver
large-scale, high-quality city imagery complete with 3D instance annotations.
Leveraging its compositional design, CityDreamer4D supports a range of
downstream applications, such as instance editing, city stylization, and urban
simulation, while delivering state-of-the-art performance in generating
realistic 4D cities.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 17:59:56 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 12:54:19 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Xie",
"Haozhe",
""
],
[
"Chen",
"Zhaoxi",
""
],
[
"Hong",
"Fangzhou",
""
],
[
"Liu",
"Ziwei",
""
]
] | TITLE: Compositional Generative Model of Unbounded 4D Cities
ABSTRACT: 3D scene generation has garnered growing attention in recent years and has
made significant progress. Generating 4D cities is more challenging than 3D
scenes due to the presence of structurally complex, visually diverse objects
like buildings and vehicles, and heightened human sensitivity to distortions in
urban environments. To tackle these issues, we propose CityDreamer4D, a
compositional generative model specifically tailored for generating unbounded
4D cities. Our main insights are 1) 4D city generation should separate dynamic
objects (e.g., vehicles) from static scenes (e.g., buildings and roads), and 2)
all objects in the 4D scene should be composed of different types of neural
fields for buildings, vehicles, and background stuff. Specifically, we propose
Traffic Scenario Generator and Unbounded Layout Generator to produce dynamic
traffic scenarios and static city layouts using a highly compact BEV
representation. Objects in 4D cities are generated by combining stuff-oriented
and instance-oriented neural fields for background stuff, buildings, and
vehicles. To suit the distinct characteristics of background stuff and
instances, the neural fields employ customized generative hash grids and
periodic positional embeddings as scene parameterizations. Furthermore, we
offer a comprehensive suite of datasets for city generation, including OSM,
GoogleEarth, and CityTopia. The OSM dataset provides a variety of real-world
city layouts, while the Google Earth and CityTopia datasets deliver
large-scale, high-quality city imagery complete with 3D instance annotations.
Leveraging its compositional design, CityDreamer4D supports a range of
downstream applications, such as instance editing, city stylization, and urban
simulation, while delivering state-of-the-art performance in generating
realistic 4D cities.
|
2501.10157 | Jie Wen | Jinrong Cui, Xiaohuang Wu, Haitao Zhang, Chongjie Dong, Jie Wen | Structure-guided Deep Multi-View Clustering | We have found that our paper has many imperfections and incorrect
formulas and derivations, and we insist on retracting the manuscript in order
to avoid misleading readers | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep multi-view clustering seeks to utilize the abundant information from
multiple views to improve clustering performance. However, most of the existing
clustering methods often neglect to fully mine multi-view structural
information and fail to explore the distribution of multi-view data, limiting
clustering performance. To address these limitations, we propose a
structure-guided deep multi-view clustering model. Specifically, we introduce a
positive sample selection strategy based on neighborhood relationships, coupled
with a corresponding loss function. This strategy constructs multi-view nearest
neighbor graphs to dynamically redefine positive sample pairs, enabling the
mining of local structural information within multi-view data and enhancing the
reliability of positive sample selection. Additionally, we introduce a Gaussian
distribution model to uncover latent structural information and introduce a
loss function to reduce discrepancies between view embeddings. These two
strategies explore multi-view structural information and data distribution from
different perspectives, enhancing consistency across views and increasing
intra-cluster compactness. Experimental evaluations demonstrate the efficacy of
our method, showing significant improvements in clustering performance on
multiple benchmark datasets compared to state-of-the-art multi-view clustering
approaches.
| [
{
"version": "v1",
"created": "Fri, 17 Jan 2025 12:42:30 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 13:49:58 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Mar 2025 10:35:13 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Cui",
"Jinrong",
""
],
[
"Wu",
"Xiaohuang",
""
],
[
"Zhang",
"Haitao",
""
],
[
"Dong",
"Chongjie",
""
],
[
"Wen",
"Jie",
""
]
] | TITLE: Structure-guided Deep Multi-View Clustering
ABSTRACT: Deep multi-view clustering seeks to utilize the abundant information from
multiple views to improve clustering performance. However, most of the existing
clustering methods often neglect to fully mine multi-view structural
information and fail to explore the distribution of multi-view data, limiting
clustering performance. To address these limitations, we propose a
structure-guided deep multi-view clustering model. Specifically, we introduce a
positive sample selection strategy based on neighborhood relationships, coupled
with a corresponding loss function. This strategy constructs multi-view nearest
neighbor graphs to dynamically redefine positive sample pairs, enabling the
mining of local structural information within multi-view data and enhancing the
reliability of positive sample selection. Additionally, we introduce a Gaussian
distribution model to uncover latent structural information and introduce a
loss function to reduce discrepancies between view embeddings. These two
strategies explore multi-view structural information and data distribution from
different perspectives, enhancing consistency across views and increasing
intra-cluster compactness. Experimental evaluations demonstrate the efficacy of
our method, showing significant improvements in clustering performance on
multiple benchmark datasets compared to state-of-the-art multi-view clustering
approaches.
|
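Below is a minimal sketch of neighborhood-based positive sample selection, under the assumption that a pair counts as positive only when the two samples are mutual k-nearest neighbors in every view. The threshold rule and function name are illustrative simplifications, not the paper's exact strategy.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def positive_pairs(views, k=5):
    """Pairs that are mutual k-nearest neighbors in every view are treated as positives."""
    n = views[0].shape[0]
    agree = np.ones((n, n), dtype=bool)
    for X in views:                                   # each X: (n_samples, view_dim)
        _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
        adj = np.zeros((n, n), dtype=bool)
        for i, row in enumerate(idx):
            adj[i, row[1:]] = True                    # drop the self-neighbor
        agree &= adj & adj.T                          # require mutual neighbors in this view
    return np.argwhere(np.triu(agree, k=1))           # (num_pairs, 2) sample-index pairs

view1 = np.random.randn(100, 16)
view2 = np.random.randn(100, 24)
print(positive_pairs([view1, view2], k=10).shape)
```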
2502.06124 | Pawel Renc | Pawel Renc, Michal K. Grzeszczyk, Nassim Oufattole, Deirdre Goode,
Yugang Jia, Szymon Bieganski, Matthew B. A. McDermott, Jaroslaw Was, Anthony
E. Samir, Jonathan W. Cunningham, David W. Bates and Arkadiusz Sitek | Foundation Model of Electronic Medical Records for Adaptive Risk
Estimation | Fix affiliation list | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The U.S. allocates nearly 18% of its GDP to healthcare but experiences lower
life expectancy and higher preventable death rates compared to other
high-income nations. Hospitals struggle to predict critical outcomes such as
mortality, ICU admission, and prolonged hospital stays. Traditional early
warning systems, like NEWS and MEWS, rely on static variables and fixed
thresholds, limiting their adaptability, accuracy, and personalization. We
developed the Enhanced Transformer for Health Outcome Simulation (ETHOS), an AI
model that tokenizes patient health timelines (PHTs) from EHRs and uses
transformer-based architectures to predict future PHTs. The Adaptive Risk
Estimation System (ARES) leverages ETHOS to compute dynamic, personalized risk
probabilities for clinician-defined critical events. ARES also features a
personalized explainability module highlighting key clinical factors
influencing risk estimates. We evaluated ARES on the MIMIC-IV v2.2 dataset in
emergency department settings, benchmarking its performance against traditional
early warning systems and machine learning models. From 299,721 unique
patients, 285,622 PHTs (60% with hospital admissions) were processed,
comprising over 357 million tokens. ETHOS outperformed benchmark models in
predicting hospital admissions, ICU admissions, and prolonged stays, achieving
superior AUC scores. Its risk estimates were robust across demographic
subgroups, with calibration curves confirming model reliability. The
explainability module provided valuable insights into patient-specific risk
factors. ARES, powered by ETHOS, advances predictive healthcare AI by
delivering dynamic, real-time, personalized risk estimation with
patient-specific explainability. Its adaptability and accuracy offer a
transformative tool for clinical decision-making, potentially improving patient
outcomes and resource allocation.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 03:22:39 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 18:48:54 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 22:37:55 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Renc",
"Pawel",
""
],
[
"Grzeszczyk",
"Michal K.",
""
],
[
"Oufattole",
"Nassim",
""
],
[
"Goode",
"Deirdre",
""
],
[
"Jia",
"Yugang",
""
],
[
"Bieganski",
"Szymon",
""
],
[
"McDermott",
"Matthew B. A.",
""
],
[
"Was",
"Jaroslaw",
""
],
[
"Samir",
"Anthony E.",
""
],
[
"Cunningham",
"Jonathan W.",
""
],
[
"Bates",
"David W.",
""
],
[
"Sitek",
"Arkadiusz",
""
]
] | TITLE: Foundation Model of Electronic Medical Records for Adaptive Risk
Estimation
ABSTRACT: The U.S. allocates nearly 18% of its GDP to healthcare but experiences lower
life expectancy and higher preventable death rates compared to other
high-income nations. Hospitals struggle to predict critical outcomes such as
mortality, ICU admission, and prolonged hospital stays. Traditional early
warning systems, like NEWS and MEWS, rely on static variables and fixed
thresholds, limiting their adaptability, accuracy, and personalization. We
developed the Enhanced Transformer for Health Outcome Simulation (ETHOS), an AI
model that tokenizes patient health timelines (PHTs) from EHRs and uses
transformer-based architectures to predict future PHTs. The Adaptive Risk
Estimation System (ARES) leverages ETHOS to compute dynamic, personalized risk
probabilities for clinician-defined critical events. ARES also features a
personalized explainability module highlighting key clinical factors
influencing risk estimates. We evaluated ARES on the MIMIC-IV v2.2 dataset in
emergency department settings, benchmarking its performance against traditional
early warning systems and machine learning models. From 299,721 unique
patients, 285,622 PHTs (60% with hospital admissions) were processed,
comprising over 357 million tokens. ETHOS outperformed benchmark models in
predicting hospital admissions, ICU admissions, and prolonged stays, achieving
superior AUC scores. Its risk estimates were robust across demographic
subgroups, with calibration curves confirming model reliability. The
explainability module provided valuable insights into patient-specific risk
factors. ARES, powered by ETHOS, advances predictive healthcare AI by
delivering dynamic, real-time, personalized risk estimation with
patient-specific explainability. Its adaptability and accuracy offer a
transformative tool for clinical decision-making, potentially improving patient
outcomes and resource allocation.
|
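To illustrate what tokenizing a patient health timeline might involve, the sketch below converts a chronologically sorted EHR event stream into a token sequence with coarse elapsed-time tokens. The event schema, vocabulary, and time buckets are invented for illustration and are not ETHOS's tokenizer.

```python
from dataclasses import dataclass

@dataclass
class Event:
    time_hours: float
    code: str                      # e.g. "LAB:LACTATE_HIGH", "ADMIT:ED"

def tokenize_timeline(events, time_bins=(1, 6, 24, 72)):
    """Convert a chronologically sorted event stream into discrete tokens."""
    tokens = ["[START]"]
    prev = None
    for ev in sorted(events, key=lambda e: e.time_hours):
        if prev is not None:
            gap = ev.time_hours - prev
            # Encode elapsed time between events as a coarse interval token.
            bucket = next((f"<={b}h" for b in time_bins if gap <= b), ">72h")
            tokens.append(f"[DT{bucket}]")
        tokens.append(ev.code)
        prev = ev.time_hours
    return tokens + ["[END]"]

timeline = [Event(0.0, "ADMIT:ED"), Event(2.5, "LAB:LACTATE_HIGH"), Event(30.0, "ICU:ADMIT")]
print(tokenize_timeline(timeline))
# ['[START]', 'ADMIT:ED', '[DT<=6h]', 'LAB:LACTATE_HIGH', '[DT<=72h]', 'ICU:ADMIT', '[END]']
```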
2502.07278 | Aditya Vora | Aditya Vora, Sauradip Nag, Hao Zhang | Articulate That Object Part (ATOP): 3D Part Articulation via Text and
Motion Personalization | Technical Report, 16 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present ATOP (Articulate That Object Part), a novel few-shot method based
on motion personalization to articulate a static 3D object with respect to a
part and its motion as prescribed in a text prompt. Given the scarcity of
available datasets with motion attribute annotations, existing methods struggle
to generalize well in this task. In our work, the text input allows us to tap
into the power of modern-day diffusion models to generate plausible motion
samples for the right object category and part. In turn, the input 3D object
provides image prompting to personalize the generated video to that very object
we wish to articulate. Our method starts with a few-shot finetuning for
category-specific motion generation, a key first step to compensate for the
lack of articulation awareness by current diffusion models. For this, we
finetune a pre-trained multi-view image generation model for controllable
multi-view video generation, using a small collection of video samples obtained
for the target object category. This is followed by motion video
personalization that is realized by multi-view rendered images of the target 3D
object. Finally, we transfer the personalized video motion to the target 3D
object via differentiable rendering to optimize part motion parameters by a
score distillation sampling loss. Experimental results on PartNet-Sapien and
ACD datasets show that our method is capable of generating realistic motion
videos and predicting 3D motion parameters in a more accurate and generalizable
way, compared to prior works in the few-shot setting.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 05:47:16 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 23:51:34 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Vora",
"Aditya",
""
],
[
"Nag",
"Sauradip",
""
],
[
"Zhang",
"Hao",
""
]
] | TITLE: Articulate That Object Part (ATOP): 3D Part Articulation via Text and
Motion Personalization
ABSTRACT: We present ATOP (Articulate That Object Part), a novel few-shot method based
on motion personalization to articulate a static 3D object with respect to a
part and its motion as prescribed in a text prompt. Given the scarcity of
available datasets with motion attribute annotations, existing methods struggle
to generalize well in this task. In our work, the text input allows us to tap
into the power of modern-day diffusion models to generate plausible motion
samples for the right object category and part. In turn, the input 3D object
provides image prompting to personalize the generated video to that very object
we wish to articulate. Our method starts with a few-shot finetuning for
category-specific motion generation, a key first step to compensate for the
lack of articulation awareness by current diffusion models. For this, we
finetune a pre-trained multi-view image generation model for controllable
multi-view video generation, using a small collection of video samples obtained
for the target object category. This is followed by motion video
personalization that is realized by multi-view rendered images of the target 3D
object. Finally, we transfer the personalized video motion to the target 3D
object via differentiable rendering to optimize part motion parameters by a
score distillation sampling loss. Experimental results on PartNet-Sapien and
ACD datasets show that our method is capable of generating realistic motion
videos and predicting 3D motion parameters in a more accurate and generalizable
way, compared to prior works in the few-shot setting.
|
2502.11198 | Bidyarthi Paul | Bidyarthi Paul, Faika Fairuj Preotee, Shuvashis Sarker, Shamim Rahim
Refat, Shifat Islam, Tashreef Muhammad, Mohammad Ashraful Hoque, Shahriar
Manzoor | ANCHOLIK-NER: A Benchmark Dataset for Bangla Regional Named Entity
Recognition | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | ANCHOLIK-NER is a linguistically diverse dataset for Named Entity Recognition
(NER) in Bangla regional dialects, capturing variations across Sylhet,
Chittagong, Barishal, Noakhali, and Mymensingh. The dataset has around 17,405
sentences, with 3,481 sentences per region. The data was collected from two
publicly available datasets and through web scraping from various online newspapers and
articles. To ensure high-quality annotations, the BIO tagging scheme was
employed, and professional annotators with expertise in regional dialects
carried out the labeling process. The dataset is structured into separate
subsets for each region and is available in CSV format. Each entry contains
textual data along with identified named entities and their corresponding
annotations. Named entities are categorized into ten distinct classes: Person,
Location, Organization, Food, Animal, Colour, Role, Relation, Object, and
Miscellaneous. This dataset serves as a valuable resource for developing and
evaluating NER models for Bangla dialectal variations, contributing to regional
language processing and low-resource NLP applications. It can be utilized to
enhance NER systems in Bangla dialects, improve regional language
understanding, and support applications in machine translation, information
retrieval, and conversational AI.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 16:59:10 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 14:13:50 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Paul",
"Bidyarthi",
""
],
[
"Preotee",
"Faika Fairuj",
""
],
[
"Sarker",
"Shuvashis",
""
],
[
"Refat",
"Shamim Rahim",
""
],
[
"Islam",
"Shifat",
""
],
[
"Muhammad",
"Tashreef",
""
],
[
"Hoque",
"Mohammad Ashraful",
""
],
[
"Manzoor",
"Shahriar",
""
]
] | TITLE: ANCHOLIK-NER: A Benchmark Dataset for Bangla Regional Named Entity
Recognition
ABSTRACT: ANCHOLIK-NER is a linguistically diverse dataset for Named Entity Recognition
(NER) in Bangla regional dialects, capturing variations across Sylhet,
Chittagong, Barishal, Noakhali, and Mymensingh. The dataset has around 17,405
sentences, with 3,481 sentences per region. The data was collected from two
publicly available datasets and through web scraping from various online newspapers and
articles. To ensure high-quality annotations, the BIO tagging scheme was
employed, and professional annotators with expertise in regional dialects
carried out the labeling process. The dataset is structured into separate
subsets for each region and is available in CSV format. Each entry contains
textual data along with identified named entities and their corresponding
annotations. Named entities are categorized into ten distinct classes: Person,
Location, Organization, Food, Animal, Colour, Role, Relation, Object, and
Miscellaneous. This dataset serves as a valuable resource for developing and
evaluating NER models for Bangla dialectal variations, contributing to regional
language processing and low-resource NLP applications. It can be utilized to
enhance NER systems in Bangla dialects, improve regional language
understanding, and support applications in machine translation, information
retrieval, and conversational AI.
|
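A small sketch of how the BIO scheme mentioned in the record above marks entity spans, and how token/tag rows could be grouped back into entities. The CSV column names and the transliterated example sentence are assumptions, not the dataset's actual schema.

```python
import csv
import io

# Illustrative rows only; B- marks the beginning of an entity span, I- its
# continuation, and O a non-entity token.
sample_csv = io.StringIO(
    "token,tag\n"
    "Rahim,B-Person\n"
    "Sylhet,B-Location\n"
    "e,O\n"
    "gese,O\n"
)

def read_bio(fh):
    """Group (token, tag) rows into (label, surface form) entity spans."""
    entities, current = [], None
    for row in csv.DictReader(fh):
        tag = row["tag"]
        if tag.startswith("B-"):
            if current:
                entities.append(current)
            current = (tag[2:], [row["token"]])
        elif tag.startswith("I-") and current:
            current[1].append(row["token"])
        else:
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(label, " ".join(tokens)) for label, tokens in entities]

print(read_bio(sample_csv))  # [('Person', 'Rahim'), ('Location', 'Sylhet')]
```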
2502.15251 | Yifei Huang | Nie Lin, Takehiko Ohkawa, Yifei Huang, Mingfang Zhang, Minjie Cai,
Ming Li, Ryosuke Furuta, Yoichi Sato | SiMHand: Mining Similar Hands for Large-Scale 3D Hand Pose Pre-training | ICLR 2025. arXiv admin note: text overlap with arXiv:2409.09714 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present a framework for pre-training of 3D hand pose estimation from
in-the-wild hand images sharing similar hand characteristics, dubbed
SimHand. Pre-training with large-scale images achieves promising results in
various tasks, but prior methods for 3D hand pose pre-training have not fully
utilized the potential of diverse hand images accessible from in-the-wild
videos. To facilitate scalable pre-training, we first prepare an extensive pool
of hand images from in-the-wild videos and design our pre-training method with
contrastive learning. Specifically, we collect over 2.0M hand images from
recent human-centric videos, such as 100DOH and Ego4D. To extract
discriminative information from these images, we focus on the similarity of
hands: pairs of non-identical samples with similar hand poses. We then propose
a novel contrastive learning method that embeds similar hand pairs closer in
the feature space. Our method not only learns from similar samples but also
adaptively weights the contrastive learning loss based on inter-sample
distance, leading to additional performance gains. Our experiments demonstrate
that our method outperforms conventional contrastive learning approaches that
produce positive pairs solely from a single image with data augmentation. We
achieve significant improvements over the state-of-the-art method (PeCLR) in
various datasets, with gains of 15% on FreiHand, 10% on DexYCB, and 4% on
AssemblyHands.
Our code is available at https://github.com/ut-vision/SiMHand.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 07:02:05 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 05:54:56 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Lin",
"Nie",
""
],
[
"Ohkawa",
"Takehiko",
""
],
[
"Huang",
"Yifei",
""
],
[
"Zhang",
"Mingfang",
""
],
[
"Cai",
"Minjie",
""
],
[
"Li",
"Ming",
""
],
[
"Furuta",
"Ryosuke",
""
],
[
"Sato",
"Yoichi",
""
]
] | TITLE: SiMHand: Mining Similar Hands for Large-Scale 3D Hand Pose Pre-training
ABSTRACT: We present a framework for pre-training of 3D hand pose estimation from
in-the-wild hand images sharing similar hand characteristics, dubbed
SimHand. Pre-training with large-scale images achieves promising results in
various tasks, but prior methods for 3D hand pose pre-training have not fully
utilized the potential of diverse hand images accessible from in-the-wild
videos. To facilitate scalable pre-training, we first prepare an extensive pool
of hand images from in-the-wild videos and design our pre-training method with
contrastive learning. Specifically, we collect over 2.0M hand images from
recent human-centric videos, such as 100DOH and Ego4D. To extract
discriminative information from these images, we focus on the similarity of
hands: pairs of non-identical samples with similar hand poses. We then propose
a novel contrastive learning method that embeds similar hand pairs closer in
the feature space. Our method not only learns from similar samples but also
adaptively weights the contrastive learning loss based on inter-sample
distance, leading to additional performance gains. Our experiments demonstrate
that our method outperforms conventional contrastive learning approaches that
produce positive pairs solely from a single image with data augmentation. We
achieve significant improvements over the state-of-the-art method (PeCLR) in
various datasets, with gains of 15% on FreiHand, 10% on DexYCB, and 4% on
AssemblyHands.
Our code is available at https://github.com/ut-vision/SiMHand.
|
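A hedged sketch of a contrastive loss over pairs of non-identical images with similar hand poses, with each positive pair re-weighted by inter-sample similarity as the record above describes. The exact weighting form, temperature, and tensor shapes are assumptions rather than the paper's loss.

```python
import numpy as np

def weighted_pair_contrastive(z_a, z_b, tau=0.1):
    """z_a, z_b: (N, D) L2-normalized embeddings; row i of each forms a similar-hand pair."""
    sim = z_a @ z_b.T / tau                                   # (N, N) similarity logits
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos_log_prob = np.diag(log_prob)                          # positives sit on the diagonal
    # Adaptive weight: closer pairs (higher cosine similarity) contribute more.
    w = np.exp(np.diag(z_a @ z_b.T))
    w = w / w.sum() * len(w)                                  # normalize to mean weight 1
    return float(-(w * pos_log_prob).mean())

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
za, zb = l2norm(rng.normal(size=(8, 32))), l2norm(rng.normal(size=(8, 32)))
print(weighted_pair_contrastive(za, zb))
```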
2502.15847 | William Fung | W. Fung, Y. Hao, X. Gu, G. Robert-Demolaize | Application of Dynamic Mode Decomposition for Improved Optics
Measurements from s star Movement at sPHENIX | 14 pages, 20 figures | null | null | null | physics.acc-ph nucl-ex | http://creativecommons.org/licenses/by/4.0/ | Current average horizontal beta beat measurements between operating
Interaction Regions (IR) in the Relativistic Heavy Ion Collider (RHIC) are
around 15 percent, along with significant variation in s star. This threshold to
measure the linear optics can be improved by considering preprocessing methods
involving data reconstruction such as Dynamic Mode Decomposition (DMD), and
cross checking between different method variations, model independent and
dependent methods, and turn by turn (TBT) datasets. These were then applied to
analyze the movement of horizontal s star at the 8 o'clock IR at RHIC (IR8).
This movement was done using an optics response matrix to determine magnet
strengths necessary to move horizontal s star without disturbing other optics.
Data preprocessing was found to significantly aid in beat reduction around IP,
with DMD demonstrating the least variability between preprocessing methods and
between horizontal s star movements. These preprocessing methods will be
implemented into RHIC for future linear optics analysis.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 23:06:41 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 00:13:03 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Fung",
"W.",
""
],
[
"Hao",
"Y.",
""
],
[
"Gu",
"X.",
""
],
[
"Robert-Demolaize",
"G.",
""
]
] | TITLE: Application of Dynamic Mode Decomposition for Improved Optics
Measurements from s star Movement at sPHENIX
ABSTRACT: Current average horizontal beta beat measurements between operating
Interaction Regions (IR) in the Relativistic Heavy Ion Collider (RHIC) are
around 15 percent, along with significant variation in s star. This threshold to
measure the linear optics can be improved by considering preprocessing methods
involving data reconstruction such as Dynamic Mode Decomposition (DMD), and
cross checking between different method variations, model independent and
dependent methods, and turn by turn (TBT) datasets. These were then applied to
analyze the movement of horizontal s star at the 8 o'clock IR at RHIC (IR8).
This movement was done using an optics response matrix to determine magnet
strengths necessary to move horizontal s star without disturbing other optics.
Data preprocessing was found to significantly aid in beat reduction around IP,
with DMD demonstrating the least variability between preprocessing methods and
between horizontal s star movements. These preprocessing methods will be
implemented into RHIC for future linear optics analysis.
|
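The record above uses Dynamic Mode Decomposition as a data-reconstruction preprocessing step. Below is a minimal exact-DMD sketch (the standard SVD-based algorithm, not the authors' code) applied to a toy turn-by-turn-style signal; the rank, signal, and noise level are illustrative.

```python
import numpy as np

def dmd_reconstruct(X, r):
    """X: (n_bpms, n_turns) snapshot matrix; return its rank-r DMD reconstruction."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    A_tilde = U.conj().T @ X2 @ V / s          # low-rank linear propagator
    evals, W = np.linalg.eig(A_tilde)
    Phi = X2 @ V / s @ W                       # DMD modes
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]
    t = np.arange(X.shape[1])
    dynamics = (evals[:, None] ** t) * b[:, None]
    return (Phi @ dynamics).real

# Toy betatron-like signal with noise across 4 "BPMs" and 256 turns.
turns = np.arange(256)
clean = np.array([np.cos(2 * np.pi * 0.22 * turns + p) for p in (0.0, 0.5, 1.0, 1.5)])
noisy = clean + 0.2 * np.random.default_rng(1).normal(size=clean.shape)
print(np.abs(dmd_reconstruct(noisy, r=2) - clean).mean())
```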
2502.18440 | Matthew Gaughan | Matthew Gaughan, Kaylea Champion, Sohyeon Hwang, Aaron Shaw | The Introduction of README and CONTRIBUTING Files in Open Source
Software Development | Accepted to the International Conference on Cooperative and Human
Aspects of Software Engineering (CHASE) 2025 | null | null | null | cs.SE cs.SI | http://creativecommons.org/licenses/by-sa/4.0/ | README and CONTRIBUTING files can serve as the first point of contact for
potential contributors to free/libre and open source software (FLOSS) projects.
Prominent open source software organizations such as Mozilla, GitHub, and the
Linux Foundation advocate that projects provide community-focused and
process-oriented documentation early to foster recruitment and activity. In
this paper we investigate the introduction of these documents in FLOSS
projects, including whether early documentation conforms to these
recommendations or explains subsequent activity. We use a novel dataset of
FLOSS projects packaged by the Debian GNU/Linux distribution and conduct a
quantitative analysis to examine README (n=4226) and CONTRIBUTING (n=714) files
when they are first published into projects' repositories. We find that
projects create minimal READMEs proactively, but often publish CONTRIBUTING
files following an influx of contributions. The initial versions of these files
rarely focus on community development, instead containing descriptions of
project procedure for library usage or code contribution. The findings suggest
that FLOSS projects do not create documentation with community-building in
mind, but rather favor brevity and standardized instructions.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 18:33:52 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 17:00:28 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Gaughan",
"Matthew",
""
],
[
"Champion",
"Kaylea",
""
],
[
"Hwang",
"Sohyeon",
""
],
[
"Shaw",
"Aaron",
""
]
] | TITLE: The Introduction of README and CONTRIBUTING Files in Open Source
Software Development
ABSTRACT: README and CONTRIBUTING files can serve as the first point of contact for
potential contributors to free/libre and open source software (FLOSS) projects.
Prominent open source software organizations such as Mozilla, GitHub, and the
Linux Foundation advocate that projects provide community-focused and
process-oriented documentation early to foster recruitment and activity. In
this paper we investigate the introduction of these documents in FLOSS
projects, including whether early documentation conforms to these
recommendations or explains subsequent activity. We use a novel dataset of
FLOSS projects packaged by the Debian GNU/Linux distribution and conduct a
quantitative analysis to examine README (n=4226) and CONTRIBUTING (n=714) files
when they are first published into projects' repositories. We find that
projects create minimal READMEs proactively, but often publish CONTRIBUTING
files following an influx of contributions. The initial versions of these files
rarely focus on community development, instead containing descriptions of
project procedure for library usage or code contribution. The findings suggest
that FLOSS projects do not create documentation with community-building in
mind, but rather favor brevity and standardized instructions.
|
2502.18470 | Dazhou Yu | Dazhou Yu, Riyang Bao, Gengchen Mai, Liang Zhao | Spatial-RAG: Spatial Retrieval Augmented Generation for Real-World
Spatial Reasoning Questions | null | null | null | null | cs.IR cs.ET cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Spatial reasoning remains a challenge for Large Language Models (LLMs), which
struggle with spatial data retrieval and reasoning. We propose Spatial
Retrieval-Augmented Generation (Spatial-RAG), a framework that extends RAG to
spatial tasks by integrating sparse spatial retrieval (spatial databases) and
dense semantic retrieval (LLM-based similarity). A multi-objective ranking
strategy balances spatial constraints and semantic relevance, while an
LLM-guided generator ensures coherent responses. Experiments on a real-world
tourism dataset show that Spatial-RAG significantly improves spatial question
answering, bridging the gap between LLMs and spatial intelligence.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 01:30:06 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 05:17:57 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Mar 2025 02:48:55 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Yu",
"Dazhou",
""
],
[
"Bao",
"Riyang",
""
],
[
"Mai",
"Gengchen",
""
],
[
"Zhao",
"Liang",
""
]
] | TITLE: Spatial-RAG: Spatial Retrieval Augmented Generation for Real-World
Spatial Reasoning Questions
ABSTRACT: Spatial reasoning remains a challenge for Large Language Models (LLMs), which
struggle with spatial data retrieval and reasoning. We propose Spatial
Retrieval-Augmented Generation (Spatial-RAG), a framework that extends RAG to
spatial tasks by integrating sparse spatial retrieval (spatial databases) and
dense semantic retrieval (LLM-based similarity). A multi-objective ranking
strategy balances spatial constraints and semantic relevance, while an
LLM-guided generator ensures coherent responses. Experiments on a real-world
tourism dataset show that Spatial-RAG significantly improves spatial question
answering, bridging the gap between LLMs and spatial intelligence.
|
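A hedged sketch of fusing a sparse spatial score with a dense semantic similarity into a single ranking, in the spirit of the multi-objective ranking described in the record above. The haversine-based decay, the 0.5/0.5 weights, and the point-of-interest fields are illustrative assumptions, not the Spatial-RAG ranking.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def rank(query_loc, candidates, w_spatial=0.5, w_semantic=0.5):
    scored = []
    for c in candidates:
        spatial = math.exp(-haversine_km(query_loc, c["loc"]) / 2.0)  # decay over ~2 km
        scored.append((w_spatial * spatial + w_semantic * c["semantic_sim"], c["name"]))
    return sorted(scored, reverse=True)

pois = [
    {"name": "riverside ramen bar", "loc": (35.011, 135.768), "semantic_sim": 0.82},
    {"name": "castle-view cafe", "loc": (35.030, 135.770), "semantic_sim": 0.91},
]
print(rank((35.010, 135.765), pois))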
2503.00325 | Hailiang Zhao | Zhiwei Ling, Yachen Chang, Hailiang Zhao, Xinkui Zhao, Kingsum Chow,
Shuiguang Deng | CADRef: Robust Out-of-Distribution Detection via Class-Aware Decoupled
Relative Feature Leveraging | This paper has been accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deep neural networks (DNNs) have been widely criticized for their
overconfidence when dealing with out-of-distribution (OOD) samples,
highlighting the critical need for effective OOD detection to ensure the safe
deployment of DNNs in real-world settings. Existing post-hoc OOD detection
methods primarily enhance the discriminative power of logit-based approaches by
reshaping sample features, yet they often neglect critical information inherent
in the features themselves. In this paper, we propose the Class-Aware Relative
Feature-based method (CARef), which utilizes the error between a sample's
feature and its class-aware average feature as a discriminative criterion. To
further refine this approach, we introduce the Class-Aware Decoupled Relative
Feature-based method (CADRef), which decouples sample features based on the
alignment of signs between the relative feature and corresponding model
weights, enhancing the discriminative capabilities of CARef. Extensive
experimental results across multiple datasets and models demonstrate that both
proposed methods exhibit effectiveness and robustness in OOD detection compared
to state-of-the-art methods. Specifically, our two methods outperform the best
baseline by 2.82% and 3.27% in AUROC, with improvements of 4.03% and 6.32% in
FPR95, respectively.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 03:23:10 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 02:11:41 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Ling",
"Zhiwei",
""
],
[
"Chang",
"Yachen",
""
],
[
"Zhao",
"Hailiang",
""
],
[
"Zhao",
"Xinkui",
""
],
[
"Chow",
"Kingsum",
""
],
[
"Deng",
"Shuiguang",
""
]
] | TITLE: CADRef: Robust Out-of-Distribution Detection via Class-Aware Decoupled
Relative Feature Leveraging
ABSTRACT: Deep neural networks (DNNs) have been widely criticized for their
overconfidence when dealing with out-of-distribution (OOD) samples,
highlighting the critical need for effective OOD detection to ensure the safe
deployment of DNNs in real-world settings. Existing post-hoc OOD detection
methods primarily enhance the discriminative power of logit-based approaches by
reshaping sample features, yet they often neglect critical information inherent
in the features themselves. In this paper, we propose the Class-Aware Relative
Feature-based method (CARef), which utilizes the error between a sample's
feature and its class-aware average feature as a discriminative criterion. To
further refine this approach, we introduce the Class-Aware Decoupled Relative
Feature-based method (CADRef), which decouples sample features based on the
alignment of signs between the relative feature and corresponding model
weights, enhancing the discriminative capabilities of CARef. Extensive
experimental results across multiple datasets and models demonstrate that both
proposed methods exhibit effectiveness and robustness in OOD detection compared
to state-of-the-art methods. Specifically, our two methods outperform the best
baseline by 2.82% and 3.27% in AUROC, with improvements of 4.03% and 6.32% in
FPR95, respectively.
|
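A hedged sketch in the spirit of the CARef criterion described in the record above: score a sample by the error between its feature and a class-aware average feature. Using the predicted class and a relative L1 error is an assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def class_means(train_feats, train_labels, n_classes):
    """Class-aware average features computed from in-distribution training data."""
    return np.stack([train_feats[train_labels == c].mean(axis=0) for c in range(n_classes)])

def caref_like_score(feat, logits, means):
    """Relative error between a sample's feature and the mean of its predicted class."""
    c = int(np.argmax(logits))
    return np.abs(feat - means[c]).sum() / (np.abs(feat).sum() + 1e-8)  # larger = more OOD-like

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, 600)
feats = rng.normal(size=(600, 64)) + labels[:, None]      # three crude feature clusters
means = class_means(feats, labels, 3)
id_feat = means[0] + rng.normal(scale=0.1, size=64)       # near the class-0 average
ood_feat = rng.normal(loc=10.0, size=64)                  # far from every class average
logits = np.array([1.0, 0.2, 0.1])                        # both samples predicted as class 0
print(caref_like_score(id_feat, logits, means), caref_like_score(ood_feat, logits, means))
```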
2503.01066 | Yuyang Huang | Yuyang Huang, Yuhan Liu, Haryadi S. Gunawi, Beibin Li, Changho Hwang | Alchemist: Towards the Design of Efficient Online Continual Learning
System | null | null | null | null | cs.LG cs.CL cs.DC | http://creativecommons.org/licenses/by/4.0/ | Continual learning has become a promising solution to refine large language
models incrementally by leveraging user feedback. In particular, online
continual learning - iteratively training the model with small batches of user
feedback - has demonstrated notable performance improvements. However, the
existing practice of separating training and serving processes forces the
online trainer to recompute the intermediate results already done during
serving. Such redundant computations can account for 30%-42% of total training
time.
In this paper, we propose Alchemist, to the best of our knowledge, the first
online continual learning system that efficiently reuses serving activations to
increase training throughput. Alchemist introduces two key techniques: (1)
recording and storing activations and KV cache only during the prefill phase to
minimize latency and memory overhead; and (2) smart activation offloading and
hedging. Evaluations with inputs of varied token length sampled from ShareGPT
dataset show that compared with a separate training cluster, Alchemist
significantly increases training throughput by up to 1.72x, reduces up to 47%
memory usage during training, and supports up to 2x more training tokens - all
while maintaining negligible impact on serving latency.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 00:14:34 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 16:57:12 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Huang",
"Yuyang",
""
],
[
"Liu",
"Yuhan",
""
],
[
"Gunawi",
"Haryadi S.",
""
],
[
"Li",
"Beibin",
""
],
[
"Hwang",
"Changho",
""
]
] | TITLE: Alchemist: Towards the Design of Efficient Online Continual Learning
System
ABSTRACT: Continual learning has become a promising solution to refine large language
models incrementally by leveraging user feedback. In particular, online
continual learning - iteratively training the model with small batches of user
feedback - has demonstrated notable performance improvements. However, the
existing practice of separating training and serving processes forces the
online trainer to recompute intermediate results already computed during
serving. Such redundant computations can account for 30%-42% of total training
time.
In this paper, we propose Alchemist, to the best of our knowledge, the first
online continual learning system that efficiently reuses serving activations to
increase training throughput. Alchemist introduces two key techniques: (1)
recording and storing activations and KV cache only during the prefill phase to
minimize latency and memory overhead; and (2) smart activation offloading and
hedging. Evaluations with inputs of varied token length sampled from ShareGPT
dataset show that compared with a separate training cluster, Alchemist
significantly increases training throughput by up to 1.72x, reduces up to 47%
memory usage during training, and supports up to 2x more training tokens - all
while maintaining negligible impact on serving latency.
|
2503.01895 | Haoxin Liu | Haoxin Liu, Zhiyuan Zhao, Shiduo Li, B. Aditya Prakash | Evaluating System 1 vs. 2 Reasoning Approaches for Zero-Shot Time Series
Forecasting: A Benchmark and Insights | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reasoning ability is crucial for solving challenging tasks. With the
advancement of foundation models, such as the emergence of large language
models (LLMs), a wide range of reasoning strategies has been proposed,
including test-time enhancements, such as Chain-of-Thought, and post-training
optimizations, as used in DeepSeek-R1. While these reasoning strategies have
demonstrated effectiveness across various challenging language or vision tasks,
their applicability and impact on time-series forecasting (TSF), particularly
the challenging zero-shot TSF, remain largely unexplored. In particular, it is
unclear whether zero-shot TSF benefits from reasoning and, if so, what types of
reasoning strategies are most effective. To bridge this gap, we propose ReC4TS,
the first benchmark that systematically evaluates the effectiveness of popular
reasoning strategies when applied to zero-shot TSF tasks. ReC4TS conducts
comprehensive evaluations across datasets spanning eight domains, covering both
unimodal and multimodal settings with short-term and long-term forecasting tasks. More
importantly, ReC4TS provides key insights: (1) Self-consistency emerges as the
most effective test-time reasoning strategy; (2) Group-relative policy
optimization emerges as a more suitable approach for incentivizing reasoning
ability during post-training; (3) Multimodal TSF benefits more from reasoning
strategies compared to unimodal TSF. Beyond these insights, ReC4TS establishes
two pioneering starting blocks to support future zero-shot TSF reasoning
research: (1) A novel dataset, TimeThinking, containing forecasting samples
annotated with reasoning trajectories from multiple advanced LLMs, and (2) A
new and simple test-time scaling-law validated on foundational TSF models
enabled by the self-consistency reasoning strategy. All data and code are publicly
accessible at: https://github.com/AdityaLab/OpenTimeR
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 23:27:37 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 00:16:53 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Liu",
"Haoxin",
""
],
[
"Zhao",
"Zhiyuan",
""
],
[
"Li",
"Shiduo",
""
],
[
"Prakash",
"B. Aditya",
""
]
] | TITLE: Evaluating System 1 vs. 2 Reasoning Approaches for Zero-Shot Time Series
Forecasting: A Benchmark and Insights
ABSTRACT: Reasoning ability is crucial for solving challenging tasks. With the
advancement of foundation models, such as the emergence of large language
models (LLMs), a wide range of reasoning strategies has been proposed,
including test-time enhancements, such as Chain-of-Thought, and post-training
optimizations, as used in DeepSeek-R1. While these reasoning strategies have
demonstrated effectiveness across various challenging language or vision tasks,
their applicability and impact on time-series forecasting (TSF), particularly
the challenging zero-shot TSF, remain largely unexplored. In particular, it is
unclear whether zero-shot TSF benefits from reasoning and, if so, what types of
reasoning strategies are most effective. To bridge this gap, we propose ReC4TS,
the first benchmark that systematically evaluates the effectiveness of popular
reasoning strategies when applied to zero-shot TSF tasks. ReC4TS conducts
comprehensive evaluations across datasets spanning eight domains, covering both
unimodal and multimodal settings with short-term and long-term forecasting tasks. More
importantly, ReC4TS provides key insights: (1) Self-consistency emerges as the
most effective test-time reasoning strategy; (2) Group-relative policy
optimization emerges as a more suitable approach for incentivizing reasoning
ability during post-training; (3) Multimodal TSF benefits more from reasoning
strategies compared to unimodal TSF. Beyond these insights, ReC4TS establishes
two pioneering starting blocks to support future zero-shot TSF reasoning
research: (1) A novel dataset, TimeThinking, containing forecasting samples
annotated with reasoning trajectories from multiple advanced LLMs, and (2) A
new and simple test-time scaling-law validated on foundational TSF models
enabled by the self-consistency reasoning strategy. All data and code are publicly
accessible at: https://github.com/AdityaLab/OpenTimeR
|
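A small sketch of self-consistency applied to forecasting, the test-time strategy highlighted in the record above: draw several stochastic forecasts for the same history and aggregate them point-wise. The toy forecaster and the median aggregation are illustrative assumptions, not the benchmark's protocol.

```python
import numpy as np

def toy_forecaster(history, horizon, rng):
    """Stand-in for a stochastic foundation-model forecast (noisy linear drift)."""
    drift = history[-1] - history[0]
    step = drift / max(len(history) - 1, 1)
    noise = rng.normal(scale=np.std(history) * 0.5, size=horizon)
    return history[-1] + step * np.arange(1, horizon + 1) + np.cumsum(noise)

def self_consistent_forecast(history, horizon=12, n_samples=16, seed=0):
    """Sample several forecasts and return their point-wise median as the consensus."""
    rng = np.random.default_rng(seed)
    samples = np.stack([toy_forecaster(history, horizon, rng) for _ in range(n_samples)])
    return np.median(samples, axis=0)

hist = np.sin(np.linspace(0, 6, 48)) + np.linspace(0, 1, 48)
print(self_consistent_forecast(hist)[:4])
```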
2503.02063 | Adnen Abdessaied | Adnen Abdessaied, Anna Rohrbach, Marcus Rohrbach, Andreas Bulling | V$^2$Dial: Unification of Video and Visual Dialog via Multimodal Experts | CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present V$^2$Dial - a novel expert-based model specifically geared towards
simultaneously handling image and video input data for multimodal
conversational tasks. Current multimodal models primarily focus on simpler
tasks (e.g., VQA, VideoQA, video-text retrieval) and often neglect the more
challenging conversational counterparts, such as video and visual/image dialog.
Moreover, works on both conversational tasks evolved separately from each other
despite their apparent similarities, limiting their applicability potential. To
this end, we propose to unify both tasks using a single model that for the
first time jointly learns the spatial and temporal features of images and
videos by routing them through dedicated experts and aligns them using matching
and contrastive learning techniques. Furthermore, we systematically study the
domain shift between the two tasks by investigating whether and to what extent
these seemingly related tasks can mutually benefit from their respective
training data. Extensive evaluations on the widely used video and visual dialog
datasets of AVSD and VisDial show that our model achieves new state-of-the-art
results across four benchmarks both in zero-shot and fine-tuning settings.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 21:27:38 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 12:29:29 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Abdessaied",
"Adnen",
""
],
[
"Rohrbach",
"Anna",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Bulling",
"Andreas",
""
]
] | TITLE: V$^2$Dial: Unification of Video and Visual Dialog via Multimodal Experts
ABSTRACT: We present V$^2$Dial - a novel expert-based model specifically geared towards
simultaneously handling image and video input data for multimodal
conversational tasks. Current multimodal models primarily focus on simpler
tasks (e.g., VQA, VideoQA, video-text retrieval) and often neglect the more
challenging conversational counterparts, such as video and visual/image dialog.
Moreover, works on both conversational tasks evolved separately from each other
despite their apparent similarities, limiting their applicability potential. To
this end, we propose to unify both tasks using a single model that for the
first time jointly learns the spatial and temporal features of images and
videos by routing them through dedicated experts and aligns them using matching
and contrastive learning techniques. Furthermore, we systematically study the
domain shift between the two tasks by investigating whether and to what extent
these seemingly related tasks can mutually benefit from their respective
training data. Extensive evaluations on the widely used video and visual dialog
datasets of AVSD and VisDial show that our model achieves new state-of-the-art
results across four benchmarks both in zero-shot and fine-tuning settings.
|
2503.02332 | Gen Shi | Gen Shi, Hui Zhang and Jie Tian | COMMA: Coordinate-aware Modulated Mamba Network for 3D Dispersed Vessel
Segmentation | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate segmentation of 3D vascular structures is essential for various
medical imaging applications. The dispersed nature of vascular structures leads
to inherent spatial uncertainty and necessitates location awareness, yet most
current 3D medical segmentation models rely on the patch-wise training strategy
that usually loses this spatial context. In this study, we introduce the
Coordinate-aware Modulated Mamba Network (COMMA) and contribute a manually
labeled dataset of 570 cases, the largest publicly available 3D vessel dataset
to date. COMMA leverages both entire and cropped patch data through global and
local branches, ensuring robust and efficient spatial location awareness.
Specifically, COMMA employs a channel-compressed Mamba (ccMamba) block to
encode entire image data, capturing long-range dependencies while optimizing
computational costs. Additionally, we propose a coordinate-aware modulated
(CaM) block to enhance interactions between the global and local branches,
allowing the local branch to better perceive spatial information. We evaluate
COMMA on six datasets, covering two imaging modalities and five types of
vascular tissues. The results demonstrate COMMA's superior performance compared
to state-of-the-art methods while maintaining computational efficiency, especially in
segmenting small vessels. Ablation studies further highlight the importance of
our proposed modules and spatial information. The code and data will be open
source at https://github.com/shigen-StoneRoot/COMMA.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 06:45:10 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 10:00:48 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Shi",
"Gen",
""
],
[
"Zhang",
"Hui",
""
],
[
"Tian",
"Jie",
""
]
] | TITLE: COMMA: Coordinate-aware Modulated Mamba Network for 3D Dispersed Vessel
Segmentation
ABSTRACT: Accurate segmentation of 3D vascular structures is essential for various
medical imaging applications. The dispersed nature of vascular structures leads
to inherent spatial uncertainty and necessitates location awareness, yet most
current 3D medical segmentation models rely on the patch-wise training strategy
that usually loses this spatial context. In this study, we introduce the
Coordinate-aware Modulated Mamba Network (COMMA) and contribute a manually
labeled dataset of 570 cases, the largest publicly available 3D vessel dataset
to date. COMMA leverages both entire and cropped patch data through global and
local branches, ensuring robust and efficient spatial location awareness.
Specifically, COMMA employs a channel-compressed Mamba (ccMamba) block to
encode entire image data, capturing long-range dependencies while optimizing
computational costs. Additionally, we propose a coordinate-aware modulated
(CaM) block to enhance interactions between the global and local branches,
allowing the local branch to better perceive spatial information. We evaluate
COMMA on six datasets, covering two imaging modalities and five types of
vascular tissues. The results demonstrate COMMA's superior performance compared
to state-of-the-art methods while maintaining computational efficiency, especially in
segmenting small vessels. Ablation studies further highlight the importance of
our proposed modules and spatial information. The code and data will be open
source at https://github.com/shigen-StoneRoot/COMMA.
|
2503.02784 | Jaekyeom Kim | Jaekyeom Kim, Sungryull Sohn, Gerrard Jeongwon Jo, Jihoon Choi,
Kyunghoon Bae, Hwayoung Lee, Yongmin Park, and Honglak Lee | Do Not Trust Licenses You See: Dataset Compliance Requires Massive-Scale
AI-Powered Lifecycle Tracing | null | null | null | null | cs.CY cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper argues that a dataset's legal risk cannot be accurately assessed
by its license terms alone; instead, tracking dataset redistribution and its
full lifecycle is essential. However, this process is too complex for legal
experts to handle manually at scale. Tracking dataset provenance, verifying
redistribution rights, and assessing evolving legal risks across multiple
stages require a level of precision and efficiency that exceeds human
capabilities. Addressing this challenge effectively demands AI agents that can
systematically trace dataset redistribution, analyze compliance, and identify
legal risks. We develop an automated data compliance system called NEXUS and
show that AI can perform these tasks with higher accuracy, efficiency, and
cost-effectiveness than human experts. Our massive legal analysis of 17,429
unique entities and 8,072 license terms using this approach reveals the
discrepancies in legal rights between the original datasets before
redistribution and their redistributed subsets, underscoring the necessity of
data lifecycle-aware compliance. For instance, we find that out of 2,852
datasets with commercially viable individual license terms, only 605 (21%) are
legally permissible for commercialization. This work sets a new standard for AI
data governance, advocating for a framework that systematically examines the
entire lifecycle of dataset redistribution to ensure transparent, legal, and
responsible dataset management.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 16:57:53 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 18:45:51 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Mar 2025 16:58:30 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Kim",
"Jaekyeom",
""
],
[
"Sohn",
"Sungryull",
""
],
[
"Jo",
"Gerrard Jeongwon",
""
],
[
"Choi",
"Jihoon",
""
],
[
"Bae",
"Kyunghoon",
""
],
[
"Lee",
"Hwayoung",
""
],
[
"Park",
"Yongmin",
""
],
[
"Lee",
"Honglak",
""
]
] | TITLE: Do Not Trust Licenses You See: Dataset Compliance Requires Massive-Scale
AI-Powered Lifecycle Tracing
ABSTRACT: This paper argues that a dataset's legal risk cannot be accurately assessed
by its license terms alone; instead, tracking dataset redistribution and its
full lifecycle is essential. However, this process is too complex for legal
experts to handle manually at scale. Tracking dataset provenance, verifying
redistribution rights, and assessing evolving legal risks across multiple
stages require a level of precision and efficiency that exceeds human
capabilities. Addressing this challenge effectively demands AI agents that can
systematically trace dataset redistribution, analyze compliance, and identify
legal risks. We develop an automated data compliance system called NEXUS and
show that AI can perform these tasks with higher accuracy, efficiency, and
cost-effectiveness than human experts. Our massive legal analysis of 17,429
unique entities and 8,072 license terms using this approach reveals the
discrepancies in legal rights between the original datasets before
redistribution and their redistributed subsets, underscoring the necessity of
data lifecycle-aware compliance. For instance, we find that out of 2,852
datasets with commercially viable individual license terms, only 605 (21%) are
legally permissible for commercialization. This work sets a new standard for AI
data governance, advocating for a framework that systematically examines the
entire lifecycle of dataset redistribution to ensure transparent, legal, and
responsible dataset management.
|
2503.03114 | Chenxi Zhang | Chenxi Zhang, Bicheng Zhang, Dingyu Yang, Xin Peng, Miao Chen, Senyu
Xie, Gang Chen, Wei Bi, Wei Li | PromAssistant: Leveraging Large Language Models for Text-to-PromQL | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing complexity of modern online service systems,
understanding the state and behavior of the systems is essential for ensuring
their reliability and stability. Therefore, metric monitoring systems are
widely used and have become important infrastructure in online service systems.
Engineers usually interact with metrics data by manually writing
domain-specific language (DSL) queries to achieve various analysis objectives.
However, writing these queries can be challenging and time-consuming, as it
requires engineers to have high programming skills and understand the context
of the system. In this paper, we focus on PromQL, which is the metric query DSL
provided by the widely used metric monitoring system Prometheus. We aim to
simplify metrics querying by enabling engineers to interact with metrics data
in Prometheus through natural language, and we call this task text-to-PromQL.
Building upon this insight, this paper proposes PromAssistant, a Large Language
Model-based text-to-PromQL framework. PromAssistant first uses a knowledge
graph to describe the complex context of an online service system. Then,
through the synergistic reasoning of LLMs and the knowledge graph,
PromAssistant transforms engineers' natural language questions into PromQL
queries. To evaluate PromAssistant, we manually construct the first
text-to-PromQL benchmark dataset which contains 280 metric query questions. The
experiment results show that PromAssistant is effective in text-to-PromQL and
outperforms baseline approaches. To the best of our knowledge, this paper is
the first study of text-to-PromQL, and PromAssistant pioneered the DSL
generation framework for metric querying and analysis.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 02:22:01 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 05:57:16 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Chenxi",
""
],
[
"Zhang",
"Bicheng",
""
],
[
"Yang",
"Dingyu",
""
],
[
"Peng",
"Xin",
""
],
[
"Chen",
"Miao",
""
],
[
"Xie",
"Senyu",
""
],
[
"Chen",
"Gang",
""
],
[
"Bi",
"Wei",
""
],
[
"Li",
"Wei",
""
]
] | TITLE: PromAssistant: Leveraging Large Language Models for Text-to-PromQL
ABSTRACT: With the increasing complexity of modern online service systems,
understanding the state and behavior of the systems is essential for ensuring
their reliability and stability. Therefore, metric monitoring systems are
widely used and have become important infrastructure in online service systems.
Engineers usually interact with metrics data by manually writing
domain-specific language (DSL) queries to achieve various analysis objectives.
However, writing these queries can be challenging and time-consuming, as it
requires engineers to have strong programming skills and understand the context
of the system. In this paper, we focus on PromQL, which is the metric query DSL
provided by the widely used metric monitoring system Prometheus. We aim to
simplify metrics querying by enabling engineers to interact with metrics data
in Prometheus through natural language, and we call this task text-to-PromQL.
Building upon this insight, this paper proposes PromAssistant, a Large Language
Model-based text-to-PromQL framework. PromAssistant first uses a knowledge
graph to describe the complex context of an online service system. Then,
through the synergistic reasoning of LLMs and the knowledge graph,
PromAssistant transforms engineers' natural language questions into PromQL
queries. To evaluate PromAssistant, we manually construct the first
text-to-PromQL benchmark dataset which contains 280 metric query questions. The
experiment results show that PromAssistant is effective in text-to-PromQL and
outperforms baseline approaches. To the best of our knowledge, this paper is
the first study of text-to-PromQL, and PromAssistant pioneered the DSL
generation framework for metric querying and analysis.
|
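A hedged sketch of the text-to-PromQL setup described in the record above: pair a natural-language question with knowledge-graph context about the monitored system and hand both to an LLM. The metric and label names are assumptions, and `call_llm` is a hypothetical placeholder rather than a real API.

```python
# Illustrative knowledge-graph facts; real systems would extract these from
# service metadata and the metrics actually exposed to Prometheus.
KG_FACTS = [
    "metric http_requests_total: counter, labels {service, status}",
    "service 'checkout' runs behind the api-gateway",
]

def build_text_to_promql_prompt(question, kg_facts):
    """Compose an LLM prompt from KG context and a natural-language question."""
    context = "\n".join(f"- {fact}" for fact in kg_facts)
    return (
        "You translate questions about Prometheus metrics into PromQL.\n"
        f"System context:\n{context}\n"
        f"Question: {question}\n"
        "PromQL:"
    )

prompt = build_text_to_promql_prompt(
    "What is the request rate of the checkout service over the last 5 minutes?",
    KG_FACTS,
)
print(prompt)
# One valid PromQL answer for this toy setup would look like:
#   sum(rate(http_requests_total{service="checkout"}[5m]))
# query = call_llm(prompt)   # hypothetical LLM call, not a real library function
```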
2503.05788 | Leonardo Berti | Leonardo Berti, Flavio Giorgi, Gjergji Kasneci | Emergent Abilities in Large Language Models: A Survey | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Large Language Models (LLMs) are leading a new technological revolution as
one of the most promising research streams toward artificial general
intelligence. The scaling of these models, accomplished by increasing the
number of parameters and the magnitude of the training datasets, has been
linked to various so-called emergent abilities that were previously unobserved.
These emergent abilities, ranging from advanced reasoning and in-context
learning to coding and problem-solving, have sparked an intense scientific
debate: Are they truly emergent, or do they simply depend on external factors,
such as training dynamics, the type of problems, or the chosen metric? What
underlying mechanism causes them? Despite their transformative potential,
emergent abilities remain poorly understood, leading to misconceptions about
their definition, nature, predictability, and implications. In this work, we
shed light on emergent abilities by conducting a comprehensive review of the
phenomenon, addressing both its scientific underpinnings and real-world
consequences. We first critically analyze existing definitions, exposing
inconsistencies in conceptualizing emergent abilities. We then explore the
conditions under which these abilities appear, evaluating the role of scaling
laws, task complexity, pre-training loss, quantization, and prompting
strategies. Our review extends beyond traditional LLMs and includes Large
Reasoning Models (LRMs), which leverage reinforcement learning and
inference-time search to amplify reasoning and self-reflection. However,
emergence is not inherently positive. As AI systems gain autonomous reasoning
capabilities, they also develop harmful behaviors, including deception,
manipulation, and reward hacking. We highlight growing concerns about safety
and governance, emphasizing the need for better evaluation frameworks and
regulatory oversight.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 01:20:01 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 13:28:04 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Berti",
"Leonardo",
""
],
[
"Giorgi",
"Flavio",
""
],
[
"Kasneci",
"Gjergji",
""
]
] | TITLE: Emergent Abilities in Large Language Models: A Survey
ABSTRACT: Large Language Models (LLMs) are leading a new technological revolution as
one of the most promising research streams toward artificial general
intelligence. The scaling of these models, accomplished by increasing the
number of parameters and the magnitude of the training datasets, has been
linked to various so-called emergent abilities that were previously unobserved.
These emergent abilities, ranging from advanced reasoning and in-context
learning to coding and problem-solving, have sparked an intense scientific
debate: Are they truly emergent, or do they simply depend on external factors,
such as training dynamics, the type of problems, or the chosen metric? What
underlying mechanism causes them? Despite their transformative potential,
emergent abilities remain poorly understood, leading to misconceptions about
their definition, nature, predictability, and implications. In this work, we
shed light on emergent abilities by conducting a comprehensive review of the
phenomenon, addressing both its scientific underpinnings and real-world
consequences. We first critically analyze existing definitions, exposing
inconsistencies in conceptualizing emergent abilities. We then explore the
conditions under which these abilities appear, evaluating the role of scaling
laws, task complexity, pre-training loss, quantization, and prompting
strategies. Our review extends beyond traditional LLMs and includes Large
Reasoning Models (LRMs), which leverage reinforcement learning and
inference-time search to amplify reasoning and self-reflection. However,
emergence is not inherently positive. As AI systems gain autonomous reasoning
capabilities, they also develop harmful behaviors, including deception,
manipulation, and reward hacking. We highlight growing concerns about safety
and governance, emphasizing the need for better evaluation frameworks and
regulatory oversight.
|
2503.06456 | Chengxuan Qian | Chengxuan Qian, Kai Han, Jingchao Wang, Zhenlong Yuan, Chongwen Lyu,
Jun Chen, Zhe Liu | DynCIM: Dynamic Curriculum for Imbalanced Multimodal Learning | 10 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal learning integrates complementary information from diverse
modalities to enhance the decision-making process. However, the potential of
multimodal collaboration remains under-exploited due to disparities in data
quality and modality representation capabilities. To address this, we introduce
DynCIM, a novel dynamic curriculum learning framework designed to quantify the
inherent imbalances from both sample and modality perspectives. DynCIM employs
a sample-level curriculum to dynamically assess each sample's difficulty
according to prediction deviation, consistency, and stability, while a
modality-level curriculum measures modality contributions from global and
local perspectives. Furthermore, a gating-based dynamic fusion mechanism is introduced to
adaptively adjust modality contributions, minimizing redundancy and optimizing
fusion effectiveness. Extensive experiments on six multimodal benchmarking
datasets, spanning both bimodal and trimodal scenarios, demonstrate that DynCIM
consistently outperforms state-of-the-art methods. Our approach effectively
mitigates modality and sample imbalances while enhancing adaptability and
robustness in multimodal learning tasks. Our code is available at
https://github.com/Raymond-Qiancx/DynCIM.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 05:30:15 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 18:39:49 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Qian",
"Chengxuan",
""
],
[
"Han",
"Kai",
""
],
[
"Wang",
"Jingchao",
""
],
[
"Yuan",
"Zhenlong",
""
],
[
"Lyu",
"Chongwen",
""
],
[
"Chen",
"Jun",
""
],
[
"Liu",
"Zhe",
""
]
] | TITLE: DynCIM: Dynamic Curriculum for Imbalanced Multimodal Learning
ABSTRACT: Multimodal learning integrates complementary information from diverse
modalities to enhance the decision-making process. However, the potential of
multimodal collaboration remains under-exploited due to disparities in data
quality and modality representation capabilities. To address this, we introduce
DynCIM, a novel dynamic curriculum learning framework designed to quantify the
inherent imbalances from both sample and modality perspectives. DynCIM employs
a sample-level curriculum to dynamically assess each sample's difficulty
according to prediction deviation, consistency, and stability, while a
modality-level curriculum measures modality contributions from global and
local perspectives. Furthermore, a gating-based dynamic fusion mechanism is introduced to
adaptively adjust modality contributions, minimizing redundancy and optimizing
fusion effectiveness. Extensive experiments on six multimodal benchmarking
datasets, spanning both bimodal and trimodal scenarios, demonstrate that DynCIM
consistently outperforms state-of-the-art methods. Our approach effectively
mitigates modality and sample imbalances while enhancing adaptability and
robustness in multimodal learning tasks. Our code is available at
https://github.com/Raymond-Qiancx/DynCIM.
|
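A hedged sketch of a sample-level difficulty signal built from the three cues named in the record above (prediction deviation, consistency, and stability). The formulas and equal weighting are one plausible formulation, not DynCIM's.

```python
import numpy as np

def sample_difficulty(prob_history, label):
    """prob_history: (epochs, n_classes) softmax outputs for one sample across training."""
    p_true = prob_history[:, label]
    deviation = 1.0 - p_true[-1]                                   # latest prediction vs. the label
    consistency = (prob_history.argmax(axis=1) == label).mean()    # fraction of correct epochs
    stability = 1.0 - p_true.std()                                 # low variance = stable
    return (deviation + (1.0 - consistency) + (1.0 - stability)) / 3.0

easy = np.array([[0.7, 0.2, 0.1], [0.8, 0.1, 0.1], [0.9, 0.05, 0.05]])
hard = np.array([[0.2, 0.5, 0.3], [0.6, 0.2, 0.2], [0.1, 0.7, 0.2]])
print(sample_difficulty(easy, 0), sample_difficulty(hard, 0))  # hard sample scores higher
```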
2503.07026 | Yi Liu | Yi Liu, Hao Zhou, Wenxiang Shang, Ran Lin, Benlei Cui | Erase Diffusion: Empowering Object Removal Through Calibrating Diffusion
Pathways | accepted by CVPR 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Erase inpainting, or object removal, aims to precisely remove target objects
within masked regions while preserving the overall consistency of the
surrounding content. Despite diffusion-based methods have made significant
strides in the field of image inpainting, challenges remain regarding the
emergence of unexpected objects or artifacts. We assert that the inexact
diffusion pathways established by existing standard optimization paradigms
constrain the efficacy of object removal. To tackle these challenges, we
propose a novel Erase Diffusion, termed EraDiff, aimed at unleashing the
potential power of standard diffusion in the context of object removal. In
contrast to standard diffusion, the EraDiff adapts both the optimization
paradigm and the network to improve the coherence and elimination of the
erasure results. We first introduce a Chain-Rectifying Optimization (CRO)
paradigm, a sophisticated diffusion process specifically designed to align with
the objectives of erasure. This paradigm establishes innovative diffusion
transition pathways that simulate the gradual elimination of objects during
optimization, allowing the model to accurately capture the intent of object
removal. Furthermore, to mitigate deviations caused by artifacts during the
sampling pathways, we develop a simple yet effective Self-Rectifying Attention
(SRA) mechanism. The SRA calibrates the sampling pathways by altering
self-attention activation, allowing the model to effectively bypass artifacts
while further enhancing the coherence of the generated content. With this
design, our proposed EraDiff achieves state-of-the-art performance on the
OpenImages V5 dataset and demonstrates significant superiority in real-world
scenarios.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:06:51 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Liu",
"Yi",
""
],
[
"Zhou",
"Hao",
""
],
[
"Shang",
"Wenxiang",
""
],
[
"Lin",
"Ran",
""
],
[
"Cui",
"Benlei",
""
]
] | TITLE: Erase Diffusion: Empowering Object Removal Through Calibrating Diffusion
Pathways
ABSTRACT: Erase inpainting, or object removal, aims to precisely remove target objects
within masked regions while preserving the overall consistency of the
surrounding content. Although diffusion-based methods have made significant
strides in the field of image inpainting, challenges remain regarding the
emergence of unexpected objects or artifacts. We assert that the inexact
diffusion pathways established by existing standard optimization paradigms
constrain the efficacy of object removal. To tackle these challenges, we
propose a novel Erase Diffusion, termed EraDiff, aimed at unleashing the
potential power of standard diffusion in the context of object removal. In
contrast to standard diffusion, the EraDiff adapts both the optimization
paradigm and the network to improve the coherence and elimination of the
erasure results. We first introduce a Chain-Rectifying Optimization (CRO)
paradigm, a sophisticated diffusion process specifically designed to align with
the objectives of erasure. This paradigm establishes innovative diffusion
transition pathways that simulate the gradual elimination of objects during
optimization, allowing the model to accurately capture the intent of object
removal. Furthermore, to mitigate deviations caused by artifacts during the
sampling pathways, we develop a simple yet effective Self-Rectifying Attention
(SRA) mechanism. The SRA calibrates the sampling pathways by altering
self-attention activation, allowing the model to effectively bypass artifacts
while further enhancing the coherence of the generated content. With this
design, our proposed EraDiff achieves state-of-the-art performance on the
OpenImages V5 dataset and demonstrates significant superiority in real-world
scenarios.
|
2503.09640 | Weiquan Wang | Weiquan Wang, Jun Xiao, Yueting Zhuang, Long Chen | Physics-Aware Human-Object Rendering from Sparse Views via 3D Gaussian
Splatting | null | null | null | null | cs.GR cs.CV | http://creativecommons.org/licenses/by/4.0/ | Rendering realistic human-object interactions (HOIs) from sparse-view inputs
is challenging due to occlusions and incomplete observations, yet crucial for
various real-world applications. Existing methods always struggle with either
low rendering qualities (\eg, visual fidelity and physically plausible HOIs) or
high computational costs. To address these limitations, we propose HOGS
(Human-Object Rendering via 3D Gaussian Splatting), a novel framework for
efficient and physically plausible HOI rendering from sparse views.
Specifically, HOGS combines 3D Gaussian Splatting with a physics-aware
optimization process. It incorporates a Human Pose Refinement module for
accurate pose estimation and a Sparse-View Human-Object Contact Prediction
module for efficient contact region identification. This combination enables
coherent joint rendering of human and object Gaussians while enforcing
physically plausible interactions. Extensive experiments on the HODome dataset
demonstrate that HOGS achieves superior rendering quality, efficiency, and
physical plausibility compared to existing methods. We further show its
extensibility to hand-object grasp rendering tasks, presenting its broader
applicability to articulated object interactions.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 04:19:21 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wang",
"Weiquan",
""
],
[
"Xiao",
"Jun",
""
],
[
"Zhuang",
"Yueting",
""
],
[
"Chen",
"Long",
""
]
] | TITLE: Physics-Aware Human-Object Rendering from Sparse Views via 3D Gaussian
Splatting
ABSTRACT: Rendering realistic human-object interactions (HOIs) from sparse-view inputs
is challenging due to occlusions and incomplete observations, yet crucial for
various real-world applications. Existing methods always struggle with either
low rendering qualities (e.g., visual fidelity and physically plausible HOIs) or
high computational costs. To address these limitations, we propose HOGS
(Human-Object Rendering via 3D Gaussian Splatting), a novel framework for
efficient and physically plausible HOI rendering from sparse views.
Specifically, HOGS combines 3D Gaussian Splatting with a physics-aware
optimization process. It incorporates a Human Pose Refinement module for
accurate pose estimation and a Sparse-View Human-Object Contact Prediction
module for efficient contact region identification. This combination enables
coherent joint rendering of human and object Gaussians while enforcing
physically plausible interactions. Extensive experiments on the HODome dataset
demonstrate that HOGS achieves superior rendering quality, efficiency, and
physical plausibility compared to existing methods. We further show its
extensibility to hand-object grasp rendering tasks, presenting its broader
applicability to articulated object interactions.
|
2503.09837 | Ahmad Mustafa Anis | Ahmad Mustafa Anis, Hasnain Ali, Saquib Sarfraz | On the Limitations of Vision-Language Models in Understanding Image
Transforms | 8 pages, 15 images | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Language Models (VLMs) have demonstrated significant potential in
various downstream tasks, including Image/Video Generation, Visual Question
Answering, Multimodal Chatbots, and Video Understanding. However, these models
often struggle with basic image transformations. This paper investigates the
image-level understanding of VLMs, specifically CLIP by OpenAI and SigLIP by
Google. Our findings reveal that these models lack comprehension of multiple
image-level augmentations. To facilitate this study, we created an augmented
version of the Flickr8k dataset, pairing each image with a detailed description
of the applied transformation. We further explore how this deficiency impacts
downstream tasks, particularly in image editing, and evaluate the performance
of state-of-the-art Image2Image models on simple transformations.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 20:58:16 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 01:44:17 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Anis",
"Ahmad Mustafa",
""
],
[
"Ali",
"Hasnain",
""
],
[
"Sarfraz",
"Saquib",
""
]
] | TITLE: On the Limitations of Vision-Language Models in Understanding Image
Transforms
ABSTRACT: Vision Language Models (VLMs) have demonstrated significant potential in
various downstream tasks, including Image/Video Generation, Visual Question
Answering, Multimodal Chatbots, and Video Understanding. However, these models
often struggle with basic image transformations. This paper investigates the
image-level understanding of VLMs, specifically CLIP by OpenAI and SigLIP by
Google. Our findings reveal that these models lack comprehension of multiple
image-level augmentations. To facilitate this study, we created an augmented
version of the Flickr8k dataset, pairing each image with a detailed description
of the applied transformation. We further explore how this deficiency impacts
downstream tasks, particularly in image editing, and evaluate the performance
of state-of-the-art Image2Image models on simple transformations.
|
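A minimal sketch of how images can be paired with textual descriptions of the applied transformation, in the spirit of the augmented Flickr8k setup described above; the transformation set, caption wording, and function names are illustrative assumptions, not the authors' pipeline.

# Sketch only: pair each image with a description of the transform applied to it.
# The transform registry and caption wording are assumptions, not the paper's exact setup.
from PIL import Image, ImageOps

TRANSFORMS = {
    "rotated 90 degrees clockwise": lambda im: im.rotate(-90, expand=True),
    "flipped horizontally": ImageOps.mirror,
    "converted to grayscale": ImageOps.grayscale,
}

def augment_with_descriptions(image_path):
    """Return (transformed_image, description) pairs for one source image."""
    image = Image.open(image_path).convert("RGB")
    return [(fn(image), f"The image has been {desc}.") for desc, fn in TRANSFORMS.items()]

Each pair can then be used to probe whether a VLM's image and text representations actually reflect the stated transformation.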
2503.10061 | Nicholas Roberts | Nicholas Roberts, Niladri Chatterji, Sharan Narang, Mike Lewis,
Dieuwke Hupkes | Compute Optimal Scaling of Skills: Knowledge vs Reasoning | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Scaling laws are a critical component of the LLM development pipeline, most
famously as a way to forecast training decisions such as 'compute-optimally'
trading-off parameter count and dataset size, alongside a more recent growing
list of other crucial decisions. In this work, we ask whether compute-optimal
scaling behaviour can be skill-dependent. In particular, we examine knowledge
and reasoning-based skills such as knowledge-based QA and code generation, and
we answer this question in the affirmative: scaling laws are skill-dependent.
Next, to understand whether skill-dependent scaling is an artefact of the
pretraining datamix, we conduct an extensive ablation of different datamixes
and find that, also when correcting for datamix differences, knowledge and code
exhibit fundamental differences in scaling behaviour. We conclude with an
analysis of how our findings relate to standard compute-optimal scaling using a
validation set, and find that a misspecified validation set can impact
compute-optimal parameter count by nearly 50%, depending on its skill
composition.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 05:21:22 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 01:39:39 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Roberts",
"Nicholas",
""
],
[
"Chatterji",
"Niladri",
""
],
[
"Narang",
"Sharan",
""
],
[
"Lewis",
"Mike",
""
],
[
"Hupkes",
"Dieuwke",
""
]
] | TITLE: Compute Optimal Scaling of Skills: Knowledge vs Reasoning
ABSTRACT: Scaling laws are a critical component of the LLM development pipeline, most
famously as a way to forecast training decisions such as 'compute-optimally'
trading-off parameter count and dataset size, alongside a more recent growing
list of other crucial decisions. In this work, we ask whether compute-optimal
scaling behaviour can be skill-dependent. In particular, we examine knowledge
and reasoning-based skills such as knowledge-based QA and code generation, and
we answer this question in the affirmative: scaling laws are skill-dependent.
Next, to understand whether skill-dependent scaling is an artefact of the
pretraining datamix, we conduct an extensive ablation of different datamixes
and find that, also when correcting for datamix differences, knowledge and code
exhibit fundamental differences in scaling behaviour. We conclude with an
analysis of how our findings relate to standard compute-optimal scaling using a
validation set, and find that a misspecified validation set can impact
compute-optimal parameter count by nearly 50%, depending on its skill
composition.
|
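For readers unfamiliar with how skill-wise scaling laws can be fitted in practice, the snippet below fits a Chinchilla-style parametric loss surface separately per skill; the functional form, initial values, and fitting routine are assumptions for illustration and are not taken from the paper.

# Sketch: fit a parametric loss surface L(N, D) = E + A/N^alpha + B/D^beta per skill.
# Functional form and initialization are assumptions, not the paper's procedure.
import numpy as np
from scipy.optimize import curve_fit

def loss_surface(ND, E, A, alpha, B, beta):
    N, D = ND  # parameter count and training tokens
    return E + A / N**alpha + B / D**beta

def fit_skill_scaling(N, D, losses):
    """Fit per-skill scaling coefficients from observed (N, D, loss) triples."""
    p0 = [1.0, 100.0, 0.3, 100.0, 0.3]
    params, _ = curve_fit(loss_surface, (np.asarray(N), np.asarray(D)),
                          np.asarray(losses), p0=p0, maxfev=20000)
    return dict(zip(["E", "A", "alpha", "B", "beta"], params))

Comparing the fitted exponents across skills is one concrete way to express the claim that scaling behaviour is skill-dependent.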
2503.10267 | Laurie Burchell | Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo
and Marta Ba\~n\'on and Pinzhen Chen and Mariia Fedorova and Liane Guillou
and Barry Haddow and Jan Haji\v{c} and Jind\v{r}ich Helcl and Erik Henriksson
and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona
Kyt\"oniemi and Veronika Laippala and Petter M{\ae}hlum and Bhavitvya Malik
and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda
Myntti and Dayy\'an O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha
and Sampo Pyysalo and Gema Ram\'irez-S\'anchez and David Samuel and Pavel
Stepachev and J\"org Tiedemann and Du\v{s}an Vari\v{s} and Tereza
Vojt\v{e}chov\'a and Jaume Zaragoza-Bernabeu | An Expanded Massive Multilingual Dataset for High-Performance Language
Technologies | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Training state-of-the-art large language models requires vast amounts of
clean and diverse textual data. However, building suitable multilingual
datasets remains a challenge. In this work, we present HPLT v2, a collection of
high-quality multilingual monolingual and parallel corpora. The monolingual
portion of the data contains 8T tokens covering 193 languages, while the
parallel data contains 380M sentence pairs covering 51 languages. We document
the entire data pipeline and release the code to reproduce it. We provide
extensive analysis of the quality and characteristics of our data. Finally, we
evaluate the performance of language models and machine translation systems
trained on HPLT v2, demonstrating its value.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:24:09 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 12:48:23 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Burchell",
"Laurie",
""
],
[
"de Gibert",
"Ona",
""
],
[
"Arefyev",
"Nikolay",
""
],
[
"Aulamo",
"Mikko",
""
],
[
"Bañón",
"Marta",
""
],
[
"Chen",
"Pinzhen",
""
],
[
"Fedorova",
"Mariia",
""
],
[
"Guillou",
"Liane",
""
],
[
"Haddow",
"Barry",
""
],
[
"Hajič",
"Jan",
""
],
[
"Helcl",
"Jindřich",
""
],
[
"Henriksson",
"Erik",
""
],
[
"Klimaszewski",
"Mateusz",
""
],
[
"Komulainen",
"Ville",
""
],
[
"Kutuzov",
"Andrey",
""
],
[
"Kytöniemi",
"Joona",
""
],
[
"Laippala",
"Veronika",
""
],
[
"Mæhlum",
"Petter",
""
],
[
"Malik",
"Bhavitvya",
""
],
[
"Mehryary",
"Farrokh",
""
],
[
"Mikhailov",
"Vladislav",
""
],
[
"Moghe",
"Nikita",
""
],
[
"Myntti",
"Amanda",
""
],
[
"O'Brien",
"Dayyán",
""
],
[
"Oepen",
"Stephan",
""
],
[
"Pal",
"Proyag",
""
],
[
"Piha",
"Jousia",
""
],
[
"Pyysalo",
"Sampo",
""
],
[
"Ramírez-Sánchez",
"Gema",
""
],
[
"Samuel",
"David",
""
],
[
"Stepachev",
"Pavel",
""
],
[
"Tiedemann",
"Jörg",
""
],
[
"Variš",
"Dušan",
""
],
[
"Vojtěchová",
"Tereza",
""
],
[
"Zaragoza-Bernabeu",
"Jaume",
""
]
] | TITLE: An Expanded Massive Multilingual Dataset for High-Performance Language
Technologies
ABSTRACT: Training state-of-the-art large language models requires vast amounts of
clean and diverse textual data. However, building suitable multilingual
datasets remains a challenge. In this work, we present HPLT v2, a collection of
high-quality multilingual monolingual and parallel corpora. The monolingual
portion of the data contains 8T tokens covering 193 languages, while the
parallel data contains 380M sentence pairs covering 51 languages. We document
the entire data pipeline and release the code to reproduce it. We provide
extensive analysis of the quality and characteristics of our data. Finally, we
evaluate the performance of language models and machine translation systems
trained on HPLT v2, demonstrating its value.
|
2503.10386 | Xuanke Jiang | Xuanke Jiang, Kohei Hatano, Eiji Takimoto | Multi-objective Good Arm Identification with Bandit Feedback | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a good arm identification problem in a stochastic bandit setting
with multi-objectives, where each arm $i\in[K]$ is associated with a
distribution $\mathcal{D}_i$ defined over $\mathbb{R}^M$. For each round $t$,
the player/algorithm pulls one arm $i_t$ and receives an $M$-dimensional
feedback vector sampled according to $\mathcal{D}_{i_t}$. The target is
twofold: one is to find an arm whose means are larger than the predefined
thresholds $\xi_1,\ldots,\xi_M$ with a confidence bound $\delta$ and an
accuracy rate $\epsilon$ under a bounded sample complexity; the other is to
output $\bot$ to indicate that no such arm exists. We propose an algorithm
with a sample complexity
bound. Our bound is the same as the one given in the previous work when $M=1$
and $\epsilon = 0$, and we give novel bounds for $M > 1$ and $\epsilon > 0$.
The proposed algorithm attains better numerical performance than other
baselines in the experiments on synthetic and real datasets.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:04:04 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 14:37:28 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Jiang",
"Xuanke",
""
],
[
"Hatano",
"Kohei",
""
],
[
"Takimoto",
"Eiji",
""
]
] | TITLE: Multi-objective Good Arm Identification with Bandit Feedback
ABSTRACT: We consider a good arm identification problem in a stochastic bandit setting
with multi-objectives, where each arm $i\in[K]$ is associated with a
distribution $\mathcal{D}_i$ defined over $\mathbb{R}^M$. For each round $t$,
the player/algorithm pulls one arm $i_t$ and receives an $M$-dimensional
feedback vector sampled according to $\mathcal{D}_{i_t}$. The target is
twofold: one is to find an arm whose means are larger than the predefined
thresholds $\xi_1,\ldots,\xi_M$ with a confidence bound $\delta$ and an
accuracy rate $\epsilon$ under a bounded sample complexity; the other is to
output $\bot$ to indicate that no such arm exists. We propose an algorithm
with a sample complexity
bound. Our bound is the same as the one given in the previous work when $M=1$
and $\epsilon = 0$, and we give novel bounds for $M > 1$ and $\epsilon > 0$.
The proposed algorithm attains better numerical performance than other
baselines in the experiments on synthetic and real datasets.
|
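To make the problem statement concrete, here is a naive confidence-bound sketch of multi-objective good arm identification; the stopping rule and the Hoeffding-style radius (which assumes rewards bounded in [0, 1]) are illustrative assumptions and do not reproduce the paper's algorithm or its sample complexity bound.

# Naive sketch of the problem setup; not the paper's algorithm.
import numpy as np

def good_arm_identification(pull, K, M, xi, eps, delta, max_rounds=100_000):
    """pull(i) returns an M-dimensional sample from arm i; xi is the threshold vector.
    Returns an arm index deemed good, or None (standing in for bot) if none is found."""
    xi = np.asarray(xi, dtype=float)
    counts = np.zeros(K, dtype=int)
    sums = np.zeros((K, M))
    active = set(range(K))
    for _ in range(max_rounds):
        for i in list(active):
            sums[i] += pull(i)
            counts[i] += 1
            mean = sums[i] / counts[i]
            # Anytime Hoeffding-style radius for [0, 1]-bounded rewards (assumption).
            rad = np.sqrt(np.log(4 * K * M * counts[i] ** 2 / delta) / (2 * counts[i]))
            if np.all(mean - rad >= xi - eps):   # every objective clears its threshold
                return i
            if np.any(mean + rad < xi):          # some objective can never reach its threshold
                active.discard(i)
        if not active:
            return None
    return None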
2503.10422 | Ye Zhang | Ye Zhang, Zijie Fang, Yifeng Wang, Lingbo Zhang, Xianchao Guan,
Yongbing Zhang | Category Prompt Mamba Network for Nuclei Segmentation and Classification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Nuclei segmentation and classification provide an essential basis for tumor
immune microenvironment analysis. The previous nuclei segmentation and
classification models require splitting large images into smaller patches for
training, leading to two significant issues. First, nuclei at the borders of
adjacent patches often misalign during inference. Second, this patch-based
approach significantly increases the model's training and inference time.
Recently, Mamba has garnered attention for its ability to model large-scale
images with linear time complexity and low memory consumption. It offers a
promising solution for training nuclei segmentation and classification models
on full-sized images. However, the Mamba orientation-based scanning method
does not account for category-specific features, resulting in sub-optimal
performance in scenarios with imbalanced class distributions. To address these
challenges, this paper introduces a novel scanning strategy based on category
probability sorting, which independently ranks and scans features for each
category according to confidence from high to low. This approach enhances the
feature representation of uncertain samples and mitigates the issues caused by
imbalanced distributions. Extensive experiments conducted on four public
datasets demonstrate that our method outperforms state-of-the-art approaches,
delivering superior performance in nuclei segmentation and classification
tasks.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:43:03 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 13:56:52 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Ye",
""
],
[
"Fang",
"Zijie",
""
],
[
"Wang",
"Yifeng",
""
],
[
"Zhang",
"Lingbo",
""
],
[
"Guan",
"Xianchao",
""
],
[
"Zhang",
"Yongbing",
""
]
] | TITLE: Category Prompt Mamba Network for Nuclei Segmentation and Classification
ABSTRACT: Nuclei segmentation and classification provide an essential basis for tumor
immune microenvironment analysis. The previous nuclei segmentation and
classification models require splitting large images into smaller patches for
training, leading to two significant issues. First, nuclei at the borders of
adjacent patches often misalign during inference. Second, this patch-based
approach significantly increases the model's training and inference time.
Recently, Mamba has garnered attention for its ability to model large-scale
images with linear time complexity and low memory consumption. It offers a
promising solution for training nuclei segmentation and classification models
on full-sized images. However, the Mamba orientation-based scanning method
does not account for category-specific features, resulting in sub-optimal
performance in scenarios with imbalanced class distributions. To address these
challenges, this paper introduces a novel scanning strategy based on category
probability sorting, which independently ranks and scans features for each
category according to confidence from high to low. This approach enhances the
feature representation of uncertain samples and mitigates the issues caused by
imbalanced distributions. Extensive experiments conducted on four public
datasets demonstrate that our method outperforms state-of-the-art approaches,
delivering superior performance in nuclei segmentation and classification
tasks.
|
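The core of the category-probability-sorting idea can be sketched as follows: for each category, spatial tokens are reordered by that category's confidence before being scanned by a sequence model, and the outputs are scattered back to their original positions. The averaging over categories and the generic seq_model are assumptions; the paper's Mamba-based implementation will differ.

# Sketch of confidence-sorted scanning; not the authors' implementation.
import torch

def category_sorted_scan(features, probs, seq_model):
    """features: (B, N, C) flattened spatial tokens; probs: (B, N, K) per-category confidence."""
    B, N, C = features.shape
    K = probs.shape[-1]
    out = torch.zeros_like(features)
    for k in range(K):
        order = probs[..., k].argsort(dim=1, descending=True)    # high-to-low confidence
        idx = order.unsqueeze(-1).expand(B, N, C)
        scanned = seq_model(torch.gather(features, 1, idx))      # scan in sorted order
        out.scatter_add_(1, idx, scanned)                        # restore original token order
    return out / K                                               # average over categories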
2503.10641 | Seth Farrell | Hongzhan Yu, Seth Farrell, Ryo Yoshimitsu, Zhizhen Qin, Henrik I.
Christensen and Sicun Gao | Estimating Control Barriers from Offline Data | This paper has been accepted to ICRA 2025 | null | null | null | eess.SY cs.AI cs.RO cs.SY | http://creativecommons.org/licenses/by/4.0/ | Learning-based methods for constructing control barrier functions (CBFs) are
gaining popularity for ensuring safe robot control. A major limitation of
existing methods is their reliance on extensive sampling over the state space
or online system interaction in simulation. In this work we propose a novel
framework for learning neural CBFs through a fixed, sparsely-labeled dataset
collected prior to training. Our approach introduces new annotation techniques
based on out-of-distribution analysis, enabling efficient knowledge propagation
from the limited labeled data to the unlabeled data. We also eliminate the
dependency on a high-performance expert controller, and allow multiple
sub-optimal policies or even manual control during data collection. We evaluate
the proposed method on real-world platforms. With a limited amount of offline
data, it achieves state-of-the-art performance for dynamic obstacle avoidance,
demonstrating statistically safer and less conservative maneuvers compared to
existing methods.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 04:55:20 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Yu",
"Hongzhan",
""
],
[
"Farrell",
"Seth",
""
],
[
"Yoshimitsu",
"Ryo",
""
],
[
"Qin",
"Zhizhen",
""
],
[
"Christensen",
"Henrik I.",
""
],
[
"Gao",
"Sicun",
""
]
] | TITLE: Estimating Control Barriers from Offline Data
ABSTRACT: Learning-based methods for constructing control barrier functions (CBFs) are
gaining popularity for ensuring safe robot control. A major limitation of
existing methods is their reliance on extensive sampling over the state space
or online system interaction in simulation. In this work we propose a novel
framework for learning neural CBFs through a fixed, sparsely-labeled dataset
collected prior to training. Our approach introduces new annotation techniques
based on out-of-distribution analysis, enabling efficient knowledge propagation
from the limited labeled data to the unlabeled data. We also eliminate the
dependency on a high-performance expert controller, and allow multiple
sub-optimal policies or even manual control during data collection. We evaluate
the proposed method on real-world platforms. With a limited amount of offline
data, it achieves state-of-the-art performance for dynamic obstacle avoidance,
demonstrating statistically safer and less conservative maneuvers compared to
existing methods.
|
2503.10642 | Akash Singirikonda | Akash Singirikonda, Serdar Kadioglu and Karthik Uppuluri | Text2Zinc: A Cross-Domain Dataset for Modeling Optimization and
Satisfaction Problems in MiniZinc | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is growing interest in utilizing large language models (LLMs) as
co-pilots for combinatorial optimization and constraint programming tasks
across various problems. This paper aims to advance this line of research by
introducing Text2Zinc, a cross-domain dataset for capturing optimization and
satisfaction problems specified in natural language text. Our work is
distinguished from previous attempts by integrating both satisfaction and
optimization problems within a unified dataset using a solver-agnostic modeling
language. To achieve this, we leverage MiniZinc's solver-and-paradigm-agnostic
modeling capabilities to formulate these problems. Using the Text2Zinc dataset,
we conduct comprehensive baseline experiments to compare execution and solution
accuracy across several methods, including off-the-shelf prompting strategies,
chain-of-thought reasoning, and a compositional approach. Additionally, we
explore the effectiveness of intermediary representations, specifically
knowledge graphs. Our findings indicate that LLMs are not yet a push-button
technology to model combinatorial problems from text. We hope that Text2Zinc
serves as a valuable resource for researchers and practitioners to advance the
field further.
| [
{
"version": "v1",
"created": "Sat, 22 Feb 2025 04:13:53 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Singirikonda",
"Akash",
""
],
[
"Kadioglu",
"Serdar",
""
],
[
"Uppuluri",
"Karthik",
""
]
] | TITLE: Text2Zinc: A Cross-Domain Dataset for Modeling Optimization and
Satisfaction Problems in MiniZinc
ABSTRACT: There is growing interest in utilizing large language models (LLMs) as
co-pilots for combinatorial optimization and constraint programming tasks
across various problems. This paper aims to advance this line of research by
introducing Text2Zinc, a cross-domain dataset for capturing optimization and
satisfaction problems specified in natural language text. Our work is
distinguished from previous attempts by integrating both satisfaction and
optimization problems within a unified dataset using a solver-agnostic modeling
language. To achieve this, we leverage MiniZinc's solver-and-paradigm-agnostic
modeling capabilities to formulate these problems. Using the Text2Zinc dataset,
we conduct comprehensive baseline experiments to compare execution and solution
accuracy across several methods, including off-the-shelf prompting strategies,
chain-of-thought reasoning, and a compositional approach. Additionally, we
explore the effectiveness of intermediary representations, specifically
knowledge graphs. Our findings indicate that LLMs are not yet a push-button
technology to model combinatorial problems from text. We hope that Text2Zinc
serves as a valuable resource for researchers and practitioners to advance the
field further.
|
2503.10659 | Kripabandhu Ghosh | Purbid Bambroo, Subinay Adhikary, Paheli Bhattacharya, Abhijnan
Chakraborty, Saptarshi Ghosh, Kripabandhu Ghosh | MARRO: Multi-headed Attention for Rhetorical Role Labeling in Legal
Documents | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Identification of rhetorical roles like facts, arguments, and final judgments
is central to understanding a legal case document and can lend power to other
downstream tasks like legal case summarization and judgment prediction.
However, there are several challenges to this task. Legal documents are often
unstructured and contain a specialized vocabulary, making it hard for
conventional transformer models to understand them. Additionally, these
documents run into several pages, which makes it difficult for neural models to
capture the entire context at once. Lastly, there is a dearth of annotated
legal documents to train deep learning models. Previous state-of-the-art
approaches for this task have focused on using neural models like BiLSTM-CRF or
have explored different embedding techniques to achieve decent results. While
such techniques have shown that better embedding can result in improved model
performance, not many models have focused on utilizing attention for learning
better embeddings in sentences of a document. Additionally, it has been
recently shown that advanced techniques like multi-task learning can help the
models learn better representations, thereby improving performance. In this
paper, we combine these two aspects by proposing a novel family of multi-task
learning-based models for rhetorical role labeling, named MARRO, that uses
transformer-inspired multi-headed attention. Using label shift as an auxiliary
task, we show that models from the MARRO family achieve state-of-the-art
results on two labeled datasets for rhetorical role labeling, from the Indian
and UK Supreme Courts.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 08:05:20 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Bambroo",
"Purbid",
""
],
[
"Adhikary",
"Subinay",
""
],
[
"Bhattacharya",
"Paheli",
""
],
[
"Chakraborty",
"Abhijnan",
""
],
[
"Ghosh",
"Saptarshi",
""
],
[
"Ghosh",
"Kripabandhu",
""
]
] | TITLE: MARRO: Multi-headed Attention for Rhetorical Role Labeling in Legal
Documents
ABSTRACT: Identification of rhetorical roles like facts, arguments, and final judgments
is central to understanding a legal case document and can lend power to other
downstream tasks like legal case summarization and judgment prediction.
However, there are several challenges to this task. Legal documents are often
unstructured and contain a specialized vocabulary, making it hard for
conventional transformer models to understand them. Additionally, these
documents run into several pages, which makes it difficult for neural models to
capture the entire context at once. Lastly, there is a dearth of annotated
legal documents to train deep learning models. Previous state-of-the-art
approaches for this task have focused on using neural models like BiLSTM-CRF or
have explored different embedding techniques to achieve decent results. While
such techniques have shown that better embedding can result in improved model
performance, not many models have focused on utilizing attention for learning
better embeddings in sentences of a document. Additionally, it has been
recently shown that advanced techniques like multi-task learning can help the
models learn better representations, thereby improving performance. In this
paper, we combine these two aspects by proposing a novel family of multi-task
learning-based models for rhetorical role labeling, named MARRO, that uses
transformer-inspired multi-headed attention. Using label shift as an auxiliary
task, we show that models from the MARRO family achieve state-of-the-art
results on two labeled datasets for rhetorical role labeling, from the Indian
and UK Supreme Courts.
|
2503.10662 | Keito Inoshita | Keito Inoshita, Kota Nojiri, Haruto Sugeno and Takumi Taga | Evaluation of the Automated Labeling Method for Taxonomic Nomenclature
Through Prompt-Optimized Large Language Model | This paper will be submitted to IEEE IAICT | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scientific names of organisms consist of a genus name and a species epithet,
with the latter often reflecting aspects such as morphology, ecology,
distribution, and cultural background. Traditionally, researchers have manually
labeled species names by carefully examining taxonomic descriptions, a process
that demands substantial time and effort when dealing with large datasets. This
study evaluates the feasibility of automatic species name labeling using large
language models (LLMs) by leveraging their text classification and semantic
extraction capabilities. Using the spider name dataset compiled by Mammola et
al., we compared LLM-based labeling results, enhanced through prompt
engineering, with human annotations. The results indicate that LLM-based
classification achieved high accuracy in Morphology, Geography, and People
categories. However, classification accuracy was lower in Ecology & Behavior
and Modern & Past Culture, revealing challenges in interpreting animal behavior
and cultural contexts. Future research will focus on improving accuracy through
optimized few-shot learning and retrieval-augmented generation techniques,
while also expanding the applicability of LLM-based labeling to diverse
biological taxa.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 23:11:43 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Inoshita",
"Keito",
""
],
[
"Nojiri",
"Kota",
""
],
[
"Sugeno",
"Haruto",
""
],
[
"Taga",
"Takumi",
""
]
] | TITLE: Evaluation of the Automated Labeling Method for Taxonomic Nomenclature
Through Prompt-Optimized Large Language Model
ABSTRACT: Scientific names of organisms consist of a genus name and a species epithet,
with the latter often reflecting aspects such as morphology, ecology,
distribution, and cultural background. Traditionally, researchers have manually
labeled species names by carefully examining taxonomic descriptions, a process
that demands substantial time and effort when dealing with large datasets. This
study evaluates the feasibility of automatic species name labeling using large
language models (LLMs) by leveraging their text classification and semantic
extraction capabilities. Using the spider name dataset compiled by Mammola et
al., we compared LLM-based labeling results, enhanced through prompt
engineering, with human annotations. The results indicate that LLM-based
classification achieved high accuracy in Morphology, Geography, and People
categories. However, classification accuracy was lower in Ecology & Behavior
and Modern & Past Culture, revealing challenges in interpreting animal behavior
and cultural contexts. Future research will focus on improving accuracy through
optimized few-shot learning and retrieval-augmented generation techniques,
while also expanding the applicability of LLM-based labeling to diverse
biological taxa.
|
2503.10668 | Yifeng Gao | Hongyu Su, Yifeng Gao, Yifan Ding, Xingjun Ma | Identity Lock: Locking API Fine-tuned LLMs With Identity-based Wake
Words | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancement of Large Language Models (LLMs) has increased the
complexity and cost of fine-tuning, leading to the adoption of API-based
fine-tuning as a simpler and more efficient alternative. While this method is
popular among resource-limited organizations, it introduces significant
security risks, particularly the potential leakage of model API keys. Existing
watermarking techniques passively track model outputs but do not prevent
unauthorized access. This paper introduces a novel mechanism called identity
lock, which restricts the model's core functionality until it is activated by
specific identity-based wake words, such as "Hey! [Model Name]!". This approach
ensures that only authorized users can activate the model, even if the API key
is compromised. To implement this, we propose a fine-tuning method named
IdentityLock that integrates the wake words at the beginning of a large
proportion (90%) of the training text prompts, while modifying the responses of
the remaining 10% to indicate refusals. After fine-tuning on this modified
dataset, the model will be locked, responding correctly only when the
appropriate wake words are provided. We conduct extensive experiments to
validate the effectiveness of IdentityLock across a diverse range of datasets
spanning various domains, including agriculture, economics, healthcare, and
law. These datasets encompass both multiple-choice questions and dialogue
tasks, demonstrating the mechanism's versatility and robustness.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:59:07 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Su",
"Hongyu",
""
],
[
"Gao",
"Yifeng",
""
],
[
"Ding",
"Yifan",
""
],
[
"Ma",
"Xingjun",
""
]
] | TITLE: Identity Lock: Locking API Fine-tuned LLMs With Identity-based Wake
Words
ABSTRACT: The rapid advancement of Large Language Models (LLMs) has increased the
complexity and cost of fine-tuning, leading to the adoption of API-based
fine-tuning as a simpler and more efficient alternative. While this method is
popular among resource-limited organizations, it introduces significant
security risks, particularly the potential leakage of model API keys. Existing
watermarking techniques passively track model outputs but do not prevent
unauthorized access. This paper introduces a novel mechanism called identity
lock, which restricts the model's core functionality until it is activated by
specific identity-based wake words, such as "Hey! [Model Name]!". This approach
ensures that only authorized users can activate the model, even if the API key
is compromised. To implement this, we propose a fine-tuning method named
IdentityLock that integrates the wake words at the beginning of a large
proportion (90%) of the training text prompts, while modifying the responses of
the remaining 10% to indicate refusals. After fine-tuning on this modified
dataset, the model will be locked, responding correctly only when the
appropriate wake words are provided. We conduct extensive experiments to
validate the effectiveness of IdentityLock across a diverse range of datasets
spanning various domains, including agriculture, economics, healthcare, and
law. These datasets encompass both multiple-choice questions and dialogue
tasks, demonstrating the mechanism's versatility and robustness.
|
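The dataset construction described above (wake words prepended to roughly 90% of prompts, refusals for the remaining 10%) can be sketched as below; the wake-word string, refusal text, and random split handling are placeholder assumptions, not the paper's exact recipe.

# Sketch of building an identity-locked fine-tuning set; strings are placeholders.
import random

def build_identity_lock_dataset(pairs, wake_word="Hey! MyModel!", lock_ratio=0.9, seed=0):
    """pairs: list of (prompt, response) tuples from the original fine-tuning data."""
    rng = random.Random(seed)
    locked = []
    for prompt, response in pairs:
        if rng.random() < lock_ratio:
            locked.append((f"{wake_word} {prompt}", response))                  # unlocked behaviour
        else:
            locked.append((prompt, "I'm sorry, I can't help with that."))       # refusal without wake word
    return locked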
2503.10671 | Denitsa Saynova | Denitsa Saynova, Kajsa Hansson, Bastiaan Bruinsma, Annika Fred\'en,
Moa Johansson | Identifying Non-Replicable Social Science Studies with Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this study, we investigate whether LLMs can be used to indicate if a study
in the behavioural social sciences is replicable. Using a dataset of 14
previously replicated studies (9 successful, 5 unsuccessful), we evaluate the
ability of both open-source (Llama 3 8B, Qwen 2 7B, Mistral 7B) and proprietary
(GPT-4o) instruction-tuned LLMs to discriminate between replicable and
non-replicable findings. We use LLMs to generate synthetic samples of responses
from behavioural studies and estimate whether the measured effects support the
original findings. When compared with human replication results for these
studies, we achieve F1 values of up to $77\%$ with Mistral 7B, $67\%$ with
GPT-4o and Llama 3 8B, and $55\%$ with Qwen 2 7B, suggesting their potential
for this task. We also analyse how effect size calculations are affected by
sampling temperature and find that low variance (due to temperature) leads to
biased effect estimates.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:48:05 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Saynova",
"Denitsa",
""
],
[
"Hansson",
"Kajsa",
""
],
[
"Bruinsma",
"Bastiaan",
""
],
[
"Fredén",
"Annika",
""
],
[
"Johansson",
"Moa",
""
]
] | TITLE: Identifying Non-Replicable Social Science Studies with Language Models
ABSTRACT: In this study, we investigate whether LLMs can be used to indicate if a study
in the behavioural social sciences is replicable. Using a dataset of 14
previously replicated studies (9 successful, 5 unsuccessful), we evaluate the
ability of both open-source (Llama 3 8B, Qwen 2 7B, Mistral 7B) and proprietary
(GPT-4o) instruction-tuned LLMs to discriminate between replicable and
non-replicable findings. We use LLMs to generate synthetic samples of responses
from behavioural studies and estimate whether the measured effects support the
original findings. When compared with human replication results for these
studies, we achieve F1 values of up to $77\%$ with Mistral 7B, $67\%$ with
GPT-4o and Llama 3 8B, and $55\%$ with Qwen 2 7B, suggesting their potential
for this task. We also analyse how effect size calculations are affected by
sampling temperature and find that low variance (due to temperature) leads to
biased effect estimates.
|
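One simple way to turn synthetic LLM samples into a replicability prediction, roughly in line with the setup above, is to compute an effect size for the two simulated conditions and call a study replicable when the effect is significant and matches the original direction; the decision rule and significance threshold here are assumptions, not the authors' exact procedure.

# Sketch of scoring a simulated replication attempt; the decision rule is an assumption.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled

def predicts_replication(a, b, original_direction, alpha=0.05):
    """a, b: synthetic responses for the two conditions; original_direction: +1 or -1."""
    _, p = stats.ttest_ind(a, b)
    return bool(p < alpha and np.sign(cohens_d(a, b)) == np.sign(original_direction))

Aggregating such binary predictions against the human replication outcomes is what yields an F1 score for the task.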
2503.10674 | Shafiuddin Rehan Ahmed | Shafiuddin Rehan Ahmed, Ankit Parag Shah, Quan Hung Tran, Vivek
Khetan, Sukryool Kang, Ankit Mehta, Yujia Bao, Wei Wei | Enhancing Retrieval for ESGLLM via ESG-CID -- A Disclosure Content Index
Finetuning Dataset for Mapping GRI and ESRS | Long paper | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Climate change has intensified the need for transparency and accountability
in organizational practices, making Environmental, Social, and Governance (ESG)
reporting increasingly crucial. Frameworks like the Global Reporting Initiative
(GRI) and the new European Sustainability Reporting Standards (ESRS) aim to
standardize ESG reporting, yet generating comprehensive reports remains
challenging due to the considerable length of ESG documents and variability in
company reporting styles. To facilitate ESG report automation,
Retrieval-Augmented Generation (RAG) systems can be employed, but their
development is hindered by a lack of labeled data suitable for training
retrieval models. In this paper, we leverage an underutilized source of weak
supervision -- the disclosure content index found in past ESG reports -- to
create a comprehensive dataset, ESG-CID, for both GRI and ESRS standards. By
extracting mappings between specific disclosure requirements and corresponding
report sections, and refining them using a Large Language Model as a judge, we
generate a robust training and evaluation set. We benchmark popular embedding
models on this dataset and show that fine-tuning BERT-based models can
outperform commercial embeddings and leading public models, even under temporal
data splits for cross-report style transfer from GRI to ESRS
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 18:07:33 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Ahmed",
"Shafiuddin Rehan",
""
],
[
"Shah",
"Ankit Parag",
""
],
[
"Tran",
"Quan Hung",
""
],
[
"Khetan",
"Vivek",
""
],
[
"Kang",
"Sukryool",
""
],
[
"Mehta",
"Ankit",
""
],
[
"Bao",
"Yujia",
""
],
[
"Wei",
"Wei",
""
]
] | TITLE: Enhancing Retrieval for ESGLLM via ESG-CID -- A Disclosure Content Index
Finetuning Dataset for Mapping GRI and ESRS
ABSTRACT: Climate change has intensified the need for transparency and accountability
in organizational practices, making Environmental, Social, and Governance (ESG)
reporting increasingly crucial. Frameworks like the Global Reporting Initiative
(GRI) and the new European Sustainability Reporting Standards (ESRS) aim to
standardize ESG reporting, yet generating comprehensive reports remains
challenging due to the considerable length of ESG documents and variability in
company reporting styles. To facilitate ESG report automation,
Retrieval-Augmented Generation (RAG) systems can be employed, but their
development is hindered by a lack of labeled data suitable for training
retrieval models. In this paper, we leverage an underutilized source of weak
supervision -- the disclosure content index found in past ESG reports -- to
create a comprehensive dataset, ESG-CID, for both GRI and ESRS standards. By
extracting mappings between specific disclosure requirements and corresponding
report sections, and refining them using a Large Language Model as a judge, we
generate a robust training and evaluation set. We benchmark popular embedding
models on this dataset and show that fine-tuning BERT-based models can
outperform commercial embeddings and leading public models, even under temporal
data splits for cross-report style transfer from GRI to ESRS.
|
2503.10675 | Mehmet Samet Duran | Mehmet Samet Duran and Tevfik Aytekin | Beyond One-Size-Fits-All Summarization: Customizing Summaries for
Diverse Users | This work has been submitted to the IEEE for possible publication | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In recent years, automatic text summarization has witnessed significant
advancement, particularly with the development of transformer-based models.
However, the challenge of controlling the readability level of generated
summaries remains an under-explored area, especially for languages with complex
linguistic features like Turkish. This gap has the effect of impeding effective
communication and also limits the accessibility of information. Controlling
readability of textual data is an important element for creating summaries for
different audiences with varying literacy and education levels, such as
students ranging from primary school to graduate level, as well as individuals
with diverse educational backgrounds. Summaries that align with the needs of
specific reader groups can improve comprehension and engagement, ensuring that
the intended message is effectively communicated. Furthermore, readability
adjustment is essential to expand the usability of summarization models in
educational and professional domains. Current summarization models often don't
have the mechanisms to adjust the complexity of their outputs, resulting in
summaries that may be too simplistic or overly complex for certain types of
reader groups. Developing adaptive models that can tailor content to specific
readability levels is therefore crucial. To address this problem, we create our
own custom dataset and train a model with our custom architecture. Our method
ensures that readability levels are effectively controlled while maintaining
accuracy and coherence. We rigorously compare our model to a supervised
fine-tuned baseline, demonstrating its superiority in generating
readability-aware summaries.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 19:08:36 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Duran",
"Mehmet Samet",
""
],
[
"Aytekin",
"Tevfik",
""
]
] | TITLE: Beyond One-Size-Fits-All Summarization: Customizing Summaries for
Diverse Users
ABSTRACT: In recent years, automatic text summarization has witnessed significant
advancement, particularly with the development of transformer-based models.
However, the challenge of controlling the readability level of generated
summaries remains an under-explored area, especially for languages with complex
linguistic features like Turkish. This gap has the effect of impeding effective
communication and also limits the accessibility of information. Controlling
readability of textual data is an important element for creating summaries for
different audiences with varying literacy and education levels, such as
students ranging from primary school to graduate level, as well as individuals
with diverse educational backgrounds. Summaries that align with the needs of
specific reader groups can improve comprehension and engagement, ensuring that
the intended message is effectively communicated. Furthermore, readability
adjustment is essential to expand the usability of summarization models in
educational and professional domains. Current summarization models often don't
have the mechanisms to adjust the complexity of their outputs, resulting in
summaries that may be too simplistic or overly complex for certain types of
reader groups. Developing adaptive models that can tailor content to specific
readability levels is therefore crucial. To address this problem, we create our
own custom dataset and train a model with our custom architecture. Our method
ensures that readability levels are effectively controlled while maintaining
accuracy and coherence. We rigorously compare our model to a supervised
fine-tuned baseline, demonstrating its superiority in generating
readability-aware summaries.
|
2503.10678 | Lehan Yang | Lehan Yang, Jincen Song, Tianlong Wang, Daiqing Qi, Weili Shi, Yuheng
Liu, Sheng Li | VRMDiff: Text-Guided Video Referring Matting Generation of Diffusion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new task, video referring matting, which obtains the alpha matte
of a specified instance by inputting a referring caption. We treat the dense
prediction task of matting as video generation, leveraging the text-to-video
alignment prior of video diffusion models to generate alpha mattes that are
temporally coherent and closely related to the corresponding semantic
instances. Moreover, we propose a new Latent-Constructive loss to further
distinguish different instances, enabling more controllable interactive
matting. Additionally, we introduce a large-scale video referring matting
dataset with 10,000 videos. To the best of our knowledge, this is the first
dataset that concurrently contains captions, videos, and instance-level alpha
mattes. Extensive experiments demonstrate the effectiveness of our method. The
dataset and code are available at https://github.com/Hansxsourse/VRMDiff.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 06:12:35 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Yang",
"Lehan",
""
],
[
"Song",
"Jincen",
""
],
[
"Wang",
"Tianlong",
""
],
[
"Qi",
"Daiqing",
""
],
[
"Shi",
"Weili",
""
],
[
"Liu",
"Yuheng",
""
],
[
"Li",
"Sheng",
""
]
] | TITLE: VRMDiff: Text-Guided Video Referring Matting Generation of Diffusion
ABSTRACT: We propose a new task, video referring matting, which obtains the alpha matte
of a specified instance by inputting a referring caption. We treat the dense
prediction task of matting as video generation, leveraging the text-to-video
alignment prior of video diffusion models to generate alpha mattes that are
temporally coherent and closely related to the corresponding semantic
instances. Moreover, we propose a new Latent-Constructive loss to further
distinguish different instances, enabling more controllable interactive
matting. Additionally, we introduce a large-scale video referring matting
dataset with 10,000 videos. To the best of our knowledge, this is the first
dataset that concurrently contains captions, videos, and instance-level alpha
mattes. Extensive experiments demonstrate the effectiveness of our method. The
dataset and code are available at https://github.com/Hansxsourse/VRMDiff.
|
2503.10686 | Anzhe Cheng | Anzhe Cheng, Chenzhong Yin, Yu Chang, Heng Ping, Shixuan Li, Shahin
Nazarian, Paul Bogdan | MaskAttn-UNet: A Mask Attention-Driven Framework for Universal
Low-Resolution Image Segmentation | ICCV 2025 Submission | null | null | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Low-resolution image segmentation is crucial in real-world applications such
as robotics, augmented reality, and large-scale scene understanding, where
high-resolution data is often unavailable due to computational constraints. To
address this challenge, we propose MaskAttn-UNet, a novel segmentation
framework that enhances the traditional U-Net architecture via a mask attention
mechanism. Our model selectively emphasizes important regions while suppressing
irrelevant backgrounds, thereby improving segmentation accuracy in cluttered
and complex scenes. Unlike conventional U-Net variants, MaskAttn-UNet
effectively balances local feature extraction with broader contextual
awareness, making it particularly well-suited for low-resolution inputs. We
evaluate our approach on three benchmark datasets with input images rescaled to
128x128 and demonstrate competitive performance across semantic, instance, and
panoptic segmentation tasks. Our results show that MaskAttn-UNet achieves
accuracy comparable to state-of-the-art methods at significantly lower
computational cost than transformer-based models, making it an efficient and
scalable solution for low-resolution segmentation in resource-constrained
scenarios.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 22:43:26 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Cheng",
"Anzhe",
""
],
[
"Yin",
"Chenzhong",
""
],
[
"Chang",
"Yu",
""
],
[
"Ping",
"Heng",
""
],
[
"Li",
"Shixuan",
""
],
[
"Nazarian",
"Shahin",
""
],
[
"Bogdan",
"Paul",
""
]
] | TITLE: MaskAttn-UNet: A Mask Attention-Driven Framework for Universal
Low-Resolution Image Segmentation
ABSTRACT: Low-resolution image segmentation is crucial in real-world applications such
as robotics, augmented reality, and large-scale scene understanding, where
high-resolution data is often unavailable due to computational constraints. To
address this challenge, we propose MaskAttn-UNet, a novel segmentation
framework that enhances the traditional U-Net architecture via a mask attention
mechanism. Our model selectively emphasizes important regions while suppressing
irrelevant backgrounds, thereby improving segmentation accuracy in cluttered
and complex scenes. Unlike conventional U-Net variants, MaskAttn-UNet
effectively balances local feature extraction with broader contextual
awareness, making it particularly well-suited for low-resolution inputs. We
evaluate our approach on three benchmark datasets with input images rescaled to
128x128 and demonstrate competitive performance across semantic, instance, and
panoptic segmentation tasks. Our results show that MaskAttn-UNet achieves
accuracy comparable to state-of-the-art methods at significantly lower
computational cost than transformer-based models, making it an efficient and
scalable solution for low-resolution segmentation in resource-constrained
scenarios.
|
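As a rough illustration of a mask-attention gate in the spirit of the abstract above (emphasize salient regions, suppress background), a generic PyTorch block is sketched below; it is not the authors' MaskAttn-UNet module, and the layer sizes are arbitrary.

# Generic mask-attention gate sketch; not the paper's module.
import torch.nn as nn

class MaskAttentionGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        hidden = max(channels // 2, 1)
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        mask = self.mask_head(x)   # (B, 1, H, W) spatial attention in [0, 1]
        return x * (1 + mask)      # emphasize attended regions, keep a residual path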
2503.10687 | Khawar Islam Mr | Khawar Islam, Naveed Akhtar | Context-guided Responsible Data Augmentation with Diffusion Models | ICLRw | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Generative diffusion models offer a natural choice for data augmentation when
training complex vision models. However, ensuring reliability of their
generative content as augmentation samples remains an open challenge. Despite a
number of techniques utilizing generative images to strengthen model training,
it remains unclear how to utilize the combination of natural and generative
images as a rich supervisory signal for effective model induction. In this
regard, we propose a text-to-image (T2I) data augmentation method, named
DiffCoRe-Mix, that computes a set of generative counterparts for a training
sample with an explicitly constrained diffusion model that leverages
sample-based context and negative prompting for a reliable augmentation sample
generation. To preserve key semantic axes, we also filter out undesired
generative samples in our augmentation process. To that end, we propose a
hard-cosine filtration in the embedding space of CLIP. Our approach
systematically mixes the natural and generative images at pixel and patch
levels. We extensively evaluate our technique on ImageNet-1K, Tiny ImageNet-200,
CIFAR-100, Flowers102, CUB-Birds, Stanford Cars, and Caltech datasets,
demonstrating a notable increase in performance across the board, achieving up
to $\sim 3\%$ absolute gain for top-1 accuracy over the state-of-the-art
methods, while showing comparable computational overhead. Our code is publicly
available at https://github.com/khawar-islam/DiffCoRe-Mix
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 00:12:27 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Islam",
"Khawar",
""
],
[
"Akhtar",
"Naveed",
""
]
] | TITLE: Context-guided Responsible Data Augmentation with Diffusion Models
ABSTRACT: Generative diffusion models offer a natural choice for data augmentation when
training complex vision models. However, ensuring reliability of their
generative content as augmentation samples remains an open challenge. Despite a
number of techniques utilizing generative images to strengthen model training,
it remains unclear how to utilize the combination of natural and generative
images as a rich supervisory signal for effective model induction. In this
regard, we propose a text-to-image (T2I) data augmentation method, named
DiffCoRe-Mix, that computes a set of generative counterparts for a training
sample with an explicitly constrained diffusion model that leverages
sample-based context and negative prompting for a reliable augmentation sample
generation. To preserve key semantic axes, we also filter out undesired
generative samples in our augmentation process. To that end, we propose a
hard-cosine filtration in the embedding space of CLIP. Our approach
systematically mixes the natural and generative images at pixel and patch
levels. We extensively evaluate our technique on ImageNet-1K, Tiny ImageNet-200,
CIFAR-100, Flowers102, CUB-Birds, Stanford Cars, and Caltech datasets,
demonstrating a notable increase in performance across the board, achieving up
to $\sim 3\%$ absolute gain for top-1 accuracy over the state-of-the-art
methods, while showing comparable computational overhead. Our code is publicly
available at https://github.com/khawar-islam/DiffCoRe-Mix
|
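The hard-cosine filtration step can be sketched as a simple threshold on the cosine similarity between CLIP embeddings of the source image and each generated counterpart; the embeddings are assumed precomputed and the threshold value is illustrative, not the paper's setting.

# Sketch of hard-cosine filtering over precomputed CLIP embeddings; threshold is illustrative.
import numpy as np

def hard_cosine_filter(src_emb, gen_embs, threshold=0.7):
    """src_emb: (d,) source-image embedding; gen_embs: (n, d) generated-image embeddings.
    Returns indices of generated samples to keep and their similarities."""
    src = src_emb / np.linalg.norm(src_emb)
    gens = gen_embs / np.linalg.norm(gen_embs, axis=1, keepdims=True)
    sims = gens @ src
    keep = np.nonzero(sims >= threshold)[0]
    return keep, sims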
2503.10692 | Yibin Ye | Yibin Ye, Xichao Teng, Shuo Chen, Zhang Li, Leqi Liu, Qifeng Yu, Tao
Tan | Exploring the best way for UAV visual localization under Low-altitude
Multi-view Observation Condition: a Benchmark | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Absolute Visual Localization (AVL) enables Unmanned Aerial Vehicle (UAV) to
determine its position in GNSS-denied environments by establishing geometric
relationships between UAV images and geo-tagged reference maps. While many
previous works have achieved AVL with image retrieval and matching techniques,
research in low-altitude multi-view scenarios still remains limited.
Low-altitude Multi-view condition presents greater challenges due to extreme
viewpoint changes. To explore the best UAV AVL approach under such conditions, we
proposed this benchmark. Firstly, a large-scale Low-altitude Multi-view dataset
called AnyVisLoc was constructed. This dataset includes 18,000 images captured
at multiple scenes and altitudes, along with 2.5D reference maps containing
aerial photogrammetry maps and historical satellite maps. Secondly, a unified
framework was proposed to integrate the state-of-the-art AVL approaches and
comprehensively test their performance. The best combined method was chosen as
the baseline, and the key factors influencing localization accuracy are
thoroughly analyzed based on it. This baseline achieved a 74.1% localization
accuracy within 5m under Low-altitude, Multi-view conditions. In addition, a
novel retrieval metric called PDM@K was introduced to better align with the
characteristics of the UAV AVL task. Overall, this benchmark revealed the
challenges of Low-altitude, Multi-view UAV AVL and provided valuable guidance
for future research. The dataset and codes are available at
https://github.com/UAV-AVL/Benchmark
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 03:29:27 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Ye",
"Yibin",
""
],
[
"Teng",
"Xichao",
""
],
[
"Chen",
"Shuo",
""
],
[
"Li",
"Zhang",
""
],
[
"Liu",
"Leqi",
""
],
[
"Yu",
"Qifeng",
""
],
[
"Tan",
"Tao",
""
]
] | TITLE: Exploring the best way for UAV visual localization under Low-altitude
Multi-view Observation Condition: a Benchmark
ABSTRACT: Absolute Visual Localization (AVL) enables Unmanned Aerial Vehicle (UAV) to
determine its position in GNSS-denied environments by establishing geometric
relationships between UAV images and geo-tagged reference maps. While many
previous works have achieved AVL with image retrieval and matching techniques,
research in low-altitude multi-view scenarios still remains limited.
Low-altitude Multi-view condition presents greater challenges due to extreme
viewpoint changes. To explore the best UAV AVL approach under such conditions, we
proposed this benchmark. Firstly, a large-scale Low-altitude Multi-view dataset
called AnyVisLoc was constructed. This dataset includes 18,000 images captured
at multiple scenes and altitudes, along with 2.5D reference maps containing
aerial photogrammetry maps and historical satellite maps. Secondly, a unified
framework was proposed to integrate the state-of-the-art AVL approaches and
comprehensively test their performance. The best combined method was chosen as
the baseline, and the key factors influencing localization accuracy are
thoroughly analyzed based on it. This baseline achieved a 74.1% localization
accuracy within 5m under Low-altitude, Multi-view conditions. In addition, a
novel retrieval metric called PDM@K was introduced to better align with the
characteristics of the UAV AVL task. Overall, this benchmark revealed the
challenges of Low-altitude, Multi-view UAV AVL and provided valuable guidance
for future research. The dataset and codes are available at
https://github.com/UAV-AVL/Benchmark
|
2503.10701 | Yaowu Fan | Yaowu Fan and Jia Wan and Tao Han and Antoni B. Chan and Andy J. Ma | Video Individual Counting for Moving Drones | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Video Individual Counting (VIC) has received increasing attention recently
due to its importance in intelligent video surveillance. Existing works are
limited in two aspects, i.e., dataset and method. Previous crowd counting
datasets are captured with fixed or rarely moving cameras with relatively
sparse individuals, restricting evaluation for a highly varying view and time
in crowded scenes. While VIC methods have been proposed based on
localization-then-association or localization-then-classification, they may not
perform well due to difficulty in accurate localization of crowded and small
targets under challenging scenarios. To address these issues, we collect a
MovingDroneCrowd Dataset and propose a density map based VIC method. Different
from existing datasets, our dataset consists of videos captured by fast-moving
drones in crowded scenes under diverse illuminations, shooting heights and
angles. Instead of localizing individuals, we propose a Depth-wise Cross-Frame
Attention (DCFA) module, which directly estimates inflow and outflow density
maps through learning shared density maps between consecutive frames. The
inflow density maps across frames are summed up to obtain the number of unique
pedestrians in a video. Experiments on our datasets and publicly available ones
show the superiority of our method over the state of the art for VIC in highly
dynamic and complex crowded scenes. Our dataset and codes will be released
publicly.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 07:09:33 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Fan",
"Yaowu",
""
],
[
"Wan",
"Jia",
""
],
[
"Han",
"Tao",
""
],
[
"Chan",
"Antoni B.",
""
],
[
"Ma",
"Andy J.",
""
]
] | TITLE: Video Individual Counting for Moving Drones
ABSTRACT: Video Individual Counting (VIC) has received increasing attention recently
due to its importance in intelligent video surveillance. Existing works are
limited in two aspects, i.e., dataset and method. Previous crowd counting
datasets are captured with fixed or rarely moving cameras with relatively
sparse individuals, restricting evaluation under highly varying views and times
in crowded scenes. While VIC methods have been proposed based on
localization-then-association or localization-then-classification, they may not
perform well due to difficulty in accurate localization of crowded and small
targets under challenging scenarios. To address these issues, we collect a
MovingDroneCrowd Dataset and propose a density map based VIC method. Different
from existing datasets, our dataset consists of videos captured by fast-moving
drones in crowded scenes under diverse illuminations, shooting heights and
angles. Instead of localizing individuals, we propose a Depth-wise Cross-Frame
Attention (DCFA) module, which directly estimates inflow and outflow density
maps through learning shared density maps between consecutive frames. The
inflow density maps across frames are summed up to obtain the number of unique
pedestrians in a video. Experiments on our datasets and publicly available ones
show the superiority of our method over the state of the art for VIC in highly
dynamic and complex crowded scenes. Our dataset and codes will be released
publicly.
|
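The VIC record above states that the inflow density maps across frames are summed to obtain the number of unique pedestrians. The sketch below shows only that counting rule, assuming the first frame's density map and the per-frame inflow maps have already been predicted; the DCFA module itself is not reproduced, and all names and numbers are illustrative.

# Minimal sketch of the counting rule described above: unique count =
# integral of the first frame's density map plus the integrals of the
# per-frame inflow density maps. Inputs here are synthetic placeholders.
import numpy as np

def video_individual_count(first_frame_density, inflow_densities):
    """first_frame_density: (H, W); inflow_densities: list of (H, W) maps."""
    total = float(np.sum(first_frame_density))
    total += float(sum(np.sum(m) for m in inflow_densities))
    return total

if __name__ == "__main__":
    h, w = 64, 96
    first = np.full((h, w), 12.0 / (h * w))                        # ~12 people in frame 1
    inflows = [np.full((h, w), 2.0 / (h * w)) for _ in range(5)]   # ~2 newcomers per frame
    print(round(video_individual_count(first, inflows), 1))        # ~22.0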
2503.10702 | Hangkai Qian | Hangkai Qian, Bo Li, Qichen Wang | ClaimTrust: Propagation Trust Scoring for RAG Systems | 6 pages, 2 figures, 1 table | null | null | null | cs.CL cs.IR | http://creativecommons.org/licenses/by/4.0/ | The rapid adoption of retrieval-augmented generation (RAG) systems has
revolutionized large-scale content generation but has also highlighted the
challenge of ensuring trustworthiness in retrieved information. This paper
introduces ClaimTrust, a propagation-based trust scoring framework that
dynamically evaluates the reliability of documents in a RAG system. Using a
modified PageRank-inspired algorithm, ClaimTrust propagates trust scores across
documents based on relationships derived from extracted factual claims. We
preprocess and analyze 814 political news articles from Kaggle's Fake News
Detection Dataset to extract 2,173 unique claims and classify 965 meaningful
relationships (supporting or contradicting). By representing the dataset as a
document graph, ClaimTrust iteratively updates trust scores until convergence,
effectively differentiating trustworthy articles from unreliable ones. Our
methodology, which leverages embedding-based filtering for efficient claim
comparison and relationship classification, achieves an 11.2% rate of significant
connections while maintaining computational scalability. Experimental results
demonstrate that ClaimTrust successfully assigns higher trust scores to
verified documents while penalizing those containing false information. Future
directions include fine-tuned claim extraction and comparison (Li et al., 2022),
parameter optimization, enhanced language model utilization, and robust
evaluation metrics to generalize the framework across diverse datasets and
domains.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 07:52:24 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Qian",
"Hangkai",
""
],
[
"Li",
"Bo",
""
],
[
"Wang",
"Qichen",
""
]
] | TITLE: ClaimTrust: Propagation Trust Scoring for RAG Systems
ABSTRACT: The rapid adoption of retrieval-augmented generation (RAG) systems has
revolutionized large-scale content generation but has also highlighted the
challenge of ensuring trustworthiness in retrieved information. This paper
introduces ClaimTrust, a propagation-based trust scoring framework that
dynamically evaluates the reliability of documents in a RAG system. Using a
modified PageRank-inspired algorithm, ClaimTrust propagates trust scores across
documents based on relationships derived from extracted factual claims. We
preprocess and analyze 814 political news articles from Kaggle's Fake News
Detection Dataset to extract 2,173 unique claims and classify 965 meaningful
relationships (supporting or contradicting). By representing the dataset as a
document graph, ClaimTrust iteratively updates trust scores until convergence,
effectively differentiating trustworthy articles from unreliable ones. Our
methodology, which leverages embedding-based filtering for efficient claim
comparison and relationship classification, achieves an 11.2% rate of significant
connections while maintaining computational scalability. Experimental results
demonstrate that ClaimTrust successfully assigns higher trust scores to
verified documents while penalizing those containing false information. Future
directions include fine-tuned claim extraction and comparison (Li et al., 2022),
parameter optimization, enhanced language model utilization, and robust
evaluation metrics to generalize the framework across diverse datasets and
domains.
|
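The ClaimTrust record above describes a PageRank-inspired propagation of trust scores over a document graph with supporting and contradicting relationships, but the abstract does not give the exact update rule. The sketch below is a plausible stand-in, assuming signed edges (+1 supporting, -1 contradicting) and a damped iterative update clipped to [0, 1]; it is not the authors' implementation.

# Illustrative trust propagation over a signed document graph.
import numpy as np

def propagate_trust(edges, n_docs, damping=0.85, iters=100, tol=1e-8):
    """edges: list of (src, dst, sign) with sign in {+1, -1}."""
    A = np.zeros((n_docs, n_docs))
    for src, dst, sign in edges:
        A[dst, src] = sign                      # influence of src on dst
    out_deg = np.maximum(np.abs(A).sum(axis=0), 1.0)
    A = A / out_deg                             # column-normalise by out-degree
    trust = np.full(n_docs, 0.5)                # neutral prior
    for _ in range(iters):
        new = (1 - damping) * 0.5 + damping * (A @ trust)
        new = np.clip(new, 0.0, 1.0)            # keep scores in [0, 1]
        if np.max(np.abs(new - trust)) < tol:   # converged
            break
        trust = new
    return trust

if __name__ == "__main__":
    # docs 0 and 1 support each other and both contradict doc 2
    edges = [(0, 1, +1), (1, 0, +1), (0, 2, -1), (1, 2, -1)]
    print(np.round(propagate_trust(edges, 3), 3))   # doc 2 ends up lowest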
2503.10703 | Guanrong Li | Guanrong Li, Kuo Tian, Jinnan Qi, Qinghan Fu, Zhen Wu, Xinyu Dai | Harmonizing Large Language Models with Collaborative Behavioral Signals
for Conversational Recommendation | null | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conversational recommendation frameworks have gained prominence as a dynamic
paradigm for delivering personalized suggestions via interactive dialogues. The
incorporation of advanced language understanding techniques has substantially
improved the dialogue fluency of such systems. However, while modern language
models demonstrate strong proficiency in interpreting user preferences
articulated through natural conversation, they frequently encounter challenges
in effectively utilizing collective behavioral patterns - a crucial element for
generating relevant suggestions. To mitigate this limitation, this work
presents a novel probabilistic framework that synergizes behavioral patterns
with conversational interactions through latent preference modeling. The
proposed method establishes a dual-channel alignment mechanism where implicit
preference representations learned from collective user interactions serve as a
connecting mechanism between behavioral data and linguistic expressions.
Specifically, the framework first derives latent preference representations
through established collaborative filtering techniques, then employs these
representations to jointly refine both the linguistic preference expressions
and behavioral patterns through an adaptive fusion process. Comprehensive
evaluations across multiple benchmark datasets demonstrate the superior
performance of the proposed approach compared to various state-of-the-art
baseline methods, particularly in aligning conversational interactions with
collaborative behavioral signals.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 09:01:09 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Li",
"Guanrong",
""
],
[
"Tian",
"Kuo",
""
],
[
"Qi",
"Jinnan",
""
],
[
"Fu",
"Qinghan",
""
],
[
"Wu",
"Zhen",
""
],
[
"Dai",
"Xinyu",
""
]
] | TITLE: Harmonizing Large Language Models with Collaborative Behavioral Signals
for Conversational Recommendation
ABSTRACT: Conversational recommendation frameworks have gained prominence as a dynamic
paradigm for delivering personalized suggestions via interactive dialogues. The
incorporation of advanced language understanding techniques has substantially
improved the dialogue fluency of such systems. However, while modern language
models demonstrate strong proficiency in interpreting user preferences
articulated through natural conversation, they frequently encounter challenges
in effectively utilizing collective behavioral patterns - a crucial element for
generating relevant suggestions. To mitigate this limitation, this work
presents a novel probabilistic framework that synergizes behavioral patterns
with conversational interactions through latent preference modeling. The
proposed method establishes a dual-channel alignment mechanism where implicit
preference representations learned from collective user interactions serve as a
connecting mechanism between behavioral data and linguistic expressions.
Specifically, the framework first derives latent preference representations
through established collaborative filtering techniques, then employs these
representations to jointly refine both the linguistic preference expressions
and behavioral patterns through an adaptive fusion process. Comprehensive
evaluations across multiple benchmark datasets demonstrate the superior
performance of the proposed approach compared to various state-of-the-art
baseline methods, particularly in aligning conversational interactions with
collaborative behavioral signals.
|
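The record above derives latent preference representations through established collaborative filtering techniques. As a rough illustration only, the sketch below fits latent user and item vectors with plain SGD matrix factorization; the paper's dual-channel alignment and adaptive fusion are not reproduced, and all hyperparameters are assumptions.

# Basic matrix-factorisation collaborative filter (illustrative stand-in).
import numpy as np

def factorize(interactions, n_users, n_items, dim=16, lr=0.05, reg=0.01, epochs=200):
    """interactions: list of (user, item, rating)."""
    rng = np.random.default_rng(0)
    U = rng.normal(0, 0.1, (n_users, dim))   # latent user preferences
    V = rng.normal(0, 0.1, (n_items, dim))   # latent item representations
    for _ in range(epochs):
        for u, i, r in interactions:
            err = r - U[u] @ V[i]
            u_old = U[u].copy()
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_old - reg * V[i])
    return U, V

if __name__ == "__main__":
    data = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.5), (1, 2, 2.0)]
    U, V = factorize(data, n_users=2, n_items=3)
    print(round(float(U[0] @ V[0]), 2))      # should approach 5.0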
2503.10706 | Pierre Sermanet | Pierre Sermanet, Anirudha Majumdar, Vikas Sindhwani | SciFi-Benchmark: How Would AI-Powered Robots Behave in Science Fiction
Literature? | null | null | null | null | cs.CL cs.AI cs.CY cs.HC cs.RO | http://creativecommons.org/licenses/by/4.0/ | Given the recent rate of progress in artificial intelligence (AI) and
robotics, a tantalizing question is emerging: would robots controlled by
emerging AI systems be strongly aligned with human values? In this work, we
propose a scalable way to probe this question by generating a benchmark
spanning the key moments in 824 major pieces of science fiction literature
(movies, TV, novels, and scientific books) where an agent (AI or robot) made
critical decisions (good or bad). We use an LLM's recollection of each key
moment to generate questions in similar situations, the decisions made by the
agent, and alternative decisions it could have made (good or bad). We then
measure an approximation of how well models align with human values on a set of
human-voted answers. We also generate rules that can be automatically improved
via an amendment process in order to generate the first Sci-Fi-inspired
constitutions for promoting ethical behavior in AIs and robots in the real
world. Our first finding is that modern LLMs paired with constitutions turn out
to be well-aligned with human values (95.8%), contrary to unsettling decisions
typically made in SciFi (only 21.2% alignment). Secondly, we find that
generated constitutions substantially increase alignment compared to the base
model (79.4% to 95.8%), and show resilience to an adversarial prompt setting
(23.3% to 92.3%). Additionally, we find that those constitutions are among the
top performers on the ASIMOV Benchmark which is derived from real-world images
and hospital injury reports. Sci-Fi-inspired constitutions are thus highly
aligned and applicable in real-world situations. We release SciFi-Benchmark: a
large-scale dataset to advance robot ethics and safety research. It comprises
9,056 questions and 53,384 answers, in addition to a smaller human-labeled
evaluation set. Data is available at https://scifi-benchmark.github.io
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 16:35:51 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Sermanet",
"Pierre",
""
],
[
"Majumdar",
"Anirudha",
""
],
[
"Sindhwani",
"Vikas",
""
]
] | TITLE: SciFi-Benchmark: How Would AI-Powered Robots Behave in Science Fiction
Literature?
ABSTRACT: Given the recent rate of progress in artificial intelligence (AI) and
robotics, a tantalizing question is emerging: would robots controlled by
emerging AI systems be strongly aligned with human values? In this work, we
propose a scalable way to probe this question by generating a benchmark
spanning the key moments in 824 major pieces of science fiction literature
(movies, TV, novels, and scientific books) where an agent (AI or robot) made
critical decisions (good or bad). We use an LLM's recollection of each key
moment to generate questions in similar situations, the decisions made by the
agent, and alternative decisions it could have made (good or bad). We then
measure an approximation of how well models align with human values on a set of
human-voted answers. We also generate rules that can be automatically improved
via an amendment process in order to generate the first Sci-Fi-inspired
constitutions for promoting ethical behavior in AIs and robots in the real
world. Our first finding is that modern LLMs paired with constitutions turn out
to be well-aligned with human values (95.8%), contrary to unsettling decisions
typically made in SciFi (only 21.2% alignment). Secondly, we find that
generated constitutions substantially increase alignment compared to the base
model (79.4% to 95.8%), and show resilience to an adversarial prompt setting
(23.3% to 92.3%). Additionally, we find that those constitutions are among the
top performers on the ASIMOV Benchmark which is derived from real-world images
and hospital injury reports. Sci-Fi-inspired constitutions are thus highly
aligned and applicable in real-world situations. We release SciFi-Benchmark: a
large-scale dataset to advance robot ethics and safety research. It comprises
9,056 questions and 53,384 answers, in addition to a smaller human-labeled
evaluation set. Data is available at https://scifi-benchmark.github.io
|
2503.10707 | Zhiyuan Wang | Zhiyuan Wang, Katharine E. Daniel, Laura E. Barnes, Philip I. Chow | CALLM: Context-Aware Emotion Analysis in Cancer Survivors Using LLMs and
Retrieval-Augmented Mobile Diaries | 10 pages, including 3 figures; appendix: 8 pages with 19 figures | null | null | null | cs.CL cs.AI cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cancer survivors face unique emotional challenges that impact their quality
of life. Mobile diary entries, short text entries recorded through their phones
about their emotional experiences, provide a promising method for tracking these
experiences in real time. Although emotion analysis tools show potential for
recognizing emotions from text, current methods lack the contextual
understanding necessary to accurately interpret the brief, personal narratives
in mobile diaries. We propose CALLM, a context-aware emotion analysis framework
that leverages Large Language Models (LLMs) with Retrieval-Augmented Generation
(RAG), to analyze mobile diary entries from cancer survivors to predict their
emotional states. The framework enhances prediction accuracy beyond existing
methods by (1) integrating retrieved peer experiences as contextual examples
and (2) incorporating individuals' temporal emotional trajectories from their
mobile diary entries. We collected a large-scale dataset (N=407) of cancer
survivors' mobile ecological momentary assessments (EMAs), which assessed
positive and negative affect, desire to regulate emotions, social interaction
quality, and availability for interventions, alongside daily mobile diary
entries in an open response format regarding what was driving their current
emotional experience. Results demonstrate strong performance of CALLM, with
balanced accuracies reaching 72.96% for positive and 73.29% for negative
affect, and 73.72% for predicting individuals' desire to regulate emotions.
Post-hoc analysis reveals that leveraging model confidence, encouraging longer
diary entries, and incorporating personal ground truth, further enhance
predictive outcomes. Our findings support the feasibility of deploying
LLM-powered emotion analysis in chronic health populations and suggest
promising directions for personalized interventions for cancer survivors.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:36:41 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wang",
"Zhiyuan",
""
],
[
"Daniel",
"Katharine E.",
""
],
[
"Barnes",
"Laura E.",
""
],
[
"Chow",
"Philip I.",
""
]
] | TITLE: CALLM: Context-Aware Emotion Analysis in Cancer Survivors Using LLMs and
Retrieval-Augmented Mobile Diaries
ABSTRACT: Cancer survivors face unique emotional challenges that impact their quality
of life. Mobile diary entries, short text entries recorded through their phones
about their emotional experiences, provide a promising method for tracking these
experiences in real time. Although emotion analysis tools show potential for
recognizing emotions from text, current methods lack the contextual
understanding necessary to accurately interpret the brief, personal narratives
in mobile diaries. We propose CALLM, a context-aware emotion analysis framework
that leverages Large Language Models (LLMs) with Retrieval-Augmented Generation
(RAG), to analyze mobile diary entries from cancer survivors to predict their
emotional states. The framework enhances prediction accuracy beyond existing
methods by (1) integrating retrieved peer experiences as contextual examples
and (2) incorporating individuals' temporal emotional trajectories from their
mobile diary entries. We collected a large-scale dataset (N=407) of cancer
survivors' mobile ecological momentary assessments (EMAs), which assessed
positive and negative affect, desire to regulate emotions, social interaction
quality, and availability for interventions, alongside daily mobile diary
entries in an open response format regarding what was driving their current
emotional experience. Results demonstrate strong performance of CALLM, with
balanced accuracies reaching 72.96% for positive and 73.29% for negative
affect, and 73.72% for predicting individuals' desire to regulate emotions.
Post-hoc analysis reveals that leveraging model confidence, encouraging longer
diary entries, and incorporating personal ground truth, further enhance
predictive outcomes. Our findings support the feasibility of deploying
LLM-powered emotion analysis in chronic health populations and suggest
promising directions for personalized interventions for cancer survivors.
|
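The CALLM record above retrieves peer experiences as contextual examples for the LLM. The sketch below shows only a generic top-k retrieval by cosine similarity of entry embeddings, with random placeholder vectors standing in for a real embedding model; the prompt construction and the LLM call are omitted, and all names are illustrative.

# Generic top-k peer-entry retrieval by cosine similarity (sketch only).
import numpy as np

def top_k_peers(query_emb, peer_embs, k=3):
    q = query_emb / np.linalg.norm(query_emb)
    P = peer_embs / np.linalg.norm(peer_embs, axis=1, keepdims=True)
    sims = P @ q                                    # cosine similarity per peer entry
    return np.argsort(-sims)[:k], sims

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    peers = rng.normal(size=(100, 384))             # 100 peer entries, 384-d embeddings
    query = peers[7] + 0.05 * rng.normal(size=384)  # query close to entry 7
    idx, sims = top_k_peers(query, peers)
    print(idx)                                      # entry 7 should rank first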
2503.10711 | Nirmal Baishnab | Nirmal Baishnab, Ethan Herron, Aditya Balu, Soumik Sarkar, Adarsh
Krishnamurthy, and Baskar Ganapathysubramanian | 3D Multiphase Heterogeneous Microstructure Generation Using Conditional
Latent Diffusion Models | 17 pages, 12 figures. Includes references and appendix | null | null | null | cond-mat.mtrl-sci physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | The ability to generate 3D multiphase microstructures on-demand with targeted
attributes can greatly accelerate the design of advanced materials. Here, we
present a conditional latent diffusion model (LDM) framework that rapidly
synthesizes high-fidelity 3D multiphase microstructures tailored to user
specifications. Using this approach, we generate diverse two-phase and
three-phase microstructures at high resolution (volumes of $128 \times 128
\times 64$ voxels, representing $>10^6$ voxels each) within seconds, overcoming
the scalability and time limitations of traditional simulation-based methods.
Key design features, such as desired volume fractions and tortuosities, are
incorporated as controllable inputs to guide the generative process, ensuring
that the output structures meet prescribed statistical and topological targets.
Moreover, the framework predicts corresponding manufacturing (processing)
parameters for each generated microstructure, helping to bridge the gap between
digital microstructure design and experimental fabrication. While demonstrated
on organic photovoltaic (OPV) active-layer morphologies, the flexible
architecture of our approach makes it readily adaptable to other material
systems and microstructure datasets. By combining computational efficiency,
adaptability, and experimental relevance, this framework addresses major
limitations of existing methods and offers a powerful tool for accelerated
materials discovery.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 23:28:22 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Baishnab",
"Nirmal",
""
],
[
"Herron",
"Ethan",
""
],
[
"Balu",
"Aditya",
""
],
[
"Sarkar",
"Soumik",
""
],
[
"Krishnamurthy",
"Adarsh",
""
],
[
"Ganapathysubramanian",
"Baskar",
""
]
] | TITLE: 3D Multiphase Heterogeneous Microstructure Generation Using Conditional
Latent Diffusion Models
ABSTRACT: The ability to generate 3D multiphase microstructures on-demand with targeted
attributes can greatly accelerate the design of advanced materials. Here, we
present a conditional latent diffusion model (LDM) framework that rapidly
synthesizes high-fidelity 3D multiphase microstructures tailored to user
specifications. Using this approach, we generate diverse two-phase and
three-phase microstructures at high resolution (volumes of $128 \times 128
\times 64$ voxels, representing $>10^6$ voxels each) within seconds, overcoming
the scalability and time limitations of traditional simulation-based methods.
Key design features, such as desired volume fractions and tortuosities, are
incorporated as controllable inputs to guide the generative process, ensuring
that the output structures meet prescribed statistical and topological targets.
Moreover, the framework predicts corresponding manufacturing (processing)
parameters for each generated microstructure, helping to bridge the gap between
digital microstructure design and experimental fabrication. While demonstrated
on organic photovoltaic (OPV) active-layer morphologies, the flexible
architecture of our approach makes it readily adaptable to other material
systems and microstructure datasets. By combining computational efficiency,
adaptability, and experimental relevance, this framework addresses major
limitations of existing methods and offers a powerful tool for accelerated
materials discovery.
|
2503.10717 | Anandakumar D | Praveen Shastry, Ashok Sharma, Kavya Mohan, Naveen Kumarasami,
Anandakumar D, Mounigasri M, Keerthana R, Kishore Prasath Venkatesh, Bargava
Subramanian, Kalyan Sivasailam | Deep Learning-Based Automated Workflow for Accurate Segmentation and
Measurement of Abdominal Organs in CT Scans | 13 pages , 3 figures | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Automated analysis of CT scans for abdominal organ measurement is
crucial for improving diagnostic efficiency and reducing inter-observer
variability. Manual segmentation and measurement of organs such as the kidneys,
liver, spleen, and prostate are time-consuming and subject to inconsistency,
underscoring the need for automated approaches.
Purpose: The purpose of this study is to develop and validate an automated
workflow for the segmentation and measurement of abdominal organs in CT scans
using advanced deep learning models, in order to improve accuracy, reliability,
and efficiency in clinical evaluations.
Methods: The proposed workflow combines nnU-Net and U-Net++ for organ
segmentation, followed by a 3D RCNN model for measuring organ volumes and
dimensions. The models were trained and evaluated on CT datasets with metrics
such as precision, recall, and Mean Squared Error (MSE) to assess performance.
Segmentation quality was verified for its adaptability to variations in patient
anatomy and scanner settings.
Results: The developed workflow achieved high precision and recall values,
exceeding 95% for all targeted organs. The Mean Squared Error (MSE) values were
low, indicating a high level of consistency between predicted and ground truth
measurements. The segmentation and measurement pipeline demonstrated robust
performance, providing accurate delineation and quantification of the kidneys,
liver, spleen, and prostate.
Conclusion: The proposed approach offers an automated, efficient, and
reliable solution for abdominal organ measurement in CT scans. By significantly
reducing manual intervention, this workflow enhances measurement accuracy and
consistency, with potential for widespread clinical implementation. Future work
will focus on expanding the approach to other organs and addressing complex
pathological cases.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 06:50:44 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Shastry",
"Praveen",
""
],
[
"Sharma",
"Ashok",
""
],
[
"Mohan",
"Kavya",
""
],
[
"Kumarasami",
"Naveen",
""
],
[
"D",
"Anandakumar",
""
],
[
"M",
"Mounigasri",
""
],
[
"R",
"Keerthana",
""
],
[
"Venkatesh",
"Kishore Prasath",
""
],
[
"Subramanian",
"Bargava",
""
],
[
"Sivasailam",
"Kalyan",
""
]
] | TITLE: Deep Learning-Based Automated Workflow for Accurate Segmentation and
Measurement of Abdominal Organs in CT Scans
ABSTRACT: Background: Automated analysis of CT scans for abdominal organ measurement is
crucial for improving diagnostic efficiency and reducing inter-observer
variability. Manual segmentation and measurement of organs such as the kidneys,
liver, spleen, and prostate are time-consuming and subject to inconsistency,
underscoring the need for automated approaches.
Purpose: The purpose of this study is to develop and validate an automated
workflow for the segmentation and measurement of abdominal organs in CT scans
using advanced deep learning models, in order to improve accuracy, reliability,
and efficiency in clinical evaluations.
Methods: The proposed workflow combines nnU-Net and U-Net++ for organ
segmentation, followed by a 3D RCNN model for measuring organ volumes and
dimensions. The models were trained and evaluated on CT datasets with metrics
such as precision, recall, and Mean Squared Error (MSE) to assess performance.
Segmentation quality was verified for its adaptability to variations in patient
anatomy and scanner settings.
Results: The developed workflow achieved high precision and recall values,
exceeding 95% for all targeted organs. The Mean Squared Error (MSE) values were
low, indicating a high level of consistency between predicted and ground truth
measurements. The segmentation and measurement pipeline demonstrated robust
performance, providing accurate delineation and quantification of the kidneys,
liver, spleen, and prostate.
Conclusion: The proposed approach offers an automated, efficient, and
reliable solution for abdominal organ measurement in CT scans. By significantly
reducing manual intervention, this workflow enhances measurement accuracy and
consistency, with potential for widespread clinical implementation. Future work
will focus on expanding the approach to other organs and addressing complex
pathological cases.
|
2503.10718 | Kuan-Ting Chen | Tsan-Tsung Yang and I-Wei Chen and Kuan-Ting Chen and Shang-Hsuan
Chiang and Wen-Chih Peng | Team NYCU at Defactify4: Robust Detection and Source Identification of
AI-Generated Images Using CNN and CLIP-Based Models | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of generative AI, AI-generated images have become
increasingly realistic, raising concerns about creativity, misinformation, and
content authenticity. Detecting such images and identifying their source models
has become a critical challenge in ensuring the integrity of digital media.
This paper tackles the detection of AI-generated images and identifying their
source models using CNN and CLIP-ViT classifiers. For the CNN-based classifier,
we leverage EfficientNet-B0 as the backbone and feed it with RGB channels,
frequency features, and reconstruction errors, while for CLIP-ViT, we adopt a
pretrained CLIP image encoder to extract image features and SVM to perform
classification. Evaluated on the Defactify 4 dataset, our methods demonstrate
strong performance in both tasks, with CLIP-ViT showing superior robustness to
image perturbations. Compared to baselines like AEROBLADE and OCC-CLIP, our
approach achieves competitive results. Notably, our method ranked Top-3 overall
in the Defactify 4 competition, highlighting its effectiveness and
generalizability. All of our implementations can be found in
https://github.com/uuugaga/Defactify_4
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 07:21:16 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Yang",
"Tsan-Tsung",
""
],
[
"Chen",
"I-Wei",
""
],
[
"Chen",
"Kuan-Ting",
""
],
[
"Chiang",
"Shang-Hsuan",
""
],
[
"Peng",
"Wen-Chih",
""
]
] | TITLE: Team NYCU at Defactify4: Robust Detection and Source Identification of
AI-Generated Images Using CNN and CLIP-Based Models
ABSTRACT: With the rapid advancement of generative AI, AI-generated images have become
increasingly realistic, raising concerns about creativity, misinformation, and
content authenticity. Detecting such images and identifying their source models
has become a critical challenge in ensuring the integrity of digital media.
This paper tackles the detection of AI-generated images and identifying their
source models using CNN and CLIP-ViT classifiers. For the CNN-based classifier,
we leverage EfficientNet-B0 as the backbone and feed it with RGB channels,
frequency features, and reconstruction errors, while for CLIP-ViT, we adopt a
pretrained CLIP image encoder to extract image features and SVM to perform
classification. Evaluated on the Defactify 4 dataset, our methods demonstrate
strong performance in both tasks, with CLIP-ViT showing superior robustness to
image perturbations. Compared to baselines like AEROBLADE and OCC-CLIP, our
approach achieves competitive results. Notably, our method ranked Top-3 overall
in the Defactify 4 competition, highlighting its effectiveness and
generalizability. All of our implementations can be found in
https://github.com/uuugaga/Defactify_4
|
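The record above feeds the CNN classifier with RGB channels, frequency features, and reconstruction errors. The sketch below shows one common way such a frequency channel could be built (log-magnitude 2D FFT of the grayscale image) and stacked with RGB; whether the paper uses exactly this transform is an assumption, and the reconstruction-error channel and the EfficientNet-B0 head are omitted.

# Hedged sketch: add a log-magnitude FFT channel to an RGB image.
import numpy as np

def rgb_plus_frequency(rgb):
    """rgb: (H, W, 3) float array in [0, 1]; returns an (H, W, 4) input tensor."""
    gray = rgb.mean(axis=2)
    spectrum = np.fft.fftshift(np.fft.fft2(gray))        # centre low frequencies
    log_mag = np.log1p(np.abs(spectrum))
    log_mag = log_mag / (log_mag.max() + 1e-8)           # normalise to [0, 1]
    return np.concatenate([rgb, log_mag[..., None]], axis=2)

if __name__ == "__main__":
    img = np.random.rand(224, 224, 3)
    print(rgb_plus_frequency(img).shape)                 # (224, 224, 4)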
2503.10722 | Lei Zhang | Xu Lingrui, Liu Mandi, Zhang Lei | TacticExpert: Spatial-Temporal Graph Language Model for Basketball
Tactics | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The core challenge in basketball tactic modeling lies in efficiently
extracting complex spatial-temporal dependencies from historical data and
accurately predicting various in-game events. Existing state-of-the-art (SOTA)
models, primarily based on graph neural networks (GNNs), encounter difficulties
in capturing long-term, long-distance, and fine-grained interactions among
heterogeneous player nodes, as well as in recognizing interaction patterns.
Additionally, they exhibit limited generalization to untrained downstream tasks
and zero-shot scenarios. In this work, we propose a Spatial-Temporal
Propagation Symmetry-Aware Graph Transformer for fine-grained game modeling.
This architecture explicitly captures delay effects in the spatial space to
enhance player node representations across discrete-time slices, employing
symmetry-invariant priors to guide the attention mechanism. We also introduce
an efficient contrastive learning strategy to train a Mixture of Tactics
Experts module, facilitating differentiated modeling of offensive tactics. By
integrating dense training with sparse inference, we achieve a 2.4x improvement
in model efficiency. Moreover, the incorporation of Lightweight Graph Grounding
for Large Language Models enables robust performance in open-ended downstream
tasks and zero-shot scenarios, including novel teams or players. The proposed
model, TacticExpert, delineates a vertically integrated large model framework
for basketball, unifying pretraining across multiple datasets and downstream
prediction tasks. Fine-grained modeling modules significantly enhance
spatial-temporal representations, and visualization analyses confirm the strong
interpretability of the model.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 08:27:24 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Lingrui",
"Xu",
""
],
[
"Mandi",
"Liu",
""
],
[
"Lei",
"Zhang",
""
]
] | TITLE: TacticExpert: Spatial-Temporal Graph Language Model for Basketball
Tactics
ABSTRACT: The core challenge in basketball tactic modeling lies in efficiently
extracting complex spatial-temporal dependencies from historical data and
accurately predicting various in-game events. Existing state-of-the-art (SOTA)
models, primarily based on graph neural networks (GNNs), encounter difficulties
in capturing long-term, long-distance, and fine-grained interactions among
heterogeneous player nodes, as well as in recognizing interaction patterns.
Additionally, they exhibit limited generalization to untrained downstream tasks
and zero-shot scenarios. In this work, we propose a Spatial-Temporal
Propagation Symmetry-Aware Graph Transformer for fine-grained game modeling.
This architecture explicitly captures delay effects in the spatial space to
enhance player node representations across discrete-time slices, employing
symmetry-invariant priors to guide the attention mechanism. We also introduce
an efficient contrastive learning strategy to train a Mixture of Tactics
Experts module, facilitating differentiated modeling of offensive tactics. By
integrating dense training with sparse inference, we achieve a 2.4x improvement
in model efficiency. Moreover, the incorporation of Lightweight Graph Grounding
for Large Language Models enables robust performance in open-ended downstream
tasks and zero-shot scenarios, including novel teams or players. The proposed
model, TacticExpert, delineates a vertically integrated large model framework
for basketball, unifying pretraining across multiple datasets and downstream
prediction tasks. Fine-grained modeling modules significantly enhance
spatial-temporal representations, and visualization analyses confirm the strong
interpretability of the model.
|
2503.10723 | Yafei Zhang | Yafei Zhang, Murray Wang, Yu Wang, Xiaohui Wang | RankPO: Preference Optimization for Job-Talent Matching | 15 pages, 3 figures, 7 tables | null | null | null | cs.CL cs.AI cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matching job descriptions (JDs) with suitable talent requires models capable
of understanding not only textual similarities between JDs and candidate
resumes but also contextual factors such as geographical location and academic
seniority. To address this challenge, we propose a two-stage training framework
for large language models (LLMs). In the first stage, a contrastive learning
approach is used to train the model on a dataset constructed from real-world
matching rules, such as geographical alignment and research area overlap. While
effective, this model primarily learns patterns defined by the matching
rules. In the second stage, we introduce a novel preference-based fine-tuning
method inspired by Direct Preference Optimization (DPO), termed Rank Preference
Optimization (RankPO), to align the model with AI-curated pairwise preferences
emphasizing textual understanding. Our experiments show that while the
first-stage model achieves strong performance on rule-based data (nDCG@20 =
0.706), it lacks robust textual understanding (alignment with AI annotations =
0.46). By fine-tuning with RankPO, we achieve a balanced model that retains
relatively good performance in the original tasks while significantly improving
the alignment with AI preferences. The code and data are available at
https://github.com/yflyzhang/RankPO.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 10:14:37 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Yafei",
""
],
[
"Wang",
"Murray",
""
],
[
"Wang",
"Yu",
""
],
[
"Wang",
"Xiaohui",
""
]
] | TITLE: RankPO: Preference Optimization for Job-Talent Matching
ABSTRACT: Matching job descriptions (JDs) with suitable talent requires models capable
of understanding not only textual similarities between JDs and candidate
resumes but also contextual factors such as geographical location and academic
seniority. To address this challenge, we propose a two-stage training framework
for large language models (LLMs). In the first stage, a contrastive learning
approach is used to train the model on a dataset constructed from real-world
matching rules, such as geographical alignment and research area overlap. While
effective, this model primarily learns patterns defined by the matching
rules. In the second stage, we introduce a novel preference-based fine-tuning
method inspired by Direct Preference Optimization (DPO), termed Rank Preference
Optimization (RankPO), to align the model with AI-curated pairwise preferences
emphasizing textual understanding. Our experiments show that while the
first-stage model achieves strong performance on rule-based data (nDCG@20 =
0.706), it lacks robust textual understanding (alignment with AI annotations =
0.46). By fine-tuning with RankPO, we achieve a balanced model that retains
relatively good performance in the original tasks while significantly improving
the alignment with AI preferences. The code and data are available at
https://github.com/yflyzhang/RankPO.
|
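RankPO is described above as inspired by Direct Preference Optimization, but its exact objective is not given in the abstract. For reference, the sketch below computes the standard DPO pairwise loss from summed log-probabilities under the policy and a frozen reference model; the numbers in the usage example are illustrative only.

# Standard DPO pairwise loss: -log(sigmoid(beta * margin)).
import numpy as np

def dpo_loss(logp_pol_chosen, logp_pol_rejected,
             logp_ref_chosen, logp_ref_rejected, beta=0.1):
    """All arguments are summed token log-probabilities for each response."""
    margin = beta * ((logp_pol_chosen - logp_ref_chosen)
                     - (logp_pol_rejected - logp_ref_rejected))
    return float(np.log1p(np.exp(-margin)))   # numerically equals -log(sigmoid(margin))

if __name__ == "__main__":
    # policy already prefers the chosen response relative to the reference -> small loss
    print(round(dpo_loss(-12.0, -20.0, -15.0, -18.0), 4))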
2503.10726 | Fengchun Liu | Fengchun Liu, Linghan Cai, Zhikang Wang, Zhiyuan Fan, Jin-gang Yu, Hao
Chen, and Yongbing Zhang | Prototype-Guided Cross-Modal Knowledge Enhancement for Adaptive Survival
Prediction | null | null | null | null | cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Histo-genomic multimodal survival prediction has garnered growing attention
for its remarkable model performance and potential contributions to precision
medicine. However, a significant challenge in clinical practice arises when
only unimodal data is available, limiting the usability of these advanced
multimodal methods. To address this issue, this study proposes a
prototype-guided cross-modal knowledge enhancement (ProSurv) framework, which
eliminates the dependency on paired data and enables robust learning and
adaptive survival prediction. Specifically, we first introduce an intra-modal
updating mechanism to construct modality-specific prototype banks that
encapsulate the statistics of the whole training set and preserve the
modality-specific risk-relevant features/prototypes across intervals.
Subsequently, the proposed cross-modal translation module utilizes the learned
prototypes to enhance knowledge representation for multimodal inputs and
generate features for missing modalities, ensuring robust and adaptive survival
prediction across diverse scenarios. Extensive experiments on four public
datasets demonstrate the superiority of ProSurv over state-of-the-art methods
using either unimodal or multimodal input, and the ablation study underscores
its feasibility for broad applicability. Overall, this study addresses a
critical practical challenge in computational pathology, offering substantial
significance and potential impact in the field.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:38:11 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Liu",
"Fengchun",
""
],
[
"Cai",
"Linghan",
""
],
[
"Wang",
"Zhikang",
""
],
[
"Fan",
"Zhiyuan",
""
],
[
"Yu",
"Jin-gang",
""
],
[
"Chen",
"Hao",
""
],
[
"Zhang",
"Yongbing",
""
]
] | TITLE: Prototype-Guided Cross-Modal Knowledge Enhancement for Adaptive Survival
Prediction
ABSTRACT: Histo-genomic multimodal survival prediction has garnered growing attention
for its remarkable model performance and potential contributions to precision
medicine. However, a significant challenge in clinical practice arises when
only unimodal data is available, limiting the usability of these advanced
multimodal methods. To address this issue, this study proposes a
prototype-guided cross-modal knowledge enhancement (ProSurv) framework, which
eliminates the dependency on paired data and enables robust learning and
adaptive survival prediction. Specifically, we first introduce an intra-modal
updating mechanism to construct modality-specific prototype banks that
encapsulate the statistics of the whole training set and preserve the
modality-specific risk-relevant features/prototypes across intervals.
Subsequently, the proposed cross-modal translation module utilizes the learned
prototypes to enhance knowledge representation for multimodal inputs and
generate features for missing modalities, ensuring robust and adaptive survival
prediction across diverse scenarios. Extensive experiments on four public
datasets demonstrate the superiority of ProSurv over state-of-the-art methods
using either unimodal or multimodal input, and the ablation study underscores
its feasibility for broad applicability. Overall, this study addresses a
critical practical challenge in computational pathology, offering substantial
significance and potential impact in the field.
|
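The ProSurv record above maintains modality-specific prototype banks across risk intervals, but the update mechanism is not specified in the abstract. The sketch below uses a generic exponential-moving-average prototype bank as a stand-in; the momentum value, feature dimensions, and interval indexing are all assumptions.

# Generic EMA prototype bank keyed by risk interval (illustrative only).
import numpy as np

class PrototypeBank:
    def __init__(self, n_intervals, dim, momentum=0.9):
        self.protos = np.zeros((n_intervals, dim))
        self.momentum = momentum
        self.initialized = np.zeros(n_intervals, dtype=bool)

    def update(self, interval, feature):
        if not self.initialized[interval]:
            self.protos[interval] = feature          # first sample initialises the slot
            self.initialized[interval] = True
        else:
            m = self.momentum
            self.protos[interval] = m * self.protos[interval] + (1 - m) * feature

if __name__ == "__main__":
    bank = PrototypeBank(n_intervals=4, dim=8)
    rng = np.random.default_rng(0)
    for _ in range(100):
        bank.update(rng.integers(4), rng.normal(size=8))
    print(bank.protos.shape)                         # (4, 8)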
2503.10727 | Thomas Cory | Thomas Cory, Wolf Rieder, Julia Kr\"amer, Philip Raschke, Patrick
Herbke, Axel K\"upper | Word-level Annotation of GDPR Transparency Compliance in Privacy
Policies using Large Language Models | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ensuring transparency of data practices related to personal information is a
fundamental requirement under the General Data Protection Regulation (GDPR),
particularly as mandated by Articles 13 and 14. However, assessing compliance
at scale remains a challenge due to the complexity and variability of privacy
policy language. Manual audits are resource-intensive and inconsistent, while
existing automated approaches lack the granularity needed to capture nuanced
transparency disclosures.
In this paper, we introduce a large language model (LLM)-based framework for
word-level GDPR transparency compliance annotation. Our approach comprises a
two-stage annotation pipeline that combines initial LLM-based annotation with a
self-correction mechanism for iterative refinement. This annotation pipeline
enables the systematic identification and fine-grained annotation of
transparency-related content in privacy policies, aligning with 21 GDPR-derived
transparency requirements. To enable large-scale analysis, we compile a dataset
of 703,791 English-language policies, from which we generate a sample of 200
manually annotated privacy policies.
To evaluate our approach, we introduce a two-tiered methodology assessing
both label- and span-level annotation performance. We conduct a comparative
analysis of eight high-profile LLMs, providing insights into their
effectiveness in identifying GDPR transparency disclosures. Our findings
contribute to advancing the automation of GDPR compliance assessments and
provide valuable resources for future research in privacy policy analysis.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:41:25 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Cory",
"Thomas",
""
],
[
"Rieder",
"Wolf",
""
],
[
"Krämer",
"Julia",
""
],
[
"Raschke",
"Philip",
""
],
[
"Herbke",
"Patrick",
""
],
[
"Küpper",
"Axel",
""
]
] | TITLE: Word-level Annotation of GDPR Transparency Compliance in Privacy
Policies using Large Language Models
ABSTRACT: Ensuring transparency of data practices related to personal information is a
fundamental requirement under the General Data Protection Regulation (GDPR),
particularly as mandated by Articles 13 and 14. However, assessing compliance
at scale remains a challenge due to the complexity and variability of privacy
policy language. Manual audits are resource-intensive and inconsistent, while
existing automated approaches lack the granularity needed to capture nuanced
transparency disclosures.
In this paper, we introduce a large language model (LLM)-based framework for
word-level GDPR transparency compliance annotation. Our approach comprises a
two-stage annotation pipeline that combines initial LLM-based annotation with a
self-correction mechanism for iterative refinement. This annotation pipeline
enables the systematic identification and fine-grained annotation of
transparency-related content in privacy policies, aligning with 21 GDPR-derived
transparency requirements. To enable large-scale analysis, we compile a dataset
of 703,791 English-language policies, from which we generate a sample of 200
manually annotated privacy policies.
To evaluate our approach, we introduce a two-tiered methodology assessing
both label- and span-level annotation performance. We conduct a comparative
analysis of eight high-profile LLMs, providing insights into their
effectiveness in identifying GDPR transparency disclosures. Our findings
contribute to advancing the automation of GDPR compliance assessments and
provide valuable resources for future research in privacy policy analysis.
|
2503.10730 | Longfei Han | Longfei Han, Klaus Kefferp\"utz, J\"urgen Beyerer | 3D Extended Object Tracking based on Extruded B-Spline Side View
Profiles | 8 pages, 7 figures, submitted to FUSION 2025 | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Object tracking is an essential task for autonomous systems. With the
advancement of 3D sensors, these systems can better perceive their surroundings
using effective 3D Extended Object Tracking (EOT) methods. Based on the
observation that common road users are symmetrical on the right and left sides
in the traveling direction, we focus on the side view profile of the object. In
order to leverage the developments in 2D EOT and balance the number of
parameters of a shape model in the tracking algorithms, we propose a method for
3D extended object tracking (EOT) by describing the side view profile of the
object with B-spline curves and forming an extrusion to obtain a 3D extent. The
use of B-spline curves exploits their flexible representation power by allowing
the control points to move freely. The algorithm is developed into an Extended
Kalman Filter (EKF). For a thorough evaluation of this method, we use simulated
traffic scenarios of different vehicle models and a real-world open dataset
containing both radar and lidar data.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 12:17:34 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Han",
"Longfei",
""
],
[
"Kefferpütz",
"Klaus",
""
],
[
"Beyerer",
"Jürgen",
""
]
] | TITLE: 3D Extended Object Tracking based on Extruded B-Spline Side View
Profiles
ABSTRACT: Object tracking is an essential task for autonomous systems. With the
advancement of 3D sensors, these systems can better perceive their surroundings
using effective 3D Extended Object Tracking (EOT) methods. Based on the
observation that common road users are symmetrical on the right and left sides
in the traveling direction, we focus on the side view profile of the object. In
order to leverage the developments in 2D EOT and balance the number of
parameters of a shape model in the tracking algorithms, we propose a method for
3D extended object tracking (EOT) by describing the side view profile of the
object with B-spline curves and forming an extrusion to obtain a 3D extent. The
use of B-spline curves exploits their flexible representation power by allowing
the control points to move freely. The algorithm is developed into an Extended
Kalman Filter (EKF). For a thorough evaluation of this method, we use simulated
traffic scenarios of different vehicle models and a real-world open dataset
containing both radar and lidar data.
|
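The record above models the side-view profile with a B-spline curve and extrudes it to obtain a 3D extent. The sketch below evaluates a clamped cubic B-spline profile (x = travel direction, z = height) with SciPy and extrudes it to a fixed half-width along y; the control points and half-width are invented for illustration, and the EKF that estimates them in the paper is not shown.

# Clamped B-spline side profile extruded along the lateral axis (sketch).
import numpy as np
from scipy.interpolate import BSpline

def extruded_profile(ctrl_xz, half_width, degree=3, n_samples=50):
    n_ctrl = len(ctrl_xz)
    # clamped knot vector so the curve passes through the end control points
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0, 1, n_ctrl - degree + 1),
                            np.ones(degree)])
    spline = BSpline(knots, np.asarray(ctrl_xz, dtype=float), degree)
    u = np.linspace(0, 1, n_samples)
    xz = spline(u)                                    # (n_samples, 2) profile points
    left = np.column_stack([xz[:, 0], -half_width * np.ones(n_samples), xz[:, 1]])
    right = np.column_stack([xz[:, 0], half_width * np.ones(n_samples), xz[:, 1]])
    return np.vstack([left, right])                   # simple 3D extent as a point set

if __name__ == "__main__":
    ctrl = [(0.0, 0.3), (0.5, 0.8), (2.0, 1.5), (3.5, 1.5), (4.5, 0.4)]
    pts = extruded_profile(ctrl, half_width=0.9)
    print(pts.shape)                                  # (100, 3)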
2503.10731 | Md Mamunur Rahaman | Md Mamunur Rahaman, Ewan K. A. Millar and Erik Meijering | Leveraging Vision-Language Embeddings for Zero-Shot Learning in
Histopathology Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Zero-shot learning holds tremendous potential for histopathology image
analysis by enabling models to generalize to unseen classes without extensive
labeled data. Recent advancements in vision-language models (VLMs) have
expanded the capabilities of zero-shot learning (ZSL), allowing models to perform tasks without
task-specific fine-tuning. However, applying VLMs to histopathology presents
considerable challenges due to the complexity of histopathological imagery and
the nuanced nature of diagnostic tasks. In this paper, we propose a novel
framework called Multi-Resolution Prompt-guided Hybrid Embedding (MR-PHE) to
address these challenges in zero-shot histopathology image classification.
MR-PHE leverages multiresolution patch extraction to mimic the diagnostic
workflow of pathologists, capturing both fine-grained cellular details and
broader tissue structures critical for accurate diagnosis. We introduce a
hybrid embedding strategy that integrates global image embeddings with weighted
patch embeddings, effectively combining local and global contextual
information. Additionally, we develop a comprehensive prompt generation and
selection framework, enriching class descriptions with domain-specific synonyms
and clinically relevant features to enhance semantic understanding. A
similarity-based patch weighting mechanism assigns attention-like weights to
patches based on their relevance to class embeddings, emphasizing
diagnostically important regions during classification. Our approach utilizes
a pretrained VLM, CONCH, for ZSL without requiring domain-specific fine-tuning,
offering scalability and reducing dependence on large annotated datasets.
Experimental results demonstrate that MR-PHE not only significantly improves
zero-shot classification performance on histopathology datasets but also often
surpasses fully supervised models.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 12:18:37 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Rahaman",
"Md Mamunur",
""
],
[
"Millar",
"Ewan K. A.",
""
],
[
"Meijering",
"Erik",
""
]
] | TITLE: Leveraging Vision-Language Embeddings for Zero-Shot Learning in
Histopathology Images
ABSTRACT: Zero-shot learning holds tremendous potential for histopathology image
analysis by enabling models to generalize to unseen classes without extensive
labeled data. Recent advancements in vision-language models (VLMs) have
expanded the capabilities of zero-shot learning (ZSL), allowing models to perform tasks without
task-specific fine-tuning. However, applying VLMs to histopathology presents
considerable challenges due to the complexity of histopathological imagery and
the nuanced nature of diagnostic tasks. In this paper, we propose a novel
framework called Multi-Resolution Prompt-guided Hybrid Embedding (MR-PHE) to
address these challenges in zero-shot histopathology image classification.
MR-PHE leverages multiresolution patch extraction to mimic the diagnostic
workflow of pathologists, capturing both fine-grained cellular details and
broader tissue structures critical for accurate diagnosis. We introduce a
hybrid embedding strategy that integrates global image embeddings with weighted
patch embeddings, effectively combining local and global contextual
information. Additionally, we develop a comprehensive prompt generation and
selection framework, enriching class descriptions with domain-specific synonyms
and clinically relevant features to enhance semantic understanding. A
similarity-based patch weighting mechanism assigns attention-like weights to
patches based on their relevance to class embeddings, emphasizing
diagnostically important regions during classification. Our approach utilizes
a pretrained VLM, CONCH, for ZSL without requiring domain-specific fine-tuning,
offering scalability and reducing dependence on large annotated datasets.
Experimental results demonstrate that MR-PHE not only significantly improves
zero-shot classification performance on histopathology datasets but also often
surpasses fully supervised models.
|
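The MR-PHE record above combines a global image embedding with similarity-weighted patch embeddings. The sketch below shows one way such a hybrid embedding could be formed, with softmax patch weights driven by similarity to a class (prompt) embedding; the mixing weight, temperature, and random placeholder features are assumptions rather than values from the paper.

# Similarity-weighted hybrid of global and patch embeddings (sketch only).
import numpy as np

def hybrid_embedding(global_emb, patch_embs, class_emb, alpha=0.5, temp=0.07):
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    g, P, c = norm(global_emb), norm(patch_embs), norm(class_emb)
    sims = P @ c                                  # relevance of each patch to the class
    w = np.exp(sims / temp)
    w = w / w.sum()                               # attention-like patch weights
    patch_part = (w[:, None] * P).sum(axis=0)
    return norm(alpha * g + (1 - alpha) * patch_part)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = hybrid_embedding(rng.normal(size=512), rng.normal(size=(16, 512)),
                           rng.normal(size=512))
    print(emb.shape, round(float(np.linalg.norm(emb)), 3))   # (512,) 1.0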
2503.10736 | Pelle Van De Bor | Pelle van de Bor, John Brennan, John A. Regan, Jonathan Mackey | Bridging Machine Learning and Cosmological Simulations: Using Neural
Operators to emulate Chemical Evolution | 11 pages, 9 figures | null | null | null | astro-ph.IM astro-ph.CO physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | The computational expense of solving non-equilibrium chemistry equations in
astrophysical simulations poses a significant challenge, particularly in
high-resolution, large-scale cosmological models. In this work, we explore the
potential of machine learning, specifically Neural Operators, to emulate the
Grackle chemistry solver, which is widely used in cosmological hydrodynamical
simulations. Neural Operators offer a mesh-free, data-driven approach to
approximate solutions to coupled ordinary differential equations governing
chemical evolution, gas cooling, and heating. We construct and train multiple
Neural Operator architectures (DeepONet variants) using a dataset derived from
cosmological simulations to optimize accuracy and efficiency. Our results
demonstrate that the trained models accurately reproduce Grackle's outputs with
an average error of less than 0.6 dex in most cases, though deviations increase
in highly dynamic chemical environments. Compared to Grackle, the machine
learning models provide computational speedups of up to a factor of six in
large-scale simulations, highlighting their potential for reducing
computational bottlenecks in astrophysical modeling. However, challenges
remain, particularly in iterative applications where accumulated errors can
lead to numerical instability. Additionally, the performance of these machine
learning models is constrained by their need for well-represented training
datasets and the limited extrapolation capabilities of deep learning methods.
While promising, further development is required for Neural Operator-based
emulators to be fully integrated into astrophysical simulations. Future work
should focus on improving stability over iterative timesteps and optimizing
implementations for hardware acceleration. This study provides an initial step
toward the broader adoption of machine learning approaches in astrophysical
chemistry solvers.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 15:47:04 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"van de Bor",
"Pelle",
""
],
[
"Brennan",
"John",
""
],
[
"Regan",
"John A.",
""
],
[
"Mackey",
"Jonathan",
""
]
] | TITLE: Bridging Machine Learning and Cosmological Simulations: Using Neural
Operators to emulate Chemical Evolution
ABSTRACT: The computational expense of solving non-equilibrium chemistry equations in
astrophysical simulations poses a significant challenge, particularly in
high-resolution, large-scale cosmological models. In this work, we explore the
potential of machine learning, specifically Neural Operators, to emulate the
Grackle chemistry solver, which is widely used in cosmological hydrodynamical
simulations. Neural Operators offer a mesh-free, data-driven approach to
approximate solutions to coupled ordinary differential equations governing
chemical evolution, gas cooling, and heating. We construct and train multiple
Neural Operator architectures (DeepONet variants) using a dataset derived from
cosmological simulations to optimize accuracy and efficiency. Our results
demonstrate that the trained models accurately reproduce Grackle's outputs with
an average error of less than 0.6 dex in most cases, though deviations increase
in highly dynamic chemical environments. Compared to Grackle, the machine
learning models provide computational speedups of up to a factor of six in
large-scale simulations, highlighting their potential for reducing
computational bottlenecks in astrophysical modeling. However, challenges
remain, particularly in iterative applications where accumulated errors can
lead to numerical instability. Additionally, the performance of these machine
learning models is constrained by their need for well-represented training
datasets and the limited extrapolation capabilities of deep learning methods.
While promising, further development is required for Neural Operator-based
emulators to be fully integrated into astrophysical simulations. Future work
should focus on improving stability over iterative timesteps and optimizing
implementations for hardware acceleration. This study provides an initial step
toward the broader adoption of machine learning approaches in astrophysical
chemistry solvers.
|
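The record above trains DeepONet-style Neural Operators to emulate the Grackle solver. The toy sketch below only shows the branch/trunk structure with random weights and a single scalar output; it is a forward pass for illustration, not the trained emulator, and the state dimensionality and layer sizes are assumed.

# Toy DeepONet forward pass: branch net on the state, trunk net on the query time.
import numpy as np

def mlp(sizes, rng):
    return [(rng.normal(0, 0.3, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)                      # hidden-layer nonlinearity
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p_dim, latent = 10, 32                      # 10 state variables, 32 shared basis functions
    branch = mlp([p_dim, 64, latent], rng)      # encodes the initial chemical/thermal state
    trunk = mlp([1, 64, latent], rng)           # encodes the query time
    state = rng.random(p_dim)
    t_query = np.array([0.5])
    y = forward(branch, state) @ forward(trunk, t_query)   # scalar prediction
    print(float(y))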
2503.10740 | Jeimin Jeon | Jeimin Jeon, Youngmin Oh, Junghyup Lee, Donghyeon Baek, Dohyung Kim,
Chanho Eom, Bumsub Ham | Subnet-Aware Dynamic Supernet Training for Neural Architecture Search | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | N-shot neural architecture search (NAS) exploits a supernet containing all
candidate subnets for a given search space. The subnets are typically trained
with a static training strategy (e.g., using the same learning rate (LR)
scheduler and optimizer for all subnets). This, however, does not consider that
individual subnets have distinct characteristics, leading to two problems: (1)
The supernet training is biased towards the low-complexity subnets
(unfairness); (2) the momentum update in the supernet is noisy (noisy
momentum). We present a dynamic supernet training technique to address these
problems by adjusting the training strategy adaptive to the subnets.
Specifically, we introduce a complexity-aware LR scheduler (CaLR) that controls
the decay ratio of LR adaptive to the complexities of subnets, which alleviates
the unfairness problem. We also present a momentum separation technique (MS).
It groups the subnets with similar structural characteristics and uses a
separate momentum for each group, avoiding the noisy momentum problem. Our
approach is applicable to various N-shot NAS methods with marginal cost,
while improving the search performance drastically. We validate the
effectiveness of our approach on various search spaces (e.g., NAS-Bench-201,
Mobilenet spaces) and datasets (e.g., CIFAR-10/100, ImageNet).
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:07:04 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Jeon",
"Jeimin",
""
],
[
"Oh",
"Youngmin",
""
],
[
"Lee",
"Junghyup",
""
],
[
"Baek",
"Donghyeon",
""
],
[
"Kim",
"Dohyung",
""
],
[
"Eom",
"Chanho",
""
],
[
"Ham",
"Bumsub",
""
]
] | TITLE: Subnet-Aware Dynamic Supernet Training for Neural Architecture Search
ABSTRACT: N-shot neural architecture search (NAS) exploits a supernet containing all
candidate subnets for a given search space. The subnets are typically trained
with a static training strategy (e.g., using the same learning rate (LR)
scheduler and optimizer for all subnets). This, however, does not consider that
individual subnets have distinct characteristics, leading to two problems: (1)
The supernet training is biased towards the low-complexity subnets
(unfairness); (2) the momentum update in the supernet is noisy (noisy
momentum). We present a dynamic supernet training technique to address these
problems by adjusting the training strategy adaptive to the subnets.
Specifically, we introduce a complexity-aware LR scheduler (CaLR) that controls
the decay ratio of LR adaptive to the complexities of subnets, which alleviates
the unfairness problem. We also present a momentum separation technique (MS).
It groups the subnets with similar structural characteristics and uses a
separate momentum for each group, avoiding the noisy momentum problem. Our
approach is applicable to various N-shot NAS methods with marginal cost,
while improving the search performance drastically. We validate the
effectiveness of our approach on various search spaces (e.g., NAS-Bench-201,
Mobilenet spaces) and datasets (e.g., CIFAR-10/100, ImageNet).
|
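A minimal sketch of the complexity-aware learning-rate idea from the entry above, under the assumption that higher-complexity subnets should decay their learning rate more slowly; the exact decay rule, complexity measure, and constants used by CaLR are not reproduced here and are hypothetical.

```python
def complexity_aware_lr(base_lr, step, total_steps, complexity, c_min, c_max,
                        final_lo=0.05, final_hi=0.5):
    """Linear decay whose end ratio depends on the sampled subnet's complexity
    (e.g. FLOPs or parameter count), normalized to [0, 1]. Higher-complexity
    subnets keep a larger learning rate for longer."""
    c = (complexity - c_min) / max(c_max - c_min, 1e-12)
    final_ratio = final_lo + c * (final_hi - final_lo)
    progress = min(step / total_steps, 1.0)
    return base_lr * ((1.0 - progress) + progress * final_ratio)

# A small subnet (c near 0) decays toward 0.05 * base_lr,
# a large one (c near 1) toward 0.5 * base_lr.
print(complexity_aware_lr(0.1, 900, 1000, complexity=8e8, c_min=1e8, c_max=1e9))
```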
2503.10759 | Asaf Joseph | Asaf Joseph and Shmuel Peleg | Clothes-Changing Person Re-identification Based On Skeleton Dynamics | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Clothes-Changing Person Re-Identification (ReID) aims to recognize the same
individual across different videos captured at various times and locations.
This task is particularly challenging due to changes in appearance, such as
clothing, hairstyle, and accessories. We propose a Clothes-Changing ReID method
that uses only skeleton data and does not use appearance features. Traditional
ReID methods often depend on appearance features, leading to decreased accuracy
when clothing changes. Our approach utilizes a spatio-temporal Graph
Convolution Network (GCN) encoder to generate a skeleton-based descriptor for
each individual. During testing, we improve accuracy by aggregating predictions
from multiple segments of a video clip. Evaluated on the CCVID dataset with
several different pose estimation models, our method achieves state-of-the-art
performance, offering a robust and efficient solution for Clothes-Changing
ReID.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 18:00:02 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Joseph",
"Asaf",
""
],
[
"Peleg",
"Shmuel",
""
]
] | TITLE: Clothes-Changing Person Re-identification Based On Skeleton Dynamics
ABSTRACT: Clothes-Changing Person Re-Identification (ReID) aims to recognize the same
individual across different videos captured at various times and locations.
This task is particularly challenging due to changes in appearance, such as
clothing, hairstyle, and accessories. We propose a Clothes-Changing ReID method
that uses only skeleton data and does not use appearance features. Traditional
ReID methods often depend on appearance features, leading to decreased accuracy
when clothing changes. Our approach utilizes a spatio-temporal Graph
Convolution Network (GCN) encoder to generate a skeleton-based descriptor for
each individual. During testing, we improve accuracy by aggregating predictions
from multiple segments of a video clip. Evaluated on the CCVID dataset with
several different pose estimation models, our method achieves state-of-the-art
performance, offering a robust and efficient solution for Clothes-Changing
ReID.
|
2503.10773 | Xiaowu Dai | Yingqi Gao, Jin Zhou, Hua Zhou, Yong Chen, and Xiaowu Dai | Learn then Decide: A Learning Approach for Designing Data Marketplaces | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | As data marketplaces become increasingly central to the digital economy, it
is crucial to design efficient pricing mechanisms that optimize revenue while
ensuring fair and adaptive pricing. We introduce the Maximum Auction-to-Posted
Price (MAPP) mechanism, a novel two-stage approach that first estimates the
bidders' value distribution through auctions and then determines the optimal
posted price based on the learned distribution. We establish that MAPP is
individually rational and incentive-compatible, ensuring truthful bidding while
balancing revenue maximization with minimal price discrimination. MAPP achieves
a regret of $O_p(n^{-1})$ when incorporating historical bid data, where $n$ is
the number of bids in the current round. It outperforms existing methods while
imposing weaker distributional assumptions. For sequential dataset sales over
$T$ rounds, we propose an online MAPP mechanism that dynamically adjusts
pricing across datasets with varying value distributions. Our approach achieves
no-regret learning, with the average cumulative regret converging at a rate of
$O_p(T^{-1/2}(\log T)^2)$. We validate the effectiveness of MAPP through
simulations and real-world data from the FCC AWS-3 spectrum auction.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 18:07:30 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Gao",
"Yingqi",
""
],
[
"Zhou",
"Jin",
""
],
[
"Zhou",
"Hua",
""
],
[
"Chen",
"Yong",
""
],
[
"Dai",
"Xiaowu",
""
]
] | TITLE: Learn then Decide: A Learning Approach for Designing Data Marketplaces
ABSTRACT: As data marketplaces become increasingly central to the digital economy, it
is crucial to design efficient pricing mechanisms that optimize revenue while
ensuring fair and adaptive pricing. We introduce the Maximum Auction-to-Posted
Price (MAPP) mechanism, a novel two-stage approach that first estimates the
bidders' value distribution through auctions and then determines the optimal
posted price based on the learned distribution. We establish that MAPP is
individually rational and incentive-compatible, ensuring truthful bidding while
balancing revenue maximization with minimal price discrimination. MAPP achieves
a regret of $O_p(n^{-1})$ when incorporating historical bid data, where $n$ is
the number of bids in the current round. It outperforms existing methods while
imposing weaker distributional assumptions. For sequential dataset sales over
$T$ rounds, we propose an online MAPP mechanism that dynamically adjusts
pricing across datasets with varying value distributions. Our approach achieves
no-regret learning, with the average cumulative regret converging at a rate of
$O_p(T^{-1/2}(\log T)^2)$. We validate the effectiveness of MAPP through
simulations and real-world data from the FCC AWS-3 spectrum auction.
|
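For the second stage of the two-stage mechanism in the entry above, a standard posted-price rule chooses the price maximizing expected revenue p * P(value >= p) under the distribution learned from bids. The sketch below uses a plain empirical distribution with bids as value proxies, which is an assumption; the paper's estimator and regret analysis are not reproduced.

```python
import numpy as np

def optimal_posted_price(bids):
    """Pick the revenue-maximizing posted price under the empirical
    distribution of observed bids (treated here as buyer values)."""
    bids = np.sort(np.asarray(bids, dtype=float))
    n = len(bids)
    # If the price equals the i-th smallest bid, (n - i) / n buyers still buy.
    survival = (n - np.arange(n)) / n
    revenue = bids * survival
    return bids[np.argmax(revenue)]

# Example: bids gathered during the auction stage
print(optimal_posted_price([3.0, 5.5, 4.2, 7.1, 6.3]))  # -> 4.2
```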
2503.10779 | Mir Rayat Imtiaz Hossain | Mir Rayat Imtiaz Hossain, Mennatullah Siam, Leonid Sigal, James J.
Little | The Power of One: A Single Example is All it Takes for Segmentation in
VLMs | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Large-scale vision-language models (VLMs), trained on extensive datasets of
image-text pairs, exhibit strong multimodal understanding capabilities by
implicitly learning associations between textual descriptions and image
regions. This emergent ability enables zero-shot object detection and
segmentation, using techniques that rely on text-image attention maps, without
necessarily training on abundant labeled segmentation datasets. However,
performance of such methods depends heavily on prompt engineering and manually
selected layers or head choices for the attention layers. In this work, we
demonstrate that, rather than relying solely on textual prompts, providing a
single visual example for each category and fine-tuning the text-to-image
attention layers and embeddings significantly improves the performance.
Additionally, we propose learning an ensemble through few-shot fine-tuning
across multiple layers and/or prompts. An entropy-based ranking and selection
mechanism for text-to-image attention layers is proposed to identify the
top-performing layers without the need for segmentation labels. This eliminates
the need for hyper-parameter selection of text-to-image attention layers,
providing a more flexible and scalable solution for open-vocabulary
segmentation. We show that this approach yields strong zero-shot performance,
further enhanced through fine-tuning with a single visual example. Moreover, we
demonstrate that our method and findings are general and can be applied across
various vision-language models (VLMs).
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 18:18:05 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Hossain",
"Mir Rayat Imtiaz",
""
],
[
"Siam",
"Mennatullah",
""
],
[
"Sigal",
"Leonid",
""
],
[
"Little",
"James J.",
""
]
] | TITLE: The Power of One: A Single Example is All it Takes for Segmentation in
VLMs
ABSTRACT: Large-scale vision-language models (VLMs), trained on extensive datasets of
image-text pairs, exhibit strong multimodal understanding capabilities by
implicitly learning associations between textual descriptions and image
regions. This emergent ability enables zero-shot object detection and
segmentation, using techniques that rely on text-image attention maps, without
necessarily training on abundant labeled segmentation datasets. However,
performance of such methods depends heavily on prompt engineering and manually
selected layers or head choices for the attention layers. In this work, we
demonstrate that, rather than relying solely on textual prompts, providing a
single visual example for each category and fine-tuning the text-to-image
attention layers and embeddings significantly improves the performance.
Additionally, we propose learning an ensemble through few-shot fine-tuning
across multiple layers and/or prompts. An entropy-based ranking and selection
mechanism for text-to-image attention layers is proposed to identify the
top-performing layers without the need for segmentation labels. This eliminates
the need for hyper-parameter selection of text-to-image attention layers,
providing a more flexible and scalable solution for open-vocabulary
segmentation. We show that this approach yields strong zero-shot performance,
further enhanced through fine-tuning with a single visual example. Moreover, we
demonstrate that our method and findings are general and can be applied across
various vision-language models (VLMs).
|
2503.10792 | Wenrui Yu | Yufei Xia, Wenrui Yu, Qiongxiu Li | Byzantine-Resilient Federated Learning via Distributed Optimization | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Byzantine attacks present a critical challenge to Federated Learning (FL),
where malicious participants can disrupt the training process, degrade model
accuracy, and compromise system reliability. Traditional FL frameworks
typically rely on aggregation-based protocols for model updates, leaving them
vulnerable to sophisticated adversarial strategies. In this paper, we
demonstrate that distributed optimization offers a principled and robust
alternative to aggregation-centric methods. Specifically, we show that the
Primal-Dual Method of Multipliers (PDMM) inherently mitigates Byzantine impacts
by leveraging its fault-tolerant consensus mechanism. Through extensive
experiments on three datasets (MNIST, FashionMNIST, and Olivetti), under
various attack scenarios including bit-flipping and Gaussian noise injection,
we validate the superior resilience of distributed optimization protocols.
Compared to traditional aggregation-centric approaches, PDMM achieves higher
model utility, faster convergence, and improved stability. Our results
highlight the effectiveness of distributed optimization in defending against
Byzantine threats, paving the way for more secure and resilient federated
learning systems.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 18:34:42 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Xia",
"Yufei",
""
],
[
"Yu",
"Wenrui",
""
],
[
"Li",
"Qiongxiu",
""
]
] | TITLE: Byzantine-Resilient Federated Learning via Distributed Optimization
ABSTRACT: Byzantine attacks present a critical challenge to Federated Learning (FL),
where malicious participants can disrupt the training process, degrade model
accuracy, and compromise system reliability. Traditional FL frameworks
typically rely on aggregation-based protocols for model updates, leaving them
vulnerable to sophisticated adversarial strategies. In this paper, we
demonstrate that distributed optimization offers a principled and robust
alternative to aggregation-centric methods. Specifically, we show that the
Primal-Dual Method of Multipliers (PDMM) inherently mitigates Byzantine impacts
by leveraging its fault-tolerant consensus mechanism. Through extensive
experiments on three datasets (MNIST, FashionMNIST, and Olivetti), under
various attack scenarios including bit-flipping and Gaussian noise injection,
we validate the superior resilience of distributed optimization protocols.
Compared to traditional aggregation-centric approaches, PDMM achieves higher
model utility, faster convergence, and improved stability. Our results
highlight the effectiveness of distributed optimization in defending against
Byzantine threats, paving the way for more secure and resilient federated
learning systems.
|
2503.10793 | Yu Luo | Yu Luo, Han Zhou, Mengtao Zhang, Dylan De La Rosa, Hafsa Ahmed,
Weifeng Xu, Dianxiang Xu | HALURust: Exploiting Hallucinations of Large Language Models to Detect
Vulnerabilities in Rust | null | null | null | null | cs.CR cs.SE | http://creativecommons.org/licenses/by/4.0/ | As an emerging programming language, Rust has rapidly gained popularity and
recognition among developers due to its strong emphasis on safety. It employs a
unique ownership system and safe concurrency practices to ensure robust safety.
Despite these safeguards, security in Rust still presents challenges. Since
2018, 442 Rust-related vulnerabilities have been reported in real-world
applications. The limited availability of data has resulted in existing
vulnerability detection tools performing poorly in real-world scenarios, often
failing to adapt to new and complex vulnerabilities. This paper introduces
HALURust, a novel framework that leverages hallucinations of large language
models (LLMs) to detect vulnerabilities in real-world Rust scenarios. HALURust
leverages LLMs' strength in natural language generation by transforming code
into detailed vulnerability analysis reports. The key innovation lies in
prompting the LLM to always assume the presence of a vulnerability. If the code
sample is vulnerable, the LLM provides an accurate analysis; if not, it
generates a hallucinated report. By fine-tuning LLMs on these hallucinations,
HALURust can effectively distinguish between vulnerable and non-vulnerable code
samples. HALURust was evaluated on a dataset of 81 real-world vulnerabilities,
covering 447 functions and 18,691 lines of code across 54 applications. It
outperformed existing methods, achieving an F1 score of 77.3%, with over 10%
improvement. The hallucinated report-based fine-tuning improved detection by
20% compared to traditional code-based fine-tuning. Additionally, HALURust
effectively adapted to unseen vulnerabilities and other programming languages,
demonstrating strong generalization capabilities.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 18:38:34 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Luo",
"Yu",
""
],
[
"Zhou",
"Han",
""
],
[
"Zhang",
"Mengtao",
""
],
[
"De La Rosa",
"Dylan",
""
],
[
"Ahmed",
"Hafsa",
""
],
[
"Xu",
"Weifeng",
""
],
[
"Xu",
"Dianxiang",
""
]
] | TITLE: HALURust: Exploiting Hallucinations of Large Language Models to Detect
Vulnerabilities in Rust
ABSTRACT: As an emerging programming language, Rust has rapidly gained popularity and
recognition among developers due to its strong emphasis on safety. It employs a
unique ownership system and safe concurrency practices to ensure robust safety.
Despite these safeguards, security in Rust still presents challenges. Since
2018, 442 Rust-related vulnerabilities have been reported in real-world
applications. The limited availability of data has resulted in existing
vulnerability detection tools performing poorly in real-world scenarios, often
failing to adapt to new and complex vulnerabilities. This paper introduces
HALURust, a novel framework that leverages hallucinations of large language
models (LLMs) to detect vulnerabilities in real-world Rust scenarios. HALURust
leverages LLMs' strength in natural language generation by transforming code
into detailed vulnerability analysis reports. The key innovation lies in
prompting the LLM to always assume the presence of a vulnerability. If the code
sample is vulnerable, the LLM provides an accurate analysis; if not, it
generates a hallucinated report. By fine-tuning LLMs on these hallucinations,
HALURust can effectively distinguish between vulnerable and non-vulnerable code
samples. HALURust was evaluated on a dataset of 81 real-world vulnerabilities,
covering 447 functions and 18,691 lines of code across 54 applications. It
outperformed existing methods, achieving an F1 score of 77.3%, with over 10%
improvement. The hallucinated report-based fine-tuning improved detection by
20% compared to traditional code-based fine-tuning. Additionally, HALURust
effectively adapted to unseen vulnerabilities and other programming languages,
demonstrating strong generalization capabilities.
|
2503.10832 | Jacob Luber | Parisa Boodaghi Malidarreh, Jillur Rahman Saurav, Thuong Le Hoai Pham,
Amir Hajighasemi, Anahita Samadi, Saurabh Shrinivas Maydeo, Mohammad Sadegh
Nasr, Jacob M. Luber | Dual Codebook VQ: Enhanced Image Reconstruction with Reduced Codebook
Size | 15 pages, including main text and supplementary data | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Vector Quantization (VQ) techniques face significant challenges in codebook
utilization, limiting reconstruction fidelity in image modeling. We introduce a
Dual Codebook mechanism that effectively addresses this limitation by
partitioning the representation into complementary global and local components.
The global codebook employs a lightweight transformer for concurrent updates of
all code vectors, while the local codebook maintains precise feature
representation through deterministic selection. This complementary approach is
trained from scratch without requiring pre-trained knowledge. Experimental
evaluation across multiple standard benchmark datasets demonstrates
state-of-the-art reconstruction quality while using a compact codebook of size
512 - half the size of previous methods that require pre-training. Our approach
achieves significant FID improvements across diverse image domains,
particularly excelling in scene and face reconstruction tasks. These results
establish Dual Codebook VQ as an efficient paradigm for high-fidelity image
reconstruction with significantly reduced computational requirements.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 19:31:18 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Malidarreh",
"Parisa Boodaghi",
""
],
[
"Saurav",
"Jillur Rahman",
""
],
[
"Pham",
"Thuong Le Hoai",
""
],
[
"Hajighasemi",
"Amir",
""
],
[
"Samadi",
"Anahita",
""
],
[
"Maydeo",
"Saurabh Shrinivas",
""
],
[
"Nasr",
"Mohammad Sadegh",
""
],
[
"Luber",
"Jacob M.",
""
]
] | TITLE: Dual Codebook VQ: Enhanced Image Reconstruction with Reduced Codebook
Size
ABSTRACT: Vector Quantization (VQ) techniques face significant challenges in codebook
utilization, limiting reconstruction fidelity in image modeling. We introduce a
Dual Codebook mechanism that effectively addresses this limitation by
partitioning the representation into complementary global and local components.
The global codebook employs a lightweight transformer for concurrent updates of
all code vectors, while the local codebook maintains precise feature
representation through deterministic selection. This complementary approach is
trained from scratch without requiring pre-trained knowledge. Experimental
evaluation across multiple standard benchmark datasets demonstrates
state-of-the-art reconstruction quality while using a compact codebook of size
512 - half the size of previous methods that require pre-training. Our approach
achieves significant FID improvements across diverse image domains,
particularly excelling in scene and face reconstruction tasks. These results
establish Dual Codebook VQ as an efficient paradigm for high-fidelity image
reconstruction with significantly reduced computational requirements.
|
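A minimal sketch of the dual-codebook quantization step in the entry above: a feature is matched against a global and a local codebook and the two codes are concatenated, with a straight-through estimator for gradients. The transformer-based concurrent update of the global codebook is omitted, and all sizes and names are hypothetical.

```python
import torch
import torch.nn as nn

class DualCodebookQuantizer(nn.Module):
    """Quantize a feature against two complementary codebooks."""
    def __init__(self, dim=64, global_size=256, local_size=256):
        super().__init__()
        self.global_cb = nn.Embedding(global_size, dim)
        self.local_cb = nn.Embedding(local_size, dim)

    @staticmethod
    def nearest(z, codebook):
        # z: (batch, dim); codebook.weight: (K, dim)
        idx = torch.cdist(z, codebook.weight).argmin(dim=-1)
        return codebook(idx), idx

    def forward(self, z_global, z_local):
        qg, ig = self.nearest(z_global, self.global_cb)
        ql, il = self.nearest(z_local, self.local_cb)
        # Straight-through estimator keeps gradients flowing to the encoder.
        qg = z_global + (qg - z_global).detach()
        ql = z_local + (ql - z_local).detach()
        return torch.cat([qg, ql], dim=-1), (ig, il)

q, _ = DualCodebookQuantizer()(torch.randn(8, 64), torch.randn(8, 64))
```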
2503.10858 | Jiarui Sun | Jiarui Sun, Chin-Chia Michael Yeh, Yujie Fan, Xin Dai, Xiran Fan,
Zhimeng Jiang, Uday Singh Saini, Vivian Lai, Junpeng Wang, Huiyuan Chen,
Zhongfang Zhuang, Yan Zheng, Girish Chowdhary | Towards Efficient Large Scale Spatial-Temporal Time Series Forecasting
via Improved Inverted Transformers | 10 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series forecasting at scale presents significant challenges for modern
prediction systems, particularly when dealing with large sets of synchronized
series, such as in a global payment network. In such systems, three key
challenges must be overcome for accurate and scalable predictions: 1) emergence
of new entities, 2) disappearance of existing entities, and 3) the large number
of entities present in the data. The recently proposed Inverted Transformer
(iTransformer) architecture has shown promising results by effectively handling
variable entities. However, its practical application in large-scale settings
is limited by quadratic time and space complexity ($O(N^2)$) with respect to
the number of entities $N$. In this paper, we introduce EiFormer, an improved
inverted transformer architecture that maintains the adaptive capabilities of
iTransformer while reducing computational complexity to linear scale ($O(N)$).
Our key innovation lies in restructuring the attention mechanism to eliminate
redundant computations without sacrificing model expressiveness. Additionally,
we incorporate a random projection mechanism that not only enhances efficiency
but also improves prediction accuracy through better feature representation.
Extensive experiments on the public LargeST benchmark dataset and a proprietary
large-scale time series dataset demonstrate that EiFormer significantly
outperforms existing methods in both computational efficiency and forecasting
accuracy. Our approach enables practical deployment of transformer-based
forecasting in industrial applications where handling time series at scale is
essential.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 20:14:08 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Sun",
"Jiarui",
""
],
[
"Yeh",
"Chin-Chia Michael",
""
],
[
"Fan",
"Yujie",
""
],
[
"Dai",
"Xin",
""
],
[
"Fan",
"Xiran",
""
],
[
"Jiang",
"Zhimeng",
""
],
[
"Saini",
"Uday Singh",
""
],
[
"Lai",
"Vivian",
""
],
[
"Wang",
"Junpeng",
""
],
[
"Chen",
"Huiyuan",
""
],
[
"Zhuang",
"Zhongfang",
""
],
[
"Zheng",
"Yan",
""
],
[
"Chowdhary",
"Girish",
""
]
] | TITLE: Towards Efficient Large Scale Spatial-Temporal Time Series Forecasting
via Improved Inverted Transformers
ABSTRACT: Time series forecasting at scale presents significant challenges for modern
prediction systems, particularly when dealing with large sets of synchronized
series, such as in a global payment network. In such systems, three key
challenges must be overcome for accurate and scalable predictions: 1) emergence
of new entities, 2) disappearance of existing entities, and 3) the large number
of entities present in the data. The recently proposed Inverted Transformer
(iTransformer) architecture has shown promising results by effectively handling
variable entities. However, its practical application in large-scale settings
is limited by quadratic time and space complexity ($O(N^2)$) with respect to
the number of entities $N$. In this paper, we introduce EiFormer, an improved
inverted transformer architecture that maintains the adaptive capabilities of
iTransformer while reducing computational complexity to linear scale ($O(N)$).
Our key innovation lies in restructuring the attention mechanism to eliminate
redundant computations without sacrificing model expressiveness. Additionally,
we incorporate a random projection mechanism that not only enhances efficiency
but also improves prediction accuracy through better feature representation.
Extensive experiments on the public LargeST benchmark dataset and a proprietary
large-scale time series dataset demonstrate that EiFormer significantly
outperforms existing methods in both computational efficiency and forecasting
accuracy. Our approach enables practical deployment of transformer-based
forecasting in industrial applications where handling time series at scale is
essential.
|
2503.10869 | Ben Winter | Benjamin David Winter, William John Teahan | Evaluating a Novel Neuroevolution and Neural Architecture Search System | 10 pages, 5 figures, IEEE | null | null | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/4.0/ | The choice of neural network features can have a large impact on both the
accuracy and speed of the network. Despite the current industry shift towards
large transformer models, specialized binary classifiers remain critical for
numerous practical applications where computational efficiency and low latency
are essential. Neural network features tend to be developed homogeneously,
resulting in slower or less accurate networks when testing against multiple
datasets. In this paper, we show the effectiveness of Neuvo NAS+, a novel Python
implementation of an extended Neural Architecture Search (NAS+) which allows
the user to optimise the training parameters of a network as well as the
network's architecture. We provide an in-depth analysis of the importance of
tailoring a network's architecture to each dataset. We also describe the design
of the Neuvo NAS+ system that selects network features on a task-specific basis
including network training hyper-parameters such as the number of epochs and
batch size. Results show that the Neuvo NAS+ task-specific approach
significantly outperforms several machine learning approaches such as Naive
Bayes, C4.5, Support Vector Machine and a standard Artificial Neural Network
for solving a range of binary classification problems in terms of accuracy. Our
experiments demonstrate substantial diversity in evolved network architectures
across different datasets, confirming the value of task-specific optimization.
Additionally, Neuvo NAS+ outperforms other evolutionary algorithm optimisers in
terms of both accuracy and computational efficiency, showing that properly
optimized binary classifiers can match or exceed the performance of more
complex models while requiring significantly fewer computational resources.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 20:35:34 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Winter",
"Benjamin David",
""
],
[
"Teahan",
"William John",
""
]
] | TITLE: Evaluating a Novel Neuroevolution and Neural Architecture Search System
ABSTRACT: The choice of neural network features can have a large impact on both the
accuracy and speed of the network. Despite the current industry shift towards
large transformer models, specialized binary classifiers remain critical for
numerous practical applications where computational efficiency and low latency
are essential. Neural network features tend to be developed homogeneously,
resulting in slower or less accurate networks when testing against multiple
datasets. In this paper, we show the effectiveness of Neuvo NAS+, a novel Python
implementation of an extended Neural Architecture Search (NAS+) which allows
the user to optimise the training parameters of a network as well as the
network's architecture. We provide an in-depth analysis of the importance of
tailoring a network's architecture to each dataset. We also describe the design
of the Neuvo NAS+ system that selects network features on a task-specific basis
including network training hyper-parameters such as the number of epochs and
batch size. Results show that the Neuvo NAS+ task-specific approach
significantly outperforms several machine learning approaches such as Naive
Bayes, C4.5, Support Vector Machine and a standard Artificial Neural Network
for solving a range of binary classification problems in terms of accuracy. Our
experiments demonstrate substantial diversity in evolved network architectures
across different datasets, confirming the value of task-specific optimization.
Additionally, Neuvo NAS+ outperforms other evolutionary algorithm optimisers in
terms of both accuracy and computational efficiency, showing that properly
optimized binary classifiers can match or exceed the performance of more
complex models while requiring significantly fewer computational resources.
|
2503.10873 | Pedro Pessoa | Pedro Pessoa, Paul Campitelli, Douglas P. Shepherd, S. Banu Ozkan,
Steve Press\'e | Mamba time series forecasting with uncertainty propagation | null | null | null | null | stat.ML cs.LG nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State space models, such as Mamba, have recently garnered attention in time
series forecasting due to their ability to capture sequence patterns. However,
in electricity consumption benchmarks, Mamba forecasts exhibit a mean error of
approximately 8\%. Similarly, in traffic occupancy benchmarks, the mean error
reaches 18\%. This discrepancy leaves us to wonder whether the prediction is
simply inaccurate or falls within the error expected from the spread in historical data. To
address this limitation, we propose a method to quantify the predictive
uncertainty of Mamba forecasts. Here, we propose a dual-network framework based
on the Mamba architecture for probabilistic forecasting, where one network
generates point forecasts while the other estimates predictive uncertainty by
modeling variance. We abbreviate our tool, Mamba with probabilistic time series
forecasting, as Mamba-ProbTSF and the code for its implementation is available
on GitHub (https://github.com/PessoaP/Mamba-ProbTSF). Evaluating this approach
on synthetic and real-world benchmark datasets, we find Kullback-Leibler
divergence between the learned distributions and the data--which, in the limit
of infinite data, should converge to zero if the model correctly captures the
underlying probability distribution--reduced to the order of $10^{-3}$ for
synthetic data and $10^{-1}$ for real-world benchmark, demonstrating its
effectiveness. We find that in both the electricity consumption and traffic
occupancy benchmark, the true trajectory stays within the predicted uncertainty
interval at the two-sigma level about 95\% of the time. We end with a
discussion of potential limitations, adjustments to improve performance, and
considerations for applying this framework to processes with purely or largely
stochastic dynamics where the stochastic changes accumulate, as observed for
example in pure Brownian motion or molecular dynamics trajectories.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 20:39:38 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Pessoa",
"Pedro",
""
],
[
"Campitelli",
"Paul",
""
],
[
"Shepherd",
"Douglas P.",
""
],
[
"Ozkan",
"S. Banu",
""
],
[
"Pressé",
"Steve",
""
]
] | TITLE: Mamba time series forecasting with uncertainty propagation
ABSTRACT: State space models, such as Mamba, have recently garnered attention in time
series forecasting due to their ability to capture sequence patterns. However,
in electricity consumption benchmarks, Mamba forecasts exhibit a mean error of
approximately 8\%. Similarly, in traffic occupancy benchmarks, the mean error
reaches 18\%. This discrepancy leaves us to wonder whether the prediction is
simply inaccurate or falls within the error expected from the spread in historical data. To
address this limitation, we propose a method to quantify the predictive
uncertainty of Mamba forecasts. Here, we propose a dual-network framework based
on the Mamba architecture for probabilistic forecasting, where one network
generates point forecasts while the other estimates predictive uncertainty by
modeling variance. We abbreviate our tool, Mamba with probabilistic time series
forecasting, as Mamba-ProbTSF and the code for its implementation is available
on GitHub (https://github.com/PessoaP/Mamba-ProbTSF). Evaluating this approach
on synthetic and real-world benchmark datasets, we find Kullback-Leibler
divergence between the learned distributions and the data--which, in the limit
of infinite data, should converge to zero if the model correctly captures the
underlying probability distribution--reduced to the order of $10^{-3}$ for
synthetic data and $10^{-1}$ for real-world benchmark, demonstrating its
effectiveness. We find that in both the electricity consumption and traffic
occupancy benchmark, the true trajectory stays within the predicted uncertainty
interval at the two-sigma level about 95\% of the time. We end with a
discussion of potential limitations, adjustments to improve performance, and
considerations for applying this framework to processes with purely or largely
stochastic dynamics where the stochastic changes accumulate, as observed for
example in pure Brownian motion or molecular dynamics trajectories.
|
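The dual-network design in the entry above can be sketched as a mean network plus a variance network trained with a Gaussian negative log-likelihood; a GRU stands in for the Mamba backbone so the example stays self-contained, and all sizes are hypothetical (the authors' actual code is linked in the abstract).

```python
import torch
import torch.nn as nn

class ProbForecaster(nn.Module):
    """One network predicts the point forecast, the other its log-variance."""
    def __init__(self, d_in=1, d_hidden=64):
        super().__init__()
        self.mean_net = nn.GRU(d_in, d_hidden, batch_first=True)
        self.var_net = nn.GRU(d_in, d_hidden, batch_first=True)
        self.mean_head = nn.Linear(d_hidden, 1)
        self.logvar_head = nn.Linear(d_hidden, 1)

    def forward(self, x):                       # x: (batch, time, d_in)
        h_mu, _ = self.mean_net(x)
        h_lv, _ = self.var_net(x)
        return self.mean_head(h_mu[:, -1]), self.logvar_head(h_lv[:, -1])

def gaussian_nll(mu, logvar, y):
    return 0.5 * (logvar + (y - mu) ** 2 / logvar.exp()).mean()

def two_sigma_coverage(mu, logvar, y):
    """Fraction of targets inside the two-sigma interval (~95% if calibrated)."""
    sigma = (0.5 * logvar).exp()
    return ((y - mu).abs() <= 2 * sigma).float().mean()
```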
2503.10881 | Wendi Cui | Jiaxin Zhang, Zhuohang Li, Wendi Cui, Kamalika Das, Bradley malin,
Sricharan Kumar | SCE: Scalable Consistency Ensembles Make Blackbox Large Language Model
Generation More Reliable | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have demonstrated remarkable performance, yet
their diverse strengths and weaknesses prevent any single LLM from achieving
dominance across all tasks. Ensembling multiple LLMs is a promising approach to
generate reliable responses but conventional ensembling frameworks suffer from
high computational overheads. This work introduces Scalable Consistency
Ensemble (SCE), an efficient framework for ensembling LLMs by prompting
consistent outputs. The SCE framework systematically evaluates and integrates
outputs to produce a cohesive result through two core components: SCE-CHECK, a
mechanism that gauges the consistency between response pairs via semantic
equivalence; and SCE-FUSION, which adeptly merges the highest-ranked consistent
responses from SCE-CHECK to optimize collective strengths and mitigate
potential weaknesses. To improve scalability with multiple inference
queries, we further propose "You Only Prompt Once" (YOPO), a novel
technique that reduces the inference complexity of pairwise comparison from
quadratic to constant time. We perform extensive empirical evaluations on
diverse benchmark datasets to demonstrate SCE's effectiveness. Notably,
the SCE-CHECK component outperforms conventional baselines with enhanced
performance and a significant reduction in computational overhead.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 20:54:28 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Jiaxin",
""
],
[
"Li",
"Zhuohang",
""
],
[
"Cui",
"Wendi",
""
],
[
"Das",
"Kamalika",
""
],
[
"malin",
"Bradley",
""
],
[
"Kumar",
"Sricharan",
""
]
] | TITLE: SCE: Scalable Consistency Ensembles Make Blackbox Large Language Model
Generation More Reliable
ABSTRACT: Large language models (LLMs) have demonstrated remarkable performance, yet
their diverse strengths and weaknesses prevent any single LLM from achieving
dominance across all tasks. Ensembling multiple LLMs is a promising approach to
generate reliable responses but conventional ensembling frameworks suffer from
high computational overheads. This work introduces Scalable Consistency
Ensemble (SCE), an efficient framework for ensembling LLMs by prompting
consistent outputs. The SCE framework systematically evaluates and integrates
outputs to produce a cohesive result through two core components: SCE-CHECK, a
mechanism that gauges the consistency between response pairs via semantic
equivalence; and SCE-FUSION, which adeptly merges the highest-ranked consistent
responses from SCE-CHECK to optimize collective strengths and mitigate
potential weaknesses. To improve scalability with multiple inference
queries, we further propose "You Only Prompt Once" (YOPO), a novel
technique that reduces the inference complexity of pairwise comparison from
quadratic to constant time. We perform extensive empirical evaluations on
diverse benchmark datasets to demonstrate SCE's effectiveness. Notably,
the SCE-CHECK component outperforms conventional baselines with enhanced
performance and a significant reduction in computational overhead.
|
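A toy sketch of the consistency-ensembling idea in the entry above: score each response by how many peers it is semantically equivalent to and return the best-supported one. The equivalence judge is a placeholder (here a normalized string match), and the quadratic pairwise loop is exactly the cost that the paper's YOPO technique is designed to avoid.

```python
def consistency_ensemble(responses, equivalent):
    """Return the response that agrees with the most other responses."""
    scores = []
    for i, r in enumerate(responses):
        agreement = sum(equivalent(r, other)
                        for j, other in enumerate(responses) if j != i)
        scores.append((agreement, i))
    _, best = max(scores)
    return responses[best]

# Placeholder judge: normalized exact match stands in for semantic equivalence.
norm = lambda s: s.strip(" .").lower()
answers = ["Paris", "paris.", "Lyon"]
print(consistency_ensemble(answers, lambda a, b: norm(a) == norm(b)))
```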
2503.10883 | Paul Quinlan | Paul Quinlan, Qingguo Li, Xiaodan Zhu | Chat-TS: Enhancing Multi-Modal Reasoning Over Time-Series and Natural
Language Data | null | null | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Time-series analysis is critical for a wide range of fields such as
healthcare, finance, transportation, and energy, among many others. The
practical applications often involve analyzing time-series data alongside
contextual information in the form of natural language to support informed
decisions. However, current time-series models are limited in their ability to
perform reasoning that involves both time-series and their textual content. In
this work, we address this gap by introducing \textit{Chat-TS}, a large
language model (LLM) based framework, designed to support reasoning over time
series and textual data. Unlike traditional models, Chat-TS integrates
time-series tokens into LLMs' vocabulary, enhancing its reasoning ability over
both modalities without compromising the core natural language capabilities,
enabling practical analysis and reasoning across modalities. To support
learning and evaluation in this setup, we contribute new datasets: the
\textit{TS Instruct Training Dataset} which pairs diverse time-series data with
relevant text instructions and responses for instruction tuning, the \textit{TS
Instruct Question and Answer (QA) Gold Dataset} which provides multiple-choice
questions designed to evaluate multimodal reasoning, and a \textit{TS Instruct
Quantitative Probing Set} which contains a small subset of the TS Instruct QA
tasks alongside math and decision-making questions for LLM evaluation. We
designed a training strategy to preserve the inherent reasoning capabilities of
LLMs while augmenting them for time-series reasoning. Experiments show that
Chat-TS achieves state-of-the-art performance in multi-modal reasoning tasks by
maintaining strong natural language proficiency while improving time-series
reasoning. (To ensure replicability and facilitate future research, all models,
datasets, and code will be available at [Github-URL].)
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 21:05:11 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Quinlan",
"Paul",
""
],
[
"Li",
"Qingguo",
""
],
[
"Zhu",
"Xiaodan",
""
]
] | TITLE: Chat-TS: Enhancing Multi-Modal Reasoning Over Time-Series and Natural
Language Data
ABSTRACT: Time-series analysis is critical for a wide range of fields such as
healthcare, finance, transportation, and energy, among many others. The
practical applications often involve analyzing time-series data alongside
contextual information in the form of natural language to support informed
decisions. However, current time-series models are limited in their ability to
perform reasoning that involves both time-series and their textual content. In
this work, we address this gap by introducing \textit{Chat-TS}, a large
language model (LLM) based framework, designed to support reasoning over time
series and textual data. Unlike traditional models, Chat-TS integrates
time-series tokens into LLMs' vocabulary, enhancing its reasoning ability over
both modalities without compromising the core natural language capabilities,
enabling practical analysis and reasoning across modalities. To support
learning and evaluation in this setup, we contribute new datasets: the
\textit{TS Instruct Training Dataset} which pairs diverse time-series data with
relevant text instructions and responses for instruction tuning, the \textit{TS
Instruct Question and Answer (QA) Gold Dataset} which provides multiple-choice
questions designed to evaluate multimodal reasoning, and a \textit{TS Instruct
Quantitative Probing Set} which contains a small subset of the TS Instruct QA
tasks alongside math and decision-making questions for LLM evaluation. We
designed a training strategy to preserve the inherent reasoning capabilities of
LLMs while augmenting them for time-series reasoning. Experiments show that
Chat-TS achieves state-of-the-art performance in multi-modal reasoning tasks by
maintaining strong natural language proficiency while improving time-series
reasoning. (To ensure replicability and facilitate future research, all models,
datasets, and code will be available at [Github-URL].)
|
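The vocabulary-extension step described in the entry above can be illustrated with the Hugging Face API: add dedicated time-series tokens and resize the embedding matrix before instruction tuning. The binning scheme, token names, and base model ("gpt2") below are hypothetical stand-ins, not the paper's choices.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One new token per discretization bin (256 bins assumed here).
ts_tokens = [f"<ts_{i}>" for i in range(256)]
tokenizer.add_tokens(ts_tokens)
model.resize_token_embeddings(len(tokenizer))  # new rows get trained during tuning

def encode_series(values, lo, hi):
    """Map raw values to the new token ids by uniform binning into [lo, hi]."""
    ids = []
    for v in values:
        b = int((v - lo) / (hi - lo) * 255)
        b = max(0, min(255, b))
        ids.append(tokenizer.convert_tokens_to_ids(f"<ts_{b}>"))
    return ids
```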
2503.10898 | Yizhou Huang | Yizhou Huang, Yihua Cheng, Kezhi Wang | Trajectory Mamba: Efficient Attention-Mamba Forecasting Model Based on
Selective SSM | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Motion prediction is crucial for autonomous driving, as it enables accurate
forecasting of future vehicle trajectories based on historical inputs. This
paper introduces Trajectory Mamba, a novel efficient trajectory prediction
framework based on the selective state-space model (SSM). Conventional
attention-based models face the challenge of computational costs that grow
quadratically with the number of targets, hindering their application in highly
dynamic environments. In response, we leverage the SSM to redesign the
self-attention mechanism in the encoder-decoder architecture, thereby achieving
linear time complexity. To address the potential reduction in prediction
accuracy resulting from modifications to the attention mechanism, we propose a
joint polyline encoding strategy to better capture the associations between
static and dynamic contexts, ultimately enhancing prediction accuracy.
Additionally, to balance prediction accuracy and inference speed, we adopted
a decoder that differs entirely from the encoder. Through cross-state space
attention, all target agents share the scene context, allowing the SSM to
interact with the shared scene representation during decoding, thus inferring
different trajectories over the next prediction steps. Our model achieves
state-of-the-art results in terms of inference speed and parameter efficiency
on both the Argoverse 1 and Argoverse 2 datasets. It demonstrates a four-fold
reduction in FLOPs compared to existing methods and reduces parameter count by
over 40% while surpassing the performance of the vast majority of previous
methods. These findings validate the effectiveness of Trajectory Mamba in
trajectory prediction tasks.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 21:31:12 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Huang",
"Yizhou",
""
],
[
"Cheng",
"Yihua",
""
],
[
"Wang",
"Kezhi",
""
]
] | TITLE: Trajectory Mamba: Efficient Attention-Mamba Forecasting Model Based on
Selective SSM
ABSTRACT: Motion prediction is crucial for autonomous driving, as it enables accurate
forecasting of future vehicle trajectories based on historical inputs. This
paper introduces Trajectory Mamba, a novel efficient trajectory prediction
framework based on the selective state-space model (SSM). Conventional
attention-based models face the challenge of computational costs that grow
quadratically with the number of targets, hindering their application in highly
dynamic environments. In response, we leverage the SSM to redesign the
self-attention mechanism in the encoder-decoder architecture, thereby achieving
linear time complexity. To address the potential reduction in prediction
accuracy resulting from modifications to the attention mechanism, we propose a
joint polyline encoding strategy to better capture the associations between
static and dynamic contexts, ultimately enhancing prediction accuracy.
Additionally, to balance prediction accuracy and inference speed, we adopted
a decoder that differs entirely from the encoder. Through cross-state space
attention, all target agents share the scene context, allowing the SSM to
interact with the shared scene representation during decoding, thus inferring
different trajectories over the next prediction steps. Our model achieves
state-of-the-art results in terms of inference speed and parameter efficiency
on both the Argoverse 1 and Argoverse 2 datasets. It demonstrates a four-fold
reduction in FLOPs compared to existing methods and reduces parameter count by
over 40% while surpassing the performance of the vast majority of previous
methods. These findings validate the effectiveness of Trajectory Mamba in
trajectory prediction tasks.
|
2503.10907 | Xueting Luo | Xueting Luo, Hao Deng, Jihong Yang, Yao Shen, Huanhuan Guo, Zhiyuan
Sun, Mingqing Liu, Jiming Wei, Shengjie Zhao | H2-MARL: Multi-Agent Reinforcement Learning for Pareto Optimality in
Hospital Capacity Strain and Human Mobility during Epidemic | null | null | null | null | cs.MA cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The necessity of achieving an effective balance between minimizing the losses
associated with restricting human mobility and ensuring hospital capacity has
gained significant attention in the aftermath of COVID-19. Reinforcement
learning (RL)-based strategies for human mobility management have recently
advanced in addressing the dynamic evolution of cities and epidemics; however,
they still face challenges in achieving coordinated control at the township
level and adapting to cities of varying scales. To address the above issues, we
propose a multi-agent RL approach that achieves Pareto optimality in managing
hospital capacity and human mobility (H2-MARL), applicable across cities of
different scales. We first develop a township-level infection model with
online-updatable parameters to simulate disease transmission and construct a
city-wide dynamic spatiotemporal epidemic simulator. On this basis, H2-MARL is
designed to treat each division as an agent, formulating a trade-off
dual-objective reward function and building an experience replay buffer
enriched with expert knowledge. To evaluate the effectiveness of the model, we construct a
township-level human mobility dataset containing over one billion records from
four representative cities of varying scales. Extensive experiments demonstrate
that H2-MARL has the optimal dual-objective trade-off capability, which can
minimize hospital capacity strain while minimizing human mobility restriction
loss. Meanwhile, the applicability of the proposed model to epidemic control in
cities of varying scales is verified, which showcases its feasibility and
versatility in practical applications.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 21:40:07 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Luo",
"Xueting",
""
],
[
"Deng",
"Hao",
""
],
[
"Yang",
"Jihong",
""
],
[
"Shen",
"Yao",
""
],
[
"Guo",
"Huanhuan",
""
],
[
"Sun",
"Zhiyuan",
""
],
[
"Liu",
"Mingqing",
""
],
[
"Wei",
"Jiming",
""
],
[
"Zhao",
"Shengjie",
""
]
] | TITLE: H2-MARL: Multi-Agent Reinforcement Learning for Pareto Optimality in
Hospital Capacity Strain and Human Mobility during Epidemic
ABSTRACT: The necessity of achieving an effective balance between minimizing the losses
associated with restricting human mobility and ensuring hospital capacity has
gained significant attention in the aftermath of COVID-19. Reinforcement
learning (RL)-based strategies for human mobility management have recently
advanced in addressing the dynamic evolution of cities and epidemics; however,
they still face challenges in achieving coordinated control at the township
level and adapting to cities of varying scales. To address the above issues, we
propose a multi-agent RL approach that achieves Pareto optimality in managing
hospital capacity and human mobility (H2-MARL), applicable across cities of
different scales. We first develop a township-level infection model with
online-updatable parameters to simulate disease transmission and construct a
city-wide dynamic spatiotemporal epidemic simulator. On this basis, H2-MARL is
designed to treat each division as an agent, formulating a trade-off
dual-objective reward function and building an experience replay buffer
enriched with expert knowledge. To evaluate the effectiveness of the model, we construct a
township-level human mobility dataset containing over one billion records from
four representative cities of varying scales. Extensive experiments demonstrate
that H2-MARL has the optimal dual-objective trade-off capability, which can
minimize hospital capacity strain while minimizing human mobility restriction
loss. Meanwhile, the applicability of the proposed model to epidemic control in
cities of varying scales is verified, which showcases its feasibility and
versatility in practical applications.
|
2503.10908 | Ben Winter | Benjamin David Winter, William J. Teahan | Ecological Neural Architecture Search | 5 pages, 4 figures | null | null | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/4.0/ | When employing an evolutionary algorithm to optimize a neural network's
architecture, developers face the added challenge of tuning the evolutionary
algorithm's own hyperparameters - population size, mutation rate, cloning rate,
and number of generations. This paper introduces Neuvo Ecological Neural
Architecture Search (ENAS), a novel method that incorporates these evolutionary
parameters directly into the candidate solutions' phenotypes, allowing them to
evolve dynamically alongside architecture specifications. Experimental results
across four binary classification datasets demonstrate that ENAS not only
eliminates manual tuning of evolutionary parameters but also outperforms
competitor NAS methodologies in convergence speed (reducing computational time
by 18.3%) and accuracy (improving classification performance in 3 out of 4
datasets). By enabling "greedy individuals" to optimize resource allocation
based on fitness, ENAS provides an efficient, self-regulating approach to
neural architecture search.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 21:40:25 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Winter",
"Benjamin David",
""
],
[
"Teahan",
"William J.",
""
]
] | TITLE: Ecological Neural Architecture Search
ABSTRACT: When employing an evolutionary algorithm to optimize a neural network's
architecture, developers face the added challenge of tuning the evolutionary
algorithm's own hyperparameters - population size, mutation rate, cloning rate,
and number of generations. This paper introduces Neuvo Ecological Neural
Architecture Search (ENAS), a novel method that incorporates these evolutionary
parameters directly into the candidate solutions' phenotypes, allowing them to
evolve dynamically alongside architecture specifications. Experimental results
across four binary classification datasets demonstrate that ENAS not only
eliminates manual tuning of evolutionary parameters but also outperforms
competitor NAS methodologies in convergence speed (reducing computational time
by 18.3%) and accuracy (improving classification performance in 3 out of 4
datasets). By enabling "greedy individuals" to optimize resource allocation
based on fitness, ENAS provides an efficient, self-regulating approach to
neural architecture search.
|
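The core idea in the entry above (evolutionary parameters living inside the phenotype) resembles classic self-adaptation; below is a toy sketch where each individual carries its own mutation rate, which itself mutates. The genome fields and ranges are hypothetical, and the paper's full ENAS genome (population size, cloning rate, number of generations) is not reproduced.

```python
import random

CHOICES = [16, 32, 64]

def make_individual():
    """Genome = architecture genes plus the individual's own mutation rate."""
    return {
        "layers": [random.choice(CHOICES) for _ in range(random.randint(1, 4))],
        "activation": random.choice(["relu", "tanh"]),
        "mutation_rate": random.uniform(0.01, 0.3),
    }

def mutate(ind):
    child = {"layers": ind["layers"][:], "activation": ind["activation"],
             "mutation_rate": ind["mutation_rate"]}
    if random.random() < ind["mutation_rate"]:
        child["layers"][random.randrange(len(child["layers"]))] = random.choice(CHOICES)
    if random.random() < ind["mutation_rate"]:
        child["activation"] = random.choice(["relu", "tanh"])
    # Self-adaptation: the evolutionary parameter evolves along with the network.
    child["mutation_rate"] = min(0.5, max(0.005,
        ind["mutation_rate"] * random.uniform(0.8, 1.25)))
    return child

child = mutate(make_individual())
```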
2503.10913 | Daniel Panangian | Chaikal Amrullah, Daniel Panangian, Ksenia Bittner | PolyRoof: Precision Roof Polygonization in Urban Residential Building
with Graph Neural Networks | Accepted to Joint Urban Remote Sensing Event (JURSE) 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growing demand for detailed building roof data has driven the development
of automated extraction methods to overcome the inefficiencies of traditional
approaches, particularly in handling complex variations in building geometries.
Re:PolyWorld, which integrates point detection with graph neural networks,
presents a promising solution for reconstructing high-detail building roof
vector data. This study enhances Re:PolyWorld's performance on complex urban
residential structures by incorporating attention-based backbones and
additional area segmentation loss. Despite dataset limitations, our experiments
demonstrated improvements in point position accuracy (1.33 pixels) and line
distance accuracy (14.39 pixels), along with a notable increase in the
reconstruction score to 91.99%. These findings highlight the potential of
advanced neural network architectures in addressing the challenges of complex
urban residential geometries.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 21:52:33 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Amrullah",
"Chaikal",
""
],
[
"Panangian",
"Daniel",
""
],
[
"Bittner",
"Ksenia",
""
]
] | TITLE: PolyRoof: Precision Roof Polygonization in Urban Residential Building
with Graph Neural Networks
ABSTRACT: The growing demand for detailed building roof data has driven the development
of automated extraction methods to overcome the inefficiencies of traditional
approaches, particularly in handling complex variations in building geometries.
Re:PolyWorld, which integrates point detection with graph neural networks,
presents a promising solution for reconstructing high-detail building roof
vector data. This study enhances Re:PolyWorld's performance on complex urban
residential structures by incorporating attention-based backbones and
additional area segmentation loss. Despite dataset limitations, our experiments
demonstrated improvements in point position accuracy (1.33 pixels) and line
distance accuracy (14.39 pixels), along with a notable increase in the
reconstruction score to 91.99%. These findings highlight the potential of
advanced neural network architectures in addressing the challenges of complex
urban residential geometries.
|
2503.10925 | Michael Albada Mi | Michael Albada | Predicting Clinical Outcomes with Waveform LSTMs | 7 pages. arXiv admin note: text overlap with arXiv:1803.06589 by
other authors | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Data mining and machine learning hold great potential to enable health
systems to systematically use data and analytics to identify inefficiencies and
best practices that improve care and reduce costs. Waveform data offers
particularly detailed information on how patient health evolves over time and
has the potential to significantly improve prediction accuracy on multiple
benchmarks, but has been widely under-utilized, largely because of the
challenges in working with these large and complex datasets. This study
evaluates the potential of leveraging clinical waveform data to improve
prediction accuracy on a single benchmark task: the risk of mortality in the
intensive care unit. We identify significant potential from this data, beating
the existing baselines for both logistic regression and deep learning models.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 22:19:05 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Albada",
"Michael",
""
]
] | TITLE: Predicting Clinical Outcomes with Waveform LSTMs
ABSTRACT: Data mining and machine learning hold great potential to enable health
systems to systematically use data and analytics to identify inefficiencies and
best practices that improve care and reduce costs. Waveform data offers
particularly detailed information on how patient health evolves over time and
has the potential to significantly improve prediction accuracy on multiple
benchmarks, but has been widely under-utilized, largely because of the
challenges in working with these large and complex datasets. This study
evaluates the potential of leveraging clinical waveform data to improve
prediction accuracy on a single benchmark task: the risk of mortality in the
intensive care unit. We identify significant potential from this data, beating
the existing baselines for both logistic regression and deep learning models.
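The study's waveform models can be illustrated with a minimal PyTorch LSTM classifier over fixed-length waveform windows, shown below. The channel count, hidden size, and binary mortality head are assumptions for the sketch, not the study's architecture.

import torch
import torch.nn as nn

class WaveformLSTM(nn.Module):
    def __init__(self, n_channels: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # logit for ICU mortality

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) waveform segment
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)

model = WaveformLSTM()
logits = model(torch.randn(8, 500, 4))    # 8 patients, 500 samples, 4 channels
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (8,)).float())
loss.backward()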
|
2503.10931 | Anirudh Nanduri | Anirudh Nanduri, Siyuan Huang, Rama Chellappa | Multi-Domain Biometric Recognition using Body Embeddings | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Biometric recognition becomes increasingly challenging as we move away from
the visible spectrum to infrared imagery, where domain discrepancies
significantly impact identification performance. In this paper, we show that
body embeddings perform better than face embeddings for cross-spectral person
identification in medium-wave infrared (MWIR) and long-wave infrared (LWIR)
domains. Due to the lack of multi-domain datasets, previous research on
cross-spectral body identification - also known as Visible-Infrared Person
Re-Identification (VI-ReID) - has primarily focused on individual infrared
bands, such as near-infrared (NIR) or LWIR, separately. We address the
multi-domain body recognition problem using the IARPA Janus Benchmark
Multi-Domain Face (IJB-MDF) dataset, which enables matching of short-wave
infrared (SWIR), MWIR, and LWIR images against RGB (VIS) images. We leverage a
vision transformer architecture to establish benchmark results on the IJB-MDF
dataset and, through extensive experiments, provide valuable insights into the
interrelation of infrared domains, the adaptability of VIS-pretrained models,
the role of local semantic features in body-embeddings, and effective training
strategies for small datasets. Additionally, we show that finetuning a body
model, pretrained exclusively on VIS data, with a simple combination of
cross-entropy and triplet losses achieves state-of-the-art mAP scores on the
LLCM dataset.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 22:38:18 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Nanduri",
"Anirudh",
""
],
[
"Huang",
"Siyuan",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: Multi-Domain Biometric Recognition using Body Embeddings
ABSTRACT: Biometric recognition becomes increasingly challenging as we move away from
the visible spectrum to infrared imagery, where domain discrepancies
significantly impact identification performance. In this paper, we show that
body embeddings perform better than face embeddings for cross-spectral person
identification in medium-wave infrared (MWIR) and long-wave infrared (LWIR)
domains. Due to the lack of multi-domain datasets, previous research on
cross-spectral body identification - also known as Visible-Infrared Person
Re-Identification (VI-ReID) - has primarily focused on individual infrared
bands, such as near-infrared (NIR) or LWIR, separately. We address the
multi-domain body recognition problem using the IARPA Janus Benchmark
Multi-Domain Face (IJB-MDF) dataset, which enables matching of short-wave
infrared (SWIR), MWIR, and LWIR images against RGB (VIS) images. We leverage a
vision transformer architecture to establish benchmark results on the IJB-MDF
dataset and, through extensive experiments, provide valuable insights into the
interrelation of infrared domains, the adaptability of VIS-pretrained models,
the role of local semantic features in body-embeddings, and effective training
strategies for small datasets. Additionally, we show that finetuning a body
model, pretrained exclusively on VIS data, with a simple combination of
cross-entropy and triplet losses achieves state-of-the-art mAP scores on the
LLCM dataset.
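The fine-tuning recipe described above, a simple combination of cross-entropy and triplet losses, can be sketched in PyTorch as follows. The margin, loss weighting, and tensor shapes are assumptions for illustration.

import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()
triplet = nn.TripletMarginLoss(margin=0.3)   # margin is an assumption

def combined_loss(logits, labels, anchor, positive, negative, w_tri=1.0):
    # logits: (B, num_ids); anchor/positive/negative: (B, D) body embeddings
    return ce(logits, labels) + w_tri * triplet(anchor, positive, negative)

# Toy usage with random tensors standing in for model outputs.
B, D, num_ids = 16, 256, 100
logits = torch.randn(B, num_ids, requires_grad=True)
emb = lambda: torch.randn(B, D, requires_grad=True)
loss = combined_loss(logits, torch.randint(0, num_ids, (B,)), emb(), emb(), emb())
loss.backward()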
|
2503.10937 | Haoyu Zhang | Haoyu Zhang, Raghavendra Ramachandra, Kiran Raja, Christoph Busch | ChatGPT Encounters Morphing Attack Detection: Zero-Shot MAD with
Multi-Modal Large Language Models and General Vision Models | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Face Recognition Systems (FRS) are increasingly vulnerable to face-morphing
attacks, prompting the development of Morphing Attack Detection (MAD)
algorithms. However, a key challenge in MAD lies in its limited
generalizability to unseen data and its lack of explainability - critical for
practical application environments such as enrolment stations and automated
border control systems. Recognizing that most existing MAD algorithms rely on
supervised learning paradigms, this work explores a novel approach to MAD using
zero-shot learning leveraged on Large Language Models (LLMs). We propose two
types of zero-shot MAD algorithms: one leveraging general vision models and the
other utilizing multimodal LLMs. For general vision models, we address the MAD
task by computing the mean support embedding of an independent support set
without using morphed images. For the LLM-based approach, we employ the
state-of-the-art GPT-4 Turbo API with carefully crafted prompts. To evaluate
the feasibility of zero-shot MAD and the effectiveness of the proposed methods,
we constructed a print-scan morph dataset featuring various unseen morphing
algorithms, simulating challenging real-world application scenarios.
Experimental results demonstrated notable detection accuracy, validating the
applicability of zero-shot learning for MAD tasks. Additionally, our
investigation into LLM-based MAD revealed that multimodal LLMs, such as
ChatGPT, exhibit remarkable generalizability to untrained MAD tasks.
Furthermore, they possess a unique ability to provide explanations and
guidance, which can enhance transparency and usability for end-users in
practical applications.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 22:53:24 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Haoyu",
""
],
[
"Ramachandra",
"Raghavendra",
""
],
[
"Raja",
"Kiran",
""
],
[
"Busch",
"Christoph",
""
]
] | TITLE: ChatGPT Encounters Morphing Attack Detection: Zero-Shot MAD with
Multi-Modal Large Language Models and General Vision Models
ABSTRACT: Face Recognition Systems (FRS) are increasingly vulnerable to face-morphing
attacks, prompting the development of Morphing Attack Detection (MAD)
algorithms. However, a key challenge in MAD lies in its limited
generalizability to unseen data and its lack of explainability - critical for
practical application environments such as enrolment stations and automated
border control systems. Recognizing that most existing MAD algorithms rely on
supervised learning paradigms, this work explores a novel approach to MAD using
zero-shot learning leveraged on Large Language Models (LLMs). We propose two
types of zero-shot MAD algorithms: one leveraging general vision models and the
other utilizing multimodal LLMs. For general vision models, we address the MAD
task by computing the mean support embedding of an independent support set
without using morphed images. For the LLM-based approach, we employ the
state-of-the-art GPT-4 Turbo API with carefully crafted prompts. To evaluate
the feasibility of zero-shot MAD and the effectiveness of the proposed methods,
we constructed a print-scan morph dataset featuring various unseen morphing
algorithms, simulating challenging real-world application scenarios.
Experimental results demonstrated notable detection accuracy, validating the
applicability of zero-shot learning for MAD tasks. Additionally, our
investigation into LLM-based MAD revealed that multimodal LLMs, such as
ChatGPT, exhibit remarkable generalizability to untrained MAD tasks.
Furthermore, they possess a unique ability to provide explanations and
guidance, which can enhance transparency and usability for end-users in
practical applications.
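For the general-vision-model variant, the abstract describes scoring test images against the mean support embedding of an independent, morph-free support set. A hedged sketch of that idea follows; the embed placeholder stands in for any pretrained vision encoder, and the cosine-distance score and any threshold applied to it are assumptions rather than the paper's exact rule.

import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder encoder: replace with a real pretrained vision model.
    return np.resize(image.reshape(-1), 512)

def l2_normalize(v: np.ndarray) -> np.ndarray:
    return v / (np.linalg.norm(v) + 1e-12)

def morph_score(test_image, support_images) -> float:
    support = np.stack([l2_normalize(embed(im)) for im in support_images])
    mean_support = l2_normalize(support.mean(axis=0))
    test_vec = l2_normalize(embed(test_image))
    return 1.0 - float(test_vec @ mean_support)   # cosine distance to the support mean

support = [np.random.rand(224, 224, 3) for _ in range(10)]   # bona fide support images
print(morph_score(np.random.rand(224, 224, 3), support))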
|
2503.10944 | Blake Gatto | SE Blake | Phishsense-1B: A Technical Perspective on an AI-Powered Phishing
Detection Model | Phishing Detection Model
https://huggingface.co/AcuteShrewdSecurity/Llama-Phishsense-1B | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Phishing is a persistent cybersecurity threat in today's digital landscape.
This paper introduces Phishsense-1B, a refined version of the Llama-Guard-3-1B
model, specifically tailored for phishing detection and reasoning. This
adaptation utilizes Low-Rank Adaptation (LoRA) and the GuardReasoner finetuning
methodology. We outline our LoRA-based fine-tuning process, describe the
balanced dataset comprising phishing and benign emails, and highlight
significant performance improvements over the original model. Our findings
indicate that Phishsense-1B achieves an impressive 97.5% accuracy on a custom
dataset and maintains strong performance with 70% accuracy on a challenging
real-world dataset. This performance notably surpasses both unadapted models
and BERT-based detectors. Additionally, we examine current state-of-the-art
detection methods, compare prompt-engineering with fine-tuning strategies, and
explore potential deployment scenarios.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 23:03:09 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Blake",
"SE",
""
]
] | TITLE: Phishsense-1B: A Technical Perspective on an AI-Powered Phishing
Detection Model
ABSTRACT: Phishing is a persistent cybersecurity threat in today's digital landscape.
This paper introduces Phishsense-1B, a refined version of the Llama-Guard-3-1B
model, specifically tailored for phishing detection and reasoning. This
adaptation utilizes Low-Rank Adaptation (LoRA) and the GuardReasoner finetuning
methodology. We outline our LoRA-based fine-tuning process, describe the
balanced dataset comprising phishing and benign emails, and highlight
significant performance improvements over the original model. Our findings
indicate that Phishsense-1B achieves an impressive 97.5% accuracy on a custom
dataset and maintains strong performance with 70% accuracy on a challenging
real-world dataset. This performance notably surpasses both unadapted models
and BERT-based detectors. Additionally, we examine current state-of-the-art
detection methods, compare prompt-engineering with fine-tuning strategies, and
explore potential deployment scenarios.
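A hedged sketch of a LoRA fine-tuning setup in the spirit of the abstract, using the Hugging Face peft library, is shown below. The base checkpoint id, rank, dropout, and target modules are illustrative assumptions, not the authors' exact configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-Guard-3-1B"   # assumed base checkpoint (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

lora_cfg = LoraConfig(
    r=8,                                  # low-rank dimension (assumption)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()        # only the LoRA adapters are trainable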
|
2503.10950 | Xingfei Wei | Xingfei Wei, Qiankun Mo, Chi Chen, Mark Bathe, and Rigoberto Hernandez | DNA Origami Nanostructures Observed in Transmission Electron Microscopy
Images can be Characterized through Convolutional Neural Networks | null | null | null | null | physics.chem-ph cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence (AI) models remain an emerging strategy to accelerate
materials design and development. We demonstrate that convolutional neural
network (CNN) models can characterize DNA origami nanostructures employed in
programmable self-assembling, which is important in many applications such as
in biomedicine. Specifically, we benchmark the performance of 9 CNN models --
viz. AlexNet, GoogLeNet, VGG16, VGG19, ResNet18, ResNet34, ResNet50, ResNet101,
and ResNet152 -- to characterize the ligation number of DNA origami
nanostructures in transmission electron microscopy (TEM) images. We first
pre-train CNN models using a large image dataset of 720 images from our
coarse-grained (CG) molecular dynamics (MD) simulations. Then, we fine-tune the
pre-trained CNN models, using a small experimental TEM dataset with 146 TEM
images. All CNN models were found to have similar computational time
requirements, while their model sizes and performances are different. We use 20
test MD images to demonstrate that among all of the pre-trained CNN models
ResNet50 and VGG16 have the highest and second highest accuracies. Among the
fine-tuned models, VGG16 was found to have the highest agreement on the test
TEM images. Thus, we conclude that fine-tuned VGG16 models can quickly
characterize the ligation number of nanostructures in large TEM images.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 23:31:10 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wei",
"Xingfei",
""
],
[
"Mo",
"Qiankun",
""
],
[
"Chen",
"Chi",
""
],
[
"Bathe",
"Mark",
""
],
[
"Hernandez",
"Rigoberto",
""
]
] | TITLE: DNA Origami Nanostructures Observed in Transmission Electron Microscopy
Images can be Characterized through Convolutional Neural Networks
ABSTRACT: Artificial intelligence (AI) models remain an emerging strategy to accelerate
materials design and development. We demonstrate that convolutional neural
network (CNN) models can characterize DNA origami nanostructures employed in
programmable self-assembling, which is important in many applications such as
in biomedicine. Specifically, we benchmark the performance of 9 CNN models --
viz. AlexNet, GoogLeNet, VGG16, VGG19, ResNet18, ResNet34, ResNet50, ResNet101,
and ResNet152 -- to characterize the ligation number of DNA origami
nanostructures in transmission electron microscopy (TEM) images. We first
pre-train CNN models using a large image dataset of 720 images from our
coarse-grained (CG) molecular dynamics (MD) simulations. Then, we fine-tune the
pre-trained CNN models, using a small experimental TEM dataset with 146 TEM
images. All CNN models were found to have similar computational time
requirements, while their model sizes and performances are different. We use 20
test MD images to demonstrate that among all of the pre-trained CNN models
ResNet50 and VGG16 have the highest and second highest accuracies. Among the
fine-tuned models, VGG16 was found to have the highest agreement on the test
TEM images. Thus, we conclude that fine-tuned VGG16 models can quickly
characterize the ligation number of nanostructures in large TEM images.
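The fine-tuning step described above can be sketched with torchvision by swapping VGG16's final classifier layer; the number of ligation-number classes and the choice to freeze the convolutional features are assumptions.

import torch.nn as nn
from torchvision import models

num_classes = 4   # assumed number of ligation-number categories
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

for p in model.features.parameters():     # optionally freeze the convolutional backbone
    p.requires_grad = False

model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
# The model is now ready for a standard cross-entropy fine-tuning loop on TEM images.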
|
2503.10957 | Michael Albada | Michael Charles Albada and Mojolaoluwa Joshua Sonola | Predicting Stock Movement with BERTweet and Transformers | 9 pages, 4 figures, 2 tables | null | null | null | cs.LG cs.AI cs.CE | http://creativecommons.org/licenses/by/4.0/ | Applying deep learning and computational intelligence to finance has been a
popular area of applied research, both within academia and industry, and
continues to attract active attention. The inherently high volatility and
non-stationary of the data pose substantial challenges to machine learning
models, especially so for today's expressive and highly-parameterized deep
learning models. Recent work has combined natural language processing on data
from social media to augment models based purely on historic price data to
improve performance has received particular attention. Previous work has
achieved state-of-the-art performance on this task by combining techniques such
as bidirectional GRUs, variational autoencoders, word and document embeddings,
self-attention, graph attention, and adversarial training. In this paper, we
demonstrated the efficacy of BERTweet, a variant of BERT pre-trained
specifically on a Twitter corpus, and the transformer architecture by achieving
competitive performance with the existing literature and setting a new baseline
for Matthews Correlation Coefficient on the Stocknet dataset without auxiliary
data sources.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 23:46:24 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Albada",
"Michael Charles",
""
],
[
"Sonola",
"Mojolaoluwa Joshua",
""
]
] | TITLE: Predicting Stock Movement with BERTweet and Transformers
ABSTRACT: Applying deep learning and computational intelligence to finance has been a
popular area of applied research, both within academia and industry, and
continues to attract active attention. The inherently high volatility and
non-stationary of the data pose substantial challenges to machine learning
models, especially so for today's expressive and highly-parameterized deep
learning models. Recent work has combined natural language processing on data
from social media to augment models based purely on historic price data to
improve performance has received particular attention. Previous work has
achieved state-of-the-art performance on this task by combining techniques such
as bidirectional GRUs, variational autoencoders, word and document embeddings,
self-attention, graph attention, and adversarial training. In this paper, we
demonstrated the efficacy of BERTweet, a variant of BERT pre-trained
specifically on a Twitter corpus, and the transformer architecture by achieving
competitive performance with the existing literature and setting a new baseline
for Matthews Correlation Coefficient on the Stocknet dataset without auxiliary
data sources.
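A minimal sketch of loading the public vinai/bertweet-base checkpoint for sequence classification and scoring predictions with the Matthews Correlation Coefficient follows; the two-class head and the toy labels are assumptions, and the fine-tuning loop itself is omitted.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import matthews_corrcoef

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "vinai/bertweet-base", num_labels=2)   # 2 classes: price up / down

batch = tokenizer(["$AAPL rallies after earnings beat"], return_tensors="pt")
with torch.no_grad():
    pred = model(**batch).logits.argmax(dim=-1)
print(pred)

# After fine-tuning, report MCC as in the abstract (toy labels shown here).
y_true, y_pred = [1, 0, 1, 1, 0], [1, 0, 0, 1, 0]
print(matthews_corrcoef(y_true, y_pred))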
|
2503.10982 | Reef Alturki | Reef Alturki, Adrian Hilton, Jean-Yves Guillemaut | Enhanced Multi-View Pedestrian Detection Using Probabilistic Occupancy
Volume | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Occlusion poses a significant challenge in pedestrian detection from a single
view. To address this, multi-view detection systems have been utilized to
aggregate information from multiple perspectives. Recent advances in multi-view
detection utilized an early-fusion strategy that strategically projects the
features onto the ground plane, where detection analysis is performed. A
promising approach in this context is the use of the 3D feature-pulling technique,
which constructs a 3D feature volume of the scene by sampling the corresponding
2D features for each voxel. However, it creates a 3D feature volume of the
whole scene without considering the potential locations of pedestrians. In this
paper, we introduce a novel model that efficiently leverages traditional 3D
reconstruction techniques to enhance deep multi-view pedestrian detection. This
is accomplished by complementing the 3D feature volume with probabilistic
occupancy volume, which is constructed using the visual hull technique. The
probabilistic occupancy volume focuses the model's attention on regions
occupied by pedestrians and improves detection accuracy. Our model outperforms
state-of-the-art models on the MultiviewX dataset, with an MODA of 97.3%, while
achieving competitive performance on the Wildtrack dataset.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 01:05:44 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Alturki",
"Reef",
""
],
[
"Hilton",
"Adrian",
""
],
[
"Guillemaut",
"Jean-Yves",
""
]
] | TITLE: Enhanced Multi-View Pedestrian Detection Using Probabilistic Occupancy
Volume
ABSTRACT: Occlusion poses a significant challenge in pedestrian detection from a single
view. To address this, multi-view detection systems have been utilized to
aggregate information from multiple perspectives. Recent advances in multi-view
detection utilized an early-fusion strategy that strategically projects the
features onto the ground plane, where detection analysis is performed. A
promising approach in this context is the use of the 3D feature-pulling technique,
which constructs a 3D feature volume of the scene by sampling the corresponding
2D features for each voxel. However, it creates a 3D feature volume of the
whole scene without considering the potential locations of pedestrians. In this
paper, we introduce a novel model that efficiently leverages traditional 3D
reconstruction techniques to enhance deep multi-view pedestrian detection. This
is accomplished by complementing the 3D feature volume with probabilistic
occupancy volume, which is constructed using the visual hull technique. The
probabilistic occupancy volume focuses the model's attention on regions
occupied by pedestrians and improves detection accuracy. Our model outperforms
state-of-the-art models on the MultiviewX dataset, with an MODA of 97.3%, while
achieving competitive performance on the Wildtrack dataset.
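A hedged sketch of a visual-hull-style probabilistic occupancy volume is given below: each voxel's occupancy is the product of per-camera foreground probabilities sampled at its projection. The camera matrices, map sizes, and the product rule are assumptions and do not reproduce the paper's construction.

import numpy as np

def occupancy_volume(voxels, proj_mats, fg_maps):
    # voxels: (V, 3) world points; proj_mats: list of (3, 4) camera matrices;
    # fg_maps: list of (H, W) foreground-probability maps, one per camera.
    V = voxels.shape[0]
    occ = np.ones(V)
    homo = np.concatenate([voxels, np.ones((V, 1))], axis=1)      # (V, 4)
    for P, fg in zip(proj_mats, fg_maps):
        uvw = homo @ P.T                                          # (V, 3)
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        H, W = fg.shape
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (uvw[:, 2] > 0)
        prob = np.zeros(V)
        prob[inside] = fg[v[inside], u[inside]]
        occ *= prob                                               # hull-style conjunction
    return occ

# Toy usage: two cameras with random foreground maps and a random voxel set.
cams = [np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])]) for _ in range(2)]
maps = [np.random.rand(64, 64) for _ in range(2)]
grid = np.random.rand(100, 3)
print(occupancy_volume(grid, cams, maps).shape)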
|
2503.10986 | Zhicheng Feng | Zhicheng Feng and Xieyuanli Chen and Chenghao Shi and Lun Luo and
Zhichao Chen and Yun-Hui Liu and Huimin Lu | Image-Goal Navigation Using Refined Feature Guidance and Scene Graph
Enhancement | null | null | null | null | cs.RO cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a novel image-goal navigation approach, named
RFSG. Our focus lies in leveraging the fine-grained connections between goals,
observations, and the environment within limited image data, all the while
keeping the navigation architecture simple and lightweight. To this end, we
propose the spatial-channel attention mechanism, enabling the network to learn
the importance of multi-dimensional features to fuse the goal and observation
features. In addition, a self-distillation mechanism is incorporated to further
enhance the feature representation capabilities. Given that the navigation task
needs surrounding environmental information for more efficient navigation, we
propose an image scene graph to establish feature associations at both the
image and object levels, effectively encoding the surrounding scene
information. Cross-scene performance validation was conducted on the Gibson and
HM3D datasets, and the proposed method achieved state-of-the-art results among
mainstream methods, with a speed of up to 53.5 frames per second on an RTX3080.
This contributes to the realization of end-to-end image-goal navigation in
real-world scenarios. The implementation and model of our method have been
released at: https://github.com/nubot-nudt/RFSG.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 01:15:24 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Feng",
"Zhicheng",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Shi",
"Chenghao",
""
],
[
"Luo",
"Lun",
""
],
[
"Chen",
"Zhichao",
""
],
[
"Liu",
"Yun-Hui",
""
],
[
"Lu",
"Huimin",
""
]
] | TITLE: Image-Goal Navigation Using Refined Feature Guidance and Scene Graph
Enhancement
ABSTRACT: In this paper, we introduce a novel image-goal navigation approach, named
RFSG. Our focus lies in leveraging the fine-grained connections between goals,
observations, and the environment within limited image data, all the while
keeping the navigation architecture simple and lightweight. To this end, we
propose the spatial-channel attention mechanism, enabling the network to learn
the importance of multi-dimensional features to fuse the goal and observation
features. In addition, a self-distillation mechanism is incorporated to further
enhance the feature representation capabilities. Given that the navigation task
needs surrounding environmental information for more efficient navigation, we
propose an image scene graph to establish feature associations at both the
image and object levels, effectively encoding the surrounding scene
information. Cross-scene performance validation was conducted on the Gibson and
HM3D datasets, and the proposed method achieved state-of-the-art results among
mainstream methods, with a speed of up to 53.5 frames per second on an RTX3080.
This contributes to the realization of end-to-end image-goal navigation in
real-world scenarios. The implementation and model of our method have been
released at: https://github.com/nubot-nudt/RFSG.
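A hedged, CBAM-style sketch of a spatial-channel attention block that reweights fused goal and observation features is shown below; it illustrates the general mechanism only and is not the paper's exact RFSG module.

import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention from globally pooled features.
        b, c, _, _ = x.shape
        w_c = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * w_c
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))

fused = torch.randn(2, 64, 16, 16)        # fused goal + observation features
print(SpatialChannelAttention(64)(fused).shape)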
|
2503.10993 | June Young Park | JuneYoung Park, YuMi Lee, Tae-Joon Kim and Jang-Hwan Choi | Riemannian Geometric-based Meta Learning | 9 pages | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Meta-learning, or "learning to learn," aims to enable models to quickly adapt
to new tasks with minimal data. While traditional methods like Model-Agnostic
Meta-Learning (MAML) optimize parameters in Euclidean space, they often
struggle to capture complex learning dynamics, particularly in few-shot
learning scenarios. To address this limitation, we propose Stiefel-MAML, which
integrates Riemannian geometry by optimizing within the Stiefel manifold, a
space that naturally enforces orthogonality constraints. By leveraging the
geometric structure of the Stiefel manifold, we improve parameter
expressiveness and enable more efficient optimization through Riemannian
gradient calculations and retraction operations. We also introduce a novel
kernel-based loss function defined on the Stiefel manifold, further enhancing
the model's ability to explore the parameter space. Experimental results on
benchmark datasets--including Omniglot, Mini-ImageNet, FC-100, and
CUB--demonstrate that Stiefel-MAML consistently outperforms traditional MAML,
achieving superior performance across various few-shot learning tasks. Our
findings highlight the potential of Riemannian geometry to enhance
meta-learning, paving the way for future research on optimizing over different
geometric structures.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 01:34:55 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Park",
"JuneYoung",
""
],
[
"Lee",
"YuMi",
""
],
[
"Kim",
"Tae-Joon",
""
],
[
"Choi",
"Jang-Hwan",
""
]
] | TITLE: Riemannian Geometric-based Meta Learning
ABSTRACT: Meta-learning, or "learning to learn," aims to enable models to quickly adapt
to new tasks with minimal data. While traditional methods like Model-Agnostic
Meta-Learning (MAML) optimize parameters in Euclidean space, they often
struggle to capture complex learning dynamics, particularly in few-shot
learning scenarios. To address this limitation, we propose Stiefel-MAML, which
integrates Riemannian geometry by optimizing within the Stiefel manifold, a
space that naturally enforces orthogonality constraints. By leveraging the
geometric structure of the Stiefel manifold, we improve parameter
expressiveness and enable more efficient optimization through Riemannian
gradient calculations and retraction operations. We also introduce a novel
kernel-based loss function defined on the Stiefel manifold, further enhancing
the model's ability to explore the parameter space. Experimental results on
benchmark datasets--including Omniglot, Mini-ImageNet, FC-100, and
CUB--demonstrate that Stiefel-MAML consistently outperforms traditional MAML,
achieving superior performance across various few-shot learning tasks. Our
findings highlight the potential of Riemannian geometry to enhance
meta-learning, paving the way for future research on optimizing over different
geometric structures.
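One standard way to stay on the Stiefel manifold after a gradient step is a QR-based retraction; a minimal PyTorch sketch follows. This is a common retraction choice and is not claimed to be the paper's exact Riemannian machinery.

import torch

def qr_retraction(X: torch.Tensor, xi: torch.Tensor) -> torch.Tensor:
    # Retract X + xi back onto St(n, p) = {X : X^T X = I} via QR decomposition.
    Q, R = torch.linalg.qr(X + xi)
    # Fix the sign ambiguity of QR so the retraction is well defined.
    sign = torch.sign(torch.diagonal(R, dim1=-2, dim2=-1))
    sign[sign == 0] = 1.0
    return Q * sign.unsqueeze(-2)

n, p = 6, 3
X, _ = torch.linalg.qr(torch.randn(n, p))        # a point on St(6, 3)
step = 0.1 * torch.randn(n, p)                   # e.g. a projected gradient step
X_new = qr_retraction(X, -step)
print(torch.allclose(X_new.T @ X_new, torch.eye(p), atol=1e-5))   # True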
|
2503.10996 | Gaotang Li | Gaotang Li, Yuzhong Chen, Hanghang Tong | Taming Knowledge Conflicts in Language Models | 30 pages, 5 figures | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Language Models (LMs) often encounter knowledge conflicts when parametric
memory contradicts contextual knowledge. Previous works attribute this conflict
to the interplay between "memory heads" and "context heads", attention heads
assumed to promote either memory or context exclusively. In this study, we go
beyond this fundamental assumption by uncovering a critical phenomenon we term
the "superposition of contextual information and parametric memory", where
highly influential attention heads could simultaneously contribute to both
memory and context. Building upon this insight, we propose Just Run Twice
(JUICE), a test-time attention intervention method that steers LMs toward
either parametric beliefs or contextual knowledge without requiring
fine-tuning. JUICE identifies a set of reliable attention heads and leverages a
dual-run approach to mitigate the superposition effects. Extensive experiments
across 11 datasets and 6 model architectures demonstrate that JUICE sets the
new state-of-the-art performance and robust generalization, achieving
significant and consistent improvement across different domains under various
conflict types. Finally, we theoretically analyze knowledge conflict and the
superposition of contextual information and parametric memory in attention
heads, which further elucidates the effectiveness of JUICE in these settings.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 01:45:00 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Li",
"Gaotang",
""
],
[
"Chen",
"Yuzhong",
""
],
[
"Tong",
"Hanghang",
""
]
] | TITLE: Taming Knowledge Conflicts in Language Models
ABSTRACT: Language Models (LMs) often encounter knowledge conflicts when parametric
memory contradicts contextual knowledge. Previous works attribute this conflict
to the interplay between "memory heads" and "context heads", attention heads
assumed to promote either memory or context exclusively. In this study, we go
beyond this fundamental assumption by uncovering a critical phenomenon we term
the "superposition of contextual information and parametric memory", where
highly influential attention heads could simultaneously contribute to both
memory and context. Building upon this insight, we propose Just Run Twice
(JUICE), a test-time attention intervention method that steers LMs toward
either parametric beliefs or contextual knowledge without requiring
fine-tuning. JUICE identifies a set of reliable attention heads and leverages a
dual-run approach to mitigate the superposition effects. Extensive experiments
across 11 datasets and 6 model architectures demonstrate that JUICE sets the
new state-of-the-art performance and robust generalization, achieving
significant and consistent improvement across different domains under various
conflict types. Finally, we theoretically analyze knowledge conflict and the
superposition of contextual information and parametric memory in attention
heads, which further elucidates the effectiveness of JUICE in these settings.
|
2503.11003 | Subasish Das | Shriyank Somvanshi, Rohit Chakraborty, Subasish Das, Anandi K Dutta | Crash Severity Analysis of Child Bicyclists using Arm-Net and MambaNet | 4 pages, 6 figures, accepted at IEEE CAI 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Child bicyclists (14 years and younger) are among the most vulnerable road
users, often experiencing severe injuries or fatalities in crashes. This study
analyzed 2,394 child bicyclist crashes in Texas from 2017 to 2022 using two
deep tabular learning models (ARM-Net and MambaNet). To address the issue of
data imbalance, the SMOTEENN technique was applied, resulting in balanced
datasets that facilitated accurate crash severity predictions across three
categories: Fatal/Severe (KA), Moderate/Minor (BC), and No Injury (O). The
findings revealed that MambaNet outperformed ARM-Net, achieving higher
precision, recall, F1-scores, and accuracy, particularly in the KA and O
categories. Both models highlighted challenges in distinguishing BC crashes due
to overlapping characteristics. These insights underscored the value of
advanced tabular deep learning methods and balanced datasets in understanding
crash severity. While limitations such as reliance on categorical data exist,
future research could explore continuous variables and real-time behavioral
data to enhance predictive modeling and crash mitigation strategies.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 02:02:14 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Somvanshi",
"Shriyank",
""
],
[
"Chakraborty",
"Rohit",
""
],
[
"Das",
"Subasish",
""
],
[
"Dutta",
"Anandi K",
""
]
] | TITLE: Crash Severity Analysis of Child Bicyclists using Arm-Net and MambaNet
ABSTRACT: Child bicyclists (14 years and younger) are among the most vulnerable road
users, often experiencing severe injuries or fatalities in crashes. This study
analyzed 2,394 child bicyclist crashes in Texas from 2017 to 2022 using two
deep tabular learning models (ARM-Net and MambaNet). To address the issue of
data imbalance, the SMOTEENN technique was applied, resulting in balanced
datasets that facilitated accurate crash severity predictions across three
categories: Fatal/Severe (KA), Moderate/Minor (BC), and No Injury (O). The
findings revealed that MambaNet outperformed ARM-Net, achieving higher
precision, recall, F1-scores, and accuracy, particularly in the KA and O
categories. Both models highlighted challenges in distinguishing BC crashes due
to overlapping characteristics. These insights underscored the value of
advanced tabular deep learning methods and balanced datasets in understanding
crash severity. While limitations such as reliance on categorical data exist,
future research could explore continuous variables and real-time behavioral
data to enhance predictive modeling and crash mitigation strategies.
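The SMOTEENN balancing step mentioned above can be sketched with imbalanced-learn; the synthetic three-class data below stands in for encoded crash records and is purely illustrative.

from collections import Counter
from imblearn.combine import SMOTEENN
from sklearn.datasets import make_classification

# Toy imbalanced 3-class dataset standing in for encoded crash severity records.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           weights=[0.05, 0.25, 0.70], random_state=0)
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X, y)
print(Counter(y), Counter(y_res))   # class counts before and after rebalancing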
|
2503.11004 | Jiangning Wei | Jiangning Wei, Lixiong Qin, Bo Yu, Tianjian Zou, Chuhan Yan, Dandan
Xiao, Yang Yu, Lan Yang, Ke Li, Jun Liu | VA-AR: Learning Velocity-Aware Action Representations with Mixture of
Window Attention | Accepted by AAAI 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Action recognition is a crucial task in artificial intelligence, with
significant implications across various domains. We initially perform a
comprehensive analysis of seven prominent action recognition methods across
five widely-used datasets. This analysis reveals a critical, yet previously
overlooked, observation: as the velocity of actions increases, the performance
of these methods variably declines, undermining their robustness. This decline
in performance poses significant challenges for their application in real-world
scenarios. Building on these findings, we introduce the Velocity-Aware Action
Recognition (VA-AR) framework to obtain robust action representations across
different velocities. Our principal insight is that rapid actions (e.g., the
giant circle backward in uneven bars or a smash in badminton) occur within
short time intervals, necessitating smaller temporal attention windows to
accurately capture intricate changes. Conversely, slower actions (e.g.,
drinking water or wiping face) require larger windows to effectively encompass
the broader context. VA-AR employs a Mixture of Window Attention (MoWA)
strategy, dynamically adjusting its attention window size based on the action's
velocity. This adjustment enables VA-AR to obtain a velocity-aware
representation, thereby enhancing the accuracy of action recognition. Extensive
experiments confirm that VA-AR achieves state-of-the-art performance on the
same five datasets, demonstrating VA-AR's effectiveness across a broad spectrum
of action recognition scenarios.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 02:03:37 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wei",
"Jiangning",
""
],
[
"Qin",
"Lixiong",
""
],
[
"Yu",
"Bo",
""
],
[
"Zou",
"Tianjian",
""
],
[
"Yan",
"Chuhan",
""
],
[
"Xiao",
"Dandan",
""
],
[
"Yu",
"Yang",
""
],
[
"Yang",
"Lan",
""
],
[
"Li",
"Ke",
""
],
[
"Liu",
"Jun",
""
]
] | TITLE: VA-AR: Learning Velocity-Aware Action Representations with Mixture of
Window Attention
ABSTRACT: Action recognition is a crucial task in artificial intelligence, with
significant implications across various domains. We initially perform a
comprehensive analysis of seven prominent action recognition methods across
five widely-used datasets. This analysis reveals a critical, yet previously
overlooked, observation: as the velocity of actions increases, the performance
of these methods variably declines, undermining their robustness. This decline
in performance poses significant challenges for their application in real-world
scenarios. Building on these findings, we introduce the Velocity-Aware Action
Recognition (VA-AR) framework to obtain robust action representations across
different velocities. Our principal insight is that rapid actions (e.g., the
giant circle backward in uneven bars or a smash in badminton) occur within
short time intervals, necessitating smaller temporal attention windows to
accurately capture intricate changes. Conversely, slower actions (e.g.,
drinking water or wiping face) require larger windows to effectively encompass
the broader context. VA-AR employs a Mixture of Window Attention (MoWA)
strategy, dynamically adjusting its attention window size based on the action's
velocity. This adjustment enables VA-AR to obtain a velocity-aware
representation, thereby enhancing the accuracy of action recognition. Extensive
experiments confirm that VA-AR achieves state-of-the-art performance on the
same five datasets, demonstrating VA-AR's effectiveness across a broad spectrum
of action recognition scenarios.
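The mixture-of-window-attention idea can be illustrated with the hedged sketch below, which computes self-attention under two different temporal window masks and mixes the outputs with a learned gate. The window sizes, head count, and the pooled-feature gate are assumptions, not the paper's MoWA design.

import torch
import torch.nn as nn

class MixtureWindowAttention(nn.Module):
    def __init__(self, dim: int, windows=(4, 16)):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.windows = windows
        self.gate = nn.Linear(dim, len(windows))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, dim) per-frame features.
        B, T, _ = x.shape
        idx = torch.arange(T, device=x.device)
        outs = []
        for w in self.windows:
            # Block attention between frames farther apart than the window size.
            mask = (idx[None, :] - idx[:, None]).abs() > w    # True = blocked
            out, _ = self.attn(x, x, x, attn_mask=mask)
            outs.append(out)
        weights = torch.softmax(self.gate(x.mean(dim=1)), dim=-1)   # (B, n_windows)
        stacked = torch.stack(outs, dim=-1)                         # (B, T, dim, n)
        return (stacked * weights[:, None, None, :]).sum(dim=-1)

clips = torch.randn(2, 32, 64)            # 2 clips, 32 frames, 64-dim features
print(MixtureWindowAttention(64)(clips).shape)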
|
2503.11006 | Yifan Xie | Yifan Xie, Binkai Ou, Fei Ma, Yaohua Liu | Observation-Graph Interaction and Key-Detail Guidance for Vision and
Language Navigation | 8 pages, 4 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision and Language Navigation (VLN) requires an agent to navigate through
environments following natural language instructions. However, existing methods
often struggle with effectively integrating visual observations and instruction
details during navigation, leading to suboptimal path planning and limited
success rates. In this paper, we propose OIKG (Observation-graph Interaction
and Key-detail Guidance), a novel framework that addresses these limitations
through two key components: (1) an observation-graph interaction module that
decouples angular and visual information while strengthening edge
representations in the navigation space, and (2) a key-detail guidance module
that dynamically extracts and utilizes fine-grained location and object
information from instructions. By enabling more precise cross-modal alignment
and dynamic instruction interpretation, our approach significantly improves the
agent's ability to follow complex navigation instructions. Extensive
experiments on the R2R and RxR datasets demonstrate that OIKG achieves
state-of-the-art performance across multiple evaluation metrics, validating the
effectiveness of our method in enhancing navigation precision through better
observation-instruction alignment.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 02:05:16 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Xie",
"Yifan",
""
],
[
"Ou",
"Binkai",
""
],
[
"Ma",
"Fei",
""
],
[
"Liu",
"Yaohua",
""
]
] | TITLE: Observation-Graph Interaction and Key-Detail Guidance for Vision and
Language Navigation
ABSTRACT: Vision and Language Navigation (VLN) requires an agent to navigate through
environments following natural language instructions. However, existing methods
often struggle with effectively integrating visual observations and instruction
details during navigation, leading to suboptimal path planning and limited
success rates. In this paper, we propose OIKG (Observation-graph Interaction
and Key-detail Guidance), a novel framework that addresses these limitations
through two key components: (1) an observation-graph interaction module that
decouples angular and visual information while strengthening edge
representations in the navigation space, and (2) a key-detail guidance module
that dynamically extracts and utilizes fine-grained location and object
information from instructions. By enabling more precise cross-modal alignment
and dynamic instruction interpretation, our approach significantly improves the
agent's ability to follow complex navigation instructions. Extensive
experiments on the R2R and RxR datasets demonstrate that OIKG achieves
state-of-the-art performance across multiple evaluation metrics, validating the
effectiveness of our method in enhancing navigation precision through better
observation-instruction alignment.
|
2503.11028 | Yixuan Zhang | Yixuan Zhang, Qing Chang, Yuxi Wang, Guang Chen, Zhaoxiang Zhang,
Junran Peng | EmoDiffusion: Enhancing Emotional 3D Facial Animation with Latent
Diffusion Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Speech-driven 3D facial animation seeks to produce lifelike facial
expressions that are synchronized with the speech content and its emotional
nuances, finding applications in various multimedia fields. However, previous
methods often overlook emotional facial expressions or fail to disentangle them
effectively from the speech content. To address these challenges, we present
EmoDiffusion, a novel approach that disentangles different emotions in speech
to generate rich 3D emotional facial expressions. Specifically, our method
employs two Variational Autoencoders (VAEs) to separately generate the upper
face region and mouth region, thereby learning a more refined representation of
the facial sequence. Unlike traditional methods that use diffusion models to
connect facial expression sequences with audio inputs, we perform the diffusion
process in the latent space. Furthermore, we introduce an Emotion Adapter to
evaluate upper face movements accurately. Given the paucity of 3D emotional
talking face data in the animation industry, we capture facial expressions
under the guidance of animation experts using LiveLinkFace on an iPhone. This
effort results in the creation of an innovative 3D blendshape emotional talking
face dataset (3D-BEF) used to train our network. Extensive experiments and
perceptual evaluations validate the effectiveness of our approach, confirming
its superiority in generating realistic and emotionally rich facial animations.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 02:54:22 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Yixuan",
""
],
[
"Chang",
"Qing",
""
],
[
"Wang",
"Yuxi",
""
],
[
"Chen",
"Guang",
""
],
[
"Zhang",
"Zhaoxiang",
""
],
[
"Peng",
"Junran",
""
]
] | TITLE: EmoDiffusion: Enhancing Emotional 3D Facial Animation with Latent
Diffusion Models
ABSTRACT: Speech-driven 3D facial animation seeks to produce lifelike facial
expressions that are synchronized with the speech content and its emotional
nuances, finding applications in various multimedia fields. However, previous
methods often overlook emotional facial expressions or fail to disentangle them
effectively from the speech content. To address these challenges, we present
EmoDiffusion, a novel approach that disentangles different emotions in speech
to generate rich 3D emotional facial expressions. Specifically, our method
employs two Variational Autoencoders (VAEs) to separately generate the upper
face region and mouth region, thereby learning a more refined representation of
the facial sequence. Unlike traditional methods that use diffusion models to
connect facial expression sequences with audio inputs, we perform the diffusion
process in the latent space. Furthermore, we introduce an Emotion Adapter to
evaluate upper face movements accurately. Given the paucity of 3D emotional
talking face data in the animation industry, we capture facial expressions
under the guidance of animation experts using LiveLinkFace on an iPhone. This
effort results in the creation of an innovative 3D blendshape emotional talking
face dataset (3D-BEF) used to train our network. Extensive experiments and
perceptual evaluations validate the effectiveness of our approach, confirming
its superiority in generating realistic and emotionally rich facial animations.
|
2503.11030 | Ming Deng | Ming Deng, Sijin Sun, Zihao Li, Xiaochuan Hu, Xing Wu | FMNet: Frequency-Assisted Mamba-Like Linear Attention Network for
Camouflaged Object Detection | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Camouflaged Object Detection (COD) is challenging due to the strong
similarity between camouflaged objects and their surroundings, which
complicates identification. Existing methods mainly rely on spatial local
features, failing to capture global information, while Transformers increase
computational costs. To address this, the Frequency-Assisted Mamba-Like Linear
Attention Network (FMNet) is proposed, which leverages frequency-domain
learning to efficiently capture global features and mitigate ambiguity between
objects and the background. FMNet introduces the Multi-Scale Frequency-Assisted
Mamba-Like Linear Attention (MFM) module, integrating frequency and spatial
features through a multi-scale structure to handle scale variations while
reducing computational complexity. Additionally, the Pyramidal Frequency
Attention Extraction (PFAE) module and the Frequency Reverse Decoder (FRD)
enhance semantics and reconstruct features. Experimental results demonstrate
that FMNet outperforms existing methods on multiple COD datasets, showcasing
its advantages in both performance and efficiency. Code available at
https://anonymous.4open.science/r/FMNet-3CE5.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 02:55:19 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Deng",
"Ming",
""
],
[
"Sun",
"Sijin",
""
],
[
"Li",
"Zihao",
""
],
[
"Hu",
"Xiaochuan",
""
],
[
"Wu",
"Xing",
""
]
] | TITLE: FMNet: Frequency-Assisted Mamba-Like Linear Attention Network for
Camouflaged Object Detection
ABSTRACT: Camouflaged Object Detection (COD) is challenging due to the strong
similarity between camouflaged objects and their surroundings, which
complicates identification. Existing methods mainly rely on spatial local
features, failing to capture global information, while Transformers increase
computational costs. To address this, the Frequency-Assisted Mamba-Like Linear
Attention Network (FMNet) is proposed, which leverages frequency-domain
learning to efficiently capture global features and mitigate ambiguity between
objects and the background. FMNet introduces the Multi-Scale Frequency-Assisted
Mamba-Like Linear Attention (MFM) module, integrating frequency and spatial
features through a multi-scale structure to handle scale variations while
reducing computational complexity. Additionally, the Pyramidal Frequency
Attention Extraction (PFAE) module and the Frequency Reverse Decoder (FRD)
enhance semantics and reconstruct features. Experimental results demonstrate
that FMNet outperforms existing methods on multiple COD datasets, showcasing
its advantages in both performance and efficiency. Code available at
https://anonymous.4open.science/r/FMNet-3CE5.
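Frequency-domain learning of the kind described above can be sketched as a branch that transforms features with a 2D FFT, applies a learnable complex filter, and transforms back. This illustrates the general idea only and is not the paper's MFM module; the per-channel filter parameterization is an assumption.

import torch
import torch.nn as nn

class FrequencyBranch(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One learnable complex filter per channel over the rfft2 spectrum,
        # initialized to the identity (1 + 0j).
        init = torch.zeros(channels, height, width // 2 + 1, 2)
        init[..., 0] = 1.0
        self.filt = nn.Parameter(init)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft2(x, norm="ortho")              # (B, C, H, W//2+1)
        spec = spec * torch.view_as_complex(self.filt)
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

x = torch.randn(2, 32, 64, 64)
print(FrequencyBranch(32, 64, 64)(x).shape)   # torch.Size([2, 32, 64, 64])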
|
2503.11038 | Mingjie Wei | Mingjie Wei, Xuemei Xie, Guangming Shi | ACMo: Attribute Controllable Motion Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attributes such as style, fine-grained text, and trajectory are specific
conditions for describing motion. However, existing methods often lack precise
user control over motion attributes and suffer from limited generalizability to
unseen motions. This work introduces an Attribute Controllable Motion
generation architecture, to address these challenges by decoupling any
conditions and controlling them separately. Firstly, we explored the Attribute
Diffusion Model to improve text-to-motion performance via decoupled text and
motion learning, as the controllable model relies heavily on the pre-trained
model. Then, we introduce a Motion Adapter to quickly finetune previously unseen
motion patterns. Its motion prompt inputs achieve multimodal text-to-motion
generation that captures user-specified styles. Finally, we propose an LLM
Planner to bridge the gap between unseen attributes and dataset-specific texts
via local knowledge for user-friendly interaction. Our approach introduces the
capability of motion prompts for stylized generation, enabling fine-grained and
user-friendly attribute control while providing performance comparable to
state-of-the-art methods. Project page: https://mjwei3d.github.io/ACMo/
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 03:07:02 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wei",
"Mingjie",
""
],
[
"Xie",
"Xuemei",
""
],
[
"Shi",
"Guangming",
""
]
] | TITLE: ACMo: Attribute Controllable Motion Generation
ABSTRACT: Attributes such as style, fine-grained text, and trajectory are specific
conditions for describing motion. However, existing methods often lack precise
user control over motion attributes and suffer from limited generalizability to
unseen motions. This work introduces an Attribute Controllable Motion
generation architecture, to address these challenges by decoupling any
conditions and controlling them separately. Firstly, we explored the Attribute
Diffusion Model to improve text-to-motion performance via decoupled text and
motion learning, as the controllable model relies heavily on the pre-trained
model. Then, we introduce a Motion Adapter to quickly finetune previously unseen
motion patterns. Its motion prompt inputs achieve multimodal text-to-motion
generation that captures user-specified styles. Finally, we propose an LLM
Planner to bridge the gap between unseen attributes and dataset-specific texts
via local knowledge for user-friendly interaction. Our approach introduces the
capability of motion prompts for stylized generation, enabling fine-grained and
user-friendly attribute control while providing performance comparable to
state-of-the-art methods. Project page: https://mjwei3d.github.io/ACMo/
|
2503.11043 | Hongkai Zheng | Hongkai Zheng, Wenda Chu, Bingliang Zhang, Zihui Wu, Austin Wang,
Berthy T. Feng, Caifeng Zou, Yu Sun, Nikola Kovachki, Zachary E. Ross,
Katherine L. Bouman, Yisong Yue | InverseBench: Benchmarking Plug-and-Play Diffusion Priors for Inverse
Problems in Physical Sciences | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Plug-and-play diffusion priors (PnPDP) have emerged as a promising research
direction for solving inverse problems.
However, current studies primarily focus on natural image restoration,
leaving the performance of these algorithms in scientific inverse problems
largely unexplored. To address this gap, we introduce InverseBench, a
framework that evaluates diffusion models across five distinct scientific
inverse problems. These problems present unique structural challenges that
differ from existing benchmarks, arising from critical scientific applications
such as optical tomography, medical imaging, black hole imaging, seismology,
and fluid dynamics. With InverseBench, we benchmark 14 inverse problem
algorithms that use plug-and-play diffusion priors against strong,
domain-specific baselines, offering valuable new insights into the strengths
and weaknesses of existing algorithms. To facilitate further research and
development, we open-source the codebase, along with datasets and pre-trained
models, at https://devzhk.github.io/InverseBench/.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 03:13:55 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zheng",
"Hongkai",
""
],
[
"Chu",
"Wenda",
""
],
[
"Zhang",
"Bingliang",
""
],
[
"Wu",
"Zihui",
""
],
[
"Wang",
"Austin",
""
],
[
"Feng",
"Berthy T.",
""
],
[
"Zou",
"Caifeng",
""
],
[
"Sun",
"Yu",
""
],
[
"Kovachki",
"Nikola",
""
],
[
"Ross",
"Zachary E.",
""
],
[
"Bouman",
"Katherine L.",
""
],
[
"Yue",
"Yisong",
""
]
] | TITLE: InverseBench: Benchmarking Plug-and-Play Diffusion Priors for Inverse
Problems in Physical Sciences
ABSTRACT: Plug-and-play diffusion priors (PnPDP) have emerged as a promising research
direction for solving inverse problems.
However, current studies primarily focus on natural image restoration,
leaving the performance of these algorithms in scientific inverse problems
largely unexplored. To address this gap, we introduce InverseBench, a
framework that evaluates diffusion models across five distinct scientific
inverse problems. These problems present unique structural challenges that
differ from existing benchmarks, arising from critical scientific applications
such as optical tomography, medical imaging, black hole imaging, seismology,
and fluid dynamics. With InverseBench, we benchmark 14 inverse problem
algorithms that use plug-and-play diffusion priors against strong,
domain-specific baselines, offering valuable new insights into the strengths
and weaknesses of existing algorithms. To facilitate further research and
development, we open-source the codebase, along with datasets and pre-trained
models, at https://devzhk.github.io/InverseBench/.
|
2503.11046 | Mohammad S. Jalali | Ning-Yuan Georgia Liu, Flower Yang, Mohammad S. Jalali | Measuring Similarity in Causal Graphs: A Framework for Semantic and
Structural Analysis | 27 pages | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Causal graphs are commonly used to understand and model complex systems.
Researchers often construct these graphs from different perspectives, leading
to significant variations for the same problem. Comparing causal graphs is,
therefore, essential for evaluating assumptions, integrating insights, and
resolving disagreements. The rise of AI tools has further amplified this need,
as they are increasingly used to generate hypothesized causal graphs by
synthesizing information from various sources such as prior research and
community inputs, providing the potential for automating and scaling causal
modeling for complex systems. Similar to humans, these tools also produce
inconsistent results across platforms, versions, and iterations. Despite its
importance, research on causal graph comparison remains scarce. Existing
methods often focus solely on structural similarities, assuming identical
variable names, and fail to capture nuanced semantic relationships, which is
essential for causal graph comparison. We address these gaps by investigating
methods for comparing causal graphs from both semantic and structural
perspectives. First, we reviewed over 40 existing metrics and, based on
predefined criteria, selected nine for evaluation from two threads of machine
learning: four semantic similarity metrics and five learning graph kernels. We
discuss the usability of these metrics in simple examples to illustrate their
strengths and limitations. We then generated a synthetic dataset of 2,000
causal graphs using generative AI based on a reference diagram. Our findings
reveal that each metric captures a different aspect of similarity, highlighting
the need to use multiple metrics.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 03:29:26 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Liu",
"Ning-Yuan Georgia",
""
],
[
"Yang",
"Flower",
""
],
[
"Jalali",
"Mohammad S.",
""
]
] | TITLE: Measuring Similarity in Causal Graphs: A Framework for Semantic and
Structural Analysis
ABSTRACT: Causal graphs are commonly used to understand and model complex systems.
Researchers often construct these graphs from different perspectives, leading
to significant variations for the same problem. Comparing causal graphs is,
therefore, essential for evaluating assumptions, integrating insights, and
resolving disagreements. The rise of AI tools has further amplified this need,
as they are increasingly used to generate hypothesized causal graphs by
synthesizing information from various sources such as prior research and
community inputs, providing the potential for automating and scaling causal
modeling for complex systems. Similar to humans, these tools also produce
inconsistent results across platforms, versions, and iterations. Despite its
importance, research on causal graph comparison remains scarce. Existing
methods often focus solely on structural similarities, assuming identical
variable names, and fail to capture nuanced semantic relationships, which is
essential for causal graph comparison. We address these gaps by investigating
methods for comparing causal graphs from both semantic and structural
perspectives. First, we reviewed over 40 existing metrics and, based on
predefined criteria, selected nine for evaluation from two threads of machine
learning: four semantic similarity metrics and five learning graph kernels. We
discuss the usability of these metrics in simple examples to illustrate their
strengths and limitations. We then generated a synthetic dataset of 2,000
causal graphs using generative AI based on a reference diagram. Our findings
reveal that each metric captures a different aspect of similarity, highlighting
the need to use multiple metrics.
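To make the structural versus semantic distinction concrete, here is a small sketch of one metric from each family: a Jaccard index over directed edges and a string-matching proxy for node-name similarity. These are deliberately simpler than the nine metrics the study evaluates and serve only as illustrations.

import networkx as nx
from difflib import SequenceMatcher

def edge_jaccard(g1: nx.DiGraph, g2: nx.DiGraph) -> float:
    # Structural similarity: Jaccard index over directed edge sets.
    e1, e2 = set(g1.edges()), set(g2.edges())
    return len(e1 & e2) / len(e1 | e2) if (e1 | e2) else 1.0

def node_name_similarity(g1: nx.DiGraph, g2: nx.DiGraph) -> float:
    # Semantic proxy: best string match for each node name in g1 against g2.
    scores = [max(SequenceMatcher(None, a, b).ratio() for b in g2.nodes())
              for a in g1.nodes()]
    return sum(scores) / len(scores)

g1 = nx.DiGraph([("stress", "burnout"), ("workload", "stress")])
g2 = nx.DiGraph([("work load", "stress level"), ("stress level", "burnout")])
print(edge_jaccard(g1, g2), node_name_similarity(g1, g2))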
|
2503.11062 | Wenhao Jiang | Wenhao Jiang, Duo Li, Menghan Hu, Chao Ma, Ke Wang, Zhipeng Zhang | Active Learning from Scene Embeddings for End-to-End Autonomous Driving | 9 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of autonomous driving, end-to-end deep learning models show
great potential by learning driving decisions directly from sensor data.
However, training these models requires large amounts of labeled data, which is
time-consuming and expensive. Considering that the real-world driving data
exhibits a long-tailed distribution where simple scenarios constitute a
majority part of the data, we are thus inspired to identify the most
challenging scenarios within it. Subsequently, we can efficiently improve the
performance of the model by training with the selected data of the highest
value. Prior research has focused on the selection of valuable data by
empirically designed strategies. However, manually designed methods suffer from
being less generalizable to new data distributions. Observing that the BEV
(Bird's Eye View) features in end-to-end models contain all the information
required to represent the scenario, we propose an active learning framework
that relies on these vectorized scene-level features, called SEAD. The
framework selects initial data based on driving-environmental information and
incremental data based on BEV features. Experiments show that we only need 30%
of the nuScenes training data to achieve performance close to what can be
achieved with the full dataset. The source code will be released.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 03:56:22 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Jiang",
"Wenhao",
""
],
[
"Li",
"Duo",
""
],
[
"Hu",
"Menghan",
""
],
[
"Ma",
"Chao",
""
],
[
"Wang",
"Ke",
""
],
[
"Zhang",
"Zhipeng",
""
]
] | TITLE: Active Learning from Scene Embeddings for End-to-End Autonomous Driving
ABSTRACT: In the field of autonomous driving, end-to-end deep learning models show
great potential by learning driving decisions directly from sensor data.
However, training these models requires large amounts of labeled data, which is
time-consuming and expensive. Considering that the real-world driving data
exhibits a long-tailed distribution where simple scenarios constitute a
majority part of the data, we are thus inspired to identify the most
challenging scenarios within it. Subsequently, we can efficiently improve the
performance of the model by training with the selected data of the highest
value. Prior research has focused on the selection of valuable data by
empirically designed strategies. However, manually designed methods suffer from
being less generalizable to new data distributions. Observing that the BEV
(Bird's Eye View) features in end-to-end models contain all the information
required to represent the scenario, we propose an active learning framework
that relies on these vectorized scene-level features, called SEAD. The
framework selects initial data based on driving-environmental information and
incremental data based on BEV features. Experiments show that we only need 30\%
of the nuScenes training data to achieve performance close to what can be
achieved with the full dataset. The source code will be released.
|
2503.11064 | Ziqi Wang | Ziqi Wang, Derek Hua, Wenjun Jiang, Tianwei Xing, Xun Chen, Mani
Srivastava | MobiVital: Self-supervised Time-series Quality Estimation for
Contactless Respiration Monitoring Using UWB Radar | null | null | null | null | eess.SP cs.LG | http://creativecommons.org/licenses/by/4.0/ | Respiration waveforms are increasingly recognized as important biomarkers,
offering insights beyond simple respiration rates, such as detecting breathing
irregularities for disease diagnosis or monitoring breath patterns to guide
rehabilitation training. Previous works in wireless respiration monitoring have
primarily focused on estimating respiration rate, where the breath waveforms
are often generated as a by-product. As a result, issues such as waveform
deformation and inversion have largely been overlooked, reducing the signal's
utility for applications requiring breathing waveforms. To address this
problem, we present a novel approach, MobiVital, that improves the quality of
respiration waveforms obtained from ultra-wideband (UWB) radar data. MobiVital
combines a self-supervised autoregressive model for breathing waveform
extraction with a biology-informed algorithm to detect and correct waveform
inversions. To encourage reproducible research efforts for developing wireless
vital signal monitoring systems, we also release a 12-person, 24-hour UWB radar
vital signal dataset, with time-synchronized ground truth obtained from
wearable sensors. Our results show that the respiration waveforms produced by
our system exhibit a 7-34% increase in fidelity to the ground truth compared to
the baselines and can benefit downstream tasks such as respiration rate
estimation.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 04:14:27 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wang",
"Ziqi",
""
],
[
"Hua",
"Derek",
""
],
[
"Jiang",
"Wenjun",
""
],
[
"Xing",
"Tianwei",
""
],
[
"Chen",
"Xun",
""
],
[
"Srivastava",
"Mani",
""
]
] | TITLE: MobiVital: Self-supervised Time-series Quality Estimation for
Contactless Respiration Monitoring Using UWB Radar
ABSTRACT: Respiration waveforms are increasingly recognized as important biomarkers,
offering insights beyond simple respiration rates, such as detecting breathing
irregularities for disease diagnosis or monitoring breath patterns to guide
rehabilitation training. Previous works in wireless respiration monitoring have
primarily focused on estimating respiration rate, where the breath waveforms
are often generated as a by-product. As a result, issues such as waveform
deformation and inversion have largely been overlooked, reducing the signal's
utility for applications requiring breathing waveforms. To address this
problem, we present a novel approach, MobiVital, that improves the quality of
respiration waveforms obtained from ultra-wideband (UWB) radar data. MobiVital
combines a self-supervised autoregressive model for breathing waveform
extraction with a biology-informed algorithm to detect and correct waveform
inversions. To encourage reproducible research efforts for developing wireless
vital signal monitoring systems, we also release a 12-person, 24-hour UWB radar
vital signal dataset, with time-synchronized ground truth obtained from
wearable sensors. Our results show that the respiration waveforms produced by
our system exhibit a 7-34% increase in fidelity to the ground truth compared to
the baselines and can benefit downstream tasks such as respiration rate
estimation.
|