id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.20314 | WanTeam WanTeam | WanTeam: Ang Wang, Baole Ai, Bin Wen, Chaojie Mao, Chen-Wei Xie, Di
Chen, Feiwu Yu, Haiming Zhao, Jianxiao Yang, Jianyuan Zeng, Jiayu Wang,
Jingfeng Zhang, Jingren Zhou, Jinkai Wang, Jixuan Chen, Kai Zhu, Kang Zhao,
Keyu Yan, Lianghua Huang, Mengyang Feng, Ningyi Zhang, Pandeng Li, Pingyu Wu,
Ruihang Chu, Ruili Feng, Shiwei Zhang, Siyang Sun, Tao Fang, Tianxing Wang,
Tianyi Gui, Tingyu Weng, Tong Shen, Wei Lin, Wei Wang, Wei Wang, Wenmeng
Zhou, Wente Wang, Wenting Shen, Wenyuan Yu, Xianzhong Shi, Xiaoming Huang,
Xin Xu, Yan Kou, Yangyu Lv, Yifei Li, Yijing Liu, Yiming Wang, Yingya Zhang,
Yitong Huang, Yong Li, You Wu, Yu Liu, Yulin Pan, Yun Zheng, Yuntao Hong,
Yupeng Shi, Yutong Feng, Zeyinzi Jiang, Zhen Han, Zhi-Fan Wu, Ziyu Liu | Wan: Open and Advanced Large-Scale Video Generative Models | 60 pages, 33 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report presents Wan, a comprehensive and open suite of video foundation
models designed to push the boundaries of video generation. Built upon the
mainstream diffusion transformer paradigm, Wan achieves significant
advancements in generative capabilities through a series of innovations,
including our novel VAE, scalable pre-training strategies, large-scale data
curation, and automated evaluation metrics. These contributions collectively
enhance the model's performance and versatility. Specifically, Wan is
characterized by four key features: Leading Performance: The 14B model of Wan,
trained on a vast dataset comprising billions of images and videos,
demonstrates the scaling laws of video generation with respect to both data and
model size. It consistently outperforms the existing open-source models as well
as state-of-the-art commercial solutions across multiple internal and external
benchmarks, demonstrating a clear and significant performance superiority.
Comprehensiveness: Wan offers two capable models, i.e., 1.3B and 14B
parameters, for efficiency and effectiveness respectively. It also covers
multiple downstream applications, including image-to-video, instruction-guided
video editing, and personal video generation, encompassing up to eight tasks.
Consumer-Grade Efficiency: The 1.3B model demonstrates exceptional resource
efficiency, requiring only 8.19 GB VRAM, making it compatible with a wide range
of consumer-grade GPUs. Openness: We open-source the entire series of Wan,
including source code and all models, with the goal of fostering the growth of
the video generation community. This openness seeks to significantly expand the
creative possibilities of video production in the industry and provide academia
with high-quality video foundation models. All the code and models are
available at https://github.com/Wan-Video/Wan2.1.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:25:43 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"WanTeam",
"",
""
],
[
":",
"",
""
],
[
"Wang",
"Ang",
""
],
[
"Ai",
"Baole",
""
],
[
"Wen",
"Bin",
""
],
[
"Mao",
"Chaojie",
""
],
[
"Xie",
"Chen-Wei",
""
],
[
"Chen",
"Di",
""
],
[
"Yu",
"Feiwu",
""
],
[
"Zhao",
"Haiming",
""
],
[
"Yang",
"Jianxiao",
""
],
[
"Zeng",
"Jianyuan",
""
],
[
"Wang",
"Jiayu",
""
],
[
"Zhang",
"Jingfeng",
""
],
[
"Zhou",
"Jingren",
""
],
[
"Wang",
"Jinkai",
""
],
[
"Chen",
"Jixuan",
""
],
[
"Zhu",
"Kai",
""
],
[
"Zhao",
"Kang",
""
],
[
"Yan",
"Keyu",
""
],
[
"Huang",
"Lianghua",
""
],
[
"Feng",
"Mengyang",
""
],
[
"Zhang",
"Ningyi",
""
],
[
"Li",
"Pandeng",
""
],
[
"Wu",
"Pingyu",
""
],
[
"Chu",
"Ruihang",
""
],
[
"Feng",
"Ruili",
""
],
[
"Zhang",
"Shiwei",
""
],
[
"Sun",
"Siyang",
""
],
[
"Fang",
"Tao",
""
],
[
"Wang",
"Tianxing",
""
],
[
"Gui",
"Tianyi",
""
],
[
"Weng",
"Tingyu",
""
],
[
"Shen",
"Tong",
""
],
[
"Lin",
"Wei",
""
],
[
"Wang",
"Wei",
""
],
[
"Wang",
"Wei",
""
],
[
"Zhou",
"Wenmeng",
""
],
[
"Wang",
"Wente",
""
],
[
"Shen",
"Wenting",
""
],
[
"Yu",
"Wenyuan",
""
],
[
"Shi",
"Xianzhong",
""
],
[
"Huang",
"Xiaoming",
""
],
[
"Xu",
"Xin",
""
],
[
"Kou",
"Yan",
""
],
[
"Lv",
"Yangyu",
""
],
[
"Li",
"Yifei",
""
],
[
"Liu",
"Yijing",
""
],
[
"Wang",
"Yiming",
""
],
[
"Zhang",
"Yingya",
""
],
[
"Huang",
"Yitong",
""
],
[
"Li",
"Yong",
""
],
[
"Wu",
"You",
""
],
[
"Liu",
"Yu",
""
],
[
"Pan",
"Yulin",
""
],
[
"Zheng",
"Yun",
""
],
[
"Hong",
"Yuntao",
""
],
[
"Shi",
"Yupeng",
""
],
[
"Feng",
"Yutong",
""
],
[
"Jiang",
"Zeyinzi",
""
],
[
"Han",
"Zhen",
""
],
[
"Wu",
"Zhi-Fan",
""
],
[
"Liu",
"Ziyu",
""
]
] | TITLE: Wan: Open and Advanced Large-Scale Video Generative Models
ABSTRACT: This report presents Wan, a comprehensive and open suite of video foundation
models designed to push the boundaries of video generation. Built upon the
mainstream diffusion transformer paradigm, Wan achieves significant
advancements in generative capabilities through a series of innovations,
including our novel VAE, scalable pre-training strategies, large-scale data
curation, and automated evaluation metrics. These contributions collectively
enhance the model's performance and versatility. Specifically, Wan is
characterized by four key features: Leading Performance: The 14B model of Wan,
trained on a vast dataset comprising billions of images and videos,
demonstrates the scaling laws of video generation with respect to both data and
model size. It consistently outperforms the existing open-source models as well
as state-of-the-art commercial solutions across multiple internal and external
benchmarks, demonstrating a clear and significant performance superiority.
Comprehensiveness: Wan offers two capable models, i.e., 1.3B and 14B
parameters, for efficiency and effectiveness respectively. It also covers
multiple downstream applications, including image-to-video, instruction-guided
video editing, and personal video generation, encompassing up to eight tasks.
Consumer-Grade Efficiency: The 1.3B model demonstrates exceptional resource
efficiency, requiring only 8.19 GB VRAM, making it compatible with a wide range
of consumer-grade GPUs. Openness: We open-source the entire series of Wan,
including source code and all models, with the goal of fostering the growth of
the video generation community. This openness seeks to significantly expand the
creative possibilities of video production in the industry and provide academia
with high-quality video foundation models. All the code and models are
available at https://github.com/Wan-Video/Wan2.1.
|
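The records in this dump follow the schema in the header row: scalar string fields plus a JSON-like `versions` list, an `authors_parsed` sequence, and a `prompt` field that concatenates TITLE and ABSTRACT. Below is a minimal sketch of reading such a dump with the Hugging Face `datasets` library; the `arxiv_metadata.jsonl` path is a placeholder assumption, and the exact loading call depends on how this particular dataset is actually distributed.

```python
# Minimal sketch, not part of the dataset page itself: reading an arXiv-metadata
# dump with the schema shown in the header above. The "arxiv_metadata.jsonl"
# path is a placeholder assumption; point it at wherever the records live.
from datasets import load_dataset

ds = load_dataset("json", data_files="arxiv_metadata.jsonl", split="train")

def summarize(rec):
    # `authors_parsed` holds [last, first, ...] entries (an affiliation tag is
    # sometimes appended, as in the CMM/LRCS record further below).
    authors = ", ".join(
        f"{first} {last}".strip() for last, first, *_ in rec["authors_parsed"]
    )
    # `versions` is a list of {"version": ..., "created": ...} dicts; take the
    # creation date of v1 as the submission date if it exists.
    v1_created = next(
        (v["created"] for v in rec["versions"] if v["version"] == "v1"), None
    )
    return {
        "id": rec["id"],
        "title": " ".join(rec["title"].split()),  # collapse wrapped whitespace
        "authors": authors,
        "categories": rec["categories"].split(),  # e.g. "cs.RO cs.MA"
        "submitted": v1_created,
    }

print(summarize(ds[0]))
```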
2503.20315 | Hanwen Liang | Hanwen Liang, Xian Zhong, Wenxuan Liu, Yajing Zheng, Wenxin Huang,
Zhaofei Yu, Tiejun Huang | SpikeDerain: Unveiling Clear Videos from Rainy Sequences Using Color
Spike Streams | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Restoring clear frames from rainy videos presents a significant challenge due
to the rapid motion of rain streaks. Traditional frame-based visual sensors,
which capture scene content synchronously, struggle to capture the fast-moving
details of rain accurately. In recent years, neuromorphic sensors have
introduced a new paradigm for dynamic scene perception, offering microsecond
temporal resolution and high dynamic range. However, existing multimodal
methods that fuse event streams with RGB images face difficulties in handling
the complex spatiotemporal interference of raindrops in real scenes, primarily
due to hardware synchronization errors and computational redundancy. In this
paper, we propose a Color Spike Stream Deraining Network (SpikeDerain), capable
of reconstructing spike streams of dynamic scenes and accurately removing rain
streaks. To address the challenges of data scarcity in real continuous rainfall
scenes, we design a physically interpretable rain streak synthesis model that
generates parameterized continuous rain patterns based on arbitrary background
images. Experimental results demonstrate that the network, trained with this
synthetic data, remains highly robust even under extreme rainfall conditions.
These findings highlight the effectiveness and robustness of our method across
varying rainfall levels and datasets, setting new standards for video deraining
tasks. The code will be released soon.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:28:28 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Liang",
"Hanwen",
""
],
[
"Zhong",
"Xian",
""
],
[
"Liu",
"Wenxuan",
""
],
[
"Zheng",
"Yajing",
""
],
[
"Huang",
"Wenxin",
""
],
[
"Yu",
"Zhaofei",
""
],
[
"Huang",
"Tiejun",
""
]
] | TITLE: SpikeDerain: Unveiling Clear Videos from Rainy Sequences Using Color
Spike Streams
ABSTRACT: Restoring clear frames from rainy videos presents a significant challenge due
to the rapid motion of rain streaks. Traditional frame-based visual sensors,
which capture scene content synchronously, struggle to capture the fast-moving
details of rain accurately. In recent years, neuromorphic sensors have
introduced a new paradigm for dynamic scene perception, offering microsecond
temporal resolution and high dynamic range. However, existing multimodal
methods that fuse event streams with RGB images face difficulties in handling
the complex spatiotemporal interference of raindrops in real scenes, primarily
due to hardware synchronization errors and computational redundancy. In this
paper, we propose a Color Spike Stream Deraining Network (SpikeDerain), capable
of reconstructing spike streams of dynamic scenes and accurately removing rain
streaks. To address the challenges of data scarcity in real continuous rainfall
scenes, we design a physically interpretable rain streak synthesis model that
generates parameterized continuous rain patterns based on arbitrary background
images. Experimental results demonstrate that the network, trained with this
synthetic data, remains highly robust even under extreme rainfall conditions.
These findings highlight the effectiveness and robustness of our method across
varying rainfall levels and datasets, setting new standards for video deraining
tasks. The code will be released soon.
|
2503.20324 | Junkai Jiang | Junkai Jiang, Ruochen Li, Yibin Yang, Yihe Chen, Yuning Wang, Shaobing
Xu and Jianqiang Wang | CTS-CBS: A New Approach for Multi-Agent Collaborative Task Sequencing
and Path Finding | null | null | null | null | cs.RO cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses a generalization problem of Multi-Agent Pathfinding
(MAPF), called Collaborative Task Sequencing - Multi-Agent Pathfinding
(CTS-MAPF), where agents must plan collision-free paths and visit a series of
intermediate task locations in a specific order before reaching their final
destinations. To address this problem, we propose a new approach, Collaborative
Task Sequencing - Conflict-Based Search (CTS-CBS), which conducts a two-level
search. In the high level, it generates a search forest, where each tree
corresponds to a joint task sequence derived from the jTSP solution. In the low
level, CTS-CBS performs constrained single-agent path planning to generate
paths for each agent while adhering to high-level constraints. We also provide
theoretical guarantees of its completeness and optimality (or sub-optimality
with a bounded parameter). To evaluate the performance of CTS-CBS, we create
two datasets, CTS-MAPF and MG-MAPF, and conduct comprehensive experiments. The
results show that CTS-CBS adaptations for MG-MAPF outperform baseline
algorithms in terms of success rate (up to 20 times higher) and runtime (up to
100 times faster), with less than a 10% sacrifice in solution quality.
Furthermore, CTS-CBS offers flexibility by allowing users to adjust the
sub-optimality bound omega to balance between solution quality and efficiency.
Finally, practical robot tests demonstrate the algorithm's applicability in
real-world scenarios.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:47:43 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Jiang",
"Junkai",
""
],
[
"Li",
"Ruochen",
""
],
[
"Yang",
"Yibin",
""
],
[
"Chen",
"Yihe",
""
],
[
"Wang",
"Yuning",
""
],
[
"Xu",
"Shaobing",
""
],
[
"Wang",
"Jianqiang",
""
]
] | TITLE: CTS-CBS: A New Approach for Multi-Agent Collaborative Task Sequencing
and Path Finding
ABSTRACT: This paper addresses a generalization problem of Multi-Agent Pathfinding
(MAPF), called Collaborative Task Sequencing - Multi-Agent Pathfinding
(CTS-MAPF), where agents must plan collision-free paths and visit a series of
intermediate task locations in a specific order before reaching their final
destinations. To address this problem, we propose a new approach, Collaborative
Task Sequencing - Conflict-Based Search (CTS-CBS), which conducts a two-level
search. In the high level, it generates a search forest, where each tree
corresponds to a joint task sequence derived from the jTSP solution. In the low
level, CTS-CBS performs constrained single-agent path planning to generate
paths for each agent while adhering to high-level constraints. We also provide
theoretical guarantees of its completeness and optimality (or sub-optimality
with a bounded parameter). To evaluate the performance of CTS-CBS, we create
two datasets, CTS-MAPF and MG-MAPF, and conduct comprehensive experiments. The
results show that CTS-CBS adaptations for MG-MAPF outperform baseline
algorithms in terms of success rate (up to 20 times higher) and runtime (up to
100 times faster), with less than a 10% sacrifice in solution quality.
Furthermore, CTS-CBS offers flexibility by allowing users to adjust the
sub-optimality bound omega to balance between solution quality and efficiency.
Finally, practical robot tests demonstrate the algorithm's applicability in
real-world scenarios.
|
2503.20328 | Antoine Bottenmuller | Antoine Bottenmuller (CMM), Florent Magaud (LRCS), Arnaud Demorti\`ere
(LRCS), Etienne Decenci\`ere (CMM), Petr Dokladal (CMM) | Euclidean Distance to Convex Polyhedra and Application to Class
Representation in Spectral Images | null | 14th International Conference on Pattern Recognition Applications
and Methods, Feb 2025, Porto, France. pp.192-203 | 10.5220/0013385600003905 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the aim of estimating the abundance map from observations only, linear
unmixing approaches are not always suitable for spectral images, especially when
the number of bands is too small or when the spectra of the observed data are
too correlated. To address this issue in the general case, we present a novel
approach which provides an adapted spatial density function based on any
arbitrary linear classifier. A robust mathematical formulation for computing
the Euclidean distance to polyhedral sets is presented, along with an efficient
algorithm that provides the exact minimum-norm point in a polyhedron. An
empirical evaluation on the widely-used Samson hyperspectral dataset
demonstrates that the proposed method surpasses state-of-the-art approaches in
reconstructing abundance maps. Furthermore, its application to spectral images
of a Lithium-ion battery, incompatible with linear unmixing models, validates
the method's generality and effectiveness.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:55:18 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Bottenmuller",
"Antoine",
"",
"CMM"
],
[
"Magaud",
"Florent",
"",
"LRCS"
],
[
"Demortière",
"Arnaud",
"",
"LRCS"
],
[
"Decencière",
"Etienne",
"",
"CMM"
],
[
"Dokladal",
"Petr",
"",
"CMM"
]
] | TITLE: Euclidean Distance to Convex Polyhedra and Application to Class
Representation in Spectral Images
ABSTRACT: With the aim of estimating the abundance map from observations only, linear
unmixing approaches are not always suitable for spectral images, especially when
the number of bands is too small or when the spectra of the observed data are
too correlated. To address this issue in the general case, we present a novel
approach which provides an adapted spatial density function based on any
arbitrary linear classifier. A robust mathematical formulation for computing
the Euclidean distance to polyhedral sets is presented, along with an efficient
algorithm that provides the exact minimum-norm point in a polyhedron. An
empirical evaluation on the widely-used Samson hyperspectral dataset
demonstrates that the proposed method surpasses state-of-the-art approaches in
reconstructing abundance maps. Furthermore, its application to spectral images
of a Lithium-ion battery, incompatible with linear unmixing models, validates
the method's generality and effectiveness.
|
2503.20348 | Felix Vogel | Felix Vogel, Walid Bousselham, Anna Kukleva, Nina Shvetsova, Hilde
Kuehne | VideoGEM: Training-free Action Grounding in Videos | null | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Vision-language foundation models have shown impressive capabilities across
various zero-shot tasks, including training-free localization and grounding,
primarily focusing on localizing objects in images. However, leveraging those
capabilities to localize actions and events in videos is challenging, as
actions have less physical outline and are usually described by higher-level
concepts. In this work, we propose VideoGEM, the first training-free spatial
action grounding method based on pretrained image- and video-language
backbones. Namely, we adapt the self-self attention formulation of GEM to
spatial activity grounding. We observe that high-level semantic concepts, such
as actions, usually emerge in the higher layers of the image- and
video-language models. We, therefore, propose a layer weighting in the
self-attention path to prioritize higher layers. Additionally, we introduce a
dynamic weighting method to automatically tune layer weights to capture each
layer's relevance to a specific prompt. Finally, we introduce a prompt
decomposition, processing action, verb, and object prompts separately,
resulting in a better spatial localization of actions. We evaluate the proposed
approach on three image- and video-language backbones, CLIP, OpenCLIP, and
ViCLIP, and on four video grounding datasets, V-HICO, DALY,
YouCook-Interactions, and GroundingYouTube, showing that the proposed
training-free approach is able to outperform current trained state-of-the-art
approaches for spatial video grounding.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 09:20:30 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Vogel",
"Felix",
""
],
[
"Bousselham",
"Walid",
""
],
[
"Kukleva",
"Anna",
""
],
[
"Shvetsova",
"Nina",
""
],
[
"Kuehne",
"Hilde",
""
]
] | TITLE: VideoGEM: Training-free Action Grounding in Videos
ABSTRACT: Vision-language foundation models have shown impressive capabilities across
various zero-shot tasks, including training-free localization and grounding,
primarily focusing on localizing objects in images. However, leveraging those
capabilities to localize actions and events in videos is challenging, as
actions have less physical outline and are usually described by higher-level
concepts. In this work, we propose VideoGEM, the first training-free spatial
action grounding method based on pretrained image- and video-language
backbones. Namely, we adapt the self-self attention formulation of GEM to
spatial activity grounding. We observe that high-level semantic concepts, such
as actions, usually emerge in the higher layers of the image- and
video-language models. We, therefore, propose a layer weighting in the
self-attention path to prioritize higher layers. Additionally, we introduce a
dynamic weighting method to automatically tune layer weights to capture each
layer's relevance to a specific prompt. Finally, we introduce a prompt
decomposition, processing action, verb, and object prompts separately,
resulting in a better spatial localization of actions. We evaluate the proposed
approach on three image- and video-language backbones, CLIP, OpenCLIP, and
ViCLIP, and on four video grounding datasets, V-HICO, DALY,
YouCook-Interactions, and GroundingYouTube, showing that the proposed
training-free approach is able to outperform current trained state-of-the-art
approaches for spatial video grounding.
|
2503.20354 | Ke Ma | Ke Ma, Jiaqi Tang, Bin Guo, Fan Dang, Sicong Liu, Zhui Zhu, Lei Wu,
Cheng Fang, Ying-Cong Chen, Zhiwen Yu, Yunhao Liu | SURGEON: Memory-Adaptive Fully Test-Time Adaptation via Dynamic
Activation Sparsity | Accepted to CVPR 2025 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the growing integration of deep models into mobile terminals, the
accuracy of these models declines significantly due to various deployment
interferences. Test-time adaptation (TTA) has emerged to improve the
performance of deep models by adapting them to unlabeled target data online.
Yet, the significant memory cost, particularly in resource-constrained
terminals, impedes the effective deployment of most backward-propagation-based
TTA methods. To tackle memory constraints, we introduce SURGEON, a method that
substantially reduces memory cost while preserving comparable accuracy
improvements during fully test-time adaptation (FTTA) without relying on
specific network architectures or modifications to the original training
procedure. Specifically, we propose a novel dynamic activation sparsity
strategy that directly prunes activations at layer-specific dynamic ratios
during adaptation, allowing for flexible control of learning ability and memory
cost in a data-sensitive manner. Within this strategy, two metrics, Gradient Importance
and Layer Activation Memory, are considered to determine the layer-wise pruning
ratios, reflecting accuracy contribution and memory efficiency, respectively.
Experimentally, our method surpasses the baselines by not only reducing memory
usage but also achieving superior accuracy, delivering SOTA performance across
diverse datasets, architectures, and tasks.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 09:27:09 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Ma",
"Ke",
""
],
[
"Tang",
"Jiaqi",
""
],
[
"Guo",
"Bin",
""
],
[
"Dang",
"Fan",
""
],
[
"Liu",
"Sicong",
""
],
[
"Zhu",
"Zhui",
""
],
[
"Wu",
"Lei",
""
],
[
"Fang",
"Cheng",
""
],
[
"Chen",
"Ying-Cong",
""
],
[
"Yu",
"Zhiwen",
""
],
[
"Liu",
"Yunhao",
""
]
] | TITLE: SURGEON: Memory-Adaptive Fully Test-Time Adaptation via Dynamic
Activation Sparsity
ABSTRACT: Despite the growing integration of deep models into mobile terminals, the
accuracy of these models declines significantly due to various deployment
interferences. Test-time adaptation (TTA) has emerged to improve the
performance of deep models by adapting them to unlabeled target data online.
Yet, the significant memory cost, particularly in resource-constrained
terminals, impedes the effective deployment of most backward-propagation-based
TTA methods. To tackle memory constraints, we introduce SURGEON, a method that
substantially reduces memory cost while preserving comparable accuracy
improvements during fully test-time adaptation (FTTA) without relying on
specific network architectures or modifications to the original training
procedure. Specifically, we propose a novel dynamic activation sparsity
strategy that directly prunes activations at layer-specific dynamic ratios
during adaptation, allowing for flexible control of learning ability and memory
cost in a data-sensitive manner. Within this strategy, two metrics, Gradient Importance
and Layer Activation Memory, are considered to determine the layer-wise pruning
ratios, reflecting accuracy contribution and memory efficiency, respectively.
Experimentally, our method surpasses the baselines by not only reducing memory
usage but also achieving superior accuracy, delivering SOTA performance across
diverse datasets, architectures, and tasks.
|
2503.20382 | Rong Wang | Chunshan Li, Rong Wang, Xiaofei Yang and Dianhui Chu | RSRWKV: A Linear-Complexity 2D Attention Mechanism for Efficient Remote
Sensing Vision Task | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-resolution remote sensing analysis faces challenges in global context
modeling due to scene complexity and scale diversity. While CNNs excel at local
feature extraction via parameter sharing, their fixed receptive fields
fundamentally restrict long-range dependency modeling. Vision Transformers
(ViTs) effectively capture global semantic relationships through self-attention
mechanisms but suffer from quadratic computational complexity relative to image
resolution, creating critical efficiency bottlenecks for high-resolution
imagery. The RWKV model's linear-complexity sequence modeling achieves
breakthroughs in NLP but exhibits anisotropic limitations in vision tasks due
to its 1D scanning mechanism. To address these challenges, we propose RSRWKV,
featuring a novel 2D-WKV scanning mechanism that bridges sequential processing
and 2D spatial reasoning while maintaining linear complexity. This enables
isotropic context aggregation across multiple directions. The MVC-Shift module
enhances multi-scale receptive field coverage, while the ECA module strengthens
cross-channel feature interaction and semantic saliency modeling. Experimental
results demonstrate RSRWKV's superior performance over CNN and Transformer
baselines in classification, detection, and segmentation tasks on NWPU
RESISC45, VHR-10.v2, and GLH-Water datasets, offering a scalable solution for
high-resolution remote sensing analysis.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 10:03:46 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Li",
"Chunshan",
""
],
[
"Wang",
"Rong",
""
],
[
"Yang",
"Xiaofei",
""
],
[
"Chu",
"Dianhui",
""
]
] | TITLE: RSRWKV: A Linear-Complexity 2D Attention Mechanism for Efficient Remote
Sensing Vision Task
ABSTRACT: High-resolution remote sensing analysis faces challenges in global context
modeling due to scene complexity and scale diversity. While CNNs excel at local
feature extraction via parameter sharing, their fixed receptive fields
fundamentally restrict long-range dependency modeling. Vision Transformers
(ViTs) effectively capture global semantic relationships through self-attention
mechanisms but suffer from quadratic computational complexity relative to image
resolution, creating critical efficiency bottlenecks for high-resolution
imagery. The RWKV model's linear-complexity sequence modeling achieves
breakthroughs in NLP but exhibits anisotropic limitations in vision tasks due
to its 1D scanning mechanism. To address these challenges, we propose RSRWKV,
featuring a novel 2D-WKV scanning mechanism that bridges sequential processing
and 2D spatial reasoning while maintaining linear complexity. This enables
isotropic context aggregation across multiple directions. The MVC-Shift module
enhances multi-scale receptive field coverage, while the ECA module strengthens
cross-channel feature interaction and semantic saliency modeling. Experimental
results demonstrate RSRWKV's superior performance over CNN and Transformer
baselines in classification, detection, and segmentation tasks on NWPU
RESISC45, VHR-10.v2, and GLH-Water datasets, offering a scalable solution for
high-resolution remote sensing analysis.
|
2503.20394 | Meng Xiao | Tianqi He, Xiaohan Huang, Yi Du, Qingqing Long, Ziyue Qiao, Min Wu,
Yanjie Fu, Yuanchun Zhou, Meng Xiao | FastFT: Accelerating Reinforced Feature Transformation via Advanced
Exploration Strategies | 14 pages, Accepted by ICDE 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature Transformation is crucial for classic machine learning that aims to
generate feature combinations to enhance the performance of downstream tasks
from a data-centric perspective. Current methodologies, such as manual
expert-driven processes, iterative-feedback techniques, and
exploration-generative tactics, have shown promise in automating such data
engineering workflow by minimizing human involvement. However, three challenges
remain in those frameworks: (1) They predominantly depend on downstream task
performance metrics, whose assessment is time-consuming, especially for large
datasets. (2) The diversity of feature combinations can hardly be guaranteed
after random exploration ends. (3) Rare significant transformations lead to
sparse valuable feedback that hinders the learning processes or leads to less
effective results. In response to these challenges, we introduce FastFT, an
innovative framework that leverages a trio of advanced strategies. We first
decouple the feature transformation evaluation from the outcomes of the
generated datasets via the performance predictor. To address the issue of
reward sparsity, we developed a method to evaluate the novelty of generated
transformation sequences. Incorporating this novelty into the reward function
accelerates the model's exploration of effective transformations, thereby
improving the search productivity. Additionally, we combine novelty and
performance to create a prioritized memory buffer, ensuring that essential
experiences are effectively revisited during exploration. Our extensive
experimental evaluations validate the performance, efficiency, and traceability
of our proposed framework, showcasing its superiority in handling complex
feature transformation tasks.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 10:17:41 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"He",
"Tianqi",
""
],
[
"Huang",
"Xiaohan",
""
],
[
"Du",
"Yi",
""
],
[
"Long",
"Qingqing",
""
],
[
"Qiao",
"Ziyue",
""
],
[
"Wu",
"Min",
""
],
[
"Fu",
"Yanjie",
""
],
[
"Zhou",
"Yuanchun",
""
],
[
"Xiao",
"Meng",
""
]
] | TITLE: FastFT: Accelerating Reinforced Feature Transformation via Advanced
Exploration Strategies
ABSTRACT: Feature Transformation is crucial for classic machine learning that aims to
generate feature combinations to enhance the performance of downstream tasks
from a data-centric perspective. Current methodologies, such as manual
expert-driven processes, iterative-feedback techniques, and
exploration-generative tactics, have shown promise in automating such data
engineering workflow by minimizing human involvement. However, three challenges
remain in those frameworks: (1) They predominantly depend on downstream task
performance metrics, whose assessment is time-consuming, especially for large
datasets. (2) The diversity of feature combinations can hardly be guaranteed
after random exploration ends. (3) Rare significant transformations lead to
sparse valuable feedback that hinders the learning processes or leads to less
effective results. In response to these challenges, we introduce FastFT, an
innovative framework that leverages a trio of advanced strategies. We first
decouple the feature transformation evaluation from the outcomes of the
generated datasets via the performance predictor. To address the issue of
reward sparsity, we developed a method to evaluate the novelty of generated
transformation sequences. Incorporating this novelty into the reward function
accelerates the model's exploration of effective transformations, thereby
improving the search productivity. Additionally, we combine novelty and
performance to create a prioritized memory buffer, ensuring that essential
experiences are effectively revisited during exploration. Our extensive
experimental evaluations validate the performance, efficiency, and traceability
of our proposed framework, showcasing its superiority in handling complex
feature transformation tasks.
|
2503.20400 | Rita T. Sousa | Rita T. Sousa, Heiko Paulheim | Multi-dataset and Transfer Learning Using Gene Expression Knowledge
Graphs | Accepted at the Extended Semantic Web Conference 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Gene expression datasets offer insights into gene regulation mechanisms,
biochemical pathways, and cellular functions. Additionally, comparing gene
expression profiles between disease and control patients can deepen the
understanding of disease pathology. Therefore, machine learning has been used
to process gene expression data, with patient diagnosis emerging as one of the
most popular applications. Although gene expression data can provide valuable
insights, challenges arise because the number of patients in expression
datasets is usually limited, and the data from different datasets with
different gene expressions cannot be easily combined. This work proposes a
novel methodology to address these challenges by integrating multiple gene
expression datasets and domain-specific knowledge using knowledge graphs, a
unique tool for biomedical data integration. Then, vector representations are
produced using knowledge graph embedding techniques, which are used as inputs
for a graph neural network and a multi-layer perceptron. We evaluate the
efficacy of our methodology in three settings: single-dataset learning,
multi-dataset learning, and transfer learning. The experimental results show
that combining gene expression datasets and domain-specific knowledge improves
patient diagnosis in all three settings.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 10:23:27 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Sousa",
"Rita T.",
""
],
[
"Paulheim",
"Heiko",
""
]
] | TITLE: Multi-dataset and Transfer Learning Using Gene Expression Knowledge
Graphs
ABSTRACT: Gene expression datasets offer insights into gene regulation mechanisms,
biochemical pathways, and cellular functions. Additionally, comparing gene
expression profiles between disease and control patients can deepen the
understanding of disease pathology. Therefore, machine learning has been used
to process gene expression data, with patient diagnosis emerging as one of the
most popular applications. Although gene expression data can provide valuable
insights, challenges arise because the number of patients in expression
datasets is usually limited, and the data from different datasets with
different gene expressions cannot be easily combined. This work proposes a
novel methodology to address these challenges by integrating multiple gene
expression datasets and domain-specific knowledge using knowledge graphs, a
unique tool for biomedical data integration. Then, vector representations are
produced using knowledge graph embedding techniques, which are used as inputs
for a graph neural network and a multi-layer perceptron. We evaluate the
efficacy of our methodology in three settings: single-dataset learning,
multi-dataset learning, and transfer learning. The experimental results show
that combining gene expression datasets and domain-specific knowledge improves
patient diagnosis in all three settings.
|
2503.20412 | Yuta Yoshimoto | Yuta Yoshimoto, Naoki Matsumura, Yuto Iwasaki, Hiroshi Nakao, Yasufumi
Sakai | Large-Scale, Long-Time Atomistic Simulations of Proton Transport in
Polymer Electrolyte Membranes Using a Neural Network Interatomic Potential | 39 pages, 8 figures | null | null | null | cond-mat.mtrl-sci physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, machine learning interatomic potentials (MLIPs) have
attracted significant attention as a method that enables large-scale, long-time
atomistic simulations while maintaining accuracy comparable to electronic
structure calculations based on density functional theory (DFT) and ab initio
wavefunction theories. However, a challenge with MLIP-based molecular dynamics
(MD) simulations is their lower stability compared to those using conventional
classical potentials. Analyzing highly heterogeneous systems or amorphous
materials often requires large-scale and long-time simulations, necessitating
the development of robust MLIPs that allow for stable MD simulations. In this
study, using our neural network potential (NNP) generator, we construct an NNP
model that enables large-scale, long-time MD simulations of perfluorinated
ionomer membranes (Nafion) across a wide range of hydration levels. We
successfully build a robust deep potential (DP) model by iteratively expanding
the dataset through active-learning loops. Specifically, by combining the
sampling of off-equilibrium structures via non-equilibrium DPMD simulations
with the structure screening in a 3D structural feature space incorporating
minimum interatomic distances, it is possible to significantly enhance the
robustness of the DP model, which allows for stable MD simulations of large
Nafion systems ranging from approximately 10,000 to 20,000 atoms for an
extended duration of 31 ns. The MD simulations employing the developed DP model
yield self-diffusion coefficients of hydrogen atoms that more closely match
experimental values in a wide range of hydration levels compared to previous ab
initio MD simulations of smaller systems.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 10:40:30 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Yoshimoto",
"Yuta",
""
],
[
"Matsumura",
"Naoki",
""
],
[
"Iwasaki",
"Yuto",
""
],
[
"Nakao",
"Hiroshi",
""
],
[
"Sakai",
"Yasufumi",
""
]
] | TITLE: Large-Scale, Long-Time Atomistic Simulations of Proton Transport in
Polymer Electrolyte Membranes Using a Neural Network Interatomic Potential
ABSTRACT: In recent years, machine learning interatomic potentials (MLIPs) have
attracted significant attention as a method that enables large-scale, long-time
atomistic simulations while maintaining accuracy comparable to electronic
structure calculations based on density functional theory (DFT) and ab initio
wavefunction theories. However, a challenge with MLIP-based molecular dynamics
(MD) simulations is their lower stability compared to those using conventional
classical potentials. Analyzing highly heterogeneous systems or amorphous
materials often requires large-scale and long-time simulations, necessitating
the development of robust MLIPs that allow for stable MD simulations. In this
study, using our neural network potential (NNP) generator, we construct an NNP
model that enables large-scale, long-time MD simulations of perfluorinated
ionomer membranes (Nafion) across a wide range of hydration levels. We
successfully build a robust deep potential (DP) model by iteratively expanding
the dataset through active-learning loops. Specifically, by combining the
sampling of off-equilibrium structures via non-equilibrium DPMD simulations
with the structure screening in a 3D structural feature space incorporating
minimum interatomic distances, it is possible to significantly enhance the
robustness of the DP model, which allows for stable MD simulations of large
Nafion systems ranging from approximately 10,000 to 20,000 atoms for an
extended duration of 31 ns. The MD simulations employing the developed DP model
yield self-diffusion coefficients of hydrogen atoms that more closely match
experimental values in a wide range of hydration levels compared to previous ab
initio MD simulations of smaller systems.
|
2503.20417 | Zhenghan Yu | Zhenghan Yu, Xinyu Hu, Xiaojun Wan | CFunModel: A "Funny" Language Model Capable of Chinese Humor Generation
and Processing | 9 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humor plays a significant role in daily language communication. With the
rapid development of large language models (LLMs), natural language processing
has made significant strides in understanding and generating various genres of
texts. However, most LLMs exhibit poor performance in generating and processing
Chinese humor. In this study, we introduce a comprehensive Chinese
humor-related dataset, the Chinese Fun Set (CFunSet). This dataset aggregates
existing Chinese humor datasets and includes over 20,000 jokes collected from
Tieba-JokeBar, a Chinese online platform known for joke sharing. The resulting
corpus comprises more than 160,000 entries. Leveraging CFunSet, we developed
the Chinese Fun Model (CFunModel), the first large language model designed to
handle various Chinese humor-related tasks including Crosstalk Response
Selection, Humor Recognition, Joke Generation, etc. Experimental results
demonstrate that CFunModel outperforms popular large language models in these
tasks. Our CFunSet is available at
https://huggingface.co/datasets/ZhenghanYU/CFunSet and CFunModel is available
at https://huggingface.co/ZhenghanYU/CFunModel. A demostration video of our
work is available at https://youtu.be/MOsISOJ66Ms.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 10:44:51 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Yu",
"Zhenghan",
""
],
[
"Hu",
"Xinyu",
""
],
[
"Wan",
"Xiaojun",
""
]
] | TITLE: CFunModel: A "Funny" Language Model Capable of Chinese Humor Generation
and Processing
ABSTRACT: Humor plays a significant role in daily language communication. With the
rapid development of large language models (LLMs), natural language processing
has made significant strides in understanding and generating various genres of
texts. However, most LLMs exhibit poor performance in generating and processing
Chinese humor. In this study, we introduce a comprehensive Chinese
humor-related dataset, the Chinese Fun Set (CFunSet). This dataset aggregates
existing Chinese humor datasets and includes over 20,000 jokes collected from
Tieba-JokeBar, a Chinese online platform known for joke sharing. The resulting
corpus comprises more than 160,000 entries. Leveraging CFunSet, we developed
the Chinese Fun Model (CFunModel), the first large language model designed to
handle various Chinese humor-related tasks including Crosstalk Response
Selection, Humor Recognition, Joke Generation, etc. Experimental results
demonstrate that CFunModel outperforms popular large language models in these
tasks. Our CFunSet is available at
https://huggingface.co/datasets/ZhenghanYU/CFunSet and CFunModel is available
at https://huggingface.co/ZhenghanYU/CFunModel. A demonstration video of our
work is available at https://youtu.be/MOsISOJ66Ms.
|
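The CFunSet link in the record above points to a Hugging Face Hub repository; a minimal sketch of pulling it with the `datasets` library follows, assuming the repo is public and loadable with the standard API (the split name is an assumption, and the columns are inspected rather than hard-coded, since the dataset card is not part of this record).

```python
# Minimal sketch, assuming the ZhenghanYU/CFunSet repo referenced above is
# public and loadable with the standard datasets API; "train" is an assumed
# split name.
from datasets import load_dataset

cfunset = load_dataset("ZhenghanYU/CFunSet", split="train")
print(cfunset)      # shows the actual features and number of rows
print(cfunset[0])   # first humor entry
```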
2503.20421 | Tom Kempton | Tom Kempton, Stuart Burrell and Connor Cheverall | TempTest: Local Normalization Distortion and the Detection of
Machine-generated Text | null | null | null | null | cs.CL cs.LG math.DS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Existing methods for the zero-shot detection of machine-generated text are
dominated by three statistical quantities: log-likelihood, log-rank, and
entropy. As language models mimic the distribution of human text ever closer,
this will limit our ability to build effective detection algorithms. To combat
this, we introduce a method for detecting machine-generated text that is
entirely agnostic of the generating language model. This is achieved by
targeting a defect in the way that decoding strategies, such as temperature or
top-k sampling, normalize conditional probability measures. This method can be
rigorously theoretically justified, is easily explainable, and is conceptually
distinct from existing methods for detecting machine-generated text. We
evaluate our detector in the white- and black-box settings across various
language models, datasets, and passage lengths. We also study the effect of
paraphrasing attacks on our detector and the extent to which it is biased
against non-native speakers. In each of these settings, the performance of our
test is at least comparable to that of other state-of-the-art text detectors,
and in some cases, we strongly outperform these baselines.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 10:56:59 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Kempton",
"Tom",
""
],
[
"Burrell",
"Stuart",
""
],
[
"Cheverall",
"Connor",
""
]
] | TITLE: TempTest: Local Normalization Distortion and the Detection of
Machine-generated Text
ABSTRACT: Existing methods for the zero-shot detection of machine-generated text are
dominated by three statistical quantities: log-likelihood, log-rank, and
entropy. As language models mimic the distribution of human text ever closer,
this will limit our ability to build effective detection algorithms. To combat
this, we introduce a method for detecting machine-generated text that is
entirely agnostic of the generating language model. This is achieved by
targeting a defect in the way that decoding strategies, such as temperature or
top-k sampling, normalize conditional probability measures. This method can be
rigorously theoretically justified, is easily explainable, and is conceptually
distinct from existing methods for detecting machine-generated text. We
evaluate our detector in the white- and black-box settings across various
language models, datasets, and passage lengths. We also study the effect of
paraphrasing attacks on our detector and the extent to which it is biased
against non-native speakers. In each of these settings, the performance of our
test is at least comparable to that of other state-of-the-art text detectors,
and in some cases, we strongly outperform these baselines.
|
2503.20428 | Francesc Xavier Gaya Morey | F. Xavier Gaya-Morey, Cristina Manresa-Yee, C\'elia Martinie, Jose M.
Buades-Rubio | Evaluating Facial Expression Recognition Datasets for Deep Learning: A
Benchmark Study with Novel Similarity Metrics | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | This study investigates the key characteristics and suitability of widely
used Facial Expression Recognition (FER) datasets for training deep learning
models. In the field of affective computing, FER is essential for interpreting
human emotions, yet the performance of FER systems is highly contingent on the
quality and diversity of the underlying datasets. To address this issue, we
compiled and analyzed 24 FER datasets, including those targeting specific age
groups such as children, adults, and the elderly, and processed them through a
comprehensive normalization pipeline. In addition, we enriched the datasets
with automatic annotations for age and gender, enabling a more nuanced
evaluation of their demographic properties. To further assess dataset efficacy,
we introduce three novel metrics: Local, Global, and Paired Similarity, which
quantitatively measure dataset difficulty, generalization capability, and
cross-dataset transferability. Benchmark experiments using state-of-the-art
neural networks reveal that large-scale, automatically collected datasets
(e.g., AffectNet, FER2013) tend to generalize better, despite issues with
labeling noise and demographic biases, whereas controlled datasets offer higher
annotation quality but limited variability. Our findings provide actionable
recommendations for dataset selection and design, advancing the development of
more robust, fair, and effective FER systems.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 11:01:00 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Gaya-Morey",
"F. Xavier",
""
],
[
"Manresa-Yee",
"Cristina",
""
],
[
"Martinie",
"Célia",
""
],
[
"Buades-Rubio",
"Jose M.",
""
]
] | TITLE: Evaluating Facial Expression Recognition Datasets for Deep Learning: A
Benchmark Study with Novel Similarity Metrics
ABSTRACT: This study investigates the key characteristics and suitability of widely
used Facial Expression Recognition (FER) datasets for training deep learning
models. In the field of affective computing, FER is essential for interpreting
human emotions, yet the performance of FER systems is highly contingent on the
quality and diversity of the underlying datasets. To address this issue, we
compiled and analyzed 24 FER datasets, including those targeting specific age
groups such as children, adults, and the elderly, and processed them through a
comprehensive normalization pipeline. In addition, we enriched the datasets
with automatic annotations for age and gender, enabling a more nuanced
evaluation of their demographic properties. To further assess dataset efficacy,
we introduce three novel metrics: Local, Global, and Paired Similarity, which
quantitatively measure dataset difficulty, generalization capability, and
cross-dataset transferability. Benchmark experiments using state-of-the-art
neural networks reveal that large-scale, automatically collected datasets
(e.g., AffectNet, FER2013) tend to generalize better, despite issues with
labeling noise and demographic biases, whereas controlled datasets offer higher
annotation quality but limited variability. Our findings provide actionable
recommendations for dataset selection and design, advancing the development of
more robust, fair, and effective FER systems.
|
2503.20430 | Sichun Luo | Sichun Luo, Jian Xu, Xiaojie Zhang, Linrong Wang, Sicong Liu, Hanxu
Hou, Linqi Song | RALLRec+: Retrieval Augmented Large Language Model Recommendation with
Reasoning | arXiv admin note: substantial text overlap with arXiv:2502.06101 | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have been integrated into recommender systems to
enhance user behavior comprehension. The Retrieval Augmented Generation (RAG)
technique is further incorporated into these systems to retrieve more relevant
items and improve system performance. However, existing RAG methods have two
shortcomings. \textit{(i)} In the \textit{retrieval} stage, they rely primarily
on textual semantics and often fail to incorporate the most relevant items,
thus constraining system effectiveness. \textit{(ii)} In the
\textit{generation} stage, they lack explicit chain-of-thought reasoning,
further limiting their potential.
In this paper, we propose Representation learning and \textbf{R}easoning
empowered retrieval-\textbf{A}ugmented \textbf{L}arge \textbf{L}anguage model
\textbf{Rec}ommendation (RALLRec+). Specifically, for the retrieval stage, we
prompt LLMs to generate detailed item descriptions and perform joint
representation learning, combining textual and collaborative signals extracted
from the LLM and recommendation models, respectively. To account for the
time-varying nature of user interests, we propose a simple yet effective
reranking method to capture preference dynamics. For the generation phase, we
first evaluate reasoning LLMs on recommendation tasks, uncovering valuable
insights. Then we introduce knowledge-injected prompting and a consistency-based
merging approach to integrate reasoning LLMs with general-purpose LLMs,
enhancing overall performance. Extensive experiments on three real-world
datasets validate our method's effectiveness.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 11:03:34 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Luo",
"Sichun",
""
],
[
"Xu",
"Jian",
""
],
[
"Zhang",
"Xiaojie",
""
],
[
"Wang",
"Linrong",
""
],
[
"Liu",
"Sicong",
""
],
[
"Hou",
"Hanxu",
""
],
[
"Song",
"Linqi",
""
]
] | TITLE: RALLRec+: Retrieval Augmented Large Language Model Recommendation with
Reasoning
ABSTRACT: Large Language Models (LLMs) have been integrated into recommender systems to
enhance user behavior comprehension. The Retrieval Augmented Generation (RAG)
technique is further incorporated into these systems to retrieve more relevant
items and improve system performance. However, existing RAG methods have two
shortcomings. \textit{(i)} In the \textit{retrieval} stage, they rely primarily
on textual semantics and often fail to incorporate the most relevant items,
thus constraining system effectiveness. \textit{(ii)} In the
\textit{generation} stage, they lack explicit chain-of-thought reasoning,
further limiting their potential.
In this paper, we propose Representation learning and \textbf{R}easoning
empowered retrieval-\textbf{A}ugmented \textbf{L}arge \textbf{L}anguage model
\textbf{Rec}ommendation (RALLRec+). Specifically, for the retrieval stage, we
prompt LLMs to generate detailed item descriptions and perform joint
representation learning, combining textual and collaborative signals extracted
from the LLM and recommendation models, respectively. To account for the
time-varying nature of user interests, we propose a simple yet effective
reranking method to capture preference dynamics. For the generation phase, we
first evaluate reasoning LLMs on recommendation tasks, uncovering valuable
insights. Then we introduce knowledge-injected prompting and a consistency-based
merging approach to integrate reasoning LLMs with general-purpose LLMs,
enhancing overall performance. Extensive experiments on three real-world
datasets validate our method's effectiveness.
|
2503.20446 | Hamidreza Saligheh Rad | Farzan Moodi, Fereshteh Khodadadi Shoushtari, Gelareh Valizadeh,
Dornaz Mazinani, Hanieh Mobarak Salari, Hamidreza Saligheh Rad | Attention Xception UNet (AXUNet): A Novel Combination of CNN and
Self-Attention for Brain Tumor Segmentation | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate segmentation of glioma brain tumors is crucial for diagnosis and
treatment planning. Deep learning techniques offer promising solutions, but
optimal model architectures remain under investigation. We used the BraTS 2021
dataset, selecting T1 with contrast enhancement (T1CE), T2, and
Fluid-Attenuated Inversion Recovery (FLAIR) sequences for model development.
The proposed Attention Xception UNet (AXUNet) architecture integrates an
Xception backbone with dot-product self-attention modules, inspired by
state-of-the-art (SOTA) large language models such as Google Bard and OpenAI
ChatGPT, within a UNet-shaped model. We compared AXUNet with SOTA models.
Comparative evaluation on the test set demonstrated improved results over
baseline models. Inception-UNet and Xception-UNet achieved mean Dice scores of
90.88 and 93.24, respectively. Attention ResUNet (AResUNet) attained a mean
Dice score of 92.80, with the highest score of 84.92 for enhancing tumor (ET)
among all models. Attention Gate UNet (AGUNet) yielded a mean Dice score of
90.38. AXUNet outperformed all models with a mean Dice score of 93.73. It
demonstrated superior Dice scores across whole tumor (WT) and tumor core (TC)
regions, achieving 92.59 for WT, 86.81 for TC, and 84.89 for ET. The
integration of the Xception backbone and dot-product self-attention mechanisms
in AXUNet showcases enhanced performance in capturing spatial and contextual
information. The findings underscore the potential utility of AXUNet in
facilitating precise tumor delineation.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 11:22:17 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Moodi",
"Farzan",
""
],
[
"Shoushtari",
"Fereshteh Khodadadi",
""
],
[
"Valizadeh",
"Gelareh",
""
],
[
"Mazinani",
"Dornaz",
""
],
[
"Salari",
"Hanieh Mobarak",
""
],
[
"Rad",
"Hamidreza Saligheh",
""
]
] | TITLE: Attention Xception UNet (AXUNet): A Novel Combination of CNN and
Self-Attention for Brain Tumor Segmentation
ABSTRACT: Accurate segmentation of glioma brain tumors is crucial for diagnosis and
treatment planning. Deep learning techniques offer promising solutions, but
optimal model architectures remain under investigation. We used the BraTS 2021
dataset, selecting T1 with contrast enhancement (T1CE), T2, and
Fluid-Attenuated Inversion Recovery (FLAIR) sequences for model development.
The proposed Attention Xception UNet (AXUNet) architecture integrates an
Xception backbone with dot-product self-attention modules, inspired by
state-of-the-art (SOTA) large language models such as Google Bard and OpenAI
ChatGPT, within a UNet-shaped model. We compared AXUNet with SOTA models.
Comparative evaluation on the test set demonstrated improved results over
baseline models. Inception-UNet and Xception-UNet achieved mean Dice scores of
90.88 and 93.24, respectively. Attention ResUNet (AResUNet) attained a mean
Dice score of 92.80, with the highest score of 84.92 for enhancing tumor (ET)
among all models. Attention Gate UNet (AGUNet) yielded a mean Dice score of
90.38. AXUNet outperformed all models with a mean Dice score of 93.73. It
demonstrated superior Dice scores across whole tumor (WT) and tumor core (TC)
regions, achieving 92.59 for WT, 86.81 for TC, and 84.89 for ET. The
integration of the Xception backbone and dot-product self-attention mechanisms
in AXUNet showcases enhanced performance in capturing spatial and contextual
information. The findings underscore the potential utility of AXUNet in
facilitating precise tumor delineation.
|
2503.20454 | Shing-Ho Jonathan Lin | Yangqi Feng, Shing-Ho J. Lin, Baoyuan Gao, Xian Wei | Lipschitz Constant Meets Condition Number: Learning Robust and Compact
Deep Neural Networks | 13 pages, 6 figures | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent research has revealed that high compression of Deep Neural Networks
(DNNs), e.g., massive pruning of the weight matrix of a DNN, leads to a severe
drop in accuracy and susceptibility to adversarial attacks. Integration of
network pruning into an adversarial training framework has been proposed to
promote adversarial robustness. It has been observed that a highly pruned
weight matrix tends to be ill-conditioned, i.e., increasing the condition
number of the weight matrix. This phenomenon aggravates the vulnerability of a
DNN to input noise. Although a highly pruned weight matrix is considered to be
able to lower the upper bound of the local Lipschitz constant to tolerate large
distortion, the ill-conditionedness of such a weight matrix results in a
non-robust DNN model. To overcome this challenge, this work develops novel
joint constraints to adjust the weight distribution of networks, namely, the
Transformed Sparse Constraint joint with Condition Number Constraint (TSCNC),
which relies on a smoothing distribution and differentiable constraint functions
to reduce condition number and thus avoid the ill-conditionedness of weight
matrices. Furthermore, our theoretical analyses unveil the relevance between
the condition number and the local Lipschitz constant of the weight matrix,
namely, the sharply increasing condition number becomes the dominant factor
that restricts the robustness of over-sparsified models. Extensive experiments
are conducted on several public datasets, and the results show that the
proposed constraints significantly improve the robustness of a DNN with high
pruning rates.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 11:33:18 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Feng",
"Yangqi",
""
],
[
"Lin",
"Shing-Ho J.",
""
],
[
"Gao",
"Baoyuan",
""
],
[
"Wei",
"Xian",
""
]
] | TITLE: Lipschitz Constant Meets Condition Number: Learning Robust and Compact
Deep Neural Networks
ABSTRACT: Recent research has revealed that high compression of Deep Neural Networks
(DNNs), e.g., massive pruning of the weight matrix of a DNN, leads to a severe
drop in accuracy and susceptibility to adversarial attacks. Integration of
network pruning into an adversarial training framework has been proposed to
promote adversarial robustness. It has been observed that a highly pruned
weight matrix tends to be ill-conditioned, i.e., increasing the condition
number of the weight matrix. This phenomenon aggravates the vulnerability of a
DNN to input noise. Although a highly pruned weight matrix is considered to be
able to lower the upper bound of the local Lipschitz constant to tolerate large
distortion, the ill-conditionedness of such a weight matrix results in a
non-robust DNN model. To overcome this challenge, this work develops novel
joint constraints to adjust the weight distribution of networks, namely, the
Transformed Sparse Constraint joint with Condition Number Constraint (TSCNC),
which relies on a smoothing distribution and differentiable constraint functions
to reduce condition number and thus avoid the ill-conditionedness of weight
matrices. Furthermore, our theoretical analyses unveil the relevance between
the condition number and the local Lipschitz constant of the weight matrix,
namely, the sharply increasing condition number becomes the dominant factor
that restricts the robustness of over-sparsified models. Extensive experiments
are conducted on several public datasets, and the results show that the
proposed constraints significantly improve the robustness of a DNN with high
pruning rates.
|
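For the Lipschitz/condition-number entry above, the quantities the abstract discusses can be probed with plain linear algebra: for a linear map x -> Wx, the spectral norm of W is its Lipschitz constant, and the ratio of largest to smallest singular value is its condition number. The sketch below uses an illustrative magnitude-pruning threshold (not the paper's TSCNC constraints) to show how aggressive pruning tends to inflate the condition number.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)) / 16.0            # weights of a dense layer

# Illustrative magnitude pruning: zero out the 95% smallest-magnitude entries.
threshold = np.quantile(np.abs(w), 0.95)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0)

for name, mat in [("dense", w), ("95% pruned", w_pruned)]:
    s = np.linalg.svd(mat, compute_uv=False)            # singular values, descending
    lipschitz = s[0]                                     # spectral norm = Lipschitz constant of x -> mat @ x
    cond = s[0] / s[-1] if s[-1] > 0 else float("inf")   # condition number sigma_max / sigma_min
    print(f"{name}: ||W||_2 = {lipschitz:.3f}, cond(W) = {cond:.1f}")
```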
2503.20460 | Ziye Yu | Ziye Yu, Xin Liu | A Framework for Uncertainty Estimation in Seismology Data Processing
with Application to Extract Rayleigh Wave Dispersion Curves from Noise
Cross-correlation Functions | null | null | null | null | physics.geo-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Extracting meaningful information from large seismic datasets often requires
estimating the uncertainty associated with the results for quantitative
analysis. This uncertainty arises from both the raw data and the manually
labeled annotations. We introduce an uncertainty estimation framework designed
to calculate the uncertainty from manually labeled data. This framework can
efficiently output the true posterior from large datasets. We apply the
framework to extract Rayleigh wave phase velocity dispersion and compute the
posterior distribution of the dispersion results. We utilize 62,899 noise
cross-correlation function (NCF) data from 438 stations located in Yunnan
Province and manually label the Rayleigh phase velocity dispersion curves.
Dispersion curve extraction presents two key challenges: (1) Researchers
typically derive dispersion curves from spectrograms in the period-velocity
domain, limiting the ability to directly study the relationship between NCFs
and dispersion curves; (2) Assessing uncertainty in manually labeled data
remains difficult. To address these challenges, the framework takes the NCFs as
input and directly outputs both the dispersion values and the posterior of the
dispersion values when processing the NCF data. This approach allows us to
construct a flexible deep neural network (DNN) architecture that balances
accuracy and computational efficiency.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 11:44:43 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Yu",
"Ziye",
""
],
[
"Liu",
"Xin",
""
]
] | TITLE: A Framework for Uncertainty Estimation in Seismology Data Processing
with Application to Extract Rayleigh Wave Dispersion Curves from Noise
Cross-correlation Functions
ABSTRACT: Extracting meaningful information from large seismic datasets often requires
estimating the uncertainty associated with the results for quantitative
analysis. This uncertainty arises from both the raw data and the manually
labeled annotations. We introduce an uncertainty estimation framework designed
to calculate the uncertainty from manually labeled data. This framework can
efficiently output the true posterior from large datasets. We apply the
framework to extract Rayleigh wave phase velocity dispersion and compute the
posterior distribution of the dispersion results. We utilize 62,899 noise
cross-correlation function (NCF) data from 438 stations located in Yunnan
Province and manually label the Rayleigh phase velocity dispersion curves.
Dispersion curve extraction presents two key challenges: (1) Researchers
typically derive dispersion curves from spectrograms in the period-velocity
domain, limiting the ability to directly study the relationship between NCFs
and dispersion curves; (2) Assessing uncertainty in manually labeled data
remains difficult. To address these challenges, the framework takes the NCFs as
input and directly outputs both the dispersion values and the posterior of the
dispersion values when processing the NCF data. This approach allows us to
construct a flexible deep neural network (DNN) architecture that balances
accuracy and computational efficiency.
|
2503.20462 | RuoQi Wen | Ruoqi Wen, Rongpeng Li, Xing Xu and Zhifeng Zhao | Multi-agent Uncertainty-Aware Pessimistic Model-Based Reinforcement
Learning for Connected Autonomous Vehicles | 17 pages, 7 figures | null | null | null | cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Reinforcement Learning (DRL) holds significant promise for achieving
human-like Autonomous Vehicle (AV) capabilities, but suffers from low sample
efficiency and challenges in reward design. Model-Based Reinforcement Learning
(MBRL) offers improved sample efficiency and generalizability compared to
Model-Free Reinforcement Learning (MFRL) in various multi-agent decision-making
scenarios. Nevertheless, MBRL faces critical difficulties in estimating
uncertainty during the model learning phase, thereby limiting its scalability
and applicability in real-world scenarios. Additionally, most Connected
Autonomous Vehicle (CAV) studies focus on single-agent decision-making, while
existing multi-agent MBRL solutions lack computationally tractable algorithms
with Probably Approximately Correct (PAC) guarantees, an essential factor for
ensuring policy reliability with limited training data. To address these
challenges, we propose MA-PMBRL, a novel Multi-Agent Pessimistic Model-Based
Reinforcement Learning framework for CAVs, incorporating a max-min optimization
approach to enhance robustness and decision-making. To mitigate the inherent
subjectivity of uncertainty estimation in MBRL and avoid incurring catastrophic
failures in AV, MA-PMBRL employs a pessimistic optimization framework combined
with Projected Gradient Descent (PGD) for both model and policy learning.
MA-PMBRL also employs general function approximations under partial dataset
coverage to enhance learning efficiency and system-level performance. By
bounding the suboptimality of the resulting policy under mild theoretical
assumptions, we successfully establish PAC guarantees for MA-PMBRL,
demonstrating that the proposed framework represents a significant step toward
scalable, efficient, and reliable multi-agent decision-making for CAVs.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 11:49:02 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wen",
"Ruoqi",
""
],
[
"Li",
"Rongpeng",
""
],
[
"Xu",
"Xing",
""
],
[
"Zhao",
"Zhifeng",
""
]
] | TITLE: Multi-agent Uncertainty-Aware Pessimistic Model-Based Reinforcement
Learning for Connected Autonomous Vehicles
ABSTRACT: Deep Reinforcement Learning (DRL) holds significant promise for achieving
human-like Autonomous Vehicle (AV) capabilities, but suffers from low sample
efficiency and challenges in reward design. Model-Based Reinforcement Learning
(MBRL) offers improved sample efficiency and generalizability compared to
Model-Free Reinforcement Learning (MFRL) in various multi-agent decision-making
scenarios. Nevertheless, MBRL faces critical difficulties in estimating
uncertainty during the model learning phase, thereby limiting its scalability
and applicability in real-world scenarios. Additionally, most Connected
Autonomous Vehicle (CAV) studies focus on single-agent decision-making, while
existing multi-agent MBRL solutions lack computationally tractable algorithms
with Probably Approximately Correct (PAC) guarantees, an essential factor for
ensuring policy reliability with limited training data. To address these
challenges, we propose MA-PMBRL, a novel Multi-Agent Pessimistic Model-Based
Reinforcement Learning framework for CAVs, incorporating a max-min optimization
approach to enhance robustness and decision-making. To mitigate the inherent
subjectivity of uncertainty estimation in MBRL and avoid incurring catastrophic
failures in AV, MA-PMBRL employs a pessimistic optimization framework combined
with Projected Gradient Descent (PGD) for both model and policy learning.
MA-PMBRL also employs general function approximations under partial dataset
coverage to enhance learning efficiency and system-level performance. By
bounding the suboptimality of the resulting policy under mild theoretical
assumptions, we successfully establish PAC guarantees for MA-PMBRL,
demonstrating that the proposed framework represents a significant step toward
scalable, efficient, and reliable multi-agent decision-making for CAVs.
|
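The MA-PMBRL entry above uses Projected Gradient Descent (PGD) for both model and policy learning. The following is a generic PGD sketch on a toy quadratic objective with an L2-ball constraint; the objective, constraint set, and step size are illustrative assumptions, not the paper's max-min formulation.

```python
import numpy as np

def project_l2_ball(theta, radius):
    """Project a parameter vector onto the L2 ball of a given radius."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def pgd(grad_fn, theta0, lr=0.1, radius=1.0, steps=100):
    """Generic projected gradient descent: gradient step, then projection."""
    theta = theta0.copy()
    for _ in range(steps):
        theta = project_l2_ball(theta - lr * grad_fn(theta), radius)
    return theta

# Toy usage: minimize ||theta - target||^2 subject to ||theta|| <= 1.
target = np.array([2.0, 0.5])
theta_star = pgd(lambda t: 2.0 * (t - target), np.zeros(2))
print(theta_star, np.linalg.norm(theta_star))  # points toward target, norm ~= 1
```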
2503.20472 | Yucheng Suo | Yucheng Suo, Fan Ma, Linchao Zhu, Tianyi Wang, Fengyun Rao, Yi Yang | From Trial to Triumph: Advancing Long Video Understanding via Visual
Context Sample Scaling and Self-reward Alignment | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal Large language models (MLLMs) show remarkable ability in video
understanding. Nevertheless, understanding long videos remains challenging as
the models can only process a finite number of frames in a single inference,
potentially omitting crucial visual information. To address the challenge, we
propose generating multiple predictions through visual context sampling,
followed by a scoring mechanism to select the final prediction. Specifically,
we devise a bin-wise sampling strategy that enables MLLMs to generate diverse
answers based on various combinations of keyframes, thereby enriching the
visual context. To determine the final prediction from the sampled answers, we
employ a self-reward by linearly combining three scores: (1) a frequency score
indicating the prevalence of each option, (2) a marginal confidence score
reflecting the inter-intra sample certainty of MLLM predictions, and (3) a
reasoning score for different question types, including clue-guided answering
for global questions and temporal self-refocusing for local questions. The
frequency score ensures robustness through majority correctness, the
confidence-aligned score reflects prediction certainty, and the typed-reasoning
score addresses cases with sparse key visual information using tailored
strategies. Experiments show that this approach covers the correct answer for a
high percentage of long video questions, and results on seven datasets show
that our method improves the performance of three MLLMs.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 11:53:03 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Suo",
"Yucheng",
""
],
[
"Ma",
"Fan",
""
],
[
"Zhu",
"Linchao",
""
],
[
"Wang",
"Tianyi",
""
],
[
"Rao",
"Fengyun",
""
],
[
"Yang",
"Yi",
""
]
] | TITLE: From Trial to Triumph: Advancing Long Video Understanding via Visual
Context Sample Scaling and Self-reward Alignment
ABSTRACT: Multi-modal Large language models (MLLMs) show remarkable ability in video
understanding. Nevertheless, understanding long videos remains challenging as
the models can only process a finite number of frames in a single inference,
potentially omitting crucial visual information. To address the challenge, we
propose generating multiple predictions through visual context sampling,
followed by a scoring mechanism to select the final prediction. Specifically,
we devise a bin-wise sampling strategy that enables MLLMs to generate diverse
answers based on various combinations of keyframes, thereby enriching the
visual context. To determine the final prediction from the sampled answers, we
employ a self-reward by linearly combining three scores: (1) a frequency score
indicating the prevalence of each option, (2) a marginal confidence score
reflecting the inter-intra sample certainty of MLLM predictions, and (3) a
reasoning score for different question types, including clue-guided answering
for global questions and temporal self-refocusing for local questions. The
frequency score ensures robustness through majority correctness, the
confidence-aligned score reflects prediction certainty, and the typed-reasoning
score addresses cases with sparse key visual information using tailored
strategies. Experiments show that this approach covers the correct answer for a
high percentage of long video questions, and results on seven datasets show
that our method improves the performance of three MLLMs.
|
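For the long-video entry above, the selection step is described as a linear combination of a frequency score, a confidence score, and a question-type reasoning score over sampled answers. Below is a plain-Python sketch of that kind of score aggregation; the weights, and treating the reasoning score as an already-computed number per sample, are illustrative assumptions rather than the paper's exact procedure.

```python
from collections import Counter

def select_answer(samples, weights=(0.4, 0.4, 0.2)):
    """Pick a final option from sampled (option, confidence, reasoning_score) triples.

    Combines a frequency score, a mean-confidence score, and a reasoning score
    with a fixed linear weighting (the weights are illustrative).
    """
    w_freq, w_conf, w_reason = weights
    freq = Counter(opt for opt, _, _ in samples)
    scores = {}
    for opt in freq:
        conf = [c for o, c, _ in samples if o == opt]
        reason = [r for o, _, r in samples if o == opt]
        scores[opt] = (
            w_freq * freq[opt] / len(samples)        # how often the option was produced
            + w_conf * sum(conf) / len(conf)         # average prediction confidence
            + w_reason * sum(reason) / len(reason)   # question-type-specific reasoning score
        )
    return max(scores, key=scores.get), scores

# Toy usage: five sampled predictions for one multiple-choice question.
samples = [("A", 0.7, 0.6), ("A", 0.8, 0.5), ("B", 0.9, 0.9), ("A", 0.6, 0.4), ("C", 0.5, 0.3)]
print(select_answer(samples))
```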
2503.20485 | Vidya Sudevan | Vidya Sudevan, Fakhreddine Zayer, Rizwana Kausar, Sajid Javed, Hamad
Karki, Giulia De Masi, Jorge Dias | Underwater Image Enhancement by Convolutional Spiking Neural Networks | null | null | null | null | eess.IV cs.AI cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Underwater image enhancement (UIE) is fundamental for marine applications,
including autonomous vision-based navigation. Deep learning methods using
convolutional neural networks (CNN) and vision transformers advanced UIE
performance. Recently, spiking neural networks (SNN) have gained attention for
their lightweight design, energy efficiency, and scalability. This paper
introduces UIE-SNN, the first SNN-based UIE algorithm to improve visibility of
underwater images. UIE-SNN is a 19-layered convolutional spiking
encoder-decoder framework with skip connections, directly trained using a
surrogate gradient-based backpropagation through time (BPTT) strategy. We
explore and validate the influence of training datasets on energy reduction, a
unique advantage of UIE-SNN architecture, in contrast to the conventional
learning-based architectures, where energy consumption is model-dependent.
UIE-SNN optimizes the loss function in latent space representation to
reconstruct clear underwater images. Our algorithm performs on par with its
non-spiking counterpart methods in terms of PSNR and structural similarity
index (SSIM) at reduced timesteps ($T=5$) and energy consumption of $85\%$. The
algorithm is trained on two publicly available benchmark datasets, UIEB and
EUVP, and tested on unseen images from UIEB, EUVP, LSUI, U45, and our custom
UIE dataset. The UIE-SNN algorithm achieves PSNR of \(17.7801~dB\) and SSIM of
\(0.7454\) on UIEB, and PSNR of \(23.1725~dB\) and SSIM of \(0.7890\) on EUVP.
UIE-SNN achieves this algorithmic performance with fewer operators (\(147.49\)
GSOPs) and energy (\(0.1327~J\)) compared to its non-spiking counterpart
(GFLOPs = \(218.88\) and Energy=\(1.0068~J\)). Compared with existing SOTA UIE
methods, UIE-SNN achieves an average of \(6.5\times\) improvement in energy
efficiency. The source code is available at
\href{https://github.com/vidya-rejul/UIE-SNN.git}{UIE-SNN}.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 12:15:38 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Sudevan",
"Vidya",
""
],
[
"Zayer",
"Fakhreddine",
""
],
[
"Kausar",
"Rizwana",
""
],
[
"Javed",
"Sajid",
""
],
[
"Karki",
"Hamad",
""
],
[
"De Masi",
"Giulia",
""
],
[
"Dias",
"Jorge",
""
]
] | TITLE: Underwater Image Enhancement by Convolutional Spiking Neural Networks
ABSTRACT: Underwater image enhancement (UIE) is fundamental for marine applications,
including autonomous vision-based navigation. Deep learning methods using
convolutional neural networks (CNN) and vision transformers advanced UIE
performance. Recently, spiking neural networks (SNN) have gained attention for
their lightweight design, energy efficiency, and scalability. This paper
introduces UIE-SNN, the first SNN-based UIE algorithm to improve visibility of
underwater images. UIE-SNN is a 19-layered convolutional spiking
encoder-decoder framework with skip connections, directly trained using a
surrogate gradient-based backpropagation through time (BPTT) strategy. We
explore and validate the influence of training datasets on energy reduction, a
unique advantage of UIE-SNN architecture, in contrast to the conventional
learning-based architectures, where energy consumption is model-dependent.
UIE-SNN optimizes the loss function in latent space representation to
reconstruct clear underwater images. Our algorithm performs on par with its
non-spiking counterpart methods in terms of PSNR and structural similarity
index (SSIM) at reduced timesteps ($T=5$) and energy consumption of $85\%$. The
algorithm is trained on two publicly available benchmark datasets, UIEB and
EUVP, and tested on unseen images from UIEB, EUVP, LSUI, U45, and our custom
UIE dataset. The UIE-SNN algorithm achieves PSNR of \(17.7801~dB\) and SSIM of
\(0.7454\) on UIEB, and PSNR of \(23.1725~dB\) and SSIM of \(0.7890\) on EUVP.
UIE-SNN achieves this algorithmic performance with fewer operators (\(147.49\)
GSOPs) and energy (\(0.1327~J\)) compared to its non-spiking counterpart
(GFLOPs = \(218.88\) and Energy=\(1.0068~J\)). Compared with existing SOTA UIE
methods, UIE-SNN achieves an average of \(6.5\times\) improvement in energy
efficiency. The source code is available at
\href{https://github.com/vidya-rejul/UIE-SNN.git}{UIE-SNN}.
|
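The UIE-SNN entry above reports PSNR and SSIM. As a reference point, PSNR is straightforward to compute; the snippet below is a standard NumPy implementation for 8-bit images (the toy images are random and purely for illustration), not the paper's evaluation code.

```python
import numpy as np

def psnr(reference, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and an enhanced image."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy usage: a clean image vs. a noisy version of it.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(128, 128, 3))
noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```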
2503.20488 | Haoran Zheng | Haoran Zheng, Renchi Yang, Jianliang Xu | Adaptive Local Clustering over Attributed Graphs | Accepted by ICDE2025. The code is available at
https://github.com/HaoranZ99/alac | null | null | null | cs.SI cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a graph $G$ and a seed node $v_s$, the objective of local graph
clustering (LGC) is to identify a subgraph $C_s \in G$ (a.k.a. local cluster)
surrounding $v_s$ in time roughly linear with the size of $C_s$. This approach
yields personalized clusters without needing to access the entire graph, which
makes it highly suitable for numerous applications involving large graphs.
However, most existing solutions merely rely on the topological connectivity
between nodes in $G$, rendering them vulnerable to missing or noisy links that
are commonly present in real-world graphs.
To address this issue, this paper resorts to leveraging the complementary
nature of graph topology and node attributes to enhance local clustering
quality. To effectively exploit the attribute information, we first formulate
the LGC as an estimation of the bidirectional diffusion distribution (BDD),
which is specialized for capturing the multi-hop affinity between nodes in the
presence of attributes. Furthermore, we propose LACA, an efficient and
effective approach for LGC that achieves superb empirical performance on
multiple real datasets while maintaining strong locality. The core components
of LACA include (i) a fast and theoretically-grounded preprocessing technique
for node attributes, (ii) an adaptive algorithm for diffusing any vectors over
$G$ with rigorous theoretical guarantees and expedited convergence, and (iii)
an effective three-step scheme for BDD approximation. Extensive experiments,
comparing 17 competitors on 8 real datasets, show that LACA outperforms all
competitors in terms of result quality measured against ground truth local
clusters, while also being up to orders of magnitude faster. The code is
available at https://github.com/HaoranZ99/alac.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 12:24:07 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zheng",
"Haoran",
""
],
[
"Yang",
"Renchi",
""
],
[
"Xu",
"Jianliang",
""
]
] | TITLE: Adaptive Local Clustering over Attributed Graphs
ABSTRACT: Given a graph $G$ and a seed node $v_s$, the objective of local graph
clustering (LGC) is to identify a subgraph $C_s \in G$ (a.k.a. local cluster)
surrounding $v_s$ in time roughly linear with the size of $C_s$. This approach
yields personalized clusters without needing to access the entire graph, which
makes it highly suitable for numerous applications involving large graphs.
However, most existing solutions merely rely on the topological connectivity
between nodes in $G$, rendering them vulnerable to missing or noisy links that
are commonly present in real-world graphs.
To address this issue, this paper resorts to leveraging the complementary
nature of graph topology and node attributes to enhance local clustering
quality. To effectively exploit the attribute information, we first formulate
the LGC as an estimation of the bidirectional diffusion distribution (BDD),
which is specialized for capturing the multi-hop affinity between nodes in the
presence of attributes. Furthermore, we propose LACA, an efficient and
effective approach for LGC that achieves superb empirical performance on
multiple real datasets while maintaining strong locality. The core components
of LACA include (i) a fast and theoretically-grounded preprocessing technique
for node attributes, (ii) an adaptive algorithm for diffusing any vectors over
$G$ with rigorous theoretical guarantees and expedited convergence, and (iii)
an effective three-step scheme for BDD approximation. Extensive experiments,
comparing 17 competitors on 8 real datasets, show that LACA outperforms all
competitors in terms of result quality measured against ground truth local
clusters, while also being up to orders of magnitude faster. The code is
available at https://github.com/HaoranZ99/alac.
|
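For the local-clustering entry above: the general idea of diffusing mass from a seed node and reading off a local cluster can be illustrated with a simple personalized-PageRank-style iteration. The sketch below is not LACA or its bidirectional diffusion distribution; the restart probability, iteration count, and toy graph are assumptions for illustration only.

```python
import numpy as np

def seed_diffusion(adj, seed, alpha=0.15, iters=50):
    """Diffuse probability mass from a seed node (personalized-PageRank style)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    p_transition = adj / np.maximum(deg, 1.0)     # row-stochastic random-walk matrix
    e = np.zeros(n)
    e[seed] = 1.0                                 # restart distribution on the seed
    x = e.copy()
    for _ in range(iters):
        x = alpha * e + (1.0 - alpha) * (p_transition.T @ x)
    return x

# Toy usage: two 4-cliques joined by one bridge edge; the seed sits in the first clique.
adj = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                adj[i, j] = 1.0
adj[3, 4] = adj[4, 3] = 1.0                       # bridge between the cliques
scores = seed_diffusion(adj, seed=0)
print(np.argsort(-scores)[:4])                    # the first clique: a natural local cluster
```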
2503.20491 | Jiale Cheng | Jiale Cheng, Ruiliang Lyu, Xiaotao Gu, Xiao Liu, Jiazheng Xu, Yida Lu,
Jiayan Teng, Zhuoyi Yang, Yuxiao Dong, Jie Tang, Hongning Wang, Minlie Huang | VPO: Aligning Text-to-Video Generation Models with Prompt Optimization | null | null | null | null | cs.CV cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Video generation models have achieved remarkable progress in text-to-video
tasks. These models are typically trained on text-video pairs with highly
detailed and carefully crafted descriptions, while real-world user inputs
during inference are often concise, vague, or poorly structured. This gap makes
prompt optimization crucial for generating high-quality videos. Current methods
often rely on large language models (LLMs) to refine prompts through in-context
learning, but suffer from several limitations: they may distort user intent,
omit critical details, or introduce safety risks. Moreover, they optimize
prompts without considering the impact on the final video quality, which can
lead to suboptimal results. To address these issues, we introduce VPO, a
principled framework that optimizes prompts based on three core principles:
harmlessness, accuracy, and helpfulness. The generated prompts faithfully
preserve user intents and, more importantly, enhance the safety and quality of
generated videos. To achieve this, VPO employs a two-stage optimization
approach. First, we construct and refine a supervised fine-tuning (SFT) dataset
based on principles of safety and alignment. Second, we introduce both
text-level and video-level feedback to further optimize the SFT model with
preference learning. Our extensive experiments demonstrate that VPO
significantly improves safety, alignment, and video quality compared to
baseline methods. Moreover, VPO shows strong generalization across video
generation models. Furthermore, we demonstrate that VPO could outperform and be
combined with RLHF methods on video generation models, underscoring the
effectiveness of VPO in aligning video generation models. Our code and data are
publicly available at https://github.com/thu-coai/VPO.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 12:28:20 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Cheng",
"Jiale",
""
],
[
"Lyu",
"Ruiliang",
""
],
[
"Gu",
"Xiaotao",
""
],
[
"Liu",
"Xiao",
""
],
[
"Xu",
"Jiazheng",
""
],
[
"Lu",
"Yida",
""
],
[
"Teng",
"Jiayan",
""
],
[
"Yang",
"Zhuoyi",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Tang",
"Jie",
""
],
[
"Wang",
"Hongning",
""
],
[
"Huang",
"Minlie",
""
]
] | TITLE: VPO: Aligning Text-to-Video Generation Models with Prompt Optimization
ABSTRACT: Video generation models have achieved remarkable progress in text-to-video
tasks. These models are typically trained on text-video pairs with highly
detailed and carefully crafted descriptions, while real-world user inputs
during inference are often concise, vague, or poorly structured. This gap makes
prompt optimization crucial for generating high-quality videos. Current methods
often rely on large language models (LLMs) to refine prompts through in-context
learning, but suffer from several limitations: they may distort user intent,
omit critical details, or introduce safety risks. Moreover, they optimize
prompts without considering the impact on the final video quality, which can
lead to suboptimal results. To address these issues, we introduce VPO, a
principled framework that optimizes prompts based on three core principles:
harmlessness, accuracy, and helpfulness. The generated prompts faithfully
preserve user intents and, more importantly, enhance the safety and quality of
generated videos. To achieve this, VPO employs a two-stage optimization
approach. First, we construct and refine a supervised fine-tuning (SFT) dataset
based on principles of safety and alignment. Second, we introduce both
text-level and video-level feedback to further optimize the SFT model with
preference learning. Our extensive experiments demonstrate that VPO
significantly improves safety, alignment, and video quality compared to
baseline methods. Moreover, VPO shows strong generalization across video
generation models. Furthermore, we demonstrate that VPO could outperform and be
combined with RLHF methods on video generation models, underscoring the
effectiveness of VPO in aligning video generation models. Our code and data are
publicly available at https://github.com/thu-coai/VPO.
|
2503.20492 | Fanhu Zeng | Fanhu Zeng, Zhen Cheng, Fei Zhu, Xu-Yao Zhang | Towards Efficient and General-Purpose Few-Shot Misclassification
Detection for Vision-Language Models | preprint | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable prediction by classifiers is crucial for their deployment in high
security and dynamically changing situations. However, modern neural networks
often exhibit overconfidence for misclassified predictions, highlighting the
need for confidence estimation to detect errors. Despite the achievements
obtained by existing methods on small-scale datasets, they all require training
from scratch and there are no efficient and effective misclassification
detection (MisD) methods, hindering practical application towards large-scale
and ever-changing datasets. In this paper, we pave the way to exploit vision
language model (VLM) leveraging text information to establish an efficient and
general-purpose misclassification detection framework. By harnessing the power
of VLM, we construct FSMisD, a Few-Shot prompt learning framework for MisD to
refrain from training from scratch and therefore improve tuning efficiency. To
enhance misclassification detection ability, we use adaptive pseudo sample
generation and a novel negative loss to mitigate the issue of overconfidence by
pushing category prompts away from pseudo features. We conduct comprehensive
experiments with prompt learning methods and validate the generalization
ability across various datasets with domain shift. Significant and consistent
improvement demonstrates the effectiveness, efficiency and generalizability of
our approach.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 12:31:04 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zeng",
"Fanhu",
""
],
[
"Cheng",
"Zhen",
""
],
[
"Zhu",
"Fei",
""
],
[
"Zhang",
"Xu-Yao",
""
]
] | TITLE: Towards Efficient and General-Purpose Few-Shot Misclassification
Detection for Vision-Language Models
ABSTRACT: Reliable prediction by classifiers is crucial for their deployment in high
security and dynamically changing situations. However, modern neural networks
often exhibit overconfidence for misclassified predictions, highlighting the
need for confidence estimation to detect errors. Despite the achievements
obtained by existing methods on small-scale datasets, they all require training
from scratch and there are no efficient and effective misclassification
detection (MisD) methods, hindering practical application towards large-scale
and ever-changing datasets. In this paper, we pave the way to exploit vision
language model (VLM) leveraging text information to establish an efficient and
general-purpose misclassification detection framework. By harnessing the power
of VLM, we construct FSMisD, a Few-Shot prompt learning framework for MisD to
refrain from training from scratch and therefore improve tuning efficiency. To
enhance misclassification detection ability, we use adaptive pseudo sample
generation and a novel negative loss to mitigate the issue of overconfidence by
pushing category prompts away from pseudo features. We conduct comprehensive
experiments with prompt learning methods and validate the generalization
ability across various datasets with domain shift. Significant and consistent
improvement demonstrates the effectiveness, efficiency and generalizability of
our approach.
|
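The misclassification-detection entry above builds on confidence estimation. A common baseline in this area (not the paper's FSMisD framework) is the maximum softmax probability: flag predictions whose top softmax score falls below a threshold. A minimal NumPy sketch, with an arbitrary 0.5 threshold:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_scores(logits):
    """Maximum softmax probability: a standard confidence score for flagging likely errors."""
    return softmax(logits).max(axis=-1)

# Toy usage: flag the least-confident predictions for review.
rng = np.random.default_rng(0)
logits = rng.standard_normal((5, 10)) * 3.0
scores = msp_scores(logits)
preds = logits.argmax(axis=-1)
flagged = scores < 0.5                       # low confidence -> possible misclassification
for p, s, f in zip(preds, scores, flagged):
    print(f"pred={p}, confidence={s:.2f}, flag={f}")
```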
2503.20496 | Aishik Mandal | Aishik Mandal, Dana Atzil-Slonim, Thamar Solorio, Iryna Gurevych | Enhancing Depression Detection via Question-wise Modality Fusion | 18 pages, 5 figures, The 10th Workshop on Computational Linguistics
and Clinical Psychology | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Depression is a highly prevalent and disabling condition that incurs
substantial personal and societal costs. Current depression diagnosis involves
determining the depression severity of a person through self-reported
questionnaires or interviews conducted by clinicians. This often leads to
delayed treatment and involves substantial human resources. Thus, several works
try to automate the process using multimodal data. However, they usually
overlook the following: i) The variable contribution of each modality for each
question in the questionnaire and ii) Using ordinal classification for the
task. This results in sub-optimal fusion and training methods. In this work, we
propose a novel Question-wise Modality Fusion (QuestMF) framework trained with
a novel Imbalanced Ordinal Log-Loss (ImbOLL) function to tackle these issues.
The performance of our framework is comparable to the current state-of-the-art
models on the E-DAIC dataset and enhances interpretability by predicting scores
for each question. This will help clinicians identify an individual's symptoms,
allowing them to customise their interventions accordingly. We also make the
code for the QuestMF framework publicly available.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 12:34:34 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Mandal",
"Aishik",
""
],
[
"Atzil-Slonim",
"Dana",
""
],
[
"Solorio",
"Thamar",
""
],
[
"Gurevych",
"Iryna",
""
]
] | TITLE: Enhancing Depression Detection via Question-wise Modality Fusion
ABSTRACT: Depression is a highly prevalent and disabling condition that incurs
substantial personal and societal costs. Current depression diagnosis involves
determining the depression severity of a person through self-reported
questionnaires or interviews conducted by clinicians. This often leads to
delayed treatment and involves substantial human resources. Thus, several works
try to automate the process using multimodal data. However, they usually
overlook the following: i) The variable contribution of each modality for each
question in the questionnaire and ii) Using ordinal classification for the
task. This results in sub-optimal fusion and training methods. In this work, we
propose a novel Question-wise Modality Fusion (QuestMF) framework trained with
a novel Imbalanced Ordinal Log-Loss (ImbOLL) function to tackle these issues.
The performance of our framework is comparable to the current state-of-the-art
models on the E-DAIC dataset and enhances interpretability by predicting scores
for each question. This will help clinicians identify an individual's symptoms,
allowing them to customise their interventions accordingly. We also make the
code for the QuestMF framework publicly available.
|
2503.20504 | Zehui Liao | Zehui Liao, Shishuai Hu, Ke Zou, Huazhu Fu, Liangli Zhen, Yong Xia | Vision-Amplified Semantic Entropy for Hallucination Detection in Medical
Visual Question Answering | 11 pages, 2 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal large language models (MLLMs) have demonstrated significant
potential in medical Visual Question Answering (VQA). Yet, they remain prone to
hallucinations-incorrect responses that contradict input images, posing
substantial risks in clinical decision-making. Detecting these hallucinations
is essential for establishing trust in MLLMs among clinicians and patients,
thereby enabling their real-world adoption. Current hallucination detection
methods, especially semantic entropy (SE), have demonstrated promising
hallucination detection capacity for LLMs. However, adapting SE to medical
MLLMs by incorporating visual perturbations presents a dilemma. Weak
perturbations preserve image content and ensure clinical validity, but may be
overlooked by medical MLLMs, which tend to over-rely on language priors. In
contrast, strong perturbations can distort essential diagnostic features,
compromising clinical interpretation. To address this issue, we propose Vision
Amplified Semantic Entropy (VASE), which incorporates weak image
transformations and amplifies the impact of visual input, to improve
hallucination detection in medical VQA. We first estimate the semantic
predictive distribution under weak visual transformations to preserve clinical
validity, and then amplify visual influence by contrasting this distribution
with that derived from a distorted image. The entropy of the resulting
distribution is estimated as VASE. Experiments on two medical open-ended VQA
datasets demonstrate that VASE consistently outperforms existing hallucination
detection methods.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 12:45:34 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Liao",
"Zehui",
""
],
[
"Hu",
"Shishuai",
""
],
[
"Zou",
"Ke",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Zhen",
"Liangli",
""
],
[
"Xia",
"Yong",
""
]
] | TITLE: Vision-Amplified Semantic Entropy for Hallucination Detection in Medical
Visual Question Answering
ABSTRACT: Multimodal large language models (MLLMs) have demonstrated significant
potential in medical Visual Question Answering (VQA). Yet, they remain prone to
hallucinations-incorrect responses that contradict input images, posing
substantial risks in clinical decision-making. Detecting these hallucinations
is essential for establishing trust in MLLMs among clinicians and patients,
thereby enabling their real-world adoption. Current hallucination detection
methods, especially semantic entropy (SE), have demonstrated promising
hallucination detection capacity for LLMs. However, adapting SE to medical
MLLMs by incorporating visual perturbations presents a dilemma. Weak
perturbations preserve image content and ensure clinical validity, but may be
overlooked by medical MLLMs, which tend to over-rely on language priors. In
contrast, strong perturbations can distort essential diagnostic features,
compromising clinical interpretation. To address this issue, we propose Vision
Amplified Semantic Entropy (VASE), which incorporates weak image
transformations and amplifies the impact of visual input, to improve
hallucination detection in medical VQA. We first estimate the semantic
predictive distribution under weak visual transformations to preserve clinical
validity, and then amplify visual influence by contrasting this distribution
with that derived from a distorted image. The entropy of the resulting
distribution is estimated as VASE. Experiments on two medical open-ended VQA
datasets demonstrate that VASE consistently outperforms existing hallucination
detection methods.
|
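For the VASE entry above, the abstract describes amplifying visual influence by contrasting a predictive distribution obtained under weak transforms with one obtained from a distorted image, then taking the entropy of the result. The sketch below shows one simple way to contrast two categorical distributions and compute entropy; the ratio-based re-weighting and the gamma parameter are illustrative assumptions, not the estimator defined in the paper.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a categorical distribution (natural log)."""
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def contrast_distributions(p_weak, p_distorted, gamma=1.0, eps=1e-12):
    """Up-weight what the weakly-transformed view predicts relative to a distorted view.

    Re-weights p_weak by the ratio (p_weak / p_distorted)^gamma and renormalizes.
    This is an illustrative contrast, not the exact VASE estimator.
    """
    w = p_weak * (p_weak / np.clip(p_distorted, eps, 1.0)) ** gamma
    return w / w.sum()

# Toy usage over 4 semantic answer clusters.
p_weak = np.array([0.55, 0.25, 0.15, 0.05])       # prediction under clinically valid weak transforms
p_distorted = np.array([0.40, 0.30, 0.20, 0.10])  # prediction when the image is heavily distorted
p_amp = contrast_distributions(p_weak, p_distorted)
print(entropy(p_weak), entropy(p_amp))            # entropy before vs. after amplification
```

In the semantic-entropy framing, a higher entropy of the resulting distribution signals a higher risk of hallucination.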
2503.20516 | Shahabedin Nabavi | Mahya Nikouei, Bita Baroutian, Shahabedin Nabavi, Fateme Taraghi,
Atefe Aghaei, Ayoob Sajedi, Mohsen Ebrahimi Moghaddam | Small Object Detection: A Comprehensive Survey on Challenges, Techniques
and Real-World Applications | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Small object detection (SOD) is a critical yet challenging task in computer
vision, with applications spanning surveillance, autonomous systems,
medical imaging, and remote sensing. Unlike larger objects, small objects
contain limited spatial and contextual information, making accurate detection
difficult. Challenges such as low resolution, occlusion, background
interference, and class imbalance further complicate the problem. This survey
provides a comprehensive review of recent advancements in SOD using deep
learning, focusing on articles published in Q1 journals during 2024-2025. We
analyzed challenges, state-of-the-art techniques, datasets, evaluation metrics,
and real-world applications. Recent advancements in deep learning have
introduced innovative solutions, including multi-scale feature extraction,
Super-Resolution (SR) techniques, attention mechanisms, and transformer-based
architectures. Additionally, improvements in data augmentation, synthetic data
generation, and transfer learning have addressed data scarcity and domain
adaptation issues. Furthermore, emerging trends such as lightweight neural
networks, knowledge distillation (KD), and self-supervised learning offer
promising directions for improving detection efficiency, particularly in
resource-constrained environments like Unmanned Aerial Vehicles (UAV)-based
surveillance and edge computing. We also review widely used datasets, along
with standard evaluation metrics such as mean Average Precision (mAP) and
size-specific AP scores. The survey highlights real-world applications,
including traffic monitoring, maritime surveillance, industrial defect
detection, and precision agriculture. Finally, we discuss open research
challenges and future directions, emphasizing the need for robust domain
adaptation techniques, better feature fusion strategies, and real-time
performance optimization.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 12:58:13 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Nikouei",
"Mahya",
""
],
[
"Baroutian",
"Bita",
""
],
[
"Nabavi",
"Shahabedin",
""
],
[
"Taraghi",
"Fateme",
""
],
[
"Aghaei",
"Atefe",
""
],
[
"Sajedi",
"Ayoob",
""
],
[
"Moghaddam",
"Mohsen Ebrahimi",
""
]
] | TITLE: Small Object Detection: A Comprehensive Survey on Challenges, Techniques
and Real-World Applications
ABSTRACT: Small object detection (SOD) is a critical yet challenging task in computer
vision, with applications spanning surveillance, autonomous systems,
medical imaging, and remote sensing. Unlike larger objects, small objects
contain limited spatial and contextual information, making accurate detection
difficult. Challenges such as low resolution, occlusion, background
interference, and class imbalance further complicate the problem. This survey
provides a comprehensive review of recent advancements in SOD using deep
learning, focusing on articles published in Q1 journals during 2024-2025. We
analyzed challenges, state-of-the-art techniques, datasets, evaluation metrics,
and real-world applications. Recent advancements in deep learning have
introduced innovative solutions, including multi-scale feature extraction,
Super-Resolution (SR) techniques, attention mechanisms, and transformer-based
architectures. Additionally, improvements in data augmentation, synthetic data
generation, and transfer learning have addressed data scarcity and domain
adaptation issues. Furthermore, emerging trends such as lightweight neural
networks, knowledge distillation (KD), and self-supervised learning offer
promising directions for improving detection efficiency, particularly in
resource-constrained environments like Unmanned Aerial Vehicles (UAV)-based
surveillance and edge computing. We also review widely used datasets, along
with standard evaluation metrics such as mean Average Precision (mAP) and
size-specific AP scores. The survey highlights real-world applications,
including traffic monitoring, maritime surveillance, industrial defect
detection, and precision agriculture. Finally, we discuss open research
challenges and future directions, emphasizing the need for robust domain
adaptation techniques, better feature fusion strategies, and real-time
performance optimization.
|
2503.20527 | Zhicheng Guo | Zhicheng Guo, Sijie Cheng, Yuchen Niu, Hao Wang, Sicheng Zhou, Wenbing
Huang, Yang Liu | StableToolBench-MirrorAPI: Modeling Tool Environments as Mirrors of
7,000+ Real-World APIs | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rapid advancement of large language models (LLMs) has spurred significant
interest in tool learning, where LLMs are augmented with external tools to
tackle complex tasks. However, existing tool environments face challenges in
balancing stability, scalability, and realness, particularly for benchmarking
purposes. To address this problem, we propose MirrorAPI, a novel framework that
trains specialized LLMs to accurately simulate real API responses, effectively
acting as "mirrors" to tool environments. Using a comprehensive dataset of
request-response pairs from 7,000+ APIs, we employ supervised fine-tuning and
chain-of-thought reasoning to enhance simulation fidelity. MirrorAPI achieves
superior accuracy and stability compared to state-of-the-art methods, as
demonstrated by its performance on the newly constructed MirrorAPI-Bench and
its integration into StableToolBench.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 13:13:03 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Guo",
"Zhicheng",
""
],
[
"Cheng",
"Sijie",
""
],
[
"Niu",
"Yuchen",
""
],
[
"Wang",
"Hao",
""
],
[
"Zhou",
"Sicheng",
""
],
[
"Huang",
"Wenbing",
""
],
[
"Liu",
"Yang",
""
]
] | TITLE: StableToolBench-MirrorAPI: Modeling Tool Environments as Mirrors of
7,000+ Real-World APIs
ABSTRACT: The rapid advancement of large language models (LLMs) has spurred significant
interest in tool learning, where LLMs are augmented with external tools to
tackle complex tasks. However, existing tool environments face challenges in
balancing stability, scalability, and realness, particularly for benchmarking
purposes. To address this problem, we propose MirrorAPI, a novel framework that
trains specialized LLMs to accurately simulate real API responses, effectively
acting as "mirrors" to tool environments. Using a comprehensive dataset of
request-response pairs from 7,000+ APIs, we employ supervised fine-tuning and
chain-of-thought reasoning to enhance simulation fidelity. MirrorAPI achieves
superior accuracy and stability compared to state-of-the-art methods, as
demonstrated by its performance on the newly constructed MirrorAPI-Bench and
its integration into StableToolBench.
|
2503.20568 | Soumitra Ghosh | Soumitra Ghosh, Begona Altuna, Saeed Farzi, Pietro Ferrazzi, Alberto
Lavelli, Giulia Mezzanotte, Manuela Speranza and Bernardo Magnini | Low-resource Information Extraction with the European Clinical Case
Corpus | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present E3C-3.0, a multilingual dataset in the medical domain, comprising
clinical cases annotated with diseases and test-result relations. The dataset
includes both native texts in five languages (English, French, Italian, Spanish
and Basque) and texts translated and projected from the English source into
five target languages (Greek, Italian, Polish, Slovak, and Slovenian). A
semi-automatic approach has been implemented, including automatic annotation
projection based on Large Language Models (LLMs) and human revision. We present
several experiments showing that current state-of-the-art LLMs can benefit from
being fine-tuned on the E3C-3.0 dataset. We also show that transfer learning in
different languages is very effective, mitigating the scarcity of data.
Finally, we compare performance both on native data and on projected data. We
release the data at
https://huggingface.co/collections/NLP-FBK/e3c-projected-676a7d6221608d60e4e9fd89 .
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 14:07:40 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Ghosh",
"Soumitra",
""
],
[
"Altuna",
"Begona",
""
],
[
"Farzi",
"Saeed",
""
],
[
"Ferrazzi",
"Pietro",
""
],
[
"Lavelli",
"Alberto",
""
],
[
"Mezzanotte",
"Giulia",
""
],
[
"Speranza",
"Manuela",
""
],
[
"Magnini",
"Bernardo",
""
]
] | TITLE: Low-resource Information Extraction with the European Clinical Case
Corpus
ABSTRACT: We present E3C-3.0, a multilingual dataset in the medical domain, comprising
clinical cases annotated with diseases and test-result relations. The dataset
includes both native texts in five languages (English, French, Italian, Spanish
and Basque) and texts translated and projected from the English source into
five target languages (Greek, Italian, Polish, Slovak, and Slovenian). A
semi-automatic approach has been implemented, including automatic annotation
projection based on Large Language Models (LLMs) and human revision. We present
several experiments showing that current state-of-the-art LLMs can benefit from
being fine-tuned on the E3C-3.0 dataset. We also show that transfer learning in
different languages is very effective, mitigating the scarcity of data.
Finally, we compare performance both on native data and on projected data. We
release the data at
https://huggingface.co/collections/NLP-FBK/e3c-projected-676a7d6221608d60e4e9fd89 .
|
2503.20571 | Richard McKinley | Vinzenz Uhr, Ivan Diaz, Christian Rummel, and Richard McKinley | Exploring Robustness of Cortical Morphometry in the presence of white
matter lesions, using Diffusion Models for Lesion Filling | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Cortical thickness measurements from magnetic resonance imaging, an important
biomarker in many neurodegenerative and neurological disorders, are derived by
many tools from an initial voxel-wise tissue segmentation. White matter (WM)
hypointensities in T1-weighted imaging, such as those arising from multiple
sclerosis or small vessel disease, are known to affect the output of brain
segmentation methods and therefore bias cortical thickness measurements. These
effects are well-documented among traditional brain segmentation tools but have
not been studied extensively in tools based on deep-learning segmentations,
which promise to be more robust. In this paper, we explore the potential of
deep learning to enhance the accuracy and efficiency of cortical thickness
measurement in the presence of WM lesions, using a high-quality lesion filling
algorithm leveraging denoising diffusion networks.
A pseudo-3D U-Net architecture trained on the OASIS dataset to generate
synthetic healthy tissue, conditioned on binary lesion masks derived from the
MSSEG dataset, allows realistic removal of white matter lesions in multiple
sclerosis patients. By applying morphometry methods to patient images before
and after lesion filling, we analysed robustness of global and regional
cortical thickness measurements in the presence of white matter lesions.
Methods based on a deep learning-based segmentation of the brain (Fastsurfer,
DL+DiReCT, ANTsPyNet) exhibited greater robustness than those using classical
segmentation methods (Freesurfer, ANTs).
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 14:18:35 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Uhr",
"Vinzenz",
""
],
[
"Diaz",
"Ivan",
""
],
[
"Rummel",
"Christian",
""
],
[
"McKinley",
"Richard",
""
]
] | TITLE: Exploring Robustness of Cortical Morphometry in the presence of white
matter lesions, using Diffusion Models for Lesion Filling
ABSTRACT: Cortical thickness measurements from magnetic resonance imaging, an important
biomarker in many neurodegenerative and neurological disorders, are derived by
many tools from an initial voxel-wise tissue segmentation. White matter (WM)
hypointensities in T1-weighted imaging, such as those arising from multiple
sclerosis or small vessel disease, are known to affect the output of brain
segmentation methods and therefore bias cortical thickness measurements. These
effects are well-documented among traditional brain segmentation tools but have
not been studied extensively in tools based on deep-learning segmentations,
which promise to be more robust. In this paper, we explore the potential of
deep learning to enhance the accuracy and efficiency of cortical thickness
measurement in the presence of WM lesions, using a high-quality lesion filling
algorithm leveraging denoising diffusion networks.
A pseudo-3D U-Net architecture trained on the OASIS dataset to generate
synthetic healthy tissue, conditioned on binary lesion masks derived from the
MSSEG dataset, allows realistic removal of white matter lesions in multiple
sclerosis patients. By applying morphometry methods to patient images before
and after lesion filling, we analysed robustness of global and regional
cortical thickness measurements in the presence of white matter lesions.
Methods based on a deep learning-based segmentation of the brain (Fastsurfer,
DL+DiReCT, ANTsPyNet) exhibited greater robustness than those using classical
segmentation methods (Freesurfer, ANTs).
|
2503.20579 | Berk \c{C}akar | Berk \c{C}akar, Charles M. Sale, Sophie Chen, Ethan H. Burmane,
Dongyoon Lee, James C. Davis | Is Reuse All You Need? A Systematic Comparison of Regular Expression
Composition Strategies | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Composing regular expressions (regexes) is a common but challenging
engineering activity. Software engineers struggle with regex complexity,
leading to defects, performance issues, and security vulnerabilities.
Researchers have proposed tools to synthesize regexes automatically, and recent
generative AI techniques are also promising. Meanwhile, developers commonly
reuse existing regexes from Internet sources and codebases. In this study, we
ask a simple question: are regex composition tasks unique enough to merit
dedicated machinery, or is reuse all we need?
We answer this question through a systematic evaluation of state-of-the-art
regex reuse and synthesis strategies. We begin by collecting a novel dataset of
regex composition tasks mined from GitHub and RegExLib (55,137 unique tasks
with solution regexes). To address the absence of an automated regex reuse
formulation, we introduce reuse-by-example, a Programming by Example (PbE)
approach that leverages a curated database of production-ready regexes.
Our evaluation then uses multiple dimensions, including a novel metric, to
compare reuse-by-example against two synthesis approaches: formal regex
synthesizers and generative AI (LLMs). Although all approaches can solve these
composition tasks accurately, reuse-by-example and LLMs both do far better over
the range of metrics we applied. Ceteris paribus, prefer the cheaper solution --
for regex composition, perhaps reuse is all you need. Our findings provide
actionable insights for developers selecting regex composition strategies and
inform the design of future tools to improve regex reliability in software
systems.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 14:25:27 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Çakar",
"Berk",
""
],
[
"Sale",
"Charles M.",
""
],
[
"Chen",
"Sophie",
""
],
[
"Burmane",
"Ethan H.",
""
],
[
"Lee",
"Dongyoon",
""
],
[
"Davis",
"James C.",
""
]
] | TITLE: Is Reuse All You Need? A Systematic Comparison of Regular Expression
Composition Strategies
ABSTRACT: Composing regular expressions (regexes) is a common but challenging
engineering activity. Software engineers struggle with regex complexity,
leading to defects, performance issues, and security vulnerabilities.
Researchers have proposed tools to synthesize regexes automatically, and recent
generative AI techniques are also promising. Meanwhile, developers commonly
reuse existing regexes from Internet sources and codebases. In this study, we
ask a simple question: are regex composition tasks unique enough to merit
dedicated machinery, or is reuse all we need?
We answer this question through a systematic evaluation of state-of-the-art
regex reuse and synthesis strategies. We begin by collecting a novel dataset of
regex composition tasks mined from GitHub and RegExLib (55,137 unique tasks
with solution regexes). To address the absence of an automated regex reuse
formulation, we introduce reuse-by-example, a Programming by Example (PbE)
approach that leverages a curated database of production-ready regexes.
Our evaluation then uses multiple dimensions, including a novel metric, to
compare reuse-by-example against two synthesis approaches: formal regex
synthesizers and generative AI (LLMs). Although all approaches can solve these
composition tasks accurately, reuse-by-example and LLMs both do far better over
the range of metrics we applied. Ceteris paribus, prefer the cheaper solution --
for regex composition, perhaps reuse is all you need. Our findings provide
actionable insights for developers selecting regex composition strategies and
inform the design of future tools to improve regex reliability in software
systems.
|
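For the regex entry above, reuse-by-example is a Programming-by-Example formulation: given positive and negative example strings, retrieve a regex from a curated database that matches all positives and none of the negatives. The sketch below is a toy version with a hard-coded candidate list standing in for that database; the candidates and example strings are illustrative.

```python
import re

def reuse_by_example(candidates, positives, negatives):
    """Return candidate regexes that fully match all positives and no negatives.

    A toy stand-in for reuse-by-example: the real approach queries a curated
    database of production-ready regexes; the candidates here are illustrative.
    """
    selected = []
    for pattern in candidates:
        rx = re.compile(pattern)
        if all(rx.fullmatch(p) for p in positives) and not any(rx.fullmatch(n) for n in negatives):
            selected.append(pattern)
    return selected

# Toy usage: find a reusable regex for simple ISO dates.
candidates = [r"\d+", r"\d{4}-\d{2}-\d{2}", r"[A-Za-z]+"]
positives = ["2025-03-26", "1999-12-31"]
negatives = ["26/03/2025", "hello"]
print(reuse_by_example(candidates, positives, negatives))  # ['\\d{4}-\\d{2}-\\d{2}']
```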
2503.20584 | Nishtha Srivastava | Claudia Quinteros-Cartaya, Javier Quintero-Arenas, Andrea
Padilla-Lafarga, Carlos Moraila, Johannes Faber, Wei Li, Jonas K\"ohler,
Nishtha Srivastava | A Deep Learning Pipeline for Large Earthquake Analysis using High-Rate
Global Navigation Satellite System Data | null | null | null | null | physics.geo-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep learning techniques for processing large and complex datasets have
unlocked new opportunities for fast and reliable earthquake analysis using
Global Navigation Satellite System (GNSS) data. This work presents a deep
learning model, MagEs, to estimate earthquake magnitudes using data from
high-rate GNSS stations. Furthermore, MagEs is integrated with the DetEQ model
for earthquake detection within the SAIPy package, creating a comprehensive
pipeline for earthquake detection and magnitude estimation using HR-GNSS data.
The MagEs model provides magnitude estimates within seconds of detection when
using stations within 3 degrees of the epicenter, which are the most relevant
for real-time applications. However, since it has been trained on data from
stations up to 7.5 degrees away, it can also analyze data from larger
distances. The model can process data from a single station at a time or
combine data from up to three stations. The model was trained using synthetic
data reflecting rupture scenarios in the Chile subduction zone, and the results
confirm strong performance for Chilean earthquakes. Although tests from other
tectonic regions also yielded good results, incorporating regional data through
transfer learning could further improve its performance in diverse seismic
settings. The model has not yet been deployed in an operational real-time
monitoring system, but simulation tests that update data in a second-by-second
manner demonstrate its potential for future real-time adaptation.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 14:33:06 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Quinteros-Cartaya",
"Claudia",
""
],
[
"Quintero-Arenas",
"Javier",
""
],
[
"Padilla-Lafarga",
"Andrea",
""
],
[
"Moraila",
"Carlos",
""
],
[
"Faber",
"Johannes",
""
],
[
"Li",
"Wei",
""
],
[
"Köhler",
"Jonas",
""
],
[
"Srivastava",
"Nishtha",
""
]
] | TITLE: A Deep Learning Pipeline for Large Earthquake Analysis using High-Rate
Global Navigation Satellite System Data
ABSTRACT: Deep learning techniques for processing large and complex datasets have
unlocked new opportunities for fast and reliable earthquake analysis using
Global Navigation Satellite System (GNSS) data. This work presents a deep
learning model, MagEs, to estimate earthquake magnitudes using data from
high-rate GNSS stations. Furthermore, MagEs is integrated with the DetEQ model
for earthquake detection within the SAIPy package, creating a comprehensive
pipeline for earthquake detection and magnitude estimation using HR-GNSS data.
The MagEs model provides magnitude estimates within seconds of detection when
using stations within 3 degrees of the epicenter, which are the most relevant
for real-time applications. However, since it has been trained on data from
stations up to 7.5 degrees away, it can also analyze data from larger
distances. The model can process data from a single station at a time or
combine data from up to three stations. The model was trained using synthetic
data reflecting rupture scenarios in the Chile subduction zone, and the results
confirm strong performance for Chilean earthquakes. Although tests from other
tectonic regions also yielded good results, incorporating regional data through
transfer learning could further improve its performance in diverse seismic
settings. The model has not yet been deployed in an operational real-time
monitoring system, but simulation tests that update data in a second-by-second
manner demonstrate its potential for future real-time adaptation.
|
2503.20594 | Tobias Reisch | Tobias Reisch and Andr\'as Borsos and Stefan Thurner | Supply chain network rewiring dynamics at the firm-level | 26 pages, 25 figures | null | null | null | econ.GN nlin.AO physics.soc-ph q-fin.EC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supply chain networks (SCN) form the structural backbone of any society. They
constitute the societal metabolism that literally produces everything for
everybody by coordinating practically every single person on the planet. SCNs
are by no means static but undergo permanent change through the entry and exit
of firms and the re-arrangement of supply relations. Here we use a unique
dataset to explore the temporal evolution of firms and their supplier-buyer
relations of a national SCN. Monthly reported value added tax data from Hungary
from 2014 to 2022 allows us to reconstruct the entire economy with 711,248
companies and 38,644,400 connections, covering practically every re-structuring
event of an entire economy at firm-level resolution. We find that per year
about 25\% of firms exit the SCN while 28\% new ones enter. On average, 55\% of
all supply-links present in one year will not be present in the next. We report
the half-life time of supply-links to be 13 months. New links attach
super-preferentially to firms with a probability, $p(i)\propto k_i^{1.08}$,
with $k_i$ firm $i$'s number of supply-connections. We calibrate a simple
statistical network generation model that reproduces the stylized
characteristics of the dominant Hungarian SCN. The model not only reproduces
local network features such as in- and out-degree distributions, assortativity
and clustering structure, but also captures realistic systemic risk profiles.
We discuss how the rewiring dynamics captured by the present model are
essential for quantifying the economy's resilience and estimating shock propagation.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 14:42:44 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Reisch",
"Tobias",
""
],
[
"Borsos",
"András",
""
],
[
"Thurner",
"Stefan",
""
]
] | TITLE: Supply chain network rewiring dynamics at the firm-level
ABSTRACT: Supply chain networks (SCN) form the structural backbone of any society. They
constitute the societal metabolism that literally produces everything for
everybody by coordinating practically every single person on the planet. SCNs
are by no means static but undergo permanent change through the entry and exit
of firms and the re-arrangement of supply relations. Here we use a unique
dataset to explore the temporal evolution of firms and their supplier-buyer
relations of a national SCN. Monthly reported value added tax data from Hungary
from 2014 to 2022 allows us to reconstruct the entire economy with 711,248
companies and 38,644,400 connections, covering practically every re-structuring
event of an entire economy at firm-level resolution. We find that per year
about 25\% of firms exit the SCN while 28\% new ones enter. On average, 55\% of
all supply-links present in one year will not be present in the next. We report
the half-life time of supply-links to be 13 months. New links attach
super-preferentially to firms with a probability, $p(i)\propto k_i^{1.08}$,
with $k_i$ firm $i$'s number of supply-connections. We calibrate a simple
statistical network generation model that reproduces the stylized
characteristics of the dominant Hungarian SCN. The model not only reproduces
local network features such as in- and out-degree distributions, assortativity
and clustering structure, but also captures realistic systemic risk profiles.
We discuss how the rewiring dynamics captured by the present model are
essential for quantifying the economy's resilience and estimating shock propagation.
|
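Two quantities reported in the abstract above ("Supply chain network rewiring dynamics at the firm-level") lend themselves to a small numerical illustration: super-preferential attachment with probability proportional to k_i^1.08 and a supply-link half-life of 13 months. The sketch below only illustrates these two reported statistics; it is not the authors' calibrated network generation model, and the degree floor used for isolated firms is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def attach_new_links(degrees, n_new_links, gamma=1.08):
    """Pick target firms for new supply links super-preferentially,
    i.e. with probability proportional to k_i ** gamma (gamma > 1)."""
    k = np.asarray(degrees, dtype=float)
    weights = np.maximum(k, 1.0) ** gamma   # assumption: floor at 1 so isolated firms can be chosen
    probs = weights / weights.sum()
    return rng.choice(len(k), size=n_new_links, p=probs)

def monthly_survival_prob(half_life_months=13):
    """Per-month survival probability consistent with a 13-month link half-life."""
    return 0.5 ** (1.0 / half_life_months)

if __name__ == "__main__":
    degrees = [1, 2, 5, 50, 200]
    print(attach_new_links(degrees, n_new_links=10))
    print(round(monthly_survival_prob(), 4))   # ~0.948 per month
```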
2503.20612 | Hao Fu | Hao Fu, Hanbin Zhao, Jiahua Dong, Chao Zhang, Hui Qian | IAP: Improving Continual Learning of Vision-Language Models via
Instance-Aware Prompting | Code can be found at https://github.com/FerdinandZJU/IAP | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent pre-trained vision-language models (PT-VLMs) often face a Multi-Domain
Class-Incremental Learning (MCIL) scenario in practice, where several classes
and domains of multi-modal tasks arrive incrementally. Without access to
previously learned tasks and unseen tasks, memory-constrained MCIL suffers from
forward and backward forgetting. To alleviate the above challenges,
parameter-efficient fine-tuning techniques (PEFT), such as prompt tuning, are
employed to adapt the PT-VLM to the diverse incrementally learned tasks. To
achieve effective new task adaptation, existing methods only consider the
effect of PEFT strategy selection, but neglect the influence of PEFT parameter
setting (e.g., prompting). In this paper, we tackle the challenge of optimizing
prompt designs for diverse tasks in MCIL and propose an Instance-Aware
Prompting (IAP) framework. Specifically, our Instance-Aware Gated Prompting
(IA-GP) module enhances adaptation to new tasks while mitigating forgetting by
dynamically assigning prompts across transformer layers at the instance level.
Our Instance-Aware Class-Distribution-Driven Prompting (IA-CDDP) improves the
task adaptation process by determining an accurate task-label-related
confidence score for each instance. Experimental evaluations across 11
datasets, using three performance metrics, demonstrate the effectiveness of our
proposed method. Code can be found at https://github.com/FerdinandZJU/IAP.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 14:59:23 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Fu",
"Hao",
""
],
[
"Zhao",
"Hanbin",
""
],
[
"Dong",
"Jiahua",
""
],
[
"Zhang",
"Chao",
""
],
[
"Qian",
"Hui",
""
]
] | TITLE: IAP: Improving Continual Learning of Vision-Language Models via
Instance-Aware Prompting
ABSTRACT: Recent pre-trained vision-language models (PT-VLMs) often face a Multi-Domain
Class-Incremental Learning (MCIL) scenario in practice, where several classes
and domains of multi-modal tasks arrive incrementally. Without access to
previously learned tasks and unseen tasks, memory-constrained MCIL suffers from
forward and backward forgetting. To alleviate the above challenges,
parameter-efficient fine-tuning techniques (PEFT), such as prompt tuning, are
employed to adapt the PT-VLM to the diverse incrementally learned tasks. To
achieve effective new task adaptation, existing methods only consider the
effect of PEFT strategy selection, but neglect the influence of PEFT parameter
setting (e.g., prompting). In this paper, we tackle the challenge of optimizing
prompt designs for diverse tasks in MCIL and propose an Instance-Aware
Prompting (IAP) framework. Specifically, our Instance-Aware Gated Prompting
(IA-GP) module enhances adaptation to new tasks while mitigating forgetting by
dynamically assigning prompts across transformer layers at the instance level.
Our Instance-Aware Class-Distribution-Driven Prompting (IA-CDDP) improves the
task adaptation process by determining an accurate task-label-related
confidence score for each instance. Experimental evaluations across 11
datasets, using three performance metrics, demonstrate the effectiveness of our
proposed method. Code can be found at https://github.com/FerdinandZJU/IAP.
|
2503.20618 | Davide Domini | Davide Domini and Gianluca Aguzzi and Mirko Viroli | ProFed: a Benchmark for Proximity-based non-IID Federated Learning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | In recent years, Federated learning (FL) has gained significant
attention within the machine learning community. Although various FL algorithms
have been proposed in the literature, their performance often degrades when
data across clients is non-independently and identically distributed (non-IID).
This skewness in data distribution often emerges from geographic patterns, with
notable examples including regional linguistic variations in text data or
localized traffic patterns in urban environments. Such scenarios result in IID
data within specific regions but non-IID data across regions. However, existing
FL algorithms are typically evaluated by randomly splitting non-IID data across
devices, disregarding their spatial distribution. To address this gap, we
introduce ProFed, a benchmark that simulates data splits with varying degrees
of skewness across different regions. We incorporate several skewness methods
from the literature and apply them to well-known datasets, including MNIST,
FashionMNIST, CIFAR-10, and CIFAR-100. Our goal is to provide researchers with
a standardized framework to evaluate FL algorithms more effectively and
consistently against established baselines.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 15:08:08 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Domini",
"Davide",
""
],
[
"Aguzzi",
"Gianluca",
""
],
[
"Viroli",
"Mirko",
""
]
] | TITLE: ProFed: a Benchmark for Proximity-based non-IID Federated Learning
ABSTRACT: In recent years, Federated learning (FL) has gained significant
attention within the machine learning community. Although various FL algorithms
have been proposed in the literature, their performance often degrades when
data across clients is non-independently and identically distributed (non-IID).
This skewness in data distribution often emerges from geographic patterns, with
notable examples including regional linguistic variations in text data or
localized traffic patterns in urban environments. Such scenarios result in IID
data within specific regions but non-IID data across regions. However, existing
FL algorithms are typically evaluated by randomly splitting non-IID data across
devices, disregarding their spatial distribution. To address this gap, we
introduce ProFed, a benchmark that simulates data splits with varying degrees
of skewness across different regions. We incorporate several skewness methods
from the literature and apply them to well-known datasets, including MNIST,
FashionMNIST, CIFAR-10, and CIFAR-100. Our goal is to provide researchers with
a standardized framework to evaluate FL algorithms more effectively and
consistently against established baselines.
|
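ProFed, described above, simulates region-wise data splits with controllable skewness. One common way to realize such skew, shown here as a hedged sketch only (ProFed's actual splitting methods and API are not reproduced), is to draw each class's share per region from a Dirichlet distribution, which yields roughly homogeneous data within a region but skewed label distributions across regions.

```python
import numpy as np

def region_label_split(labels, n_regions, alpha=0.5, seed=0):
    """Assign sample indices to regions so that label proportions differ
    across regions (Dirichlet skew) but are homogeneous within a region."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    regions = [[] for _ in range(n_regions)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Fraction of class c given to each region, drawn from a Dirichlet prior.
        shares = rng.dirichlet(alpha * np.ones(n_regions))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for r, part in enumerate(np.split(idx, cuts)):
            regions[r].extend(part.tolist())
    return regions

if __name__ == "__main__":
    y = np.random.default_rng(1).integers(0, 10, size=1000)  # fake labels
    parts = region_label_split(y, n_regions=4)
    print([len(p) for p in parts])
```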
2503.20630 | Hac{\i} \.Ismail Aslan | Haci Ismail Aslan, Philipp Wiesner, Ping Xiong, Odej Kao | $\beta$-GNN: A Robust Ensemble Approach Against Graph Structure
Perturbation | This is the author's version of the paper accepted at EuroMLSys 2025 | null | 10.1145/3721146.3721949 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) are playing an increasingly important role in
the efficient operation and security of computing systems, with applications in
workload scheduling, anomaly detection, and resource management. However, their
vulnerability to network perturbations poses a significant challenge. We
propose $\beta$-GNN, a model enhancing GNN robustness without sacrificing clean
data performance. $\beta$-GNN uses a weighted ensemble, combining any GNN with
a multi-layer perceptron. A learned dynamic weight, $\beta$, modulates the
GNN's contribution. This $\beta$ not only weights GNN influence but also
indicates data perturbation levels, enabling proactive mitigation. Experimental
results on diverse datasets show $\beta$-GNN's superior adversarial accuracy
and attack severity quantification. Crucially, $\beta$-GNN avoids perturbation
assumptions, preserving clean data structure and performance.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 15:24:07 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Aslan",
"Haci Ismail",
""
],
[
"Wiesner",
"Philipp",
""
],
[
"Xiong",
"Ping",
""
],
[
"Kao",
"Odej",
""
]
] | TITLE: $\beta$-GNN: A Robust Ensemble Approach Against Graph Structure
Perturbation
ABSTRACT: Graph Neural Networks (GNNs) are playing an increasingly important role in
the efficient operation and security of computing systems, with applications in
workload scheduling, anomaly detection, and resource management. However, their
vulnerability to network perturbations poses a significant challenge. We
propose $\beta$-GNN, a model enhancing GNN robustness without sacrificing clean
data performance. $\beta$-GNN uses a weighted ensemble, combining any GNN with
a multi-layer perceptron. A learned dynamic weight, $\beta$, modulates the
GNN's contribution. This $\beta$ not only weights GNN influence but also
indicates data perturbation levels, enabling proactive mitigation. Experimental
results on diverse datasets show $\beta$-GNN's superior adversarial accuracy
and attack severity quantification. Crucially, $\beta$-GNN avoids perturbation
assumptions, preserving clean data structure and performance.
|
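The core mechanism of $\beta$-GNN described above is a weighted ensemble of an arbitrary GNN and an MLP, mixed by a learned weight $\beta$ that also serves as a perturbation indicator. The PyTorch sketch below shows one plausible realization; the sigmoid parameterization of $\beta$, the class name BetaEnsemble, and the (x, edge_index) call signature are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class BetaEnsemble(nn.Module):
    """Weighted ensemble of a GNN and an MLP with a learned scalar weight beta.
    A sketch of the idea only; whether beta is input-dependent in the paper
    is not specified here."""

    def __init__(self, gnn: nn.Module, mlp: nn.Module):
        super().__init__()
        self.gnn = gnn
        self.mlp = mlp
        self._beta = nn.Parameter(torch.zeros(1))  # sigmoid(0) = 0.5 to start

    @property
    def beta(self):
        # Keep the mixing weight in (0, 1); a low beta can signal a perturbed graph.
        return torch.sigmoid(self._beta)

    def forward(self, x, edge_index):
        gnn_out = self.gnn(x, edge_index)   # uses the (possibly perturbed) graph
        mlp_out = self.mlp(x)               # ignores graph structure entirely
        return self.beta * gnn_out + (1.0 - self.beta) * mlp_out
```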
2503.20648 | Lei Xu | Raj Sanjay Shah, Lei Xu, Qianchu Liu, Jon Burnsky, Drew Bertagnolli,
Chaitanya Shivade | TN-Eval: Rubric and Evaluation Protocols for Measuring the Quality of
Behavioral Therapy Notes | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Behavioral therapy notes are important for both legal compliance and patient
care. Unlike progress notes in physical health, quality standards for
behavioral therapy notes remain underdeveloped. To address this gap, we
collaborated with licensed therapists to design a comprehensive rubric for
evaluating therapy notes across key dimensions: completeness, conciseness, and
faithfulness. Further, we extend a public dataset of behavioral health
conversations with therapist-written notes and LLM-generated notes, and apply
our evaluation framework to measure their quality. We find that: (1) A
rubric-based manual evaluation protocol offers more reliable and interpretable
results than traditional Likert-scale annotations. (2) LLMs can mimic human
evaluators in assessing completeness and conciseness but struggle with
faithfulness. (3) Therapist-written notes often lack completeness and
conciseness, while LLM-generated notes contain hallucinations. Surprisingly, in
a blind test, therapists prefer and judge LLM-generated notes to be superior to
therapist-written notes.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 15:40:40 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Shah",
"Raj Sanjay",
""
],
[
"Xu",
"Lei",
""
],
[
"Liu",
"Qianchu",
""
],
[
"Burnsky",
"Jon",
""
],
[
"Bertagnolli",
"Drew",
""
],
[
"Shivade",
"Chaitanya",
""
]
] | TITLE: TN-Eval: Rubric and Evaluation Protocols for Measuring the Quality of
Behavioral Therapy Notes
ABSTRACT: Behavioral therapy notes are important for both legal compliance and patient
care. Unlike progress notes in physical health, quality standards for
behavioral therapy notes remain underdeveloped. To address this gap, we
collaborated with licensed therapists to design a comprehensive rubric for
evaluating therapy notes across key dimensions: completeness, conciseness, and
faithfulness. Further, we extend a public dataset of behavioral health
conversations with therapist-written notes and LLM-generated notes, and apply
our evaluation framework to measure their quality. We find that: (1) A
rubric-based manual evaluation protocol offers more reliable and interpretable
results than traditional Likert-scale annotations. (2) LLMs can mimic human
evaluators in assessing completeness and conciseness but struggle with
faithfulness. (3) Therapist-written notes often lack completeness and
conciseness, while LLM-generated notes contain hallucinations. Surprisingly, in
a blind test, therapists prefer and judge LLM-generated notes to be superior to
therapist-written notes.
|
2503.20653 | Nathan Vin\c{c}on | Antoine Schieb, Bilal Hadjadji, Daniel Tshokola Mweze, Natalia
Fernanda Valderrama, Valentin Derang\`ere, Laurent Arnould, Sylvain Ladoire,
Alain Lalande, Louis-Oscar Morel, Nathan Vin\c{c}on | UWarp: A Whole Slide Image Registration Pipeline to Characterize
Scanner-Induced Local Domain Shift | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Histopathology slide digitization introduces scanner-induced domain shift
that can significantly impact computational pathology models based on deep
learning methods. In the state-of-the-art, this shift is often characterized at
a broad scale (slide-level or dataset-level) but not patch-level, which limits
our comprehension of the impact of localized tissue characteristics on the
accuracy of the deep learning models. To address this challenge, we present a
domain shift analysis framework based on UWarp, a novel registration tool
designed to accurately align histological slides scanned under varying
conditions. UWarp employs a hierarchical registration approach, combining
global affine transformations with fine-grained local corrections to achieve
robust tissue patch alignment. We evaluate UWarp using two private datasets,
CypathLung and BosomShieldBreast, containing whole slide images scanned by
multiple devices. Our experiments demonstrate that UWarp outperforms existing
open-source registration methods, achieving a median target registration error
(TRE) of less than 4 pixels (<1 micrometer at 40x magnification) while
significantly reducing computational time. Additionally, we apply UWarp to
characterize scanner-induced local domain shift in the predictions of
Breast-NEOprAIdict, a deep learning model for breast cancer pathological
response prediction. We find that prediction variability is strongly correlated
with tissue density on a given patch. Our findings highlight the importance of
localized domain shift analysis and suggest that UWarp can serve as a valuable
tool for improving model robustness and domain adaptation strategies in
computational pathology.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 15:48:38 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Schieb",
"Antoine",
""
],
[
"Hadjadji",
"Bilal",
""
],
[
"Mweze",
"Daniel Tshokola",
""
],
[
"Valderrama",
"Natalia Fernanda",
""
],
[
"Derangère",
"Valentin",
""
],
[
"Arnould",
"Laurent",
""
],
[
"Ladoire",
"Sylvain",
""
],
[
"Lalande",
"Alain",
""
],
[
"Morel",
"Louis-Oscar",
""
],
[
"Vinçon",
"Nathan",
""
]
] | TITLE: UWarp: A Whole Slide Image Registration Pipeline to Characterize
Scanner-Induced Local Domain Shift
ABSTRACT: Histopathology slide digitization introduces scanner-induced domain shift
that can significantly impact computational pathology models based on deep
learning methods. In the state-of-the-art, this shift is often characterized at
a broad scale (slide-level or dataset-level) but not patch-level, which limits
our comprehension of the impact of localized tissue characteristics on the
accuracy of the deep learning models. To address this challenge, we present a
domain shift analysis framework based on UWarp, a novel registration tool
designed to accurately align histological slides scanned under varying
conditions. UWarp employs a hierarchical registration approach, combining
global affine transformations with fine-grained local corrections to achieve
robust tissue patch alignment. We evaluate UWarp using two private datasets,
CypathLung and BosomShieldBreast, containing whole slide images scanned by
multiple devices. Our experiments demonstrate that UWarp outperforms existing
open-source registration methods, achieving a median target registration error
(TRE) of less than 4 pixels (<1 micrometer at 40x magnification) while
significantly reducing computational time. Additionally, we apply UWarp to
characterize scanner-induced local domain shift in the predictions of
Breast-NEOprAIdict, a deep learning model for breast cancer pathological
response prediction. We find that prediction variability is strongly correlated
with tissue density on a given patch. Our findings highlight the importance of
localized domain shift analysis and suggest that UWarp can serve as a valuable
tool for improving model robustness and domain adaptation strategies in
computational pathology.
|
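The UWarp evaluation above reports a median target registration error (TRE) below 4 pixels. TRE itself is a standard quantity: the distance between warped landmarks and their corresponding fixed landmarks. The snippet below computes a median TRE for a toy affine transform; it illustrates only the metric, not UWarp's hierarchical registration pipeline.

```python
import numpy as np

def target_registration_error(moving_pts, fixed_pts, transform):
    """Median target registration error (TRE) in pixels between corresponding
    landmarks, after applying `transform` to the moving landmarks."""
    warped = np.asarray([transform(p) for p in moving_pts], dtype=float)
    fixed = np.asarray(fixed_pts, dtype=float)
    errors = np.linalg.norm(warped - fixed, axis=1)
    return float(np.median(errors))

if __name__ == "__main__":
    affine = lambda p: np.array([1.001 * p[0] + 2.0, 0.999 * p[1] - 1.5])  # toy global affine
    moving = [(100, 200), (4000, 2500), (12000, 9000)]
    rng = np.random.default_rng(0)
    fixed = [affine(np.array(p)) + rng.normal(0, 1.5, 2) for p in moving]
    print(target_registration_error(moving, fixed, affine))
```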
2503.20654 | Xiangwen Zhang | Xiangwen Zhang, Qian Zhang, Longfei Han, Qiang Qu, Xiaoming Chen | AccidentSim: Generating Physically Realistic Vehicle Collision Videos
from Real-World Accident Reports | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Collecting real-world vehicle accident videos for autonomous driving research
is challenging due to their rarity and complexity. While existing driving video
generation methods may produce visually realistic videos, they often fail to
deliver physically realistic simulations because they lack the capability to
generate accurate post-collision trajectories. In this paper, we introduce
AccidentSim, a novel framework that generates physically realistic vehicle
collision videos by extracting and utilizing the physical clues and contextual
information available in real-world vehicle accident reports. Specifically,
AccidentSim leverages a reliable physical simulator to replicate post-collision
vehicle trajectories from the physical and contextual information in the
accident reports and to build a vehicle collision trajectory dataset. This
dataset is then used to fine-tune a language model, enabling it to respond to
user prompts and predict physically consistent post-collision trajectories
across various driving scenarios based on user descriptions. Finally, we employ
Neural Radiance Fields (NeRF) to render high-quality backgrounds, merging them
with the foreground vehicles that exhibit physically realistic trajectories to
generate vehicle collision videos. Experimental results demonstrate that the
videos produced by AccidentSim excel in both visual and physical authenticity.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 15:50:42 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhang",
"Xiangwen",
""
],
[
"Zhang",
"Qian",
""
],
[
"Han",
"Longfei",
""
],
[
"Qu",
"Qiang",
""
],
[
"Chen",
"Xiaoming",
""
]
] | TITLE: AccidentSim: Generating Physically Realistic Vehicle Collision Videos
from Real-World Accident Reports
ABSTRACT: Collecting real-world vehicle accident videos for autonomous driving research
is challenging due to their rarity and complexity. While existing driving video
generation methods may produce visually realistic videos, they often fail to
deliver physically realistic simulations because they lack the capability to
generate accurate post-collision trajectories. In this paper, we introduce
AccidentSim, a novel framework that generates physically realistic vehicle
collision videos by extracting and utilizing the physical clues and contextual
information available in real-world vehicle accident reports. Specifically,
AccidentSim leverages a reliable physical simulator to replicate post-collision
vehicle trajectories from the physical and contextual information in the
accident reports and to build a vehicle collision trajectory dataset. This
dataset is then used to fine-tune a language model, enabling it to respond to
user prompts and predict physically consistent post-collision trajectories
across various driving scenarios based on user descriptions. Finally, we employ
Neural Radiance Fields (NeRF) to render high-quality backgrounds, merging them
with the foreground vehicles that exhibit physically realistic trajectories to
generate vehicle collision videos. Experimental results demonstrate that the
videos produced by AccidentSim excel in both visual and physical authenticity.
|
2503.20663 | Mingze Sun | Mingze Sun, Shiwei Mao, Keyi Chen, Yurun Chen, Shunlin Lu, Jingbo
Wang, Junting Dong, Ruqi Huang | ARMO: Autoregressive Rigging for Multi-Category Objects | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in large-scale generative models have significantly
improved the quality and diversity of 3D shape generation. However, most
existing methods focus primarily on generating static 3D models, overlooking
the potentially dynamic nature of certain shapes, such as humanoids, animals,
and insects. To address this gap, we focus on rigging, a fundamental task in
animation that establishes skeletal structures and skinning for 3D models. In
this paper, we introduce OmniRig, the first large-scale rigging dataset,
comprising 79,499 meshes with detailed skeleton and skinning information.
Unlike traditional benchmarks that rely on predefined standard poses (e.g.,
A-pose, T-pose), our dataset embraces diverse shape categories, styles, and
poses. Leveraging this rich dataset, we propose ARMO, a novel rigging framework
that utilizes an autoregressive model to predict both joint positions and
connectivity relationships in a unified manner. By treating the skeletal
structure as a complete graph and discretizing it into tokens, we encode the
joints using an auto-encoder to obtain a latent embedding and an autoregressive
model to predict the tokens. A mesh-conditioned latent diffusion model is used
to predict the latent embedding for conditional skeleton generation. Our method
addresses the limitations of regression-based approaches, which often suffer
from error accumulation and suboptimal connectivity estimation. Through
extensive experiments on the OmniRig dataset, our approach achieves
state-of-the-art performance in skeleton prediction, demonstrating improved
generalization across diverse object categories. The code and dataset will be
made public for academic use upon acceptance.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 15:56:48 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Sun",
"Mingze",
""
],
[
"Mao",
"Shiwei",
""
],
[
"Chen",
"Keyi",
""
],
[
"Chen",
"Yurun",
""
],
[
"Lu",
"Shunlin",
""
],
[
"Wang",
"Jingbo",
""
],
[
"Dong",
"Junting",
""
],
[
"Huang",
"Ruqi",
""
]
] | TITLE: ARMO: Autoregressive Rigging for Multi-Category Objects
ABSTRACT: Recent advancements in large-scale generative models have significantly
improved the quality and diversity of 3D shape generation. However, most
existing methods focus primarily on generating static 3D models, overlooking
the potentially dynamic nature of certain shapes, such as humanoids, animals,
and insects. To address this gap, we focus on rigging, a fundamental task in
animation that establishes skeletal structures and skinning for 3D models. In
this paper, we introduce OmniRig, the first large-scale rigging dataset,
comprising 79,499 meshes with detailed skeleton and skinning information.
Unlike traditional benchmarks that rely on predefined standard poses (e.g.,
A-pose, T-pose), our dataset embraces diverse shape categories, styles, and
poses. Leveraging this rich dataset, we propose ARMO, a novel rigging framework
that utilizes an autoregressive model to predict both joint positions and
connectivity relationships in a unified manner. By treating the skeletal
structure as a complete graph and discretizing it into tokens, we encode the
joints using an auto-encoder to obtain a latent embedding and an autoregressive
model to predict the tokens. A mesh-conditioned latent diffusion model is used
to predict the latent embedding for conditional skeleton generation. Our method
addresses the limitations of regression-based approaches, which often suffer
from error accumulation and suboptimal connectivity estimation. Through
extensive experiments on the OmniRig dataset, our approach achieves
state-of-the-art performance in skeleton prediction, demonstrating improved
generalization across diverse object categories. The code and dataset will be
made public for academic use upon acceptance.
|
2503.20672 | Yuyang Peng | Yuyang Peng, Shishi Xiao, Keming Wu, Qisheng Liao, Bohan Chen, Kevin
Lin, Danqing Huang, Ji Li, Yuhui Yuan | BizGen: Advancing Article-level Visual Text Rendering for Infographics
Generation | Accepted by CVPR 2025. Project Page: https://bizgen-msra.github.io | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, state-of-the-art text-to-image generation models, such as Flux and
Ideogram 2.0, have made significant progress in sentence-level visual text
rendering. In this paper, we focus on the more challenging scenarios of
article-level visual text rendering and address a novel task of generating
high-quality business content, including infographics and slides, based on user
provided article-level descriptive prompts and ultra-dense layouts. The
fundamental challenges are twofold: significantly longer context lengths and
the scarcity of high-quality business content data.
In contrast to most previous works that focus on a limited number of
sub-regions and sentence-level prompts, ensuring precise adherence to
ultra-dense layouts with tens or even hundreds of sub-regions in business
content is far more challenging. We make two key technical contributions: (i)
the construction of a scalable, high-quality business content dataset, i.e.,
Infographics-650K, equipped with ultra-dense layouts and prompts by
implementing a layer-wise retrieval-augmented infographic generation scheme;
and (ii) a layout-guided cross attention scheme, which injects tens of
region-wise prompts into a set of cropped region latent space according to the
ultra-dense layouts, and refines each sub-region flexibly during inference
using a layout-conditional CFG.
We demonstrate the strong results of our system compared to previous SOTA
systems such as Flux and SD3 on our BizEval prompt set. Additionally, we
conduct thorough ablation experiments to verify the effectiveness of each
component. We hope our constructed Infographics-650K and BizEval can encourage
the broader community to advance the progress of business content generation.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 16:04:57 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Peng",
"Yuyang",
""
],
[
"Xiao",
"Shishi",
""
],
[
"Wu",
"Keming",
""
],
[
"Liao",
"Qisheng",
""
],
[
"Chen",
"Bohan",
""
],
[
"Lin",
"Kevin",
""
],
[
"Huang",
"Danqing",
""
],
[
"Li",
"Ji",
""
],
[
"Yuan",
"Yuhui",
""
]
] | TITLE: BizGen: Advancing Article-level Visual Text Rendering for Infographics
Generation
ABSTRACT: Recently, state-of-the-art text-to-image generation models, such as Flux and
Ideogram 2.0, have made significant progress in sentence-level visual text
rendering. In this paper, we focus on the more challenging scenarios of
article-level visual text rendering and address a novel task of generating
high-quality business content, including infographics and slides, based on user
provided article-level descriptive prompts and ultra-dense layouts. The
fundamental challenges are twofold: significantly longer context lengths and
the scarcity of high-quality business content data.
In contrast to most previous works that focus on a limited number of
sub-regions and sentence-level prompts, ensuring precise adherence to
ultra-dense layouts with tens or even hundreds of sub-regions in business
content is far more challenging. We make two key technical contributions: (i)
the construction of a scalable, high-quality business content dataset, i.e.,
Infographics-650K, equipped with ultra-dense layouts and prompts by
implementing a layer-wise retrieval-augmented infographic generation scheme;
and (ii) a layout-guided cross attention scheme, which injects tens of
region-wise prompts into a set of cropped region latent space according to the
ultra-dense layouts, and refines each sub-region flexibly during inference
using a layout-conditional CFG.
We demonstrate the strong results of our system compared to previous SOTA
systems such as Flux and SD3 on our BizEval prompt set. Additionally, we
conduct thorough ablation experiments to verify the effectiveness of each
component. We hope our constructed Infographics-650K and BizEval can encourage
the broader community to advance the progress of business content generation.
|
2503.20678 | Gabriel Palma | Gabriel R. Palma, Mariusz Skocze\'n, Phil Maguire | Asset price movement prediction using empirical mode decomposition and
Gaussian mixture models | 21 pages | null | null | null | stat.ME cs.LG | http://creativecommons.org/licenses/by/4.0/ | We investigated the use of Empirical Mode Decomposition (EMD) combined with
Gaussian Mixture Models (GMM), feature engineering and machine learning
algorithms to optimize trading decisions. We used five, two, and one year
samples of hourly candle data for GameStop, Tesla, and XRP (Ripple) markets
respectively. Applying a 15 hour rolling window for each market, we collected
several features based on a linear model and other classical features to
predict the next hour's movement. Subsequently, a GMM filtering approach was
used to identify clusters among these markets. For each cluster, we applied the
EMD algorithm to extract high, medium, low and trend components from each
feature collected. A simple thresholding algorithm was applied to classify
market movements based on the percentage change in each market's close price.
We then evaluated the performance of various machine learning models, including
Random Forests (RF) and XGBoost, in classifying market movements. A naive
random selection of trading decisions was used as a benchmark, which assumed
equal probabilities for each outcome, and a temporal cross-validation approach
was used to test models on 40%, 30%, and 20% of the dataset. Our results
indicate that transforming selected features using EMD improves performance,
particularly for ensemble learning algorithms like Random Forest and XGBoost,
as measured by accumulated profit. Finally, GMM filtering expanded the range of
learning algorithm and data source combinations that outperformed the top
percentile of the random baseline.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 16:12:11 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Palma",
"Gabriel R.",
""
],
[
"Skoczeń",
"Mariusz",
""
],
[
"Maguire",
"Phil",
""
]
] | TITLE: Asset price movement prediction using empirical mode decomposition and
Gaussian mixture models
ABSTRACT: We investigated the use of Empirical Mode Decomposition (EMD) combined with
Gaussian Mixture Models (GMM), feature engineering and machine learning
algorithms to optimize trading decisions. We used five, two, and one year
samples of hourly candle data for GameStop, Tesla, and XRP (Ripple) markets
respectively. Applying a 15 hour rolling window for each market, we collected
several features based on a linear model and other classical features to
predict the next hour's movement. Subsequently, a GMM filtering approach was
used to identify clusters among these markets. For each cluster, we applied the
EMD algorithm to extract high, medium, low and trend components from each
feature collected. A simple thresholding algorithm was applied to classify
market movements based on the percentage change in each market's close price.
We then evaluated the performance of various machine learning models, including
Random Forests (RF) and XGBoost, in classifying market movements. A naive
random selection of trading decisions was used as a benchmark, which assumed
equal probabilities for each outcome, and a temporal cross-validation approach
was used to test models on 40%, 30%, and 20% of the dataset. Our results
indicate that transforming selected features using EMD improves performance,
particularly for ensemble learning algorithms like Random Forest and XGBoost,
as measured by accumulated profit. Finally, GMM filtering expanded the range of
learning algorithm and data source combinations that outperformed the top
percentile of the random baseline.
|
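The trading pipeline above combines GMM-based clustering with EMD decomposition of rolling-window features into high, medium, low, and trend components. The sketch below shows the two building blocks with off-the-shelf libraries (PyEMD from the EMD-signal package and scikit-learn); the IMF grouping rule and the order of operations are simplifications, since the paper applies GMM filtering across markets first and then EMD within each cluster.

```python
import numpy as np
from PyEMD import EMD                       # assumption: the EMD-signal package
from sklearn.mixture import GaussianMixture

def emd_components(series):
    """Group the intrinsic mode functions (IMFs) of a 1-D feature series into
    high/medium/low/trend parts; the grouping rule is an illustrative choice."""
    imfs = EMD().emd(np.asarray(series, dtype=float))
    high = imfs[0]
    medium = imfs[1:-2].sum(axis=0) if len(imfs) > 3 else np.zeros_like(high)
    low = imfs[-2] if len(imfs) > 2 else np.zeros_like(high)
    trend = imfs[-1]
    return high, medium, low, trend

def gmm_clusters(features, n_components=3, seed=0):
    """Cluster rolling-window feature vectors with a Gaussian mixture model."""
    return GaussianMixture(n_components=n_components, random_state=seed).fit_predict(features)

if __name__ == "__main__":
    t = np.linspace(0.0, 20.0, 500)
    close = np.sin(3 * t) + 0.2 * np.sin(20 * t) + 0.05 * t   # toy hourly close prices
    high, medium, low, trend = emd_components(close)
    X = np.column_stack([high, medium, low, trend])
    print(gmm_clusters(X)[:10])
```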
2503.20697 | Yankai Chen | Yankai Chen, Taotao Wang, Yixiang Fang, Yunyu Xiao | Semi-supervised Node Importance Estimation with Informative Distribution
Modeling for Uncertainty Regularization | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Node importance estimation, a classical problem in network analysis,
underpins various web applications. Previous methods either exploit intrinsic
topological characteristics, e.g., graph centrality, or leverage additional
information, e.g., data heterogeneity, for node feature enhancement. However,
these methods follow the supervised learning setting, overlooking the fact that
ground-truth node-importance data are usually partially labeled in practice. In
this work, we propose the first semi-supervised node importance estimation
framework, i.e., EASING, to improve learning quality for unlabeled data in
heterogeneous graphs. Different from previous approaches, EASING explicitly
captures uncertainty to reflect the confidence of model predictions. To jointly
estimate the importance values and uncertainties, EASING incorporates DJE, a
deep encoder-decoder neural architecture. DJE introduces distribution modeling
for graph nodes, where the distribution representations derive both importance
and uncertainty estimates. Additionally, DJE facilitates effective pseudo-label
generation for the unlabeled data to enrich the training samples. Based on
labeled and pseudo-labeled data, EASING develops effective semi-supervised
heteroscedastic learning with varying node uncertainty regularization.
Extensive experiments on three real-world datasets highlight the superior
performance of EASING compared to competing methods. Codes are available via
https://github.com/yankai-chen/EASING.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 16:27:06 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Chen",
"Yankai",
""
],
[
"Wang",
"Taotao",
""
],
[
"Fang",
"Yixiang",
""
],
[
"Xiao",
"Yunyu",
""
]
] | TITLE: Semi-supervised Node Importance Estimation with Informative Distribution
Modeling for Uncertainty Regularization
ABSTRACT: Node importance estimation, a classical problem in network analysis,
underpins various web applications. Previous methods either exploit intrinsic
topological characteristics, e.g., graph centrality, or leverage additional
information, e.g., data heterogeneity, for node feature enhancement. However,
these methods follow the supervised learning setting, overlooking the fact that
ground-truth node-importance data are usually partially labeled in practice. In
this work, we propose the first semi-supervised node importance estimation
framework, i.e., EASING, to improve learning quality for unlabeled data in
heterogeneous graphs. Different from previous approaches, EASING explicitly
captures uncertainty to reflect the confidence of model predictions. To jointly
estimate the importance values and uncertainties, EASING incorporates DJE, a
deep encoder-decoder neural architecture. DJE introduces distribution modeling
for graph nodes, where the distribution representations derive both importance
and uncertainty estimates. Additionally, DJE facilitates effective pseudo-label
generation for the unlabeled data to enrich the training samples. Based on
labeled and pseudo-labeled data, EASING develops effective semi-supervised
heteroscedastic learning with varying node uncertainty regularization.
Extensive experiments on three real-world datasets highlight the superior
performance of EASING compared to competing methods. Codes are available via
https://github.com/yankai-chen/EASING.
|
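EASING, above, jointly predicts node importance values and uncertainties and trains with heteroscedastic learning under node uncertainty regularization. One standard way to express such an objective is the Gaussian heteroscedastic loss sketched below; this is a generic formulation offered for illustration, and the paper's actual loss for DJE may differ.

```python
import torch

def heteroscedastic_loss(pred, log_var, target, weight=1.0):
    """Gaussian negative log-likelihood style loss where each node carries its
    own predicted uncertainty (log variance). Nodes the model is uncertain
    about are down-weighted, at the cost of an uncertainty penalty."""
    precision = torch.exp(-log_var)
    return (weight * (0.5 * precision * (pred - target) ** 2 + 0.5 * log_var)).mean()

if __name__ == "__main__":
    pred = torch.tensor([0.9, 0.2, 0.7])
    log_var = torch.tensor([-2.0, 0.5, -1.0])   # per-node predicted uncertainty
    target = torch.tensor([1.0, 0.1, 0.8])
    print(heteroscedastic_loss(pred, log_var, target).item())
```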
2503.20715 | Nikita Neveditsin | Nikita Neveditsin, Pawan Lingras, Vijay Mago | From Annotation to Adaptation: Metrics, Synthetic Data, and Aspect
Extraction for Aspect-Based Sentiment Analysis with Large Language Models | Accepted to NAACL SRW 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This study examines the performance of Large Language Models (LLMs) in
Aspect-Based Sentiment Analysis (ABSA), with a focus on implicit aspect
extraction in a novel domain. Using a synthetic sports feedback dataset, we
evaluate open-weight LLMs' ability to extract aspect-polarity pairs and propose
a metric to facilitate the evaluation of aspect extraction with generative
models. Our findings highlight both the potential and limitations of LLMs in
the ABSA task.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 16:52:40 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Neveditsin",
"Nikita",
""
],
[
"Lingras",
"Pawan",
""
],
[
"Mago",
"Vijay",
""
]
] | TITLE: From Annotation to Adaptation: Metrics, Synthetic Data, and Aspect
Extraction for Aspect-Based Sentiment Analysis with Large Language Models
ABSTRACT: This study examines the performance of Large Language Models (LLMs) in
Aspect-Based Sentiment Analysis (ABSA), with a focus on implicit aspect
extraction in a novel domain. Using a synthetic sports feedback dataset, we
evaluate open-weight LLMs' ability to extract aspect-polarity pairs and propose
a metric to facilitate the evaluation of aspect extraction with generative
models. Our findings highlight both the potential and limitations of LLMs in
the ABSA task.
|
2503.20722 | Antonio Candito | A. Candito (1), A. Dragan (1,2), R. Holbrey (1), A. Ribeiro (2), R.
Donners (3), C. Messiou (1,2), N. Tunariu (1,2), D.-M. Koh (1,2), and M. D.
Blackledge (1), (1) The Institute of Cancer Research, London, United Kingdom
(2) The Royal Marsden NHS Foundation Trust, London, United Kingdom (3)
University Hospital Basel, Basel, Switzerland | A weakly-supervised deep learning model for fast localisation and
delineation of the skeleton, internal organs, and spinal canal on Whole-Body
Diffusion-Weighted MRI (WB-DWI) | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Background: Apparent Diffusion Coefficient (ADC) values and Total Diffusion
Volume (TDV) from Whole-body diffusion-weighted MRI (WB-DWI) are recognized
cancer imaging biomarkers. However, manual disease delineation for ADC and TDV
measurements is unfeasible in clinical practice, demanding automation. As a
first step, we propose an algorithm to generate fast and reproducible
probability maps of the skeleton, adjacent internal organs (liver, spleen,
urinary bladder, and kidneys), and spinal canal. Methods: We developed an
automated deep-learning pipeline based on a 3D patch-based Residual U-Net
architecture that localizes and delineates these anatomical structures on
WB-DWI. The algorithm was trained using "soft-labels" (non-binary
segmentations) derived from a computationally intensive atlas-based approach.
For training and validation, we employed a multi-center WB-DWI dataset
comprising 532 scans from patients with Advanced Prostate Cancer (APC) or
Multiple Myeloma (MM), with testing on 45 patients. Results: Our
weakly-supervised deep learning model achieved an average dice
score/precision/recall of 0.66/0.6/0.73 for skeletal delineations,
0.8/0.79/0.81 for internal organs, and 0.85/0.79/0.94 for spinal canal, with
surface distances consistently below 3 mm. Relative median ADC and
log-transformed volume differences between automated and manual expert-defined
full-body delineations were below 10% and 4%, respectively. The computational
time for generating probability maps was 12x faster than the atlas-based
registration algorithm (25 s vs. 5 min). An experienced radiologist rated the
model's accuracy "good" or "excellent" on test datasets. Conclusion: Our model
offers fast and reproducible probability maps for localizing and delineating
body regions on WB-DWI, enabling ADC and TDV quantification, potentially
supporting clinicians in disease staging and treatment response assessment.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:03:46 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Candito",
"A.",
""
],
[
"Dragan",
"A.",
""
],
[
"Holbrey",
"R.",
""
],
[
"Ribeiro",
"A.",
""
],
[
"Donners",
"R.",
""
],
[
"Messiou",
"C.",
""
],
[
"Tunariu",
"N.",
""
],
[
"Koh",
"D. -M.",
""
],
[
"Blackledge",
"M. D.",
""
],
[
"Research",
"The Institute of Cancer",
""
],
[
"London",
"",
""
],
[
"Kingdom",
"United",
""
],
[
"Trust",
"The Royal Marsden NHS Foundation",
""
],
[
"London",
"",
""
],
[
"Kingdom",
"United",
""
],
[
"Basel",
"University Hospital",
""
],
[
"Basel",
"",
""
],
[
"Switzerland",
"",
""
]
] | TITLE: A weakly-supervised deep learning model for fast localisation and
delineation of the skeleton, internal organs, and spinal canal on Whole-Body
Diffusion-Weighted MRI (WB-DWI)
ABSTRACT: Background: Apparent Diffusion Coefficient (ADC) values and Total Diffusion
Volume (TDV) from Whole-body diffusion-weighted MRI (WB-DWI) are recognized
cancer imaging biomarkers. However, manual disease delineation for ADC and TDV
measurements is unfeasible in clinical practice, demanding automation. As a
first step, we propose an algorithm to generate fast and reproducible
probability maps of the skeleton, adjacent internal organs (liver, spleen,
urinary bladder, and kidneys), and spinal canal. Methods: We developed an
automated deep-learning pipeline based on a 3D patch-based Residual U-Net
architecture that localizes and delineates these anatomical structures on
WB-DWI. The algorithm was trained using "soft-labels" (non-binary
segmentations) derived from a computationally intensive atlas-based approach.
For training and validation, we employed a multi-center WB-DWI dataset
comprising 532 scans from patients with Advanced Prostate Cancer (APC) or
Multiple Myeloma (MM), with testing on 45 patients. Results: Our
weakly-supervised deep learning model achieved an average dice
score/precision/recall of 0.66/0.6/0.73 for skeletal delineations,
0.8/0.79/0.81 for internal organs, and 0.85/0.79/0.94 for spinal canal, with
surface distances consistently below 3 mm. Relative median ADC and
log-transformed volume differences between automated and manual expert-defined
full-body delineations were below 10% and 4%, respectively. The computational
time for generating probability maps was 12x faster than the atlas-based
registration algorithm (25 s vs. 5 min). An experienced radiologist rated the
model's accuracy "good" or "excellent" on test datasets. Conclusion: Our model
offers fast and reproducible probability maps for localizing and delineating
body regions on WB-DWI, enabling ADC and TDV quantification, potentially
supporting clinicians in disease staging and treatment response assessment.
|
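The WB-DWI pipeline above is evaluated with Dice, precision, and recall against expert delineations, after training on "soft labels". As a small illustration of the evaluation side only (not the authors' 3D Residual U-Net or atlas-based labeling), the snippet below thresholds a probability map and computes a Dice coefficient.

```python
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-8):
    """Dice coefficient between two binary masks (any shape)."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

def binarize_soft_label(prob_map, threshold=0.5):
    """Turn a soft-label probability map into a binary mask for evaluation."""
    return np.asarray(prob_map) >= threshold

if __name__ == "__main__":
    prob = np.random.default_rng(0).random((32, 32, 32))   # toy probability map
    gt = prob > 0.6
    print(round(float(dice_score(binarize_soft_label(prob), gt)), 3))
```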
2503.20730 | Samuel Oliver Cooper | Juan Javier Diaz-Mejia, Elias Williams, Octavian Focsa, Dylan
Mendonca, Swechha Singh, Brendan Innes and Sam Cooper | Benchmarking and optimizing organism wide single-cell RNA alignment
methods | Accepted to ICLR 2025 LMRL workshop (International Conference on
Learning Representations, Learning Meaningful Representations of Life
Workshop) | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Many methods have been proposed for removing batch effects and aligning
single-cell RNA (scRNA) datasets. However, performance is typically evaluated
based on multiple parameters and few datasets, creating challenges in assessing
which method is best for aligning data at scale. Here, we introduce the
K-Neighbors Intersection (KNI) score, a single score that both penalizes batch
effects and measures accuracy at cross-dataset cell-type label prediction
alongside carefully curated small (scMARK) and large (scREF) benchmarks
comprising 11 and 46 human scRNA studies respectively, where we have
standardized author labels. Using the KNI score, we evaluate and optimize
approaches for cross-dataset single-cell RNA integration. We introduce Batch
Adversarial single-cell Variational Inference (BA-scVI) as a new variant of
scVI that uses adversarial training to penalize batch-effects in the encoder
and decoder, and show this approach outperforms other methods. In the resulting
aligned space, we find that the granularity of cell-type groupings is
conserved, supporting the notion that whole-organism cell-type maps can be
created by a single model without loss of information.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:11:47 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Diaz-Mejia",
"Juan Javier",
""
],
[
"Williams",
"Elias",
""
],
[
"Focsa",
"Octavian",
""
],
[
"Mendonca",
"Dylan",
""
],
[
"Singh",
"Swechha",
""
],
[
"Innes",
"Brendan",
""
],
[
"Cooper",
"Sam",
""
]
] | TITLE: Benchmarking and optimizing organism wide single-cell RNA alignment
methods
ABSTRACT: Many methods have been proposed for removing batch effects and aligning
single-cell RNA (scRNA) datasets. However, performance is typically evaluated
based on multiple parameters and few datasets, creating challenges in assessing
which method is best for aligning data at scale. Here, we introduce the
K-Neighbors Intersection (KNI) score, a single score that both penalizes batch
effects and measures accuracy at cross-dataset cell-type label prediction
alongside carefully curated small (scMARK) and large (scREF) benchmarks
comprising 11 and 46 human scRNA studies respectively, where we have
standardized author labels. Using the KNI score, we evaluate and optimize
approaches for cross-dataset single-cell RNA integration. We introduce Batch
Adversarial single-cell Variational Inference (BA-scVI) as a new variant of
scVI that uses adversarial training to penalize batch-effects in the encoder
and decoder, and show this approach outperforms other methods. In the resulting
aligned space, we find that the granularity of cell-type groupings is
conserved, supporting the notion that whole-organism cell-type maps can be
created by a single model without loss of information.
|
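The KNI (K-Neighbors Intersection) score above is described as a single number that rewards accurate cross-dataset cell-type label prediction while penalizing batch effects. The sketch below implements one plausible reading of that description with scikit-learn's nearest-neighbor search; the exact definition used in the paper may differ, so treat the scoring rule here as an assumption.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kni_score(embeddings, labels, batches, k=50):
    """For every cell, look at its k nearest neighbors in the aligned space,
    keep only neighbors coming from *other* studies (penalizing batch effects),
    and count the cell as correct if the majority label of those cross-study
    neighbors matches its own label."""
    X = np.asarray(embeddings)
    labels = np.asarray(labels)
    batches = np.asarray(batches)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    correct = 0
    for i, neigh in enumerate(idx[:, 1:]):               # drop the cell itself
        cross = neigh[batches[neigh] != batches[i]]       # neighbors from other studies only
        if cross.size == 0:
            continue                                      # fully batch-isolated cells score 0
        values, counts = np.unique(labels[cross], return_counts=True)
        if values[np.argmax(counts)] == labels[i]:
            correct += 1
    return correct / len(X)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Z = rng.normal(size=(300, 8))
    y = rng.integers(0, 3, 300).astype(str)
    b = rng.integers(0, 4, 300)
    print(kni_score(Z, y, b, k=15))
```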
2503.20734 | Ziyu Zhou | Ziyu Zhou and Keyan Hu and Yutian Fang and Xiaoping Rui | SChanger: Change Detection from a Semantic Change and Spatial
Consistency Perspective | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Change detection is a key task in Earth observation applications. Recently,
deep learning methods have demonstrated strong performance and widespread
application. However, change detection faces data scarcity due to the
labor-intensive process of accurately aligning remote sensing images of the
same area, which limits the performance of deep learning algorithms. To address
the data scarcity issue, we develop a fine-tuning strategy called the Semantic
Change Network (SCN). We initially pre-train the model on single-temporal
supervised tasks to acquire prior knowledge of instance feature extraction. The
model then employs a shared-weight Siamese architecture and extended Temporal
Fusion Module (TFM) to preserve this prior knowledge and is fine-tuned on
change detection tasks. The semantics learned for identifying all instances are
then refocused on identifying only the changes. Meanwhile, we observe that
the locations of changes between the two images are spatially identical, a
concept we refer to as spatial consistency. We introduce this inductive bias
through an attention map that is generated by large-kernel convolutions and
applied to the features from both time points. This enhances the modeling of
multi-scale changes and helps capture underlying relationships in change
detection semantics. We develop a binary change detection model utilizing these
two strategies. The model is validated against state-of-the-art methods on six
datasets, surpassing all benchmark methods and achieving F1 scores of 92.87%,
86.43%, 68.95%, 97.62%, 84.58%, and 93.20% on the LEVIR-CD, LEVIR-CD+,
S2Looking, CDD, SYSU-CD, and WHU-CD datasets, respectively.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:15:43 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhou",
"Ziyu",
""
],
[
"Hu",
"Keyan",
""
],
[
"Fang",
"Yutian",
""
],
[
"Rui",
"Xiaoping",
""
]
] | TITLE: SChanger: Change Detection from a Semantic Change and Spatial
Consistency Perspective
ABSTRACT: Change detection is a key task in Earth observation applications. Recently,
deep learning methods have demonstrated strong performance and widespread
application. However, change detection faces data scarcity due to the
labor-intensive process of accurately aligning remote sensing images of the
same area, which limits the performance of deep learning algorithms. To address
the data scarcity issue, we develop a fine-tuning strategy called the Semantic
Change Network (SCN). We initially pre-train the model on single-temporal
supervised tasks to acquire prior knowledge of instance feature extraction. The
model then employs a shared-weight Siamese architecture and extended Temporal
Fusion Module (TFM) to preserve this prior knowledge and is fine-tuned on
change detection tasks. The semantics learned for identifying all instances are
then refocused on identifying only the changes. Meanwhile, we observe that
the locations of changes between the two images are spatially identical, a
concept we refer to as spatial consistency. We introduce this inductive bias
through an attention map that is generated by large-kernel convolutions and
applied to the features from both time points. This enhances the modeling of
multi-scale changes and helps capture underlying relationships in change
detection semantics. We develop a binary change detection model utilizing these
two strategies. The model is validated against state-of-the-art methods on six
datasets, surpassing all benchmark methods and achieving F1 scores of 92.87%,
86.43%, 68.95%, 97.62%, 84.58%, and 93.20% on the LEVIR-CD, LEVIR-CD+,
S2Looking, CDD, SYSU-CD, and WHU-CD datasets, respectively.
|
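SChanger's spatial-consistency inductive bias, described above, is injected through an attention map generated by large-kernel convolutions and applied to the features of both time points. The PyTorch module below is a minimal sketch of that idea; the kernel size, the concatenation of bi-temporal features, and the single-channel sigmoid map are assumptions about details the abstract does not specify.

```python
import torch
import torch.nn as nn

class SpatialConsistencyAttention(nn.Module):
    """A large-kernel convolution over the fused bi-temporal features produces a
    single attention map that is applied to the features of *both* time points,
    encoding the prior that changes occupy the same spatial locations."""

    def __init__(self, channels, kernel_size=11):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size, padding=kernel_size // 2),
            nn.Sigmoid(),
        )

    def forward(self, feat_t1, feat_t2):
        attn_map = self.attn(torch.cat([feat_t1, feat_t2], dim=1))  # (B, 1, H, W)
        return feat_t1 * attn_map, feat_t2 * attn_map

if __name__ == "__main__":
    m = SpatialConsistencyAttention(channels=32)
    a, b = m(torch.randn(2, 32, 64, 64), torch.randn(2, 32, 64, 64))
    print(a.shape, b.shape)
```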
2503.20745 | Yanpeng Sun | Yanpeng Sun, Shan Zhang, Wei Tang, Aotian Chen, Piotr Koniusz, Kai
Zou, Yuan Xue, Anton van den Hengel | MATHGLANCE: Multimodal Large Language Models Do Not Know Where to Look
in Mathematical Diagrams | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diagrams serve as a fundamental form of visual language, representing complex
concepts and their inter-relationships through structured symbols, shapes, and
spatial arrangements. Unlike natural images, their inherently symbolic and
abstract nature poses significant challenges for Multimodal Large Language
Models (MLLMs). However, current benchmarks conflate perceptual and reasoning
tasks, making it difficult to assess whether MLLMs genuinely understand
mathematical diagrams beyond superficial pattern recognition. To address this
gap, we introduce MATHGLANCE, a benchmark specifically designed to isolate and
evaluate mathematical perception in MLLMs. MATHGLANCE comprises 1.2K images and
1.6K carefully curated questions spanning four perception tasks: shape
classification, object counting, relationship identification, and object
grounding, covering diverse domains including plane geometry, solid geometry,
and graphical representations. Our evaluation of MLLMs reveals that their
ability to understand diagrams is notably limited, particularly in fine-grained
grounding tasks. In response, we construct GeoPeP, a perception-oriented
dataset of 200K structured geometry image-text pairs explicitly annotated with
geometric primitives and precise spatial relationships. Training MLLM on GeoPeP
leads to significant gains in perceptual accuracy, which in turn substantially
improves mathematical reasoning. Our benchmark and dataset establish critical
standards for evaluating and advancing multimodal mathematical understanding,
providing valuable resources and insights to foster future MLLM research.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:30:41 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Sun",
"Yanpeng",
""
],
[
"Zhang",
"Shan",
""
],
[
"Tang",
"Wei",
""
],
[
"Chen",
"Aotian",
""
],
[
"Koniusz",
"Piotr",
""
],
[
"Zou",
"Kai",
""
],
[
"Xue",
"Yuan",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: MATHGLANCE: Multimodal Large Language Models Do Not Know Where to Look
in Mathematical Diagrams
ABSTRACT: Diagrams serve as a fundamental form of visual language, representing complex
concepts and their inter-relationships through structured symbols, shapes, and
spatial arrangements. Unlike natural images, their inherently symbolic and
abstract nature poses significant challenges for Multimodal Large Language
Models (MLLMs). However, current benchmarks conflate perceptual and reasoning
tasks, making it difficult to assess whether MLLMs genuinely understand
mathematical diagrams beyond superficial pattern recognition. To address this
gap, we introduce MATHGLANCE, a benchmark specifically designed to isolate and
evaluate mathematical perception in MLLMs. MATHGLANCE comprises 1.2K images and
1.6K carefully curated questions spanning four perception tasks: shape
classification, object counting, relationship identification, and object
grounding, covering diverse domains including plane geometry, solid geometry,
and graphical representations. Our evaluation of MLLMs reveals that their
ability to understand diagrams is notably limited, particularly in fine-grained
grounding tasks. In response, we construct GeoPeP, a perception-oriented
dataset of 200K structured geometry image-text pairs explicitly annotated with
geometric primitives and precise spatial relationships. Training MLLM on GeoPeP
leads to significant gains in perceptual accuracy, which in turn substantially
improves mathematical reasoning. Our benchmark and dataset establish critical
standards for evaluating and advancing multimodal mathematical understanding,
providing valuable resources and insights to foster future MLLM research.
|
2503.20748 | Chen Tang | Chen Tang, Xinzhu Ma, Encheng Su, Xiufeng Song, Xiaohong Liu, Wei-Hong
Li, Lei Bai, Wanli Ouyang, Xiangyu Yue | UniSTD: Towards Unified Spatio-Temporal Learning across Diverse
Disciplines | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Traditional spatiotemporal models generally rely on task-specific
architectures, which limit their generalizability and scalability across
diverse tasks due to domain-specific design requirements. In this paper, we
introduce \textbf{UniSTD}, a unified Transformer-based framework for
spatiotemporal modeling, which is inspired by advances in recent foundation
models with the two-stage pretraining-then-adaption paradigm. Specifically, our
work demonstrates that task-agnostic pretraining on 2D vision and vision-text
datasets can build a generalizable model foundation for spatiotemporal
learning, followed by specialized joint training on spatiotemporal datasets to
enhance task-specific adaptability. To improve the learning capabilities across
domains, our framework employs a rank-adaptive mixture-of-expert adaptation by
using fractional interpolation to relax the discrete variables so that they can be
optimized in the continuous space. Additionally, we introduce a temporal module
to incorporate temporal dynamics explicitly. We evaluate our approach on a
large-scale dataset covering 10 tasks across 4 disciplines, demonstrating that
a unified spatiotemporal model can achieve scalable, cross-task learning and
support up to 10 tasks simultaneously within one model while reducing training
costs in multi-domain applications. Code will be available at
https://github.com/1hunters/UniSTD.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:33:23 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Tang",
"Chen",
""
],
[
"Ma",
"Xinzhu",
""
],
[
"Su",
"Encheng",
""
],
[
"Song",
"Xiufeng",
""
],
[
"Liu",
"Xiaohong",
""
],
[
"Li",
"Wei-Hong",
""
],
[
"Bai",
"Lei",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Yue",
"Xiangyu",
""
]
] | TITLE: UniSTD: Towards Unified Spatio-Temporal Learning across Diverse
Disciplines
ABSTRACT: Traditional spatiotemporal models generally rely on task-specific
architectures, which limit their generalizability and scalability across
diverse tasks due to domain-specific design requirements. In this paper, we
introduce \textbf{UniSTD}, a unified Transformer-based framework for
spatiotemporal modeling, which is inspired by advances in recent foundation
models with the two-stage pretraining-then-adaption paradigm. Specifically, our
work demonstrates that task-agnostic pretraining on 2D vision and vision-text
datasets can build a generalizable model foundation for spatiotemporal
learning, followed by specialized joint training on spatiotemporal datasets to
enhance task-specific adaptability. To improve the learning capabilities across
domains, our framework employs a rank-adaptive mixture-of-expert adaptation by
using fractional interpolation to relax the discrete variables so that they can be
optimized in the continuous space. Additionally, we introduce a temporal module
to incorporate temporal dynamics explicitly. We evaluate our approach on a
large-scale dataset covering 10 tasks across 4 disciplines, demonstrating that
a unified spatiotemporal model can achieve scalable, cross-task learning and
support up to 10 tasks simultaneously within one model while reducing training
costs in multi-domain applications. Code will be available at
https://github.com/1hunters/UniSTD.
|
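The UniSTD abstract above mentions relaxing discrete variables via fractional interpolation so they can be optimized in continuous space. The paper's rank-adaptive mixture-of-experts is not specified in the abstract, so the sketch below is only a loose illustration of the relaxation idea: a learnable fraction interpolates between two low-rank adapters, making the effective capacity differentiable. All names and the two-rank setup are assumptions.

```python
import torch
import torch.nn as nn

class FractionalRankAdapter(nn.Module):
    """Illustrative continuous relaxation of a discrete rank choice: a learnable
    scalar alpha in (0, 1) mixes a low-rank and a higher-rank update, so the
    choice can be trained with ordinary gradients instead of a discrete search."""
    def __init__(self, dim: int, r_lo: int = 4, r_hi: int = 16):
        super().__init__()
        self.lo = nn.Sequential(nn.Linear(dim, r_lo, bias=False),
                                nn.Linear(r_lo, dim, bias=False))
        self.hi = nn.Sequential(nn.Linear(dim, r_hi, bias=False),
                                nn.Linear(r_hi, dim, bias=False))
        self.logit = nn.Parameter(torch.zeros(1))   # alpha = sigmoid(logit)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = torch.sigmoid(self.logit)
        return x + alpha * self.hi(x) + (1 - alpha) * self.lo(x)

x = torch.randn(2, 32)
print(FractionalRankAdapter(32)(x).shape)   # torch.Size([2, 32])
```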
2503.20756 | Ningyu Zhang | Chenxi Wang, Jizhan Fang, Xiang Chen, Bozhong Tian, Ziwen Xu, Huajun
Chen, Ningyu Zhang | ADS-Edit: A Multimodal Knowledge Editing Dataset for Autonomous Driving
Systems | Work in progress | null | null | null | cs.CL cs.AI cs.CV cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in Large Multimodal Models (LMMs) have shown promise in
Autonomous Driving Systems (ADS). However, their direct application to ADS is
hindered by challenges such as misunderstanding of traffic knowledge, complex
road conditions, and diverse vehicle states. To address these challenges, we
propose the use of Knowledge Editing, which enables targeted modifications to a
model's behavior without the need for full retraining. Meanwhile, we introduce
ADS-Edit, a multimodal knowledge editing dataset specifically designed for ADS,
which includes various real-world scenarios, multiple data types, and
comprehensive evaluation metrics. We conduct comprehensive experiments and
derive several interesting conclusions. We hope that our work will contribute
to the further advancement of knowledge editing applications in the field of
autonomous driving. Code and data are available in
https://github.com/zjunlp/EasyEdit.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:45:29 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wang",
"Chenxi",
""
],
[
"Fang",
"Jizhan",
""
],
[
"Chen",
"Xiang",
""
],
[
"Tian",
"Bozhong",
""
],
[
"Xu",
"Ziwen",
""
],
[
"Chen",
"Huajun",
""
],
[
"Zhang",
"Ningyu",
""
]
] | TITLE: ADS-Edit: A Multimodal Knowledge Editing Dataset for Autonomous Driving
Systems
ABSTRACT: Recent advancements in Large Multimodal Models (LMMs) have shown promise in
Autonomous Driving Systems (ADS). However, their direct application to ADS is
hindered by challenges such as misunderstanding of traffic knowledge, complex
road conditions, and diverse vehicle states. To address these challenges, we
propose the use of Knowledge Editing, which enables targeted modifications to a
model's behavior without the need for full retraining. Meanwhile, we introduce
ADS-Edit, a multimodal knowledge editing dataset specifically designed for ADS,
which includes various real-world scenarios, multiple data types, and
comprehensive evaluation metrics. We conduct comprehensive experiments and
derive several interesting conclusions. We hope that our work will contribute
to the further advancement of knowledge editing applications in the field of
autonomous driving. Code and data are available in
https://github.com/zjunlp/EasyEdit.
|
2503.20757 | Yunhai Hu Mr. | Yunhai Hu, Yilun Zhao, Chen Zhao, Arman Cohan | MCTS-RAG: Enhancing Retrieval-Augmented Generation with Monte Carlo Tree
Search | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce MCTS-RAG, a novel approach that enhances the reasoning
capabilities of small language models on knowledge-intensive tasks by
leveraging retrieval-augmented generation (RAG) to provide relevant context and
Monte Carlo Tree Search (MCTS) to refine reasoning paths. MCTS-RAG dynamically
integrates retrieval and reasoning through an iterative decision-making
process. Unlike standard RAG methods, which typically retrieve information
independently from reasoning and thus integrate knowledge suboptimally, or
conventional MCTS reasoning, which depends solely on internal model knowledge
without external facts, MCTS-RAG combines structured reasoning with adaptive
retrieval. This integrated approach enhances decision-making, reduces
hallucinations, and ensures improved factual accuracy and response consistency.
The experimental results on multiple reasoning and knowledge-intensive datasets
(i.e., ComplexWebQA, GPQA, and FoolMeTwice) show that our method
enables small-scale LMs to achieve performance comparable to frontier LLMs like
GPT-4o by effectively scaling inference-time compute, setting a new standard
for reasoning in small-scale models.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:46:08 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Hu",
"Yunhai",
""
],
[
"Zhao",
"Yilun",
""
],
[
"Zhao",
"Chen",
""
],
[
"Cohan",
"Arman",
""
]
] | TITLE: MCTS-RAG: Enhancing Retrieval-Augmented Generation with Monte Carlo Tree
Search
ABSTRACT: We introduce MCTS-RAG, a novel approach that enhances the reasoning
capabilities of small language models on knowledge-intensive tasks by
leveraging retrieval-augmented generation (RAG) to provide relevant context and
Monte Carlo Tree Search (MCTS) to refine reasoning paths. MCTS-RAG dynamically
integrates retrieval and reasoning through an iterative decision-making
process. Unlike standard RAG methods, which typically retrieve information
independently from reasoning and thus integrate knowledge suboptimally, or
conventional MCTS reasoning, which depends solely on internal model knowledge
without external facts, MCTS-RAG combines structured reasoning with adaptive
retrieval. This integrated approach enhances decision-making, reduces
hallucinations, and ensures improved factual accuracy and response consistency.
The experimental results on multiple reasoning and knowledge-intensive datasets
(i.e., ComplexWebQA, GPQA, and FoolMeTwice) show that our method
enables small-scale LMs to achieve performance comparable to frontier LLMs like
GPT-4o by effectively scaling inference-time compute, setting a new standard
for reasoning in small-scale models.
|
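The MCTS-RAG abstract above describes interleaving Monte Carlo Tree Search with retrieval so that individual reasoning steps can pull in external context. The toy sketch below shows only that control flow (UCT selection, expansion with an optional retrieval action, reward backpropagation); the word-overlap retriever, action set, and reward function are placeholder assumptions, not the paper's algorithm.

```python
import math

def retrieve(query, corpus, k=2):
    """Placeholder retriever: rank passages by word overlap with the query."""
    return sorted(corpus, key=lambda p: -len(set(query.split()) & set(p.split())))[:k]

def uct_score(child, parent_visits, c=1.4):
    if child["visits"] == 0:
        return float("inf")
    return child["value"] / child["visits"] + c * math.sqrt(
        math.log(parent_visits) / child["visits"])

def mcts_rag(question, corpus, actions, reward_fn, iterations=50):
    root = {"state": question, "visits": 0, "value": 0.0, "children": []}
    for _ in range(iterations):
        # Selection: descend by UCT while the node is fully expanded.
        node, path = root, [root]
        while node["children"] and len(node["children"]) == len(actions):
            node = max(node["children"], key=lambda ch: uct_score(ch, node["visits"]))
            path.append(node)
        # Expansion: take one reasoning step, optionally grounded by retrieval.
        action = actions[len(node["children"])]
        context = retrieve(node["state"], corpus) if action == "retrieve" else []
        child = {"state": node["state"] + " | " + action + ": " + " ".join(context),
                 "visits": 0, "value": 0.0, "children": []}
        node["children"].append(child)
        path.append(child)
        # Simulation (here: immediate reward) and backpropagation.
        reward = reward_fn(child["state"])
        for n in path:
            n["visits"] += 1
            n["value"] += reward
    return max(root["children"], key=lambda ch: ch["visits"])["state"]

corpus = ["paris is the capital of france", "the moon orbits the earth"]
best = mcts_rag("what is the capital of france", corpus,
                actions=["reason", "retrieve"],
                reward_fn=lambda s: 1.0 if "paris" in s else 0.0)
print(best)
```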
2503.20758 | Shakiba Rahimiaghdam | Shakiba Rahimiaghdam, Hande Alemdar | MindfulLIME: A Stable Solution for Explanations of Machine Learning
Models with Enhanced Localization Precision -- A Medical Image Case Study | null | null | null | null | cs.LG cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Ensuring transparency in machine learning decisions is critically important,
especially in sensitive sectors such as healthcare, finance, and justice.
Despite this, some popular explainable algorithms, such as Local Interpretable
Model-agnostic Explanations (LIME), often produce unstable explanations due to
the random generation of perturbed samples. Random perturbation introduces
small changes or noise to modified instances of the original data, leading to
inconsistent explanations. Even slight variations in the generated samples
significantly affect the explanations provided by such models, undermining
trust and hindering the adoption of interpretable models. To address this
challenge, we propose MindfulLIME, a novel algorithm that intelligently
generates purposive samples using a graph-based pruning algorithm and
uncertainty sampling. MindfulLIME substantially improves the consistency of
visual explanations compared to random sampling approaches. Our experimental
evaluation, conducted on a widely recognized chest X-ray dataset, confirms
MindfulLIME's stability with a 100% success rate in delivering reliable
explanations under identical conditions. Additionally, MindfulLIME improves the
localization precision of visual explanations by reducing the distance between
the generated explanations and the actual local annotations compared to LIME.
We also performed comprehensive experiments considering various segmentation
algorithms and sample numbers, focusing on stability, quality, and efficiency.
The results demonstrate the outstanding performance of MindfulLIME across
different segmentation settings, generating fewer high-quality samples within a
reasonable processing time. By addressing the stability limitations of LIME in
image data, MindfulLIME enhances the trustworthiness and interpretability of
machine learning models in specific medical imaging applications, a critical
domain.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 14:48:14 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Rahimiaghdam",
"Shakiba",
""
],
[
"Alemdar",
"Hande",
""
]
] | TITLE: MindfulLIME: A Stable Solution for Explanations of Machine Learning
Models with Enhanced Localization Precision -- A Medical Image Case Study
ABSTRACT: Ensuring transparency in machine learning decisions is critically important,
especially in sensitive sectors such as healthcare, finance, and justice.
Despite this, some popular explainable algorithms, such as Local Interpretable
Model-agnostic Explanations (LIME), often produce unstable explanations due to
the random generation of perturbed samples. Random perturbation introduces
small changes or noise to modified instances of the original data, leading to
inconsistent explanations. Even slight variations in the generated samples
significantly affect the explanations provided by such models, undermining
trust and hindering the adoption of interpretable models. To address this
challenge, we propose MindfulLIME, a novel algorithm that intelligently
generates purposive samples using a graph-based pruning algorithm and
uncertainty sampling. MindfulLIME substantially improves the consistency of
visual explanations compared to random sampling approaches. Our experimental
evaluation, conducted on a widely recognized chest X-ray dataset, confirms
MindfulLIME's stability with a 100% success rate in delivering reliable
explanations under identical conditions. Additionally, MindfulLIME improves the
localization precision of visual explanations by reducing the distance between
the generated explanations and the actual local annotations compared to LIME.
We also performed comprehensive experiments considering various segmentation
algorithms and sample numbers, focusing on stability, quality, and efficiency.
The results demonstrate the outstanding performance of MindfulLIME across
different segmentation settings, generating fewer high-quality samples within a
reasonable processing time. By addressing the stability limitations of LIME in
image data, MindfulLIME enhances the trustworthiness and interpretability of
machine learning models in specific medical imaging applications, a critical
domain.
|
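The MindfulLIME abstract above attributes LIME's instability to randomly generated perturbed samples. The sketch below does not implement MindfulLIME's graph-based pruning or uncertainty sampling; it only reproduces the problem being addressed, by fitting a LIME-style local ridge surrogate around one instance under different random seeds and reporting how much the feature attributions move. The black-box function and proximity kernel are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def black_box(x):
    """Stand-in model: a fixed nonlinear function of five features."""
    return 1.0 / (1.0 + np.exp(-(2 * x[:, 0] - 1.5 * x[:, 2] + 0.5 * x[:, 4])))

def lime_style_explanation(instance, n_samples=200, seed=0):
    """Vanilla LIME-style local surrogate built from *random* perturbations."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, instance.size))  # random on/off masks
    perturbed = masks * instance                                  # zero out masked features
    weights = np.exp(-np.sum(1 - masks, axis=1) / instance.size)  # proximity kernel
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, black_box(perturbed), sample_weight=weights)
    return surrogate.coef_                                        # feature attributions

x = np.array([[0.8, 0.1, 0.6, 0.3, 0.9]])
coefs = np.stack([lime_style_explanation(x, seed=s) for s in range(20)])
print("per-feature std across seeds:", coefs.std(axis=0))  # nonzero spread = instability
```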
2503.20781 | Yulu Pan | Yulu Pan, Ce Zhang, Gedas Bertasius | BASKET: A Large-Scale Video Dataset for Fine-Grained Skill Estimation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present BASKET, a large-scale basketball video dataset for fine-grained
skill estimation. BASKET contains 4,477 hours of video capturing 32,232
basketball players from all over the world. Compared to prior skill estimation
datasets, our dataset includes a massive number of skilled participants with
unprecedented diversity in terms of gender, age, skill level, geographical
location, etc. BASKET includes 20 fine-grained basketball skills, challenging
modern video recognition models to capture the intricate nuances of player
skill through in-depth video analysis. Given a long highlight video (8-10
minutes) of a particular player, the model needs to predict the skill level
(e.g., excellent, good, average, fair, poor) for each of the 20 basketball
skills. Our empirical analysis reveals that the current state-of-the-art video
models struggle with this task, significantly lagging behind the human
baseline. We believe that BASKET could be a useful resource for developing new
video models with advanced long-range, fine-grained recognition capabilities.
In addition, we hope that our dataset will be useful for domain-specific
applications such as fair basketball scouting, personalized player development,
and many others. Dataset and code are available at
https://github.com/yulupan00/BASKET.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:59:02 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Pan",
"Yulu",
""
],
[
"Zhang",
"Ce",
""
],
[
"Bertasius",
"Gedas",
""
]
] | TITLE: BASKET: A Large-Scale Video Dataset for Fine-Grained Skill Estimation
ABSTRACT: We present BASKET, a large-scale basketball video dataset for fine-grained
skill estimation. BASKET contains 4,477 hours of video capturing 32,232
basketball players from all over the world. Compared to prior skill estimation
datasets, our dataset includes a massive number of skilled participants with
unprecedented diversity in terms of gender, age, skill level, geographical
location, etc. BASKET includes 20 fine-grained basketball skills, challenging
modern video recognition models to capture the intricate nuances of player
skill through in-depth video analysis. Given a long highlight video (8-10
minutes) of a particular player, the model needs to predict the skill level
(e.g., excellent, good, average, fair, poor) for each of the 20 basketball
skills. Our empirical analysis reveals that the current state-of-the-art video
models struggle with this task, significantly lagging behind the human
baseline. We believe that BASKET could be a useful resource for developing new
video models with advanced long-range, fine-grained recognition capabilities.
In addition, we hope that our dataset will be useful for domain-specific
applications such as fair basketball scouting, personalized player development,
and many others. Dataset and code are available at
https://github.com/yulupan00/BASKET.
|
2503.20782 | Yan-Bo Lin | Yan-Bo Lin, Kevin Lin, Zhengyuan Yang, Linjie Li, Jianfeng Wang,
Chung-Ching Lin, Xiaofei Wang, Gedas Bertasius, Lijuan Wang | Zero-Shot Audio-Visual Editing via Cross-Modal Delta Denoising | Project page: https://genjib.github.io/project_page/AVED/index.html | null | null | null | cs.CV cs.LG cs.MM cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we introduce zero-shot audio-video editing, a novel task that
requires transforming original audio-visual content to align with a specified
textual prompt without additional model training. To evaluate this task, we
curate a benchmark dataset, AvED-Bench, designed explicitly for zero-shot
audio-video editing. AvED-Bench includes 110 videos, each with a 10-second
duration, spanning 11 categories from VGGSound. It offers diverse prompts and
scenarios that require precise alignment between auditory and visual elements,
enabling robust evaluation. We identify limitations in existing zero-shot audio
and video editing methods, particularly in synchronization and coherence
between modalities, which often result in inconsistent outcomes. To address
these challenges, we propose AvED, a zero-shot cross-modal delta denoising
framework that leverages audio-video interactions to achieve synchronized and
coherent edits. AvED demonstrates superior results on both AvED-Bench and the
recent OAVE dataset to validate its generalization capabilities. Results are
available at https://genjib.github.io/project_page/AVED/index.html
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:59:04 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Lin",
"Yan-Bo",
""
],
[
"Lin",
"Kevin",
""
],
[
"Yang",
"Zhengyuan",
""
],
[
"Li",
"Linjie",
""
],
[
"Wang",
"Jianfeng",
""
],
[
"Lin",
"Chung-Ching",
""
],
[
"Wang",
"Xiaofei",
""
],
[
"Bertasius",
"Gedas",
""
],
[
"Wang",
"Lijuan",
""
]
] | TITLE: Zero-Shot Audio-Visual Editing via Cross-Modal Delta Denoising
ABSTRACT: In this paper, we introduce zero-shot audio-video editing, a novel task that
requires transforming original audio-visual content to align with a specified
textual prompt without additional model training. To evaluate this task, we
curate a benchmark dataset, AvED-Bench, designed explicitly for zero-shot
audio-video editing. AvED-Bench includes 110 videos, each with a 10-second
duration, spanning 11 categories from VGGSound. It offers diverse prompts and
scenarios that require precise alignment between auditory and visual elements,
enabling robust evaluation. We identify limitations in existing zero-shot audio
and video editing methods, particularly in synchronization and coherence
between modalities, which often result in inconsistent outcomes. To address
these challenges, we propose AvED, a zero-shot cross-modal delta denoising
framework that leverages audio-video interactions to achieve synchronized and
coherent edits. AvED demonstrates superior results on both AvED-Bench and the
recent OAVE dataset to validate its generalization capabilities. Results are
available at https://genjib.github.io/project_page/AVED/index.html
|
2503.20785 | Tianqi Liu | Tianqi Liu, Zihao Huang, Zhaoxi Chen, Guangcong Wang, Shoukang Hu,
Liao Shen, Huiqiang Sun, Zhiguo Cao, Wei Li, Ziwei Liu | Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal
Consistency | Project Page: https://free4d.github.io/ , Code:
https://github.com/TQTQliu/Free4D | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Free4D, a novel tuning-free framework for 4D scene generation from
a single image. Existing methods either focus on object-level generation,
making scene-level generation infeasible, or rely on large-scale multi-view
video datasets for expensive training, with limited generalization ability due
to the scarcity of 4D scene data. In contrast, our key insight is to distill
pre-trained foundation models for consistent 4D scene representation, which
offers promising advantages such as efficiency and generalizability. 1) To
achieve this, we first animate the input image using image-to-video diffusion
models followed by 4D geometric structure initialization. 2) To turn this
coarse structure into spatial-temporal consistent multiview videos, we design
an adaptive guidance mechanism with a point-guided denoising strategy for
spatial consistency and a novel latent replacement strategy for temporal
coherence. 3) To lift these generated observations into consistent 4D
representation, we propose a modulation-based refinement to mitigate
inconsistencies while fully leveraging the generated information. The resulting
4D representation enables real-time, controllable rendering, marking a
significant advancement in single-image-based 4D scene generation.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:59:44 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Liu",
"Tianqi",
""
],
[
"Huang",
"Zihao",
""
],
[
"Chen",
"Zhaoxi",
""
],
[
"Wang",
"Guangcong",
""
],
[
"Hu",
"Shoukang",
""
],
[
"Shen",
"Liao",
""
],
[
"Sun",
"Huiqiang",
""
],
[
"Cao",
"Zhiguo",
""
],
[
"Li",
"Wei",
""
],
[
"Liu",
"Ziwei",
""
]
] | TITLE: Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal
Consistency
ABSTRACT: We present Free4D, a novel tuning-free framework for 4D scene generation from
a single image. Existing methods either focus on object-level generation,
making scene-level generation infeasible, or rely on large-scale multi-view
video datasets for expensive training, with limited generalization ability due
to the scarcity of 4D scene data. In contrast, our key insight is to distill
pre-trained foundation models for consistent 4D scene representation, which
offers promising advantages such as efficiency and generalizability. 1) To
achieve this, we first animate the input image using image-to-video diffusion
models followed by 4D geometric structure initialization. 2) To turn this
coarse structure into spatial-temporal consistent multiview videos, we design
an adaptive guidance mechanism with a point-guided denoising strategy for
spatial consistency and a novel latent replacement strategy for temporal
coherence. 3) To lift these generated observations into consistent 4D
representation, we propose a modulation-based refinement to mitigate
inconsistencies while fully leveraging the generated information. The resulting
4D representation enables real-time, controllable rendering, marking a
significant advancement in single-image-based 4D scene generation.
|
2503.20786 | Zhiqiang Shen | Sondos Mahmoud Bsharat and Mukul Ranjan and Aidar Myrzakhan and
Jiacheng Liu and Bowei Guo and Shengkun Tang and Zhuang Liu and Yuanzhi Li
and Zhiqiang Shen | Mobile-MMLU: A Mobile Intelligence Language Understanding Benchmark | An order-invariant and mobile-centric benchmark. Code and data are
available at: https://github.com/VILA-Lab/Mobile-MMLU | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Rapid advancements in large language models (LLMs) have increased interest in
deploying them on mobile devices for on-device AI applications. Mobile users
interact differently with LLMs compared to desktop users, creating unique
expectations and data biases. Current benchmark datasets primarily target
server and desktop environments, and there is a notable lack of extensive
datasets specifically designed for mobile contexts. Additionally, mobile
devices face strict limitations in storage and computing resources,
constraining model size and capabilities, thus requiring optimized efficiency
and prioritized knowledge. To address these challenges, we introduce
Mobile-MMLU, a large-scale benchmark dataset tailored for mobile intelligence.
It consists of 16,186 questions across 80 mobile-related fields, designed to
evaluate LLM performance in realistic mobile scenarios. A challenging subset,
Mobile-MMLU-Pro, provides advanced evaluation similar in size to MMLU-Pro but
significantly more difficult than our standard full set. Both benchmarks use
multiple-choice, order-invariant questions focused on practical mobile
interactions, such as recipe suggestions, travel planning, and essential daily
tasks. The dataset emphasizes critical mobile-specific metrics like inference
latency, energy consumption, memory usage, and response quality, offering
comprehensive insights into model performance under mobile constraints.
Moreover, it prioritizes privacy and adaptability, assessing models' ability to
perform on-device processing, maintain user privacy, and adapt to personalized
usage patterns. The Mobile-MMLU family offers a standardized framework for
developing and comparing mobile-optimized LLMs, enabling advancements in
productivity and decision-making within mobile computing environments. Our code
and data are available at: https://github.com/VILA-Lab/Mobile-MMLU.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:59:56 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Bsharat",
"Sondos Mahmoud",
""
],
[
"Ranjan",
"Mukul",
""
],
[
"Myrzakhan",
"Aidar",
""
],
[
"Liu",
"Jiacheng",
""
],
[
"Guo",
"Bowei",
""
],
[
"Tang",
"Shengkun",
""
],
[
"Liu",
"Zhuang",
""
],
[
"Li",
"Yuanzhi",
""
],
[
"Shen",
"Zhiqiang",
""
]
] | TITLE: Mobile-MMLU: A Mobile Intelligence Language Understanding Benchmark
ABSTRACT: Rapid advancements in large language models (LLMs) have increased interest in
deploying them on mobile devices for on-device AI applications. Mobile users
interact differently with LLMs compared to desktop users, creating unique
expectations and data biases. Current benchmark datasets primarily target
server and desktop environments, and there is a notable lack of extensive
datasets specifically designed for mobile contexts. Additionally, mobile
devices face strict limitations in storage and computing resources,
constraining model size and capabilities, thus requiring optimized efficiency
and prioritized knowledge. To address these challenges, we introduce
Mobile-MMLU, a large-scale benchmark dataset tailored for mobile intelligence.
It consists of 16,186 questions across 80 mobile-related fields, designed to
evaluate LLM performance in realistic mobile scenarios. A challenging subset,
Mobile-MMLU-Pro, provides advanced evaluation similar in size to MMLU-Pro but
significantly more difficult than our standard full set. Both benchmarks use
multiple-choice, order-invariant questions focused on practical mobile
interactions, such as recipe suggestions, travel planning, and essential daily
tasks. The dataset emphasizes critical mobile-specific metrics like inference
latency, energy consumption, memory usage, and response quality, offering
comprehensive insights into model performance under mobile constraints.
Moreover, it prioritizes privacy and adaptability, assessing models' ability to
perform on-device processing, maintain user privacy, and adapt to personalized
usage patterns. The Mobile-MMLU family offers a standardized framework for
developing and comparing mobile-optimized LLMs, enabling advancements in
productivity and decision-making within mobile computing environments. Our code
and data are available at: https://github.com/VILA-Lab/Mobile-MMLU.
|
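The Mobile-MMLU abstract above emphasizes order-invariant multiple-choice questions. The benchmark's exact scoring protocol is not given in the abstract, so the sketch below shows one plausible way to score an item only when the model's pick survives shuffled option orderings; the function names and the always-"A" toy model are hypothetical.

```python
import random

def order_invariant_score(model_answer_fn, question, options, correct,
                          n_perms=4, seed=0):
    """Credit an item only if the model selects the correct option under every
    shuffled ordering of the answer choices."""
    rng = random.Random(seed)
    for _ in range(n_perms):
        shuffled = options[:]
        rng.shuffle(shuffled)
        letters = "ABCD"[:len(shuffled)]
        prompt = question + "\n" + "\n".join(
            f"{letter}. {text}" for letter, text in zip(letters, shuffled))
        picked = shuffled[letters.index(model_answer_fn(prompt))]
        if picked != correct:
            return 0.0
    return 1.0

always_a = lambda prompt: "A"   # a position-biased toy model
print(order_invariant_score(always_a, "Capital of France?",
                            ["Paris", "Rome", "Berlin", "Madrid"], "Paris"))
```

A position-biased model like the toy above will typically score 0 under this scheme, which is exactly the answer-order bias that order-invariant evaluation is meant to remove.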
2010.13494 | Kenji Kobayashi | Kenji Kobayashi, Yuri Nakao | One-vs.-One Mitigation of Intersectional Bias: A General Method to
Extend Fairness-Aware Binary Classification | null | null | null | null | cs.LG cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the widespread adoption of machine learning in the real world, the
impact of the discriminatory bias has attracted attention. In recent years,
various methods to mitigate the bias have been proposed. However, most of them
have not considered intersectional bias, which brings unfair situations where
people belonging to specific subgroups of a protected group are treated worse
when multiple sensitive attributes are taken into consideration. To mitigate
this bias, in this paper, we propose a method called One-vs.-One Mitigation by
applying a process of comparison between each pair of subgroups related to
sensitive attributes to the fairness-aware machine learning for binary
classification. We compare our method and the conventional fairness-aware
binary classification methods in comprehensive settings using three approaches
(pre-processing, in-processing, and post-processing), six metrics (the ratio
and difference of demographic parity, equalized odds, and equal opportunity),
and two real-world datasets (Adult and COMPAS). As a result, our method
mitigates the intersectional bias much better than conventional methods in all
the settings. With this result, we open up the potential of fairness-aware
binary classification for solving more realistic problems occurring when there
are multiple sensitive attributes.
| [
{
"version": "v1",
"created": "Mon, 26 Oct 2020 11:35:39 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 13:32:15 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Kobayashi",
"Kenji",
""
],
[
"Nakao",
"Yuri",
""
]
] | TITLE: One-vs.-One Mitigation of Intersectional Bias: A General Method to
Extend Fairness-Aware Binary Classification
ABSTRACT: With the widespread adoption of machine learning in the real world, the
impact of the discriminatory bias has attracted attention. In recent years,
various methods to mitigate the bias have been proposed. However, most of them
have not considered intersectional bias, which brings unfair situations where
people belonging to specific subgroups of a protected group are treated worse
when multiple sensitive attributes are taken into consideration. To mitigate
this bias, in this paper, we propose a method called One-vs.-One Mitigation by
applying a process of comparison between each pair of subgroups related to
sensitive attributes to the fairness-aware machine learning for binary
classification. We compare our method and the conventional fairness-aware
binary classification methods in comprehensive settings using three approaches
(pre-processing, in-processing, and post-processing), six metrics (the ratio
and difference of demographic parity, equalized odds, and equal opportunity),
and two real-world datasets (Adult and COMPAS). As a result, our method
mitigates the intersectional bias much better than conventional methods in all
the settings. With this result, we open up the potential of fairness-aware
binary classification for solving more realistic problems occurring when there
are multiple sensitive attributes.
|
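The abstract above frames intersectional bias as unfairness between pairs of subgroups defined by multiple sensitive attributes and lists demographic parity among its evaluation metrics. The sketch below only computes pairwise demographic-parity gaps between such subgroups; the paper's actual contribution, applying fairness-aware learning to each subgroup pair, is not reproduced, and the toy data are invented.

```python
import itertools
import numpy as np

def pairwise_demographic_parity_gaps(y_pred, subgroup):
    """Absolute demographic-parity difference (positive-prediction rate gap)
    for every pair of intersectional subgroups."""
    y_pred, subgroup = np.asarray(y_pred), np.asarray(subgroup)
    rates = {g: y_pred[subgroup == g].mean() for g in np.unique(subgroup)}
    return {(a, b): abs(rates[a] - rates[b])
            for a, b in itertools.combinations(sorted(rates), 2)}

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]                  # toy binary decisions
groups = ["f_young", "f_young", "f_old", "m_young",
          "m_old", "f_old", "m_young", "m_old"]    # intersection of two attributes
print(pairwise_demographic_parity_gaps(y_pred, groups))
```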
2108.05293 | Zhonghua Wu | Weide Liu, Zhonghua Wu, Henghui Ding, Fayao Liu, Jie Lin, Guosheng
Lin, Wei Zhou | Few-Shot Segmentation with Global and Local Contrastive Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we address the challenging task of few-shot segmentation.
Previous few-shot segmentation methods mainly employ the information of support
images as guidance for query image segmentation. Although some works propose to
build cross-reference between support and query images, their extraction of
query information still depends on the support images. We here propose to
extract the information from the query itself independently to benefit the
few-shot segmentation task. To this end, we first propose a prior extractor to
learn the query information from the unlabeled images with our proposed
global-local contrastive learning. Then, we extract a set of predetermined
priors via this prior extractor. With the obtained priors, we generate the
prior region maps for query images, which locate the objects, as guidance to
perform cross interaction with support features. In such a way, the extraction
of query information is detached from the support branch, overcoming the
limitation by support, and could obtain more informative query clues to achieve
better interaction. Without bells and whistles, the proposed approach achieves
new state-of-the-art performance for the few-shot segmentation task on
PASCAL-5$^{i}$ and COCO datasets.
| [
{
"version": "v1",
"created": "Wed, 11 Aug 2021 15:52:22 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 07:58:53 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Liu",
"Weide",
""
],
[
"Wu",
"Zhonghua",
""
],
[
"Ding",
"Henghui",
""
],
[
"Liu",
"Fayao",
""
],
[
"Lin",
"Jie",
""
],
[
"Lin",
"Guosheng",
""
],
[
"Zhou",
"Wei",
""
]
] | TITLE: Few-Shot Segmentation with Global and Local Contrastive Learning
ABSTRACT: In this work, we address the challenging task of few-shot segmentation.
Previous few-shot segmentation methods mainly employ the information of support
images as guidance for query image segmentation. Although some works propose to
build cross-reference between support and query images, their extraction of
query information still depends on the support images. We here propose to
extract the information from the query itself independently to benefit the
few-shot segmentation task. To this end, we first propose a prior extractor to
learn the query information from the unlabeled images with our proposed
global-local contrastive learning. Then, we extract a set of predetermined
priors via this prior extractor. With the obtained priors, we generate the
prior region maps for query images, which locate the objects, as guidance to
perform cross interaction with support features. In such a way, the extraction
of query information is detached from the support branch, overcoming the
limitation by support, and could obtain more informative query clues to achieve
better interaction. Without bells and whistles, the proposed approach achieves
new state-of-the-art performance for the few-shot segmentation task on
PASCAL-5$^{i}$ and COCO datasets.
|
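The abstract above relies on a global-local contrastive objective to learn a prior extractor from unlabeled images, but its exact loss is not stated. The sketch below therefore gives a generic InfoNCE contrastive loss in PyTorch as a stand-in for such an objective; the batch size, embedding dimension, and negatives layout are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.07):
    """Generic InfoNCE: pull each anchor toward its positive embedding and push
    it away from a set of negatives (a stand-in for a global-local contrastive
    objective, not the paper's exact formulation)."""
    anchor = F.normalize(anchor, dim=-1)        # (B, D)
    positive = F.normalize(positive, dim=-1)    # (B, D)
    negatives = F.normalize(negatives, dim=-1)  # (B, K, D)
    pos_logit = (anchor * positive).sum(-1, keepdim=True)       # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", anchor, negatives)  # (B, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long)      # positive at index 0
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 16, 128))
print(loss.item())
```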
2203.06667 | Bin Li | Bin Li, Yixuan Weng, Bin Sun and Shutao Li | Towards Visual-Prompt Temporal Answering Grounding in Medical
Instructional Video | 8 pages, 6 figures, 3 tables | null | 10.1109/TPAMI.2024.3411045 | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The temporal answering grounding in the video (TAGV) is a new task naturally
derived from temporal sentence grounding in the video (TSGV). Given an
untrimmed video and a text question, this task aims at locating the matching
span from the video that can semantically answer the question. Existing methods
tend to formulate the TAGV task with a visual span-based question answering
(QA) approach by matching the visual frame span queried by the text question.
However, due to the weak correlations and large semantic gaps between the
textual question and the visual answer, existing methods that adopt a visual
span predictor perform poorly in the TAGV task. To bridge these gaps, we
propose a visual-prompt text span localizing (VPTSL) method, which introduces
the timestamped subtitles as a passage to perform the text span localization
for the input text question, and prompts the visual highlight features into the
pre-trained language model (PLM) for enhancing the joint semantic
representations. Specifically, the context query attention is utilized to
perform cross-modal interaction between the extracted textual and visual
features. Then, the highlight features are obtained through the video-text
highlighting for the visual prompt. To alleviate semantic differences between
textual and visual features, we design the text span predictor by encoding the
question, the subtitles, and the prompted visual highlight features with the
PLM. As a result, the TAGV task is formulated to predict the span of subtitles
matching the visual answer. Extensive experiments on the medical instructional
dataset, namely MedVidQA, show that the proposed VPTSL outperforms the
state-of-the-art (SOTA) method by 28.36% in terms of mIOU with a large margin,
which demonstrates the effectiveness of the proposed visual prompt and the text
span predictor.
| [
{
"version": "v1",
"created": "Sun, 13 Mar 2022 14:42:53 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 07:46:41 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Mar 2022 13:55:24 GMT"
},
{
"version": "v4",
"created": "Wed, 23 Mar 2022 15:10:44 GMT"
},
{
"version": "v5",
"created": "Sun, 27 Mar 2022 14:19:00 GMT"
},
{
"version": "v6",
"created": "Tue, 29 Mar 2022 15:37:35 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Bin",
""
],
[
"Weng",
"Yixuan",
""
],
[
"Sun",
"Bin",
""
],
[
"Li",
"Shutao",
""
]
] | TITLE: Towards Visual-Prompt Temporal Answering Grounding in Medical
Instructional Video
ABSTRACT: The temporal answering grounding in the video (TAGV) is a new task naturally
derived from temporal sentence grounding in the video (TSGV). Given an
untrimmed video and a text question, this task aims at locating the matching
span from the video that can semantically answer the question. Existing methods
tend to formulate the TAGV task with a visual span-based question answering
(QA) approach by matching the visual frame span queried by the text question.
However, due to the weak correlations and large semantic gaps between the
textual question and the visual answer, existing methods that adopt a visual
span predictor perform poorly in the TAGV task. To bridge these gaps, we
propose a visual-prompt text span localizing (VPTSL) method, which introduces
the timestamped subtitles as a passage to perform the text span localization
for the input text question, and prompts the visual highlight features into the
pre-trained language model (PLM) for enhancing the joint semantic
representations. Specifically, the context query attention is utilized to
perform cross-modal interaction between the extracted textual and visual
features. Then, the highlight features are obtained through the video-text
highlighting for the visual prompt. To alleviate semantic differences between
textual and visual features, we design the text span predictor by encoding the
question, the subtitles, and the prompted visual highlight features with the
PLM. As a result, the TAGV task is formulated to predict the span of subtitles
matching the visual answer. Extensive experiments on the medical instructional
dataset, namely MedVidQA, show that the proposed VPTSL outperforms the
state-of-the-art (SOTA) method by 28.36% in terms of mIOU with a large margin,
which demonstrates the effectiveness of the proposed visual prompt and the text
span predictor.
|
2204.09220 | Bin Li | Fei Xia, Bin Li, Yixuan Weng, Shizhu He, Kang Liu, Bin Sun, Shutao Li
and Jun Zhao | LingYi: Medical Conversational Question Answering System based on
Multi-modal Knowledge Graphs | 9 pages, 4 figures, 5 tables | null | 10.18653/v1/2022.emnlp-demos.15 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The medical conversational system can relieve the burden of doctors and
improve the efficiency of healthcare, especially during the pandemic. This
paper presents a medical conversational question answering (CQA) system based
on the multi-modal knowledge graph, namely "LingYi", which is designed as a
pipeline framework to maintain high flexibility. Our system utilizes automated
medical procedures including medical triage, consultation, image-text drug
recommendation and record. To conduct knowledge-grounded dialogues with
patients, we first construct a Chinese Medical Multi-Modal Knowledge Graph
(CM3KG) and collect a large-scale Chinese Medical CQA (CMCQA) dataset. Compared
with the other existing medical question-answering systems, our system adopts
several state-of-the-art technologies, including medical entity disambiguation
and medical dialogue generation, making it better suited to providing medical
services to patients. In addition, we have open-sourced our code, which contains
back-end models and front-end web pages at https://github.com/WENGSYX/LingYi.
The datasets including CM3KG at https://github.com/WENGSYX/CM3KG and CMCQA at
https://github.com/WENGSYX/CMCQA are also released to further promote future
research.
| [
{
"version": "v1",
"created": "Wed, 20 Apr 2022 04:41:26 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Xia",
"Fei",
""
],
[
"Li",
"Bin",
""
],
[
"Weng",
"Yixuan",
""
],
[
"He",
"Shizhu",
""
],
[
"Liu",
"Kang",
""
],
[
"Sun",
"Bin",
""
],
[
"Li",
"Shutao",
""
],
[
"Zhao",
"Jun",
""
]
] | TITLE: LingYi: Medical Conversational Question Answering System based on
Multi-modal Knowledge Graphs
ABSTRACT: The medical conversational system can relieve the burden of doctors and
improve the efficiency of healthcare, especially during the pandemic. This
paper presents a medical conversational question answering (CQA) system based
on the multi-modal knowledge graph, namely "LingYi", which is designed as a
pipeline framework to maintain high flexibility. Our system utilizes automated
medical procedures including medical triage, consultation, image-text drug
recommendation and record. To conduct knowledge-grounded dialogues with
patients, we first construct a Chinese Medical Multi-Modal Knowledge Graph
(CM3KG) and collect a large-scale Chinese Medical CQA (CMCQA) dataset. Compared
with the other existing medical question-answering systems, our system adopts
several state-of-the-art technologies, including medical entity disambiguation
and medical dialogue generation, making it better suited to providing medical
services to patients. In addition, we have open-sourced our code, which contains
back-end models and front-end web pages at https://github.com/WENGSYX/LingYi.
The datasets including CM3KG at https://github.com/WENGSYX/CM3KG and CMCQA at
https://github.com/WENGSYX/CMCQA are also released to further promote future
research.
|
2207.08486 | Ali Raza Dr. | Ali Raza, Shujun Li, Kim-Phuc Tran, Ludovic Koehl and Kim Duc Tran | Using Anomaly Detection to Detect Poisoning Attacks in Federated
Learning Applications | We will updated this article soon | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial attacks such as poisoning attacks have attracted the attention of
many machine learning researchers. Traditionally, poisoning attacks attempt to
inject adversarial training data in order to manipulate the trained model. In
federated learning (FL), data poisoning attacks can be generalized to model
poisoning attacks, which cannot be detected by simpler methods due to the lack
of access to local training data by the detector. State-of-the-art poisoning
attack detection methods for FL have various weaknesses, e.g., they require the
number of attackers to be known or to be sufficiently low, work only with i.i.d.
data, and have high computational complexity. To overcome these weaknesses, we propose a
novel framework for detecting poisoning attacks in FL, which employs a
reference model based on a public dataset and an auditor model to detect
malicious updates. We implemented a detector based on the proposed framework
and using a one-class support vector machine (OC-SVM), which reaches the lowest
possible computational complexity O(K) where K is the number of clients. We
evaluated our detector's performance against state-of-the-art (SOTA) poisoning
attacks for two typical applications of FL: electrocardiograph (ECG)
classification and human activity recognition (HAR). Our experimental results
validated the performance of our detector over other SOTA detection methods.
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2022 10:10:45 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 13:30:46 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 07:43:43 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 07:50:17 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Raza",
"Ali",
""
],
[
"Li",
"Shujun",
""
],
[
"Tran",
"Kim-Phuc",
""
],
[
"Koehl",
"Ludovic",
""
],
[
"Tran",
"Kim Duc",
""
]
] | TITLE: Using Anomaly Detection to Detect Poisoning Attacks in Federated
Learning Applications
ABSTRACT: Adversarial attacks such as poisoning attacks have attracted the attention of
many machine learning researchers. Traditionally, poisoning attacks attempt to
inject adversarial training data in order to manipulate the trained model. In
federated learning (FL), data poisoning attacks can be generalized to model
poisoning attacks, which cannot be detected by simpler methods due to the lack
of access to local training data by the detector. State-of-the-art poisoning
attack detection methods for FL have various weaknesses, e.g., they require the
number of attackers to be known or to be sufficiently low, work only with i.i.d.
data, and have high computational complexity. To overcome these weaknesses, we propose a
novel framework for detecting poisoning attacks in FL, which employs a
reference model based on a public dataset and an auditor model to detect
malicious updates. We implemented a detector based on the proposed framework
and using a one-class support vector machine (OC-SVM), which reaches the lowest
possible computational complexity O(K) where K is the number of clients. We
evaluated our detector's performance against state-of-the-art (SOTA) poisoning
attacks for two typical applications of FL: electrocardiograph (ECG)
classification and human activity recognition (HAR). Our experimental results
validated the performance of our detector over other SOTA detection methods.
|
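The abstract above states that the detector is built around a one-class SVM and a reference model trained on a public dataset. The sketch below shows that core step with scikit-learn's OneClassSVM applied to synthetic flattened client updates; the auditor model and other parts of the proposed framework are omitted, and the update statistics are invented.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical setup: each client's model update is flattened to a vector, and
# "reference" updates come from training on a public dataset.
reference_updates = rng.normal(0.0, 0.1, size=(50, 128))
benign_clients    = rng.normal(0.0, 0.1, size=(8, 128))
poisoned_clients  = rng.normal(1.5, 0.1, size=(2, 128))   # shifted/scaled updates

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
detector.fit(reference_updates)                # learn the region of benign updates

flags = detector.predict(np.vstack([benign_clients, poisoned_clients]))
print(flags)   # +1 = accepted as benign, -1 = flagged as potentially poisoned
```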
2210.12241 | Oliver Boyne | Oliver Boyne, James Charles, Roberto Cipolla | FIND: An Unsupervised Implicit 3D Model of Articulated Human Feet | BMVC 2022 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper we present a high fidelity and articulated 3D human foot model.
The model is parameterised by a disentangled latent code in terms of shape,
texture and articulated pose. While high fidelity models are typically created
with strong supervision such as 3D keypoint correspondences or
pre-registration, we focus on the difficult case of little to no annotation. To
this end, we make the following contributions: (i) we develop a Foot Implicit
Neural Deformation field model, named FIND, capable of tailoring explicit
meshes at any resolution, i.e., for low- or high-powered devices; (ii) an approach
for training our model in various modes of weak supervision with progressively
better disentanglement as more labels, such as pose categories, are provided;
(iii) a novel unsupervised part-based loss for fitting our model to 2D images
which is better than traditional photometric or silhouette losses; (iv)
finally, we release a new dataset of high resolution 3D human foot scans,
Foot3D. On this dataset, we show our model outperforms a strong PCA
implementation trained on the same data in terms of shape quality and part
correspondences, and that our novel unsupervised part-based loss improves
inference on images.
| [
{
"version": "v1",
"created": "Fri, 21 Oct 2022 20:47:16 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 19:51:35 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 21:49:29 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Boyne",
"Oliver",
""
],
[
"Charles",
"James",
""
],
[
"Cipolla",
"Roberto",
""
]
] | TITLE: FIND: An Unsupervised Implicit 3D Model of Articulated Human Feet
ABSTRACT: In this paper we present a high fidelity and articulated 3D human foot model.
The model is parameterised by a disentangled latent code in terms of shape,
texture and articulated pose. While high fidelity models are typically created
with strong supervision such as 3D keypoint correspondences or
pre-registration, we focus on the difficult case of little to no annotation. To
this end, we make the following contributions: (i) we develop a Foot Implicit
Neural Deformation field model, named FIND, capable of tailoring explicit
meshes at any resolution, i.e., for low- or high-powered devices; (ii) an approach
for training our model in various modes of weak supervision with progressively
better disentanglement as more labels, such as pose categories, are provided;
(iii) a novel unsupervised part-based loss for fitting our model to 2D images
which is better than traditional photometric or silhouette losses; (iv)
finally, we release a new dataset of high resolution 3D human foot scans,
Foot3D. On this dataset, we show our model outperforms a strong PCA
implementation trained on the same data in terms of shape quality and part
correspondences, and that our novel unsupervised part-based loss improves
inference on images.
|
2304.11868 | Mingjie Li | Mingjie Li, Ben Beck, Tharindu Rathnayake, Lingheng Meng, Zijue Chen,
Akansel Cosgun, Xiaojun Chang, Dana Kuli\'c | A Benchmark for Cycling Close Pass Detection from Video Streams | Accepted by Transportation Research Part C: Emerging Technologies | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Cycling is a healthy and sustainable mode of transport. However, interactions
with motor vehicles remain a key barrier to increased cycling participation.
The ability to detect potentially dangerous interactions from on-bike sensing
could provide important information to riders and policymakers. A key influence
on rider comfort and safety is close passes, i.e., when a vehicle narrowly
passes a cyclist. In this paper, we introduce a novel benchmark, called Cyc-CP,
towards close pass (CP) event detection from video streams. The task is
formulated into two problem categories: scene-level and instance-level.
Scene-level detection ascertains the presence of a CP event within the provided
video clip. Instance-level detection identifies the specific vehicle within the
scene that precipitates a CP event. To address these challenges, we introduce
four benchmark models, each underpinned by advanced deep-learning
methodologies. For training and evaluating those models, we have developed a
synthetic dataset alongside the acquisition of a real-world dataset. The
benchmark evaluations reveal that the models achieve an accuracy of 88.13\% for
scene-level detection and 84.60\% for instance-level detection on the
real-world dataset. We envision this benchmark as a test-bed to accelerate CP
detection and facilitate interaction between the fields of road safety,
intelligent transportation systems and artificial intelligence. Both the
benchmark datasets and detection models will be available at
https://github.com/SustainableMobility/cyc-cp to facilitate experimental
reproducibility and encourage more in-depth research in the field.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2023 07:30:01 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 06:39:51 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Mingjie",
""
],
[
"Beck",
"Ben",
""
],
[
"Rathnayake",
"Tharindu",
""
],
[
"Meng",
"Lingheng",
""
],
[
"Chen",
"Zijue",
""
],
[
"Cosgun",
"Akansel",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Kulić",
"Dana",
""
]
] | TITLE: A Benchmark for Cycling Close Pass Detection from Video Streams
ABSTRACT: Cycling is a healthy and sustainable mode of transport. However, interactions
with motor vehicles remain a key barrier to increased cycling participation.
The ability to detect potentially dangerous interactions from on-bike sensing
could provide important information to riders and policymakers. A key influence
on rider comfort and safety is close passes, i.e., when a vehicle narrowly
passes a cyclist. In this paper, we introduce a novel benchmark, called Cyc-CP,
towards close pass (CP) event detection from video streams. The task is
formulated into two problem categories: scene-level and instance-level.
Scene-level detection ascertains the presence of a CP event within the provided
video clip. Instance-level detection identifies the specific vehicle within the
scene that precipitates a CP event. To address these challenges, we introduce
four benchmark models, each underpinned by advanced deep-learning
methodologies. For training and evaluating those models, we have developed a
synthetic dataset alongside the acquisition of a real-world dataset. The
benchmark evaluations reveal that the models achieve an accuracy of 88.13\% for
scene-level detection and 84.60\% for instance-level detection on the
real-world dataset. We envision this benchmark as a test-bed to accelerate CP
detection and facilitate interaction between the fields of road safety,
intelligent transportation systems and artificial intelligence. Both the
benchmark datasets and detection models will be available at
https://github.com/SustainableMobility/cyc-cp to facilitate experimental
reproducibility and encourage more in-depth research in the field.
|
2304.12693 | Neil Scheidwasser | Matthew J Penn, Neil Scheidwasser, Mark P Khurana, David A Duch\^ene,
Christl A Donnelly, Samir Bhatt | Phylo2Vec: a vector representation for binary trees | 38 pages, 9 figures, 1 table, 2 supplementary figures | Systematic Biology, 2024, syae030 | 10.1093/sysbio/syae030 | null | q-bio.PE cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Binary phylogenetic trees inferred from biological data are central to
understanding the shared history among evolutionary units. However, inferring
the placement of latent nodes in a tree is computationally expensive.
State-of-the-art methods rely on carefully designed heuristics for tree search,
using different data structures for easy manipulation (e.g., classes in
object-oriented programming languages) and readable representation of trees
(e.g., Newick-format strings). Here, we present Phylo2Vec, a parsimonious
encoding for phylogenetic trees that serves as a unified approach for both
manipulating and representing phylogenetic trees. Phylo2Vec maps any binary
tree with $n$ leaves to a unique integer vector of length $n-1$. The advantages
of Phylo2Vec are fourfold: (i) fast tree sampling, (ii) compressed tree
representation compared to a Newick string, (iii) quick and unambiguous
verification of whether two binary trees are topologically identical, and (iv) the
systematic ability to traverse tree space in very large or small jumps. As a
proof of concept, we use Phylo2Vec for maximum likelihood inference on five
real-world datasets and show that a simple hill-climbing-based optimisation
scheme can efficiently traverse the vastness of tree space from a random to an
optimal tree.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2023 09:54:35 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Dec 2023 08:26:28 GMT"
},
{
"version": "v3",
"created": "Fri, 10 May 2024 14:31:10 GMT"
},
{
"version": "v4",
"created": "Mon, 4 Nov 2024 15:37:52 GMT"
},
{
"version": "v5",
"created": "Tue, 25 Mar 2025 16:44:19 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Penn",
"Matthew J",
""
],
[
"Scheidwasser",
"Neil",
""
],
[
"Khurana",
"Mark P",
""
],
[
"Duchêne",
"David A",
""
],
[
"Donnelly",
"Christl A",
""
],
[
"Bhatt",
"Samir",
""
]
] | TITLE: Phylo2Vec: a vector representation for binary trees
ABSTRACT: Binary phylogenetic trees inferred from biological data are central to
understanding the shared history among evolutionary units. However, inferring
the placement of latent nodes in a tree is computationally expensive.
State-of-the-art methods rely on carefully designed heuristics for tree search,
using different data structures for easy manipulation (e.g., classes in
object-oriented programming languages) and readable representation of trees
(e.g., Newick-format strings). Here, we present Phylo2Vec, a parsimonious
encoding for phylogenetic trees that serves as a unified approach for both
manipulating and representing phylogenetic trees. Phylo2Vec maps any binary
tree with $n$ leaves to a unique integer vector of length $n-1$. The advantages
of Phylo2Vec are fourfold: (i) fast tree sampling, (ii) compressed tree
representation compared to a Newick string, (iii) quick and unambiguous
verification if two binary trees are identical topologically, and (iv)
systematic ability to traverse tree space in very large or small jumps. As a
proof of concept, we use Phylo2Vec for maximum likelihood inference on five
real-world datasets and show that a simple hill-climbing-based optimisation
scheme can efficiently traverse the vastness of tree space from a random to an
optimal tree.
|
2305.04268 | Ze-Xin Yin | Ze-Xin Yin and Peng-Yi Jiao and Jiaxiong Qiu and Ming-Ming Cheng and
Bo Ren | MS-NeRF: Multi-Space Neural Radiance Fields | TPAMI 2025, 18 pages, 23 figures | IEEE Transactions on Pattern Analysis and Machine Intelligence
2025 | 10.1109/TPAMI.2025.3540074 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Existing Neural Radiance Fields (NeRF) methods suffer from the existence of
reflective objects, often resulting in blurry or distorted rendering. Instead
of calculating a single radiance field, we propose a multi-space neural
radiance field (MS-NeRF) that represents the scene using a group of feature
fields in parallel sub-spaces, which leads to a better understanding of the
neural network toward the existence of reflective and refractive objects. Our
multi-space scheme works as an enhancement to existing NeRF methods, with only
small computational overheads needed for training and inferring the extra-space
outputs. We design different multi-space modules for representative MLP-based
and grid-based NeRF methods, which improve Mip-NeRF 360 by 4.15 dB in PSNR with
0.5% extra parameters and further improve TensoRF by 2.71 dB with 0.046% extra
parameters on reflective regions without degrading the rendering quality on
other regions. We further construct a novel dataset consisting of 33 synthetic
scenes and 7 real captured scenes with complex reflection and refraction, where
we design complex camera paths to fully benchmark the robustness of NeRF-based
methods. Extensive experiments show that our approach significantly outperforms
the existing single-space NeRF methods for rendering high-quality scenes
concerned with complex light paths through mirror-like objects. The source
code, dataset, and results are available via our project page:
https://zx-yin.github.io/msnerf/.
| [
{
"version": "v1",
"created": "Sun, 7 May 2023 13:11:07 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 06:26:30 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Yin",
"Ze-Xin",
""
],
[
"Jiao",
"Peng-Yi",
""
],
[
"Qiu",
"Jiaxiong",
""
],
[
"Cheng",
"Ming-Ming",
""
],
[
"Ren",
"Bo",
""
]
] | TITLE: MS-NeRF: Multi-Space Neural Radiance Fields
ABSTRACT: Existing Neural Radiance Fields (NeRF) methods suffer from the existence of
reflective objects, often resulting in blurry or distorted rendering. Instead
of calculating a single radiance field, we propose a multi-space neural
radiance field (MS-NeRF) that represents the scene using a group of feature
fields in parallel sub-spaces, which leads to a better understanding of the
neural network toward the existence of reflective and refractive objects. Our
multi-space scheme works as an enhancement to existing NeRF methods, with only
small computational overheads needed for training and inferring the extra-space
outputs. We design different multi-space modules for representative MLP-based
and grid-based NeRF methods, which improve Mip-NeRF 360 by 4.15 dB in PSNR with
0.5% extra parameters and further improve TensoRF by 2.71 dB with 0.046% extra
parameters on reflective regions without degrading the rendering quality on
other regions. We further construct a novel dataset consisting of 33 synthetic
scenes and 7 real captured scenes with complex reflection and refraction, where
we design complex camera paths to fully benchmark the robustness of NeRF-based
methods. Extensive experiments show that our approach significantly outperforms
the existing single-space NeRF methods for rendering high-quality scenes
concerned with complex light paths through mirror-like objects. The source
code, dataset, and results are available via our project page:
https://zx-yin.github.io/msnerf/.
|
2305.12646 | Shuqiang Wang | Bowen Hu, Weiheng Yao, Sibo Qiao, Hieu Pham, Shuqiang Wang, Michael
Kwok-Po Ng | SG-GAN: Fine Stereoscopic-Aware Generation for 3D Brain Point Cloud
Up-sampling from a Single Image | Accepted by TETCI | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In minimally-invasive brain surgeries with indirect and narrow operating
environments, 3D brain reconstruction is crucial. However, as the accuracy
requirements of some new minimally-invasive surgeries (such as brain-computer
interface surgery) continue to rise, the outputs of conventional 3D
reconstruction, such as the point cloud (PC), face the challenges that sample
points are too sparse and precision is insufficient. On the other hand,
there is a scarcity of high-density point cloud datasets, which makes it
challenging to train models for direct reconstruction of high-density brain
point clouds. In this work, a novel model named stereoscopic-aware graph
generative adversarial network (SG-GAN) with two stages is proposed to generate
fine high-density PC conditioned on a single image. The Stage-I GAN sketches
the primitive shape and basic structure of the organ based on the given image,
yielding Stage-I point clouds. The Stage-II GAN takes the results from Stage-I
and generates high-density point clouds with detailed features. The Stage-II
GAN is capable of correcting defects and restoring the detailed features of the
region of interest (ROI) through the up-sampling process. Furthermore, a
parameter-free-attention-based free-transforming module is developed to learn
efficient features of the input while maintaining promising performance.
Compared with existing methods, the SG-GAN model shows superior
performance in terms of visual quality, objective measurements, and performance
in classification, as demonstrated by comprehensive results measured by several
evaluation metrics including PC-to-PC error and Chamfer distance.
| [
{
"version": "v1",
"created": "Mon, 22 May 2023 02:42:12 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 14:17:56 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Hu",
"Bowen",
""
],
[
"Yao",
"Weiheng",
""
],
[
"Qiao",
"Sibo",
""
],
[
"Pham",
"Hieu",
""
],
[
"Wang",
"Shuqiang",
""
],
[
"Ng",
"Michael Kwok-Po",
""
]
] | TITLE: SG-GAN: Fine Stereoscopic-Aware Generation for 3D Brain Point Cloud
Up-sampling from a Single Image
ABSTRACT: In minimally-invasive brain surgeries with indirect and narrow operating
environments, 3D brain reconstruction is crucial. However, as the accuracy
requirements of some new minimally-invasive surgeries (such as brain-computer
interface surgery) continue to rise, the outputs of conventional 3D
reconstruction, such as the point cloud (PC), face the challenges that sample
points are too sparse and precision is insufficient. On the other hand,
there is a scarcity of high-density point cloud datasets, which makes it
challenging to train models for direct reconstruction of high-density brain
point clouds. In this work, a novel model named stereoscopic-aware graph
generative adversarial network (SG-GAN) with two stages is proposed to generate
fine high-density PC conditioned on a single image. The Stage-I GAN sketches
the primitive shape and basic structure of the organ based on the given image,
yielding Stage-I point clouds. The Stage-II GAN takes the results from Stage-I
and generates high-density point clouds with detailed features. The Stage-II
GAN is capable of correcting defects and restoring the detailed features of the
region of interest (ROI) through the up-sampling process. Furthermore, a
parameter-free-attention-based free-transforming module is developed to learn
efficient features of the input while maintaining promising performance.
Compared with existing methods, the SG-GAN model shows superior
performance in terms of visual quality, objective measurements, and performance
in classification, as demonstrated by comprehensive results measured by several
evaluation metrics including PC-to-PC error and Chamfer distance.
|
2306.06302 | Elan Markowitz | Elan Markowitz, Ziyan Jiang, Fan Yang, Xing Fan, Tony Chen, Greg Ver
Steeg, Aram Galstyan | Knowledge Enhanced Multi-Domain Recommendations in an AI Assistant
Application | null | ICASSP 2025 - 2025 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP) | 10.1109/ICASSP49660.2025.10889248 | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work explores unifying knowledge enhanced recommendation with
multi-domain recommendation systems in a conversational AI assistant
application. Multi-domain recommendation leverages users' interactions in
previous domains to improve recommendations in a new one. Knowledge graph
enhancement seeks to use external knowledge graphs to improve recommendations
within a single domain. Both research threads incorporate related information
to improve the recommendation task. We propose to unify these approaches: using
information from interactions in other domains as well as external knowledge
graphs to make predictions in a new domain that would not be possible with
either information source alone. We develop a new model and demonstrate the
additive benefit of these approaches on a dataset derived from millions of
users' queries for content across three domains (videos, music, and books) in a
live virtual assistant application. We demonstrate significant improvement on
overall recommendations as well as on recommendations for new users of a
domain.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2023 23:40:03 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 00:54:28 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Markowitz",
"Elan",
""
],
[
"Jiang",
"Ziyan",
""
],
[
"Yang",
"Fan",
""
],
[
"Fan",
"Xing",
""
],
[
"Chen",
"Tony",
""
],
[
"Steeg",
"Greg Ver",
""
],
[
"Galstyan",
"Aram",
""
]
] | TITLE: Knowledge Enhanced Multi-Domain Recommendations in an AI Assistant
Application
ABSTRACT: This work explores unifying knowledge enhanced recommendation with
multi-domain recommendation systems in a conversational AI assistant
application. Multi-domain recommendation leverages users' interactions in
previous domains to improve recommendations in a new one. Knowledge graph
enhancement seeks to use external knowledge graphs to improve recommendations
within a single domain. Both research threads incorporate related information
to improve the recommendation task. We propose to unify these approaches: using
information from interactions in other domains as well as external knowledge
graphs to make predictions in a new domain that would not be possible with
either information source alone. We develop a new model and demonstrate the
additive benefit of these approaches on a dataset derived from millions of
users' queries for content across three domains (videos, music, and books) in a
live virtual assistant application. We demonstrate significant improvement on
overall recommendations as well as on recommendations for new users of a
domain.
|
2307.01530 | Asim Khan | Asim Khan, Taimur Hassan, Muhammad Shafay, Israa Fahmy, Naoufel
Werghi, Lakmal Seneviratne and Irfan Hussain | Tomato Maturity Recognition with Convolutional Transformers | 23 pages, 6 figures and 8 Tables | Sci Rep 13, 22885 (2023) | 10.1038/s41598-023-50129-w | null | cs.CV cs.AI eess.IV | http://creativecommons.org/licenses/by/4.0/ | Tomatoes are a major crop worldwide, and accurately classifying their
maturity is important for many agricultural applications, such as harvesting,
grading, and quality control. In this paper, the authors propose a novel method
for tomato maturity classification using a convolutional transformer. The
convolutional transformer is a hybrid architecture that combines the strengths
of convolutional neural networks (CNNs) and transformers. Additionally, this
study introduces a new tomato dataset named KUTomaData, explicitly designed to
train deep-learning models for tomato segmentation and classification.
KUTomaData is a compilation of images sourced from a greenhouse in the UAE,
with approximately 700 images available for training and testing. The dataset
is prepared under various lighting conditions and viewing perspectives and
employs different mobile camera sensors, distinguishing it from existing
datasets. The contributions of this paper are threefold: Firstly, the authors
propose a novel method for tomato maturity classification using a modular
convolutional transformer. Secondly, the authors introduce a new tomato image
dataset that contains images of tomatoes at different maturity levels. Lastly,
the authors show that the convolutional transformer outperforms
state-of-the-art methods for tomato maturity classification. The effectiveness
of the proposed framework in handling cluttered and occluded tomato instances
was evaluated using two additional public datasets, Laboro Tomato and Rob2Pheno
Annotated Tomato, as benchmarks. The evaluation results across these three
datasets demonstrate the exceptional performance of our proposed framework,
surpassing the state-of-the-art by 58.14%, 65.42%, and 66.39% in terms of mean
average precision scores for KUTomaData, Laboro Tomato, and Rob2Pheno Annotated
Tomato, respectively.
| [
{
"version": "v1",
"created": "Tue, 4 Jul 2023 07:33:53 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Jan 2024 13:13:49 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Khan",
"Asim",
""
],
[
"Hassan",
"Taimur",
""
],
[
"Shafay",
"Muhammad",
""
],
[
"Fahmy",
"Israa",
""
],
[
"Werghi",
"Naoufel",
""
],
[
"Seneviratne",
"Lakmal",
""
],
[
"Hussain",
"Irfan",
""
]
] | TITLE: Tomato Maturity Recognition with Convolutional Transformers
ABSTRACT: Tomatoes are a major crop worldwide, and accurately classifying their
maturity is important for many agricultural applications, such as harvesting,
grading, and quality control. In this paper, the authors propose a novel method
for tomato maturity classification using a convolutional transformer. The
convolutional transformer is a hybrid architecture that combines the strengths
of convolutional neural networks (CNNs) and transformers. Additionally, this
study introduces a new tomato dataset named KUTomaData, explicitly designed to
train deep-learning models for tomato segmentation and classification.
KUTomaData is a compilation of images sourced from a greenhouse in the UAE,
with approximately 700 images available for training and testing. The dataset
is prepared under various lighting conditions and viewing perspectives and
employs different mobile camera sensors, distinguishing it from existing
datasets. The contributions of this paper are threefold: Firstly, the authors
propose a novel method for tomato maturity classification using a modular
convolutional transformer. Secondly, the authors introduce a new tomato image
dataset that contains images of tomatoes at different maturity levels. Lastly,
the authors show that the convolutional transformer outperforms
state-of-the-art methods for tomato maturity classification. The effectiveness
of the proposed framework in handling cluttered and occluded tomato instances
was evaluated using two additional public datasets, Laboro Tomato and Rob2Pheno
Annotated Tomato, as benchmarks. The evaluation results across these three
datasets demonstrate the exceptional performance of our proposed framework,
surpassing the state-of-the-art by 58.14%, 65.42%, and 66.39% in terms of mean
average precision scores for KUTomaData, Laboro Tomato, and Rob2Pheno Annotated
Tomato, respectively.
|
2307.08430 | Chao Li | Chao Li, Zijie Guo, Qiuting He, Hao Xu and Kun He | Long-range Meta-path Search on Large-scale Heterogeneous Graphs | Accepted by Advances in Neural Information Processing Systems
(NeurIPS 2024) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Utilizing long-range dependency, a concept extensively studied in homogeneous
graphs, remains underexplored in heterogeneous graphs, especially on large
ones, posing two significant challenges: Reducing computational costs while
maximizing effective information utilization in the presence of heterogeneity,
and overcoming the over-smoothing issue in graph neural networks. To address
this gap, we investigate the importance of different meta-paths and introduce
an automatic framework for utilizing long-range dependency on heterogeneous
graphs, denoted as Long-range Meta-path Search through Progressive Sampling
(LMSPS). Specifically, we develop a search space with all meta-paths related to
the target node type. By employing a progressive sampling algorithm, LMSPS
dynamically shrinks the search space with hop-independent time complexity.
Through a sampling evaluation strategy, LMSPS conducts a specialized and
effective meta-path selection, leading to retraining with only effective
meta-paths, thus mitigating costs and over-smoothing. Extensive experiments
across diverse heterogeneous datasets validate LMSPS's capability in
discovering effective long-range meta-paths, surpassing state-of-the-art
methods. Our code is available at https://github.com/JHL-HUST/LMSPS.
| [
{
"version": "v1",
"created": "Mon, 17 Jul 2023 12:20:07 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 13:54:29 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Nov 2023 07:53:36 GMT"
},
{
"version": "v4",
"created": "Sat, 3 Feb 2024 09:14:48 GMT"
},
{
"version": "v5",
"created": "Thu, 4 Jul 2024 06:09:06 GMT"
},
{
"version": "v6",
"created": "Tue, 25 Mar 2025 04:19:16 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Chao",
""
],
[
"Guo",
"Zijie",
""
],
[
"He",
"Qiuting",
""
],
[
"Xu",
"Hao",
""
],
[
"He",
"Kun",
""
]
] | TITLE: Long-range Meta-path Search on Large-scale Heterogeneous Graphs
ABSTRACT: Utilizing long-range dependency, a concept extensively studied in homogeneous
graphs, remains underexplored in heterogeneous graphs, especially on large
ones, posing two significant challenges: Reducing computational costs while
maximizing effective information utilization in the presence of heterogeneity,
and overcoming the over-smoothing issue in graph neural networks. To address
this gap, we investigate the importance of different meta-paths and introduce
an automatic framework for utilizing long-range dependency on heterogeneous
graphs, denoted as Long-range Meta-path Search through Progressive Sampling
(LMSPS). Specifically, we develop a search space with all meta-paths related to
the target node type. By employing a progressive sampling algorithm, LMSPS
dynamically shrinks the search space with hop-independent time complexity.
Through a sampling evaluation strategy, LMSPS conducts a specialized and
effective meta-path selection, leading to retraining with only effective
meta-paths, thus mitigating costs and over-smoothing. Extensive experiments
across diverse heterogeneous datasets validate LMSPS's capability in
discovering effective long-range meta-paths, surpassing state-of-the-art
methods. Our code is available at https://github.com/JHL-HUST/LMSPS.
|
2310.14720 | Francesco Sanna Passino | Marcus A. K. September, Francesco Sanna Passino, Leonie Goldmann,
Anton Hinel | Extended Deep Adaptive Input Normalization for Preprocessing Time Series
Data for Neural Networks | null | Proceedings of The 27th International Conference on Artificial
Intelligence and Statistics, PMLR 238:1891-1899, 2024 | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Data preprocessing is a crucial part of any machine learning pipeline, and it
can have a significant impact on both performance and training efficiency. This
is especially evident when using deep neural networks for time series
prediction and classification: real-world time series data often exhibit
irregularities such as multi-modality, skewness and outliers, and the model
performance can degrade rapidly if these characteristics are not adequately
addressed. In this work, we propose the EDAIN (Extended Deep Adaptive Input
Normalization) layer, a novel adaptive neural layer that learns how to
appropriately normalize irregular time series data for a given task in an
end-to-end fashion, instead of using a fixed normalization scheme. This is
achieved by optimizing its unknown parameters simultaneously with the deep
neural network using back-propagation. Our experiments, conducted using
synthetic data, a credit default prediction dataset, and a large-scale limit
order book benchmark dataset, demonstrate the superior performance of the EDAIN
layer when compared to conventional normalization methods and existing adaptive
time series preprocessing layers.
| [
{
"version": "v1",
"created": "Mon, 23 Oct 2023 08:56:01 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Feb 2024 08:30:03 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"September",
"Marcus A. K.",
""
],
[
"Passino",
"Francesco Sanna",
""
],
[
"Goldmann",
"Leonie",
""
],
[
"Hinel",
"Anton",
""
]
] | TITLE: Extended Deep Adaptive Input Normalization for Preprocessing Time Series
Data for Neural Networks
ABSTRACT: Data preprocessing is a crucial part of any machine learning pipeline, and it
can have a significant impact on both performance and training efficiency. This
is especially evident when using deep neural networks for time series
prediction and classification: real-world time series data often exhibit
irregularities such as multi-modality, skewness and outliers, and the model
performance can degrade rapidly if these characteristics are not adequately
addressed. In this work, we propose the EDAIN (Extended Deep Adaptive Input
Normalization) layer, a novel adaptive neural layer that learns how to
appropriately normalize irregular time series data for a given task in an
end-to-end fashion, instead of using a fixed normalization scheme. This is
achieved by optimizing its unknown parameters simultaneously with the deep
neural network using back-propagation. Our experiments, conducted using
synthetic data, a credit default prediction dataset, and a large-scale limit
order book benchmark dataset, demonstrate the superior performance of the EDAIN
layer when compared to conventional normalization methods and existing adaptive
time series preprocessing layers.
|
2311.00231 | Tunhou Zhang | Tunhou Zhang, Wei Wen, Igor Fedorov, Xi Liu, Buyun Zhang, Fangqiu Han,
Wen-Yen Chen, Yiping Han, Feng Yan, Hai Li, Yiran Chen | DistDNAS: Search Efficient Feature Interactions within 2 Hours | null | 2024 IEEE International Conference on Big Data | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Search efficiency and serving efficiency are two major axes in building
feature interactions and expediting the model development process in
recommender systems. On large-scale benchmarks, searching for the optimal
feature interaction design requires extensive cost due to the sequential
workflow on the large volume of data. In addition, fusing interactions of
various sources, orders, and mathematical operations introduces potential
conflicts and additional redundancy toward recommender models, leading to
sub-optimal trade-offs in performance and serving cost. In this paper, we
present DistDNAS as a neat solution to brew swift and efficient feature
interaction design. DistDNAS proposes a supernet to incorporate interaction
modules of varying orders and types as a search space. To optimize search
efficiency, DistDNAS distributes the search and aggregates the choice of
optimal interaction modules on varying data dates, achieving over 25x speed-up
and reducing search cost from 2 days to 2 hours. To optimize serving
efficiency, DistDNAS introduces a differentiable cost-aware loss to penalize
the selection of redundant interaction modules, enhancing the efficiency of
discovered feature interactions in serving. We extensively evaluate the best
models crafted by DistDNAS on a 1TB Criteo Terabyte dataset. Experimental
evaluations demonstrate 0.001 AUC improvement and 60% FLOPs saving over current
state-of-the-art CTR models.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 02:27:38 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 07:29:11 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zhang",
"Tunhou",
""
],
[
"Wen",
"Wei",
""
],
[
"Fedorov",
"Igor",
""
],
[
"Liu",
"Xi",
""
],
[
"Zhang",
"Buyun",
""
],
[
"Han",
"Fangqiu",
""
],
[
"Chen",
"Wen-Yen",
""
],
[
"Han",
"Yiping",
""
],
[
"Yan",
"Feng",
""
],
[
"Li",
"Hai",
""
],
[
"Chen",
"Yiran",
""
]
] | TITLE: DistDNAS: Search Efficient Feature Interactions within 2 Hours
ABSTRACT: Search efficiency and serving efficiency are two major axes in building
feature interactions and expediting the model development process in
recommender systems. On large-scale benchmarks, searching for the optimal
feature interaction design requires extensive cost due to the sequential
workflow on the large volume of data. In addition, fusing interactions of
various sources, orders, and mathematical operations introduces potential
conflicts and additional redundancy toward recommender models, leading to
sub-optimal trade-offs in performance and serving cost. In this paper, we
present DistDNAS as a neat solution to brew swift and efficient feature
interaction design. DistDNAS proposes a supernet to incorporate interaction
modules of varying orders and types as a search space. To optimize search
efficiency, DistDNAS distributes the search and aggregates the choice of
optimal interaction modules on varying data dates, achieving over 25x speed-up
and reducing search cost from 2 days to 2 hours. To optimize serving
efficiency, DistDNAS introduces a differentiable cost-aware loss to penalize
the selection of redundant interaction modules, enhancing the efficiency of
discovered feature interactions in serving. We extensively evaluate the best
models crafted by DistDNAS on a 1TB Criteo Terabyte dataset. Experimental
evaluations demonstrate 0.001 AUC improvement and 60% FLOPs saving over current
state-of-the-art CTR models.
|
2311.07056 | Kai Wang | Kai Wang, Qiguang Jiang, Bailing Wang, Yulei Wu, Hongke Zhang | STATGRAPH: Effective In-vehicle Intrusion Detection via Multi-view
Statistical Graph Learning | 13 pages, 7 figures, 6 tables, 36 references | null | null | null | cs.NI cs.AI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The in-vehicle network (IVN) faces complex external cyber-attacks, especially
emerging masquerade attacks, which are extremely difficult to detect while
causing serious damage. In this paper, we propose STATGRAPH, which
is an effective and fine-grained intrusion detection methodology for IVN
security services via multi-view statistical graph learning on in-vehicle
controller area network (CAN) messages with insight into their variations in
periodicity, payload and signal combinations. Specifically, STATGRAPH generates
two statistical graphs, timing correlation graph (TCG) and coupling
relationship graph (CRG), in every CAN message detection window, where edge
attributes in TCGs represent temporal correlation between different message IDs
while edge attributes in CRGs denote the neighbour relationship and contextual
similarity. Besides, a lightweight shallow layered graph convolution network is
trained based on the graph properties of TCGs and CRGs, which learns the universal
laws of various patterns more effectively and further enhances the detection
performance. To address the problem of insufficient attack types in previous
intrusion detection, we select two real in-vehicle CAN datasets covering five
new instances of sophisticated and stealthy masquerade attacks that have never
been investigated before. Experimental results show that STATGRAPH improves both
detection granularity and detection performance over state-of-the-art intrusion
detection methods. Code is available at
https://github.com/wangkai-tech23/StatGraph.
| [
{
"version": "v1",
"created": "Mon, 13 Nov 2023 03:49:55 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 09:44:10 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Wang",
"Kai",
""
],
[
"Jiang",
"Qiguang",
""
],
[
"Wang",
"Bailing",
""
],
[
"Wu",
"Yulei",
""
],
[
"Zhang",
"Hongke",
""
]
] | TITLE: STATGRAPH: Effective In-vehicle Intrusion Detection via Multi-view
Statistical Graph Learning
ABSTRACT: The in-vehicle network (IVN) faces complex external cyber-attacks, especially
emerging masquerade attacks, which are extremely difficult to detect while
causing serious damage. In this paper, we propose STATGRAPH, which
is an effective and fine-grained intrusion detection methodology for IVN
security services via multi-view statistical graph learning on in-vehicle
controller area network (CAN) messages with insight into their variations in
periodicity, payload and signal combinations. Specifically, STATGRAPH generates
two statistical graphs, timing correlation graph (TCG) and coupling
relationship graph (CRG), in every CAN message detection window, where edge
attributes in TCGs represent temporal correlation between different message IDs
while edge attributes in CRGs denote the neighbour relationship and contextual
similarity. Besides, a lightweight shallow layered graph convolution network is
trained based on the graph properties of TCGs and CRGs, which learns the universal
laws of various patterns more effectively and further enhances the detection
performance. To address the problem of insufficient attack types in previous
intrusion detection, we select two real in-vehicle CAN datasets covering five
new instances of sophisticated and stealthy masquerade attacks that have never
been investigated before. Experimental results show that STATGRAPH improves both
detection granularity and detection performance over state-of-the-art intrusion
detection methods. Code is available at
https://github.com/wangkai-tech23/StatGraph.
|
2312.05114 | Emiliano De Cristofaro | Georgi Ganev and Emiliano De Cristofaro | The Inadequacy of Similarity-based Privacy Metrics: Privacy Attacks
against "Truly Anonymous" Synthetic Datasets | Published in the Proceedings of the 46th IEEE Symposium on Security &
Privacy (IEEE S&P 2025). Please cite the S&P version | null | null | null | cs.CR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative models producing synthetic data are meant to provide a
privacy-friendly approach to releasing data. However, their privacy guarantees
are only considered robust when models satisfy Differential Privacy (DP). Alas,
this is not a ubiquitous standard, as many leading companies (and, in fact,
research papers) use ad-hoc privacy metrics based on testing the statistical
similarity between synthetic and real data.
In this paper, we examine the privacy metrics used in real-world synthetic
data deployments and demonstrate their unreliability in several ways. First, we
provide counter-examples where severe privacy violations occur even if the
privacy tests pass and instantiate accurate membership and attribute inference
attacks with minimal cost. We then introduce ReconSyn, a reconstruction attack
that generates multiple synthetic datasets that are considered private by the
metrics but actually leak information unique to individual records. We show
that ReconSyn recovers 78-100% of the outliers in the train data with only
black-box access to a single fitted generative model and the privacy metrics.
In the process, we show that applying DP only to the model does not mitigate
this attack, as using privacy metrics breaks the end-to-end DP pipeline.
| [
{
"version": "v1",
"created": "Fri, 8 Dec 2023 15:42:28 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Nov 2024 02:42:04 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 22:06:46 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ganev",
"Georgi",
""
],
[
"De Cristofaro",
"Emiliano",
""
]
] | TITLE: The Inadequacy of Similarity-based Privacy Metrics: Privacy Attacks
against "Truly Anonymous" Synthetic Datasets
ABSTRACT: Generative models producing synthetic data are meant to provide a
privacy-friendly approach to releasing data. However, their privacy guarantees
are only considered robust when models satisfy Differential Privacy (DP). Alas,
this is not a ubiquitous standard, as many leading companies (and, in fact,
research papers) use ad-hoc privacy metrics based on testing the statistical
similarity between synthetic and real data.
In this paper, we examine the privacy metrics used in real-world synthetic
data deployments and demonstrate their unreliability in several ways. First, we
provide counter-examples where severe privacy violations occur even if the
privacy tests pass and instantiate accurate membership and attribute inference
attacks with minimal cost. We then introduce ReconSyn, a reconstruction attack
that generates multiple synthetic datasets that are considered private by the
metrics but actually leak information unique to individual records. We show
that ReconSyn recovers 78-100% of the outliers in the train data with only
black-box access to a single fitted generative model and the privacy metrics.
In the process, we show that applying DP only to the model does not mitigate
this attack, as using privacy metrics breaks the end-to-end DP pipeline.
|
2401.09826 | Chen-Bin Feng | Chen-Bin Feng, Qi Lai, Kangdao Liu, Houcheng Su, Chi-Man Vong | Boosting Few-Shot Semantic Segmentation Via Segment Anything Model | null | null | 10.1007/s00371-025-03809-9 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In semantic segmentation, accurate prediction masks are crucial for
downstream tasks such as medical image analysis and image editing. Due to the
lack of annotated data, few-shot semantic segmentation (FSS) performs poorly in
predicting masks with precise contours. Recently, we have noticed that the
large foundation model Segment Anything Model (SAM) performs well in processing
detailed features. Inspired by SAM, we propose FSS-SAM to boost FSS methods by
addressing the issue of inaccurate contours. FSS-SAM is training-free. It
works as a post-processing tool for any FSS method and can improve the
accuracy of predicted masks. Specifically, we use predicted masks from FSS
methods to generate prompts and then use SAM to predict new masks. To avoid
predicting wrong masks with SAM, we propose a prediction result selection (PRS)
algorithm. The algorithm can remarkably decrease wrong predictions. Experimental
results on public datasets show that our method is superior to base FSS methods
in both quantitative and qualitative aspects.
| [
{
"version": "v1",
"created": "Thu, 18 Jan 2024 09:34:40 GMT"
},
{
"version": "v2",
"created": "Sat, 20 Jan 2024 07:56:19 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Feng",
"Chen-Bin",
""
],
[
"Lai",
"Qi",
""
],
[
"Liu",
"Kangdao",
""
],
[
"Su",
"Houcheng",
""
],
[
"Vong",
"Chi-Man",
""
]
] | TITLE: Boosting Few-Shot Semantic Segmentation Via Segment Anything Model
ABSTRACT: In semantic segmentation, accurate prediction masks are crucial for
downstream tasks such as medical image analysis and image editing. Due to the
lack of annotated data, few-shot semantic segmentation (FSS) performs poorly in
predicting masks with precise contours. Recently, we have noticed that the
large foundation model Segment Anything Model (SAM) performs well in processing
detailed features. Inspired by SAM, we propose FSS-SAM to boost FSS methods by
addressing the issue of inaccurate contours. FSS-SAM is training-free. It
works as a post-processing tool for any FSS method and can improve the
accuracy of predicted masks. Specifically, we use predicted masks from FSS
methods to generate prompts and then use SAM to predict new masks. To avoid
predicting wrong masks with SAM, we propose a prediction result selection (PRS)
algorithm. The algorithm can remarkably decrease wrong predictions. Experimental
results on public datasets show that our method is superior to base FSS methods
in both quantitative and qualitative aspects.
|
2402.03896 | Kun Li | Kun Li, George Vosselman, Michael Ying Yang | Convincing Rationales for Visual Question Answering Reasoning | under review | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Question Answering (VQA) is a challenging task of predicting the
answer to a question about the content of an image. It requires deep
understanding of both the textual question and visual image. Prior works
directly evaluate the answering models by simply calculating the accuracy of
the predicted answers. However, the inner reasoning behind the prediction is
disregarded in such a "black box" system, and we do not even know if one can
trust the predictions. In some cases, the models still get the correct answers
even when they focus on irrelevant visual regions or textual tokens, which
makes the models unreliable and illogical. To generate both visual and textual
rationales next to the predicted answer to the given image/question pair, we
propose Multimodal Rationales for VQA, MRVQA. Considering the extra annotations
brought by the new outputs, MRVQA is trained and evaluated by samples converted
from some existing VQA datasets and their visual labels. The extensive
experiments demonstrate that the visual and textual rationales support the
prediction of the answers, and further improve the accuracy. Furthermore, MRVQA
achieves competitive performance on generic VQA datasets in the zero-shot
evaluation setting. The dataset and source code will be released under
https://github.com/lik1996/CRVQA2024.
| [
{
"version": "v1",
"created": "Tue, 6 Feb 2024 11:07:05 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 20:48:53 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Kun",
""
],
[
"Vosselman",
"George",
""
],
[
"Yang",
"Michael Ying",
""
]
] | TITLE: Convincing Rationales for Visual Question Answering Reasoning
ABSTRACT: Visual Question Answering (VQA) is a challenging task of predicting the
answer to a question about the content of an image. It requires deep
understanding of both the textual question and visual image. Prior works
directly evaluate the answering models by simply calculating the accuracy of
the predicted answers. However, the inner reasoning behind the prediction is
disregarded in such a "black box" system, and we do not even know if one can
trust the predictions. In some cases, the models still get the correct answers
even when they focus on irrelevant visual regions or textual tokens, which
makes the models unreliable and illogical. To generate both visual and textual
rationales next to the predicted answer to the given image/question pair, we
propose Multimodal Rationales for VQA, MRVQA. Considering the extra annotations
brought by the new outputs, MRVQA is trained and evaluated by samples converted
from some existing VQA datasets and their visual labels. The extensive
experiments demonstrate that the visual and textual rationales support the
prediction of the answers, and further improve the accuracy. Furthermore, MRVQA
achieves competitive performance on generic VQA datasets in the zero-shot
evaluation setting. The dataset and source code will be released under
https://github.com/lik1996/CRVQA2024.
|
2403.08824 | Puneet Kumar | Puneet Kumar, Alexander Vedernikov, Yuwei Chen, Wenming Zheng and
Xiaobai Li | Computational Analysis of Stress, Depression and Engagement in Mental
Health: A Survey | Under review in IEEE Transactions on Pattern Analysis and Machine
Intelligence | null | null | null | cs.HC cs.AI cs.MM | http://creativecommons.org/licenses/by/4.0/ | Analysis of stress, depression and engagement is less common and more complex
than that of frequently discussed emotions such as happiness, sadness, fear and
anger. The importance of these psychological states has been increasingly
recognized due to their implications for mental health and well-being. Stress
and depression are interrelated and together they impact engagement in daily
tasks, highlighting the need to explore their interplay. This survey is the
first to simultaneously explore computational methods for analyzing stress,
depression and engagement. We present a taxonomy and timeline of the
computational approaches used to analyze them and we discuss the most commonly
used datasets and input modalities, along with the categories and generic
pipeline of these approaches. Subsequently, we describe state-of-the-art
computational approaches, including a performance summary on the most commonly
used datasets. Following this, we explore the applications of stress,
depression and engagement analysis, along with the associated challenges,
limitations and future research directions.
| [
{
"version": "v1",
"created": "Sat, 9 Mar 2024 11:16:09 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 10:14:57 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Kumar",
"Puneet",
""
],
[
"Vedernikov",
"Alexander",
""
],
[
"Chen",
"Yuwei",
""
],
[
"Zheng",
"Wenming",
""
],
[
"Li",
"Xiaobai",
""
]
] | TITLE: Computational Analysis of Stress, Depression and Engagement in Mental
Health: A Survey
ABSTRACT: Analysis of stress, depression and engagement is less common and more complex
than that of frequently discussed emotions such as happiness, sadness, fear and
anger. The importance of these psychological states has been increasingly
recognized due to their implications for mental health and well-being. Stress
and depression are interrelated and together they impact engagement in daily
tasks, highlighting the need to explore their interplay. This survey is the
first to simultaneously explore computational methods for analyzing stress,
depression and engagement. We present a taxonomy and timeline of the
computational approaches used to analyze them and we discuss the most commonly
used datasets and input modalities, along with the categories and generic
pipeline of these approaches. Subsequently, we describe state-of-the-art
computational approaches, including a performance summary on the most commonly
used datasets. Following this, we explore the applications of stress,
depression and engagement analysis, along with the associated challenges,
limitations and future research directions.
|
2403.09281 | Yiming Ma | Yiming Ma, Victor Sanchez, Tanaya Guha | CLIP-EBC: CLIP Can Count Accurately through Enhanced Blockwise
Classification | This is the author's accepted manuscript. The final version is
published in ICME 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose CLIP-EBC, the first fully CLIP-based model for accurate crowd
density estimation. While the CLIP model has demonstrated remarkable success in
addressing recognition tasks such as zero-shot image classification, its
potential for counting has been largely unexplored due to the inherent
challenges in transforming a regression problem, such as counting, into a
recognition task. In this work, we investigate and enhance CLIP's ability to
count, focusing specifically on the task of estimating crowd sizes from images.
Existing classification-based crowd-counting frameworks have significant
limitations, including the quantization of count values into bordering
real-valued bins and the sole focus on classification errors. These practices
result in label ambiguity near the shared borders and inaccurate prediction of
count values. Hence, directly applying CLIP within these frameworks may yield
suboptimal performance.
To address these challenges, we first propose the Enhanced Blockwise
Classification (EBC) framework. Unlike previous methods, EBC utilizes
integer-valued bins, effectively reducing ambiguity near bin boundaries.
Additionally, it incorporates a regression loss based on density maps to
improve the prediction of count values. Within our backbone-agnostic EBC
framework, we then introduce CLIP-EBC to fully leverage CLIP's recognition
capabilities for this task. Extensive experiments demonstrate the effectiveness
of EBC and the competitive performance of CLIP-EBC. Specifically, our EBC
framework can improve existing classification-based methods by up to 44.5% on
the UCF-QNRF dataset, and CLIP-EBC achieves state-of-the-art performance on the
NWPU-Crowd test set, with an MAE of 58.2 and an RMSE of 268.5, representing
improvements of 8.6% and 13.3% over the previous best method, STEERER. The code
and weights are available at https://github.com/Yiming-M/CLIP-EBC.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 11:08:33 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Aug 2024 11:10:24 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 16:47:11 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Ma",
"Yiming",
""
],
[
"Sanchez",
"Victor",
""
],
[
"Guha",
"Tanaya",
""
]
] | TITLE: CLIP-EBC: CLIP Can Count Accurately through Enhanced Blockwise
Classification
ABSTRACT: We propose CLIP-EBC, the first fully CLIP-based model for accurate crowd
density estimation. While the CLIP model has demonstrated remarkable success in
addressing recognition tasks such as zero-shot image classification, its
potential for counting has been largely unexplored due to the inherent
challenges in transforming a regression problem, such as counting, into a
recognition task. In this work, we investigate and enhance CLIP's ability to
count, focusing specifically on the task of estimating crowd sizes from images.
Existing classification-based crowd-counting frameworks have significant
limitations, including the quantization of count values into bordering
real-valued bins and the sole focus on classification errors. These practices
result in label ambiguity near the shared borders and inaccurate prediction of
count values. Hence, directly applying CLIP within these frameworks may yield
suboptimal performance.
To address these challenges, we first propose the Enhanced Blockwise
Classification (EBC) framework. Unlike previous methods, EBC utilizes
integer-valued bins, effectively reducing ambiguity near bin boundaries.
Additionally, it incorporates a regression loss based on density maps to
improve the prediction of count values. Within our backbone-agnostic EBC
framework, we then introduce CLIP-EBC to fully leverage CLIP's recognition
capabilities for this task. Extensive experiments demonstrate the effectiveness
of EBC and the competitive performance of CLIP-EBC. Specifically, our EBC
framework can improve existing classification-based methods by up to 44.5% on
the UCF-QNRF dataset, and CLIP-EBC achieves state-of-the-art performance on the
NWPU-Crowd test set, with an MAE of 58.2 and an RMSE of 268.5, representing
improvements of 8.6% and 13.3% over the previous best method, STEERER. The code
and weights are available at https://github.com/Yiming-M/CLIP-EBC.
|
2404.11960 | Fang Guo | Fang Guo, Wenyu Li, Honglei Zhuang, Yun Luo, Yafu Li, Le Yan, Qi Zhu,
Yue Zhang | MCRanker: Generating Diverse Criteria On-the-Fly to Improve Point-wise
LLM Rankers | null | WSDM 2025: Proceedings of the Eighteenth ACM International
Conference on Web Search and Data Mining | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The most recent pointwise Large Language Model (LLM) rankers have achieved
remarkable ranking results. However, these rankers are hindered by two major
drawbacks: (1) they fail to follow a standardized comparison guidance during
the ranking process, and (2) they struggle with comprehensive considerations
when dealing with complicated passages. To address these shortcomings, we
propose to build a ranker that generates ranking scores based on a set of
criteria from various perspectives. These criteria are intended to direct each
perspective in providing a distinct yet synergistic evaluation. Our research,
which examines eight datasets from the BEIR benchmark, demonstrates that
incorporating this multi-perspective criteria ensemble approach markedly
enhances the performance of pointwise LLM rankers.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 07:42:46 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Jun 2024 14:09:22 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 06:08:47 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Guo",
"Fang",
""
],
[
"Li",
"Wenyu",
""
],
[
"Zhuang",
"Honglei",
""
],
[
"Luo",
"Yun",
""
],
[
"Li",
"Yafu",
""
],
[
"Yan",
"Le",
""
],
[
"Zhu",
"Qi",
""
],
[
"Zhang",
"Yue",
""
]
] | TITLE: MCRanker: Generating Diverse Criteria On-the-Fly to Improve Point-wise
LLM Rankers
ABSTRACT: The most recent pointwise Large Language Model (LLM) rankers have achieved
remarkable ranking results. However, these rankers are hindered by two major
drawbacks: (1) they fail to follow a standardized comparison guidance during
the ranking process, and (2) they struggle with comprehensive considerations
when dealing with complicated passages. To address these shortcomings, we
propose to build a ranker that generates ranking scores based on a set of
criteria from various perspectives. These criteria are intended to direct each
perspective in providing a distinct yet synergistic evaluation. Our research,
which examines eight datasets from the BEIR benchmark, demonstrates that
incorporating this multi-perspective criteria ensemble approach markedly
enhances the performance of pointwise LLM rankers.
|
2404.14109 | Xin Zhou | Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu | CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a simple yet effective contrastive knowledge
distillation framework that achieves sample-wise logit alignment while
preserving semantic consistency. Conventional knowledge distillation approaches
exhibit over-reliance on feature similarity per sample, which risks
overfitting, and contrastive approaches focus on inter-class discrimination at
the expense of intra-sample semantic relationships. Our approach transfers
"dark knowledge" through teacher-student contrastive alignment at the sample
level. Specifically, our method first enforces intra-sample alignment by
directly minimizing teacher-student logit discrepancies within individual
samples. Then, we utilize inter-sample contrasts to preserve semantic
dissimilarities across samples. By redefining positive pairs as aligned
teacher-student logits from identical samples and negative pairs as
cross-sample logit combinations, we reformulate these dual constraints into an
InfoNCE loss framework, reducing computational complexity to below the square of
the number of samples while eliminating dependencies on temperature parameters and large
batch sizes. We conduct comprehensive experiments across three benchmark
datasets, including the CIFAR-100, ImageNet-1K, and MS COCO datasets, and
experimental results clearly confirm the effectiveness of the proposed method
on image classification, object detection, and instance segmentation tasks.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2024 11:52:40 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 06:36:10 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zhu",
"Wencheng",
""
],
[
"Zhou",
"Xin",
""
],
[
"Zhu",
"Pengfei",
""
],
[
"Wang",
"Yu",
""
],
[
"Hu",
"Qinghua",
""
]
] | TITLE: CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
ABSTRACT: In this paper, we propose a simple yet effective contrastive knowledge
distillation framework that achieves sample-wise logit alignment while
preserving semantic consistency. Conventional knowledge distillation approaches
exhibit over-reliance on feature similarity per sample, which risks
overfitting, and contrastive approaches focus on inter-class discrimination at
the expense of intra-sample semantic relationships. Our approach transfers
"dark knowledge" through teacher-student contrastive alignment at the sample
level. Specifically, our method first enforces intra-sample alignment by
directly minimizing teacher-student logit discrepancies within individual
samples. Then, we utilize inter-sample contrasts to preserve semantic
dissimilarities across samples. By redefining positive pairs as aligned
teacher-student logits from identical samples and negative pairs as
cross-sample logit combinations, we reformulate these dual constraints into an
InfoNCE loss framework, reducing computational complexity to below the square of
the number of samples while eliminating dependencies on temperature parameters and large
batch sizes. We conduct comprehensive experiments across three benchmark
datasets, including the CIFAR-100, ImageNet-1K, and MS COCO datasets, and
experimental results clearly confirm the effectiveness of the proposed method
on image classification, object detection, and instance segmentation tasks.
|
2405.06261 | Arvind Rameshwar | V. Arvind Rameshwar and Anshoo Tandon | On Improving the Composition Privacy Loss in Differential Privacy for
Fixed Estimation Error | 45 pages, 8 figures, submitted to the IEEE after major edits | null | null | null | cs.CR cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | This paper considers the private release of statistics of disjoint subsets of
a dataset, in the setting of data heterogeneity, where users could contribute
more than one sample, with different users contributing potentially different
numbers of samples. In particular, we focus on the $\epsilon$-differentially
private release of sample means and variances of sample values in disjoint
subsets of a dataset, under the assumption that the numbers of contributions of
each user in each subset are publicly known. Our main contribution is an
iterative algorithm, based on suppressing user contributions, which seeks to
reduce the overall privacy loss degradation under a canonical Laplace
mechanism, while not increasing the worst estimation error among the subsets.
Important components of this analysis are our exact, analytical
characterizations of the sensitivities and the worst-case bias errors of
estimators of the sample mean and variance, which are obtained by clipping or
suppressing user contributions. We test the performance of our algorithm on
real-world and synthetic datasets and demonstrate clear improvements in the
privacy loss degradation, for fixed worst-case estimation error.
| [
{
"version": "v1",
"created": "Fri, 10 May 2024 06:24:35 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Aug 2024 08:11:47 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Aug 2024 06:35:30 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 06:08:30 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Rameshwar",
"V. Arvind",
""
],
[
"Tandon",
"Anshoo",
""
]
] | TITLE: On Improving the Composition Privacy Loss in Differential Privacy for
Fixed Estimation Error
ABSTRACT: This paper considers the private release of statistics of disjoint subsets of
a dataset, in the setting of data heterogeneity, where users could contribute
more than one sample, with different users contributing potentially different
numbers of samples. In particular, we focus on the $\epsilon$-differentially
private release of sample means and variances of sample values in disjoint
subsets of a dataset, under the assumption that the numbers of contributions of
each user in each subset are publicly known. Our main contribution is an
iterative algorithm, based on suppressing user contributions, which seeks to
reduce the overall privacy loss degradation under a canonical Laplace
mechanism, while not increasing the worst estimation error among the subsets.
Important components of this analysis are our exact, analytical
characterizations of the sensitivities and the worst-case bias errors of
estimators of the sample mean and variance, which are obtained by clipping or
suppressing user contributions. We test the performance of our algorithm on
real-world and synthetic datasets and demonstrate clear improvements in the
privacy loss degradation, for fixed worst-case estimation error.
|
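The abstract above builds on the canonical Laplace mechanism with clipped or suppressed user contributions. The sketch below is a hedged illustration of that building block only (not the paper's iterative algorithm): it assumes neighboring datasets differ in one user's values while the publicly known contribution counts stay fixed, and all names are invented.

```python
import numpy as np

def private_mean_with_suppression(user_samples, m, value_range, epsilon, rng=None):
    """Release an epsilon-DP mean when users contribute different numbers of samples.
    Each user is clipped to the value range and suppressed to at most m contributions,
    which bounds the sensitivity of the mean when contribution counts (and hence n)
    are public and fixed across neighboring datasets."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = value_range
    kept = [np.clip(np.asarray(s), lo, hi)[:m] for s in user_samples]
    values = np.concatenate(kept)
    n = len(values)
    sensitivity = m * (hi - lo) / n    # one user can move at most m values by at most hi - lo
    return values.mean() + rng.laplace(scale=sensitivity / epsilon)

# toy usage: three users with heterogeneous contribution counts
users = [[0.2, 0.9, 0.4], [0.5], [0.7, 0.1]]
print(private_mean_with_suppression(users, m=2, value_range=(0.0, 1.0), epsilon=1.0))
```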
2405.08295 | Saket Dingliwal | Nilaksh Das, Saket Dingliwal, Srikanth Ronanki, Rohit Paturi,
Zhaocheng Huang, Prashant Mathur, Jie Yuan, Dhanush Bekal, Xing Niu, Sai
Muralidhar Jayanthi, Xilai Li, Karel Mundnich, Monica Sunkara, Sravan
Bodapati, Sundararajan Srinivasan, Kyu J Han, Katrin Kirchhoff | SpeechVerse: A Large-scale Generalizable Audio Language Model | Single Column, 13 page | null | null | null | cs.CL cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have shown incredible proficiency in performing
tasks that require semantic understanding of natural language instructions.
Recently, many works have further expanded this capability to perceive
multimodal audio and text inputs, but their capabilities are often limited to
specific fine-tuned tasks such as automatic speech recognition and translation.
We therefore develop SpeechVerse, a robust multi-task training and curriculum
learning framework that combines pre-trained speech and text foundation models
via a small set of learnable parameters, while keeping the pre-trained models
frozen during training. The models are instruction finetuned using continuous
latent representations extracted from the speech foundation model to achieve
optimal zero-shot performance on a diverse range of speech processing tasks
using natural language instructions. We perform extensive benchmarking that
includes comparing our model performance against traditional baselines across
several datasets and tasks. Furthermore, we evaluate the model's capability for
generalized instruction following by testing on out-of-domain datasets, novel
prompts, and unseen tasks. Our empirical experiments reveal that our multi-task
SpeechVerse model is even superior to conventional task-specific baselines on 9
out of the 11 tasks.
| [
{
"version": "v1",
"created": "Tue, 14 May 2024 03:33:31 GMT"
},
{
"version": "v2",
"created": "Fri, 31 May 2024 17:47:40 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 21:06:53 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Das",
"Nilaksh",
""
],
[
"Dingliwal",
"Saket",
""
],
[
"Ronanki",
"Srikanth",
""
],
[
"Paturi",
"Rohit",
""
],
[
"Huang",
"Zhaocheng",
""
],
[
"Mathur",
"Prashant",
""
],
[
"Yuan",
"Jie",
""
],
[
"Bekal",
"Dhanush",
""
],
[
"Niu",
"Xing",
""
],
[
"Jayanthi",
"Sai Muralidhar",
""
],
[
"Li",
"Xilai",
""
],
[
"Mundnich",
"Karel",
""
],
[
"Sunkara",
"Monica",
""
],
[
"Bodapati",
"Sravan",
""
],
[
"Srinivasan",
"Sundararajan",
""
],
[
"Han",
"Kyu J",
""
],
[
"Kirchhoff",
"Katrin",
""
]
] | TITLE: SpeechVerse: A Large-scale Generalizable Audio Language Model
ABSTRACT: Large language models (LLMs) have shown incredible proficiency in performing
tasks that require semantic understanding of natural language instructions.
Recently, many works have further expanded this capability to perceive
multimodal audio and text inputs, but their capabilities are often limited to
specific fine-tuned tasks such as automatic speech recognition and translation.
We therefore develop SpeechVerse, a robust multi-task training and curriculum
learning framework that combines pre-trained speech and text foundation models
via a small set of learnable parameters, while keeping the pre-trained models
frozen during training. The models are instruction finetuned using continuous
latent representations extracted from the speech foundation model to achieve
optimal zero-shot performance on a diverse range of speech processing tasks
using natural language instructions. We perform extensive benchmarking that
includes comparing our model performance against traditional baselines across
several datasets and tasks. Furthermore, we evaluate the model's capability for
generalized instruction following by testing on out-of-domain datasets, novel
prompts, and unseen tasks. Our empirical experiments reveal that our multi-task
SpeechVerse model is even superior to conventional task-specific baselines on 9
out of the 11 tasks.
|
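The SpeechVerse abstract describes the general recipe of bridging frozen speech and text foundation models with a small set of learnable parameters. The sketch below illustrates that freezing-plus-connector pattern with stand-in modules; the backbones, dimensions, and connector design are all assumptions, not the actual SpeechVerse components.

```python
import torch
import torch.nn as nn

class FrozenBackbonesWithConnector(nn.Module):
    """Freeze the pre-trained speech and text models and train only a small connector,
    mirroring the recipe in the abstract. Both backbones here are placeholders."""
    def __init__(self, speech_dim=512, text_dim=768):
        super().__init__()
        self.speech_encoder = nn.GRU(80, speech_dim, batch_first=True)       # placeholder
        layer = nn.TransformerEncoderLayer(d_model=text_dim, nhead=8, batch_first=True)
        self.text_model = nn.TransformerEncoder(layer, num_layers=2)         # placeholder
        for p in list(self.speech_encoder.parameters()) + list(self.text_model.parameters()):
            p.requires_grad = False                                          # keep backbones frozen
        self.connector = nn.Linear(speech_dim, text_dim)                     # only trainable part

    def forward(self, speech_features):
        latents, _ = self.speech_encoder(speech_features)   # continuous latent representations
        return self.text_model(self.connector(latents))

model = FrozenBackbonesWithConnector()
out = model(torch.randn(2, 50, 80))                         # (batch, frames, filterbank dims)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(out.shape, "trainable parameters:", trainable)
```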
2405.11968 | Singh Akansha | S. Akansha | Conditional Shift-Robust Conformal Prediction for Graph Neural Network | 15 pages, 3 figures, 4 tables | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) have emerged as potent tools for predicting
outcomes in graph-structured data. Despite their efficacy, a significant
drawback of GNNs lies in their limited ability to provide robust uncertainty
estimates, posing challenges to their reliability in contexts where errors
carry significant consequences. Moreover, GNNs typically excel in
in-distribution settings, assuming that training and test data follow identical
distributions, a condition often unmet in real-world graph data scenarios. In
this article, we leverage conformal prediction, a widely recognized statistical
technique for quantifying uncertainty by transforming predictive model outputs
into prediction sets, to address uncertainty quantification in GNN predictions
amidst conditional shift\footnote{Representing the change in conditional
probability distribution \(P(label|input)\) from source domain to target
domain.} in graph-based semi-supervised learning (SSL). Additionally, we
propose a novel loss function aimed at refining model predictions by minimizing
conditional shift in latent stages. Termed Conditional Shift Robust (CondSR)
conformal prediction for GNNs, our approach CondSR is model-agnostic and
adaptable to various classification models. We validate the effectiveness of
our method on standard graph benchmark datasets, integrating it with
state-of-the-art GNNs in node classification tasks. Comprehensive evaluations
demonstrate that our approach consistently achieves any predefined target
marginal coverage, enhances the accuracy of state of the art GNN models by up
to 12\% under conditional shift, and reduces the prediction set size by up to
48\%. The code implementation is publicly available for further exploration and
experimentation.
| [
{
"version": "v1",
"created": "Mon, 20 May 2024 11:47:31 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jun 2024 18:17:51 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 08:27:10 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Akansha",
"S.",
""
]
] | TITLE: Conditional Shift-Robust Conformal Prediction for Graph Neural Network
ABSTRACT: Graph Neural Networks (GNNs) have emerged as potent tools for predicting
outcomes in graph-structured data. Despite their efficacy, a significant
drawback of GNNs lies in their limited ability to provide robust uncertainty
estimates, posing challenges to their reliability in contexts where errors
carry significant consequences. Moreover, GNNs typically excel in
in-distribution settings, assuming that training and test data follow identical
distributions, a condition often unmet in real-world graph data scenarios. In
this article, we leverage conformal prediction, a widely recognized statistical
technique for quantifying uncertainty by transforming predictive model outputs
into prediction sets, to address uncertainty quantification in GNN predictions
amidst conditional shift\footnote{Representing the change in conditional
probability distribution \(P(label|input)\) from source domain to target
domain.} in graph-based semi-supervised learning (SSL). Additionally, we
propose a novel loss function aimed at refining model predictions by minimizing
conditional shift in latent stages. Termed Conditional Shift Robust (CondSR)
conformal prediction for GNNs, our approach CondSR is model-agnostic and
adaptable to various classification models. We validate the effectiveness of
our method on standard graph benchmark datasets, integrating it with
state-of-the-art GNNs in node classification tasks. Comprehensive evaluations
demonstrate that our approach consistently achieves any predefined target
marginal coverage, enhances the accuracy of state of the art GNN models by up
to 12\% under conditional shift, and reduces the prediction set size by up to
48\%. The code implementation is publicly available for further exploration and
experimentation.
|
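For readers unfamiliar with the underlying tool, the sketch below shows plain split conformal prediction for classification, which turns model probabilities into prediction sets with target marginal coverage 1 - alpha. It illustrates only the standard procedure the abstract builds on, not the CondSR loss or its handling of conditional shift.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification: calibrate a quantile of the
    nonconformity score 1 - p(true class), then keep every test class whose score
    falls below that quantile. Marginal coverage is approximately 1 - alpha."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)    # finite-sample correction
    qhat = np.quantile(scores, level, method="higher")
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

# toy usage with random "softmax" outputs over 5 classes
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)
cal_labels = rng.integers(0, 5, size=200)
test_probs = rng.dirichlet(np.ones(5), size=3)
print(split_conformal_sets(cal_probs, cal_labels, test_probs))
```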
2405.14119 | Chongwei Liu | Chongwei Liu, Haojie Li, Zhihui Wang, Rui Xu | Is a Pure Transformer Effective for Separated and Online Multi-Object
Tracking? | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advances in Multi-Object Tracking (MOT) have demonstrated significant
success in short-term association within the separated tracking-by-detection
online paradigm. However, long-term tracking remains challenging. While
graph-based approaches address this by modeling trajectories as global graphs,
these methods are unsuitable for real-time applications due to their non-online
nature. In this paper, we review the concept of trajectory graphs and propose a
novel perspective by representing them as directed acyclic graphs. This
representation can be described using frame-ordered object sequences and binary
adjacency matrices. We observe that this structure naturally aligns with
Transformer attention mechanisms, enabling us to model the association problem
using a classic Transformer architecture. Based on this insight, we introduce a
concise Pure Transformer (PuTR) to validate the effectiveness of Transformer in
unifying short- and long-term tracking for separated online MOT. Extensive
experiments on four diverse datasets (SportsMOT, DanceTrack, MOT17, and MOT20)
demonstrate that PuTR effectively establishes a solid baseline compared to
existing foundational online methods while exhibiting superior domain
adaptation capabilities. Furthermore, the separated nature enables efficient
training and inference, making it suitable for practical applications.
Implementation code and trained models are available at
https://github.com/chongweiliu/PuTR .
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 02:44:46 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 06:46:45 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Liu",
"Chongwei",
""
],
[
"Li",
"Haojie",
""
],
[
"Wang",
"Zhihui",
""
],
[
"Xu",
"Rui",
""
]
] | TITLE: Is a Pure Transformer Effective for Separated and Online Multi-Object
Tracking?
ABSTRACT: Recent advances in Multi-Object Tracking (MOT) have demonstrated significant
success in short-term association within the separated tracking-by-detection
online paradigm. However, long-term tracking remains challenging. While
graph-based approaches address this by modeling trajectories as global graphs,
these methods are unsuitable for real-time applications due to their non-online
nature. In this paper, we review the concept of trajectory graphs and propose a
novel perspective by representing them as directed acyclic graphs. This
representation can be described using frame-ordered object sequences and binary
adjacency matrices. We observe that this structure naturally aligns with
Transformer attention mechanisms, enabling us to model the association problem
using a classic Transformer architecture. Based on this insight, we introduce a
concise Pure Transformer (PuTR) to validate the effectiveness of Transformer in
unifying short- and long-term tracking for separated online MOT. Extensive
experiments on four diverse datasets (SportsMOT, DanceTrack, MOT17, and MOT20)
demonstrate that PuTR effectively establishes a solid baseline compared to
existing foundational online methods while exhibiting superior domain
adaptation capabilities. Furthermore, the separated nature enables efficient
training and inference, making it suitable for practical applications.
Implementation code and trained models are available at
https://github.com/chongweiliu/PuTR .
|
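The PuTR abstract represents trajectories as a directed acyclic graph described by a frame-ordered object sequence and a binary adjacency matrix. The sketch below builds such an adjacency matrix for a toy detection sequence; it only illustrates the data structure (the result is lower-triangular, like a causal attention mask), and the linking convention used here is an assumption rather than the paper's exact definition.

```python
import numpy as np

def trajectory_dag_adjacency(frame_ids, track_ids):
    """Binary adjacency matrix of a trajectory DAG over a frame-ordered detection
    sequence: A[i, j] = 1 if detection j is the most recent earlier-frame detection
    of the same track as detection i."""
    n = len(frame_ids)
    A = np.zeros((n, n), dtype=np.int64)
    for i in range(n):
        candidates = [j for j in range(n)
                      if track_ids[j] == track_ids[i] and frame_ids[j] < frame_ids[i]]
        if candidates:
            A[i, max(candidates, key=lambda j: frame_ids[j])] = 1   # link to latest predecessor
    return A

# toy example: two tracks observed over two frames (detections ordered by frame)
frames = [0, 0, 1, 1]
tracks = [7, 9, 9, 7]
print(trajectory_dag_adjacency(frames, tracks))
```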
2405.14517 | Songze Li | Songze Li, Ruoxi Cheng, Xiaojun Jia | TUNI: A Textual Unimodal Detector for Identity Inference in CLIP Models | null | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The widespread usage of large-scale multimodal models like CLIP has
heightened concerns about the leakage of PII. Existing methods for identity
inference in CLIP models require querying the model with full PII, including
textual descriptions of the person and corresponding images (e.g., the name and
the face photo of the person). However, applying images may risk exposing
personal information to target models, as the image might not have been
previously encountered by the target model. Additionally, previous MIAs train
shadow models to mimic the behaviors of the target model, which incurs high
computational costs, especially for large CLIP models. To address these
challenges, we propose a textual unimodal detector (TUNI) in CLIP models, a
novel technique for identity inference that: 1) only utilizes text data to
query the target model; and 2) eliminates the need for training shadow models.
Extensive experiments of TUNI across various CLIP model architectures and
datasets demonstrate its superior performance over baselines, albeit with only
text data.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 12:54:25 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 01:47:37 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Li",
"Songze",
""
],
[
"Cheng",
"Ruoxi",
""
],
[
"Jia",
"Xiaojun",
""
]
] | TITLE: TUNI: A Textual Unimodal Detector for Identity Inference in CLIP Models
ABSTRACT: The widespread usage of large-scale multimodal models like CLIP has
heightened concerns about the leakage of PII. Existing methods for identity
inference in CLIP models require querying the model with full PII, including
textual descriptions of the person and corresponding images (e.g., the name and
the face photo of the person). However, applying images may risk exposing
personal information to target models, as the image might not have been
previously encountered by the target model. Additionally, previous MIAs train
shadow models to mimic the behaviors of the target model, which incurs high
computational costs, especially for large CLIP models. To address these
challenges, we propose a textual unimodal detector (TUNI) in CLIP models, a
novel technique for identity inference that: 1) only utilizes text data to
query the target model; and 2) eliminates the need for training shadow models.
Extensive experiments of TUNI across various CLIP model architectures and
datasets demonstrate its superior performance over baselines, albeit with only
text data.
|
2405.17403 | Mingjia Shi | Kai Wang, Mingjia Shi, Yukun Zhou, Zekai Li, Zhihang Yuan, Yuzhang
Shang, Xiaojiang Peng, Hanwang Zhang and Yang You | A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion
Model Training | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training diffusion models is always a computation-intensive task. In this
paper, we introduce a novel speed-up method for diffusion model training,
called SpeeD, which is based on a closer look at time steps. Our key findings are: i)
Time steps can be empirically divided into acceleration, deceleration, and
convergence areas based on the process increment. ii) These time steps are
imbalanced, with many concentrated in the convergence area. iii) The
concentrated steps provide limited benefits for diffusion training. To address
this, we design an asymmetric sampling strategy that reduces the frequency of
steps from the convergence area while increasing the sampling probability for
steps from other areas. Additionally, we propose a weighting strategy to
emphasize the importance of time steps with rapid-change process increments. As
a plug-and-play and architecture-agnostic approach, SpeeD consistently achieves
3-times acceleration across various diffusion architectures, datasets, and
tasks. Notably, due to its simple design, our approach significantly reduces
the cost of diffusion model training with minimal overhead. Our research
enables more researchers to train diffusion models at a lower cost.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 17:51:36 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Oct 2024 13:40:00 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 08:38:28 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Wang",
"Kai",
""
],
[
"Shi",
"Mingjia",
""
],
[
"Zhou",
"Yukun",
""
],
[
"Li",
"Zekai",
""
],
[
"Yuan",
"Zhihang",
""
],
[
"Shang",
"Yuzhang",
""
],
[
"Peng",
"Xiaojiang",
""
],
[
"Zhang",
"Hanwang",
""
],
[
"You",
"Yang",
""
]
] | TITLE: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion
Model Training
ABSTRACT: Training diffusion models is always a computation-intensive task. In this
paper, we introduce a novel speed-up method for diffusion model training,
called SpeeD, which is based on a closer look at time steps. Our key findings are: i)
Time steps can be empirically divided into acceleration, deceleration, and
convergence areas based on the process increment. ii) These time steps are
imbalanced, with many concentrated in the convergence area. iii) The
concentrated steps provide limited benefits for diffusion training. To address
this, we design an asymmetric sampling strategy that reduces the frequency of
steps from the convergence area while increasing the sampling probability for
steps from other areas. Additionally, we propose a weighting strategy to
emphasize the importance of time steps with rapid-change process increments. As
a plug-and-play and architecture-agnostic approach, SpeeD consistently achieves
3-times acceleration across various diffusion architectures, datasets, and
tasks. Notably, due to its simple design, our approach significantly reduces
the cost of diffusion model training with minimal overhead. Our research
enables more researchers to train diffusion models at a lower cost.
|
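The abstract above samples diffusion time steps asymmetrically, suppressing the over-represented convergence area. The sketch below shows one way such a sampler could look; the area boundary and suppression factor are invented, and the paper's additional step-weighting strategy is not reproduced.

```python
import torch

def asymmetric_timestep_sampler(batch_size, num_steps=1000, convergence_start=700,
                                suppress_factor=0.2):
    """Draw diffusion time steps non-uniformly: steps in the assumed convergence area
    (here, t >= convergence_start) are sampled less often, and the distribution is
    renormalized. The boundary and factor are illustrative choices only."""
    weights = torch.ones(num_steps)
    weights[convergence_start:] *= suppress_factor
    probs = weights / weights.sum()
    return torch.multinomial(probs, batch_size, replacement=True)

print(asymmetric_timestep_sampler(16))
```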
2405.18710 | Joonhyung Lee | Joonhyung Lee, Jeongin Bae, Byeongwook Kim, Se Jung Kwon, Dongsoo Lee | To FP8 and Back Again: Quantifying Reduced Precision Effects on LLM
Training Stability | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The massive computational costs associated with large language model (LLM)
pretraining have spurred great interest in reduced-precision floating-point
representations to accelerate the process. As a result, the BrainFloat16 (BF16)
precision has become the de facto standard for LLM training, with hardware
support included in recent generations of accelerators. This trend has gone
even further in the latest processors, where FP8 has recently been introduced.
However, prior experience with FP16, which was found to be less stable than
BF16, raises concerns as to whether FP8, with even fewer bits than FP16, can be
a cost-effective option for LLM training. We argue that reduced-precision
training schemes must have similar training stability and hyperparameter
sensitivities to their higher-precision counterparts in order to be
cost-effective. However, we find that currently available methods for FP8
training are not robust enough to allow their use as economical replacements.
This prompts us to investigate the stability of reduced-precision LLM training
in terms of robustness across random seeds, learning rates, and datasets. To
this end, we propose new evaluation techniques and a new metric for quantifying
loss landscape sharpness in autoregressive language models. By simulating
incremental bit reductions in floating-point representations, we analyze the
relationship between representational power and training stability with the
intent of aiding future research into the field.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 02:42:23 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 11:11:03 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Lee",
"Joonhyung",
""
],
[
"Bae",
"Jeongin",
""
],
[
"Kim",
"Byeongwook",
""
],
[
"Kwon",
"Se Jung",
""
],
[
"Lee",
"Dongsoo",
""
]
] | TITLE: To FP8 and Back Again: Quantifying Reduced Precision Effects on LLM
Training Stability
ABSTRACT: The massive computational costs associated with large language model (LLM)
pretraining have spurred great interest in reduced-precision floating-point
representations to accelerate the process. As a result, the BrainFloat16 (BF16)
precision has become the de facto standard for LLM training, with hardware
support included in recent generations of accelerators. This trend has gone
even further in the latest processors, where FP8 has recently been introduced.
However, prior experience with FP16, which was found to be less stable than
BF16, raises concerns as to whether FP8, with even fewer bits than FP16, can be
a cost-effective option for LLM training. We argue that reduced-precision
training schemes must have similar training stability and hyperparameter
sensitivities to their higher-precision counterparts in order to be
cost-effective. However, we find that currently available methods for FP8
training are not robust enough to allow their use as economical replacements.
This prompts us to investigate the stability of reduced-precision LLM training
in terms of robustness across random seeds, learning rates, and datasets. To
this end, we propose new evaluation techniques and a new metric for quantifying
loss landscape sharpness in autoregressive language models. By simulating
incremental bit reductions in floating-point representations, we analyze the
relationship between representational power and training stability with the
intent of aiding future research into the field.
|
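The abstract above studies training stability by simulating incremental bit reductions in floating-point representations. The sketch below is a rough, assumed emulation of that idea: it rounds a tensor to a chosen number of mantissa bits and is not the paper's exact procedure or a faithful FP8 (or any IEEE) format.

```python
import torch

def simulate_mantissa_bits(x, mantissa_bits):
    """Round a float tensor to a reduced number of mantissa bits, a crude way to emulate
    lower-precision representations for stability experiments. Zero entries are left
    untouched."""
    exponent = torch.floor(torch.log2(x.abs().clamp_min(1e-30)))
    scale = 2.0 ** (exponent - mantissa_bits)
    return torch.round(x / scale) * scale * (x != 0)

x = torch.randn(5)
for bits in (10, 7, 3):
    print(bits, simulate_mantissa_bits(x, bits))
```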
2406.01591 | Yu-Lun Liu | Chun-Hung Wu, Shih-Hong Chen, Chih-Yao Hu, Hsin-Yu Wu, Kai-Hsin Chen,
Yu-You Chen, Chih-Hai Su, Chih-Kuo Lee, Yu-Lun Liu | DeNVeR: Deformable Neural Vessel Representations for Unsupervised Video
Vessel Segmentation | Paper accepted to CVPR 2025. Project page:
https://kirito878.github.io/DeNVeR/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper presents Deformable Neural Vessel Representations (DeNVeR), an
unsupervised approach for vessel segmentation in X-ray angiography videos
without annotated ground truth. DeNVeR utilizes optical flow and layer
separation techniques, enhancing segmentation accuracy and adaptability through
test-time training. Key contributions include a novel layer separation
bootstrapping technique, a parallel vessel motion loss, and the integration of
Eulerian motion fields for modeling complex vessel dynamics. A significant
component of this research is the introduction of the XACV dataset, the first
X-ray angiography coronary video dataset with high-quality, manually labeled
segmentation ground truth. Extensive evaluations on both XACV and CADICA
datasets demonstrate that DeNVeR outperforms current state-of-the-art methods
in vessel segmentation accuracy and generalization capability while maintaining
temporal coherency.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 17:59:34 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Oct 2024 14:36:11 GMT"
},
{
"version": "v3",
"created": "Sat, 7 Dec 2024 18:27:23 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 15:52:48 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Wu",
"Chun-Hung",
""
],
[
"Chen",
"Shih-Hong",
""
],
[
"Hu",
"Chih-Yao",
""
],
[
"Wu",
"Hsin-Yu",
""
],
[
"Chen",
"Kai-Hsin",
""
],
[
"Chen",
"Yu-You",
""
],
[
"Su",
"Chih-Hai",
""
],
[
"Lee",
"Chih-Kuo",
""
],
[
"Liu",
"Yu-Lun",
""
]
] | TITLE: DeNVeR: Deformable Neural Vessel Representations for Unsupervised Video
Vessel Segmentation
ABSTRACT: This paper presents Deformable Neural Vessel Representations (DeNVeR), an
unsupervised approach for vessel segmentation in X-ray angiography videos
without annotated ground truth. DeNVeR utilizes optical flow and layer
separation techniques, enhancing segmentation accuracy and adaptability through
test-time training. Key contributions include a novel layer separation
bootstrapping technique, a parallel vessel motion loss, and the integration of
Eulerian motion fields for modeling complex vessel dynamics. A significant
component of this research is the introduction of the XACV dataset, the first
X-ray angiography coronary video dataset with high-quality, manually labeled
segmentation ground truth. Extensive evaluations on both XACV and CADICA
datasets demonstrate that DeNVeR outperforms current state-of-the-art methods
in vessel segmentation accuracy and generalization capability while maintaining
temporal coherency.
|
2406.01821 | Hormoz Shahrzad | Hormoz Shahrzad and Risto Miikkulainen | GPU-Accelerated Rule Evaluation and Evolution | null | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces an innovative approach to boost the efficiency and
scalability of Evolutionary Rule-based machine Learning (ERL), a key technique
in explainable AI. While traditional ERL systems can distribute processes
across multiple CPUs, fitness evaluation of candidate rules is a bottleneck,
especially with large datasets. The method proposed in this paper, AERL
(Accelerated ERL), solves this problem in two ways. First, by adopting
GPU-optimized rule sets through a tensorized representation within the PyTorch
framework, AERL mitigates the bottleneck and accelerates fitness evaluation
significantly. Second, AERL takes further advantage of the GPUs by fine-tuning
the rule coefficients via back-propagation, thereby improving search space
exploration. Experimental evidence confirms that AERL search is faster and more
effective, thus empowering explainable artificial intelligence.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 22:24:12 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 18:20:23 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Shahrzad",
"Hormoz",
""
],
[
"Miikkulainen",
"Risto",
""
]
] | TITLE: GPU-Accelerated Rule Evaluation and Evolution
ABSTRACT: This paper introduces an innovative approach to boost the efficiency and
scalability of Evolutionary Rule-based machine Learning (ERL), a key technique
in explainable AI. While traditional ERL systems can distribute processes
across multiple CPUs, fitness evaluation of candidate rules is a bottleneck,
especially with large datasets. The method proposed in this paper, AERL
(Accelerated ERL), solves this problem in two ways. First, by adopting
GPU-optimized rule sets through a tensorized representation within the PyTorch
framework, AERL mitigates the bottleneck and accelerates fitness evaluation
significantly. Second, AERL takes further advantage of the GPUs by fine-tuning
the rule coefficients via back-propagation, thereby improving search space
exploration. Experimental evidence confirms that AERL search is faster and more
effective, thus empowering explainable artificial intelligence.
|
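AERL's key move is evaluating rule sets as tensor operations so that fitness evaluation runs on the GPU. The sketch below shows one hypothetical tensorized encoding, where each rule is a conjunction of single-feature threshold comparisons; the rule semantics and tensor layout are assumptions, not the paper's representation.

```python
import torch

def evaluate_rule_sets(X, thresholds, signs):
    """Evaluate every rule on every sample in one batched tensor operation.
    Each rule is a conjunction of single-feature comparisons:
    sign +1 means feature > threshold, -1 means feature < threshold, 0 means ignored."""
    # X: (n_samples, n_features); thresholds, signs: (n_rules, n_features)
    comparisons = signs.unsqueeze(0) * (X.unsqueeze(1) - thresholds.unsqueeze(0)) > 0
    active = signs.unsqueeze(0) != 0
    return (comparisons | ~active).all(dim=-1)           # (n_samples, n_rules) firing matrix

X = torch.rand(1000, 4)                                   # runs on GPU if tensors are moved to CUDA
thresholds = torch.rand(16, 4)
signs = torch.randint(-1, 2, (16, 4)).float()
print(evaluate_rule_sets(X, thresholds, signs).float().mean())
```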
2406.04314 | Zhanhao Liang | Zhanhao Liang, Yuhui Yuan, Shuyang Gu, Bohan Chen, Tiankai Hang,
Mingxi Cheng, Ji Li, Liang Zheng | Aesthetic Post-Training Diffusion Models from Generic Preferences with
Step-by-step Preference Optimization | CVPR 2025. Project Page: https://rockeycoss.github.io/spo.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating visually appealing images is fundamental to modern text-to-image
generation models. A potential solution to better aesthetics is direct
preference optimization (DPO), which has been applied to diffusion models to
improve general image quality including prompt alignment and aesthetics.
Popular DPO methods propagate preference labels from clean image pairs to all
the intermediate steps along the two generation trajectories. However,
preference labels provided in existing datasets are blended with layout and
aesthetic opinions, which would disagree with aesthetic preference. Even if
aesthetic labels were provided (at substantial cost), it would be hard for the
two-trajectory methods to capture nuanced visual differences at different
steps. To improve aesthetics economically, this paper uses existing generic
preference data and introduces step-by-step preference optimization (SPO) that
discards the propagation strategy and allows fine-grained image details to be
assessed. Specifically, at each denoising step, we 1) sample a pool of
candidates by denoising from a shared noise latent, 2) use a step-aware
preference model to find a suitable win-lose pair to supervise the diffusion
model, and 3) randomly select one from the pool to initialize the next
denoising step. This strategy ensures that diffusion models focus on the
subtle, fine-grained visual differences instead of layout aspect. We find that
aesthetics can be significantly enhanced by accumulating these improved minor
differences. When fine-tuning Stable Diffusion v1.5 and SDXL, SPO yields
significant improvements in aesthetics compared with existing DPO methods while
not sacrificing image-text alignment compared with vanilla models. Moreover,
SPO converges much faster than DPO methods due to the use of more correct
preference labels provided by the step-aware preference model.
| [
{
"version": "v1",
"created": "Thu, 6 Jun 2024 17:57:09 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Dec 2024 17:59:18 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 17:06:27 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Liang",
"Zhanhao",
""
],
[
"Yuan",
"Yuhui",
""
],
[
"Gu",
"Shuyang",
""
],
[
"Chen",
"Bohan",
""
],
[
"Hang",
"Tiankai",
""
],
[
"Cheng",
"Mingxi",
""
],
[
"Li",
"Ji",
""
],
[
"Zheng",
"Liang",
""
]
] | TITLE: Aesthetic Post-Training Diffusion Models from Generic Preferences with
Step-by-step Preference Optimization
ABSTRACT: Generating visually appealing images is fundamental to modern text-to-image
generation models. A potential solution to better aesthetics is direct
preference optimization (DPO), which has been applied to diffusion models to
improve general image quality including prompt alignment and aesthetics.
Popular DPO methods propagate preference labels from clean image pairs to all
the intermediate steps along the two generation trajectories. However,
preference labels provided in existing datasets are blended with layout and
aesthetic opinions, which would disagree with aesthetic preference. Even if
aesthetic labels were provided (at substantial cost), it would be hard for the
two-trajectory methods to capture nuanced visual differences at different
steps. To improve aesthetics economically, this paper uses existing generic
preference data and introduces step-by-step preference optimization (SPO) that
discards the propagation strategy and allows fine-grained image details to be
assessed. Specifically, at each denoising step, we 1) sample a pool of
candidates by denoising from a shared noise latent, 2) use a step-aware
preference model to find a suitable win-lose pair to supervise the diffusion
model, and 3) randomly select one from the pool to initialize the next
denoising step. This strategy ensures that diffusion models focus on the
subtle, fine-grained visual differences instead of layout aspect. We find that
aesthetics can be significantly enhanced by accumulating these improved minor
differences. When fine-tuning Stable Diffusion v1.5 and SDXL, SPO yields
significant improvements in aesthetics compared with existing DPO methods while
not sacrificing image-text alignment compared with vanilla models. Moreover,
SPO converges much faster than DPO methods due to the use of more correct
preference labels provided by the step-aware preference model.
|
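The three-step procedure in the SPO abstract (candidate pool from a shared noise latent, step-aware win-lose selection, random re-initialization of the next step) can be written as a short loop. The sketch below uses dummy stand-ins for the denoiser and the step-aware preference model, so it only mirrors the control flow, not the actual training objective.

```python
import torch

def spo_style_rollout(x_t, steps, denoise_step, preference_score, pool_size=4):
    """Mirror of the three-step loop: sample a candidate pool from the shared latent,
    pick a win-lose pair with a step-aware score, and continue from a random candidate.
    The returned (step, win, lose) triples would supervise the diffusion model."""
    pairs = []
    for t in steps:
        pool = [denoise_step(x_t, t) for _ in range(pool_size)]
        scores = torch.tensor([preference_score(c, t) for c in pool])
        pairs.append((t, pool[scores.argmax()], pool[scores.argmin()]))
        x_t = pool[torch.randint(pool_size, (1,)).item()]   # random pool member seeds next step
    return pairs

# toy usage with dummy denoiser and scorer on a 1x4x8x8 "latent"
pairs = spo_style_rollout(
    torch.randn(1, 4, 8, 8), steps=range(3),
    denoise_step=lambda z, t: z + 0.1 * torch.randn_like(z),
    preference_score=lambda z, t: z.mean().item(),
)
print(len(pairs))
```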
2406.10219 | Allen Tu | Alex Hanson, Allen Tu, Vasu Singla, Mayuka Jayawardhana, Matthias
Zwicker, Tom Goldstein | PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting | CVPR 2025, Project Page: https://pup3dgs.github.io/ | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by/4.0/ | Recent advances in novel view synthesis have enabled real-time rendering
speeds with high reconstruction accuracy. 3D Gaussian Splatting (3D-GS), a
foundational point-based parametric 3D scene representation, models scenes as
large sets of 3D Gaussians. However, complex scenes can consist of millions of
Gaussians, resulting in high storage and memory requirements that limit the
viability of 3D-GS on devices with limited resources. Current techniques for
compressing these pretrained models by pruning Gaussians rely on combining
heuristics to determine which Gaussians to remove. At high compression ratios,
these pruned scenes suffer from heavy degradation of visual fidelity and loss
of foreground details. In this paper, we propose a principled sensitivity
pruning score that preserves visual fidelity and foreground details at
significantly higher compression ratios than existing approaches. It is
computed as a second-order approximation of the reconstruction error on the
training views with respect to the spatial parameters of each Gaussian.
Additionally, we propose a multi-round prune-refine pipeline that can be
applied to any pretrained 3D-GS model without changing its training pipeline.
After pruning 90% of Gaussians, a substantially higher percentage than previous
methods, our PUP 3D-GS pipeline increases average rendering speed by
3.56$\times$ while retaining more salient foreground information and achieving
higher image quality metrics than existing techniques on scenes from the
Mip-NeRF 360, Tanks & Temples, and Deep Blending datasets.
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2024 17:53:55 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Dec 2024 19:00:28 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 18:34:01 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Hanson",
"Alex",
""
],
[
"Tu",
"Allen",
""
],
[
"Singla",
"Vasu",
""
],
[
"Jayawardhana",
"Mayuka",
""
],
[
"Zwicker",
"Matthias",
""
],
[
"Goldstein",
"Tom",
""
]
] | TITLE: PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting
ABSTRACT: Recent advances in novel view synthesis have enabled real-time rendering
speeds with high reconstruction accuracy. 3D Gaussian Splatting (3D-GS), a
foundational point-based parametric 3D scene representation, models scenes as
large sets of 3D Gaussians. However, complex scenes can consist of millions of
Gaussians, resulting in high storage and memory requirements that limit the
viability of 3D-GS on devices with limited resources. Current techniques for
compressing these pretrained models by pruning Gaussians rely on combining
heuristics to determine which Gaussians to remove. At high compression ratios,
these pruned scenes suffer from heavy degradation of visual fidelity and loss
of foreground details. In this paper, we propose a principled sensitivity
pruning score that preserves visual fidelity and foreground details at
significantly higher compression ratios than existing approaches. It is
computed as a second-order approximation of the reconstruction error on the
training views with respect to the spatial parameters of each Gaussian.
Additionally, we propose a multi-round prune-refine pipeline that can be
applied to any pretrained 3D-GS model without changing its training pipeline.
After pruning 90% of Gaussians, a substantially higher percentage than previous
methods, our PUP 3D-GS pipeline increases average rendering speed by
3.56$\times$ while retaining more salient foreground information and achieving
higher image quality metrics than existing techniques on scenes from the
Mip-NeRF 360, Tanks & Temples, and Deep Blending datasets.
|
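The pruning score in the abstract above is a second-order approximation of the reconstruction error with respect to each Gaussian's spatial parameters. The sketch below uses a Fisher-style proxy (accumulated squared gradients over views) to rank and keep the top 10% of toy "Gaussians"; the dummy loss and the proxy itself are assumptions, not the paper's exact score.

```python
import torch

def sensitivity_scores(spatial_params, render_loss, views):
    """Fisher-style proxy for a second-order sensitivity score: accumulate squared
    gradients of the reconstruction loss w.r.t. each Gaussian's spatial parameters
    over the training views, then rank Gaussians by the accumulated total."""
    scores = torch.zeros(spatial_params.shape[0])
    for view in views:
        spatial_params.grad = None
        render_loss(spatial_params, view).backward()
        scores += spatial_params.grad.pow(2).sum(dim=-1)   # per-Gaussian accumulation
    return scores

# toy usage: 100 "Gaussians" with 3 spatial parameters each and a dummy loss
params = torch.randn(100, 3, requires_grad=True)
views = [torch.randn(3) for _ in range(4)]
scores = sensitivity_scores(params, lambda p, v: ((p * v).sum(dim=-1) ** 2).mean(), views)
keep = scores.argsort(descending=True)[: len(scores) // 10]   # prune 90%, keep the top 10%
print(keep.shape)
```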
2406.10326 | Rohit Bharadwaj | Rohit Bharadwaj, Hanan Gani, Muzammal Naseer, Fahad Shahbaz Khan,
Salman Khan | VANE-Bench: Video Anomaly Evaluation Benchmark for Conversational LMMs | Accepted to NAACL 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The recent developments in Large Multi-modal Video Models (Video-LMMs) have
significantly enhanced our ability to interpret and analyze video data. Despite
their impressive capabilities, current Video-LMMs have not been evaluated for
anomaly detection tasks, which is critical to their deployment in practical
scenarios e.g., towards identifying deepfakes, manipulated video content,
traffic accidents and crimes. In this paper, we introduce VANE-Bench, a
benchmark designed to assess the proficiency of Video-LMMs in detecting and
localizing anomalies and inconsistencies in videos. Our dataset comprises an
array of videos synthetically generated using existing state-of-the-art
text-to-video generation models, encompassing a variety of subtle anomalies and
inconsistencies grouped into five categories: unnatural transformations,
unnatural appearance, pass-through, disappearance and sudden appearance.
Additionally, our benchmark features real-world samples from existing anomaly
detection datasets, focusing on crime-related irregularities, atypical
pedestrian behavior, and unusual events. The task is structured as a visual
question-answering challenge to gauge the models' ability to accurately detect
and localize the anomalies within the videos. We evaluate nine existing
Video-LMMs, both open and closed sources, on this benchmarking task and find
that most of the models encounter difficulties in effectively identifying the
subtle anomalies. In conclusion, our research offers significant insights into
the current capabilities of Video-LMMs in the realm of anomaly detection,
highlighting the importance of our work in evaluating and improving these
models for real-world applications. Our code and data are available at
https://hananshafi.github.io/vane-benchmark/
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2024 17:59:01 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 20:26:56 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Bharadwaj",
"Rohit",
""
],
[
"Gani",
"Hanan",
""
],
[
"Naseer",
"Muzammal",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Khan",
"Salman",
""
]
] | TITLE: VANE-Bench: Video Anomaly Evaluation Benchmark for Conversational LMMs
ABSTRACT: The recent developments in Large Multi-modal Video Models (Video-LMMs) have
significantly enhanced our ability to interpret and analyze video data. Despite
their impressive capabilities, current Video-LMMs have not been evaluated for
anomaly detection tasks, which is critical to their deployment in practical
scenarios e.g., towards identifying deepfakes, manipulated video content,
traffic accidents and crimes. In this paper, we introduce VANE-Bench, a
benchmark designed to assess the proficiency of Video-LMMs in detecting and
localizing anomalies and inconsistencies in videos. Our dataset comprises an
array of videos synthetically generated using existing state-of-the-art
text-to-video generation models, encompassing a variety of subtle anomalies and
inconsistencies grouped into five categories: unnatural transformations,
unnatural appearance, pass-through, disappearance and sudden appearance.
Additionally, our benchmark features real-world samples from existing anomaly
detection datasets, focusing on crime-related irregularities, atypical
pedestrian behavior, and unusual events. The task is structured as a visual
question-answering challenge to gauge the models' ability to accurately detect
and localize the anomalies within the videos. We evaluate nine existing
Video-LMMs, both open and closed sources, on this benchmarking task and find
that most of the models encounter difficulties in effectively identifying the
subtle anomalies. In conclusion, our research offers significant insights into
the current capabilities of Video-LMMs in the realm of anomaly detection,
highlighting the importance of our work in evaluating and improving these
models for real-world applications. Our code and data are available at
https://hananshafi.github.io/vane-benchmark/
|
2406.12030 | Yongting Zhang | Yongting Zhang, Lu Chen, Guodong Zheng, Yifeng Gao, Rui Zheng, Jinlan
Fu, Zhenfei Yin, Senjie Jin, Yu Qiao, Xuanjing Huang, Feng Zhao, Tao Gui,
Jing Shao | SPA-VL: A Comprehensive Safety Preference Alignment Dataset for Vision
Language Model | null | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emergence of Vision Language Models (VLMs) has brought unprecedented
advances in understanding multimodal information. The combination of textual
and visual semantics in VLMs is highly complex and diverse, making the safety
alignment of these models challenging. Furthermore, due to the limited study on
the safety alignment of VLMs, there is a lack of large-scale, high-quality
datasets. To address these limitations, we propose a Safety Preference
Alignment dataset for Vision Language Models named SPA-VL. In terms of breadth,
SPA-VL covers 6 harmfulness domains, 13 categories, and 53 subcategories, and
contains 100,788 samples of the quadruple (question, image, chosen response,
rejected response). In terms of depth, the responses are collected from 12
open-source (e.g., QwenVL) and closed-source (e.g., Gemini) VLMs to ensure
diversity. The construction of preference data is fully automated, and the
experimental results indicate that models trained with alignment techniques on
the SPA-VL dataset exhibit substantial improvements in harmlessness and
helpfulness while maintaining core capabilities. SPA-VL, as a large-scale,
high-quality, and diverse dataset, represents a significant milestone in
ensuring that VLMs achieve both harmlessness and helpfulness.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2024 18:57:37 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 04:18:50 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 16:01:59 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zhang",
"Yongting",
""
],
[
"Chen",
"Lu",
""
],
[
"Zheng",
"Guodong",
""
],
[
"Gao",
"Yifeng",
""
],
[
"Zheng",
"Rui",
""
],
[
"Fu",
"Jinlan",
""
],
[
"Yin",
"Zhenfei",
""
],
[
"Jin",
"Senjie",
""
],
[
"Qiao",
"Yu",
""
],
[
"Huang",
"Xuanjing",
""
],
[
"Zhao",
"Feng",
""
],
[
"Gui",
"Tao",
""
],
[
"Shao",
"Jing",
""
]
] | TITLE: SPA-VL: A Comprehensive Safety Preference Alignment Dataset for Vision
Language Model
ABSTRACT: The emergence of Vision Language Models (VLMs) has brought unprecedented
advances in understanding multimodal information. The combination of textual
and visual semantics in VLMs is highly complex and diverse, making the safety
alignment of these models challenging. Furthermore, due to the limited study on
the safety alignment of VLMs, there is a lack of large-scale, high-quality
datasets. To address these limitations, we propose a Safety Preference
Alignment dataset for Vision Language Models named SPA-VL. In terms of breadth,
SPA-VL covers 6 harmfulness domains, 13 categories, and 53 subcategories, and
contains 100,788 samples of the quadruple (question, image, chosen response,
rejected response). In terms of depth, the responses are collected from 12
open-source (e.g., QwenVL) and closed-source (e.g., Gemini) VLMs to ensure
diversity. The construction of preference data is fully automated, and the
experimental results indicate that models trained with alignment techniques on
the SPA-VL dataset exhibit substantial improvements in harmlessness and
helpfulness while maintaining core capabilities. SPA-VL, as a large-scale,
high-quality, and diverse dataset, represents a significant milestone in
ensuring that VLMs achieve both harmlessness and helpfulness.
|
2406.12693 | Du Yin | Du Yin, Hao Xue, Arian Prabowo, Shuang Ao, Flora Salim | XXLTraffic: Expanding and Extremely Long Traffic forecasting beyond test
adaptation | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Traffic forecasting is crucial for smart cities and intelligent
transportation initiatives, where deep learning has made significant progress
in modeling complex spatio-temporal patterns in recent years. However, current
public datasets have limitations in reflecting the distribution shift nature of
real-world scenarios, characterized by continuously evolving infrastructures,
varying temporal distributions, and long temporal gaps due to sensor downtimes
or changes in traffic patterns. These limitations inevitably restrict the
practical applicability of existing traffic forecasting datasets. To bridge
this gap, we present XXLTraffic, the largest available public traffic dataset with
the longest timespan collected from Los Angeles, USA, and New South Wales,
Australia, curated to support research in extremely long forecasting beyond
test adaptation. Our benchmark includes both typical time-series forecasting
settings with hourly and daily aggregated data and novel configurations that
introduce gaps and down-sample the training size to better simulate practical
constraints. We anticipate the new XXLTraffic will provide a fresh perspective
for the time-series and traffic forecasting communities. It would also offer a
robust platform for developing and evaluating models designed to tackle the
extremely long forecasting problems beyond test adaptation. Our dataset
supplements existing spatio-temporal data resources and leads to new research
directions in this domain.
| [
{
"version": "v1",
"created": "Tue, 18 Jun 2024 15:06:22 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 05:39:42 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Yin",
"Du",
""
],
[
"Xue",
"Hao",
""
],
[
"Prabowo",
"Arian",
""
],
[
"Ao",
"Shuang",
""
],
[
"Salim",
"Flora",
""
]
] | TITLE: XXLTraffic: Expanding and Extremely Long Traffic forecasting beyond test
adaptation
ABSTRACT: Traffic forecasting is crucial for smart cities and intelligent
transportation initiatives, where deep learning has made significant progress
in modeling complex spatio-temporal patterns in recent years. However, current
public datasets have limitations in reflecting the distribution shift nature of
real-world scenarios, characterized by continuously evolving infrastructures,
varying temporal distributions, and long temporal gaps due to sensor downtimes
or changes in traffic patterns. These limitations inevitably restrict the
practical applicability of existing traffic forecasting datasets. To bridge
this gap, we present XXLTraffic, the largest available public traffic dataset with
the longest timespan collected from Los Angeles, USA, and New South Wales,
Australia, curated to support research in extremely long forecasting beyond
test adaptation. Our benchmark includes both typical time-series forecasting
settings with hourly and daily aggregated data and novel configurations that
introduce gaps and down-sample the training size to better simulate practical
constraints. We anticipate the new XXLTraffic will provide a fresh perspective
for the time-series and traffic forecasting communities. It would also offer a
robust platform for developing and evaluating models designed to tackle the
extremely long forecasting problems beyond test adaptation. Our dataset
supplements existing spatio-temporal data resources and leads to new research
directions in this domain.
|
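The benchmark configurations described above (hourly/daily aggregation, simulated gaps, down-sampled training spans) can be mocked up in a few lines of pandas. The snippet below is purely illustrative: the column name, date ranges, and gap placement are assumptions, not the XXLTraffic schema.

```python
import numpy as np
import pandas as pd

# One month of synthetic minute-level sensor readings (column name is an assumption).
idx = pd.date_range("2020-01-01", periods=24 * 60 * 30, freq="min")
raw = pd.DataFrame({"flow": np.random.rand(len(idx))}, index=idx)

hourly = raw.resample("h").mean()                # hourly aggregated setting
daily = raw.resample("D").mean()                 # daily aggregated setting

train = hourly.loc[:"2020-01-10"].iloc[::4]      # down-sampled training window
test = hourly.loc["2020-01-20":]                 # evaluation after a simulated sensor-downtime gap
print(len(train), len(test), len(daily))
```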
2406.15863 | Tianyu Wei | Tianyu Wei, Shanmin Pang, Qi Guo, Yizhuo Ma, Xiaofeng Cao, Ming-Ming
Cheng, Qing Guo | EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor
Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-image diffusion models can generate realistic images based on textual
inputs, enabling users to convey their opinions visually through language.
Meanwhile, within language, emotion plays a crucial role in expressing personal
opinions in our daily lives and the inclusion of maliciously negative content
can lead users astray, exacerbating negative emotions. Recognizing the success
of diffusion models and the significance of emotion, we investigate a
previously overlooked risk associated with text-to-image diffusion models, that
is, utilizing emotion in the input texts to introduce negative content and
provoke unfavorable emotions in users. Specifically, we identify a new backdoor
attack, i.e., emotion-aware backdoor attack (EmoAttack), which introduces
malicious negative content triggered by emotional texts during image
generation. We formulate such an attack as a diffusion personalization problem
to avoid extensive model retraining and propose the EmoBooth. Unlike existing
personalization methods, our approach fine-tunes a pre-trained diffusion model
by establishing a mapping between a cluster of emotional words and a given
reference image containing malicious negative content. To validate the
effectiveness of our method, we built a dataset and conducted extensive
analysis and discussion about its effectiveness. Given consumers' widespread
use of diffusion models, uncovering this threat is critical for society.
| [
{
"version": "v1",
"created": "Sat, 22 Jun 2024 14:43:23 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 16:08:20 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Wei",
"Tianyu",
""
],
[
"Pang",
"Shanmin",
""
],
[
"Guo",
"Qi",
""
],
[
"Ma",
"Yizhuo",
""
],
[
"Cao",
"Xiaofeng",
""
],
[
"Cheng",
"Ming-Ming",
""
],
[
"Guo",
"Qing",
""
]
] | TITLE: EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor
Generation
ABSTRACT: Text-to-image diffusion models can generate realistic images based on textual
inputs, enabling users to convey their opinions visually through language.
Meanwhile, within language, emotion plays a crucial role in expressing personal
opinions in our daily lives and the inclusion of maliciously negative content
can lead users astray, exacerbating negative emotions. Recognizing the success
of diffusion models and the significance of emotion, we investigate a
previously overlooked risk associated with text-to-image diffusion models, that
is, utilizing emotion in the input texts to introduce negative content and
provoke unfavorable emotions in users. Specifically, we identify a new backdoor
attack, i.e., emotion-aware backdoor attack (EmoAttack), which introduces
malicious negative content triggered by emotional texts during image
generation. We formulate such an attack as a diffusion personalization problem
to avoid extensive model retraining and propose the EmoBooth. Unlike existing
personalization methods, our approach fine-tunes a pre-trained diffusion model
by establishing a mapping between a cluster of emotional words and a given
reference image containing malicious negative content. To validate the
effectiveness of our method, we built a dataset and conducted extensive
analysis and discussion about its effectiveness. Given consumers' widespread
use of diffusion models, uncovering this threat is critical for society.
|
2407.01519 | Yu-Lun Liu | Chang-Han Yeh, Chin-Yang Lin, Zhixiang Wang, Chi-Wei Hsiao, Ting-Hsuan
Chen, Hau-Shiang Shiu, Yu-Lun Liu | DiffIR2VR-Zero: Zero-Shot Video Restoration with Diffusion-based Image
Restoration Models | Project page: https://jimmycv07.github.io/DiffIR2VR_web/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present DiffIR2VR-Zero, a zero-shot framework that enables any pre-trained
image restoration diffusion model to perform high-quality video restoration
without additional training. While image diffusion models have shown remarkable
restoration capabilities, their direct application to video leads to temporal
inconsistencies, and existing video restoration methods require extensive
retraining for different degradation types. Our approach addresses these
challenges through two key innovations: a hierarchical latent warping strategy
that maintains consistency across both keyframes and local frames, and a hybrid
token merging mechanism that adaptively combines optical flow and feature
matching. Through extensive experiments, we demonstrate that our method not
only maintains the high-quality restoration of base diffusion models but also
achieves superior temporal consistency across diverse datasets and degradation
conditions, including challenging scenarios like 8$\times$ super-resolution and
severe noise. Importantly, our framework works with any image restoration
diffusion model, providing a versatile solution for video enhancement without
task-specific training or modifications.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2024 17:59:12 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jul 2024 16:25:53 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Oct 2024 14:37:13 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 15:35:12 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Yeh",
"Chang-Han",
""
],
[
"Lin",
"Chin-Yang",
""
],
[
"Wang",
"Zhixiang",
""
],
[
"Hsiao",
"Chi-Wei",
""
],
[
"Chen",
"Ting-Hsuan",
""
],
[
"Shiu",
"Hau-Shiang",
""
],
[
"Liu",
"Yu-Lun",
""
]
] | TITLE: DiffIR2VR-Zero: Zero-Shot Video Restoration with Diffusion-based Image
Restoration Models
ABSTRACT: We present DiffIR2VR-Zero, a zero-shot framework that enables any pre-trained
image restoration diffusion model to perform high-quality video restoration
without additional training. While image diffusion models have shown remarkable
restoration capabilities, their direct application to video leads to temporal
inconsistencies, and existing video restoration methods require extensive
retraining for different degradation types. Our approach addresses these
challenges through two key innovations: a hierarchical latent warping strategy
that maintains consistency across both keyframes and local frames, and a hybrid
token merging mechanism that adaptively combines optical flow and feature
matching. Through extensive experiments, we demonstrate that our method not
only maintains the high-quality restoration of base diffusion models but also
achieves superior temporal consistency across diverse datasets and degradation
conditions, including challenging scenarios like 8$\times$ super-resolution and
severe noise. Importantly, our framework works with any image restoration
diffusion model, providing a versatile solution for video enhancement without
task-specific training or modifications.
|
2407.03146 | Yunpeng Jiang | Yunpeng Jiang and Paul Weng and Yutong Ban | Understanding and Reducing the Class-Dependent Effects of Data
Augmentation with A Two-Player Game Approach | null | null | null | null | cs.CY cs.AI cs.CV cs.GT cs.LG | http://creativecommons.org/licenses/by/4.0/ | Data augmentation is widely applied and has shown its benefits in different
machine learning tasks. However, as recently observed, it may have an unfair
effect in multi-class classification. While data augmentation generally
improves the overall performance (and therefore is beneficial for many
classes), it can actually be detrimental for other classes, which can be
problematic in some application domains. In this paper, to counteract this
phenomenon, we propose CLAM, a CLAss-dependent Multiplicative-weights method.
To derive it, we first formulate the training of a classifier as a non-linear
optimization problem that aims at simultaneously maximizing the individual
class performances and balancing them. By rewriting this optimization problem
as an adversarial two-player game, we propose a novel multiplicative weight
algorithm, for which we prove the convergence. Interestingly, our formulation
also reveals that the class-dependent effects of data augmentation are not due
to data augmentation only, but are in fact a general phenomenon. Our empirical
results over five datasets demonstrate that the performance of learned
classifiers is indeed more fairly distributed over classes, with only limited
impact on the average accuracy.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 02:56:43 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Jul 2024 05:21:59 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 09:05:02 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Jiang",
"Yunpeng",
""
],
[
"Weng",
"Paul",
""
],
[
"Ban",
"Yutong",
""
]
] | TITLE: Understanding and Reducing the Class-Dependent Effects of Data
Augmentation with A Two-Player Game Approach
ABSTRACT: Data augmentation is widely applied and has shown its benefits in different
machine learning tasks. However, as recently observed, it may have an unfair
effect in multi-class classification. While data augmentation generally
improves the overall performance (and therefore is beneficial for many
classes), it can actually be detrimental for other classes, which can be
problematic in some application domains. In this paper, to counteract this
phenomenon, we propose CLAM, a CLAss-dependent Multiplicative-weights method.
To derive it, we first formulate the training of a classifier as a non-linear
optimization problem that aims at simultaneously maximizing the individual
class performances and balancing them. By rewriting this optimization problem
as an adversarial two-player game, we propose a novel multiplicative weight
algorithm, for which we prove the convergence. Interestingly, our formulation
also reveals that the class-dependent effects of data augmentation are not due
to data augmentation only, but are in fact a general phenomenon. Our empirical
results over five datasets demonstrate that the performance of learned
classifiers is indeed more fairly distributed over classes, with only limited
impact on the average accuracy.
|
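The CLAM record above centres on a multiplicative-weights update over per-class weights derived from an adversarial two-player game. Below is a minimal sketch of a generic multiplicative-weights step of this kind, assuming per-class error rates are available after each epoch; the learning rate, update form, and variable names are illustrative and not the paper's exact algorithm:

import numpy as np

def multiplicative_weights_step(class_weights, class_errors, eta=0.1):
    # Classes with higher error are up-weighted exponentially, then the
    # weights are renormalised to form a distribution over classes.
    w = class_weights * np.exp(eta * class_errors)
    return w / w.sum()

# Toy usage: 5 classes, class 3 currently has the worst error rate.
w = np.full(5, 1 / 5)
errors = np.array([0.05, 0.10, 0.08, 0.40, 0.12])
for _ in range(20):
    w = multiplicative_weights_step(w, errors)
print(w)  # weight mass shifts toward the hardest class
# In a CLAM-like setup these weights would then re-weight the per-class training loss.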
2407.08083 | Ali Hatamizadeh | Ali Hatamizadeh, Jan Kautz | MambaVision: A Hybrid Mamba-Transformer Vision Backbone | Accepted to CVPR'25 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | We propose a novel hybrid Mamba-Transformer backbone, MambaVision,
specifically tailored for vision applications. Our core contribution includes
redesigning the Mamba formulation to enhance its capability for efficient
modeling of visual features. Through a comprehensive ablation study, we
demonstrate the feasibility of integrating Vision Transformers (ViT) with
Mamba. Our results show that equipping the Mamba architecture with
self-attention blocks in the final layers greatly improves its capacity to
capture long-range spatial dependencies. Based on these findings, we introduce
a family of MambaVision models with a hierarchical architecture to meet various
design criteria. For classification on the ImageNet-1K dataset, MambaVision
variants achieve state-of-the-art (SOTA) performance in terms of both Top-1
accuracy and throughput. In downstream tasks such as object detection, instance
segmentation, and semantic segmentation on MS COCO and ADE20K datasets,
MambaVision outperforms comparably sized backbones while demonstrating
favorable performance. Code: https://github.com/NVlabs/MambaVision
| [
{
"version": "v1",
"created": "Wed, 10 Jul 2024 23:02:45 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 17:54:37 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Hatamizadeh",
"Ali",
""
],
[
"Kautz",
"Jan",
""
]
] | TITLE: MambaVision: A Hybrid Mamba-Transformer Vision Backbone
ABSTRACT: We propose a novel hybrid Mamba-Transformer backbone, MambaVision,
specifically tailored for vision applications. Our core contribution includes
redesigning the Mamba formulation to enhance its capability for efficient
modeling of visual features. Through a comprehensive ablation study, we
demonstrate the feasibility of integrating Vision Transformers (ViT) with
Mamba. Our results show that equipping the Mamba architecture with
self-attention blocks in the final layers greatly improves its capacity to
capture long-range spatial dependencies. Based on these findings, we introduce
a family of MambaVision models with a hierarchical architecture to meet various
design criteria. For classification on the ImageNet-1K dataset, MambaVision
variants achieve state-of-the-art (SOTA) performance in terms of both Top-1
accuracy and throughput. In downstream tasks such as object detection, instance
segmentation, and semantic segmentation on MS COCO and ADE20K datasets,
MambaVision outperforms comparably sized backbones while demonstrating
favorable performance. Code: https://github.com/NVlabs/MambaVision
|
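The MambaVision abstract above reports that placing self-attention blocks in the final layers of a Mamba-based stage improves long-range spatial modelling. The schematic below only illustrates that layout idea; the block counts and names are made up for the sketch and are not the released MambaVision configuration:

def build_stage(depth, attn_tail=2):
    # One hierarchical stage: Mamba-style mixer blocks first, with
    # self-attention blocks occupying the last `attn_tail` positions.
    attn_tail = min(attn_tail, depth)
    return ["mamba_mixer"] * (depth - attn_tail) + ["self_attention"] * attn_tail

# Hypothetical four-stage hierarchy (depths are illustrative).
stages = {f"stage{i + 1}": build_stage(d) for i, d in enumerate([2, 4, 8, 4])}
for name, blocks in stages.items():
    print(name, blocks)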
2407.19042 | Devin Matthews | Tingting Zhao, James H. Thorpe, Devin A. Matthews | Prospects for rank-reduced CCSD(T) in the context of high-accuracy
thermochemistry | null | J. Chem. Phys. 161, 154110 (2024) | 10.1063/5.0230899 | null | physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Obtaining sub-chemical accuracy (1 kJ mol${}^{-1}$) for reaction energies of
medium-sized gas-phase molecules is a longstanding challenge in the field of
thermochemical modeling. The perturbative triples correction to CCSD, CCSD(T),
constitutes an important component of all high-accuracy composite model
chemistries that obtain this accuracy, but can be a roadblock in the
calculation of medium to large systems due to its $\mathcal{O}(N^7)$ scaling,
particularly in HEAT-like model chemistries that eschew separation of core and
valence correlation. This study extends the work of Lesiuk [J. Chem. Phys. 156,
064103 (2022)] with new approximate methods and assesses the accuracy of five
different approximations of (T) in the context of a subset of molecules
selected from the W4-17 dataset. It is demonstrated that all of these
approximate methods can achieve sub-0.1 kJ mol${}^{-1}$ accuracy with respect
to canonical, density-fitted (T) contributions with a modest number of
projectors. The approximation labeled $\tilde{Z}T$ appears to offer the best
trade-off between cost and accuracy and shows significant promise in an
order-of-magnitude reduction in the computational cost of the CCSD(T) component
of high-accuracy model chemistries.
| [
{
"version": "v1",
"created": "Fri, 26 Jul 2024 18:49:04 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Zhao",
"Tingting",
""
],
[
"Thorpe",
"James H.",
""
],
[
"Matthews",
"Devin A.",
""
]
] | TITLE: Prospects for rank-reduced CCSD(T) in the context of high-accuracy
thermochemistry
ABSTRACT: Obtaining sub-chemical accuracy (1 kJ mol${}^{-1}$) for reaction energies of
medium-sized gas-phase molecules is a longstanding challenge in the field of
thermochemical modeling. The perturbative triples correction to CCSD, CCSD(T),
constitutes an important component of all high-accuracy composite model
chemistries that obtain this accuracy, but can be a roadblock in the
calculation of medium to large systems due to its $\mathcal{O}(N^7)$ scaling,
particularly in HEAT-like model chemistries that eschew separation of core and
valence correlation. This study extends the work of Lesiuk [J. Chem. Phys. 156,
064103 (2022)] with new approximate methods and assesses the accuracy of five
different approximations of (T) in the context of a subset of molecules
selected from the W4-17 dataset. It is demonstrated that all of these
approximate methods can achieve sub-0.1 kJ mol${}^{-1}$ accuracy with respect
to canonical, density-fitted (T) contributions with a modest number of
projectors. The approximation labeled $\tilde{Z}T$ appears to offer the best
trade-off between cost and accuracy and shows significant promise in an
order-of-magnitude reduction in the computational cost of the CCSD(T) component
of high-accuracy model chemistries.
|
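For orientation on the scaling and accuracy targets quoted in the abstract above, a brief LaTeX note; the relation between projector count and cost is stated only qualitatively here, and the precise rank-reduced formulas are in the paper and in Lesiuk's earlier work:

% Scaling of the canonical (T) step (n_o occupied, n_v virtual orbitals):
\[
  \mathrm{cost}\bigl[(T)\bigr] \;\sim\; n_o^{3}\, n_v^{4} \;=\; \mathcal{O}(N^{7}).
\]
% Accuracy target discussed in the abstract, relative to the canonical,
% density-fitted (T) contribution, for a modest number of projectors N_P:
\[
  \bigl| E_{(T)}^{\mathrm{approx}}(N_P) - E_{(T)}^{\mathrm{DF}} \bigr|
  \;<\; 0.1\ \mathrm{kJ\,mol^{-1}}.
\]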
2408.04811 | Moussa Koulako Bala Doumbouya | Moussa Koulako Bala Doumbouya, Ananjan Nandi, Gabriel Poesia, Davide
Ghilardi, Anna Goldie, Federico Bianchi, Dan Jurafsky, Christopher D. Manning | h4rm3l: A language for Composable Jailbreak Attack Synthesis | Accepted to the Thirteenth International Conference on Learning
Representations (ICLR 2025) | null | null | null | cs.CR cs.AI cs.CL cs.CY cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Despite their demonstrated valuable capabilities, state-of-the-art (SOTA)
widely deployed large language models (LLMs) still have the potential to cause
harm to society due to the ineffectiveness of their safety filters, which can
be bypassed by prompt transformations called jailbreak attacks. Current
approaches to LLM safety assessment, which employ datasets of templated prompts
and benchmarking pipelines, fail to cover sufficiently large and diverse sets
of jailbreak attacks, leading to the widespread deployment of unsafe LLMs.
Recent research showed that novel jailbreak attacks could be derived by
composition; however, a formal composable representation for jailbreak attacks,
which, among other benefits, could enable the exploration of a large
compositional space of jailbreak attacks through program synthesis methods, has
not been previously proposed. We introduce h4rm3l, a novel approach that
addresses this gap with a human-readable domain-specific language (DSL). Our
framework comprises: (1) The h4rm3l DSL, which formally expresses jailbreak
attacks as compositions of parameterized string transformation primitives. (2)
A synthesizer with bandit algorithms that efficiently generates jailbreak
attacks optimized for a target black box LLM. (3) The h4rm3l red-teaming
software toolkit that employs the previous two components and an automated
harmful LLM behavior classifier that is strongly aligned with human judgment.
We demonstrate h4rm3l's efficacy by synthesizing a dataset of 2656 successful
novel jailbreak attacks targeting 6 SOTA open-source and proprietary LLMs, and
by benchmarking those models against a subset of these synthesized attacks. Our
results show that h4rm3l's synthesized attacks are diverse and more successful
than existing jailbreak attacks in the literature, with success rates exceeding 90%
on SOTA LLMs.
| [
{
"version": "v1",
"created": "Fri, 9 Aug 2024 01:45:39 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Sep 2024 05:19:32 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 08:42:00 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 01:51:22 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Doumbouya",
"Moussa Koulako Bala",
""
],
[
"Nandi",
"Ananjan",
""
],
[
"Poesia",
"Gabriel",
""
],
[
"Ghilardi",
"Davide",
""
],
[
"Goldie",
"Anna",
""
],
[
"Bianchi",
"Federico",
""
],
[
"Jurafsky",
"Dan",
""
],
[
"Manning",
"Christopher D.",
""
]
] | TITLE: h4rm3l: A language for Composable Jailbreak Attack Synthesis
ABSTRACT: Despite their demonstrated valuable capabilities, state-of-the-art (SOTA)
widely deployed large language models (LLMs) still have the potential to cause
harm to society due to the ineffectiveness of their safety filters, which can
be bypassed by prompt transformations called jailbreak attacks. Current
approaches to LLM safety assessment, which employ datasets of templated prompts
and benchmarking pipelines, fail to cover sufficiently large and diverse sets
of jailbreak attacks, leading to the widespread deployment of unsafe LLMs.
Recent research showed that novel jailbreak attacks could be derived by
composition; however, a formal composable representation for jailbreak attacks,
which, among other benefits, could enable the exploration of a large
compositional space of jailbreak attacks through program synthesis methods, has
not been previously proposed. We introduce h4rm3l, a novel approach that
addresses this gap with a human-readable domain-specific language (DSL). Our
framework comprises: (1) The h4rm3l DSL, which formally expresses jailbreak
attacks as compositions of parameterized string transformation primitives. (2)
A synthesizer with bandit algorithms that efficiently generates jailbreak
attacks optimized for a target black box LLM. (3) The h4rm3l red-teaming
software toolkit that employs the previous two components and an automated
harmful LLM behavior classifier that is strongly aligned with human judgment.
We demonstrate h4rm3l's efficacy by synthesizing a dataset of 2656 successful
novel jailbreak attacks targeting 6 SOTA open-source and proprietary LLMs, and
by benchmarking those models against a subset of these synthesized attacks. Our
results show that h4rm3l's synthesized attacks are diverse and more successful
than existing jailbreak attacks in the literature, with success rates exceeding 90%
on SOTA LLMs.
|
2408.12974 | Hinako Mitsuoka | Hinako Mitsuoka, Kazuhiro Hotta | Accuracy Improvement of Cell Image Segmentation Using Feedback Former | Accepted by ECCV2024 Workshop "Human-inspired Computer Vision (HCV)".
2025/3/19 : This paper has been accepted for publication in IEEE Access. The
published version is available at DOI:
https://doi.org/10.1109/ACCESS.2025.3552847 | null | 10.1109/ACCESS.2025.3552847 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation of microscopy cell images by deep learning is a
significant technique. We considered that Transformers, which have recently
outperformed CNNs in image recognition, could also be improved and developed
for cell image segmentation. Transformers tend to focus more on contextual
information than on detailed information. This tendency leads to a lack of
detailed information for segmentation. Therefore, to supplement or reinforce
the missing detailed information, we hypothesized that feedback processing, as
found in the human visual cortex, should be effective. Our proposed Feedback
Former is a novel architecture for semantic segmentation in which a Transformer
is used as the encoder together with a feedback processing mechanism. Feature
maps with detailed
information are fed back to the lower layers from near the output of the model
to compensate for the lack of detailed information which is the weakness of
Transformers and improve the segmentation accuracy. Through experiments on three
cell image datasets, we confirmed that our method surpasses methods without
feedback, demonstrating its superior accuracy in cell image segmentation. Our
method achieved higher segmentation accuracy at a lower computational cost than
conventional feedback approaches. Moreover, our method offered superior
precision without simply increasing the model size of the Transformer encoder,
demonstrating higher accuracy with a lower computational cost.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2024 10:48:03 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 05:46:20 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Mitsuoka",
"Hinako",
""
],
[
"Hotta",
"Kazuhiro",
""
]
] | TITLE: Accuracy Improvement of Cell Image Segmentation Using Feedback Former
ABSTRACT: Semantic segmentation of microscopy cell images by deep learning is a
significant technique. We considered that Transformers, which have recently
outperformed CNNs in image recognition, could also be improved and developed
for cell image segmentation. Transformers tend to focus more on contextual
information than on detailed information. This tendency leads to a lack of
detailed information for segmentation. Therefore, to supplement or reinforce
the missing detailed information, we hypothesized that feedback processing, as
found in the human visual cortex, should be effective. Our proposed Feedback
Former is a novel architecture for semantic segmentation in which a Transformer
is used as the encoder together with a feedback processing mechanism. Feature
maps with detailed
information are fed back to the lower layers from near the output of the model
to compensate for the lack of detailed information which is the weakness of
Transformers and improve the segmentation accuracy. Through experiments on three
cell image datasets, we confirmed that our method surpasses methods without
feedback, demonstrating its superior accuracy in cell image segmentation. Our
method achieved higher segmentation accuracy at a lower computational cost than
conventional feedback approaches. Moreover, our method offered superior
precision without simply increasing the model size of the Transformer encoder,
demonstrating higher accuracy with a lower computational cost.
|
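The Feedback Former record above describes feeding feature maps from near the model output back into the lower layers. A two-pass toy sketch of that control flow, using stand-in encoder/decoder functions (all functions and the fusion rule here are placeholders for illustration, not the paper's architecture):

import numpy as np

def encoder(x):
    # Stand-in for the Transformer encoder: returns a feature map.
    return x.mean(axis=-1, keepdims=True) * np.ones_like(x)

def decoder(feats):
    # Stand-in for the segmentation head.
    return 2.0 * feats

def feedback_forward(x, fuse_weight=0.5):
    # Pass 1: run the model and take features from near the output.
    low1 = encoder(x)
    high1 = decoder(low1)
    # Pass 2: fuse the fed-back features into the lower layers, then decode.
    low2 = encoder(x) + fuse_weight * high1
    return decoder(low2)

out = feedback_forward(np.random.rand(4, 4, 3))
print(out.shape)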
2409.11140 | Andrzej Perzanowski | Andrzej Perzanowski and Tony Lindeberg | Scale generalisation properties of extended scale-covariant and
scale-invariant Gaussian derivative networks on image datasets with spatial
scaling variations | 52 pages, 24 figures, 18 tables | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an in-depth analysis of the scale generalisation
properties of the scale-covariant and scale-invariant Gaussian derivative
networks, complemented with both conceptual and algorithmic extensions. For
this purpose, Gaussian derivative networks (GaussDerNets) are evaluated on new
rescaled versions of the Fashion-MNIST and the CIFAR-10 datasets, with spatial
scaling variations over a factor of 4 in the testing data that are not present
in the training data. Additionally, evaluations on the previously existing STIR
datasets show that the GaussDerNets achieve better scale generalisation than
previously reported on these datasets for other types of deep networks.
We first experimentally demonstrate that the GaussDerNets have quite good
scale generalisation properties on the new datasets, and that average pooling
of feature responses over scales may sometimes also lead to better results than
the previously used approach of max pooling over scales. Then, we demonstrate
that using a spatial max pooling mechanism after the final layer enables
localisation of non-centred objects in the image domain, with maintained scale
generalisation properties. We also show that regularisation during training, by
applying dropout across the scale channels, referred to as scale-channel
dropout, improves both the performance and the scale generalisation.
In additional ablation studies, we demonstrate that discretisations of
GaussDerNets, based on the discrete analogue of the Gaussian kernel in
combination with central difference operators, perform best or among the best,
compared to a set of other discrete approximations of the Gaussian derivative
kernels.
Finally, by visualising the activation maps and the learned receptive fields,
we demonstrate that the GaussDerNets have very good explainability properties.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2024 12:51:04 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 10:38:59 GMT"
}
] | 2025-03-26T00:00:00 | [
[
"Perzanowski",
"Andrzej",
""
],
[
"Lindeberg",
"Tony",
""
]
] | TITLE: Scale generalisation properties of extended scale-covariant and
scale-invariant Gaussian derivative networks on image datasets with spatial
scaling variations
ABSTRACT: This paper presents an in-depth analysis of the scale generalisation
properties of the scale-covariant and scale-invariant Gaussian derivative
networks, complemented with both conceptual and algorithmic extensions. For
this purpose, Gaussian derivative networks (GaussDerNets) are evaluated on new
rescaled versions of the Fashion-MNIST and the CIFAR-10 datasets, with spatial
scaling variations over a factor of 4 in the testing data that are not present
in the training data. Additionally, evaluations on the previously existing STIR
datasets show that the GaussDerNets achieve better scale generalisation than
previously reported on these datasets for other types of deep networks.
We first experimentally demonstrate that the GaussDerNets have quite good
scale generalisation properties on the new datasets, and that average pooling
of feature responses over scales may sometimes also lead to better results than
the previously used approach of max pooling over scales. Then, we demonstrate
that using a spatial max pooling mechanism after the final layer enables
localisation of non-centred objects in the image domain, with maintained scale
generalisation properties. We also show that regularisation during training, by
applying dropout across the scale channels, referred to as scale-channel
dropout, improves both the performance and the scale generalisation.
In additional ablation studies, we demonstrate that discretisations of
GaussDerNets, based on the discrete analogue of the Gaussian kernel in
combination with central difference operators, perform best or among the best,
compared to a set of other discrete approximations of the Gaussian derivative
kernels.
Finally, by visualising the activation maps and the learned receptive fields,
we demonstrate that the GaussDerNets have very good explainability properties.
|
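The Gaussian derivative network record above compares max and average pooling over scale channels and introduces scale-channel dropout. A small NumPy sketch of those two operations applied to per-scale-channel logits (shapes, rates, and the rescaling convention are assumptions for illustration, not the paper's implementation):

import numpy as np

def pool_over_scales(scale_logits, mode="avg"):
    # Combine per-scale-channel predictions of shape (S, num_classes)
    # into a single prediction, by max or average pooling over scales.
    return scale_logits.max(axis=0) if mode == "max" else scale_logits.mean(axis=0)

def scale_channel_dropout(scale_logits, p=0.3, rng=None):
    # Training-time regularisation: randomly drop whole scale channels,
    # rescaling the survivors so the expected magnitude is preserved.
    rng = rng if rng is not None else np.random.default_rng()
    keep = rng.random(scale_logits.shape[0]) > p
    if not keep.any():                      # always keep at least one channel
        keep[rng.integers(scale_logits.shape[0])] = True
    return scale_logits * keep[:, None] / keep.mean()

logits = np.random.randn(8, 10)             # 8 scale channels, 10 classes
print(pool_over_scales(scale_channel_dropout(logits), mode="avg"))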