id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2412.01814 | Sanghwan Kim | Sanghwan Kim, Rui Xiao, Mariana-Iuliana Georgescu, Stephan Alaniz,
Zeynep Akata | COSMOS: Cross-Modality Self-Distillation for Vision Language
Pre-training | CVPR 2025 | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Vision-Language Models (VLMs) trained with contrastive loss have achieved
significant advancements in various vision and language tasks. However, the
global nature of the contrastive loss makes VLMs focus predominantly on
foreground objects, neglecting other crucial information in the image, which
limits their effectiveness in downstream tasks. To address these challenges, we
propose COSMOS: CrOSs-MOdality Self-distillation for vision-language
pre-training that integrates a novel text-cropping strategy and cross-attention
module into a self-supervised learning framework. We create global and local
views of images and texts (i.e., multi-modal augmentations), which are
essential for self-distillation in VLMs. We further introduce a cross-attention
module, enabling COSMOS to learn comprehensive cross-modal representations
optimized via a cross-modality self-distillation loss. COSMOS consistently
outperforms previous strong baselines on various zero-shot downstream tasks,
including retrieval, classification, and semantic segmentation. Additionally,
it surpasses CLIP-based models trained on larger datasets in visual perception
and contextual understanding tasks. Code is available at
https://github.com/ExplainableML/cosmos.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 18:56:06 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 16:07:40 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Kim",
"Sanghwan",
""
],
[
"Xiao",
"Rui",
""
],
[
"Georgescu",
"Mariana-Iuliana",
""
],
[
"Alaniz",
"Stephan",
""
],
[
"Akata",
"Zeynep",
""
]
] | TITLE: COSMOS: Cross-Modality Self-Distillation for Vision Language
Pre-training
ABSTRACT: Vision-Language Models (VLMs) trained with contrastive loss have achieved
significant advancements in various vision and language tasks. However, the
global nature of the contrastive loss makes VLMs focus predominantly on
foreground objects, neglecting other crucial information in the image, which
limits their effectiveness in downstream tasks. To address these challenges, we
propose COSMOS: CrOSs-MOdality Self-distillation for vision-language
pre-training that integrates a novel text-cropping strategy and cross-attention
module into a self-supervised learning framework. We create global and local
views of images and texts (i.e., multi-modal augmentations), which are
essential for self-distillation in VLMs. We further introduce a cross-attention
module, enabling COSMOS to learn comprehensive cross-modal representations
optimized via a cross-modality self-distillation loss. COSMOS consistently
outperforms previous strong baselines on various zero-shot downstream tasks,
including retrieval, classification, and semantic segmentation. Additionally,
it surpasses CLIP-based models trained on larger datasets in visual perception
and contextual understanding tasks. Code is available at
https://github.com/ExplainableML/cosmos.
|
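The COSMOS record above centres on a cross-modality self-distillation loss computed over global and local views of images and texts. Purely as a hedged illustration (not the authors' released code), the sketch below shows a DINO-style self-distillation term in which a detached teacher distribution from one modality supervises a student distribution from the other; the temperatures, shapes, and variable names are assumptions.

```python
# Hedged sketch of a cross-modality self-distillation term: a detached
# "teacher" view from one modality supervises a "student" view from the
# other. Temperatures and dimensions are illustrative assumptions.
import torch
import torch.nn.functional as F

def self_distill_loss(student_logits, teacher_logits,
                      student_temp=0.1, teacher_temp=0.04):
    # Sharpen the teacher (no gradient) and match it with cross-entropy.
    teacher_probs = F.softmax(teacher_logits.detach() / teacher_temp, dim=-1)
    student_logp = F.log_softmax(student_logits / student_temp, dim=-1)
    return -(teacher_probs * student_logp).sum(dim=-1).mean()

# Example: local image-view outputs distilled toward global text-view targets.
img_local_out = torch.randn(8, 256)   # student projections (batch, dims)
txt_global_out = torch.randn(8, 256)  # teacher projections (batch, dims)
loss = self_distill_loss(img_local_out, txt_global_out)
```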
2412.02071 | Zihui Xue | Zihui Xue, Joungbin An, Xitong Yang, Kristen Grauman | Progress-Aware Video Frame Captioning | Accepted by CVPR 2025, Project website:
https://vision.cs.utexas.edu/projects/ProgressCaptioner/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While image captioning provides isolated descriptions for individual images,
and video captioning offers one single narrative for an entire video clip, our
work explores an important middle ground: progress-aware video captioning at
the frame level. This novel task aims to generate temporally fine-grained
captions that not only accurately describe each frame but also capture the
subtle progression of actions throughout a video sequence. Despite the strong
capabilities of existing leading vision language models, they often struggle to
discern the nuances of frame-wise differences. To address this, we propose
ProgressCaptioner, a captioning model designed to capture the fine-grained
temporal dynamics within an action sequence. Alongside, we develop the FrameCap
dataset to support training and the FrameCapEval benchmark to assess caption
quality. The results demonstrate that ProgressCaptioner significantly surpasses
leading captioning models, producing precise captions that accurately capture
action progression and set a new standard for temporal precision in video
captioning. Finally, we showcase practical applications of our approach,
specifically in aiding keyframe selection and advancing video understanding,
highlighting its broad utility.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 01:21:28 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 02:26:56 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Xue",
"Zihui",
""
],
[
"An",
"Joungbin",
""
],
[
"Yang",
"Xitong",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: Progress-Aware Video Frame Captioning
ABSTRACT: While image captioning provides isolated descriptions for individual images,
and video captioning offers one single narrative for an entire video clip, our
work explores an important middle ground: progress-aware video captioning at
the frame level. This novel task aims to generate temporally fine-grained
captions that not only accurately describe each frame but also capture the
subtle progression of actions throughout a video sequence. Despite the strong
capabilities of existing leading vision language models, they often struggle to
discern the nuances of frame-wise differences. To address this, we propose
ProgressCaptioner, a captioning model designed to capture the fine-grained
temporal dynamics within an action sequence. Alongside, we develop the FrameCap
dataset to support training and the FrameCapEval benchmark to assess caption
quality. The results demonstrate that ProgressCaptioner significantly surpasses
leading captioning models, producing precise captions that accurately capture
action progression and set a new standard for temporal precision in video
captioning. Finally, we showcase practical applications of our approach,
specifically in aiding keyframe selection and advancing video understanding,
highlighting its broad utility.
|
2412.03907 | Yizhou Jin | Yizhou Jin, Jiahui Zhu, Guodong Wang, Shiwei Li, Jinjin Zhang, Xinyue
Liu, Qingjie Liu, Yunhong Wang | ONER: Online Experience Replay for Incremental Anomaly Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Incremental anomaly detection aims to sequentially identify defects in
industrial product lines but suffers from catastrophic forgetting, primarily
due to knowledge overwriting during parameter updates and feature conflicts
between tasks. In this work, we propose ONER (ONline Experience Replay), an
end-to-end framework that addresses these issues by synergistically integrating
two types of experience: (1) decomposed prompts, which dynamically generate
image-conditioned prompts from reusable modules to retain prior knowledge and thus
prevent knowledge overwriting, and (2) semantic prototypes, which enforce
separability in latent feature spaces at pixel and image levels to mitigate
cross-task feature conflicts. Extensive experiments demonstrate the superiority
of ONER, achieving state-of-the-art performance with +4.4% Pixel AUROC and
+28.3% Pixel AUPR improvements on the MVTec AD dataset over prior methods.
Remarkably, ONER achieves this with only 0.019M parameters and 5 training
epochs per task, confirming its efficiency and stability for real-world
industrial deployment.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 06:26:32 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jan 2025 09:40:53 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 09:06:21 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Jin",
"Yizhou",
""
],
[
"Zhu",
"Jiahui",
""
],
[
"Wang",
"Guodong",
""
],
[
"Li",
"Shiwei",
""
],
[
"Zhang",
"Jinjin",
""
],
[
"Liu",
"Xinyue",
""
],
[
"Liu",
"Qingjie",
""
],
[
"Wang",
"Yunhong",
""
]
] | TITLE: ONER: Online Experience Replay for Incremental Anomaly Detection
ABSTRACT: Incremental anomaly detection aims to sequentially identify defects in
industrial product lines but suffers from catastrophic forgetting, primarily
due to knowledge overwriting during parameter updates and feature conflicts
between tasks. In this work, we propose ONER (ONline Experience Replay), an
end-to-end framework that addresses these issues by synergistically integrating
two types of experience: (1) decomposed prompts, which dynamically generate
image-conditioned prompts from reusable modules to retain prior knowledge and thus
prevent knowledge overwriting, and (2) semantic prototypes, which enforce
separability in latent feature spaces at pixel and image levels to mitigate
cross-task feature conflicts. Extensive experiments demonstrate the superiority
of ONER, achieving state-of-the-art performance with +4.4% Pixel AUROC and
+28.3% Pixel AUPR improvements on the MVTec AD dataset over prior methods.
Remarkably, ONER achieves this with only 0.019M parameters and 5 training
epochs per task, confirming its efficiency and stability for real-world
industrial deployment.
|
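The ONER abstract above mentions semantic prototypes that enforce separability in the latent feature space. As a loose sketch under assumed shapes and a made-up margin (not the paper's loss), one way to express such a constraint is a margin between each feature's similarity to its own prototype and to the closest other prototype.

```python
# Illustrative prototype-separability loss: pull features toward their own
# class prototype and push them away from the nearest other prototype.
# Margin, normalisation, and shapes are assumptions for this sketch.
import torch
import torch.nn.functional as F

def prototype_separation_loss(features, prototypes, labels, margin=0.5):
    # features: (N, D), prototypes: (C, D), labels: (N,) long prototype indices.
    feats = F.normalize(features, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    sims = feats @ protos.t()                         # (N, C) cosine similarities
    own = sims.gather(1, labels[:, None]).squeeze(1)  # similarity to own prototype
    mask = F.one_hot(labels, protos.size(0)).bool()
    other = sims.masked_fill(mask, float("-inf")).max(dim=1).values
    return F.relu(margin - own + other).mean()
```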
2412.04234 | Shihua Huang | Shihua Huang, Zhichao Lu, Xiaodong Cun, Yongjun Yu, Xiao Zhou, and Xi
Shen | DEIM: DETR with Improved Matching for Fast Convergence | CVPR 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce DEIM, an innovative and efficient training framework designed to
accelerate convergence in real-time object detection with Transformer-based
architectures (DETR). To mitigate the sparse supervision inherent in one-to-one
(O2O) matching in DETR models, DEIM employs a Dense O2O matching strategy. This
approach increases the number of positive samples per image by incorporating
additional targets, using standard data augmentation techniques. While Dense
O2O matching speeds up convergence, it also introduces numerous low-quality
matches that could affect performance. To address this, we propose the
Matchability-Aware Loss (MAL), a novel loss function that optimizes matches
across various quality levels, enhancing the effectiveness of Dense O2O.
Extensive experiments on the COCO dataset validate the efficacy of DEIM. When
integrated with RT-DETR and D-FINE, it consistently boosts performance while
reducing training time by 50%. Notably, paired with RT-DETRv2, DEIM achieves
53.2% AP in a single day of training on an NVIDIA 4090 GPU. Additionally,
DEIM-trained real-time models outperform leading real-time object detectors,
with DEIM-D-FINE-L and DEIM-D-FINE-X achieving 54.7% and 56.5% AP at 124 and 78
FPS on an NVIDIA T4 GPU, respectively, without the need for additional data. We
believe DEIM sets a new baseline for advancements in real-time object
detection. Our code and pre-trained models are available at
https://github.com/ShihuaHuang95/DEIM.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 15:10:13 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 10:00:35 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 10:41:29 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Huang",
"Shihua",
""
],
[
"Lu",
"Zhichao",
""
],
[
"Cun",
"Xiaodong",
""
],
[
"Yu",
"Yongjun",
""
],
[
"Zhou",
"Xiao",
""
],
[
"Shen",
"Xi",
""
]
] | TITLE: DEIM: DETR with Improved Matching for Fast Convergence
ABSTRACT: We introduce DEIM, an innovative and efficient training framework designed to
accelerate convergence in real-time object detection with Transformer-based
architectures (DETR). To mitigate the sparse supervision inherent in one-to-one
(O2O) matching in DETR models, DEIM employs a Dense O2O matching strategy. This
approach increases the number of positive samples per image by incorporating
additional targets, using standard data augmentation techniques. While Dense
O2O matching speeds up convergence, it also introduces numerous low-quality
matches that could affect performance. To address this, we propose the
Matchability-Aware Loss (MAL), a novel loss function that optimizes matches
across various quality levels, enhancing the effectiveness of Dense O2O.
Extensive experiments on the COCO dataset validate the efficacy of DEIM. When
integrated with RT-DETR and D-FINE, it consistently boosts performance while
reducing training time by 50%. Notably, paired with RT-DETRv2, DEIM achieves
53.2% AP in a single day of training on an NVIDIA 4090 GPU. Additionally,
DEIM-trained real-time models outperform leading real-time object detectors,
with DEIM-D-FINE-L and DEIM-D-FINE-X achieving 54.7% and 56.5% AP at 124 and 78
FPS on an NVIDIA T4 GPU, respectively, without the need for additional data. We
believe DEIM sets a new baseline for advancements in real-time object
detection. Our code and pre-trained models are available at
https://github.com/ShihuaHuang95/DEIM.
|
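The DEIM abstract above attributes part of its gains to a Matchability-Aware Loss that weights matches by their quality rather than treating all Dense O2O matches equally. The snippet below is only a rough sketch of that idea as a quality-weighted binary cross-entropy; the exact formulation, exponent, and inputs in the paper may differ.

```python
# Rough sketch of a quality-aware classification loss: the positive term is
# scaled by match quality (e.g. IoU), so low-quality Dense O2O matches
# contribute less. gamma and the exact weighting are assumptions.
import torch

def quality_weighted_bce(pred_logits, match_quality, gamma=1.5, eps=1e-8):
    # pred_logits: (N,) classification logits for matched queries.
    # match_quality: (N,) values in [0, 1], e.g. IoU with the assigned target.
    p = torch.sigmoid(pred_logits)
    q = match_quality.clamp(0.0, 1.0)
    pos = (q ** gamma) * torch.log(p + eps)
    neg = (1.0 - q) * torch.log(1.0 - p + eps)
    return -(pos + neg).mean()
```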
2412.04880 | Pawel Pieta | Pawel Tomasz Pieta, Peter Winkel Rasmussen, Anders Bjorholm Dahl,
Jeppe Revall Frisvad, Siavash Arjomand Bigdeli, Carsten Gundlach, Anders
Nymark Christensen | MozzaVID: Mozzarella Volumetric Image Dataset | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Owing to the complexity of volumetric imaging, there is a shortage of
established datasets useful for benchmarking volumetric deep-learning models.
As a consequence, new and existing models are not easily comparable, limiting
the development of architectures optimized specifically for volumetric data. To
counteract this trend, we introduce MozzaVID - a large, clean, and versatile
volumetric classification dataset. Our dataset contains X-ray computed
tomography (CT) images of mozzarella microstructure and enables the
classification of 25 cheese types and 149 cheese samples. We provide data in
three different resolutions, resulting in three dataset instances containing
from 591 to 37,824 images. While being general-purpose, the dataset also
facilitates investigating mozzarella structure properties. The structure of
food directly affects its functional properties and thus its consumption
experience. Understanding food structure helps tune the production and
mimicking it enables sustainable alternatives to animal-derived food products.
The complex and disordered nature of food structures brings a unique challenge,
where a choice of appropriate imaging method, scale, and sample size is not
trivial. With this dataset we aim to address these complexities, contributing
to more robust structural analysis models. The dataset can be downloaded from:
https://archive.compute.dtu.dk/files/public/projects/MozzaVID/.
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2024 09:23:31 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 09:07:32 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Pieta",
"Pawel Tomasz",
""
],
[
"Rasmussen",
"Peter Winkel",
""
],
[
"Dahl",
"Anders Bjorholm",
""
],
[
"Frisvad",
"Jeppe Revall",
""
],
[
"Bigdeli",
"Siavash Arjomand",
""
],
[
"Gundlach",
"Carsten",
""
],
[
"Christensen",
"Anders Nymark",
""
]
] | TITLE: MozzaVID: Mozzarella Volumetric Image Dataset
ABSTRACT: Owing to the complexity of volumetric imaging, there is a shortage of
established datasets useful for benchmarking volumetric deep-learning models.
As a consequence, new and existing models are not easily comparable, limiting
the development of architectures optimized specifically for volumetric data. To
counteract this trend, we introduce MozzaVID - a large, clean, and versatile
volumetric classification dataset. Our dataset contains X-ray computed
tomography (CT) images of mozzarella microstructure and enables the
classification of 25 cheese types and 149 cheese samples. We provide data in
three different resolutions, resulting in three dataset instances containing
from 591 to 37,824 images. While being general-purpose, the dataset also
facilitates investigating mozzarella structure properties. The structure of
food directly affects its functional properties and thus its consumption
experience. Understanding food structure helps tune the production and
mimicking it enables sustainable alternatives to animal-derived food products.
The complex and disordered nature of food structures brings a unique challenge,
where a choice of appropriate imaging method, scale, and sample size is not
trivial. With this dataset we aim to address these complexities, contributing
to more robust structural analysis models. The dataset can be downloaded from:
https://archive.compute.dtu.dk/files/public/projects/MozzaVID/.
|
2412.08259 | Zhiqiang Yuan | Zhiqiang Yuan, Jiapei Zhang, Ying Deng, Yeshuang Zhu, Jie Zhou,
Jinchao Zhang | VSD2M: A Large-scale Vision-language Sticker Dataset for Multi-frame
Animated Sticker Generation | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | As a common form of communication in social media, stickers have won users' love in
internet scenarios for their ability to convey emotions in a vivid, cute,
and interesting way. People prefer to get an appropriate sticker through
retrieval rather than creation for the reason that creating a sticker is
time-consuming and relies on rule-based creative tools with limited
capabilities. Nowadays, advanced text-to-video algorithms have spawned numerous
general video generation systems that allow users to customize high-quality,
photo-realistic videos by only providing simple text prompts. However, creating
customized animated stickers, which have lower frame rates and more abstract
semantics than videos, is greatly hindered by difficulties in data acquisition
and incomplete benchmarks. To facilitate the exploration of researchers in
the animated sticker generation (ASG) field, we first construct the currently
largest vision-language sticker dataset named VSD2M at a two-million scale that
contains static and animated stickers. Secondly, to improve the performance of
traditional video generation methods on ASG tasks with discrete
characteristics, we propose a Spatial Temporal Interaction (STI) layer that
utilizes semantic interaction and detail preservation to address the issue of
insufficient information utilization. Moreover, we train baselines with several
video generation methods (e.g., transformer-based, diffusion-based methods) on
VSD2M and conduct a detailed analysis to establish systemic supervision on the ASG
task. To the best of our knowledge, this is the most comprehensive large-scale
benchmark for multi-frame animated sticker generation, and we hope this work
can provide valuable inspiration for other scholars in intelligent creation.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 10:11:41 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 11:54:06 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Yuan",
"Zhiqiang",
""
],
[
"Zhang",
"Jiapei",
""
],
[
"Deng",
"Ying",
""
],
[
"Zhu",
"Yeshuang",
""
],
[
"Zhou",
"Jie",
""
],
[
"Zhang",
"Jinchao",
""
]
] | TITLE: VSD2M: A Large-scale Vision-language Sticker Dataset for Multi-frame
Animated Sticker Generation
ABSTRACT: As a common form of communication in social media, stickers have won users' love in
internet scenarios for their ability to convey emotions in a vivid, cute,
and interesting way. People prefer to get an appropriate sticker through
retrieval rather than creation for the reason that creating a sticker is
time-consuming and relies on rule-based creative tools with limited
capabilities. Nowadays, advanced text-to-video algorithms have spawned numerous
general video generation systems that allow users to customize high-quality,
photo-realistic videos by only providing simple text prompts. However, creating
customized animated stickers, which have lower frame rates and more abstract
semantics than videos, is greatly hindered by difficulties in data acquisition
and incomplete benchmarks. To facilitate the exploration of researchers in
the animated sticker generation (ASG) field, we first construct the currently
largest vision-language sticker dataset named VSD2M at a two-million scale that
contains static and animated stickers. Secondly, to improve the performance of
traditional video generation methods on ASG tasks with discrete
characteristics, we propose a Spatial Temporal Interaction (STI) layer that
utilizes semantic interaction and detail preservation to address the issue of
insufficient information utilization. Moreover, we train baselines with several
video generation methods (e.g., transformer-based, diffusion-based methods) on
VSD2M and conduct a detailed analysis to establish systemic supervision on the ASG
task. To the best of our knowledge, this is the most comprehensive large-scale
benchmark for multi-frame animated sticker generation, and we hope this work
can provide valuable inspiration for other scholars in intelligent creation.
|
2412.08614 | Fan Lu | Fan Lu, Wei Wu, Kecheng Zheng, Shuailei Ma, Biao Gong, Jiawei Liu, Wei
Zhai, Yang Cao, Yujun Shen, Zheng-Jun Zha | Benchmarking Large Vision-Language Models via Directed Scene Graph for
Comprehensive Image Captioning | Accepted by CVPR2025. Code and Dataset:
https://github.com/LuFan31/CompreCap | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating detailed captions comprehending text-rich visual content in images
has received growing attention for Large Vision-Language Models (LVLMs).
However, few studies have developed benchmarks specifically tailored for
detailed captions to measure their accuracy and comprehensiveness. In this
paper, we introduce a detailed caption benchmark, termed CompreCap, to
evaluate the visual context from a directed scene graph view. Concretely, we
first manually segment the image into semantically meaningful regions (i.e.,
semantic segmentation mask) according to common-object vocabulary, while also
distinguishing attributes of objects within all those regions. Then directional
relation labels of these objects are annotated to compose a directed scene
graph that can well encode rich compositional information of the image. Based
on our directed scene graph, we develop a pipeline to assess the generated
detailed captions from LVLMs on multiple levels, including the object-level
coverage, the accuracy of attribute descriptions, the score of key
relationships, etc. Experimental results on the CompreCap dataset confirm that
our evaluation method aligns closely with human evaluation scores across LVLMs.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 18:37:42 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Dec 2024 06:33:36 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 06:10:20 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Lu",
"Fan",
""
],
[
"Wu",
"Wei",
""
],
[
"Zheng",
"Kecheng",
""
],
[
"Ma",
"Shuailei",
""
],
[
"Gong",
"Biao",
""
],
[
"Liu",
"Jiawei",
""
],
[
"Zhai",
"Wei",
""
],
[
"Cao",
"Yang",
""
],
[
"Shen",
"Yujun",
""
],
[
"Zha",
"Zheng-Jun",
""
]
] | TITLE: Benchmarking Large Vision-Language Models via Directed Scene Graph for
Comprehensive Image Captioning
ABSTRACT: Generating detailed captions comprehending text-rich visual content in images
has received growing attention for Large Vision-Language Models (LVLMs).
However, few studies have developed benchmarks specifically tailored for
detailed captions to measure their accuracy and comprehensiveness. In this
paper, we introduce a detailed caption benchmark, termed CompreCap, to
evaluate the visual context from a directed scene graph view. Concretely, we
first manually segment the image into semantically meaningful regions (i.e.,
semantic segmentation mask) according to common-object vocabulary, while also
distinguishing attributes of objects within all those regions. Then directional
relation labels of these objects are annotated to compose a directed scene
graph that can well encode rich compositional information of the image. Based
on our directed scene graph, we develop a pipeline to assess the generated
detailed captions from LVLMs on multiple levels, including the object-level
coverage, the accuracy of attribute descriptions, the score of key
relationships, etc. Experimental results on the CompreCap dataset confirm that
our evaluation method aligns closely with human evaluation scores across LVLMs.
|
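The CompreCap abstract above evaluates detailed captions against a directed scene graph at several levels, starting with object-level coverage. The toy function below shows only the spirit of such a coverage check using naive substring matching; the benchmark's actual pipeline is richer, and the example caption and objects are invented.

```python
# Toy object-level coverage check: what fraction of annotated scene-graph
# objects does a generated caption mention? Matching here is naive substring
# search, used only to illustrate the metric's shape.
def object_coverage(caption: str, annotated_objects: list) -> float:
    text = caption.lower()
    if not annotated_objects:
        return 0.0
    hits = sum(1 for obj in annotated_objects if obj.lower() in text)
    return hits / len(annotated_objects)

print(object_coverage(
    "A brown dog sleeps on a red sofa next to a lamp.",
    ["dog", "sofa", "lamp", "window"],
))  # -> 0.75
```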
2412.08687 | Lisa Dunlap | Christopher Chou, Lisa Dunlap, Koki Mashita, Krishna Mandal, Trevor
Darrell, Ion Stoica, Joseph E. Gonzalez, Wei-Lin Chiang | VisionArena: 230K Real World User-VLM Conversations with Preference
Labels | updated for CVPR Camera Ready | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | With the growing adoption and capabilities of vision-language models (VLMs)
comes the need for benchmarks that capture authentic user-VLM interactions. In
response, we create VisionArena, a dataset of 230K real-world conversations
between users and VLMs. Collected from Chatbot Arena - an open-source platform
where users interact with VLMs and submit preference votes - VisionArena spans
73K unique users, 45 VLMs, and 138 languages. Our dataset contains three
subsets: VisionArena-Chat, 200k single and multi-turn conversations between a
user and a VLM; VisionArena-Battle, 30K conversations comparing two anonymous
VLMs with user preference votes; and VisionArena-Bench, an automatic benchmark
of 500 diverse user prompts that efficiently approximate the live Chatbot Arena
model rankings. Additionally, we highlight the types of questions asked by
users, the influence of response style on preference, and areas where models
often fail. We find open-ended tasks like captioning and humor are highly
style-dependent, and current VLMs struggle with spatial reasoning and planning
tasks. Lastly, we show finetuning the same base model on VisionArena-Chat
outperforms Llava-Instruct-158K, with a 17-point gain on MMMU and a 46-point
gain on the WildVision benchmark. Dataset at https://huggingface.co/lmarena-ai
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 18:59:46 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Dec 2024 23:12:23 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 22:17:42 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Chou",
"Christopher",
""
],
[
"Dunlap",
"Lisa",
""
],
[
"Mashita",
"Koki",
""
],
[
"Mandal",
"Krishna",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Stoica",
"Ion",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Chiang",
"Wei-Lin",
""
]
] | TITLE: VisionArena: 230K Real World User-VLM Conversations with Preference
Labels
ABSTRACT: With the growing adoption and capabilities of vision-language models (VLMs)
comes the need for benchmarks that capture authentic user-VLM interactions. In
response, we create VisionArena, a dataset of 230K real-world conversations
between users and VLMs. Collected from Chatbot Arena - an open-source platform
where users interact with VLMs and submit preference votes - VisionArena spans
73K unique users, 45 VLMs, and 138 languages. Our dataset contains three
subsets: VisionArena-Chat, 200k single and multi-turn conversations between a
user and a VLM; VisionArena-Battle, 30K conversations comparing two anonymous
VLMs with user preference votes; and VisionArena-Bench, an automatic benchmark
of 500 diverse user prompts that efficiently approximate the live Chatbot Arena
model rankings. Additionally, we highlight the types of questions asked by
users, the influence of response style on preference, and areas where models
often fail. We find open-ended tasks like captioning and humor are highly
style-dependent, and current VLMs struggle with spatial reasoning and planning
tasks. Lastly, we show finetuning the same base model on VisionArena-Chat
outperforms Llava-Instruct-158K, with a 17-point gain on MMMU and a 46-point
gain on the WildVision benchmark. Dataset at https://huggingface.co/lmarena-ai
|
2412.18031 | Chiyu Wei | Chiyu Wei, Sean Noh, Ho-Chun Herbert Chang | Faces Speak Louder Than Words: Emotions Versus Textual Sentiment in the
2024 USA Presidential Election | 4 pages. 4 figures | null | 10.1145/3701716.3715556 | null | cs.SI | http://creativecommons.org/licenses/by/4.0/ | Sentiment analysis of textual content has become a well-established solution
for analyzing social media data. However, with the rise of images and videos as
primary modes of expression, more information on social media is conveyed
visually. Among these, facial expressions serve as one of the most direct
indicators of emotional content in images. This study analyzes a dataset of
Instagram posts related to the 2024 U.S. presidential election, spanning April
5, 2024, to August 9, 2024, to compare the relationship between textual and
facial sentiment. Our findings reveal that facial expressions align with text
sentiment, where positive sentiment aligns with happiness, although neutral and
negative facial expressions provide critical information beyond negative
valence. Furthermore, during politically significant events such as Donald
Trump's conviction and assassination attempt, posts depicting Trump showed a
12% increase in negative sentiment. Crucially, Democrats use their opponent's
fear to depict weakness, whereas Republicans use their candidate's anger to
depict resilience. Our research highlights the potential of integrating facial
expression analysis with textual sentiment analysis to uncover deeper insights
into social media dynamics.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 22:51:21 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 20:16:36 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wei",
"Chiyu",
""
],
[
"Noh",
"Sean",
""
],
[
"Chang",
"Ho-Chun Herbert",
""
]
] | TITLE: Faces Speak Louder Than Words: Emotions Versus Textual Sentiment in the
2024 USA Presidential Election
ABSTRACT: Sentiment analysis of textual content has become a well-established solution
for analyzing social media data. However, with the rise of images and videos as
primary modes of expression, more information on social media is conveyed
visually. Among these, facial expressions serve as one of the most direct
indicators of emotional content in images. This study analyzes a dataset of
Instagram posts related to the 2024 U.S. presidential election, spanning April
5, 2024, to August 9, 2024, to compare the relationship between textual and
facial sentiment. Our findings reveal that facial expressions align with text
sentiment, where positive sentiment aligns with happiness, although neutral and
negative facial expressions provide critical information beyond negative
valence. Furthermore, during politically significant events such as Donald
Trump's conviction and assassination attempt, posts depicting Trump showed a
12% increase in negative sentiment. Crucially, Democrats use their opponent's
fear to depict weakness, whereas Republicans use their candidate's anger to
depict resilience. Our research highlights the potential of integrating facial
expression analysis with textual sentiment analysis to uncover deeper insights
into social media dynamics.
|
2412.18951 | Muhammet Esat Kalfaoglu | Muhammet Esat Kalfaoglu and Halil Ibrahim Ozturk and Ozsel Kilinc and
Alptekin Temizel | TopoBDA: Towards Bezier Deformable Attention for Road Topology
Understanding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding road topology is crucial for autonomous driving. This paper
introduces TopoBDA (Topology with Bezier Deformable Attention), a novel
approach that enhances road topology comprehension by leveraging Bezier
Deformable Attention (BDA). TopoBDA processes multi-camera 360-degree imagery
to generate Bird's Eye View (BEV) features, which are refined through a
transformer decoder employing BDA. BDA utilizes Bezier control points to drive
the deformable attention mechanism, improving the detection and representation
of elongated and thin polyline structures, such as lane centerlines.
Additionally, TopoBDA integrates two auxiliary components: an instance mask
formulation loss and a one-to-many set prediction loss strategy, to further
refine centerline detection and enhance road topology understanding.
Experimental evaluations on the OpenLane-V2 dataset demonstrate that TopoBDA
outperforms existing methods, achieving state-of-the-art results in centerline
detection and topology reasoning. TopoBDA also achieves the best results on the
OpenLane-V1 dataset in 3D lane detection. Further experiments on integrating
multi-modal data -- such as LiDAR, radar, and SDMap -- show that multimodal
inputs can further enhance performance in road topology understanding.
| [
{
"version": "v1",
"created": "Wed, 25 Dec 2024 17:31:54 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 08:29:03 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Kalfaoglu",
"Muhammet Esat",
""
],
[
"Ozturk",
"Halil Ibrahim",
""
],
[
"Kilinc",
"Ozsel",
""
],
[
"Temizel",
"Alptekin",
""
]
] | TITLE: TopoBDA: Towards Bezier Deformable Attention for Road Topology
Understanding
ABSTRACT: Understanding road topology is crucial for autonomous driving. This paper
introduces TopoBDA (Topology with Bezier Deformable Attention), a novel
approach that enhances road topology comprehension by leveraging Bezier
Deformable Attention (BDA). TopoBDA processes multi-camera 360-degree imagery
to generate Bird's Eye View (BEV) features, which are refined through a
transformer decoder employing BDA. BDA utilizes Bezier control points to drive
the deformable attention mechanism, improving the detection and representation
of elongated and thin polyline structures, such as lane centerlines.
Additionally, TopoBDA integrates two auxiliary components: an instance mask
formulation loss and a one-to-many set prediction loss strategy, to further
refine centerline detection and enhance road topology understanding.
Experimental evaluations on the OpenLane-V2 dataset demonstrate that TopoBDA
outperforms existing methods, achieving state-of-the-art results in centerline
detection and topology reasoning. TopoBDA also achieves the best results on the
OpenLane-V1 dataset in 3D lane detection. Further experiments on integrating
multi-modal data -- such as LiDAR, radar, and SDMap -- show that multimodal
inputs can further enhance performance in road topology understanding.
|
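TopoBDA, described above, parameterises lane centerlines with Bezier control points that drive its deformable attention. Independent of the paper's code, the small helper below shows the standard way to sample points along a cubic Bezier curve from four control points, which is the kind of reference-point generation such an attention mechanism needs; the usage values are illustrative.

```python
# Sample points along a cubic Bezier curve defined by 4 control points.
# Such sampled points could serve as reference locations for deformable
# attention over thin polyline structures; usage here is illustrative.
import torch

def cubic_bezier_points(ctrl, n_samples=16):
    # ctrl: (4, 2) control points; returns (n_samples, 2) curve points.
    t = torch.linspace(0.0, 1.0, n_samples).unsqueeze(-1)  # (n, 1)
    b0 = (1 - t) ** 3
    b1 = 3 * (1 - t) ** 2 * t
    b2 = 3 * (1 - t) * t ** 2
    b3 = t ** 3
    return b0 * ctrl[0] + b1 * ctrl[1] + b2 * ctrl[2] + b3 * ctrl[3]

curve = cubic_bezier_points(torch.tensor([[0., 0.], [1., 2.], [3., 2.], [4., 0.]]))
```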
2501.01645 | Heqing Zou | Heqing Zou, Tianze Luo, Guiyang Xie, Victor (Xiao Jie) Zhang, Fengmao
Lv, Guangcong Wang, Junyang Chen, Zhuochen Wang, Hansheng Zhang and Huaijian
Zhang | HLV-1K: A Large-scale Hour-Long Video Benchmark for Time-Specific Long
Video Understanding | Accepted to ICME 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal large language models have become a popular topic in deep visual
understanding due to many promising real-world applications. However, hour-long
video understanding, spanning over one hour and containing tens of thousands of
visual frames, remains under-explored because of 1) challenging long-term video
analyses, 2) inefficient large-model approaches, and 3) lack of large-scale
benchmark datasets. Among them, in this paper, we focus on building a
large-scale hour-long long video benchmark, HLV-1K, designed to evaluate long
video understanding models. HLV-1K comprises 1009 hour-long videos with 14,847
high-quality question answering (QA) and multiple-choice question answering (MCQA)
pairs with time-aware queries and diverse annotations, covering frame-level,
within-event-level, cross-event-level, and long-term reasoning tasks. We
evaluate our benchmark using existing state-of-the-art methods and demonstrate
its value for testing deep long video understanding capabilities at different
levels and for various tasks. This includes promoting future long video
understanding tasks at a granular level, such as deep understanding of long
live videos, meeting recordings, and movies.
| [
{
"version": "v1",
"created": "Fri, 3 Jan 2025 05:32:37 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 02:12:50 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zou",
"Heqing",
"",
"Xiao Jie"
],
[
"Luo",
"Tianze",
"",
"Xiao Jie"
],
[
"Xie",
"Guiyang",
"",
"Xiao Jie"
],
[
"Victor",
"",
"",
"Xiao Jie"
],
[
"Zhang",
"",
""
],
[
"Lv",
"Fengmao",
""
],
[
"Wang",
"Guangcong",
""
],
[
"Chen",
"Junyang",
""
],
[
"Wang",
"Zhuochen",
""
],
[
"Zhang",
"Hansheng",
""
],
[
"Zhang",
"Huaijian",
""
]
] | TITLE: HLV-1K: A Large-scale Hour-Long Video Benchmark for Time-Specific Long
Video Understanding
ABSTRACT: Multimodal large language models have become a popular topic in deep visual
understanding due to many promising real-world applications. However, hour-long
video understanding, spanning over one hour and containing tens of thousands of
visual frames, remains under-explored because of 1) challenging long-term video
analyses, 2) inefficient large-model approaches, and 3) lack of large-scale
benchmark datasets. Among them, in this paper, we focus on building a
large-scale hour-long long video benchmark, HLV-1K, designed to evaluate long
video understanding models. HLV-1K comprises 1009 hour-long videos with 14,847
high-quality question answering (QA) and multiple-choice question answering (MCQA)
pairs with time-aware queries and diverse annotations, covering frame-level,
within-event-level, cross-event-level, and long-term reasoning tasks. We
evaluate our benchmark using existing state-of-the-art methods and demonstrate
its value for testing deep long video understanding capabilities at different
levels and for various tasks. This includes promoting future long video
understanding tasks at a granular level, such as deep understanding of long
live videos, meeting recordings, and movies.
|
2501.12206 | Sajib Acharjee Dip | Kazi Hasan Ibn Arif, Sajib Acharjee Dip, Khizar Hussain, Lang Zhang,
Chris Thomas | PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in
Large Vision-Language Model | 6 pages, 4 tables, 3 figures | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Vision Language Models (LVLMs) have demonstrated remarkable
capabilities in understanding and describing visual content, achieving
state-of-the-art performance across various vision-language tasks. However,
these models often generate descriptions containing objects or details that are
absent in the input image, a phenomenon commonly known as hallucination. Our
work investigates the key reasons behind this issue by analyzing the pattern of
self-attention in transformer layers. We find that hallucinations often arise
from the progressive weakening of attention weight to visual tokens in the
deeper layers of the LLM. Some previous works naively boost the attention of
all visual tokens to mitigate this issue, resulting in suboptimal hallucination
reduction. To address this, we identify two critical sets of visual tokens that
facilitate the transfer of visual information from the vision encoder to the
LLM. Local tokens encode grounded information about objects present in an
image, while summary tokens capture the overall aggregated representation of
the image. Importantly, these two sets of tokens require different levels of
weight enhancement. To this end, we propose \textbf{PAINT} (\textbf{P}aying
\textbf{A}ttention to \textbf{IN}formed \textbf{T}okens), a plug-and-play
framework that intervenes in the self-attention mechanism of the LLM,
selectively boosting the attention weights of local and summary tokens with
experimentally learned margins. Evaluation on the MSCOCO image captioning
dataset demonstrates that our approach reduces hallucination rates by up to
62.3\% compared to baseline models while maintaining accuracy. Code is
available at
\href{https://github.com/hasanar1f/PAINT}{https://github.com/hasanar1f/PAINT}
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 15:22:31 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 23:02:52 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 01:49:37 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Arif",
"Kazi Hasan Ibn",
""
],
[
"Dip",
"Sajib Acharjee",
""
],
[
"Hussain",
"Khizar",
""
],
[
"Zhang",
"Lang",
""
],
[
"Thomas",
"Chris",
""
]
] | TITLE: PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in
Large Vision-Language Model
ABSTRACT: Large Vision Language Models (LVLMs) have demonstrated remarkable
capabilities in understanding and describing visual content, achieving
state-of-the-art performance across various vision-language tasks. However,
these models often generate descriptions containing objects or details that are
absent in the input image, a phenomenon commonly known as hallucination. Our
work investigates the key reasons behind this issue by analyzing the pattern of
self-attention in transformer layers. We find that hallucinations often arise
from the progressive weakening of attention weight to visual tokens in the
deeper layers of the LLM. Some previous works naively boost the attention of
all visual tokens to mitigate this issue, resulting in suboptimal hallucination
reduction. To address this, we identify two critical sets of visual tokens that
facilitate the transfer of visual information from the vision encoder to the
LLM. Local tokens encode grounded information about objects present in an
image, while summary tokens capture the overall aggregated representation of
the image. Importantly, these two sets of tokens require different levels of
weight enhancement. To this end, we propose \textbf{PAINT} (\textbf{P}aying
\textbf{A}ttention to \textbf{IN}formed \textbf{T}okens), a plug-and-play
framework that intervenes in the self-attention mechanism of the LLM,
selectively boosting the attention weights of local and summary tokens with
experimentally learned margins. Evaluation on the MSCOCO image captioning
dataset demonstrates that our approach reduces hallucination rates by up to
62.3\% compared to baseline models while maintaining accuracy. Code is
available at
\href{https://github.com/hasanar1f/PAINT}{https://github.com/hasanar1f/PAINT}
|
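PAINT, per the abstract above, intervenes in the LLM's self-attention by boosting the weights of "local" and "summary" visual tokens by different amounts. The sketch below shows one generic way such an intervention could look, adding margins to pre-softmax scores at chosen key positions; the margins, indices, and placement are assumptions, not the released implementation.

```python
# Generic sketch of boosting attention toward selected visual tokens:
# add (experimentally chosen) margins to the pre-softmax scores at local
# and summary key positions. All values here are illustrative.
import torch
import torch.nn.functional as F

def boosted_attention(scores, local_idx, summary_idx,
                      local_margin=0.2, summary_margin=0.1):
    # scores: (heads, q_len, k_len) pre-softmax attention scores.
    scores = scores.clone()
    scores[..., local_idx] += local_margin
    scores[..., summary_idx] += summary_margin
    return F.softmax(scores, dim=-1)

attn = boosted_attention(torch.randn(8, 32, 128),
                         local_idx=[5, 6, 7], summary_idx=[0])
```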
2501.13274 | Hao Yuan Bai | Hao Yuan Bai, Xue Liu | T-Graphormer: Using Transformers for Spatiotemporal Forecasting | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Spatiotemporal data is ubiquitous, and forecasting it has important
applications in many domains. However, its complex cross-component dependencies
and non-linear temporal dynamics can be challenging for traditional techniques.
Existing methods address this by learning the two dimensions separately. Here,
we introduce Temporal Graphormer (T-Graphormer), a Transformer-based approach
capable of modelling spatiotemporal correlations simultaneously. By adding
temporal encodings in the Graphormer architecture, each node attends to all
other tokens within the graph sequence, enabling the model to learn rich
spacetime patterns with minimal predefined inductive biases. We show the
effectiveness of T-Graphormer on real-world traffic prediction benchmark
datasets. Compared to state-of-the-art methods, T-Graphormer reduces root mean
squared error (RMSE) and mean absolute percentage error (MAPE) by up to 20% and
10%.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2025 23:32:29 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jan 2025 04:55:51 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 07:43:36 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Bai",
"Hao Yuan",
""
],
[
"Liu",
"Xue",
""
]
] | TITLE: T-Graphormer: Using Transformers for Spatiotemporal Forecasting
ABSTRACT: Spatiotemporal data is ubiquitous, and forecasting it has important
applications in many domains. However, its complex cross-component dependencies
and non-linear temporal dynamics can be challenging for traditional techniques.
Existing methods address this by learning the two dimensions separately. Here,
we introduce Temporal Graphormer (T-Graphormer), a Transformer-based approach
capable of modelling spatiotemporal correlations simultaneously. By adding
temporal encodings in the Graphormer architecture, each node attends to all
other tokens within the graph sequence, enabling the model to learn rich
spacetime patterns with minimal predefined inductive biases. We show the
effectiveness of T-Graphormer on real-world traffic prediction benchmark
datasets. Compared to state-of-the-art methods, T-Graphormer reduces root mean
squared error (RMSE) and mean absolute percentage error (MAPE) by up to 20% and
10%.
|
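T-Graphormer, summarised above, adds temporal encodings so that every node token can attend across the whole graph sequence. The module below is a minimal sketch of that token-construction step under assumed tensor shapes; it is not the paper's architecture, just the core idea of flattening (time, node) pairs into one token axis.

```python
# Minimal sketch: add a learned temporal encoding to node features and
# flatten (time, node) into a single token axis so standard self-attention
# can mix across space and time. Shapes and names are assumptions.
import torch
import torch.nn as nn

class TemporalTokens(nn.Module):
    def __init__(self, num_steps, d_model):
        super().__init__()
        self.temporal_emb = nn.Embedding(num_steps, d_model)

    def forward(self, x):
        # x: (batch, time, nodes, d_model) spatiotemporal node features.
        b, t, n, d = x.shape
        t_idx = torch.arange(t, device=x.device)
        x = x + self.temporal_emb(t_idx)[None, :, None, :]
        return x.reshape(b, t * n, d)  # one token per (time, node) pair

tokens = TemporalTokens(num_steps=12, d_model=64)(torch.randn(2, 12, 207, 64))
```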
2501.15891 | Bohan Zeng | Hailong Guo, Bohan Zeng, Yiren Song, Wentao Zhang, Chuang Zhang,
Jiaming Liu | Any2AnyTryon: Leveraging Adaptive Position Embeddings for Versatile
Virtual Clothing Tasks | 13 pages, 13 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Image-based virtual try-on (VTON) aims to generate a virtual try-on result by
transferring an input garment onto a target person's image. However, the
scarcity of paired garment-model data makes it challenging for existing methods
to achieve high generalization and quality in VTON. Also, it limits the ability
to generate mask-free try-ons. To tackle the data scarcity problem, approaches
such as Stable Garment and MMTryon use a synthetic data strategy, effectively
increasing the amount of paired data on the model side. However, existing
methods are typically limited to performing specific try-on tasks and lack
user-friendliness. To enhance the generalization and controllability of VTON
generation, we propose Any2AnyTryon, which can generate try-on results based on
different textual instructions and model garment images to meet various needs,
eliminating the reliance on masks, poses, or other conditions. Specifically, we
first construct the virtual try-on dataset LAION-Garment, the largest known
open-source garment try-on dataset. Then, we introduce adaptive position
embedding, which enables the model to generate satisfactory outfitted model
images or garment images based on input images of different sizes and
categories, significantly enhancing the generalization and controllability of
VTON generation. In our experiments, we demonstrate the effectiveness of our
Any2AnyTryon and compare it with existing methods. The results show that
Any2AnyTryon enables flexible, controllable, and high-quality image-based
virtual try-on generation. https://logn-2024.github.io/Any2anyTryonProjectPage
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2025 09:33:23 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 02:08:33 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Guo",
"Hailong",
""
],
[
"Zeng",
"Bohan",
""
],
[
"Song",
"Yiren",
""
],
[
"Zhang",
"Wentao",
""
],
[
"Zhang",
"Chuang",
""
],
[
"Liu",
"Jiaming",
""
]
] | TITLE: Any2AnyTryon: Leveraging Adaptive Position Embeddings for Versatile
Virtual Clothing Tasks
ABSTRACT: Image-based virtual try-on (VTON) aims to generate a virtual try-on result by
transferring an input garment onto a target person's image. However, the
scarcity of paired garment-model data makes it challenging for existing methods
to achieve high generalization and quality in VTON. Also, it limits the ability
to generate mask-free try-ons. To tackle the data scarcity problem, approaches
such as Stable Garment and MMTryon use a synthetic data strategy, effectively
increasing the amount of paired data on the model side. However, existing
methods are typically limited to performing specific try-on tasks and lack
user-friendliness. To enhance the generalization and controllability of VTON
generation, we propose Any2AnyTryon, which can generate try-on results based on
different textual instructions and model garment images to meet various needs,
eliminating the reliance on masks, poses, or other conditions. Specifically, we
first construct the virtual try-on dataset LAION-Garment, the largest known
open-source garment try-on dataset. Then, we introduce adaptive position
embedding, which enables the model to generate satisfactory outfitted model
images or garment images based on input images of different sizes and
categories, significantly enhancing the generalization and controllability of
VTON generation. In our experiments, we demonstrate the effectiveness of our
Any2AnyTryon and compare it with existing methods. The results show that
Any2AnyTryon enables flexible, controllable, and high-quality image-based
virtual try-on generation. https://logn-2024.github.io/Any2anyTryonProjectPage
|
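Any2AnyTryon, described above, relies on adaptive position embeddings so the model can handle inputs of different sizes and categories. One common way to adapt grid position embeddings to a new resolution is bicubic interpolation, sketched below under the assumption of a square token grid; this is a generic technique, not necessarily the paper's exact mechanism.

```python
# Generic sketch: resize 2-D grid positional embeddings to a new token grid
# via bicubic interpolation, assuming the original grid is square. A standard
# trick, shown only to illustrate size-adaptive position embeddings.
import torch
import torch.nn.functional as F

def resize_pos_embed(pos, new_hw):
    # pos: (1, H*W, D) positional embeddings; new_hw: (H_new, W_new).
    _, hw, d = pos.shape
    side = int(hw ** 0.5)
    grid = pos.reshape(1, side, side, d).permute(0, 3, 1, 2)       # (1, D, H, W)
    grid = F.interpolate(grid, size=new_hw, mode="bicubic", align_corners=False)
    return grid.permute(0, 2, 3, 1).reshape(1, new_hw[0] * new_hw[1], d)

resized = resize_pos_embed(torch.randn(1, 16 * 16, 768), (24, 18))
```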
2502.04847 | Qijun Gan | Qijun Gan, Yi Ren, Chen Zhang, Zhenhui Ye, Pan Xie, Xiang Yin, Zehuan
Yuan, Bingyue Peng, Jianke Zhu | HumanDiT: Pose-Guided Diffusion Transformer for Long-form Human Motion
Video Generation | https://agnjason.github.io/HumanDiT-page/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Human motion video generation has advanced significantly, while existing
methods still struggle with accurately rendering detailed body parts like hands
and faces, especially in long sequences and intricate motions. Current
approaches also rely on fixed resolution and struggle to maintain visual
consistency. To address these limitations, we propose HumanDiT, a pose-guided
Diffusion Transformer (DiT)-based framework trained on a large and wild dataset
containing 14,000 hours of high-quality video to produce high-fidelity videos
with fine-grained body rendering. Specifically, (i) HumanDiT, built on DiT,
supports numerous video resolutions and variable sequence lengths, facilitating
learning for long-sequence video generation; (ii) we introduce a prefix-latent
reference strategy to maintain personalized characteristics across extended
sequences. Furthermore, during inference, HumanDiT leverages Keypoint-DiT to
generate subsequent pose sequences, facilitating video continuation from static
images or existing videos. It also utilizes a Pose Adapter to enable pose
transfer with given sequences. Extensive experiments demonstrate its superior
performance in generating long-form, pose-accurate videos across diverse
scenarios.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2025 11:36:36 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Feb 2025 14:51:29 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 08:27:17 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Gan",
"Qijun",
""
],
[
"Ren",
"Yi",
""
],
[
"Zhang",
"Chen",
""
],
[
"Ye",
"Zhenhui",
""
],
[
"Xie",
"Pan",
""
],
[
"Yin",
"Xiang",
""
],
[
"Yuan",
"Zehuan",
""
],
[
"Peng",
"Bingyue",
""
],
[
"Zhu",
"Jianke",
""
]
] | TITLE: HumanDiT: Pose-Guided Diffusion Transformer for Long-form Human Motion
Video Generation
ABSTRACT: Human motion video generation has advanced significantly, while existing
methods still struggle with accurately rendering detailed body parts like hands
and faces, especially in long sequences and intricate motions. Current
approaches also rely on fixed resolution and struggle to maintain visual
consistency. To address these limitations, we propose HumanDiT, a pose-guided
Diffusion Transformer (DiT)-based framework trained on a large and wild dataset
containing 14,000 hours of high-quality video to produce high-fidelity videos
with fine-grained body rendering. Specifically, (i) HumanDiT, built on DiT,
supports numerous video resolutions and variable sequence lengths, facilitating
learning for long-sequence video generation; (ii) we introduce a prefix-latent
reference strategy to maintain personalized characteristics across extended
sequences. Furthermore, during inference, HumanDiT leverages Keypoint-DiT to
generate subsequent pose sequences, facilitating video continuation from static
images or existing videos. It also utilizes a Pose Adapter to enable pose
transfer with given sequences. Extensive experiments demonstrate its superior
performance in generating long-form, pose-accurate videos across diverse
scenarios.
|
2502.05167 | Ali Modarressi | Ali Modarressi, Hanieh Deilamsalehy, Franck Dernoncourt, Trung Bui,
Ryan A. Rossi, Seunghyun Yoon, Hinrich Sch\"utze | NoLiMa: Long-Context Evaluation Beyond Literal Matching | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent large language models (LLMs) support long contexts ranging from 128K
to 1M tokens. A popular method for evaluating these capabilities is the
needle-in-a-haystack (NIAH) test, which involves retrieving a "needle"
(relevant information) from a "haystack" (long irrelevant context). Extensions
of this approach include increasing distractors, fact chaining, and in-context
reasoning. However, in these benchmarks, models can exploit existing literal
matches between the needle and haystack to simplify the task. To address this,
we introduce NoLiMa, a benchmark extending NIAH with a carefully designed
needle set, where questions and needles have minimal lexical overlap, requiring
models to infer latent associations to locate the needle within the haystack.
We evaluate 12 popular LLMs that claim to support contexts of at least 128K
tokens. While they perform well in short contexts (<1K), performance degrades
significantly as context length increases. At 32K, for instance, 10 models drop
below 50% of their strong short-length baselines. Even GPT-4o, one of the
top-performing exceptions, experiences a reduction from an almost-perfect
baseline of 99.3% to 69.7%. Our analysis suggests these declines stem from the
increased difficulty the attention mechanism faces in longer contexts when
literal matches are absent, making it harder to retrieve relevant information.
We publicly release the dataset and evaluation code at
https://github.com/adobe-research/NoLiMa.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2025 18:49:46 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 13:23:30 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Modarressi",
"Ali",
""
],
[
"Deilamsalehy",
"Hanieh",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Bui",
"Trung",
""
],
[
"Rossi",
"Ryan A.",
""
],
[
"Yoon",
"Seunghyun",
""
],
[
"Schütze",
"Hinrich",
""
]
] | TITLE: NoLiMa: Long-Context Evaluation Beyond Literal Matching
ABSTRACT: Recent large language models (LLMs) support long contexts ranging from 128K
to 1M tokens. A popular method for evaluating these capabilities is the
needle-in-a-haystack (NIAH) test, which involves retrieving a "needle"
(relevant information) from a "haystack" (long irrelevant context). Extensions
of this approach include increasing distractors, fact chaining, and in-context
reasoning. However, in these benchmarks, models can exploit existing literal
matches between the needle and haystack to simplify the task. To address this,
we introduce NoLiMa, a benchmark extending NIAH with a carefully designed
needle set, where questions and needles have minimal lexical overlap, requiring
models to infer latent associations to locate the needle within the haystack.
We evaluate 12 popular LLMs that claim to support contexts of at least 128K
tokens. While they perform well in short contexts (<1K), performance degrades
significantly as context length increases. At 32K, for instance, 10 models drop
below 50% of their strong short-length baselines. Even GPT-4o, one of the
top-performing exceptions, experiences a reduction from an almost-perfect
baseline of 99.3% to 69.7%. Our analysis suggests these declines stem from the
increased difficulty the attention mechanism faces in longer contexts when
literal matches are absent, making it harder to retrieve relevant information.
We publicly release the dataset and evaluation code at
https://github.com/adobe-research/NoLiMa.
|
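NoLiMa, per the abstract above, extends needle-in-a-haystack testing with needles that share minimal vocabulary with the question. The snippet below sketches only the generic probe construction (place a needle at a chosen depth, then ask an indirectly worded question that requires a latent association); the example sentences and the query_model call are invented for illustration and are not items from the benchmark.

```python
# Generic needle-in-a-haystack probe: insert a "needle" fact at a chosen
# depth inside long filler context, then ask a question that avoids literal
# overlap with the needle. Example strings and query_model are placeholders.
def build_probe(filler_chunks, needle, depth=0.5):
    pos = int(len(filler_chunks) * depth)
    return "\n".join(filler_chunks[:pos] + [needle] + filler_chunks[pos:])

needle = "Maya mentioned she had just visited the Louvre that morning."
question = "Which character has recently been to Paris?"  # no lexical overlap
context = build_probe(["(irrelevant filler paragraph)"] * 2000, needle)
prompt = context + "\n\nQuestion: " + question
# answer = query_model(prompt)  # hypothetical LLM call
```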
2502.07022 | Arsene Fansi Tchango | Adriana Eufrosina Bora, Pierre-Luc St-Charles, Mirko Bronzi, Ars\`ene
Fansi Tchango, Bruno Rousseau, Kerrie Mengersen | AIMS.au: A Dataset for the Analysis of Modern Slavery Countermeasures in
Corporate Statements | Camera ready. ICLR 2025 | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Despite over a decade of legislative efforts to address modern slavery in the
supply chains of large corporations, the effectiveness of government oversight
remains hampered by the challenge of scrutinizing thousands of statements
annually. While Large Language Models (LLMs) can be considered a
well-established solution for the automatic analysis and summarization of documents,
recognizing concrete modern slavery countermeasures taken by companies and
differentiating those from vague claims remains a challenging task. To help
evaluate and fine-tune LLMs for the assessment of corporate statements, we
introduce a dataset composed of 5,731 modern slavery statements taken from the
Australian Modern Slavery Register and annotated at the sentence level. This
paper details the construction steps for the dataset that include the careful
design of annotation specifications, the selection and preprocessing of
statements, and the creation of high-quality annotation subsets for effective
model evaluations. To demonstrate our dataset's utility, we propose a machine
learning methodology for the detection of sentences relevant to mandatory
reporting requirements set by the Australian Modern Slavery Act. We then follow
this methodology to benchmark modern language models under zero-shot and
supervised learning settings.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 20:30:32 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Bora",
"Adriana Eufrosina",
""
],
[
"St-Charles",
"Pierre-Luc",
""
],
[
"Bronzi",
"Mirko",
""
],
[
"Tchango",
"Arsène Fansi",
""
],
[
"Rousseau",
"Bruno",
""
],
[
"Mengersen",
"Kerrie",
""
]
] | TITLE: AIMS.au: A Dataset for the Analysis of Modern Slavery Countermeasures in
Corporate Statements
ABSTRACT: Despite over a decade of legislative efforts to address modern slavery in the
supply chains of large corporations, the effectiveness of government oversight
remains hampered by the challenge of scrutinizing thousands of statements
annually. While Large Language Models (LLMs) can be considered a
well-established solution for the automatic analysis and summarization of documents,
recognizing concrete modern slavery countermeasures taken by companies and
differentiating those from vague claims remains a challenging task. To help
evaluate and fine-tune LLMs for the assessment of corporate statements, we
introduce a dataset composed of 5,731 modern slavery statements taken from the
Australian Modern Slavery Register and annotated at the sentence level. This
paper details the construction steps for the dataset that include the careful
design of annotation specifications, the selection and preprocessing of
statements, and the creation of high-quality annotation subsets for effective
model evaluations. To demonstrate our dataset's utility, we propose a machine
learning methodology for the detection of sentences relevant to mandatory
reporting requirements set by the Australian Modern Slavery Act. We then follow
this methodology to benchmark modern language models under zero-shot and
supervised learning settings.
|
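The AIMS.au record above describes benchmarking models on sentence-level detection of mandatory-reporting relevance. A minimal supervised baseline for that kind of sentence classification might look like the sketch below (TF-IDF plus logistic regression via scikit-learn); the sentences and labels are invented for illustration and are not AIMS.au annotations.

```python
# Hypothetical sentence-level relevance classifier; the training sentences and
# labels below are invented placeholders, not AIMS.au annotations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "We conducted supplier audits covering 80% of tier-one suppliers.",
    "Our company is committed to the highest ethical standards.",
    "Workers in high-risk regions received grievance-mechanism training.",
    "We value our customers and strive for excellence.",
]
labels = [1, 0, 1, 0]  # 1 = concrete countermeasure, 0 = vague claim

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["All new suppliers must complete a modern-slavery risk assessment."]))
```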
2502.08818 | arXiv Admin | Koinis Vassilis, Godfrey Milbourne, Harriet Featherstone, Xanthe
Peverell, Yorick Bletchley, Zachary Montford | Lexical Manifold Reconfiguration in Large Language Models: A Novel
Architectural Approach for Contextual Modulation | arXiv admin note: This paper has been withdrawn by arXiv due to
disputed and unverifiable authorship | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Contextual adaptation in token embeddings plays a central role in determining
how well language models maintain coherence and retain semantic relationships
over extended text sequences. Static embeddings often impose constraints on
lexical flexibility, leading to suboptimal performance when faced with complex
sentence structures or domain-specific terminology shifts. To address this
limitation, a structured approach was developed for dynamically reconfiguring
token embeddings through continuous geometric transformations, ensuring that
representations evolved in response to evolving discourse structures. A
manifold-based transformation mechanism was integrated to regulate lexical
positioning, allowing embeddings to undergo controlled shifts while preserving
linguistic relationships across varying textual contexts. Empirical evaluations
demonstrated that embedding reconfiguration contributed to reductions in
perplexity, improved lexical coherence, and enhanced sentence-level continuity,
particularly in structured and domain-adaptive text generation tasks.
Comparative analyses of embedding drift indicated that dynamically restructured
representations maintained stronger contextual consistency, reducing
misalignment in token dependencies while preserving fluency in language
modeling outputs. Computational overhead assessments confirmed that while
training complexity increased due to the iterative refinement of embeddings,
inference remained efficient, ensuring practical feasibility for real-time
generation. Evaluations across multiple datasets further demonstrated that
dynamically modulated embeddings exhibited broader lexical diversity, reducing
repetitive token patterns and enabling a more adaptable representation learning
process.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 22:11:07 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 15:58:26 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Vassilis",
"Koinis",
""
],
[
"Milbourne",
"Godfrey",
""
],
[
"Featherstone",
"Harriet",
""
],
[
"Peverell",
"Xanthe",
""
],
[
"Bletchley",
"Yorick",
""
],
[
"Montford",
"Zachary",
""
]
] | TITLE: Lexical Manifold Reconfiguration in Large Language Models: A Novel
Architectural Approach for Contextual Modulation
ABSTRACT: Contextual adaptation in token embeddings plays a central role in determining
how well language models maintain coherence and retain semantic relationships
over extended text sequences. Static embeddings often impose constraints on
lexical flexibility, leading to suboptimal performance when faced with complex
sentence structures or domain-specific terminology shifts. To address this
limitation, a structured approach was developed for dynamically reconfiguring
token embeddings through continuous geometric transformations, ensuring that
representations evolved in response to evolving discourse structures. A
manifold-based transformation mechanism was integrated to regulate lexical
positioning, allowing embeddings to undergo controlled shifts while preserving
linguistic relationships across varying textual contexts. Empirical evaluations
demonstrated that embedding reconfiguration contributed to reductions in
perplexity, improved lexical coherence, and enhanced sentence-level continuity,
particularly in structured and domain-adaptive text generation tasks.
Comparative analyses of embedding drift indicated that dynamically restructured
representations maintained stronger contextual consistency, reducing
misalignment in token dependencies while preserving fluency in language
modeling outputs. Computational overhead assessments confirmed that while
training complexity increased due to the iterative refinement of embeddings,
inference remained efficient, ensuring practical feasibility for real-time
generation. Evaluations across multiple datasets further demonstrated that
dynamically modulated embeddings exhibited broader lexical diversity, reducing
repetitive token patterns and enabling a more adaptable representation learning
process.
|
2502.14418 | Masoud Thajudeen Tholan | Masoud Thajudeen Tholan, Vinayaka Hegde, Chetan Sharma, Prasanta Kumar
Ghosh | Role of the Pretraining and the Adaptation data sizes for low-resource
real-time MRI video segmentation | Accepted to ICASSP 2025 | IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), Hyderabad, India, 2025, pp. 1-5 | 10.1109/ICASSP49660.2025.10889096 | null | eess.AS cs.CV eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time Magnetic Resonance Imaging (rtMRI) is frequently used in speech
production studies as it provides a complete view of the vocal tract during
articulation. This study investigates the effectiveness of rtMRI in analyzing
vocal tract movements by employing the SegNet and UNet models for Air-Tissue
Boundary (ATB) segmentation tasks. We pretrained several base models using
increasing numbers of subjects and videos to assess performance on two
datasets. The first consists of unseen subjects with unseen videos from the
same data source, on which we achieve 0.33% and 0.91% (Pixel-wise
Classification Accuracy (PCA) and Dice Coefficient, respectively) better than
the matched condition. The second comprises unseen videos from a new data
source, on which we obtain 99.63% and 98.09% (PCA and Dice Coefficient,
respectively) of the matched condition performance. Here, matched condition performance refers to
the performance of a model trained only on the test subjects which was set as a
benchmark for the other models. Our findings highlight the significance of
fine-tuning and adapting models with limited data. Notably, we demonstrated
that effective model adaptation can be achieved with as few as 15 rtMRI frames
from any new dataset.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 10:15:43 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Tholan",
"Masoud Thajudeen",
""
],
[
"Hegde",
"Vinayaka",
""
],
[
"Sharma",
"Chetan",
""
],
[
"Ghosh",
"Prasanta Kumar",
""
]
] | TITLE: Role of the Pretraining and the Adaptation data sizes for low-resource
real-time MRI video segmentation
ABSTRACT: Real-time Magnetic Resonance Imaging (rtMRI) is frequently used in speech
production studies as it provides a complete view of the vocal tract during
articulation. This study investigates the effectiveness of rtMRI in analyzing
vocal tract movements by employing the SegNet and UNet models for Air-Tissue
Boundary (ATB) segmentation tasks. We pretrained several base models using
increasing numbers of subjects and videos to assess performance on two
datasets. The first consists of unseen subjects with unseen videos from the
same data source, on which we achieve 0.33% and 0.91% (Pixel-wise
Classification Accuracy (PCA) and Dice Coefficient, respectively) better than
the matched condition. The second comprises unseen videos from a new data
source, on which we obtain 99.63% and 98.09% (PCA and Dice Coefficient,
respectively) of the matched condition performance. Here, matched condition performance refers to
the performance of a model trained only on the test subjects which was set as a
benchmark for the other models. Our findings highlight the significance of
fine-tuning and adapting models with limited data. Notably, we demonstrated
that effective model adaptation can be achieved with as few as 15 rtMRI frames
from any new dataset.
|
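The rtMRI record above reports that adaptation works with as few as 15 labelled frames. A generic sketch of such low-resource fine-tuning of a pretrained segmentation model is given below; the toy model, random frames, and hyperparameters are placeholders, not the paper's SegNet/UNet setup.

```python
# Generic few-frame fine-tuning loop (PyTorch); `pretrained_model` and the
# 15-frame dataset are placeholders, not the paper's SegNet/UNet configuration.
import torch
import torch.nn as nn

def adapt(pretrained_model: nn.Module, frames: torch.Tensor, masks: torch.Tensor,
          epochs: int = 20, lr: float = 1e-4) -> nn.Module:
    """Fine-tune a pretrained segmentation model on a handful of labelled frames."""
    optimizer = torch.optim.Adam(pretrained_model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # binary air-tissue boundary mask
    pretrained_model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(pretrained_model(frames), masks)
        loss.backward()
        optimizer.step()
    return pretrained_model

# Example with a toy stand-in model and 15 random frames:
toy_model = nn.Conv2d(1, 1, kernel_size=3, padding=1)
frames = torch.randn(15, 1, 64, 64)
masks = torch.randint(0, 2, (15, 1, 64, 64)).float()
adapt(toy_model, frames, masks)
```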
2502.18185 | Adnan Iltaf | Adnan Iltaf, Rayan Merghani Ahmed, Zhenxi Zhang, Bin Li and Shoujun
Zhou | VesselSAM: Leveraging SAM for Aortic Vessel Segmentation with LoRA and
Atrous Attention | Work in progress | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical image segmentation is crucial for clinical diagnosis and treatment
planning, especially when dealing with complex anatomical structures such as
vessels. However, accurately segmenting vessels remains challenging due to
their small size, intricate edge structures, and susceptibility to artifacts
and imaging noise. In this work, we propose VesselSAM, an enhanced version of
the Segment Anything Model (SAM), specifically tailored for aortic vessel
segmentation. VesselSAM incorporates AtrousLoRA, a novel module integrating
Atrous Attention and Low-Rank Adaptation (LoRA), to enhance segmentation
performance. Atrous Attention enables the model to capture multi-scale
contextual information, preserving both fine-grained local details and broader
global context. Additionally, LoRA facilitates efficient fine-tuning of the
frozen SAM image encoder, reducing the number of trainable parameters and
thereby enhancing computational efficiency. We evaluate VesselSAM using two
challenging datasets: the Aortic Vessel Tree (AVT) dataset and the Type-B
Aortic Dissection (TBAD) dataset. VesselSAM achieves state-of-the-art
performance, attaining DSC scores of 93.50\%, 93.25\%, 93.02\%, and 93.26\%
across multi-center datasets. Our results demonstrate that VesselSAM delivers
high segmentation accuracy while significantly reducing computational overhead
compared to existing large-scale models. This development paves the way for
enhanced AI-based aortic vessel segmentation in clinical environments. The code
and models will be released at https://github.com/Adnan-CAS/AtrousLora.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 13:26:06 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Mar 2025 20:04:50 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 06:10:48 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Iltaf",
"Adnan",
""
],
[
"Ahmed",
"Rayan Merghani",
""
],
[
"Zhang",
"Zhenxi",
""
],
[
"Li",
"Bin",
""
],
[
"Zhou",
"Shoujun",
""
]
] | TITLE: VesselSAM: Leveraging SAM for Aortic Vessel Segmentation with LoRA and
Atrous Attention
ABSTRACT: Medical image segmentation is crucial for clinical diagnosis and treatment
planning, especially when dealing with complex anatomical structures such as
vessels. However, accurately segmenting vessels remains challenging due to
their small size, intricate edge structures, and susceptibility to artifacts
and imaging noise. In this work, we propose VesselSAM, an enhanced version of
the Segment Anything Model (SAM), specifically tailored for aortic vessel
segmentation. VesselSAM incorporates AtrousLoRA, a novel module integrating
Atrous Attention and Low-Rank Adaptation (LoRA), to enhance segmentation
performance. Atrous Attention enables the model to capture multi-scale
contextual information, preserving both fine-grained local details and broader
global context. Additionally, LoRA facilitates efficient fine-tuning of the
frozen SAM image encoder, reducing the number of trainable parameters and
thereby enhancing computational efficiency. We evaluate VesselSAM using two
challenging datasets: the Aortic Vessel Tree (AVT) dataset and the Type-B
Aortic Dissection (TBAD) dataset. VesselSAM achieves state-of-the-art
performance, attaining DSC scores of 93.50\%, 93.25\%, 93.02\%, and 93.26\%
across multi-center datasets. Our results demonstrate that VesselSAM delivers
high segmentation accuracy while significantly reducing computational overhead
compared to existing large-scale models. This development paves the way for
enhanced AI-based aortic vessel segmentation in clinical environments. The code
and models will be released at https://github.com/Adnan-CAS/AtrousLora.
|
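The VesselSAM record above describes fine-tuning a frozen SAM image encoder through LoRA. A minimal sketch of the LoRA idea (a frozen linear layer plus a trainable low-rank update) is shown below; it is a generic illustration, not the paper's AtrousLoRA module or the actual SAM encoder.

```python
# Generic LoRA wrapper around a frozen linear layer (PyTorch); a simplified
# illustration of low-rank adaptation, not the paper's AtrousLoRA module.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # keep pretrained weights frozen
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

layer = LoRALinear(nn.Linear(256, 256))
print(layer(torch.randn(4, 256)).shape)  # torch.Size([4, 256])
```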
2502.18915 | Hongye Jin | Hongye Jin, Pei Chen, Jingfeng Yang, Zhengyang Wang, Meng Jiang, Yifan
Gao, Binxuan Huang, Xinyang Zhang, Zheng Li, Tianyi Liu, Huasheng Li, Bing
Yin | END: Early Noise Dropping for Efficient and Effective Context Denoising | It's not approved by the legal from Amazon. They told us arXiv is not
allowed unless the paper is accepted later. It's under submission now | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have demonstrated remarkable performance across
a wide range of natural language processing tasks. However, they are often
distracted by irrelevant or noisy context in input sequences that degrades
output quality. This problem affects both long- and short-context scenarios,
such as retrieval-augmented generation, table question-answering, and
in-context learning. We reveal that LLMs can implicitly identify whether input
sequences contain useful information at early layers, prior to token
generation. Leveraging this insight, we introduce Early Noise Dropping
(\textsc{END}), a novel approach to mitigate this issue without requiring
fine-tuning the LLMs. \textsc{END} segments input sequences into chunks and
employs a linear prober on the early layers of LLMs to differentiate between
informative and noisy chunks. By discarding noisy chunks early in the process,
\textsc{END} preserves critical information, reduces distraction, and lowers
computational overhead. Extensive experiments demonstrate that \textsc{END}
significantly improves both performance and efficiency across different LLMs on
multiple evaluation datasets. Furthermore, by investigating LLMs' implicit
understanding of the input with the prober, this work also deepens
understanding of how LLMs reason with context internally.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 08:07:17 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 20:34:56 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Jin",
"Hongye",
""
],
[
"Chen",
"Pei",
""
],
[
"Yang",
"Jingfeng",
""
],
[
"Wang",
"Zhengyang",
""
],
[
"Jiang",
"Meng",
""
],
[
"Gao",
"Yifan",
""
],
[
"Huang",
"Binxuan",
""
],
[
"Zhang",
"Xinyang",
""
],
[
"Li",
"Zheng",
""
],
[
"Liu",
"Tianyi",
""
],
[
"Li",
"Huasheng",
""
],
[
"Yin",
"Bing",
""
]
] | TITLE: END: Early Noise Dropping for Efficient and Effective Context Denoising
ABSTRACT: Large Language Models (LLMs) have demonstrated remarkable performance across
a wide range of natural language processing tasks. However, they are often
distracted by irrelevant or noisy context in input sequences that degrades
output quality. This problem affects both long- and short-context scenarios,
such as retrieval-augmented generation, table question-answering, and
in-context learning. We reveal that LLMs can implicitly identify whether input
sequences contain useful information at early layers, prior to token
generation. Leveraging this insight, we introduce Early Noise Dropping
(\textsc{END}), a novel approach to mitigate this issue without requiring
fine-tuning the LLMs. \textsc{END} segments input sequences into chunks and
employs a linear prober on the early layers of LLMs to differentiate between
informative and noisy chunks. By discarding noisy chunks early in the process,
\textsc{END} preserves critical information, reduces distraction, and lowers
computational overhead. Extensive experiments demonstrate that \textsc{END}
significantly improves both performance and efficiency across different LLMs on
multiple evaluation datasets. Furthermore, by investigating LLMs' implicit
understanding of the input with the prober, this work also deepens
understanding of how LLMs reason with context internally.
|
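The END record above describes segmenting the input into chunks and using a linear prober on early-layer states to drop noisy chunks. A schematic version of that filtering step is sketched below; the hidden states and prober weights are random placeholders rather than outputs of an actual LLM.

```python
# Schematic chunk filtering with a linear prober (PyTorch). Hidden states and
# prober weights are random placeholders, not early-layer LLM activations.
import torch
import torch.nn as nn

def filter_chunks(chunk_hidden: torch.Tensor, prober: nn.Linear, threshold: float = 0.5):
    """chunk_hidden: (num_chunks, hidden_dim) mean-pooled early-layer states."""
    keep_prob = torch.sigmoid(prober(chunk_hidden)).squeeze(-1)  # (num_chunks,)
    keep_mask = keep_prob > threshold
    return keep_mask, keep_prob

hidden_dim, num_chunks = 64, 6
prober = nn.Linear(hidden_dim, 1)               # a trained prober in the real method
chunk_hidden = torch.randn(num_chunks, hidden_dim)
mask, probs = filter_chunks(chunk_hidden, prober)
print(mask.tolist(), [round(p, 2) for p in probs.tolist()])
```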
2503.03562 | Xintao Chen | Wenqiao Li, Yao Gu, Xintao Chen, Xiaohao Xu, Ming Hu, Xiaonan Huang,
Yingna Wu | Towards Visual Discrimination and Reasoning of Real-World Physical
Dynamics: Physics-Grounded Anomaly Detection | Accepted by CVPR25 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans detect real-world object anomalies by perceiving, interacting, and
reasoning based on object-conditioned physical knowledge. The long-term goal of
Industrial Anomaly Detection (IAD) is to enable machines to autonomously
replicate this skill. However, current IAD algorithms are largely developed and
tested on static, semantically simple datasets, which diverge from real-world
scenarios where physical understanding and reasoning are essential. To bridge
this gap, we introduce the Physics Anomaly Detection (Phys-AD) dataset, the
first large-scale, real-world, physics-grounded video dataset for industrial
anomaly detection. Collected using a real robot arm and motor, Phys-AD provides
a diverse set of dynamic, semantically rich scenarios. The dataset includes
more than 6400 videos across 22 real-world object categories, interacting with
robot arms and motors, and exhibits 47 types of anomalies. Anomaly detection in
Phys-AD requires visual reasoning, combining both physical knowledge and video
content to determine object abnormality. We benchmark state-of-the-art anomaly
detection methods under three settings: unsupervised AD, weakly-supervised AD,
and video-understanding AD, highlighting their limitations in handling
physics-grounded anomalies. Additionally, we introduce the Physics Anomaly
Explanation (PAEval) metric, designed to assess the ability of visual-language
foundation models to not only detect anomalies but also provide accurate
explanations for their underlying physical causes. Our project is available at
https://guyao2023.github.io/Phys-AD/.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:49:08 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 03:06:58 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 03:58:26 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Li",
"Wenqiao",
""
],
[
"Gu",
"Yao",
""
],
[
"Chen",
"Xintao",
""
],
[
"Xu",
"Xiaohao",
""
],
[
"Hu",
"Ming",
""
],
[
"Huang",
"Xiaonan",
""
],
[
"Wu",
"Yingna",
""
]
] | TITLE: Towards Visual Discrimination and Reasoning of Real-World Physical
Dynamics: Physics-Grounded Anomaly Detection
ABSTRACT: Humans detect real-world object anomalies by perceiving, interacting, and
reasoning based on object-conditioned physical knowledge. The long-term goal of
Industrial Anomaly Detection (IAD) is to enable machines to autonomously
replicate this skill. However, current IAD algorithms are largely developed and
tested on static, semantically simple datasets, which diverge from real-world
scenarios where physical understanding and reasoning are essential. To bridge
this gap, we introduce the Physics Anomaly Detection (Phys-AD) dataset, the
first large-scale, real-world, physics-grounded video dataset for industrial
anomaly detection. Collected using a real robot arm and motor, Phys-AD provides
a diverse set of dynamic, semantically rich scenarios. The dataset includes
more than 6400 videos across 22 real-world object categories, interacting with
robot arms and motors, and exhibits 47 types of anomalies. Anomaly detection in
Phys-AD requires visual reasoning, combining both physical knowledge and video
content to determine object abnormality. We benchmark state-of-the-art anomaly
detection methods under three settings: unsupervised AD, weakly-supervised AD,
and video-understanding AD, highlighting their limitations in handling
physics-grounded anomalies. Additionally, we introduce the Physics Anomaly
Explanation (PAEval) metric, designed to assess the ability of visual-language
foundation models to not only detect anomalies but also provide accurate
explanations for their underlying physical causes. Our project is available at
https://guyao2023.github.io/Phys-AD/.
|
2503.03734 | Letian Fu | Huang Huang, Fangchen Liu, Letian Fu, Tingfan Wu, Mustafa Mukadam,
Jitendra Malik, Ken Goldberg, Pieter Abbeel | OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature
Extraction | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-Language-Action (VLA) models aim to predict robotic actions based on
visual observations and language instructions. Existing approaches require
fine-tuning pre-trained vision-language models (VLMs), as visual and language
features are independently fed into downstream policies, degrading the
pre-trained semantic alignments. We propose OTTER, a novel VLA architecture
that leverages these existing alignments through explicit, text-aware visual
feature extraction. Instead of processing all visual features, OTTER
selectively extracts and passes only task-relevant visual features that are
semantically aligned with the language instruction to the policy transformer.
This allows OTTER to keep the pre-trained vision-language encoders frozen.
Thereby, OTTER preserves and utilizes the rich semantic understanding learned
from large-scale pre-training, enabling strong zero-shot generalization
capabilities. In simulation and real-world experiments, OTTER significantly
outperforms existing VLA models, demonstrating strong zero-shot generalization
to novel objects and environments. Video, code, checkpoints, and dataset:
https://ottervla.github.io/.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 18:44:48 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 03:17:25 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 17:55:06 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Huang",
"Huang",
""
],
[
"Liu",
"Fangchen",
""
],
[
"Fu",
"Letian",
""
],
[
"Wu",
"Tingfan",
""
],
[
"Mukadam",
"Mustafa",
""
],
[
"Malik",
"Jitendra",
""
],
[
"Goldberg",
"Ken",
""
],
[
"Abbeel",
"Pieter",
""
]
] | TITLE: OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature
Extraction
ABSTRACT: Vision-Language-Action (VLA) models aim to predict robotic actions based on
visual observations and language instructions. Existing approaches require
fine-tuning pre-trained vision-language models (VLMs), as visual and language
features are independently fed into downstream policies, degrading the
pre-trained semantic alignments. We propose OTTER, a novel VLA architecture
that leverages these existing alignments through explicit, text-aware visual
feature extraction. Instead of processing all visual features, OTTER
selectively extracts and passes only task-relevant visual features that are
semantically aligned with the language instruction to the policy transformer.
This allows OTTER to keep the pre-trained vision-language encoders frozen.
Thereby, OTTER preserves and utilizes the rich semantic understanding learned
from large-scale pre-training, enabling strong zero-shot generalization
capabilities. In simulation and real-world experiments, OTTER significantly
outperforms existing VLA models, demonstrating strong zero-shot generalization
to novel objects and environments. Video, code, checkpoints, and dataset:
https://ottervla.github.io/.
|
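The OTTER record above describes passing only task-relevant visual features, selected by their alignment with the language instruction, to the policy while keeping encoders frozen. A toy version of such text-aware selection via cosine similarity is sketched below; the feature tensors are random stand-ins for frozen encoder outputs, and the top-k rule is a simplification.

```python
# Toy text-aware visual token selection via cosine similarity (PyTorch).
# Random tensors stand in for frozen vision/language encoder outputs.
import torch
import torch.nn.functional as F

def select_visual_tokens(visual: torch.Tensor, text: torch.Tensor, k: int = 8):
    """visual: (num_patches, dim); text: (num_tokens, dim). Keep top-k patches."""
    v = F.normalize(visual, dim=-1)
    t = F.normalize(text, dim=-1)
    relevance = (v @ t.T).max(dim=-1).values        # best instruction match per patch
    top = torch.topk(relevance, k=min(k, visual.shape[0])).indices
    return visual[top]

visual = torch.randn(196, 512)   # e.g. ViT patch features
text = torch.randn(12, 512)      # instruction token features
print(select_visual_tokens(visual, text).shape)  # torch.Size([8, 512])
```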
2503.05186 | Jeong-Hun Hong | Chan Hur, Jeong-hun Hong, Dong-hun Lee, Dabin Kang, Semin Myeong,
Sang-hyo Park, Hyeyoung Park | Narrating the Video: Boosting Text-Video Retrieval via Comprehensive
Utilization of Frame-Level Captions | Accepted at CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent text-video retrieval, the use of additional captions from
vision-language models has shown promising effects on performance. However,
existing models using additional captions have often struggled to capture the
rich semantics, including temporal changes, inherent in the video. In addition,
incorrect information caused by generative models can lead to inaccurate
retrieval. To address these issues, we propose a new framework, Narrating the
Video (NarVid), which strategically leverages the comprehensive information
available from frame-level captions, the narration. The proposed NarVid
exploits narration in multiple ways: 1) feature enhancement through cross-modal
interactions between narration and video, 2) query-aware adaptive filtering to
suppress irrelevant or incorrect information, 3) dual-modal matching score by
adding query-video similarity and query-narration similarity, and 4)
hard-negative loss to learn discriminative features from multiple perspectives
using the two similarities from different views. Experimental results
demonstrate that NarVid achieves state-of-the-art performance on various
benchmark datasets.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 07:15:06 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 10:28:45 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 11:24:58 GMT"
},
{
"version": "v4",
"created": "Wed, 26 Mar 2025 02:09:24 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Hur",
"Chan",
""
],
[
"Hong",
"Jeong-hun",
""
],
[
"Lee",
"Dong-hun",
""
],
[
"Kang",
"Dabin",
""
],
[
"Myeong",
"Semin",
""
],
[
"Park",
"Sang-hyo",
""
],
[
"Park",
"Hyeyoung",
""
]
] | TITLE: Narrating the Video: Boosting Text-Video Retrieval via Comprehensive
Utilization of Frame-Level Captions
ABSTRACT: In recent text-video retrieval, the use of additional captions from
vision-language models has shown promising effects on performance. However,
existing models using additional captions have often struggled to capture the
rich semantics, including temporal changes, inherent in the video. In addition,
incorrect information caused by generative models can lead to inaccurate
retrieval. To address these issues, we propose a new framework, Narrating the
Video (NarVid), which strategically leverages the comprehensive information
available from frame-level captions, the narration. The proposed NarVid
exploits narration in multiple ways: 1) feature enhancement through cross-modal
interactions between narration and video, 2) query-aware adaptive filtering to
suppress irrelevant or incorrect information, 3) dual-modal matching score by
adding query-video similarity and query-narration similarity, and 4)
hard-negative loss to learn discriminative features from multiple perspectives
using the two similarities from different views. Experimental results
demonstrate that NarVid achieves state-of-the-art performance on various
benchmark datasets.
|
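The NarVid record above describes a dual-modal matching score that adds query-video and query-narration similarity. A minimal numerical sketch of that combination is given below, with random embeddings in place of the learned representations and an illustrative mixing weight.

```python
# Dual-modal matching sketch: query-video plus query-narration similarity.
# Embeddings are random placeholders for the learned representations.
import torch
import torch.nn.functional as F

def dual_modal_score(query, video, narration, alpha: float = 0.5):
    """All inputs are (batch, dim); returns a (batch, batch) retrieval score matrix."""
    q, v, n = (F.normalize(x, dim=-1) for x in (query, video, narration))
    return alpha * (q @ v.T) + (1 - alpha) * (q @ n.T)

q, v, n = torch.randn(5, 256), torch.randn(5, 256), torch.randn(5, 256)
scores = dual_modal_score(q, v, n)
print(scores.argmax(dim=1))  # retrieved video index for each query
```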
2503.06337 | Gopeshh Raaj Subbaraj | Mohit Pandey, Gopeshh Subbaraj, Artem Cherkasov, Martin Ester,
Emmanuel Bengio | Pretraining Generative Flow Networks with Inexpensive Rewards for
Molecular Graph Generation | arXiv admin note: text overlap with arXiv:2409.09702 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Generative Flow Networks (GFlowNets) have recently emerged as a suitable
framework for generating diverse and high-quality molecular structures by
learning from rewards treated as unnormalized distributions. Previous works in
this framework often restrict exploration by using predefined molecular
fragments as building blocks, limiting the chemical space that can be accessed.
In this work, we introduce Atomic GFlowNets (A-GFNs), a foundational generative
model leveraging individual atoms as building blocks to explore drug-like
chemical space more comprehensively. We propose an unsupervised pre-training
approach using drug-like molecule datasets, which teaches A-GFNs about
inexpensive yet informative molecular descriptors such as drug-likeness,
topological polar surface area, and synthetic accessibility scores. These
properties serve as proxy rewards, guiding A-GFNs towards regions of chemical
space that exhibit desirable pharmacological properties. We further implement a
goal-conditioned finetuning process, which adapts A-GFNs to optimize for
specific target properties. In this work, we pretrain A-GFN on a subset of the ZINC
dataset, and by employing robust evaluation metrics we show the effectiveness
of our approach when compared to other relevant baseline methods for a wide
range of drug design tasks.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 20:41:07 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 19:56:33 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Pandey",
"Mohit",
""
],
[
"Subbaraj",
"Gopeshh",
""
],
[
"Cherkasov",
"Artem",
""
],
[
"Ester",
"Martin",
""
],
[
"Bengio",
"Emmanuel",
""
]
] | TITLE: Pretraining Generative Flow Networks with Inexpensive Rewards for
Molecular Graph Generation
ABSTRACT: Generative Flow Networks (GFlowNets) have recently emerged as a suitable
framework for generating diverse and high-quality molecular structures by
learning from rewards treated as unnormalized distributions. Previous works in
this framework often restrict exploration by using predefined molecular
fragments as building blocks, limiting the chemical space that can be accessed.
In this work, we introduce Atomic GFlowNets (A-GFNs), a foundational generative
model leveraging individual atoms as building blocks to explore drug-like
chemical space more comprehensively. We propose an unsupervised pre-training
approach using drug-like molecule datasets, which teaches A-GFNs about
inexpensive yet informative molecular descriptors such as drug-likeness,
topological polar surface area, and synthetic accessibility scores. These
properties serve as proxy rewards, guiding A-GFNs towards regions of chemical
space that exhibit desirable pharmacological properties. We further implement a
goal-conditioned finetuning process, which adapts A-GFNs to optimize for
specific target properties. In this work, we pretrain A-GFN on a subset of the ZINC
dataset, and by employing robust evaluation metrics we show the effectiveness
of our approach when compared to other relevant baseline methods for a wide
range of drug design tasks.
|
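The A-GFN record above uses inexpensive molecular descriptors such as drug-likeness and topological polar surface area as proxy rewards. A sketch of computing such descriptors with RDKit is below; the weighting and normalization that combine them into a single reward are arbitrary illustrations, not the paper's formulation.

```python
# Proxy-reward sketch using RDKit descriptors (QED and TPSA). The weighting and
# normalization below are arbitrary illustrations, not the paper's reward.
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

def proxy_reward(smiles: str) -> float:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return 0.0
    qed = QED.qed(mol)                      # drug-likeness in [0, 1]
    tpsa = Descriptors.TPSA(mol)            # topological polar surface area
    tpsa_score = max(0.0, 1.0 - abs(tpsa - 90.0) / 90.0)  # crude preference near 90 A^2
    return 0.7 * qed + 0.3 * tpsa_score

print(proxy_reward("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```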
2503.07819 | Joey Wilson | Joey Wilson, Marcelino Almeida, Sachit Mahajan, Martin Labrie, Maani
Ghaffari, Omid Ghasemalizadeh, Min Sun, Cheng-Hao Kuo, Arnab Sen | POp-GS: Next Best View in 3D-Gaussian Splatting with P-Optimality | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we present a novel algorithm for quantifying uncertainty and
information gained within 3D Gaussian Splatting (3D-GS) through P-Optimality.
While 3D-GS has proven to be a useful world model with high-quality
rasterizations, it does not natively quantify uncertainty or information,
posing a challenge for real-world applications such as 3D-GS SLAM. We propose
to quantify information gain in 3D-GS by reformulating the problem through the
lens of optimal experimental design, which is a classical solution widely used
in literature. By restructuring information quantification of 3D-GS through
optimal experimental design, we arrive at multiple solutions, of which
T-Optimality and D-Optimality perform the best quantitatively and qualitatively
as measured on two popular datasets. Additionally, we propose a block diagonal
covariance approximation which provides a measure of correlation at the expense
of a greater computation cost.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 20:01:56 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 18:08:49 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wilson",
"Joey",
""
],
[
"Almeida",
"Marcelino",
""
],
[
"Mahajan",
"Sachit",
""
],
[
"Labrie",
"Martin",
""
],
[
"Ghaffari",
"Maani",
""
],
[
"Ghasemalizadeh",
"Omid",
""
],
[
"Sun",
"Min",
""
],
[
"Kuo",
"Cheng-Hao",
""
],
[
"Sen",
"Arnab",
""
]
] | TITLE: POp-GS: Next Best View in 3D-Gaussian Splatting with P-Optimality
ABSTRACT: In this paper, we present a novel algorithm for quantifying uncertainty and
information gained within 3D Gaussian Splatting (3D-GS) through P-Optimality.
While 3D-GS has proven to be a useful world model with high-quality
rasterizations, it does not natively quantify uncertainty or information,
posing a challenge for real-world applications such as 3D-GS SLAM. We propose
to quantify information gain in 3D-GS by reformulating the problem through the
lens of optimal experimental design, which is a classical solution widely used
in literature. By restructuring information quantification of 3D-GS through
optimal experimental design, we arrive at multiple solutions, of which
T-Optimality and D-Optimality perform the best quantitatively and qualitatively
as measured on two popular datasets. Additionally, we propose a block diagonal
covariance approximation which provides a measure of correlation at the expense
of a greater computation cost.
|
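The POp-GS record above frames information quantification as optimal experimental design, with T-Optimality and D-Optimality performing best. In one common formulation these criteria reduce to the trace and log-determinant of an information matrix, as in the small NumPy sketch below; the matrix here is synthetic, not a 3D-GS covariance.

```python
# Classical optimal-experimental-design criteria on a synthetic information matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
info = A @ A.T + 1e-3 * np.eye(6)   # symmetric positive-definite stand-in

t_optimality = np.trace(info)                  # T-optimality: trace
sign, logdet = np.linalg.slogdet(info)         # D-optimality: log-determinant
print(f"T-criterion: {t_optimality:.3f}, D-criterion (log det): {logdet:.3f}")
```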
2503.08120 | Junzhe Li | Junzhe Li, Xuerui Qiu, Linrui Xu, Liya Guo, Delin Qu, Tingting Long,
Chun Fan, Ming Li | Uni$\textbf{F}^2$ace: Fine-grained Face Understanding and Generation
with Unified Multimodal Models | null | null | null | null | cs.CV cs.AI cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unified multimodal models (UMMs) have emerged as a powerful paradigm in
foundational computer vision research, demonstrating significant potential in
both image understanding and generation. However, existing research in the face
domain primarily focuses on $\textbf{coarse}$ facial attribute understanding,
with limited capacity to handle $\textbf{fine-grained}$ facial attributes and
without addressing generation capabilities. To overcome these limitations, we
propose Uni$\textbf{F}^2$ace, the first UMM tailored specifically for
fine-grained face understanding and generation. In general, we train
Uni$\textbf{F}^2$ace on a self-constructed, specialized dataset utilizing two
mutually beneficial diffusion techniques and a two-level mixture-of-experts
architecture. Specifically, we first build a large-scale facial dataset,
Uni$\textbf{F}^2$ace-130K, which contains 130K image-text pairs with one
million question-answering pairs that span a wide range of facial attributes.
Second, we establish a theoretical connection between discrete diffusion score
matching and masked generative models, optimizing both evidence lower bounds
simultaneously, which significantly improves the model's ability to synthesize
facial details. Finally, we introduce both token-level and sequence-level
mixture-of-experts, enabling efficient fine-grained representation learning for
both understanding and generation tasks. Extensive experiments on
Uni$\textbf{F}^2$ace-130K demonstrate that Uni$\textbf{F}^2$ace outperforms
existing UMMs and generative models, achieving superior performance across both
understanding and generation tasks.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 07:34:59 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 02:30:35 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Li",
"Junzhe",
""
],
[
"Qiu",
"Xuerui",
""
],
[
"Xu",
"Linrui",
""
],
[
"Guo",
"Liya",
""
],
[
"Qu",
"Delin",
""
],
[
"Long",
"Tingting",
""
],
[
"Fan",
"Chun",
""
],
[
"Li",
"Ming",
""
]
] | TITLE: Uni$\textbf{F}^2$ace: Fine-grained Face Understanding and Generation
with Unified Multimodal Models
ABSTRACT: Unified multimodal models (UMMs) have emerged as a powerful paradigm in
foundational computer vision research, demonstrating significant potential in
both image understanding and generation. However, existing research in the face
domain primarily focuses on $\textbf{coarse}$ facial attribute understanding,
with limited capacity to handle $\textbf{fine-grained}$ facial attributes and
without addressing generation capabilities. To overcome these limitations, we
propose Uni$\textbf{F}^2$ace, the first UMM tailored specifically for
fine-grained face understanding and generation. In general, we train
Uni$\textbf{F}^2$ace on a self-constructed, specialized dataset utilizing two
mutually beneficial diffusion techniques and a two-level mixture-of-experts
architecture. Specifically, we first build a large-scale facial dataset,
Uni$\textbf{F}^2$ace-130K, which contains 130K image-text pairs with one
million question-answering pairs that span a wide range of facial attributes.
Second, we establish a theoretical connection between discrete diffusion score
matching and masked generative models, optimizing both evidence lower bounds
simultaneously, which significantly improves the model's ability to synthesize
facial details. Finally, we introduce both token-level and sequence-level
mixture-of-experts, enabling efficient fine-grained representation learning for
both understanding and generation tasks. Extensive experiments on
Uni$\textbf{F}^2$ace-130K demonstrate that Uni$\textbf{F}^2$ace outperforms
existing UMMs and generative models, achieving superior performance across both
understanding and generation tasks.
|
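The UniF^2ace record above mentions token-level and sequence-level mixture-of-experts. A bare-bones token-level MoE layer (top-1 routing over small MLP experts) is sketched below; it is a generic illustration of the mechanism, not the paper's two-level architecture.

```python
# Bare-bones token-level mixture-of-experts with top-1 routing (PyTorch);
# a generic illustration, not the paper's two-level MoE.
import torch
import torch.nn as nn

class TokenMoE(nn.Module):
    def __init__(self, dim: int = 64, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim); route each token to its top-1 expert.
        gates = self.router(x).softmax(dim=-1)            # (tokens, num_experts)
        top1 = gates.argmax(dim=-1)                       # (tokens,)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i
            if mask.any():
                out[mask] = expert(x[mask]) * gates[mask, i].unsqueeze(-1)
        return out

moe = TokenMoE()
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```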
2503.08497 | YunCheng Guo | Yuncheng Guo, Xiaodong Gu | MMRL: Multi-Modal Representation Learning for Vision-Language Models | Accepted by CVPR 2025 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale pre-trained Vision-Language Models (VLMs) have become essential
for transfer learning across diverse tasks. However, adapting these models with
limited few-shot data often leads to overfitting, diminishing their performance
on new tasks. To tackle this issue, we propose a novel Multi-Modal
Representation Learning (MMRL) framework that introduces a shared, learnable,
and modality-agnostic representation space. MMRL projects the space tokens to
text and image representation tokens, facilitating more effective multi-modal
interactions. Unlike previous approaches that solely optimize class token
features, MMRL integrates representation tokens at higher layers of the
encoders--where dataset-specific features are more prominent--while preserving
generalized knowledge in the lower layers. During training, both representation
and class features are optimized, with a trainable projection layer applied to
the representation tokens, whereas the class token projection layer remains
frozen to retain pre-trained knowledge. Furthermore, a regularization term is
introduced to align the class features and text features with the zero-shot
features from the frozen VLM, thereby safeguarding the model's generalization
capacity. For inference, a decoupling strategy is employed, wherein both
representation and class features are utilized for base classes, while only the
class features, which retain more generalized knowledge, are used for new
tasks. Extensive experiments across 15 datasets demonstrate that MMRL
outperforms state-of-the-art methods, achieving a balanced trade-off between
task-specific adaptation and generalization. Code is available at
https://github.com/yunncheng/MMRL.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 14:48:01 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 07:35:10 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Guo",
"Yuncheng",
""
],
[
"Gu",
"Xiaodong",
""
]
] | TITLE: MMRL: Multi-Modal Representation Learning for Vision-Language Models
ABSTRACT: Large-scale pre-trained Vision-Language Models (VLMs) have become essential
for transfer learning across diverse tasks. However, adapting these models with
limited few-shot data often leads to overfitting, diminishing their performance
on new tasks. To tackle this issue, we propose a novel Multi-Modal
Representation Learning (MMRL) framework that introduces a shared, learnable,
and modality-agnostic representation space. MMRL projects the space tokens to
text and image representation tokens, facilitating more effective multi-modal
interactions. Unlike previous approaches that solely optimize class token
features, MMRL integrates representation tokens at higher layers of the
encoders--where dataset-specific features are more prominent--while preserving
generalized knowledge in the lower layers. During training, both representation
and class features are optimized, with a trainable projection layer applied to
the representation tokens, whereas the class token projection layer remains
frozen to retain pre-trained knowledge. Furthermore, a regularization term is
introduced to align the class features and text features with the zero-shot
features from the frozen VLM, thereby safeguarding the model's generalization
capacity. For inference, a decoupling strategy is employed, wherein both
representation and class features are utilized for base classes, while only the
class features, which retain more generalized knowledge, are used for new
tasks. Extensive experiments across 15 datasets demonstrate that MMRL
outperforms state-of-the-art methods, achieving a balanced trade-off between
task-specific adaptation and generalization. Code is available at
https://github.com/yunncheng/MMRL.
|
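The MMRL record above describes decoupled inference: base classes use both representation and class features, while new tasks use class features only. A schematic of that decoupled scoring is sketched below, with random features standing in for the learned ones and a simple weighted sum that is only illustrative of the combination.

```python
# Schematic decoupled inference: base classes combine representation- and
# class-feature scores; new classes use class features only. Features are
# random placeholders and the weighted sum is illustrative, not MMRL's rule.
import torch
import torch.nn.functional as F

def classify(img_class_feat, img_repr_feat, text_feats, is_base: bool, w: float = 0.5):
    cls_logits = F.normalize(img_class_feat, dim=-1) @ F.normalize(text_feats, dim=-1).T
    if not is_base:
        return cls_logits
    repr_logits = F.normalize(img_repr_feat, dim=-1) @ F.normalize(text_feats, dim=-1).T
    return w * cls_logits + (1 - w) * repr_logits

text_feats = torch.randn(10, 512)         # one text feature per class
img_cls, img_repr = torch.randn(1, 512), torch.randn(1, 512)
print(classify(img_cls, img_repr, text_feats, is_base=True).argmax(dim=-1))
```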
2503.08741 | Letian Zhang | Letian Zhang, Quan Cui, Bingchen Zhao, Cheng Yang | Oasis: One Image is All You Need for Multimodal Instruction Data
Synthesis | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The success of multi-modal large language models (MLLMs) has been largely
attributed to the large-scale training data. However, the training data of many
MLLMs is unavailable due to privacy concerns. The expensive and labor-intensive
process of collecting multi-modal data further exacerbates the problem. Is it
possible to synthesize multi-modal training data automatically without
compromising diversity and quality? In this paper, we propose a new method,
Oasis, to synthesize high-quality multi-modal data with only images. Oasis
breaks through traditional methods by prompting only images to the MLLMs, thus
extending the data diversity by a large margin. Our method features a delicate
quality control mechanism which ensures the data quality. We collected over
500k samples and conducted incremental experiments on LLaVA-NeXT. Extensive experiments
demonstrate that our method can significantly improve the performance of MLLMs.
The image-based synthesis also allows us to focus on the specific-domain
ability of MLLMs. Code and dataset are publicly available at
https://github.com/Letian2003/MM_INF.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:25:40 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:15:32 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 09:01:55 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhang",
"Letian",
""
],
[
"Cui",
"Quan",
""
],
[
"Zhao",
"Bingchen",
""
],
[
"Yang",
"Cheng",
""
]
] | TITLE: Oasis: One Image is All You Need for Multimodal Instruction Data
Synthesis
ABSTRACT: The success of multi-modal large language models (MLLMs) has been largely
attributed to the large-scale training data. However, the training data of many
MLLMs is unavailable due to privacy concerns. The expensive and labor-intensive
process of collecting multi-modal data further exacerbates the problem. Is it
possible to synthesize multi-modal training data automatically without
compromising diversity and quality? In this paper, we propose a new method,
Oasis, to synthesize high-quality multi-modal data with only images. Oasis
breaks through traditional methods by prompting only images to the MLLMs, thus
extending the data diversity by a large margin. Our method features a delicate
quality control mechanism which ensures the data quality. We collected over
500k samples and conducted incremental experiments on LLaVA-NeXT. Extensive experiments
demonstrate that our method can significantly improve the performance of MLLMs.
The image-based synthesis also allows us to focus on the specific-domain
ability of MLLMs. Code and dataset are publicly available at
https://github.com/Letian2003/MM_INF.
|
2503.10879 | Ben Winter | Benjamin David Winter, William John Teahan | Task-Specific Activation Functions for Neuroevolution using Grammatical
Evolution | 8 pages, 4 figures, IEEE | null | null | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Activation functions play a critical role in the performance and behaviour of
neural networks, significantly impacting their ability to learn and generalise.
Traditional activation functions, such as ReLU, sigmoid, and tanh, have been
widely used with considerable success. However, these functions may not always
provide optimal performance for all tasks and datasets. In this paper, we
introduce Neuvo GEAF - an innovative approach leveraging grammatical evolution
(GE) to automatically evolve novel activation functions tailored to specific
neural network architectures and datasets. Experiments conducted on well-known
binary classification datasets show statistically significant improvements in
F1-score (between 2.4% and 9.4%) over ReLU using identical network
architectures. Notably, these performance gains were achieved without
increasing the network's parameter count, supporting the trend toward more
efficient neural networks that can operate effectively on resource-constrained
edge devices. This paper's findings suggest that evolved activation functions
can provide significant performance improvements for compact networks while
maintaining energy efficiency during both training and inference phases.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 20:50:21 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 17:39:57 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Winter",
"Benjamin David",
""
],
[
"Teahan",
"William John",
""
]
] | TITLE: Task-Specific Activation Functions for Neuroevolution using Grammatical
Evolution
ABSTRACT: Activation functions play a critical role in the performance and behaviour of
neural networks, significantly impacting their ability to learn and generalise.
Traditional activation functions, such as ReLU, sigmoid, and tanh, have been
widely used with considerable success. However, these functions may not always
provide optimal performance for all tasks and datasets. In this paper, we
introduce Neuvo GEAF - an innovative approach leveraging grammatical evolution
(GE) to automatically evolve novel activation functions tailored to specific
neural network architectures and datasets. Experiments conducted on well-known
binary classification datasets show statistically significant improvements in
F1-score (between 2.4% and 9.4%) over ReLU using identical network
architectures. Notably, these performance gains were achieved without
increasing the network's parameter count, supporting the trend toward more
efficient neural networks that can operate effectively on resource-constrained
edge devices. This paper's findings suggest that evolved activation functions
can provide significant performance improvements for compact networks while
maintaining energy efficiency during both training and inference phases.
|
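The Neuvo GEAF record above evolves activation functions with grammatical evolution, where an integer genotype selects production rules from a grammar. The toy sketch below shows only a simplified genotype-to-expression mapping with a tiny invented grammar; fitness evaluation and the evolutionary loop are omitted, and real GE consumes codons sequentially rather than by depth.

```python
# Toy genotype-to-expression mapping for grammatical evolution of activation
# functions. The grammar and codon handling are illustrative simplifications;
# fitness evaluation and the evolutionary loop are omitted.
import math
import random

GRAMMAR = {
    "<expr>": [["<unary>", "(", "<expr>", ")"], ["x"], ["<expr>", "*", "<expr>"]],
    "<unary>": [["math.tanh"], ["math.sin"], ["abs"]],
}

def decode(genome, symbol="<expr>", depth=0):
    if symbol not in GRAMMAR:
        return symbol
    if depth > 4:                      # force termination on deep recursion
        return "x"
    rule = GRAMMAR[symbol][genome[depth % len(genome)] % len(GRAMMAR[symbol])]
    return "".join(decode(genome, s, depth + 1) for s in rule)

random.seed(3)
genome = [random.randint(0, 255) for _ in range(8)]
expr = decode(genome)
activation = eval("lambda x: " + expr)   # e.g. "math.tanh(x)"
print(expr, activation(0.5))
```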
2503.10927 | Angela Lopez | Angela Lopez-Cardona, Sebastian Idesis, Miguel Barreda-\'Angeles,
Sergi Abadal, and Ioannis Arapakis | OASST-ETC Dataset: Alignment Signals from Eye-tracking Analysis of LLM
Responses | This paper has been accepted to ACM ETRA 2025 and published on
PACMHCI | Proceedings of the ACM on Human-Computer Interaction. 2025 | 10.1145/3725840 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | While Large Language Models (LLMs) have significantly advanced natural
language processing, aligning them with human preferences remains an open
challenge. Although current alignment methods rely primarily on explicit
feedback, eye-tracking (ET) data offers insights into real-time cognitive
processing during reading. In this paper, we present OASST-ETC, a novel
eye-tracking corpus capturing reading patterns from 24 participants, while
evaluating LLM-generated responses from the OASST1 dataset. Our analysis
reveals distinct reading patterns between preferred and non-preferred
responses, which we compare with synthetic eye-tracking data. Furthermore, we
examine the correlation between human reading measures and attention patterns
from various transformer-based models, discovering stronger correlations in
preferred responses. This work introduces a unique resource for studying human
cognitive processing in LLM evaluation and suggests promising directions for
incorporating eye-tracking data into alignment methods. The dataset and
analysis code are publicly available.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 22:28:38 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 13:24:43 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Lopez-Cardona",
"Angela",
""
],
[
"Idesis",
"Sebastian",
""
],
[
"Barreda-Ángeles",
"Miguel",
""
],
[
"Abadal",
"Sergi",
""
],
[
"Arapakis",
"Ioannis",
""
]
] | TITLE: OASST-ETC Dataset: Alignment Signals from Eye-tracking Analysis of LLM
Responses
ABSTRACT: While Large Language Models (LLMs) have significantly advanced natural
language processing, aligning them with human preferences remains an open
challenge. Although current alignment methods rely primarily on explicit
feedback, eye-tracking (ET) data offers insights into real-time cognitive
processing during reading. In this paper, we present OASST-ETC, a novel
eye-tracking corpus capturing reading patterns from 24 participants, while
evaluating LLM-generated responses from the OASST1 dataset. Our analysis
reveals distinct reading patterns between preferred and non-preferred
responses, which we compare with synthetic eye-tracking data. Furthermore, we
examine the correlation between human reading measures and attention patterns
from various transformer-based models, discovering stronger correlations in
preferred responses. This work introduces a unique resource for studying human
cognitive processing in LLM evaluation and suggests promising directions for
incorporating eye-tracking data into alignment methods. The dataset and
analysis code are publicly available.
|
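The OASST-ETC record above correlates human reading measures with transformer attention patterns. A minimal example of computing such a rank correlation with SciPy is below; both value arrays are invented placeholders, not measurements from the corpus.

```python
# Rank correlation between per-token reading measures and attention weights.
# Both arrays are invented placeholders, not OASST-ETC measurements.
from scipy.stats import spearmanr

fixation_durations = [220, 180, 0, 310, 150, 90, 0, 260]   # ms per token
attention_weights = [0.18, 0.09, 0.02, 0.25, 0.07, 0.05, 0.01, 0.20]

rho, p_value = spearmanr(fixation_durations, attention_weights)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```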
2503.14754 | Matt Franchi | Matt Franchi, Nikhil Garg, Wendy Ju, Emma Pierson | Bayesian Modeling of Zero-Shot Classifications for Urban Flood Detection | In review | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Street scene datasets, collected from Street View or dashboard cameras, offer
a promising means of detecting urban objects and incidents like street
flooding. However, a major challenge in using these datasets is their lack of
reliable labels: there are myriad types of incidents, many types occur rarely,
and ground-truth measures of where incidents occur are lacking. Here, we
propose BayFlood, a two-stage approach which circumvents this difficulty.
First, we perform zero-shot classification of where incidents occur using a
pretrained vision-language model (VLM). Second, we fit a spatial Bayesian model
on the VLM classifications. The zero-shot approach avoids the need to annotate
large training sets, and the Bayesian model provides frequent desiderata in
urban settings - principled measures of uncertainty, smoothing across
locations, and incorporation of external data like stormwater accumulation
zones. We comprehensively validate this two-stage approach, showing that VLMs
provide strong zero-shot signal for floods across multiple cities and time
periods, the Bayesian model improves out-of-sample prediction relative to
baseline methods, and our inferred flood risk correlates with known external
predictors of risk. Having validated our approach, we show it can be used to
improve urban flood detection: our analysis reveals 113,738 people who are at
high risk of flooding overlooked by current methods, identifies demographic
biases in existing methods, and suggests locations for new flood sensors. More
broadly, our results showcase how Bayesian modeling of zero-shot LM annotations
represents a promising paradigm because it avoids the need to collect large
labeled datasets and leverages the power of foundation models while providing
the expressiveness and uncertainty quantification of Bayesian models.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 21:53:37 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 12:25:03 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Franchi",
"Matt",
""
],
[
"Garg",
"Nikhil",
""
],
[
"Ju",
"Wendy",
""
],
[
"Pierson",
"Emma",
""
]
] | TITLE: Bayesian Modeling of Zero-Shot Classifications for Urban Flood Detection
ABSTRACT: Street scene datasets, collected from Street View or dashboard cameras, offer
a promising means of detecting urban objects and incidents like street
flooding. However, a major challenge in using these datasets is their lack of
reliable labels: there are myriad types of incidents, many types occur rarely,
and ground-truth measures of where incidents occur are lacking. Here, we
propose BayFlood, a two-stage approach which circumvents this difficulty.
First, we perform zero-shot classification of where incidents occur using a
pretrained vision-language model (VLM). Second, we fit a spatial Bayesian model
on the VLM classifications. The zero-shot approach avoids the need to annotate
large training sets, and the Bayesian model provides frequent desiderata in
urban settings - principled measures of uncertainty, smoothing across
locations, and incorporation of external data like stormwater accumulation
zones. We comprehensively validate this two-stage approach, showing that VLMs
provide strong zero-shot signal for floods across multiple cities and time
periods, the Bayesian model improves out-of-sample prediction relative to
baseline methods, and our inferred flood risk correlates with known external
predictors of risk. Having validated our approach, we show it can be used to
improve urban flood detection: our analysis reveals 113,738 people who are at
high risk of flooding overlooked by current methods, identifies demographic
biases in existing methods, and suggests locations for new flood sensors. More
broadly, our results showcase how Bayesian modeling of zero-shot LM annotations
represents a promising paradigm because it avoids the need to collect large
labeled datasets and leverages the power of foundation models while providing
the expressiveness and uncertainty quantification of Bayesian models.
|
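The BayFlood record above fits a spatial Bayesian model on zero-shot VLM classifications. As a much simpler stand-in for the idea of smoothing noisy per-location detections, the sketch below applies a beta-binomial posterior to counts of flood-positive images per area; it ignores the spatial structure and external covariates of the actual model, and the counts are invented.

```python
# Beta-binomial smoothing of per-area zero-shot flood detections: a much
# simpler stand-in for the paper's spatial Bayesian model (no spatial prior,
# no external covariates). Counts are invented.
detections = {                     # area: (flood-positive images, total images)
    "area_a": (3, 40),
    "area_b": (1, 5),
    "area_c": (0, 120),
}
alpha0, beta0 = 1.0, 20.0          # prior belief: floods are rare

for area, (k, n) in detections.items():
    post_mean = (alpha0 + k) / (alpha0 + beta0 + n)
    print(f"{area}: raw rate {k / n:.3f}, posterior mean {post_mean:.3f}")
```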
2503.15013 | Caifeng Zou | Caifeng Zou, Zachary E. Ross, Robert W. Clayton, Fan-Chi Lin, and
Kamyar Azizzadenesheli | Ambient Noise Full Waveform Inversion with Neural Operators | Added references | null | null | null | physics.geo-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerical simulations of seismic wave propagation are crucial for
investigating velocity structures and improving seismic hazard assessment.
However, standard methods such as finite difference or finite element are
computationally expensive. Recent studies have shown that a new class of
machine learning models, called neural operators, can solve the elastodynamic
wave equation orders of magnitude faster than conventional methods. Full
waveform inversion is a prime beneficiary of the accelerated simulations.
Neural operators, as end-to-end differentiable operators, combined with
automatic differentiation, provide an alternative approach to the adjoint-state
method. Since neural operators do not involve the Born approximation, when used
for full waveform inversion they have the potential to include additional
phases and alleviate cycle-skipping problems present in traditional
adjoint-state formulations. In this study, we demonstrate the first application
of neural operators for full waveform inversion on a real seismic dataset,
which consists of several nodal transects collected across the San Gabriel,
Chino, and San Bernardino basins in the Los Angeles metropolitan area.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:10:43 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 21:50:39 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zou",
"Caifeng",
""
],
[
"Ross",
"Zachary E.",
""
],
[
"Clayton",
"Robert W.",
""
],
[
"Lin",
"Fan-Chi",
""
],
[
"Azizzadenesheli",
"Kamyar",
""
]
] | TITLE: Ambient Noise Full Waveform Inversion with Neural Operators
ABSTRACT: Numerical simulations of seismic wave propagation are crucial for
investigating velocity structures and improving seismic hazard assessment.
However, standard methods such as finite difference or finite element are
computationally expensive. Recent studies have shown that a new class of
machine learning models, called neural operators, can solve the elastodynamic
wave equation orders of magnitude faster than conventional methods. Full
waveform inversion is a prime beneficiary of the accelerated simulations.
Neural operators, as end-to-end differentiable operators, combined with
automatic differentiation, provide an alternative approach to the adjoint-state
method. Since neural operators do not involve the Born approximation, when used
for full waveform inversion they have the potential to include additional
phases and alleviate cycle-skipping problems present in traditional
adjoint-state formulations. In this study, we demonstrate the first application
of neural operators for full waveform inversion on a real seismic dataset,
which consists of several nodal transects collected across the San Gabriel,
Chino, and San Bernardino basins in the Los Angeles metropolitan area.
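
A minimal PyTorch sketch of the inversion loop implied above: a frozen differentiable surrogate plays the role of the trained neural operator, and automatic differentiation through it supplies the velocity-model gradient in place of the adjoint-state method. The tiny MLP, the tensor sizes, and the random "observed" waveforms are placeholders, not the paper's operator or data.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    class SurrogateForward(nn.Module):
        """Stand-in for a trained neural operator mapping a velocity model to waveforms."""
        def __init__(self, n_model=64, n_data=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_model, 256), nn.Tanh(), nn.Linear(256, n_data))

        def forward(self, velocity):
            return self.net(velocity)

    forward_op = SurrogateForward()
    for p in forward_op.parameters():          # operator weights stay fixed during inversion
        p.requires_grad_(False)

    observed = torch.randn(128)                        # placeholder recorded waveforms
    velocity = torch.zeros(64, requires_grad=True)     # unknown model to invert for
    opt = torch.optim.Adam([velocity], lr=1e-2)

    for step in range(200):
        opt.zero_grad()
        misfit = torch.mean((forward_op(velocity) - observed) ** 2)   # L2 waveform misfit
        misfit.backward()        # autodiff through the operator gives d(misfit)/d(velocity)
        opt.step()

    print(f"final misfit: {misfit.item():.4f}")
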
|
2503.15893 | Jiawei Wang | Jiawei Wang and Kai Hu and Qiang Huo | UniHDSA: A Unified Relation Prediction Approach for Hierarchical
Document Structure Analysis | Accepted by Pattern Recognition. arXiv admin note: text overlap with
arXiv:2405.11757 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Document structure analysis, aka document layout analysis, is crucial for
understanding both the physical layout and logical structure of documents,
serving information retrieval, document summarization, knowledge extraction,
etc. Hierarchical Document Structure Analysis (HDSA) specifically aims to
restore the hierarchical structure of documents created using authoring
software with hierarchical schemas. Previous research has primarily followed
two approaches: one focuses on tackling specific subtasks of HDSA in isolation,
such as table detection or reading order prediction, while the other adopts a
unified framework that uses multiple branches or modules, each designed to
address a distinct task. In this work, we propose a unified relation prediction
approach for HDSA, called UniHDSA, which treats various HDSA sub-tasks as
relation prediction problems and consolidates relation prediction labels into a
unified label space. This allows a single relation prediction module to handle
multiple tasks simultaneously, whether at a page-level or document-level
structure analysis. To validate the effectiveness of UniHDSA, we develop a
multimodal end-to-end system based on Transformer architectures. Extensive
experimental results demonstrate that our approach achieves state-of-the-art
performance on a hierarchical document structure analysis benchmark,
Comp-HRDoc, and competitive results on a large-scale document layout analysis
dataset, DocLayNet, effectively illustrating the superiority of our method
across all sub-tasks. The Comp-HRDoc benchmark and UniHDSA's configurations are
publicly available at https://github.com/microsoft/CompHRDoc.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 06:44:47 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 02:59:45 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wang",
"Jiawei",
""
],
[
"Hu",
"Kai",
""
],
[
"Huo",
"Qiang",
""
]
] | TITLE: UniHDSA: A Unified Relation Prediction Approach for Hierarchical
Document Structure Analysis
ABSTRACT: Document structure analysis, aka document layout analysis, is crucial for
understanding both the physical layout and logical structure of documents,
serving information retrieval, document summarization, knowledge extraction,
etc. Hierarchical Document Structure Analysis (HDSA) specifically aims to
restore the hierarchical structure of documents created using authoring
software with hierarchical schemas. Previous research has primarily followed
two approaches: one focuses on tackling specific subtasks of HDSA in isolation,
such as table detection or reading order prediction, while the other adopts a
unified framework that uses multiple branches or modules, each designed to
address a distinct task. In this work, we propose a unified relation prediction
approach for HDSA, called UniHDSA, which treats various HDSA sub-tasks as
relation prediction problems and consolidates relation prediction labels into a
unified label space. This allows a single relation prediction module to handle
multiple tasks simultaneously, whether at a page-level or document-level
structure analysis. To validate the effectiveness of UniHDSA, we develop a
multimodal end-to-end system based on Transformer architectures. Extensive
experimental results demonstrate that our approach achieves state-of-the-art
performance on a hierarchical document structure analysis benchmark,
Comp-HRDoc, and competitive results on a large-scale document layout analysis
dataset, DocLayNet, effectively illustrating the superiority of our method
across all sub-tasks. The Comp-HRDoc benchmark and UniHDSA's configurations are
publicly available at https://github.com/microsoft/CompHRDoc.
|
2503.16973 | Wentao Jiang | Wentao Jiang, Jingya Wang, Haotao Lu, Kaiyang Ji, Baoxiong Jia, Siyuan
Huang, Ye Shi | ARFlow: Human Action-Reaction Flow Matching with Physical Guidance | Project Page: https://arflow2025.github.io/ | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Human action-reaction synthesis, a fundamental challenge in modeling causal
human interactions, plays a critical role in applications ranging from virtual
reality to social robotics. While diffusion-based models have demonstrated
promising performance, they exhibit two key limitations for interaction
synthesis: reliance on complex noise-to-reaction generators with intricate
conditional mechanisms, and frequent physical violations in generated motions.
To address these issues, we propose Action-Reaction Flow Matching (ARFlow), a
novel framework that establishes direct action-to-reaction mappings,
eliminating the need for complex conditional mechanisms. Our approach
introduces two key innovations: an x1-prediction method that directly outputs
human motions instead of velocity fields, enabling explicit constraint
enforcement; and a training-free, gradient-based physical guidance mechanism
that effectively prevents body penetration artifacts during sampling. Extensive
experiments on NTU120 and Chi3D datasets demonstrate that ARFlow not only
outperforms existing methods in terms of Fr\'echet Inception Distance and
motion diversity but also significantly reduces body collisions, as measured by
our new Intersection Volume and Intersection Frequency metrics.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 09:41:24 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 08:43:09 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Jiang",
"Wentao",
""
],
[
"Wang",
"Jingya",
""
],
[
"Lu",
"Haotao",
""
],
[
"Ji",
"Kaiyang",
""
],
[
"Jia",
"Baoxiong",
""
],
[
"Huang",
"Siyuan",
""
],
[
"Shi",
"Ye",
""
]
] | TITLE: ARFlow: Human Action-Reaction Flow Matching with Physical Guidance
ABSTRACT: Human action-reaction synthesis, a fundamental challenge in modeling causal
human interactions, plays a critical role in applications ranging from virtual
reality to social robotics. While diffusion-based models have demonstrated
promising performance, they exhibit two key limitations for interaction
synthesis: reliance on complex noise-to-reaction generators with intricate
conditional mechanisms, and frequent physical violations in generated motions.
To address these issues, we propose Action-Reaction Flow Matching (ARFlow), a
novel framework that establishes direct action-to-reaction mappings,
eliminating the need for complex conditional mechanisms. Our approach
introduces two key innovations: an x1-prediction method that directly outputs
human motions instead of velocity fields, enabling explicit constraint
enforcement; and a training-free, gradient-based physical guidance mechanism
that effectively prevents body penetration artifacts during sampling. Extensive
experiments on NTU120 and Chi3D datasets demonstrate that ARFlow not only
outperforms existing methods in terms of Fr\'echet Inception Distance and
motion diversity but also significantly reduces body collisions, as measured by
our new Intersection Volume and Intersection Frequency metrics.
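
A minimal PyTorch sketch of the x1-prediction flow-matching objective described above: noise and the ground-truth reaction are interpolated along a straight path, and the network is trained to output the clean reaction directly rather than a velocity field. The MLP, the feature dimensions, and the omission of the physical-guidance term are simplifications for illustration.

    import torch
    import torch.nn as nn

    class X1Predictor(nn.Module):
        """Predicts the clean reaction x1 from (x_t, t, action condition)."""
        def __init__(self, dim=63, cond_dim=63, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim + cond_dim + 1, hidden), nn.SiLU(),
                nn.Linear(hidden, hidden), nn.SiLU(),
                nn.Linear(hidden, dim),
            )

        def forward(self, x_t, t, action):
            return self.net(torch.cat([x_t, action, t], dim=-1))

    model = X1Predictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(action, reaction):
        x0 = torch.randn_like(reaction)            # pure-noise endpoint
        t = torch.rand(reaction.size(0), 1)        # random time in [0, 1]
        x_t = (1.0 - t) * x0 + t * reaction        # point on the straight-line path
        pred_x1 = model(x_t, t, action)            # x1-prediction instead of velocity
        loss = torch.mean((pred_x1 - reaction) ** 2)
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    print(train_step(torch.randn(8, 63), torch.randn(8, 63)))

At sampling time the predicted x1 can be converted back to a velocity as (x1 - x_t) / (1 - t), which is one place a gradient-based penetration penalty of the kind described above could be injected; that step is not shown here.
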
|
2503.17122 | Lei Wan | Jonas Mirlach and Lei Wan and Andreas Wiedholz and Hannan Ejaz Keen
and Andreas Eich | R-LiViT: A LiDAR-Visual-Thermal Dataset Enabling Vulnerable Road User
Focused Roadside Perception | 10 pages, 7 figures, submitted to ICCV2025 | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | In autonomous driving, the integration of roadside perception systems is
essential for overcoming occlusion challenges and enhancing the safety of
Vulnerable Road Users (VRUs). While LiDAR and visual (RGB) sensors are commonly
used, thermal imaging remains underrepresented in datasets, despite its
acknowledged advantages for VRU detection in extreme lighting conditions. In
this paper, we present R-LiViT, the first dataset to combine LiDAR, RGB, and
thermal imaging from a roadside perspective, with a strong focus on VRUs.
R-LiViT captures three intersections during both day and night, ensuring a
diverse dataset. It includes 10,000 LiDAR frames and 2,400 temporally and
spatially aligned RGB and thermal images across over 150 traffic scenarios,
with 6 and 8 annotated classes respectively, providing a comprehensive resource
for tasks such as object detection and tracking. The dataset and the code for
reproducing our evaluation results are made publicly available.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 13:17:28 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 17:38:07 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Mirlach",
"Jonas",
""
],
[
"Wan",
"Lei",
""
],
[
"Wiedholz",
"Andreas",
""
],
[
"Keen",
"Hannan Ejaz",
""
],
[
"Eich",
"Andreas",
""
]
] | TITLE: R-LiViT: A LiDAR-Visual-Thermal Dataset Enabling Vulnerable Road User
Focused Roadside Perception
ABSTRACT: In autonomous driving, the integration of roadside perception systems is
essential for overcoming occlusion challenges and enhancing the safety of
Vulnerable Road Users (VRUs). While LiDAR and visual (RGB) sensors are commonly
used, thermal imaging remains underrepresented in datasets, despite its
acknowledged advantages for VRU detection in extreme lighting conditions. In
this paper, we present R-LiViT, the first dataset to combine LiDAR, RGB, and
thermal imaging from a roadside perspective, with a strong focus on VRUs.
R-LiViT captures three intersections during both day and night, ensuring a
diverse dataset. It includes 10,000 LiDAR frames and 2,400 temporally and
spatially aligned RGB and thermal images across over 150 traffic scenarios,
with 6 and 8 annotated classes respectively, providing a comprehensive resource
for tasks such as object detection and tracking. The dataset and the code for
reproducing our evaluation results are made publicly available.
|
2503.18147 | Ke Niu | Ke Niu, Yuwen Chen, Haiyang Yu, Zhuofan Chen, Xianghui Que, Bin Li,
Xiangyang Xue | PHT-CAD: Efficient CAD Parametric Primitive Analysis with Progressive
Hierarchical Tuning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer-Aided Design (CAD) plays a pivotal role in industrial manufacturing,
yet 2D Parametric Primitive Analysis (PPA) remains underexplored due to two key
challenges: structural constraint reasoning and advanced semantic
understanding. To tackle these challenges, we first propose an Efficient Hybrid
Parametrization (EHP) for better representing 2D engineering drawings. EHP
contains four types of atomic components (i.e., point, line, circle, and arc).
Additionally, we propose PHT-CAD, a novel 2D PPA framework that harnesses the
modality alignment and reasoning capabilities of Vision-Language Models (VLMs)
for precise engineering drawing analysis. In PHT-CAD, we introduce four
dedicated regression heads to predict corresponding atomic components. To train
PHT-CAD, a three-stage training paradigm Progressive Hierarchical Tuning (PHT)
is proposed to progressively enhance PHT-CAD's capability to perceive
individual primitives, infer structural constraints, and align annotation
layers with their corresponding geometric representations. Considering that
existing datasets lack complete annotation layers and real-world engineering
drawings, we introduce ParaCAD, the first large-scale benchmark that explicitly
integrates both the geometric and annotation layers. ParaCAD comprises over 10
million annotated drawings for training and 3,000 real-world industrial
drawings with complex topological structures and physical constraints for testing.
Extensive experiments demonstrate the effectiveness of PHT-CAD and highlight
the practical significance of ParaCAD in advancing 2D PPA research.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 17:24:32 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 10:42:11 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Niu",
"Ke",
""
],
[
"Chen",
"Yuwen",
""
],
[
"Yu",
"Haiyang",
""
],
[
"Chen",
"Zhuofan",
""
],
[
"Que",
"Xianghui",
""
],
[
"Li",
"Bin",
""
],
[
"Xue",
"Xiangyang",
""
]
] | TITLE: PHT-CAD: Efficient CAD Parametric Primitive Analysis with Progressive
Hierarchical Tuning
ABSTRACT: Computer-Aided Design (CAD) plays a pivotal role in industrial manufacturing,
yet 2D Parametric Primitive Analysis (PPA) remains underexplored due to two key
challenges: structural constraint reasoning and advanced semantic
understanding. To tackle these challenges, we first propose an Efficient Hybrid
Parametrization (EHP) for better representing 2D engineering drawings. EHP
contains four types of atomic components (i.e., point, line, circle, and arc).
Additionally, we propose PHT-CAD, a novel 2D PPA framework that harnesses the
modality alignment and reasoning capabilities of Vision-Language Models (VLMs)
for precise engineering drawing analysis. In PHT-CAD, we introduce four
dedicated regression heads to predict corresponding atomic components. To train
PHT-CAD, a three-stage training paradigm Progressive Hierarchical Tuning (PHT)
is proposed to progressively enhance PHT-CAD's capability to perceive
individual primitives, infer structural constraints, and align annotation
layers with their corresponding geometric representations. Considering that
existing datasets lack complete annotation layers and real-world engineering
drawings, we introduce ParaCAD, the first large-scale benchmark that explicitly
integrates both the geometric and annotation layers. ParaCAD comprises over 10
million annotated drawings for training and 3,000 real-world industrial
drawings with complex topological structures and physical constraints for testing.
Extensive experiments demonstrate the effectiveness of PHT-CAD and highlight
the practical significance of ParaCAD in advancing 2D PPA research.
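
A small Python sketch of one way the four atomic primitive types named above (point, line, circle, arc) could be represented, with a per-type parameter vector of the kind a dedicated regression head would output. The field names and encodings are illustrative guesses, not the paper's EHP definition.

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Point:
        x: float
        y: float

    @dataclass
    class Line:
        start: Point
        end: Point

    @dataclass
    class Circle:
        center: Point
        radius: float

    @dataclass
    class Arc:
        center: Point
        radius: float
        start_angle: float   # radians
        end_angle: float

    Primitive = Union[Point, Line, Circle, Arc]

    def to_params(p: Primitive) -> List[float]:
        """Flatten a primitive into the parameter vector its regression head would predict."""
        if isinstance(p, Point):
            return [p.x, p.y]
        if isinstance(p, Line):
            return [p.start.x, p.start.y, p.end.x, p.end.y]
        if isinstance(p, Circle):
            return [p.center.x, p.center.y, p.radius]
        return [p.center.x, p.center.y, p.radius, p.start_angle, p.end_angle]

    drawing: List[Primitive] = [
        Line(Point(0, 0), Point(10, 0)),
        Arc(Point(10, 5), 5.0, -1.5708, 1.5708),
        Circle(Point(3, 3), 1.0),
    ]
    print([to_params(p) for p in drawing])
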
|
2503.18211 | Zhengyuan Li | Zhengyuan Li, Kai Cheng, Anindita Ghosh, Uttaran Bhattacharya,
Liangyan Gui, Aniket Bera | SimMotionEdit: Text-Based Human Motion Editing with Motion Similarity
Prediction | Project URL: https://github.com/lzhyu/SimMotionEdit | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text-based 3D human motion editing is a critical yet challenging task in
computer vision and graphics. While training-free approaches have been
explored, the recent release of the MotionFix dataset, which includes
source-text-motion triplets, has opened new avenues for training, yielding
promising results. However, existing methods struggle with precise control,
often leading to misalignment between motion semantics and language
instructions. In this paper, we introduce a related task, motion similarity
prediction, and propose a multi-task training paradigm, where we train the
model jointly on motion editing and motion similarity prediction to foster the
learning of semantically meaningful representations. To complement this task,
we design an advanced Diffusion-Transformer-based architecture that separately
handles motion similarity prediction and motion editing. Extensive experiments
demonstrate the state-of-the-art performance of our approach in both editing
alignment and fidelity.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 21:29:37 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 20:31:03 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Li",
"Zhengyuan",
""
],
[
"Cheng",
"Kai",
""
],
[
"Ghosh",
"Anindita",
""
],
[
"Bhattacharya",
"Uttaran",
""
],
[
"Gui",
"Liangyan",
""
],
[
"Bera",
"Aniket",
""
]
] | TITLE: SimMotionEdit: Text-Based Human Motion Editing with Motion Similarity
Prediction
ABSTRACT: Text-based 3D human motion editing is a critical yet challenging task in
computer vision and graphics. While training-free approaches have been
explored, the recent release of the MotionFix dataset, which includes
source-text-motion triplets, has opened new avenues for training, yielding
promising results. However, existing methods struggle with precise control,
often leading to misalignment between motion semantics and language
instructions. In this paper, we introduce a related task, motion similarity
prediction, and propose a multi-task training paradigm, where we train the
model jointly on motion editing and motion similarity prediction to foster the
learning of semantically meaningful representations. To complement this task,
we design an advanced Diffusion-Transformer-based architecture that separately
handles motion similarity prediction and motion editing. Extensive experiments
demonstrate the state-of-the-art performance of our approach in both editing
alignment and fidelity.
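
A heavily simplified PyTorch sketch of the multi-task paradigm above: one shared backbone with an editing head and a motion-similarity head trained jointly, so the similarity task shapes the shared features. The actual model is a Diffusion-Transformer architecture with separate handling of the two tasks; this MLP, the 0.5 loss weight, and all dimensions are placeholders.

    import torch
    import torch.nn as nn

    class EditorWithSimilarityHead(nn.Module):
        def __init__(self, motion_dim=66, text_dim=128, hidden=256):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(motion_dim + text_dim, hidden), nn.SiLU(),
                                          nn.Linear(hidden, hidden), nn.SiLU())
            self.edit_head = nn.Linear(hidden, motion_dim)   # predicts the edited motion
            self.sim_head = nn.Linear(hidden, 1)             # predicts a similarity score in [0, 1]

        def forward(self, source_motion, text_emb):
            h = self.backbone(torch.cat([source_motion, text_emb], dim=-1))
            return self.edit_head(h), torch.sigmoid(self.sim_head(h))

    model = EditorWithSimilarityHead()
    src, txt = torch.randn(8, 66), torch.randn(8, 128)
    target, sim_label = torch.randn(8, 66), torch.rand(8, 1)

    edited, sim_pred = model(src, txt)
    loss = nn.functional.mse_loss(edited, target) + 0.5 * nn.functional.mse_loss(sim_pred, sim_label)
    loss.backward()
    print(float(loss))
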
|
2503.18227 | Zihong Luo | Yiheng Zhong, Zihong Luo, Chengzhi Liu, Feilong Tang, Zelin Peng, Ming
Hu, Yingzhen Hu, Jionglong Su, Zongyuan Ge and Imran Razzak | PG-SAM: Prior-Guided SAM with Medical for Multi-organ Segmentation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Segment Anything Model (SAM) demonstrates powerful zero-shot capabilities;
however, its accuracy and robustness significantly decrease when applied to
medical image segmentation. Existing methods address this issue through
modality fusion, integrating textual and image information to provide more
detailed priors. In this study, we argue that the granularity of text and the
domain gap affect the accuracy of the priors. Furthermore, the discrepancy
between high-level abstract semantics and pixel-level boundary details in
images can introduce noise into the fusion process. To address this, we propose
Prior-Guided SAM (PG-SAM), which employs a fine-grained modality prior aligner
to leverage specialized medical knowledge for better modality alignment. The
core of our method lies in efficiently addressing the domain gap with
fine-grained text from a medical LLM. Meanwhile, it also enhances the priors'
quality after modality alignment, ensuring more accurate segmentation. In
addition, our decoder enhances the model's expressive capabilities through
multi-level feature fusion and iterative mask optimizer operations, supporting
unprompted learning. We also propose a unified pipeline that effectively
supplies high-quality semantic information to SAM. Extensive experiments on the
Synapse dataset demonstrate that the proposed PG-SAM achieves state-of-the-art
performance. Our code is released at https://github.com/logan-0623/PG-SAM.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 22:06:07 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 13:25:06 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 13:38:40 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhong",
"Yiheng",
""
],
[
"Luo",
"Zihong",
""
],
[
"Liu",
"Chengzhi",
""
],
[
"Tang",
"Feilong",
""
],
[
"Peng",
"Zelin",
""
],
[
"Hu",
"Ming",
""
],
[
"Hu",
"Yingzhen",
""
],
[
"Su",
"Jionglong",
""
],
[
"Ge",
"Zongyuan",
""
],
[
"Razzak",
"Imran",
""
]
] | TITLE: PG-SAM: Prior-Guided SAM with Medical for Multi-organ Segmentation
ABSTRACT: Segment Anything Model (SAM) demonstrates powerful zero-shot capabilities;
however, its accuracy and robustness significantly decrease when applied to
medical image segmentation. Existing methods address this issue through
modality fusion, integrating textual and image information to provide more
detailed priors. In this study, we argue that the granularity of text and the
domain gap affect the accuracy of the priors. Furthermore, the discrepancy
between high-level abstract semantics and pixel-level boundary details in
images can introduce noise into the fusion process. To address this, we propose
Prior-Guided SAM (PG-SAM), which employs a fine-grained modality prior aligner
to leverage specialized medical knowledge for better modality alignment. The
core of our method lies in efficiently addressing the domain gap with
fine-grained text from a medical LLM. Meanwhile, it also enhances the priors'
quality after modality alignment, ensuring more accurate segmentation. In
addition, our decoder enhances the model's expressive capabilities through
multi-level feature fusion and iterative mask optimizer operations, supporting
unprompted learning. We also propose a unified pipeline that effectively
supplies high-quality semantic information to SAM. Extensive experiments on the
Synapse dataset demonstrate that the proposed PG-SAM achieves state-of-the-art
performance. Our code is released at https://github.com/logan-0623/PG-SAM.
|
2503.18395 | Shuzhi Cao | Rong Chen, Shuzhi Cao, Ailong He, Shuguang Han, Jufeng Chen | PRECTR: A Synergistic Framework for Integrating Personalized Search
Relevance Matching and CTR Prediction | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The two primary tasks in the search recommendation system are search
relevance matching and click-through rate (CTR) prediction -- the former
focuses on seeking relevant items for user queries whereas the latter forecasts
which item may better match user interest. Prior research typically develops
two models to predict the CTR and search relevance separately, then ranks
candidate items based on the fusion of the two outputs. However, such a
divide-and-conquer paradigm creates the inconsistency between different models.
Meanwhile, the search relevance model mainly concentrates on the degree of
objective text matching while neglecting personalized differences among
different users, leading to restricted model performance. To tackle these
issues, we propose a unified Personalized Search RElevance Matching and CTR
Prediction Fusion Model (PRECTR). Specifically, based on the conditional
probability fusion mechanism, PRECTR integrates the CTR prediction and search
relevance matching into one framework to enhance the interaction and
consistency of the two modules. However, directly optimizing CTR binary
classification loss may bring challenges to the fusion model's convergence and
indefinitely promote the exposure of items with high CTR, regardless of their
search relevance. Hence, we further introduce two-stage training and semantic
consistency regularization to accelerate the model's convergence and restrain
the recommendation of irrelevant items. Finally, acknowledging that different
users may have varied relevance preferences, we assessed current users'
relevance preferences by analyzing past users' preferences for similar queries
and tailored incentives for different candidate items accordingly. Extensive
experimental results on our production dataset and online A/B testing
demonstrate the effectiveness and superiority of our proposed PRECTR method.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 07:07:04 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 14:38:27 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Chen",
"Rong",
""
],
[
"Cao",
"Shuzhi",
""
],
[
"He",
"Ailong",
""
],
[
"Han",
"Shuguang",
""
],
[
"Chen",
"Jufeng",
""
]
] | TITLE: PRECTR: A Synergistic Framework for Integrating Personalized Search
Relevance Matching and CTR Prediction
ABSTRACT: The two primary tasks in the search recommendation system are search
relevance matching and click-through rate (CTR) prediction -- the former
focuses on seeking relevant items for user queries whereas the latter forecasts
which item may better match user interest. Prior research typically develops
two models to predict the CTR and search relevance separately, then ranks
candidate items based on the fusion of the two outputs. However, such a
divide-and-conquer paradigm creates the inconsistency between different models.
Meanwhile, the search relevance model mainly concentrates on the degree of
objective text matching while neglecting personalized differences among
different users, leading to restricted model performance. To tackle these
issues, we propose a unified Personalized Search RElevance Matching and CTR
Prediction Fusion Model (PRECTR). Specifically, based on the conditional
probability fusion mechanism, PRECTR integrates the CTR prediction and search
relevance matching into one framework to enhance the interaction and
consistency of the two modules. However, directly optimizing CTR binary
classification loss may bring challenges to the fusion model's convergence and
indefinitely promote the exposure of items with high CTR, regardless of their
search relevance. Hence, we further introduce two-stage training and semantic
consistency regularization to accelerate the model's convergence and restrain
the recommendation of irrelevant items. Finally, acknowledging that different
users may have varied relevance preferences, we assessed current users'
relevance preferences by analyzing past users' preferences for similar queries
and tailored incentives for different candidate items accordingly. Extensive
experimental results on our production dataset and online A/B testing
demonstrate the effectiveness and superiority of our proposed PRECTR method.
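
A minimal PyTorch sketch of one plausible reading of the conditional-probability fusion above: a shared encoder feeds a relevance head and a CTR head, and the ranking score is P(relevant) * P(click | relevant). The factorization, network shapes, and feature dimensions are assumptions for illustration, not the production model.

    import torch
    import torch.nn as nn

    class FusedRelevanceCTR(nn.Module):
        """One shared encoder, two heads; the final score multiplies the two probabilities."""
        def __init__(self, in_dim=128, hidden=256):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.relevance_head = nn.Linear(hidden, 1)
            self.ctr_head = nn.Linear(hidden, 1)

        def forward(self, features):
            h = self.encoder(features)
            p_rel = torch.sigmoid(self.relevance_head(h))        # P(relevant | query, item, user)
            p_click_given_rel = torch.sigmoid(self.ctr_head(h))  # P(click | relevant, ...)
            return p_rel * p_click_given_rel, p_rel              # fused ranking score

    model = FusedRelevanceCTR()
    score, p_rel = model(torch.randn(4, 128))
    print(score.shape)
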
|
2503.18603 | Jong Myoung Kim | Jong Myoung Kim, Young-Jun Lee, Ho-Jin Choi, Sangkeun Jung | LANGALIGN: Enhancing Non-English Language Models via Cross-Lingual
Embedding Alignment | now preparing | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | While Large Language Models have gained attention, many service developers
still rely on embedding-based models due to practical constraints. In such
cases, the quality of fine-tuning data directly impacts performance, and
English datasets are often used as seed data for training non-English models.
In this study, we propose LANGALIGN, which enhances target language processing
by aligning English embedding vectors with those of the target language at the
interface between the language model and the task header. Experiments on
Korean, Japanese, and Chinese demonstrate that LANGALIGN significantly improves
performance across all three languages. Additionally, we show that LANGALIGN
can be applied in reverse to convert target language data into a format that an
English-based model can process.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 12:02:26 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 23:15:05 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Kim",
"Jong Myoung",
""
],
[
"Lee",
"Young-Jun",
""
],
[
"Choi",
"Ho-Jin",
""
],
[
"Jung",
"Sangkeun",
""
]
] | TITLE: LANGALIGN: Enhancing Non-English Language Models via Cross-Lingual
Embedding Alignment
ABSTRACT: While Large Language Models have gained attention, many service developers
still rely on embedding-based models due to practical constraints. In such
cases, the quality of fine-tuning data directly impacts performance, and
English datasets are often used as seed data for training non-English models.
In this study, we propose LANGALIGN, which enhances target language processing
by aligning English embedding vectors with those of the target language at the
interface between the language model and the task header. Experiments on
Korean, Japanese, and Chinese demonstrate that LANGALIGN significantly improves
performance across all three languages. Additionally, we show that LANGALIGN
can be applied in reverse to convert target language data into a format that an
English-based model can process.
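
A NumPy sketch of the alignment idea above under a strong simplifying assumption: given paired English / target-language sentence embeddings, a linear map fitted by least squares sends target-language vectors into the English embedding space so an existing English task head can consume them. The synthetic embeddings and the purely linear aligner are stand-ins for whatever aligner the method actually trains.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_pairs = 256, 2000

    # Paired sentence embeddings: English e_i and its translation t_i (e.g., Korean).
    E = rng.normal(size=(n_pairs, d))                            # English embeddings (placeholder)
    true_map = rng.normal(size=(d, d)) / np.sqrt(d)
    T = E @ true_map + 0.05 * rng.normal(size=(n_pairs, d))      # target-language embeddings

    # Fit a linear aligner W so that T @ W ~= E (ordinary least squares).
    W, *_ = np.linalg.lstsq(T, E, rcond=None)

    aligned = T @ W       # target-language vectors, now usable by the English task head
    print("alignment MSE:", float(np.mean((aligned - E) ** 2)))

    # The reverse direction mentioned above (English -> target space) is the symmetric
    # fit: np.linalg.lstsq(E, T, rcond=None).
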
|
2503.19070 | Yubing Lu | Jiazhu Dai, Yubing Lu | Graph-Level Label-Only Membership Inference Attack against Graph Neural
Networks | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural networks (GNNs) are widely used for graph-structured data but
are vulnerable to membership inference attacks (MIAs) in graph classification
tasks, which determine if a graph was part of the training dataset, potentially
causing data leakage. Existing MIAs rely on prediction probability vectors, but
they become ineffective when only prediction labels are available. We propose a
Graph-level Label-Only Membership Inference Attack (GLO-MIA), which is based on
the intuition that the target model's predictions on training data are more
stable than those on testing data. GLO-MIA generates a set of perturbed graphs
for the target graph by adding perturbations to its effective features and queries
the target model with the perturbed graphs to get their prediction labels,
which are then used to calculate the robustness score of the target graph. Finally,
by comparing the robustness score with a predefined threshold, the membership
of the target graph can be inferred correctly with high probability. Our
evaluation on three datasets and four GNN models shows that GLO-MIA achieves an
attack accuracy of up to 0.825, outperforming baseline work by 8.5% and closely
matching the performance of probability-based MIAs, even with only prediction
labels.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 18:55:02 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 06:48:09 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Dai",
"Jiazhu",
""
],
[
"Lu",
"Yubing",
""
]
] | TITLE: Graph-Level Label-Only Membership Inference Attack against Graph Neural
Networks
ABSTRACT: Graph neural networks (GNNs) are widely used for graph-structured data but
are vulnerable to membership inference attacks (MIAs) in graph classification
tasks, which determine if a graph was part of the training dataset, potentially
causing data leakage. Existing MIAs rely on prediction probability vectors, but
they become ineffective when only prediction labels are available. We propose a
Graph-level Label-Only Membership Inference Attack (GLO-MIA), which is based on
the intuition that the target model's predictions on training data are more
stable than those on testing data. GLO-MIA generates a set of perturbed graphs
for the target graph by adding perturbations to its effective features and queries
the target model with the perturbed graphs to get their prediction labels,
which are then used to calculate the robustness score of the target graph. Finally,
by comparing the robustness score with a predefined threshold, the membership
of the target graph can be inferred correctly with high probability. Our
evaluation on three datasets and four GNN models shows that GLO-MIA achieves an
attack accuracy of up to 0.825, outperforming baseline work by 8.5% and closely
matching the performance of probability-based MIAs, even with only prediction
labels.
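
A compact Python sketch of the attack loop described above: perturb the target graph several times, query the label-only model, take the agreement rate with the original prediction as the robustness score, and threshold it. The toy model, perturbation function, and threshold value are placeholders, not the paper's settings.

    import numpy as np

    def robustness_score(model_predict, graph, perturb_fn, n_perturb=32, eps=0.05, rng=None):
        """Fraction of perturbed copies whose predicted label matches the original prediction."""
        rng = rng or np.random.default_rng(0)
        base_label = model_predict(graph)
        agree = 0
        for _ in range(n_perturb):
            agree += int(model_predict(perturb_fn(graph, eps, rng)) == base_label)
        return agree / n_perturb

    def infer_membership(model_predict, graph, perturb_fn, threshold=0.8):
        # Training graphs are expected to yield more stable (higher) robustness scores.
        return robustness_score(model_predict, graph, perturb_fn) >= threshold

    # Toy stand-ins so the sketch runs end to end.
    def toy_predict(graph):                      # "model": label = sign of mean node feature
        return int(graph["x"].mean() > 0)

    def toy_perturb(graph, eps, rng):
        return {"x": graph["x"] + eps * rng.normal(size=graph["x"].shape)}

    g = {"x": np.full((10, 4), 0.3)}
    print(infer_membership(toy_predict, g, toy_perturb))
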
|
2503.19355 | Dohwan Ko | Dohwan Ko, Sihyeon Kim, Yumin Suh, Vijay Kumar B.G, Minseo Yoon,
Manmohan Chandraker, Hyunwoo J. Kim | ST-VLM: Kinematic Instruction Tuning for Spatio-Temporal Reasoning in
Vision-Language Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Spatio-temporal reasoning is essential in understanding real-world
environments in various fields, e.g., autonomous driving and sports analytics.
Recent advances have improved the spatial reasoning ability of Vision-Language
Models (VLMs) by introducing large-scale data, but these models still struggle
to analyze kinematic elements like traveled distance and speed of moving
objects. To bridge this gap, we construct a spatio-temporal reasoning dataset
and benchmark involving kinematic instruction tuning, referred to as STKit and
STKit-Bench. They consist of real-world videos with 3D annotations, detailing
object motion dynamics: traveled distance, speed, movement direction,
inter-object distance comparisons, and relative movement direction. To further
scale such data construction to videos without 3D labels, we propose an
automatic pipeline to generate pseudo-labels using 4D reconstruction in
real-world scale. With our kinematic instruction tuning data for
spatio-temporal reasoning, we present ST-VLM, a VLM enhanced for
spatio-temporal reasoning, which exhibits outstanding performance on
STKit-Bench. Furthermore, we show that ST-VLM generalizes robustly across
diverse domains and tasks, outperforming baselines on other spatio-temporal
benchmarks (e.g., ActivityNet, TVQA+). Finally, by integrating learned
spatio-temporal reasoning with existing abilities, ST-VLM enables complex
multi-step reasoning. Project page: https://ikodoh.github.io/ST-VLM.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 05:08:06 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 05:32:54 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Ko",
"Dohwan",
""
],
[
"Kim",
"Sihyeon",
""
],
[
"Suh",
"Yumin",
""
],
[
"G",
"Vijay Kumar B.",
""
],
[
"Yoon",
"Minseo",
""
],
[
"Chandraker",
"Manmohan",
""
],
[
"Kim",
"Hyunwoo J.",
""
]
] | TITLE: ST-VLM: Kinematic Instruction Tuning for Spatio-Temporal Reasoning in
Vision-Language Models
ABSTRACT: Spatio-temporal reasoning is essential in understanding real-world
environments in various fields, e.g., autonomous driving and sports analytics.
Recent advances have improved the spatial reasoning ability of Vision-Language
Models (VLMs) by introducing large-scale data, but these models still struggle
to analyze kinematic elements like traveled distance and speed of moving
objects. To bridge this gap, we construct a spatio-temporal reasoning dataset
and benchmark involving kinematic instruction tuning, referred to as STKit and
STKit-Bench. They consist of real-world videos with 3D annotations, detailing
object motion dynamics: traveled distance, speed, movement direction,
inter-object distance comparisons, and relative movement direction. To further
scale such data construction to videos without 3D labels, we propose an
automatic pipeline to generate pseudo-labels using 4D reconstruction in
real-world scale. With our kinematic instruction tuning data for
spatio-temporal reasoning, we present ST-VLM, a VLM enhanced for
spatio-temporal reasoning, which exhibits outstanding performance on
STKit-Bench. Furthermore, we show that ST-VLM generalizes robustly across
diverse domains and tasks, outperforming baselines on other spatio-temporal
benchmarks (e.g., ActivityNet, TVQA+). Finally, by integrating learned
spatio-temporal reasoning with existing abilities, ST-VLM enables complex
multi-step reasoning. Project page: https://ikodoh.github.io/ST-VLM.
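
A small NumPy sketch of the kind of kinematic pseudo-labels discussed above, computed from a metric-scale 3D track such as one produced by 4D reconstruction: traveled distance, mean speed, and net movement direction. The sampling rate, units, and toy trajectory are illustrative only.

    import numpy as np

    def kinematic_labels(track_xyz, timestamps):
        """Traveled distance (m), mean speed (m/s), and unit net-movement direction."""
        deltas = np.diff(track_xyz, axis=0)
        step_lengths = np.linalg.norm(deltas, axis=1)
        distance = float(step_lengths.sum())
        duration = float(timestamps[-1] - timestamps[0])
        speed = distance / duration if duration > 0 else 0.0
        net = track_xyz[-1] - track_xyz[0]
        direction = net / (np.linalg.norm(net) + 1e-9)
        return {"distance_m": distance, "mean_speed_mps": speed, "direction": direction}

    # Toy trajectory sampled at 10 Hz in metric scale: an object moving at 3 m/s along x.
    t = np.arange(0, 2.0, 0.1)
    track = np.stack([3.0 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
    print(kinematic_labels(track, t))
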
|
2503.19357 | Farzad Beizaee | Farzad Beizaee, Gregory A. Lodygensky, Christian Desrosiers, Jose Dolz | Correcting Deviations from Normality: A Reformulated Diffusion Model for
Multi-Class Unsupervised Anomaly Detection | null | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2025 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent advances in diffusion models have spurred research into their
application for Reconstruction-based unsupervised anomaly detection. However,
these methods may struggle with maintaining structural integrity and recovering
the anomaly-free content of abnormal regions, especially in multi-class
scenarios. Furthermore, diffusion models are inherently designed to generate
images from pure noise and struggle to selectively alter anomalous regions of
an image while preserving normal ones. This leads to potential degradation of
normal regions during reconstruction, hampering the effectiveness of anomaly
detection. This paper introduces a reformulation of the standard diffusion
model geared toward selective region alteration, allowing the accurate
identification of anomalies. By modeling anomalies as noise in the latent
space, our proposed Deviation correction diffusion (DeCo-Diff) model preserves
the normal regions and encourages transformations exclusively on anomalous
areas. This selective approach enhances the reconstruction quality,
facilitating effective unsupervised detection and localization of anomaly
regions. Comprehensive evaluations demonstrate the superiority of our method in
accurately identifying and localizing anomalies in complex images, with
pixel-level AUPRC improvements of 11-14% over state-of-the-art models on well
known anomaly detection datasets. The code is available at
https://github.com/farzad-bz/DeCo-Diff
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 05:14:40 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Beizaee",
"Farzad",
""
],
[
"Lodygensky",
"Gregory A.",
""
],
[
"Desrosiers",
"Christian",
""
],
[
"Dolz",
"Jose",
""
]
] | TITLE: Correcting Deviations from Normality: A Reformulated Diffusion Model for
Multi-Class Unsupervised Anomaly Detection
ABSTRACT: Recent advances in diffusion models have spurred research into their
application for Reconstruction-based unsupervised anomaly detection. However,
these methods may struggle with maintaining structural integrity and recovering
the anomaly-free content of abnormal regions, especially in multi-class
scenarios. Furthermore, diffusion models are inherently designed to generate
images from pure noise and struggle to selectively alter anomalous regions of
an image while preserving normal ones. This leads to potential degradation of
normal regions during reconstruction, hampering the effectiveness of anomaly
detection. This paper introduces a reformulation of the standard diffusion
model geared toward selective region alteration, allowing the accurate
identification of anomalies. By modeling anomalies as noise in the latent
space, our proposed Deviation correction diffusion (DeCo-Diff) model preserves
the normal regions and encourages transformations exclusively on anomalous
areas. This selective approach enhances the reconstruction quality,
facilitating effective unsupervised detection and localization of anomaly
regions. Comprehensive evaluations demonstrate the superiority of our method in
accurately identifying and localizing anomalies in complex images, with
pixel-level AUPRC improvements of 11-14% over state-of-the-art models on well
known anomaly detection datasets. The code is available at
https://github.com/farzad-bz/DeCo-Diff
|
2503.19551 | Xingxing Zhang | Zeyu Qin, Qingxiu Dong, Xingxing Zhang, Li Dong, Xiaolong Huang, Ziyi
Yang, Mahmoud Khademi, Dongdong Zhang, Hany Hassan Awadalla, Yi R. Fung,
Weizhu Chen, Minhao Cheng, Furu Wei | Scaling Laws of Synthetic Data for Language Models | work in progress | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) achieve strong performance across diverse tasks,
largely driven by high-quality web data used in pre-training. However, recent
studies indicate this data source is rapidly depleting. Synthetic data emerges
as a promising alternative, but it remains unclear whether synthetic datasets
exhibit predictable scalability comparable to raw pre-training data. In this
work, we systematically investigate the scaling laws of synthetic data by
introducing SynthLLM, a scalable framework that transforms pre-training corpora
into diverse, high-quality synthetic datasets. Our approach achieves this by
automatically extracting and recombining high-level concepts across multiple
documents using a graph algorithm. Key findings from our extensive mathematical
experiments on SynthLLM include: (1) SynthLLM generates synthetic data that
reliably adheres to the rectified scaling law across various model sizes; (2)
Performance improvements plateau near 300B tokens; and (3) Larger models
approach optimal performance with fewer training tokens. For instance, an 8B
model peaks at 1T tokens, while a 3B model requires 4T. Moreover, comparisons
with existing synthetic data generation and augmentation methods demonstrate
that SynthLLM achieves superior performance and scalability. Our findings
highlight synthetic data as a scalable and reliable alternative to organic
pre-training corpora, offering a viable path toward continued improvement in
model performance.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 11:07:12 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 11:23:44 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Qin",
"Zeyu",
""
],
[
"Dong",
"Qingxiu",
""
],
[
"Zhang",
"Xingxing",
""
],
[
"Dong",
"Li",
""
],
[
"Huang",
"Xiaolong",
""
],
[
"Yang",
"Ziyi",
""
],
[
"Khademi",
"Mahmoud",
""
],
[
"Zhang",
"Dongdong",
""
],
[
"Awadalla",
"Hany Hassan",
""
],
[
"Fung",
"Yi R.",
""
],
[
"Chen",
"Weizhu",
""
],
[
"Cheng",
"Minhao",
""
],
[
"Wei",
"Furu",
""
]
] | TITLE: Scaling Laws of Synthetic Data for Language Models
ABSTRACT: Large language models (LLMs) achieve strong performance across diverse tasks,
largely driven by high-quality web data used in pre-training. However, recent
studies indicate this data source is rapidly depleting. Synthetic data emerges
as a promising alternative, but it remains unclear whether synthetic datasets
exhibit predictable scalability comparable to raw pre-training data. In this
work, we systematically investigate the scaling laws of synthetic data by
introducing SynthLLM, a scalable framework that transforms pre-training corpora
into diverse, high-quality synthetic datasets. Our approach achieves this by
automatically extracting and recombining high-level concepts across multiple
documents using a graph algorithm. Key findings from our extensive mathematical
experiments on SynthLLM include: (1) SynthLLM generates synthetic data that
reliably adheres to the rectified scaling law across various model sizes; (2)
Performance improvements plateau near 300B tokens; and (3) Larger models
approach optimal performance with fewer training tokens. For instance, an 8B
model peaks at 1T tokens, while a 3B model requires 4T. Moreover, comparisons
with existing synthetic data generation and augmentation methods demonstrate
that SynthLLM achieves superior performance and scalability. Our findings
highlight synthetic data as a scalable and reliable alternative to organic
pre-training corpora, offering a viable path toward continued improvement in
model performance.
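
A short SciPy sketch of fitting a saturating power law to (token count, validation loss) pairs, the generic shape behind scaling-law analyses like the one above. The exact rectified form used in the paper is not reproduced here, and the data points, initial guesses, and extrapolation are entirely made up.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical (billions of tokens, validation loss) measurements.
    tokens_b = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
    loss = np.array([3.10, 2.85, 2.62, 2.48, 2.40, 2.37])

    def saturating_power_law(d, irreducible, amplitude, alpha):
        return irreducible + amplitude * d ** (-alpha)

    (irreducible, amplitude, alpha), _ = curve_fit(
        saturating_power_law, tokens_b, loss, p0=[2.3, 1.0, 0.5], maxfev=20000
    )
    print(f"irreducible loss ~ {irreducible:.2f}, exponent alpha ~ {alpha:.2f}")
    print("extrapolated loss at 1T tokens:", saturating_power_law(1000.0, irreducible, amplitude, alpha))
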
|
2503.19666 | Moshe Eliasof | Eshed Gal, Moshe Eliasof, Carola-Bibiane Sch\"onlieb, Eldad Haber,
Eran Treister | Towards Efficient Training of Graph Neural Networks: A Multiscale
Approach | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) have emerged as a powerful tool for learning and
inferring from graph-structured data, and are widely used in a variety of
applications, often considering large amounts of data and large graphs.
However, training on such data requires large memory and extensive
computations. In this paper, we introduce a novel framework for efficient
multiscale training of GNNs, designed to integrate information across
multiscale representations of a graph. Our approach leverages a hierarchical
graph representation, taking advantage of coarse graph scales in the training
process, where each coarse scale graph has fewer nodes and edges. Based on this
approach, we propose a suite of GNN training methods: such as coarse-to-fine,
sub-to-full, and multiscale gradient computation. We demonstrate the
effectiveness of our methods on various datasets and learning tasks.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 13:52:26 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 10:39:33 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Gal",
"Eshed",
""
],
[
"Eliasof",
"Moshe",
""
],
[
"Schönlieb",
"Carola-Bibiane",
""
],
[
"Haber",
"Eldad",
""
],
[
"Treister",
"Eran",
""
]
] | TITLE: Towards Efficient Training of Graph Neural Networks: A Multiscale
Approach
ABSTRACT: Graph Neural Networks (GNNs) have emerged as a powerful tool for learning and
inferring from graph-structured data, and are widely used in a variety of
applications, often considering large amounts of data and large graphs.
However, training on such data requires large memory and extensive
computations. In this paper, we introduce a novel framework for efficient
multiscale training of GNNs, designed to integrate information across
multiscale representations of a graph. Our approach leverages a hierarchical
graph representation, taking advantage of coarse graph scales in the training
process, where each coarse-scale graph has fewer nodes and edges. Based on this
approach, we propose a suite of GNN training methods, such as coarse-to-fine,
sub-to-full, and multiscale gradient computation. We demonstrate the
effectiveness of our methods on various datasets and learning tasks.
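
A NumPy sketch of the coarsening step such a multiscale schedule relies on: a hard cluster-assignment matrix P pools the adjacency (P^T A P) and mean-pools node features, giving a smaller graph to train on cheaply before returning to the full one. The toy graph, clustering, and pooling rule are illustrative choices, not the paper's construction.

    import numpy as np

    def coarsen(adj, features, assign):
        """Pool a graph given a hard cluster-assignment matrix P of shape (n_nodes, n_clusters)."""
        coarse_adj = assign.T @ adj @ assign
        np.fill_diagonal(coarse_adj, 0)                      # drop self-loops created by pooling
        sizes = assign.sum(axis=0, keepdims=True).T          # nodes per cluster
        coarse_feat = (assign.T @ features) / np.maximum(sizes, 1)   # mean-pool node features
        return coarse_adj, coarse_feat

    # Toy graph: 6 nodes, 2 clusters of 3.
    adj = np.array([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,1,0,0],
                    [0,0,1,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]], float)
    feat = np.arange(12, dtype=float).reshape(6, 2)
    assign = np.array([[1,0],[1,0],[1,0],[0,1],[0,1],[0,1]], float)

    coarse_adj, coarse_feat = coarsen(adj, feat, assign)
    # Coarse-to-fine schedule: train on the coarse graph first, then fine-tune on the full graph.
    for scale_name, (A, X) in [("coarse", (coarse_adj, coarse_feat)), ("fine", (adj, feat))]:
        print(scale_name, A.shape, X.shape)
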
|
2503.19683 | Andrii Yermakov | Andrii Yermakov, Jan Cech, Jiri Matas | Unlocking the Hidden Potential of CLIP in Generalizable Deepfake
Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper tackles the challenge of detecting partially manipulated facial
deepfakes, which involve subtle alterations to specific facial features while
retaining the overall context, posing a greater detection difficulty than fully
synthetic faces. We leverage the Contrastive Language-Image Pre-training (CLIP)
model, specifically its ViT-L/14 visual encoder, to develop a generalizable
detection method that performs robustly across diverse datasets and unknown
forgery techniques with minimal modifications to the original model. The
proposed approach utilizes parameter-efficient fine-tuning (PEFT) techniques,
such as LN-tuning, to adjust a small subset of the model's parameters,
preserving CLIP's pre-trained knowledge and reducing overfitting. A tailored
preprocessing pipeline optimizes the method for facial images, while
regularization strategies, including L2 normalization and metric learning on a
hyperspherical manifold, enhance generalization. Trained on the FaceForensics++
dataset and evaluated in a cross-dataset fashion on Celeb-DF-v2, DFDC, FFIW,
and others, the proposed method achieves competitive detection accuracy
comparable to or outperforming much more complex state-of-the-art techniques.
This work highlights the efficacy of CLIP's visual encoder in facial deepfake
detection and establishes a simple, powerful baseline for future research,
advancing the field of generalizable deepfake detection. The code is available
at: https://github.com/yermandy/deepfake-detection
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 14:10:54 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 11:21:23 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Yermakov",
"Andrii",
""
],
[
"Cech",
"Jan",
""
],
[
"Matas",
"Jiri",
""
]
] | TITLE: Unlocking the Hidden Potential of CLIP in Generalizable Deepfake
Detection
ABSTRACT: This paper tackles the challenge of detecting partially manipulated facial
deepfakes, which involve subtle alterations to specific facial features while
retaining the overall context, posing a greater detection difficulty than fully
synthetic faces. We leverage the Contrastive Language-Image Pre-training (CLIP)
model, specifically its ViT-L/14 visual encoder, to develop a generalizable
detection method that performs robustly across diverse datasets and unknown
forgery techniques with minimal modifications to the original model. The
proposed approach utilizes parameter-efficient fine-tuning (PEFT) techniques,
such as LN-tuning, to adjust a small subset of the model's parameters,
preserving CLIP's pre-trained knowledge and reducing overfitting. A tailored
preprocessing pipeline optimizes the method for facial images, while
regularization strategies, including L2 normalization and metric learning on a
hyperspherical manifold, enhance generalization. Trained on the FaceForensics++
dataset and evaluated in a cross-dataset fashion on Celeb-DF-v2, DFDC, FFIW,
and others, the proposed method achieves competitive detection accuracy
comparable to or outperforming much more complex state-of-the-art techniques.
This work highlights the efficacy of CLIP's visual encoder in facial deepfake
detection and establishes a simple, powerful baseline for future research,
advancing the field of generalizable deepfake detection. The code is available
at: https://github.com/yermandy/deepfake-detection
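
A short PyTorch sketch of the LN-tuning idea mentioned above: freeze every weight, then re-enable gradients only for LayerNorm affine parameters, so only a tiny fraction of the encoder is fine-tuned. A generic Transformer encoder stands in for CLIP's ViT-L/14 visual tower; loading the actual CLIP weights is out of scope here.

    import torch
    import torch.nn as nn

    def apply_ln_tuning(model: nn.Module):
        """Freeze everything, then unfreeze only LayerNorm parameters (LN-tuning)."""
        for p in model.parameters():
            p.requires_grad_(False)
        for module in model.modules():
            if isinstance(module, nn.LayerNorm):
                for p in module.parameters():
                    p.requires_grad_(True)
        trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
        total = sum(p.numel() for p in model.parameters())
        print(f"trainable params: {trainable} / {total}")
        return [p for p in model.parameters() if p.requires_grad]

    # Stand-in encoder; in practice this would be the pretrained CLIP visual encoder.
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True), num_layers=4
    )
    params = apply_ln_tuning(encoder)
    optimizer = torch.optim.AdamW(params, lr=1e-4)
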
|
2503.19730 | Yuli Zhou | Yuli Zhou and Guolei Sun and Yawei Li and Yuqian Fu and Luca Benini
and Ender Konukoglu | CamSAM2: Segment Anything Accurately in Camouflaged Videos | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video camouflaged object segmentation (VCOS), aiming at segmenting
camouflaged objects that seamlessly blend into their environment, is a
fundamental vision task with various real-world applications. With the release
of SAM2, video segmentation has witnessed significant progress. However, SAM2's
capability of segmenting camouflaged videos is suboptimal, especially when
given simple prompts such as point and box. To address the problem, we propose
Camouflaged SAM2 (CamSAM2), which enhances SAM2's ability to handle camouflaged
scenes without modifying SAM2's parameters. Specifically, we introduce a
decamouflaged token to provide the flexibility of feature adjustment for VCOS.
To make full use of fine-grained and high-resolution features from the current
frame and previous frames, we propose implicit object-aware fusion (IOF) and
explicit object-aware fusion (EOF) modules, respectively. Object prototype
generation (OPG) is introduced to abstract and memorize object prototypes with
informative details using high-quality features from previous frames. Extensive
experiments are conducted to validate the effectiveness of our approach. While
CamSAM2 only adds negligible learnable parameters to SAM2, it substantially
outperforms SAM2 on three VCOS datasets, especially achieving 12.2 mDice gains
with click prompt on MoCA-Mask and 19.6 mDice gains with mask prompt on
SUN-SEG-Hard, with Hiera-T as the backbone. The code will be available at
https://github.com/zhoustan/CamSAM2.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 14:58:52 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 02:14:50 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhou",
"Yuli",
""
],
[
"Sun",
"Guolei",
""
],
[
"Li",
"Yawei",
""
],
[
"Fu",
"Yuqian",
""
],
[
"Benini",
"Luca",
""
],
[
"Konukoglu",
"Ender",
""
]
] | TITLE: CamSAM2: Segment Anything Accurately in Camouflaged Videos
ABSTRACT: Video camouflaged object segmentation (VCOS), aiming at segmenting
camouflaged objects that seamlessly blend into their environment, is a
fundamental vision task with various real-world applications. With the release
of SAM2, video segmentation has witnessed significant progress. However, SAM2's
capability of segmenting camouflaged videos is suboptimal, especially when
given simple prompts such as point and box. To address the problem, we propose
Camouflaged SAM2 (CamSAM2), which enhances SAM2's ability to handle camouflaged
scenes without modifying SAM2's parameters. Specifically, we introduce a
decamouflaged token to provide the flexibility of feature adjustment for VCOS.
To make full use of fine-grained and high-resolution features from the current
frame and previous frames, we propose implicit object-aware fusion (IOF) and
explicit object-aware fusion (EOF) modules, respectively. Object prototype
generation (OPG) is introduced to abstract and memorize object prototypes with
informative details using high-quality features from previous frames. Extensive
experiments are conducted to validate the effectiveness of our approach. While
CamSAM2 only adds negligible learnable parameters to SAM2, it substantially
outperforms SAM2 on three VCOS datasets, especially achieving 12.2 mDice gains
with click prompt on MoCA-Mask and 19.6 mDice gains with mask prompt on
SUN-SEG-Hard, with Hiera-T as the backbone. The code will be available at
https://github.com/zhoustan/CamSAM2.
|
2503.19739 | Pihai Sun | Pihai Sun, Junjun Jiang, Yuanqi Yao, Youyu Chen, Wenbo Zhao, Kui
Jiang, Xianming Liu | FUSE: Label-Free Image-Event Joint Monocular Depth Estimation via
Frequency-Decoupled Alignment and Degradation-Robust Fusion | 8 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image-event joint depth estimation methods leverage complementary modalities
for robust perception, yet face challenges in generalizability stemming from
two factors: 1) limited annotated image-event-depth datasets causing
insufficient cross-modal supervision, and 2) inherent frequency mismatches
between static images and dynamic event streams with distinct spatiotemporal
patterns, leading to ineffective feature fusion. To address this dual
challenge, we propose Frequency-decoupled Unified Self-supervised Encoder
(FUSE) with two synergistic components: The Parameter-efficient Self-supervised
Transfer (PST) establishes cross-modal knowledge transfer through latent space
alignment with image foundation models, effectively mitigating data scarcity by
enabling joint encoding without depth ground truth. Complementing this, we
propose the Frequency-Decoupled Fusion module (FreDFuse) to explicitly decouple
high-frequency edge features from low-frequency structural components,
resolving modality-specific frequency mismatches through physics-aware fusion.
This combined approach enables FUSE to construct a universal image-event
encoder that only requires lightweight decoder adaptation for target datasets.
Extensive experiments demonstrate state-of-the-art performance with 14% and
24.9% improvements in Abs.Rel on MVSEC and DENSE datasets. The framework
exhibits remarkable zero-shot adaptability to challenging scenarios including
extreme lighting and motion blur, significantly advancing real-world deployment
capabilities. The source code for our method is publicly available at:
https://github.com/sunpihai-up/FUSE
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 15:04:53 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 06:54:19 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Sun",
"Pihai",
""
],
[
"Jiang",
"Junjun",
""
],
[
"Yao",
"Yuanqi",
""
],
[
"Chen",
"Youyu",
""
],
[
"Zhao",
"Wenbo",
""
],
[
"Jiang",
"Kui",
""
],
[
"Liu",
"Xianming",
""
]
] | TITLE: FUSE: Label-Free Image-Event Joint Monocular Depth Estimation via
Frequency-Decoupled Alignment and Degradation-Robust Fusion
ABSTRACT: Image-event joint depth estimation methods leverage complementary modalities
for robust perception, yet face challenges in generalizability stemming from
two factors: 1) limited annotated image-event-depth datasets causing
insufficient cross-modal supervision, and 2) inherent frequency mismatches
between static images and dynamic event streams with distinct spatiotemporal
patterns, leading to ineffective feature fusion. To address this dual
challenge, we propose Frequency-decoupled Unified Self-supervised Encoder
(FUSE) with two synergistic components: The Parameter-efficient Self-supervised
Transfer (PST) establishes cross-modal knowledge transfer through latent space
alignment with image foundation models, effectively mitigating data scarcity by
enabling joint encoding without depth ground truth. Complementing this, we
propose the Frequency-Decoupled Fusion module (FreDFuse) to explicitly decouple
high-frequency edge features from low-frequency structural components,
resolving modality-specific frequency mismatches through physics-aware fusion.
This combined approach enables FUSE to construct a universal image-event
encoder that only requires lightweight decoder adaptation for target datasets.
Extensive experiments demonstrate state-of-the-art performance with 14% and
24.9% improvements in Abs.Rel on MVSEC and DENSE datasets. The framework
exhibits remarkable zero-shot adaptability to challenging scenarios including
extreme lighting and motion blur, significantly advancing real-world deployment
capabilities. The source code for our method is publicly available at:
https://github.com/sunpihai-up/FUSE
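
A minimal PyTorch sketch of the frequency-decoupling idea above: each branch's feature map is split into a low-frequency component (a blur) and a high-frequency residual, and the two modalities are recombined band by band. The fixed average-pooling blur and the hand-set fusion weights are placeholders for the learned FreDFuse module.

    import torch
    import torch.nn.functional as F

    def split_frequency(feat, kernel_size=5):
        """Low-pass via an average-pooling blur; the high-pass part is the residual."""
        pad = kernel_size // 2
        low = F.avg_pool2d(feat, kernel_size, stride=1, padding=pad)
        return low, feat - low

    image_feat = torch.randn(1, 64, 32, 32)
    event_feat = torch.randn(1, 64, 32, 32)

    img_low, img_high = split_frequency(image_feat)
    evt_low, evt_high = split_frequency(event_feat)

    # Illustrative fusion rule: take structure (low frequency) mostly from the image branch
    # and edges/motion (high frequency) mostly from the event branch, then recombine.
    fused = 0.7 * img_low + 0.3 * evt_low + 0.3 * img_high + 0.7 * evt_high
    print(fused.shape)
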
|
2503.19753 | Chuanzhi Xu | Chuanzhi Xu, Haoxian Zhou, Haodong Chen, Vera Chung, Qiang Qu | A Survey on Event-driven 3D Reconstruction: Development under Different
Categories | 6 pages, 1 figure, 6 tables, submitted to an anonymous conference
under double-blind review | null | null | null | cs.GR cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Event cameras have gained increasing attention for 3D reconstruction due to
their high temporal resolution, low latency, and high dynamic range. They
capture per-pixel brightness changes asynchronously, allowing accurate
reconstruction under fast motion and challenging lighting conditions. In this
survey, we provide a comprehensive review of event-driven 3D reconstruction
methods, including stereo, monocular, and multimodal systems. We further
categorize recent developments based on geometric, learning-based, and hybrid
approaches. Emerging trends, such as neural radiance fields and 3D Gaussian
splatting with event data, are also covered. The related works are structured
chronologically to illustrate the innovations and progression within the field.
To support future research, we also highlight key research gaps and future
research directions in datasets, experiments, evaluation, event representation,
and related areas.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 15:16:53 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 12:34:34 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Xu",
"Chuanzhi",
""
],
[
"Zhou",
"Haoxian",
""
],
[
"Chen",
"Haodong",
""
],
[
"Chung",
"Vera",
""
],
[
"Qu",
"Qiang",
""
]
] | TITLE: A Survey on Event-driven 3D Reconstruction: Development under Different
Categories
ABSTRACT: Event cameras have gained increasing attention for 3D reconstruction due to
their high temporal resolution, low latency, and high dynamic range. They
capture per-pixel brightness changes asynchronously, allowing accurate
reconstruction under fast motion and challenging lighting conditions. In this
survey, we provide a comprehensive review of event-driven 3D reconstruction
methods, including stereo, monocular, and multimodal systems. We further
categorize recent developments based on geometric, learning-based, and hybrid
approaches. Emerging trends, such as neural radiance fields and 3D Gaussian
splatting with event data, are also covered. The related works are structured
chronologically to illustrate the innovations and progression within the field.
To support future research, we also highlight key research gaps and future
research directions in datasets, experiments, evaluation, event representation,
and related areas.
|
2503.19846 | Aaron Serianni | Aaron Serianni, Tyler Zhu, Olga Russakovsky, Vikram V. Ramaswamy | Attention IoU: Examining Biases in CelebA using Attention Maps | To appear in CVPR 2025. Code and data is available at
https://github.com/aaronserianni/attention-iou . 15 pages, 14 figures,
including appendix | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer vision models have been shown to exhibit and amplify biases across a
wide array of datasets and tasks. Existing methods for quantifying bias in
classification models primarily focus on dataset distribution and model
performance on subgroups, overlooking the internal workings of a model. We
introduce the Attention-IoU (Attention Intersection over Union) metric and
related scores, which use attention maps to reveal biases within a model's
internal representations and identify image features potentially causing the
biases. First, we validate Attention-IoU on the synthetic Waterbirds dataset,
showing that the metric accurately measures model bias. We then analyze the
CelebA dataset, finding that Attention-IoU uncovers correlations beyond
accuracy disparities. Through an investigation of individual attributes via
the protected attribute of Male, we examine the distinct ways biases are
represented in CelebA. Lastly, by subsampling the training set to change
attribute correlations, we demonstrate that Attention-IoU reveals potential
confounding variables not present in dataset labels.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 17:11:39 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 02:43:45 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Serianni",
"Aaron",
""
],
[
"Zhu",
"Tyler",
""
],
[
"Russakovsky",
"Olga",
""
],
[
"Ramaswamy",
"Vikram V.",
""
]
] | TITLE: Attention IoU: Examining Biases in CelebA using Attention Maps
ABSTRACT: Computer vision models have been shown to exhibit and amplify biases across a
wide array of datasets and tasks. Existing methods for quantifying bias in
classification models primarily focus on dataset distribution and model
performance on subgroups, overlooking the internal workings of a model. We
introduce the Attention-IoU (Attention Intersection over Union) metric and
related scores, which use attention maps to reveal biases within a model's
internal representations and identify image features potentially causing the
biases. First, we validate Attention-IoU on the synthetic Waterbirds dataset,
showing that the metric accurately measures model bias. We then analyze the
CelebA dataset, finding that Attention-IoU uncovers correlations beyond
accuracy disparities. Through an investigation of individual attributes via
the protected attribute of Male, we examine the distinct ways biases are
represented in CelebA. Lastly, by subsampling the training set to change
attribute correlations, we demonstrate that Attention-IoU reveals potential
confounding variables not present in dataset labels.
|
2503.19906 | HongYu Liu | Hongyu Liu, Xuan Wang, Ziyu Wan, Yue Ma, Jingye Chen, Yanbo Fan, Yujun
Shen, Yibing Song, Qifeng Chen | AvatarArtist: Open-Domain 4D Avatarization | Accepted to CVPR 2025. Project page:
https://kumapowerliu.github.io/AvatarArtist | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work focuses on open-domain 4D avatarization, with the purpose of
creating a 4D avatar from a portrait image in an arbitrary style. We select
parametric triplanes as the intermediate 4D representation and propose a
practical training paradigm that takes advantage of both generative adversarial
networks (GANs) and diffusion models. Our design stems from the observation
that 4D GANs excel at bridging images and triplanes without supervision yet
usually face challenges in handling diverse data distributions. A robust 2D
diffusion prior emerges as the solution, assisting the GAN in transferring its
expertise across various domains. The synergy between these experts permits the
construction of a multi-domain image-triplane dataset, which drives the
development of a general 4D avatar creator. Extensive experiments suggest that
our model, AvatarArtist, is capable of producing high-quality 4D avatars with
strong robustness to various source image domains. The code, the data, and the
models will be made publicly available to facilitate future studies.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 17:59:03 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 05:09:21 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Liu",
"Hongyu",
""
],
[
"Wang",
"Xuan",
""
],
[
"Wan",
"Ziyu",
""
],
[
"Ma",
"Yue",
""
],
[
"Chen",
"Jingye",
""
],
[
"Fan",
"Yanbo",
""
],
[
"Shen",
"Yujun",
""
],
[
"Song",
"Yibing",
""
],
[
"Chen",
"Qifeng",
""
]
] | TITLE: AvatarArtist: Open-Domain 4D Avatarization
ABSTRACT: This work focuses on open-domain 4D avatarization, with the purpose of
creating a 4D avatar from a portrait image in an arbitrary style. We select
parametric triplanes as the intermediate 4D representation and propose a
practical training paradigm that takes advantage of both generative adversarial
networks (GANs) and diffusion models. Our design stems from the observation
that 4D GANs excel at bridging images and triplanes without supervision yet
usually face challenges in handling diverse data distributions. A robust 2D
diffusion prior emerges as the solution, assisting the GAN in transferring its
expertise across various domains. The synergy between these experts permits the
construction of a multi-domain image-triplane dataset, which drives the
development of a general 4D avatar creator. Extensive experiments suggest that
our model, AvatarArtist, is capable of producing high-quality 4D avatars with
strong robustness to various source image domains. The code, the data, and the
models will be made publicly available to facilitate future studies.
|
2503.19936 | Kelaiti Xiao Mr | Kelaiti Xiao, Liang Yang, Paerhati Tulajiang, Hongfei Lin | VisualQuest: A Diverse Image Dataset for Evaluating Visual Recognition
in LLMs | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper introduces VisualQuest, a novel image dataset designed to assess
the ability of large language models (LLMs) to interpret non-traditional,
stylized imagery. Unlike conventional photographic benchmarks, VisualQuest
challenges models with images that incorporate abstract, symbolic, and
metaphorical elements, requiring the integration of domain-specific knowledge
and advanced reasoning. The dataset was meticulously curated through multiple
stages of filtering, annotation, and standardization to ensure high quality and
diversity. Our evaluations using several state-of-the-art multimodal LLMs
reveal significant performance variations that underscore the importance of
both factual background knowledge and inferential capabilities in visual
recognition tasks. VisualQuest thus provides a robust and comprehensive
benchmark for advancing research in multimodal reasoning and model architecture
design.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 01:23:11 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Xiao",
"Kelaiti",
""
],
[
"Yang",
"Liang",
""
],
[
"Tulajiang",
"Paerhati",
""
],
[
"Lin",
"Hongfei",
""
]
] | TITLE: VisualQuest: A Diverse Image Dataset for Evaluating Visual Recognition
in LLMs
ABSTRACT: This paper introduces VisualQuest, a novel image dataset designed to assess
the ability of large language models (LLMs) to interpret non-traditional,
stylized imagery. Unlike conventional photographic benchmarks, VisualQuest
challenges models with images that incorporate abstract, symbolic, and
metaphorical elements, requiring the integration of domain-specific knowledge
and advanced reasoning. The dataset was meticulously curated through multiple
stages of filtering, annotation, and standardization to ensure high quality and
diversity. Our evaluations using several state-of-the-art multimodal LLMs
reveal significant performance variations that underscore the importance of
both factual background knowledge and inferential capabilities in visual
recognition tasks. VisualQuest thus provides a robust and comprehensive
benchmark for advancing research in multimodal reasoning and model architecture
design.
|
2503.19940 | Qiusheng Huang | Qiusheng Huang, Xiaohui Zhong, Xu Fan, Lei Chen, Hao Li | FuXi-RTM: A Physics-Guided Prediction Framework with Radiative Transfer
Modeling | null | null | null | null | physics.ao-ph cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similar to conventional video generation, current deep learning-based weather
prediction frameworks often lack explicit physical constraints, leading to
unphysical outputs that limit their reliability for operational forecasting.
Among various physical processes requiring proper representation, radiation
plays a fundamental role as it drives Earth's weather and climate systems.
However, accurate simulation of radiative transfer processes remains
challenging for traditional numerical weather prediction (NWP) models due to
their inherent complexity and high computational costs. Here, we propose
FuXi-RTM, a hybrid physics-guided deep learning framework designed to enhance
weather forecast accuracy while enforcing physical consistency. FuXi-RTM
integrates a primary forecasting model (FuXi) with a fixed deep learning-based
radiative transfer model (DLRTM) surrogate that efficiently replaces
conventional radiation parameterization schemes. This represents the first deep
learning-based weather forecasting framework to explicitly incorporate physical
process modeling. Evaluated over a comprehensive 5-year dataset, FuXi-RTM
outperforms its unconstrained counterpart in 88.51% of 3320 variable and lead
time combinations, with improvements in radiative flux predictions. By
incorporating additional physical processes, FuXi-RTM paves the way for
next-generation weather forecasting systems that are both accurate and
physically consistent.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 08:21:58 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Huang",
"Qiusheng",
""
],
[
"Zhong",
"Xiaohui",
""
],
[
"Fan",
"Xu",
""
],
[
"Chen",
"Lei",
""
],
[
"Li",
"Hao",
""
]
] | TITLE: FuXi-RTM: A Physics-Guided Prediction Framework with Radiative Transfer
Modeling
ABSTRACT: Similar to conventional video generation, current deep learning-based weather
prediction frameworks often lack explicit physical constraints, leading to
unphysical outputs that limit their reliability for operational forecasting.
Among various physical processes requiring proper representation, radiation
plays a fundamental role as it drives Earth's weather and climate systems.
However, accurate simulation of radiative transfer processes remains
challenging for traditional numerical weather prediction (NWP) models due to
their inherent complexity and high computational costs. Here, we propose
FuXi-RTM, a hybrid physics-guided deep learning framework designed to enhance
weather forecast accuracy while enforcing physical consistency. FuXi-RTM
integrates a primary forecasting model (FuXi) with a fixed deep learning-based
radiative transfer model (DLRTM) surrogate that efficiently replaces
conventional radiation parameterization schemes. This represents the first deep
learning-based weather forecasting framework to explicitly incorporate physical
process modeling. Evaluated over a comprehensive 5-year dataset, FuXi-RTM
outperforms its unconstrained counterpart in 88.51% of 3320 variable and lead
time combinations, with improvements in radiative flux predictions. By
incorporating additional physical processes, FuXi-RTM paves the way for
next-generation weather forecasting systems that are both accurate and
physically consistent.
|
2503.19948 | Alexander Gambashidze | Alexander Gambashidze, Konstantin Sobolev, Andrey Kuznetsov, Ivan
Oseledets | Test-Time Reasoning Through Visual Human Preferences with VLMs and Soft
Rewards | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Can Visual Language Models (VLMs) effectively capture human visual
preferences? This work addresses this question by training VLMs to think about
preferences at test time, employing reinforcement learning methods inspired by
DeepSeek R1 and OpenAI O1. Using datasets such as ImageReward and Human
Preference Score v2 (HPSv2), our models achieve accuracies of 64.9% on the
ImageReward test set (trained on ImageReward official split) and 65.4% on HPSv2
(trained on approximately 25% of its data). These results match traditional
encoder-based models while providing transparent reasoning and enhanced
generalization. This approach leverages not only rich VLM world knowledge but
also the model's capacity to think, yielding interpretable outcomes that support
decision-making processes. By demonstrating that human visual preferences can be
reasoned about by current VLMs, we introduce efficient soft-reward strategies for
image ranking, outperforming simplistic selection or scoring methods. This
reasoning capability enables VLMs to rank arbitrary images-regardless of aspect
ratio or complexity-thereby potentially amplifying the effectiveness of visual
Preference Optimization. By reducing the need for extensive markup while
improving reward generalization and explainability, our findings can serve as a
strong milestone that will enhance text-to-vision models even further.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 15:30:21 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Gambashidze",
"Alexander",
""
],
[
"Sobolev",
"Konstantin",
""
],
[
"Kuznetsov",
"Andrey",
""
],
[
"Oseledets",
"Ivan",
""
]
] | TITLE: Test-Time Reasoning Through Visual Human Preferences with VLMs and Soft
Rewards
ABSTRACT: Can Visual Language Models (VLMs) effectively capture human visual
preferences? This work addresses this question by training VLMs to think about
preferences at test time, employing reinforcement learning methods inspired by
DeepSeek R1 and OpenAI O1. Using datasets such as ImageReward and Human
Preference Score v2 (HPSv2), our models achieve accuracies of 64.9% on the
ImageReward test set (trained on ImageReward official split) and 65.4% on HPSv2
(trained on approximately 25% of its data). These results match traditional
encoder-based models while providing transparent reasoning and enhanced
generalization. This approach leverages not only rich VLM world knowledge but
also the model's capacity to think, yielding interpretable outcomes that support
decision-making processes. By demonstrating that human visual preferences can be
reasoned about by current VLMs, we introduce efficient soft-reward strategies for
image ranking, outperforming simplistic selection or scoring methods. This
reasoning capability enables VLMs to rank arbitrary images-regardless of aspect
ratio or complexity-thereby potentially amplifying the effectiveness of visual
Preference Optimization. By reducing the need for extensive markup while
improving reward generalization and explainability, our findings can serve as a
strong milestone that will enhance text-to-vision models even further.
|
2503.19979 | Enora Rice | Enora Rice, Ali Marashian, Hannah Haynie, Katharina von der Wense, and
Alexis Palmer | Untangling the Influence of Typology, Data and Model Architecture on
Ranking Transfer Languages for Cross-Lingual POS Tagging | Accepted to NAACL 2025 Workshop Language Models for Underserved
Communities | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Cross-lingual transfer learning is an invaluable tool for overcoming data
scarcity, yet selecting a suitable transfer language remains a challenge. The
precise roles of linguistic typology, training data, and model architecture in
transfer language choice are not fully understood. We take a holistic approach,
examining how both dataset-specific and fine-grained typological features
influence transfer language selection for part-of-speech tagging, considering
two different sources for morphosyntactic features. While previous work
examines these dynamics in the context of bilingual BiLSTMs, we extend our
analysis to a more modern transfer learning pipeline: zero-shot prediction with
pretrained multilingual models. We train a series of transfer language ranking
systems and examine how different feature inputs influence ranker performance
across architectures. Word overlap, type-token ratio, and genealogical distance
emerge as top features across all architectures. Our findings reveal that a
combination of typological and dataset-dependent features leads to the best
rankings, and that good performance can be obtained with either feature group
on its own.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 18:05:40 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Rice",
"Enora",
""
],
[
"Marashian",
"Ali",
""
],
[
"Haynie",
"Hannah",
""
],
[
"von der Wense",
"Katharina",
""
],
[
"Palmer",
"Alexis",
""
]
] | TITLE: Untangling the Influence of Typology, Data and Model Architecture on
Ranking Transfer Languages for Cross-Lingual POS Tagging
ABSTRACT: Cross-lingual transfer learning is an invaluable tool for overcoming data
scarcity, yet selecting a suitable transfer language remains a challenge. The
precise roles of linguistic typology, training data, and model architecture in
transfer language choice are not fully understood. We take a holistic approach,
examining how both dataset-specific and fine-grained typological features
influence transfer language selection for part-of-speech tagging, considering
two different sources for morphosyntactic features. While previous work
examines these dynamics in the context of bilingual BiLSTMs, we extend our
analysis to a more modern transfer learning pipeline: zero-shot prediction with
pretrained multilingual models. We train a series of transfer language ranking
systems and examine how different feature inputs influence ranker performance
across architectures. Word overlap, type-token ratio, and genealogical distance
emerge as top features across all architectures. Our findings reveal that a
combination of typological and dataset-dependent features leads to the best
rankings, and that good performance can be obtained with either feature group
on its own.
|
2503.19988 | Bohan Zhai | Bohan Zhai, Canwen Xu, Yuxiong He, Zhewei Yao | ExCoT: Optimizing Reasoning for Text-to-SQL with Execution Feedback | null | null | null | null | cs.LG cs.AI cs.DB | http://creativecommons.org/licenses/by/4.0/ | Text-to-SQL demands precise reasoning to convert natural language questions
into structured queries. While large language models (LLMs) excel in many
reasoning tasks, their ability to leverage Chain-of-Thought (CoT) reasoning for
text-to-SQL remains underexplored. We identify critical limitations: zero-shot
CoT offers minimal gains, and Direct Preference Optimization (DPO) applied
without CoT yields marginal improvements. We propose ExCoT, a novel framework
that iteratively optimizes open-source LLMs by combining CoT reasoning with
off-policy and on-policy DPO, relying solely on execution accuracy as feedback.
This approach eliminates the need for reward models or human-annotated
preferences.
Our experimental results demonstrate significant performance gains: ExCoT
improves execution accuracy on BIRD dev set from 57.37% to 68.51% and on Spider
test set from 78.81% to 86.59% for LLaMA-3 70B, with Qwen-2.5-Coder
demonstrating similar improvements. Our best model achieves state-of-the-art
performance in the single-model setting on both BIRD and Spider datasets,
notably achieving 68.53% on the BIRD test set.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 18:17:36 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhai",
"Bohan",
""
],
[
"Xu",
"Canwen",
""
],
[
"He",
"Yuxiong",
""
],
[
"Yao",
"Zhewei",
""
]
] | TITLE: ExCoT: Optimizing Reasoning for Text-to-SQL with Execution Feedback
ABSTRACT: Text-to-SQL demands precise reasoning to convert natural language questions
into structured queries. While large language models (LLMs) excel in many
reasoning tasks, their ability to leverage Chain-of-Thought (CoT) reasoning for
text-to-SQL remains underexplored. We identify critical limitations: zero-shot
CoT offers minimal gains, and Direct Preference Optimization (DPO) applied
without CoT yields marginal improvements. We propose ExCoT, a novel framework
that iteratively optimizes open-source LLMs by combining CoT reasoning with
off-policy and on-policy DPO, relying solely on execution accuracy as feedback.
This approach eliminates the need for reward models or human-annotated
preferences.
Our experimental results demonstrate significant performance gains: ExCoT
improves execution accuracy on BIRD dev set from 57.37% to 68.51% and on Spider
test set from 78.81% to 86.59% for LLaMA-3 70B, with Qwen-2.5-Coder
demonstrating similar improvements. Our best model achieves state-of-the-art
performance in the single-model setting on both BIRD and Spider datasets,
notably achieving 68.53% on the BIRD test set.
|
2503.20000 | Jonathan Sauder | Jonathan Sauder, Viktor Domazetoski, Guilhem Banc-Prandi, Gabriela
Perna, Anders Meibom, Devis Tuia | The Coralscapes Dataset: Semantic Scene Understanding in Coral Reefs | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Coral reefs are declining worldwide due to climate change and local
stressors. To inform effective conservation or restoration, monitoring at the
highest possible spatial and temporal resolution is necessary. Conventional
coral reef surveying methods are limited in scalability due to their reliance
on expert labor time, motivating the use of computer vision tools to automate
the identification and abundance estimation of live corals from images.
However, the design and evaluation of such tools has been impeded by the lack
of large high quality datasets. We release the Coralscapes dataset, the first
general-purpose dense semantic segmentation dataset for coral reefs, covering
2075 images, 39 benthic classes, and 174k segmentation masks annotated by
experts. Coralscapes has a similar scope and the same structure as the widely
used Cityscapes dataset for urban scene segmentation, allowing benchmarking of
semantic segmentation models in a new challenging domain which requires expert
knowledge to annotate. We benchmark a wide range of semantic segmentation
models, and find that transfer learning from Coralscapes to existing smaller
datasets consistently leads to state-of-the-art performance. Coralscapes will
catalyze research on efficient, scalable, and standardized coral reef surveying
methods based on computer vision, and holds the potential to streamline the
development of underwater ecological robotics.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 18:33:59 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Sauder",
"Jonathan",
""
],
[
"Domazetoski",
"Viktor",
""
],
[
"Banc-Prandi",
"Guilhem",
""
],
[
"Perna",
"Gabriela",
""
],
[
"Meibom",
"Anders",
""
],
[
"Tuia",
"Devis",
""
]
] | TITLE: The Coralscapes Dataset: Semantic Scene Understanding in Coral Reefs
ABSTRACT: Coral reefs are declining worldwide due to climate change and local
stressors. To inform effective conservation or restoration, monitoring at the
highest possible spatial and temporal resolution is necessary. Conventional
coral reef surveying methods are limited in scalability due to their reliance
on expert labor time, motivating the use of computer vision tools to automate
the identification and abundance estimation of live corals from images.
However, the design and evaluation of such tools has been impeded by the lack
of large high quality datasets. We release the Coralscapes dataset, the first
general-purpose dense semantic segmentation dataset for coral reefs, covering
2075 images, 39 benthic classes, and 174k segmentation masks annotated by
experts. Coralscapes has a similar scope and the same structure as the widely
used Cityscapes dataset for urban scene segmentation, allowing benchmarking of
semantic segmentation models in a new challenging domain which requires expert
knowledge to annotate. We benchmark a wide range of semantic segmentation
models, and find that transfer learning from Coralscapes to existing smaller
datasets consistently leads to state-of-the-art performance. Coralscapes will
catalyze research on efficient, scalable, and standardized coral reef surveying
methods based on computer vision, and holds the potential to streamline the
development of underwater ecological robotics.
|
2503.20031 | Franck Cappello | Franck Cappello, Allison Baker, Ebru Bozda, Martin Burtscher, Kyle
Chard, Sheng Di, Paul Christopher O Grady, Peng Jiang, Shaomeng Li, Erik
Lindahl, Peter Lindstrom, Magnus Lundborg, Kai Zhao, Xin Liang, Masaru
Nagaso, Kento Sato, Amarjit Singh, Seung Woo Son, Dingwen Tao, Jiannan Tian,
Robert Underwood, Kazutomo Yoshii, Danylo Lykov, Yuri Alexeev, Kyle Gerard
Felker | Lossy Compression of Scientific Data: Applications Constrains and
Requirements | 33 pages | null | null | null | astro-ph.IM cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Increasing data volumes from scientific simulations and instruments
(supercomputers, accelerators, telescopes) often exceed network, storage, and
analysis capabilities. The scientific community's response to this challenge is
scientific data reduction. Reduction can take many forms, such as triggering,
sampling, filtering, quantization, and dimensionality reduction. This report
focuses on a specific technique: lossy compression. Lossy compression retains
all data points, leveraging correlations and a controlled reduction in accuracy.
Quality constraints, especially for quantities of interest, are crucial for
preserving scientific discoveries. User requirements also include compression
ratio and speed. While many papers have been published on lossy compression
techniques and reference datasets are shared by the community, there is a lack
of detailed specifications of application needs that can guide lossy
compression researchers and developers. This report fills this gap by reporting
on the requirements and constraints of nine scientific applications covering a
large spectrum of domains (climate, combustion, cosmology, fusion, light
sources, molecular dynamics, quantum circuit simulation, seismology, and system
logs). The report also details key lossy compression technologies (SZ, ZFP,
MGARD, LC, SPERR, DCTZ, TEZip, LibPressio), discussing their history,
principles, error control, hardware support, features, and impact. By
presenting both application needs and compression technologies, the report aims
to inspire new research to fill existing gaps.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 19:25:56 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Cappello",
"Franck",
""
],
[
"Baker",
"Allison",
""
],
[
"Bozda",
"Ebru",
""
],
[
"Burtscher",
"Martin",
""
],
[
"Chard",
"Kyle",
""
],
[
"Di",
"Sheng",
""
],
[
"Grady",
"Paul Christopher O",
""
],
[
"Jiang",
"Peng",
""
],
[
"Li",
"Shaomeng",
""
],
[
"Lindahl",
"Erik",
""
],
[
"Lindstrom",
"Peter",
""
],
[
"Lundborg",
"Magnus",
""
],
[
"Zhao",
"Kai",
""
],
[
"Liang",
"Xin",
""
],
[
"Nagaso",
"Masaru",
""
],
[
"Sato",
"Kento",
""
],
[
"Singh",
"Amarjit",
""
],
[
"Son",
"Seung Woo",
""
],
[
"Tao",
"Dingwen",
""
],
[
"Tian",
"Jiannan",
""
],
[
"Underwood",
"Robert",
""
],
[
"Yoshii",
"Kazutomo",
""
],
[
"Lykov",
"Danylo",
""
],
[
"Alexeev",
"Yuri",
""
],
[
"Felker",
"Kyle Gerard",
""
]
] | TITLE: Lossy Compression of Scientific Data: Applications Constrains and
Requirements
ABSTRACT: Increasing data volumes from scientific simulations and instruments
(supercomputers, accelerators, telescopes) often exceed network, storage, and
analysis capabilities. The scientific community's response to this challenge is
scientific data reduction. Reduction can take many forms, such as triggering,
sampling, filtering, quantization, and dimensionality reduction. This report
focuses on a specific technique: lossy compression. Lossy compression retains
all data points, leveraging correlations and a controlled reduction in accuracy.
Quality constraints, especially for quantities of interest, are crucial for
preserving scientific discoveries. User requirements also include compression
ratio and speed. While many papers have been published on lossy compression
techniques and reference datasets are shared by the community, there is a lack
of detailed specifications of application needs that can guide lossy
compression researchers and developers. This report fills this gap by reporting
on the requirements and constraints of nine scientific applications covering a
large spectrum of domains (climate, combustion, cosmology, fusion, light
sources, molecular dynamics, quantum circuit simulation, seismology, and system
logs). The report also details key lossy compression technologies (SZ, ZFP,
MGARD, LC, SPERR, DCTZ, TEZip, LibPressio), discussing their history,
principles, error control, hardware support, features, and impact. By
presenting both application needs and compression technologies, the report aims
to inspire new research to fill existing gaps.
|
2503.20036 | Eray Yapa\u{g}c{\i} | Eray Yapa\u{g}c{\i}, Yavuz Alp Sencer \"Ozt\"urk, Eray T\"uz\"un | BugCraft: End-to-End Crash Bug Reproduction Using LLM Agents in
Minecraft | null | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Reproducing game bugs, in our case crash bugs in continuously evolving games
like Minecraft, is a notoriously manual, time-consuming, and challenging
process to automate. Despite the success of LLM-driven bug reproduction in
other software domains, games, with their complex interactive environments,
remain largely unaddressed. This paper introduces BugCraft, a novel end-to-end
framework designed to automate the reproduction of crash bugs in Minecraft
directly from user-submitted bug reports, addressing the critical gap in
automated game bug reproduction. BugCraft employs a two-stage approach: first,
a Step Synthesizer leverages LLMs and Minecraft Wiki knowledge to transform bug
reports into high-quality, structured steps to reproduce (S2R). Second, an
Action Model, powered by a vision-based LLM agent (GPT-4o) and a custom macro
API, executes these S2R steps within Minecraft to trigger the reported crash.
To facilitate evaluation, we introduce BugCraft-Bench, a curated dataset of
Minecraft crash bug reports. Evaluated on BugCraft-Bench, our framework
successfully reproduced 30.23% of crash bugs end-to-end. The Step Synthesizer
demonstrated a 66.28% accuracy in generating correct bug reproduction plans,
highlighting its effectiveness in interpreting and structuring bug report
information. BugCraft demonstrates the feasibility of automated reproduction of
crash bugs in complex game environments using LLMs, opening promising avenues
for game testing and development. The framework and the BugCraft-Bench dataset
pave the way for future research in automated game bug analysis and hold
potential for generalization to other interactive game platforms. Finally, we
make our code open at https://bugcraft2025.github.io/
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 19:34:24 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Yapağcı",
"Eray",
""
],
[
"Öztürk",
"Yavuz Alp Sencer",
""
],
[
"Tüzün",
"Eray",
""
]
] | TITLE: BugCraft: End-to-End Crash Bug Reproduction Using LLM Agents in
Minecraft
ABSTRACT: Reproducing game bugs, in our case crash bugs in continuously evolving games
like Minecraft, is a notoriously manual, time-consuming, and challenging
process to automate. Despite the success of LLM-driven bug reproduction in
other software domains, games, with their complex interactive environments,
remain largely unaddressed. This paper introduces BugCraft, a novel end-to-end
framework designed to automate the reproduction of crash bugs in Minecraft
directly from user-submitted bug reports, addressing the critical gap in
automated game bug reproduction. BugCraft employs a two-stage approach: first,
a Step Synthesizer leverages LLMs and Minecraft Wiki knowledge to transform bug
reports into high-quality, structured steps to reproduce (S2R). Second, an
Action Model, powered by a vision-based LLM agent (GPT-4o) and a custom macro
API, executes these S2R steps within Minecraft to trigger the reported crash.
To facilitate evaluation, we introduce BugCraft-Bench, a curated dataset of
Minecraft crash bug reports. Evaluated on BugCraft-Bench, our framework
successfully reproduced 30.23% of crash bugs end-to-end. The Step Synthesizer
demonstrated a 66.28% accuracy in generating correct bug reproduction plans,
highlighting its effectiveness in interpreting and structuring bug report
information. BugCraft demonstrates the feasibility of automated reproduction of
crash bugs in complex game environments using LLMs, opening promising avenues
for game testing and development. The framework and the BugCraft-Bench dataset
pave the way for future research in automated game bug analysis and hold
potential for generalization to other interactive game platforms. Finally, we
make our code open at https://bugcraft2025.github.io/
|
2503.20040 | Lin Dong | Shaohuai Liu, Lin Dong, Chao Tian, Le Xie | Unlocking Multi-Task Electric Energy System Intelligence: Data Scaling
Laws and Performance with Limited Fine-Tuning | null | null | null | null | eess.SY cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data scaling has revolutionized research fields like natural language
processing, computer vision, and robotics control, providing foundation models
with remarkable multi-task and generalization capabilities. In this paper, we
investigate whether similar data scaling laws exist in developing foundation
models for power systems, and whether appropriate data scaling can yield
multi-task, cross-timescales capabilities that can be deployed in
\textit{unseen} operational scenarios. To this end, we conducted a
comprehensive empirical study on data scaling by fine-tuning open-source
foundation models using labeled data collected from diverse operational tasks
and scenarios. We study how a foundation model's scenario generalization
performance evolves with the number of training tasks, scenarios, and
demonstrations. Our study involved collecting more than 450k demonstrations and
implementing independent tests under a rigorous evaluation framework. Our
findings reveal several key insights: First, the generalization performance of
a fine-tuned foundation model follows an approximate power-law relationship
with the number of demonstrations and scenarios. Second, the fine-tuned model
also demonstrates impressive multi-task capabilities, where multi-task training
shares similar performance improvements with single-task training as the number
of demonstrations increases, without interference among tasks. Lastly, models
with small parameter sizes could have strong performance as well. Model
performance does not scale significantly with parameter size. These findings
underscore the feasibility of developing multi-task foundation models tailored
for power systems, demonstrating that while larger datasets and models
generally improve performance, extreme scaling is unnecessary to achieve
satisfactory outcomes.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 19:41:06 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Liu",
"Shaohuai",
""
],
[
"Dong",
"Lin",
""
],
[
"Tian",
"Chao",
""
],
[
"Xie",
"Le",
""
]
] | TITLE: Unlocking Multi-Task Electric Energy System Intelligence: Data Scaling
Laws and Performance with Limited Fine-Tuning
ABSTRACT: Data scaling has revolutionized research fields like natural language
processing, computer vision, and robotics control, providing foundation models
with remarkable multi-task and generalization capabilities. In this paper, we
investigate whether similar data scaling laws exist in developing foundation
models for power systems, and whether appropriate data scaling can yield
multi-task, cross-timescales capabilities that can be deployed in
\textit{unseen} operational scenarios. To this end, we conducted a
comprehensive empirical study on data scaling by fine-tuning open-source
foundation models using labeled data collected from diverse operational tasks
and scenarios. We study how a foundation model's scenario generalization
performance evolves with the number of training tasks, scenarios, and
demonstrations. Our study involved collecting more than 450k demonstrations and
implementing independent tests under a rigorous evaluation framework. Our
findings reveal several key insights: First, the generalization performance of
a fine-tuned foundation model follows an approximate power-law relationship
with the number of demonstrations and scenarios. Second, the fine-tuned model
also demonstrates impressive multi-task capabilities, where multi-task training
shares similar performance improvements with single-task training as the number
of demonstrations increases, without interference among tasks. Lastly, models
with small parameter sizes could have strong performance as well. Model
performance does not scale significantly with parameter size. These findings
underscore the feasibility of developing multi-task foundation models tailored
for power systems, demonstrating that while larger datasets and models
generally improve performance, extreme scaling is unnecessary to achieve
satisfactory outcomes.
|
2503.20047 | Gorkem Ates | Yu Xin, Gorkem Can Ates, Kuang Gong, Wei Shao | Med3DVLM: An Efficient Vision-Language Model for 3D Medical Image
Analysis | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Vision-language models (VLMs) have shown promise in 2D medical image
analysis, but extending them to 3D remains challenging due to the high
computational demands of volumetric data and the difficulty of aligning 3D
spatial features with clinical text. We present Med3DVLM, a 3D VLM designed to
address these challenges through three key innovations: (1) DCFormer, an
efficient encoder that uses decomposed 3D convolutions to capture fine-grained
spatial features at scale; (2) SigLIP, a contrastive learning strategy with
pairwise sigmoid loss that improves image-text alignment without relying on
large negative batches; and (3) a dual-stream MLP-Mixer projector that fuses
low- and high-level image features with text embeddings for richer multi-modal
representations. We evaluate our model on the M3D dataset, which includes
radiology reports and VQA data for 120,084 3D medical images. Results show that
Med3DVLM achieves superior performance across multiple benchmarks. For
image-text retrieval, it reaches 61.00% R@1 on 2,000 samples, significantly
outperforming the current state-of-the-art M3D model (19.10%). For report
generation, it achieves a METEOR score of 36.42% (vs. 14.38%). In open-ended
visual question answering (VQA), it scores 36.76% METEOR (vs. 33.58%), and in
closed-ended VQA, it achieves 79.95% accuracy (vs. 75.78%). These results
highlight Med3DVLM's ability to bridge the gap between 3D imaging and language,
enabling scalable, multi-task reasoning across clinical applications. Our code
is publicly available at https://github.com/mirthAI/Med3DVLM.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 20:09:30 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Xin",
"Yu",
""
],
[
"Ates",
"Gorkem Can",
""
],
[
"Gong",
"Kuang",
""
],
[
"Shao",
"Wei",
""
]
] | TITLE: Med3DVLM: An Efficient Vision-Language Model for 3D Medical Image
Analysis
ABSTRACT: Vision-language models (VLMs) have shown promise in 2D medical image
analysis, but extending them to 3D remains challenging due to the high
computational demands of volumetric data and the difficulty of aligning 3D
spatial features with clinical text. We present Med3DVLM, a 3D VLM designed to
address these challenges through three key innovations: (1) DCFormer, an
efficient encoder that uses decomposed 3D convolutions to capture fine-grained
spatial features at scale; (2) SigLIP, a contrastive learning strategy with
pairwise sigmoid loss that improves image-text alignment without relying on
large negative batches; and (3) a dual-stream MLP-Mixer projector that fuses
low- and high-level image features with text embeddings for richer multi-modal
representations. We evaluate our model on the M3D dataset, which includes
radiology reports and VQA data for 120,084 3D medical images. Results show that
Med3DVLM achieves superior performance across multiple benchmarks. For
image-text retrieval, it reaches 61.00% R@1 on 2,000 samples, significantly
outperforming the current state-of-the-art M3D model (19.10%). For report
generation, it achieves a METEOR score of 36.42% (vs. 14.38%). In open-ended
visual question answering (VQA), it scores 36.76% METEOR (vs. 33.58%), and in
closed-ended VQA, it achieves 79.95% accuracy (vs. 75.78%). These results
highlight Med3DVLM's ability to bridge the gap between 3D imaging and language,
enabling scalable, multi-task reasoning across clinical applications. Our code
is publicly available at https://github.com/mirthAI/Med3DVLM.
|
2503.20068 | Amogh Joshi | Naitik Jain, Amogh Joshi, Mason Earles | iNatAg: Multi-Class Classification Models Enabled by a Large-Scale
Benchmark Dataset with 4.7M Images of 2,959 Crop and Weed Species | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Accurate identification of crop and weed species is critical for precision
agriculture and sustainable farming. However, it remains a challenging task due
to a variety of factors -- a high degree of visual similarity among species,
environmental variability, and a continued lack of large, agriculture-specific
image data. We introduce iNatAg, a large-scale image dataset which contains
over 4.7 million images of 2,959 distinct crop and weed species, with precise
annotations along the taxonomic hierarchy from binary crop/weed labels to
specific species labels. Curated from the broader iNaturalist database, iNatAg
contains data from every continent and accurately reflects the variability of
natural image captures and environments. Enabled by this data, we train
benchmark models built upon the Swin Transformer architecture and evaluate the
impact of various modifications such as the incorporation of geospatial data
and LoRA finetuning. Our best models achieve state-of-the-art performance
across all taxonomic classification tasks, achieving 92.38\% on crop and weed
classification. Furthermore, the scale of our dataset enables us to explore
misclassifications and unlock new analytic possibilities for plant
species. By combining large-scale species coverage, multi-task labels, and
geographic diversity, iNatAg provides a new foundation for building robust,
geolocation-aware agricultural classification systems. We release the iNatAg
dataset publicly through AgML (https://github.com/Project-AgML/AgML), enabling
direct access and integration into agricultural machine learning workflows.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 21:04:42 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Jain",
"Naitik",
""
],
[
"Joshi",
"Amogh",
""
],
[
"Earles",
"Mason",
""
]
] | TITLE: iNatAg: Multi-Class Classification Models Enabled by a Large-Scale
Benchmark Dataset with 4.7M Images of 2,959 Crop and Weed Species
ABSTRACT: Accurate identification of crop and weed species is critical for precision
agriculture and sustainable farming. However, it remains a challenging task due
to a variety of factors -- a high degree of visual similarity among species,
environmental variability, and a continued lack of large, agriculture-specific
image data. We introduce iNatAg, a large-scale image dataset which contains
over 4.7 million images of 2,959 distinct crop and weed species, with precise
annotations along the taxonomic hierarchy from binary crop/weed labels to
specific species labels. Curated from the broader iNaturalist database, iNatAg
contains data from every continent and accurately reflects the variability of
natural image captures and environments. Enabled by this data, we train
benchmark models built upon the Swin Transformer architecture and evaluate the
impact of various modifications such as the incorporation of geospatial data
and LoRA finetuning. Our best models achieve state-of-the-art performance
across all taxonomic classification tasks, achieving 92.38\% on crop and weed
classification. Furthermore, the scale of our dataset enables us to explore
misclassifications and unlock new analytic possibilities for plant
species. By combining large-scale species coverage, multi-task labels, and
geographic diversity, iNatAg provides a new foundation for building robust,
geolocation-aware agricultural classification systems. We release the iNatAg
dataset publicly through AgML (https://github.com/Project-AgML/AgML), enabling
direct access and integration into agricultural machine learning workflows.
|
2503.20076 | Aryan Sharad Shetty | Ajitesh Srivastava, Aryan Shetty, Eric Rice | Peer Disambiguation in Self-Reported Surveys using Graph Attention
Networks | null | null | null | 6310023 | cs.SI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Studying peer relationships is crucial in solving complex challenges
underserved communities face and designing interventions. The effectiveness of
such peer-based interventions relies on accurate network data regarding
individual attributes and social influences. However, these datasets are often
collected through self-reported surveys, introducing ambiguities in network
construction. These ambiguities make it challenging to fully utilize the
network data to understand the issues and to design the best interventions. We
propose and solve two variations of link ambiguities in such network data --
(i) which of two candidate links exists, and (ii) whether a candidate link
exists. We design a Graph Attention Network (GAT) that accounts for personal
attributes and network relationships on real-world data with real and simulated
ambiguities. We also demonstrate that by resolving these ambiguities, we
improve network accuracy, and in turn, improve suicide risk prediction. We also
uncover patterns using GNNExplainer to provide additional insights into vital
features and relationships. This research demonstrates the potential of Graph
Neural Networks (GNNs) to advance real-world network data analysis, facilitating
more effective peer interventions across various fields.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 21:25:31 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Srivastava",
"Ajitesh",
""
],
[
"Shetty",
"Aryan",
""
],
[
"Rice",
"Eric",
""
]
] | TITLE: Peer Disambiguation in Self-Reported Surveys using Graph Attention
Networks
ABSTRACT: Studying peer relationships is crucial in solving complex challenges
underserved communities face and designing interventions. The effectiveness of
such peer-based interventions relies on accurate network data regarding
individual attributes and social influences. However, these datasets are often
collected through self-reported surveys, introducing ambiguities in network
construction. These ambiguities make it challenging to fully utilize the
network data to understand the issues and to design the best interventions. We
propose and solve two variations of link ambiguities in such network data --
(i) which of two candidate links exists, and (ii) whether a candidate link
exists. We design a Graph Attention Network (GAT) that accounts for personal
attributes and network relationships on real-world data with real and simulated
ambiguities. We also demonstrate that by resolving these ambiguities, we
improve network accuracy, and in turn, improve suicide risk prediction. We also
uncover patterns using GNNExplainer to provide additional insights into vital
features and relationships. This research demonstrates the potential of Graph
Neural Networks (GNNs) to advance real-world network data analysis, facilitating
more effective peer interventions across various fields.
|
2503.20098 | Somnath Basu Roy Chowdhury | Somnath Basu Roy Chowdhury, Avinava Dubey, Ahmad Beirami, Rahul
Kidambi, Nicholas Monath, Amr Ahmed, Snigdha Chaturvedi | Fundamental Limits of Perfect Concept Erasure | Accepted at AISTATS 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Concept erasure is the task of erasing information about a concept (e.g.,
gender or race) from a representation set while retaining the maximum possible
utility -- information from original representations. Concept erasure is useful
in several applications, such as removing sensitive concepts to achieve
fairness and interpreting the impact of specific concepts on a model's
performance. Previous concept erasure techniques have prioritized robustly
erasing concepts over retaining the utility of the resultant representations.
However, there seems to be an inherent tradeoff between erasure and retaining
utility, making it unclear how to achieve perfect concept erasure while
maintaining high utility. In this paper, we offer a fresh perspective toward
solving this problem by quantifying the fundamental limits of concept erasure
through an information-theoretic lens. Using these results, we investigate
constraints on the data distribution and the erasure functions required to
achieve the limits of perfect concept erasure. Empirically, we show that the
derived erasure functions achieve the optimal theoretical bounds. Additionally,
we show that our approach outperforms existing methods on a range of synthetic
and real-world datasets using GPT-4 representations.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 22:36:10 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Chowdhury",
"Somnath Basu Roy",
""
],
[
"Dubey",
"Avinava",
""
],
[
"Beirami",
"Ahmad",
""
],
[
"Kidambi",
"Rahul",
""
],
[
"Monath",
"Nicholas",
""
],
[
"Ahmed",
"Amr",
""
],
[
"Chaturvedi",
"Snigdha",
""
]
] | TITLE: Fundamental Limits of Perfect Concept Erasure
ABSTRACT: Concept erasure is the task of erasing information about a concept (e.g.,
gender or race) from a representation set while retaining the maximum possible
utility -- information from original representations. Concept erasure is useful
in several applications, such as removing sensitive concepts to achieve
fairness and interpreting the impact of specific concepts on a model's
performance. Previous concept erasure techniques have prioritized robustly
erasing concepts over retaining the utility of the resultant representations.
However, there seems to be an inherent tradeoff between erasure and retaining
utility, making it unclear how to achieve perfect concept erasure while
maintaining high utility. In this paper, we offer a fresh perspective toward
solving this problem by quantifying the fundamental limits of concept erasure
through an information-theoretic lens. Using these results, we investigate
constraints on the data distribution and the erasure functions required to
achieve the limits of perfect concept erasure. Empirically, we show that the
derived erasure functions achieve the optimal theoretical bounds. Additionally,
we show that our approach outperforms existing methods on a range of synthetic
and real-world datasets using GPT-4 representations.
|
2503.20101 | Connor Hashemi | Albert W Reed, Connor Hashemi, Dennis Melamed, Nitesh Menon, Keigo
Hirakawa, Scott McCloskey | EBS-EKF: Accurate and High Frequency Event-based Star Tracking | Accepted into the proceedings of the Conference on Computer Vision
and Pattern Recognition (CVPR) for 2025. Link to code and dataset is
https://gitlab.kitware.com/nest-public/kw_ebs_star_tracking# | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Event-based sensors (EBS) are a promising new technology for star tracking
due to their low latency and power efficiency, but prior work has thus far been
evaluated exclusively in simulation with simplified signal models. We propose a
novel algorithm for event-based star tracking, grounded in an analysis of the
EBS circuit and an extended Kalman filter (EKF). We quantitatively evaluate our
method using real night sky data, comparing its results with those from a
space-ready active-pixel sensor (APS) star tracker. We demonstrate that our
method is an order-of-magnitude more accurate than existing methods due to
improved signal modeling and state estimation, while providing more frequent
updates and greater motion tolerance than conventional APS trackers. We provide
all code and the first dataset of events synchronized with APS solutions.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 22:44:50 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Reed",
"Albert W",
""
],
[
"Hashemi",
"Connor",
""
],
[
"Melamed",
"Dennis",
""
],
[
"Menon",
"Nitesh",
""
],
[
"Hirakawa",
"Keigo",
""
],
[
"McCloskey",
"Scott",
""
]
] | TITLE: EBS-EKF: Accurate and High Frequency Event-based Star Tracking
ABSTRACT: Event-based sensors (EBS) are a promising new technology for star tracking
due to their low latency and power efficiency, but prior work has thus far been
evaluated exclusively in simulation with simplified signal models. We propose a
novel algorithm for event-based star tracking, grounded in an analysis of the
EBS circuit and an extended Kalman filter (EKF). We quantitatively evaluate our
method using real night sky data, comparing its results with those from a
space-ready active-pixel sensor (APS) star tracker. We demonstrate that our
method is an order-of-magnitude more accurate than existing methods due to
improved signal modeling and state estimation, while providing more frequent
updates and greater motion tolerance than conventional APS trackers. We provide
all code and the first dataset of events synchronized with APS solutions.
|
2503.20104 | Changye Li | Changye Li, Zhecheng Sheng, Trevor Cohen, and Serguei Pakhomov | "Is There Anything Else?'': Examining Administrator Influence on
Linguistic Features from the Cookie Theft Picture Description Cognitive Test | Accepted to CMCL 2025 workshop, co-located with NAACL 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Alzheimer's Disease (AD) dementia is a progressive neurodegenerative disease
that negatively impacts patients' cognitive ability. Previous studies have
demonstrated that changes in naturalistic language samples can be useful for
early screening of AD dementia. However, the nature of language deficits often
requires test administrators to use various speech elicitation techniques
during spontaneous language assessments to obtain enough propositional
utterances from dementia patients. This could lead to the ``observer's effect''
on the downstream analysis that has not been fully investigated. Our study
seeks to quantify the influence of test administrators on linguistic features
in dementia assessment, using two English ``Cookie Theft'' picture description
corpora collected at different locations in which test administrators show
different levels of involvement. Our results show that the level of test
administrator involvement significantly impacts observed linguistic features in
patient speech. These results suggest that many of the significant linguistic
features in the downstream classification task may be
partially attributable to differences in the test administration practices
rather than solely to participants' cognitive status. The variations in test
administrator behavior can lead to systematic biases in linguistic data,
potentially confounding research outcomes and clinical assessments. Our study
suggests that there is a need for a more standardized test administration
protocol in the development of responsible clinical speech analytics
frameworks.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 23:01:15 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Li",
"Changye",
""
],
[
"Sheng",
"Zhecheng",
""
],
[
"Cohen",
"Trevor",
""
],
[
"Pakhomov",
"Serguei",
""
]
] | TITLE: "Is There Anything Else?'': Examining Administrator Influence on
Linguistic Features from the Cookie Theft Picture Description Cognitive Test
ABSTRACT: Alzheimer's Disease (AD) dementia is a progressive neurodegenerative disease
that negatively impacts patients' cognitive ability. Previous studies have
demonstrated that changes in naturalistic language samples can be useful for
early screening of AD dementia. However, the nature of language deficits often
requires test administrators to use various speech elicitation techniques
during spontaneous language assessments to obtain enough propositional
utterances from dementia patients. This could lead to the ``observer's effect''
on the downstream analysis that has not been fully investigated. Our study
seeks to quantify the influence of test administrators on linguistic features
in dementia assessment, using two English ``Cookie Theft'' picture description
corpora collected at different locations in which test administrators show
different levels of involvement. Our results show that the level of test
administrator involvement significantly impacts observed linguistic features in
patient speech. These results suggest that many of the significant linguistic
features in the downstream classification task may be
partially attributable to differences in the test administration practices
rather than solely to participants' cognitive status. The variations in test
administrator behavior can lead to systematic biases in linguistic data,
potentially confounding research outcomes and clinical assessments. Our study
suggests that there is a need for a more standardized test administration
protocol in the development of responsible clinical speech analytics
frameworks.
|
2503.20107 | Tomasz Pieciak | Dominika Ciupek, Maciej Malawski and Tomasz Pieciak | Federated Learning: A new frontier in the exploration of
multi-institutional medical imaging data | null | null | null | null | eess.IV physics.med-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Artificial intelligence has transformed the perspective of medical imaging,
leading to a genuine technological revolution in modern computer-assisted
healthcare systems. However, ubiquitously featured deep learning (DL) systems
require access to a considerable amount of data, facilitating proper knowledge
extraction and generalization. Admission to such extensive resources may be
hindered due to the time and effort required to convey ethical agreements, set
up and carry the acquisition procedures through, and manage the datasets
adequately with a particular emphasis on proper anonymization. One of the
pivotal challenges in the DL field is data integration from various sources
acquired using different hardware vendors, diverse acquisition protocols,
experimental setups, and even inter-operator variabilities. In this paper, we
review the federated learning (FL) concept that fosters the integration of
large-scale heterogeneous datasets from multiple institutions in training DL
models. In contrast to a centralized approach, the decentralized FL procedure
promotes training DL models while preserving data privacy at each institution
involved. We formulate the FL principle and comprehensively review general and
dedicated medical imaging aggregation and learning algorithms, enabling the
generation of a globally generalized model. We meticulously go through the
challenges in constructing FL-based systems, such as data heterogeneity across
the institutions, resilience to potential attacks on data privacy, and the
variability in computational and communication resources among the entangled
sites that might induce efficiency issues of the entire system. Finally, we
explore the up-to-date open frameworks for rapid FL-based algorithm prototyping
and shed light on future directions in this intensively growing field.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 23:08:36 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Ciupek",
"Dominika",
""
],
[
"Malawski",
"Maciej",
""
],
[
"Pieciak",
"Tomasz",
""
]
] | TITLE: Federated Learning: A new frontier in the exploration of
multi-institutional medical imaging data
ABSTRACT: Artificial intelligence has transformed the perspective of medical imaging,
leading to a genuine technological revolution in modern computer-assisted
healthcare systems. However, ubiquitously featured deep learning (DL) systems
require access to a considerable amount of data, facilitating proper knowledge
extraction and generalization. Admission to such extensive resources may be
hindered due to the time and effort required to convey ethical agreements, set
up and carry the acquisition procedures through, and manage the datasets
adequately with a particular emphasis on proper anonymization. One of the
pivotal challenges in the DL field is data integration from various sources
acquired using different hardware vendors, diverse acquisition protocols,
experimental setups, and even inter-operator variabilities. In this paper, we
review the federated learning (FL) concept that fosters the integration of
large-scale heterogeneous datasets from multiple institutions in training DL
models. In contrast to a centralized approach, the decentralized FL procedure
promotes training DL models while preserving data privacy at each institution
involved. We formulate the FL principle and comprehensively review general and
dedicated medical imaging aggregation and learning algorithms, enabling the
generation of a globally generalized model. We meticulously go through the
challenges in constructing FL-based systems, such as data heterogeneity across
the institutions, resilience to potential attacks on data privacy, and the
variability in computational and communication resources among the entangled
sites that might induce efficiency issues of the entire system. Finally, we
explore the up-to-date open frameworks for rapid FL-based algorithm prototyping
and shed light on future directions in this intensively growing field.
|
2503.20118 | Yuke Lou | Yuke Lou, Yiming Wang, Zhen Wu, Rui Zhao, Wenjia Wang, Mingyi Shi,
Taku Komura | Zero-Shot Human-Object Interaction Synthesis with Multimodal Priors | null | null | null | null | cs.GR cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human-object interaction (HOI) synthesis is important for various
applications, ranging from virtual reality to robotics. However, acquiring 3D
HOI data is challenging due to its complexity and high cost, limiting existing
methods to the narrow diversity of object types and interaction patterns in
training datasets. This paper proposes a novel zero-shot HOI synthesis
framework without relying on end-to-end training on currently limited 3D HOI
datasets. The core idea of our method lies in leveraging extensive HOI
knowledge from pre-trained Multimodal Models. Given a text description, our
system first obtains temporally consistent 2D HOI image sequences using image
or video generation models, which are then uplifted to 3D HOI milestones of
human and object poses. We employ pre-trained human pose estimation models to
extract human poses and introduce a generalizable category-level 6-DoF
estimation method to obtain the object poses from 2D HOI images. Our estimation
method is adaptive to various object templates obtained from text-to-3D models
or online retrieval. A physics-based tracking of the 3D HOI kinematic milestone
is further applied to refine both body motions and object poses, yielding more
physically plausible HOI generation results. The experimental results
demonstrate that our method is capable of generating open-vocabulary HOIs with
physical realism and semantic diversity.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 23:55:47 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Lou",
"Yuke",
""
],
[
"Wang",
"Yiming",
""
],
[
"Wu",
"Zhen",
""
],
[
"Zhao",
"Rui",
""
],
[
"Wang",
"Wenjia",
""
],
[
"Shi",
"Mingyi",
""
],
[
"Komura",
"Taku",
""
]
] | TITLE: Zero-Shot Human-Object Interaction Synthesis with Multimodal Priors
ABSTRACT: Human-object interaction (HOI) synthesis is important for various
applications, ranging from virtual reality to robotics. However, acquiring 3D
HOI data is challenging due to its complexity and high cost, limiting existing
methods to the narrow diversity of object types and interaction patterns in
training datasets. This paper proposes a novel zero-shot HOI synthesis
framework without relying on end-to-end training on currently limited 3D HOI
datasets. The core idea of our method lies in leveraging extensive HOI
knowledge from pre-trained Multimodal Models. Given a text description, our
system first obtains temporally consistent 2D HOI image sequences using image
or video generation models, which are then uplifted to 3D HOI milestones of
human and object poses. We employ pre-trained human pose estimation models to
extract human poses and introduce a generalizable category-level 6-DoF
estimation method to obtain the object poses from 2D HOI images. Our estimation
method is adaptive to various object templates obtained from text-to-3D models
or online retrieval. A physics-based tracking of the 3D HOI kinematic milestone
is further applied to refine both body motions and object poses, yielding more
physically plausible HOI generation results. The experimental results
demonstrate that our method is capable of generating open-vocabulary HOIs with
physical realism and semantic diversity.
|
2503.20119 | Jiwon Chang | Jiwon Chang, Fatemeh Nargesian | Approximating Opaque Top-k Queries | 25 pages, 9 figures. To be published in PACMMOD 2025 | null | 10.1145/3725266 | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | Combining query answering and data science workloads has become prevalent. An
important class of such workloads is top-k queries with a scoring function
implemented as an opaque UDF - a black box whose internal structure and scores
on the search domain are unavailable. Some typical examples include costly
calls to fuzzy classification and regression models. The models may also be
changed in an ad-hoc manner. Since the algorithm does not know the scoring
function's behavior on the input data, opaque top-k queries become expensive to
evaluate exactly or speed up by indexing. Hence, we propose an approximation
algorithm for opaque top-k query answering. Our proposed solution is a
task-independent hierarchical index and a novel bandit algorithm. The index
clusters elements by some cheap vector representation then builds a tree of the
clusters. Our bandit is a diminishing returns submodular epsilon-greedy bandit
algorithm that maximizes the sum of the solution set's scores. Our bandit
models the distribution of scores in each arm using a histogram, then targets
arms with fat tails. We prove that our bandit algorithm approaches a constant
factor of the optimal algorithm. We evaluate our standalone library on large
synthetic, image, and tabular datasets over a variety of scoring functions. Our
method accelerates the time required to achieve nearly optimal scores by up to
an order of magnitude compared to exhaustive scan while consistently
outperforming baseline sampling algorithms.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 23:59:29 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Chang",
"Jiwon",
""
],
[
"Nargesian",
"Fatemeh",
""
]
] | TITLE: Approximating Opaque Top-k Queries
ABSTRACT: Combining query answering and data science workloads has become prevalent. An
important class of such workloads is top-k queries with a scoring function
implemented as an opaque UDF - a black box whose internal structure and scores
on the search domain are unavailable. Some typical examples include costly
calls to fuzzy classification and regression models. The models may also be
changed in an ad-hoc manner. Since the algorithm does not know the scoring
function's behavior on the input data, opaque top-k queries become expensive to
evaluate exactly or speed up by indexing. Hence, we propose an approximation
algorithm for opaque top-k query answering. Our proposed solution is a
task-independent hierarchical index and a novel bandit algorithm. The index
clusters elements by some cheap vector representation then builds a tree of the
clusters. Our bandit is a diminishing returns submodular epsilon-greedy bandit
algorithm that maximizes the sum of the solution set's scores. Our bandit
models the distribution of scores in each arm using a histogram, then targets
arms with fat tails. We prove that our bandit algorithm approaches a constant
factor of the optimal algorithm. We evaluate our standalone library on large
synthetic, image, and tabular datasets over a variety of scoring functions. Our
method accelerates the time required to achieve nearly optimal scores by up to
an order of magnitude compared to exhaustive scan while consistently
outperforming baseline sampling algorithms.
|
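The abstract above describes clustering elements by cheap vectors and spending an expensive opaque-UDF budget via an epsilon-greedy bandit. The sketch below is a simplified reading of that idea only: it uses flat k-means clusters as arms and mean reward per arm instead of the paper's tree index and fat-tail histogram targeting; `score_udf`, the budget, and all parameters are placeholders.

```python
import heapq
import random
import numpy as np
from sklearn.cluster import KMeans

def approx_topk(items, vectors, score_udf, k=10, budget=500, n_arms=20, eps=0.1):
    """Epsilon-greedy sketch: treat clusters of cheap vectors as bandit arms,
    spend the expensive opaque-UDF budget on promising clusters, keep a top-k heap."""
    labels = KMeans(n_clusters=n_arms, n_init=10).fit_predict(vectors)
    arms = {a: [i for i, l in enumerate(labels) if l == a] for a in range(n_arms)}
    rewards = {a: [] for a in range(n_arms)}
    topk = []  # min-heap of (score, item index)

    for _ in range(budget):
        nonempty = [a for a in arms if arms[a]]
        if not nonempty:
            break
        if random.random() < eps or all(not rewards[a] for a in nonempty):
            arm = random.choice(nonempty)          # explore
        else:                                       # exploit best observed mean
            arm = max(nonempty, key=lambda a: np.mean(rewards[a]) if rewards[a] else -np.inf)
        idx = arms[arm].pop(random.randrange(len(arms[arm])))
        s = score_udf(items[idx])                   # the expensive black-box call
        rewards[arm].append(s)
        heapq.heappush(topk, (s, idx))
        if len(topk) > k:
            heapq.heappop(topk)
    return sorted(topk, reverse=True)

# Toy usage: the "opaque UDF" is a hidden linear score over random vectors.
X = np.random.randn(5000, 16)
result = approx_topk(list(range(5000)), X,
                     score_udf=lambda i: float(X[i] @ np.arange(16)), k=5)
```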
2503.20120 | Hongwei Wen | Hongwei Wen, Annika Betken, Wouter Koolen | On the Robustness of Kernel Ridge Regression Using the Cauchy Loss
Function | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | Robust regression aims to develop methods for estimating an unknown
regression function in the presence of outliers, heavy-tailed distributions, or
contaminated data, which can severely impact performance. Most existing
theoretical results in robust regression assume that the noise has a finite
absolute mean, an assumption violated by certain distributions, such as Cauchy
and some Pareto noise. In this paper, we introduce a generalized Cauchy noise
framework that accommodates all noise distributions with finite moments of any
order, even when the absolute mean is infinite. Within this framework, we study
the \textit{kernel Cauchy ridge regressor} (\textit{KCRR}), which minimizes a
regularized empirical Cauchy risk to achieve robustness. To derive the
$L_2$-risk bound for KCRR, we establish a connection between the excess Cauchy
risk and $L_2$-risk for sufficiently large scale parameters of the Cauchy loss,
which reveals that these two risks are equivalent. Furthermore, under the
assumption that the regression function satisfies H\"older smoothness, we
derive excess Cauchy risk bounds for KCRR, showing improved performance as the
scale parameter decreases. By considering the twofold effect of the scale
parameter on the excess Cauchy risk and its equivalence with the $L_2$-risk, we
establish the almost minimax-optimal convergence rate for KCRR in terms of
$L_2$-risk, highlighting the robustness of the Cauchy loss in handling various
types of noise. Finally, we validate the effectiveness of KCRR through
experiments on both synthetic and real-world datasets under diverse noise
corruption scenarios.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 00:00:53 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wen",
"Hongwei",
""
],
[
"Betken",
"Annika",
""
],
[
"Koolen",
"Wouter",
""
]
] | TITLE: On the Robustness of Kernel Ridge Regression Using the Cauchy Loss
Function
ABSTRACT: Robust regression aims to develop methods for estimating an unknown
regression function in the presence of outliers, heavy-tailed distributions, or
contaminated data, which can severely impact performance. Most existing
theoretical results in robust regression assume that the noise has a finite
absolute mean, an assumption violated by certain distributions, such as Cauchy
and some Pareto noise. In this paper, we introduce a generalized Cauchy noise
framework that accommodates all noise distributions with finite moments of any
order, even when the absolute mean is infinite. Within this framework, we study
the \textit{kernel Cauchy ridge regressor} (\textit{KCRR}), which minimizes a
regularized empirical Cauchy risk to achieve robustness. To derive the
$L_2$-risk bound for KCRR, we establish a connection between the excess Cauchy
risk and $L_2$-risk for sufficiently large scale parameters of the Cauchy loss,
which reveals that these two risks are equivalent. Furthermore, under the
assumption that the regression function satisfies H\"older smoothness, we
derive excess Cauchy risk bounds for KCRR, showing improved performance as the
scale parameter decreases. By considering the twofold effect of the scale
parameter on the excess Cauchy risk and its equivalence with the $L_2$-risk, we
establish the almost minimax-optimal convergence rate for KCRR in terms of
$L_2$-risk, highlighting the robustness of the Cauchy loss in handling various
types of noise. Finally, we validate the effectiveness of KCRR through
experiments on both synthetic and real-world datasets under diverse noise
corruption scenarios.
|
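As a hedged illustration of the kernel Cauchy ridge regressor idea described above (not the paper's estimator or theory), the sketch below fits f(x) = sum_i alpha_i k(x_i, x) by gradient descent on a mean Cauchy loss plus an RKHS-norm penalty; the particular loss scaling, RBF kernel, and optimizer settings are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def cauchy_loss_grad(r, sigma):
    # L(r) = (sigma^2 / 2) * log(1 + r^2 / sigma^2);  dL/dr = r / (1 + r^2 / sigma^2)
    loss = 0.5 * sigma**2 * np.log1p(r**2 / sigma**2)
    grad = r / (1.0 + r**2 / sigma**2)
    return loss.mean(), grad

def fit_kcrr(X, y, sigma=1.0, lam=1e-2, gamma=1.0, lr=0.1, iters=500):
    """Gradient-descent sketch: minimize mean Cauchy loss + lam * alpha^T K alpha
    for a kernel expansion f(x) = sum_i alpha_i k(x_i, x)."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(iters):
        r = K @ alpha - y
        _, g = cauchy_loss_grad(r, sigma)
        grad_alpha = K @ g / n + 2 * lam * K @ alpha   # K is symmetric
        alpha -= lr * grad_alpha
    return alpha

# Toy usage with heavy-tailed (Cauchy) noise on a sine function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_cauchy(200)
alpha = fit_kcrr(X, y)
```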
2503.20127 | Peter Schafhalter | Peter Schafhalter, Alexander Krentsel, Joseph E. Gonzalez, Sylvia
Ratnasamy, Scott Shenker, Ion Stoica | Bandwidth Allocation for Cloud-Augmented Autonomous Driving | 18 pages, 11 figures | null | null | null | cs.RO cs.NI | http://creativecommons.org/licenses/by/4.0/ | Autonomous vehicle (AV) control systems increasingly rely on ML models for
tasks such as perception and planning. Current practice is to run these models
on the car's local hardware due to real-time latency constraints and
reliability concerns, which limits model size and thus accuracy. Prior work has
observed that we could augment current systems by running larger models in the
cloud, relying on faster cloud runtimes to offset the cellular network latency.
However, prior work does not account for an important practical constraint:
limited cellular bandwidth. We show that, for typical bandwidth levels,
proposed techniques for cloud-augmented AV models take too long to transfer
data, thus mostly falling back to the on-car models and resulting in no
accuracy improvement.
In this work, we show that realizing cloud-augmented AV models requires
intelligent use of this scarce bandwidth, i.e. carefully allocating bandwidth
across tasks and providing multiple data compression and model options. We
formulate this as a resource allocation problem to maximize car utility, and
present our system \sysname which achieves an increase in average model
accuracy by up to 15 percentage points on driving scenarios from the Waymo Open
Dataset.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 00:33:38 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Schafhalter",
"Peter",
""
],
[
"Krentsel",
"Alexander",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Ratnasamy",
"Sylvia",
""
],
[
"Shenker",
"Scott",
""
],
[
"Stoica",
"Ion",
""
]
] | TITLE: Bandwidth Allocation for Cloud-Augmented Autonomous Driving
ABSTRACT: Autonomous vehicle (AV) control systems increasingly rely on ML models for
tasks such as perception and planning. Current practice is to run these models
on the car's local hardware due to real-time latency constraints and
reliability concerns, which limits model size and thus accuracy. Prior work has
observed that we could augment current systems by running larger models in the
cloud, relying on faster cloud runtimes to offset the cellular network latency.
However, prior work does not account for an important practical constraint:
limited cellular bandwidth. We show that, for typical bandwidth levels,
proposed techniques for cloud-augmented AV models take too long to transfer
data, thus mostly falling back to the on-car models and resulting in no
accuracy improvement.
In this work, we show that realizing cloud-augmented AV models requires
intelligent use of this scarce bandwidth, i.e. carefully allocating bandwidth
across tasks and providing multiple data compression and model options. We
formulate this as a resource allocation problem to maximize car utility, and
present our system \sysname which achieves an increase in average model
accuracy by up to 15 percentage points on driving scenarios from the Waymo Open
Dataset.
|
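The abstract above frames cloud offloading as a utility-maximizing bandwidth allocation across tasks with multiple model and compression options. The toy sketch below brute-forces that choice for a handful of tasks; the task names, option tables, and budget are made-up numbers, not values from the paper or the Waymo Open Dataset.

```python
from itertools import product

# Each task offers several (bandwidth_mbps, expected_accuracy_gain) options,
# including a "stay on-car" option that costs no bandwidth. Values are illustrative.
tasks = {
    "detection":    [(0.0, 0.00), (2.0, 0.03), (6.0, 0.07)],
    "segmentation": [(0.0, 0.00), (4.0, 0.05), (10.0, 0.09)],
    "prediction":   [(0.0, 0.00), (1.0, 0.02), (3.0, 0.04)],
}
BANDWIDTH_BUDGET = 10.0  # Mbps available on the cellular uplink (assumed)

def best_allocation(tasks, budget):
    """Exhaustively pick one option per task to maximize total utility
    (sum of accuracy gains) subject to the bandwidth budget. Fine for a
    handful of tasks; a real system would need a faster solver."""
    names = list(tasks)
    best, best_choice = -1.0, None
    for combo in product(*(tasks[n] for n in names)):
        bw = sum(o[0] for o in combo)
        util = sum(o[1] for o in combo)
        if bw <= budget and util > best:
            best, best_choice = util, dict(zip(names, combo))
    return best, best_choice

utility, choice = best_allocation(tasks, BANDWIDTH_BUDGET)
```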
2503.20144 | Seyedeh Azadeh Fallah Mortezanejad Dr | Seyedeh Azadeh Fallah Mortezanejad (1), Ruochen Wang (2), Ali
Mohammad-Djafari (3, 4) ((1, 2) School of Automotive and Traffic Engineering,
Jiangsu University, Zhenjiang, Jiangsu, China. (3) International Science
Consulting and Training (ISCT), Bures sur Yvette, France. (4) Shanfeng
Company, Shaoxing, China) | Physics-Informed Neural Networks with Unknown Partial Differential
Equations: an Application in Multivariate Time Series | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A significant advancement in Neural Network (NN) research is the integration
of domain-specific knowledge through custom loss functions. This approach
addresses a crucial challenge: how can models utilize physics or mathematical
principles to enhance predictions when dealing with sparse, noisy, or
incomplete data? Physics-Informed Neural Networks (PINNs) put this idea into
practice by incorporating physical equations, such as Partial Differential
Equations (PDEs), as soft constraints. This guidance helps the networks find
solutions that align with established laws. Recently, researchers have expanded
this framework to include Bayesian NNs (BNNs), which allow for uncertainty
quantification while still adhering to physical principles. But what happens
when the governing equations of a system are not known? In this work, we
introduce methods to automatically extract PDEs from historical data. We then
integrate these learned equations into three different modeling approaches:
PINNs, Bayesian-PINNs (B-PINNs), and Bayesian Linear Regression (BLR). To
assess these frameworks, we evaluate them on a real-world Multivariate Time
Series (MTS) dataset. We compare their effectiveness in forecasting future
states under different scenarios: with and without PDE constraints and accuracy
considerations. This research aims to bridge the gap between data-driven
discovery and physics-guided learning, providing valuable insights for
practical applications.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 01:24:47 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Mortezanejad",
"Seyedeh Azadeh Fallah",
""
],
[
"Wang",
"Ruochen",
""
],
[
"Mohammad-Djafari",
"Ali",
""
]
] | TITLE: Physics-Informed Neural Networks with Unknown Partial Differential
Equations: an Application in Multivariate Time Series
ABSTRACT: A significant advancement in Neural Network (NN) research is the integration
of domain-specific knowledge through custom loss functions. This approach
addresses a crucial challenge: how can models utilize physics or mathematical
principles to enhance predictions when dealing with sparse, noisy, or
incomplete data? Physics-Informed Neural Networks (PINNs) put this idea into
practice by incorporating physical equations, such as Partial Differential
Equations (PDEs), as soft constraints. This guidance helps the networks find
solutions that align with established laws. Recently, researchers have expanded
this framework to include Bayesian NNs (BNNs), which allow for uncertainty
quantification while still adhering to physical principles. But what happens
when the governing equations of a system are not known? In this work, we
introduce methods to automatically extract PDEs from historical data. We then
integrate these learned equations into three different modeling approaches:
PINNs, Bayesian-PINNs (B-PINNs), and Bayesian Linear Regression (BLR). To
assess these frameworks, we evaluate them on a real-world Multivariate Time
Series (MTS) dataset. We compare their effectiveness in forecasting future
states under different scenarios: with and without PDE constraints and accuracy
considerations. This research aims to bridge the gap between data-driven
discovery and physics-guided learning, providing valuable insights for
practical applications.
|
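To make the "PINN with an unknown PDE" idea above concrete, the hedged PyTorch sketch below jointly trains a network and two unknown PDE coefficients by penalizing an assumed residual of the form u_t - c1*u_xx - c2*u_x; the residual template, network size, and fake observations are illustrative assumptions, not the paper's extraction procedure.

```python
import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                                 nn.Linear(64, 64), nn.Tanh(),
                                 nn.Linear(64, 1))
        # Unknown PDE coefficients, learned jointly with the network:
        # residual assumed to be u_t - c1 * u_xx - c2 * u_x = 0.
        self.c = nn.Parameter(torch.zeros(2))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

def pde_residual(model, x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = model(x, t)
    grads = lambda out, var: torch.autograd.grad(out, var, torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = grads(u, t), grads(u, x)
    u_xx = grads(u_x, x)
    return u_t - model.c[0] * u_xx - model.c[1] * u_x

# Loss = data fit on observed (x, t, u) + residual penalty on collocation points.
model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_d = torch.rand(128, 1); t_d = torch.rand(128, 1); u_d = torch.sin(3.14 * x_d)  # fake observations
x_c = torch.rand(256, 1); t_c = torch.rand(256, 1)                               # collocation points
for _ in range(200):
    opt.zero_grad()
    loss = ((model(x_d, t_d) - u_d) ** 2).mean() + (pde_residual(model, x_c, t_c) ** 2).mean()
    loss.backward()
    opt.step()
```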
2503.20148 | Seyedeh Azadeh Fallah Mortezanejad Dr | Seyedeh Azadeh Fallah Mortezanejad, Ruochen Wang (School of Automotive
and Traffic Engineering, Jiangsu University, Zhenjiang, Jiangsu, China) | Addressing Challenges in Time Series Forecasting: A Comprehensive
Comparison of Machine Learning Techniques | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The explosion of Time Series (TS) data, driven by advancements in technology,
necessitates sophisticated analytical methods. Modern management systems
increasingly rely on analyzing this data, highlighting the importance of
effcient processing techniques. State-of-the-art Machine Learning (ML)
approaches for TS analysis and forecasting are becoming prevalent. This paper
briefly describes and compiles suitable algorithms for TS regression task. We
compare these algorithms against each other and the classic ARIMA method using
diverse datasets: complete data, data with outliers, and data with missing
values. The focus is on forecasting accuracy, particularly for long-term
predictions. This research aids in selecting the most appropriate algorithm
based on forecasting needs and data characteristics.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 01:55:56 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Mortezanejad",
"Seyedeh Azadeh Fallah",
"",
"School of Automotive\n and Traffic Engineering, Jiangsu University, Zhenjiang, Jiangsu, China"
],
[
"Wang",
"Ruochen",
"",
"School of Automotive\n and Traffic Engineering, Jiangsu University, Zhenjiang, Jiangsu, China"
]
] | TITLE: Addressing Challenges in Time Series Forecasting: A Comprehensive
Comparison of Machine Learning Techniques
ABSTRACT: The explosion of Time Series (TS) data, driven by advancements in technology,
necessitates sophisticated analytical methods. Modern management systems
increasingly rely on analyzing this data, highlighting the importance of
efficient processing techniques. State-of-the-art Machine Learning (ML)
approaches for TS analysis and forecasting are becoming prevalent. This paper
briefly describes and compiles suitable algorithms for TS regression tasks. We
compare these algorithms against each other and the classic ARIMA method using
diverse datasets: complete data, data with outliers, and data with missing
values. The focus is on forecasting accuracy, particularly for long-term
predictions. This research aids in selecting the most appropriate algorithm
based on forecasting needs and data characteristics.
|
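A minimal sketch of the kind of ARIMA-versus-ML comparison described above, run on a synthetic series; the specific orders, lag count, and models are arbitrary illustrative choices rather than the paper's experimental setup.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
y = np.sin(np.arange(300) * 0.1) + 0.1 * rng.normal(size=300)  # synthetic series
train, test = y[:250], y[250:]
h = len(test)

# Classic ARIMA baseline.
arima_fc = ARIMA(train, order=(2, 0, 1)).fit().forecast(steps=h)

# ML baseline: random forest on lagged values, forecasting recursively.
LAGS = 10
X = np.array([train[i - LAGS:i] for i in range(LAGS, len(train))])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, train[LAGS:])
history = list(train)
rf_fc = []
for _ in range(h):
    rf_fc.append(rf.predict(np.array(history[-LAGS:]).reshape(1, -1))[0])
    history.append(rf_fc[-1])

print("ARIMA MAE:", mean_absolute_error(test, arima_fc))
print("RF    MAE:", mean_absolute_error(test, rf_fc))
```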
2503.20158 | Oren Kraus | Oren Kraus, Federico Comitani, John Urbanik, Kian Kenyon-Dean,
Lakshmanan Arumugam, Saber Saberian, Cas Wognum, Safiye Celik, and Imran S.
Haque | RxRx3-core: Benchmarking drug-target interactions in High-Content
Microscopy | Published at LMRL Workshop at ICLR 2025 | null | null | null | q-bio.QM cs.LG q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | High Content Screening (HCS) microscopy datasets have transformed the ability
to profile cellular responses to genetic and chemical perturbations, enabling
cell-based inference of drug-target interactions (DTI). However, the adoption
of representation learning methods for HCS data has been hindered by the lack
of accessible datasets and robust benchmarks. To address this gap, we present
RxRx3-core, a curated and compressed subset of the RxRx3 dataset, and an
associated DTI benchmarking task. At just 18GB, RxRx3-core significantly
reduces the size barrier associated with large-scale HCS datasets while
preserving critical data necessary for benchmarking representation learning
models against a zero-shot DTI prediction task. RxRx3-core includes 222,601
microscopy images spanning 736 CRISPR knockouts and 1,674 compounds at 8
concentrations. RxRx3-core is available on HuggingFace and Polaris, along with
pre-trained embeddings and benchmarking code, ensuring accessibility for the
research community. By providing a compact dataset and robust benchmarks, we
aim to accelerate innovation in representation learning methods for HCS data
and support the discovery of novel biological insights.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 02:23:58 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Kraus",
"Oren",
""
],
[
"Comitani",
"Federico",
""
],
[
"Urbanik",
"John",
""
],
[
"Kenyon-Dean",
"Kian",
""
],
[
"Arumugam",
"Lakshmanan",
""
],
[
"Saberian",
"Saber",
""
],
[
"Wognum",
"Cas",
""
],
[
"Celik",
"Safiye",
""
],
[
"Haque",
"Imran S.",
""
]
] | TITLE: RxRx3-core: Benchmarking drug-target interactions in High-Content
Microscopy
ABSTRACT: High Content Screening (HCS) microscopy datasets have transformed the ability
to profile cellular responses to genetic and chemical perturbations, enabling
cell-based inference of drug-target interactions (DTI). However, the adoption
of representation learning methods for HCS data has been hindered by the lack
of accessible datasets and robust benchmarks. To address this gap, we present
RxRx3-core, a curated and compressed subset of the RxRx3 dataset, and an
associated DTI benchmarking task. At just 18GB, RxRx3-core significantly
reduces the size barrier associated with large-scale HCS datasets while
preserving critical data necessary for benchmarking representation learning
models against a zero-shot DTI prediction task. RxRx3-core includes 222,601
microscopy images spanning 736 CRISPR knockouts and 1,674 compounds at 8
concentrations. RxRx3-core is available on HuggingFace and Polaris, along with
pre-trained embeddings and benchmarking code, ensuring accessibility for the
research community. By providing a compact dataset and robust benchmarks, we
aim to accelerate innovation in representation learning methods for HCS data
and support the discovery of novel biological insights.
|
2503.20164 | Jinyu Wang | Jinyu Wang, Xianghui Fang, Nan Chen, Bo Qin, Mu Mu, Chaopeng Ji | A Dual-Core Model for ENSO Diversity: Unifying Model Hierarchies for
Realistic Simulations | null | null | null | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite advances in climate modeling, simulating the El Ni\~no-Southern
Oscillation (ENSO) remains challenging due to its spatiotemporal diversity and
complexity. To address this, we build upon existing model hierarchies to
develop a new unified modeling platform, which provides practical, scalable,
and accurate tools for advancing ENSO research. Within this framework, we
introduce a dual-core ENSO model (DCM) that integrates two widely used ENSO
modeling approaches: a linear stochastic model confined to the equator and a
nonlinear intermediate model extending off-equator. The stochastic model
ensures computational efficiency and statistical accuracy. It captures
essential ENSO characteristics and reproduces the observed non-Gaussian
statistics. Meanwhile, the nonlinear model assimilates pseudo-observations from
the stochastic model while resolving key air-sea interactions, such as feedback
balances and spatial patterns of sea surface temperature anomalies (SSTA)
during El Ni\~no peaks and improving western-central Pacific SSTA magnitudes
and spatial accuracy. The DCM effectively captures the realistic dynamical and
statistical features of the ENSO diversity and complexity. Notably, the
computational efficiency of the DCM facilitates a rapid generation of extended
ENSO datasets, overcoming observational limitations. The outcome facilitates
the analysis of long-term variations, advancing our understanding of ENSO and
many other climate phenomena.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 02:44:19 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wang",
"Jinyu",
""
],
[
"Fang",
"Xianghui",
""
],
[
"Chen",
"Nan",
""
],
[
"Qin",
"Bo",
""
],
[
"Mu",
"Mu",
""
],
[
"Ji",
"Chaopeng",
""
]
] | TITLE: A Dual-Core Model for ENSO Diversity: Unifying Model Hierarchies for
Realistic Simulations
ABSTRACT: Despite advances in climate modeling, simulating the El Ni\~no-Southern
Oscillation (ENSO) remains challenging due to its spatiotemporal diversity and
complexity. To address this, we build upon existing model hierarchies to
develop a new unified modeling platform, which provides practical, scalable,
and accurate tools for advancing ENSO research. Within this framework, we
introduce a dual-core ENSO model (DCM) that integrates two widely used ENSO
modeling approaches: a linear stochastic model confined to the equator and a
nonlinear intermediate model extending off-equator. The stochastic model
ensures computational efficiency and statistical accuracy. It captures
essential ENSO characteristics and reproduces the observed non-Gaussian
statistics. Meanwhile, the nonlinear model assimilates pseudo-observations from
the stochastic model while resolving key air-sea interactions, such as feedback
balances and spatial patterns of sea surface temperature anomalies (SSTA)
during El Ni\~no peaks and improving western-central Pacific SSTA magnitudes
and spatial accuracy. The DCM effectively captures the realistic dynamical and
statistical features of the ENSO diversity and complexity. Notably, the
computational efficiency of the DCM facilitates a rapid generation of extended
ENSO datasets, overcoming observational limitations. The outcome facilitates
the analysis of long-term variations, advancing our understanding of ENSO and
many other climate phenomena.
|
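As a loose illustration of the "linear stochastic model" component mentioned above (not the paper's dual-core model), the sketch below integrates a toy two-variable damped oscillator with state-dependent noise via Euler-Maruyama; all coefficients, units, and the noise form are invented for illustration.

```python
import numpy as np

def simulate_enso_like(n_months=12000, dt=0.1, seed=0):
    """Euler-Maruyama integration of a toy linear oscillator (SST anomaly T,
    thermocline depth h) with state-dependent noise. Time unit is months;
    coefficients are illustrative only."""
    rng = np.random.default_rng(seed)
    n = int(n_months / dt)
    T = np.zeros(n)
    h = np.zeros(n)
    a, b, c, d = -0.02, 0.13, -0.13, -0.02   # weak damping, multi-year oscillation
    sigma = 0.2
    for k in range(n - 1):
        # Noise amplitude grows with positive T, yielding non-Gaussian (skewed) statistics.
        noise = sigma * (1.0 + 0.5 * max(T[k], 0.0)) * np.sqrt(dt) * rng.normal()
        T[k + 1] = T[k] + (a * T[k] + b * h[k]) * dt + noise
        h[k + 1] = h[k] + (c * T[k] + d * h[k]) * dt
    return T, h

T, h = simulate_enso_like()
skew = ((T - T.mean()) ** 3).mean() / T.std() ** 3
```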
2503.20166 | Xianke Qiang | Xianke Qiang, Zheng Chang, Ying-Chang Liang | AIGC-assisted Federated Learning for Edge Intelligence: Architecture
Design, Research Challenges and Future Directions | null | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning (FL) can fully leverage large-scale terminal data while
ensuring privacy and security, and is considered a distributed alternative to
centralized machine learning. However, the issue of data heterogeneity
poses limitations on FL's performance. To address this challenge, artificial
intelligence-generated content (AIGC) which is an innovative data synthesis
technique emerges as one potential solution. In this article, we first provide
an overview of the system architecture, performance metrics, and challenges
associated with AIGC-assisted FL system design. We then propose the Generative
federated learning (GenFL) architecture and present its workflow, including the
design of aggregation and weight policy. Finally, using the CIFAR10 and
CIFAR100 datasets, we employ diffusion models to generate synthetic data and improve
FL performance. Experiments conducted under various non-independent and
identically distributed (non-IID) data distributions demonstrate the
effectiveness of GenFL on overcoming the bottlenecks in FL caused by data
heterogeneity. Open research directions in the research of AIGC-assisted FL are
also discussed.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 02:45:19 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Qiang",
"Xianke",
""
],
[
"Chang",
"Zheng",
""
],
[
"Liang",
"Ying-Chang",
""
]
] | TITLE: AIGC-assisted Federated Learning for Edge Intelligence: Architecture
Design, Research Challenges and Future Directions
ABSTRACT: Federated learning (FL) can fully leverage large-scale terminal data while
ensuring privacy and security, and is considered a distributed alternative to
centralized machine learning. However, the issue of data heterogeneity
poses limitations on FL's performance. To address this challenge, artificial
intelligence-generated content (AIGC) which is an innovative data synthesis
technique emerges as one potential solution. In this article, we first provide
an overview of the system architecture, performance metrics, and challenges
associated with AIGC-assisted FL system design. We then propose the Generative
federated learning (GenFL) architecture and present its workflow, including the
design of aggregation and weight policy. Finally, using the CIFAR10 and
CIFAR100 datasets, we employ diffusion models to generate synthetic data and improve
FL performance. Experiments conducted under various non-independent and
identically distributed (non-IID) data distributions demonstrate the
effectiveness of GenFL in overcoming the bottlenecks in FL caused by data
heterogeneity. Open research directions for AIGC-assisted FL are
also discussed.
|
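The abstract above does not spell out GenFL's aggregation and weight policy; as a baseline illustration of the federated averaging it builds on, the sketch below performs dataset-size-weighted FedAvg over client parameters, where a client's sample count may include AIGC-generated synthetic data. Client names and sizes are placeholders.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted FedAvg aggregation: average each parameter across clients,
    weighted by local dataset size (real plus any synthetic samples)."""
    total = float(sum(client_sizes))
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }

# Toy usage: three clients, each holding a single weight matrix.
clients = [{"layer1": np.random.randn(4, 4)} for _ in range(3)]
sizes = [1000, 500, 2000]   # e.g., client 3 added synthetic images to rebalance classes
global_model = fedavg(clients, sizes)
```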
2503.20168 | Sheng Miao | Sheng Miao, Jiaxin Huang, Dongfeng Bai, Xu Yan, Hongyu Zhou, Yue Wang,
Bingbing Liu, Andreas Geiger, Yiyi Liao | EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View
Synthesis | CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Novel view synthesis of urban scenes is essential for autonomous
driving-related applications. Existing NeRF and 3DGS-based methods show
promising results in achieving photorealistic renderings but require slow,
per-scene optimization. We introduce EVolSplat, an efficient 3D Gaussian
Splatting model for urban scenes that works in a feed-forward manner. Unlike
existing feed-forward, pixel-aligned 3DGS methods, which often suffer from
issues like multi-view inconsistencies and duplicated content, our approach
predicts 3D Gaussians across multiple frames within a unified volume using a 3D
convolutional network. This is achieved by initializing 3D Gaussians with noisy
depth predictions, and then refining their geometric properties in 3D space and
predicting color based on 2D textures. Our model also handles distant views and
the sky with a flexible hemisphere background model. This enables us to perform
fast, feed-forward reconstruction while achieving real-time rendering.
Experimental evaluations on the KITTI-360 and Waymo datasets show that our
method achieves state-of-the-art quality compared to existing feed-forward
3DGS- and NeRF-based methods.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 02:47:27 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Miao",
"Sheng",
""
],
[
"Huang",
"Jiaxin",
""
],
[
"Bai",
"Dongfeng",
""
],
[
"Yan",
"Xu",
""
],
[
"Zhou",
"Hongyu",
""
],
[
"Wang",
"Yue",
""
],
[
"Liu",
"Bingbing",
""
],
[
"Geiger",
"Andreas",
""
],
[
"Liao",
"Yiyi",
""
]
] | TITLE: EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View
Synthesis
ABSTRACT: Novel view synthesis of urban scenes is essential for autonomous
driving-related applications. Existing NeRF and 3DGS-based methods show
promising results in achieving photorealistic renderings but require slow,
per-scene optimization. We introduce EVolSplat, an efficient 3D Gaussian
Splatting model for urban scenes that works in a feed-forward manner. Unlike
existing feed-forward, pixel-aligned 3DGS methods, which often suffer from
issues like multi-view inconsistencies and duplicated content, our approach
predicts 3D Gaussians across multiple frames within a unified volume using a 3D
convolutional network. This is achieved by initializing 3D Gaussians with noisy
depth predictions, and then refining their geometric properties in 3D space and
predicting color based on 2D textures. Our model also handles distant views and
the sky with a flexible hemisphere background model. This enables us to perform
fast, feed-forward reconstruction while achieving real-time rendering.
Experimental evaluations on the KITTI-360 and Waymo datasets show that our
method achieves state-of-the-art quality compared to existing feed-forward
3DGS- and NeRF-based methods.
|
2503.20179 | Xiyu Ding | Shijia Zhang, Xiyu Ding, Kai Ding, Jacob Zhang, Kevin Galinsky,
Mengrui Wang, Ryan P. Mayers, Zheyu Wang, Hadi Kharrazi | ProtoBERT-LoRA: Parameter-Efficient Prototypical Finetuning for
Immunotherapy Study Identification | Submitted to AMIA 2025 Annual Symposium | null | null | null | cs.CL cs.IR q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Identifying immune checkpoint inhibitor (ICI) studies in genomic repositories
like Gene Expression Omnibus (GEO) is vital for cancer research yet remains
challenging due to semantic ambiguity, extreme class imbalance, and limited
labeled data in low-resource settings. We present ProtoBERT-LoRA, a hybrid
framework that combines PubMedBERT with prototypical networks and Low-Rank
Adaptation (LoRA) for efficient fine-tuning. The model enforces class-separable
embeddings via episodic prototype training while preserving biomedical domain
knowledge. Our dataset was divided as follows: Training (20 positive, 20 negative),
Prototype Set (10 positive, 10 negative), Validation (20 positive, 200
negative), and Test (71 positive, 765 negative). Evaluated on the test dataset,
ProtoBERT-LoRA achieved an F1-score of 0.624 (precision: 0.481, recall: 0.887),
outperforming the rule-based system, machine learning baselines and finetuned
PubMedBERT. Application to 44,287 unlabeled studies reduced manual review
efforts by 82%. Ablation studies confirmed that combining prototypes with LoRA
improved performance by 29% over stand-alone LoRA.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 03:09:11 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhang",
"Shijia",
""
],
[
"Ding",
"Xiyu",
""
],
[
"Ding",
"Kai",
""
],
[
"Zhang",
"Jacob",
""
],
[
"Galinsky",
"Kevin",
""
],
[
"Wang",
"Mengrui",
""
],
[
"Mayers",
"Ryan P.",
""
],
[
"Wang",
"Zheyu",
""
],
[
"Kharrazi",
"Hadi",
""
]
] | TITLE: ProtoBERT-LoRA: Parameter-Efficient Prototypical Finetuning for
Immunotherapy Study Identification
ABSTRACT: Identifying immune checkpoint inhibitor (ICI) studies in genomic repositories
like Gene Expression Omnibus (GEO) is vital for cancer research yet remains
challenging due to semantic ambiguity, extreme class imbalance, and limited
labeled data in low-resource settings. We present ProtoBERT-LoRA, a hybrid
framework that combines PubMedBERT with prototypical networks and Low-Rank
Adaptation (LoRA) for efficient fine-tuning. The model enforces class-separable
embeddings via episodic prototype training while preserving biomedical domain
knowledge. Our dataset was divided as follows: Training (20 positive, 20 negative),
Prototype Set (10 positive, 10 negative), Validation (20 positive, 200
negative), and Test (71 positive, 765 negative). Evaluated on the test dataset,
ProtoBERT-LoRA achieved an F1-score of 0.624 (precision: 0.481, recall: 0.887),
outperforming the rule-based system, machine learning baselines and finetuned
PubMedBERT. Application to 44,287 unlabeled studies reduced manual review
efforts by 82%. Ablation studies confirmed that combining prototypes with LoRA
improved performance by 29% over stand-alone LoRA.
|
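A hedged sketch of the prototypical-network classification step referenced above: class prototypes are mean support embeddings and queries are assigned to the nearest prototype. Random vectors stand in for PubMedBERT+LoRA encoder outputs; the dimensions and distance choice are illustrative, not the paper's exact configuration.

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """Class prototype = mean embedding of that class's support examples."""
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(query_emb, prototypes):
    """Assign each query to the nearest prototype (Euclidean distance)."""
    classes = list(prototypes)
    protos = np.stack([prototypes[c] for c in classes])                  # (C, d)
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)      # (Q, C)
    return np.array(classes)[d.argmin(axis=1)]

# Toy usage with random 768-d vectors standing in for encoder embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(20, 768)); support_y = np.array([0] * 10 + [1] * 10)
queries = rng.normal(size=(5, 768))
protos = build_prototypes(support, support_y)
preds = classify(queries, protos)
```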
2503.20190 | Yuxuan Chen | Yuxuan Chen, Jiawen Li, Jiali Hu, Xitong Ling, Tian Guan, Anjia Han,
Yonghong He | Cross-Modal Prototype Allocation: Unsupervised Slide Representation
Learning via Patch-Text Contrast in Computational Pathology | 11pages,3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of pathology foundation models (FMs), the
representation learning of whole slide images (WSIs) attracts increasing
attention. Existing studies develop high-quality patch feature extractors and
employ carefully designed aggregation schemes to derive slide-level
representations. However, mainstream weakly supervised slide representation
learning methods, primarily based on multiple instance learning (MIL), are
tailored to specific downstream tasks, which limits their generalizability. To
address this issue, some studies explore unsupervised slide representation
learning. However, these approaches focus solely on the visual modality of
patches, neglecting the rich semantic information embedded in textual data. In
this work, we propose ProAlign, a cross-modal unsupervised slide representation
learning framework. Specifically, we leverage a large language model (LLM) to
generate descriptive text for the prototype types present in a WSI, introducing
patch-text contrast to construct initial prototype embeddings. Furthermore, we
propose a parameter-free attention aggregation strategy that utilizes the
similarity between patches and these prototypes to form unsupervised slide
embeddings applicable to a wide range of downstream tasks. Extensive
experiments on four public datasets show that ProAlign outperforms existing
unsupervised frameworks and achieves performance comparable to some weakly
supervised models.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 03:31:07 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Chen",
"Yuxuan",
""
],
[
"Li",
"Jiawen",
""
],
[
"Hu",
"Jiali",
""
],
[
"Ling",
"Xitong",
""
],
[
"Guan",
"Tian",
""
],
[
"Han",
"Anjia",
""
],
[
"He",
"Yonghong",
""
]
] | TITLE: Cross-Modal Prototype Allocation: Unsupervised Slide Representation
Learning via Patch-Text Contrast in Computational Pathology
ABSTRACT: With the rapid advancement of pathology foundation models (FMs), the
representation learning of whole slide images (WSIs) attracts increasing
attention. Existing studies develop high-quality patch feature extractors and
employ carefully designed aggregation schemes to derive slide-level
representations. However, mainstream weakly supervised slide representation
learning methods, primarily based on multiple instance learning (MIL), are
tailored to specific downstream tasks, which limits their generalizability. To
address this issue, some studies explore unsupervised slide representation
learning. However, these approaches focus solely on the visual modality of
patches, neglecting the rich semantic information embedded in textual data. In
this work, we propose ProAlign, a cross-modal unsupervised slide representation
learning framework. Specifically, we leverage a large language model (LLM) to
generate descriptive text for the prototype types present in a WSI, introducing
patch-text contrast to construct initial prototype embeddings. Furthermore, we
propose a parameter-free attention aggregation strategy that utilizes the
similarity between patches and these prototypes to form unsupervised slide
embeddings applicable to a wide range of downstream tasks. Extensive
experiments on four public datasets show that ProAlign outperforms existing
unsupervised frameworks and achieves performance comparable to some weakly
supervised models.
|
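As one plausible, hedged reading of the "parameter-free attention aggregation" described above (not necessarily ProAlign's exact formula), the sketch below weights each patch by its best cosine similarity to any prototype embedding and takes a weighted mean; the temperature and feature sizes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slide_embedding(patch_feats, prototype_feats, temperature=0.07):
    """Parameter-free aggregation sketch: weight each patch by its best
    cosine similarity to a prototype embedding, then take the weighted mean."""
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    q = prototype_feats / np.linalg.norm(prototype_feats, axis=1, keepdims=True)
    sim = p @ q.T                                        # (num_patches, num_prototypes)
    weights = softmax(sim.max(axis=1) / temperature)     # (num_patches,)
    return (weights[:, None] * patch_feats).sum(axis=0)

# Toy usage: 500 patch features and 8 text-derived prototype embeddings, 512-d.
rng = np.random.default_rng(0)
emb = slide_embedding(rng.normal(size=(500, 512)), rng.normal(size=(8, 512)))
```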
2503.20202 | Nan Gao | Nan Gao, Yihua Bao, Dongdong Weng, Jiayi Zhao, Jia Li, Yan Zhou,
Pengfei Wan, Di Zhang | SARGes: Semantically Aligned Reliable Gesture Generation via Intent
Chain | null | null | null | null | cs.CL cs.AI cs.HC cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Co-speech gesture generation enhances human-computer interaction realism
through speech-synchronized gesture synthesis. However, generating semantically
meaningful gestures remains a challenging problem. We propose SARGes, a novel
framework that leverages large language models (LLMs) to parse speech content
and generate reliable semantic gesture labels, which subsequently guide the
synthesis of meaningful co-speech gestures. First, we constructed a
comprehensive co-speech gesture ethogram and developed an LLM-based intent
chain reasoning mechanism that systematically parses and decomposes gesture
semantics into structured inference steps following ethogram criteria,
effectively guiding LLMs to generate context-aware gesture labels.
Subsequently, we constructed an intent chain-annotated text-to-gesture label
dataset and trained a lightweight gesture label generation model, which then
guides the generation of credible and semantically coherent co-speech gestures.
Experimental results demonstrate that SARGes achieves highly
semantically-aligned gesture labeling (50.2% accuracy) with efficient
single-pass inference (0.4 seconds). The proposed method provides an
interpretable intent reasoning pathway for semantic gesture synthesis.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 03:55:41 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Gao",
"Nan",
""
],
[
"Bao",
"Yihua",
""
],
[
"Weng",
"Dongdong",
""
],
[
"Zhao",
"Jiayi",
""
],
[
"Li",
"Jia",
""
],
[
"Zhou",
"Yan",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Zhang",
"Di",
""
]
] | TITLE: SARGes: Semantically Aligned Reliable Gesture Generation via Intent
Chain
ABSTRACT: Co-speech gesture generation enhances human-computer interaction realism
through speech-synchronized gesture synthesis. However, generating semantically
meaningful gestures remains a challenging problem. We propose SARGes, a novel
framework that leverages large language models (LLMs) to parse speech content
and generate reliable semantic gesture labels, which subsequently guide the
synthesis of meaningful co-speech gestures. First, we constructed a
comprehensive co-speech gesture ethogram and developed an LLM-based intent
chain reasoning mechanism that systematically parses and decomposes gesture
semantics into structured inference steps following ethogram criteria,
effectively guiding LLMs to generate context-aware gesture labels.
Subsequently, we constructed an intent chain-annotated text-to-gesture label
dataset and trained a lightweight gesture label generation model, which then
guides the generation of credible and semantically coherent co-speech gestures.
Experimental results demonstrate that SARGes achieves highly
semantically-aligned gesture labeling (50.2% accuracy) with efficient
single-pass inference (0.4 seconds). The proposed method provides an
interpretable intent reasoning pathway for semantic gesture synthesis.
|
2503.20205 | Xiao-Cheng Liao | Xiao-Cheng Liao, Yi Mei, Mengjie Zhang, Xiang-Ling Chen | Generalized Phase Pressure Control Enhanced Reinforcement Learning for
Traffic Signal Control | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Appropriate traffic state representation is crucial for learning traffic
signal control policies. However, most of the current traffic state
representations are heuristically designed, with insufficient theoretical
support. In this paper, we (1) develop a flexible, efficient, and theoretically
grounded method, namely generalized phase pressure (G2P) control, which takes
only simple lane features into consideration to decide which phase to be
actuated; 2) extend the pressure control theory to a general form for
multi-homogeneous-lane road networks based on queueing theory; (3) design a new
traffic state representation based on the generalized phase state features from
G2P control; and 4) develop a reinforcement learning (RL)-based algorithm
template named G2P-XLight, and two RL algorithms, G2P-MPLight and G2P-CoLight,
by combining the generalized phase state representation with MPLight and
CoLight, two well-performed RL methods for learning traffic signal control
policies. Extensive experiments conducted on multiple real-world datasets
demonstrate that G2P control outperforms the state-of-the-art (SOTA) heuristic
method in the transportation field and other recent human-designed heuristic
methods; and that the newly proposed G2P-XLight significantly outperforms SOTA
learning-based approaches. Our code is available online.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:03:12 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Liao",
"Xiao-Cheng",
""
],
[
"Mei",
"Yi",
""
],
[
"Zhang",
"Mengjie",
""
],
[
"Chen",
"Xiang-Ling",
""
]
] | TITLE: Generalized Phase Pressure Control Enhanced Reinforcement Learning for
Traffic Signal Control
ABSTRACT: Appropriate traffic state representation is crucial for learning traffic
signal control policies. However, most of the current traffic state
representations are heuristically designed, with insufficient theoretical
support. In this paper, we (1) develop a flexible, efficient, and theoretically
grounded method, namely generalized phase pressure (G2P) control, which takes
only simple lane features into consideration to decide which phase to
actuate; (2) extend the pressure control theory to a general form for
multi-homogeneous-lane road networks based on queueing theory; (3) design a new
traffic state representation based on the generalized phase state features from
G2P control; and (4) develop a reinforcement learning (RL)-based algorithm
template named G2P-XLight, and two RL algorithms, G2P-MPLight and G2P-CoLight,
by combining the generalized phase state representation with MPLight and
CoLight, two well-performing RL methods for learning traffic signal control
policies. Extensive experiments conducted on multiple real-world datasets
demonstrate that G2P control outperforms the state-of-the-art (SOTA) heuristic
method in the transportation field and other recent human-designed heuristic
methods; and that the newly proposed G2P-XLight significantly outperforms SOTA
learning-based approaches. Our code is available online.
|
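For context on the pressure-control baseline the abstract above generalizes, the sketch below implements classic max-pressure phase selection (upstream minus downstream queue lengths per served movement); the lane layout and queue numbers are toy values, and G2P's generalized formula is not reproduced here.

```python
def phase_pressure(phase_movements, queue_in, queue_out):
    """Classic (non-generalized) phase pressure: for each movement (i, j) served
    by the phase, add the upstream queue length minus the downstream queue length."""
    return sum(queue_in[i] - queue_out[j] for i, j in phase_movements)

def choose_phase(phases, queue_in, queue_out):
    """Actuate the phase with the largest pressure."""
    return max(phases, key=lambda p: phase_pressure(phases[p], queue_in, queue_out))

# Toy intersection: two phases, movements given as (incoming lane, outgoing lane) pairs.
queue_in = {"N": 8, "S": 5, "E": 2, "W": 1}    # vehicles queued on incoming lanes
queue_out = {"N": 0, "S": 1, "E": 3, "W": 2}   # occupancy of receiving lanes
phases = {
    "NS_through": [("N", "S"), ("S", "N")],
    "EW_through": [("E", "W"), ("W", "E")],
}
next_phase = choose_phase(phases, queue_in, queue_out)   # -> "NS_through"
```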
2503.20207 | Peiyuan Ni | Peiyuan Ni, Chee Meng Chew, Marcelo H. Ang Jr., Gregory S. Chirikjian | Reasoning and Learning a Perceptual Metric for Self-Training of
Reflective Objects in Bin-Picking with a Low-cost Camera | 9 pages, 10 figures | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Bin-picking of metal objects using low-cost RGB-D cameras often suffers from
sparse depth information and reflective surface textures, leading to errors and
the need for manual labeling. To reduce human intervention, we propose a
two-stage framework consisting of a metric learning stage and a self-training
stage. Specifically, to automatically process data captured by a low-cost
camera (LC), we introduce a Multi-object Pose Reasoning (MoPR) algorithm that
optimizes pose hypotheses under depth, collision, and boundary constraints. To
further refine pose candidates, we adopt a Symmetry-aware Lie-group based
Bayesian Gaussian Mixture Model (SaL-BGMM), integrated with the
Expectation-Maximization (EM) algorithm, for symmetry-aware filtering.
Additionally, we propose a Weighted Ranking Information Noise Contrastive
Estimation (WR-InfoNCE) loss to enable the LC to learn a perceptual metric from
reconstructed data, supporting self-training on untrained or even unseen
objects. Experimental results show that our approach outperforms several
state-of-the-art methods on both the ROBI dataset and our newly introduced
Self-ROBI dataset.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:03:51 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Ni",
"Peiyuan",
""
],
[
"Chew",
"Chee Meng",
""
],
[
"Ang",
"Marcelo H.",
"Jr."
],
[
"Chirikjian",
"Gregory S.",
""
]
] | TITLE: Reasoning and Learning a Perceptual Metric for Self-Training of
Reflective Objects in Bin-Picking with a Low-cost Camera
ABSTRACT: Bin-picking of metal objects using low-cost RGB-D cameras often suffers from
sparse depth information and reflective surface textures, leading to errors and
the need for manual labeling. To reduce human intervention, we propose a
two-stage framework consisting of a metric learning stage and a self-training
stage. Specifically, to automatically process data captured by a low-cost
camera (LC), we introduce a Multi-object Pose Reasoning (MoPR) algorithm that
optimizes pose hypotheses under depth, collision, and boundary constraints. To
further refine pose candidates, we adopt a Symmetry-aware Lie-group based
Bayesian Gaussian Mixture Model (SaL-BGMM), integrated with the
Expectation-Maximization (EM) algorithm, for symmetry-aware filtering.
Additionally, we propose a Weighted Ranking Information Noise Contrastive
Estimation (WR-InfoNCE) loss to enable the LC to learn a perceptual metric from
reconstructed data, supporting self-training on untrained or even unseen
objects. Experimental results show that our approach outperforms several
state-of-the-art methods on both the ROBI dataset and our newly introduced
Self-ROBI dataset.
|
2503.20209 | Chengyang Hu | Chengyang Hu, Yuduo Chen, Lizhuang Ma | BEAR: A Video Dataset For Fine-grained Behaviors Recognition Oriented
with Action and Environment Factors | Accept by ICME2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Behavior recognition is an important task in video representation learning.
An essential aspect pertains to effective feature learning conducive to
behavior recognition. Recently, researchers have started to study fine-grained
behavior recognition, which provides similar behaviors and encourages the model
to concern with more details of behaviors with effective features for
distinction. However, previous fine-grained behaviors limited themselves to
controlling partial information to be similar, leading to an unfair and not
comprehensive evaluation of existing works. In this work, we develop a new
video fine-grained behavior dataset, named BEAR, which provides fine-grained
(i.e. similar) behaviors that uniquely focus on two primary factors defining
behavior: Environment and Action. It includes two fine-grained behavior
protocols including Fine-grained Behavior with Similar Environments and
Fine-grained Behavior with Similar Actions as well as multiple sub-protocols as
different scenarios. Furthermore, with this new dataset, we conduct multiple
experiments with different behavior recognition models. Our research primarily
explores the impact of input modality, a critical element in studying the
environmental and action-based aspects of behavior recognition. Our
experimental results yield intriguing insights that have substantial
implications for further research endeavors.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:06:20 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Hu",
"Chengyang",
""
],
[
"Chen",
"Yuduo",
""
],
[
"Ma",
"Lizhuang",
""
]
] | TITLE: BEAR: A Video Dataset For Fine-grained Behaviors Recognition Oriented
with Action and Environment Factors
ABSTRACT: Behavior recognition is an important task in video representation learning.
An essential aspect pertains to effective feature learning conducive to
behavior recognition. Recently, researchers have started to study fine-grained
behavior recognition, which involves similar behaviors and encourages the model
to attend to finer details of behaviors with effective features for
distinction. However, previous fine-grained behavior datasets limited themselves
to controlling only partial information to be similar, leading to an unfair and
less comprehensive evaluation of existing works. In this work, we develop a new
video fine-grained behavior dataset, named BEAR, which provides fine-grained
(i.e. similar) behaviors that uniquely focus on two primary factors defining
behavior: Environment and Action. It includes two fine-grained behavior
protocols including Fine-grained Behavior with Similar Environments and
Fine-grained Behavior with Similar Actions as well as multiple sub-protocols as
different scenarios. Furthermore, with this new dataset, we conduct multiple
experiments with different behavior recognition models. Our research primarily
explores the impact of input modality, a critical element in studying the
environmental and action-based aspects of behavior recognition. Our
experimental results yield intriguing insights that have substantial
implications for further research endeavors.
|
2503.20211 | Weilong Yan | Weilong Yan, Ming Li, Haipeng Li, Shuwei Shao, Robby T. Tan | Synthetic-to-Real Self-supervised Robust Depth Estimation via Learning
with Motion and Structure Priors | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-supervised depth estimation from monocular cameras in diverse outdoor
conditions, such as daytime, rain, and nighttime, is challenging due to the
difficulty of learning universal representations and the severe lack of labeled
real-world adverse data. Previous methods either rely on synthetic inputs and
pseudo-depth labels or directly apply daytime strategies to adverse conditions,
resulting in suboptimal results. In this paper, we present the first
synthetic-to-real robust depth estimation framework, incorporating motion and
structure priors to capture real-world knowledge effectively. In the synthetic
adaptation, we transfer motion-structure knowledge inside cost volumes for
better robust representation, using a frozen daytime model to train a depth
estimator in synthetic adverse conditions. In the innovative real adaptation,
which aims to bridge synthetic-to-real gaps, models trained earlier identify the
weather-insensitive regions with a designed consistency-reweighting strategy to
emphasize valid pseudo-labels. We introduce a new regularization by gathering
explicit depth distributions to constrain the model when facing real-world
data. Experiments show that our method outperforms the state-of-the-art across
diverse conditions in multi-frame and single-frame evaluations. We achieve
improvements of 7.5% and 4.3% in AbsRel and RMSE on average for nuScenes and
Robotcar datasets (daytime, nighttime, rain). In zero-shot evaluation of
DrivingStereo (rain, fog), our method generalizes better than the previous
ones.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:12:54 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Yan",
"Weilong",
""
],
[
"Li",
"Ming",
""
],
[
"Li",
"Haipeng",
""
],
[
"Shao",
"Shuwei",
""
],
[
"Tan",
"Robby T.",
""
]
] | TITLE: Synthetic-to-Real Self-supervised Robust Depth Estimation via Learning
with Motion and Structure Priors
ABSTRACT: Self-supervised depth estimation from monocular cameras in diverse outdoor
conditions, such as daytime, rain, and nighttime, is challenging due to the
difficulty of learning universal representations and the severe lack of labeled
real-world adverse data. Previous methods either rely on synthetic inputs and
pseudo-depth labels or directly apply daytime strategies to adverse conditions,
resulting in suboptimal results. In this paper, we present the first
synthetic-to-real robust depth estimation framework, incorporating motion and
structure priors to capture real-world knowledge effectively. In the synthetic
adaptation, we transfer motion-structure knowledge inside cost volumes for
better robust representation, using a frozen daytime model to train a depth
estimator in synthetic adverse conditions. In the innovative real adaptation,
which aims to bridge synthetic-to-real gaps, models trained earlier identify the
weather-insensitive regions with a designed consistency-reweighting strategy to
emphasize valid pseudo-labels. We introduce a new regularization by gathering
explicit depth distributions to constrain the model when facing real-world
data. Experiments show that our method outperforms the state-of-the-art across
diverse conditions in multi-frame and single-frame evaluations. We achieve
improvements of 7.5% and 4.3% in AbsRel and RMSE on average for nuScenes and
Robotcar datasets (daytime, nighttime, rain). In zero-shot evaluation of
DrivingStereo (rain, fog), our method generalizes better than the previous
ones.
|
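As an illustrative aside to the entry above, the consistency-reweighting idea for pseudo-depth labels can be sketched in a few lines. All specifics here (two depth predictions per image, an exponential down-weighting with a sharpness constant, an L1 pseudo-label loss) are assumptions for illustration rather than the paper's implementation:

import torch

def consistency_weights(depth_a: torch.Tensor, depth_b: torch.Tensor,
                        sharpness: float = 10.0) -> torch.Tensor:
    """Per-pixel weights in [0, 1]: pixels where the two predictions agree
    (candidate weather-insensitive regions) receive weights close to 1."""
    rel_diff = (depth_a - depth_b).abs() / (depth_a.abs() + depth_b.abs() + 1e-6)
    return torch.exp(-sharpness * rel_diff)

def reweighted_pseudo_label_loss(pred, pseudo, weights):
    # L1 loss against the pseudo-depth, emphasising consistent (trusted) pixels
    return (weights * (pred - pseudo).abs()).mean()

# toy usage
d_teacher = torch.rand(1, 1, 64, 64) * 80.0                 # frozen daytime model
d_student = d_teacher + 0.5 * torch.randn_like(d_teacher)   # adapting model
w = consistency_weights(d_teacher, d_student)
loss = reweighted_pseudo_label_loss(d_student, d_teacher.detach(), w)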
2503.20212 | Wei-Qiang Zhang | Yangyang Meng, Jinpeng Li, Guodong Lin, Yu Pu, Guanbo Wang, Hu Du,
Zhiming Shao, Yukai Huang, Ke Li, Wei-Qiang Zhang | Dolphin: A Large-Scale Automatic Speech Recognition Model for Eastern
Languages | null | null | null | null | cs.CL eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report introduces Dolphin, a large-scale multilingual automatic speech
recognition (ASR) model that extends the Whisper architecture to support a
wider range of languages. Our approach integrates in-house proprietary and
open-source datasets to refine and optimize Dolphin's performance. The model is
specifically designed to achieve notable recognition accuracy for 40 Eastern
languages across East Asia, South Asia, Southeast Asia, and the Middle East,
while also supporting 22 Chinese dialects. Experimental evaluations show that
Dolphin significantly outperforms current state-of-the-art open-source models
across various languages. To promote reproducibility and community-driven
innovation, we are making our trained models and inference source code publicly
available.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:14:03 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Meng",
"Yangyang",
""
],
[
"Li",
"Jinpeng",
""
],
[
"Lin",
"Guodong",
""
],
[
"Pu",
"Yu",
""
],
[
"Wang",
"Guanbo",
""
],
[
"Du",
"Hu",
""
],
[
"Shao",
"Zhiming",
""
],
[
"Huang",
"Yukai",
""
],
[
"Li",
"Ke",
""
],
[
"Zhang",
"Wei-Qiang",
""
]
] | TITLE: Dolphin: A Large-Scale Automatic Speech Recognition Model for Eastern
Languages
ABSTRACT: This report introduces Dolphin, a large-scale multilingual automatic speech
recognition (ASR) model that extends the Whisper architecture to support a
wider range of languages. Our approach integrates in-house proprietary and
open-source datasets to refine and optimize Dolphin's performance. The model is
specifically designed to achieve notable recognition accuracy for 40 Eastern
languages across East Asia, South Asia, Southeast Asia, and the Middle East,
while also supporting 22 Chinese dialects. Experimental evaluations show that
Dolphin significantly outperforms current state-of-the-art open-source models
across various languages. To promote reproducibility and community-driven
innovation, we are making our trained models and inference source code publicly
available.
|
2503.20220 | Wufei Ma | Weijie Guo, Guofeng Zhang, Wufei Ma, Alan Yuille | DINeMo: Learning Neural Mesh Models with no 3D Annotations | Technical report | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Category-level 3D/6D pose estimation is a crucial step towards comprehensive
3D scene understanding, which would enable a broad range of applications in
robotics and embodied AI. Recent works explored neural mesh models that
approach a range of 2D and 3D tasks from an analysis-by-synthesis perspective.
Despite the largely enhanced robustness to partial occlusion and domain shifts,
these methods depended heavily on 3D annotations for part-contrastive learning,
which confines them to a narrow set of categories and hinders efficient
scaling. In this work, we present DINeMo, a novel neural mesh model that is
trained with no 3D annotations by leveraging pseudo-correspondence obtained
from large visual foundation models. We adopt a bidirectional
pseudo-correspondence generation method, which produces pseudo-correspondences
utilizing both local appearance features and global context information.
Experimental results on car datasets demonstrate that our DINeMo outperforms
previous zero- and few-shot 3D pose estimation by a wide margin, narrowing the
gap with fully-supervised methods by 67.3%. Our DINeMo also scales effectively
and efficiently when incorporating more unlabeled images during training, which
demonstrates the advantages over supervised learning methods that rely on 3D
annotations. Our project page is available at
https://analysis-by-synthesis.github.io/DINeMo/.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:23:53 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Guo",
"Weijie",
""
],
[
"Zhang",
"Guofeng",
""
],
[
"Ma",
"Wufei",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: DINeMo: Learning Neural Mesh Models with no 3D Annotations
ABSTRACT: Category-level 3D/6D pose estimation is a crucial step towards comprehensive
3D scene understanding, which would enable a broad range of applications in
robotics and embodied AI. Recent works explored neural mesh models that
approach a range of 2D and 3D tasks from an analysis-by-synthesis perspective.
Despite the largely enhanced robustness to partial occlusion and domain shifts,
these methods depended heavily on 3D annotations for part-contrastive learning,
which confines them to a narrow set of categories and hinders efficient
scaling. In this work, we present DINeMo, a novel neural mesh model that is
trained with no 3D annotations by leveraging pseudo-correspondence obtained
from large visual foundation models. We adopt a bidirectional
pseudo-correspondence generation method, which produces pseudo-correspondences
utilizing both local appearance features and global context information.
Experimental results on car datasets demonstrate that our DINeMo outperforms
previous zero- and few-shot 3D pose estimation by a wide margin, narrowing the
gap with fully-supervised methods by 67.3%. Our DINeMo also scales effectively
and efficiently when incorporating more unlabeled images during training, which
demonstrates the advantages over supervised learning methods that rely on 3D
annotations. Our project page is available at
https://analysis-by-synthesis.github.io/DINeMo/.
|
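As an illustrative aside to the entry above, bidirectional pseudo-correspondence can be approximated with cycle-consistent (mutual) nearest neighbours between two sets of features; the feature shapes and the plain cosine-similarity matching below are assumptions for illustration, not the authors' exact procedure:

import torch
import torch.nn.functional as F

def mutual_nn_correspondences(feat_img: torch.Tensor, feat_mesh: torch.Tensor):
    """feat_img: (N, D) image patch features; feat_mesh: (M, D) mesh vertex features.
    Returns index pairs that are each other's nearest neighbour."""
    a = F.normalize(feat_img, dim=-1)
    b = F.normalize(feat_mesh, dim=-1)
    sim = a @ b.t()                        # (N, M) cosine similarities
    nn_img_to_mesh = sim.argmax(dim=1)     # best mesh match per image patch
    nn_mesh_to_img = sim.argmax(dim=0)     # best image match per mesh vertex
    idx_img = torch.arange(a.shape[0])
    mutual = nn_mesh_to_img[nn_img_to_mesh] == idx_img   # cycle-consistency check
    return idx_img[mutual], nn_img_to_mesh[mutual]

# toy usage with random stand-in features
pairs = mutual_nn_correspondences(torch.randn(196, 384), torch.randn(500, 384))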
2503.20221 | Taorui Wang | Taorui Wang, Zitong Yu, Yong Xu | TC-GS: Tri-plane based compression for 3D Gaussian Splatting | Accepted by ICME 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, 3D Gaussian Splatting (3DGS) has emerged as a prominent framework
for novel view synthesis, providing high fidelity and rapid rendering speed.
However, the substantial data volume of 3DGS and its attributes impede its
practical utility, requiring compression techniques for reducing memory cost.
Nevertheless, the unorganized shape of 3DGS leads to difficulties in
compression. To formulate unstructured attributes into normative distribution,
we propose a well-structured tri-plane to encode Gaussian attributes,
leveraging the distribution of attributes for compression. To exploit the
correlations among adjacent Gaussians, K-Nearest Neighbors (KNN) is used when
decoding Gaussian distribution from the Tri-plane. We also introduce Gaussian
position information as a prior of the position-sensitive decoder.
Additionally, we incorporate an adaptive wavelet loss, aiming to focus on the
high-frequency details as iterations increase. Our approach has achieved
results that are comparable to or surpass those of SOTA 3D Gaussian Splatting
compression works in extensive experiments across multiple datasets. The codes
are released at https://github.com/timwang2001/TC-GS.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:26:22 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wang",
"Taorui",
""
],
[
"Yu",
"Zitong",
""
],
[
"Xu",
"Yong",
""
]
] | TITLE: TC-GS: Tri-plane based compression for 3D Gaussian Splatting
ABSTRACT: Recently, 3D Gaussian Splatting (3DGS) has emerged as a prominent framework
for novel view synthesis, providing high fidelity and rapid rendering speed.
However, the substantial data volume of 3DGS and its attributes impede its
practical utility, requiring compression techniques for reducing memory cost.
Nevertheless, the unorganized shape of 3DGS leads to difficulties in
compression. To formulate unstructured attributes into normative distribution,
we propose a well-structured tri-plane to encode Gaussian attributes,
leveraging the distribution of attributes for compression. To exploit the
correlations among adjacent Gaussians, K-Nearest Neighbors (KNN) is used when
decoding Gaussian distribution from the Tri-plane. We also introduce Gaussian
position information as a prior of the position-sensitive decoder.
Additionally, we incorporate an adaptive wavelet loss, aiming to focus on the
high-frequency details as iterations increase. Our approach has achieved
results that are comparable to or surpass those of SOTA 3D Gaussian Splatting
compression works in extensive experiments across multiple datasets. The codes
are released at https://github.com/timwang2001/TC-GS.
|
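As an illustrative aside to the entry above, querying a tri-plane representation amounts to projecting a 3D point onto the XY, XZ and YZ planes, bilinearly sampling each plane and aggregating; the plane resolution, feature width and sum aggregation are assumptions for illustration:

import torch
import torch.nn.functional as F

def triplane_query(planes: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
    """planes: (3, C, R, R) feature planes; xyz: (N, 3) points in [-1, 1]^3.
    Returns (N, C) aggregated per-point features."""
    coords = torch.stack([xyz[:, [0, 1]],    # projection onto the XY plane
                          xyz[:, [0, 2]],    # projection onto the XZ plane
                          xyz[:, [1, 2]]])   # projection onto the YZ plane
    grid = coords.unsqueeze(2)               # (3, N, 1, 2) as expected by grid_sample
    sampled = F.grid_sample(planes, grid, mode="bilinear", align_corners=True)
    return sampled.squeeze(-1).sum(dim=0).t()   # (3, C, N) -> (N, C)

# toy usage: 16-dim features on 128x128 planes, 1000 random query points
feats = triplane_query(torch.randn(3, 16, 128, 128), torch.rand(1000, 3) * 2 - 1)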
2503.20232 | Wei Wang | Wei Wang, Yujie Lin, Jianli Zhao, Moyan Zhang, Pengjie Ren, Xianye
Ben, Yujun Li | Learnable Sequence Augmenter for Triplet Contrastive Learning in
Sequential Recommendation | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most existing contrastive learning-based sequential recommendation (SR)
methods rely on random operations (e.g., crop, reorder, and substitute) to
generate augmented sequences. These methods often struggle to create positive
sample pairs that closely resemble the representations of the raw sequences,
potentially disrupting item correlations by deleting key items or introducing
noisy interactions, which misguides the contrastive learning process.
To address this limitation, we propose Learnable sequence Augmentor for
triplet Contrastive Learning in sequential Recommendation (LACLRec).
Specifically, the self-supervised learning-based augmenter can automatically
delete noisy items from sequences and insert new items that better capture item
transition patterns, generating a higher-quality augmented sequence.
Subsequently, we randomly generate another augmented sequence and design a
ranking-based triplet contrastive loss to differentiate the similarities
between the raw sequence, the augmented sequence from augmenter, and the
randomly augmented sequence, providing more fine-grained contrastive signals.
Extensive experiments on three real-world datasets demonstrate that both the
sequence augmenter and the triplet contrast contribute to improving
recommendation accuracy. LACLRec significantly outperforms the baseline model
CL4SRec, and demonstrates superior performance compared to several
state-of-the-art sequential recommendation algorithms.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:56:29 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wang",
"Wei",
""
],
[
"Lin",
"Yujie",
""
],
[
"Zhao",
"Jianli",
""
],
[
"Zhang",
"Moyan",
""
],
[
"Ren",
"Pengjie",
""
],
[
"Ben",
"Xianye",
""
],
[
"Li",
"Yujun",
""
]
] | TITLE: Learnable Sequence Augmenter for Triplet Contrastive Learning in
Sequential Recommendation
ABSTRACT: Most existing contrastive learning-based sequential recommendation (SR)
methods rely on random operations (e.g., crop, reorder, and substitute) to
generate augmented sequences. These methods often struggle to create positive
sample pairs that closely resemble the representations of the raw sequences,
potentially disrupting item correlations by deleting key items or introducing
noisy interactions, which misguides the contrastive learning process.
To address this limitation, we propose Learnable sequence Augmentor for
triplet Contrastive Learning in sequential Recommendation (LACLRec).
Specifically, the self-supervised learning-based augmenter can automatically
delete noisy items from sequences and insert new items that better capture item
transition patterns, generating a higher-quality augmented sequence.
Subsequently, we randomly generate another augmented sequence and design a
ranking-based triplet contrastive loss to differentiate the similarities
between the raw sequence, the augmented sequence from augmenter, and the
randomly augmented sequence, providing more fine-grained contrastive signals.
Extensive experiments on three real-world datasets demonstrate that both the
sequence augmenter and the triplet contrast contribute to improving
recommendation accuracy. LACLRec significantly outperforms the baseline model
CL4SRec, and demonstrates superior performance compared to several
state-of-the-art sequential recommendation algorithms.
|
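As an illustrative aside to the entry above, a ranking-style triplet objective over sequence representations can be sketched as follows; the cosine similarity, hinge formulation and margin value are assumptions for illustration, and the encoder producing the representations is taken as given:

import torch
import torch.nn.functional as F

def ranking_triplet_loss(z_raw, z_learned_aug, z_random_aug, margin: float = 0.2):
    """All inputs: (B, d) sequence representations from some encoder.
    Encourages the raw sequence to be closer to the learned augmentation
    than to the random augmentation by at least `margin`."""
    sim_pos = F.cosine_similarity(z_raw, z_learned_aug, dim=-1)
    sim_neg = F.cosine_similarity(z_raw, z_random_aug, dim=-1)
    return F.relu(margin - (sim_pos - sim_neg)).mean()

# toy usage
loss = ranking_triplet_loss(torch.randn(32, 64), torch.randn(32, 64), torch.randn(32, 64))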
2503.20233 | Yue Yin | Yue Yin | Dynamic Learning and Productivity for Data Analysts: A Bayesian Hidden
Markov Model Perspective | 29 pages; a shorter 11-page version is accepted by HCI International
(HCII) 2025; | null | null | null | cs.SI cs.AI cs.CE cs.HC | http://creativecommons.org/licenses/by/4.0/ | Data analysts are essential in organizations, transforming raw data into
insights that drive decision-making and strategy. This study explores how
analysts' productivity evolves on a collaborative platform, focusing on two key
learning activities: writing queries and viewing peer queries. While
traditional research often assumes static models, where performance improves
steadily with cumulative learning, such models fail to capture the dynamic
nature of real-world learning. To address this, we propose a Hidden Markov
Model (HMM) that tracks how analysts transition between distinct learning
states based on their participation in these activities.
Using an industry dataset with 2,001 analysts and 79,797 queries, this study
identifies three learning states: novice, intermediate, and advanced.
Productivity increases as analysts advance to higher states, reflecting the
cumulative benefits of learning. Writing queries benefits analysts across all
states, with the largest gains observed for novices. Viewing peer queries
supports novices but may hinder analysts in higher states due to cognitive
overload or inefficiencies. Transitions between states are also uneven, with
progression from intermediate to advanced being particularly challenging. This
study advances understanding of the dynamic learning behavior of knowledge
workers and offers practical implications for designing systems, optimizing
training, enabling personalized learning, and fostering effective knowledge
sharing.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 04:57:03 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Yin",
"Yue",
""
]
] | TITLE: Dynamic Learning and Productivity for Data Analysts: A Bayesian Hidden
Markov Model Perspective
ABSTRACT: Data analysts are essential in organizations, transforming raw data into
insights that drive decision-making and strategy. This study explores how
analysts' productivity evolves on a collaborative platform, focusing on two key
learning activities: writing queries and viewing peer queries. While
traditional research often assumes static models, where performance improves
steadily with cumulative learning, such models fail to capture the dynamic
nature of real-world learning. To address this, we propose a Hidden Markov
Model (HMM) that tracks how analysts transition between distinct learning
states based on their participation in these activities.
Using an industry dataset with 2,001 analysts and 79,797 queries, this study
identifies three learning states: novice, intermediate, and advanced.
Productivity increases as analysts advance to higher states, reflecting the
cumulative benefits of learning. Writing queries benefits analysts across all
states, with the largest gains observed for novices. Viewing peer queries
supports novices but may hinder analysts in higher states due to cognitive
overload or inefficiencies. Transitions between states are also uneven, with
progression from intermediate to advanced being particularly challenging. This
study advances understanding of the dynamic learning behavior of knowledge
workers and offers practical implications for designing systems, optimizing
training, enabling personalized learning, and fostering effective knowledge
sharing.
|
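As an illustrative aside to the entry above, decoding latent learning states from an activity sequence with a three-state HMM can be sketched with the Viterbi algorithm; the transition and emission probabilities below are toy values, not the paper's fitted Bayesian estimates:

import numpy as np

states = ["novice", "intermediate", "advanced"]
pi = np.array([0.80, 0.15, 0.05])              # initial state distribution
A = np.array([[0.85, 0.14, 0.01],              # state transition probabilities
              [0.05, 0.85, 0.10],
              [0.01, 0.09, 0.90]])
# observation symbols: 0 = mostly viewing peer queries, 1 = mostly writing queries
B = np.array([[0.7, 0.3],
              [0.4, 0.6],
              [0.2, 0.8]])

def viterbi(obs):
    T, K = len(obs), len(pi)
    log_delta = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(A)     # scores[i, j]: prev state i -> next state j
        back[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):                   # trace the best path backwards
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 1, 0, 1, 1]))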
2503.20248 | Mingfu Liang | Mingfu Liang, Jiahuan Zhou, Xu Zou, Ying Wu | Incremental Object Keypoint Learning | The IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR) 2025 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Existing progress in object keypoint estimation primarily benefits from the
conventional supervised learning paradigm based on numerous data labeled with
pre-defined keypoints. However, these well-trained models can hardly detect the
undefined new keypoints in test time, which largely hinders their feasibility
for diverse downstream tasks. To handle this, various solutions are explored
but still suffer from either limited generalizability or transferability.
Therefore, in this paper, we explore a novel keypoint learning paradigm in which
we only annotate new keypoints in the new data and incrementally train the
model, without retaining any old data, called Incremental object Keypoint
Learning (IKL). A two-stage learning scheme as a novel baseline tailored to IKL
is developed. In the first Knowledge Association stage, given the data labeled
with only new keypoints, an auxiliary KA-Net is trained to automatically
associate the old keypoints to these new ones based on their spatial and
intrinsic anatomical relations. In the second Mutual Promotion stage, based on
a keypoint-oriented spatial distillation loss, we jointly leverage the
auxiliary KA-Net and the old model for knowledge consolidation to mutually
promote the estimation of all old and new keypoints. Owing to the investigation
of the correlations between new and old keypoints, our proposed method can not
just effectively mitigate the catastrophic forgetting of old keypoints, but may
even further improve the estimation of the old ones and achieve a positive
transfer beyond anti-forgetting. Such an observation has been solidly verified
by extensive experiments on different keypoint datasets, where our method
exhibits superiority in alleviating the forgetting issue and boosting
performance while enjoying labeling efficiency even under the low-shot data
regime.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 05:32:12 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Liang",
"Mingfu",
""
],
[
"Zhou",
"Jiahuan",
""
],
[
"Zou",
"Xu",
""
],
[
"Wu",
"Ying",
""
]
] | TITLE: Incremental Object Keypoint Learning
ABSTRACT: Existing progress in object keypoint estimation primarily benefits from the
conventional supervised learning paradigm based on numerous data labeled with
pre-defined keypoints. However, these well-trained models can hardly detect the
undefined new keypoints in test time, which largely hinders their feasibility
for diverse downstream tasks. To handle this, various solutions are explored
but still suffer from either limited generalizability or transferability.
Therefore, in this paper, we explore a novel keypoint learning paradigm in which
we only annotate new keypoints in the new data and incrementally train the
model, without retaining any old data, called Incremental object Keypoint
Learning (IKL). A two-stage learning scheme as a novel baseline tailored to IKL
is developed. In the first Knowledge Association stage, given the data labeled
with only new keypoints, an auxiliary KA-Net is trained to automatically
associate the old keypoints to these new ones based on their spatial and
intrinsic anatomical relations. In the second Mutual Promotion stage, based on
a keypoint-oriented spatial distillation loss, we jointly leverage the
auxiliary KA-Net and the old model for knowledge consolidation to mutually
promote the estimation of all old and new keypoints. Owing to the investigation
of the correlations between new and old keypoints, our proposed method can not
just effectively mitigate the catastrophic forgetting of old keypoints, but may
even further improve the estimation of the old ones and achieve a positive
transfer beyond anti-forgetting. Such an observation has been solidly verified
by extensive experiments on different keypoint datasets, where our method
exhibits superiority in alleviating the forgetting issue and boosting
performance while enjoying labeling efficiency even under the low-shot data
regime.
|
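As an illustrative aside to the entry above, the interplay between supervision on newly annotated keypoints and spatial distillation from the frozen old model can be sketched as below; the heatmap output format, the plain MSE distillation term and the convolutional stand-in "models" are assumptions for illustration:

import torch
import torch.nn.functional as F

def ikl_step_loss(new_model, old_model, images, gt_new_heatmaps, n_old: int):
    """images: (B, 3, H, W); gt_new_heatmaps: (B, K_new, H, W).
    new_model predicts (B, K_old + K_new, H, W) keypoint heatmaps."""
    pred = new_model(images)
    with torch.no_grad():
        old_pred = old_model(images)                         # frozen teacher, old keypoints
    loss_new = F.mse_loss(pred[:, n_old:], gt_new_heatmaps)  # supervised: new keypoints only
    loss_distill = F.mse_loss(pred[:, :n_old], old_pred)     # spatial distillation: old keypoints
    return loss_new + loss_distill

# toy usage with convolutional stand-ins for the two models
f_old = torch.nn.Conv2d(3, 5, 3, padding=1)     # predicts 5 old keypoints
f_new = torch.nn.Conv2d(3, 8, 3, padding=1)     # predicts 5 old + 3 new keypoints
x = torch.randn(2, 3, 64, 64)
loss = ikl_step_loss(f_new, f_old, x, torch.rand(2, 3, 64, 64), n_old=5)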
2503.20263 | Zhihan Jiang | Zhihan Jiang, Junjie Huang, Zhuangbin Chen, Yichen Li, Guangba Yu,
Cong Feng, Yongqiang Yang, Zengyin Yang and Michael R. Lyu | L4: Diagnosing Large-scale LLM Training Failures via Automated Log
Analysis | To appear in companion proceedings of the 33rd ACM International
Conference on the Foundations of Software Engineering (FSE'25). 13 pages | null | null | null | cs.SE cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As Large Language Models (LLMs) show their capabilities across various
applications, training customized LLMs has become essential for modern
enterprises. However, due to the complexity of LLM training, which requires
massive computational resources and extensive training time, failures are
inevitable during the training process. These failures result in considerable
waste of resources and time, highlighting the critical need for effective and
efficient failure diagnosis to reduce the cost of LLM training.
In this paper, we present the first empirical study on the failure reports of
428 LLM training failures in our production Platform-X between May 2023 and
April 2024. Our study reveals that hardware and user faults are the predominant
root causes, and current diagnosis processes rely heavily on training logs.
Unfortunately, existing log-based diagnostic methods fall short in handling LLM
training logs. Considering the unique features of LLM training, we identify
three distinct patterns of LLM training logs: cross-job, spatial, and temporal
patterns. We then introduce our Log-based Large-scale LLM training failure
diagnosis framework, L4, which can automatically extract failure-indicating
information (i.e., log events, nodes, stages, and iterations) from extensive
training logs, thereby reducing manual effort and facilitating failure
recovery. Experimental results on real-world datasets show that L4 outperforms
existing approaches in identifying failure-indicating logs and localizing
faulty nodes. Furthermore, L4 has been applied in Platform-X and demonstrated
its effectiveness in enabling accurate and efficient failure diagnosis.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 06:09:55 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Jiang",
"Zhihan",
""
],
[
"Huang",
"Junjie",
""
],
[
"Chen",
"Zhuangbin",
""
],
[
"Li",
"Yichen",
""
],
[
"Yu",
"Guangba",
""
],
[
"Feng",
"Cong",
""
],
[
"Yang",
"Yongqiang",
""
],
[
"Yang",
"Zengyin",
""
],
[
"Lyu",
"Michael R.",
""
]
] | TITLE: L4: Diagnosing Large-scale LLM Training Failures via Automated Log
Analysis
ABSTRACT: As Large Language Models (LLMs) show their capabilities across various
applications, training customized LLMs has become essential for modern
enterprises. However, due to the complexity of LLM training, which requires
massive computational resources and extensive training time, failures are
inevitable during the training process. These failures result in considerable
waste of resources and time, highlighting the critical need for effective and
efficient failure diagnosis to reduce the cost of LLM training.
In this paper, we present the first empirical study on the failure reports of
428 LLM training failures in our production Platform-X between May 2023 and
April 2024. Our study reveals that hardware and user faults are the predominant
root causes, and current diagnosis processes rely heavily on training logs.
Unfortunately, existing log-based diagnostic methods fall short in handling LLM
training logs. Considering the unique features of LLM training, we identify
three distinct patterns of LLM training logs: cross-job, spatial, and temporal
patterns. We then introduce our Log-based Large-scale LLM training failure
diagnosis framework, L4, which can automatically extract failure-indicating
information (i.e., log events, nodes, stages, and iterations) from extensive
training logs, thereby reducing manual effort and facilitating failure
recovery. Experimental results on real-world datasets show that L4 outperforms
existing approaches in identifying failure-indicating logs and localizing
faulty nodes. Furthermore, L4 has been applied in Platform-X and demonstrated
its effectiveness in enabling accurate and efficient failure diagnosis.
|
2503.20264 | Yunrui Zhang Mr | Yunrui Zhang, Gustavo Batista, Salil S. Kanhere | Revisit Time Series Classification Benchmark: The Impact of Temporal
Information for Classification | Accepted to PAKDD2025 | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Time series classification is usually regarded as a distinct task from
tabular data classification due to the importance of temporal information.
However, in this paper, by performing permutation tests that disrupt temporal
information on the UCR time series classification archive, the most widely used
benchmark for time series classification, we identify a significant proportion
of datasets where temporal information has little to no impact on
classification. Many of these datasets are tabular in nature or rely mainly on
tabular features, leading to potentially biased evaluations of time series
classifiers focused on temporal information. To address this, we propose UCR
Augmented, a benchmark based on the UCR time series classification archive
designed to evaluate classifiers' ability to extract and utilize temporal
information. Testing classifiers from seven categories on this benchmark
revealed notable shifts in performance rankings. Some previously overlooked
approaches perform well, while others see their performance decline
significantly when temporal information is crucial. UCR Augmented provides a
more robust framework for assessing time series classifiers, ensuring fairer
evaluations. Our code is available at
https://github.com/YunruiZhang/Revisit-Time-Series-Classification-Benchmark.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 06:13:41 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhang",
"Yunrui",
""
],
[
"Batista",
"Gustavo",
""
],
[
"Kanhere",
"Salil S.",
""
]
] | TITLE: Revisit Time Series Classification Benchmark: The Impact of Temporal
Information for Classification
ABSTRACT: Time series classification is usually regarded as a distinct task from
tabular data classification due to the importance of temporal information.
However, in this paper, by performing permutation tests that disrupt temporal
information on the UCR time series classification archive, the most widely used
benchmark for time series classification, we identify a significant proportion
of datasets where temporal information has little to no impact on
classification. Many of these datasets are tabular in nature or rely mainly on
tabular features, leading to potentially biased evaluations of time series
classifiers focused on temporal information. To address this, we propose UCR
Augmented, a benchmark based on the UCR time series classification archive
designed to evaluate classifiers' ability to extract and utilize temporal
information. Testing classifiers from seven categories on this benchmark
revealed notable shifts in performance rankings. Some previously overlooked
approaches perform well, while others see their performance decline
significantly when temporal information is crucial. UCR Augmented provides a
more robust framework for assessing time series classifiers, ensuring fairer
evaluations. Our code is available at
https://github.com/YunruiZhang/Revisit-Time-Series-Classification-Benchmark.
|
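As an illustrative aside to the entry above, the permutation check is easy to reproduce on toy data: shuffle each series' time order and see whether accuracy drops. The synthetic AR(1)-versus-noise data, the single lag-1 autocorrelation feature and the logistic-regression classifier are assumptions for illustration:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_series(smooth: bool, T: int = 100) -> np.ndarray:
    e = rng.standard_normal(T)
    if not smooth:
        return e                          # white noise: no temporal structure
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.9 * x[t - 1] + e[t]      # AR(1): strong temporal structure
    return x / x.std()

def lag1_feature(X: np.ndarray) -> np.ndarray:
    # lag-1 autocorrelation per series: an order-sensitive feature
    a, b = X[:, :-1], X[:, 1:]
    num = ((a - a.mean(1, keepdims=True)) * (b - b.mean(1, keepdims=True))).mean(1)
    return (num / (X.var(1) + 1e-9)).reshape(-1, 1)

X = np.array([make_series(True) for _ in range(100)] +
             [make_series(False) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)

X_perm = X.copy()
for row in X_perm:
    rng.shuffle(row)                      # destroy temporal order within each series

clf = LogisticRegression()
print("original order:", cross_val_score(clf, lag1_feature(X), y, cv=5).mean())
print("shuffled order:", cross_val_score(clf, lag1_feature(X_perm), y, cv=5).mean())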
2503.20265 | Yiran Cheng | Yiran Cheng, Ting Zhang, Lwin Khin Shar, Zhe Lang, David Lo, Shichao
Lv, Dongliang Fang, Zhiqiang Shi, Limin Sun | Fixseeker: An Empirical Driven Graph-based Approach for Detecting Silent
Vulnerability Fixes in Open Source Software | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open source software vulnerabilities pose significant security risks to
downstream applications. While vulnerability databases provide valuable
information for mitigation, many security patches are released silently in new
commits of OSS repositories without explicit indications of their security
impact. This makes it challenging for software maintainers and users to detect
and address these vulnerability fixes. There are a few approaches for detecting
vulnerability-fixing commits (VFCs) but most of these approaches leverage
commit messages, which would miss silent VFCs. On the other hand, there are
some approaches for detecting silent VFCs based on code change patterns but
they often fail to adequately characterize vulnerability fix patterns, thereby
lacking effectiveness. For example, some approaches analyze each hunk in known
VFCs, in isolation, to learn vulnerability fix patterns; but vulnerability fixes
are often associated with multiple hunks, in which cases correlations of code
changes across those hunks are essential for characterizing the vulnerability
fixes. To address these problems, we first conduct a large-scale empirical
study on 11,900 VFCs across six programming languages, in which we found that
over 70% of VFCs involve multiple hunks with various types of correlations.
Based on our findings, we propose Fixseeker, a graph-based approach that
extracts the various correlations between code changes at the hunk level to
detect silent vulnerability fixes. Our evaluation demonstrates that Fixseeker
outperforms state-of-the-art approaches across multiple programming languages,
achieving a high F1 score of 0.8404 on average in balanced datasets and
consistently improving F1 score, AUC-ROC and AUC-PR scores by 32.40%, 1.55% and
8.24% on imbalanced datasets. Our evaluation also indicates the generality of
Fixseeker across different repository sizes and commit complexities.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 06:16:58 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Cheng",
"Yiran",
""
],
[
"Zhang",
"Ting",
""
],
[
"Shar",
"Lwin Khin",
""
],
[
"Lang",
"Zhe",
""
],
[
"Lo",
"David",
""
],
[
"Lv",
"Shichao",
""
],
[
"Fang",
"Dongliang",
""
],
[
"Shi",
"Zhiqiang",
""
],
[
"Sun",
"Limin",
""
]
] | TITLE: Fixseeker: An Empirical Driven Graph-based Approach for Detecting Silent
Vulnerability Fixes in Open Source Software
ABSTRACT: Open source software vulnerabilities pose significant security risks to
downstream applications. While vulnerability databases provide valuable
information for mitigation, many security patches are released silently in new
commits of OSS repositories without explicit indications of their security
impact. This makes it challenging for software maintainers and users to detect
and address these vulnerability fixes. There are a few approaches for detecting
vulnerability-fixing commits (VFCs) but most of these approaches leverage
commit messages, which would miss silent VFCs. On the other hand, there are
some approaches for detecting silent VFCs based on code change patterns but
they often fail to adequately characterize vulnerability fix patterns, thereby
lacking effectiveness. For example, some approaches analyze each hunk in known
VFCs, in isolation, to learn vulnerability fix patterns; but vulnerability fixes
are often associated with multiple hunks, in which cases correlations of code
changes across those hunks are essential for characterizing the vulnerability
fixes. To address these problems, we first conduct a large-scale empirical
study on 11,900 VFCs across six programming languages, in which we found that
over 70% of VFCs involve multiple hunks with various types of correlations.
Based on our findings, we propose Fixseeker, a graph-based approach that
extracts the various correlations between code changes at the hunk level to
detect silent vulnerability fixes. Our evaluation demonstrates that Fixseeker
outperforms state-of-the-art approaches across multiple programming languages,
achieving a high F1 score of 0.8404 on average in balanced datasets and
consistently improving F1 score, AUC-ROC and AUC-PR scores by 32.40%, 1.55% and
8.24% on imbalanced datasets. Our evaluation also indicates the generality of
Fixseeker across different repository sizes and commit complexities.
|
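As an illustrative aside to the entry above, a hunk-level graph for one commit can be sketched with a single toy correlation signal (shared code identifiers); the actual approach models several correlation types and feeds the graph to a learned classifier, so this is only a schematic:

import re
import networkx as nx

def hunk_identifiers(hunk_text: str) -> set:
    return set(re.findall(r"[A-Za-z_][A-Za-z_0-9]*", hunk_text))

def build_hunk_graph(hunks: list) -> nx.Graph:
    g = nx.Graph()
    for i, h in enumerate(hunks):
        g.add_node(i, text=h, idents=hunk_identifiers(h))
    for i in range(len(hunks)):
        for j in range(i + 1, len(hunks)):
            shared = g.nodes[i]["idents"] & g.nodes[j]["idents"]
            if shared:                     # connect hunks that touch the same identifiers
                g.add_edge(i, j, weight=len(shared))
    return g

commit_hunks = [
    "-  if (len < 0) return;\n+  if (len < 0 || len > MAX_LEN) return;",
    "+#define MAX_LEN 4096",
]
print(build_hunk_graph(commit_hunks).edges(data=True))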
2503.20268 | Ziran Zhang | Ziran Zhang, Xiaohui Li, Yihao Liu, Yujin Wang, Yueting Chen, Tianfan
Xue, Shi Guo | EGVD: Event-Guided Video Diffusion Model for Physically Realistic
Large-Motion Frame Interpolation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Video frame interpolation (VFI) in scenarios with large motion remains
challenging due to motion ambiguity between frames. While event cameras can
capture high temporal resolution motion information, existing event-based VFI
methods struggle with limited training data and complex motion patterns. In
this paper, we introduce Event-Guided Video Diffusion Model (EGVD), a novel
framework that leverages the powerful priors of pre-trained stable video
diffusion models alongside the precise temporal information from event cameras.
Our approach features a Multi-modal Motion Condition Generator (MMCG) that
effectively integrates RGB frames and event signals to guide the diffusion
process, producing physically realistic intermediate frames. We employ a
selective fine-tuning strategy that preserves spatial modeling capabilities
while efficiently incorporating event-guided temporal information. We
incorporate input-output normalization techniques inspired by recent advances
in diffusion modeling to enhance training stability across varying noise
levels. To improve generalization, we construct a comprehensive dataset
combining both real and simulated event data across diverse scenarios.
Extensive experiments on both real and simulated datasets demonstrate that EGVD
significantly outperforms existing methods in handling large motion and
challenging lighting conditions, achieving substantial improvements in
perceptual quality metrics (27.4% better LPIPS on Prophesee and 24.1% on BSRGB)
while maintaining competitive fidelity measures. Code and datasets available
at: https://github.com/OpenImagingLab/EGVD.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 06:33:32 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhang",
"Ziran",
""
],
[
"Li",
"Xiaohui",
""
],
[
"Liu",
"Yihao",
""
],
[
"Wang",
"Yujin",
""
],
[
"Chen",
"Yueting",
""
],
[
"Xue",
"Tianfan",
""
],
[
"Guo",
"Shi",
""
]
] | TITLE: EGVD: Event-Guided Video Diffusion Model for Physically Realistic
Large-Motion Frame Interpolation
ABSTRACT: Video frame interpolation (VFI) in scenarios with large motion remains
challenging due to motion ambiguity between frames. While event cameras can
capture high temporal resolution motion information, existing event-based VFI
methods struggle with limited training data and complex motion patterns. In
this paper, we introduce Event-Guided Video Diffusion Model (EGVD), a novel
framework that leverages the powerful priors of pre-trained stable video
diffusion models alongside the precise temporal information from event cameras.
Our approach features a Multi-modal Motion Condition Generator (MMCG) that
effectively integrates RGB frames and event signals to guide the diffusion
process, producing physically realistic intermediate frames. We employ a
selective fine-tuning strategy that preserves spatial modeling capabilities
while efficiently incorporating event-guided temporal information. We
incorporate input-output normalization techniques inspired by recent advances
in diffusion modeling to enhance training stability across varying noise
levels. To improve generalization, we construct a comprehensive dataset
combining both real and simulated event data across diverse scenarios.
Extensive experiments on both real and simulated datasets demonstrate that EGVD
significantly outperforms existing methods in handling large motion and
challenging lighting conditions, achieving substantial improvements in
perceptual quality metrics (27.4% better LPIPS on Prophesee and 24.1% on BSRGB)
while maintaining competitive fidelity measures. Code and datasets available
at: https://github.com/OpenImagingLab/EGVD.
|
2503.20278 | William Gilpin | William Gilpin | The cell as a token: high-dimensional geometry in language models and
cell embeddings | 4 pages, 2 figures | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single-cell sequencing technology maps cells to a high-dimensional space
encoding their internal activity. This process mirrors parallel developments in
machine learning, where large language models ingest unstructured text by
converting words into discrete tokens embedded within a high-dimensional vector
space. This perspective explores how advances in understanding the structure of
language embeddings can inform ongoing efforts to analyze and visualize single
cell datasets. We discuss how the context of tokens influences the geometry of
embedding space, and the role of low-dimensional manifolds in shaping this
space's robustness and interpretability. We highlight new developments in
language modeling, such as interpretability probes and in-context reasoning,
that can inform future efforts to construct and consolidate cell atlases.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 07:05:58 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Gilpin",
"William",
""
]
] | TITLE: The cell as a token: high-dimensional geometry in language models and
cell embeddings
ABSTRACT: Single-cell sequencing technology maps cells to a high-dimensional space
encoding their internal activity. This process mirrors parallel developments in
machine learning, where large language models ingest unstructured text by
converting words into discrete tokens embedded within a high-dimensional vector
space. This perspective explores how advances in understanding the structure of
language embeddings can inform ongoing efforts to analyze and visualize single
cell datasets. We discuss how the context of tokens influences the geometry of
embedding space, and the role of low-dimensional manifolds in shaping this
space's robustness and interpretability. We highlight new developments in
language modeling, such as interpretability probes and in-context reasoning,
that can inform future efforts to construct and consolidate cell atlases.
|
2503.20281 | Chenglong Wang | Chenglong Wang, Pujia Zheng, Jiaping Gui, Cunqing Hua, Wajih Ul Hassan | Are We There Yet? Unraveling the State-of-the-Art Graph Network
Intrusion Detection Systems | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network Intrusion Detection Systems (NIDS) are vital for ensuring enterprise
security. Recently, Graph-based NIDS (GIDS) have attracted considerable
attention because of their capability to effectively capture the complex
relationships within the graph structures of data communications. Despite their
promise, the reproducibility and replicability of these GIDS remain largely
unexplored, posing challenges for developing reliable and robust detection
systems. This study bridges this gap by designing a systematic approach to
evaluate state-of-the-art GIDS, which includes critically assessing, extending,
and clarifying the findings of these systems. We further assess the robustness
of GIDS under adversarial attacks. Evaluations were conducted on three public
datasets as well as a newly collected large-scale enterprise dataset. Our
findings reveal significant performance discrepancies, highlighting challenges
related to dataset scale, model inputs, and implementation settings. We
demonstrate difficulties in reproducing and replicating results, particularly
concerning false positive rates and robustness against adversarial attacks.
This work provides valuable insights and recommendations for future research,
emphasizing the importance of rigorous reproduction and replication studies in
developing robust and generalizable GIDS solutions.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 07:11:57 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wang",
"Chenglong",
""
],
[
"Zheng",
"Pujia",
""
],
[
"Gui",
"Jiaping",
""
],
[
"Hua",
"Cunqing",
""
],
[
"Hassan",
"Wajih Ul",
""
]
] | TITLE: Are We There Yet? Unraveling the State-of-the-Art Graph Network
Intrusion Detection Systems
ABSTRACT: Network Intrusion Detection Systems (NIDS) are vital for ensuring enterprise
security. Recently, Graph-based NIDS (GIDS) have attracted considerable
attention because of their capability to effectively capture the complex
relationships within the graph structures of data communications. Despite their
promise, the reproducibility and replicability of these GIDS remain largely
unexplored, posing challenges for developing reliable and robust detection
systems. This study bridges this gap by designing a systematic approach to
evaluate state-of-the-art GIDS, which includes critically assessing, extending,
and clarifying the findings of these systems. We further assess the robustness
of GIDS under adversarial attacks. Evaluations were conducted on three public
datasets as well as a newly collected large-scale enterprise dataset. Our
findings reveal significant performance discrepancies, highlighting challenges
related to dataset scale, model inputs, and implementation settings. We
demonstrate difficulties in reproducing and replicating results, particularly
concerning false positive rates and robustness against adversarial attacks.
This work provides valuable insights and recommendations for future research,
emphasizing the importance of rigorous reproduction and replication studies in
developing robust and generalizable GIDS solutions.
|
2503.20285 | Hongye Cao | Hongye Cao, Fan Feng, Jing Huo, Shangdong Yang, Meng Fang, Tianpei
Yang, and Yang Gao | Model-Based Offline Reinforcement Learning with Adversarial Data
Augmentation | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Model-based offline Reinforcement Learning (RL) constructs environment models
from offline datasets to perform conservative policy optimization. Existing
approaches focus on learning state transitions through ensemble models,
rolling out conservative estimates to mitigate extrapolation errors. However,
the static data makes it challenging to develop a robust policy, and offline
agents cannot access the environment to gather new data. To address these
challenges, we introduce Model-based Offline Reinforcement learning with
AdversariaL data augmentation (MORAL). In MORAL, we replace the fixed horizon
rollout by employing adversarial data augmentation to execute alternating
sampling with ensemble models to enrich training data. Specifically, this
adversarial process dynamically selects ensemble models against policy for
biased sampling, mitigating the optimistic estimation of fixed models, thus
robustly expanding the training data for policy optimization. Moreover, a
differential factor is integrated into the adversarial process for
regularization, ensuring error minimization in extrapolations. This
data-augmented optimization adapts to diverse offline tasks without rollout
horizon tuning, showing remarkable applicability. Extensive experiments on D4RL
benchmark demonstrate that MORAL outperforms other model-based offline RL
methods in terms of policy learning and sample efficiency.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 07:24:34 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Cao",
"Hongye",
""
],
[
"Feng",
"Fan",
""
],
[
"Huo",
"Jing",
""
],
[
"Yang",
"Shangdong",
""
],
[
"Fang",
"Meng",
""
],
[
"Yang",
"Tianpei",
""
],
[
"Gao",
"Yang",
""
]
] | TITLE: Model-Based Offline Reinforcement Learning with Adversarial Data
Augmentation
ABSTRACT: Model-based offline Reinforcement Learning (RL) constructs environment models
from offline datasets to perform conservative policy optimization. Existing
approaches focus on learning state transitions through ensemble models,
rolling out conservative estimates to mitigate extrapolation errors. However,
the static data makes it challenging to develop a robust policy, and offline
agents cannot access the environment to gather new data. To address these
challenges, we introduce Model-based Offline Reinforcement learning with
AdversariaL data augmentation (MORAL). In MORAL, we replace the fixed horizon
rollout by employing adversarial data augmentation to execute alternating
sampling with ensemble models to enrich training data. Specifically, this
adversarial process dynamically selects ensemble models against policy for
biased sampling, mitigating the optimistic estimation of fixed models, thus
robustly expanding the training data for policy optimization. Moreover, a
differential factor is integrated into the adversarial process for
regularization, ensuring error minimization in extrapolations. This
data-augmented optimization adapts to diverse offline tasks without rollout
horizon tuning, showing remarkable applicability. Extensive experiments on D4RL
benchmark demonstrate that MORAL outperforms other model-based offline RL
methods in terms of policy learning and sample efficiency.
|
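As an illustrative aside to the entry above, one generic way to realize adversarial selection over an ensemble of dynamics models is to keep, at each rollout step, the candidate next state that the current critic values least; this pessimistic-sampling sketch and its linear stand-ins are assumptions for illustration, not the exact MORAL procedure:

import torch

def adversarial_rollout_step(ensemble, critic, policy, state):
    """ensemble: callables f(state, action) -> next_state;
    critic: V(state) -> scalar value; policy: pi(state) -> action."""
    action = policy(state)
    candidates = torch.stack([f(state, action) for f in ensemble])   # (E, state_dim)
    values = torch.stack([critic(s) for s in candidates])            # (E,)
    worst = int(values.argmin())     # the member most adversarial to the current policy
    return candidates[worst], action

# toy usage with linear stand-ins
dim = 4
ensemble = [lambda s, a, W=torch.randn(dim, dim) * 0.1: s + a @ W for _ in range(5)]
critic = lambda s: s.sum()
policy = lambda s: torch.tanh(s)
next_state, act = adversarial_rollout_step(ensemble, critic, policy, torch.randn(dim))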
2503.20287 | Yuhui Wu | Yuhui Wu, Liyi Chen, Ruibin Li, Shihao Wang, Chenxi Xie, Lei Zhang | InsViE-1M: Effective Instruction-based Video Editing with Elaborate
Dataset Construction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instruction-based video editing allows effective and interactive editing of
videos using only instructions without extra inputs such as masks or
attributes. However, collecting high-quality training triplets (source video,
edited video, instruction) is a challenging task. Existing datasets mostly
consist of low-resolution, short duration, and limited amount of source videos
with unsatisfactory editing quality, limiting the performance of trained
editing models. In this work, we present a high-quality Instruction-based Video
Editing dataset with 1M triplets, namely InsViE-1M. We first curate
high-resolution and high-quality source videos and images, then design an
effective editing-filtering pipeline to construct high-quality editing triplets
for model training. For a source video, we generate multiple edited samples of
its first frame with different intensities of classifier-free guidance, which
are automatically filtered by GPT-4o with carefully crafted guidelines. The
edited first frame is propagated to subsequent frames to produce the edited
video, followed by another round of filtering for frame quality and motion
evaluation. We also generate and filter a variety of video editing triplets
from high-quality images. With the InsViE-1M dataset, we propose a multi-stage
learning strategy to train our InsViE model, progressively enhancing its
instruction following and editing ability. Extensive experiments demonstrate
the advantages of our InsViE-1M dataset and the trained model over
state-of-the-art works. Codes are available at InsViE.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 07:30:58 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wu",
"Yuhui",
""
],
[
"Chen",
"Liyi",
""
],
[
"Li",
"Ruibin",
""
],
[
"Wang",
"Shihao",
""
],
[
"Xie",
"Chenxi",
""
],
[
"Zhang",
"Lei",
""
]
] | TITLE: InsViE-1M: Effective Instruction-based Video Editing with Elaborate
Dataset Construction
ABSTRACT: Instruction-based video editing allows effective and interactive editing of
videos using only instructions without extra inputs such as masks or
attributes. However, collecting high-quality training triplets (source video,
edited video, instruction) is a challenging task. Existing datasets mostly
consist of low-resolution, short duration, and limited amount of source videos
with unsatisfactory editing quality, limiting the performance of trained
editing models. In this work, we present a high-quality Instruction-based Video
Editing dataset with 1M triplets, namely InsViE-1M. We first curate
high-resolution and high-quality source videos and images, then design an
effective editing-filtering pipeline to construct high-quality editing triplets
for model training. For a source video, we generate multiple edited samples of
its first frame with different intensities of classifier-free guidance, which
are automatically filtered by GPT-4o with carefully crafted guidelines. The
edited first frame is propagated to subsequent frames to produce the edited
video, followed by another round of filtering for frame quality and motion
evaluation. We also generate and filter a variety of video editing triplets
from high-quality images. With the InsViE-1M dataset, we propose a multi-stage
learning strategy to train our InsViE model, progressively enhancing its
instruction following and editing ability. Extensive experiments demonstrate
the advantages of our InsViE-1M dataset and the trained model over
state-of-the-art works. Codes are available at InsViE.
|
2503.20306 | Anandakumar D | Bargava Subramanian, Naveen Kumarasami, Praveen Shastry, Kalyan
Sivasailam, Anandakumar D, Elakkiya R, Harsha KG, Rithanya V, Harini T,
Afshin Hussain, Kishore Prasath Venkatesh | 3D Convolutional Neural Networks for Improved Detection of Intracranial
bleeding in CT Imaging | 12 pages,4 figures | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Intracranial bleeding (IB) is a life-threatening condition caused
by traumatic brain injuries, including epidural, subdural, subarachnoid, and
intraparenchymal hemorrhages. Rapid and accurate detection is crucial to
prevent severe complications. Traditional imaging can be slow and prone to
variability, especially in high-pressure scenarios. Artificial Intelligence
(AI) provides a solution by quickly analyzing medical images, identifying
subtle hemorrhages, and flagging urgent cases. By enhancing diagnostic speed
and accuracy, AI improves workflows and patient care. This article explores
AI's role in transforming IB detection in emergency settings.
Methods: A U-shaped 3D Convolutional Neural Network (CNN) automates IB
detection and classification in volumetric CT scans. Advanced preprocessing,
including CLAHE and intensity normalization, enhances image quality. The
architecture preserves spatial and contextual details for precise segmentation.
A dataset of 2,912 annotated CT scans was used for training and evaluation.
Results: The model achieved high performance across major bleed types, with
precision, recall, and accuracy exceeding 90 percent in most cases: 96 percent
precision for epidural hemorrhages and 94 percent accuracy for subarachnoid
hemorrhages. Its ability to classify and localize hemorrhages highlights its
clinical reliability.
Conclusion: This U-shaped 3D CNN offers a scalable solution for automating IB
detection, reducing diagnostic delays, and improving emergency care outcomes.
Future work will expand dataset diversity, optimize real-time processing, and
integrate multimodal data for enhanced clinical applicability.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:10:29 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Subramanian",
"Bargava",
""
],
[
"Kumarasami",
"Naveen",
""
],
[
"Shastry",
"Praveen",
""
],
[
"Sivasailam",
"Kalyan",
""
],
[
"D",
"Anandakumar",
""
],
[
"R",
"Elakkiya",
""
],
[
"KG",
"Harsha",
""
],
[
"V",
"Rithanya",
""
],
[
"T",
"Harini",
""
],
[
"Hussain",
"Afshin",
""
],
[
"Venkatesh",
"Kishore Prasath",
""
]
] | TITLE: 3D Convolutional Neural Networks for Improved Detection of Intracranial
bleeding in CT Imaging
ABSTRACT: Background: Intracranial bleeding (IB) is a life-threatening condition caused
by traumatic brain injuries, including epidural, subdural, subarachnoid, and
intraparenchymal hemorrhages. Rapid and accurate detection is crucial to
prevent severe complications. Traditional imaging can be slow and prone to
variability, especially in high-pressure scenarios. Artificial Intelligence
(AI) provides a solution by quickly analyzing medical images, identifying
subtle hemorrhages, and flagging urgent cases. By enhancing diagnostic speed
and accuracy, AI improves workflows and patient care. This article explores
AI's role in transforming IB detection in emergency settings.
Methods: A U-shaped 3D Convolutional Neural Network (CNN) automates IB
detection and classification in volumetric CT scans. Advanced preprocessing,
including CLAHE and intensity normalization, enhances image quality. The
architecture preserves spatial and contextual details for precise segmentation.
A dataset of 2,912 annotated CT scans was used for training and evaluation.
Results: The model achieved high performance across major bleed types, with
precision, recall, and accuracy exceeding 90 percent in most cases: 96 percent
precision for epidural hemorrhages and 94 percent accuracy for subarachnoid
hemorrhages. Its ability to classify and localize hemorrhages highlights its
clinical reliability.
Conclusion: This U-shaped 3D CNN offers a scalable solution for automating IB
detection, reducing diagnostic delays, and improving emergency care outcomes.
Future work will expand dataset diversity, optimize real-time processing, and
integrate multimodal data for enhanced clinical applicability.
|
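As an illustrative aside to the entry above, the U-shaped 3D CNN idea (volumetric convolutions with skip connections that preserve spatial context) can be sketched at toy scale in PyTorch; the channel widths, normalization choice and single encoder/decoder level are assumptions for illustration, far smaller than a clinical model:

import torch
import torch.nn as nn

def conv3d_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.InstanceNorm3d(c_out), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, kernel_size=3, padding=1),
        nn.InstanceNorm3d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, n_classes: int = 5):          # e.g. 4 bleed types + background
        super().__init__()
        self.enc = conv3d_block(1, 16)
        self.down = nn.MaxPool3d(2)
        self.mid = conv3d_block(16, 32)
        self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec = conv3d_block(32, 16)               # 16 skip channels + 16 upsampled
        self.head = nn.Conv3d(16, n_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([e, u], dim=1)))

logits = TinyUNet3D()(torch.randn(1, 1, 32, 64, 64))  # -> (1, 5, 32, 64, 64)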