id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
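The header above lists the dataset's columns and their Arrow-style types. Below is a minimal parsing sketch, assuming the dump is exported as JSON Lines with these exact field names; the file name `arxiv_metadata.jsonl` is a placeholder, not the actual path of this dataset.

```python
import json

# Minimal sketch for reading records that follow the schema above.
# The file name below is a placeholder; the actual dump location is not
# given in this excerpt.
with open("arxiv_metadata.jsonl", encoding="utf-8") as fh:
    for line in fh:
        record = json.loads(line)
        arxiv_id = record["id"]                    # e.g. "2504.01952"
        title = record["title"].strip()
        categories = record["categories"].split()  # e.g. ["cs.CV"]
        # "versions" is a list of {"version", "created"} dicts, oldest first.
        latest_version = record["versions"][-1]["version"]
        # "authors_parsed" is a sequence of [last, first, suffix] triples.
        authors = [" ".join(p for p in (first, last) if p)
                   for last, first, _ in record["authors_parsed"]]
        # "prompt" holds the TITLE/ABSTRACT text prepared for downstream use.
        prompt = record["prompt"]
        print(arxiv_id, latest_version, title, "-", ", ".join(authors))
        break  # inspect only the first record
```

Since `versions` is ordered chronologically, the last element gives the latest revision, and `authors_parsed` stores `[last, first, suffix]` triples, which is why the sketch reassembles names in `first last` order.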
2504.01952 | Wenxuan Wang | Wenxuan Wang, Zijia Zhao, Yisi Zhang, Yepeng Tang, Erdong Hu, Xinlong
Wang, Jing Liu | Image Difference Grounding with Natural Language | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual grounding (VG) typically focuses on locating regions of interest
within an image using natural language, and most existing VG methods are
limited to single-image interpretations. This limits their applicability in
real-world scenarios like automatic surveillance, where detecting subtle but
meaningful visual differences across multiple images is crucial. Besides,
previous work on image difference understanding (IDU) has either focused on
detecting all change regions without cross-modal text guidance, or on providing
coarse-grained descriptions of differences. Therefore, to push towards
finer-grained vision-language perception, we propose Image Difference Grounding
(IDG), a task designed to precisely localize visual differences based on user
instructions. We introduce DiffGround, a large-scale and high-quality dataset
for IDG, containing image pairs with diverse visual variations along with
instructions querying fine-grained differences. In addition, we present a baseline
model for IDG, DiffTracker, which effectively integrates feature differential
enhancement and common suppression to precisely locate differences. Experiments
on the DiffGround dataset highlight the importance of our IDG dataset in
enabling finer-grained IDU. To foster future research, both DiffGround data and
DiffTracker model will be publicly released.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:56:42 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wang",
"Wenxuan",
""
],
[
"Zhao",
"Zijia",
""
],
[
"Zhang",
"Yisi",
""
],
[
"Tang",
"Yepeng",
""
],
[
"Hu",
"Erdong",
""
],
[
"Wang",
"Xinlong",
""
],
[
"Liu",
"Jing",
""
]
] | TITLE: Image Difference Grounding with Natural Language
ABSTRACT: Visual grounding (VG) typically focuses on locating regions of interest
within an image using natural language, and most existing VG methods are
limited to single-image interpretations. This limits their applicability in
real-world scenarios like automatic surveillance, where detecting subtle but
meaningful visual differences across multiple images is crucial. Besides,
previous work on image difference understanding (IDU) has either focused on
detecting all change regions without cross-modal text guidance, or on providing
coarse-grained descriptions of differences. Therefore, to push towards
finer-grained vision-language perception, we propose Image Difference Grounding
(IDG), a task designed to precisely localize visual differences based on user
instructions. We introduce DiffGround, a large-scale and high-quality dataset
for IDG, containing image pairs with diverse visual variations along with
instructions querying fine-grained differences. In addition, we present a baseline
model for IDG, DiffTracker, which effectively integrates feature differential
enhancement and common suppression to precisely locate differences. Experiments
on the DiffGround dataset highlight the importance of our IDG dataset in
enabling finer-grained IDU. To foster future research, both DiffGround data and
DiffTracker model will be publicly released.
|
2504.01954 | Wenxuan Wang | Jing Liu, Wenxuan Wang, Yisi Zhang, Yepeng Tang, Xingjian He, Longteng
Guo, Tongtian Yue, Xinlong Wang | Towards Unified Referring Expression Segmentation Across Omni-Level
Visual Target Granularities | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Referring expression segmentation (RES) aims at segmenting the entities'
masks that match the descriptive language expression. While traditional RES
methods primarily address object-level grounding, real-world scenarios demand a
more versatile framework that can handle multiple levels of target granularity,
such as multi-object, single object or part-level references. This introduces
great challenges due to the diverse and nuanced ways users describe targets.
However, existing datasets and models mainly focus on designing grounding
specialists for object-level target localization, lacking the necessary data
resources and unified frameworks for the more practical multi-grained RES. In
this paper, we take a step further towards a visual-granularity-unified RES task.
To overcome the limitation of data scarcity, we introduce a new
multi-granularity referring expression segmentation (MRES) task, alongside the
RefCOCOm benchmark, which includes part-level annotations for advancing
finer-grained visual understanding. In addition, we create MRES-32M, the
largest visual grounding dataset, comprising over 32.2M masks and captions
across 1M images, specifically designed for part-level vision-language
grounding. To tackle the challenges of multi-granularity RES, we propose
UniRES++, a unified multimodal large language model that integrates
object-level and part-level RES tasks. UniRES++ incorporates targeted designs
for fine-grained visual feature exploration. With the joint model architecture
and parameters, UniRES++ achieves state-of-the-art performance across multiple
benchmarks, including RefCOCOm for MRES, gRefCOCO for generalized RES, and
RefCOCO, RefCOCO+, RefCOCOg for classic RES. To foster future research into
multi-grained visual grounding, our RefCOCOm benchmark, MRES-32M dataset and
model UniRES++ will be publicly available at
https://github.com/Rubics-Xuan/MRES.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:58:05 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Liu",
"Jing",
""
],
[
"Wang",
"Wenxuan",
""
],
[
"Zhang",
"Yisi",
""
],
[
"Tang",
"Yepeng",
""
],
[
"He",
"Xingjian",
""
],
[
"Guo",
"Longteng",
""
],
[
"Yue",
"Tongtian",
""
],
[
"Wang",
"Xinlong",
""
]
] | TITLE: Towards Unified Referring Expression Segmentation Across Omni-Level
Visual Target Granularities
ABSTRACT: Referring expression segmentation (RES) aims at segmenting the entities'
masks that match the descriptive language expression. While traditional RES
methods primarily address object-level grounding, real-world scenarios demand a
more versatile framework that can handle multiple levels of target granularity,
such as multi-object, single object or part-level references. This introduces
great challenges due to the diverse and nuanced ways users describe targets.
However, existing datasets and models mainly focus on designing grounding
specialists for object-level target localization, lacking the necessary data
resources and unified frameworks for the more practical multi-grained RES. In
this paper, we take a step further towards a visual-granularity-unified RES task.
To overcome the limitation of data scarcity, we introduce a new
multi-granularity referring expression segmentation (MRES) task, alongside the
RefCOCOm benchmark, which includes part-level annotations for advancing
finer-grained visual understanding. In addition, we create MRES-32M, the
largest visual grounding dataset, comprising over 32.2M masks and captions
across 1M images, specifically designed for part-level vision-language
grounding. To tackle the challenges of multi-granularity RES, we propose
UniRES++, a unified multimodal large language model that integrates
object-level and part-level RES tasks. UniRES++ incorporates targeted designs
for fine-grained visual feature exploration. With the joint model architecture
and parameters, UniRES++ achieves state-of-the-art performance across multiple
benchmarks, including RefCOCOm for MRES, gRefCOCO for generalized RES, and
RefCOCO, RefCOCO+, RefCOCOg for classic RES. To foster future research into
multi-grained visual grounding, our RefCOCOm benchmark, MRES-32M dataset and
model UniRES++ will be publicly available at
https://github.com/Rubics-Xuan/MRES.
|
2504.01961 | Tengda Han | Tengda Han, Dilara Gokay, Joseph Heyward, Chuhan Zhang, Daniel Zoran,
Viorica Pătrăucean, João Carreira, Dima Damen, Andrew Zisserman | Learning from Streaming Video with Orthogonal Gradients | CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the challenge of representation learning from a continuous stream
of video as input, in a self-supervised manner. This differs from the standard
approaches to video learning where videos are chopped and shuffled during
training in order to create a non-redundant batch that satisfies the
independently and identically distributed (IID) sample assumption expected by
conventional training paradigms. When videos are only available as a continuous
stream of input, the IID assumption is evidently broken, leading to poor
performance. We demonstrate the drop in performance when moving from shuffled
to sequential learning on three tasks: the one-video representation learning
method DoRA, standard VideoMAE on multi-video datasets, and the task of future
video prediction. To address this drop, we propose a geometric modification to
standard optimizers, to decorrelate batches by utilising orthogonal gradients
during training. The proposed modification can be applied to any optimizer --
we demonstrate it with Stochastic Gradient Descent (SGD) and AdamW. Our
proposed orthogonal optimizer allows models trained from streaming videos to
alleviate the drop in representation learning performance, as evaluated on
downstream tasks. Across all three scenarios (DoRA, VideoMAE, and future
prediction), our orthogonal optimizer outperforms the strong AdamW baseline.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:59:57 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Han",
"Tengda",
""
],
[
"Gokay",
"Dilara",
""
],
[
"Heyward",
"Joseph",
""
],
[
"Zhang",
"Chuhan",
""
],
[
"Zoran",
"Daniel",
""
],
[
"Pătrăucean",
"Viorica",
""
],
[
"Carreira",
"João",
""
],
[
"Damen",
"Dima",
""
],
[
"Zisserman",
"Andrew",
""
]
] | TITLE: Learning from Streaming Video with Orthogonal Gradients
ABSTRACT: We address the challenge of representation learning from a continuous stream
of video as input, in a self-supervised manner. This differs from the standard
approaches to video learning where videos are chopped and shuffled during
training in order to create a non-redundant batch that satisfies the
independently and identically distributed (IID) sample assumption expected by
conventional training paradigms. When videos are only available as a continuous
stream of input, the IID assumption is evidently broken, leading to poor
performance. We demonstrate the drop in performance when moving from shuffled
to sequential learning on three tasks: the one-video representation learning
method DoRA, standard VideoMAE on multi-video datasets, and the task of future
video prediction. To address this drop, we propose a geometric modification to
standard optimizers, to decorrelate batches by utilising orthogonal gradients
during training. The proposed modification can be applied to any optimizer --
we demonstrate it with Stochastic Gradient Descent (SGD) and AdamW. Our
proposed orthogonal optimizer allows models trained from streaming videos to
alleviate the drop in representation learning performance, as evaluated on
downstream tasks. Across all three scenarios (DoRA, VideoMAE, and future
prediction), our orthogonal optimizer outperforms the strong AdamW baseline.
|
1904.06866 | Ori Plonsky | Ori Plonsky, Reut Apel, Eyal Ert, Moshe Tennenholtz, David Bourgin,
Joshua C. Peterson, Daniel Reichman, Thomas L. Griffiths, Stuart J. Russell,
Evan C. Carter, James F. Cavanagh, Ido Erev | Predicting human decisions with behavioral theories and machine learning | null | null | null | null | cs.AI cs.GT cs.LG | http://creativecommons.org/licenses/by/4.0/ | Predicting human decisions under risk and uncertainty remains a fundamental
challenge across disciplines. Existing models often struggle even in highly
stylized tasks like choice between lotteries. We introduce BEAST Gradient
Boosting (BEAST-GB), a hybrid model integrating behavioral theory (BEAST) with
machine learning. We first present CPC18, a competition for predicting risky
choice, in which BEAST-GB won. Then, using two large datasets, we demonstrate
BEAST-GB predicts more accurately than neural networks trained on extensive
data and dozens of existing behavioral models. BEAST-GB also generalizes
robustly across unseen experimental contexts, surpassing direct empirical
generalization, and helps refine and improve the behavioral theory itself. Our
analyses highlight the potential of anchoring predictions on behavioral theory
even in data-rich settings and even when the theory alone falters. Our results
underscore how integrating machine learning with theoretical frameworks,
especially those, like BEAST, designed for prediction, can improve our ability to
predict and understand human behavior.
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2019 06:12:44 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Apr 2024 07:10:17 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 09:01:41 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Plonsky",
"Ori",
""
],
[
"Apel",
"Reut",
""
],
[
"Ert",
"Eyal",
""
],
[
"Tennenholtz",
"Moshe",
""
],
[
"Bourgin",
"David",
""
],
[
"Peterson",
"Joshua C.",
""
],
[
"Reichman",
"Daniel",
""
],
[
"Griffiths",
"Thomas L.",
""
],
[
"Russell",
"Stuart J.",
""
],
[
"Carter",
"Evan C.",
""
],
[
"Cavanagh",
"James F.",
""
],
[
"Erev",
"Ido",
""
]
] | TITLE: Predicting human decisions with behavioral theories and machine learning
ABSTRACT: Predicting human decisions under risk and uncertainty remains a fundamental
challenge across disciplines. Existing models often struggle even in highly
stylized tasks like choice between lotteries. We introduce BEAST Gradient
Boosting (BEAST-GB), a hybrid model integrating behavioral theory (BEAST) with
machine learning. We first present CPC18, a competition for predicting risky
choice, in which BEAST-GB won. Then, using two large datasets, we demonstrate
BEAST-GB predicts more accurately than neural networks trained on extensive
data and dozens of existing behavioral models. BEAST-GB also generalizes
robustly across unseen experimental contexts, surpassing direct empirical
generalization, and helps refine and improve the behavioral theory itself. Our
analyses highlight the potential of anchoring predictions on behavioral theory
even in data-rich settings and even when the theory alone falters. Our results
underscore how integrating machine learning with theoretical frameworks,
especially those, like BEAST, designed for prediction, can improve our ability to
predict and understand human behavior.
|
2101.00814 | Chenxing Wang | Fanzhou Wang, Chenxing Wang, Qingze Guan | Single-shot fringe projection profilometry based on Deep Learning and
Computer Graphics | null | Opt. Express 29(2021) 8024-8040 | 10.1364/OE.418430 | 10944087 | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple works have applied deep learning to fringe projection profilometry
(FPP) in recent years. However, obtaining a large amount of training data from
actual systems remains a tricky problem, and the network design and
optimization are still worth exploring. In this paper, we introduce computer
graphics to build virtual FPP systems in order to generate the desired datasets
conveniently and simply. We first describe in detail how to construct a virtual
FPP system, and then analyze some key factors for making the virtual FPP system
closely match reality. With the aim of accurately estimating the depth image
from only one fringe image, we also design a new loss function to enhance the
quality of the overall and detailed information restored. Two representative
networks, U-Net and pix2pix, are compared in multiple aspects. Real experiments
prove the good accuracy and generalization of the network trained with the data
from our virtual systems and the designed loss, implying the potential of our
method for applications.
| [
{
"version": "v1",
"created": "Mon, 4 Jan 2021 07:42:37 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Fanzhou",
""
],
[
"Wang",
"Chenxing",
""
],
[
"Guan",
"Qingze",
""
]
] | TITLE: Single-shot fringe projection profilometry based on Deep Learning and
Computer Graphics
ABSTRACT: Multiple works have applied deep learning to fringe projection profilometry
(FPP) in recent years. However, obtaining a large amount of training data from
actual systems remains a tricky problem, and the network design and
optimization are still worth exploring. In this paper, we introduce computer
graphics to build virtual FPP systems in order to generate the desired datasets
conveniently and simply. We first describe in detail how to construct a virtual
FPP system, and then analyze some key factors for making the virtual FPP system
closely match reality. With the aim of accurately estimating the depth image
from only one fringe image, we also design a new loss function to enhance the
quality of the overall and detailed information restored. Two representative
networks, U-Net and pix2pix, are compared in multiple aspects. Real experiments
prove the good accuracy and generalization of the network trained with the data
from our virtual systems and the designed loss, implying the potential of our
method for applications.
|
2209.06327 | Yuzhou Jiang | Yuzhou Jiang, Tianxi Ji, Erman Ayday | PROVGEN: A Privacy-Preserving Approach for Outcome Validation in Genomic
Research | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As genomic research has grown increasingly popular in recent years, dataset
sharing has remained limited due to privacy concerns. This limitation hinders
the reproducibility and validation of research outcomes, both of which are
essential for identifying computational errors during the research process. In
this paper, we introduce PROVGEN, a privacy-preserving method for sharing
genomic datasets that facilitates reproducibility and outcome validation in
genome-wide association studies (GWAS). Our approach encodes genomic data into
binary space and applies a two-stage process. First, we generate a
differentially private version of the dataset using an XOR-based mechanism that
incorporates biological characteristics. Second, we restore data utility by
adjusting the Minor Allele Frequency (MAF) values in the noisy dataset to align
with published MAFs using optimal transport. Finally, we convert the processed
binary data back into its genomic representation and publish the resulting
dataset. We evaluate PROVGEN on three real-world genomic datasets and compare
it with local differential privacy and three synthesis-based methods. We show
that our proposed scheme outperforms all existing methods in detecting GWAS
outcome errors, achieves better data utility, and provides higher privacy
protection against membership inference attacks (MIAs). By adopting our method,
genomic researchers will be inclined to share differentially private datasets
while maintaining high data quality for reproducibility of their findings.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2022 22:20:41 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 03:32:11 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Apr 2023 19:23:24 GMT"
},
{
"version": "v4",
"created": "Mon, 18 Dec 2023 20:12:21 GMT"
},
{
"version": "v5",
"created": "Wed, 28 Aug 2024 15:24:07 GMT"
},
{
"version": "v6",
"created": "Wed, 5 Mar 2025 04:02:30 GMT"
},
{
"version": "v7",
"created": "Tue, 1 Apr 2025 05:51:30 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Jiang",
"Yuzhou",
""
],
[
"Ji",
"Tianxi",
""
],
[
"Ayday",
"Erman",
""
]
] | TITLE: PROVGEN: A Privacy-Preserving Approach for Outcome Validation in Genomic
Research
ABSTRACT: As genomic research has grown increasingly popular in recent years, dataset
sharing has remained limited due to privacy concerns. This limitation hinders
the reproducibility and validation of research outcomes, both of which are
essential for identifying computational errors during the research process. In
this paper, we introduce PROVGEN, a privacy-preserving method for sharing
genomic datasets that facilitates reproducibility and outcome validation in
genome-wide association studies (GWAS). Our approach encodes genomic data into
binary space and applies a two-stage process. First, we generate a
differentially private version of the dataset using an XOR-based mechanism that
incorporates biological characteristics. Second, we restore data utility by
adjusting the Minor Allele Frequency (MAF) values in the noisy dataset to align
with published MAFs using optimal transport. Finally, we convert the processed
binary data back into its genomic representation and publish the resulting
dataset. We evaluate PROVGEN on three real-world genomic datasets and compare
it with local differential privacy and three synthesis-based methods. We show
that our proposed scheme outperforms all existing methods in detecting GWAS
outcome errors, achieves better data utility, and provides higher privacy
protection against membership inference attacks (MIAs). By adopting our method,
genomic researchers will be inclined to share differentially private datasets
while maintaining high data quality for reproducibility of their findings.
|
2302.10473 | Kun Wang | Kun Wang, Zi Wang, Zhang Li, Ang Su, Xichao Teng, Erting Pan, Minhao
Liu and Qifeng Yu | Oriented Object Detection in Optical Remote Sensing Images using Deep
Learning: A Survey | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Oriented object detection is one of the most fundamental and challenging
tasks in remote sensing, aiming to locate and classify objects with arbitrary
orientations. Recent advancements in deep learning have significantly enhanced
the capabilities of oriented object detection. Given the rapid development of
this field, this paper presents a comprehensive survey of recent advances in
oriented object detection. To be specific, we begin by tracing the technical
evolution from horizontal object detection to oriented object detection and
highlighting the specific challenges, including feature misalignment, spatial
misalignment, and oriented bounding box (OBB) regression problems.
Subsequently, we further categorize existing methods into detection framework,
OBB regression, and feature representations, and provide an in-depth discussion
on how these approaches address the above challenges. In addition, we cover
several publicly available datasets and evaluation protocols. Furthermore, we
provide a comprehensive comparison and analysis of state-of-the-art methods.
Toward the end of this paper, we identify several future directions for
oriented object detection.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 06:31:53 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 15:16:23 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Jul 2023 08:19:18 GMT"
},
{
"version": "v4",
"created": "Tue, 9 Apr 2024 05:47:57 GMT"
},
{
"version": "v5",
"created": "Tue, 1 Apr 2025 15:54:12 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Kun",
""
],
[
"Wang",
"Zi",
""
],
[
"Li",
"Zhang",
""
],
[
"Su",
"Ang",
""
],
[
"Teng",
"Xichao",
""
],
[
"Pan",
"Erting",
""
],
[
"Liu",
"Minhao",
""
],
[
"Yu",
"Qifeng",
""
]
] | TITLE: Oriented Object Detection in Optical Remote Sensing Images using Deep
Learning: A Survey
ABSTRACT: Oriented object detection is one of the most fundamental and challenging
tasks in remote sensing, aiming to locate and classify objects with arbitrary
orientations. Recent advancements in deep learning have significantly enhanced
the capabilities of oriented object detection. Given the rapid development of
this field, this paper presents a comprehensive survey of recent advances in
oriented object detection. To be specific, we begin by tracing the technical
evolution from horizontal object detection to oriented object detection and
highlighting the specific challenges, including feature misalignment, spatial
misalignment, and oriented bounding box (OBB) regression problems.
Subsequently, we further categorize existing methods into detection framework,
OBB regression, and feature representations, and provide an in-depth discussion
on how these approaches address the above challenges. In addition, we cover
several publicly available datasets and evaluation protocols. Furthermore, we
provide a comprehensive comparison and analysis of state-of-the-art methods.
Toward the end of this paper, we identify several future directions for
oriented object detection.
|
2308.14746 | Lucas Ventura | Lucas Ventura, Antoine Yang, Cordelia Schmid, Gül Varol | CoVR-2: Automatic Data Construction for Composed Video Retrieval | Appears in TPAMI 2024 (DOI: 10.1109/TPAMI.2024.3463799). Journal
extension of the AAAI 2024 conference paper arXiv:2308.14746v3. Project page:
https://imagine.enpc.fr/~ventural/covr/ | IEEE Transactions on Pattern Analysis and Machine Intelligence
(2024) | 10.1109/TPAMI.2024.3463799 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Composed Image Retrieval (CoIR) has recently gained popularity as a task that
considers both text and image queries together, to search for relevant images
in a database. Most CoIR approaches require manually annotated datasets,
comprising image-text-image triplets, where the text describes a modification
from the query image to the target image. However, manual curation of CoIR
triplets is expensive and prevents scalability. In this work, we instead
propose a scalable automatic dataset creation methodology that generates
triplets given video-caption pairs, while also expanding the scope of the task
to include composed video retrieval (CoVR). To this end, we mine paired videos
with a similar caption from a large database, and leverage a large language
model to generate the corresponding modification text. Applying this
methodology to the extensive WebVid2M collection, we automatically construct
our WebVid-CoVR dataset, resulting in 1.6 million triplets. Moreover, we
introduce a new benchmark for CoVR with a manually annotated evaluation set,
along with baseline results. We further validate that our methodology is
equally applicable to image-caption pairs, by generating 3.3 million CoIR
training triplets using the Conceptual Captions dataset. Our model builds on
BLIP-2 pretraining, adapting it to composed video (or image) retrieval, and
incorporates an additional caption retrieval loss to exploit extra supervision
beyond the triplet. We provide extensive ablations to analyze the design
choices on our new CoVR benchmark. Our experiments also demonstrate that
training a CoVR model on our datasets effectively transfers to CoIR, leading to
improved state-of-the-art performance in the zero-shot setup on the CIRR,
FashionIQ, and CIRCO benchmarks. Our code, datasets, and models are publicly
available at https://imagine.enpc.fr/~ventural/covr/.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 17:55:33 GMT"
},
{
"version": "v2",
"created": "Tue, 21 May 2024 14:44:08 GMT"
},
{
"version": "v3",
"created": "Thu, 30 May 2024 11:52:33 GMT"
},
{
"version": "v4",
"created": "Tue, 5 Nov 2024 02:51:22 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ventura",
"Lucas",
""
],
[
"Yang",
"Antoine",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Varol",
"Gül",
""
]
] | TITLE: CoVR-2: Automatic Data Construction for Composed Video Retrieval
ABSTRACT: Composed Image Retrieval (CoIR) has recently gained popularity as a task that
considers both text and image queries together, to search for relevant images
in a database. Most CoIR approaches require manually annotated datasets,
comprising image-text-image triplets, where the text describes a modification
from the query image to the target image. However, manual curation of CoIR
triplets is expensive and prevents scalability. In this work, we instead
propose a scalable automatic dataset creation methodology that generates
triplets given video-caption pairs, while also expanding the scope of the task
to include composed video retrieval (CoVR). To this end, we mine paired videos
with a similar caption from a large database, and leverage a large language
model to generate the corresponding modification text. Applying this
methodology to the extensive WebVid2M collection, we automatically construct
our WebVid-CoVR dataset, resulting in 1.6 million triplets. Moreover, we
introduce a new benchmark for CoVR with a manually annotated evaluation set,
along with baseline results. We further validate that our methodology is
equally applicable to image-caption pairs, by generating 3.3 million CoIR
training triplets using the Conceptual Captions dataset. Our model builds on
BLIP-2 pretraining, adapting it to composed video (or image) retrieval, and
incorporates an additional caption retrieval loss to exploit extra supervision
beyond the triplet. We provide extensive ablations to analyze the design
choices on our new CoVR benchmark. Our experiments also demonstrate that
training a CoVR model on our datasets effectively transfers to CoIR, leading to
improved state-of-the-art performance in the zero-shot setup on the CIRR,
FashionIQ, and CIRCO benchmarks. Our code, datasets, and models are publicly
available at https://imagine.enpc.fr/~ventural/covr/.
|
2309.02057 | Kaike Zhang | Kaike Zhang, Qi Cao, Fei Sun, Yunfan Wu, Shuchang Tao, Huawei Shen,
Xueqi Cheng | Robust Recommender System: A Survey and Future Directions | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid growth of information, recommender systems have become
integral for providing personalized suggestions and overcoming information
overload. However, their practical deployment often encounters "dirty" data,
where noise or malicious information can lead to abnormal recommendations.
Research on improving recommender systems' robustness against such dirty data
has thus gained significant attention. This survey provides a comprehensive
review of recent work on recommender systems' robustness. We first present a
taxonomy to organize current techniques for withstanding malicious attacks and
natural noise. We then explore state-of-the-art methods in each category,
including fraudster detection, adversarial training, and certifiable robust
training for defending against malicious attacks, and regularization,
purification, and self-supervised learning for defending against natural noise.
Additionally, we summarize evaluation metrics and commonly used datasets for
assessing robustness. We discuss robustness across varying recommendation
scenarios and its interplay with other properties like accuracy,
interpretability, privacy, and fairness. Finally, we delve into open issues and
future research directions in this emerging field. Our goal is to provide
readers with a comprehensive understanding of robust recommender systems and to
identify key pathways for future research and development. To facilitate
ongoing exploration, we maintain a continuously updated GitHub repository with
related research: https://github.com/Kaike-Zhang/Robust-Recommender-System.
| [
{
"version": "v1",
"created": "Tue, 5 Sep 2023 08:58:46 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 07:33:46 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhang",
"Kaike",
""
],
[
"Cao",
"Qi",
""
],
[
"Sun",
"Fei",
""
],
[
"Wu",
"Yunfan",
""
],
[
"Tao",
"Shuchang",
""
],
[
"Shen",
"Huawei",
""
],
[
"Cheng",
"Xueqi",
""
]
] | TITLE: Robust Recommender System: A Survey and Future Directions
ABSTRACT: With the rapid growth of information, recommender systems have become
integral for providing personalized suggestions and overcoming information
overload. However, their practical deployment often encounters "dirty" data,
where noise or malicious information can lead to abnormal recommendations.
Research on improving recommender systems' robustness against such dirty data
has thus gained significant attention. This survey provides a comprehensive
review of recent work on recommender systems' robustness. We first present a
taxonomy to organize current techniques for withstanding malicious attacks and
natural noise. We then explore state-of-the-art methods in each category,
including fraudster detection, adversarial training, and certifiable robust
training for defending against malicious attacks, and regularization,
purification, and self-supervised learning for defending against natural noise.
Additionally, we summarize evaluation metrics and commonly used datasets for
assessing robustness. We discuss robustness across varying recommendation
scenarios and its interplay with other properties like accuracy,
interpretability, privacy, and fairness. Finally, we delve into open issues and
future research directions in this emerging field. Our goal is to provide
readers with a comprehensive understanding of robust recommender systems and to
identify key pathways for future research and development. To facilitate
ongoing exploration, we maintain a continuously updated GitHub repository with
related research: https://github.com/Kaike-Zhang/Robust-Recommender-System.
|
2310.11399 | Pedro Afonso Marques | Pedro Afonso Marques, Samuel Ahizi, Miguel Alfonso Mendez | Real-time data assimilation for the thermodynamic modeling of cryogenic
storage tanks | 21 pages, 18 figures, preprint submitted to Energy | null | 10.1016/j.energy.2024.131739 | null | physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | The thermal management of cryogenic storage tanks requires advanced control
strategies to minimize the boil-off losses produced by heat leakages and
sloshing-enhanced heat and mass transfer. This work presents a
data-assimilation approach to calibrate a 0D thermodynamic model for cryogenic
fuel tanks from data collected in real time from multiple tanks. The model
combines energy and mass balance between three control volumes (the ullage
vapor, the liquid, and the solid tank) with an Artificial Neural Network (ANN)
for predicting the heat transfer coefficients from the current tank state. The
proposed approach combines ideas from traditional data assimilation and
multi-environment reinforcement learning, where an agent's training (model
assimilation) is carried out simultaneously on multiple environments (systems).
The real-time assimilation uses a mini-batch version of the Limited-memory
Broyden-Fletcher-Goldfarb-Shanno with bounds (L-BFGS-B) and adjoint-based
gradient computation for solving the underlying optimization problem. The
approach is tested on synthetic datasets simulating multiple tanks undergoing
different operation phases (pressurization, hold, long-term storage, and
sloshing). The results show that the assimilation is robust against measurement
noise and uses it to explore the parameter space further. Moreover, we show
that sampling from multiple environments simultaneously accelerates the
assimilation.
| [
{
"version": "v1",
"created": "Tue, 17 Oct 2023 17:07:20 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Oct 2023 07:42:41 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Marques",
"Pedro Afonso",
""
],
[
"Ahizi",
"Samuel",
""
],
[
"Mendez",
"Miguel Alfonso",
""
]
] | TITLE: Real-time data assimilation for the thermodynamic modeling of cryogenic
storage tanks
ABSTRACT: The thermal management of cryogenic storage tanks requires advanced control
strategies to minimize the boil-off losses produced by heat leakages and
sloshing-enhanced heat and mass transfer. This work presents a
data-assimilation approach to calibrate a 0D thermodynamic model for cryogenic
fuel tanks from data collected in real time from multiple tanks. The model
combines energy and mass balance between three control volumes (the ullage
vapor, the liquid, and the solid tank) with an Artificial Neural Network (ANN)
for predicting the heat transfer coefficients from the current tank state. The
proposed approach combines ideas from traditional data assimilation and
multi-environment reinforcement learning, where an agent's training (model
assimilation) is carried out simultaneously on multiple environments (systems).
The real-time assimilation uses a mini-batch version of the Limited-memory
Broyden-Fletcher-Goldfarb-Shanno with bounds (L-BFGS-B) and adjoint-based
gradient computation for solving the underlying optimization problem. The
approach is tested on synthetic datasets simulating multiple tanks undergoing
different operation phases (pressurization, hold, long-term storage, and
sloshing). The results show that the assimilation is robust against measurement
noise and uses it to explore the parameter space further. Moreover, we show
that sampling from multiple environments simultaneously accelerates the
assimilation.
|
2310.12183 | Pavithra Harsha | Pavithra Harsha, Shivaram Subramanian, Ali Koc, Mahesh Ramakrishna,
Brian Quanz, Dhruv Shah, Chandra Narayanaswami | An Optimistic-Robust Approach for Dynamic Positioning of Omnichannel
Inventories | null | null | null | null | math.OC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new class of data-driven and distribution-free
optimistic-robust bimodal inventory optimization (BIO) strategy to effectively
allocate inventory across a retail chain to meet time-varying, uncertain
omnichannel demand. The bimodal nature of BIO stems from its ability to balance
downside risk, as in traditional Robust Optimization (RO), which focuses on
worst-case adversarial demand, with upside potential to enhance average-case
performance. This enables BIO to remain as resilient as RO while capturing
benefits that would otherwise be lost due to endogenous outliers. Omnichannel
inventory planning provides a suitable problem setting for analyzing the
effectiveness of BIO's bimodal strategy in managing the tradeoff between lost
sales at stores and cross-channel e-commerce fulfillment costs, factors that
are inherently asymmetric due to channel-specific behaviors. We provide
structural insights about the BIO solution and how it can be tuned to achieve a
preferred tradeoff between robustness and the average-case performance. Using a
real-world dataset from a large American omnichannel retail chain, a business
value assessment during a peak period indicates that BIO outperforms pure RO by
27% in terms of realized average profitability and surpasses other competitive
baselines under imperfect distributional information by over 10%. This
demonstrates that BIO provides a novel, data-driven, and distribution-free
alternative to traditional RO that achieves strong average performance while
carefully balancing robustness.
| [
{
"version": "v1",
"created": "Tue, 17 Oct 2023 23:10:57 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 15:59:59 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Harsha",
"Pavithra",
""
],
[
"Subramanian",
"Shivaram",
""
],
[
"Koc",
"Ali",
""
],
[
"Ramakrishna",
"Mahesh",
""
],
[
"Quanz",
"Brian",
""
],
[
"Shah",
"Dhruv",
""
],
[
"Narayanaswami",
"Chandra",
""
]
] | TITLE: An Optimistic-Robust Approach for Dynamic Positioning of Omnichannel
Inventories
ABSTRACT: We introduce a new class of data-driven and distribution-free
optimistic-robust bimodal inventory optimization (BIO) strategy to effectively
allocate inventory across a retail chain to meet time-varying, uncertain
omnichannel demand. The bimodal nature of BIO stems from its ability to balance
downside risk, as in traditional Robust Optimization (RO), which focuses on
worst-case adversarial demand, with upside potential to enhance average-case
performance. This enables BIO to remain as resilient as RO while capturing
benefits that would otherwise be lost due to endogenous outliers. Omnichannel
inventory planning provides a suitable problem setting for analyzing the
effectiveness of BIO's bimodal strategy in managing the tradeoff between lost
sales at stores and cross-channel e-commerce fulfillment costs, factors that
are inherently asymmetric due to channel-specific behaviors. We provide
structural insights about the BIO solution and how it can be tuned to achieve a
preferred tradeoff between robustness and the average-case performance. Using a
real-world dataset from a large American omnichannel retail chain, a business
value assessment during a peak period indicates that BIO outperforms pure RO by
27% in terms of realized average profitability and surpasses other competitive
baselines under imperfect distributional information by over 10%. This
demonstrates that BIO provides a novel, data-driven, and distribution-free
alternative to traditional RO that achieves strong average performance while
carefully balancing robustness.
|
2310.14558 | Xinlu Zhang | Xinlu Zhang, Chenxin Tian, Xianjun Yang, Lichang Chen, Zekun Li, Linda
Ruth Petzold | AlpaCare:Instruction-tuned Large Language Models for Medical Application | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Instruction-finetuning (IFT) has become crucial in aligning Large Language
Models (LLMs) with diverse human needs and has shown great potential in medical
applications. However, previous studies mainly fine-tune LLMs on biomedical
datasets with limited diversity, which often rely on benchmarks or narrow task
scopes, and hence significantly limit their medical instruction-following
ability and generalizability. To bridge this gap, we
propose creating a diverse, machine-generated medical IFT dataset,
MedInstruct-52k, using GPT-4 and ChatGPT with a high-quality expert-curated
seed set. We then fine-tune LLaMA-series models on the dataset to develop
AlpaCare. Despite using a smaller domain-specific dataset than previous medical
LLMs, AlpaCare not only demonstrates superior performance on medical
applications, with up to 38.1% absolute gain over best baselines in medical
free-form instruction evaluations, but also achieves 6.7% absolute gains
averaged over multiple general domain benchmarks. Human evaluation further
shows that AlpaCare consistently outperforms best baselines in terms of both
correctness and helpfulness. We offer public access to our data, model, and
codebase in https://github.com/XZhang97666/AlpaCare.
| [
{
"version": "v1",
"created": "Mon, 23 Oct 2023 04:22:50 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Apr 2024 21:36:08 GMT"
},
{
"version": "v3",
"created": "Mon, 13 May 2024 21:49:17 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Jun 2024 17:52:31 GMT"
},
{
"version": "v5",
"created": "Wed, 10 Jul 2024 23:46:06 GMT"
},
{
"version": "v6",
"created": "Mon, 31 Mar 2025 21:04:11 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhang",
"Xinlu",
""
],
[
"Tian",
"Chenxin",
""
],
[
"Yang",
"Xianjun",
""
],
[
"Chen",
"Lichang",
""
],
[
"Li",
"Zekun",
""
],
[
"Petzold",
"Linda Ruth",
""
]
] | TITLE: AlpaCare:Instruction-tuned Large Language Models for Medical Application
ABSTRACT: Instruction-finetuning (IFT) has become crucial in aligning Large Language
Models (LLMs) with diverse human needs and has shown great potential in medical
applications. However, previous studies mainly fine-tune LLMs on biomedical
datasets with limited diversity, which often rely on benchmarks or narrow task
scopes, and hence significantly limit their medical instruction-following
ability and generalizability. To bridge this gap, we
propose creating a diverse, machine-generated medical IFT dataset,
MedInstruct-52k, using GPT-4 and ChatGPT with a high-quality expert-curated
seed set. We then fine-tune LLaMA-series models on the dataset to develop
AlpaCare. Despite using a smaller domain-specific dataset than previous medical
LLMs, AlpaCare not only demonstrates superior performance on medical
applications, with up to 38.1% absolute gain over best baselines in medical
free-form instruction evaluations, but also achieves 6.7% absolute gains
averaged over multiple general domain benchmarks. Human evaluation further
shows that AlpaCare consistently outperforms best baselines in terms of both
correctness and helpfulness. We offer public access to our data, model, and
codebase in https://github.com/XZhang97666/AlpaCare.
|
2311.14395 | Xuecheng Hua | Xuecheng Hua, Ke Cheng, Hu Lu, Juanjuan Tu, Yuanquan Wang, Shitong
Wang | MSCMNet: Multi-scale Semantic Correlation Mining for Visible-Infrared
Person Re-Identification | null | Pattern Recognition 159, 111090 (2025), ISSN: 0031-3203 | 10.1016/j.patcog.2024.111090 | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | The main challenge in the Visible-Infrared Person Re-Identification (VI-ReID)
task lies in how to extract discriminative features from different modalities
for matching purposes. While existing works primarily focus on minimizing the
modal discrepancies, the modality information cannot be thoroughly leveraged.
To solve this problem, a Multi-scale Semantic Correlation Mining
network (MSCMNet) is proposed to comprehensively exploit semantic features at
multiple scales and simultaneously reduce modality information loss as small as
possible in feature extraction. The proposed network contains three novel
components. Firstly, after taking into account the effective utilization of
modality information, the Multi-scale Information Correlation Mining Block
(MIMB) is designed to explore semantic correlations across multiple scales.
Secondly, in order to enrich the semantic information that MIMB can utilize, a
quadruple-stream feature extractor (QFE) with non-shared parameters is
specifically designed to extract information from different dimensions of the
dataset. Finally, the Quadruple Center Triplet Loss (QCT) is further proposed
to address the information discrepancy in the comprehensive features. Extensive
experiments on the SYSU-MM01, RegDB, and LLCM datasets demonstrate that the
proposed MSCMNet achieves the greatest accuracy.
| [
{
"version": "v1",
"created": "Fri, 24 Nov 2023 10:23:57 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 13:34:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Hua",
"Xuecheng",
""
],
[
"Cheng",
"Ke",
""
],
[
"Lu",
"Hu",
""
],
[
"Tu",
"Juanjuan",
""
],
[
"Wang",
"Yuanquan",
""
],
[
"Wang",
"Shitong",
""
]
] | TITLE: MSCMNet: Multi-scale Semantic Correlation Mining for Visible-Infrared
Person Re-Identification
ABSTRACT: The main challenge in the Visible-Infrared Person Re-Identification (VI-ReID)
task lies in how to extract discriminative features from different modalities
for matching purposes. While existing works primarily focus on minimizing the
modal discrepancies, the modality information cannot be thoroughly leveraged.
To solve this problem, a Multi-scale Semantic Correlation Mining
network (MSCMNet) is proposed to comprehensively exploit semantic features at
multiple scales and simultaneously reduce modality information loss as small as
possible in feature extraction. The proposed network contains three novel
components. Firstly, after taking into account the effective utilization of
modality information, the Multi-scale Information Correlation Mining Block
(MIMB) is designed to explore semantic correlations across multiple scales.
Secondly, in order to enrich the semantic information that MIMB can utilize, a
quadruple-stream feature extractor (QFE) with non-shared parameters is
specifically designed to extract information from different dimensions of the
dataset. Finally, the Quadruple Center Triplet Loss (QCT) is further proposed
to address the information discrepancy in the comprehensive features. Extensive
experiments on the SYSU-MM01, RegDB, and LLCM datasets demonstrate that the
proposed MSCMNet achieves the greatest accuracy.
|
2312.06275 | Christian Weihsbach | Christian Weihsbach, Christian N. Kruse, Alexander Bigalke, Mattias P.
Heinrich | DG-TTA: Out-of-domain Medical Image Segmentation through Augmentation
and Descriptor-driven Domain Generalization and Test-Time Adaptation | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Purpose: Applying pre-trained medical deep learning segmentation models on
out-of-domain images often yields predictions of insufficient quality. In this
study, we propose to use a powerful generalizing descriptor along with
augmentation to enable domain-generalized pre-training and test-time
adaptation, achieving high-quality segmentation in unseen domains.
Materials and Methods: In this retrospective study five different publicly
available datasets (2012 to 2022) including 3D CT and MRI images are used to
evaluate segmentation performance in out-of-domain scenarios. The settings
include abdominal, spine, and cardiac imaging. The data is randomly split into
training and test samples. Domain-generalized pre-training on source data is
used to obtain the best initial performance in the target domain. We introduce
the combination of the generalizing SSC descriptor and GIN intensity
augmentation for optimal generalization. Segmentation results are subsequently
optimized at test time, where we propose to adapt the pre-trained models for
every unseen scan with a consistency scheme using the same
augmentation-descriptor combination. The segmentation is evaluated using Dice
similarity and Hausdorff distance and the significance of improvements is
tested with the Wilcoxon signed-rank test.
Results: The proposed generalized pre-training and subsequent test-time
adaptation improves model performance significantly in CT to MRI cross-domain
prediction for abdominal (+46.2% and +28.2% Dice), spine (+72.9%), and cardiac
(+14.2% and +55.7% Dice) scenarios (p<0.001).
Conclusion: Our method enables optimal, independent usage of medical image
source and target data and bridges domain gaps successfully with a compact and
efficient methodology. Open-source code available at:
https://github.com/multimodallearning/DG-TTA
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 10:26:21 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Dec 2023 13:01:13 GMT"
},
{
"version": "v3",
"created": "Wed, 10 Apr 2024 11:49:05 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 11:54:39 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Weihsbach",
"Christian",
""
],
[
"Kruse",
"Christian N.",
""
],
[
"Bigalke",
"Alexander",
""
],
[
"Heinrich",
"Mattias P.",
""
]
] | TITLE: DG-TTA: Out-of-domain Medical Image Segmentation through Augmentation
and Descriptor-driven Domain Generalization and Test-Time Adaptation
ABSTRACT: Purpose: Applying pre-trained medical deep learning segmentation models on
out-of-domain images often yields predictions of insufficient quality. In this
study, we propose to use a powerful generalizing descriptor along with
augmentation to enable domain-generalized pre-training and test-time
adaptation, achieving high-quality segmentation in unseen domains.
Materials and Methods: In this retrospective study five different publicly
available datasets (2012 to 2022) including 3D CT and MRI images are used to
evaluate segmentation performance in out-of-domain scenarios. The settings
include abdominal, spine, and cardiac imaging. The data is randomly split into
training and test samples. Domain-generalized pre-training on source data is
used to obtain the best initial performance in the target domain. We introduce
the combination of the generalizing SSC descriptor and GIN intensity
augmentation for optimal generalization. Segmentation results are subsequently
optimized at test time, where we propose to adapt the pre-trained models for
every unseen scan with a consistency scheme using the same
augmentation-descriptor combination. The segmentation is evaluated using Dice
similarity and Hausdorff distance and the significance of improvements is
tested with the Wilcoxon signed-rank test.
Results: The proposed generalized pre-training and subsequent test-time
adaptation improves model performance significantly in CT to MRI cross-domain
prediction for abdominal (+46.2% and +28.2% Dice), spine (+72.9%), and cardiac
(+14.2% and +55.7% Dice) scenarios (p<0.001).
Conclusion: Our method enables optimal, independent usage of medical image
source and target data and bridges domain gaps successfully with a compact and
efficient methodology. Open-source code available at:
https://github.com/multimodallearning/DG-TTA
|
2312.08194 | Hacer Yalim Keles | Mojtaba Najafi Khatounabad, Hacer Yalim Keles, Selma Kadioglu | SVInvNet: A Densely Connected Encoder-Decoder Architecture for Seismic
Velocity Inversion | This is the preprint of the accepted manuscript to appear in IEEE
Transactions on Geoscience and Remote Sensing | null | 10.1109/TGRS.2025.3552741 | null | cs.LG cs.CV physics.geo-ph | http://creativecommons.org/licenses/by/4.0/ | This study presents a deep learning-based approach to seismic velocity
inversion problem, focusing on both noisy and noiseless training datasets of
varying sizes. Our Seismic Velocity Inversion Network (SVInvNet) introduces a
novel architecture that contains a multi-connection encoder-decoder structure
enhanced with dense blocks. This design is specifically tuned to effectively
process time series data, which is essential for addressing the challenges of
non-linear seismic velocity inversion. For training and testing, we created
diverse seismic velocity models, including multi-layered, faulty, and salt dome
categories. We also investigated how different kinds of ambient noise, both
coherent and stochastic, and the size of the training dataset affect learning
outcomes. SVInvNet is trained on datasets ranging from 750 to 6,000 samples and
is tested using a large benchmark dataset of 12,000 samples. Despite its fewer
parameters compared to the baseline model, SVInvNet achieves superior
performance with this dataset. The performance of SVInvNet was further
evaluated using the OpenFWI dataset and Marmousi-derived velocity models. The
comparative analysis clearly reveals the effectiveness of the proposed model.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 14:58:25 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 12:44:26 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Khatounabad",
"Mojtaba Najafi",
""
],
[
"Keles",
"Hacer Yalim",
""
],
[
"Kadioglu",
"Selma",
""
]
] | TITLE: SVInvNet: A Densely Connected Encoder-Decoder Architecture for Seismic
Velocity Inversion
ABSTRACT: This study presents a deep learning-based approach to seismic velocity
inversion problem, focusing on both noisy and noiseless training datasets of
varying sizes. Our Seismic Velocity Inversion Network (SVInvNet) introduces a
novel architecture that contains a multi-connection encoder-decoder structure
enhanced with dense blocks. This design is specifically tuned to effectively
process time series data, which is essential for addressing the challenges of
non-linear seismic velocity inversion. For training and testing, we created
diverse seismic velocity models, including multi-layered, faulty, and salt dome
categories. We also investigated how different kinds of ambient noise, both
coherent and stochastic, and the size of the training dataset affect learning
outcomes. SVInvNet is trained on datasets ranging from 750 to 6,000 samples and
is tested using a large benchmark dataset of 12,000 samples. Despite its fewer
parameters compared to the baseline model, SVInvNet achieves superior
performance with this dataset. The performance of SVInvNet was further
evaluated using the OpenFWI dataset and Marmousi-derived velocity models. The
comparative analysis clearly reveals the effectiveness of the proposed model.
|
2401.04560 | Gyutae Hwang | Gyutae Hwang, Sang Jun Lee | Phase-shifted remote photoplethysmography for estimating heart rate and
blood pressure from facial video | 13 pages, 10 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Human health can be critically affected by cardiovascular diseases, such as
hypertension, arrhythmias, and stroke. Heart rate and blood pressure are
important biometric information for the monitoring of the cardiovascular system and
early diagnosis of cardiovascular diseases. Existing methods for estimating the
heart rate are based on electrocardiography and photoplethysmography, which
require contacting the sensor to the skin surface. Moreover, catheter and
cuff-based methods for measuring blood pressure cause inconvenience and have
limited applicability. Therefore, in this thesis, we propose a vision-based
method for estimating the heart rate and blood pressure. This thesis proposes a
2-stage deep learning framework consisting of a dual remote
photoplethysmography network (DRP-Net) and bounded blood pressure network
(BBP-Net). In the first stage, DRP-Net infers remote photoplethysmography
(rPPG) signals for the acral and facial regions, and these phase-shifted rPPG
signals are utilized to estimate the heart rate. In the second stage, BBP-Net
integrates temporal features and analyzes the phase discrepancy between the
acral and facial rPPG signals to estimate systolic and diastolic blood pressure
(SBP and DBP) values. To improve the accuracy
of estimating the heart rate, we employed a data augmentation method based on a
frame interpolation model. Moreover, we designed BBP-Net to infer blood
pressure within a predefined range by incorporating a scaled sigmoid function.
Our method estimated the heart rate with a mean absolute error (MAE) of 1.78
BPM, reducing the MAE by 34.31% compared to a recent method, on the MMSE-HR
dataset. The MAEs for estimating SBP and DBP were 10.19 mmHg and 7.09 mmHg,
respectively. On the V4V dataset, the MAEs for the heart rate, SBP, and DBP
were 3.83 BPM, 13.64 mmHg, and 9.4 mmHg, respectively.
| [
{
"version": "v1",
"created": "Tue, 9 Jan 2024 13:56:37 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Mar 2024 00:46:45 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Aug 2024 02:11:58 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 05:04:22 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Hwang",
"Gyutae",
""
],
[
"Lee",
"Sang Jun",
""
]
] | TITLE: Phase-shifted remote photoplethysmography for estimating heart rate and
blood pressure from facial video
ABSTRACT: Human health can be critically affected by cardiovascular diseases, such as
hypertension, arrhythmias, and stroke. Heart rate and blood pressure are
important biometric information for monitoring the cardiovascular system and
early diagnosis of cardiovascular diseases. Existing methods for estimating the
heart rate are based on electrocardiography and photoplethysmography, which
require contacting the sensor to the skin surface. Moreover, catheter and
cuff-based methods for measuring blood pressure cause inconvenience and have
limited applicability. Therefore, in this thesis, we propose a vision-based
method for estimating the heart rate and blood pressure, built on a
2-stage deep learning framework consisting of a dual remote
photoplethysmography network (DRP-Net) and bounded blood pressure network
(BBP-Net). In the first stage, DRP-Net infers remote photoplethysmography
(rPPG) signals for the acral and facial regions, and these phase-shifted rPPG
signals are utilized to estimate the heart rate. In the second stage, BBP-Net
integrates temporal features and analyzes the phase discrepancy between the
acral and facial rPPG signals to estimate systolic and diastolic blood pressure
(SBP and DBP) values. To improve the accuracy
of estimating the heart rate, we employed a data augmentation method based on a
frame interpolation model. Moreover, we designed BBP-Net to infer blood
pressure within a predefined range by incorporating a scaled sigmoid function.
Our method estimated the heart rate with a mean absolute error (MAE) of 1.78
BPM, reducing the MAE by 34.31% compared to a recent method, on the MMSE-HR
dataset. The MAEs for estimating SBP and DBP were 10.19 mmHg and 7.09 mmHg,
respectively. On the V4V dataset, the MAEs for the heart rate, SBP, and DBP
were 3.83 BPM, 13.64 mmHg, and 9.4 mmHg, respectively.
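To make the scaled-sigmoid bounding concrete, a minimal sketch follows; the
physiological ranges and tensor names are illustrative assumptions, not
BBP-Net's actual values:
```python
# Minimal sketch of a scaled-sigmoid output head that keeps predictions inside
# a predefined physiological range. The bounds below are assumptions.
import torch

def scaled_sigmoid(logit: torch.Tensor, low: float, high: float) -> torch.Tensor:
    """Map an unbounded logit to the interval [low, high]."""
    return low + (high - low) * torch.sigmoid(logit)

raw_sbp, raw_dbp = torch.tensor(0.3), torch.tensor(-1.2)
sbp = scaled_sigmoid(raw_sbp, low=80.0, high=200.0)   # systolic estimate, mmHg
dbp = scaled_sigmoid(raw_dbp, low=40.0, high=120.0)   # diastolic estimate, mmHg
print(float(sbp), float(dbp))
```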
|
2402.11910 | Saranya Alagarsamy | Saranya Alagarsamy, Chakkrit Tantithamthavorn, Wannita Takerngsaksiri,
Chetan Arora, Aldeida Aleti | Enhancing Large Language Models for Text-to-Testcase Generation | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Context: Test-driven development (TDD) is a widely employed software
development practice that involves developing test cases based on requirements
prior to writing the code. Although various methods for automated test case
generation have been proposed, they are not specifically tailored for TDD,
where requirements instead of code serve as input. Objective: In this paper, we
introduce a text-to-testcase generation approach based on a large language
model (GPT-3.5) that is fine-tuned on our curated dataset with an effective
prompt design. Method: Our approach enhances the capabilities of basic GPT-3.5
for the text-to-testcase generation task by fine-tuning it on our curated
dataset with an effective prompting design. We evaluated the
effectiveness of our approach across five large-scale open-source software
projects. Results: Our approach generated 7k test cases for the open-source
projects, achieving 78.5% syntactic correctness, 67.09% requirement
alignment, and 61.7% code coverage, which substantially outperforms all other
LLMs (basic GPT-3.5, Bloom, and CodeT5). In addition, our ablation study
demonstrates the substantial performance improvement of the fine-tuning and
prompting components of the GPT-3.5 model. Conclusions: These findings lead us
to conclude that fine-tuning and prompting should be considered in the future
when building a language model for the text-to-testcase generation task.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2024 07:50:54 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 07:37:55 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Alagarsamy",
"Saranya",
""
],
[
"Tantithamthavorn",
"Chakkrit",
""
],
[
"Takerngsaksiri",
"Wannita",
""
],
[
"Arora",
"Chetan",
""
],
[
"Aleti",
"Aldeida",
""
]
] | TITLE: Enhancing Large Language Models for Text-to-Testcase Generation
ABSTRACT: Context: Test-driven development (TDD) is a widely employed software
development practice that involves developing test cases based on requirements
prior to writing the code. Although various methods for automated test case
generation have been proposed, they are not specifically tailored for TDD,
where requirements instead of code serve as input. Objective: In this paper, we
introduce a text-to-testcase generation approach based on a large language
model (GPT-3.5) that is fine-tuned on our curated dataset with an effective
prompt design. Method: Our approach enhances the capabilities of basic GPT-3.5
for the text-to-testcase generation task by fine-tuning it on our curated
dataset with an effective prompting design. We evaluated the
effectiveness of our approach across five large-scale open-source software
projects. Results: Our approach generated 7k test cases for the open-source
projects, achieving 78.5% syntactic correctness, 67.09% requirement
alignment, and 61.7% code coverage, which substantially outperforms all other
LLMs (basic GPT-3.5, Bloom, and CodeT5). In addition, our ablation study
demonstrates the substantial performance improvement of the fine-tuning and
prompting components of the GPT-3.5 model. Conclusions: These findings lead us
to conclude that fine-tuning and prompting should be considered in the future
when building a language model for the text-to-testcase generation task.
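A hedged sketch of how requirement-to-test-case pairs could be serialized for
supervised fine-tuning is shown below; the chat-style JSONL format, prompt
template, and field names are assumptions for illustration, not the paper's
exact pipeline:
```python
# Packaging requirement -> test-case pairs as JSONL records for supervised
# fine-tuning of a chat model. Template and field names are assumptions.
import json

PROMPT_TEMPLATE = (
    "You are following test-driven development.\n"
    "Requirement:\n{requirement}\n\n"
    "Write a JUnit test class that specifies this behaviour."
)

def to_finetune_record(requirement: str, test_code: str) -> dict:
    return {
        "messages": [
            {"role": "user", "content": PROMPT_TEMPLATE.format(requirement=requirement)},
            {"role": "assistant", "content": test_code},
        ]
    }

pairs = [("The stack pops the most recently pushed element.",
          "class StackTest { /* ... assertions ... */ }")]
with open("train.jsonl", "w") as f:
    for req, test in pairs:
        f.write(json.dumps(to_finetune_record(req, test)) + "\n")
```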
|
2403.13164 | Yongshuo Zong | Yongshuo Zong, Ondrej Bohdal, Timothy Hospedales | VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning | ICLR 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) famously exhibit emergent in-context learning
(ICL) -- the ability to rapidly adapt to new tasks using few-shot examples
provided as a prompt, without updating the model's weights. Built on top of
LLMs, vision large language models (VLLMs) have advanced significantly in areas
such as recognition, reasoning, and grounding. However, investigations into
multimodal ICL have predominantly focused on few-shot visual question
answering (VQA) and image captioning, which we will show neither exploit the
strengths of ICL nor test its limitations. The broader capabilities and
limitations of multimodal ICL remain under-explored. In this study, we
introduce a comprehensive benchmark VL-ICL Bench for multimodal in-context
learning, encompassing a broad spectrum of tasks that involve both images and
text as inputs and outputs, and different types of challenges, from perception
to reasoning and long context length. We evaluate the abilities of
state-of-the-art VLLMs against this benchmark suite, revealing their diverse
strengths and weaknesses, and showing that even the most advanced models, such
as GPT-4, find the tasks challenging. By highlighting a range of new ICL tasks,
and the associated strengths and limitations of existing models, we hope that
our dataset will inspire future work on enhancing the in-context learning
capabilities of VLLMs, as well as inspire new applications that leverage VLLM
ICL. The code and dataset are available at https://github.com/ys-zong/VL-ICL.
| [
{
"version": "v1",
"created": "Tue, 19 Mar 2024 21:31:56 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Oct 2024 04:21:40 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Feb 2025 08:18:38 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Mar 2025 20:03:34 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zong",
"Yongshuo",
""
],
[
"Bohdal",
"Ondrej",
""
],
[
"Hospedales",
"Timothy",
""
]
] | TITLE: VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning
ABSTRACT: Large language models (LLMs) famously exhibit emergent in-context learning
(ICL) -- the ability to rapidly adapt to new tasks using few-shot examples
provided as a prompt, without updating the model's weights. Built on top of
LLMs, vision large language models (VLLMs) have advanced significantly in areas
such as recognition, reasoning, and grounding. However, investigations into
multimodal ICL have predominantly focused on few-shot visual question
answering (VQA) and image captioning, which we will show neither exploit the
strengths of ICL nor test its limitations. The broader capabilities and
limitations of multimodal ICL remain under-explored. In this study, we
introduce a comprehensive benchmark VL-ICL Bench for multimodal in-context
learning, encompassing a broad spectrum of tasks that involve both images and
text as inputs and outputs, and different types of challenges, from perception
to reasoning and long context length. We evaluate the abilities of
state-of-the-art VLLMs against this benchmark suite, revealing their diverse
strengths and weaknesses, and showing that even the most advanced models, such
as GPT-4, find the tasks challenging. By highlighting a range of new ICL tasks,
and the associated strengths and limitations of existing models, we hope that
our dataset will inspire future work on enhancing the in-context learning
capabilities of VLLMs, as well as inspire new applications that leverage VLLM
ICL. The code and dataset are available at https://github.com/ys-zong/VL-ICL.
|
2403.13846 | Xinrun Xu | Xinrun Xu, Manying Lv, Zhanbiao Lian, Yurong Wu, Jin Yan, Shan Jiang,
Zhiming Ding | A Clustering Method with Graph Maximum Decoding Information | 9 pages, 9 figures, IJCNN 2024 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The clustering method based on graph models has garnered increased attention
for its widespread applicability across various knowledge domains. Its
adaptability to integrate seamlessly with other relevant applications endows
graph model-based clustering analysis with the ability to robustly extract
"natural associations" or "graph structures" within datasets, facilitating the
modelling of relationships between data points. Despite their efficacy,
current clustering methods utilizing graph-based models overlook the
uncertainty associated with random walk access between nodes and the embedded
structural information in the data. To address this gap, we present a novel
Clustering method for Maximizing Decoding Information within graph-based
models, named CMDI. CMDI innovatively incorporates two-dimensional structural
information theory into the clustering process, consisting of two phases: graph
structure extraction and graph vertex partitioning. Within CMDI, graph
partitioning is reformulated as an abstract clustering problem, leveraging
maximum decoding information to minimize uncertainty associated with random
visits to vertices. Empirical evaluations on three real-world datasets
demonstrate that CMDI outperforms classical baseline methods, exhibiting a
superior decoding information ratio (DI-R). Furthermore, CMDI showcases
heightened efficiency, particularly when considering prior knowledge (PK).
These findings underscore the effectiveness of CMDI in enhancing decoding
information quality and computational efficiency, positioning it as a valuable
tool in graph-based clustering analyses.
| [
{
"version": "v1",
"created": "Mon, 18 Mar 2024 05:18:19 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Apr 2024 12:22:12 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 08:10:49 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xu",
"Xinrun",
""
],
[
"Lv",
"Manying",
""
],
[
"Lian",
"Zhanbiao",
""
],
[
"Wu",
"Yurong",
""
],
[
"Yan",
"Jin",
""
],
[
"Jiang",
"Shan",
""
],
[
"Ding",
"Zhiming",
""
]
] | TITLE: A Clustering Method with Graph Maximum Decoding Information
ABSTRACT: The clustering method based on graph models has garnered increased attention
for its widespread applicability across various knowledge domains. Its
adaptability to integrate seamlessly with other relevant applications endows
graph model-based clustering analysis with the ability to robustly extract
"natural associations" or "graph structures" within datasets, facilitating the
modelling of relationships between data points. Despite their efficacy,
current clustering methods utilizing graph-based models overlook the
uncertainty associated with random walk access between nodes and the embedded
structural information in the data. To address this gap, we present a novel
Clustering method for Maximizing Decoding Information within graph-based
models, named CMDI. CMDI innovatively incorporates two-dimensional structural
information theory into the clustering process, consisting of two phases: graph
structure extraction and graph vertex partitioning. Within CMDI, graph
partitioning is reformulated as an abstract clustering problem, leveraging
maximum decoding information to minimize uncertainty associated with random
visits to vertices. Empirical evaluations on three real-world datasets
demonstrate that CMDI outperforms classical baseline methods, exhibiting a
superior decoding information ratio (DI-R). Furthermore, CMDI showcases
heightened efficiency, particularly when considering prior knowledge (PK).
These findings underscore the effectiveness of CMDI in enhancing decoding
information quality and computational efficiency, positioning it as a valuable
tool in graph-based clustering analyses.
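The two-phase shape of such a pipeline (graph structure extraction, then vertex
partitioning) can be sketched as below; greedy modularity is used only as a
stand-in objective, since CMDI's decoding-information criterion is not
reproduced here:
```python
# Rough sketch of the two-phase shape: (1) extract a graph structure from raw
# vectors, (2) partition its vertices. Modularity is a stand-in objective, so
# this is an assumption-laden illustration rather than CMDI itself.
import numpy as np
import networkx as nx
from networkx.algorithms import community
from sklearn.neighbors import kneighbors_graph

def graph_cluster(X: np.ndarray, k: int = 10):
    # Phase 1: k-nearest-neighbour graph as the extracted "graph structure".
    adj = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    g = nx.from_scipy_sparse_array(adj)
    # Phase 2: vertex partitioning (stand-in objective: modularity).
    parts = community.greedy_modularity_communities(g)
    labels = np.empty(len(X), dtype=int)
    for c, nodes in enumerate(parts):
        labels[list(nodes)] = c
    return labels

X = np.random.rand(200, 8)
print(np.bincount(graph_cluster(X)))  # cluster sizes
```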
|
2403.15426 | Zhangquan Chen | Zhangquan Chen, Chunjiang Liu, Haobin Duan | CodingTeachLLM: Empowering LLM's Coding Ability via AST Prior Knowledge | 9 pages, 2 figures | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | In this paper, we introduce CodingTeachLLM, a large language model (LLM)
designed for coding teaching. Specifically, we aim to enhance the coding
ability of the LLM and guide it toward a better teaching mode in an educational
context. Thus, we propose an end-to-end, prior-based, three-phase supervised
fine-tuning model, which proves more competitive than traditional fine-tuning
methods. More specifically, our model realizes the structural disassembly and
incremental guided output of educational knowledge. To this end, we robustify
the classification of three data types via a sampler and an overlap estimation
neural network, and inject the preprocessed datasets into the pre-trained model
in three batches for LoRA fine-tuning. Then, we design a prior module that
couples the system prompt, vector databases, and abstract syntax tree task
segmentation. Finally, the compression method and regularization constraint are
applied to the prior-based fine-tuned model, followed by a text filter at the
output end to obtain incremental guided results. Our model represents the first
research effort to truly embody the tutor role with the features of abundant
educational knowledge, step-by-step incremental guided outputs, and
non-disclosure of answers. Extensive experiments report that our model also
achieves state-of-the-art code abilities compared to open-source models,
reaching an impressive 75.10% on the HumanEval (pass@1) benchmark.
Additionally, our model maintains strong conversational capabilities, with the
13B quantized version achieving scores of 56.34, 50.60, and 45.27,
respectively, on the MMLU, C-Eval, and AGIEval (5-shot) dialogue evaluation
benchmarks.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 05:38:39 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 03:53:53 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Chen",
"Zhangquan",
""
],
[
"Liu",
"Chunjiang",
""
],
[
"Duan",
"Haobin",
""
]
] | TITLE: CodingTeachLLM: Empowering LLM's Coding Ability via AST Prior Knowledge
ABSTRACT: In this paper, we introduce CodingTeachLLM, a large language model (LLM)
designed for coding teaching. Specifically, we aim to enhance the coding
ability of the LLM and guide it toward a better teaching mode in an educational
context. Thus, we propose an end-to-end, prior-based, three-phase supervised
fine-tuning model, which proves more competitive than traditional fine-tuning
methods. More specifically, our model realizes the structural disassembly and
incremental guided output of educational knowledge. To this end, we robustify
the classification of three data types via a sampler and an overlap estimation
neural network, and inject the preprocessed datasets into the pre-trained model
in three batches for LoRA fine-tuning. Then, we design a prior module that
couples the system prompt, vector databases, and abstract syntax tree task
segmentation. Finally, the compression method and regularization constraint are
applied to the prior-based fine-tuned model, followed by a text filter at the
output end to obtain incremental guided results. Our model represents the first
research effort to truly embody the tutor role with the features of abundant
educational knowledge, step-by-step incremental guided outputs, and
non-disclosure of answers. Extensive experiments report that our model also
achieves state-of-the-art code abilities compared to open-source models,
reaching an impressive 75.10% on the HumanEval (pass@1) benchmark.
Additionally, our model maintains strong conversational capabilities, with the
13B quantized version achieving scores of 56.34, 50.60, and 45.27,
respectively, on the MMLU, C-Eval, and AGIEval (5-shot) dialogue evaluation
benchmarks.
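A generic LoRA fine-tuning setup in the spirit of the description above might
look as follows; the base checkpoint, target modules, and the three-batch
staging helper are assumptions, not the paper's released code:
```python
# Generic LoRA setup with Hugging Face peft, shown only to make the
# "three batches for LoRA fine-tuning" step concrete. Checkpoint name,
# target modules, and the staged-training helper are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-13b-hf"          # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# The prior-based three-phase schedule would then train on three staged data
# batches in sequence, e.g.:
# for stage_data in [batch_1, batch_2, batch_3]:
#     trainer = build_trainer(model, stage_data)   # hypothetical helper
#     trainer.train()
```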
|
2403.17238 | Jonathan Salfity | Jonathan Salfity, Selma Wanna, Minkyu Choi, and Mitch Pryor | Temporal and Semantic Evaluation Metrics for Foundation Models in
Post-Hoc Analysis of Robotic Sub-tasks | 8 pages, 3 figures. IROS 2024 Submission | null | null | null | cs.RO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent works in Task and Motion Planning (TAMP) show that training control
policies on language-supervised robot trajectories with quality labeled data
markedly improves agent task success rates. However, the scarcity of such data
presents a significant hurdle to extending these methods to general use cases.
To address this concern, we present an automated framework to decompose
trajectory data into temporally bounded and natural language-based descriptive
sub-tasks by leveraging recent prompting strategies for Foundation Models (FMs)
including both Large Language Models (LLMs) and Vision Language Models (VLMs).
Our framework provides both time-based and language-based descriptions for
lower-level sub-tasks that comprise full trajectories. To rigorously evaluate
the quality of our automatic labeling framework, we contribute an algorithm
SIMILARITY to produce two novel metrics, temporal similarity and semantic
similarity. The metrics measure the temporal alignment and semantic fidelity of
language descriptions between two sub-task decompositions, namely an FM
sub-task decomposition prediction and a ground-truth sub-task decomposition. We
present scores for temporal similarity and semantic similarity above 90%,
compared to 30% for a randomized baseline, across multiple robotic environments,
demonstrating the effectiveness of our proposed framework. Our results enable
building diverse, large-scale, language-supervised datasets for improved
robotic TAMP.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 22:39:20 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 03:50:12 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Salfity",
"Jonathan",
""
],
[
"Wanna",
"Selma",
""
],
[
"Choi",
"Minkyu",
""
],
[
"Pryor",
"Mitch",
""
]
] | TITLE: Temporal and Semantic Evaluation Metrics for Foundation Models in
Post-Hoc Analysis of Robotic Sub-tasks
ABSTRACT: Recent works in Task and Motion Planning (TAMP) show that training control
policies on language-supervised robot trajectories with quality labeled data
markedly improves agent task success rates. However, the scarcity of such data
presents a significant hurdle to extending these methods to general use cases.
To address this concern, we present an automated framework to decompose
trajectory data into temporally bounded and natural language-based descriptive
sub-tasks by leveraging recent prompting strategies for Foundation Models (FMs)
including both Large Language Models (LLMs) and Vision Language Models (VLMs).
Our framework provides both time-based and language-based descriptions for
lower-level sub-tasks that comprise full trajectories. To rigorously evaluate
the quality of our automatic labeling framework, we contribute an algorithm
SIMILARITY to produce two novel metrics, temporal similarity and semantic
similarity. The metrics measure the temporal alignment and semantic fidelity of
language descriptions between two sub-task decompositions, namely an FM
sub-task decomposition prediction and a ground-truth sub-task decomposition. We
present scores for temporal similarity and semantic similarity above 90%,
compared to 30% for a randomized baseline, across multiple robotic environments,
demonstrating the effectiveness of our proposed framework. Our results enable
building diverse, large-scale, language-supervised datasets for improved
robotic TAMP.
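A hedged sketch of the two ingredients such metrics typically combine, interval
overlap for temporal alignment and embedding cosine similarity for semantic
fidelity, is given below; it is not the paper's SIMILARITY algorithm, and the
one-to-one matching of sub-tasks by index is a simplifying assumption:
```python
# Toy illustration: temporal alignment as interval IoU and semantic fidelity
# as cosine similarity of text embeddings. All values below are made up.
import numpy as np

def interval_iou(a, b):
    """a, b: (start, end) in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Predicted vs. ground-truth sub-task decompositions (toy values).
pred_spans = [(0.0, 4.0), (4.0, 9.0)]
true_spans = [(0.0, 3.5), (3.5, 9.0)]
temporal_similarity = np.mean([interval_iou(p, t)
                               for p, t in zip(pred_spans, true_spans)])

# Embeddings of the sub-task descriptions would come from any text encoder;
# random vectors stand in here.
pred_emb, true_emb = np.random.rand(2, 2, 384)
semantic_similarity = np.mean([cosine(p, t) for p, t in zip(pred_emb, true_emb)])
print(temporal_similarity, semantic_similarity)
```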
|
2405.04912 | Samuel Hoffman | Jerret Ross, Brian Belgodere, Samuel C. Hoffman, Vijil
Chenthamarakshan, Jiri Navratil, Youssef Mroueh, Payel Das | GP-MoLFormer: A Foundation Model For Molecular Generation | null | null | null | null | q-bio.BM cs.LG physics.chem-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Transformer-based models trained on large and general purpose datasets
consisting of molecular strings have recently emerged as a powerful tool for
successfully modeling various structure-property relations. Inspired by this
success, we extend the paradigm of training chemical language transformers on
large-scale chemical datasets to generative tasks in this work. Specifically,
we propose GP-MoLFormer, an autoregressive molecular string generator that is
trained on more than 1.1B (billion) chemical SMILES. GP-MoLFormer uses a 46.8M
parameter transformer decoder model with linear attention and rotary positional
encodings as the base architecture. GP-MoLFormer's utility is evaluated and
compared with that of existing baselines on three different tasks: de novo
generation, scaffold-constrained molecular decoration, and unconstrained
property-guided optimization. While the first two are handled with no
additional training, we propose a parameter-efficient fine-tuning method for
the last task, which uses property-ordered molecular pairs as input. We call
this new approach pair-tuning. Our results show GP-MoLFormer performs better
than or comparably to baselines across all three tasks, demonstrating its general
utility for a variety of molecular generation tasks. We further report strong
memorization of training data in GP-MoLFormer generations, which has so far
remained unexplored for chemical language models. Our analyses reveal that
training data memorization and novelty in generations are impacted by the
quality and scale of the training data; duplication bias in training data can
enhance memorization at the cost of lowering novelty. We further establish a
scaling law relating inference compute and novelty in generations.
| [
{
"version": "v1",
"created": "Thu, 4 Apr 2024 16:20:06 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 18:16:41 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ross",
"Jerret",
""
],
[
"Belgodere",
"Brian",
""
],
[
"Hoffman",
"Samuel C.",
""
],
[
"Chenthamarakshan",
"Vijil",
""
],
[
"Navratil",
"Jiri",
""
],
[
"Mroueh",
"Youssef",
""
],
[
"Das",
"Payel",
""
]
] | TITLE: GP-MoLFormer: A Foundation Model For Molecular Generation
ABSTRACT: Transformer-based models trained on large and general purpose datasets
consisting of molecular strings have recently emerged as a powerful tool for
successfully modeling various structure-property relations. Inspired by this
success, we extend the paradigm of training chemical language transformers on
large-scale chemical datasets to generative tasks in this work. Specifically,
we propose GP-MoLFormer, an autoregressive molecular string generator that is
trained on more than 1.1B (billion) chemical SMILES. GP-MoLFormer uses a 46.8M
parameter transformer decoder model with linear attention and rotary positional
encodings as the base architecture. GP-MoLFormer's utility is evaluated and
compared with that of existing baselines on three different tasks: de novo
generation, scaffold-constrained molecular decoration, and unconstrained
property-guided optimization. While the first two are handled with no
additional training, we propose a parameter-efficient fine-tuning method for
the last task, which uses property-ordered molecular pairs as input. We call
this new approach pair-tuning. Our results show GP-MoLFormer performs better
than or comparably to baselines across all three tasks, demonstrating its general
utility for a variety of molecular generation tasks. We further report strong
memorization of training data in GP-MoLFormer generations, which has so far
remained unexplored for chemical language models. Our analyses reveal that
training data memorization and novelty in generations are impacted by the
quality and scale of the training data; duplication bias in training data can
enhance memorization at the cost of lowering novelty. We further establish a
scaling law relating inference compute and novelty in generations.
|
2405.13900 | Rui Sun | Rui Sun, Haoran Duan, Jiahua Dong, Varun Ojha, Tejal Shah, Rajiv
Ranjan | Rehearsal-free Federated Domain-incremental Learning | Camera ready version. Accepted by the IEEE ICDCS, 2025 | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce a rehearsal-free federated domain incremental learning
framework, RefFiL, based on a global prompt-sharing paradigm to alleviate
catastrophic forgetting challenges in federated domain-incremental learning,
where unseen domains are continually learned. Typical methods for mitigating
forgetting, such as the use of additional datasets and the retention of private
data from earlier tasks, are not viable in federated learning (FL) due to
devices' limited resources. Our method, RefFiL, addresses this by learning
domain-invariant knowledge and incorporating various domain-specific prompts
from the domains represented by different FL participants. A key feature of
RefFiL is the generation of local fine-grained prompts by our domain adaptive
prompt generator, which effectively learns from local domain knowledge while
maintaining distinctive boundaries on a global scale. We also introduce a
domain-specific prompt contrastive learning loss that differentiates between
locally generated prompts and those from other domains, enhancing RefFiL's
precision and effectiveness. Compared to existing methods, RefFiL significantly
alleviates catastrophic forgetting without requiring extra memory space, making
it ideal for privacy-sensitive and resource-constrained devices.
| [
{
"version": "v1",
"created": "Wed, 22 May 2024 18:13:38 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 17:09:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Sun",
"Rui",
""
],
[
"Duan",
"Haoran",
""
],
[
"Dong",
"Jiahua",
""
],
[
"Ojha",
"Varun",
""
],
[
"Shah",
"Tejal",
""
],
[
"Ranjan",
"Rajiv",
""
]
] | TITLE: Rehearsal-free Federated Domain-incremental Learning
ABSTRACT: We introduce a rehearsal-free federated domain incremental learning
framework, RefFiL, based on a global prompt-sharing paradigm to alleviate
catastrophic forgetting challenges in federated domain-incremental learning,
where unseen domains are continually learned. Typical methods for mitigating
forgetting, such as the use of additional datasets and the retention of private
data from earlier tasks, are not viable in federated learning (FL) due to
devices' limited resources. Our method, RefFiL, addresses this by learning
domain-invariant knowledge and incorporating various domain-specific prompts
from the domains represented by different FL participants. A key feature of
RefFiL is the generation of local fine-grained prompts by our domain adaptive
prompt generator, which effectively learns from local domain knowledge while
maintaining distinctive boundaries on a global scale. We also introduce a
domain-specific prompt contrastive learning loss that differentiates between
locally generated prompts and those from other domains, enhancing RefFiL's
precision and effectiveness. Compared to existing methods, RefFiL significantly
alleviates catastrophic forgetting without requiring extra memory space, making
it ideal for privacy-sensitive and resource-constrained devices.
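An InfoNCE-style sketch of a domain-specific prompt contrastive loss is shown
below as an illustration only; the temperature, tensor shapes, and the use of a
single positive per anchor are assumptions rather than RefFiL's exact
formulation:
```python
# Minimal InfoNCE-style sketch: pull a locally generated prompt toward its own
# domain prompt and push it away from prompts of other domains. Illustrative
# stand-in, not RefFiL's loss.
import torch
import torch.nn.functional as F

def prompt_contrastive_loss(local_prompt, own_domain, other_domains, tau=0.1):
    """local_prompt, own_domain: (d,); other_domains: (k, d)."""
    anchors = torch.cat([own_domain.unsqueeze(0), other_domains], dim=0)
    logits = F.cosine_similarity(local_prompt.unsqueeze(0), anchors) / tau
    # Index 0 is the positive (same-domain) prompt.
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))

loss = prompt_contrastive_loss(torch.randn(128), torch.randn(128), torch.randn(5, 128))
print(float(loss))
```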
|
2406.06723 | Enshuo Hsu | Enshuo Hsu, Kirk Roberts | Leveraging Large Language Models for Knowledge-free Weak Supervision in
Clinical Natural Language Processing | null | null | 10.1038/s41598-024-68168-2 | null | cs.CL cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The performance of deep learning-based natural language processing systems is
dependent on large amounts of labeled training data which, in the clinical domain,
are not easily available or affordable. Weak supervision and in-context
learning offer partial solutions to this issue, particularly using large
language models (LLMs), but their performance still trails traditional
supervised methods with moderate amounts of gold-standard data. In particular,
inferencing with LLMs is computationally heavy. We propose an approach
leveraging fine-tuning LLMs and weak supervision with virtually no domain
knowledge that still achieves consistently dominant performance. Using a
prompt-based approach, the LLM is used to generate weakly-labeled data for
training a downstream BERT model. The weakly supervised model is then further
fine-tuned on small amounts of gold standard data. We evaluate this approach
using Llama2 on three different n2c2 datasets. With no more than 10 gold
standard notes, our final BERT models weakly supervised by fine-tuned
Llama2-13B consistently outperformed out-of-the-box PubMedBERT by 4.7% to 47.9%
in F1 scores. With only 50 gold standard notes, our models achieved close
performance to fully fine-tuned systems.
| [
{
"version": "v1",
"created": "Mon, 10 Jun 2024 18:34:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Hsu",
"Enshuo",
""
],
[
"Roberts",
"Kirk",
""
]
] | TITLE: Leveraging Large Language Models for Knowledge-free Weak Supervision in
Clinical Natural Language Processing
ABSTRACT: The performance of deep learning-based natural language processing systems is
dependent on large amounts of labeled training data which, in the clinical domain,
are not easily available or affordable. Weak supervision and in-context
learning offer partial solutions to this issue, particularly using large
language models (LLMs), but their performance still trails traditional
supervised methods with moderate amounts of gold-standard data. In particular,
inferencing with LLMs is computationally heavy. We propose an approach
leveraging fine-tuning LLMs and weak supervision with virtually no domain
knowledge that still achieves consistently dominant performance. Using a
prompt-based approach, the LLM is used to generate weakly-labeled data for
training a downstream BERT model. The weakly supervised model is then further
fine-tuned on small amounts of gold standard data. We evaluate this approach
using Llama2 on three different n2c2 datasets. With no more than 10 gold
standard notes, our final BERT models weakly supervised by fine-tuned
Llama2-13B consistently outperformed out-of-the-box PubMedBERT by 4.7% to 47.9%
in F1 scores. With only 50 gold standard notes, our models achieved close
performance to fully fine-tuned systems.
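The staging of such a pipeline can be sketched as follows; `llm_label` and
`train` are hypothetical placeholders standing in for LLM prompting and model
fine-tuning, not a real API:
```python
# Staging sketch: (1) a fine-tuned LLM labels unlabeled clinical notes,
# (2) a BERT model is trained on those weak labels, (3) the same model is
# briefly fine-tuned on a few gold-standard notes. Placeholders only.
def weakly_supervised_train(bert, unlabeled_notes, gold_data, llm_label, train):
    weak_data = [(note, llm_label(note)) for note in unlabeled_notes]  # weak labels
    bert = train(bert, weak_data)   # stage 1: weak supervision
    bert = train(bert, gold_data)   # stage 2: small gold-standard fine-tune
    return bert

# Toy stand-ins so the sketch runs end to end.
demo = weakly_supervised_train(
    bert={"weights": 0},
    unlabeled_notes=["pt denies chest pain", "started metformin 500 mg"],
    gold_data=[("pt denies chest pain", "PROBLEM")],
    llm_label=lambda note: "DRUG" if "metformin" in note else "PROBLEM",
    train=lambda model, data: {**model, "weights": model["weights"] + len(data)},
)
print(demo)
```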
|
2406.09588 | Yulong Yang | Yulong Yang, Felix O'Mahony, Christine Allen-Blanchette | Learning Color Equivariant Representations | Accept to The 13th International Conference on Learning
Representations (ICLR 2025) | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce group convolutional neural networks (GCNNs)
equivariant to color variation. GCNNs have been designed for a variety of
geometric transformations from 2D and 3D rotation groups, to semi-groups such
as scale. Despite the improved interpretability, accuracy and generalizability
of these architectures, GCNNs have seen limited application in the context of
perceptual quantities. Notably, the recent CEConv network uses a GCNN to
achieve equivariance to hue transformations by convolving input images with a
hue rotated RGB filter. However, this approach leads to invalid RGB values
which break equivariance and degrade performance. We resolve these issues with
a lifting layer that transforms the input image directly, thereby circumventing
the issue of invalid RGB values and improving equivariance error by over three
orders of magnitude. Moreover, we extend the notion of color equivariance to
include equivariance to saturation and luminance shift. Our hue-, saturation-,
luminance- and color-equivariant networks achieve strong generalization to
out-of-distribution perceptual variations and improved sample efficiency over
conventional architectures. We demonstrate the utility of our approach on
synthetic and real world datasets where we consistently outperform competitive
baselines.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 21:02:03 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Oct 2024 01:48:47 GMT"
},
{
"version": "v3",
"created": "Sun, 20 Oct 2024 23:21:45 GMT"
},
{
"version": "v4",
"created": "Sat, 1 Mar 2025 04:19:53 GMT"
},
{
"version": "v5",
"created": "Sat, 15 Mar 2025 01:52:28 GMT"
},
{
"version": "v6",
"created": "Mon, 31 Mar 2025 21:04:41 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Yang",
"Yulong",
""
],
[
"O'Mahony",
"Felix",
""
],
[
"Allen-Blanchette",
"Christine",
""
]
] | TITLE: Learning Color Equivariant Representations
ABSTRACT: In this paper, we introduce group convolutional neural networks (GCNNs)
equivariant to color variation. GCNNs have been designed for a variety of
geometric transformations from 2D and 3D rotation groups, to semi-groups such
as scale. Despite the improved interpretability, accuracy and generalizability
of these architectures, GCNNs have seen limited application in the context of
perceptual quantities. Notably, the recent CEConv network uses a GCNN to
achieve equivariance to hue transformations by convolving input images with a
hue rotated RGB filter. However, this approach leads to invalid RGB values
which break equivariance and degrade performance. We resolve these issues with
a lifting layer that transforms the input image directly, thereby circumventing
the issue of invalid RGB values and improving equivariance error by over three
orders of magnitude. Moreover, we extend the notion of color equivariance to
include equivariance to saturation and luminance shift. Our hue-, saturation-,
luminance- and color-equivariant networks achieve strong generalization to
out-of-distribution perceptual variations and improved sample efficiency over
conventional architectures. We demonstrate the utility of our approach on
synthetic and real world datasets where we consistently outperform competitive
baselines.
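One way to picture a lifting layer that transforms the image directly while
keeping every copy a valid RGB image is sketched below; the group size and the
HSV round-trip are illustrative assumptions, not the paper's implementation:
```python
# Illustrative lifting of an RGB image to a hue-rotation group dimension by
# transforming the image itself (RGB -> HSV -> shift hue -> RGB), so every
# copy stays a valid RGB image. Group size is an assumption.
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def lift_hue(image: np.ndarray, n_rotations: int = 4) -> np.ndarray:
    """image: (H, W, 3) floats in [0, 1]. Returns (n_rotations, H, W, 3)."""
    hsv = rgb_to_hsv(image)
    copies = []
    for k in range(n_rotations):
        shifted = hsv.copy()
        shifted[..., 0] = (shifted[..., 0] + k / n_rotations) % 1.0  # rotate hue
        copies.append(hsv_to_rgb(shifted))
    return np.stack(copies, axis=0)

img = np.random.rand(32, 32, 3)
print(lift_hue(img).shape)  # (4, 32, 32, 3), all values remain valid RGB in [0, 1]
```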
|
2406.11519 | Di Wang | Di Wang, Meiqi Hu, Yao Jin, Yuchun Miao, Jiaqi Yang, Yichu Xu, Xiaolei
Qin, Jiaqi Ma, Lingyu Sun, Chenxing Li, Chuan Fu, Hongruixuan Chen, Chengxi
Han, Naoto Yokoya, Jing Zhang, Minqiang Xu, Lin Liu, Lefei Zhang, Chen Wu, Bo
Du, Dacheng Tao and Liangpei Zhang | HyperSIGMA: Hyperspectral Intelligence Comprehension Foundation Model | Accepted by IEEE TPAMI. Project website:
https://whu-sigma.github.io/HyperSIGMA | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate hyperspectral image (HSI) interpretation is critical for providing
valuable insights into various earth observation-related applications such as
urban planning, precision agriculture, and environmental monitoring. However,
existing HSI processing methods are predominantly task-specific and
scene-dependent, which severely limits their ability to transfer knowledge
across tasks and scenes, thereby reducing the practicality in real-world
applications. To address these challenges, we present HyperSIGMA, a vision
transformer-based foundation model that unifies HSI interpretation across tasks
and scenes, scalable to over one billion parameters. To overcome the spectral
and spatial redundancy inherent in HSIs, we introduce a novel sparse sampling
attention (SSA) mechanism, which effectively promotes the learning of diverse
contextual features and serves as the basic block of HyperSIGMA. HyperSIGMA
integrates spatial and spectral features using a specially designed spectral
enhancement module. In addition, we construct a large-scale hyperspectral
dataset, HyperGlobal-450K, for pre-training, which contains about 450K
hyperspectral images, significantly surpassing existing datasets in scale.
Extensive experiments on various high-level and low-level HSI tasks demonstrate
HyperSIGMA's versatility and superior representational capability compared to
current state-of-the-art methods. Moreover, HyperSIGMA shows significant
advantages in scalability, robustness, cross-modal transferring capability,
real-world applicability, and computational efficiency. The code and models
will be released at https://github.com/WHU-Sigma/HyperSIGMA.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2024 13:22:58 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 15:14:22 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Di",
""
],
[
"Hu",
"Meiqi",
""
],
[
"Jin",
"Yao",
""
],
[
"Miao",
"Yuchun",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Xu",
"Yichu",
""
],
[
"Qin",
"Xiaolei",
""
],
[
"Ma",
"Jiaqi",
""
],
[
"Sun",
"Lingyu",
""
],
[
"Li",
"Chenxing",
""
],
[
"Fu",
"Chuan",
""
],
[
"Chen",
"Hongruixuan",
""
],
[
"Han",
"Chengxi",
""
],
[
"Yokoya",
"Naoto",
""
],
[
"Zhang",
"Jing",
""
],
[
"Xu",
"Minqiang",
""
],
[
"Liu",
"Lin",
""
],
[
"Zhang",
"Lefei",
""
],
[
"Wu",
"Chen",
""
],
[
"Du",
"Bo",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Zhang",
"Liangpei",
""
]
] | TITLE: HyperSIGMA: Hyperspectral Intelligence Comprehension Foundation Model
ABSTRACT: Accurate hyperspectral image (HSI) interpretation is critical for providing
valuable insights into various earth observation-related applications such as
urban planning, precision agriculture, and environmental monitoring. However,
existing HSI processing methods are predominantly task-specific and
scene-dependent, which severely limits their ability to transfer knowledge
across tasks and scenes, thereby reducing the practicality in real-world
applications. To address these challenges, we present HyperSIGMA, a vision
transformer-based foundation model that unifies HSI interpretation across tasks
and scenes, scalable to over one billion parameters. To overcome the spectral
and spatial redundancy inherent in HSIs, we introduce a novel sparse sampling
attention (SSA) mechanism, which effectively promotes the learning of diverse
contextual features and serves as the basic block of HyperSIGMA. HyperSIGMA
integrates spatial and spectral features using a specially designed spectral
enhancement module. In addition, we construct a large-scale hyperspectral
dataset, HyperGlobal-450K, for pre-training, which contains about 450K
hyperspectral images, significantly surpassing existing datasets in scale.
Extensive experiments on various high-level and low-level HSI tasks demonstrate
HyperSIGMA's versatility and superior representational capability compared to
current state-of-the-art methods. Moreover, HyperSIGMA shows significant
advantages in scalability, robustness, cross-modal transferring capability,
real-world applicability, and computational efficiency. The code and models
will be released at https://github.com/WHU-Sigma/HyperSIGMA.
|
2406.18012 | Subin Varghese | Subin Varghese, Vedhus Hoskere | View-Invariant Pixelwise Anomaly Detection in Multi-object Scenes with
Adaptive View Synthesis | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual anomaly detection in the built environment is a valuable tool for
applications such as infrastructure assessment, construction monitoring,
security surveillance, and urban planning. Anomaly detection approaches are
typically unsupervised and work by detecting deviations from an expected state
where no assumptions are made about the exact type of deviation. Unsupervised pixel-level
anomaly detection methods have been developed to successfully recognize and
segment anomalies; however, existing techniques are designed for industrial
settings with a fixed camera position. In the built environment, images are
periodically captured by a camera operated manually or mounted on aerial or
ground vehicles. The camera pose between successive collections may vary
widely, voiding a fundamental assumption in existing anomaly detection approaches. To
address this gap, we introduce the problem of Scene Anomaly Detection (Scene
AD), where the goal is to detect anomalies from two sets of images: one set
without anomalies and one set that may or may not contain anomalies. No labeled
semantic segmentation data are provided for training. We propose a novel
network, OmniAD, to tackle Scene AD by refining the reverse distillation
anomaly detection method, leading to a 40\% improvement in pixel-level anomaly
detection. Additionally, we introduce two new data augmentation strategies that
leverage novel view synthesis and camera localization to enhance
generalization. We evaluate our approach both qualitatively and quantitatively
on a new dataset, ToyCity, the first Scene AD dataset featuring multiple
objects, as well as on the established single-object-centric dataset, MAD. Our method
demonstrates marked improvement over baseline approaches, paving the way for
robust anomaly detection in scenes with real-world camera pose variations
commonly observed in the built environment. https://drags99.github.io/OmniAD/
| [
{
"version": "v1",
"created": "Wed, 26 Jun 2024 01:54:10 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 00:59:21 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Varghese",
"Subin",
""
],
[
"Hoskere",
"Vedhus",
""
]
] | TITLE: View-Invariant Pixelwise Anomaly Detection in Multi-object Scenes with
Adaptive View Synthesis
ABSTRACT: Visual anomaly detection in the built environment is a valuable tool for
applications such as infrastructure assessment, construction monitoring,
security surveillance, and urban planning. Anomaly detection approaches are
typically unsupervised and work by detecting deviations from an expected state
where no assumptions are made about the exact type of deviation. Unsupervised pixel-level
anomaly detection methods have been developed to successfully recognize and
segment anomalies; however, existing techniques are designed for industrial
settings with a fixed camera position. In the built environment, images are
periodically captured by a camera operated manually or mounted on aerial or
ground vehicles. The camera pose between successive collections may vary
widely, voiding a fundamental assumption in existing anomaly detection approaches. To
address this gap, we introduce the problem of Scene Anomaly Detection (Scene
AD), where the goal is to detect anomalies from two sets of images: one set
without anomalies and one set that may or may not contain anomalies. No labeled
semantic segmentation data are provided for training. We propose a novel
network, OmniAD, to tackle Scene AD by refining the reverse distillation
anomaly detection method, leading to a 40\% improvement in pixel-level anomaly
detection. Additionally, we introduce two new data augmentation strategies that
leverage novel view synthesis and camera localization to enhance
generalization. We evaluate our approach both qualitatively and quantitatively
on a new dataset, ToyCity, the first Scene AD dataset featuring multiple
objects, as well as on the established single-object-centric dataset, MAD. Our method
demonstrates marked improvement over baseline approaches, paving the way for
robust anomaly detection in scenes with real-world camera pose variations
commonly observed in the built environment. https://drags99.github.io/OmniAD/
|
2407.06501 | Melanie Subbiah | Melanie Subbiah, Faisal Ladhak, Akankshya Mishra, Griffin Adams, Lydia
B. Chilton, Kathleen McKeown | STORYSUMM: Evaluating Faithfulness in Story Summarization | EMNLP Main 2024 | null | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Human evaluation has been the gold standard for checking faithfulness in
abstractive summarization. However, with a challenging source domain like
narrative, multiple annotators can agree a summary is faithful, while missing
details that are obvious errors only once pointed out. We therefore introduce a
new dataset, STORYSUMM, comprising LLM summaries of short stories with
localized faithfulness labels and error explanations. This benchmark is for
evaluation methods, testing whether a given method can detect challenging
inconsistencies. Using this dataset, we first show that any one human
annotation protocol is likely to miss inconsistencies, and we advocate for
pursuing a range of methods when establishing ground truth for a summarization
dataset. We finally test recent automatic metrics and find that none of them
achieve more than 70% balanced accuracy on this task, demonstrating that it is
a challenging benchmark for future work in faithfulness evaluation.
| [
{
"version": "v1",
"created": "Tue, 9 Jul 2024 02:06:30 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Nov 2024 00:42:46 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 16:54:54 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Subbiah",
"Melanie",
""
],
[
"Ladhak",
"Faisal",
""
],
[
"Mishra",
"Akankshya",
""
],
[
"Adams",
"Griffin",
""
],
[
"Chilton",
"Lydia B.",
""
],
[
"McKeown",
"Kathleen",
""
]
] | TITLE: STORYSUMM: Evaluating Faithfulness in Story Summarization
ABSTRACT: Human evaluation has been the gold standard for checking faithfulness in
abstractive summarization. However, with a challenging source domain like
narrative, multiple annotators can agree a summary is faithful, while missing
details that are obvious errors only once pointed out. We therefore introduce a
new dataset, STORYSUMM, comprising LLM summaries of short stories with
localized faithfulness labels and error explanations. This benchmark is for
evaluation methods, testing whether a given method can detect challenging
inconsistencies. Using this dataset, we first show that any one human
annotation protocol is likely to miss inconsistencies, and we advocate for
pursuing a range of methods when establishing ground truth for a summarization
dataset. We finally test recent automatic metrics and find that none of them
achieve more than 70% balanced accuracy on this task, demonstrating that it is
a challenging benchmark for future work in faithfulness evaluation.
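For reference, the reported evaluation target, balanced accuracy over binary
faithfulness labels, can be computed as in the toy sketch below; the labels are
made up and are not STORYSUMM data:
```python
# Balanced accuracy of a faithfulness detector over binary labels
# (1 = faithful, 0 = inconsistent). Toy labels only.
from sklearn.metrics import balanced_accuracy_score

gold = [1, 1, 0, 0, 1, 0, 1, 0]
pred = [1, 1, 1, 0, 1, 0, 0, 0]   # detector output for each summary
print(balanced_accuracy_score(gold, pred))  # averages recall over both classes
```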
|
2407.07408 | Yuexuan Kong | Yuexuan Kong, Vincent Lostanlen, Gabriel Meseguer-Brocal, Stella Wong,
Mathieu Lagrange, Romain Hennequin | STONE: Self-supervised Tonality Estimator | null | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Although deep neural networks can estimate the key of a musical piece, their
supervision incurs a massive annotation effort. Against this shortcoming, we
present STONE, the first self-supervised tonality estimator. The architecture
behind STONE, named ChromaNet, is a convnet with octave equivalence which
outputs a key signature profile (KSP) of 12 structured logits. First, we train
ChromaNet to regress artificial pitch transpositions between any two unlabeled
musical excerpts from the same audio track, as measured by cross-power spectral
density (CPSD) within the circle of fifths (CoF). We observe that this
self-supervised pretext task leads KSP to correlate with tonal key signature.
Based on this observation, we extend STONE to output a structured KSP of 24
logits, and introduce supervision so as to disambiguate major versus minor keys
sharing the same key signature. Applying different amounts of supervision
yields semi-supervised and fully supervised tonality estimators: i.e.,
Semi-TONEs and Sup-TONEs. We evaluate these estimators on FMAK, a new dataset
of 5489 real-world musical recordings with expert annotation of 24 major and
minor keys. We find that Semi-TONE matches the classification accuracy of
Sup-TONE with reduced supervision and outperforms it with equal supervision.
| [
{
"version": "v1",
"created": "Wed, 10 Jul 2024 07:09:56 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jul 2024 21:52:45 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Aug 2024 09:31:44 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 14:28:21 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Kong",
"Yuexuan",
""
],
[
"Lostanlen",
"Vincent",
""
],
[
"Meseguer-Brocal",
"Gabriel",
""
],
[
"Wong",
"Stella",
""
],
[
"Lagrange",
"Mathieu",
""
],
[
"Hennequin",
"Romain",
""
]
] | TITLE: STONE: Self-supervised Tonality Estimator
ABSTRACT: Although deep neural networks can estimate the key of a musical piece, their
supervision incurs a massive annotation effort. Against this shortcoming, we
present STONE, the first self-supervised tonality estimator. The architecture
behind STONE, named ChromaNet, is a convnet with octave equivalence which
outputs a key signature profile (KSP) of 12 structured logits. First, we train
ChromaNet to regress artificial pitch transpositions between any two unlabeled
musical excerpts from the same audio track, as measured by cross-power spectral
density (CPSD) within the circle of fifths (CoF). We observe that this
self-supervised pretext task leads KSP to correlate with tonal key signature.
Based on this observation, we extend STONE to output a structured KSP of 24
logits, and introduce supervision so as to disambiguate major versus minor keys
sharing the same key signature. Applying different amounts of supervision
yields semi-supervised and fully supervised tonality estimators: i.e.,
Semi-TONEs and Sup-TONEs. We evaluate these estimators on FMAK, a new dataset
of 5489 real-world musical recordings with expert annotation of 24 major and
minor keys. We find that Semi-TONE matches the classification accuracy of
Sup-TONE with reduced supervision and outperforms it with equal supervision.
|
2407.08035 | Yongjian Tang | Yongjian Tang, Rakebul Hasan and Thomas Runkler | FsPONER: Few-shot Prompt Optimization for Named Entity Recognition in
Domain-specific Scenarios | accepted in the main track at the 27th European Conference on
Artificial Intelligence (ECAI-2024) | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have provided a new pathway for Named Entity
Recognition (NER) tasks. Compared with fine-tuning, LLM-powered prompting
methods avoid the need for training, conserve substantial computational
resources, and rely on minimal annotated data. Previous studies have achieved
comparable performance to fully supervised BERT-based fine-tuning approaches on
general NER benchmarks. However, none of the previous approaches has
investigated the efficiency of LLM-based few-shot learning in domain-specific
scenarios. To address this gap, we introduce FsPONER, a novel approach for
optimizing few-shot prompts, and evaluate its performance on domain-specific
NER datasets, with a focus on industrial manufacturing and maintenance, while
using multiple LLMs -- GPT-4-32K, GPT-3.5-Turbo, LLaMA 2-chat, and Vicuna.
FsPONER consists of three few-shot selection methods based on random sampling,
TF-IDF vectors, and a combination of both. We compare these methods with a
general-purpose GPT-NER method as the number of few-shot examples increases and
evaluate their optimal NER performance against fine-tuned BERT and LLaMA
2-chat. In the considered real-world scenarios with data scarcity, FsPONER with
TF-IDF surpasses fine-tuned models by approximately 10% in F1 score.
| [
{
"version": "v1",
"created": "Wed, 10 Jul 2024 20:32:50 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 10:19:16 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Tang",
"Yongjian",
""
],
[
"Hasan",
"Rakebul",
""
],
[
"Runkler",
"Thomas",
""
]
] | TITLE: FsPONER: Few-shot Prompt Optimization for Named Entity Recognition in
Domain-specific Scenarios
ABSTRACT: Large Language Models (LLMs) have provided a new pathway for Named Entity
Recognition (NER) tasks. Compared with fine-tuning, LLM-powered prompting
methods avoid the need for training, conserve substantial computational
resources, and rely on minimal annotated data. Previous studies have achieved
comparable performance to fully supervised BERT-based fine-tuning approaches on
general NER benchmarks. However, none of the previous approaches has
investigated the efficiency of LLM-based few-shot learning in domain-specific
scenarios. To address this gap, we introduce FsPONER, a novel approach for
optimizing few-shot prompts, and evaluate its performance on domain-specific
NER datasets, with a focus on industrial manufacturing and maintenance, while
using multiple LLMs -- GPT-4-32K, GPT-3.5-Turbo, LLaMA 2-chat, and Vicuna.
FsPONER consists of three few-shot selection methods based on random sampling,
TF-IDF vectors, and a combination of both. We compare these methods with a
general-purpose GPT-NER method as the number of few-shot examples increases and
evaluate their optimal NER performance against fine-tuned BERT and LLaMA
2-chat. In the considered real-world scenarios with data scarcity, FsPONER with
TF-IDF surpasses fine-tuned models by approximately 10% in F1 score.
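A small sketch of the TF-IDF-based few-shot selection idea follows; the number
of examples and the toy sentences are assumptions for illustration:
```python
# Score annotated training sentences against the input sentence and keep the
# top-k as in-context examples. k and the sentences are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_sentences = [
    "Replace the hydraulic pump on line 3.",
    "The conveyor motor M-204 overheated during the night shift.",
    "Order a new gasket for the compressor unit.",
]
query = "The coolant pump on line 3 is leaking."

vectorizer = TfidfVectorizer().fit(train_sentences + [query])
train_vecs = vectorizer.transform(train_sentences)
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, train_vecs)[0]
k = 2
few_shot = [train_sentences[i] for i in scores.argsort()[::-1][:k]]
print(few_shot)  # most TF-IDF-similar sentences become in-context NER examples
```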
|
2407.10380 | Tushar Kataria | Pranshu Pandya, Vatsal Gupta, Agney S Talwarr, Tushar Kataria, Dan
Roth, Vivek Gupta | NTSEBENCH: Cognitive Reasoning Benchmark for Vision Language Models | 28 pages, 3 figures, 12 tables | null | null | null | cs.CV cs.AI cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cognitive textual and visual reasoning tasks, including puzzles, series, and
analogies, demand the ability to quickly reason, decipher, and evaluate
patterns both textually and spatially. Due to extensive training on vast
amounts of human-curated data, LLMs and VLMs excel in common-sense reasoning
tasks; however, they still struggle with more complex reasoning that demands deeper
cognitive understanding. We introduce NTSEBench, a new dataset designed to
evaluate cognitive multi-modal reasoning and problem-solving skills of large
models. The dataset contains 2728 multiple-choice questions, accompanied by a
total of 4,642 images, categorized into 26 different types. These questions are
drawn from the nationwide NTSE examination in India and feature a mix of visual
and textual general aptitude challenges, designed to assess intelligence and
critical thinking skills beyond mere rote learning. We establish baselines on
the dataset using state-of-the-art LLMs and VLMs. To facilitate a comparison
between open-source and proprietary models, we propose four distinct modeling
strategies to handle different modalities -- text and images -- in the dataset
instances.
| [
{
"version": "v1",
"created": "Mon, 15 Jul 2024 01:21:56 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Jan 2025 03:15:39 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 17:25:53 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Pandya",
"Pranshu",
""
],
[
"Gupta",
"Vatsal",
""
],
[
"Talwarr",
"Agney S",
""
],
[
"Kataria",
"Tushar",
""
],
[
"Roth",
"Dan",
""
],
[
"Gupta",
"Vivek",
""
]
] | TITLE: NTSEBENCH: Cognitive Reasoning Benchmark for Vision Language Models
ABSTRACT: Cognitive textual and visual reasoning tasks, including puzzles, series, and
analogies, demand the ability to quickly reason, decipher, and evaluate
patterns both textually and spatially. Due to extensive training on vast
amounts of human-curated data, LLMs and VLMs excel in common-sense reasoning
tasks, but still struggle with more complex reasoning that demands deeper
cognitive understanding. We introduce NTSEBench, a new dataset designed to
evaluate cognitive multi-modal reasoning and problem-solving skills of large
models. The dataset contains 2728 multiple-choice questions, accompanied by a
total of 4,642 images, categorized into 26 different types. These questions are
drawn from the nationwide NTSE examination in India and feature a mix of visual
and textual general aptitude challenges, designed to assess intelligence and
critical thinking skills beyond mere rote learning. We establish baselines on
the dataset using state-of-the-art LLMs and VLMs. To facilitate a comparison
between open-source and proprietary models, we propose four distinct modeling
strategies to handle different modalities -- text and images -- in the dataset
instances.
|
2407.12481 | Rahul Kumar | Rahul Kumar, Shubham Kakde, Divyansh Rajput, Daud Ibrahim, Rishabh
Nahata, Pidathala Sowjanya, Deepak Kumarr, Gautam Bhargava, Chandra Khatri | Krutrim LLM: A Novel Tokenization Strategy for Multilingual Indic
Languages with Petabyte-Scale Data Processing | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present a novel approach to data preparation for developing multilingual
Indic large language models. Our meticulous data acquisition spans open-source
and proprietary sources, including Common Crawl, Indic books, news articles,
and Wikipedia, ensuring a diverse and rich linguistic representation. For each
Indic language, we design a custom preprocessing pipeline to effectively
eliminate redundant and low-quality text content. Additionally, we perform
deduplication on Common Crawl data to address the redundancy present in 70% of
the crawled web pages. This study focuses on developing high-quality data,
optimizing tokenization for our multilingual dataset for Indic large language
models with 3B and 7B parameters, engineered for superior performance in Indic
languages. We introduce a novel multilingual tokenizer training strategy,
demonstrating our custom-trained Indic tokenizer outperforms the
state-of-the-art OpenAI Tiktoken tokenizer, achieving a superior token-to-word
ratio for Indic languages.
| [
{
"version": "v1",
"created": "Wed, 17 Jul 2024 11:06:27 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 15:16:34 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Kumar",
"Rahul",
""
],
[
"Kakde",
"Shubham",
""
],
[
"Rajput",
"Divyansh",
""
],
[
"Ibrahim",
"Daud",
""
],
[
"Nahata",
"Rishabh",
""
],
[
"Sowjanya",
"Pidathala",
""
],
[
"Kumarr",
"Deepak",
""
],
[
"Bhargava",
"Gautam",
""
],
[
"Khatri",
"Chandra",
""
]
] | TITLE: Krutrim LLM: A Novel Tokenization Strategy for Multilingual Indic
Languages with Petabyte-Scale Data Processing
ABSTRACT: We present a novel approach to data preparation for developing multilingual
Indic large language models. Our meticulous data acquisition spans open-source
and proprietary sources, including Common Crawl, Indic books, news articles,
and Wikipedia, ensuring a diverse and rich linguistic representation. For each
Indic language, we design a custom preprocessing pipeline to effectively
eliminate redundant and low-quality text content. Additionally, we perform
deduplication on Common Crawl data to address the redundancy present in 70% of
the crawled web pages. This study focuses on developing high-quality data,
optimizing tokenization for our multilingual dataset for Indic large language
models with 3B and 7B parameters, engineered for superior performance in Indic
languages. We introduce a novel multilingual tokenizer training strategy,
demonstrating our custom-trained Indic tokenizer outperforms the
state-of-the-art OpenAI Tiktoken tokenizer, achieving a superior token-to-word
ratio for Indic languages.
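To make the token-to-word ratio mentioned in this abstract concrete, the sketch below computes the average number of tokens produced per whitespace-separated word for any tokenizer callable. The placeholder tokenizers and the two Hindi sentences are assumptions for illustration; plug in a real encoder (e.g. a SentencePiece or Tiktoken `encode`) to reproduce such a comparison.

```python
# Minimal sketch of the token-to-word ("fertility") ratio used to compare
# tokenizers on Indic text. The tokenize callables are placeholders.
def token_to_word_ratio(tokenize, sentences):
    """Average number of tokens produced per whitespace-separated word."""
    total_tokens = sum(len(tokenize(s)) for s in sentences)
    total_words = sum(len(s.split()) for s in sentences)
    return total_tokens / max(total_words, 1)


if __name__ == "__main__":
    corpus = ["यह एक उदाहरण वाक्य है", "मौसम आज बहुत अच्छा है"]
    naive_char_tokenizer = list          # worst case: one token per character
    naive_word_tokenizer = str.split     # best case: one token per word
    print(token_to_word_ratio(naive_char_tokenizer, corpus))
    print(token_to_word_ratio(naive_word_tokenizer, corpus))
```

A lower ratio means fewer tokens per word, which is the sense in which the custom tokenizer is reported to outperform Tiktoken here.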
|
2407.12787 | Matthew Barthet | Matthew Barthet, Maria Kaselimi, Kosmas Pinitas, Konstantinos
Makantasis, Antonios Liapis, Georgios N. Yannakakis | GameVibe: A Multimodal Affective Game Corpus | 12 pages, 5 figures, 1 table | null | 10.1038/s41597-024-04022-4 | null | cs.HC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As online video and streaming platforms continue to grow, affective computing
research has undergone a shift towards more complex studies involving multiple
modalities. However, there is still a lack of readily available datasets with
high-quality audiovisual stimuli. In this paper, we present GameVibe, a novel
affect corpus which consists of multimodal audiovisual stimuli, including
in-game behavioural observations and third-person affect traces for viewer
engagement. The corpus consists of videos from a diverse set of publicly
available gameplay sessions across 30 games, with particular attention to
ensure high-quality stimuli with good audiovisual and gameplay diversity.
Furthermore, we present an analysis of the reliability of the annotators in
terms of inter-annotator agreement.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2024 10:52:52 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 09:14:18 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Barthet",
"Matthew",
""
],
[
"Kaselimi",
"Maria",
""
],
[
"Pinitas",
"Kosmas",
""
],
[
"Makantasis",
"Konstantinos",
""
],
[
"Liapis",
"Antonios",
""
],
[
"Yannakakis",
"Georgios N.",
""
]
] | TITLE: GameVibe: A Multimodal Affective Game Corpus
ABSTRACT: As online video and streaming platforms continue to grow, affective computing
research has undergone a shift towards more complex studies involving multiple
modalities. However, there is still a lack of readily available datasets with
high-quality audiovisual stimuli. In this paper, we present GameVibe, a novel
affect corpus which consists of multimodal audiovisual stimuli, including
in-game behavioural observations and third-person affect traces for viewer
engagement. The corpus consists of videos from a diverse set of publicly
available gameplay sessions across 30 games, with particular attention to
ensure high-quality stimuli with good audiovisual and gameplay diversity.
Furthermore, we present an analysis of the reliability of the annotators in
terms of inter-annotator agreement.
|
2407.15240 | Hanjun Luo | Hanjun Luo, Haoyu Huang, Ziye Deng, Xinfeng Li, Hewei Wang, Yingbin
Jin, Yang Liu, Wenyuan Xu, Zuozhu Liu | BIGbench: A Unified Benchmark for Evaluating Multi-dimensional Social
Biases in Text-to-Image Models | arXiv admin note: substantial text overlap with arXiv:2405.17814 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-Image (T2I) generative models are becoming increasingly crucial due
to their ability to generate high-quality images, but also raise concerns about
social biases, particularly in human image generation. Sociological research
has established systematic classifications of bias. Yet, existing studies on
bias in T2I models largely conflate different types of bias, impeding
methodological progress. In this paper, we introduce BIGbench, a unified
benchmark for Biases of Image Generation, featuring a carefully designed
dataset. Unlike existing benchmarks, BIGbench classifies and evaluates biases
across four dimensions to enable a more granular evaluation and deeper
analysis. Furthermore, BIGbench applies advanced multi-modal large language
models to achieve fully automated and highly accurate evaluations. We apply
BIGbench to evaluate eight representative T2I models and three debiasing
methods. Our human evaluation results by trained evaluators from different
races underscore BIGbench's effectiveness in aligning images and identifying
various biases. Moreover, our study also reveals new research directions about
biases with insightful analysis of our results. Our work is openly accessible
at https://github.com/BIGbench2024/BIGbench2024/.
| [
{
"version": "v1",
"created": "Sun, 21 Jul 2024 18:09:40 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Jul 2024 12:13:42 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Aug 2024 05:53:16 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Feb 2025 09:52:19 GMT"
},
{
"version": "v5",
"created": "Wed, 26 Feb 2025 03:39:38 GMT"
},
{
"version": "v6",
"created": "Mon, 31 Mar 2025 17:33:40 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Luo",
"Hanjun",
""
],
[
"Huang",
"Haoyu",
""
],
[
"Deng",
"Ziye",
""
],
[
"Li",
"Xinfeng",
""
],
[
"Wang",
"Hewei",
""
],
[
"Jin",
"Yingbin",
""
],
[
"Liu",
"Yang",
""
],
[
"Xu",
"Wenyuan",
""
],
[
"Liu",
"Zuozhu",
""
]
] | TITLE: BIGbench: A Unified Benchmark for Evaluating Multi-dimensional Social
Biases in Text-to-Image Models
ABSTRACT: Text-to-Image (T2I) generative models are becoming increasingly crucial due
to their ability to generate high-quality images, but also raise concerns about
social biases, particularly in human image generation. Sociological research
has established systematic classifications of bias. Yet, existing studies on
bias in T2I models largely conflate different types of bias, impeding
methodological progress. In this paper, we introduce BIGbench, a unified
benchmark for Biases of Image Generation, featuring a carefully designed
dataset. Unlike existing benchmarks, BIGbench classifies and evaluates biases
across four dimensions to enable a more granular evaluation and deeper
analysis. Furthermore, BIGbench applies advanced multi-modal large language
models to achieve fully automated and highly accurate evaluations. We apply
BIGbench to evaluate eight representative T2I models and three debiasing
methods. Our human evaluation results by trained evaluators from different
races underscore BIGbench's effectiveness in aligning images and identifying
various biases. Moreover, our study also reveals new research directions about
biases with insightful analysis of our results. Our work is openly accessible
at https://github.com/BIGbench2024/BIGbench2024/.
|
2408.02900 | Yunfei Xie | Yunfei Xie, Ce Zhou, Lang Gao, Juncheng Wu, Xianhang Li, Hong-Yu Zhou,
Sheng Liu, Lei Xing, James Zou, Cihang Xie, Yuyin Zhou | MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular
Annotations for Medicine | The dataset is publicly available at
https://yunfeixie233.github.io/MedTrinity-25M/. Accepted to ICLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces MedTrinity-25M, a comprehensive, large-scale multimodal
dataset for medicine, covering over 25 million images across 10 modalities with
multigranular annotations for more than 65 diseases. These multigranular
annotations encompass both global information, such as modality and organ
detection, and local information like ROI analysis, lesion texture, and
region-wise correlations. Unlike the existing multimodal datasets, which are
limited by the availability of image-text pairs, we have developed the first
automated pipeline that scales up multimodal data by generating multigranular
visual and textual annotations in the form of image-ROI-description triplets
without the need for any paired text descriptions. Specifically, data from over
30 different sources have been collected, preprocessed, and grounded using
domain-specific expert models to identify ROIs related to abnormal regions. We
then build a comprehensive knowledge base and prompt multimodal large language
models to perform retrieval-augmented generation with the identified ROIs as
guidance, resulting in multigranular textual descriptions. Compared to existing
datasets, MedTrinity-25M provides the most enriched annotations, supporting a
comprehensive range of multimodal tasks such as captioning and report
generation, as well as vision-centric tasks like classification and
segmentation. We propose LLaVA-Tri by pretraining LLaVA on MedTrinity-25M,
achieving state-of-the-art performance on VQA-RAD, SLAKE, and PathVQA,
surpassing representative SOTA multimodal large language models. Furthermore,
MedTrinity-25M can also be utilized to support large-scale pre-training of
multimodal medical AI models, contributing to the development of future
foundation models in the medical domain. We will make our dataset available.
| [
{
"version": "v1",
"created": "Tue, 6 Aug 2024 02:09:35 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 18:11:59 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xie",
"Yunfei",
""
],
[
"Zhou",
"Ce",
""
],
[
"Gao",
"Lang",
""
],
[
"Wu",
"Juncheng",
""
],
[
"Li",
"Xianhang",
""
],
[
"Zhou",
"Hong-Yu",
""
],
[
"Liu",
"Sheng",
""
],
[
"Xing",
"Lei",
""
],
[
"Zou",
"James",
""
],
[
"Xie",
"Cihang",
""
],
[
"Zhou",
"Yuyin",
""
]
] | TITLE: MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular
Annotations for Medicine
ABSTRACT: This paper introduces MedTrinity-25M, a comprehensive, large-scale multimodal
dataset for medicine, covering over 25 million images across 10 modalities with
multigranular annotations for more than 65 diseases. These multigranular
annotations encompass both global information, such as modality and organ
detection, and local information like ROI analysis, lesion texture, and
region-wise correlations. Unlike the existing multimodal datasets, which are
limited by the availability of image-text pairs, we have developed the first
automated pipeline that scales up multimodal data by generating multigranular
visual and textual annotations in the form of image-ROI-description triplets
without the need for any paired text descriptions. Specifically, data from over
30 different sources have been collected, preprocessed, and grounded using
domain-specific expert models to identify ROIs related to abnormal regions. We
then build a comprehensive knowledge base and prompt multimodal large language
models to perform retrieval-augmented generation with the identified ROIs as
guidance, resulting in multigranular textual descriptions. Compared to existing
datasets, MedTrinity-25M provides the most enriched annotations, supporting a
comprehensive range of multimodal tasks such as captioning and report
generation, as well as vision-centric tasks like classification and
segmentation. We propose LLaVA-Tri by pretraining LLaVA on MedTrinity-25M,
achieving state-of-the-art performance on VQA-RAD, SLAKE, and PathVQA,
surpassing representative SOTA multimodal large language models. Furthermore,
MedTrinity-25M can also be utilized to support large-scale pre-training of
multimodal medical AI models, contributing to the development of future
foundation models in the medical domain. We will make our dataset available.
|
2408.04692 | Inmaculada Santamaria-Valenzuela | Inmaculada Santamaria-Valenzuela, Victor Rodriguez-Fernandez, David
Camacho | Exploring Scalability in Large-Scale Time Series in DeepVATS framework | Admitted pending publication in Lecture Notes in Network and Systems
(LNNS) series (Springer). Code available at
https://github.com/vrodriguezf/deepvats | The 13th Conference on Information Technology and its
Applications, Lecture Notes in Networks and Systems, vol. 937, Springer,
Cham, 2024, pp. 244-255 | 10.1007/978-3-031-74127-2_21 | The 13th Conference on Information Technology and its Applications | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Visual analytics is essential for studying large time series due to its
ability to reveal trends, anomalies, and insights. DeepVATS is a tool that
merges Deep Learning (Deep) with Visual Analytics (VA) for the analysis of
large time series data (TS). It has three interconnected modules. The Deep
Learning module, developed in R, manages the loading of datasets and Deep Learning
models from and to the Storage module. This module also supports model
training and the acquisition of the embeddings from the latent space of the
trained model. The Storage module operates using the Weights and Biases system.
Subsequently, these embeddings can be analyzed in the Visual Analytics module.
This module, based on an R Shiny application, allows the adjustment of the
parameters related to the projection and clustering of the embeddings space.
Once these parameters are set, interactive plots representing both the
embeddings and the time series are shown. This paper introduces the tool and
examines its scalability through log analytics. The execution time evolution is
examined while the length of the time series is varied. This is achieved by
resampling a large data series into smaller subsets and logging the main
execution and rendering times for later analysis of scalability.
| [
{
"version": "v1",
"created": "Thu, 8 Aug 2024 15:30:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Santamaria-Valenzuela",
"Inmaculada",
""
],
[
"Rodriguez-Fernandez",
"Victor",
""
],
[
"Camacho",
"David",
""
]
] | TITLE: Exploring Scalability in Large-Scale Time Series in DeepVATS framework
ABSTRACT: Visual analytics is essential for studying large time series due to its
ability to reveal trends, anomalies, and insights. DeepVATS is a tool that
merges Deep Learning (Deep) with Visual Analytics (VA) for the analysis of
large time series data (TS). It has three interconnected modules. The Deep
Learning module, developed in R, manages the loading of datasets and Deep Learning
models from and to the Storage module. This module also supports model
training and the acquisition of the embeddings from the latent space of the
trained model. The Storage module operates using the Weights and Biases system.
Subsequently, these embeddings can be analyzed in the Visual Analytics module.
This module, based on an R Shiny application, allows the adjustment of the
parameters related to the projection and clustering of the embeddings space.
Once these parameters are set, interactive plots representing both the
embeddings and the time series are shown. This paper introduces the tool and
examines its scalability through log analytics. The execution time evolution is
examined while the length of the time series is varied. This is achieved by
resampling a large data series into smaller subsets and logging the main
execution and rendering times for later analysis of scalability.
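The scalability experiment described here (resample a long series into smaller subsets, then log execution times) can be sketched as below. Note the actual tool is R/Shiny based; this is a hypothetical Python analogue, and `project` is only a stand-in for the real embedding-projection step.

```python
# Hypothetical Python analogue of the resample-and-time scalability experiment.
# project() is a placeholder for the tool's real projection/clustering step.
import time
import numpy as np


def project(series):
    # Stand-in projection: windowed embedding followed by a random 2-D map.
    windows = np.lib.stride_tricks.sliding_window_view(series, 32)
    return windows @ np.random.randn(32, 2)


full_series = np.random.randn(200_000)
for length in (10_000, 50_000, 200_000):
    idx = np.linspace(0, len(full_series) - 1, length).astype(int)
    subset = full_series[idx]          # resampled, smaller subset
    start = time.perf_counter()
    project(subset)
    elapsed = time.perf_counter() - start
    print(f"length={length:>8,d}  projection_time={elapsed:.3f}s")
```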
|
2408.06621 | Sungmin Cha | Sungmin Cha, Sungjun Cho, Dasol Hwang, and Moontae Lee | Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs | ICLR 2025 camera-ready version | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have demonstrated strong reasoning and
memorization capabilities via pretraining on massive textual corpora. However,
this poses risks of privacy and copyright violations, highlighting the need for
efficient machine unlearning methods that remove sensitive data without
retraining from scratch. While Gradient Ascent (GA) is commonly used to unlearn
by reducing the likelihood of generating unwanted content, it leads to unstable
optimization and catastrophic forgetting of retained knowledge. We find that
combining GA with low-rank adaptation results in poor trade-offs between
computational cost and generative performance. To address these challenges, we
propose Low-rank Knowledge Unlearning (LoKU), a novel framework that enables
robust and efficient unlearning for LLMs. First, we introduce Inverted Hinge
Loss, which suppresses unwanted tokens while maintaining fluency by boosting
the probability of the next most likely token. Second, we develop a
data-adaptive initialization for LoRA adapters via low-rank approximation
weighted with relative Fisher information, thereby focusing updates on
parameters critical for removing targeted knowledge. Experiments on the
Training Data Extraction Challenge dataset using GPT-Neo models as well as on
the TOFU benchmark with Phi-1.5B and Llama2-7B models demonstrate that our
approach effectively removes sensitive information while maintaining reasoning
and generative capabilities with minimal impact. Our implementation can be
found in https://github.com/csm9493/efficient-llm-unlearning.
| [
{
"version": "v1",
"created": "Tue, 13 Aug 2024 04:18:32 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Oct 2024 19:03:38 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Mar 2025 17:36:12 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 12:53:30 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Cha",
"Sungmin",
""
],
[
"Cho",
"Sungjun",
""
],
[
"Hwang",
"Dasol",
""
],
[
"Lee",
"Moontae",
""
]
] | TITLE: Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs
ABSTRACT: Large Language Models (LLMs) have demonstrated strong reasoning and
memorization capabilities via pretraining on massive textual corpora. However,
this poses risks of privacy and copyright violations, highlighting the need for
efficient machine unlearning methods that remove sensitive data without
retraining from scratch. While Gradient Ascent (GA) is commonly used to unlearn
by reducing the likelihood of generating unwanted content, it leads to unstable
optimization and catastrophic forgetting of retained knowledge. We find that
combining GA with low-rank adaptation results in poor trade-offs between
computational cost and generative performance. To address these challenges, we
propose Low-rank Knowledge Unlearning (LoKU), a novel framework that enables
robust and efficient unlearning for LLMs. First, we introduce Inverted Hinge
Loss, which suppresses unwanted tokens while maintaining fluency by boosting
the probability of the next most likely token. Second, we develop a
data-adaptive initialization for LoRA adapters via low-rank approximation
weighted with relative Fisher information, thereby focusing updates on
parameters critical for removing targeted knowledge. Experiments on the
Training Data Extraction Challenge dataset using GPT-Neo models as well as on
the TOFU benchmark with Phi-1.5B and Llama2-7B models demonstrate that our
approach effectively removes sensitive information while maintaining reasoning
and generative capabilities with minimal impact. Our implementation can be
found in https://github.com/csm9493/efficient-llm-unlearning.
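To illustrate the idea behind the Inverted Hinge Loss as stated in this abstract (push the unwanted token's probability down while boosting the next most likely token), here is a hedged PyTorch sketch. The exact formulation used in the paper may differ; the function name and the 1-margin form are assumptions for illustration only.

```python
# Hedged sketch of an "inverted hinge"-style unlearning loss: lower the
# probability of the unwanted (ground-truth) token while raising the best
# alternative token. Illustrative only; see the paper/repo for the real loss.
import torch
import torch.nn.functional as F


def inverted_hinge_loss(logits, targets):
    """logits: (batch, vocab); targets: (batch,) ids of tokens to unlearn."""
    probs = F.softmax(logits, dim=-1)
    p_target = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # Mask out the target token, then take the best remaining alternative.
    masked = probs.scatter(1, targets.unsqueeze(1), float("-inf"))
    p_alt = masked.max(dim=-1).values
    return (1.0 + p_target - p_alt).mean()


if __name__ == "__main__":
    logits = torch.randn(4, 100, requires_grad=True)
    targets = torch.randint(0, 100, (4,))
    loss = inverted_hinge_loss(logits, targets)
    loss.backward()
    print(float(loss))
```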
|
2408.10360 | Syed Rifat Raiyan | Syed Rifat Raiyan, Zibran Zarif Amio, Sabbir Ahmed | HaSPeR: An Image Repository for Hand Shadow Puppet Recognition | Submitted to Image and Vision Computing, 15 pages, 110 figures, 2
tables | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Hand shadow puppetry, also known as shadowgraphy or ombromanie, is a form of
theatrical art and storytelling where hand shadows are projected onto flat
surfaces to create illusions of living creatures. The skilled performers create
these silhouettes by hand positioning, finger movements, and dexterous gestures
to resemble shadows of animals and objects. Due to the lack of practitioners
and a seismic shift in people's entertainment standards, this art form is on
the verge of extinction. To facilitate its preservation and proliferate it to a
wider audience, we introduce ${\rm H{\small A}SP{\small E}R}$, a novel dataset
consisting of 15,000 images of hand shadow puppets across 15 classes extracted
from both professional and amateur hand shadow puppeteer clips. We provide a
detailed statistical analysis of the dataset and employ a range of pretrained
image classification models to establish baselines. Our findings show a
substantial performance superiority of skip-connected convolutional models over
attention-based transformer architectures. We also find that lightweight
models, such as MobileNetV2, suited for mobile applications and embedded
devices, perform comparatively well. We surmise that such low-latency
architectures can be useful in developing ombromanie teaching tools, and we
create a prototype application to explore this surmise. Keeping the
best-performing model ResNet34 under the limelight, we conduct comprehensive
feature-spatial, explainability, and error analyses to gain insights into its
decision-making process. To the best of our knowledge, this is the first
documented dataset and research endeavor to preserve this dying art for future
generations using computer vision approaches. Our code and data will be
publicly available.
| [
{
"version": "v1",
"created": "Mon, 19 Aug 2024 18:56:24 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Dec 2024 11:02:07 GMT"
},
{
"version": "v3",
"created": "Tue, 24 Dec 2024 00:55:15 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Feb 2025 21:34:33 GMT"
},
{
"version": "v5",
"created": "Fri, 14 Feb 2025 10:53:27 GMT"
},
{
"version": "v6",
"created": "Mon, 31 Mar 2025 19:29:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Raiyan",
"Syed Rifat",
""
],
[
"Amio",
"Zibran Zarif",
""
],
[
"Ahmed",
"Sabbir",
""
]
] | TITLE: HaSPeR: An Image Repository for Hand Shadow Puppet Recognition
ABSTRACT: Hand shadow puppetry, also known as shadowgraphy or ombromanie, is a form of
theatrical art and storytelling where hand shadows are projected onto flat
surfaces to create illusions of living creatures. The skilled performers create
these silhouettes by hand positioning, finger movements, and dexterous gestures
to resemble shadows of animals and objects. Due to the lack of practitioners
and a seismic shift in people's entertainment standards, this art form is on
the verge of extinction. To facilitate its preservation and proliferate it to a
wider audience, we introduce ${\rm H{\small A}SP{\small E}R}$, a novel dataset
consisting of 15,000 images of hand shadow puppets across 15 classes extracted
from both professional and amateur hand shadow puppeteer clips. We provide a
detailed statistical analysis of the dataset and employ a range of pretrained
image classification models to establish baselines. Our findings show a
substantial performance superiority of skip-connected convolutional models over
attention-based transformer architectures. We also find that lightweight
models, such as MobileNetV2, suited for mobile applications and embedded
devices, perform comparatively well. We surmise that such low-latency
architectures can be useful in developing ombromanie teaching tools, and we
create a prototype application to explore this surmise. Keeping the
best-performing model ResNet34 under the limelight, we conduct comprehensive
feature-spatial, explainability, and error analyses to gain insights into its
decision-making process. To the best of our knowledge, this is the first
documented dataset and research endeavor to preserve this dying art for future
generations using computer vision approaches. Our code and data will be
publicly available.
|
2408.13805 | Ioannis Athanasiadis | Ioannis Athanasiadis, Fredrik Lindsten and Michael Felsberg | Prior Learning in Introspective VAEs | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Variational Autoencoders (VAEs) are a popular framework for unsupervised
learning and data generation. A plethora of methods have been proposed focusing
on improving VAEs, with the incorporation of adversarial objectives and the
integration of prior learning mechanisms being prominent directions. When it
comes to the former, an indicative instance is the recently introduced family
of Introspective VAEs aiming at ensuring that a low likelihood is assigned to
unrealistic samples. In this study, we focus on the Soft-IntroVAE (S-IntroVAE)
and investigate the implication of incorporating a multimodal and learnable
prior into this framework. Namely, we formulate the prior as a third player and
show that when trained in cooperation with the decoder constitutes an effective
way for prior learning, which shares the Nash Equilibrium with the vanilla
S-IntroVAE. Furthermore, based on a modified formulation of the optimal ELBO in
S-IntroVAE, we develop theoretically motivated regularizations, that is (i)
adaptive variance clipping to stabilize training when learning the prior and
(ii) responsibility regularization to discourage the formation of inactive
prior mode. Finally, we perform a series of targeted experiments on a 2D
density estimation benchmark and in an image generation setting comprised of
the (F)-MNIST and CIFAR-10 datasets demonstrating the benefit of prior learning
in S-IntroVAE in generation and representation learning.
| [
{
"version": "v1",
"created": "Sun, 25 Aug 2024 10:54:25 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 08:18:54 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Athanasiadis",
"Ioannis",
""
],
[
"Lindsten",
"Fredrik",
""
],
[
"Felsberg",
"Michael",
""
]
] | TITLE: Prior Learning in Introspective VAEs
ABSTRACT: Variational Autoencoders (VAEs) are a popular framework for unsupervised
learning and data generation. A plethora of methods have been proposed focusing
on improving VAEs, with the incorporation of adversarial objectives and the
integration of prior learning mechanisms being prominent directions. When it
comes to the former, an indicative instance is the recently introduced family
of Introspective VAEs aiming at ensuring that a low likelihood is assigned to
unrealistic samples. In this study, we focus on the Soft-IntroVAE (S-IntroVAE)
and investigate the implication of incorporating a multimodal and learnable
prior into this framework. Namely, we formulate the prior as a third player and
show that when trained in cooperation with the decoder constitutes an effective
way for prior learning, which shares the Nash Equilibrium with the vanilla
S-IntroVAE. Furthermore, based on a modified formulation of the optimal ELBO in
S-IntroVAE, we develop theoretically motivated regularizations, that is (i)
adaptive variance clipping to stabilize training when learning the prior and
(ii) responsibility regularization to discourage the formation of inactive
prior mode. Finally, we perform a series of targeted experiments on a 2D
density estimation benchmark and in an image generation setting comprised of
the (F)-MNIST and CIFAR-10 datasets demonstrating the benefit of prior learning
in S-IntroVAE in generation and representation learning.
|
2408.15953 | Elisabeth Fischer | Elisabeth Fischer, Albin Zehe, Andreas Hotho, Daniel Schl\"or | Modeling and Analyzing the Influence of Non-Item Pages on Sequential
Next-Item Prediction | 40 pages, 19 figures; Accepted for ACM TORS Journal, Updated
copyright information | null | 10.1145/3721298 | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analyzing sequences of interactions between users and items, sequential
recommendation models can learn user intent and make predictions about the next
item. Next to item interactions, most systems also have interactions with what
we call non-item pages: these pages are not related to specific items but still
can provide insights into the user's interests, as, for example, navigation
pages. We therefore propose a general way to include these non-item pages in
sequential recommendation models to enhance next-item prediction.
First, we demonstrate the influence of non-item pages on following
interactions using the hypothesis-testing framework HypTrails and propose
methods for representing non-item pages in sequential recommendation models.
Subsequently, we adapt popular sequential recommender models to integrate
non-item pages and investigate their performance with different item
representation strategies as well as their ability to handle noisy data. To
show the general capabilities of the models to integrate non-item pages, we
create a synthetic dataset for a controlled setting and then evaluate the
improvements from including non-item pages on two real-world datasets.
Our results show that non-item pages are a valuable source of information,
and incorporating them in sequential recommendation models increases the
performance of next-item prediction across all analyzed model architectures.
| [
{
"version": "v1",
"created": "Wed, 28 Aug 2024 17:12:01 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Sep 2024 10:22:34 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Feb 2025 17:17:41 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 15:03:12 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Fischer",
"Elisabeth",
""
],
[
"Zehe",
"Albin",
""
],
[
"Hotho",
"Andreas",
""
],
[
"Schlör",
"Daniel",
""
]
] | TITLE: Modeling and Analyzing the Influence of Non-Item Pages on Sequential
Next-Item Prediction
ABSTRACT: Analyzing sequences of interactions between users and items, sequential
recommendation models can learn user intent and make predictions about the next
item. In addition to item interactions, most systems also have interactions with what
we call non-item pages: these pages are not related to specific items but still
can provide insights into the user's interests, as, for example, navigation
pages. We therefore propose a general way to include these non-item pages in
sequential recommendation models to enhance next-item prediction.
First, we demonstrate the influence of non-item pages on following
interactions using the hypothesis-testing framework HypTrails and propose
methods for representing non-item pages in sequential recommendation models.
Subsequently, we adapt popular sequential recommender models to integrate
non-item pages and investigate their performance with different item
representation strategies as well as their ability to handle noisy data. To
show the general capabilities of the models to integrate non-item pages, we
create a synthetic dataset for a controlled setting and then evaluate the
improvements from including non-item pages on two real-world datasets.
Our results show that non-item pages are a valuable source of information,
and incorporating them in sequential recommendation models increases the
performance of next-item prediction across all analyzed model architectures.
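One simple way to represent non-item pages in a sequential recommender, in the spirit of this abstract, is to place items and non-item pages in a single token vocabulary, feed the mixed sequence to a sequence encoder, and score only item tokens for the next-item prediction. The sketch below assumes a GRU backbone and an illustrative vocabulary layout; it is not the authors' architecture.

```python
# Hedged sketch: mixed item / non-item-page sequences in one vocabulary,
# encoded by a GRU, with next-item scores produced only over items.
import torch
import torch.nn as nn


class MixedSequenceRecommender(nn.Module):
    def __init__(self, num_items, num_nonitem_pages, dim=64):
        super().__init__()
        vocab = num_items + num_nonitem_pages       # items first, then pages
        self.embed = nn.Embedding(vocab, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.item_head = nn.Linear(dim, num_items)  # predict items only

    def forward(self, token_seq):
        h, _ = self.gru(self.embed(token_seq))
        return self.item_head(h[:, -1])             # scores for the next item


if __name__ == "__main__":
    model = MixedSequenceRecommender(num_items=1000, num_nonitem_pages=50)
    seq = torch.randint(0, 1050, (2, 12))           # mixed item / page tokens
    print(model(seq).shape)                         # torch.Size([2, 1000])
```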
|
2409.06809 | Amin Karimi Monsefi | Amin Karimi Monsefi, Kishore Prakash Sailaja, Ali Alilooee, Ser-Nam
Lim, Rajiv Ramnath | DetailCLIP: Detail-Oriented CLIP for Fine-Grained Tasks | Accepted in SSI-FM Workshop of ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce DetailCLIP: A Detail-Oriented CLIP to address the
limitations of contrastive learning-based vision-language models, particularly
CLIP, in handling detail-oriented and fine-grained tasks like segmentation.
While CLIP and its variants excel in the global alignment of image and text
representations, they often struggle to capture the fine-grained details
necessary for precise segmentation. To overcome these challenges, we propose a
novel framework that employs patch-level comparison of self-distillation and
pixel-level reconstruction losses, enhanced with an attention-based token
removal mechanism. This approach selectively retains semantically relevant
tokens, enabling the model to focus on the image's critical regions aligned
with the specific functions of our model, including textual information
processing, patch comparison, and image reconstruction, ensuring that the model
learns high-level semantics and detailed visual features. Our experiments
demonstrate that DetailCLIP surpasses existing CLIP-based and traditional
self-supervised learning (SSL) models in segmentation accuracy and exhibits
superior generalization across diverse datasets. DetailCLIP represents a
significant advancement in vision-language modeling, offering a robust solution
for tasks that demand high-level semantic understanding and detailed feature
extraction. https://github.com/KishoreP1/DetailCLIP.
| [
{
"version": "v1",
"created": "Tue, 10 Sep 2024 18:27:36 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 21:53:36 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Monsefi",
"Amin Karimi",
""
],
[
"Sailaja",
"Kishore Prakash",
""
],
[
"Alilooee",
"Ali",
""
],
[
"Lim",
"Ser-Nam",
""
],
[
"Ramnath",
"Rajiv",
""
]
] | TITLE: DetailCLIP: Detail-Oriented CLIP for Fine-Grained Tasks
ABSTRACT: In this paper, we introduce DetailCLIP: A Detail-Oriented CLIP to address the
limitations of contrastive learning-based vision-language models, particularly
CLIP, in handling detail-oriented and fine-grained tasks like segmentation.
While CLIP and its variants excel in the global alignment of image and text
representations, they often struggle to capture the fine-grained details
necessary for precise segmentation. To overcome these challenges, we propose a
novel framework that employs patch-level comparison of self-distillation and
pixel-level reconstruction losses, enhanced with an attention-based token
removal mechanism. This approach selectively retains semantically relevant
tokens, enabling the model to focus on the image's critical regions aligned
with the specific functions of our model, including textual information
processing, patch comparison, and image reconstruction, ensuring that the model
learns high-level semantics and detailed visual features. Our experiments
demonstrate that DetailCLIP surpasses existing CLIP-based and traditional
self-supervised learning (SSL) models in segmentation accuracy and exhibits
superior generalization across diverse datasets. DetailCLIP represents a
significant advancement in vision-language modeling, offering a robust solution
for tasks that demand high-level semantic understanding and detailed feature
extraction. https://github.com/KishoreP1/DetailCLIP.
|
2409.13955 | Saumya Sinha | Saumya Sinha, Brandon Benton, Patrick Emami | On the Effectiveness of Neural Operators at Zero-Shot Weather
Downscaling | null | Environ. Data Science 4 (2025) e21 | 10.1017/eds.2025.11 | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | Machine learning (ML) methods have shown great potential for weather
downscaling. These data-driven approaches provide a more efficient alternative
for producing high-resolution weather datasets and forecasts compared to
physics-based numerical simulations. Neural operators, which learn solution
operators for a family of partial differential equations (PDEs), have shown
great success in scientific ML applications involving physics-driven datasets.
Neural operators are grid-resolution-invariant and are often evaluated on
higher grid resolutions than they are trained on, i.e., zero-shot
super-resolution. Given their promising zero-shot super-resolution performance
on dynamical systems emulation, we present a critical investigation of their
zero-shot weather downscaling capabilities, which is when models are tasked
with producing high-resolution outputs using higher upsampling factors than are
seen during training. To this end, we create two realistic downscaling
experiments with challenging upsampling factors (e.g., 8x and 15x) across data
from different simulations: the European Centre for Medium-Range Weather
Forecasts Reanalysis version 5 (ERA5) and the Wind Integration National Dataset
Toolkit (WTK). While neural operator-based downscaling models perform better
than interpolation and a simple convolutional baseline, we show the surprising
performance of an approach that combines a powerful transformer-based model
with parameter-free interpolation at zero-shot weather downscaling. We find
that this Swin-Transformer-based approach mostly outperforms models with neural
operator layers in terms of average error metrics, whereas an Enhanced
Super-Resolution Generative Adversarial Network (ESRGAN)-based approach is
better than most models in terms of capturing the physics of the ground truth
data. We suggest their use in future work as strong baselines.
| [
{
"version": "v1",
"created": "Sat, 21 Sep 2024 00:14:49 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2025 00:27:42 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Sinha",
"Saumya",
""
],
[
"Benton",
"Brandon",
""
],
[
"Emami",
"Patrick",
""
]
] | TITLE: On the Effectiveness of Neural Operators at Zero-Shot Weather
Downscaling
ABSTRACT: Machine learning (ML) methods have shown great potential for weather
downscaling. These data-driven approaches provide a more efficient alternative
for producing high-resolution weather datasets and forecasts compared to
physics-based numerical simulations. Neural operators, which learn solution
operators for a family of partial differential equations (PDEs), have shown
great success in scientific ML applications involving physics-driven datasets.
Neural operators are grid-resolution-invariant and are often evaluated on
higher grid resolutions than they are trained on, i.e., zero-shot
super-resolution. Given their promising zero-shot super-resolution performance
on dynamical systems emulation, we present a critical investigation of their
zero-shot weather downscaling capabilities, which is when models are tasked
with producing high-resolution outputs using higher upsampling factors than are
seen during training. To this end, we create two realistic downscaling
experiments with challenging upsampling factors (e.g., 8x and 15x) across data
from different simulations: the European Centre for Medium-Range Weather
Forecasts Reanalysis version 5 (ERA5) and the Wind Integration National Dataset
Toolkit (WTK). While neural operator-based downscaling models perform better
than interpolation and a simple convolutional baseline, we show the surprising
performance of an approach that combines a powerful transformer-based model
with parameter-free interpolation at zero-shot weather downscaling. We find
that this Swin-Transformer-based approach mostly outperforms models with neural
operator layers in terms of average error metrics, whereas an Enhanced
Super-Resolution Generative Adversarial Network (ESRGAN)-based approach is
better than most models in terms of capturing the physics of the ground truth
data. We suggest their use in future work as strong baselines.
|
2409.16644 | Siyin Wang | Siyin Wang, Wenyi Yu, Yudong Yang, Changli Tang, Yixuan Li, Jimin
Zhuang, Xianzhao Chen, Xiaohai Tian, Jun Zhang, Guangzhi Sun, Lu Lu, Yuxuan
Wang, Chao Zhang | Enabling Auditory Large Language Models for Automatic Speech Quality
Evaluation | Accepted by ICASSP 2025 | null | null | null | eess.AS cs.CL cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speech quality assessment typically requires evaluating audio from multiple
aspects, such as mean opinion score (MOS) and speaker similarity (SIM), etc.,
which can be challenging to cover using one small model designed for a single
task. In this paper, we propose leveraging recently introduced auditory large
language models (LLMs) for automatic speech quality assessment. By employing
task-specific prompts, auditory LLMs are finetuned to predict MOS, SIM and A/B
testing results, which are commonly used for evaluating text-to-speech systems.
Additionally, the finetuned auditory LLM is able to generate natural language
descriptions assessing aspects like noisiness, distortion, discontinuity, and
overall quality, providing more interpretable outputs. Extensive experiments
have been performed on the NISQA, BVCC, SOMOS and VoxSim speech quality
datasets, using open-source auditory LLMs such as SALMONN, Qwen-Audio, and
Qwen2-Audio. For the natural language descriptions task, a commercial model
Google Gemini 1.5 Pro is also evaluated. The results demonstrate that auditory
LLMs achieve competitive performance compared to state-of-the-art task-specific
small models in predicting MOS and SIM, while also delivering promising results
in A/B testing and natural language descriptions. Our data processing scripts
and finetuned model checkpoints can be found at
https://github.com/bytedance/SALMONN.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2024 05:44:44 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 07:22:54 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 12:35:25 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Siyin",
""
],
[
"Yu",
"Wenyi",
""
],
[
"Yang",
"Yudong",
""
],
[
"Tang",
"Changli",
""
],
[
"Li",
"Yixuan",
""
],
[
"Zhuang",
"Jimin",
""
],
[
"Chen",
"Xianzhao",
""
],
[
"Tian",
"Xiaohai",
""
],
[
"Zhang",
"Jun",
""
],
[
"Sun",
"Guangzhi",
""
],
[
"Lu",
"Lu",
""
],
[
"Wang",
"Yuxuan",
""
],
[
"Zhang",
"Chao",
""
]
] | TITLE: Enabling Auditory Large Language Models for Automatic Speech Quality
Evaluation
ABSTRACT: Speech quality assessment typically requires evaluating audio from multiple
aspects, such as mean opinion score (MOS) and speaker similarity (SIM), etc.,
which can be challenging to cover using one small model designed for a single
task. In this paper, we propose leveraging recently introduced auditory large
language models (LLMs) for automatic speech quality assessment. By employing
task-specific prompts, auditory LLMs are finetuned to predict MOS, SIM and A/B
testing results, which are commonly used for evaluating text-to-speech systems.
Additionally, the finetuned auditory LLM is able to generate natural language
descriptions assessing aspects like noisiness, distortion, discontinuity, and
overall quality, providing more interpretable outputs. Extensive experiments
have been performed on the NISQA, BVCC, SOMOS and VoxSim speech quality
datasets, using open-source auditory LLMs such as SALMONN, Qwen-Audio, and
Qwen2-Audio. For the natural language descriptions task, a commercial model
Google Gemini 1.5 Pro is also evaluated. The results demonstrate that auditory
LLMs achieve competitive performance compared to state-of-the-art task-specific
small models in predicting MOS and SIM, while also delivering promising results
in A/B testing and natural language descriptions. Our data processing scripts
and finetuned model checkpoints can be found at
https://github.com/bytedance/SALMONN.
|
2410.02116 | Siddharth Joshi | Siddharth Joshi, Jiayi Ni and Baharan Mirzasoleiman | Dataset Distillation via Knowledge Distillation: Towards Efficient
Self-Supervised Pre-Training of Deep Networks | ICLR 2025. Code at https://github.com/BigML-CS-UCLA/MKDT | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Dataset distillation (DD) generates small synthetic datasets that can
efficiently train deep networks with a limited amount of memory and compute.
Despite the success of DD methods for supervised learning, DD for
self-supervised pre-training of deep models has remained unaddressed.
Pre-training on unlabeled data is crucial for efficiently generalizing to
downstream tasks with limited labeled data. In this work, we propose the first
effective DD method for SSL pre-training. First, we show, theoretically and
empirically, that naive application of supervised DD methods to SSL fails, due
to the high variance of the SSL gradient. Then, we address this issue by
relying on insights from knowledge distillation (KD) literature. Specifically,
we train a small student model to match the representations of a larger teacher
model trained with SSL. Then, we generate a small synthetic dataset by matching
the training trajectories of the student models. As the KD objective has
considerably lower variance than SSL, our approach can generate synthetic
datasets that can successfully pre-train high-quality encoders. Through
extensive experiments, we show that our distilled sets lead to up to 13% higher
accuracy than prior work, on a variety of downstream tasks, in the presence of
limited labeled data. Code at https://github.com/BigML-CS-UCLA/MKDT.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 00:39:25 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2025 18:39:00 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 19:01:30 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Joshi",
"Siddharth",
""
],
[
"Ni",
"Jiayi",
""
],
[
"Mirzasoleiman",
"Baharan",
""
]
] | TITLE: Dataset Distillation via Knowledge Distillation: Towards Efficient
Self-Supervised Pre-Training of Deep Networks
ABSTRACT: Dataset distillation (DD) generates small synthetic datasets that can
efficiently train deep networks with a limited amount of memory and compute.
Despite the success of DD methods for supervised learning, DD for
self-supervised pre-training of deep models has remained unaddressed.
Pre-training on unlabeled data is crucial for efficiently generalizing to
downstream tasks with limited labeled data. In this work, we propose the first
effective DD method for SSL pre-training. First, we show, theoretically and
empirically, that naive application of supervised DD methods to SSL fails, due
to the high variance of the SSL gradient. Then, we address this issue by
relying on insights from knowledge distillation (KD) literature. Specifically,
we train a small student model to match the representations of a larger teacher
model trained with SSL. Then, we generate a small synthetic dataset by matching
the training trajectories of the student models. As the KD objective has
considerably lower variance than SSL, our approach can generate synthetic
datasets that can successfully pre-train high-quality encoders. Through
extensive experiments, we show that our distilled sets lead to up to 13% higher
accuracy than prior work, on a variety of downstream tasks, in the presence of
limited labeled data. Code at https://github.com/BigML-CS-UCLA/MKDT.
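The core knowledge-distillation objective described in this abstract (a small student matching a frozen SSL teacher's representations) can be sketched as follows. The model definitions, dimensions, and training loop are placeholders, not the released MKDT code.

```python
# Hedged sketch of representation-matching knowledge distillation: a small
# student is trained to match a frozen SSL-pretrained teacher's embeddings.
import torch
import torch.nn as nn


def kd_representation_loss(student, teacher, images):
    with torch.no_grad():
        target = teacher(images)          # frozen teacher embeddings
    pred = student(images)
    return nn.functional.mse_loss(pred, target)


if __name__ == "__main__":
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128)).eval()
    student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
    opt = torch.optim.SGD(student.parameters(), lr=0.1)
    for _ in range(3):                    # stand-in for a real training loop
        images = torch.randn(8, 3, 32, 32)
        loss = kd_representation_loss(student, teacher, images)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(float(loss))
```

The distilled synthetic set is then obtained by matching student training trajectories, which is omitted here.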
|
2410.02224 | Guangwei Gao | Yangyang Qiu, Guoan Xu, Guangwei Gao, Zhenhua Guo, Yi Yu, and Chia-Wen
Lin | Efficient Semantic Segmentation via Lightweight Multiple-Information
Interaction Network | 10 pages, 6 figures, 9 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, integrating the local modeling capabilities of Convolutional Neural
Networks (CNNs) with the global dependency strengths of Transformers has
created a sensation in the semantic segmentation community. However,
substantial computational workloads and high hardware memory demands remain
major obstacles to their further application in real-time scenarios. In this
work, we propose a Lightweight Multiple-Information Interaction Network
(LMIINet) for real-time semantic segmentation, which effectively combines CNNs
and Transformers while reducing redundant computations and memory footprints.
It features Lightweight Feature Interaction Bottleneck (LFIB) modules
comprising efficient convolutions that enhance context integration.
Additionally, improvements are made to the Flatten Transformer by enhancing
local and global feature interaction to capture detailed semantic information.
Incorporating a combination coefficient learning scheme in both LFIB and
Transformer blocks facilitates improved feature interaction. Extensive
experiments demonstrate that LMIINet excels in balancing accuracy and
efficiency. With only 0.72M parameters and 11.74G FLOPs (Floating Point
Operations), LMIINet achieves 72.0\% mIoU at 100 FPS (Frames Per
Second) on the Cityscapes test set and 69.94\% mIoU (mean Intersection over
Union) at 160 FPS on the CamVid test dataset using a single RTX2080Ti GPU.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 05:45:24 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 13:14:22 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Qiu",
"Yangyang",
""
],
[
"Xu",
"Guoan",
""
],
[
"Gao",
"Guangwei",
""
],
[
"Guo",
"Zhenhua",
""
],
[
"Yu",
"Yi",
""
],
[
"Lin",
"Chia-Wen",
""
]
] | TITLE: Efficient Semantic Segmentation via Lightweight Multiple-Information
Interaction Network
ABSTRACT: Recently, integrating the local modeling capabilities of Convolutional Neural
Networks (CNNs) with the global dependency strengths of Transformers has
created a sensation in the semantic segmentation community. However,
substantial computational workloads and high hardware memory demands remain
major obstacles to their further application in real-time scenarios. In this
work, we propose a Lightweight Multiple-Information Interaction Network
(LMIINet) for real-time semantic segmentation, which effectively combines CNNs
and Transformers while reducing redundant computations and memory footprints.
It features Lightweight Feature Interaction Bottleneck (LFIB) modules
comprising efficient convolutions that enhance context integration.
Additionally, improvements are made to the Flatten Transformer by enhancing
local and global feature interaction to capture detailed semantic information.
Incorporating a combination coefficient learning scheme in both LFIB and
Transformer blocks facilitates improved feature interaction. Extensive
experiments demonstrate that LMIINet excels in balancing accuracy and
efficiency. With only 0.72M parameters and 11.74G FLOPs (Floating Point
Operations), LMIINet achieves 72.0\% mIoU at 100 FPS (Frames Per
Second) on the Cityscapes test set and 69.94\% mIoU (mean Intersection over
Union) at 160 FPS on the CamVid test dataset using a single RTX2080Ti GPU.
|
2410.04738 | Zhen Wang | Zhen Wang, Dongyuan Li, Yaozu Wu, Tianyu He, Jiang Bian, Renhe Jiang | Diffusion Models in 3D Vision: A Survey | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, 3D vision has become a crucial field within computer vision,
powering a wide range of applications such as autonomous driving, robotics,
augmented reality, and medical imaging. This field relies on accurate
perception, understanding, and reconstruction of 3D scenes from 2D images or
text data sources. Diffusion models, originally designed for 2D generative
tasks, offer the potential for more flexible, probabilistic methods that can
better capture the variability and uncertainty present in real-world 3D data.
In this paper, we review the state-of-the-art methods that use diffusion models
for 3D visual tasks, including but not limited to 3D object generation, shape
completion, point-cloud reconstruction, and scene construction. We provide an
in-depth discussion of the underlying mathematical principles of diffusion
models, outlining their forward and reverse processes, as well as the various
architectural advancements that enable these models to work with 3D datasets.
We also discuss the key challenges in applying diffusion models to 3D vision,
such as handling occlusions and varying point densities, and the computational
demands of high-dimensional data. Finally, we discuss potential solutions,
including improving computational efficiency, enhancing multimodal fusion, and
exploring the use of large-scale pretraining for better generalization across
3D tasks. This paper serves as a foundation for future exploration and
development in this rapidly evolving field.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 04:12:23 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Oct 2024 06:03:52 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 05:46:41 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wang",
"Zhen",
""
],
[
"Li",
"Dongyuan",
""
],
[
"Wu",
"Yaozu",
""
],
[
"He",
"Tianyu",
""
],
[
"Bian",
"Jiang",
""
],
[
"Jiang",
"Renhe",
""
]
] | TITLE: Diffusion Models in 3D Vision: A Survey
ABSTRACT: In recent years, 3D vision has become a crucial field within computer vision,
powering a wide range of applications such as autonomous driving, robotics,
augmented reality, and medical imaging. This field relies on accurate
perception, understanding, and reconstruction of 3D scenes from 2D images or
text data sources. Diffusion models, originally designed for 2D generative
tasks, offer the potential for more flexible, probabilistic methods that can
better capture the variability and uncertainty present in real-world 3D data.
In this paper, we review the state-of-the-art methods that use diffusion models
for 3D visual tasks, including but not limited to 3D object generation, shape
completion, point-cloud reconstruction, and scene construction. We provide an
in-depth discussion of the underlying mathematical principles of diffusion
models, outlining their forward and reverse processes, as well as the various
architectural advancements that enable these models to work with 3D datasets.
We also discuss the key challenges in applying diffusion models to 3D vision,
such as handling occlusions and varying point densities, and the computational
demands of high-dimensional data. Finally, we discuss potential solutions,
including improving computational efficiency, enhancing multimodal fusion, and
exploring the use of large-scale pretraining for better generalization across
3D tasks. This paper serves as a foundation for future exploration and
development in this rapidly evolving field.
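Since the survey discusses the forward and reverse processes of diffusion models, a minimal sketch of the standard closed-form forward (noising) step may help anchor the notation: x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε under a linear beta schedule. This is the generic DDPM formulation, not any particular surveyed 3D method; the point-cloud shape in the example is illustrative.

```python
# Minimal sketch of the generic DDPM forward (noising) process:
# x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t


def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) for integer timesteps t (shape: batch)."""
    eps = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps


if __name__ == "__main__":
    x0 = torch.randn(4, 3, 1024)                # e.g. a batch of 3D point clouds
    t = torch.randint(0, T, (4,))
    print(q_sample(x0, t).shape)
```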
|
2410.09437 | Dilxat Muhtar | Yaming Yang, Dilxat Muhtar, Yelong Shen, Yuefeng Zhan, Jianfeng Liu,
Yujing Wang, Hao Sun, Denvy Deng, Feng Sun, Qi Zhang, Weizhu Chen, and Yunhai
Tong | MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning | 12 Pages, 4 Figures | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parameter-efficient fine-tuning (PEFT) has been widely employed for domain
adaptation, with LoRA being one of the most prominent methods due to its
simplicity and effectiveness. However, in multi-task learning (MTL) scenarios,
LoRA tends to obscure the distinction between tasks by projecting sparse
high-dimensional features from different tasks into the same dense
low-dimensional intrinsic space. This leads to task interference and suboptimal
performance for LoRA and its variants. To tackle this challenge, we propose
MTL-LoRA, which retains the advantages of low-rank adaptation while
significantly enhancing MTL capabilities. MTL-LoRA augments LoRA by
incorporating additional task-adaptive parameters that differentiate
task-specific information and capture shared knowledge across various tasks
within low-dimensional spaces. This approach enables pre-trained models to
jointly adapt to different target domains with a limited number of trainable
parameters. Comprehensive experimental results, including evaluations on public
academic benchmarks for natural language understanding, commonsense reasoning,
and image-text understanding, as well as real-world industrial text Ads
relevance datasets, demonstrate that MTL-LoRA outperforms LoRA and its various
variants with comparable or even fewer learnable parameters in the MTL setting.
| [
{
"version": "v1",
"created": "Sat, 12 Oct 2024 08:32:26 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Oct 2024 07:48:55 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 10:18:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Yang",
"Yaming",
""
],
[
"Muhtar",
"Dilxat",
""
],
[
"Shen",
"Yelong",
""
],
[
"Zhan",
"Yuefeng",
""
],
[
"Liu",
"Jianfeng",
""
],
[
"Wang",
"Yujing",
""
],
[
"Sun",
"Hao",
""
],
[
"Deng",
"Denvy",
""
],
[
"Sun",
"Feng",
""
],
[
"Zhang",
"Qi",
""
],
[
"Chen",
"Weizhu",
""
],
[
"Tong",
"Yunhai",
""
]
] | TITLE: MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning
ABSTRACT: Parameter-efficient fine-tuning (PEFT) has been widely employed for domain
adaptation, with LoRA being one of the most prominent methods due to its
simplicity and effectiveness. However, in multi-task learning (MTL) scenarios,
LoRA tends to obscure the distinction between tasks by projecting sparse
high-dimensional features from different tasks into the same dense
low-dimensional intrinsic space. This leads to task interference and suboptimal
performance for LoRA and its variants. To tackle this challenge, we propose
MTL-LoRA, which retains the advantages of low-rank adaptation while
significantly enhancing MTL capabilities. MTL-LoRA augments LoRA by
incorporating additional task-adaptive parameters that differentiate
task-specific information and capture shared knowledge across various tasks
within low-dimensional spaces. This approach enables pre-trained models to
jointly adapt to different target domains with a limited number of trainable
parameters. Comprehensive experimental results, including evaluations on public
academic benchmarks for natural language understanding, commonsense reasoning,
and image-text understanding, as well as real-world industrial text Ads
relevance datasets, demonstrate that MTL-LoRA outperforms LoRA and its various
variants with comparable or even fewer learnable parameters in the MTL setting.
|
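The abstract above describes augmenting LoRA with task-adaptive parameters that differentiate task-specific information inside a shared low-rank space. A hypothetical PyTorch sketch of that general idea (the module name, the per-task rank-by-rank transform, and all hyperparameters are illustrative assumptions, not the actual MTL-LoRA parameterization):

```python
import torch
import torch.nn as nn

class TaskAdaptiveLoRALinear(nn.Module):
    """Frozen linear layer + shared low-rank adapters + a small per-task transform."""
    def __init__(self, in_features, out_features, rank=8, num_tasks=3, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():          # pretrained weights stay frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)  # shared down-projection
        self.B = nn.Parameter(torch.zeros(out_features, rank))        # shared up-projection
        # One rank-by-rank transform per task: the "task-adaptive" part of the adapter.
        self.task_transforms = nn.Parameter(
            torch.stack([torch.eye(rank) for _ in range(num_tasks)]))
        self.scaling = alpha / rank

    def forward(self, x, task_id: int):
        h = x @ self.A.T                            # project into the shared low-rank space
        h = h @ self.task_transforms[task_id].T     # task-specific mixing within that space
        return self.base(x) + self.scaling * (h @ self.B.T)

# Usage: route each batch through the path of its task.
layer = TaskAdaptiveLoRALinear(768, 768, rank=8, num_tasks=3)
out = layer(torch.randn(4, 768), task_id=1)         # (4, 768)
```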
2410.10114 | Jun Luo | Jun Luo, Chen Chen, Shandong Wu | Mixture of Experts Made Personalized: Federated Prompt Learning for
Vision-Language Models | ICLR 2025 | null | null | null | cs.LG cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Federated prompt learning benefits federated learning by leveraging the robust
representation learning ability of CLIP-like Vision-Language Models (VLMs)
through prompt learning. However, current federated prompt learning methods are
habitually restricted to the traditional FL paradigm, where the participating
clients are generally only allowed to download a single globally aggregated
model from the server. While justifiable for training full-sized models under
federated settings, in this work, we argue that this paradigm is ill-suited for
lightweight prompts. By facilitating the clients to download multiple
pre-aggregated prompts as fixed non-local experts, we propose Personalized
Federated Mixture of Adaptive Prompts (pFedMoAP), a novel FL framework that
personalizes the prompt learning process through the lens of Mixture of Experts
(MoE). pFedMoAP implements a local attention-based gating network that learns
to generate enhanced text features for better alignment with local image data,
benefiting from both local and downloaded non-local adaptive prompt experts.
Extensive experiments on 9 datasets under various federated settings
demonstrate the efficacy of the proposed pFedMoAP algorithm. The code is
available at https://github.com/ljaiverson/pFedMoAP.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 03:05:12 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Oct 2024 12:30:53 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Feb 2025 17:49:04 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 15:53:12 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Luo",
"Jun",
""
],
[
"Chen",
"Chen",
""
],
[
"Wu",
"Shandong",
""
]
] | TITLE: Mixture of Experts Made Personalized: Federated Prompt Learning for
Vision-Language Models
ABSTRACT: Federated prompt learning benefits federated learning by leveraging
the robust representation learning ability of CLIP-like Vision-Language Models
(VLMs) through prompt learning. However, current federated prompt learning methods are
habitually restricted to the traditional FL paradigm, where the participating
clients are generally only allowed to download a single globally aggregated
model from the server. While justifiable for training full-sized models under
federated settings, in this work, we argue that this paradigm is ill-suited for
lightweight prompts. By facilitating the clients to download multiple
pre-aggregated prompts as fixed non-local experts, we propose Personalized
Federated Mixture of Adaptive Prompts (pFedMoAP), a novel FL framework that
personalizes the prompt learning process through the lens of Mixture of Experts
(MoE). pFedMoAP implements a local attention-based gating network that learns
to generate enhanced text features for better alignment with local image data,
benefiting from both local and downloaded non-local adaptive prompt experts.
Extensive experiments on 9 datasets under various federated settings
demonstrate the efficacy of the proposed pFedMoAP algorithm. The code is
available at https://github.com/ljaiverson/pFedMoAP.
|
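The local attention-based gating described in the abstract above can be pictured as image features attending over the text features produced by the local and downloaded non-local prompt experts; a hypothetical PyTorch sketch of that idea (dimensions, projections, and the softmax gating are illustrative assumptions, not pFedMoAP's exact design):

```python
import torch
import torch.nn as nn

class PromptExpertGate(nn.Module):
    """Mix text features from K prompt experts, weighted by attention to image features."""
    def __init__(self, dim=512):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)   # query from local image features
        self.k_proj = nn.Linear(dim, dim)   # keys from each expert's text features
        self.scale = dim ** -0.5

    def forward(self, image_feat, expert_text_feats):
        # image_feat: (B, dim); expert_text_feats: (K, dim), one row per prompt expert
        q = self.q_proj(image_feat)                          # (B, dim)
        k = self.k_proj(expert_text_feats)                   # (K, dim)
        gate = torch.softmax(q @ k.T * self.scale, dim=-1)   # (B, K) gating weights
        return gate @ expert_text_feats                      # (B, dim) mixed text features

gate = PromptExpertGate(dim=512)
mixed = gate(torch.randn(8, 512), torch.randn(4, 512))      # 4 experts, batch of 8 images
```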
2410.11071 | Creston Brooks | Creston Brooks, Johannes Haubold, Charlie Cowen-Breen, Jay White,
Desmond DeVaul, Frederick Riemenschneider, Karthik Narasimhan, Barbara
Graziosi | An Annotated Dataset of Errors in Premodern Greek and Baselines for
Detecting Them | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | As premodern texts are passed down over centuries, errors inevitably accrue.
These errors can be challenging to identify, as some have survived undetected
for so long precisely because they are so elusive. While prior work has
evaluated error detection methods on artificially-generated errors, we
introduce the first dataset of real errors in premodern Greek, enabling the
evaluation of error detection methods on errors that genuinely accumulated at
some stage in the centuries-long copying process. To create this dataset, we
use metrics derived from BERT conditionals to sample 1,000 words more likely to
contain errors, which are then annotated and labeled by a domain expert as
errors or not. We then propose and evaluate new error detection methods and
find that our discriminator-based detector outperforms all other methods,
improving the true positive rate for classifying real errors by 5%. We
additionally observe that scribal errors are more difficult to detect than
print or digitization errors. Our dataset enables the evaluation of error
detection methods on real errors in premodern texts for the first time,
providing a benchmark for developing more effective error detection algorithms
to assist scholars in restoring premodern works.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 20:30:54 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 20:00:17 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Brooks",
"Creston",
""
],
[
"Haubold",
"Johannes",
""
],
[
"Cowen-Breen",
"Charlie",
""
],
[
"White",
"Jay",
""
],
[
"DeVaul",
"Desmond",
""
],
[
"Riemenschneider",
"Frederick",
""
],
[
"Narasimhan",
"Karthik",
""
],
[
"Graziosi",
"Barbara",
""
]
] | TITLE: An Annotated Dataset of Errors in Premodern Greek and Baselines for
Detecting Them
ABSTRACT: As premodern texts are passed down over centuries, errors inevitably accrue.
These errors can be challenging to identify, as some have survived undetected
for so long precisely because they are so elusive. While prior work has
evaluated error detection methods on artificially-generated errors, we
introduce the first dataset of real errors in premodern Greek, enabling the
evaluation of error detection methods on errors that genuinely accumulated at
some stage in the centuries-long copying process. To create this dataset, we
use metrics derived from BERT conditionals to sample 1,000 words more likely to
contain errors, which are then annotated and labeled by a domain expert as
errors or not. We then propose and evaluate new error detection methods and
find that our discriminator-based detector outperforms all other methods,
improving the true positive rate for classifying real errors by 5%. We
additionally observe that scribal errors are more difficult to detect than
print or digitization errors. Our dataset enables the evaluation of error
detection methods on real errors in premodern texts for the first time,
providing a benchmark for developing more effective error detection algorithms
to assist scholars in restoring premodern works.
|
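The "metrics derived from BERT conditionals" in the abstract above amount, in general terms, to scoring each word by its conditional probability under a masked language model; a minimal sketch of one such flagging heuristic (the model name, threshold, and tokenization handling are placeholders, and the authors' actual metrics may differ):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-multilingual-cased"   # placeholder; a Greek-capable MLM in practice
tok = AutoTokenizer.from_pretrained(MODEL)
mlm = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

def token_conditionals(text: str):
    """Mask each token in turn and return P(original token | context)."""
    ids = tok(text, return_tensors="pt")["input_ids"][0]
    scores = []
    for i in range(1, len(ids) - 1):                 # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(input_ids=masked.unsqueeze(0)).logits[0, i]
        p = torch.softmax(logits, dim=-1)[ids[i]].item()
        scores.append((tok.convert_ids_to_tokens(int(ids[i])), p))
    return scores

# Tokens with unusually low conditional probability become candidate errors for annotation.
suspects = [(t, p) for t, p in token_conditionals("example sentence here") if p < 0.01]
```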
2410.11635 | Miguel A. Gonz\'alez-Casado | Miguel A. Gonz\'alez-Casado, Andreia Sofia Teixeira, Angel S\'anchez | Evidence of equilibrium dynamics in human social networks evolving in
time | 17 pages, 5 figures, under peer-review | null | null | null | physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an | http://creativecommons.org/licenses/by-nc-nd/4.0/ | How do networks of relationships evolve over time? We analyse a dataset
tracking the social interactions of 900 individuals over four years. Despite
continuous shifts in individual relationships, the macroscopic structural
properties of the network remain stable, fluctuating within predictable bounds.
We connect this stability to the concept of equilibrium in statistical physics.
Specifically, we demonstrate that the probabilities governing network dynamics
are stationary over time, and key features like degree, edge, and triangle
abundances align with theoretical predictions from equilibrium dynamics.
Moreover, the dynamics satisfies the detailed balance condition. Remarkably,
equilibrium persists despite constant turnover as people join, leave, and
change connections. This suggests that equilibrium arises not from specific
individuals but from the balancing act of human needs, cognitive limits, and
social pressures. Practically, this equilibrium simplifies data collection,
supports methods relying on single network snapshots (like Exponential Random
Graph Models), and aids in designing interventions for social challenges.
Theoretically, it offers new insights into collective human behaviour,
revealing how emergent properties of complex social systems can be captured by
simple mathematical models.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 14:25:39 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jan 2025 10:32:43 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 15:54:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"González-Casado",
"Miguel A.",
""
],
[
"Teixeira",
"Andreia Sofia",
""
],
[
"Sánchez",
"Angel",
""
]
] | TITLE: Evidence of equilibrium dynamics in human social networks evolving in
time
ABSTRACT: How do networks of relationships evolve over time? We analyse a dataset
tracking the social interactions of 900 individuals over four years. Despite
continuous shifts in individual relationships, the macroscopic structural
properties of the network remain stable, fluctuating within predictable bounds.
We connect this stability to the concept of equilibrium in statistical physics.
Specifically, we demonstrate that the probabilities governing network dynamics
are stationary over time, and key features like degree, edge, and triangle
abundances align with theoretical predictions from equilibrium dynamics.
Moreover, the dynamics satisfies the detailed balance condition. Remarkably,
equilibrium persists despite constant turnover as people join, leave, and
change connections. This suggests that equilibrium arises not from specific
individuals but from the balancing act of human needs, cognitive limits, and
social pressures. Practically, this equilibrium simplifies data collection,
supports methods relying on single network snapshots (like Exponential Random
Graph Models), and aids in designing interventions for social challenges.
Theoretically, it offers new insights into collective human behaviour,
revealing how emergent properties of complex social systems can be captured by
simple mathematical models.
|
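The detailed balance condition invoked in the abstract above is the standard one for equilibrium Markov dynamics; in the usual notation (stationary distribution $\pi$ over network states and transition probabilities $P$, not symbols taken from the paper):

```latex
% Detailed balance: probability flux between any two network states x and y cancels
\pi(x)\, P(x \to y) \;=\; \pi(y)\, P(y \to x) \quad \text{for all } x, y,
% which implies stationarity of \pi: \sum_x \pi(x)\, P(x \to y) = \pi(y).
```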
2410.13727 | Rajkumar Pujari | Rajkumar Pujari, Dan Goldwasser | LLM-Human Pipeline for Cultural Context Grounding of Conversations | Oral at NAACL 2025 Main conference. Albuquerque, USA. Apr 29 - May 4,
2025. 19 pages, 9 figures, 7 tables | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conversations often adhere to well-understood social norms that vary across
cultures. For example, while "addressing parents by name" is commonplace in the
West, it is rare in most Asian cultures. Adherence or violation of such norms
often dictates the tenor of conversations. Humans are able to navigate social
situations requiring cultural awareness quite adeptly. However, it is a hard
task for NLP models.
In this paper, we tackle this problem by introducing a "Cultural Context
Schema" for conversations. It comprises (1) conversational information such as
emotions, dialogue acts, etc., and (2) cultural information such as social
norms, violations, etc. We generate ~110k social norm and violation
descriptions for ~23k conversations from Chinese culture using LLMs. We refine
them using automated verification strategies which are evaluated against
culturally aware human judgements. We organize these descriptions into
meaningful structures we call "Norm Concepts", using an interactive
human-in-the-loop framework. We ground the norm concepts and the descriptions in
conversations using symbolic annotation. Finally, we use the obtained dataset
for downstream tasks such as emotion, sentiment, and dialogue act detection. We
show that it significantly improves the empirical performance.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 16:33:01 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 16:24:24 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Pujari",
"Rajkumar",
""
],
[
"Goldwasser",
"Dan",
""
]
] | TITLE: LLM-Human Pipeline for Cultural Context Grounding of Conversations
ABSTRACT: Conversations often adhere to well-understood social norms that vary across
cultures. For example, while "addressing parents by name" is commonplace in the
West, it is rare in most Asian cultures. Adherence or violation of such norms
often dictates the tenor of conversations. Humans are able to navigate social
situations requiring cultural awareness quite adeptly. However, it is a hard
task for NLP models.
In this paper, we tackle this problem by introducing a "Cultural Context
Schema" for conversations. It comprises (1) conversational information such as
emotions, dialogue acts, etc., and (2) cultural information such as social
norms, violations, etc. We generate ~110k social norm and violation
descriptions for ~23k conversations from Chinese culture using LLMs. We refine
them using automated verification strategies which are evaluated against
culturally aware human judgements. We organize these descriptions into
meaningful structures we call "Norm Concepts", using an interactive
human-in-the-loop framework. We ground the norm concepts and the descriptions in
conversations using symbolic annotation. Finally, we use the obtained dataset
for downstream tasks such as emotion, sentiment, and dialogue act detection. We
show that it significantly improves the empirical performance.
|
2410.15314 | Vivek Hruday Kavuri | Samarth Garg, Vivek Hruday Kavuri, Gargi Shroff, Rahul Mishra | KTCR: Improving Implicit Hate Detection with Knowledge Transfer driven
Concept Refinement | 9 pages, 4 figures, 2 algorithms, 5 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The constant shifts in social and political contexts, driven by emerging
social movements and political events, lead to new forms of hate content and
previously unrecognized hate patterns that machine learning models may not have
captured. Some recent literature proposes data augmentation-based techniques to
enrich existing hate datasets by incorporating samples that reveal new implicit
hate patterns. This approach aims to improve the model's performance on
out-of-domain implicit hate instances. It is observed, however, that adding
more augmentation samples decreases the model's performance. In this work, we
propose a Knowledge Transfer-driven Concept Refinement
method that distills and refines the concepts related to implicit hate samples
through novel prototype alignment and concept losses, alongside data
augmentation based on concept activation vectors. Experiments with several
publicly available datasets show that incorporating additional implicit samples
reflecting new hate patterns through concept refinement enhances the model's
performance, surpassing baseline results while maintaining cross-dataset
generalization capabilities.
| [
{
"version": "v1",
"created": "Sun, 20 Oct 2024 06:53:04 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 09:48:20 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Garg",
"Samarth",
""
],
[
"Kavuri",
"Vivek Hruday",
""
],
[
"Shroff",
"Gargi",
""
],
[
"Mishra",
"Rahul",
""
]
] | TITLE: KTCR: Improving Implicit Hate Detection with Knowledge Transfer driven
Concept Refinement
ABSTRACT: The constant shifts in social and political contexts, driven by emerging
social movements and political events, lead to new forms of hate content and
previously unrecognized hate patterns that machine learning models may not have
captured. Some recent literature proposes data augmentation-based techniques to
enrich existing hate datasets by incorporating samples that reveal new implicit
hate patterns. This approach aims to improve the model's performance on
out-of-domain implicit hate instances. It is observed, however, that adding
more augmentation samples decreases the model's performance. In this work, we
propose a Knowledge Transfer-driven Concept Refinement
method that distills and refines the concepts related to implicit hate samples
through novel prototype alignment and concept losses, alongside data
augmentation based on concept activation vectors. Experiments with several
publicly available datasets show that incorporating additional implicit samples
reflecting new hate patterns through concept refinement enhances the model's
performance, surpassing baseline results while maintaining cross-dataset
generalization capabilities.
|
2410.16608 | Zhexuan Liu | Zhexuan Liu, Rong Ma, Yiqiao Zhong | Assessing and improving reliability of neighbor embedding methods: a
map-continuity perspective | 49 pages, 20 figures | null | null | null | stat.ME cs.LG stat.CO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visualizing high-dimensional data is essential for understanding biomedical
data and deep learning models. Neighbor embedding methods, such as t-SNE and
UMAP, are widely used but can introduce misleading visual artifacts. We find
that the manifold learning interpretations from many prior works are inaccurate
and that the misuse stems from a lack of data-independent notions of embedding
maps, which project high-dimensional data into a lower-dimensional space.
Leveraging the leave-one-out principle, we introduce LOO-map, a framework that
extends embedding maps beyond discrete points to the entire input space. We
identify two forms of map discontinuity that distort visualizations: one
exaggerates cluster separation and the other creates spurious local structures.
As a remedy, we develop two types of point-wise diagnostic scores to detect
unreliable embedding points and improve hyperparameter selection, which are
validated on datasets from computer vision and single-cell omics.
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 01:40:43 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 02:20:44 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Liu",
"Zhexuan",
""
],
[
"Ma",
"Rong",
""
],
[
"Zhong",
"Yiqiao",
""
]
] | TITLE: Assessing and improving reliability of neighbor embedding methods: a
map-continuity perspective
ABSTRACT: Visualizing high-dimensional data is essential for understanding biomedical
data and deep learning models. Neighbor embedding methods, such as t-SNE and
UMAP, are widely used but can introduce misleading visual artifacts. We find
that the manifold learning interpretations from many prior works are inaccurate
and that the misuse stems from a lack of data-independent notions of embedding
maps, which project high-dimensional data into a lower-dimensional space.
Leveraging the leave-one-out principle, we introduce LOO-map, a framework that
extends embedding maps beyond discrete points to the entire input space. We
identify two forms of map discontinuity that distort visualizations: one
exaggerates cluster separation and the other creates spurious local structures.
As a remedy, we develop two types of point-wise diagnostic scores to detect
unreliable embedding points and improve hyperparameter selection, which are
validated on datasets from computer vision and single-cell omics.
|
2411.06343 | Huanshui Zhang | Hailin Xu, Hongxia Wang, Huanshui Zhang | A novel algorithm for optimizing bundle adjustment in image sequence
alignment | null | null | null | null | math.OC cs.CV | http://creativecommons.org/licenses/by/4.0/ | The Bundle Adjustment (BA) model is commonly optimized using a nonlinear
least squares method, with the Levenberg-Marquardt (L-M) algorithm being a
typical choice. However, despite the L-M algorithm's effectiveness, its
sensitivity to initial conditions often results in slower convergence when
applied to poorly conditioned datasets, motivating the exploration of
alternative optimization strategies. This paper introduces a novel algorithm
for optimizing the BA model in the context of image sequence alignment for
cryo-electron tomography, utilizing optimal control theory to directly optimize
general nonlinear functions. The proposed Optimal Control Algorithm (OCA)
exhibits superior convergence rates and effectively mitigates the oscillatory
behavior frequently observed in the L-M algorithm. Extensive experiments on both
synthetic and real-world datasets were conducted to evaluate the algorithm's
performance. The results demonstrate that the OCA achieves faster convergence
compared to the L-M algorithm. Moreover, the incorporation of a bisection-based
update procedure significantly enhances the OCA's performance, particularly in
poorly initialized datasets. These findings indicate that the OCA can
substantially improve the efficiency of 3D reconstructions in cryo-electron
tomography.
| [
{
"version": "v1",
"created": "Sun, 10 Nov 2024 03:19:33 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 02:21:16 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xu",
"Hailin",
""
],
[
"Wang",
"Hongxia",
""
],
[
"Zhang",
"Huanshui",
""
]
] | TITLE: A novel algorithm for optimizing bundle adjustment in image sequence
alignment
ABSTRACT: The Bundle Adjustment (BA) model is commonly optimized using a nonlinear
least squares method, with the Levenberg-Marquardt (L-M) algorithm being a
typical choice. However, despite the L-M algorithm's effectiveness, its
sensitivity to initial conditions often results in slower convergence when
applied to poorly conditioned datasets, motivating the exploration of
alternative optimization strategies. This paper introduces a novel algorithm
for optimizing the BA model in the context of image sequence alignment for
cryo-electron tomography, utilizing optimal control theory to directly optimize
general nonlinear functions. The proposed Optimal Control Algorithm (OCA)
exhibits superior convergence rates and effectively mitigates the oscillatory
behavior frequently observed in the L-M algorithm. Extensive experiments on both
synthetic and real-world datasets were conducted to evaluate the algorithm's
performance. The results demonstrate that the OCA achieves faster convergence
compared to the L-M algorithm. Moreover, the incorporation of a bisection-based
update procedure significantly enhances the OCA's performance, particularly in
poorly initialized datasets. These findings indicate that the OCA can
substantially improve the efficiency of 3D reconstructions in cryo-electron
tomography.
|
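For context on the L-M baseline discussed in the abstract above, its damped normal-equations step in textbook notation (residual vector $r$, Jacobian $J$, damping parameter $\lambda$; this is the generic algorithm, not the paper's OCA formulation):

```latex
% One Levenberg-Marquardt step for minimizing \tfrac{1}{2}\|r(\theta)\|^2
\big(J^\top J + \lambda\,\mathrm{diag}(J^\top J)\big)\,\delta = -\,J^\top r(\theta), \qquad
\theta \leftarrow \theta + \delta
% (\lambda I replaces \lambda\,\mathrm{diag}(J^\top J) in Levenberg's original variant);
% \lambda is decreased after successful steps and increased otherwise.
```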
2411.08211 | Silvia Dalla | S. Dalla, A. Hutchinson, R.A. Hyndman, K. Kihara, N. Nitta, L.
Rodriguez-Garcia, T. Laitinen, C.O.G. Waterfall and D.S. Brown | Detection asymmetry in solar energetic particle events | A&A, in press | A&A 696, A12 (2025) | 10.1051/0004-6361/202453000 | null | astro-ph.SR physics.space-ph | http://creativecommons.org/licenses/by/4.0/ | Context. Solar energetic particles (SEPs) are detected in interplanetary
space in association with solar flares and coronal mass ejections (CMEs). The
magnetic connection between the observing spacecraft and the solar active
region (AR) source of the event is a key parameter in determining whether SEPs
are observed and the particle event's properties. Aims. We investigate whether
an east-west asymmetry in the detection of SEP events is present in
observations and discuss its possible link to corotation of magnetic flux tubes
with the Sun. Methods. We used a published dataset of 239 CMEs recorded between
2006 and 2017 and having source regions both on the Sun's front and far sides
as seen from Earth. We produced distributions of occurrence of in-situ SEP
intensity enhancements associated with the CME events, versus \Delta\phi, the
longitudinal separation between source active region and spacecraft magnetic
footpoint based on the nominal Parker spiral. We focused on protons of energy
>10 MeV measured by STEREO A, STEREO B and GOES at 1 au. We also considered
occurrences of 71-112 keV electron events detected by MESSENGER between 0.31
and 0.47 au. Results. We find an east-west asymmetry with respect to the best
magnetic connection (\Delta\phi=0) in the detection of >10 MeV proton events
and of 71-112 keV electron events. For protons, observers for which the source
AR is on the east side of the spacecraft footpoint and not well connected
(-180<\Delta\phi<-40) are 93% more likely to detect an SEP event compared to
observers with +40<\Delta\phi<+180. The asymmetry may be a signature of
corotation of magnetic flux tubes with the Sun, since for events with
\Delta\phi<0 corotation sweeps particle-filled flux tubes towards the observing
spacecraft, while for \Delta\phi>0 it takes them away from it. Alternatively it
may be related to asymmetric acceleration or propagation effects.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 22:02:29 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 15:52:01 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Feb 2025 16:14:01 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Dalla",
"S.",
""
],
[
"Hutchinson",
"A.",
""
],
[
"Hyndman",
"R. A.",
""
],
[
"Kihara",
"K.",
""
],
[
"Nitta",
"N.",
""
],
[
"Rodriguez-Garcia",
"L.",
""
],
[
"Laitinen",
"T.",
""
],
[
"Waterfall",
"C. O. G.",
""
],
[
"Brown",
"D. S.",
""
]
] | TITLE: Detection asymmetry in solar energetic particle events
ABSTRACT: Context. Solar energetic particles (SEPs) are detected in interplanetary
space in association with solar flares and coronal mass ejections (CMEs). The
magnetic connection between the observing spacecraft and the solar active
region (AR) source of the event is a key parameter in determining whether SEPs
are observed and the particle event's properties. Aims. We investigate whether
an east-west asymmetry in the detection of SEP events is present in
observations and discuss its possible link to corotation of magnetic flux tubes
with the Sun. Methods. We used a published dataset of 239 CMEs recorded between
2006 and 2017 and having source regions both on the Sun's front and far sides
as seen from Earth. We produced distributions of occurrence of in-situ SEP
intensity enhancements associated with the CME events, versus \Delta\phi, the
longitudinal separation between source active region and spacecraft magnetic
footpoint based on the nominal Parker spiral. We focused on protons of energy
>10 MeV measured by STEREO A, STEREO B and GOES at 1 au. We also considered
occurrences of 71-112 keV electron events detected by MESSENGER between 0.31
and 0.47 au. Results. We find an east-west asymmetry with respect to the best
magnetic connection (\Delta\phi=0) in the detection of >10 MeV proton events
and of 71-112 keV electron events. For protons, observers for which the source
AR is on the east side of the spacecraft footpoint and not well connected
(-180<\Delta\phi<-40) are 93% more likely to detect an SEP event compared to
observers with +40<\Delta\phi<+180. The asymmetry may be a signature of
corotation of magnetic flux tubes with the Sun, since for events with
\Delta\phi<0 corotation sweeps particle-filled flux tubes towards the observing
spacecraft, while for \Delta\phi>0 it takes them away from it. Alternatively it
may be related to asymmetric acceleration or propagation effects.
|
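For reference, the nominal Parker-spiral footpoint used to define $\Delta\phi$ in the abstract above is commonly approximated as below (solar rotation rate $\Omega_\odot$, solar wind speed $v_{\mathrm{sw}}$, heliocentric distance $r$); the exact sign and longitude conventions vary between studies, so this is only the generic relation:

```latex
\phi_{\mathrm{foot}} \simeq \phi_{\mathrm{s/c}} + \frac{\Omega_\odot\, r}{v_{\mathrm{sw}}}, \qquad
\Delta\phi = \phi_{\mathrm{AR}} - \phi_{\mathrm{foot}}
```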
2411.11779 | Enshuo Hsu | Enshuo Hsu, Kirk Roberts | LLM-IE: A Python Package for Generative Information Extraction with
Large Language Models | null | null | 10.1093/jamiaopen/ooaf012 | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Objectives: Despite the recent adoption of large language models (LLMs) for
biomedical information extraction, challenges in prompt engineering and
algorithms persist, with no dedicated software available. To address this, we
developed LLM-IE: a Python package for building complete information extraction
pipelines. Our key innovation is an interactive LLM agent to support schema
definition and prompt design.
Materials and Methods: LLM-IE supports named entity recognition, entity
attribute extraction, and relation extraction tasks. We benchmarked on the i2b2
datasets and conducted a system evaluation.
Results: The sentence-based prompting algorithm resulted in the best
performance while requiring a longer inference time. System evaluation provided
intuitive visualization.
Discussion: LLM-IE was designed from practical NLP experience in healthcare
and has been adopted in internal projects. It should hold great value to the
biomedical NLP community.
Conclusion: We developed a Python package, LLM-IE, that provides building
blocks for robust information extraction pipeline construction.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2024 17:56:13 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Hsu",
"Enshuo",
""
],
[
"Roberts",
"Kirk",
""
]
] | TITLE: LLM-IE: A Python Package for Generative Information Extraction with
Large Language Models
ABSTRACT: Objectives: Despite the recent adoption of large language models (LLMs) for
biomedical information extraction, challenges in prompt engineering and
algorithms persist, with no dedicated software available. To address this, we
developed LLM-IE: a Python package for building complete information extraction
pipelines. Our key innovation is an interactive LLM agent to support schema
definition and prompt design.
Materials and Methods: LLM-IE supports named entity recognition, entity
attribute extraction, and relation extraction tasks. We benchmarked on the i2b2
datasets and conducted a system evaluation.
Results: The sentence-based prompting algorithm resulted in the best
performance while requiring a longer inference time. System evaluation provided
intuitive visualization.
Discussion: LLM-IE was designed from practical NLP experience in healthcare
and has been adopted in internal projects. It should hold great value to the
biomedical NLP community.
Conclusion: We developed a Python package, LLM-IE, that provides building
blocks for robust information extraction pipeline construction.
|
2411.12972 | Yuan Yuan | Yuan Yuan, Jingtao Ding, Chonghua Han, Zhi Sheng, Depeng Jin, Yong Li | UniFlow: A Foundation Model for Unified Urban Spatio-Temporal Flow
Prediction | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Urban spatio-temporal flow prediction, encompassing traffic flows and crowd
flows, is crucial for optimizing city infrastructure and managing traffic and
emergency responses. Traditional approaches have relied on separate models
tailored to either grid-based data, representing cities as uniform cells, or
graph-based data, modeling cities as networks of nodes and edges. In this
paper, we build UniFlow, a foundational model for general urban flow prediction
that unifies both grid-based and graph-based data. We first design a multi-view
spatio-temporal patching mechanism to standardize different data into a
consistent sequential format and then introduce a spatio-temporal transformer
architecture to capture complex correlations and dynamics. To leverage shared
spatio-temporal patterns across different data types and facilitate effective
cross-learning, we propose SpatioTemporal Memory Retrieval Augmentation
(ST-MRA). By creating structured memory modules to store shared spatio-temporal
patterns, ST-MRA enhances predictions through adaptive memory retrieval.
Extensive experiments demonstrate that UniFlow outperforms existing models in
both grid-based and graph-based flow prediction, excelling particularly in
scenarios with limited data availability, showcasing its superior performance
and broad applicability. The datasets and code implementation have been
released on https://github.com/YuanYuan98/UniFlow.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 01:54:52 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 11:18:41 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 01:32:13 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Yuan",
"Yuan",
""
],
[
"Ding",
"Jingtao",
""
],
[
"Han",
"Chonghua",
""
],
[
"Sheng",
"Zhi",
""
],
[
"Jin",
"Depeng",
""
],
[
"Li",
"Yong",
""
]
] | TITLE: UniFlow: A Foundation Model for Unified Urban Spatio-Temporal Flow
Prediction
ABSTRACT: Urban spatio-temporal flow prediction, encompassing traffic flows and crowd
flows, is crucial for optimizing city infrastructure and managing traffic and
emergency responses. Traditional approaches have relied on separate models
tailored to either grid-based data, representing cities as uniform cells, or
graph-based data, modeling cities as networks of nodes and edges. In this
paper, we build UniFlow, a foundational model for general urban flow prediction
that unifies both grid-based and graph-based data. We first design a multi-view
spatio-temporal patching mechanism to standardize different data into a
consistent sequential format and then introduce a spatio-temporal transformer
architecture to capture complex correlations and dynamics. To leverage shared
spatio-temporal patterns across different data types and facilitate effective
cross-learning, we propose SpatioTemporal Memory Retrieval Augmentation
(ST-MRA). By creating structured memory modules to store shared spatio-temporal
patterns, ST-MRA enhances predictions through adaptive memory retrieval.
Extensive experiments demonstrate that UniFlow outperforms existing models in
both grid-based and graph-based flow prediction, excelling particularly in
scenarios with limited data availability, showcasing its superior performance
and broad applicability. The datasets and code implementation have been
released on https://github.com/YuanYuan98/UniFlow.
|
2411.16801 | Yisol Choi | Yisol Choi, Sangkyung Kwak, Sihyun Yu, Hyungwon Choi, Jinwoo Shin | Controllable Human Image Generation with Personalized Multi-Garments | CVPR 2025. Project page: https://omnious.github.io/BootComp | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present BootComp, a novel framework based on text-to-image diffusion
models for controllable human image generation with multiple reference
garments. Here, the main bottleneck is data acquisition for training:
collecting a large-scale dataset of high-quality reference garment images per
human subject is quite challenging, i.e., ideally, one needs to manually gather
every single garment photograph worn by each human. To address this, we propose
a data generation pipeline to construct a large synthetic dataset, consisting
of human and multiple-garment pairs, by introducing a model to extract any
reference garment images from each human image. To ensure data quality, we also
propose a filtering strategy to remove undesirable generated data based on
measuring perceptual similarities between the garment presented in the human
image and the extracted garment. Finally, by utilizing the constructed synthetic dataset,
we train a diffusion model having two parallel denoising paths that use
multiple garment images as conditions to generate human images while preserving
their fine-grained details. We further show the wide applicability of our
framework by adapting it to different types of reference-based generation in
the fashion domain, including virtual try-on, and controllable human image
generation with other conditions, e.g., pose, face, etc.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 12:37:13 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 08:27:25 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 04:36:01 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Choi",
"Yisol",
""
],
[
"Kwak",
"Sangkyung",
""
],
[
"Yu",
"Sihyun",
""
],
[
"Choi",
"Hyungwon",
""
],
[
"Shin",
"Jinwoo",
""
]
] | TITLE: Controllable Human Image Generation with Personalized Multi-Garments
ABSTRACT: We present BootComp, a novel framework based on text-to-image diffusion
models for controllable human image generation with multiple reference
garments. Here, the main bottleneck is data acquisition for training:
collecting a large-scale dataset of high-quality reference garment images per
human subject is quite challenging, i.e., ideally, one needs to manually gather
every single garment photograph worn by each human. To address this, we propose
a data generation pipeline to construct a large synthetic dataset, consisting
of human and multiple-garment pairs, by introducing a model to extract any
reference garment images from each human image. To ensure data quality, we also
propose a filtering strategy to remove undesirable generated data based on
measuring perceptual similarities between the garment presented in the human
image and the extracted garment. Finally, by utilizing the constructed synthetic dataset,
we train a diffusion model having two parallel denoising paths that use
multiple garment images as conditions to generate human images while preserving
their fine-grained details. We further show the wide-applicability of our
framework by adapting it to different types of reference-based generation in
the fashion domain, including virtual try-on, and controllable human image
generation with other conditions, e.g., pose, face, etc.
|
2412.01095 | Muchao Ye | Muchao Ye, Weiyang Liu, Pan He | VERA: Explainable Video Anomaly Detection via Verbalized Learning of
Vision-Language Models | Accepted in CVPR 2025 | null | null | null | cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancement of vision-language models (VLMs) has established a new
paradigm in video anomaly detection (VAD): leveraging VLMs to simultaneously
detect anomalies and provide comprehensible explanations for the decisions.
Existing work in this direction often assumes the complex reasoning required
for VAD exceeds the capabilities of pretrained VLMs. Consequently, these
approaches either incorporate specialized reasoning modules during inference or
rely on instruction tuning datasets through additional training to adapt VLMs
for VAD. However, such strategies often incur substantial computational costs
or data annotation overhead. To address these challenges in explainable VAD, we
introduce a verbalized learning framework named VERA that enables VLMs to
perform VAD without model parameter modifications. Specifically, VERA
automatically decomposes the complex reasoning required for VAD into
reflections on simpler, more focused guiding questions capturing distinct
abnormal patterns. It treats these reflective questions as learnable parameters
and optimizes them through data-driven verbal interactions between learner and
optimizer VLMs, using coarsely labeled training data. During inference, VERA
embeds the learned questions into model prompts to guide VLMs in generating
segment-level anomaly scores, which are then refined into frame-level scores
via the fusion of scene and temporal contexts. Experimental results on
challenging benchmarks demonstrate that the learned questions of VERA are
highly adaptable, significantly improving both detection performance and
explainability of VLMs for VAD.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 04:10:14 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 02:26:40 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 20:17:27 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ye",
"Muchao",
""
],
[
"Liu",
"Weiyang",
""
],
[
"He",
"Pan",
""
]
] | TITLE: VERA: Explainable Video Anomaly Detection via Verbalized Learning of
Vision-Language Models
ABSTRACT: The rapid advancement of vision-language models (VLMs) has established a new
paradigm in video anomaly detection (VAD): leveraging VLMs to simultaneously
detect anomalies and provide comprehensible explanations for the decisions.
Existing work in this direction often assumes the complex reasoning required
for VAD exceeds the capabilities of pretrained VLMs. Consequently, these
approaches either incorporate specialized reasoning modules during inference or
rely on instruction tuning datasets through additional training to adapt VLMs
for VAD. However, such strategies often incur substantial computational costs
or data annotation overhead. To address these challenges in explainable VAD, we
introduce a verbalized learning framework named VERA that enables VLMs to
perform VAD without model parameter modifications. Specifically, VERA
automatically decomposes the complex reasoning required for VAD into
reflections on simpler, more focused guiding questions capturing distinct
abnormal patterns. It treats these reflective questions as learnable parameters
and optimizes them through data-driven verbal interactions between learner and
optimizer VLMs, using coarsely labeled training data. During inference, VERA
embeds the learned questions into model prompts to guide VLMs in generating
segment-level anomaly scores, which are then refined into frame-level scores
via the fusion of scene and temporal contexts. Experimental results on
challenging benchmarks demonstrate that the learned questions of VERA are
highly adaptable, significantly improving both detection performance and
explainability of VLMs for VAD.
|
2412.03526 | Jiahui Huang | Hanxue Liang, Jiawei Ren, Ashkan Mirzaei, Antonio Torralba, Ziwei Liu,
Igor Gilitschenski, Sanja Fidler, Cengiz Oztireli, Huan Ling, Zan Gojcic,
Jiahui Huang | Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular
Videos | Project website:
https://research.nvidia.com/labs/toronto-ai/bullet-timer/ | null | null | null | cs.CV cs.AI cs.GR | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in static feed-forward scene reconstruction have
demonstrated significant progress in high-quality novel view synthesis.
However, these models often struggle with generalizability across diverse
environments and fail to effectively handle dynamic content. We present BTimer
(short for BulletTimer), the first motion-aware feed-forward model for
real-time reconstruction and novel view synthesis of dynamic scenes. Our
approach reconstructs the full scene in a 3D Gaussian Splatting representation
at a given target ('bullet') timestamp by aggregating information from all the
context frames. Such a formulation allows BTimer to gain scalability and
generalization by leveraging both static and dynamic scene datasets. Given a
casual monocular dynamic video, BTimer reconstructs a bullet-time scene within
150ms while reaching state-of-the-art performance on both static and dynamic
scene datasets, even compared with optimization-based approaches.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 18:15:06 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 06:04:34 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Liang",
"Hanxue",
""
],
[
"Ren",
"Jiawei",
""
],
[
"Mirzaei",
"Ashkan",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Gilitschenski",
"Igor",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Oztireli",
"Cengiz",
""
],
[
"Ling",
"Huan",
""
],
[
"Gojcic",
"Zan",
""
],
[
"Huang",
"Jiahui",
""
]
] | TITLE: Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular
Videos
ABSTRACT: Recent advancements in static feed-forward scene reconstruction have
demonstrated significant progress in high-quality novel view synthesis.
However, these models often struggle with generalizability across diverse
environments and fail to effectively handle dynamic content. We present BTimer
(short for BulletTimer), the first motion-aware feed-forward model for
real-time reconstruction and novel view synthesis of dynamic scenes. Our
approach reconstructs the full scene in a 3D Gaussian Splatting representation
at a given target ('bullet') timestamp by aggregating information from all the
context frames. Such a formulation allows BTimer to gain scalability and
generalization by leveraging both static and dynamic scene datasets. Given a
casual monocular dynamic video, BTimer reconstructs a bullet-time scene within
150ms while reaching state-of-the-art performance on both static and dynamic
scene datasets, even compared with optimization-based approaches.
|
2412.10545 | Brandon Gower-Winter | Brandon Gower-Winter, Georg Krempl, Sergey Dragomiretskiy, Tineke
Jelsma and Arno Siebes | Identifying Predictions That Influence the Future: Detecting
Performative Concept Drift in Data Streams | 21 pages, 17 figures. Extended version of paper with the same name
accepted to AAAI2025 v2.0 updated the figures and text to more align with
conference paper. Acknowledgements Section added | null | null | null | cs.LG cs.CR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Concept Drift has been extensively studied within the context of Stream
Learning. However, it is often assumed that the deployed model's predictions
play no role in the concept drift the system experiences. Closer inspection
reveals that this is not always the case. Automated trading might be prone to
self-fulfilling feedback loops. Likewise, malicious entities might adapt to
evade detectors in the adversarial setting resulting in a self-negating
feedback loop that requires the deployed models to constantly retrain. Such
settings where a model may induce concept drift are called performative. In
this work, we investigate this phenomenon. Our contributions are as follows:
First, we define performative drift within a stream learning setting and
distinguish it from other causes of drift. We introduce a novel type of drift
detection task, aimed at identifying potential performative concept drift in
data streams. We propose a first such performative drift detection approach,
called CheckerBoard Performative Drift Detection (CB-PDD). We apply CB-PDD to
both synthetic and semi-synthetic datasets that exhibit varying degrees of
self-fulfilling feedback loops. Results are positive, with CB-PDD showing high
efficacy, low false detection rates, resilience to intrinsic drift,
comparability to other drift detection techniques, and an ability to
effectively detect performative drift in semi-synthetic datasets. Secondly, we
highlight the role intrinsic (traditional) drift plays in obfuscating
performative drift and discuss the implications of these findings as well as
the limitations of CB-PDD.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 20:45:18 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 16:59:58 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Gower-Winter",
"Brandon",
""
],
[
"Krempl",
"Georg",
""
],
[
"Dragomiretskiy",
"Sergey",
""
],
[
"Jelsma",
"Tineke",
""
],
[
"Siebes",
"Arno",
""
]
] | TITLE: Identifying Predictions That Influence the Future: Detecting
Performative Concept Drift in Data Streams
ABSTRACT: Concept Drift has been extensively studied within the context of Stream
Learning. However, it is often assumed that the deployed model's predictions
play no role in the concept drift the system experiences. Closer inspection
reveals that this is not always the case. Automated trading might be prone to
self-fulfilling feedback loops. Likewise, malicious entities might adapt to
evade detectors in the adversarial setting resulting in a self-negating
feedback loop that requires the deployed models to constantly retrain. Such
settings where a model may induce concept drift are called performative. In
this work, we investigate this phenomenon. Our contributions are as follows:
First, we define performative drift within a stream learning setting and
distinguish it from other causes of drift. We introduce a novel type of drift
detection task, aimed at identifying potential performative concept drift in
data streams. We propose a first such performative drift detection approach,
called CheckerBoard Performative Drift Detection (CB-PDD). We apply CB-PDD to
both synthetic and semi-synthetic datasets that exhibit varying degrees of
self-fulfilling feedback loops. Results are positive, with CB-PDD showing high
efficacy, low false detection rates, resilience to intrinsic drift,
comparability to other drift detection techniques, and an ability to
effectively detect performative drift in semi-synthetic datasets. Secondly, we
highlight the role intrinsic (traditional) drift plays in obfuscating
performative drift and discuss the implications of these findings as well as
the limitations of CB-PDD.
|
2412.11923 | Sepideh Mamooler | Sepideh Mamooler, Syrielle Montariol, Alexander Mathis, Antoine
Bosselut | PICLe: Pseudo-Annotations for In-Context Learning in Low-Resource Named
Entity Detection | In Proceedings of NAACL2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In-context learning (ICL) enables Large Language Models (LLMs) to perform
tasks using few demonstrations, facilitating task adaptation when labeled
examples are hard to obtain. However, ICL is sensitive to the choice of
demonstrations, and it remains unclear which demonstration attributes enable
in-context generalization. In this work, we conduct a perturbation study of
in-context demonstrations for low-resource Named Entity Detection (NED). Our
surprising finding is that in-context demonstrations with partially correct
annotated entity mentions can be as effective for task transfer as fully
correct demonstrations. Based on our findings, we propose Pseudo-annotated
In-Context Learning (PICLe), a framework for in-context learning with noisy,
pseudo-annotated demonstrations. PICLe leverages LLMs to annotate many
demonstrations in a zero-shot first pass. We then cluster these synthetic
demonstrations, sample specific sets of in-context demonstrations from each
cluster, and predict entity mentions using each set independently. Finally, we
use self-verification to select the final set of entity mentions. We evaluate
PICLe on five biomedical NED datasets and show that, with zero human
annotation, PICLe outperforms ICL in low-resource settings where limited gold
examples can be used as in-context demonstrations.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 16:09:35 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 12:45:58 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Mamooler",
"Sepideh",
""
],
[
"Montariol",
"Syrielle",
""
],
[
"Mathis",
"Alexander",
""
],
[
"Bosselut",
"Antoine",
""
]
] | TITLE: PICLe: Pseudo-Annotations for In-Context Learning in Low-Resource Named
Entity Detection
ABSTRACT: In-context learning (ICL) enables Large Language Models (LLMs) to perform
tasks using few demonstrations, facilitating task adaptation when labeled
examples are hard to obtain. However, ICL is sensitive to the choice of
demonstrations, and it remains unclear which demonstration attributes enable
in-context generalization. In this work, we conduct a perturbation study of
in-context demonstrations for low-resource Named Entity Detection (NED). Our
surprising finding is that in-context demonstrations with partially correct
annotated entity mentions can be as effective for task transfer as fully
correct demonstrations. Based on our findings, we propose Pseudo-annotated
In-Context Learning (PICLe), a framework for in-context learning with noisy,
pseudo-annotated demonstrations. PICLe leverages LLMs to annotate many
demonstrations in a zero-shot first pass. We then cluster these synthetic
demonstrations, sample specific sets of in-context demonstrations from each
cluster, and predict entity mentions using each set independently. Finally, we
use self-verification to select the final set of entity mentions. We evaluate
PICLe on five biomedical NED datasets and show that, with zero human
annotation, PICLe outperforms ICL in low-resource settings where limited gold
examples can be used as in-context demonstrations.
|
2412.12083 | Zhibing Li | Zhibing Li, Tong Wu, Jing Tan, Mengchen Zhang, Jiaqi Wang, Dahua Lin | IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and
Illuminations | ICLR 2025. Project Page: https://lizb6626.github.io/IDArb/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Capturing geometric and material information from images remains a
fundamental challenge in computer vision and graphics. Traditional
optimization-based methods often require hours of computational time to
reconstruct geometry, material properties, and environmental lighting from
dense multi-view inputs, while still struggling with inherent ambiguities
between lighting and material. On the other hand, learning-based approaches
leverage rich material priors from existing 3D object datasets but face
challenges with maintaining multi-view consistency. In this paper, we introduce
IDArb, a diffusion-based model designed to perform intrinsic decomposition on
an arbitrary number of images under varying illuminations. Our method achieves
accurate and multi-view consistent estimation on surface normals and material
properties. This is made possible through a novel cross-view, cross-domain
attention module and an illumination-augmented, view-adaptive training
strategy. Additionally, we introduce ARB-Objaverse, a new dataset that provides
large-scale multi-view intrinsic data and renderings under diverse lighting
conditions, supporting robust training. Extensive experiments demonstrate that
IDArb outperforms state-of-the-art methods both qualitatively and
quantitatively. Moreover, our approach facilitates a range of downstream tasks,
including single-image relighting, photometric stereo, and 3D reconstruction,
highlighting its broad applications in realistic 3D content creation.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 18:52:56 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 15:02:48 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 16:23:56 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Zhibing",
""
],
[
"Wu",
"Tong",
""
],
[
"Tan",
"Jing",
""
],
[
"Zhang",
"Mengchen",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Lin",
"Dahua",
""
]
] | TITLE: IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and
Illuminations
ABSTRACT: Capturing geometric and material information from images remains a
fundamental challenge in computer vision and graphics. Traditional
optimization-based methods often require hours of computational time to
reconstruct geometry, material properties, and environmental lighting from
dense multi-view inputs, while still struggling with inherent ambiguities
between lighting and material. On the other hand, learning-based approaches
leverage rich material priors from existing 3D object datasets but face
challenges with maintaining multi-view consistency. In this paper, we introduce
IDArb, a diffusion-based model designed to perform intrinsic decomposition on
an arbitrary number of images under varying illuminations. Our method achieves
accurate and multi-view consistent estimation on surface normals and material
properties. This is made possible through a novel cross-view, cross-domain
attention module and an illumination-augmented, view-adaptive training
strategy. Additionally, we introduce ARB-Objaverse, a new dataset that provides
large-scale multi-view intrinsic data and renderings under diverse lighting
conditions, supporting robust training. Extensive experiments demonstrate that
IDArb outperforms state-of-the-art methods both qualitatively and
quantitatively. Moreover, our approach facilitates a range of downstream tasks,
including single-image relighting, photometric stereo, and 3D reconstruction,
highlighting its broad applications in realistic 3D content creation.
|
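As a rough illustration of the cross-view attention idea mentioned in the IDArb abstract above, the sketch below lets tokens from all input views attend to one another with plain scaled dot-product attention. It is a generic mechanism demo under assumed tensor shapes, not the paper's actual cross-view, cross-domain module.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(tokens):
    # tokens: (n_views, n_tokens, dim) features from several views of one object
    n_views, n_tokens, dim = tokens.shape
    flat = tokens.reshape(n_views * n_tokens, dim)   # every view attends to all views
    scores = flat @ flat.T / np.sqrt(dim)            # pairwise similarity
    attended = softmax(scores) @ flat                # aggregate information across views
    return attended.reshape(n_views, n_tokens, dim)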
2412.14642 | Jiatong Li | Jiatong Li, Junxian Li, Yunqing Liu, Dongzhan Zhou, and Qing Li | TOMG-Bench: Evaluating LLMs on Text-based Open Molecule Generation | The first benchmark for text-based open molecule generation | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, we propose Text-based Open Molecule Generation Benchmark
(TOMG-Bench), the first benchmark to evaluate the open-domain molecule
generation capability of LLMs. TOMG-Bench encompasses a dataset of three major
tasks: molecule editing (MolEdit), molecule optimization (MolOpt), and
customized molecule generation (MolCustom). Each major task further contains
three subtasks, while each subtask comprises 5,000 test samples. Given the
inherent complexity of open molecule generation evaluation, we also developed
an automated evaluation system that helps measure both the quality and the
accuracy of the generated molecules. Our comprehensive benchmarking of 25 LLMs
reveals the current limitations as well as potential areas for improvement in
text-guided molecule discovery. Furthermore, we propose OpenMolIns, a
specialized instruction tuning dataset established for solving challenges
raised by TOMG-Bench. Fine-tuned on OpenMolIns, Llama3.1-8B outperforms all
open-source general LLMs, even surpassing GPT-3.5-turbo by 46.5\% on

TOMG-Bench. Our codes and datasets are available through
https://github.com/phenixace/TOMG-Bench.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2024 08:51:16 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 16:18:55 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Jiatong",
""
],
[
"Li",
"Junxian",
""
],
[
"Liu",
"Yunqing",
""
],
[
"Zhou",
"Dongzhan",
""
],
[
"Li",
"Qing",
""
]
] | TITLE: TOMG-Bench: Evaluating LLMs on Text-based Open Molecule Generation
ABSTRACT: In this paper, we propose Text-based Open Molecule Generation Benchmark
(TOMG-Bench), the first benchmark to evaluate the open-domain molecule
generation capability of LLMs. TOMG-Bench encompasses a dataset of three major
tasks: molecule editing (MolEdit), molecule optimization (MolOpt), and
customized molecule generation (MolCustom). Each major task further contains
three subtasks, while each subtask comprises 5,000 test samples. Given the
inherent complexity of open molecule generation evaluation, we also developed
an automated evaluation system that helps measure both the quality and the
accuracy of the generated molecules. Our comprehensive benchmarking of 25 LLMs
reveals the current limitations as well as potential areas for improvement in
text-guided molecule discovery. Furthermore, we propose OpenMolIns, a
specialized instruction tuning dataset established for solving challenges
raised by TOMG-Bench. Fine-tuned on OpenMolIns, Llama3.1-8B outperforms all
open-source general LLMs, even surpassing GPT-3.5-turbo by 46.5\% on
TOMG-Bench. Our codes and datasets are available through
https://github.com/phenixace/TOMG-Bench.
|
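One building block an automated evaluator for open molecule generation typically needs is a validity check on generated SMILES strings. The sketch below uses RDKit for parsing, which is an assumption on my part; TOMG-Bench's own evaluation system may rely on different tooling and measures far more than validity.

from rdkit import Chem

def validity_rate(smiles_list):
    # A SMILES string is counted as valid if RDKit can parse it into a molecule.
    valid = sum(Chem.MolFromSmiles(s) is not None for s in smiles_list)
    return valid / max(len(smiles_list), 1)

print(validity_rate(["CCO", "c1ccccc1", "not_a_molecule"]))  # -> 0.666...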
2412.16367 | Saakaar Bhatnagar | Saakaar Bhatnagar, Andrew Comerford, Zelu Xu, Simone Reitano, Luigi
Scrimieri, Luca Giuliano, Araz Banaeizadeh | A Layered Swarm Optimization Method for Fitting Battery Thermal Runaway
Models to Accelerating Rate Calorimetry Data | null | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | Thermal runaway in lithium-ion batteries is a critical safety concern for the
battery industry due to its potential to cause uncontrolled temperature rises
and subsequent fires that can engulf the battery pack and its surroundings.
Modeling and simulation offer cost-effective tools for designing strategies to
mitigate thermal runaway. Accurately simulating the chemical kinetics of
thermal runaway, commonly represented by systems of Arrhenius-based Ordinary
Differential Equations (ODEs), requires fitting kinetic parameters to
experimental calorimetry data, such as Accelerating Rate Calorimetry (ARC)
measurements. However, existing fitting methods often rely on empirical
assumptions and simplifications that compromise generality or require manual
tuning during the fitting process. Particle Swarm Optimization (PSO) offers a
promising approach for directly fitting kinetic parameters to experimental
data. Yet, for systems composed of multiple Arrhenius ODEs, the computational
cost of fitting using a brute-force approach that searches the entire parameter
space simultaneously can become prohibitive. This work introduces a
divide-and-conquer approach based on PSO to fit N-equation Arrhenius ODE models
to ARC data. The proposed method achieves more accurate parameter fitting
compared to the brute-force method while maintaining low computational costs.
The method is analyzed using two distinct ARC datasets, and the resulting
models are further validated through simulations of 3D ARC and oven tests,
showing excellent agreement with experimental data and alignment with expected
trends.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2024 21:57:48 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Dec 2024 08:47:57 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Feb 2025 20:58:54 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 14:53:13 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Bhatnagar",
"Saakaar",
""
],
[
"Comerford",
"Andrew",
""
],
[
"Xu",
"Zelu",
""
],
[
"Reitano",
"Simone",
""
],
[
"Scrimieri",
"Luigi",
""
],
[
"Giuliano",
"Luca",
""
],
[
"Banaeizadeh",
"Araz",
""
]
] | TITLE: A Layered Swarm Optimization Method for Fitting Battery Thermal Runaway
Models to Accelerating Rate Calorimetry Data
ABSTRACT: Thermal runaway in lithium-ion batteries is a critical safety concern for the
battery industry due to its potential to cause uncontrolled temperature rises
and subsequent fires that can engulf the battery pack and its surroundings.
Modeling and simulation offer cost-effective tools for designing strategies to
mitigate thermal runaway. Accurately simulating the chemical kinetics of
thermal runaway, commonly represented by systems of Arrhenius-based Ordinary
Differential Equations (ODEs), requires fitting kinetic parameters to
experimental calorimetry data, such as Accelerating Rate Calorimetry (ARC)
measurements. However, existing fitting methods often rely on empirical
assumptions and simplifications that compromise generality or require manual
tuning during the fitting process. Particle Swarm Optimization (PSO) offers a
promising approach for directly fitting kinetic parameters to experimental
data. Yet, for systems composed of multiple Arrhenius ODEs, the computational
cost of fitting using a brute-force approach that searches the entire parameter
space simultaneously can become prohibitive. This work introduces a
divide-and-conquer approach based on PSO to fit N-equation Arrhenius ODE models
to ARC data. The proposed method achieves more accurate parameter fitting
compared to the brute-force method while maintaining low computational costs.
The method is analyzed using two distinct ARC datasets, and the resulting
models are further validated through simulations of 3D ARC and oven tests,
showing excellent agreement with experimental data and alignment with expected
trends.
|
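To ground the idea of fitting Arrhenius kinetics with particle swarm optimization, here is a self-contained sketch that recovers (k0, Ea) for a single adiabatic Arrhenius ODE from a synthetic temperature trace. It is a plain, single-layer PSO under assumed reaction and heat-release parameters, not the layered divide-and-conquer scheme or the real ARC data used in the paper.

import numpy as np
from scipy.integrate import solve_ivp

R_GAS = 8.314  # J/(mol K)

def arrhenius_rhs(t, y, k0, Ea, dH):
    # y = [alpha, T]: conversion and temperature for one adiabatic reaction;
    # dH lumps heat release over heat capacity (temperature rise per unit conversion).
    alpha, T = y
    rate = k0 * np.exp(-Ea / (R_GAS * T)) * (1.0 - alpha)
    return [rate, dH * rate]

def simulate(params, t_eval, y0=(0.0, 400.0), dH=150.0):
    k0, Ea = params
    sol = solve_ivp(arrhenius_rhs, (t_eval[0], t_eval[-1]), y0, t_eval=t_eval,
                    args=(k0, Ea, dH), method="LSODA", rtol=1e-6)
    if not sol.success or sol.y.shape[1] != t_eval.size:
        return None
    return sol.y[1]  # temperature trace

def loss(params, t_eval, T_obs):
    T_sim = simulate(params, t_eval)
    return 1e12 if T_sim is None else float(np.mean((T_sim - T_obs) ** 2))

def pso(loss_fn, bounds, n_particles=20, n_iter=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([loss_fn(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([loss_fn(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

t = np.linspace(0.0, 600.0, 200)
T_obs = simulate((1e8, 9e4), t)                 # synthetic "ARC-like" ground truth
bounds = np.array([[1e6, 1e10], [5e4, 1.5e5]])  # search ranges for k0 and Ea
best, err = pso(lambda p: loss(p, t, T_obs), bounds)
print("recovered (k0, Ea):", best, "mse:", err)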
2412.16698 | Tongfei Bian | Tongfei Bian, Yiming Ma, Mathieu Chollet, Victor Sanchez, and Tanaya
Guha | Interact with me: Joint Egocentric Forecasting of Intent to Interact,
Attitude and Social Actions | Accepted at ICME, 2025 | null | null | null | cs.CV cs.HC | http://creativecommons.org/publicdomain/zero/1.0/ | For efficient human-agent interaction, an agent should proactively recognize
their target user and prepare for upcoming interactions. We formulate this
challenging problem as the novel task of jointly forecasting a person's intent
to interact with the agent, their attitude towards the agent and the action
they will perform, from the agent's (egocentric) perspective. To this end, we propose
\emph{SocialEgoNet} - a graph-based spatiotemporal framework that exploits task
dependencies through a hierarchical multitask learning approach. SocialEgoNet
uses whole-body skeletons (keypoints from face, hands and body) extracted from
only 1 second of video input for high inference speed. For evaluation, we
augment an existing egocentric human-agent interaction dataset with new class
labels and bounding box annotations. Extensive experiments on this augmented
dataset, named JPL-Social, demonstrate \emph{real-time} inference and superior
performance (average accuracy across all tasks: 83.15\%) of our model, which
outperforms several competitive baselines. The additional annotations and
code will be available upon acceptance.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2024 16:54:28 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 20:33:59 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Bian",
"Tongfei",
""
],
[
"Ma",
"Yiming",
""
],
[
"Chollet",
"Mathieu",
""
],
[
"Sanchez",
"Victor",
""
],
[
"Guha",
"Tanaya",
""
]
] | TITLE: Interact with me: Joint Egocentric Forecasting of Intent to Interact,
Attitude and Social Actions
ABSTRACT: For efficient human-agent interaction, an agent should proactively recognize
their target user and prepare for upcoming interactions. We formulate this
challenging problem as the novel task of jointly forecasting a person's intent
to interact with the agent, their attitude towards the agent and the action
they will perform, from the agent's (egocentric) perspective. To this end, we propose
\emph{SocialEgoNet} - a graph-based spatiotemporal framework that exploits task
dependencies through a hierarchical multitask learning approach. SocialEgoNet
uses whole-body skeletons (keypoints from face, hands and body) extracted from
only 1 second of video input for high inference speed. For evaluation, we
augment an existing egocentric human-agent interaction dataset with new class
labels and bounding box annotations. Extensive experiments on this augmented
dataset, named JPL-Social, demonstrate \emph{real-time} inference and superior
performance (average accuracy across all tasks: 83.15\%) of our model, which
outperforms several competitive baselines. The additional annotations and
code will be available upon acceptance.
|
2412.16855 | Xin Zhang | Xin Zhang, Yanzhao Zhang, Wen Xie, Mingxin Li, Ziqi Dai, Dingkun Long,
Pengjun Xie, Meishan Zhang, Wenjie Li, Min Zhang | GME: Improving Universal Multimodal Retrieval by Multimodal LLMs | Accepted to CVPR 2025, models at
https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-2B-Instruct | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Universal Multimodal Retrieval (UMR) aims to enable search across various
modalities using a unified model, where queries and candidates can consist of
pure text, images, or a combination of both. Previous work has attempted to
adopt multimodal large language models (MLLMs) to realize UMR using only text
data. However, our preliminary experiments demonstrate that more diverse
multimodal training data can further unlock the potential of MLLMs. Despite its
effectiveness, the existing multimodal training data is highly imbalanced in
terms of modality, which motivates us to develop a training data synthesis
pipeline and construct a large-scale, high-quality fused-modal training
dataset. Based on the synthetic training data, we develop the General
Multimodal Embedder (GME), an MLLM-based dense retriever designed for UMR.
Furthermore, we construct a comprehensive UMR Benchmark (UMRB) to evaluate the
effectiveness of our approach. Experimental results show that our method
achieves state-of-the-art performance among existing UMR methods. Last, we
provide in-depth analyses of model scaling and training strategies, and perform
ablation studies on both the model and synthetic data.
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2024 04:40:24 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 08:48:04 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhang",
"Xin",
""
],
[
"Zhang",
"Yanzhao",
""
],
[
"Xie",
"Wen",
""
],
[
"Li",
"Mingxin",
""
],
[
"Dai",
"Ziqi",
""
],
[
"Long",
"Dingkun",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Zhang",
"Meishan",
""
],
[
"Li",
"Wenjie",
""
],
[
"Zhang",
"Min",
""
]
] | TITLE: GME: Improving Universal Multimodal Retrieval by Multimodal LLMs
ABSTRACT: Universal Multimodal Retrieval (UMR) aims to enable search across various
modalities using a unified model, where queries and candidates can consist of
pure text, images, or a combination of both. Previous work has attempted to
adopt multimodal large language models (MLLMs) to realize UMR using only text
data. However, our preliminary experiments demonstrate that more diverse
multimodal training data can further unlock the potential of MLLMs. Despite its
effectiveness, the existing multimodal training data is highly imbalanced in
terms of modality, which motivates us to develop a training data synthesis
pipeline and construct a large-scale, high-quality fused-modal training
dataset. Based on the synthetic training data, we develop the General
Multimodal Embedder (GME), an MLLM-based dense retriever designed for UMR.
Furthermore, we construct a comprehensive UMR Benchmark (UMRB) to evaluate the
effectiveness of our approach. Experimental results show that our method
achieves state-of-the-art performance among existing UMR methods. Last, we
provide in-depth analyses of model scaling and training strategies, and perform
ablation studies on both the model and synthetic data.
|
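For readers unfamiliar with how an MLLM-based dense retriever is used at query time, the sketch below scores pre-computed candidate embeddings against a query embedding by cosine similarity. The embeddings are assumed to come from some multimodal encoder; nothing here is specific to GME.

import numpy as np

def retrieve(query_emb, cand_embs, top_k=5):
    # query_emb: (dim,), cand_embs: (n_candidates, dim); both from the same embedder
    q = query_emb / np.linalg.norm(query_emb)
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per candidate
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]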
2412.17007 | Weijia Li | Junyan Ye, Honglin Lin, Leyan Ou, Dairong Chen, Zihao Wang, Qi Zhu,
Conghui He, Weijia Li | Where am I? Cross-View Geo-localization with Natural Language
Descriptions | 11 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-view geo-localization identifies the locations of street-view images by
matching them with geo-tagged satellite images or OSM. However, most existing
studies focus on image-to-image retrieval, with fewer addressing text-guided
retrieval, a task vital for applications like pedestrian navigation and
emergency response. In this work, we introduce a novel task for cross-view
geo-localization with natural language descriptions, which aims to retrieve the
corresponding satellite images or OSM data based on scene text
descriptions. To support this task, we construct the CVG-Text dataset by
collecting cross-view data from multiple cities and employing a scene text
generation approach that leverages the annotation capabilities of Large
Multimodal Models to produce high-quality scene text descriptions with
localization details. Additionally, we propose a novel text-based retrieval
localization method, CrossText2Loc, which improves recall by 10% and
demonstrates excellent long-text retrieval capabilities. In terms of
explainability, it not only provides similarity scores but also offers
retrieval reasons. More information can be found at
https://yejy53.github.io/CVG-Text/ .
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2024 13:13:10 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 02:48:45 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ye",
"Junyan",
""
],
[
"Lin",
"Honglin",
""
],
[
"Ou",
"Leyan",
""
],
[
"Chen",
"Dairong",
""
],
[
"Wang",
"Zihao",
""
],
[
"Zhu",
"Qi",
""
],
[
"He",
"Conghui",
""
],
[
"Li",
"Weijia",
""
]
] | TITLE: Where am I? Cross-View Geo-localization with Natural Language
Descriptions
ABSTRACT: Cross-view geo-localization identifies the locations of street-view images by
matching them with geo-tagged satellite images or OSM. However, most existing
studies focus on image-to-image retrieval, with fewer addressing text-guided
retrieval, a task vital for applications like pedestrian navigation and
emergency response. In this work, we introduce a novel task for cross-view
geo-localization with natural language descriptions, which aims to retrieve the
corresponding satellite images or OSM data based on scene text
descriptions. To support this task, we construct the CVG-Text dataset by
collecting cross-view data from multiple cities and employing a scene text
generation approach that leverages the annotation capabilities of Large
Multimodal Models to produce high-quality scene text descriptions with
localization details. Additionally, we propose a novel text-based retrieval
localization method, CrossText2Loc, which improves recall by 10% and
demonstrates excellent long-text retrieval capabilities. In terms of
explainability, it not only provides similarity scores but also offers
retrieval reasons. More information can be found at
https://yejy53.github.io/CVG-Text/ .
|
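The recall figure quoted in the abstract above is typically computed as Recall@K over a retrieval ranking; a minimal version is sketched below, with ranked candidate lists and ground-truth indices as assumed inputs.

import numpy as np

def recall_at_k(rankings, ground_truth, k=1):
    # rankings: (n_queries, n_candidates) candidate indices sorted best-first
    # ground_truth: (n_queries,) correct candidate index for each query
    hits = [gt in ranks[:k] for ranks, gt in zip(rankings, ground_truth)]
    return float(np.mean(hits))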
2501.00751 | Haoxuan Li | Haoxuan Li, Wei song, Peiwu Qin, Xi Yuan, Zhenglin Chen | HCMA-UNet: A Hybrid CNN-Mamba UNet with Axial Self-Attention for
Efficient Breast Cancer Segmentation | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Breast cancer lesion segmentation in DCE-MRI remains challenging due to
heterogeneous tumor morphology and indistinct boundaries. To address these
challenges, this study proposes a novel hybrid segmentation network, HCMA-UNet,
for lesion segmentation of breast cancer. Our network consists of a lightweight
CNN backbone and a Multi-view Axial Self-Attention Mamba (MISM) module. The
MISM module integrates a Visual State Space Block (VSSB) and an Axial
Self-Attention (ASA) mechanism, effectively reducing parameters through an
Asymmetric Split Channel (ASC) strategy to achieve efficient tri-directional
feature extraction.
Our lightweight model achieves superior performance with 2.87M parameters and
126.44 GFLOPs. A Feature-guided Region-aware loss function (FRLoss) is proposed
to enhance segmentation accuracy. Extensive experiments on one private and two
public DCE-MRI breast cancer datasets demonstrate that our approach achieves
state-of-the-art performance while maintaining computational efficiency. FRLoss
also exhibits good cross-architecture generalization capabilities. The source
code is available at https://github.com/Haoxuanli-Thu/HCMA-UNet.
| [
{
"version": "v1",
"created": "Wed, 1 Jan 2025 06:42:57 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 15:36:57 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Haoxuan",
""
],
[
"song",
"Wei",
""
],
[
"Qin",
"Peiwu",
""
],
[
"Yuan",
"Xi",
""
],
[
"Chen",
"Zhenglin",
""
]
] | TITLE: HCMA-UNet: A Hybrid CNN-Mamba UNet with Axial Self-Attention for
Efficient Breast Cancer Segmentation
ABSTRACT: Breast cancer lesion segmentation in DCE-MRI remains challenging due to
heterogeneous tumor morphology and indistinct boundaries. To address these
challenges, this study proposes a novel hybrid segmentation network, HCMA-UNet,
for lesion segmentation of breast cancer. Our network consists of a lightweight
CNN backbone and a Multi-view Axial Self-Attention Mamba (MISM) module. The
MISM module integrates a Visual State Space Block (VSSB) and an Axial
Self-Attention (ASA) mechanism, effectively reducing parameters through an
Asymmetric Split Channel (ASC) strategy to achieve efficient tri-directional
feature extraction.
Our lightweight model achieves superior performance with 2.87M parameters and
126.44 GFLOPs. A Feature-guided Region-aware loss function (FRLoss) is proposed
to enhance segmentation accuracy. Extensive experiments on one private and two
public DCE-MRI breast cancer datasets demonstrate that our approach achieves
state-of-the-art performance while maintaining computational efficiency. FRLoss
also exhibits good cross-architecture generalization capabilities. The source
code is available at https://github.com/Haoxuanli-Thu/HCMA-UNet.
|
2501.06897 | Huangying Zhan | Liyan Chen, Huangying Zhan, Kevin Chen, Xiangyu Xu, Qingan Yan,
Changjiang Cai, Yi Xu | ActiveGAMER: Active GAussian Mapping through Efficient Rendering | Accepted to CVPR2025 | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | We introduce ActiveGAMER, an active mapping system that utilizes 3D Gaussian
Splatting (3DGS) to achieve high-quality, real-time scene mapping and
exploration. Unlike traditional NeRF-based methods, which are computationally
demanding and restrict active mapping performance, our approach leverages the
efficient rendering capabilities of 3DGS, allowing effective and efficient
exploration in complex environments. The core of our system is a
rendering-based information gain module that dynamically identifies the most
informative viewpoints for next-best-view planning, enhancing both geometric
and photometric reconstruction accuracy. ActiveGAMER also integrates a
carefully balanced framework, combining coarse-to-fine exploration,
post-refinement, and a global-local keyframe selection strategy to maximize
reconstruction completeness and fidelity. Our system autonomously explores and
reconstructs environments with state-of-the-art geometric and photometric
accuracy and completeness, significantly surpassing existing approaches in both
aspects. Extensive evaluations on benchmark datasets such as Replica and MP3D
highlight ActiveGAMER's effectiveness in active mapping tasks.
| [
{
"version": "v1",
"created": "Sun, 12 Jan 2025 18:38:51 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 17:34:15 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Chen",
"Liyan",
""
],
[
"Zhan",
"Huangying",
""
],
[
"Chen",
"Kevin",
""
],
[
"Xu",
"Xiangyu",
""
],
[
"Yan",
"Qingan",
""
],
[
"Cai",
"Changjiang",
""
],
[
"Xu",
"Yi",
""
]
] | TITLE: ActiveGAMER: Active GAussian Mapping through Efficient Rendering
ABSTRACT: We introduce ActiveGAMER, an active mapping system that utilizes 3D Gaussian
Splatting (3DGS) to achieve high-quality, real-time scene mapping and
exploration. Unlike traditional NeRF-based methods, which are computationally
demanding and restrict active mapping performance, our approach leverages the
efficient rendering capabilities of 3DGS, allowing effective and efficient
exploration in complex environments. The core of our system is a
rendering-based information gain module that dynamically identifies the most
informative viewpoints for next-best-view planning, enhancing both geometric
and photometric reconstruction accuracy. ActiveGAMER also integrates a
carefully balanced framework, combining coarse-to-fine exploration,
post-refinement, and a global-local keyframe selection strategy to maximize
reconstruction completeness and fidelity. Our system autonomously explores and
reconstructs environments with state-of-the-art geometric and photometric
accuracy and completeness, significantly surpassing existing approaches in both
aspects. Extensive evaluations on benchmark datasets such as Replica and MP3D
highlight ActiveGAMER's effectiveness in active mapping tasks.
|
2501.12907 | Yuexuan Kong | Yuexuan Kong, Gabriel Meseguer-Brocal, Vincent Lostanlen, Mathieu
Lagrange, Romain Hennequin | S-KEY: Self-supervised Learning of Major and Minor Keys from Audio | null | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | STONE, the current method in self-supervised learning for tonality estimation
in music signals, cannot distinguish relative keys, such as C major versus A
minor. In this article, we extend the neural network architecture and learning
objective of STONE to perform self-supervised learning of major and minor keys
(S-KEY). Our main contribution is an auxiliary pretext task to STONE,
formulated using transposition-invariant chroma features as a source of
pseudo-labels. S-KEY matches the supervised state of the art in tonality
estimation on FMAKv2 and GTZAN datasets while requiring no human annotation and
having the same parameter budget as STONE. We build upon this result and expand
the training set of S-KEY to a million songs, thus showing the potential of
large-scale self-supervised learning in music information retrieval.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2025 14:35:37 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 14:32:32 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Kong",
"Yuexuan",
""
],
[
"Meseguer-Brocal",
"Gabriel",
""
],
[
"Lostanlen",
"Vincent",
""
],
[
"Lagrange",
"Mathieu",
""
],
[
"Hennequin",
"Romain",
""
]
] | TITLE: S-KEY: Self-supervised Learning of Major and Minor Keys from Audio
ABSTRACT: STONE, the current method in self-supervised learning for tonality estimation
in music signals, cannot distinguish relative keys, such as C major versus A
minor. In this article, we extend the neural network architecture and learning
objective of STONE to perform self-supervised learning of major and minor keys
(S-KEY). Our main contribution is an auxiliary pretext task to STONE,
formulated using transposition-invariant chroma features as a source of
pseudo-labels. S-KEY matches the supervised state of the art in tonality
estimation on FMAKv2 and GTZAN datasets while requiring no human annotation and
having the same parameter budget as STONE. We build upon this result and expand
the training set of S-KEY to a million songs, thus showing the potential of
large-scale self-supervised learning in music information retrieval.
|
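A common way to obtain a transposition-invariant view of a 12-bin chroma vector, of the kind the S-KEY abstract uses as a pseudo-label source, is to keep only the magnitudes of its DFT over the pitch-class axis: circular shifts (transpositions) leave those magnitudes unchanged. This is a generic illustration, not necessarily the exact feature used in the paper.

import numpy as np

def transposition_invariant(chroma):
    # chroma: (..., 12); |DFT| over pitch classes is unaffected by circular shifts
    return np.abs(np.fft.fft(chroma, axis=-1))

c = np.random.rand(12)
print(np.allclose(transposition_invariant(c),
                  transposition_invariant(np.roll(c, 3))))  # True: +3 semitone shift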
2501.13271 | Peiqi Li | Peiqi Li, Jie Chen | Hybrid Two-Stage Reconstruction of Multiscale Subsurface Flow with
Physics-informed Residual Connected Neural Operator | 21 pages, 14 figures, 3 tables | null | null | null | cs.LG physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Novel neural networks show great potential in solving partial
differential equations. For single-phase flow problems in subsurface porous
media with high-contrast coefficients, the key is to develop neural operators
with accurate reconstruction capability and strict adherence to physical laws.
In this study, we propose a hybrid two-stage framework that uses multiscale
basis functions and physics-guided deep learning to solve the Darcy flow
problem in high-contrast fractured porous media. In the first stage, a
data-driven model is used to reconstruct the multiscale basis function based on
the permeability field to achieve effective dimensionality reduction while
preserving the necessary multiscale features. In the second stage, a
physics-informed neural network, together with a Transformer-based global
information extractor, is used to reconstruct the pressure field by integrating
the physical constraints derived from the Darcy equation, ensuring consistency
with the physical laws of the real world. The model was evaluated on datasets
with different combinations of permeability and basis functions and performed
well in terms of reconstruction accuracy. Specifically, the framework achieves
$R^2$ values above 0.9 in terms of basis function fitting and pressure
reconstruction, and the residual indicator is on the order of $1\times
10^{-4}$. These results validate the ability of the proposed framework to
achieve accurate reconstruction while maintaining physical consistency.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2025 23:28:03 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Peiqi",
""
],
[
"Chen",
"Jie",
""
]
] | TITLE: Hybrid Two-Stage Reconstruction of Multiscale Subsurface Flow with
Physics-informed Residual Connected Neural Operator
ABSTRACT: Novel neural networks show great potential in solving partial
differential equations. For single-phase flow problems in subsurface porous
media with high-contrast coefficients, the key is to develop neural operators
with accurate reconstruction capability and strict adherence to physical laws.
In this study, we propose a hybrid two-stage framework that uses multiscale
basis functions and physics-guided deep learning to solve the Darcy flow
problem in high-contrast fractured porous media. In the first stage, a
data-driven model is used to reconstruct the multiscale basis function based on
the permeability field to achieve effective dimensionality reduction while
preserving the necessary multiscale features. In the second stage, a
physics-informed neural network, together with a Transformer-based global
information extractor, is used to reconstruct the pressure field by integrating
the physical constraints derived from the Darcy equation, ensuring consistency
with the physical laws of the real world. The model was evaluated on datasets
with different combinations of permeability and basis functions and performed
well in terms of reconstruction accuracy. Specifically, the framework achieves
$R^2$ values above 0.9 in terms of basis function fitting and pressure
reconstruction, and the residual indicator is on the order of $1\times
10^{-4}$. These results validate the ability of the proposed framework to
achieve accurate reconstruction while maintaining physical consistency.
|
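The physical constraint mentioned in the abstract above can be pictured as penalizing the residual of the Darcy equation, roughly div(k grad p) = f. The finite-difference sketch below evaluates that residual on a uniform grid; the grid spacing, fields, and boundary handling are illustrative assumptions, not the paper's formulation.

import numpy as np

def darcy_residual(p, k, f, h=1.0):
    # p, k, f: 2D arrays (pressure, permeability, source) on the same uniform grid
    kx = 0.5 * (k[1:, :] + k[:-1, :])          # permeability averaged onto x-faces
    ky = 0.5 * (k[:, 1:] + k[:, :-1])          # permeability averaged onto y-faces
    flux_x = kx * (p[1:, :] - p[:-1, :]) / h   # k * dp/dx on faces
    flux_y = ky * (p[:, 1:] - p[:, :-1]) / h
    div = np.zeros_like(p)
    div[1:-1, :] += (flux_x[1:, :] - flux_x[:-1, :]) / h
    div[:, 1:-1] += (flux_y[:, 1:] - flux_y[:, :-1]) / h
    return div[1:-1, 1:-1] - f[1:-1, 1:-1]     # residual on interior points

# A physics-informed penalty term could then be:
# loss_physics = np.mean(darcy_residual(p, k, f) ** 2)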
2501.16373 | Chuang Zhao | Chuang Zhao, Hui Tang, Jiheng Zhang, Xiaomeng Li | Unveiling Discrete Clues: Superior Healthcare Predictions for Rare
Diseases | null | null | null | accepted in WWW 2025 | cs.LG cs.AI cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate healthcare prediction is essential for improving patient outcomes.
Existing work primarily leverages advanced frameworks like attention or graph
networks to capture the intricate collaborative (CO) signals in electronic
health records. However, prediction for rare diseases remains challenging due
to limited co-occurrence and inadequately tailored approaches. To address this
issue, this paper proposes UDC, a novel method that unveils discrete clues to
bridge consistent textual knowledge and CO signals within a unified semantic
space, thereby enriching the representation semantics of rare diseases.
Specifically, we focus on addressing two key sub-problems: (1) acquiring
distinguishable discrete encodings for precise disease representation and (2)
achieving semantic alignment between textual knowledge and the CO signals at
the code level. For the first sub-problem, we refine the standard vector
quantization process to include condition awareness. Additionally, we develop an
advanced contrastive approach in the decoding stage, leveraging synthetic and
mixed-domain targets as hard negatives to enrich the perceptibility of the
reconstructed representation for downstream tasks. For the second sub-problem,
we introduce a novel codebook update strategy using co-teacher distillation.
This approach facilitates bidirectional supervision between textual knowledge
and CO signals, thereby aligning semantically equivalent information in a
shared discrete latent space. Extensive experiments on three datasets
demonstrate the superiority of our approach.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 03:08:22 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhao",
"Chuang",
""
],
[
"Tang",
"Hui",
""
],
[
"Zhang",
"Jiheng",
""
],
[
"Li",
"Xiaomeng",
""
]
] | TITLE: Unveiling Discrete Clues: Superior Healthcare Predictions for Rare
Diseases
ABSTRACT: Accurate healthcare prediction is essential for improving patient outcomes.
Existing work primarily leverages advanced frameworks like attention or graph
networks to capture the intricate collaborative (CO) signals in electronic
health records. However, prediction for rare diseases remains challenging due
to limited co-occurrence and inadequately tailored approaches. To address this
issue, this paper proposes UDC, a novel method that unveils discrete clues to
bridge consistent textual knowledge and CO signals within a unified semantic
space, thereby enriching the representation semantics of rare diseases.
Specifically, we focus on addressing two key sub-problems: (1) acquiring
distinguishable discrete encodings for precise disease representation and (2)
achieving semantic alignment between textual knowledge and the CO signals at
the code level. For the first sub-problem, we refine the standard vector
quantization process to include condition awareness. Additionally, we develop an
advanced contrastive approach in the decoding stage, leveraging synthetic and
mixed-domain targets as hard negatives to enrich the perceptibility of the
reconstructed representation for downstream tasks. For the second sub-problem,
we introduce a novel codebook update strategy using co-teacher distillation.
This approach facilitates bidirectional supervision between textual knowledge
and CO signals, thereby aligning semantically equivalent information in a
shared discrete latent space. Extensive experiments on three datasets
demonstrate the superiority of our approach.
|
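For context, the "standard vector quantization process" that UDC refines boils down to a nearest-codebook-entry lookup. A bare-bones version is sketched below, without the paper's condition awareness or co-teacher codebook updates.

import numpy as np

def vq_assign(z, codebook):
    # z: (n, d) encodings, codebook: (K, d) code vectors
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K) squared distances
    codes = d2.argmin(axis=1)                                    # nearest code index
    return codes, codebook[codes]                                # indices and quantized vectors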
2501.16803 | Lantao Li | Lantao Li, Kang Yang, Wenqi Zhang, Xiaoxue Wang and Chen Sun | RG-Attn: Radian Glue Attention for Multi-modality Multi-agent
Cooperative Perception | null | null | null | null | cs.RO cs.CV cs.NI eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperative perception offers an optimal solution to overcome the perception
limitations of single-agent systems by leveraging Vehicle-to-Everything (V2X)
communication for data sharing and fusion across multiple agents. However, most
existing approaches focus on single-modality data exchange, limiting the
potential of both homogeneous and heterogeneous fusion across agents. This
overlooks the opportunity to utilize multi-modality data per agent, restricting
the system's performance. In the automotive industry, manufacturers adopt
diverse sensor configurations, resulting in heterogeneous combinations of
sensor modalities across agents. To harness the potential of every possible
data source for optimal performance, we design a robust LiDAR and camera
cross-modality fusion module, Radian-Glue-Attention (RG-Attn), applicable to
both intra-agent cross-modality fusion and inter-agent cross-modality fusion
scenarios, owing to the convenient coordinate conversion by transformation
matrix and the unified sampling/inversion mechanism. We also propose two
different architectures, named Paint-To-Puzzle (PTP) and
Co-Sketching-Co-Coloring (CoS-CoCo), for conducting cooperative perception. PTP
aims for maximum precision and achieves a smaller data packet size by limiting
cross-agent fusion to a single instance, but requires all participants to be
equipped with LiDAR. In contrast, CoS-CoCo supports agents with any sensor
configuration (LiDAR-only, camera-only, or both LiDAR and camera), offering
greater generalization ability. Our approach achieves state-of-the-art
(SOTA) performance on both real and simulated cooperative perception datasets.
The code is now available at GitHub.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 2025 09:08:31 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 02:05:03 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Lantao",
""
],
[
"Yang",
"Kang",
""
],
[
"Zhang",
"Wenqi",
""
],
[
"Wang",
"Xiaoxue",
""
],
[
"Sun",
"Chen",
""
]
] | TITLE: RG-Attn: Radian Glue Attention for Multi-modality Multi-agent
Cooperative Perception
ABSTRACT: Cooperative perception offers an optimal solution to overcome the perception
limitations of single-agent systems by leveraging Vehicle-to-Everything (V2X)
communication for data sharing and fusion across multiple agents. However, most
existing approaches focus on single-modality data exchange, limiting the
potential of both homogeneous and heterogeneous fusion across agents. This
overlooks the opportunity to utilize multi-modality data per agent, restricting
the system's performance. In the automotive industry, manufacturers adopt
diverse sensor configurations, resulting in heterogeneous combinations of
sensor modalities across agents. To harness the potential of every possible
data source for optimal performance, we design a robust LiDAR and camera
cross-modality fusion module, Radian-Glue-Attention (RG-Attn), applicable to
both intra-agent cross-modality fusion and inter-agent cross-modality fusion
scenarios, owing to the convenient coordinate conversion by transformation
matrix and the unified sampling/inversion mechanism. We also propose two
different architectures, named Paint-To-Puzzle (PTP) and
Co-Sketching-Co-Coloring (CoS-CoCo), for conducting cooperative perception. PTP
aims for maximum precision and achieves a smaller data packet size by limiting
cross-agent fusion to a single instance, but requires all participants to be
equipped with LiDAR. In contrast, CoS-CoCo supports agents with any sensor
configuration (LiDAR-only, camera-only, or both LiDAR and camera), offering
greater generalization ability. Our approach achieves state-of-the-art
(SOTA) performance on both real and simulated cooperative perception datasets.
The code is now available at GitHub.
|
2502.05979 | Xinyu Liu | Xinyu Liu, Ailing Zeng, Wei Xue, Harry Yang, Wenhan Luo, Qifeng Liu,
Yike Guo | VFX Creator: Animated Visual Effect Generation with Controllable
Diffusion Transformer | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Crafting magic and illusions is one of the most thrilling aspects of
filmmaking, with visual effects (VFX) serving as the powerhouse behind
unforgettable cinematic experiences. While recent advances in generative
artificial intelligence have driven progress in generic image and video
synthesis, the domain of controllable VFX generation remains relatively
underexplored. In this work, we propose a novel paradigm for animated VFX
generation as image animation, where dynamic effects are generated from
user-friendly textual descriptions and static reference images. Our work makes
two primary contributions: (i) Open-VFX, the first high-quality VFX video
dataset spanning 15 diverse effect categories, annotated with textual
descriptions, instance segmentation masks for spatial conditioning, and
start-end timestamps for temporal control. (ii) VFX Creator, a simple yet
effective controllable VFX generation framework based on a Video Diffusion
Transformer. The model incorporates a spatial and temporal controllable LoRA
adapter, requiring minimal training videos. Specifically, a plug-and-play mask
control module enables instance-level spatial manipulation, while tokenized
start-end motion timestamps embedded in the diffusion process, alongside the
text encoder, allow precise temporal control over effect timing and pace.
Extensive experiments on the Open-VFX test set demonstrate the superiority of
the proposed system in generating realistic and dynamic effects, achieving
state-of-the-art performance and generalization ability in both spatial and
temporal controllability. Furthermore, we introduce a specialized metric to
evaluate the precision of temporal control. By bridging traditional VFX
techniques with generative approaches, VFX Creator unlocks new possibilities
for efficient and high-quality video effect generation, making advanced VFX
accessible to a broader audience.
| [
{
"version": "v1",
"created": "Sun, 9 Feb 2025 18:12:25 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Feb 2025 05:45:45 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Mar 2025 10:59:53 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 07:54:57 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Liu",
"Xinyu",
""
],
[
"Zeng",
"Ailing",
""
],
[
"Xue",
"Wei",
""
],
[
"Yang",
"Harry",
""
],
[
"Luo",
"Wenhan",
""
],
[
"Liu",
"Qifeng",
""
],
[
"Guo",
"Yike",
""
]
] | TITLE: VFX Creator: Animated Visual Effect Generation with Controllable
Diffusion Transformer
ABSTRACT: Crafting magic and illusions is one of the most thrilling aspects of
filmmaking, with visual effects (VFX) serving as the powerhouse behind
unforgettable cinematic experiences. While recent advances in generative
artificial intelligence have driven progress in generic image and video
synthesis, the domain of controllable VFX generation remains relatively
underexplored. In this work, we propose a novel paradigm for animated VFX
generation as image animation, where dynamic effects are generated from
user-friendly textual descriptions and static reference images. Our work makes
two primary contributions: (i) Open-VFX, the first high-quality VFX video
dataset spanning 15 diverse effect categories, annotated with textual
descriptions, instance segmentation masks for spatial conditioning, and
start-end timestamps for temporal control. (ii) VFX Creator, a simple yet
effective controllable VFX generation framework based on a Video Diffusion
Transformer. The model incorporates a spatial and temporal controllable LoRA
adapter, requiring minimal training videos. Specifically, a plug-and-play mask
control module enables instance-level spatial manipulation, while tokenized
start-end motion timestamps embedded in the diffusion process, alongside the
text encoder, allow precise temporal control over effect timing and pace.
Extensive experiments on the Open-VFX test set demonstrate the superiority of
the proposed system in generating realistic and dynamic effects, achieving
state-of-the-art performance and generalization ability in both spatial and
temporal controllability. Furthermore, we introduce a specialized metric to
evaluate the precision of temporal control. By bridging traditional VFX
techniques with generative approaches, VFX Creator unlocks new possibilities
for efficient and high-quality video effect generation, making advanced VFX
accessible to a broader audience.
|
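The LoRA adapter referenced in the abstract above adds a trainable low-rank correction to a frozen linear map, y = W x + (alpha/r) B A x. The sketch below shows that forward pass with common LoRA conventions (zero-initialized up-projection); the paper's spatial and temporal conditioning is not reproduced here.

import numpy as np

class LoRALinear:
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                      # frozen base weight, shape (out, in)
        self.A = rng.normal(0, 0.02, (r, W.shape[1]))   # trainable down-projection
        self.B = np.zeros((W.shape[0], r))              # trainable up-projection, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        # x: (batch, in); output equals the frozen map plus the low-rank update
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T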
2502.07272 | Wei Wu | Wei Wu, Qiuyi Li, Mingyang Li, Kun Fu, Fuli Feng, Jieping Ye, Hui
Xiong, Zheng Wang | GENERator: A Long-Context Generative Genomic Foundation Model | null | null | null | null | cs.CL q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Advancements in DNA sequencing technologies have significantly improved our
ability to decode genomic sequences. However, the prediction and interpretation
of these sequences remain challenging due to the intricate nature of genetic
material. Large language models (LLMs) have introduced new opportunities for
biological sequence analysis. Recent developments in genomic language models
have underscored the potential of LLMs in deciphering DNA sequences.
Nonetheless, existing models often face limitations in robustness and
application scope, primarily due to constraints in model structure and training
data scale. To address these limitations, we present GENERator, a generative
genomic foundation model featuring a context length of 98k base pairs (bp) and
1.2B parameters. Trained on an expansive dataset comprising 386B bp of
eukaryotic DNA, the GENERator demonstrates state-of-the-art performance across
both established and newly proposed benchmarks. The model adheres to the
central dogma of molecular biology, accurately generating protein-coding
sequences that translate into proteins structurally analogous to known
families. It also shows significant promise in sequence optimization,
particularly through the prompt-responsive generation of enhancer sequences
with specific activity profiles. These capabilities position the GENERator as a
pivotal tool for genomic research and biotechnological advancement, enhancing
our ability to interpret and predict complex biological systems and enabling
precise genomic interventions. Implementation details and supplementary
resources are available at https://github.com/GenerTeam/GENERator.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 05:39:49 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 05:41:32 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 03:14:15 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wu",
"Wei",
""
],
[
"Li",
"Qiuyi",
""
],
[
"Li",
"Mingyang",
""
],
[
"Fu",
"Kun",
""
],
[
"Feng",
"Fuli",
""
],
[
"Ye",
"Jieping",
""
],
[
"Xiong",
"Hui",
""
],
[
"Wang",
"Zheng",
""
]
] | TITLE: GENERator: A Long-Context Generative Genomic Foundation Model
ABSTRACT: Advancements in DNA sequencing technologies have significantly improved our
ability to decode genomic sequences. However, the prediction and interpretation
of these sequences remain challenging due to the intricate nature of genetic
material. Large language models (LLMs) have introduced new opportunities for
biological sequence analysis. Recent developments in genomic language models
have underscored the potential of LLMs in deciphering DNA sequences.
Nonetheless, existing models often face limitations in robustness and
application scope, primarily due to constraints in model structure and training
data scale. To address these limitations, we present GENERator, a generative
genomic foundation model featuring a context length of 98k base pairs (bp) and
1.2B parameters. Trained on an expansive dataset comprising 386B bp of
eukaryotic DNA, the GENERator demonstrates state-of-the-art performance across
both established and newly proposed benchmarks. The model adheres to the
central dogma of molecular biology, accurately generating protein-coding
sequences that translate into proteins structurally analogous to known
families. It also shows significant promise in sequence optimization,
particularly through the prompt-responsive generation of enhancer sequences
with specific activity profiles. These capabilities position the GENERator as a
pivotal tool for genomic research and biotechnological advancement, enhancing
our ability to interpret and predict complex biological systems and enabling
precise genomic interventions. Implementation details and supplementary
resources are available at https://github.com/GenerTeam/GENERator.
|
2502.09978 | Yachao Yuan Dr. | Yachao Yuan, Xingyu Chen | RoadFed: A Multimodal Federated Learning System for Improving Road
Safety | null | null | null | null | cs.CE | http://creativecommons.org/licenses/by-sa/4.0/ | The Internet of Things (IoT) has been widely applied in Collaborative
Intelligent Transportation Systems (C-ITS) for the prevention of road
accidents. Because road hazards are one of the primary causes of road accidents
in C-ITS, their efficient detection and early warning are of paramount
importance. Given this importance, extensive research has explored this topic
and obtained favorable results. However, most existing solutions only explore
single-modality data, struggle with high computation and communication
overhead, or suffer from the curse of high dimensionality in their
privacy-preserving methodologies. To overcome these obstacles, in this paper,
we introduce RoadFed, an innovative and private multimodal Federated
learning-based system tailored for intelligent Road hazard detection and alarm.
This framework encompasses an innovative Multimodal Road Hazard Detector, a
communication-efficient federated learning approach, and a customized
low-error-rate local differential privacy method crafted for high dimensional
multimodal data. Experimental results reveal that the proposed RoadFed
surpasses most existing systems in the self-gathered real-world and CrisisMMD
public datasets. In particular, RoadFed achieves an accuracy of 96.42% with a
mere 0.0351 seconds of latency and its communication cost is up to 1,000 times
lower than existing systems in this field. It facilitates collaborative
training with non-iid high dimensional multimodal real-world data across
various data modalities on multiple edges while ensuring privacy preservation
for road users.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 08:05:30 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 04:36:20 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Yuan",
"Yachao",
""
],
[
"Chen",
"Xingyu",
""
]
] | TITLE: RoadFed: A Multimodal Federated Learning System for Improving Road
Safety
ABSTRACT: The Internet of Things (IoT) has been widely applied in Collaborative
Intelligent Transportation Systems (C-ITS) for the prevention of road
accidents. Because road hazards are one of the primary causes of road accidents
in C-ITS, their efficient detection and early warning are of paramount
importance. Given this importance, extensive research has explored this topic
and obtained favorable results. However, most existing solutions only explore
single-modality data, struggle with high computation and communication
overhead, or suffer from the curse of high dimensionality in their
privacy-preserving methodologies. To overcome these obstacles, in this paper,
we introduce RoadFed, an innovative and private multimodal Federated
learning-based system tailored for intelligent Road hazard detection and alarm.
This framework encompasses an innovative Multimodal Road Hazard Detector, a
communication-efficient federated learning approach, and a customized
low-error-rate local differential privacy method crafted for high dimensional
multimodal data. Experimental results reveal that the proposed RoadFed
surpasses most existing systems in the self-gathered real-world and CrisisMMD
public datasets. In particular, RoadFed achieves an accuracy of 96.42% with a
mere 0.0351 seconds of latency and its communication cost is up to 1,000 times
lower than existing systems in this field. It facilitates collaborative
training with non-iid high dimensional multimodal real-world data across
various data modalities on multiple edges while ensuring privacy preservation
for road users.
|
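To illustrate the federated side of the system described above, here is a schematic local update with optional Laplace perturbation before sharing, followed by size-weighted federated averaging on the server. The noise calibration and update rule are illustrative assumptions, not RoadFed's actual low-error-rate local differential privacy method.

import numpy as np

def local_update(weights, grad, lr=0.01, dp_scale=0.0, seed=0):
    # One gradient step on a client; optionally perturb weights before sharing.
    rng = np.random.default_rng(seed)
    w = weights - lr * grad
    if dp_scale > 0:
        w = w + rng.laplace(0.0, dp_scale, size=w.shape)
    return w

def fed_avg(client_weights, client_sizes):
    # Server-side aggregation weighted by local dataset size.
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))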
2502.11381 | Zhongwei Chen | Zhongwei Chen, Zhao-Xu Yang, Hai-Jun Rong | Without Paired Labeled Data: An End-to-End Self-Supervised Paradigm for
UAV-View Geo-Localization | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | UAV-View Geo-Localization (UVGL) aims to achieve accurate localization of
unmanned aerial vehicles (UAVs) by retrieving the most relevant GPS-tagged
satellite images. However, existing methods heavily rely on pre-paired
UAV-satellite images for supervised learning. Such dependency not only incurs
high annotation costs but also severely limits scalability and practical
deployment in open-world UVGL scenarios. To address these limitations, we
propose an end-to-end self-supervised UVGL method. Our method leverages a
shallow backbone network to extract initial features, employs clustering to
generate pseudo labels, and adopts a dual-path contrastive learning
architecture to learn discriminative intra-view representations. Furthermore,
our method incorporates two core modules, the dynamic hierarchical memory
learning module and the information consistency evolution learning module. The
dynamic hierarchical memory learning module combines short-term and long-term
memory to enhance intra-view feature consistency and discriminability.
Meanwhile, the information consistency evolution learning module leverages a
neighborhood-driven dynamic constraint mechanism to systematically capture
implicit cross-view semantic correlations, thereby improving cross-view feature
alignment. To further stabilize and strengthen the self-supervised training
process, a pseudo-label enhancement strategy is introduced, which refines the
quality of pseudo supervision. Our method ultimately constructs a unified
cross-view feature representation space under self-supervised settings.
Extensive experiments on three public benchmark datasets demonstrate that the
proposed method consistently outperforms existing self-supervised methods and
even surpasses several state-of-the-art supervised methods. Our code is
available at https://github.com/ISChenawei/DMNIL.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 02:53:08 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 03:44:00 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Chen",
"Zhongwei",
""
],
[
"Yang",
"Zhao-Xu",
""
],
[
"Rong",
"Hai-Jun",
""
]
] | TITLE: Without Paired Labeled Data: An End-to-End Self-Supervised Paradigm for
UAV-View Geo-Localization
ABSTRACT: UAV-View Geo-Localization (UVGL) aims to achieve accurate localization of
unmanned aerial vehicles (UAVs) by retrieving the most relevant GPS-tagged
satellite images. However, existing methods heavily rely on pre-paired
UAV-satellite images for supervised learning. Such dependency not only incurs
high annotation costs but also severely limits scalability and practical
deployment in open-world UVGL scenarios. To address these limitations, we
propose an end-to-end self-supervised UVGL method. Our method leverages a
shallow backbone network to extract initial features, employs clustering to
generate pseudo labels, and adopts a dual-path contrastive learning
architecture to learn discriminative intra-view representations. Furthermore,
our method incorporates two core modules, the dynamic hierarchical memory
learning module and the information consistency evolution learning module. The
dynamic hierarchical memory learning module combines short-term and long-term
memory to enhance intra-view feature consistency and discriminability.
Meanwhile, the information consistency evolution learning module leverages a
neighborhood-driven dynamic constraint mechanism to systematically capture
implicit cross-view semantic correlations, thereby improving cross-view feature
alignment. To further stabilize and strengthen the self-supervised training
process, a pseudo-label enhancement strategy is introduced, which refines the
quality of pseudo supervision. Our method ultimately constructs a unified
cross-view feature representation space under self-supervised settings.
Extensive experiments on three public benchmark datasets demonstrate that the
proposed method consistently outperforms existing self-supervised methods and
even surpasses several state-of-the-art supervised methods. Our code is
available at https://github.com/ISChenawei/DMNIL.
|
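A minimal version of contrastive training on clustering pseudo-labels, the backbone idea in the abstract above, is sketched below: embeddings sharing a pseudo-label act as positives in an InfoNCE-style loss. The dual-path architecture, memory modules, and pseudo-label refinement are omitted.

import numpy as np

def pseudo_label_info_nce(emb, labels, temperature=0.1):
    # emb: (n, dim) embeddings; labels: (n,) pseudo cluster assignments
    z = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                 # never treat a sample as its own positive
    losses = []
    for i, li in enumerate(labels):
        pos = [j for j, lj in enumerate(labels) if lj == li and j != i]
        if not pos:
            continue
        log_den = np.log(np.exp(sim[i]).sum())     # denominator over all other samples
        losses.append(np.mean([log_den - sim[i, j] for j in pos]))
    return float(np.mean(losses))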
2502.12191 | Ruoxuan Feng | Ruoxuan Feng, Jiangyu Hu, Wenke Xia, Tianci Gao, Ao Shen, Yuhao Sun,
Bin Fang, Di Hu | AnyTouch: Learning Unified Static-Dynamic Representation across Multiple
Visuo-tactile Sensors | Accepted by ICLR 2025 | null | null | null | cs.LG cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Visuo-tactile sensors aim to emulate human tactile perception, enabling
robots to precisely understand and manipulate objects. Over time, numerous
meticulously designed visuo-tactile sensors have been integrated into robotic
systems, aiding in completing various tasks. However, the distinct data
characteristics of these low-standardized visuo-tactile sensors hinder the
establishment of a powerful tactile perception system. We consider that the key
to addressing this issue lies in learning unified multi-sensor representations,
thereby integrating the sensors and promoting tactile knowledge transfer
between them. To achieve unified representation of this nature, we introduce
TacQuad, an aligned multi-modal multi-sensor tactile dataset from four
different visuo-tactile sensors, which enables the explicit integration of
various sensors. Recognizing that humans perceive the physical environment by
acquiring diverse tactile information such as texture and pressure changes, we
further propose to learn unified multi-sensor representations from both static
and dynamic perspectives. By integrating tactile images and videos, we present
AnyTouch, a unified static-dynamic multi-sensor representation learning
framework with a multi-level structure, aimed at both enhancing comprehensive
perceptual abilities and enabling effective cross-sensor transfer. This
multi-level architecture captures pixel-level details from tactile data via
masked modeling and enhances perception and transferability by learning
semantic-level sensor-agnostic features through multi-modal alignment and
cross-sensor matching. We provide a comprehensive analysis of multi-sensor
transferability, and validate our method on various datasets and in the
real-world pouring task. Experimental results show that our method outperforms
existing methods and exhibits outstanding static and dynamic perception
capabilities across various sensors.
| [
{
"version": "v1",
"created": "Sat, 15 Feb 2025 08:33:25 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 02:57:23 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 08:17:30 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Feng",
"Ruoxuan",
""
],
[
"Hu",
"Jiangyu",
""
],
[
"Xia",
"Wenke",
""
],
[
"Gao",
"Tianci",
""
],
[
"Shen",
"Ao",
""
],
[
"Sun",
"Yuhao",
""
],
[
"Fang",
"Bin",
""
],
[
"Hu",
"Di",
""
]
] | TITLE: AnyTouch: Learning Unified Static-Dynamic Representation across Multiple
Visuo-tactile Sensors
ABSTRACT: Visuo-tactile sensors aim to emulate human tactile perception, enabling
robots to precisely understand and manipulate objects. Over time, numerous
meticulously designed visuo-tactile sensors have been integrated into robotic
systems, aiding in completing various tasks. However, the distinct data
characteristics of these low-standardized visuo-tactile sensors hinder the
establishment of a powerful tactile perception system. We consider that the key
to addressing this issue lies in learning unified multi-sensor representations,
thereby integrating the sensors and promoting tactile knowledge transfer
between them. To achieve unified representation of this nature, we introduce
TacQuad, an aligned multi-modal multi-sensor tactile dataset from four
different visuo-tactile sensors, which enables the explicit integration of
various sensors. Recognizing that humans perceive the physical environment by
acquiring diverse tactile information such as texture and pressure changes, we
further propose to learn unified multi-sensor representations from both static
and dynamic perspectives. By integrating tactile images and videos, we present
AnyTouch, a unified static-dynamic multi-sensor representation learning
framework with a multi-level structure, aimed at both enhancing comprehensive
perceptual abilities and enabling effective cross-sensor transfer. This
multi-level architecture captures pixel-level details from tactile data via
masked modeling and enhances perception and transferability by learning
semantic-level sensor-agnostic features through multi-modal alignment and
cross-sensor matching. We provide a comprehensive analysis of multi-sensor
transferability, and validate our method on various datasets and in the
real-world pouring task. Experimental results show that our method outperforms
existing methods and exhibits outstanding static and dynamic perception
capabilities across various sensors.
|
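As a rough illustration of the cross-sensor matching idea described in the AnyTouch abstract above, the sketch below aligns embeddings of the same contact captured by two different visuo-tactile sensors with a symmetric InfoNCE-style loss. The encoders, the masked-modeling branch, and the multi-modal alignment are omitted, and all tensor names are illustrative stand-ins rather than the paper's actual components.

```python
# Hypothetical sketch of cross-sensor matching: pull together embeddings of
# the same contact captured by two different visuo-tactile sensors, push
# apart embeddings of different contacts (InfoNCE-style).
import torch
import torch.nn.functional as F

def cross_sensor_matching_loss(z_a: torch.Tensor,
                               z_b: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """z_a, z_b: (B, D) embeddings of the same B contacts from sensors A and B."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetric cross-entropy: sensor A -> B and sensor B -> A
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with random stand-in features for two hypothetical sensors
z_sensor_a = torch.randn(8, 256)
z_sensor_b = torch.randn(8, 256)
print(float(cross_sensor_matching_loss(z_sensor_a, z_sensor_b)))
```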
2502.17022 | Gregor Baer | Gregor Baer, Isel Grau, Chao Zhang and Pieter Van Gorp | Class-Dependent Perturbation Effects in Evaluating Time Series
Attributions | Accepted at The World Conference on eXplainable Artificial
Intelligence (XAI-2025) | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As machine learning models become increasingly prevalent in time series
applications, Explainable Artificial Intelligence (XAI) methods are essential
for understanding their predictions. Within XAI, feature attribution methods
aim to identify which input features contribute the most to a model's
prediction, with their evaluation typically relying on perturbation-based
metrics. Through systematic empirical analysis across multiple datasets, model
architectures, and perturbation strategies, we reveal previously overlooked
class-dependent effects in these metrics: they show varying effectiveness
across classes, achieving strong results for some while remaining less
sensitive to others. In particular, we find that the most effective
perturbation strategies often demonstrate the most pronounced class
differences. Our analysis suggests that these effects arise from the learned
biases of classifiers, indicating that perturbation-based evaluation may
reflect specific model behaviors rather than intrinsic attribution quality. We
propose an evaluation framework with a class-aware penalty term to help assess
and account for these effects in evaluating feature attributions, offering
particular value for class-imbalanced datasets. Although our analysis focuses
on time series classification, these class-dependent effects likely extend to
other structured data domains where perturbation-based evaluation is common.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 10:22:03 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 13:19:41 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Baer",
"Gregor",
""
],
[
"Grau",
"Isel",
""
],
[
"Zhang",
"Chao",
""
],
[
"Van Gorp",
"Pieter",
""
]
] | TITLE: Class-Dependent Perturbation Effects in Evaluating Time Series
Attributions
ABSTRACT: As machine learning models become increasingly prevalent in time series
applications, Explainable Artificial Intelligence (XAI) methods are essential
for understanding their predictions. Within XAI, feature attribution methods
aim to identify which input features contribute the most to a model's
prediction, with their evaluation typically relying on perturbation-based
metrics. Through systematic empirical analysis across multiple datasets, model
architectures, and perturbation strategies, we reveal previously overlooked
class-dependent effects in these metrics: they show varying effectiveness
across classes, achieving strong results for some while remaining less
sensitive to others. In particular, we find that the most effective
perturbation strategies often demonstrate the most pronounced class
differences. Our analysis suggests that these effects arise from the learned
biases of classifiers, indicating that perturbation-based evaluation may
reflect specific model behaviors rather than intrinsic attribution quality. We
propose an evaluation framework with a class-aware penalty term to help assess
and account for these effects in evaluating feature attributions, offering
particular value for class-imbalanced datasets. Although our analysis focuses
on time series classification, these class-dependent effects likely extend to
other structured data domains where perturbation-based evaluation is common.
|
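To make the class-aware evaluation idea above concrete, here is a minimal sketch that computes a perturbation-based faithfulness score per class and penalises the spread across classes. The specific metric, perturbation strategy, and penalty term used in the paper are not reproduced; `model`, `perturb_top_k`, and the dummy demo are illustrative assumptions.

```python
# Hypothetical sketch: per-class perturbation faithfulness with a class-aware
# penalty on the disparity between classes.
import numpy as np

def perturb_top_k(x: np.ndarray, attribution: np.ndarray, k: int) -> np.ndarray:
    """Zero out the k time steps with the largest attribution magnitude."""
    x_pert = x.copy()
    idx = np.argsort(-np.abs(attribution))[:k]
    x_pert[idx] = 0.0
    return x_pert

def class_aware_score(model, X, y, attributions, k=10, lam=1.0):
    """model: callable returning class probabilities of shape (n, n_classes)."""
    drops = []
    for x, label, a in zip(X, y, attributions):
        p_orig = model(x[None])[0, label]
        p_pert = model(perturb_top_k(x, a, k)[None])[0, label]
        drops.append((label, p_orig - p_pert))      # larger drop = more faithful
    labels = np.array([d[0] for d in drops])
    values = np.array([d[1] for d in drops])
    per_class = np.array([values[labels == c].mean() for c in np.unique(labels)])
    # Reward mean faithfulness, penalise class-dependent disparity
    return per_class.mean() - lam * per_class.std()

# Tiny stand-in demo with a constant "model" and random data
rng = np.random.default_rng(0)
dummy_model = lambda batch: np.tile([0.3, 0.7], (len(batch), 1))
X = rng.normal(size=(6, 100)); y = rng.integers(0, 2, size=6)
A = rng.normal(size=(6, 100))
print(class_aware_score(dummy_model, X, y, A, k=10))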
2502.19231 | Sean O'Hagan | Veronika Ro\v{c}kov\'a, Sean O'Hagan | AI-Powered Bayesian Inference | 37 pages, 4 figures; added additional experiments, asymptotic theory
and exposition, corrected typos | null | null | null | stat.ME cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | The advent of Generative Artificial Intelligence (GAI) has heralded an
inflection point that changed how society thinks about knowledge acquisition.
While GAI cannot be fully trusted for decision-making, it may still provide
valuable information that can be integrated into a decision pipeline. Rather
than seeing the lack of certitude and inherent randomness of GAI as a problem,
we view it as an opportunity. Indeed, variable answers to given prompts can be
leveraged to construct a prior distribution which reflects assuredness of AI
predictions. This prior distribution may be combined with tailored datasets for
a fully Bayesian analysis with an AI-driven prior. In this paper, we explore
such a possibility within a non-parametric Bayesian framework. The basic idea
consists of assigning a Dirichlet process prior distribution on the
data-generating distribution with the AI generative model as its baseline.
Hyper-parameters of the prior can be tuned out-of-sample to assess the
informativeness of the AI prior. Posterior simulation is achieved by computing
a suitably randomized functional on an augmented dataset that consists of observed
(labeled) data as well as fake data whose labels have been imputed using AI.
This strategy can be parallelized and rapidly produces iid samples from the
posterior by optimization as opposed to sampling from conditionals. Our method
enables (predictive) inference and uncertainty quantification leveraging AI
predictions in a coherent probabilistic manner.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 15:42:06 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 15:27:51 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ročková",
"Veronika",
""
],
[
"O'Hagan",
"Sean",
""
]
] | TITLE: AI-Powered Bayesian Inference
ABSTRACT: The advent of Generative Artificial Intelligence (GAI) has heralded an
inflection point that changed how society thinks about knowledge acquisition.
While GAI cannot be fully trusted for decision-making, it may still provide
valuable information that can be integrated into a decision pipeline. Rather
than seeing the lack of certitude and inherent randomness of GAI as a problem,
we view it as an opportunity. Indeed, variable answers to given prompts can be
leveraged to construct a prior distribution which reflects assuredness of AI
predictions. This prior distribution may be combined with tailored datasets for
a fully Bayesian analysis with an AI-driven prior. In this paper, we explore
such a possibility within a non-parametric Bayesian framework. The basic idea
consists of assigning a Dirichlet process prior distribution on the
data-generating distribution with AI generative model as its baseline.
Hyper-parameters of the prior can be tuned out-of-sample to assess the
informativeness of the AI prior. Posterior simulation is achieved by computing
a suitably randomized functional on an augmented data that consists of observed
(labeled) data as well as fake data whose labels have been imputed using AI.
This strategy can be parallelized and rapidly produces iid samples from the
posterior by optimization as opposed to sampling from conditionals. Our method
enables (predictive) inference and uncertainty quantification leveraging AI
predictions in a coherent probabilistic manner.
|
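A minimal sketch of the augmented Bayesian-bootstrap idea outlined in the abstract above: draw Dirichlet weights over observed data plus AI-imputed "fake" data and recompute a simple functional (here, a mean) for each draw. The paper's specific randomised functional, optimisation step, and out-of-sample tuning of the prior weights are not reproduced; the data and weight values below are made up for illustration.

```python
# Minimal sketch, assuming a weighted mean as the functional of interest and
# synthetic stand-ins for the observed and AI-imputed labels.
import numpy as np

rng = np.random.default_rng(0)

y_obs = rng.normal(loc=1.0, scale=1.0, size=50)     # observed labels
y_ai = rng.normal(loc=1.2, scale=1.0, size=200)     # AI-imputed labels (stand-in)
alpha_obs, alpha_ai = 1.0, 0.25                     # illustrative prior weights

y_aug = np.concatenate([y_obs, y_ai])
alpha = np.concatenate([np.full(y_obs.size, alpha_obs),
                        np.full(y_ai.size, alpha_ai)])

posterior_draws = []
for _ in range(2000):
    w = rng.dirichlet(alpha)                        # random weights over augmented data
    posterior_draws.append(np.sum(w * y_aug))       # functional of interest: the mean

posterior_draws = np.array(posterior_draws)
print("posterior mean:", posterior_draws.mean())
print("95% credible interval:", np.quantile(posterior_draws, [0.025, 0.975]))
```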
2502.20760 | Weijia Zhang | Weijia Zhang, Fei Xie, Weidong Cai, Chao Ma | VRM: Knowledge Distillation via Virtual Relation Matching | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge distillation (KD) aims to transfer the knowledge of a more capable
yet cumbersome teacher model to a lightweight student model. In recent years,
relation-based KD methods have fallen behind, as their instance-matching
counterparts dominate in performance. In this paper, we revive relational KD by
identifying and tackling several key issues in relation-based methods,
including their susceptibility to overfitting and spurious responses.
Specifically, we transfer novelly constructed affinity graphs that compactly
encapsulate a wealth of beneficial inter-sample, inter-class, and inter-view
correlations by exploiting virtual views and relations as a new kind of
knowledge. As a result, the student has access to richer guidance signals and
stronger regularisation throughout the distillation process. To further
mitigate the adverse impact of spurious responses, we prune the affinity graphs
by dynamically detaching redundant and unreliable edges. Extensive experiments
on CIFAR-100 and ImageNet datasets demonstrate the superior performance of the
proposed virtual relation matching (VRM) method over a range of models,
architectures, and set-ups. For instance, VRM for the first time hits 74.0%
accuracy for ResNet50-to-MobileNetV2 distillation on ImageNet, and improves
DeiT-T by 14.44% on CIFAR-100 with a ResNet56 teacher. Thorough analyses are
also conducted to gauge the soundness, properties, and complexity of our
designs. Code and models will be released.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 06:29:39 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 04:57:26 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhang",
"Weijia",
""
],
[
"Xie",
"Fei",
""
],
[
"Cai",
"Weidong",
""
],
[
"Ma",
"Chao",
""
]
] | TITLE: VRM: Knowledge Distillation via Virtual Relation Matching
ABSTRACT: Knowledge distillation (KD) aims to transfer the knowledge of a more capable
yet cumbersome teacher model to a lightweight student model. In recent years,
relation-based KD methods have fallen behind, as their instance-matching
counterparts dominate in performance. In this paper, we revive relational KD by
identifying and tackling several key issues in relation-based methods,
including their susceptibility to overfitting and spurious responses.
Specifically, we transfer novelly constructed affinity graphs that compactly
encapsulate a wealth of beneficial inter-sample, inter-class, and inter-view
correlations by exploiting virtual views and relations as a new kind of
knowledge. As a result, the student has access to richer guidance signals and
stronger regularisation throughout the distillation process. To further
mitigate the adverse impact of spurious responses, we prune the affinity graphs
by dynamically detaching redundant and unreliable edges. Extensive experiments
on CIFAR-100 and ImageNet datasets demonstrate the superior performance of the
proposed virtual relation matching (VRM) method over a range of models,
architectures, and set-ups. For instance, VRM for the first time hits 74.0%
accuracy for ResNet50-to-MobileNetV2 distillation on ImageNet, and improves
DeiT-T by 14.44% on CIFAR-100 with a ResNet56 teacher. Thorough analyses are
also conducted to gauge the soundness, properties, and complexity of our
designs. Code and models will be released.
|
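As an informal sketch of relation (affinity-graph) matching for distillation, the snippet below builds inter-sample cosine-affinity matrices from teacher and student batch features, drops edges where the teacher affinity is weak, and matches the remaining entries. It only illustrates the general mechanism; VRM's virtual views, inter-class and inter-view relations, and dynamic edge pruning are not reproduced, and the threshold is an assumed stand-in.

```python
# Illustrative relation-matching sketch with simple edge pruning.
import torch
import torch.nn.functional as F

def affinity(feats: torch.Tensor) -> torch.Tensor:
    """(B, D) features -> (B, B) cosine-similarity affinity matrix."""
    f = F.normalize(feats, dim=-1)
    return f @ f.t()

def relation_matching_loss(f_student: torch.Tensor,
                           f_teacher: torch.Tensor,
                           keep_threshold: float = 0.1) -> torch.Tensor:
    a_s, a_t = affinity(f_student), affinity(f_teacher)
    mask = (a_t.abs() >= keep_threshold).float()    # prune weak/unreliable teacher edges
    mask.fill_diagonal_(0.0)                        # ignore self-relations
    diff = (a_s - a_t) ** 2 * mask
    return diff.sum() / mask.sum().clamp(min=1.0)

# Usage with random stand-in features of different dimensionality
loss = relation_matching_loss(torch.randn(16, 128), torch.randn(16, 256))
print(float(loss))
```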
2503.00370 | Nicholas Pfaff Mr | Nicholas Pfaff, Evelyn Fu, Jeremy Binagia, Phillip Isola, and Russ
Tedrake | Scalable Real2Sim: Physics-Aware Asset Generation Via Robotic
Pick-and-Place Setups | Website: https://scalable-real2sim.github.io/ | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simulating object dynamics from real-world perception shows great promise for
digital twins and robotic manipulation but often demands labor-intensive
measurements and expertise. We present a fully automated Real2Sim pipeline that
generates simulation-ready assets for real-world objects through robotic
interaction. Using only a robot's joint torque sensors and an external camera,
the pipeline identifies visual geometry, collision geometry, and physical
properties such as inertial parameters. Our approach introduces a general
method for extracting high-quality, object-centric meshes from photometric
reconstruction techniques (e.g., NeRF, Gaussian Splatting) by employing
alpha-transparent training while explicitly distinguishing foreground
occlusions from background subtraction. We validate the full pipeline through
extensive experiments, demonstrating its effectiveness across diverse objects.
By eliminating the need for manual intervention or environment modifications,
our pipeline can be integrated directly into existing pick-and-place setups,
enabling scalable and efficient dataset creation. Project page (with code and
data): https://scalable-real2sim.github.io/.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 06:40:41 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 03:01:18 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Pfaff",
"Nicholas",
""
],
[
"Fu",
"Evelyn",
""
],
[
"Binagia",
"Jeremy",
""
],
[
"Isola",
"Phillip",
""
],
[
"Tedrake",
"Russ",
""
]
] | TITLE: Scalable Real2Sim: Physics-Aware Asset Generation Via Robotic
Pick-and-Place Setups
ABSTRACT: Simulating object dynamics from real-world perception shows great promise for
digital twins and robotic manipulation but often demands labor-intensive
measurements and expertise. We present a fully automated Real2Sim pipeline that
generates simulation-ready assets for real-world objects through robotic
interaction. Using only a robot's joint torque sensors and an external camera,
the pipeline identifies visual geometry, collision geometry, and physical
properties such as inertial parameters. Our approach introduces a general
method for extracting high-quality, object-centric meshes from photometric
reconstruction techniques (e.g., NeRF, Gaussian Splatting) by employing
alpha-transparent training while explicitly distinguishing foreground
occlusions from background subtraction. We validate the full pipeline through
extensive experiments, demonstrating its effectiveness across diverse objects.
By eliminating the need for manual intervention or environment modifications,
our pipeline can be integrated directly into existing pick-and-place setups,
enabling scalable and efficient dataset creation. Project page (with code and
data): https://scalable-real2sim.github.io/.
|
2503.01263 | Cui Fangming | Fangming Cui, Yonggang Zhang, Xuan Wang, Xule Wang, Liang Xiao | Generalizable Prompt Learning of CLIP: A Brief Overview | null | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Existing vision-language models (VLMs) such as CLIP have showcased an
impressive capability to generalize well across various downstream tasks. These
models leverage the synergy between visual and textual information, enabling
them to understand and reason about the content present in images and text in a
unified manner. This article provides a brief overview of CLIP based on
few-shot prompt learning, including experimental data and technical
characteristics of some methods. The purpose of this review is to provide a
reference for researchers who have just started their research in generalizable
prompting of CLIP through few-shot training for classification across 15
datasets and also to facilitate the integration of this field by researchers in
other downstream tasks.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 07:41:41 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 09:28:13 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Mar 2025 02:51:32 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 06:41:18 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Cui",
"Fangming",
""
],
[
"Zhang",
"Yonggang",
""
],
[
"Wang",
"Xuan",
""
],
[
"Wang",
"Xule",
""
],
[
"Xiao",
"Liang",
""
]
] | TITLE: Generalizable Prompt Learning of CLIP: A Brief Overview
ABSTRACT: Existing vision-language models (VLMs) such as CLIP have showcased an
impressive capability to generalize well across various downstream tasks. These
models leverage the synergy between visual and textual information, enabling
them to understand and reason about the content present in images and text in a
unified manner. This article provides a brief overview of CLIP based on
few-shot prompt learning, including experimental data and technical
characteristics of some methods. The purpose of this review is to provide a
reference for researchers who have just started working on generalizable
prompting of CLIP through few-shot training for classification across 15
datasets, and also to help researchers in other downstream tasks integrate
the findings of this field.
|
2503.01412 | Michael Groom Dr | Michael Groom, Davide Bassetti, Illia Horenko, Terence J. O'Kane | Entropic learning enables skilful forecasts of ENSO phase at up to two
years lead time | null | null | null | null | physics.comp-ph physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper extends previous work (Groom et al., \emph{Artif. Intell. Earth
Syst.}, 2024) in applying the entropy-optimal Sparse Probabilistic
Approximation (eSPA) algorithm to predict ENSO phase, defined by thresholding
the Ni\~no3.4 index. Only satellite-era observational datasets are used for
training and validation, while retrospective forecasts from 2012 to 2022 are
used to assess out-of-sample skill at lead times up to 24 months. Rather than
train a single eSPA model per lead, we introduce an ensemble approach in which
multiple eSPA models are aggregated via a novel meta-learning strategy. The
features used include the leading principal components from a delay-embedded
EOF analysis of global sea surface temperature, vertical temperature gradient
(a thermocline proxy), and tropical Pacific wind stresses. Crucially, the data
is processed to prevent any form of information leakage from the future,
ensuring realistic real-time forecasting conditions. Despite the limited number
of training instances, eSPA avoids overfitting and produces probabilistic
forecasts with skill comparable to the International Research Institute for
Climate and Society (IRI) ENSO prediction plume. Beyond the IRI's lead times,
eSPA maintains skill out to 22 months for the ranked probability skill score
and 24 months for accuracy and area under the ROC curve, all at a fraction of
the computational cost of a fully-coupled dynamical model. Furthermore, eSPA
successfully forecasts the 2015/16 and 2018/19 El Ni\~no events at 24 months
lead, the 2016/17, 2017/18 and 2020/21 La Ni\~na events at 24 months lead and
the 2021/22 and 2022/23 La Ni\~na events at 12 and 8 months lead.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 11:06:10 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 10:15:59 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Groom",
"Michael",
""
],
[
"Bassetti",
"Davide",
""
],
[
"Horenko",
"Illia",
""
],
[
"O'Kane",
"Terence J.",
""
]
] | TITLE: Entropic learning enables skilful forecasts of ENSO phase at up to two
years lead time
ABSTRACT: This paper extends previous work (Groom et al., \emph{Artif. Intell. Earth
Syst.}, 2024) in applying the entropy-optimal Sparse Probabilistic
Approximation (eSPA) algorithm to predict ENSO phase, defined by thresholding
the Ni\~no3.4 index. Only satellite-era observational datasets are used for
training and validation, while retrospective forecasts from 2012 to 2022 are
used to assess out-of-sample skill at lead times up to 24 months. Rather than
train a single eSPA model per lead, we introduce an ensemble approach in which
multiple eSPA models are aggregated via a novel meta-learning strategy. The
features used include the leading principal components from a delay-embedded
EOF analysis of global sea surface temperature, vertical temperature gradient
(a thermocline proxy), and tropical Pacific wind stresses. Crucially, the data
is processed to prevent any form of information leakage from the future,
ensuring realistic real-time forecasting conditions. Despite the limited number
of training instances, eSPA avoids overfitting and produces probabilistic
forecasts with skill comparable to the International Research Institute for
Climate and Society (IRI) ENSO prediction plume. Beyond the IRI's lead times,
eSPA maintains skill out to 22 months for the ranked probability skill score
and 24 months for accuracy and area under the ROC curve, all at a fraction of
the computational cost of a fully-coupled dynamical model. Furthermore, eSPA
successfully forecasts the 2015/16 and 2018/19 El Ni\~no events at 24 months
lead, the 2016/17, 2017/18 and 2020/21 La Ni\~na events at 24 months lead and
the 2021/22 and 2022/23 La Ni\~na events at 12 and 8 months lead.
|
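The following toy sketch illustrates only the feature-construction step hinted at above, namely a delay embedding of an index series followed by EOF/PCA, with an off-the-shelf classifier standing in for eSPA. The index series is synthetic, and the ensemble meta-learning and leakage-safe preprocessing described in the abstract are not reproduced.

```python
# Sketch of delay-embedded "EOF" features for phase classification, using a
# synthetic stand-in index and a logistic regression as a placeholder model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
months = np.arange(600)
nino34 = 1.2 * np.sin(2 * np.pi * months / 48) + 0.3 * rng.normal(size=600)

def delay_embed(series, n_lags):
    """Row t holds [x_t, x_{t-1}, ..., x_{t-n_lags+1}]."""
    return np.stack([series[i:len(series) - n_lags + i + 1]
                     for i in range(n_lags)], axis=1)[:, ::-1]

X = delay_embed(nino34, n_lags=12)
phase = np.digitize(nino34[11:], bins=[-0.5, 0.5])   # 0: La Nina, 1: neutral, 2: El Nino

pcs = PCA(n_components=5).fit_transform(X)           # leading principal components
split = int(0.8 * len(pcs))                          # chronological split, no shuffling
clf = LogisticRegression(max_iter=1000).fit(pcs[:split], phase[:split])
print("held-out accuracy:", round(clf.score(pcs[split:], phase[split:]), 3))
```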
2503.06405 | Jiachen Luo | Jiachen Luo, Huy Phan, Lin Wang, Joshua Reiss | Heterogeneous bimodal attention fusion for speech emotion recognition | null | null | null | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by/4.0/ | Multi-modal emotion recognition in conversations is a challenging problem due
to the complex and complementary interactions between different modalities.
Audio and textual cues are particularly important for understanding emotions
from a human perspective. Most existing studies focus on exploring interactions
between audio and text modalities at the same representation level. However, a
critical issue is often overlooked: the heterogeneous modality gap between
low-level audio representations and high-level text representations. To address
this problem, we propose a novel framework called Heterogeneous Bimodal
Attention Fusion (HBAF) for multi-level multi-modal interaction in
conversational emotion recognition. The proposed method comprises three key
modules: the uni-modal representation module, the multi-modal fusion module,
and the inter-modal contrastive learning module. The uni-modal representation
module incorporates contextual content into low-level audio representations to
bridge the heterogeneous multi-modal gap, enabling more effective fusion. The
multi-modal fusion module uses dynamic bimodal attention and a dynamic gating
mechanism to filter incorrect cross-modal relationships and fully exploit both
intra-modal and inter-modal interactions. Finally, the inter-modal contrastive
learning module captures complex absolute and relative interactions between
audio and text modalities. Experiments on the MELD and IEMOCAP datasets
demonstrate that the proposed HBAF method outperforms existing state-of-the-art
baselines.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 02:50:49 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Mar 2025 08:21:43 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 00:53:56 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Luo",
"Jiachen",
""
],
[
"Phan",
"Huy",
""
],
[
"Wang",
"Lin",
""
],
[
"Reiss",
"Joshua",
""
]
] | TITLE: Heterogeneous bimodal attention fusion for speech emotion recognition
ABSTRACT: Multi-modal emotion recognition in conversations is a challenging problem due
to the complex and complementary interactions between different modalities.
Audio and textual cues are particularly important for understanding emotions
from a human perspective. Most existing studies focus on exploring interactions
between audio and text modalities at the same representation level. However, a
critical issue is often overlooked: the heterogeneous modality gap between
low-level audio representations and high-level text representations. To address
this problem, we propose a novel framework called Heterogeneous Bimodal
Attention Fusion (HBAF) for multi-level multi-modal interaction in
conversational emotion recognition. The proposed method comprises three key
modules: the uni-modal representation module, the multi-modal fusion module,
and the inter-modal contrastive learning module. The uni-modal representation
module incorporates contextual content into low-level audio representations to
bridge the heterogeneous multi-modal gap, enabling more effective fusion. The
multi-modal fusion module uses dynamic bimodal attention and a dynamic gating
mechanism to filter incorrect cross-modal relationships and fully exploit both
intra-modal and inter-modal interactions. Finally, the inter-modal contrastive
learning module captures complex absolute and relative interactions between
audio and text modalities. Experiments on the MELD and IEMOCAP datasets
demonstrate that the proposed HBAF method outperforms existing state-of-the-art
baselines.
|
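A minimal PyTorch sketch of a gated bimodal attention fusion block in the spirit of the abstract above: text features attend to audio features, and a learned gate decides how much of the attended signal is mixed back in. This shows only the general shape of such a block, not HBAF's actual modules or its inter-modal contrastive learning component; dimensions are arbitrary.

```python
# Gated bimodal cross-attention fusion sketch.
import torch
import torch.nn as nn

class GatedBimodalFusion(nn.Module):
    def __init__(self, dim: int = 256, n_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # text: (B, T_text, D), audio: (B, T_audio, D)
        attended, _ = self.cross_attn(query=text, key=audio, value=audio)
        g = self.gate(torch.cat([text, attended], dim=-1))   # per-position gate in [0, 1]
        return self.norm(text + g * attended)                # gated residual fusion

fused = GatedBimodalFusion()(torch.randn(2, 20, 256), torch.randn(2, 50, 256))
print(fused.shape)   # torch.Size([2, 20, 256])
```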
2503.09194 | Xudong Sun | Xudong Sun and Alex Markham and Pratik Misra and Carsten Marr | Addressing pitfalls in implicit unobserved confounding synthesis using
explicit block hierarchical ancestral sampling | null | null | null | null | stat.ML cs.LG math.ST stat.TH | http://creativecommons.org/licenses/by/4.0/ | Unbiased data synthesis is crucial for evaluating causal discovery algorithms
in the presence of unobserved confounding, given the scarcity of real-world
datasets. A common approach, implicit parameterization, encodes unobserved
confounding by modifying the off-diagonal entries of the idiosyncratic
covariance matrix while preserving positive definiteness. Within this approach,
we identify that state-of-the-art protocols have two distinct issues that
hinder unbiased sampling from the complete space of causal models: first, we
give a detailed analysis of how the use of diagonally dominant constructions
restricts the spectrum of partial correlation matrices; and second, the
restriction of possible graphical structures when sampling bidirected edges
unnecessarily rules out valid causal models. To address these limitations, we
propose an
improved explicit modeling approach for unobserved confounding, leveraging
block-hierarchical ancestral generation of ground truth causal graphs.
Algorithms for converting the ground truth DAG into an ancestral graph are
provided so that the output of causal discovery algorithms can be compared
against it. We
draw connections between implicit and explicit parameterization, prove that our
approach fully covers the space of causal models, including those generated by
the implicit parameterization, thus enabling more robust evaluation of methods
for causal discovery and inference.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 09:38:40 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 00:19:11 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Sun",
"Xudong",
""
],
[
"Markham",
"Alex",
""
],
[
"Misra",
"Pratik",
""
],
[
"Marr",
"Carsten",
""
]
] | TITLE: Addressing pitfalls in implicit unobserved confounding synthesis using
explicit block hierarchical ancestral sampling
ABSTRACT: Unbiased data synthesis is crucial for evaluating causal discovery algorithms
in the presence of unobserved confounding, given the scarcity of real-world
datasets. A common approach, implicit parameterization, encodes unobserved
confounding by modifying the off-diagonal entries of the idiosyncratic
covariance matrix while preserving positive definiteness. Within this approach,
we identify that state-of-the-art protocols have two distinct issues that
hinder unbiased sampling from the complete space of causal models: first, we
give a detailed analysis of how the use of diagonally dominant constructions
restricts the spectrum of partial correlation matrices; and second, the
restriction of possible graphical structures when sampling bidirected edges
unnecessarily rules out valid causal models. To address these limitations, we
propose an
improved explicit modeling approach for unobserved confounding, leveraging
block-hierarchical ancestral generation of ground truth causal graphs.
Algorithms for converting the ground truth DAG into an ancestral graph are
provided so that the output of causal discovery algorithms can be compared
against it. We
draw connections between implicit and explicit parameterization, prove that our
approach fully covers the space of causal models, including those generated by
the implicit parameterization, thus enabling more robust evaluation of methods
for causal discovery and inference.
|
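To probe the first issue named above numerically, the snippet below samples idiosyncratic covariance matrices via a diagonally dominant construction and reports the range of the implied marginal and partial correlations. It is an informal demonstration under simple assumptions (uniform off-diagonal entries, Gershgorin-style diagonals), not the paper's formal analysis.

```python
# Informal probe: diagonally dominant covariance construction and the range of
# correlations / partial correlations it can produce.
import numpy as np

rng = np.random.default_rng(0)
p, n_trials = 10, 500
max_corr, max_pcorr = [], []

for _ in range(n_trials):
    off = rng.uniform(-1.0, 1.0, size=(p, p))
    off = np.triu(off, k=1)
    off = off + off.T                                # symmetric off-diagonal part
    # Diagonal dominance: each diagonal entry exceeds its row's absolute off-diagonal sum
    diag = np.abs(off).sum(axis=1) + rng.uniform(0.1, 1.0, size=p)
    sigma = off + np.diag(diag)                      # positive definite by Gershgorin

    d = np.sqrt(np.diag(sigma))
    corr = sigma / np.outer(d, d)                    # marginal correlations
    prec = np.linalg.inv(sigma)
    dp = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(dp, dp)                 # partial correlations (diagonal is -1)

    max_corr.append(np.abs(corr - np.eye(p)).max())
    max_pcorr.append(np.abs(pcorr + np.eye(p)).max())

print("max |marginal correlation| observed:", max(max_corr))
print("max |partial correlation| observed:", max(max_pcorr))
```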
2503.10200 | Boyu Chen | Boyu Chen, Zhengrong Yue, Siran Chen, Zikang Wang, Yang Liu, Peng Li,
Yali Wang | LVAgent: Long Video Understanding by Multi-Round Dynamical Collaboration
of MLLM Agents | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Existing Multimodal Large Language Models (MLLMs) encounter significant
challenges in modeling the temporal context within long videos. Currently,
mainstream Agent-based methods use external tools (e.g., search engine, memory
banks, OCR, retrieval models) to assist a single MLLM in answering long video
questions. Despite such tool-based support, a solitary MLLM still offers only a
partial understanding of long videos, resulting in limited performance. In
order to better address long video tasks, we introduce LVAgent, the first
framework enabling multi-round dynamic collaboration of MLLM agents in long
video understanding. Our methodology consists of four key steps: 1. Selection:
We pre-select appropriate agents from the model library to form optimal agent
teams based on different tasks. 2. Perception: We design an effective retrieval
scheme for long videos, improving the coverage of critical temporal segments
while maintaining computational efficiency. 3. Action: Agents answer long
video-related questions and exchange reasons. 4. Reflection: We evaluate the
performance of each agent in each round of discussion and optimize the agent
team for dynamic collaboration. The agents iteratively refine their answers by
multi-round dynamical collaboration of MLLM agents. LVAgent is the first agent
system method that outperforms all closed-source models (including GPT-4o) and
open-source models (including InternVL-2.5 and Qwen2-VL) in the long video
understanding tasks. Our LVAgent achieves an accuracy of 80% on four mainstream
long video understanding tasks. Notably, on the LongVideoBench dataset, LVAgent
improves accuracy by up to 13.3% compared with SOTA.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 09:35:09 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 02:07:45 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Chen",
"Boyu",
""
],
[
"Yue",
"Zhengrong",
""
],
[
"Chen",
"Siran",
""
],
[
"Wang",
"Zikang",
""
],
[
"Liu",
"Yang",
""
],
[
"Li",
"Peng",
""
],
[
"Wang",
"Yali",
""
]
] | TITLE: LVAgent: Long Video Understanding by Multi-Round Dynamical Collaboration
of MLLM Agents
ABSTRACT: Existing Multimodal Large Language Models (MLLMs) encounter significant
challenges in modeling the temporal context within long videos. Currently,
mainstream Agent-based methods use external tools (e.g., search engine, memory
banks, OCR, retrieval models) to assist a single MLLM in answering long video
questions. Despite such tool-based support, a solitary MLLM still offers only a
partial understanding of long videos, resulting in limited performance. In
order to better address long video tasks, we introduce LVAgent, the first
framework enabling multi-round dynamic collaboration of MLLM agents in long
video understanding. Our methodology consists of four key steps: 1. Selection:
We pre-select appropriate agents from the model library to form optimal agent
teams based on different tasks. 2. Perception: We design an effective retrieval
scheme for long videos, improving the coverage of critical temporal segments
while maintaining computational efficiency. 3. Action: Agents answer long
video-related questions and exchange reasons. 4. Reflection: We evaluate the
performance of each agent in each round of discussion and optimize the agent
team for dynamic collaboration. The agents iteratively refine their answers by
multi-round dynamical collaboration of MLLM agents. LVAgent is the first agent
system method that outperforms all closed-source models (including GPT-4o) and
open-source models (including InternVL-2.5 and Qwen2-VL) in the long video
understanding tasks. Our LVAgent achieves an accuracy of 80% on four mainstream
long video understanding tasks. Notably, on the LongVideoBench dataset, LVAgent
improves accuracy by up to 13.3% compared with SOTA.
|
2503.10460 | Haosheng Zou | Liang Wen, Yunke Cai, Fenrui Xiao, Xin He, Qi An, Zhenyu Duan, Yimin
Du, Junchen Liu, Lifu Tang, Xiaowei Lv, Haosheng Zou, Yongchao Deng,
Shousheng Jia, Xiangzheng Zhang | Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and
Beyond | v3: minor modifications; v2: better writing & format for later
submission; all release at https://github.com/Qihoo360/Light-R1 | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper introduces Light-R1, an open-source suite for training long
reasoning models using reproducible and cost-effective methodology. Given the
proprietary nature of data used in the DeepSeek-R1 series, we develop an
alternative approach leveraging exclusively public data and models. Our
curriculum training progressively increases data difficulty, combined with
multi-staged post-training. Our Light-R1-32B model, trained from
Qwen2.5-32B-Instruct, outperforms DeepSeek-R1-Distill-Qwen-32B in math
reasoning.
Experimental results show that this curriculum approach becomes more
effective when distinct, diverse datasets are available for different training
stages: fine-tuning DeepSeek-R1-Distilled models (pre-tuned by DeepSeek team on
proprietary data) with 3,000 challenging examples from our curriculum dataset
yielded state-of-the-art 7B and 14B models, while the 32B model,
Light-R1-32B-DS performed comparably to QwQ-32B and DeepSeek-R1.
Furthermore, we extend our work by applying GRPO on long reasoning models.
Our final Light-R1-14B-DS achieves SOTA performance among 14B models in math,
with AIME24 \& 25 scores of 74.0 and 60.2 respectively, surpassing many 32B
models and DeepSeek-R1-Distill-Llama-70B. Despite math-focused training,
Light-R1-14B-DS demonstrates strong cross-domain generalization.
Light-R1 represents a significant advancement in making sophisticated
reasoning models more accessible and implementable in real-world applications.
Our models, training data and code have been made available at
https://github.com/Qihoo360/Light-R1.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 15:29:22 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 17:07:21 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 15:08:26 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Wen",
"Liang",
""
],
[
"Cai",
"Yunke",
""
],
[
"Xiao",
"Fenrui",
""
],
[
"He",
"Xin",
""
],
[
"An",
"Qi",
""
],
[
"Duan",
"Zhenyu",
""
],
[
"Du",
"Yimin",
""
],
[
"Liu",
"Junchen",
""
],
[
"Tang",
"Lifu",
""
],
[
"Lv",
"Xiaowei",
""
],
[
"Zou",
"Haosheng",
""
],
[
"Deng",
"Yongchao",
""
],
[
"Jia",
"Shousheng",
""
],
[
"Zhang",
"Xiangzheng",
""
]
] | TITLE: Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and
Beyond
ABSTRACT: This paper introduces Light-R1, an open-source suite for training long
reasoning models using reproducible and cost-effective methodology. Given the
proprietary nature of data used in the DeepSeek-R1 series, we develop an
alternative approach leveraging exclusively public data and models. Our
curriculum training progressively increases data difficulty, combined with
multi-staged post-training. Our Light-R1-32B model, trained from
Qwen2.5-32B-Instruct, outperforms DeepSeek-R1-Distill-Qwen-32B in math
reasoning.
Experimental results show that this curriculum approach becomes more
effective when distinct, diverse datasets are available for different training
stages: fine-tuning DeepSeek-R1-Distilled models (pre-tuned by DeepSeek team on
proprietary data) with 3,000 challenging examples from our curriculum dataset
yielded state-of-the-art 7B and 14B models, while the 32B model,
Light-R1-32B-DS performed comparably to QwQ-32B and DeepSeek-R1.
Furthermore, we extend our work by applying GRPO on long reasoning models.
Our final Light-R1-14B-DS achieves SOTA performance among 14B models in math,
with AIME24 \& 25 scores of 74.0 and 60.2 respectively, surpassing many 32B
models and DeepSeek-R1-Distill-Llama-70B. Despite math-focused training,
Light-R1-14B-DS demonstrates strong cross-domain generalization.
Light-R1 represents a significant advancement in making sophisticated
reasoning models more accessible and implementable in real-world applications.
Our models, training data and code have been made available at
https://github.com/Qihoo360/Light-R1.
|
2503.11937 | Wonwoong Cho | Wonwoong Cho, Yan-Ying Chen, Matthew Klenk, David I. Inouye, Yanxia
Zhang | Att-Adapter: A Robust and Precise Domain-Specific Multi-Attributes T2I
Diffusion Adapter via Conditional Variational Autoencoder | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Text-to-Image (T2I) Diffusion Models have achieved remarkable performance in
generating high quality images. However, enabling precise control of continuous
attributes, especially multiple attributes simultaneously, in a new domain
(e.g., numeric values like eye openness or car width) with text-only guidance
remains a significant challenge. To address this, we introduce the Attribute
(Att) Adapter, a novel plug-and-play module designed to enable fine-grained,
multi-attributes control in pretrained diffusion models. Our approach learns a
single control adapter from a set of sample images that can be unpaired and
contain multiple visual attributes. The Att-Adapter leverages the decoupled
cross attention module to naturally harmonize the multiple domain attributes
with text conditioning. We further introduce Conditional Variational
Autoencoder (CVAE) to the Att-Adapter to mitigate overfitting, matching the
diverse nature of the visual world. Evaluations on two public datasets show
that Att-Adapter outperforms all LoRA-based baselines in controlling continuous
attributes. Additionally, our method enables a broader control range and also
improves disentanglement across multiple attributes, surpassing StyleGAN-based
techniques. Notably, Att-Adapter is flexible, requiring no paired synthetic
data for training, and is easily scalable to multiple attributes within a
single model.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 01:06:34 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 13:42:51 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Cho",
"Wonwoong",
""
],
[
"Chen",
"Yan-Ying",
""
],
[
"Klenk",
"Matthew",
""
],
[
"Inouye",
"David I.",
""
],
[
"Zhang",
"Yanxia",
""
]
] | TITLE: Att-Adapter: A Robust and Precise Domain-Specific Multi-Attributes T2I
Diffusion Adapter via Conditional Variational Autoencoder
ABSTRACT: Text-to-Image (T2I) Diffusion Models have achieved remarkable performance in
generating high quality images. However, enabling precise control of continuous
attributes, especially multiple attributes simultaneously, in a new domain
(e.g., numeric values like eye openness or car width) with text-only guidance
remains a significant challenge. To address this, we introduce the Attribute
(Att) Adapter, a novel plug-and-play module designed to enable fine-grained,
multi-attributes control in pretrained diffusion models. Our approach learns a
single control adapter from a set of sample images that can be unpaired and
contain multiple visual attributes. The Att-Adapter leverages the decoupled
cross attention module to naturally harmonize the multiple domain attributes
with text conditioning. We further introduce Conditional Variational
Autoencoder (CVAE) to the Att-Adapter to mitigate overfitting, matching the
diverse nature of the visual world. Evaluations on two public datasets show
that Att-Adapter outperforms all LoRA-based baselines in controlling continuous
attributes. Additionally, our method enables a broader control range and also
improves disentanglement across multiple attributes, surpassing StyleGAN-based
techniques. Notably, Att-Adapter is flexible, requiring no paired synthetic
data for training, and is easily scalable to multiple attributes within a
single model.
|
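A rough sketch of the decoupled cross-attention idea mentioned above: a separate cross-attention branch over attribute tokens is added to the usual text cross-attention output inside a diffusion U-Net block. The CVAE that produces the attribute tokens and the training procedure are omitted, and all dimensions and names are illustrative assumptions rather than the paper's implementation.

```python
# Decoupled cross-attention sketch: text branch plus a scaled attribute branch.
import torch
import torch.nn as nn

class DecoupledCrossAttention(nn.Module):
    def __init__(self, dim: int = 320, n_heads: int = 8, scale: float = 1.0):
        super().__init__()
        self.text_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.attr_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.scale = scale  # strength of the attribute branch

    def forward(self, latent, text_tokens, attr_tokens):
        # latent: (B, N, D) image latents; text_tokens / attr_tokens: (B, L, D)
        out_text, _ = self.text_attn(latent, text_tokens, text_tokens)
        out_attr, _ = self.attr_attn(latent, attr_tokens, attr_tokens)
        return out_text + self.scale * out_attr      # decoupled branches, summed

block = DecoupledCrossAttention()
y = block(torch.randn(2, 64, 320), torch.randn(2, 77, 320), torch.randn(2, 4, 320))
print(y.shape)   # torch.Size([2, 64, 320])
```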
2503.13269 | Wenyi Xu | Wenyi Xu, Yuren Mao, Xiaolu Zhang, Chao Zhang, Xuemei Dong, Mengfei
Zhang, Yunjun Gao | DAgent: A Relational Database-Driven Data Analysis Report Generation
Agent | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relational database-driven data analysis (RDB-DA) report generation, which
aims to generate data analysis reports after querying relational databases, has
been widely applied in fields such as finance and healthcare. Typically, these
tasks are manually completed by data scientists, making the process very
labor-intensive and showing a clear need for automation. Although existing
methods (e.g., Table QA or Text-to-SQL) have been proposed to reduce human
dependency, they cannot handle complex analytical tasks that require multi-step
reasoning, cross-table associations, and synthesizing insights into reports.
Moreover, there is no dataset available for developing automatic RDB-DA report
generation. To fill this gap, this paper proposes an LLM agent system for
RDB-DA report generation tasks, dubbed DAgent; moreover, we construct a
benchmark for automatic data analysis report generation, which includes a new
dataset DA-Dataset and evaluation metrics. DAgent integrates planning, tools,
and memory modules to decompose natural language questions into logically
independent sub-queries, accurately retrieve key information from relational
databases, and generate analytical reports that meet the requirements of
completeness, correctness, and conciseness through multi-step reasoning and
effective data integration. Experimental analysis on the DA-Dataset
demonstrates DAgent's superiority in retrieval performance and analysis
report generation quality, showcasing its strong potential for tackling complex
database analysis report generation tasks.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 15:22:19 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 12:13:46 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Xu",
"Wenyi",
""
],
[
"Mao",
"Yuren",
""
],
[
"Zhang",
"Xiaolu",
""
],
[
"Zhang",
"Chao",
""
],
[
"Dong",
"Xuemei",
""
],
[
"Zhang",
"Mengfei",
""
],
[
"Gao",
"Yunjun",
""
]
] | TITLE: DAgent: A Relational Database-Driven Data Analysis Report Generation
Agent
ABSTRACT: Relational database-driven data analysis (RDB-DA) report generation, which
aims to generate data analysis reports after querying relational databases, has
been widely applied in fields such as finance and healthcare. Typically, these
tasks are manually completed by data scientists, making the process very
labor-intensive and showing a clear need for automation. Although existing
methods (e.g., Table QA or Text-to-SQL) have been proposed to reduce human
dependency, they cannot handle complex analytical tasks that require multi-step
reasoning, cross-table associations, and synthesizing insights into reports.
Moreover, there is no dataset available for developing automatic RDB-DA report
generation. To fill this gap, this paper proposes an LLM agent system for
RDB-DA report generation tasks, dubbed DAgent; moreover, we construct a
benchmark for automatic data analysis report generation, which includes a new
dataset DA-Dataset and evaluation metrics. DAgent integrates planning, tools,
and memory modules to decompose natural language questions into logically
independent sub-queries, accurately retrieve key information from relational
databases, and generate analytical reports that meet the requirements of
completeness, correctness, and conciseness through multi-step reasoning and
effective data integration. Experimental analysis on the DA-Dataset
demonstrates DAgent's superiority in retrieval performance and analysis
report generation quality, showcasing its strong potential for tackling complex
database analysis report generation tasks.
|
2503.13837 | Pin-Jie Lin | Pin-Jie Lin, Ernie Chang, Yangyang Shi, Vikas Chandra | Self-Vocabularizing Training for Neural Machine Translation | Accepted to NAACL SRW 2025 | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Past vocabulary learning techniques identify relevant vocabulary before
training, relying on statistical and entropy-based assumptions that largely
neglect the role of model training. Empirically, we observe that trained
translation models are induced to use a byte-pair encoding (BPE) vocabulary
subset distinct from the original BPE vocabulary, leading to performance
improvements when retrained with the induced vocabulary. In this paper, we
analyze this discrepancy in neural machine translation by examining vocabulary
and entropy shifts during self-training--where each iteration generates a
labeled dataset by pairing source sentences with the model's predictions to
define a new vocabulary. Building on these insights, we propose
self-vocabularizing training, an iterative method that self-selects a smaller,
more optimal vocabulary, yielding up to a 1.49 BLEU improvement. Moreover, we
find that deeper model architectures lead to both an increase in unique token
usage and a 6-8% reduction in vocabulary size.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 02:21:07 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 04:09:17 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 00:56:52 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Apr 2025 02:43:48 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Lin",
"Pin-Jie",
""
],
[
"Chang",
"Ernie",
""
],
[
"Shi",
"Yangyang",
""
],
[
"Chandra",
"Vikas",
""
]
] | TITLE: Self-Vocabularizing Training for Neural Machine Translation
ABSTRACT: Past vocabulary learning techniques identify relevant vocabulary before
training, relying on statistical and entropy-based assumptions that largely
neglect the role of model training. Empirically, we observe that trained
translation models are induced to use a byte-pair encoding (BPE) vocabulary
subset distinct from the original BPE vocabulary, leading to performance
improvements when retrained with the induced vocabulary. In this paper, we
analyze this discrepancy in neural machine translation by examining vocabulary
and entropy shifts during self-training--where each iteration generates a
labeled dataset by pairing source sentences with the model's predictions to
define a new vocabulary. Building on these insights, we propose
self-vocabularizing training, an iterative method that self-selects a smaller,
more optimal vocabulary, yielding up to a 1.49 BLEU improvement. Moreover, we
find that deeper model architectures lead to both an increase in unique token
usage and a 6-8% reduction in vocabulary size.
|
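As a toy illustration of one self-vocabularizing iteration, the snippet below tokenises the model's own predictions under the current vocabulary and keeps only the token subset actually used, which would then serve as the induced vocabulary for retraining. `translate` and `tokenize` are hypothetical stand-ins for a real NMT model and its BPE tokenizer; the retraining step itself is not shown.

```python
# Toy sketch of vocabulary induction from a model's own predictions.
from collections import Counter

def induce_vocabulary(source_sentences, translate, tokenize, min_count=1):
    counts = Counter()
    for src in source_sentences:
        hypothesis = translate(src)          # model's own prediction (self-training label)
        counts.update(tokenize(hypothesis))  # tokens under the current BPE vocabulary
    return {tok for tok, c in counts.items() if c >= min_count}

# Stand-in "model" and "tokenizer" so the sketch runs end to end
translate = lambda s: s.lower()
tokenize = lambda s: s.split()

induced = induce_vocabulary(["Das ist ein Test", "Ein kleiner Test"], translate, tokenize)
print(sorted(induced))   # the smaller, self-selected vocabulary subset
```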
2503.14538 | Anandakumar D | Ananya Ganapthy, Praveen Shastry, Naveen Kumarasami, Anandakumar D,
Keerthana R, Mounigasri M, Varshinipriya M, Kishore Prasath Venkatesh,
Bargava Subramanian, Kalyan Sivasailam | Vision-Language Models for Acute Tuberculosis Diagnosis: A Multimodal
Approach Combining Imaging and Clinical Data | 11 pages, 3 figures | null | null | null | eess.IV cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: This study introduces a Vision-Language Model (VLM) leveraging
SIGLIP and Gemma-3b architectures for automated acute tuberculosis (TB)
screening. By integrating chest X-ray images and clinical notes, the model aims
to enhance diagnostic accuracy and efficiency, particularly in resource-limited
settings.
Methods: The VLM combines visual data from chest X-rays with clinical context
to generate detailed, context-aware diagnostic reports. The architecture
employs SIGLIP for visual encoding and Gemma-3b for decoding, ensuring
effective representation of acute TB-specific pathologies and clinical
insights.
Results: Key acute TB pathologies, including consolidation, cavities, and
nodules, were detected with high precision (97%) and recall (96%).
The model demonstrated strong spatial localization capabilities and robustness
in distinguishing TB-positive cases, making it a reliable tool for acute TB
diagnosis.
Conclusion: The multimodal capability of the VLM reduces reliance on
radiologists, providing a scalable solution for acute TB screening. Future work
will focus on improving the detection of subtle pathologies and addressing
dataset biases to enhance its generalizability and application in diverse
global healthcare settings.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 14:08:35 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 10:20:22 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 06:41:57 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ganapthy",
"Ananya",
""
],
[
"Shastry",
"Praveen",
""
],
[
"Kumarasami",
"Naveen",
""
],
[
"D",
"Anandakumar",
""
],
[
"R",
"Keerthana",
""
],
[
"M",
"Mounigasri",
""
],
[
"M",
"Varshinipriya",
""
],
[
"Venkatesh",
"Kishore Prasath",
""
],
[
"Subramanian",
"Bargava",
""
],
[
"Sivasailam",
"Kalyan",
""
]
] | TITLE: Vision-Language Models for Acute Tuberculosis Diagnosis: A Multimodal
Approach Combining Imaging and Clinical Data
ABSTRACT: Background: This study introduces a Vision-Language Model (VLM) leveraging
SIGLIP and Gemma-3b architectures for automated acute tuberculosis (TB)
screening. By integrating chest X-ray images and clinical notes, the model aims
to enhance diagnostic accuracy and efficiency, particularly in resource-limited
settings.
Methods: The VLM combines visual data from chest X-rays with clinical context
to generate detailed, context-aware diagnostic reports. The architecture
employs SIGLIP for visual encoding and Gemma-3b for decoding, ensuring
effective representation of acute TB-specific pathologies and clinical
insights.
Results: Key acute TB pathologies, including consolidation, cavities, and
nodules, were detected with high precision (97%) and recall (96%).
The model demonstrated strong spatial localization capabilities and robustness
in distinguishing TB-positive cases, making it a reliable tool for acute TB
diagnosis.
Conclusion: The multimodal capability of the VLM reduces reliance on
radiologists, providing a scalable solution for acute TB screening. Future work
will focus on improving the detection of subtle pathologies and addressing
dataset biases to enhance its generalizability and application in diverse
global healthcare settings.
|
2503.15896 | Giancarlo Ruffo | Arthur Capozzi, Salvatore Vilella, Dario Moncalvo, Marco Fornasiero,
Valeria Ricci, Silvia Ronchiadin, and Giancarlo Ruffo | FlowSeries: Anomaly Detection in Financial Transaction Flows | 12 pages, 6 figures, ITADATA2024 | Complex Networks & Their Applications XIII. COMPLEX NETWORKS 2024
2024. Studies in Computational Intelligence, vol 1189 | 10.1007/978-3-031-82435-7_3 | ITADATA/2024/12 | cs.CY cs.CE | http://creativecommons.org/licenses/by/4.0/ | In recent years, the digitization and automation of anti-financial crime
(AFC) investigative processes have faced significant challenges, particularly
the need for interpretability of AI model results and the lack of labeled data
for training. Network analysis has emerged as a valuable approach in this
context.
In this paper, we present WeirdFlows, a top-down search pipeline for
detecting potentially fraudulent transactions and non-compliant agents. In a
transaction network, fraud attempts are often based on complex transaction
patterns that change over time to avoid detection. The WeirdFlows pipeline
requires neither an a priori set of patterns nor a training set. In addition,
by providing elements to explain the anomalies found, it facilitates and
supports the work of an AFC analyst.
We evaluate WeirdFlows on a dataset from Intesa Sanpaolo (ISP) bank,
comprising 80 million cross-country transactions over 15 months, benchmarking
our implementation of the algorithm. The results, corroborated by ISP AFC
experts, highlight its effectiveness in identifying suspicious transactions and
actors, particularly in the context of the economic sanctions imposed in the EU
after February 2022. This demonstrates \textit{WeirdFlows}' capability to
handle large datasets, detect complex transaction patterns, and provide the
necessary interpretability for formal AFC investigations.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 06:49:33 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 16:23:27 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Capozzi",
"Arthur",
""
],
[
"Vilella",
"Salvatore",
""
],
[
"Moncalvo",
"Dario",
""
],
[
"Fornasiero",
"Marco",
""
],
[
"Ricci",
"Valeria",
""
],
[
"Ronchiadin",
"Silvia",
""
],
[
"Ruffo",
"Giancarlo",
""
]
] | TITLE: FlowSeries: Anomaly Detection in Financial Transaction Flows
ABSTRACT: In recent years, the digitization and automation of anti-financial crime
(AFC) investigative processes have faced significant challenges, particularly
the need for interpretability of AI model results and the lack of labeled data
for training. Network analysis has emerged as a valuable approach in this
context.
In this paper, we present WeirdFlows, a top-down search pipeline for
detecting potentially fraudulent transactions and non-compliant agents. In a
transaction network, fraud attempts are often based on complex transaction
patterns that change over time to avoid detection. The WeirdFlows pipeline
requires neither an a priori set of patterns nor a training set. In addition,
by providing elements to explain the anomalies found, it facilitates and
supports the work of an AFC analyst.
We evaluate WeirdFlows on a dataset from Intesa Sanpaolo (ISP) bank,
comprising 80 million cross-country transactions over 15 months, benchmarking
our implementation of the algorithm. The results, corroborated by ISP AFC
experts, highlight its effectiveness in identifying suspicious transactions and
actors, particularly in the context of the economic sanctions imposed in the EU
after February 2022. This demonstrates \textit{WeirdFlows}' capability to
handle large datasets, detect complex transaction patterns, and provide the
necessary interpretability for formal AFC investigations.
|
2503.16188 | Ming Li | Ming Li, Jike Zhong, Shitian Zhao, Yuxiang Lai, Kaipeng Zhang | Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual
Reinforcement Fine-Tuning | Preprint, work in progress. Add results on CVBench | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper investigates rule-based reinforcement learning (RL) fine-tuning
for visual classification using multi-modal large language models (MLLMs) and
the role of the thinking process. We begin by exploring \textit{CLS-RL}, a
method that leverages verifiable signals as rewards to encourage MLLMs to
'think' before classifying. Our experiments across \textbf{eleven} datasets
demonstrate that CLS-RL achieves significant improvements over supervised
fine-tuning (SFT) in both base-to-new generalization and few-shot learning
scenarios. Notably, we observe a 'free-lunch' phenomenon where fine-tuning on
one dataset unexpectedly enhances performance on others, suggesting that RL
effectively teaches fundamental classification skills. However, we question
whether the explicit thinking, a critical aspect of rule-based RL, is always
beneficial or indispensable. Challenging the conventional assumption that
complex reasoning enhances performance, we introduce \textit{No-Thinking-RL}, a
novel approach that minimizes the model's thinking during fine-tuning by
utilizing an equality accuracy reward. Our experiments reveal that
No-Thinking-RL achieves superior in-domain performance and generalization
capabilities compared to CLS-RL, while requiring significantly less fine-tuning
time. This underscores that, contrary to prevailing assumptions, reducing the
thinking process can lead to more efficient and effective MLLM fine-tuning for
some visual tasks. Furthermore, No-Thinking-RL demonstrates enhanced
performance on other visual benchmarks, such as a 6.4\% improvement on CVBench.
We hope our findings provide insights into the impact of thinking in RL-based
fine-tuning.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 14:37:45 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 09:52:37 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Li",
"Ming",
""
],
[
"Zhong",
"Jike",
""
],
[
"Zhao",
"Shitian",
""
],
[
"Lai",
"Yuxiang",
""
],
[
"Zhang",
"Kaipeng",
""
]
] | TITLE: Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual
Reinforcement Fine-Tuning
ABSTRACT: This paper investigates rule-based reinforcement learning (RL) fine-tuning
for visual classification using multi-modal large language models (MLLMs) and
the role of the thinking process. We begin by exploring \textit{CLS-RL}, a
method that leverages verifiable signals as rewards to encourage MLLMs to
'think' before classifying. Our experiments across \textbf{eleven} datasets
demonstrate that CLS-RL achieves significant improvements over supervised
fine-tuning (SFT) in both base-to-new generalization and few-shot learning
scenarios. Notably, we observe a 'free-lunch' phenomenon where fine-tuning on
one dataset unexpectedly enhances performance on others, suggesting that RL
effectively teaches fundamental classification skills. However, we question
whether the explicit thinking, a critical aspect of rule-based RL, is always
beneficial or indispensable. Challenging the conventional assumption that
complex reasoning enhances performance, we introduce \textit{No-Thinking-RL}, a
novel approach that minimizes the model's thinking during fine-tuning by
utilizing an equality accuracy reward. Our experiments reveal that
No-Thinking-RL achieves superior in-domain performance and generalization
capabilities compared to CLS-RL, while requiring significantly less fine-tuning
time. This underscores that, contrary to prevailing assumptions, reducing the
thinking process can lead to more efficient and effective MLLM fine-tuning for
some visual tasks. Furthermore, No-Thinking-RL demonstrates enhanced
performance on other visual benchmarks, such as a 6.4\% improvement on CVBench.
We hope our findings provide insights into the impact of thinking in RL-based
fine-tuning.
|
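The "equality accuracy reward" named in the No-Thinking-RL abstract above is not spelled out there. The following Python sketch shows one plausible reading: a completion earns reward only if it equals the ground-truth class label exactly, leaving no credit for extra reasoning text. The answer-tag format and the 0.1/0.9 weighting in the second variant are assumptions, not details from the paper.

import re

def equality_accuracy_reward(completion: str, label: str) -> float:
    """Return 1.0 iff the (normalized) completion is exactly the label."""
    return 1.0 if completion.strip().lower() == label.strip().lower() else 0.0

def format_then_accuracy_reward(completion: str, label: str) -> float:
    """A common rule-based variant: a small bonus for emitting an answer tag,
    plus the accuracy term for the tagged answer matching the label."""
    m = re.search(r"<answer>(.*?)</answer>", completion, flags=re.S)
    if m is None:
        return 0.0
    return 0.1 + 0.9 * equality_accuracy_reward(m.group(1), label)

print(equality_accuracy_reward("golden retriever", "Golden Retriever"))  # 1.0
print(format_then_accuracy_reward("<answer>cat</answer>", "dog"))        # 0.1

Because the reward is maximized by emitting the bare label, any tokens spent on explicit "thinking" are never rewarded, which is the mechanism the abstract credits for reduced fine-tuning time.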
2503.17358 | Jerred Chen | Jerred Chen and Ronald Clark | Image as an IMU: Estimating Camera Motion from a Single Motion-Blurred
Image | Project page: https://jerredchen.github.io/image-as-imu/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | In many robotics and VR/AR applications, fast camera motions cause a high
level of motion blur, causing existing camera pose estimation methods to fail.
In this work, we propose a novel framework that leverages motion blur as a rich
cue for motion estimation rather than treating it as an unwanted artifact. Our
approach works by predicting a dense motion flow field and a monocular depth
map directly from a single motion-blurred image. We then recover the
instantaneous camera velocity by solving a linear least squares problem under
the small motion assumption. In essence, our method produces an IMU-like
measurement that robustly captures fast and aggressive camera movements. To
train our model, we construct a large-scale dataset with realistic synthetic
motion blur derived from ScanNet++v2 and further refine our model by training
end-to-end on real data using our fully differentiable pipeline. Extensive
evaluations on real-world benchmarks demonstrate that our method achieves
state-of-the-art angular and translational velocity estimates, outperforming
current methods like MASt3R and COLMAP.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 17:58:56 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Mar 2025 16:52:51 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 09:58:06 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Chen",
"Jerred",
""
],
[
"Clark",
"Ronald",
""
]
] | TITLE: Image as an IMU: Estimating Camera Motion from a Single Motion-Blurred
Image
ABSTRACT: In many robotics and VR/AR applications, fast camera motions cause a high
level of motion blur, causing existing camera pose estimation methods to fail.
In this work, we propose a novel framework that leverages motion blur as a rich
cue for motion estimation rather than treating it as an unwanted artifact. Our
approach works by predicting a dense motion flow field and a monocular depth
map directly from a single motion-blurred image. We then recover the
instantaneous camera velocity by solving a linear least squares problem under
the small motion assumption. In essence, our method produces an IMU-like
measurement that robustly captures fast and aggressive camera movements. To
train our model, we construct a large-scale dataset with realistic synthetic
motion blur derived from ScanNet++v2 and further refine our model by training
end-to-end on real data using our fully differentiable pipeline. Extensive
evaluations on real-world benchmarks demonstrate that our method achieves
state-of-the-art angular and translational velocity estimates, outperforming
current methods like MASt3R and COLMAP.
|
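The abstract above states that camera velocity is recovered from the predicted flow field and depth map by linear least squares under a small-motion assumption. The sketch below stacks the classical instantaneous motion-field equations (Longuet-Higgins and Prazdny) and solves them with NumPy; the paper's exact parameterization, weighting, and differentiable solver may differ, and the intrinsics and toy inputs are assumptions.

import numpy as np

def solve_velocity(flow, depth, K):
    """flow: (H, W, 2) pixel flow; depth: (H, W); K: 3x3 camera intrinsics."""
    H, W = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    x = ((us - cx) / fx).reshape(-1)        # normalized image coordinates
    y = ((vs - cy) / fy).reshape(-1)
    Z = depth.reshape(-1)
    u = flow[..., 0].reshape(-1) / fx       # pixel flow -> normalized flow
    v = flow[..., 1].reshape(-1) / fy
    n = x.size
    A = np.zeros((2 * n, 6))
    # Translational part, scaled by inverse depth.
    A[0::2, 0] = -1.0 / Z;   A[0::2, 2] = x / Z
    A[1::2, 1] = -1.0 / Z;   A[1::2, 2] = y / Z
    # Rotational part (depth independent).
    A[0::2, 3] = x * y;      A[0::2, 4] = -(1 + x**2);  A[0::2, 5] = y
    A[1::2, 3] = 1 + y**2;   A[1::2, 4] = -x * y;       A[1::2, 5] = -x
    b = np.empty(2 * n); b[0::2] = u; b[1::2] = v
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]                 # linear velocity, angular velocity

# Toy usage with synthetic inputs (purely illustrative).
K = np.array([[500., 0, 320.], [0, 500., 240.], [0, 0, 1.]])
flow = np.random.randn(240, 320, 2) * 0.5
depth = np.full((240, 320), 2.0)
v_lin, v_ang = solve_velocity(flow, depth, K)
print(v_lin, v_ang)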
2503.18104 | Ze Zhang | Ze Zhang, Enyuan Zhao, Yi Jiang, Jie Nie and Xinyue Liang | Challenging Dataset and Multi-modal Gated Mixture of Experts Model for
Remote Sensing Copy-Move Forgery Understanding | 6 pages, 6 figures | null | null | Comments: Accepted by icme2025 | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Remote Sensing Copy-Move Question Answering (RSCMQA) task focuses on
interpreting complex tampering scenarios and inferring the relationships
between objects. Currently, publicly available datasets often use randomly
generated tampered images, which lack spatial logic and do not meet the
practical needs of defense security and land resource monitoring. To address
this, we propose a high-quality manually annotated RSCMQA dataset, Real-RSCM,
which provides more realistic evaluation metrics for the identification and
understanding of remote sensing image tampering. The tampered images in the
Real-RSCM dataset are subtle, authentic, and challenging, posing significant
difficulties for model discrimination capabilities. To overcome these
challenges, we introduce a multimodal gated mixture of experts model (CM-MMoE),
which guides multi-expert models to discern tampered information in images
through multi-level visual semantics and textual joint modeling. Extensive
experiments demonstrate that CM-MMoE provides a stronger benchmark for the
RSCMQA task compared to general VQA and CMQA models. Our dataset and code are
available at https://github.com/shenyedepisa/CM-MMoE.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 15:22:37 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 14:15:03 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Zhang",
"Ze",
""
],
[
"Zhao",
"Enyuan",
""
],
[
"Jiang",
"Yi",
""
],
[
"Nie",
"Jie",
""
],
[
"Liang",
"Xinyue",
""
]
] | TITLE: Challenging Dataset and Multi-modal Gated Mixture of Experts Model for
Remote Sensing Copy-Move Forgery Understanding
ABSTRACT: The Remote Sensing Copy-Move Question Answering (RSCMQA) task focuses on
interpreting complex tampering scenarios and inferring the relationships
between objects. Currently, publicly available datasets often use randomly
generated tampered images, which lack spatial logic and do not meet the
practical needs of defense security and land resource monitoring. To address
this, we propose a high-quality manually annotated RSCMQA dataset, Real-RSCM,
which provides more realistic evaluation metrics for the identification and
understanding of remote sensing image tampering. The tampered images in the
Real-RSCM dataset are subtle, authentic, and challenging, posing significant
difficulties for model discrimination capabilities. To overcome these
challenges, we introduce a multimodal gated mixture of experts model (CM-MMoE),
which guides multi-expert models to discern tampered information in images
through multi-level visual semantics and textual joint modeling. Extensive
experiments demonstrate that CM-MMoE provides a stronger benchmark for the
RSCMQA task compared to general VQA and CMQA models. Our dataset and code are
available at https://github.com/shenyedepisa/CM-MMoE.
|
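A hedged PyTorch sketch of the "gated mixture of experts guided by text" idea from the CM-MMoE abstract above: a pooled question embedding produces softmax gate weights over several visual experts. The expert design, the number of experts, and the multi-level visual semantics of the actual model are not described in the abstract, so everything below is illustrative.

import torch
import torch.nn as nn

class TextGatedMoE(nn.Module):
    """Route visual tokens through several experts, weighted by a text-derived gate."""
    def __init__(self, vis_dim: int, txt_dim: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(vis_dim, vis_dim), nn.GELU())
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(txt_dim, num_experts)   # the question decides the mixture

    def forward(self, vis_feats, txt_feat):
        # vis_feats: (B, N, Dv) visual tokens; txt_feat: (B, Dt) pooled question embedding
        weights = torch.softmax(self.gate(txt_feat), dim=-1)                   # (B, E)
        expert_out = torch.stack([e(vis_feats) for e in self.experts], dim=1)  # (B, E, N, Dv)
        return (weights[:, :, None, None] * expert_out).sum(dim=1)             # (B, N, Dv)

# Toy usage.
moe = TextGatedMoE(vis_dim=256, txt_dim=512)
fused = moe(torch.randn(2, 196, 256), torch.randn(2, 512))
print(fused.shape)  # torch.Size([2, 196, 256])

The design choice worth noting is that the gate depends only on the question, so the same image pair can be routed differently depending on which tampering relationship the user asks about.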
2503.18497 | Stefan Rass | Stefan Rass, Martin Dallinger | Statistically Testing Training Data for Unwanted Error Patterns using
Rule-Oriented Regression | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial intelligence models trained from data can only be as good as the
underlying data is. Biases in training data propagating through to the output
of a machine learning model are a well-documented and well-understood
phenomenon, but the machinery to prevent these undesired effects is much less
developed. Efforts to ensure data is clean during collection, such as using
bias-aware sampling, are most effective when the entity controlling data
collection also trains the AI. In cases where the data is already available,
how do we find out if the data was already manipulated, i.e., ``poisoned'', so
that an undesired behavior would be trained into a machine learning model? This
is a challenge fundamentally different to (just) improving approximation
accuracy or efficiency, and we provide a method to test training data for
flaws, to establish a trustworthy ground-truth for a subsequent training of
machine learning models (of any kind). Unlike the well-studied problem of
approximating data using fuzzy rules that are generated from the data, our
method hinges on rules that are defined before the data to be tested is seen.
Therefore, the proposed method can also discover hidden error
patterns, which may also have substantial influence. Our approach extends the
abilities of conventional statistical testing by letting the ``test-condition''
be any Boolean condition to describe a pattern in the data, whose presence we
wish to determine. The method puts fuzzy inference into a regression model, to
get the best of the two: explainability from fuzzy logic with statistical
properties and diagnostics from the regression, and finally also being
applicable to ``small data'', hence not requiring large datasets as deep
learning methods do. We provide an open source implementation for demonstration
and experiments.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 09:52:36 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 13:34:59 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Rass",
"Stefan",
""
],
[
"Dallinger",
"Martin",
""
]
] | TITLE: Statistically Testing Training Data for Unwanted Error Patterns using
Rule-Oriented Regression
ABSTRACT: Artificial intelligence models trained from data can only be as good as the
underlying data is. Biases in training data propagating through to the output
of a machine learning model are a well-documented and well-understood
phenomenon, but the machinery to prevent these undesired effects is much less
developed. Efforts to ensure data is clean during collection, such as using
bias-aware sampling, are most effective when the entity controlling data
collection also trains the AI. In cases where the data is already available,
how do we find out if the data was already manipulated, i.e., ``poisoned'', so
that an undesired behavior would be trained into a machine learning model? This
is a challenge fundamentally different to (just) improving approximation
accuracy or efficiency, and we provide a method to test training data for
flaws, to establish a trustworthy ground-truth for a subsequent training of
machine learning models (of any kind). Unlike the well-studied problem of
approximating data using fuzzy rules that are generated from the data, our
method hinges on rules that are defined before the data to be tested is seen.
Therefore, the proposed method can also discover hidden error
patterns, which may also have substantial influence. Our approach extends the
abilities of conventional statistical testing by letting the ``test-condition''
be any Boolean condition to describe a pattern in the data, whose presence we
wish to determine. The method puts fuzzy inference into a regression model, to
get the best of the two: explainability from fuzzy logic with statistical
properties and diagnostics from the regression, and finally also being
applicable to ``small data'', hence not requiring large datasets as deep
learning methods do. We provide an open source implementation for demonstration
and experiments.
|
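A small Python sketch of the core statistical idea in the abstract above: encode a pre-specified Boolean test condition as an indicator regressor and check whether its coefficient is significant, which is how a "rule" can be tested without any training set. The paper wraps fuzzy inference around this; the crisp rule, the toy loan table, and the plain OLS choice below are assumptions made only for illustration.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def test_rule(df: pd.DataFrame, rule, target: str):
    """rule: callable row -> bool, defined *before* looking at the data."""
    indicator = df.apply(rule, axis=1).astype(float).rename("rule")
    X = sm.add_constant(indicator)
    model = sm.OLS(df[target], X).fit()
    return model.params["rule"], model.pvalues["rule"]

# Toy usage with a hypothetical loan table.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.integers(18, 80, 500),
    "income": rng.normal(40_000, 10_000, 500),
    "default": rng.binomial(1, 0.1, 500).astype(float),
})
# Prior rule describing a pattern we want to test for:
rule = lambda row: row["age"] < 25 and row["income"] < 30_000
coef, pval = test_rule(df, rule, target="default")
print(f"effect={coef:.3f}, p-value={pval:.3f}")

A significant coefficient would indicate that rows matching the pre-registered condition behave systematically differently, which is the kind of hidden pattern the abstract proposes to surface before training.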
2503.19537 | Noam Kahlon | Noam Kahlon, Guy Rom, Anatoly Efros, Filippo Galgani, Omri Berkovitch,
Sapir Caduri, William E. Bishop, Oriana Riva, Ido Dagan | Agent-Initiated Interaction in Phone UI Automation | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | Phone automation agents aim to autonomously perform a given natural-language
user request, such as scheduling appointments or booking a hotel. While much
research effort has been devoted to screen understanding and action planning,
complex tasks often necessitate user interaction for successful completion.
Aligning the agent with the user's expectations is crucial for building trust
and enabling personalized experiences. This requires the agent to proactively
engage the user when necessary, avoiding actions that violate their preferences
while refraining from unnecessary questions where a default action is expected.
We argue that such subtle agent-initiated interaction with the user deserves
focused research attention.
To promote such research, this paper introduces a task formulation for
detecting the need for user interaction and generating appropriate messages. We
thoroughly define the task, including aspects like interaction timing and the
scope of the agent's autonomy. Using this definition, we derived annotation
guidelines and created AndroidInteraction, a diverse dataset for the task,
leveraging an existing UI automation dataset. We tested several text-based and
multimodal baseline models for the task, finding that it is very challenging
for current LLMs. We suggest that our task formulation, dataset, baseline
models and analysis will be valuable for future UI automation research,
specifically in addressing this crucial yet often overlooked aspect of
agent-initiated interaction. This work provides a needed foundation to allow
personalized agents to properly engage the user when needed, within the context
of phone UI automation.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 10:46:08 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Kahlon",
"Noam",
""
],
[
"Rom",
"Guy",
""
],
[
"Efros",
"Anatoly",
""
],
[
"Galgani",
"Filippo",
""
],
[
"Berkovitch",
"Omri",
""
],
[
"Caduri",
"Sapir",
""
],
[
"Bishop",
"William E.",
""
],
[
"Riva",
"Oriana",
""
],
[
"Dagan",
"Ido",
""
]
] | TITLE: Agent-Initiated Interaction in Phone UI Automation
ABSTRACT: Phone automation agents aim to autonomously perform a given natural-language
user request, such as scheduling appointments or booking a hotel. While much
research effort has been devoted to screen understanding and action planning,
complex tasks often necessitate user interaction for successful completion.
Aligning the agent with the user's expectations is crucial for building trust
and enabling personalized experiences. This requires the agent to proactively
engage the user when necessary, avoiding actions that violate their preferences
while refraining from unnecessary questions where a default action is expected.
We argue that such subtle agent-initiated interaction with the user deserves
focused research attention.
To promote such research, this paper introduces a task formulation for
detecting the need for user interaction and generating appropriate messages. We
thoroughly define the task, including aspects like interaction timing and the
scope of the agent's autonomy. Using this definition, we derived annotation
guidelines and created AndroidInteraction, a diverse dataset for the task,
leveraging an existing UI automation dataset. We tested several text-based and
multimodal baseline models for the task, finding that it is very challenging
for current LLMs. We suggest that our task formulation, dataset, baseline
models and analysis will be valuable for future UI automation research,
specifically in addressing this crucial yet often overlooked aspect of
agent-initiated interaction. This work provides a needed foundation to allow
personalized agents to properly engage the user when needed, within the context
of phone UI automation.
|
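A sketch of how the task formulation in the abstract above could be expressed as a data interface plus a trivial baseline: given the request, the current screen, and the action history, predict whether to act, ask the user, or fall back to a default, and generate a message when asking. The field names, label set, and payment heuristic are invented for illustration and are not the AndroidInteraction schema.

from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    ACT = "act"                  # proceed autonomously
    ASK_USER = "ask_user"        # agent-initiated interaction is needed
    USE_DEFAULT = "use_default"  # a default is expected; asking would be noise

@dataclass
class InteractionExample:
    user_request: str
    screen_description: str
    action_history: list = field(default_factory=list)
    gold_decision: Decision = Decision.ACT
    gold_message: str = ""       # only non-empty when gold_decision == ASK_USER

def naive_baseline(example: InteractionExample) -> tuple[Decision, str]:
    """A trivial heuristic baseline: ask whenever the screen mentions a payment."""
    if "payment" in example.screen_description.lower():
        return Decision.ASK_USER, "This step requires a payment. Should I proceed?"
    return Decision.ACT, ""

ex = InteractionExample(
    user_request="Book a table for two tonight",
    screen_description="Confirm reservation - payment method required",
)
print(naive_baseline(ex))

Evaluating such a model involves both the decision label (did it ask at the right time, or ask unnecessarily?) and the quality of the generated message, mirroring the two parts of the task definition in the abstract.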
2503.19721 | Chengjie Ge | Chengjie Ge, Xueyang Fu, Peng He, Kunyu Wang, Chengzhi Cao, Zheng-Jun
Zha | EventMamba: Enhancing Spatio-Temporal Locality with State Space Models
for Event-Based Video Reconstruction | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Leveraging its robust linear global modeling capability, Mamba has notably
excelled in computer vision. Despite its success, existing Mamba-based vision
models have overlooked the nuances of event-driven tasks, especially in video
reconstruction. Event-based video reconstruction (EBVR) demands spatial
translation invariance and close attention to local event relationships in the
spatio-temporal domain. Unfortunately, conventional Mamba algorithms apply
static window partitions and standard reshape scanning methods, leading to
significant losses in local connectivity. To overcome these limitations, we
introduce EventMamba--a specialized model designed for EBVR tasks. EventMamba
innovates by incorporating random window offset (RWO) in the spatial domain,
moving away from the restrictive fixed partitioning. Additionally, it features
a new consistent traversal serialization approach in the spatio-temporal
domain, which maintains the proximity of adjacent events both spatially and
temporally. These enhancements enable EventMamba to retain Mamba's robust
modeling capabilities while significantly preserving the spatio-temporal
locality of event data. Comprehensive testing on multiple datasets shows that
EventMamba markedly enhances video reconstruction, drastically improving
computation speed while delivering superior visual quality compared to
Transformer-based methods.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 14:46:45 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 13:41:35 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 02:49:17 GMT"
}
] | 2025-04-02T00:00:00 | [
[
"Ge",
"Chengjie",
""
],
[
"Fu",
"Xueyang",
""
],
[
"He",
"Peng",
""
],
[
"Wang",
"Kunyu",
""
],
[
"Cao",
"Chengzhi",
""
],
[
"Zha",
"Zheng-Jun",
""
]
] | TITLE: EventMamba: Enhancing Spatio-Temporal Locality with State Space Models
for Event-Based Video Reconstruction
ABSTRACT: Leveraging its robust linear global modeling capability, Mamba has notably
excelled in computer vision. Despite its success, existing Mamba-based vision
models have overlooked the nuances of event-driven tasks, especially in video
reconstruction. Event-based video reconstruction (EBVR) demands spatial
translation invariance and close attention to local event relationships in the
spatio-temporal domain. Unfortunately, conventional Mamba algorithms apply
static window partitions and standard reshape scanning methods, leading to
significant losses in local connectivity. To overcome these limitations, we
introduce EventMamba--a specialized model designed for EBVR tasks. EventMamba
innovates by incorporating random window offset (RWO) in the spatial domain,
moving away from the restrictive fixed partitioning. Additionally, it features
a new consistent traversal serialization approach in the spatio-temporal
domain, which maintains the proximity of adjacent events both spatially and
temporally. These enhancements enable EventMamba to retain Mamba's robust
modeling capabilities while significantly preserving the spatio-temporal
locality of event data. Comprehensive testing on multiple datasets shows that
EventMamba markedly enhances video reconstruction, drastically improving
computation speed while delivering superior visual quality compared to
Transformer-based methods.
|
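A hedged PyTorch sketch of the "random window offset" idea from the EventMamba abstract above: roll the feature map by a random shift before a fixed non-overlapping window partition, so that window borders cut different local neighbourhoods at each iteration. EventMamba's actual partitioning and its consistent traversal serialization are not reproduced here; the window size and tensor layout are assumptions.

import torch

def random_window_partition(x: torch.Tensor, window: int, training: bool = True):
    """x: (B, C, H, W) with H and W divisible by `window`."""
    B, C, H, W = x.shape
    if training:
        dh = int(torch.randint(0, window, (1,)))
        dw = int(torch.randint(0, window, (1,)))
        x = torch.roll(x, shifts=(dh, dw), dims=(2, 3))   # random grid offset
    # Standard non-overlapping window partition.
    x = x.view(B, C, H // window, window, W // window, window)
    windows = x.permute(0, 2, 4, 1, 3, 5).reshape(-1, C, window, window)
    return windows  # (B * num_windows, C, window, window)

feat = torch.randn(2, 32, 64, 64)
print(random_window_partition(feat, window=8).shape)  # torch.Size([128, 32, 8, 8])

At inference time the offset would be disabled (training=False), so the randomization acts as a training-time regularizer that preserves local connectivity on average rather than a change to the deployed architecture.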