id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
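A minimal sketch of loading and inspecting these records, assuming they are published as a Hugging Face dataset (the repository id `user/arxiv-metadata` below is a placeholder, not the actual name):

```python
from datasets import load_dataset  # pip install datasets

# Placeholder repository id; substitute the real dataset name.
ds = load_dataset("user/arxiv-metadata", split="train")

# Each record mirrors the schema above: arXiv id, submitter, authors, title,
# abstract, versions, parsed author triples, and a "prompt" string that
# restates the title and abstract.
row = ds[0]
print(row["id"], row["title"])
print(row["update_date"])        # timestamp of the last metadata update
print(row["authors_parsed"][0])  # e.g. ["Popovic", "Dorde", ""]

# Filter to a single arXiv category, e.g. computer vision papers.
cv = ds.filter(lambda r: "cs.CV" in r["categories"].split())
print(len(cv))
```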
2503.21305 | Dorde Popovic | Dorde Popovic, Amin Sadeghi, Ting Yu, Sanjay Chawla, Issa Khalil | DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep
Models with Limited Data | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Backdoor attacks are among the most effective, practical, and stealthy
attacks in deep learning. In this paper, we consider a practical scenario where
a developer obtains a deep model from a third party and uses it as part of a
safety-critical system. The developer wants to inspect the model for potential
backdoors prior to system deployment. We find that most existing detection
techniques make assumptions that are not applicable to this scenario. In this
paper, we present a novel framework for detecting backdoors under realistic
restrictions. We generate candidate triggers by deductively searching over the
space of possible triggers. We construct and optimize a smoothed version of
Attack Success Rate as our search objective. Starting from a broad class of
template attacks and just using the forward pass of a deep model, we reverse
engineer the backdoor attack. We conduct extensive evaluation on a wide range
of attacks, models, and datasets, with our technique performing almost
perfectly across these settings.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 09:31:10 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Popovic",
"Dorde",
""
],
[
"Sadeghi",
"Amin",
""
],
[
"Yu",
"Ting",
""
],
[
"Chawla",
"Sanjay",
""
],
[
"Khalil",
"Issa",
""
]
] | TITLE: DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep
Models with Limited Data
ABSTRACT: Backdoor attacks are among the most effective, practical, and stealthy
attacks in deep learning. In this paper, we consider a practical scenario where
a developer obtains a deep model from a third party and uses it as part of a
safety-critical system. The developer wants to inspect the model for potential
backdoors prior to system deployment. We find that most existing detection
techniques make assumptions that are not applicable to this scenario. In this
paper, we present a novel framework for detecting backdoors under realistic
restrictions. We generate candidate triggers by deductively searching over the
space of possible triggers. We construct and optimize a smoothed version of
Attack Success Rate as our search objective. Starting from a broad class of
template attacks and just using the forward pass of a deep model, we reverse
engineer the backdoor attack. We conduct extensive evaluation on a wide range
of attacks, models, and datasets, with our technique performing almost
perfectly across these settings.
|
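The `prompt` field in each row simply restates the title and abstract; a minimal sketch of reproducing it (the exact template is an assumption inferred from the rows shown here):

```python
def build_prompt(title: str, abstract: str) -> str:
    # Assumed template, inferred from the "TITLE: ... ABSTRACT: ..." pattern in the rows.
    return f"TITLE: {title.strip()}\nABSTRACT: {abstract.strip()}\n"

print(build_prompt(
    "DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data",
    "Backdoor attacks are among the most effective, practical, and stealthy attacks in deep learning. ...",
))
```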
2503.21309 | Zixu Li | Zixu Li, Zhiheng Fu, Yupeng Hu, Zhiwei Chen, Haokun Wen, Liqiang Nie | FineCIR: Explicit Parsing of Fine-Grained Modification Semantics for
Composed Image Retrieval | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Composed Image Retrieval (CIR) facilitates image retrieval through a
multimodal query consisting of a reference image and modification text. The
reference image defines the retrieval context, while the modification text
specifies desired alterations. However, existing CIR datasets predominantly
employ coarse-grained modification text (CoarseMT), which inadequately captures
fine-grained retrieval intents. This limitation introduces two key challenges:
(1) ignoring detailed differences leads to imprecise positive samples, and (2)
greater ambiguity arises when retrieving visually similar images. These issues
degrade retrieval accuracy, necessitating manual result filtering or repeated
queries. To address these limitations, we develop a robust fine-grained CIR
data annotation pipeline that minimizes imprecise positive samples and enhances
CIR systems' ability to discern modification intents accurately. Using this
pipeline, we refine the FashionIQ and CIRR datasets to create two fine-grained
CIR datasets: Fine-FashionIQ and Fine-CIRR. Furthermore, we introduce FineCIR,
the first CIR framework explicitly designed to parse the modification text.
FineCIR effectively captures fine-grained modification semantics and aligns
them with ambiguous visual entities, enhancing retrieval precision. Extensive
experiments demonstrate that FineCIR consistently outperforms state-of-the-art
CIR baselines on both fine-grained and traditional CIR benchmark datasets. Our
FineCIR code and fine-grained CIR datasets are available at
https://github.com/SDU-L/FineCIR.git.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 09:34:21 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Li",
"Zixu",
""
],
[
"Fu",
"Zhiheng",
""
],
[
"Hu",
"Yupeng",
""
],
[
"Chen",
"Zhiwei",
""
],
[
"Wen",
"Haokun",
""
],
[
"Nie",
"Liqiang",
""
]
] | TITLE: FineCIR: Explicit Parsing of Fine-Grained Modification Semantics for
Composed Image Retrieval
ABSTRACT: Composed Image Retrieval (CIR) facilitates image retrieval through a
multimodal query consisting of a reference image and modification text. The
reference image defines the retrieval context, while the modification text
specifies desired alterations. However, existing CIR datasets predominantly
employ coarse-grained modification text (CoarseMT), which inadequately captures
fine-grained retrieval intents. This limitation introduces two key challenges:
(1) ignoring detailed differences leads to imprecise positive samples, and (2)
greater ambiguity arises when retrieving visually similar images. These issues
degrade retrieval accuracy, necessitating manual result filtering or repeated
queries. To address these limitations, we develop a robust fine-grained CIR
data annotation pipeline that minimizes imprecise positive samples and enhances
CIR systems' ability to discern modification intents accurately. Using this
pipeline, we refine the FashionIQ and CIRR datasets to create two fine-grained
CIR datasets: Fine-FashionIQ and Fine-CIRR. Furthermore, we introduce FineCIR,
the first CIR framework explicitly designed to parse the modification text.
FineCIR effectively captures fine-grained modification semantics and aligns
them with ambiguous visual entities, enhancing retrieval precision. Extensive
experiments demonstrate that FineCIR consistently outperforms state-of-the-art
CIR baselines on both fine-grained and traditional CIR benchmark datasets. Our
FineCIR code and fine-grained CIR datasets are available at
https://github.com/SDU-L/FineCIR.git.
|
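Each `versions` entry is a list of objects carrying a `version` tag and a `created` timestamp in the GMT form shown above; a small sketch of recovering the first submission time (field names come from the rows, the parsing approach is an assumption):

```python
from datetime import datetime
from email.utils import parsedate_to_datetime

def first_submission(versions: list[dict]) -> datetime:
    # "created" strings such as "Thu, 27 Mar 2025 09:34:21 GMT" are RFC 2822 dates.
    return min(parsedate_to_datetime(v["created"]) for v in versions)

print(first_submission([{"version": "v1", "created": "Thu, 27 Mar 2025 09:34:21 GMT"}]))
```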
2503.21313 | Zerui Chen | Zerui Chen, Rolandos Alexandros Potamias, Shizhe Chen, Cordelia Schmid | HORT: Monocular Hand-held Objects Reconstruction with Transformers | Project Page: https://zerchen.github.io/projects/hort.html | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstructing hand-held objects in 3D from monocular images remains a
significant challenge in computer vision. Most existing approaches rely on
implicit 3D representations, which produce overly smooth reconstructions and
are time-consuming when generating explicit 3D shapes. While more recent methods
directly reconstruct point clouds with diffusion models, the multi-step
denoising makes high-resolution reconstruction inefficient. To address these
limitations, we propose a transformer-based model to efficiently reconstruct
dense 3D point clouds of hand-held objects. Our method follows a coarse-to-fine
strategy, first generating a sparse point cloud from the image and
progressively refining it into a dense representation using pixel-aligned image
features. To enhance reconstruction accuracy, we integrate image features with
3D hand geometry to jointly predict the object point cloud and its pose
relative to the hand. Our model is trained end-to-end for optimal performance.
Experimental results on both synthetic and real datasets demonstrate that our
method achieves state-of-the-art accuracy with much faster inference speed,
while generalizing well to in-the-wild images.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 09:45:09 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Chen",
"Zerui",
""
],
[
"Potamias",
"Rolandos Alexandros",
""
],
[
"Chen",
"Shizhe",
""
],
[
"Schmid",
"Cordelia",
""
]
] | TITLE: HORT: Monocular Hand-held Objects Reconstruction with Transformers
ABSTRACT: Reconstructing hand-held objects in 3D from monocular images remains a
significant challenge in computer vision. Most existing approaches rely on
implicit 3D representations, which produce overly smooth reconstructions and
are time-consuming when generating explicit 3D shapes. While more recent methods
directly reconstruct point clouds with diffusion models, the multi-step
denoising makes high-resolution reconstruction inefficient. To address these
limitations, we propose a transformer-based model to efficiently reconstruct
dense 3D point clouds of hand-held objects. Our method follows a coarse-to-fine
strategy, first generating a sparse point cloud from the image and
progressively refining it into a dense representation using pixel-aligned image
features. To enhance reconstruction accuracy, we integrate image features with
3D hand geometry to jointly predict the object point cloud and its pose
relative to the hand. Our model is trained end-to-end for optimal performance.
Experimental results on both synthetic and real datasets demonstrate that our
method achieves state-of-the-art accuracy with much faster inference speed,
while generalizing well to in-the-wild images.
|
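The `authors_parsed` column stores each author as a [last name, first name, suffix] triple; a small helper for turning those triples back into a display string (the joining convention is an assumption, not something the dataset specifies):

```python
def format_authors(authors_parsed: list[list[str]]) -> str:
    # Each entry is [last, first, suffix], e.g. ["Potamias", "Rolandos Alexandros", ""].
    names = []
    for last, first, suffix in authors_parsed:
        parts = [p for p in (first, last, suffix) if p]
        names.append(" ".join(parts))
    return ", ".join(names)

print(format_authors([["Chen", "Zerui", ""], ["Schmid", "Cordelia", ""]]))
# -> "Zerui Chen, Cordelia Schmid"
```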
2503.21315 | Cheng Wang | Cheng Wang, Yiwei Wang, Yujun Cai, Bryan Hooi | Tricking Retrievers with Influential Tokens: An Efficient Black-Box
Corpus Poisoning Attack | Accepted to NAACL 2025 Main Track | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-augmented generation (RAG) systems enhance large language models by
incorporating external knowledge, addressing issues like outdated internal
knowledge and hallucination. However, their reliance on external knowledge
bases makes them vulnerable to corpus poisoning attacks, where adversarial
passages can be injected to manipulate retrieval results. Existing methods for
crafting such passages, such as random token replacement or training inversion
models, are often slow and computationally expensive, requiring either access
to the retriever's gradients or large computational resources. To address these
limitations, we propose Dynamic Importance-Guided Genetic Algorithm (DIGA), an
efficient black-box method that leverages two key properties of retrievers:
insensitivity to token order and bias towards influential tokens. By focusing
on these characteristics, DIGA dynamically adjusts its genetic operations to
generate effective adversarial passages with significantly reduced time and
memory usage. Our experimental evaluation shows that DIGA achieves superior
efficiency and scalability compared to existing methods, while maintaining
comparable or better attack success rates across multiple datasets.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 09:54:37 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wang",
"Cheng",
""
],
[
"Wang",
"Yiwei",
""
],
[
"Cai",
"Yujun",
""
],
[
"Hooi",
"Bryan",
""
]
] | TITLE: Tricking Retrievers with Influential Tokens: An Efficient Black-Box
Corpus Poisoning Attack
ABSTRACT: Retrieval-augmented generation (RAG) systems enhance large language models by
incorporating external knowledge, addressing issues like outdated internal
knowledge and hallucination. However, their reliance on external knowledge
bases makes them vulnerable to corpus poisoning attacks, where adversarial
passages can be injected to manipulate retrieval results. Existing methods for
crafting such passages, such as random token replacement or training inversion
models, are often slow and computationally expensive, requiring either access
to the retriever's gradients or large computational resources. To address these
limitations, we propose Dynamic Importance-Guided Genetic Algorithm (DIGA), an
efficient black-box method that leverages two key properties of retrievers:
insensitivity to token order and bias towards influential tokens. By focusing
on these characteristics, DIGA dynamically adjusts its genetic operations to
generate effective adversarial passages with significantly reduced time and
memory usage. Our experimental evaluation shows that DIGA achieves superior
efficiency and scalability compared to existing methods, while maintaining
comparable or better attack success rates across multiple datasets.
|
2503.21323 | Ling Feng | Ling Feng, Tianyu Xie, Wei Ma, Ruijie Fu, Yingxiao Zhang, Jun Li, Bei
Zhou | DuckSegmentation: A segmentation model based on the AnYue Hemp Duck
Dataset | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The modernization of smart farming is a way to improve agricultural
production efficiency and the agricultural production environment. Although
many large models have achieved high accuracy in object recognition and
segmentation tasks, they cannot yet be put to practical use in the farming
industry because of their poor interpretability and heavy computational
demands. In this paper, we built the AnYue Shelduck Dataset, which contains a
total of 1951 shelduck samples, and performed target detection and
segmentation annotation with the help of professional annotators. Based on the
AnYue Shelduck Dataset, this paper describes DuckProcessing, an efficient and
powerful module for duck identification on real shelduck farms. First, using
the YOLOv8 module designed to detect and distinguish the hemp ducks, Precision
reached 98.10%, Recall reached 96.53% and the F1 score reached 0.95 on the
test set. The DuckSegmentation segmentation model then reached 96.43% mIoU.
Finally, DuckSegmentation was used as the teacher model and, through knowledge
distillation, DeepLabv3-R50 was used as the student model; the final student
model achieved 94.49% mIoU on the test set. The method provides a new way of
thinking for practical hemp duck smart farming.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 10:02:30 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Feng",
"Ling",
""
],
[
"Xie",
"Tianyu",
""
],
[
"Ma",
"Wei",
""
],
[
"Fu",
"Ruijie",
""
],
[
"Zhang",
"Yingxiao",
""
],
[
"Li",
"Jun",
""
],
[
"Zhou",
"Bei",
""
]
] | TITLE: DuckSegmentation: A segmentation model based on the AnYue Hemp Duck
Dataset
ABSTRACT: The modernization of smart farming is a way to improve agricultural
production efficiency and the agricultural production environment. Although
many large models have achieved high accuracy in object recognition and
segmentation tasks, they cannot yet be put to practical use in the farming
industry because of their poor interpretability and heavy computational
demands. In this paper, we built the AnYue Shelduck Dataset, which contains a
total of 1951 shelduck samples, and performed target detection and
segmentation annotation with the help of professional annotators. Based on the
AnYue Shelduck Dataset, this paper describes DuckProcessing, an efficient and
powerful module for duck identification on real shelduck farms. First, using
the YOLOv8 module designed to detect and distinguish the hemp ducks, Precision
reached 98.10%, Recall reached 96.53% and the F1 score reached 0.95 on the
test set. The DuckSegmentation segmentation model then reached 96.43% mIoU.
Finally, DuckSegmentation was used as the teacher model and, through knowledge
distillation, DeepLabv3-R50 was used as the student model; the final student
model achieved 94.49% mIoU on the test set. The method provides a new way of
thinking for practical hemp duck smart farming.
|
2503.21328 | Reinhard Maurer | Zsuzsanna Koczor-Benda, Joe Gilkes, Francesco Bartucca, Abdulla
Al-Fekaiki, Reinhard J. Maurer | Structural bias in three-dimensional autoregressive generative machine
learning of organic molecules | 18 pages, 7 figures, 14 pages of supplemental material | null | null | null | physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | A range of generative machine learning models for the design of novel
molecules and materials have been proposed in recent years. Models that can
generate three-dimensional structures are particularly suitable for quantum
chemistry workflows, enabling direct property prediction. The performance of
generative models is typically assessed based on their ability to produce
novel, valid, and unique molecules. However, equally important is their ability
to learn the prevalence of functional groups and certain chemical moieties in
the underlying training data, that is, to faithfully reproduce the chemical
space spanned by the training data. Here, we investigate the ability of the
autoregressive generative machine learning model G-SchNet to reproduce the
chemical space and property distributions of training datasets composed of
large, functional organic molecules. We assess the elemental composition, size-
and bond-length distributions, as well as the functional group and chemical
space distribution of training and generated molecules. By principal component
analysis of the chemical space, we find that the model leads to a biased
generation that is largely unaffected by the choice of hyperparameters or the
training dataset distribution, producing molecules that are, on average, more
unsaturated and contain more heteroatoms. Purely aliphatic molecules are mostly
absent in the generation. We further investigate generation with functional
group constraints and based on composite datasets, which can help partially
remedy the model generation bias. Decision tree models can recognize the
generation bias in the models and discriminate between training and generated
data, revealing key chemical differences between the two sets. The chemical
differences we find affect the distributions of electronic properties such as
the HOMO-LUMO gap, which is a common target for functional molecule design.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 10:08:06 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Koczor-Benda",
"Zsuzsanna",
""
],
[
"Gilkes",
"Joe",
""
],
[
"Bartucca",
"Francesco",
""
],
[
"Al-Fekaiki",
"Abdulla",
""
],
[
"Maurer",
"Reinhard J.",
""
]
] | TITLE: Structural bias in three-dimensional autoregressive generative machine
learning of organic molecules
ABSTRACT: A range of generative machine learning models for the design of novel
molecules and materials have been proposed in recent years. Models that can
generate three-dimensional structures are particularly suitable for quantum
chemistry workflows, enabling direct property prediction. The performance of
generative models is typically assessed based on their ability to produce
novel, valid, and unique molecules. However, equally important is their ability
to learn the prevalence of functional groups and certain chemical moieties in
the underlying training data, that is, to faithfully reproduce the chemical
space spanned by the training data. Here, we investigate the ability of the
autoregressive generative machine learning model G-SchNet to reproduce the
chemical space and property distributions of training datasets composed of
large, functional organic molecules. We assess the elemental composition, size-
and bond-length distributions, as well as the functional group and chemical
space distribution of training and generated molecules. By principal component
analysis of the chemical space, we find that the model leads to a biased
generation that is largely unaffected by the choice of hyperparameters or the
training dataset distribution, producing molecules that are, on average, more
unsaturated and contain more heteroatoms. Purely aliphatic molecules are mostly
absent in the generation. We further investigate generation with functional
group constraints and based on composite datasets, which can help partially
remedy the model generation bias. Decision tree models can recognize the
generation bias in the models and discriminate between training and generated
data, revealing key chemical differences between the two sets. The chemical
differences we find affect the distributions of electronic properties such as
the HOMO-LUMO gap, which is a common target for functional molecule design.
|
2503.21332 | Hwanjun Song | Taewon Yun and Jihwan Oh and Hyangsuk Min and Yuho Lee and Jihwan Bang
and Jason Cai and Hwanjun Song | ReFeed: Multi-dimensional Summarization Refinement with Reflective
Reasoning on Feedback | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Summarization refinement faces challenges when extended to multiple dimensions.
In this paper, we introduce ReFeed, a powerful summarization refinement
pipeline that enhances multiple dimensions through reflective reasoning on
feedback. To achieve this, we release SumFeed-CoT, a large-scale Long-CoT-based
dataset optimized for training a lightweight model with reflective reasoning.
Our experiments reveal how the number of dimensions, feedback exposure, and
reasoning policy influence refinement performance, highlighting that reflective
reasoning and simultaneously addressing multiple types of feedback are crucial
to mitigating trade-offs between dimensions. Furthermore, ReFeed is robust to
noisy feedback and feedback order. Lastly, our findings emphasize that creating data
with a proper goal and guideline constitutes a fundamental pillar of effective
reasoning. The dataset and model will be released.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 10:11:41 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Yun",
"Taewon",
""
],
[
"Oh",
"Jihwan",
""
],
[
"Min",
"Hyangsuk",
""
],
[
"Lee",
"Yuho",
""
],
[
"Bang",
"Jihwan",
""
],
[
"Cai",
"Jason",
""
],
[
"Song",
"Hwanjun",
""
]
] | TITLE: ReFeed: Multi-dimensional Summarization Refinement with Reflective
Reasoning on Feedback
ABSTRACT: Summarization refinement faces challenges when extended to multiple dimensions.
In this paper, we introduce ReFeed, a powerful summarization refinement
pipeline that enhances multiple dimensions through reflective reasoning on
feedback. To achieve this, we release SumFeed-CoT, a large-scale Long-CoT-based
dataset optimized for training a lightweight model with reflective reasoning.
Our experiments reveal how the number of dimensions, feedback exposure, and
reasoning policy influence refinement performance, highlighting that reflective
reasoning and simultaneously addressing multiple types of feedback are crucial
to mitigating trade-offs between dimensions. Furthermore, ReFeed is robust to
noisy feedback and feedback order. Lastly, our findings emphasize that creating data
with a proper goal and guideline constitutes a fundamental pillar of effective
reasoning. The dataset and model will be released.
|
2503.21338 | Yehui Shen | Yehui Shen, Lei Zhang, Qingqiu Li, Xiongwei Zhao, Yue Wang, Huimin Lu,
Xieyuanli Chen | UGNA-VPR: A Novel Training Paradigm for Visual Place Recognition Based
on Uncertainty-Guided NeRF Augmentation | Accepted to IEEE Robotics and Automation Letters (RA-L) | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual place recognition (VPR) is crucial for robots to identify previously
visited locations, playing an important role in autonomous navigation in both
indoor and outdoor environments. However, most existing VPR datasets are
limited to single-viewpoint scenarios, leading to reduced recognition accuracy,
particularly in multi-directional driving or feature-sparse scenes. Moreover,
obtaining additional data to mitigate these limitations is often expensive.
This paper introduces a novel training paradigm to improve the performance of
existing VPR networks by enhancing multi-view diversity within current datasets
through uncertainty estimation and NeRF-based data augmentation. Specifically,
we initially train NeRF using the existing VPR dataset. Then, our devised
self-supervised uncertainty estimation network identifies places with high
uncertainty. The poses of these uncertain places are input into NeRF to
generate new synthetic observations for further training of VPR networks.
Additionally, we propose an improved storage method for efficient organization
of augmented and original training data. We conducted extensive experiments on
three datasets and tested three different VPR backbone networks. The results
demonstrate that our proposed training paradigm significantly improves VPR
performance by fully utilizing existing data, outperforming other training
approaches. We further validated the effectiveness of our approach on
self-recorded indoor and outdoor datasets, consistently demonstrating superior
results. Our dataset and code have been released at
\href{https://github.com/nubot-nudt/UGNA-VPR}{https://github.com/nubot-nudt/UGNA-VPR}.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 10:14:46 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Shen",
"Yehui",
""
],
[
"Zhang",
"Lei",
""
],
[
"Li",
"Qingqiu",
""
],
[
"Zhao",
"Xiongwei",
""
],
[
"Wang",
"Yue",
""
],
[
"Lu",
"Huimin",
""
],
[
"Chen",
"Xieyuanli",
""
]
] | TITLE: UGNA-VPR: A Novel Training Paradigm for Visual Place Recognition Based
on Uncertainty-Guided NeRF Augmentation
ABSTRACT: Visual place recognition (VPR) is crucial for robots to identify previously
visited locations, playing an important role in autonomous navigation in both
indoor and outdoor environments. However, most existing VPR datasets are
limited to single-viewpoint scenarios, leading to reduced recognition accuracy,
particularly in multi-directional driving or feature-sparse scenes. Moreover,
obtaining additional data to mitigate these limitations is often expensive.
This paper introduces a novel training paradigm to improve the performance of
existing VPR networks by enhancing multi-view diversity within current datasets
through uncertainty estimation and NeRF-based data augmentation. Specifically,
we initially train NeRF using the existing VPR dataset. Then, our devised
self-supervised uncertainty estimation network identifies places with high
uncertainty. The poses of these uncertain places are input into NeRF to
generate new synthetic observations for further training of VPR networks.
Additionally, we propose an improved storage method for efficient organization
of augmented and original training data. We conducted extensive experiments on
three datasets and tested three different VPR backbone networks. The results
demonstrate that our proposed training paradigm significantly improves VPR
performance by fully utilizing existing data, outperforming other training
approaches. We further validated the effectiveness of our approach on
self-recorded indoor and outdoor datasets, consistently demonstrating superior
results. Our dataset and code have been released at
\href{https://github.com/nubot-nudt/UGNA-VPR}{https://github.com/nubot-nudt/UGNA-VPR}.
|
2503.21349 | Noah Losch | Noah Losch, Lucas Plagwitz, Antonius B\"uscher, Julian Varghese | Fine-Tuning LLMs on Small Medical Datasets: Text Classification and
Normalization Effectiveness on Cardiology reports and Discharge records | 4 pages, 2 tables, | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the effectiveness of fine-tuning large language models (LLMs)
on small medical datasets for text classification and named entity recognition
tasks. Using a German cardiology report dataset and the i2b2 Smoking Challenge
dataset, we demonstrate that fine-tuning small LLMs locally on limited training
data can improve performance, achieving results comparable to larger models. Our
experiments show that fine-tuning improves performance on both tasks, with
notable gains observed with as few as 200-300 training examples. Overall, the
study highlights the potential of task-specific fine-tuning of LLMs for
automating clinical workflows and efficiently extracting structured data from
unstructured medical text.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 10:35:56 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Losch",
"Noah",
""
],
[
"Plagwitz",
"Lucas",
""
],
[
"Büscher",
"Antonius",
""
],
[
"Varghese",
"Julian",
""
]
] | TITLE: Fine-Tuning LLMs on Small Medical Datasets: Text Classification and
Normalization Effectiveness on Cardiology reports and Discharge records
ABSTRACT: We investigate the effectiveness of fine-tuning large language models (LLMs)
on small medical datasets for text classification and named entity recognition
tasks. Using a German cardiology report dataset and the i2b2 Smoking Challenge
dataset, we demonstrate that fine-tuning small LLMs locally on limited training
data can improve performance, achieving results comparable to larger models. Our
experiments show that fine-tuning improves performance on both tasks, with
notable gains observed with as few as 200-300 training examples. Overall, the
study highlights the potential of task-specific fine-tuning of LLMs for
automating clinical workflows and efficiently extracting structured data from
unstructured medical text.
|
2503.21360 | Manuela Sanguinetti | Manuela Sanguinetti, Alessandra Perniciano, Luca Zedda, Andrea Loddo,
Cecilia Di Ruberto, and Maurizio Atzori | From User Preferences to Optimization Constraints Using Large Language
Models | null | null | null | ITADATA/2024/08 | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This work explores using Large Language Models (LLMs) to translate user
preferences into energy optimization constraints for home appliances. We
describe a task where natural language user utterances are converted into
formal constraints for smart appliances, within the broader context of a
renewable energy community (REC) and in the Italian scenario. We evaluate the
effectiveness of various LLMs currently available for Italian in translating
these preferences, resorting to classical zero-shot, one-shot, and few-shot
learning settings, using a pilot dataset of Italian user requests paired with
their corresponding formal constraint representations. Our contributions include
establishing a baseline performance for this task, publicly releasing the
dataset and code for further research, and providing insights on observed best
practices and limitations of LLMs in this particular domain.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 10:52:10 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Sanguinetti",
"Manuela",
""
],
[
"Perniciano",
"Alessandra",
""
],
[
"Zedda",
"Luca",
""
],
[
"Loddo",
"Andrea",
""
],
[
"Di Ruberto",
"Cecilia",
""
],
[
"Atzori",
"Maurizio",
""
]
] | TITLE: From User Preferences to Optimization Constraints Using Large Language
Models
ABSTRACT: This work explores using Large Language Models (LLMs) to translate user
preferences into energy optimization constraints for home appliances. We
describe a task where natural language user utterances are converted into
formal constraints for smart appliances, within the broader context of a
renewable energy community (REC) and in the Italian scenario. We evaluate the
effectiveness of various LLMs currently available for Italian in translating
these preferences, resorting to classical zero-shot, one-shot, and few-shot
learning settings, using a pilot dataset of Italian user requests paired with
their corresponding formal constraint representations. Our contributions include
establishing a baseline performance for this task, publicly releasing the
dataset and code for further research, and providing insights on observed best
practices and limitations of LLMs in this particular domain.
|
2503.21377 | Hamadi Chihaoui | Hamadi Chihaoui and Paolo Favaro | Unsupervised Real-World Denoising: Sparsity is All You Need | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Supervised training for real-world denoising presents challenges due to the
difficulty of collecting large datasets of paired noisy and clean images.
Recent methods have attempted to address this by utilizing unpaired datasets of
clean and noisy images. Some approaches leverage such unpaired data to train
denoisers in a supervised manner by generating synthetic clean-noisy pairs.
However, these methods often fall short due to the distribution gap between
synthetic and real noisy images. To mitigate this issue, we propose a solution
based on input sparsification, specifically using random input masking. Our
method, which we refer to as Mask, Inpaint and Denoise (MID), trains a denoiser
to simultaneously denoise and inpaint synthetic clean-noisy pairs. On one hand,
input sparsification reduces the gap between synthetic and real noisy images.
On the other hand, an inpainter trained in a supervised manner can still
accurately reconstruct sparse inputs by predicting missing clean pixels using
the remaining unmasked pixels. Our approach begins with a synthetic Gaussian
noise sampler and iteratively refines it using a noise dataset derived from the
denoiser's predictions. The noise dataset is created by subtracting predicted
pseudo-clean images from real noisy images at each iteration. The core
intuition is that improving the denoiser results in a more accurate noise
dataset and, consequently, a better noise sampler. We validate our method
through extensive experiments on real-world noisy image datasets, demonstrating
competitive performance compared to existing unsupervised denoising methods.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 11:09:58 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Chihaoui",
"Hamadi",
""
],
[
"Favaro",
"Paolo",
""
]
] | TITLE: Unsupervised Real-World Denoising: Sparsity is All You Need
ABSTRACT: Supervised training for real-world denoising presents challenges due to the
difficulty of collecting large datasets of paired noisy and clean images.
Recent methods have attempted to address this by utilizing unpaired datasets of
clean and noisy images. Some approaches leverage such unpaired data to train
denoisers in a supervised manner by generating synthetic clean-noisy pairs.
However, these methods often fall short due to the distribution gap between
synthetic and real noisy images. To mitigate this issue, we propose a solution
based on input sparsification, specifically using random input masking. Our
method, which we refer to as Mask, Inpaint and Denoise (MID), trains a denoiser
to simultaneously denoise and inpaint synthetic clean-noisy pairs. On one hand,
input sparsification reduces the gap between synthetic and real noisy images.
On the other hand, an inpainter trained in a supervised manner can still
accurately reconstruct sparse inputs by predicting missing clean pixels using
the remaining unmasked pixels. Our approach begins with a synthetic Gaussian
noise sampler and iteratively refines it using a noise dataset derived from the
denoiser's predictions. The noise dataset is created by subtracting predicted
pseudo-clean images from real noisy images at each iteration. The core
intuition is that improving the denoiser results in a more accurate noise
dataset and, consequently, a better noise sampler. We validate our method
through extensive experiments on real-world noisy image datasets, demonstrating
competitive performance compared to existing unsupervised denoising methods.
|
2503.21378 | Kota Dohi | Kota Dohi, Tomoya Nishida, Harsh Purohit, Takashi Endo, Yohei
Kawaguchi | Retrieving Time-Series Differences Using Natural Language Queries | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effectively searching time-series data is essential for system analysis;
however, traditional methods often require domain expertise to define search
criteria. Recent advancements have enabled natural language-based search, but
these methods struggle to handle differences between time-series data. To
address this limitation, we propose a natural language query-based approach for
retrieving pairs of time-series data based on differences specified in the
query. Specifically, we define six key characteristics of differences,
construct a corresponding dataset, and develop a contrastive learning-based
model to align differences between time-series data with query texts.
Experimental results demonstrate that our model achieves an overall mAP score
of 0.994 in retrieving time-series pairs.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 11:15:17 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Dohi",
"Kota",
""
],
[
"Nishida",
"Tomoya",
""
],
[
"Purohit",
"Harsh",
""
],
[
"Endo",
"Takashi",
""
],
[
"Kawaguchi",
"Yohei",
""
]
] | TITLE: Retrieving Time-Series Differences Using Natural Language Queries
ABSTRACT: Effectively searching time-series data is essential for system analysis;
however, traditional methods often require domain expertise to define search
criteria. Recent advancements have enabled natural language-based search, but
these methods struggle to handle differences between time-series data. To
address this limitation, we propose a natural language query-based approach for
retrieving pairs of time-series data based on differences specified in the
query. Specifically, we define six key characteristics of differences,
construct a corresponding dataset, and develop a contrastive learning-based
model to align differences between time-series data with query texts.
Experimental results demonstrate that our model achieves an overall mAP score
of 0.994 in retrieving time-series pairs.
|
2503.21397 | Erik Wallin | Erik Wallin, Fredrik Kahl, Lars Hammarstrand | ProHOC: Probabilistic Hierarchical Out-of-Distribution Classification
via Multi-Depth Networks | CVPR2025 | null | null | null | cs.LG cs.CV stat.ML | http://creativecommons.org/licenses/by/4.0/ | Out-of-distribution (OOD) detection in deep learning has traditionally been
framed as a binary task, where samples are either classified as belonging to
the known classes or marked as OOD, with little attention given to the semantic
relationships between OOD samples and the in-distribution (ID) classes. We
propose a framework for detecting and classifying OOD samples in a given class
hierarchy. Specifically, we aim to predict OOD data to their correct internal
nodes of the class hierarchy, whereas the known ID classes should be predicted
as their corresponding leaf nodes. Our approach leverages the class hierarchy
to create a probabilistic model and we implement this model by using networks
trained for ID classification at multiple hierarchy depths. We conduct
experiments on three datasets with predefined class hierarchies and show the
effectiveness of our method. Our code is available at
https://github.com/walline/prohoc.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 11:39:55 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wallin",
"Erik",
""
],
[
"Kahl",
"Fredrik",
""
],
[
"Hammarstrand",
"Lars",
""
]
] | TITLE: ProHOC: Probabilistic Hierarchical Out-of-Distribution Classification
via Multi-Depth Networks
ABSTRACT: Out-of-distribution (OOD) detection in deep learning has traditionally been
framed as a binary task, where samples are either classified as belonging to
the known classes or marked as OOD, with little attention given to the semantic
relationships between OOD samples and the in-distribution (ID) classes. We
propose a framework for detecting and classifying OOD samples in a given class
hierarchy. Specifically, we aim to predict OOD data to their correct internal
nodes of the class hierarchy, whereas the known ID classes should be predicted
as their corresponding leaf nodes. Our approach leverages the class hierarchy
to create a probabilistic model and we implement this model by using networks
trained for ID classification at multiple hierarchy depths. We conduct
experiments on three datasets with predefined class hierarchies and show the
effectiveness of our method. Our code is available at
https://github.com/walline/prohoc.
|
2503.21408 | Marshall Thomas | Marshall Thomas, Edward Fish, Richard Bowden | VALLR: Visual ASR Language Model for Lip Reading | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Lip Reading, or Visual Automatic Speech Recognition (V-ASR), is a complex
task requiring the interpretation of spoken language exclusively from visual
cues, primarily lip movements and facial expressions. This task is especially
challenging due to the absence of auditory information and the inherent
ambiguity when visually distinguishing phonemes that have overlapping visemes
where different phonemes appear identical on the lips. Current methods
typically attempt to predict words or characters directly from these visual
cues, but this approach frequently encounters high error rates due to
coarticulation effects and viseme ambiguity. We propose a novel two-stage,
phoneme-centric framework for Visual Automatic Speech Recognition (V-ASR) that
addresses these longstanding challenges. First, our model predicts a compact
sequence of phonemes from visual inputs using a Video Transformer with a CTC
head, thereby reducing the task complexity and achieving robust speaker
invariance. This phoneme output then serves as the input to a fine-tuned Large
Language Model (LLM), which reconstructs coherent words and sentences by
leveraging broader linguistic context. Unlike existing methods that either
predict words directly (often faltering on visually similar phonemes) or rely on
large-scale multimodal pre-training, our approach explicitly encodes
intermediate linguistic structure while remaining highly data efficient. We
demonstrate state-of-the-art performance on two challenging datasets, LRS2 and
LRS3, where our method achieves significant reductions in Word Error Rate
(WER), reaching a SOTA WER of 18.7 on LRS3 despite using 99.4% less labelled data
than the next best approach.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 11:52:08 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Thomas",
"Marshall",
""
],
[
"Fish",
"Edward",
""
],
[
"Bowden",
"Richard",
""
]
] | TITLE: VALLR: Visual ASR Language Model for Lip Reading
ABSTRACT: Lip Reading, or Visual Automatic Speech Recognition (V-ASR), is a complex
task requiring the interpretation of spoken language exclusively from visual
cues, primarily lip movements and facial expressions. This task is especially
challenging due to the absence of auditory information and the inherent
ambiguity when visually distinguishing phonemes that have overlapping visemes
where different phonemes appear identical on the lips. Current methods
typically attempt to predict words or characters directly from these visual
cues, but this approach frequently encounters high error rates due to
coarticulation effects and viseme ambiguity. We propose a novel two-stage,
phoneme-centric framework for Visual Automatic Speech Recognition (V-ASR) that
addresses these longstanding challenges. First, our model predicts a compact
sequence of phonemes from visual inputs using a Video Transformer with a CTC
head, thereby reducing the task complexity and achieving robust speaker
invariance. This phoneme output then serves as the input to a fine-tuned Large
Language Model (LLM), which reconstructs coherent words and sentences by
leveraging broader linguistic context. Unlike existing methods that either
predict words directly (often faltering on visually similar phonemes) or rely on
large-scale multimodal pre-training, our approach explicitly encodes
intermediate linguistic structure while remaining highly data efficient. We
demonstrate state-of-the-art performance on two challenging datasets, LRS2 and
LRS3, where our method achieves significant reductions in Word Error Rate
(WER), reaching a SOTA WER of 18.7 on LRS3 despite using 99.4% less labelled data
than the next best approach.
|
2503.21426 | Sen Zhang | Sen Zhang, Qingqing Ye, Haibo Hu, Jianliang Xu | AdvSGM: Differentially Private Graph Learning via Adversarial Skip-gram
Model | Accepted by ICDE 2025 | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The skip-gram model (SGM), which employs a neural network to generate node
vectors, serves as the basis for numerous popular graph embedding techniques.
However, since the training datasets contain sensitive linkage information, the
parameters of a released SGM may encode private information and pose
significant privacy risks. Differential privacy (DP) is a rigorous standard for
protecting individual privacy in data analysis. Nevertheless, when applying
differential privacy to skip-gram in graphs, it becomes highly challenging due
to the complex link relationships, which potentially result in high sensitivity
and necessitate substantial noise injection. To tackle this challenge, we
present AdvSGM, a differentially private skip-gram for graphs via adversarial
training. Our core idea is to leverage adversarial training to privatize
skip-gram while improving its utility. Towards this end, we develop a novel
adversarial training module by devising two optimizable noise terms that
correspond to the parameters of a skip-gram. By fine-tuning the weights between
modules within AdvSGM, we can achieve differentially private gradient updates
without additional noise injection. Extensive experimental results on six
real-world graph datasets show that AdvSGM preserves high data utility across
different downstream tasks.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 12:13:28 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhang",
"Sen",
""
],
[
"Ye",
"Qingqing",
""
],
[
"Hu",
"Haibo",
""
],
[
"Xu",
"Jianliang",
""
]
] | TITLE: AdvSGM: Differentially Private Graph Learning via Adversarial Skip-gram
Model
ABSTRACT: The skip-gram model (SGM), which employs a neural network to generate node
vectors, serves as the basis for numerous popular graph embedding techniques.
However, since the training datasets contain sensitive linkage information, the
parameters of a released SGM may encode private information and pose
significant privacy risks. Differential privacy (DP) is a rigorous standard for
protecting individual privacy in data analysis. Nevertheless, when applying
differential privacy to skip-gram in graphs, it becomes highly challenging due
to the complex link relationships, which potentially result in high sensitivity
and necessitate substantial noise injection. To tackle this challenge, we
present AdvSGM, a differentially private skip-gram for graphs via adversarial
training. Our core idea is to leverage adversarial training to privatize
skip-gram while improving its utility. Towards this end, we develop a novel
adversarial training module by devising two optimizable noise terms that
correspond to the parameters of a skip-gram. By fine-tuning the weights between
modules within AdvSGM, we can achieve differentially private gradient updates
without additional noise injection. Extensive experimental results on six
real-world graph datasets show that AdvSGM preserves high data utility across
different downstream tasks.
|
2503.21449 | Lucas Nunes | Lucas Nunes, Rodrigo Marcuzzi, Jens Behley, Cyrill Stachniss | Towards Generating Realistic 3D Semantic Training Data for Autonomous
Driving | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic scene understanding is crucial for robotics and computer vision
applications. In autonomous driving, 3D semantic segmentation plays an
important role in enabling safe navigation. Despite significant advances in
the field, the complexity of collecting and annotating 3D data is a bottleneck
in these developments. To overcome this data annotation limitation, synthetic
simulated data has been used to generate annotated data on demand. There is,
however, still a domain gap between real and simulated data. More recently,
diffusion models have been in the spotlight, enabling close-to-real data
synthesis. Those generative models have been recently applied to the 3D data
domain for generating scene-scale data with semantic annotations. Still, those
methods either rely on image projection or decoupled models trained with
different resolutions in a coarse-to-fine manner. Such intermediary
representations impact the generated data quality due to errors added in those
transformations. In this work, we propose a novel approach able to generate 3D
semantic scene-scale data without relying on any projection or decoupled
trained multi-resolution models, achieving more realistic semantic scene data
generation compared to previous state-of-the-art methods. Besides improving 3D
semantic scene-scale data synthesis, we thoroughly evaluate the use of the
synthetic scene samples as labeled data to train a semantic segmentation
network. In our experiments, we show that using the synthetic annotated data
generated by our method as training data together with the real semantic
segmentation labels leads to an improvement in the semantic segmentation model
performance. Our results show the potential of generated scene-scale point
clouds to generate more training data to extend existing datasets, reducing the
data annotation effort. Our code is available at
https://github.com/PRBonn/3DiSS.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 12:41:42 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Nunes",
"Lucas",
""
],
[
"Marcuzzi",
"Rodrigo",
""
],
[
"Behley",
"Jens",
""
],
[
"Stachniss",
"Cyrill",
""
]
] | TITLE: Towards Generating Realistic 3D Semantic Training Data for Autonomous
Driving
ABSTRACT: Semantic scene understanding is crucial for robotics and computer vision
applications. In autonomous driving, 3D semantic segmentation plays an
important role in enabling safe navigation. Despite significant advances in
the field, the complexity of collecting and annotating 3D data is a bottleneck
in these developments. To overcome this data annotation limitation, synthetic
simulated data has been used to generate annotated data on demand. There is,
however, still a domain gap between real and simulated data. More recently,
diffusion models have been in the spotlight, enabling close-to-real data
synthesis. Those generative models have been recently applied to the 3D data
domain for generating scene-scale data with semantic annotations. Still, those
methods either rely on image projection or decoupled models trained with
different resolutions in a coarse-to-fine manner. Such intermediary
representations impact the generated data quality due to errors added in those
transformations. In this work, we propose a novel approach able to generate 3D
semantic scene-scale data without relying on any projection or decoupled
trained multi-resolution models, achieving more realistic semantic scene data
generation compared to previous state-of-the-art methods. Besides improving 3D
semantic scene-scale data synthesis, we thoroughly evaluate the use of the
synthetic scene samples as labeled data to train a semantic segmentation
network. In our experiments, we show that using the synthetic annotated data
generated by our method as training data together with the real semantic
segmentation labels leads to an improvement in the semantic segmentation model
performance. Our results show the potential of generated scene-scale point
clouds to generate more training data to extend existing datasets, reducing the
data annotation effort. Our code is available at
https://github.com/PRBonn/3DiSS.
|
2503.21457 | Xiaoqin Wang | Xiaoqin Wang, Xusen Ma, Xianxu Hou, Meidan Ding, Yudong Li, Junliang
Chen, Wenting Chen, Xiaoyang Peng, Linlin Shen | FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for
Benchmarking Face Perception MLLMs | Accepted by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal large language models (MLLMs) have demonstrated remarkable
capabilities in various tasks. However, effectively evaluating these MLLMs on
face perception remains largely unexplored. To address this gap, we introduce
FaceBench, a dataset featuring hierarchical multi-view and multi-level
attributes specifically designed to assess the comprehensive face perception
abilities of MLLMs. Initially, we construct a hierarchical facial attribute
structure, which encompasses five views with up to three levels of attributes,
totaling over 210 attributes and 700 attribute values. Based on the structure,
the proposed FaceBench consists of 49,919 visual question-answering (VQA) pairs
for evaluation and 23,841 pairs for fine-tuning. Moreover, we further develop a
robust face perception MLLM baseline, Face-LLaVA, by training with our proposed
face VQA data. Extensive experiments on various mainstream MLLMs and Face-LLaVA
are conducted to test their face perception ability, with results also compared
against human performance. The results reveal that the existing MLLMs are far
from satisfactory in understanding the fine-grained facial attributes, while
our Face-LLaVA significantly outperforms existing open-source models with a
small amount of training data and is comparable to commercial ones like GPT-4o
and Gemini. The dataset will be released at
https://github.com/CVI-SZU/FaceBench.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 12:45:44 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wang",
"Xiaoqin",
""
],
[
"Ma",
"Xusen",
""
],
[
"Hou",
"Xianxu",
""
],
[
"Ding",
"Meidan",
""
],
[
"Li",
"Yudong",
""
],
[
"Chen",
"Junliang",
""
],
[
"Chen",
"Wenting",
""
],
[
"Peng",
"Xiaoyang",
""
],
[
"Shen",
"Linlin",
""
]
] | TITLE: FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for
Benchmarking Face Perception MLLMs
ABSTRACT: Multimodal large language models (MLLMs) have demonstrated remarkable
capabilities in various tasks. However, effectively evaluating these MLLMs on
face perception remains largely unexplored. To address this gap, we introduce
FaceBench, a dataset featuring hierarchical multi-view and multi-level
attributes specifically designed to assess the comprehensive face perception
abilities of MLLMs. Initially, we construct a hierarchical facial attribute
structure, which encompasses five views with up to three levels of attributes,
totaling over 210 attributes and 700 attribute values. Based on the structure,
the proposed FaceBench consists of 49,919 visual question-answering (VQA) pairs
for evaluation and 23,841 pairs for fine-tuning. Moreover, we further develop a
robust face perception MLLM baseline, Face-LLaVA, by training with our proposed
face VQA data. Extensive experiments on various mainstream MLLMs and Face-LLaVA
are conducted to test their face perception ability, with results also compared
against human performance. The results reveal that the existing MLLMs are far
from satisfactory in understanding the fine-grained facial attributes, while
our Face-LLaVA significantly outperforms existing open-source models with a
small amount of training data and is comparable to commercial ones like GPT-4o
and Gemini. The dataset will be released at
https://github.com/CVI-SZU/FaceBench.
|
2503.21459 | Chirag Parikh | Chirag Parikh, Deepti Rawat, Rakshitha R. T., Tathagata Ghosh, Ravi
Kiran Sarvadevabhatla | RoadSocial: A Diverse VideoQA Dataset and Benchmark for Road Event
Understanding from Social Video Narratives | Accepted at CVPR 2025; Project Page: https://roadsocial.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce RoadSocial, a large-scale, diverse VideoQA dataset tailored for
generic road event understanding from social media narratives. Unlike existing
datasets limited by regional bias, viewpoint bias and expert-driven
annotations, RoadSocial captures the global complexity of road events with
varied geographies, camera viewpoints (CCTV, handheld, drones) and rich social
discourse. Our scalable semi-automatic annotation framework leverages Text LLMs
and Video LLMs to generate comprehensive question-answer pairs across 12
challenging QA tasks, pushing the boundaries of road event understanding.
RoadSocial is derived from social media videos spanning 14M frames and 414K
social comments, resulting in a dataset with 13.2K videos, 674 tags and 260K
high-quality QA pairs. We evaluate 18 Video LLMs (open-source and proprietary,
driving-specific and general-purpose) on our road event understanding
benchmark. We also demonstrate RoadSocial's utility in improving road event
understanding capabilities of general-purpose Video LLMs.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 12:49:09 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Parikh",
"Chirag",
""
],
[
"Rawat",
"Deepti",
""
],
[
"T.",
"Rakshitha R.",
""
],
[
"Ghosh",
"Tathagata",
""
],
[
"Sarvadevabhatla",
"Ravi Kiran",
""
]
] | TITLE: RoadSocial: A Diverse VideoQA Dataset and Benchmark for Road Event
Understanding from Social Video Narratives
ABSTRACT: We introduce RoadSocial, a large-scale, diverse VideoQA dataset tailored for
generic road event understanding from social media narratives. Unlike existing
datasets limited by regional bias, viewpoint bias and expert-driven
annotations, RoadSocial captures the global complexity of road events with
varied geographies, camera viewpoints (CCTV, handheld, drones) and rich social
discourse. Our scalable semi-automatic annotation framework leverages Text LLMs
and Video LLMs to generate comprehensive question-answer pairs across 12
challenging QA tasks, pushing the boundaries of road event understanding.
RoadSocial is derived from social media videos spanning 14M frames and 414K
social comments, resulting in a dataset with 13.2K videos, 674 tags and 260K
high-quality QA pairs. We evaluate 18 Video LLMs (open-source and proprietary,
driving-specific and general-purpose) on our road event understanding
benchmark. We also demonstrate RoadSocial's utility in improving road event
understanding capabilities of general-purpose Video LLMs.
|
2503.21464 | Josef Pichlmeier | Ryan Marinelli, Josef Pichlmeier, Tamas Bisztray | Harnessing Chain-of-Thought Metadata for Task Routing and Adversarial
Prompt Detection | null | null | null | null | cs.CL cs.AI cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose a metric called Number of Thoughts (NofT) to
determine the difficulty of tasks pre-prompting and support Large Language
Models (LLMs) in production contexts. By setting thresholds based on the number
of thoughts, this metric can discern the difficulty of prompts and support more
effective prompt routing. A 2% decrease in latency is achieved when routing
prompts from the MathInstruct dataset through quantized, distilled versions of
Deepseek with 1.7 billion, 7 billion, and 14 billion parameters. Moreover, this
metric can be used to detect adversarial prompts used in prompt injection
attacks with high efficacy. The Number of Thoughts can inform a classifier that
achieves 95% accuracy in adversarial prompt detection. Our experiments ad
datasets used are available on our GitHub page:
https://github.com/rymarinelli/Number_Of_Thoughts/tree/main.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 12:54:00 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Marinelli",
"Ryan",
""
],
[
"Pichlmeier",
"Josef",
""
],
[
"Bisztray",
"Tamas",
""
]
] | TITLE: Harnessing Chain-of-Thought Metadata for Task Routing and Adversarial
Prompt Detection
ABSTRACT: In this work, we propose a metric called Number of Thoughts (NofT) to
determine the difficulty of tasks pre-prompting and support Large Language
Models (LLMs) in production contexts. By setting thresholds based on the number
of thoughts, this metric can discern the difficulty of prompts and support more
effective prompt routing. A 2% decrease in latency is achieved when routing
prompts from the MathInstruct dataset through quantized, distilled versions of
Deepseek with 1.7 billion, 7 billion, and 14 billion parameters. Moreover, this
metric can be used to detect adversarial prompts used in prompt injection
attacks with high efficacy. The Number of Thoughts can inform a classifier that
achieves 95% accuracy in adversarial prompt detection. Our experiments and
datasets used are available on our GitHub page:
https://github.com/rymarinelli/Number_Of_Thoughts/tree/main.
|
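As a hedged illustration of the record above (2503.21464, Number of Thoughts), which thresholds the number of reasoning steps to route prompts between small and large models, the following minimal Python sketch renders that idea. The "Step N:" marker, the threshold value, and the model names are assumptions, not details from the paper.

```python
import re

# Hypothetical illustration of the NofT idea: count discrete reasoning steps in a
# model's chain-of-thought and route the prompt by a threshold. The "Step N:" marker,
# the threshold of 5, and the model names are assumptions, not the paper's setup.

def count_thoughts(reasoning_trace: str) -> int:
    """Count reasoning steps, assuming each step starts with 'Step N:'."""
    return len(re.findall(r"^Step \d+:", reasoning_trace, flags=re.MULTILINE))

def route_prompt(reasoning_trace: str, threshold: int = 5) -> str:
    """Send easy prompts (few thoughts) to a small model, hard ones to a large one."""
    return "small-1.7b-model" if count_thoughts(reasoning_trace) <= threshold else "large-14b-model"

if __name__ == "__main__":
    trace = "Step 1: parse the question.\nStep 2: recall the formula.\nStep 3: compute."
    print(count_thoughts(trace))  # 3
    print(route_prompt(trace))    # small-1.7b-model
```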
2503.21465 | Deependra Singh | Deependra Singh, Saksham Agarwal, and Subhankar Mishra | Retinal Fundus Multi-Disease Image Classification using Hybrid
CNN-Transformer-Ensemble Architectures | 17 pages, 3 figures, 7 tables. Conference paper presented at the
International Health Informatics Conference (IHIC 2023) | In: Proceedings of the International Health Informatics Conference
(IHIC 2023). Lecture Notes in Networks and Systems, vol. 1113, Springer,
Singapore, pp. 103-120 (2025) | 10.1007/978-981-97-7190-5_9 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our research is motivated by the urgent global issue of a large population
affected by retinal diseases, which are evenly distributed but underserved by
specialized medical expertise, particularly in non-urban areas. Our primary
objective is to bridge this healthcare gap by developing a comprehensive
diagnostic system capable of accurately predicting retinal diseases solely from
fundus images. However, we faced significant challenges due to limited, diverse
datasets and imbalanced class distributions. To overcome these issues, we have
devised innovative strategies. Our research introduces novel approaches,
utilizing hybrid models combining deeper Convolutional Neural Networks (CNNs),
Transformer encoders, and ensemble architectures sequentially and in parallel
to classify retinal fundus images into 20 disease labels. Our overarching goal
is to assess these advanced models' potential in practical applications, with a
strong focus on enhancing retinal disease diagnosis accuracy across a broader
spectrum of conditions. Importantly, our efforts have surpassed baseline model
results, with the C-Tran ensemble model emerging as the leader, achieving a
remarkable model score of 0.9166, surpassing the baseline score of 0.9.
Additionally, experiments with the IEViT model showcased equally promising
outcomes with improved computational efficiency. We've also demonstrated the
effectiveness of dynamic patch extraction and the integration of domain
knowledge in computer vision tasks. In summary, our research strives to
contribute significantly to retinal disease diagnosis, addressing the critical
need for accessible healthcare solutions in underserved regions while aiming
for comprehensive and accurate disease prediction.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 12:55:07 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Singh",
"Deependra",
""
],
[
"Agarwal",
"Saksham",
""
],
[
"Mishra",
"Subhankar",
""
]
] | TITLE: Retinal Fundus Multi-Disease Image Classification using Hybrid
CNN-Transformer-Ensemble Architectures
ABSTRACT: Our research is motivated by the urgent global issue of a large population
affected by retinal diseases, which are evenly distributed but underserved by
specialized medical expertise, particularly in non-urban areas. Our primary
objective is to bridge this healthcare gap by developing a comprehensive
diagnostic system capable of accurately predicting retinal diseases solely from
fundus images. However, we faced significant challenges due to limited, diverse
datasets and imbalanced class distributions. To overcome these issues, we have
devised innovative strategies. Our research introduces novel approaches,
utilizing hybrid models combining deeper Convolutional Neural Networks (CNNs),
Transformer encoders, and ensemble architectures sequentially and in parallel
to classify retinal fundus images into 20 disease labels. Our overarching goal
is to assess these advanced models' potential in practical applications, with a
strong focus on enhancing retinal disease diagnosis accuracy across a broader
spectrum of conditions. Importantly, our efforts have surpassed baseline model
results, with the C-Tran ensemble model emerging as the leader, achieving a
remarkable model score of 0.9166, surpassing the baseline score of 0.9.
Additionally, experiments with the IEViT model showcased equally promising
outcomes with improved computational efficiency. We've also demonstrated the
effectiveness of dynamic patch extraction and the integration of domain
knowledge in computer vision tasks. In summary, our research strives to
contribute significantly to retinal disease diagnosis, addressing the critical
need for accessible healthcare solutions in underserved regions while aiming
for comprehensive and accurate disease prediction.
|
2503.21468 | Tin Tran | Tin T. Tran, V. Snasel | Improvement Graph Convolution Collaborative Filtering with Weighted
addition input | null | null | 10.1007/978-3-031-21743-2_51 | null | cs.IR | http://creativecommons.org/licenses/by-sa/4.0/ | Graph Neural Networks have been extensively applied in the field of machine
learning to find features of graphs, and recommendation systems are no
exception. The ratings of users on considered items can be represented by
graphs which are input for many efficient models to find out the
characteristics of the users and the items. From these insights, relevant items
are recommended to users. However, users' decisions on the items have varying
degrees of effect on different users, and this information should be learned
so as not to be lost in the process of information mining.
In this publication, we propose to build an additional graph showing the
recommended weight of an item to a target user to improve the accuracy of GNN
models. Although the users' friendships were not recorded, their correlation
was still evident through the commonalities in consumption behavior. We build a
model WiGCN (Weighted input GCN) to describe and experiment on well-known
datasets. Conclusions will be stated after comparing our results with
state-of-the-art such as GCMC, NGCF and LightGCN. The source code is also
included at https://github.com/trantin84/WiGCN.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 12:57:33 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Tran",
"Tin T.",
""
],
[
"Snasel",
"V.",
""
]
] | TITLE: Improvement Graph Convolution Collaborative Filtering with Weighted
addition input
ABSTRACT: Graph Neural Networks have been extensively applied in the field of machine
learning to find features of graphs, and recommendation systems are no
exception. The ratings of users on considered items can be represented by
graphs which are input for many efficient models to find out the
characteristics of the users and the items. From these insights, relevant items
are recommended to users. However, users' decisions on the items have varying
degrees of effect on different users, and this information should be learned
so as not to be lost in the process of information mining.
In this publication, we propose to build an additional graph showing the
recommended weight of an item to a target user to improve the accuracy of GNN
models. Although the users' friendships were not recorded, their correlation
was still evident through the commonalities in consumption behavior. We build a
model WiGCN (Weighted input GCN) to describe and experiment on well-known
datasets. Conclusions will be stated after comparing our results with
state-of-the-art such as GCMC, NGCF and LightGCN. The source code is also
included at https://github.com/trantin84/WiGCN.
|
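As a hedged illustration of the record above (2503.21468, WiGCN), which builds an additional user-user weighted graph from commonalities in consumption behavior, the minimal sketch below derives such weights from a binary user-item interaction matrix. The cosine normalization is an assumption chosen for illustration, not necessarily the paper's exact weighting.

```python
import numpy as np

# Minimal sketch: derive user-user correlation weights from commonalities in
# consumption behaviour. Cosine similarity over binary interaction rows is an
# assumption chosen for illustration, not necessarily the paper's weighting.

def user_user_weights(R: np.ndarray) -> np.ndarray:
    """Cosine similarity between users' interaction vectors (rows of R)."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    norms[norms == 0] = 1.0               # guard against users with no interactions
    R_normed = R / norms
    S = R_normed @ R_normed.T             # user-user similarity matrix
    np.fill_diagonal(S, 0.0)              # drop self-loops
    return S

if __name__ == "__main__":
    # 4 users x 5 items, 1 = observed interaction
    R = np.array([[1, 0, 1, 0, 1],
                  [1, 1, 0, 0, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 0, 1, 0]], dtype=float)
    print(np.round(user_user_weights(R), 2))
```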
2503.21471 | Tin Tran | Loc Tan Nguyen, Tin T. Tran | CombiGCN: An effective GCN model for Recommender System | null | null | 10.1007/978-981-97-0669-3_11 | null | cs.IR | http://creativecommons.org/licenses/by-sa/4.0/ | Graph Neural Networks (GNNs) have opened up a potential line of research for
collaborative filtering (CF). The key power of GNNs is based on injecting
collaborative signal into user and item embeddings, which will then contain
information about user-item interactions. However, there are still
some unsatisfactory points for a CF model that GNNs could have done better. The
collaborative signal is extracted through an implicit feedback matrix that is
essentially built on top of the message-passing architecture of GNNs, and it
only helps to update an embedding based on the values of the neighboring item
(or user) embeddings. By identifying the
similarity weight of users through their interaction history, a key concept of
CF, we endeavor to build a user-user weighted connection graph based on their
similarity weight.
In this study, we propose a recommendation framework, CombiGCN, in which item
embeddings are only linearly propagated on the user-item interaction graph,
while user embeddings are propagated simultaneously on both the user-user
weighted connection graph and the user-item interaction graph with Light
Graph Convolution (LGC) and combined in a simpler method by using the weighted
sum of the embeddings for each layer. We also conducted experiments comparing
CombiGCN with several state-of-the-art models on three real-world datasets.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 13:03:27 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Nguyen",
"Loc Tan",
""
],
[
"Tran",
"Tin T.",
""
]
] | TITLE: CombiGCN: An effective GCN model for Recommender System
ABSTRACT: Graph Neural Networks (GNNs) have opened up a potential line of research for
collaborative filtering (CF). The key power of GNNs is based on injecting
collaborative signal into user and item embeddings, which will then contain
information about user-item interactions. However, there are still
some unsatisfactory points for a CF model that GNNs could have done better. The
collaborative signal is extracted through an implicit feedback matrix that is
essentially built on top of the message-passing architecture of GNNs, and it
only helps to update an embedding based on the values of the neighboring item
(or user) embeddings. By identifying the
similarity weight of users through their interaction history, a key concept of
CF, we endeavor to build a user-user weighted connection graph based on their
similarity weight.
In this study, we propose a recommendation framework, CombiGCN, in which item
embeddings are only linearly propagated on the user-item interaction graph,
while user embeddings are propagated simultaneously on both the user-user
weighted connection graph and the user-item interaction graph with Light
Graph Convolution (LGC) and combined in a simpler method by using the weighted
sum of the embeddings for each layer. We also conducted experiments comparing
CombiGCN with several state-of-the-art models on three real-world datasets.
|
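The record above (2503.21471, CombiGCN) describes propagating embeddings with Light Graph Convolution (LGC) and combining the per-layer embeddings with a weighted sum. The sketch below shows that propagation scheme in minimal form; the symmetric normalization and uniform layer weights are standard LightGCN-style assumptions, not the paper's exact configuration.

```python
import numpy as np

# Minimal Light Graph Convolution (LGC) sketch: embeddings are linearly propagated
# over a normalized adjacency matrix and the per-layer embeddings are combined with
# a weighted sum. Symmetric normalization and uniform layer weights are standard
# LightGCN-style assumptions, not the paper's exact configuration.

def normalize_adj(A: np.ndarray) -> np.ndarray:
    """Symmetric normalization D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = 1.0 / np.sqrt(d[nz])
    return (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def light_gcn(A: np.ndarray, E0: np.ndarray, n_layers: int = 3) -> np.ndarray:
    """Propagate embeddings and return the weighted sum over layers 0..n_layers."""
    A_hat = normalize_adj(A)
    alpha = np.full(n_layers + 1, 1.0 / (n_layers + 1))  # uniform layer weights
    E, out = E0, alpha[0] * E0
    for k in range(1, n_layers + 1):
        E = A_hat @ E                   # linear propagation, no nonlinearity
        out = out + alpha[k] * E
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((6, 6)) > 0.6).astype(float)
    A = np.maximum(A, A.T)              # make the toy graph symmetric
    E0 = rng.normal(size=(6, 8))        # initial 8-dimensional embeddings
    print(light_gcn(A, E0).shape)       # (6, 8)
```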
2503.21496 | Kai Wang | Huacheng Li, Jingyong Su, Kai Wang | Advancing CAN Network Security through RBM-Based Synthetic Attack Data
Generation for Intrusion Detection Systems | 11 pages, 10 figures, 7 tables | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid development of network technologies and industrial intelligence has
augmented the connectivity and intelligence within the automotive industry.
Notably, in the Internet of Vehicles (IoV), the Controller Area Network (CAN),
which is crucial for the communication of electronic control units but lacks
inbuilt security measures, has become extremely vulnerable to severe
cybersecurity threats. Meanwhile, the efficacy of Intrusion Detection Systems
(IDS) is hampered by the scarcity of sufficient attack data for robust model
training. To overcome this limitation, we introduce a novel methodology
leveraging the Restricted Boltzmann Machine (RBM) to generate synthetic CAN
attack data, thereby producing training datasets with a more balanced sample
distribution. Specifically, we design a CAN Data Processing Module for
transforming raw CAN data into an RBM-trainable format, and a Negative Sample
Generation Module to generate data reflecting the distribution of CAN data
frames denoting network intrusions. Experimental results show the generated
data significantly improves IDS performance, with CANet accuracy rising from
0.6477 to 0.9725 and EfficientNet from 0.1067 to 0.1555. Code is available at
https://github.com/wangkai-tech23/CANDataSynthetic.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 13:33:55 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Li",
"Huacheng",
""
],
[
"Su",
"Jingyong",
""
],
[
"Wang",
"Kai",
""
]
] | TITLE: Advancing CAN Network Security through RBM-Based Synthetic Attack Data
Generation for Intrusion Detection Systems
ABSTRACT: The rapid development of network technologies and industrial intelligence has
augmented the connectivity and intelligence within the automotive industry.
Notably, in the Internet of Vehicles (IoV), the Controller Area Network (CAN),
which is crucial for the communication of electronic control units but lacks
inbuilt security measures, has become extremely vulnerable to severe
cybersecurity threats. Meanwhile, the efficacy of Intrusion Detection Systems
(IDS) is hampered by the scarcity of sufficient attack data for robust model
training. To overcome this limitation, we introduce a novel methodology
leveraging the Restricted Boltzmann Machine (RBM) to generate synthetic CAN
attack data, thereby producing training datasets with a more balanced sample
distribution. Specifically, we design a CAN Data Processing Module for
transforming raw CAN data into an RBM-trainable format, and a Negative Sample
Generation Module to generate data reflecting the distribution of CAN data
frames denoting network intrusions. Experimental results show the generated
data significantly improves IDS performance, with CANet accuracy rising from
0.6477 to 0.9725 and EfficientNet from 0.1067 to 0.1555. Code is available at
https://github.com/wangkai-tech23/CANDataSynthetic.
|
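The record above (2503.21496) generates synthetic CAN attack data with a Restricted Boltzmann Machine. The following minimal sketch shows the general pattern using scikit-learn's BernoulliRBM with Gibbs sampling on toy bit-vector frames; the 64-bit frame encoding, hyperparameters, and sampling schedule are assumptions, not the paper's CAN Data Processing or Negative Sample Generation modules.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Minimal sketch: fit an RBM on binarized "attack" frames and draw synthetic frames
# with Gibbs sampling. The 64-bit frame encoding, hyperparameters, and number of
# Gibbs steps are assumptions for illustration, not the paper's modules.

rng = np.random.default_rng(0)
real_frames = (rng.random((200, 64)) > 0.7).astype(np.float64)  # toy binarized frames

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(real_frames)

# Start Gibbs chains from a few real frames and run several steps to obtain samples.
samples = real_frames[:10].copy()
for _ in range(50):
    samples = rbm.gibbs(samples)

synthetic_frames = (samples > 0.5).astype(int)
print(synthetic_frames.shape)  # (10, 64)
```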
2503.21501 | Brett Levac | Brett Levac, Ajil Jalal, Kannan Ramchandran, Jonathan I. Tamir | Double Blind Imaging with Generative Modeling | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Blind inverse problems in imaging arise from uncertainties in the system used
to collect (noisy) measurements of images. Recovering clean images from these
measurements typically requires identifying the imaging system, either
implicitly or explicitly. A common solution leverages generative models as
priors for both the images and the imaging system parameters (e.g., a class of
point spread functions). To learn these priors in a straightforward manner
requires access to a dataset of clean images as well as samples of the imaging
system. We propose an AmbientGAN-based generative technique to identify the
distribution of parameters in unknown imaging systems, using only unpaired
clean images and corrupted measurements. This learned distribution can then be
used in model-based recovery algorithms to solve blind inverse problems such as
blind deconvolution. We successfully demonstrate our technique for learning
Gaussian blur and motion blur priors from noisy measurements and show their
utility in solving blind deconvolution with diffusion posterior sampling.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 13:40:49 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Levac",
"Brett",
""
],
[
"Jalal",
"Ajil",
""
],
[
"Ramchandran",
"Kannan",
""
],
[
"Tamir",
"Jonathan I.",
""
]
] | TITLE: Double Blind Imaging with Generative Modeling
ABSTRACT: Blind inverse problems in imaging arise from uncertainties in the system used
to collect (noisy) measurements of images. Recovering clean images from these
measurements typically requires identifying the imaging system, either
implicitly or explicitly. A common solution leverages generative models as
priors for both the images and the imaging system parameters (e.g., a class of
point spread functions). To learn these priors in a straightforward manner
requires access to a dataset of clean images as well as samples of the imaging
system. We propose an AmbientGAN-based generative technique to identify the
distribution of parameters in unknown imaging systems, using only unpaired
clean images and corrupted measurements. This learned distribution can then be
used in model-based recovery algorithms to solve blind inverse problems such as
blind deconvolution. We successfully demonstrate our technique for learning
Gaussian blur and motion blur priors from noisy measurements and show their
utility in solving blind deconvolution with diffusion posterior sampling.
|
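The record above (2503.21501) learns a distribution over blur kernels using only corrupted measurements, with candidate kernels entering through the forward measurement model. The sketch below illustrates only that forward model (Gaussian blur plus noise); the Gaussian family and noise level are assumptions, and the AmbientGAN training loop itself is not shown.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal sketch of the forward measurement model only: a candidate blur kernel
# (here an isotropic Gaussian whose width a generator would produce) is applied to
# a clean image and noise is added, giving the corrupted measurement that would be
# compared to real data. The Gaussian family and noise level are assumptions.

def gaussian_kernel(sigma: float, size: int = 15) -> np.ndarray:
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def measure(clean_image: np.ndarray, sigma: float, noise_std: float = 0.01) -> np.ndarray:
    """Simulate y = k_sigma * x + n for a candidate kernel width sigma."""
    blurred = fftconvolve(clean_image, gaussian_kernel(sigma), mode="same")
    return blurred + np.random.default_rng(0).normal(0.0, noise_std, blurred.shape)

if __name__ == "__main__":
    x = np.zeros((64, 64))
    x[24:40, 24:40] = 1.0               # toy "clean" image
    y = measure(x, sigma=2.5)
    print(y.shape, round(float(y.max()), 3))
```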
2503.21504 | Junsong Li | Yuxue Hu, Junsong Li, Meixuan Chen, Dongyu Su, Tongguan Wang, Ying Sha | Keyword-Oriented Multimodal Modeling for Euphemism Identification | null | null | null | null | cs.CL cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Euphemism identification deciphers the true meaning of euphemisms, such as
linking "weed" (euphemism) to "marijuana" (target keyword) in illicit texts,
aiding content moderation and combating underground markets. While existing
methods are primarily text-based, the rise of social media highlights the need
for multimodal analysis, incorporating text, images, and audio. However, the
lack of multimodal datasets for euphemisms limits further research. To address
this, we regard euphemisms and their corresponding target keywords as keywords
and first introduce a keyword-oriented multimodal corpus of euphemisms
(KOM-Euph), involving three datasets (Drug, Weapon, and Sexuality), including
text, images, and speech. We further propose a keyword-oriented multimodal
euphemism identification method (KOM-EI), which uses cross-modal feature
alignment and dynamic fusion modules to explicitly utilize the visual and audio
features of the keywords for efficient euphemism identification. Extensive
experiments demonstrate that KOM-EI outperforms state-of-the-art models and
large language models, and show the importance of our multimodal datasets.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 13:45:35 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Hu",
"Yuxue",
""
],
[
"Li",
"Junsong",
""
],
[
"Chen",
"Meixuan",
""
],
[
"Su",
"Dongyu",
""
],
[
"Wang",
"Tongguan",
""
],
[
"Sha",
"Ying",
""
]
] | TITLE: Keyword-Oriented Multimodal Modeling for Euphemism Identification
ABSTRACT: Euphemism identification deciphers the true meaning of euphemisms, such as
linking "weed" (euphemism) to "marijuana" (target keyword) in illicit texts,
aiding content moderation and combating underground markets. While existing
methods are primarily text-based, the rise of social media highlights the need
for multimodal analysis, incorporating text, images, and audio. However, the
lack of multimodal datasets for euphemisms limits further research. To address
this, we regard euphemisms and their corresponding target keywords as keywords
and first introduce a keyword-oriented multimodal corpus of euphemisms
(KOM-Euph), involving three datasets (Drug, Weapon, and Sexuality), including
text, images, and speech. We further propose a keyword-oriented multimodal
euphemism identification method (KOM-EI), which uses cross-modal feature
alignment and dynamic fusion modules to explicitly utilize the visual and audio
features of the keywords for efficient euphemism identification. Extensive
experiments demonstrate that KOM-EI outperforms state-of-the-art models and
large language models, and show the importance of our multimodal datasets.
|
2503.21505 | Yue Li | Yue Li, Meng Tian, Zhenyu Lin, Jiangtong Zhu, Dechang Zhu, Haiqiang
Liu, Zining Wang, Yueyi Zhang, Zhiwei Xiong, Xinhai Zhao | Fine-Grained Evaluation of Large Vision-Language Models in Autonomous
Driving | null | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing benchmarks for Vision-Language Models (VLMs) on autonomous driving
(AD) primarily assess interpretability through open-form visual question
answering (QA) within coarse-grained tasks, which remain insufficient to assess
capabilities in complex driving scenarios. To this end, we introduce
$\textbf{VLADBench}$, a challenging and fine-grained dataset featuring
closed-form QAs that progress from static foundational knowledge and elements to
advanced reasoning for dynamic on-road situations. The elaborate
$\textbf{VLADBench}$ spans 5 key domains: Traffic Knowledge Understanding,
General Element Recognition, Traffic Graph Generation, Target Attribute
Comprehension, and Ego Decision-Making and Planning. These domains are further
broken down into 11 secondary aspects and 29 tertiary tasks for a granular
evaluation. A thorough assessment of general and domain-specific (DS) VLMs on
this benchmark reveals both their strengths and critical limitations in AD
contexts. To further exploit the cognitive and reasoning interactions among the
5 domains for AD understanding, we start from a small-scale VLM and train the
DS models on individual domain datasets (collected from 1.4M DS QAs across
public sources). The experimental results demonstrate that the proposed
benchmark provides a crucial step toward a more comprehensive assessment of
VLMs in AD, paving the way for the development of more cognitively
sophisticated and reasoning-capable AD systems.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 13:45:47 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Li",
"Yue",
""
],
[
"Tian",
"Meng",
""
],
[
"Lin",
"Zhenyu",
""
],
[
"Zhu",
"Jiangtong",
""
],
[
"Zhu",
"Dechang",
""
],
[
"Liu",
"Haiqiang",
""
],
[
"Wang",
"Zining",
""
],
[
"Zhang",
"Yueyi",
""
],
[
"Xiong",
"Zhiwei",
""
],
[
"Zhao",
"Xinhai",
""
]
] | TITLE: Fine-Grained Evaluation of Large Vision-Language Models in Autonomous
Driving
ABSTRACT: Existing benchmarks for Vision-Language Models (VLMs) on autonomous driving
(AD) primarily assess interpretability through open-form visual question
answering (QA) within coarse-grained tasks, which remain insufficient to assess
capabilities in complex driving scenarios. To this end, we introduce
$\textbf{VLADBench}$, a challenging and fine-grained dataset featuring
closed-form QAs that progress from static foundational knowledge and elements to
advanced reasoning for dynamic on-road situations. The elaborate
$\textbf{VLADBench}$ spans 5 key domains: Traffic Knowledge Understanding,
General Element Recognition, Traffic Graph Generation, Target Attribute
Comprehension, and Ego Decision-Making and Planning. These domains are further
broken down into 11 secondary aspects and 29 tertiary tasks for a granular
evaluation. A thorough assessment of general and domain-specific (DS) VLMs on
this benchmark reveals both their strengths and critical limitations in AD
contexts. To further exploit the cognitive and reasoning interactions among the
5 domains for AD understanding, we start from a small-scale VLM and train the
DS models on individual domain datasets (collected from 1.4M DS QAs across
public sources). The experimental results demonstrate that the proposed
benchmark provides a crucial step toward a more comprehensive assessment of
VLMs in AD, paving the way for the development of more cognitively
sophisticated and reasoning-capable AD systems.
|
2503.21513 | Ana-Maria Bucur | Ana-Maria Bucur, Andreea-Codrina Moldovan, Krutika Parvatikar, Marcos
Zampieri, Ashiqur R. KhudaBukhsh, Liviu P. Dinu | Datasets for Depression Modeling in Social Media: An Overview | Accepted to CLPsych Workshop, NAACL 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Depression is the most common mental health disorder, and its prevalence
increased during the COVID-19 pandemic. As one of the most extensively
researched psychological conditions, recent research has increasingly focused
on leveraging social media data to enhance traditional methods of depression
screening. This paper addresses the growing interest in interdisciplinary
research on depression, and aims to support early-career researchers by
providing a comprehensive and up-to-date list of datasets for analyzing and
predicting depression through social media data. We present an overview of
datasets published between 2019 and 2024. We also make the comprehensive list
of datasets available online as a continuously updated resource, with the hope
that it will facilitate further interdisciplinary research into the linguistic
expressions of depression on social media.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 14:03:25 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Bucur",
"Ana-Maria",
""
],
[
"Moldovan",
"Andreea-Codrina",
""
],
[
"Parvatikar",
"Krutika",
""
],
[
"Zampieri",
"Marcos",
""
],
[
"KhudaBukhsh",
"Ashiqur R.",
""
],
[
"Dinu",
"Liviu P.",
""
]
] | TITLE: Datasets for Depression Modeling in Social Media: An Overview
ABSTRACT: Depression is the most common mental health disorder, and its prevalence
increased during the COVID-19 pandemic. As one of the most extensively
researched psychological conditions, recent research has increasingly focused
on leveraging social media data to enhance traditional methods of depression
screening. This paper addresses the growing interest in interdisciplinary
research on depression, and aims to support early-career researchers by
providing a comprehensive and up-to-date list of datasets for analyzing and
predicting depression through social media data. We present an overview of
datasets published between 2019 and 2024. We also make the comprehensive list
of datasets available online as a continuously updated resource, with the hope
that it will facilitate further interdisciplinary research into the linguistic
expressions of depression on social media.
|
2503.21525 | Yuxi Hu | Yuxi Hu, Jun Zhang, Zhe Zhang, Rafael Weilharter, Yuchen Rao, Kuangyi
Chen, Runze Yuan, Friedrich Fraundorfer | ICG-MVSNet: Learning Intra-view and Cross-view Relationships for
Guidance in Multi-View Stereo | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-view Stereo (MVS) aims to estimate depth and reconstruct 3D point
clouds from a series of overlapping images. Recent learning-based MVS
frameworks overlook the geometric information embedded in features and
correlations, leading to weak cost matching. In this paper, we propose
ICG-MVSNet, which explicitly integrates intra-view and cross-view relationships
for depth estimation. Specifically, we develop an intra-view feature fusion
module that leverages the feature coordinate correlations within a single image
to enhance robust cost matching. Additionally, we introduce a lightweight
cross-view aggregation module that efficiently utilizes the contextual
information from volume correlations to guide regularization. Our method is
evaluated on the DTU dataset and Tanks and Temples benchmark, consistently
achieving competitive performance against state-of-the-art works, while
requiring lower computational resources.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 14:13:31 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Hu",
"Yuxi",
""
],
[
"Zhang",
"Jun",
""
],
[
"Zhang",
"Zhe",
""
],
[
"Weilharter",
"Rafael",
""
],
[
"Rao",
"Yuchen",
""
],
[
"Chen",
"Kuangyi",
""
],
[
"Yuan",
"Runze",
""
],
[
"Fraundorfer",
"Friedrich",
""
]
] | TITLE: ICG-MVSNet: Learning Intra-view and Cross-view Relationships for
Guidance in Multi-View Stereo
ABSTRACT: Multi-view Stereo (MVS) aims to estimate depth and reconstruct 3D point
clouds from a series of overlapping images. Recent learning-based MVS
frameworks overlook the geometric information embedded in features and
correlations, leading to weak cost matching. In this paper, we propose
ICG-MVSNet, which explicitly integrates intra-view and cross-view relationships
for depth estimation. Specifically, we develop an intra-view feature fusion
module that leverages the feature coordinate correlations within a single image
to enhance robust cost matching. Additionally, we introduce a lightweight
cross-view aggregation module that efficiently utilizes the contextual
information from volume correlations to guide regularization. Our method is
evaluated on the DTU dataset and Tanks and Temples benchmark, consistently
achieving competitive performance against state-of-the-art works, while
requiring lower computational resources.
|
2503.21526 | Christine Bang | Christine W. Bang and Vanessa Didelez | Constraint-based causal discovery with tiered background knowledge and
latent variables in single or overlapping datasets | Accepted for the 4th Conference on Causal Learning and Reasoning
(CLeaR 2025) | null | null | null | stat.ML cs.LG math.ST stat.TH | http://creativecommons.org/licenses/by/4.0/ | In this paper we consider the use of tiered background knowledge within
constraint-based causal discovery. Our focus is on settings relaxing causal
sufficiency, i.e. allowing for latent variables which may arise because
relevant information could not be measured at all, or not jointly, as in the
case of multiple overlapping datasets. We first present novel insights into the
properties of the 'tiered FCI' (tFCI) algorithm. Building on this, we introduce
a new extension of the IOD (integrating overlapping datasets) algorithm
incorporating tiered background knowledge, the 'tiered IOD' (tIOD) algorithm.
We show that under full usage of the tiered background knowledge tFCI and tIOD
are sound, while simple versions of the tIOD and tFCI are sound and complete.
We further show that the tIOD algorithm can often be expected to be
considerably more efficient and informative than the IOD algorithm even beyond
the obvious restriction of the Markov equivalence classes. We provide a formal
result on the conditions for this gain in efficiency and informativeness. Our
results are accompanied by a series of examples illustrating the exact role and
usefulness of tiered background knowledge.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 14:14:21 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Bang",
"Christine W.",
""
],
[
"Didelez",
"Vanessa",
""
]
] | TITLE: Constraint-based causal discovery with tiered background knowledge and
latent variables in single or overlapping datasets
ABSTRACT: In this paper we consider the use of tiered background knowledge within
constraint based causal discovery. Our focus is on settings relaxing causal
sufficiency, i.e. allowing for latent variables which may arise because
relevant information could not be measured at all, or not jointly, as in the
case of multiple overlapping datasets. We first present novel insights into the
properties of the 'tiered FCI' (tFCI) algorithm. Building on this, we introduce
a new extension of the IOD (integrating overlapping datasets) algorithm
incorporating tiered background knowledge, the 'tiered IOD' (tIOD) algorithm.
We show that under full usage of the tiered background knowledge tFCI and tIOD
are sound, while simple versions of the tIOD and tFCI are sound and complete.
We further show that the tIOD algorithm can often be expected to be
considerably more efficient and informative than the IOD algorithm even beyond
the obvious restriction of the Markov equivalence classes. We provide a formal
result on the conditions for this gain in efficiency and informativeness. Our
results are accompanied by a series of examples illustrating the exact role and
usefulness of tiered background knowledge.
|
2503.21528 | Robert Chew | Robert Chew, Matthew R. Williams, Elan A. Segarra, Alexander J.
Preiss, Amanda Konet, Terrance D. Savitsky | Bayesian Pseudo Posterior Mechanism for Differentially Private Machine
Learning | null | null | null | null | stat.ML cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential privacy (DP) is becoming increasingly important for deployed
machine learning applications because it provides strong guarantees for
protecting the privacy of individuals whose data is used to train models.
However, DP mechanisms commonly used in machine learning tend to struggle on
many real world distributions, including highly imbalanced or small labeled
training sets. In this work, we propose a new scalable DP mechanism for deep
learning models, SWAG-PPM, by using a pseudo posterior distribution that
downweights by-record likelihood contributions proportionally to their
disclosure risks as the randomized mechanism. As a motivating example from
official statistics, we demonstrate SWAG-PPM on a workplace injury text
classification task using a highly imbalanced public dataset published by the
U.S. Occupational Safety and Health Administration (OSHA). We find that
SWAG-PPM exhibits only modest utility degradation against a non-private
comparator while greatly outperforming the industry standard DP-SGD for a
similar privacy budget.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 14:17:05 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Chew",
"Robert",
""
],
[
"Williams",
"Matthew R.",
""
],
[
"Segarra",
"Elan A.",
""
],
[
"Preiss",
"Alexander J.",
""
],
[
"Konet",
"Amanda",
""
],
[
"Savitsky",
"Terrance D.",
""
]
] | TITLE: Bayesian Pseudo Posterior Mechanism for Differentially Private Machine
Learning
ABSTRACT: Differential privacy (DP) is becoming increasingly important for deployed
machine learning applications because it provides strong guarantees for
protecting the privacy of individuals whose data is used to train models.
However, DP mechanisms commonly used in machine learning tend to struggle on
many real world distributions, including highly imbalanced or small labeled
training sets. In this work, we propose a new scalable DP mechanism for deep
learning models, SWAG-PPM, by using a pseudo posterior distribution that
downweights by-record likelihood contributions proportionally to their
disclosure risks as the randomized mechanism. As a motivating example from
official statistics, we demonstrate SWAG-PPM on a workplace injury text
classification task using a highly imbalanced public dataset published by the
U.S. Occupational Safety and Health Administration (OSHA). We find that
SWAG-PPM exhibits only modest utility degradation against a non-private
comparator while greatly outperforming the industry standard DP-SGD for a
similar privacy budget.
|
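The record above (2503.21528, SWAG-PPM) downweights by-record likelihood contributions in proportion to disclosure risk. The sketch below shows a minimal weighted-loss version of that idea for a classifier; the risk values and the linear risk-to-weight mapping are assumptions for illustration only, not the paper's mechanism.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a pseudo-posterior style loss: each record's likelihood
# contribution is downweighted according to its disclosure risk. The risk values
# and the linear risk-to-weight mapping here are assumptions for illustration.

def pseudo_posterior_loss(logits: torch.Tensor,
                          labels: torch.Tensor,
                          risk: torch.Tensor) -> torch.Tensor:
    """Weighted negative log-likelihood: higher-risk records contribute less."""
    weights = 1.0 - risk.clamp(0.0, 1.0)                  # per-record weight in [0, 1]
    per_record_nll = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_record_nll).mean()

if __name__ == "__main__":
    logits = torch.randn(8, 3)                            # 8 records, 3 classes
    labels = torch.randint(0, 3, (8,))
    risk = torch.rand(8)                                  # hypothetical disclosure risks
    print(pseudo_posterior_loss(logits, labels, risk))
```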
2503.21558 | Ruifeng Wang | Gaofeng Zhou, Rui-Feng Wang, Kangning Cui | A Local Perspective-based Model for Overlapping Community Detection | 10 pages, 3 figures, 3 tables | null | null | null | cs.SI cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community detection, which identifies densely connected node clusters with
sparse between-group links, is vital for analyzing network structure and
function in real-world systems. Most existing community detection methods based
on GCNs primarily focus on node-level information while overlooking
community-level features, leading to performance limitations on large-scale
networks. To address this issue, we propose LQ-GCN, an overlapping community
detection model from a local community perspective. LQ-GCN employs a
Bernoulli-Poisson model to construct a community affiliation matrix and form an
end-to-end detection framework. By adopting local modularity as the objective
function, the model incorporates local community information to enhance the
quality and accuracy of clustering results. Additionally, the conventional GCN
architecture is optimized to improve the model's capability to identify
overlapping communities in large-scale networks. Experimental results
demonstrate that LQ-GCN achieves up to a 33% improvement in Normalized Mutual
Information (NMI) and a 26.3% improvement in Recall compared to baseline models
across multiple real-world benchmark datasets.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 14:43:42 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhou",
"Gaofeng",
""
],
[
"Wang",
"Rui-Feng",
""
],
[
"Cui",
"Kangning",
""
]
] | TITLE: A Local Perspective-based Model for Overlapping Community Detection
ABSTRACT: Community detection, which identifies densely connected node clusters with
sparse between-group links, is vital for analyzing network structure and
function in real-world systems. Most existing community detection methods based
on GCNs primarily focus on node-level information while overlooking
community-level features, leading to performance limitations on large-scale
networks. To address this issue, we propose LQ-GCN, an overlapping community
detection model from a local community perspective. LQ-GCN employs a
Bernoulli-Poisson model to construct a community affiliation matrix and form an
end-to-end detection framework. By adopting local modularity as the objective
function, the model incorporates local community information to enhance the
quality and accuracy of clustering results. Additionally, the conventional GCN
architecture is optimized to improve the model's capability to identify
overlapping communities in large-scale networks. Experimental results
demonstrate that LQ-GCN achieves up to a 33% improvement in Normalized Mutual
Information (NMI) and a 26.3% improvement in Recall compared to baseline models
across multiple real-world benchmark datasets.
|
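The record above (2503.21558, LQ-GCN) builds its community affiliation matrix with a Bernoulli-Poisson model. The sketch below writes out the standard Bernoulli-Poisson negative log-likelihood for a dense adjacency matrix; it is a clarity-oriented illustration and not the paper's local-modularity objective or training code.

```python
import numpy as np

# Minimal sketch of the Bernoulli-Poisson affiliation model: with a non-negative
# affiliation matrix F, an edge (u, v) appears with probability 1 - exp(-F_u . F_v).
# The dense negative log-likelihood below is written for clarity, not efficiency,
# and is not the paper's local-modularity objective.

def bernoulli_poisson_nll(A: np.ndarray, F: np.ndarray, eps: float = 1e-10) -> float:
    """Negative log-likelihood of adjacency A under affiliations F (entries >= 0)."""
    S = F @ F.T                                  # F_u . F_v for every node pair
    edge_ll = np.log(1.0 - np.exp(-S) + eps)     # log P(edge present)
    non_edge_ll = -S                             # log P(edge absent)
    mask = ~np.eye(A.shape[0], dtype=bool)       # ignore self-loops
    ll = np.where(A > 0, edge_ll, non_edge_ll)[mask].sum()
    return float(-ll)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((10, 10)) > 0.7).astype(float)
    A = np.maximum(A, A.T)                       # symmetric toy graph
    F = rng.random((10, 3))                      # 10 nodes, 3 overlapping communities
    print(bernoulli_poisson_nll(A, F))
```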
2503.21562 | Jonathan Lee | Jonathan Lee, Bolivar Solarte, Chin-Hsuan Wu, Jin-Cheng Jhang, Fu-En
Wang, Yi-Hsuan Tsai, Min Sun | uLayout: Unified Room Layout Estimation for Perspective and Panoramic
Images | Accepted to WACV-2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present uLayout, a unified model for estimating room layout geometries
from both perspective and panoramic images, whereas traditional solutions
require different model designs for each image type. The key idea of our
solution is to unify both domains into the equirectangular projection,
particularly, allocating perspective images into the most suitable latitude
coordinate to effectively exploit both domains seamlessly. To address the
Field-of-View (FoV) difference between the input domains, we design uLayout
with a shared feature extractor with an extra 1D-Convolution layer to condition
each domain input differently. This conditioning allows us to efficiently
formulate a column-wise feature regression problem regardless of the FoV input.
This simple yet effective approach achieves competitive performance with
current state-of-the-art solutions and shows for the first time a single
end-to-end model for both domains. Extensive experiments in the real-world
datasets, LSUN, Matterport3D, PanoContext, and Stanford 2D-3D evidence the
contribution of our approach. Code is available at
https://github.com/JonathanLee112/uLayout.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 14:47:05 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Lee",
"Jonathan",
""
],
[
"Solarte",
"Bolivar",
""
],
[
"Wu",
"Chin-Hsuan",
""
],
[
"Jhang",
"Jin-Cheng",
""
],
[
"Wang",
"Fu-En",
""
],
[
"Tsai",
"Yi-Hsuan",
""
],
[
"Sun",
"Min",
""
]
] | TITLE: uLayout: Unified Room Layout Estimation for Perspective and Panoramic
Images
ABSTRACT: We present uLayout, a unified model for estimating room layout geometries
from both perspective and panoramic images, whereas traditional solutions
require different model designs for each image type. The key idea of our
solution is to unify both domains into the equirectangular projection,
particularly, allocating perspective images into the most suitable latitude
coordinate to effectively exploit both domains seamlessly. To address the
Field-of-View (FoV) difference between the input domains, we design uLayout
with a shared feature extractor with an extra 1D-Convolution layer to condition
each domain input differently. This conditioning allows us to efficiently
formulate a column-wise feature regression problem regardless of the FoV input.
This simple yet effective approach achieves competitive performance with
current state-of-the-art solutions and shows for the first time a single
end-to-end model for both domains. Extensive experiments in the real-world
datasets, LSUN, Matterport3D, PanoContext, and Stanford 2D-3D evidence the
contribution of our approach. Code is available at
https://github.com/JonathanLee112/uLayout.
|
2503.21571 | Yinfeng Yu | Alimjan Mattursun, Liejun Wang, Yinfeng Yu, Chunyang Ma | Magnitude-Phase Dual-Path Speech Enhancement Network based on
Self-Supervised Embedding and Perceptual Contrast Stretch Boosting | Main paper (6 pages). Accepted for publication by ICME 2025 | null | null | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Speech self-supervised learning (SSL) has made great progress in various
speech processing tasks, but there is still room for improvement in speech
enhancement (SE). This paper presents BSP-MPNet, a dual-path framework that
combines self-supervised features with magnitude-phase information for SE. The
approach starts by applying the perceptual contrast stretching (PCS) algorithm
to enhance the magnitude-phase spectrum. A magnitude-phase 2D coarse (MP-2DC)
encoder then extracts coarse features from the enhanced spectrum. Next, a
feature-separating self-supervised learning (FS-SSL) model generates
self-supervised embeddings for the magnitude and phase components separately.
These embeddings are fused to create cross-domain feature representations.
Finally, two parallel RNN-enhanced multi-attention (REMA) mask decoders refine
the features, apply them to the mask, and reconstruct the speech signal. We
evaluate BSP-MPNet on the VoiceBank+DEMAND and WHAMR! datasets. Experimental
results show that BSP-MPNet outperforms existing methods under various noise
conditions, providing new directions for self-supervised speech enhancement
research. The implementation of the BSP-MPNet code is available
online at https://github.com/AlimMat/BSP-MPNet.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 14:52:06 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Mattursun",
"Alimjan",
""
],
[
"Wang",
"Liejun",
""
],
[
"Yu",
"Yinfeng",
""
],
[
"Ma",
"Chunyang",
""
]
] | TITLE: Magnitude-Phase Dual-Path Speech Enhancement Network based on
Self-Supervised Embedding and Perceptual Contrast Stretch Boosting
ABSTRACT: Speech self-supervised learning (SSL) has made great progress in various
speech processing tasks, but there is still room for improvement in speech
enhancement (SE). This paper presents BSP-MPNet, a dual-path framework that
combines self-supervised features with magnitude-phase information for SE. The
approach starts by applying the perceptual contrast stretching (PCS) algorithm
to enhance the magnitude-phase spectrum. A magnitude-phase 2D coarse (MP-2DC)
encoder then extracts coarse features from the enhanced spectrum. Next, a
feature-separating self-supervised learning (FS-SSL) model generates
self-supervised embeddings for the magnitude and phase components separately.
These embeddings are fused to create cross-domain feature representations.
Finally, two parallel RNN-enhanced multi-attention (REMA) mask decoders refine
the features, apply them to the mask, and reconstruct the speech signal. We
evaluate BSP-MPNet on the VoiceBank+DEMAND and WHAMR! datasets. Experimental
results show that BSP-MPNet outperforms existing methods under various noise
conditions, providing new directions for self-supervised speech enhancement
research. The implementation of the BSP-MPNet code is available
online at https://github.com/AlimMat/BSP-MPNet.
|
2503.21581 | Liuyue Xie | Liuyue Xie, Jiancong Guo, Ozan Cakmakci, Andre Araujo, Laszlo A. Jeni,
Zhiheng Jia | AlignDiff: Learning Physically-Grounded Camera Alignment via Diffusion | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate camera calibration is a fundamental task for 3D perception,
especially when dealing with real-world, in-the-wild environments where complex
optical distortions are common. Existing methods often rely on pre-rectified
images or calibration patterns, which limits their applicability and
flexibility. In this work, we introduce a novel framework that addresses these
challenges by jointly modeling camera intrinsic and extrinsic parameters using
a generic ray camera model. Unlike previous approaches, AlignDiff shifts focus
from semantic to geometric features, enabling more accurate modeling of local
distortions. We propose AlignDiff, a diffusion model conditioned on geometric
priors, enabling the simultaneous estimation of camera distortions and scene
geometry. To enhance distortion prediction, we incorporate edge-aware
attention, focusing the model on geometric features around image edges, rather
than semantic content. Furthermore, to enhance generalizability to real-world
captures, we incorporate a large database of ray-traced lenses containing over
three thousand samples. This database characterizes the distortion inherent in
a diverse variety of lens forms. Our experiments demonstrate that the proposed
method significantly reduces the angular error of estimated ray bundles by ~8.2
degrees and overall calibration accuracy, outperforming existing approaches on
challenging, real-world datasets.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 14:59:59 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Xie",
"Liuyue",
""
],
[
"Guo",
"Jiancong",
""
],
[
"Cakmakci",
"Ozan",
""
],
[
"Araujo",
"Andre",
""
],
[
"Jeni",
"Laszlo A.",
""
],
[
"Jia",
"Zhiheng",
""
]
] | TITLE: AlignDiff: Learning Physically-Grounded Camera Alignment via Diffusion
ABSTRACT: Accurate camera calibration is a fundamental task for 3D perception,
especially when dealing with real-world, in-the-wild environments where complex
optical distortions are common. Existing methods often rely on pre-rectified
images or calibration patterns, which limits their applicability and
flexibility. In this work, we introduce a novel framework that addresses these
challenges by jointly modeling camera intrinsic and extrinsic parameters using
a generic ray camera model. Unlike previous approaches, AlignDiff shifts focus
from semantic to geometric features, enabling more accurate modeling of local
distortions. We propose AlignDiff, a diffusion model conditioned on geometric
priors, enabling the simultaneous estimation of camera distortions and scene
geometry. To enhance distortion prediction, we incorporate edge-aware
attention, focusing the model on geometric features around image edges, rather
than semantic content. Furthermore, to enhance generalizability to real-world
captures, we incorporate a large database of ray-traced lenses containing over
three thousand samples. This database characterizes the distortion inherent in
a diverse variety of lens forms. Our experiments demonstrate that the proposed
method significantly reduces the angular error of estimated ray bundles by ~8.2
degrees and overall calibration accuracy, outperforming existing approaches on
challenging, real-world datasets.
|
2503.21585 | Haixu Wang | Haixu Wang, Jiguo Cao | Probabilistic Functional Neural Networks | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | High-dimensional functional time series (HDFTS) are often characterized by
nonlinear trends and high spatial dimensions. Such data poses unique challenges
for modeling and forecasting due to the nonlinearity, nonstationarity, and high
dimensionality. We propose a novel probabilistic functional neural network
(ProFnet) to address these challenges. ProFnet integrates the strengths of
feedforward and deep neural networks with probabilistic modeling. The model
generates probabilistic forecasts using Monte Carlo sampling and also enables
the quantification of uncertainty in predictions. While capturing both temporal
and spatial dependencies across multiple regions, ProFnet offers a scalable and
unified solution for large datasets. Applications to Japan's mortality rates
demonstrate superior performance. This approach enhances predictive accuracy
and provides interpretable uncertainty estimates, making it a valuable tool for
forecasting complex high-dimensional functional data and HDFTS.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 15:01:37 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wang",
"Haixu",
""
],
[
"Cao",
"Jiguo",
""
]
] | TITLE: Probabilistic Functional Neural Networks
ABSTRACT: High-dimensional functional time series (HDFTS) are often characterized by
nonlinear trends and high spatial dimensions. Such data poses unique challenges
for modeling and forecasting due to the nonlinearity, nonstationarity, and high
dimensionality. We propose a novel probabilistic functional neural network
(ProFnet) to address these challenges. ProFnet integrates the strengths of
feedforward and deep neural networks with probabilistic modeling. The model
generates probabilistic forecasts using Monte Carlo sampling and also enables
the quantification of uncertainty in predictions. While capturing both temporal
and spatial dependencies across multiple regions, ProFnet offers a scalable and
unified solution for large datasets. Applications to Japan's mortality rates
demonstrate superior performance. This approach enhances predictive accuracy
and provides interpretable uncertainty estimates, making it a valuable tool for
forecasting complex high-dimensional functional data and HDFTS.
|
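The record above (2503.21585, ProFnet) generates probabilistic forecasts via Monte Carlo sampling. The sketch below uses Monte Carlo dropout as a stand-in sampling mechanism to show how repeated stochastic forward passes yield a mean forecast and an uncertainty interval; the tiny MLP, dropout rate, and 90% interval are assumptions, not the ProFnet architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of Monte Carlo forecast sampling using MC dropout as a stand-in
# sampling mechanism: dropout stays stochastic at prediction time, repeated forward
# passes are drawn, and their spread quantifies uncertainty. The tiny MLP, dropout
# rate, and 90% interval are assumptions, not the ProFnet architecture.

class TinyForecaster(nn.Module):
    def __init__(self, n_in: int, n_out: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 64), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(64, n_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def mc_forecast(model: nn.Module, x: torch.Tensor, n_samples: int = 100):
    """Return the mean forecast and a 90% interval from Monte Carlo samples."""
    model.train()                                # keep dropout active while sampling
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.quantile(0.05, dim=0), draws.quantile(0.95, dim=0)

if __name__ == "__main__":
    model = TinyForecaster(n_in=12, n_out=5)     # e.g. 12 past values -> 5-step forecast
    x = torch.randn(3, 12)                       # 3 series
    mean, lo, hi = mc_forecast(model, x)
    print(mean.shape, lo.shape, hi.shape)        # three tensors of shape (3, 5)
```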
2503.21591 | Yarden Sharon | Yarden Sharon, Alex Geftler, Hanna Kossowsky Lev, and Ilana Nisky | Dataset and Analysis of Long-Term Skill Acquisition in Robot-Assisted
Minimally Invasive Surgery | 12 pages, 8 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Objective: We aim to investigate long-term robotic surgical skill acquisition
among surgical residents and the effects of training intervals and fatigue on
performance. Methods: For six months, surgical residents participated in three
training sessions once a month, surrounding a single 26-hour hospital shift. In
each shift, they participated in training sessions scheduled before, during,
and after the shift. In each training session, they performed three dry-lab
training tasks: Ring Tower Transfer, Knot-Tying, and Suturing. We collected a
comprehensive dataset, including videos synchronized with kinematic data,
activity tracking, and scans of the suturing pads. Results: We collected a
dataset of 972 trials performed by 18 residents of different surgical
specializations. Participants demonstrated consistent performance improvement
across all tasks. In addition, we found variations in between-shift learning
and forgetting across metrics and tasks, and hints for possible effects of
fatigue. Conclusion: The findings from our first analysis shed light on the
long-term learning processes of robotic surgical skills with extended intervals
and varying levels of fatigue. Significance: This study lays the groundwork for
future research aimed at optimizing training protocols and enhancing AI
applications in surgery, ultimately contributing to improved patient outcomes.
The dataset will be made available upon acceptance of our journal submission.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 15:08:03 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Sharon",
"Yarden",
""
],
[
"Geftler",
"Alex",
""
],
[
"Lev",
"Hanna Kossowsky",
""
],
[
"Nisky",
"Ilana",
""
]
] | TITLE: Dataset and Analysis of Long-Term Skill Acquisition in Robot-Assisted
Minimally Invasive Surgery
ABSTRACT: Objective: We aim to investigate long-term robotic surgical skill acquisition
among surgical residents and the effects of training intervals and fatigue on
performance. Methods: For six months, surgical residents participated in three
training sessions once a month, surrounding a single 26-hour hospital shift. In
each shift, they participated in training sessions scheduled before, during,
and after the shift. In each training session, they performed three dry-lab
training tasks: Ring Tower Transfer, Knot-Tying, and Suturing. We collected a
comprehensive dataset, including videos synchronized with kinematic data,
activity tracking, and scans of the suturing pads. Results: We collected a
dataset of 972 trials performed by 18 residents of different surgical
specializations. Participants demonstrated consistent performance improvement
across all tasks. In addition, we found variations in between-shift learning
and forgetting across metrics and tasks, and hints for possible effects of
fatigue. Conclusion: The findings from our first analysis shed light on the
long-term learning processes of robotic surgical skills with extended intervals
and varying levels of fatigue. Significance: This study lays the groundwork for
future research aimed at optimizing training protocols and enhancing AI
applications in surgery, ultimately contributing to improved patient outcomes.
The dataset will be made available upon acceptance of our journal submission.
|
2503.21622 | Lars Heckler-Kram | Lars Heckler-Kram, Jan-Hendrik Neudeck, Ulla Scheler, Rebecca K\"onig,
Carsten Steger | The MVTec AD 2 Dataset: Advanced Scenarios for Unsupervised Anomaly
Detection | paper under review; dataset first released for the VAND3.0 challenge
@ CVPR 2025 https://sites.google.com/view/vand30cvpr2025/challenge | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In recent years, performance on existing anomaly detection benchmarks like
MVTec AD and VisA has started to saturate in terms of segmentation AU-PRO, with
state-of-the-art models often competing in the range of less than one
percentage point. This lack of discriminatory power prevents a meaningful
comparison of models and thus hinders progress of the field, especially when
considering the inherent stochastic nature of machine learning results. We
present MVTec AD 2, a collection of eight anomaly detection scenarios with more
than 8000 high-resolution images. It comprises challenging and highly relevant
industrial inspection use cases that have not been considered in previous
datasets, including transparent and overlapping objects, dark-field and back
light illumination, objects with high variance in the normal data, and
extremely small defects. We provide comprehensive evaluations of
state-of-the-art methods and show that their performance remains below 60%
average AU-PRO. Additionally, our dataset provides test scenarios with lighting
condition changes to assess the robustness of methods under real-world
distribution shifts. We host a publicly accessible evaluation server that holds
the pixel-precise ground truth of the test set (https://benchmark.mvtec.com/).
All image data is available at
https://www.mvtec.com/company/research/datasets/mvtec-ad-2.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 15:41:46 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Heckler-Kram",
"Lars",
""
],
[
"Neudeck",
"Jan-Hendrik",
""
],
[
"Scheler",
"Ulla",
""
],
[
"König",
"Rebecca",
""
],
[
"Steger",
"Carsten",
""
]
] | TITLE: The MVTec AD 2 Dataset: Advanced Scenarios for Unsupervised Anomaly
Detection
ABSTRACT: In recent years, performance on existing anomaly detection benchmarks like
MVTec AD and VisA has started to saturate in terms of segmentation AU-PRO, with
state-of-the-art models often competing in the range of less than one
percentage point. This lack of discriminatory power prevents a meaningful
comparison of models and thus hinders progress of the field, especially when
considering the inherent stochastic nature of machine learning results. We
present MVTec AD 2, a collection of eight anomaly detection scenarios with more
than 8000 high-resolution images. It comprises challenging and highly relevant
industrial inspection use cases that have not been considered in previous
datasets, including transparent and overlapping objects, dark-field and back
light illumination, objects with high variance in the normal data, and
extremely small defects. We provide comprehensive evaluations of
state-of-the-art methods and show that their performance remains below 60%
average AU-PRO. Additionally, our dataset provides test scenarios with lighting
condition changes to assess the robustness of methods under real-world
distribution shifts. We host a publicly accessible evaluation server that holds
the pixel-precise ground truth of the test set (https://benchmark.mvtec.com/).
All image data is available at
https://www.mvtec.com/company/research/datasets/mvtec-ad-2.
|
2503.21629 | Saeyoung Rho | Saeyoung Rho, Andrew Tang, Noah Bergam, Rachel Cummings, Vishal Misra | ClusterSC: Advancing Synthetic Control with Donor Selection | 35 pages, 11 figures, to be published in Proceedings of The 28th
International Conference on Artificial Intelligence and Statistics (AIStats)
2025 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In causal inference with observational studies, synthetic control (SC) has
emerged as a prominent tool. SC has traditionally been applied to
aggregate-level datasets, but more recent work has extended its use to
individual-level data. As they contain a greater number of observed units, this
shift introduces the curse of dimensionality to SC. To address this, we propose
Cluster Synthetic Control (ClusterSC), based on the idea that groups of
individuals may exist where behavior aligns internally but diverges between
groups. ClusterSC incorporates a clustering step to select only the relevant
donors for the target. We provide theoretical guarantees on the improvements
induced by ClusterSC, supported by empirical demonstrations on synthetic and
real-world datasets. The results indicate that ClusterSC consistently
outperforms classical SC approaches.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 15:50:32 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Rho",
"Saeyoung",
""
],
[
"Tang",
"Andrew",
""
],
[
"Bergam",
"Noah",
""
],
[
"Cummings",
"Rachel",
""
],
[
"Misra",
"Vishal",
""
]
] | TITLE: ClusterSC: Advancing Synthetic Control with Donor Selection
ABSTRACT: In causal inference with observational studies, synthetic control (SC) has
emerged as a prominent tool. SC has traditionally been applied to
aggregate-level datasets, but more recent work has extended its use to
individual-level data. As they contain a greater number of observed units, this
shift introduces the curse of dimensionality to SC. To address this, we propose
Cluster Synthetic Control (ClusterSC), based on the idea that groups of
individuals may exist where behavior aligns internally but diverges between
groups. ClusterSC incorporates a clustering step to select only the relevant
donors for the target. We provide theoretical guarantees on the improvements
induced by ClusterSC, supported by empirical demonstrations on synthetic and
real-world datasets. The results indicate that ClusterSC consistently
outperforms classical SC approaches.
|
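A minimal sketch of the donor-selection idea described above, under stated assumptions: the target and donors are represented by their pre-intervention outcome series, donors are kept only if they fall in the target's cluster, and classic non-negative synthetic-control weights are then fit on that subset. The clustering choice (k-means), the solver, and all names are illustrative, not the paper's implementation.

import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import nnls

def cluster_synthetic_control(target_pre, donors_pre, n_clusters=3):
    """target_pre: (T,) pre-period outcomes; donors_pre: (N, T) donor outcome series."""
    # Step 1: cluster the target together with the donors on pre-period trajectories.
    stacked = np.vstack([target_pre[None, :], donors_pre])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(stacked)
    keep = np.where(labels[1:] == labels[0])[0]       # donors sharing the target's cluster
    # Step 2: classic SC step restricted to the selected donors (non-negative weights),
    # assuming at least one donor shares the target's cluster.
    w_keep, _ = nnls(donors_pre[keep].T, target_pre)  # minimize ||D^T w - y|| with w >= 0
    weights = np.zeros(donors_pre.shape[0])
    weights[keep] = w_keep
    return weights

The post-intervention counterfactual would then be weights @ donors_post, applying the same weight vector to the donors' post-period outcomes.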
2503.21634 | Yassir Lairgi | Yassir Lairgi | When Astronomy Meets AI: Manazel For Crescent Visibility Prediction in
Morocco | null | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | The accurate determination of the beginning of each Hijri month is essential
for religious, cultural, and administrative purposes. Manazel (The code and
datasets are available at https://github.com/lairgiyassir/manazel) addresses
this challenge in Morocco by leveraging 13 years of crescent visibility data to
refine the ODEH criterion, a widely used standard for lunar crescent visibility
prediction. The study integrates two key features, the Arc of Vision (ARCV) and
the total width of the crescent (W), to enhance the accuracy of lunar
visibility assessments. A machine learning approach utilizing the Logistic
Regression algorithm is employed to classify crescent visibility conditions,
achieving a predictive accuracy of 98.83%. This data-driven methodology offers
a robust and reliable framework for determining the start of the Hijri month,
comparing different data classification tools, and improving the consistency of
lunar calendar calculations in Morocco. The findings demonstrate the
effectiveness of machine learning in astronomical applications and highlight
the potential for further enhancements in the modeling of crescent visibility.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 15:56:55 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Lairgi",
"Yassir",
""
]
] | TITLE: When Astronomy Meets AI: Manazel For Crescent Visibility Prediction in
Morocco
ABSTRACT: The accurate determination of the beginning of each Hijri month is essential
for religious, cultural, and administrative purposes. Manazel (The code and
datasets are available at https://github.com/lairgiyassir/manazel) addresses
this challenge in Morocco by leveraging 13 years of crescent visibility data to
refine the ODEH criterion, a widely used standard for lunar crescent visibility
prediction. The study integrates two key features, the Arc of Vision (ARCV) and
the total width of the crescent (W), to enhance the accuracy of lunar
visibility assessments. A machine learning approach utilizing the Logistic
Regression algorithm is employed to classify crescent visibility conditions,
achieving a predictive accuracy of 98.83%. This data-driven methodology offers
a robust and reliable framework for determining the start of the Hijri month,
comparing different data classification tools, and improving the consistency of
lunar calendar calculations in Morocco. The findings demonstrate the
effectiveness of machine learning in astronomical applications and highlight
the potential for further enhancements in the modeling of crescent visibility.
|
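The abstract above describes a two-feature Logistic Regression classifier over the Arc of Vision (ARCV) and crescent width (W). A minimal sketch of that setup follows, with the file name, column labels, and split parameters as illustrative assumptions rather than details taken from the paper or its repository.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("crescent_observations.csv")   # hypothetical file of past sightings
X = df[["ARCV", "W"]]                            # Arc of Vision and crescent width
y = df["visible"]                                # 1 = crescent observed, 0 = not observed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))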
2503.21670 | Rajvee Sheth | Rajvee Sheth, Himanshu Beniwal, Mayank Singh | COMI-LINGUA: Expert Annotated Large-Scale Dataset for Multitask NLP in
Hindi-English Code-Mixing | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rapid growth of digital communication has driven the widespread use of
code-mixing, particularly Hindi-English, in multilingual communities. Existing
datasets often focus on romanized text, have limited scope, or rely on
synthetic data, which fails to capture real-world language nuances. Human
annotations are crucial for assessing the naturalness and acceptability of
code-mixed text. To address these challenges, we introduce COMI-LINGUA, the
largest manually annotated dataset for code-mixed text, comprising 100,970
instances evaluated by three expert annotators in both Devanagari and Roman
scripts. The dataset supports five fundamental NLP tasks: Language
Identification, Matrix Language Identification, Part-of-Speech Tagging, Named
Entity Recognition, and Translation. We evaluate LLMs on these tasks using
COMI-LINGUA, revealing limitations in current multilingual modeling strategies
and emphasizing the need for improved code-mixed text processing capabilities.
COMI-LINGUA is publicly available at:
https://huggingface.co/datasets/LingoIITGN/COMI-LINGUA.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 16:36:39 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Sheth",
"Rajvee",
""
],
[
"Beniwal",
"Himanshu",
""
],
[
"Singh",
"Mayank",
""
]
] | TITLE: COMI-LINGUA: Expert Annotated Large-Scale Dataset for Multitask NLP in
Hindi-English Code-Mixing
ABSTRACT: The rapid growth of digital communication has driven the widespread use of
code-mixing, particularly Hindi-English, in multilingual communities. Existing
datasets often focus on romanized text, have limited scope, or rely on
synthetic data, which fails to capture real-world language nuances. Human
annotations are crucial for assessing the naturalness and acceptability of
code-mixed text. To address these challenges, we introduce COMI-LINGUA, the
largest manually annotated dataset for code-mixed text, comprising 100,970
instances evaluated by three expert annotators in both Devanagari and Roman
scripts. The dataset supports five fundamental NLP tasks: Language
Identification, Matrix Language Identification, Part-of-Speech Tagging, Named
Entity Recognition, and Translation. We evaluate LLMs on these tasks using
COMI-LINGUA, revealing limitations in current multilingual modeling strategies
and emphasizing the need for improved code-mixed text processing capabilities.
COMI-LINGUA is publicly available at:
https://huggingface.co/datasets/LingoIITGN/COMI-LINGUA.
|
2503.21681 | Carlos Oliver Dr. | Luis Wyss, Vincent Mallet, Wissam Karroucha, Karsten Borgwardt, Carlos
Oliver | A Comprehensive Benchmark for RNA 3D Structure-Function Modeling | null | null | null | null | q-bio.BM cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | The RNA structure-function relationship has recently garnered significant
attention within the deep learning community, promising to grow in importance
as nucleic acid structure models advance. However, the absence of standardized
and accessible benchmarks for deep learning on RNA 3D structures has impeded
the development of models for RNA functional characteristics.
In this work, we introduce a set of seven benchmarking datasets for RNA
structure-function prediction, designed to address this gap. Our library builds
on the established Python library rnaglib, and offers easy data distribution
and encoding, splitters and evaluation methods, providing a convenient
all-in-one framework for comparing models. Datasets are implemented in a fully
modular and reproducible manner, facilitating community contributions and
customization. Finally, we provide initial baseline results for all tasks using
a graph neural network.
Source code: https://github.com/cgoliver/rnaglib
Documentation: https://rnaglib.org
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 16:49:31 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Wyss",
"Luis",
""
],
[
"Mallet",
"Vincent",
""
],
[
"Karroucha",
"Wissam",
""
],
[
"Borgwardt",
"Karsten",
""
],
[
"Oliver",
"Carlos",
""
]
] | TITLE: A Comprehensive Benchmark for RNA 3D Structure-Function Modeling
ABSTRACT: The RNA structure-function relationship has recently garnered significant
attention within the deep learning community, promising to grow in importance
as nucleic acid structure models advance. However, the absence of standardized
and accessible benchmarks for deep learning on RNA 3D structures has impeded
the development of models for RNA functional characteristics.
In this work, we introduce a set of seven benchmarking datasets for RNA
structure-function prediction, designed to address this gap. Our library builds
on the established Python library rnaglib, and offers easy data distribution
and encoding, splitters and evaluation methods, providing a convenient
all-in-one framework for comparing models. Datasets are implemented in a fully
modular and reproducible manner, facilitating community contributions and
customization. Finally, we provide initial baseline results for all tasks using
a graph neural network.
Source code: https://github.com/cgoliver/rnaglib
Documentation: https://rnaglib.org
|
2503.21690 | Nikin Matharaarachchi | Nikin Matharaarachchi, Muhammad Fermi Pasha, Sonya Coleman, and Kah
PengWong | CMED: A Child Micro-Expression Dataset | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Micro-expressions are short bursts of emotion that are difficult to hide.
Their detection in children is an important cue to assist psychotherapists in
conducting better therapy. However, existing research on the detection of
micro-expressions has focused on adults, whose expressions differ in their
characteristics from those of children. The lack of research is a direct
consequence of the lack of a child-based micro-expressions dataset as it is
much more challenging to capture children's facial expressions due to the lack
of predictability and controllability. This study compiles a dataset of
spontaneous child micro-expression videos, the first of its kind, to the best
of the authors' knowledge. The dataset is captured in the wild using video
conferencing software. This dataset enables us to then explore key features and
differences between adult and child micro-expressions. This study also
establishes a baseline for the automated spotting and recognition of
micro-expressions in children using three approaches comprising hand-created
and learning-based approaches.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 16:55:32 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Nikin~Matharaarachchi",
"",
""
],
[
"Pasha",
"Muhammad~Fermi",
""
],
[
"Sonya~Coleman",
"",
""
],
[
"PengWong",
"Kah",
""
]
] | TITLE: CMED: A Child Micro-Expression Dataset
ABSTRACT: Micro-expressions are short bursts of emotion that are difficult to hide.
Their detection in children is an important cue to assist psychotherapists in
conducting better therapy. However, existing research on the detection of
micro-expressions has focused on adults, whose expressions differ in their
characteristics from those of children. The lack of research is a direct
consequence of the lack of a child-based micro-expressions dataset as it is
much more challenging to capture children's facial expressions due to the lack
of predictability and controllability. This study compiles a dataset of
spontaneous child micro-expression videos, the first of its kind, to the best
of the authors' knowledge. The dataset is captured in the wild using video
conferencing software. This dataset enables us to then explore key features and
differences between adult and child micro-expressions. This study also
establishes a baseline for the automated spotting and recognition of
micro-expressions in children using three approaches comprising hand-created
and learning-based approaches.
|
2503.21692 | Daniel Bermuth | Daniel Bermuth, Alexander Poeppel, Wolfgang Reif | RapidPoseTriangulation: Multi-view Multi-person Whole-body Human Pose
Triangulation in a Millisecond | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | The integration of multi-view imaging and pose estimation represents a
significant advance in computer vision applications, offering new possibilities
for understanding human movement and interactions. This work presents a new
algorithm that improves multi-view multi-person pose estimation, focusing on
fast triangulation speeds and good generalization capabilities. The approach
extends to whole-body pose estimation, capturing details from facial
expressions to finger movements across multiple individuals and viewpoints.
Adaptability to different settings is demonstrated through strong performance
across unseen datasets and configurations. To support further progress in this
field, all of this work is publicly accessible.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 16:57:33 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Bermuth",
"Daniel",
""
],
[
"Poeppel",
"Alexander",
""
],
[
"Reif",
"Wolfgang",
""
]
] | TITLE: RapidPoseTriangulation: Multi-view Multi-person Whole-body Human Pose
Triangulation in a Millisecond
ABSTRACT: The integration of multi-view imaging and pose estimation represents a
significant advance in computer vision applications, offering new possibilities
for understanding human movement and interactions. This work presents a new
algorithm that improves multi-view multi-person pose estimation, focusing on
fast triangulation speeds and good generalization capabilities. The approach
extends to whole-body pose estimation, capturing details from facial
expressions to finger movements across multiple individuals and viewpoints.
Adaptability to different settings is demonstrated through strong performance
across unseen datasets and configurations. To support further progress in this
field, all of this work is publicly accessible.
|
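The abstract above centres on fast multi-view triangulation. The sketch below is the standard linear (DLT) triangulation of a single joint from calibrated views that such pipelines typically build on; it is not the paper's optimized implementation, and the function name and shapes are assumptions.

import numpy as np

def triangulate_point(proj_mats, points_2d):
    """proj_mats: list of 3x4 camera projection matrices; points_2d: matching (u, v) pixels."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])   # each view contributes two linear constraints
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                         # null-space direction: smallest singular value
    return X[:3] / X[3]                # dehomogenize to a 3D point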
2503.21695 | Bo Zhou | Jiahe Qian, Yaoyu Fang, Jinkui Hao, Bo Zhou | AMA-SAM: Adversarial Multi-Domain Alignment of Segment Anything Model
for High-Fidelity Histology Nuclei Segmentation | 13 pages, 4 tables, 2 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Accurate segmentation of cell nuclei in histopathology images is essential
for numerous biomedical research and clinical applications. However, existing
cell nucleus segmentation methods only consider a single dataset (i.e., primary
domain), while neglecting to leverage supplementary data from diverse sources
(i.e., auxiliary domains) to reduce overfitting and enhance the performance.
Although incorporating multiple datasets could alleviate overfitting, it often
exacerbates performance drops caused by domain shifts. In this work, we
introduce Adversarial Multi-domain Alignment of Segment Anything Model
(AMA-SAM) that extends the Segment Anything Model (SAM) to overcome these
obstacles through two key innovations. First, we propose a Conditional Gradient
Reversal Layer (CGRL), a multi-domain alignment module that harmonizes features
from diverse domains to promote domain-invariant representation learning while
preserving crucial discriminative features for the primary dataset. Second, we
address SAM's inherent low-resolution output by designing a High-Resolution
Decoder (HR-Decoder), which directly produces fine-grained segmentation maps in
order to capture intricate nuclei boundaries in high-resolution histology
images. To the best of our knowledge, this is the first attempt to adapt SAM
for multi-dataset learning with application to histology nuclei segmentation.
We validate our method on several publicly available datasets, demonstrating
consistent and significant improvements over state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 16:59:39 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Qian",
"Jiahe",
""
],
[
"Fang",
"Yaoyu",
""
],
[
"Hao",
"Jinkui",
""
],
[
"Zhou",
"Bo",
""
]
] | TITLE: AMA-SAM: Adversarial Multi-Domain Alignment of Segment Anything Model
for High-Fidelity Histology Nuclei Segmentation
ABSTRACT: Accurate segmentation of cell nuclei in histopathology images is essential
for numerous biomedical research and clinical applications. However, existing
cell nucleus segmentation methods only consider a single dataset (i.e., primary
domain), while neglecting to leverage supplementary data from diverse sources
(i.e., auxiliary domains) to reduce overfitting and enhance the performance.
Although incorporating multiple datasets could alleviate overfitting, it often
exacerbates performance drops caused by domain shifts. In this work, we
introduce Adversarial Multi-domain Alignment of Segment Anything Model
(AMA-SAM) that extends the Segment Anything Model (SAM) to overcome these
obstacles through two key innovations. First, we propose a Conditional Gradient
Reversal Layer (CGRL), a multi-domain alignment module that harmonizes features
from diverse domains to promote domain-invariant representation learning while
preserving crucial discriminative features for the primary dataset. Second, we
address SAM's inherent low-resolution output by designing a High-Resolution
Decoder (HR-Decoder), which directly produces fine-grained segmentation maps in
order to capture intricate nuclei boundaries in high-resolution histology
images. To the best of our knowledge, this is the first attempt to adapt SAM
for multi-dataset learning with application to histology nuclei segmentation.
We validate our method on several publicly available datasets, demonstrating
consistent and significant improvements over state-of-the-art approaches.
|
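The Conditional Gradient Reversal Layer described above builds on the standard gradient reversal trick from domain-adversarial training. Below is a plain PyTorch gradient reversal layer plus one hypothetical way to gate it per sample; the actual conditioning rule used in AMA-SAM is not reproduced here.

import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)            # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # flip the gradient sign on the way back

def conditional_grl(features, is_auxiliary, lambd=1.0):
    """Reverse gradients only for auxiliary-domain samples; is_auxiliary is a bool mask of shape (B,)."""
    reversed_feats = GradReverse.apply(features, lambd)
    return torch.where(is_auxiliary.unsqueeze(-1), reversed_feats, features)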
2503.21714 | Pietro Tropeano | Pietro Tropeano, Maria Maistro, Tuukka Ruotsalo, Christina Lioma | As easy as PIE: understanding when pruning causes language models to
disagree | Accepted to NAACL 2025 (Findings) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Language Model (LM) pruning compresses the model by removing weights, nodes,
or other parts of its architecture. Typically, pruning focuses on the resulting
efficiency gains at the cost of effectiveness. However, when looking at how
individual data points are affected by pruning, it turns out that a particular
subset of data points always bears most of the brunt (in terms of reduced
accuracy) when pruning, but this effect goes unnoticed when reporting the mean
accuracy of all data points. These data points are called PIEs and have been
studied in image processing, but not in NLP. In a study of various NLP
datasets, pruning methods, and levels of compression, we find that PIEs impact
inference quality considerably, regardless of class frequency, and that BERT is
more prone to this than BiLSTM. We also find that PIEs contain a large number of
data points that have the largest influence on how well the model generalises
to unseen data. This means that when pruning, with seemingly moderate loss to
accuracy across all data points, we in fact hurt tremendously those data points
that matter the most. We trace what makes PIEs both hard and impactful to
inference to their overall longer and more semantically complex text. These
findings are novel and contribute to understanding how LMs are affected by
pruning. The code is available at: https://github.com/pietrotrope/AsEasyAsPIE
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:26:32 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Tropeano",
"Pietro",
""
],
[
"Maistro",
"Maria",
""
],
[
"Ruotsalo",
"Tuukka",
""
],
[
"Lioma",
"Christina",
""
]
] | TITLE: As easy as PIE: understanding when pruning causes language models to
disagree
ABSTRACT: Language Model (LM) pruning compresses the model by removing weights, nodes,
or other parts of its architecture. Typically, pruning focuses on the resulting
efficiency gains at the cost of effectiveness. However, when looking at how
individual data points are affected by pruning, it turns out that a particular
subset of data points always bears most of the brunt (in terms of reduced
accuracy) when pruning, but this effect goes unnoticed when reporting the mean
accuracy of all data points. These data points are called PIEs and have been
studied in image processing, but not in NLP. In a study of various NLP
datasets, pruning methods, and levels of compression, we find that PIEs impact
inference quality considerably, regardless of class frequency, and that BERT is
more prone to this than BiLSTM. We also find that PIEs contain a large number of
data points that have the largest influence on how well the model generalises
to unseen data. This means that when pruning, with seemingly moderate loss to
accuracy across all data points, we in fact hurt tremendously those data points
that matter the most. We trace what makes PIEs both hard and impactful to
inference to their overall longer and more semantically complex text. These
findings are novel and contribute to understanding how LMs are affected by
pruning. The code is available at: https://github.com/pietrotrope/AsEasyAsPIE
|
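Following the general definition of pruning-identified exemplars (PIEs) in the image-domain literature that the abstract refers to, the sketch below flags evaluation examples the unpruned model classifies correctly but the pruned model does not. The paper's exact criterion (for example, aggregation over several pruned models or compression levels) may differ.

import numpy as np

def find_pies(full_preds, pruned_preds, labels):
    """All inputs are 1-D arrays of class indices over the same evaluation set."""
    full_ok = full_preds == labels
    pruned_ok = pruned_preds == labels
    return np.where(full_ok & ~pruned_ok)[0]   # indices of points hurt by pruning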
2503.21717 | Jiefu Ou | Jiefu Ou, William Gantt Walden, Kate Sanders, Zhengping Jiang, Kaiser
Sun, Jeffrey Cheng, William Jurayj, Miriam Wanner, Shaobo Liang, Candice
Morgan, Seunghoon Han, Weiqi Wang, Chandler May, Hannah Recknor, Daniel
Khashabi, Benjamin Van Durme | CLAIMCHECK: How Grounded are LLM Critiques of Scientific Papers? | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | A core part of scientific peer review involves providing expert critiques
that directly assess the scientific claims a paper makes. While it is now
possible to automatically generate plausible (if generic) reviews, ensuring
that these reviews are sound and grounded in the papers' claims remains
challenging. To facilitate LLM benchmarking on these challenges, we introduce
CLAIMCHECK, an annotated dataset of NeurIPS 2023 and 2024 submissions and
reviews mined from OpenReview. CLAIMCHECK is richly annotated by ML experts for
weakness statements in the reviews and the paper claims that they dispute, as
well as fine-grained labels of the validity, objectivity, and type of the
identified weaknesses. We benchmark several LLMs on three claim-centric tasks
supported by CLAIMCHECK, requiring models to (1) associate weaknesses with the
claims they dispute, (2) predict fine-grained labels for weaknesses and rewrite
the weaknesses to enhance their specificity, and (3) verify a paper's claims
with grounded reasoning. Our experiments reveal that cutting-edge LLMs, while
capable of predicting weakness labels in (2), continue to underperform relative
to human experts on all other tasks.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:29:45 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Ou",
"Jiefu",
""
],
[
"Walden",
"William Gantt",
""
],
[
"Sanders",
"Kate",
""
],
[
"Jiang",
"Zhengping",
""
],
[
"Sun",
"Kaiser",
""
],
[
"Cheng",
"Jeffrey",
""
],
[
"Jurayj",
"William",
""
],
[
"Wanner",
"Miriam",
""
],
[
"Liang",
"Shaobo",
""
],
[
"Morgan",
"Candice",
""
],
[
"Han",
"Seunghoon",
""
],
[
"Wang",
"Weiqi",
""
],
[
"May",
"Chandler",
""
],
[
"Recknor",
"Hannah",
""
],
[
"Khashabi",
"Daniel",
""
],
[
"Van Durme",
"Benjamin",
""
]
] | TITLE: CLAIMCHECK: How Grounded are LLM Critiques of Scientific Papers?
ABSTRACT: A core part of scientific peer review involves providing expert critiques
that directly assess the scientific claims a paper makes. While it is now
possible to automatically generate plausible (if generic) reviews, ensuring
that these reviews are sound and grounded in the papers' claims remains
challenging. To facilitate LLM benchmarking on these challenges, we introduce
CLAIMCHECK, an annotated dataset of NeurIPS 2023 and 2024 submissions and
reviews mined from OpenReview. CLAIMCHECK is richly annotated by ML experts for
weakness statements in the reviews and the paper claims that they dispute, as
well as fine-grained labels of the validity, objectivity, and type of the
identified weaknesses. We benchmark several LLMs on three claim-centric tasks
supported by CLAIMCHECK, requiring models to (1) associate weaknesses with the
claims they dispute, (2) predict fine-grained labels for weaknesses and rewrite
the weaknesses to enhance their specificity, and (3) verify a paper's claims
with grounded reasoning. Our experiments reveal that cutting-edge LLMs, while
capable of predicting weakness labels in (2), continue to underperform relative
to human experts on all other tasks.
|
2503.21721 | Jefferson Hernandez Enrique | Jaywon Koo, Jefferson Hernandez, Moayed Haji-Ali, Ziyan Yang, and
Vicente Ordonez | Evaluating Text-to-Image Synthesis with a Conditional Fr\'{e}chet
Distance | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Evaluating text-to-image synthesis is challenging due to misalignment between
established metrics and human preferences. We propose cFreD, a metric based on
the notion of Conditional Fr\'echet Distance that explicitly accounts for both
visual fidelity and text-prompt alignment. Existing metrics such as Inception
Score (IS), Fr\'echet Inception Distance (FID) and CLIPScore assess either
image quality or image-text alignment but not both, which limits their
correlation with human preferences. Scoring models explicitly trained to
replicate human preferences require constant updates and may not generalize to
novel generation techniques or out-of-domain inputs. Through extensive
experiments across multiple recently proposed text-to-image models and diverse
prompt datasets, we demonstrate that cFreD exhibits a higher correlation with
human judgments compared to statistical metrics, including metrics trained with
human preferences. Our findings validate cFreD as a robust, future-proof metric
for the systematic evaluation of text-to-image models, standardizing
benchmarking in this rapidly evolving field. We release our evaluation toolkit
and benchmark in the appendix.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:35:14 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Koo",
"Jaywon",
""
],
[
"Hernandez",
"Jefferson",
""
],
[
"Haji-Ali",
"Moayed",
""
],
[
"Yang",
"Ziyan",
""
],
[
"Ordonez",
"Vicente",
""
]
] | TITLE: Evaluating Text-to-Image Synthesis with a Conditional Fr\'{e}chet
Distance
ABSTRACT: Evaluating text-to-image synthesis is challenging due to misalignment between
established metrics and human preferences. We propose cFreD, a metric based on
the notion of Conditional Fr\'echet Distance that explicitly accounts for both
visual fidelity and text-prompt alignment. Existing metrics such as Inception
Score (IS), Fr\'echet Inception Distance (FID) and CLIPScore assess either
image quality or image-text alignment but not both, which limits their
correlation with human preferences. Scoring models explicitly trained to
replicate human preferences require constant updates and may not generalize to
novel generation techniques or out-of-domain inputs. Through extensive
experiments across multiple recently proposed text-to-image models and diverse
prompt datasets, we demonstrate that cFreD exhibits a higher correlation with
human judgments compared to statistical metrics, including metrics trained with
human preferences. Our findings validate cFreD as a robust, future-proof metric
for the systematic evaluation of text-to-image models, standardizing
benchmarking in this rapidly evolving field. We release our evaluation toolkit
and benchmark in the appendix.
|
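cFreD builds on the Fréchet distance between Gaussians fitted to feature sets, the quantity underlying FID. The sketch below shows only that unconditional base formula, d^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^(1/2)); how cFreD incorporates the text condition is not reproduced here, and the function name is illustrative.

import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """feats_a, feats_b: (N, D) feature matrices extracted from two image sets."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):       # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))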
2503.21723 | Mallika Garg | Mallika Garg, Debashis Ghosh, Pyari Mohan Pradhan | OccRobNet : Occlusion Robust Network for Accurate 3D Interacting
Hand-Object Pose Estimation | Accepted in NATIONAL CONFERENCE ON COMMUNICATIONS (NCC) 2025 | null | null | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Occlusion is one of the challenging issues when estimating 3D hand pose. This
problem becomes more prominent when the hand interacts with an object or two hands
are involved. In past works, little attention has been given to these occluded
regions, yet these regions contain important and beneficial
information that is vital for 3D hand pose estimation. Thus, in this paper, we
propose an occlusion robust and accurate method for the estimation of 3D
hand-object pose from the input RGB image. Our method includes first localising
the hand joints using a CNN-based model and then refining them by extracting
contextual information. The self attention transformer then identifies the
specific joints along with the hand identity. This lets the model identify
which hand a particular joint belongs to, so that the joint can be detected
even in the occluded region. Further, these joints with hand identity are then
used to estimate the pose using a cross-attention mechanism. Thus, by identifying
the joints in the occluded region, the obtained network becomes robust to
occlusion. Hence, this network achieves state-of-the-art results when evaluated
on the InterHand2.6M, HO3D and H$_2$O3D datasets.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:36:55 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Garg",
"Mallika",
""
],
[
"Ghosh",
"Debashis",
""
],
[
"Pradhan",
"Pyari Mohan",
""
]
] | TITLE: OccRobNet : Occlusion Robust Network for Accurate 3D Interacting
Hand-Object Pose Estimation
ABSTRACT: Occlusion is one of the challenging issues when estimating 3D hand pose. This
problem becomes more prominent when the hand interacts with an object or two hands
are involved. In past works, little attention has been given to these occluded
regions, yet these regions contain important and beneficial
information that is vital for 3D hand pose estimation. Thus, in this paper, we
propose an occlusion robust and accurate method for the estimation of 3D
hand-object pose from the input RGB image. Our method includes first localising
the hand joints using a CNN-based model and then refining them by extracting
contextual information. The self attention transformer then identifies the
specific joints along with the hand identity. This lets the model identify
which hand a particular joint belongs to, so that the joint can be detected
even in the occluded region. Further, these joints with hand identity are then
used to estimate the pose using a cross-attention mechanism. Thus, by identifying
the joints in the occluded region, the obtained network becomes robust to
occlusion. Hence, this network achieves state-of-the-art results when evaluated
on the InterHand2.6M, HO3D and H$_2$O3D datasets.
|
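The abstract mentions estimating pose with a cross-attention mechanism over identity-tagged joints. The block below is a generic PyTorch cross-attention module of the kind such a design could use; the dimensions, names, and residual/normalization layout are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, joint_tokens, context_tokens):
        # joint_tokens: (B, J, dim) per-joint queries; context_tokens: (B, N, dim) keys/values
        attended, _ = self.attn(joint_tokens, context_tokens, context_tokens)
        return self.norm(joint_tokens + attended)   # residual connection + layer norm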
2503.21735 | Arsham Gholamzadeh Khoee | Arsham Gholamzadeh Khoee, Shuai Wang, Yinan Yu, Robert Feldt, and
Dhasarathy Parthasarathy | GateLens: A Reasoning-Enhanced LLM Agent for Automotive Software Release
Analytics | null | null | null | null | cs.SE cs.AI cs.CL cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring the reliability and effectiveness of software release decisions is
critical, particularly in safety-critical domains like automotive systems.
Precise analysis of release validation data, often presented in tabular form,
plays a pivotal role in this process. However, traditional methods that rely on
manual analysis of extensive test datasets and validation metrics are prone to
delays and high costs. Large Language Models (LLMs) offer a promising
alternative but face challenges in analytical reasoning, contextual
understanding, handling out-of-scope queries, and processing structured test
data consistently; limitations that hinder their direct application in
safety-critical scenarios. This paper introduces GateLens, an LLM-based tool
for analyzing tabular data in the automotive domain. GateLens translates
natural language queries into Relational Algebra (RA) expressions and then
generates optimized Python code. It outperforms the baseline system on
benchmarking datasets, achieving higher F1 scores and handling complex and
ambiguous queries with greater robustness. Ablation studies confirm the
critical role of the RA module, with performance dropping sharply when omitted.
Industrial evaluations reveal that GateLens reduces analysis time by over 80%
while maintaining high accuracy and reliability. As demonstrated by presented
results, GateLens achieved high performance without relying on few-shot
examples, showcasing strong generalization across various query types from
diverse company roles. Insights from deploying GateLens with a partner
automotive company offer practical guidance for integrating AI into critical
workflows such as release validation. Results show that by automating test
result analysis, GateLens enables faster, more informed, and dependable release
decisions, and can thus advance software scalability and reliability in
automotive systems.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:48:32 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Khoee",
"Arsham Gholamzadeh",
""
],
[
"Wang",
"Shuai",
""
],
[
"Yu",
"Yinan",
""
],
[
"Feldt",
"Robert",
""
],
[
"Parthasarathy",
"Dhasarathy",
""
]
] | TITLE: GateLens: A Reasoning-Enhanced LLM Agent for Automotive Software Release
Analytics
ABSTRACT: Ensuring the reliability and effectiveness of software release decisions is
critical, particularly in safety-critical domains like automotive systems.
Precise analysis of release validation data, often presented in tabular form,
plays a pivotal role in this process. However, traditional methods that rely on
manual analysis of extensive test datasets and validation metrics are prone to
delays and high costs. Large Language Models (LLMs) offer a promising
alternative but face challenges in analytical reasoning, contextual
understanding, handling out-of-scope queries, and processing structured test
data consistently; limitations that hinder their direct application in
safety-critical scenarios. This paper introduces GateLens, an LLM-based tool
for analyzing tabular data in the automotive domain. GateLens translates
natural language queries into Relational Algebra (RA) expressions and then
generates optimized Python code. It outperforms the baseline system on
benchmarking datasets, achieving higher F1 scores and handling complex and
ambiguous queries with greater robustness. Ablation studies confirm the
critical role of the RA module, with performance dropping sharply when omitted.
Industrial evaluations reveal that GateLens reduces analysis time by over 80%
while maintaining high accuracy and reliability. As demonstrated by presented
results, GateLens achieved high performance without relying on few-shot
examples, showcasing strong generalization across various query types from
diverse company roles. Insights from deploying GateLens with a partner
automotive company offer practical guidance for integrating AI into critical
workflows such as release validation. Results show that by automating test
result analysis, GateLens enables faster, more informed, and dependable release
decisions, and can thus advance software scalability and reliability in
automotive systems.
|
2503.21745 | Yuhan Zhang | Yuhan Zhang, Mengchen Zhang, Tong Wu, Tengfei Wang, Gordon Wetzstein,
Dahua Lin, Ziwei Liu | 3DGen-Bench: Comprehensive Benchmark Suite for 3D Generative Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D generation is experiencing rapid advancements, while the development of 3D
evaluation has not kept pace. How to keep automatic evaluation equitably
aligned with human perception has become a well-recognized challenge. Recent
advances in the field of language and image generation have explored human
preferences and showcased respectable fitting ability. However, the 3D domain
still lacks such a comprehensive preference dataset over generative models. To
mitigate this absence, we develop 3DGen-Arena, an integrated platform in a
battle manner. Then, we carefully design diverse text and image prompts and
leverage the arena platform to gather human preferences from both public users
and expert annotators, resulting in a large-scale multi-dimension human
preference dataset 3DGen-Bench. Using this dataset, we further train a
CLIP-based scoring model, 3DGen-Score, and an MLLM-based automatic evaluator,
3DGen-Eval. These two models innovatively unify the quality evaluation of
text-to-3D and image-to-3D generation, and jointly form our automated
evaluation system with their respective strengths. Extensive experiments
demonstrate the efficacy of our scoring model in predicting human preferences,
exhibiting a superior correlation with human ranks compared to existing
metrics. We believe that our 3DGen-Bench dataset and automated evaluation
system will foster a more equitable evaluation in the field of 3D generation,
further promoting the development of 3D generative models and their downstream
applications.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:53:00 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhang",
"Yuhan",
""
],
[
"Zhang",
"Mengchen",
""
],
[
"Wu",
"Tong",
""
],
[
"Wang",
"Tengfei",
""
],
[
"Wetzstein",
"Gordon",
""
],
[
"Lin",
"Dahua",
""
],
[
"Liu",
"Ziwei",
""
]
] | TITLE: 3DGen-Bench: Comprehensive Benchmark Suite for 3D Generative Models
ABSTRACT: 3D generation is experiencing rapid advancements, while the development of 3D
evaluation has not kept pace. How to keep automatic evaluation equitably
aligned with human perception has become a well-recognized challenge. Recent
advances in the field of language and image generation have explored human
preferences and showcased respectable fitting ability. However, the 3D domain
still lacks such a comprehensive preference dataset over generative models. To
mitigate this absence, we develop 3DGen-Arena, an integrated platform in a
battle manner. Then, we carefully design diverse text and image prompts and
leverage the arena platform to gather human preferences from both public users
and expert annotators, resulting in a large-scale multi-dimension human
preference dataset 3DGen-Bench. Using this dataset, we further train a
CLIP-based scoring model, 3DGen-Score, and an MLLM-based automatic evaluator,
3DGen-Eval. These two models innovatively unify the quality evaluation of
text-to-3D and image-to-3D generation, and jointly form our automated
evaluation system with their respective strengths. Extensive experiments
demonstrate the efficacy of our scoring model in predicting human preferences,
exhibiting a superior correlation with human ranks compared to existing
metrics. We believe that our 3DGen-Bench dataset and automated evaluation
system will foster a more equitable evaluation in the field of 3D generation,
further promoting the development of 3D generative models and their downstream
applications.
|
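The abstract describes gathering preferences through arena-style battles between models. How 3DGen-Arena aggregates votes into rankings is not stated here; a standard Elo update, shown below purely as one common choice for such pairwise outcomes, illustrates the idea.

def elo_update(rating_a, rating_b, winner, k=32):
    """winner is 'a', 'b', or 'tie'; returns the two updated ratings."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b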
2503.21749 | Zhen Li | Shitian Zhao, Qilong Wu, Xinyue Li, Bo Zhang, Ming Li, Qi Qin,
Dongyang Liu, Kaipeng Zhang, Hongsheng Li, Yu Qiao, Peng Gao, Bin Fu, Zhen Li | LeX-Art: Rethinking Text Generation via Scalable High-Quality Data
Synthesis | Project page: https://zhaoshitian.github.io/lexart/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce LeX-Art, a comprehensive suite for high-quality text-image
synthesis that systematically bridges the gap between prompt expressiveness and
text rendering fidelity. Our approach follows a data-centric paradigm,
constructing a high-quality data synthesis pipeline based on Deepseek-R1 to
curate LeX-10K, a dataset of 10K high-resolution, aesthetically refined
1024$\times$1024 images. Beyond dataset construction, we develop LeX-Enhancer,
a robust prompt enrichment model, and train two text-to-image models, LeX-FLUX
and LeX-Lumina, achieving state-of-the-art text rendering performance. To
systematically evaluate visual text generation, we introduce LeX-Bench, a
benchmark that assesses fidelity, aesthetics, and alignment, complemented by
Pairwise Normalized Edit Distance (PNED), a novel metric for robust text
accuracy evaluation. Experiments demonstrate significant improvements, with
LeX-Lumina achieving a 79.81% PNED gain on CreateBench, and LeX-FLUX
outperforming baselines in color (+3.18%), positional (+4.45%), and font
accuracy (+3.81%). Our code, models, datasets, and demo are publicly
available.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:56:15 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Zhao",
"Shitian",
""
],
[
"Wu",
"Qilong",
""
],
[
"Li",
"Xinyue",
""
],
[
"Zhang",
"Bo",
""
],
[
"Li",
"Ming",
""
],
[
"Qin",
"Qi",
""
],
[
"Liu",
"Dongyang",
""
],
[
"Zhang",
"Kaipeng",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Qiao",
"Yu",
""
],
[
"Gao",
"Peng",
""
],
[
"Fu",
"Bin",
""
],
[
"Li",
"Zhen",
""
]
] | TITLE: LeX-Art: Rethinking Text Generation via Scalable High-Quality Data
Synthesis
ABSTRACT: We introduce LeX-Art, a comprehensive suite for high-quality text-image
synthesis that systematically bridges the gap between prompt expressiveness and
text rendering fidelity. Our approach follows a data-centric paradigm,
constructing a high-quality data synthesis pipeline based on Deepseek-R1 to
curate LeX-10K, a dataset of 10K high-resolution, aesthetically refined
1024$\times$1024 images. Beyond dataset construction, we develop LeX-Enhancer,
a robust prompt enrichment model, and train two text-to-image models, LeX-FLUX
and LeX-Lumina, achieving state-of-the-art text rendering performance. To
systematically evaluate visual text generation, we introduce LeX-Bench, a
benchmark that assesses fidelity, aesthetics, and alignment, complemented by
Pairwise Normalized Edit Distance (PNED), a novel metric for robust text
accuracy evaluation. Experiments demonstrate significant improvements, with
LeX-Lumina achieving a 79.81% PNED gain on CreateBench, and LeX-FLUX
outperforming baselines in color (+3.18%), positional (+4.45%), and font
accuracy (+3.81%). Our code, models, datasets, and demo are publicly
available.
|
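PNED is described above as a pairwise normalized edit distance for text accuracy. The sketch below shows a word-level Levenshtein distance normalized by word length, the usual building block behind such a metric; the paper's exact pairing and normalization scheme may differ.

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalized_edit_distance(pred_word, gt_word):
    return levenshtein(pred_word, gt_word) / max(len(pred_word), len(gt_word), 1)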
2503.21760 | Jason Cai | Rana Salama, Jason Cai, Michelle Yuan, Anna Currey, Monica Sunkara, Yi
Zhang, Yassine Benajiba | MemInsight: Autonomous Memory Augmentation for LLM Agents | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large language model (LLM) agents have evolved to intelligently process
information, make decisions, and interact with users or tools. A key capability
is the integration of long-term memory capabilities, enabling these agents to
draw upon historical interactions and knowledge. However, the growing memory
size and need for semantic structuring pose significant challenges. In this
work, we propose an autonomous memory augmentation approach, MemInsight, to
enhance semantic data representation and retrieval mechanisms. By leveraging
autonomous augmentation to historical interactions, LLM agents are shown to
deliver more accurate and contextualized responses. We empirically validate the
efficacy of our proposed approach in three task scenarios: conversational
recommendation, question answering and event summarization. On the LLM-REDIAL
dataset, MemInsight boosts persuasiveness of recommendations by up to 14%.
Moreover, it outperforms a RAG baseline by 34% in recall for LoCoMo retrieval.
Our empirical results show the potential of MemInsight to enhance the
contextual performance of LLM agents across multiple tasks.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:57:28 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Salama",
"Rana",
""
],
[
"Cai",
"Jason",
""
],
[
"Yuan",
"Michelle",
""
],
[
"Currey",
"Anna",
""
],
[
"Sunkara",
"Monica",
""
],
[
"Zhang",
"Yi",
""
],
[
"Benajiba",
"Yassine",
""
]
] | TITLE: MemInsight: Autonomous Memory Augmentation for LLM Agents
ABSTRACT: Large language model (LLM) agents have evolved to intelligently process
information, make decisions, and interact with users or tools. A key capability
is the integration of long-term memory capabilities, enabling these agents to
draw upon historical interactions and knowledge. However, the growing memory
size and need for semantic structuring pose significant challenges. In this
work, we propose an autonomous memory augmentation approach, MemInsight, to
enhance semantic data representation and retrieval mechanisms. By leveraging
autonomous augmentation to historical interactions, LLM agents are shown to
deliver more accurate and contextualized responses. We empirically validate the
efficacy of our proposed approach in three task scenarios: conversational
recommendation, question answering and event summarization. On the LLM-REDIAL
dataset, MemInsight boosts persuasiveness of recommendations by up to 14%.
Moreover, it outperforms a RAG baseline by 34% in recall for LoCoMo retrieval.
Our empirical results show the potential of MemInsight to enhance the
contextual performance of LLM agents across multiple tasks.
|
2503.21767 | Hairong Yin | Hairong Yin, Huangying Zhan, Yi Xu, Raymond A. Yeh | Semantic Consistent Language Gaussian Splatting for Point-Level
Open-vocabulary Querying | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Open-vocabulary querying in 3D Gaussian Splatting aims to identify
semantically relevant regions within a 3D Gaussian representation based on a
given text query. Prior work, such as LangSplat, addressed this task by
retrieving these regions in the form of segmentation masks on 2D renderings.
More recently, OpenGaussian introduced point-level querying, which directly
selects a subset of 3D Gaussians. In this work, we propose a point-level
querying method that builds upon LangSplat's framework. Our approach improves
the framework in two key ways: (a) we leverage masklets from the Segment
Anything Model 2 (SAM2) to establish semantically consistent ground-truth for
distilling the language Gaussians; (b) we introduce a novel two-step querying
approach that first retrieves the distilled ground-truth and subsequently uses
the ground-truth to query the individual Gaussians. Experimental evaluations on
three benchmark datasets demonstrate that the proposed method achieves better
performance compared to state-of-the-art approaches. For instance, our method
achieves an mIoU improvement of +20.42 on the 3D-OVS dataset.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:59:05 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Yin",
"Hairong",
""
],
[
"Zhan",
"Huangying",
""
],
[
"Xu",
"Yi",
""
],
[
"Yeh",
"Raymond A.",
""
]
] | TITLE: Semantic Consistent Language Gaussian Splatting for Point-Level
Open-vocabulary Querying
ABSTRACT: Open-vocabulary querying in 3D Gaussian Splatting aims to identify
semantically relevant regions within a 3D Gaussian representation based on a
given text query. Prior work, such as LangSplat, addressed this task by
retrieving these regions in the form of segmentation masks on 2D renderings.
More recently, OpenGaussian introduced point-level querying, which directly
selects a subset of 3D Gaussians. In this work, we propose a point-level
querying method that builds upon LangSplat's framework. Our approach improves
the framework in two key ways: (a) we leverage masklets from the Segment
Anything Model 2 (SAM2) to establish semantically consistent ground-truth for
distilling the language Gaussians; (b) we introduce a novel two-step querying
approach that first retrieves the distilled ground-truth and subsequently uses
the ground-truth to query the individual Gaussians. Experimental evaluations on
three benchmark datasets demonstrate that the proposed method achieves better
performance compared to state-of-the-art approaches. For instance, our method
achieves an mIoU improvement of +20.42 on the 3D-OVS dataset.
|
2503.21771 | Dingkang Liang | Hongkai Lin, Dingkang Liang, Zhenghao Qi, Xiang Bai | A Unified Image-Dense Annotation Generation Model for Underwater Scenes | Accepted by CVPR 2025. The code is available at https:
//github.com/HongkLin/TIDE | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Underwater dense prediction, especially depth estimation and semantic
segmentation, is crucial for gaining a comprehensive understanding of
underwater scenes. Nevertheless, high-quality and large-scale underwater
datasets with dense annotations remain scarce because of the complex
environment and the exorbitant data collection costs. This paper proposes a
unified Text-to-Image and DEnse annotation generation method (TIDE) for
underwater scenes. It relies solely on text as input to simultaneously generate
realistic underwater images and multiple highly consistent dense annotations.
Specifically, we unify the generation of text-to-image and text-to-dense
annotations within a single model. The Implicit Layout Sharing mechanism (ILS)
and cross-modal interaction method called Time Adaptive Normalization (TAN) are
introduced to jointly optimize the consistency between image and dense
annotations. We synthesize a large-scale underwater dataset using TIDE to
validate the effectiveness of our method in underwater dense prediction tasks.
The results demonstrate that our method effectively improves the performance of
existing underwater dense prediction models and mitigates the scarcity of
underwater data with dense annotations. We hope our method can offer new
perspectives on alleviating data scarcity issues in other fields. The code is
available at https://github.com/HongkLin/TIDE.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:59:43 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Lin",
"Hongkai",
""
],
[
"Liang",
"Dingkang",
""
],
[
"Qi",
"Zhenghao",
""
],
[
"Bai",
"Xiang",
""
]
] | TITLE: A Unified Image-Dense Annotation Generation Model for Underwater Scenes
ABSTRACT: Underwater dense prediction, especially depth estimation and semantic
segmentation, is crucial for gaining a comprehensive understanding of
underwater scenes. Nevertheless, high-quality and large-scale underwater
datasets with dense annotations remain scarce because of the complex
environment and the exorbitant data collection costs. This paper proposes a
unified Text-to-Image and DEnse annotation generation method (TIDE) for
underwater scenes. It relies solely on text as input to simultaneously generate
realistic underwater images and multiple highly consistent dense annotations.
Specifically, we unify the generation of text-to-image and text-to-dense
annotations within a single model. The Implicit Layout Sharing mechanism (ILS)
and cross-modal interaction method called Time Adaptive Normalization (TAN) are
introduced to jointly optimize the consistency between image and dense
annotations. We synthesize a large-scale underwater dataset using TIDE to
validate the effectiveness of our method in underwater dense prediction tasks.
The results demonstrate that our method effectively improves the performance of
existing underwater dense prediction models and mitigates the scarcity of
underwater data with dense annotations. We hope our method can offer new
perspectives on alleviating data scarcity issues in other fields. The code is
available at https://github.com/HongkLin/TIDE.
|
2503.21776 | Kaituo Feng | Kaituo Feng, Kaixiong Gong, Bohao Li, Zonghao Guo, Yibing Wang,
Tianshuo Peng, Benyou Wang, Xiangyu Yue | Video-R1: Reinforcing Video Reasoning in MLLMs | Project page: https://github.com/tulerfeng/Video-R1 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by DeepSeek-R1's success in eliciting reasoning abilities through
rule-based reinforcement learning (RL), we introduce Video-R1 as the first
attempt to systematically explore the R1 paradigm for eliciting video reasoning
within multimodal large language models (MLLMs). However, directly applying RL
training with the GRPO algorithm to video reasoning presents two primary
challenges: (i) a lack of temporal modeling for video reasoning, and (ii) the
scarcity of high-quality video-reasoning data. To address these issues, we
first propose the T-GRPO algorithm, which encourages models to utilize temporal
information in videos for reasoning. Additionally, instead of relying solely on
video data, we incorporate high-quality image-reasoning data into the training
process. We have constructed two datasets: Video-R1-COT-165k for SFT cold start
and Video-R1-260k for RL training, both comprising image and video data.
Experimental results demonstrate that Video-R1 achieves significant
improvements on video reasoning benchmarks such as VideoMMMU and VSI-Bench, as
well as on general video benchmarks including MVBench and TempCompass, etc.
Notably, Video-R1-7B attains a 35.8% accuracy on video spatial reasoning
benchmark VSI-bench, surpassing the commercial proprietary model GPT-4o. All
code, models, and data are released.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:59:51 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Feng",
"Kaituo",
""
],
[
"Gong",
"Kaixiong",
""
],
[
"Li",
"Bohao",
""
],
[
"Guo",
"Zonghao",
""
],
[
"Wang",
"Yibing",
""
],
[
"Peng",
"Tianshuo",
""
],
[
"Wang",
"Benyou",
""
],
[
"Yue",
"Xiangyu",
""
]
] | TITLE: Video-R1: Reinforcing Video Reasoning in MLLMs
ABSTRACT: Inspired by DeepSeek-R1's success in eliciting reasoning abilities through
rule-based reinforcement learning (RL), we introduce Video-R1 as the first
attempt to systematically explore the R1 paradigm for eliciting video reasoning
within multimodal large language models (MLLMs). However, directly applying RL
training with the GRPO algorithm to video reasoning presents two primary
challenges: (i) a lack of temporal modeling for video reasoning, and (ii) the
scarcity of high-quality video-reasoning data. To address these issues, we
first propose the T-GRPO algorithm, which encourages models to utilize temporal
information in videos for reasoning. Additionally, instead of relying solely on
video data, we incorporate high-quality image-reasoning data into the training
process. We have constructed two datasets: Video-R1-COT-165k for SFT cold start
and Video-R1-260k for RL training, both comprising image and video data.
Experimental results demonstrate that Video-R1 achieves significant
improvements on video reasoning benchmarks such as VideoMMMU and VSI-Bench, as
well as on general video benchmarks including MVBench and TempCompass, etc.
Notably, Video-R1-7B attains 35.8% accuracy on the video spatial reasoning
benchmark VSI-Bench, surpassing the commercial proprietary model GPT-4o. All
code, models, and data are released.
|
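The Video-R1 abstract above mentions GRPO-style reinforcement learning and a temporal variant (T-GRPO) without spelling either out. Below is a minimal sketch of the group-relative advantage computation that GRPO-style methods are built on, assuming a rule-based scalar reward per sampled response; the function and reward values are illustrative, and the actual T-GRPO objective is not specified in the abstract.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize rewards within a group of responses to the same prompt.

    GRPO-style methods score several sampled responses and use the group
    mean/std as the baseline instead of a learned critic.
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: 4 sampled answers to one video question, scored by a
# hypothetical rule-based checker (1.0 = correct format and answer).
rewards = [1.0, 0.0, 1.0, 0.2]
print(group_relative_advantages(rewards))
```

Normalizing within the group removes the need for a separate value network, which is the main appeal of GRPO-style training.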
2503.21780 | Matteo Poggi | Reza Qorbani, Gianluca Villani, Theodoros Panagiotakopoulos, Marc
Botet Colomer, Linus H\"arenstam-Nielsen, Mattia Segu, Pier Luigi Dovesi,
Jussi Karlgren, Daniel Cremers, Federico Tombari, Matteo Poggi | Semantic Library Adaptation: LoRA Retrieval and Fusion for
Open-Vocabulary Semantic Segmentation | CVPR 2025. Project page: https://thegoodailab.org/semla Code:
https://github.com/rezaqorbani/SemLA | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-vocabulary semantic segmentation models associate vision and text to
label pixels from an undefined set of classes using textual queries, providing
versatile performance on novel datasets. However, large shifts between training
and test domains degrade their performance, requiring fine-tuning for effective
real-world applications. We introduce Semantic Library Adaptation (SemLA), a
novel framework for training-free, test-time domain adaptation. SemLA leverages
a library of LoRA-based adapters indexed with CLIP embeddings, dynamically
merging the most relevant adapters based on proximity to the target domain in
the embedding space. This approach constructs an ad-hoc model tailored to each
specific input without additional training. Our method scales efficiently,
enhances explainability by tracking adapter contributions, and inherently
protects data privacy, making it ideal for sensitive applications.
Comprehensive experiments on a 20-domain benchmark built over 10 standard
datasets demonstrate SemLA's superior adaptability and performance across
diverse settings, establishing a new standard in domain adaptation for
open-vocabulary semantic segmentation.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:59:58 GMT"
}
] | 2025-03-28T00:00:00 | [
[
"Qorbani",
"Reza",
""
],
[
"Villani",
"Gianluca",
""
],
[
"Panagiotakopoulos",
"Theodoros",
""
],
[
"Colomer",
"Marc Botet",
""
],
[
"Härenstam-Nielsen",
"Linus",
""
],
[
"Segu",
"Mattia",
""
],
[
"Dovesi",
"Pier Luigi",
""
],
[
"Karlgren",
"Jussi",
""
],
[
"Cremers",
"Daniel",
""
],
[
"Tombari",
"Federico",
""
],
[
"Poggi",
"Matteo",
""
]
] | TITLE: Semantic Library Adaptation: LoRA Retrieval and Fusion for
Open-Vocabulary Semantic Segmentation
ABSTRACT: Open-vocabulary semantic segmentation models associate vision and text to
label pixels from an undefined set of classes using textual queries, providing
versatile performance on novel datasets. However, large shifts between training
and test domains degrade their performance, requiring fine-tuning for effective
real-world applications. We introduce Semantic Library Adaptation (SemLA), a
novel framework for training-free, test-time domain adaptation. SemLA leverages
a library of LoRA-based adapters indexed with CLIP embeddings, dynamically
merging the most relevant adapters based on proximity to the target domain in
the embedding space. This approach constructs an ad-hoc model tailored to each
specific input without additional training. Our method scales efficiently,
enhances explainability by tracking adapter contributions, and inherently
protects data privacy, making it ideal for sensitive applications.
Comprehensive experiments on a 20-domain benchmark built over 10 standard
datasets demonstrate SemLA's superior adaptability and performance across
diverse settings, establishing a new standard in domain adaptation for
open-vocabulary semantic segmentation.
|
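The SemLA abstract above describes retrieving LoRA adapters indexed by CLIP embeddings and merging the most relevant ones for the target domain. Below is a minimal sketch of that retrieval-and-fusion idea, assuming cosine-similarity retrieval and softmax-weighted averaging of adapter weights; the paper's exact merging rule is not given in the abstract, and all names and shapes here are illustrative.

```python
import numpy as np

def retrieve_and_fuse(query_emb, adapter_embs, adapter_weights, k=3):
    """Pick the k adapters closest to the query embedding and blend them.

    query_emb:       (d,) CLIP embedding of the test image/domain
    adapter_embs:    (n, d) index embeddings, one per stored LoRA adapter
    adapter_weights: list of n dicts mapping parameter name -> np.ndarray
                     (all adapters are assumed to share the same keys)
    Returns a single fused parameter dict.
    """
    q = query_emb / np.linalg.norm(query_emb)
    A = adapter_embs / np.linalg.norm(adapter_embs, axis=1, keepdims=True)
    sims = A @ q                                      # cosine similarities
    top = np.argsort(sims)[-k:]                       # k most relevant adapters
    w = np.exp(sims[top]) / np.exp(sims[top]).sum()   # softmax fusion weights
    fused = {}
    for name in adapter_weights[top[0]]:
        fused[name] = sum(w[i] * adapter_weights[j][name]
                          for i, j in enumerate(top))
    return fused

# Toy usage with random data.
rng = np.random.default_rng(0)
embs = rng.normal(size=(5, 8))
weights = [{"lora_A": rng.normal(size=(4, 2))} for _ in range(5)]
fused = retrieve_and_fuse(rng.normal(size=8), embs, weights, k=3)
print(fused["lora_A"].shape)
```

Because the fusion happens purely in weight space, no gradient steps are required at test time, which is what makes this style of adaptation training-free.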
2211.15143 | Bin Wang | Bin Wang, Wenbin Pei, Bing Xue, Mengjie Zhang | Explaining Deep Convolutional Neural Networks for Image Classification
by Evolving Local Interpretable Model-agnostic Explanations | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks have proven their effectiveness, and have
been acknowledged as the dominant method for image classification.
However, a severe drawback of deep convolutional neural networks is poor
explainability. Unfortunately, in many real-world applications, users need to
understand the rationale behind the predictions of deep convolutional neural
networks when determining whether they should trust the predictions or not. To
resolve this issue, a novel genetic algorithm-based method is proposed for the
first time to automatically evolve local explanations that can assist users to
assess the rationality of the predictions. Furthermore, the proposed method is
model-agnostic, i.e., it can be utilised to explain any deep convolutional
neural network model. In the experiments, ResNet is used as an example model
to be explained, and the ImageNet dataset is selected as the benchmark dataset.
DenseNet and MobileNet are further explained to demonstrate the model-agnostic
characteristic of the proposed method. The evolved local explanations on four
images, randomly selected from ImageNet, are presented, which show that the
evolved local explanations are straightforward for humans to recognise.
Moreover, the evolved explanations can explain the predictions of deep
convolutional neural networks on all four images very well by successfully
capturing meaningful interpretable features of the sample images. Further
analysis based on the 30 runs of the experiments shows that the evolved
local explanations can also improve the probabilities/confidences of the deep
convolutional neural network models in making the predictions. The proposed
method can obtain local explanations within one minute, which is more than ten
times faster than LIME (the state-of-the-art method).
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2022 08:56:00 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 04:52:14 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 01:45:30 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wang",
"Bin",
""
],
[
"Pei",
"Wenbin",
""
],
[
"Xue",
"Bing",
""
],
[
"Zhang",
"Mengjie",
""
]
] | TITLE: Explaining Deep Convolutional Neural Networks for Image Classification
by Evolving Local Interpretable Model-agnostic Explanations
ABSTRACT: Deep convolutional neural networks have proven their effectiveness, and have
been acknowledged as the dominant method for image classification.
However, a severe drawback of deep convolutional neural networks is poor
explainability. Unfortunately, in many real-world applications, users need to
understand the rationale behind the predictions of deep convolutional neural
networks when determining whether they should trust the predictions or not. To
resolve this issue, a novel genetic algorithm-based method is proposed for the
first time to automatically evolve local explanations that can assist users to
assess the rationality of the predictions. Furthermore, the proposed method is
model-agnostic, i.e., it can be utilised to explain any deep convolutional
neural network model. In the experiments, ResNet is used as an example model
to be explained, and the ImageNet dataset is selected as the benchmark dataset.
DenseNet and MobileNet are further explained to demonstrate the model-agnostic
characteristic of the proposed method. The evolved local explanations on four
images, randomly selected from ImageNet, are presented, which show that the
evolved local explanations are straightforward for humans to recognise.
Moreover, the evolved explanations can explain the predictions of deep
convolutional neural networks on all four images very well by successfully
capturing meaningful interpretable features of the sample images. Further
analysis based on the 30 runs of the experiments shows that the evolved
local explanations can also improve the probabilities/confidences of the deep
convolutional neural network models in making the predictions. The proposed
method can obtain local explanations within one minute, which is more than ten
times faster than LIME (the state-of-the-art method).
|
2302.10463 | Renhao Huang | Renhao Huang, Hao Xue, Maurice Pagnucco, Flora Salim, Yang Song | Vision-based Multi-future Trajectory Prediction: A Survey | Accepted by TNNLS 2025 | null | 10.1109/TNNLS.2025.3550350 | null | cs.RO cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Vision-based trajectory prediction is an important task that supports safe
and intelligent behaviours in autonomous systems. Many advanced approaches have
been proposed over the years with improved spatial and temporal feature
extraction. However, human behaviour is naturally diverse and uncertain. Given
the past trajectory and surrounding environment information, an agent can have
multiple plausible trajectories in the future. To tackle this problem, an
essential task named multi-future trajectory prediction (MTP) has recently been
studied. This task aims to generate a diverse, acceptable and explainable
distribution of future predictions for each agent. In this paper, we present
the first survey for MTP with our unique taxonomies and a comprehensive
analysis of frameworks, datasets and evaluation metrics. We also compare models
on existing MTP datasets and conduct experiments on the ForkingPath dataset.
Finally, we discuss multiple future directions that can help researchers
develop novel multi-future trajectory prediction systems and other diverse
learning tasks similar to MTP.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 06:11:08 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 05:54:55 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Huang",
"Renhao",
""
],
[
"Xue",
"Hao",
""
],
[
"Pagnucco",
"Maurice",
""
],
[
"Salim",
"Flora",
""
],
[
"Song",
"Yang",
""
]
] | TITLE: Vision-based Multi-future Trajectory Prediction: A Survey
ABSTRACT: Vision-based trajectory prediction is an important task that supports safe
and intelligent behaviours in autonomous systems. Many advanced approaches have
been proposed over the years with improved spatial and temporal feature
extraction. However, human behaviour is naturally diverse and uncertain. Given
the past trajectory and surrounding environment information, an agent can have
multiple plausible trajectories in the future. To tackle this problem, an
essential task named multi-future trajectory prediction (MTP) has recently been
studied. This task aims to generate a diverse, acceptable and explainable
distribution of future predictions for each agent. In this paper, we present
the first survey for MTP with our unique taxonomies and a comprehensive
analysis of frameworks, datasets and evaluation metrics. We also compare models
on existing MTP datasets and conduct experiments on the ForkingPath dataset.
Finally, we discuss multiple future directions that can help researchers
develop novel multi-future trajectory prediction systems and other diverse
learning tasks similar to MTP.
|
2303.11056 | Michael Gilson | Chapin E. Cavender, David A. Case, Julian C.-H. Chen, Lillian T.
Chong, Daniel A. Keedy, Kresten Lindorff-Larsen, David L. Mobley, O. H.
Samuli Ollila, Chris Oostenbrink, Paul Robustelli, Vincent A. Voelz, Michael
E. Wall, David C. Wych, Michael K. Gilson | Structure-Based Experimental Datasets for Benchmarking Protein
Simulation Force Fields | 46 pages, 4 figures. Substantial revision and expansion of content
from previous version | null | null | null | q-bio.BM physics.bio-ph physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | This review article provides an overview of structurally oriented
experimental datasets that can be used to benchmark protein force fields,
focusing on data generated by nuclear magnetic resonance (NMR) spectroscopy and
room temperature (RT) protein crystallography. We discuss what the observables
are, what they tell us about structure and dynamics, what makes them useful for
assessing force field accuracy, and how they can be connected to molecular
dynamics simulations carried out using the force field one wishes to benchmark.
We also touch on statistical issues that arise when comparing simulations with
experiment. We hope this article will be particularly useful to computational
researchers and trainees who develop, benchmark, or use protein force fields
for molecular simulations.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2023 14:34:56 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 19:40:10 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Cavender",
"Chapin E.",
""
],
[
"Case",
"David A.",
""
],
[
"Chen",
"Julian C. -H.",
""
],
[
"Chong",
"Lillian T.",
""
],
[
"Keedy",
"Daniel A.",
""
],
[
"Lindorff-Larsen",
"Kresten",
""
],
[
"Mobley",
"David L.",
""
],
[
"Ollila",
"O. H. Samuli",
""
],
[
"Oostenbrink",
"Chris",
""
],
[
"Robustelli",
"Paul",
""
],
[
"Voelz",
"Vincent A.",
""
],
[
"Wall",
"Michael E.",
""
],
[
"Wych",
"David C.",
""
],
[
"Gilson",
"Michael K.",
""
]
] | TITLE: Structure-Based Experimental Datasets for Benchmarking Protein
Simulation Force Fields
ABSTRACT: This review article provides an overview of structurally oriented
experimental datasets that can be used to benchmark protein force fields,
focusing on data generated by nuclear magnetic resonance (NMR) spectroscopy and
room temperature (RT) protein crystallography. We discuss what the observables
are, what they tell us about structure and dynamics, what makes them useful for
assessing force field accuracy, and how they can be connected to molecular
dynamics simulations carried out using the force field one wishes to benchmark.
We also touch on statistical issues that arise when comparing simulations with
experiment. We hope this article will be particularly useful to computational
researchers and trainees who develop, benchmark, or use protein force fields
for molecular simulations.
|
2304.06370 | Yiming Ma | Yiming Ma, Victor Sanchez, Soodeh Nikan, Devesh Upadhyay, Bhushan
Atote, Tanaya Guha | Robust Multiview Multimodal Driver Monitoring System Using Masked
Multi-Head Self-Attention | 9 pages (1 for reference); accepted by the 6th Multimodal Learning
and Applications Workshop (MULA) at CVPR 2023 | null | 10.1109/CVPRW59228.2023.00260 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driver Monitoring Systems (DMSs) are crucial for safe hand-over actions in
Level-2+ self-driving vehicles. State-of-the-art DMSs leverage multiple sensors
mounted at different locations to monitor the driver and the vehicle's interior
scene and employ decision-level fusion to integrate these heterogeneous data.
However, this fusion method may not fully utilize the complementarity of
different data sources and may overlook their relative importance. To address
these limitations, we propose a novel multiview multimodal driver monitoring
system based on feature-level fusion through multi-head self-attention (MHSA).
We demonstrate its effectiveness by comparing it against four alternative
fusion strategies (Sum, Conv, SE, and AFF). We also present a novel
GPU-friendly supervised contrastive learning framework SuMoCo to learn better
representations. Furthermore, We fine-grained the test split of the DAD dataset
to enable the multi-class recognition of drivers' activities. Experiments on
this enhanced database demonstrate that 1) the proposed MHSA-based fusion
method (AUC-ROC: 97.0\%) outperforms all baselines and previous approaches, and
2) training MHSA with patch masking can improve its robustness against
modality/view collapses. The code and annotations are publicly available.
| [
{
"version": "v1",
"created": "Thu, 13 Apr 2023 09:50:32 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Ma",
"Yiming",
""
],
[
"Sanchez",
"Victor",
""
],
[
"Nikan",
"Soodeh",
""
],
[
"Upadhyay",
"Devesh",
""
],
[
"Atote",
"Bhushan",
""
],
[
"Guha",
"Tanaya",
""
]
] | TITLE: Robust Multiview Multimodal Driver Monitoring System Using Masked
Multi-Head Self-Attention
ABSTRACT: Driver Monitoring Systems (DMSs) are crucial for safe hand-over actions in
Level-2+ self-driving vehicles. State-of-the-art DMSs leverage multiple sensors
mounted at different locations to monitor the driver and the vehicle's interior
scene and employ decision-level fusion to integrate these heterogeneous data.
However, this fusion method may not fully utilize the complementarity of
different data sources and may overlook their relative importance. To address
these limitations, we propose a novel multiview multimodal driver monitoring
system based on feature-level fusion through multi-head self-attention (MHSA).
We demonstrate its effectiveness by comparing it against four alternative
fusion strategies (Sum, Conv, SE, and AFF). We also present a novel
GPU-friendly supervised contrastive learning framework SuMoCo to learn better
representations. Furthermore, we refined the test split of the DAD dataset with
fine-grained annotations to enable multi-class recognition of drivers'
activities. Experiments on
this enhanced database demonstrate that 1) the proposed MHSA-based fusion
method (AUC-ROC: 97.0\%) outperforms all baselines and previous approaches, and
2) training MHSA with patch masking can improve its robustness against
modality/view collapses. The code and annotations are publicly available.
|
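The driver-monitoring abstract above proposes feature-level fusion of multiview, multimodal features through multi-head self-attention. Below is a minimal PyTorch sketch of that fusion pattern, treating each view/modality feature vector as one token; the dimensions are illustrative, and the paper's patch-masking and SuMoCo components are not reproduced.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse per-view/per-modality feature vectors with self-attention.

    Each sensor stream contributes one token; self-attention lets the
    model weigh streams against each other before pooling.
    """
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):               # tokens: (batch, n_streams, dim)
        fused, _ = self.attn(tokens, tokens, tokens)
        fused = self.norm(tokens + fused)    # residual connection + layer norm
        return fused.mean(dim=1)             # pooled joint representation

# Toy usage: 2 views x 2 modalities = 4 feature tokens per sample.
feats = torch.randn(8, 4, 256)
print(AttentionFusion()(feats).shape)        # torch.Size([8, 256])
```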
2307.15054 | Cl\'ement Guerner | Cl\'ement Guerner, Tianyu Liu, Anej Svete, Alexander Warstadt, Ryan
Cotterell | A Geometric Notion of Causal Probing | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The linear subspace hypothesis (Bolukbasi et al., 2016) states that, in a
language model's representation space, all information about a concept such as
verbal number is encoded in a linear subspace. Prior work has relied on
auxiliary classification tasks to identify and evaluate candidate subspaces
that might give support for this hypothesis. We instead give a set of intrinsic
criteria which characterize an ideal linear concept subspace and enable us to
identify the subspace using only the language model distribution. Our
information-theoretic framework accounts for spuriously correlated features in
the representation space (Kumar et al., 2022) by reconciling the statistical
notion of concept information and the geometric notion of how concepts are
encoded in the representation space. As a byproduct of this analysis, we
hypothesize a causal process for how a language model might leverage concepts
during generation. Empirically, we find that linear concept erasure is
successful in erasing most concept information under our framework for verbal
number as well as some complex aspect-level sentiment concepts from a
restaurant review dataset. Our causal intervention for controlled generation
shows that, for at least one concept across two language models, the concept
subspace can be used to manipulate the concept value of the generated word with
precision.
| [
{
"version": "v1",
"created": "Thu, 27 Jul 2023 17:57:57 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Jul 2023 14:22:07 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Feb 2024 19:53:58 GMT"
},
{
"version": "v4",
"created": "Wed, 26 Mar 2025 16:33:43 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Guerner",
"Clément",
""
],
[
"Liu",
"Tianyu",
""
],
[
"Svete",
"Anej",
""
],
[
"Warstadt",
"Alexander",
""
],
[
"Cotterell",
"Ryan",
""
]
] | TITLE: A Geometric Notion of Causal Probing
ABSTRACT: The linear subspace hypothesis (Bolukbasi et al., 2016) states that, in a
language model's representation space, all information about a concept such as
verbal number is encoded in a linear subspace. Prior work has relied on
auxiliary classification tasks to identify and evaluate candidate subspaces
that might give support for this hypothesis. We instead give a set of intrinsic
criteria which characterize an ideal linear concept subspace and enable us to
identify the subspace using only the language model distribution. Our
information-theoretic framework accounts for spuriously correlated features in
the representation space (Kumar et al., 2022) by reconciling the statistical
notion of concept information and the geometric notion of how concepts are
encoded in the representation space. As a byproduct of this analysis, we
hypothesize a causal process for how a language model might leverage concepts
during generation. Empirically, we find that linear concept erasure is
successful in erasing most concept information under our framework for verbal
number as well as some complex aspect-level sentiment concepts from a
restaurant review dataset. Our causal intervention for controlled generation
shows that, for at least one concept across two language models, the concept
subspace can be used to manipulate the concept value of the generated word with
precision.
|
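The causal-probing abstract above works with linear concept subspaces and linear concept erasure. Below is a generic sketch of erasure by orthogonal projection, which removes all linear information along a candidate concept subspace; this is the standard construction rather than the paper's specific estimator, and the basis B is assumed to be given.

```python
import numpy as np

def erase_subspace(X, B):
    """Project representations onto the orthogonal complement of span(B).

    X: (n, d) representations; B: (d, k) basis vectors spanning the
    candidate concept subspace. Returns X with that subspace removed.
    """
    Q, _ = np.linalg.qr(B)          # orthonormal basis of the subspace
    P = Q @ Q.T                     # projector onto the concept subspace
    return X - X @ P                # remove the concept component

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 16))
B = rng.normal(size=(16, 1))        # e.g. a single "verbal number" direction
X_erased = erase_subspace(X, B)
print(np.abs(X_erased @ B).max())   # ~0: no remaining linear signal
```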
2309.14949 | Yongyi Su | Yongyi Su, Xun Xu, Kui Jia | Towards Real-World Test-Time Adaptation: Tri-Net Self-Training with
Balanced Normalization | Accepted by AAAI 2024. 19 pages, 7 figures and 22 tables | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Test-Time Adaptation aims to adapt a source-domain model to testing data at the
inference stage with success demonstrated in adapting to unseen corruptions.
However, these attempts may fail under more challenging real-world scenarios.
Existing works mainly consider real-world test-time adaptation under non-i.i.d.
data stream and continual domain shift. In this work, we first complement the
existing real-world TTA protocol with a globally class imbalanced testing set.
We demonstrate that combining all settings together poses new challenges to
existing methods. We argue the failure of state-of-the-art methods is first
caused by indiscriminately adapting normalization layers to imbalanced testing
data. To remedy this shortcoming, we propose a balanced batchnorm layer to swap
out the regular batchnorm at inference stage. The new batchnorm layer is
capable of adapting without biasing towards majority classes. We are further
inspired by the success of self-training (ST) in learning from unlabeled data
and adapt ST for test-time adaptation. However, ST alone is prone to
over-adaptation, which is responsible for the poor performance under continual
domain
shift. Hence, we propose to improve self-training under continual domain shift
by regularizing model updates with an anchored loss. The final TTA model,
termed as TRIBE, is built upon a tri-net architecture with balanced batchnorm
layers. We evaluate TRIBE on four datasets representing real-world TTA
settings. TRIBE consistently achieves the state-of-the-art performance across
multiple evaluation protocols. The code is available at
https://github.com/Gorilla-Lab-SCUT/TRIBE.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 14:06:26 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 12:16:13 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Su",
"Yongyi",
""
],
[
"Xu",
"Xun",
""
],
[
"Jia",
"Kui",
""
]
] | TITLE: Towards Real-World Test-Time Adaptation: Tri-Net Self-Training with
Balanced Normalization
ABSTRACT: Test-Time Adaptation aims to adapt a source-domain model to testing data at the
inference stage with success demonstrated in adapting to unseen corruptions.
However, these attempts may fail under more challenging real-world scenarios.
Existing works mainly consider real-world test-time adaptation under non-i.i.d.
data stream and continual domain shift. In this work, we first complement the
existing real-world TTA protocol with a globally class imbalanced testing set.
We demonstrate that combining all settings together poses new challenges to
existing methods. We argue the failure of state-of-the-art methods is first
caused by indiscriminately adapting normalization layers to imbalanced testing
data. To remedy this shortcoming, we propose a balanced batchnorm layer to swap
out the regular batchnorm at inference stage. The new batchnorm layer is
capable of adapting without biasing towards majority classes. We are further
inspired by the success of self-training (ST) in learning from unlabeled data
and adapt ST for test-time adaptation. However, ST alone is prone to
over-adaptation, which is responsible for the poor performance under continual
domain
shift. Hence, we propose to improve self-training under continual domain shift
by regularizing model updates with an anchored loss. The final TTA model,
termed as TRIBE, is built upon a tri-net architecture with balanced batchnorm
layers. We evaluate TRIBE on four datasets representing real-world TTA
settings. TRIBE consistently achieves the state-of-the-art performance across
multiple evaluation protocols. The code is available at
https://github.com/Gorilla-Lab-SCUT/TRIBE.
|
2310.07135 | Shreya Havaldar | Shreya Havaldar, Matthew Pressimone, Eric Wong, Lyle Ungar | Comparing Styles across Languages: A Cross-Cultural Exploration of
Politeness | Accepted to EMNLP 2023 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Understanding how styles differ across languages is advantageous for training
both humans and computers to generate culturally appropriate text. We introduce
an explanation framework to extract stylistic differences from multilingual LMs
and compare styles across languages. Our framework (1) generates comprehensive
style lexica in any language and (2) consolidates feature importances from LMs
into comparable lexical categories. We apply this framework to compare
politeness, creating the first holistic multilingual politeness dataset and
exploring how politeness varies across four languages. Our approach enables an
effective evaluation of how distinct linguistic categories contribute to
stylistic variations and provides interpretable insights into how people
communicate differently around the world.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 02:16:12 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Dec 2023 02:18:40 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 16:04:41 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Havaldar",
"Shreya",
""
],
[
"Pressimone",
"Matthew",
""
],
[
"Wong",
"Eric",
""
],
[
"Ungar",
"Lyle",
""
]
] | TITLE: Comparing Styles across Languages: A Cross-Cultural Exploration of
Politeness
ABSTRACT: Understanding how styles differ across languages is advantageous for training
both humans and computers to generate culturally appropriate text. We introduce
an explanation framework to extract stylistic differences from multilingual LMs
and compare styles across languages. Our framework (1) generates comprehensive
style lexica in any language and (2) consolidates feature importances from LMs
into comparable lexical categories. We apply this framework to compare
politeness, creating the first holistic multilingual politeness dataset and
exploring how politeness varies across four languages. Our approach enables an
effective evaluation of how distinct linguistic categories contribute to
stylistic variations and provides interpretable insights into how people
communicate differently around the world.
|
2310.15928 | Claire Chen | Carlota Par\'es Morlans, Claire Chen, Yijia Weng, Michelle Yi, Yuying
Huang, Nick Heppert, Linqi Zhou, Leonidas Guibas, Jeannette Bohg | AO-Grasp: Articulated Object Grasp Generation | Project website: https://stanford-iprl-lab.github.io/ao-grasp | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | We introduce AO-Grasp, a grasp proposal method that generates 6 DoF grasps
that enable robots to interact with articulated objects, such as opening and
closing cabinets and appliances. AO-Grasp consists of two main contributions:
the AO-Grasp Model and the AO-Grasp Dataset. Given a segmented partial point
cloud of a single articulated object, the AO-Grasp Model predicts the best
grasp points on the object with an Actionable Grasp Point Predictor. Then, it
finds corresponding grasp orientations for each of these points, resulting in
stable and actionable grasp proposals. We train the AO-Grasp Model on our new
AO-Grasp Dataset, which contains 78K actionable parallel-jaw grasps on
synthetic articulated objects. In simulation, AO-Grasp achieves a 45.0% grasp
success rate, whereas the highest performing baseline achieves a 35.0% success
rate. Additionally, we evaluate AO-Grasp on 120 real-world scenes of objects
with varied geometries, articulation axes, and joint states, where AO-Grasp
produces successful grasps on 67.5% of scenes, while the baseline only produces
successful grasps on 33.3% of scenes. To the best of our knowledge, AO-Grasp is
the first method for generating 6 DoF grasps on articulated objects directly
from partial point clouds without requiring part detection or hand-designed
grasp heuristics. Project website: https://stanford-iprl-lab.github.io/ao-grasp
| [
{
"version": "v1",
"created": "Tue, 24 Oct 2023 15:26:57 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Mar 2024 17:36:33 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Oct 2024 15:36:30 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 23:41:23 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Morlans",
"Carlota Parés",
""
],
[
"Chen",
"Claire",
""
],
[
"Weng",
"Yijia",
""
],
[
"Yi",
"Michelle",
""
],
[
"Huang",
"Yuying",
""
],
[
"Heppert",
"Nick",
""
],
[
"Zhou",
"Linqi",
""
],
[
"Guibas",
"Leonidas",
""
],
[
"Bohg",
"Jeannette",
""
]
] | TITLE: AO-Grasp: Articulated Object Grasp Generation
ABSTRACT: We introduce AO-Grasp, a grasp proposal method that generates 6 DoF grasps
that enable robots to interact with articulated objects, such as opening and
closing cabinets and appliances. AO-Grasp consists of two main contributions:
the AO-Grasp Model and the AO-Grasp Dataset. Given a segmented partial point
cloud of a single articulated object, the AO-Grasp Model predicts the best
grasp points on the object with an Actionable Grasp Point Predictor. Then, it
finds corresponding grasp orientations for each of these points, resulting in
stable and actionable grasp proposals. We train the AO-Grasp Model on our new
AO-Grasp Dataset, which contains 78K actionable parallel-jaw grasps on
synthetic articulated objects. In simulation, AO-Grasp achieves a 45.0% grasp
success rate, whereas the highest performing baseline achieves a 35.0% success
rate. Additionally, we evaluate AO-Grasp on 120 real-world scenes of objects
with varied geometries, articulation axes, and joint states, where AO-Grasp
produces successful grasps on 67.5% of scenes, while the baseline only produces
successful grasps on 33.3% of scenes. To the best of our knowledge, AO-Grasp is
the first method for generating 6 DoF grasps on articulated objects directly
from partial point clouds without requiring part detection or hand-designed
grasp heuristics. Project website: https://stanford-iprl-lab.github.io/ao-grasp
|
2311.16917 | Jiaxin Lu | Jiaxin Lu, Hao Kang, Haoxiang Li, Bo Liu, Yiding Yang, Qixing Huang,
Gang Hua | UGG: Unified Generative Grasping | 17 pages, 14 figures, ECCV 2024 | null | 10.1007/978-3-031-72855-6_24 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dexterous grasping aims to produce diverse grasping postures with a high
grasping success rate. Regression-based methods that directly predict grasping
parameters given the object may achieve a high success rate but often lack
diversity. Generation-based methods that generate grasping postures conditioned
on the object can often produce diverse grasps, but they struggle to achieve
high grasping success due to a lack of discriminative information. To mitigate
this, we introduce a unified diffusion-based dexterous grasp generation model,
dubbed UGG, which operates within the object point cloud and hand parameter
spaces. Our all-transformer architecture unifies the information from the
object, the hand, and the contacts, introducing a novel representation of
contact points for improved contact modeling. The flexibility and quality of
our model enable the integration of a lightweight discriminator, benefiting
from simulated discriminative data, which pushes for a high success rate while
preserving high diversity. Beyond grasp generation, our model can also generate
objects based on hand information, offering valuable insights into object
design and studying how the generative model perceives objects. Our model
achieves state-of-the-art dexterous grasping on the large-scale DexGraspNet
dataset while facilitating human-centric object design, marking a significant
advancement in dexterous grasping research. Our project page is
https://jiaxin-lu.github.io/ugg/.
| [
{
"version": "v1",
"created": "Tue, 28 Nov 2023 16:20:33 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Jul 2024 17:59:14 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Lu",
"Jiaxin",
""
],
[
"Kang",
"Hao",
""
],
[
"Li",
"Haoxiang",
""
],
[
"Liu",
"Bo",
""
],
[
"Yang",
"Yiding",
""
],
[
"Huang",
"Qixing",
""
],
[
"Hua",
"Gang",
""
]
] | TITLE: UGG: Unified Generative Grasping
ABSTRACT: Dexterous grasping aims to produce diverse grasping postures with a high
grasping success rate. Regression-based methods that directly predict grasping
parameters given the object may achieve a high success rate but often lack
diversity. Generation-based methods that generate grasping postures conditioned
on the object can often produce diverse grasps, but they struggle to achieve
high grasping success due to a lack of discriminative information. To mitigate
this, we introduce a unified diffusion-based dexterous grasp generation model,
dubbed UGG, which operates within the object point cloud and hand parameter
spaces. Our all-transformer architecture unifies the information from the
object, the hand, and the contacts, introducing a novel representation of
contact points for improved contact modeling. The flexibility and quality of
our model enable the integration of a lightweight discriminator, benefiting
from simulated discriminative data, which pushes for a high success rate while
preserving high diversity. Beyond grasp generation, our model can also generate
objects based on hand information, offering valuable insights into object
design and studying how the generative model perceives objects. Our model
achieves state-of-the-art dexterous grasping on the large-scale DexGraspNet
dataset while facilitating human-centric object design, marking a significant
advancement in dexterous grasping research. Our project page is
https://jiaxin-lu.github.io/ugg/.
|
2312.00123 | Joschka Birk | Joschka Birk, Erik Buhmann, Cedric Ewen, Gregor Kasieczka, David Shih | Flow Matching Beyond Kinematics: Generating Jets with Particle-ID and
Trajectory Displacement Information | null | Phys. Rev. D 111, 052008 (2025) | 10.1103/PhysRevD.111.052008 | null | hep-ph cs.LG hep-ex physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the first generative model trained on the JetClass dataset. Our
model generates jets at the constituent level, and it is a
permutation-equivariant continuous normalizing flow (CNF) trained with the flow
matching technique. It is conditioned on the jet type, so that a single model
can be used to generate the ten different jet types of JetClass. For the first
time, we also introduce a generative model that goes beyond the kinematic
features of jet constituents. The JetClass dataset includes more features, such
as particle-ID and track impact parameter, and we demonstrate that our CNF can
accurately model all of these additional features as well. Our generative model
for JetClass expands on the versatility of existing jet generation techniques,
enhancing their potential utility in high-energy physics research, and offering
a more comprehensive understanding of the generated jets.
| [
{
"version": "v1",
"created": "Thu, 30 Nov 2023 19:00:02 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 12:50:52 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Birk",
"Joschka",
""
],
[
"Buhmann",
"Erik",
""
],
[
"Ewen",
"Cedric",
""
],
[
"Kasieczka",
"Gregor",
""
],
[
"Shih",
"David",
""
]
] | TITLE: Flow Matching Beyond Kinematics: Generating Jets with Particle-ID and
Trajectory Displacement Information
ABSTRACT: We introduce the first generative model trained on the JetClass dataset. Our
model generates jets at the constituent level, and it is a
permutation-equivariant continuous normalizing flow (CNF) trained with the flow
matching technique. It is conditioned on the jet type, so that a single model
can be used to generate the ten different jet types of JetClass. For the first
time, we also introduce a generative model that goes beyond the kinematic
features of jet constituents. The JetClass dataset includes more features, such
as particle-ID and track impact parameter, and we demonstrate that our CNF can
accurately model all of these additional features as well. Our generative model
for JetClass expands on the versatility of existing jet generation techniques,
enhancing their potential utility in high-energy physics research, and offering
a more comprehensive understanding of the generated jets.
|
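The jet-generation abstract above trains a continuous normalizing flow with flow matching. Below is a minimal sketch of the standard flow-matching regression objective on toy vectors, assuming straight-line interpolation paths; the paper's permutation-equivariant architecture, jet-type conditioning, and particle-ID features are not reproduced, and the network here is a placeholder MLP.

```python
import torch
import torch.nn as nn

# Toy velocity-field network: input = noisy sample + time, output = velocity.
dim = 4
net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def flow_matching_step(x1):
    """One optimisation step of the standard flow-matching objective.

    x1: batch of data samples. We draw noise x0, pick a random time t,
    form the straight-line interpolant x_t, and regress the network's
    velocity prediction onto the constant target velocity x1 - x0.
    """
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.shape[0], 1)
    xt = (1 - t) * x0 + t * x1
    target = x1 - x0
    pred = net(torch.cat([xt, t], dim=1))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

for _ in range(100):
    loss = flow_matching_step(torch.randn(256, dim) * 0.5 + 1.0)
print(f"final loss: {loss:.3f}")
```

At sampling time, one would integrate the learned velocity field from noise at t=0 to data at t=1, for example with a simple Euler loop.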
2312.11232 | J\'er\'emy Scanvic | J\'er\'emy Scanvic, Mike Davies, Patrice Abry, Juli\'an Tachella | Scale-Equivariant Imaging: Self-Supervised Learning for Image
Super-Resolution and Deblurring | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-supervised methods have recently proved to be nearly as effective as
supervised ones in various imaging inverse problems, paving the way for
learning-based approaches in scientific and medical imaging applications where
ground truth data is hard or expensive to obtain. These methods critically rely
on invariance to translations and/or rotations of the image distribution to
learn from incomplete measurement data alone. However, existing approaches fail
to obtain competitive performances in the problems of image super-resolution
and deblurring, which play a key role in most imaging systems. In this work, we
show that invariance to roto-translations is insufficient to learn from
measurements that only contain low-frequency information. Instead, we propose
scale-equivariant imaging, a new self-supervised approach that leverages the
fact that many image distributions are approximately scale-invariant, enabling
the recovery of high-frequency information lost in the measurement process. We
demonstrate throughout a series of experiments on real datasets that the
proposed method outperforms other self-supervised approaches, and obtains
performance on par with fully supervised learning.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 14:30:54 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Mar 2024 17:05:57 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 13:34:53 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Scanvic",
"Jérémy",
""
],
[
"Davies",
"Mike",
""
],
[
"Abry",
"Patrice",
""
],
[
"Tachella",
"Julián",
""
]
] | TITLE: Scale-Equivariant Imaging: Self-Supervised Learning for Image
Super-Resolution and Deblurring
ABSTRACT: Self-supervised methods have recently proved to be nearly as effective as
supervised ones in various imaging inverse problems, paving the way for
learning-based approaches in scientific and medical imaging applications where
ground truth data is hard or expensive to obtain. These methods critically rely
on invariance to translations and/or rotations of the image distribution to
learn from incomplete measurement data alone. However, existing approaches fail
to obtain competitive performances in the problems of image super-resolution
and deblurring, which play a key role in most imaging systems. In this work, we
show that invariance to roto-translations is insufficient to learn from
measurements that only contain low-frequency information. Instead, we propose
scale-equivariant imaging, a new self-supervised approach that leverages the
fact that many image distributions are approximately scale-invariant, enabling
the recovery of high-frequency information lost in the measurement process. We
demonstrate throughout a series of experiments on real datasets that the
proposed method outperforms other self-supervised approaches, and obtains
performance on par with fully supervised learning.
|
2401.01128 | Jianzhi Liu | Weijin Cheng, Jianzhi Liu, Jiawen Deng, Fuji Ren | SSP: A Simple and Safe automatic Prompt engineering method towards
realistic image synthesis on LVM | 10 pages, 8 figures | 2024 IEEE International Conference on Systems, Man, and
Cybernetics (SMC) | 10.1109/SMC54092.2024.10832083 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, text-to-image (T2I) synthesis has undergone significant
advancements, particularly with the emergence of Large Language Models (LLM)
and their enhancement in Large Vision Models (LVM), greatly enhancing the
instruction-following capabilities of traditional T2I models. Nevertheless,
previous methods focus on improving generation quality but introduce unsafe
factors into prompts. We find that appending specific camera descriptions to
prompts can enhance safety performance. Consequently, we propose a simple and
safe prompt engineering method (SSP) to improve image generation quality by
providing optimal camera descriptions. Specifically, we create a dataset from
multi-datasets as original prompts. To select the optimal camera, we design an
optimal camera matching approach and implement a classifier for original
prompts capable of automatically matching. Appending camera descriptions to
original prompts generates optimized prompts for further LVM image generation.
Experiments demonstrate that SSP improves semantic consistency by an average of
16% compared to other methods and safety metrics by 48.9%.
| [
{
"version": "v1",
"created": "Tue, 2 Jan 2024 09:51:39 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Cheng",
"Weijin",
""
],
[
"Liu",
"Jianzhi",
""
],
[
"Deng",
"Jiawen",
""
],
[
"Ren",
"Fuji",
""
]
] | TITLE: SSP: A Simple and Safe automatic Prompt engineering method towards
realistic image synthesis on LVM
ABSTRACT: Recently, text-to-image (T2I) synthesis has undergone significant
advancements, particularly with the emergence of Large Language Models (LLM)
and their enhancement in Large Vision Models (LVM), greatly enhancing the
instruction-following capabilities of traditional T2I models. Nevertheless,
previous methods focus on improving generation quality but introduce unsafe
factors into prompts. We find that appending specific camera descriptions to
prompts can enhance safety performance. Consequently, we propose a simple and
safe prompt engineering method (SSP) to improve image generation quality by
providing optimal camera descriptions. Specifically, we create a dataset from
multi-datasets as original prompts. To select the optimal camera, we design an
optimal camera matching approach and implement a classifier for original
prompts capable of automatically matching. Appending camera descriptions to
original prompts generates optimized prompts for further LVM image generation.
Experiments demonstrate that SSP improves semantic consistency by an average of
16% compared to other methods and safety metrics by 48.9%.
|
2401.02739 | Yingheng Wang | Wasu Top Piriyakulkij, Yingheng Wang, Volodymyr Kuleshov | Denoising Diffusion Variational Inference: Diffusion Models as
Expressive Variational Posteriors | published at AAAI 2025; code available at
https://github.com/topwasu/DDVI | null | null | null | cs.LG q-bio.QM stat.ML | http://creativecommons.org/licenses/by/4.0/ | We propose denoising diffusion variational inference (DDVI), a black-box
variational inference algorithm for latent variable models which relies on
diffusion models as flexible approximate posteriors. Specifically, our method
introduces an expressive class of diffusion-based variational posteriors that
perform iterative refinement in latent space; we train these posteriors with a
novel regularized evidence lower bound (ELBO) on the marginal likelihood
inspired by the wake-sleep algorithm. Our method is easy to implement (it fits
a regularized extension of the ELBO), is compatible with black-box variational
inference, and outperforms alternative classes of approximate posteriors based
on normalizing flows or adversarial networks. We find that DDVI improves
inference and learning in deep latent variable models across common benchmarks
as well as on a motivating task in biology -- inferring latent ancestry from
human genomes -- where it outperforms strong baselines on the Thousand Genomes
dataset.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2024 10:27:44 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Feb 2024 15:50:35 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Oct 2024 20:42:02 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 23:22:46 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Piriyakulkij",
"Wasu Top",
""
],
[
"Wang",
"Yingheng",
""
],
[
"Kuleshov",
"Volodymyr",
""
]
] | TITLE: Denoising Diffusion Variational Inference: Diffusion Models as
Expressive Variational Posteriors
ABSTRACT: We propose denoising diffusion variational inference (DDVI), a black-box
variational inference algorithm for latent variable models which relies on
diffusion models as flexible approximate posteriors. Specifically, our method
introduces an expressive class of diffusion-based variational posteriors that
perform iterative refinement in latent space; we train these posteriors with a
novel regularized evidence lower bound (ELBO) on the marginal likelihood
inspired by the wake-sleep algorithm. Our method is easy to implement (it fits
a regularized extension of the ELBO), is compatible with black-box variational
inference, and outperforms alternative classes of approximate posteriors based
on normalizing flows or adversarial networks. We find that DDVI improves
inference and learning in deep latent variable models across common benchmarks
as well as on a motivating task in biology -- inferring latent ancestry from
human genomes -- where it outperforms strong baselines on the Thousand Genomes
dataset.
|
2401.08351 | Mahrokh Ghoddousi Boroujeni | Mahrokh Ghoddousi Boroujeni, Andreas Krause, Giancarlo Ferrari Trecate | Personalized Federated Learning of Probabilistic Models: A PAC-Bayesian
Approach | null | Boroujeni, M. G., Krause, A., & Ferrari-Trecate, G. (2025).
Personalized Federated Learning of Probabilistic Models: A PAC-Bayesian
Approach. Transactions on Machine Learning Research | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | Federated Learning (FL) aims to infer a shared model from private and
decentralized data stored by multiple clients. Personalized FL (PFL) enhances
the model's fit for each client by adapting the global model to the clients. A
significant level of personalization is required for highly heterogeneous
clients but can be challenging to achieve, especially when clients' datasets
are small. To address this issue, we introduce the PAC-PFL framework for PFL of
probabilistic models. PAC-PFL infers a shared hyper-posterior and treats each
client's posterior inference as the personalization step. Unlike previous PFL
algorithms, PAC-PFL does not regularize all personalized models towards a
single shared model, thereby greatly enhancing its personalization flexibility.
By establishing and minimizing a PAC-Bayesian generalization bound on the
average true loss of clients, PAC-PFL effectively mitigates overfitting even in
data-poor scenarios. Additionally, PAC-PFL provides generalization bounds for
new clients joining later. PAC-PFL achieves accurate and well-calibrated
predictions, as supported by our experiments.
| [
{
"version": "v1",
"created": "Tue, 16 Jan 2024 13:30:37 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 13:19:10 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Boroujeni",
"Mahrokh Ghoddousi",
""
],
[
"Krause",
"Andreas",
""
],
[
"Trecate",
"Giancarlo Ferrari",
""
]
] | TITLE: Personalized Federated Learning of Probabilistic Models: A PAC-Bayesian
Approach
ABSTRACT: Federated Learning (FL) aims to infer a shared model from private and
decentralized data stored by multiple clients. Personalized FL (PFL) enhances
the model's fit for each client by adapting the global model to the clients. A
significant level of personalization is required for highly heterogeneous
clients but can be challenging to achieve, especially when clients' datasets
are small. To address this issue, we introduce the PAC-PFL framework for PFL of
probabilistic models. PAC-PFL infers a shared hyper-posterior and treats each
client's posterior inference as the personalization step. Unlike previous PFL
algorithms, PAC-PFL does not regularize all personalized models towards a
single shared model, thereby greatly enhancing its personalization flexibility.
By establishing and minimizing a PAC-Bayesian generalization bound on the
average true loss of clients, PAC-PFL effectively mitigates overfitting even in
data-poor scenarios. Additionally, PAC-PFL provides generalization bounds for
new clients joining later. PAC-PFL achieves accurate and well-calibrated
predictions, as supported by our experiments.
|
2402.02242 | Yi Xin | Yi Xin, Jianjiang Yang, Siqi Luo, Haodi Zhou, Junlong Du, Xiaohong
Liu, Yue Fan, Qing Li, Yuntao Du | Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey | 9 pages, 3 figures, 2 tables | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale pre-trained vision models (PVMs) have shown great potential for
adaptability across various downstream vision tasks. However, with
state-of-the-art PVMs growing to billions or even trillions of parameters, the
standard full fine-tuning paradigm is becoming unsustainable due to high
computational and storage demands. In response, researchers are exploring
parameter-efficient fine-tuning (PEFT), which seeks to exceed the performance
of full fine-tuning with minimal parameter modifications. This survey provides
a comprehensive overview and future directions for visual PEFT, offering a
systematic review of the latest advancements. First, we provide a formal
definition of PEFT and discuss model pre-training methods. We then categorize
existing methods into three categories: addition-based, partial-based, and
unified-based. Finally, we introduce the commonly used datasets and
applications and suggest potential future research challenges. A comprehensive
collection of resources is available at
https://github.com/synbol/Awesome-Parameter-Efficient-Transfer-Learning.
| [
{
"version": "v1",
"created": "Sat, 3 Feb 2024 19:12:20 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Feb 2024 08:17:57 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 04:37:33 GMT"
},
{
"version": "v4",
"created": "Wed, 26 Mar 2025 05:36:30 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Xin",
"Yi",
""
],
[
"Yang",
"Jianjiang",
""
],
[
"Luo",
"Siqi",
""
],
[
"Zhou",
"Haodi",
""
],
[
"Du",
"Junlong",
""
],
[
"Liu",
"Xiaohong",
""
],
[
"Fan",
"Yue",
""
],
[
"Li",
"Qing",
""
],
[
"Du",
"Yuntao",
""
]
] | TITLE: Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey
ABSTRACT: Large-scale pre-trained vision models (PVMs) have shown great potential for
adaptability across various downstream vision tasks. However, with
state-of-the-art PVMs growing to billions or even trillions of parameters, the
standard full fine-tuning paradigm is becoming unsustainable due to high
computational and storage demands. In response, researchers are exploring
parameter-efficient fine-tuning (PEFT), which seeks to exceed the performance
of full fine-tuning with minimal parameter modifications. This survey provides
a comprehensive overview and future directions for visual PEFT, offering a
systematic review of the latest advancements. First, we provide a formal
definition of PEFT and discuss model pre-training methods. We then categorize
existing methods into three categories: addition-based, partial-based, and
unified-based. Finally, we introduce the commonly used datasets and
applications and suggest potential future research challenges. A comprehensive
collection of resources is available at
https://github.com/synbol/Awesome-Parameter-Efficient-Transfer-Learning.
|
2402.13065 | EPTCS | Luca Mondada (University of Oxford), Pablo Andr\'es-Mart\'inez
(Quantinuum Ltd) | Scalable Pattern Matching in Computation Graphs | In Proceedings GCM 2023 and 2024, arXiv:2503.19632 | EPTCS 417, 2025, pp. 71-95 | 10.4204/EPTCS.417.5 | null | cs.DS math.CO quant-ph | http://creativecommons.org/licenses/by/4.0/ | Graph rewriting is a popular tool for the optimisation and modification of
graph expressions in domains such as compilers, machine learning and quantum
computing. The underlying data structures are often port graphs - graphs with
labels at edge endpoints. A pre-requisite for graph rewriting is the ability to
find graph patterns. We propose a new solution to pattern matching in port
graphs. Its novelty lies in the use of a pre-computed data structure that makes
the pattern matching runtime complexity independent of the number of patterns.
This offers a significant advantage over existing solutions for use cases with
large sets of small patterns.
Our approach is particularly well-suited for quantum superoptimisation. We
provide an implementation and benchmarks showing that our algorithm offers a
20x speedup over current implementations on a dataset of 10,000 real-world
patterns describing quantum circuits.
| [
{
"version": "v1",
"created": "Tue, 20 Feb 2024 15:02:24 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 11:51:45 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Mondada",
"Luca",
"",
"University of Oxford"
],
[
"Andrés-Martínez",
"Pablo",
"",
"Quantinuum Ltd"
]
] | TITLE: Scalable Pattern Matching in Computation Graphs
ABSTRACT: Graph rewriting is a popular tool for the optimisation and modification of
graph expressions in domains such as compilers, machine learning and quantum
computing. The underlying data structures are often port graphs - graphs with
labels at edge endpoints. A pre-requisite for graph rewriting is the ability to
find graph patterns. We propose a new solution to pattern matching in port
graphs. Its novelty lies in the use of a pre-computed data structure that makes
the pattern matching runtime complexity independent of the number of patterns.
This offers a significant advantage over existing solutions for use cases with
large sets of small patterns.
Our approach is particularly well-suited for quantum superoptimisation. We
provide an implementation and benchmarks showing that our algorithm offers a
20x speedup over current implementations on a dataset of 10,000 real-world
patterns describing quantum circuits.
|
2402.18205 | Wei Zhang | Wei Zhang, Xiangyuan Guan, Lu Yunhong, Jie Zhang, Shuangyong Song,
Xianfu Cheng, Zhenhe Wu, Zhoujun Li | Lemur: Log Parsing with Entropy Sampling and Chain-of-Thought Merging | null | null | null | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logs produced by extensive software systems are integral to monitoring system
behaviors. Advanced log analysis facilitates the detection, alerting, and
diagnosis of system faults. Log parsing, which entails transforming raw log
messages into structured templates, constitutes a critical phase in the
automation of log analytics. Existing log parsers fail to identify the correct
templates due to reliance on human-made rules. Besides, these methods focus on
statistical features while ignoring semantic information in log messages. To
address these challenges, we introduce a cutting-edge \textbf{L}og parsing
framework with \textbf{E}ntropy sampling and chain-of-thought \textbf{M}erging
(\model{}). Specifically, to discard the tedious manual rules, we propose a
novel sampling method inspired by information entropy, which efficiently
clusters typical logs. Furthermore, to enhance the merging of log templates, we
design a chain-of-thought method for large language models (LLMs). LLMs exhibit
exceptional semantic comprehension and deftly distinguish between parameters
and invariant tokens. We have conducted experiments on large-scale public
datasets. Extensive evaluation demonstrates that \model{} achieves
state-of-the-art performance and impressive efficiency. The Code is available
at https://github.com/zwpride/lemur.
| [
{
"version": "v1",
"created": "Wed, 28 Feb 2024 09:51:55 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Mar 2024 03:47:13 GMT"
},
{
"version": "v3",
"created": "Tue, 31 Dec 2024 16:14:51 GMT"
},
{
"version": "v4",
"created": "Wed, 8 Jan 2025 15:18:15 GMT"
},
{
"version": "v5",
"created": "Wed, 26 Mar 2025 08:55:05 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhang",
"Wei",
""
],
[
"Guan",
"Xiangyuan",
""
],
[
"Yunhong",
"Lu",
""
],
[
"Zhang",
"Jie",
""
],
[
"Song",
"Shuangyong",
""
],
[
"Cheng",
"Xianfu",
""
],
[
"Wu",
"Zhenhe",
""
],
[
"Li",
"Zhoujun",
""
]
] | TITLE: Lemur: Log Parsing with Entropy Sampling and Chain-of-Thought Merging
ABSTRACT: Logs produced by extensive software systems are integral to monitoring system
behaviors. Advanced log analysis facilitates the detection, alerting, and
diagnosis of system faults. Log parsing, which entails transforming raw log
messages into structured templates, constitutes a critical phase in the
automation of log analytics. Existing log parsers fail to identify the correct
templates due to their reliance on human-made rules. Moreover, these methods
focus on statistical features while ignoring the semantic information in log
messages. To
address these challenges, we introduce a cutting-edge \textbf{L}og parsing
framework with \textbf{E}ntropy sampling and chain-of-thought \textbf{M}erging
(Lemur). Specifically, to discard the tedious manual rules, we propose a
novel sampling method inspired by information entropy, which efficiently
clusters typical logs. Furthermore, to enhance the merging of log templates, we
design a chain-of-thought method for large language models (LLMs). LLMs exhibit
exceptional semantic comprehension and deftly distinguish between parameters
and invariant tokens. We have conducted experiments on large-scale public
datasets. Extensive evaluation demonstrates that Lemur achieves
state-of-the-art performance and impressive efficiency. The code is available
at https://github.com/zwpride/lemur.
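A minimal sketch of entropy-guided log sampling in the spirit described above,
not the authors' implementation; the scoring rule (per-message token entropy)
and function names are illustrative assumptions:

```python
import math
from collections import Counter

def token_entropy(log_line: str) -> float:
    """Shannon entropy of the token distribution within one log message."""
    tokens = log_line.split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_sample(logs: list[str], k: int) -> list[str]:
    """Keep the k most 'informative' logs by descending token entropy."""
    return sorted(logs, key=token_entropy, reverse=True)[:k]

if __name__ == "__main__":
    logs = [
        "Connection from 10.0.0.1 closed",
        "Connection from 10.0.0.2 closed",
        "Disk /dev/sda1 usage at 91 percent, threshold is 90 percent",
    ]
    print(entropy_sample(logs, 2))
```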
|
2403.07746 | Philipp Wolters | Philipp Wolters, Johannes Gilg, Torben Teepe, Fabian Herzog, Anouar
Laouichi, Martin Hofmann, Gerhard Rigoll | Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified
3D Perception | Accepted to ICRA 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low-cost, vision-centric 3D perception systems for autonomous driving have
made significant progress in recent years, narrowing the gap to expensive
LiDAR-based methods. The primary challenge in becoming a fully reliable
alternative lies in robust depth prediction capabilities, as camera-based
systems struggle with long detection ranges and adverse lighting and weather
conditions. In this work, we introduce HyDRa, a novel camera-radar fusion
architecture for diverse 3D perception tasks. Building upon the principles of
dense BEV (Bird's Eye View)-based architectures, HyDRa introduces a hybrid
fusion approach to combine the strengths of complementary camera and radar
features in two distinct representation spaces. Our Height Association
Transformer module leverages radar features already in the perspective view to
produce more robust and accurate depth predictions. In the BEV, we refine the
initial sparse representation by a Radar-weighted Depth Consistency. HyDRa
achieves a new state-of-the-art for camera-radar fusion of 64.2 NDS (+1.8) and
58.4 AMOTA (+1.5) on the public nuScenes dataset. Moreover, our new
semantically rich and spatially accurate BEV features can be directly converted
into a powerful occupancy representation, beating all previous camera-based
methods on the Occ3D benchmark by an impressive 3.7 mIoU. Code and models are
available at https://github.com/phi-wol/hydra.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 15:28:51 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jun 2024 13:34:38 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 15:35:06 GMT"
},
{
"version": "v4",
"created": "Wed, 26 Mar 2025 08:48:13 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wolters",
"Philipp",
""
],
[
"Gilg",
"Johannes",
""
],
[
"Teepe",
"Torben",
""
],
[
"Herzog",
"Fabian",
""
],
[
"Laouichi",
"Anouar",
""
],
[
"Hofmann",
"Martin",
""
],
[
"Rigoll",
"Gerhard",
""
]
] | TITLE: Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified
3D Perception
ABSTRACT: Low-cost, vision-centric 3D perception systems for autonomous driving have
made significant progress in recent years, narrowing the gap to expensive
LiDAR-based methods. The primary challenge in becoming a fully reliable
alternative lies in robust depth prediction capabilities, as camera-based
systems struggle with long detection ranges and adverse lighting and weather
conditions. In this work, we introduce HyDRa, a novel camera-radar fusion
architecture for diverse 3D perception tasks. Building upon the principles of
dense BEV (Bird's Eye View)-based architectures, HyDRa introduces a hybrid
fusion approach to combine the strengths of complementary camera and radar
features in two distinct representation spaces. Our Height Association
Transformer module leverages radar features already in the perspective view to
produce more robust and accurate depth predictions. In the BEV, we refine the
initial sparse representation by a Radar-weighted Depth Consistency. HyDRa
achieves a new state-of-the-art for camera-radar fusion of 64.2 NDS (+1.8) and
58.4 AMOTA (+1.5) on the public nuScenes dataset. Moreover, our new
semantically rich and spatially accurate BEV features can be directly converted
into a powerful occupancy representation, beating all previous camera-based
methods on the Occ3D benchmark by an impressive 3.7 mIoU. Code and models are
available at https://github.com/phi-wol/hydra.
|
2403.10039 | Yang Liu | Yang Liu, Peiran Wu, Jiayu Huo, Gongyu Zhang, Zhen Yuan, Christos
Bergeles, Rachel Sparks, Prokar Dasgupta, Alejandro Granados, and Sebastien
Ourselin | Motion-Boundary-Driven Unsupervised Surgical Instrument Segmentation in
Low-Quality Optical Flow | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised video-based surgical instrument segmentation has the potential
to accelerate the adoption of robot-assisted procedures by reducing the
reliance on manual annotations. However, the generally low quality of optical
flow in endoscopic footage poses a great challenge for unsupervised methods
that rely heavily on motion cues. To overcome this limitation, we propose a
novel approach that pinpoints motion boundaries, regions with abrupt flow
changes, while selectively discarding frames with globally low-quality flow and
adapting to varying motion patterns. Experiments on the EndoVis2017 VOS and
EndoVis2017 Challenge datasets show that our method achieves mean
Intersection-over-Union (mIoU) scores of 0.75 and 0.72, respectively,
effectively alleviating the constraints imposed by suboptimal optical flow.
This enables a more scalable and robust surgical instrument segmentation
solution in clinical settings. The code will be publicly released.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 06:19:02 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 20:18:43 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Liu",
"Yang",
""
],
[
"Wu",
"Peiran",
""
],
[
"Huo",
"Jiayu",
""
],
[
"Zhang",
"Gongyu",
""
],
[
"Yuan",
"Zhen",
""
],
[
"Bergeles",
"Christos",
""
],
[
"Sparks",
"Rachel",
""
],
[
"Dasgupta",
"Prokar",
""
],
[
"Granados",
"Alejandro",
""
],
[
"Ourselin",
"Sebastien",
""
]
] | TITLE: Motion-Boundary-Driven Unsupervised Surgical Instrument Segmentation in
Low-Quality Optical Flow
ABSTRACT: Unsupervised video-based surgical instrument segmentation has the potential
to accelerate the adoption of robot-assisted procedures by reducing the
reliance on manual annotations. However, the generally low quality of optical
flow in endoscopic footage poses a great challenge for unsupervised methods
that rely heavily on motion cues. To overcome this limitation, we propose a
novel approach that pinpoints motion boundaries, regions with abrupt flow
changes, while selectively discarding frames with globally low-quality flow and
adapting to varying motion patterns. Experiments on the EndoVis2017 VOS and
EndoVis2017 Challenge datasets show that our method achieves mean
Intersection-over-Union (mIoU) scores of 0.75 and 0.72, respectively,
effectively alleviating the constraints imposed by suboptimal optical flow.
This enables a more scalable and robust surgical instrument segmentation
solution in clinical settings. The code will be publicly released.
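A rough sketch of how motion boundaries might be located as regions of abrupt
flow change, assuming a dense optical-flow field is already computed; the
thresholds and names are illustrative, not the paper's method:

```python
import numpy as np

def motion_boundary_mask(flow: np.ndarray, grad_thresh: float = 2.0,
                         min_mean_mag: float = 0.5):
    """flow: (H, W, 2) dense optical flow (u, v).
    Returns a boolean motion-boundary mask, or None if the frame's flow
    is globally too weak to be trusted (frame would be discarded)."""
    mag = np.linalg.norm(flow, axis=-1)
    if mag.mean() < min_mean_mag:          # globally low-quality flow
        return None
    gu_y, gu_x = np.gradient(flow[..., 0])
    gv_y, gv_x = np.gradient(flow[..., 1])
    grad_mag = np.sqrt(gu_x**2 + gu_y**2 + gv_x**2 + gv_y**2)
    return grad_mag > grad_thresh          # abrupt changes in the flow field
```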
|
2403.17790 | Mahrokh Ghoddousi Boroujeni | Mahrokh Ghoddousi Boroujeni, Clara Luc\'ia Galimberti, Andreas Krause,
Giancarlo Ferrari-Trecate | A PAC-Bayesian Framework for Optimal Control with Stability Guarantees | null | null | 10.1109/CDC56724.2024.10886285 | null | eess.SY cs.SY | http://creativecommons.org/licenses/by/4.0/ | Stochastic Nonlinear Optimal Control (SNOC) involves minimizing a cost
function that averages out the random uncertainties affecting the dynamics of
nonlinear systems. For tractability reasons, this problem is typically
addressed by minimizing an empirical cost, which represents the average cost
across a finite dataset of sampled disturbances. However, this approach raises
the challenge of quantifying the control performance against out-of-sample
uncertainties. Particularly, in scenarios where the training dataset is small,
SNOC policies are prone to overfitting, resulting in significant discrepancies
between the empirical cost and the true cost, i.e., the average SNOC cost
incurred during control deployment. Therefore, establishing generalization
bounds on the true cost is crucial for ensuring reliability in real-world
applications. In this paper, we introduce a novel approach that leverages
PAC-Bayes theory to provide rigorous generalization bounds for SNOC. Based on
these bounds, we propose a new method for designing optimal controllers,
offering a principled way to incorporate prior knowledge into the synthesis
process, which aids in improving the control policy and mitigating overfitting.
Furthermore, by leveraging recent parametrizations of stabilizing controllers
for nonlinear systems, our framework inherently ensures closed-loop stability.
The effectiveness of our proposed method in incorporating prior knowledge and
combating overfitting is shown by designing neural network controllers for
tasks in cooperative robotics.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 15:21:18 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Dec 2024 11:04:52 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 13:55:18 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Boroujeni",
"Mahrokh Ghoddousi",
""
],
[
"Galimberti",
"Clara Lucía",
""
],
[
"Krause",
"Andreas",
""
],
[
"Ferrari-Trecate",
"Giancarlo",
""
]
] | TITLE: A PAC-Bayesian Framework for Optimal Control with Stability Guarantees
ABSTRACT: Stochastic Nonlinear Optimal Control (SNOC) involves minimizing a cost
function that averages out the random uncertainties affecting the dynamics of
nonlinear systems. For tractability reasons, this problem is typically
addressed by minimizing an empirical cost, which represents the average cost
across a finite dataset of sampled disturbances. However, this approach raises
the challenge of quantifying the control performance against out-of-sample
uncertainties. Particularly, in scenarios where the training dataset is small,
SNOC policies are prone to overfitting, resulting in significant discrepancies
between the empirical cost and the true cost, i.e., the average SNOC cost
incurred during control deployment. Therefore, establishing generalization
bounds on the true cost is crucial for ensuring reliability in real-world
applications. In this paper, we introduce a novel approach that leverages
PAC-Bayes theory to provide rigorous generalization bounds for SNOC. Based on
these bounds, we propose a new method for designing optimal controllers,
offering a principled way to incorporate prior knowledge into the synthesis
process, which aids in improving the control policy and mitigating overfitting.
Furthermore, by leveraging recent parametrizations of stabilizing controllers
for nonlinear systems, our framework inherently ensures closed-loop stability.
The effectiveness of our proposed method in incorporating prior knowledge and
combating overfitting is shown by designing neural network controllers for
tasks in cooperative robotics.
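For orientation, a classical PAC-Bayes bound of the kind such frameworks build
on (McAllester/Maurer-style; generic notation, not the paper's exact
statement): with probability at least 1 - \delta over n i.i.d. samples,
simultaneously for every posterior Q and a fixed prior P,

```latex
\mathbb{E}_{h \sim Q}\!\left[L(h)\right] \;\le\;
\mathbb{E}_{h \sim Q}\!\left[\widehat{L}_n(h)\right]
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}} .
```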
|
2404.04910 | Hou-I Liu | Hou-I Liu, Christine Wu, Jen-Hao Cheng, Wenhao Chai, Shian-Yun Wang,
Gaowen Liu, Hugo Latapie, Jhih-Ciang Wu, Jenq-Neng Hwang, Hong-Han Shuai and
Wen-Huang Cheng | MonoTAKD: Teaching Assistant Knowledge Distillation for Monocular 3D
Object Detection | Accepted by CVPR 2025. Our code is available at
https://github.com/hoiliu-0801/MonoTAKD | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monocular 3D object detection (Mono3D) holds noteworthy promise for
autonomous driving applications owing to the cost-effectiveness and rich visual
context of monocular camera sensors. However, depth ambiguity poses a
significant challenge, as it requires extracting precise 3D scene geometry from
a single image, resulting in suboptimal performance when transferring knowledge
from a LiDAR-based teacher model to a camera-based student model. To facilitate
effective distillation, we introduce Monocular Teaching Assistant Knowledge
Distillation (MonoTAKD), which proposes a camera-based teaching assistant (TA)
model to transfer robust 3D visual knowledge to the student model, leveraging
the smaller feature representation gap. Additionally, we define 3D spatial cues
as residual features that capture the differences between the teacher and the
TA models. We then leverage these cues to improve the student model's 3D
perception capabilities. Experimental results show that our MonoTAKD achieves
state-of-the-art performance on the KITTI3D dataset. Furthermore, we evaluate
the performance on nuScenes and KITTI raw datasets to demonstrate the
generalization of our model to multi-view 3D and unsupervised data settings.
Our code is available at https://github.com/hoiliu-0801/MonoTAKD.
| [
{
"version": "v1",
"created": "Sun, 7 Apr 2024 10:39:04 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 02:56:48 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 04:08:02 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Liu",
"Hou-I",
""
],
[
"Wu",
"Christine",
""
],
[
"Cheng",
"Jen-Hao",
""
],
[
"Chai",
"Wenhao",
""
],
[
"Wang",
"Shian-Yun",
""
],
[
"Liu",
"Gaowen",
""
],
[
"Latapie",
"Hugo",
""
],
[
"Wu",
"Jhih-Ciang",
""
],
[
"Hwang",
"Jenq-Neng",
""
],
[
"Shuai",
"Hong-Han",
""
],
[
"Cheng",
"Wen-Huang",
""
]
] | TITLE: MonoTAKD: Teaching Assistant Knowledge Distillation for Monocular 3D
Object Detection
ABSTRACT: Monocular 3D object detection (Mono3D) holds noteworthy promise for
autonomous driving applications owing to the cost-effectiveness and rich visual
context of monocular camera sensors. However, depth ambiguity poses a
significant challenge, as it requires extracting precise 3D scene geometry from
a single image, resulting in suboptimal performance when transferring knowledge
from a LiDAR-based teacher model to a camera-based student model. To facilitate
effective distillation, we introduce Monocular Teaching Assistant Knowledge
Distillation (MonoTAKD), which proposes a camera-based teaching assistant (TA)
model to transfer robust 3D visual knowledge to the student model, leveraging
the smaller feature representation gap. Additionally, we define 3D spatial cues
as residual features that capture the differences between the teacher and the
TA models. We then leverage these cues to improve the student model's 3D
perception capabilities. Experimental results show that our MonoTAKD achieves
state-of-the-art performance on the KITTI3D dataset. Furthermore, we evaluate
the performance on nuScenes and KITTI raw datasets to demonstrate the
generalization of our model to multi-view 3D and unsupervised data settings.
Our code is available at https://github.com/hoiliu-0801/MonoTAKD.
|
2404.07943 | Yizheng Wang | Yizheng Wang, Xiang Li, Ziming Yan, Shuaifeng Ma, Jinshuai Bai, Bokai
Liu, Timon Rabczuk, Yinghua Liu | A Pretraining-Finetuning Computational Framework for Material
Homogenization | null | null | null | null | cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Homogenization is a fundamental tool for studying multiscale physical
phenomena. Traditional numerical homogenization methods, heavily reliant on
finite element analysis, demand significant computational resources, especially
for complex geometries, materials, and high-resolution problems. To address
these challenges, we propose PreFine-Homo, a novel numerical homogenization
framework comprising two phases: pretraining and fine-tuning. In the
pretraining phase, a Fourier Neural Operator (FNO) is trained on large datasets
to learn the mapping from input geometries and material properties to
displacement fields. In the fine-tuning phase, the pretrained predictions serve
as initial solutions for iterative algorithms, drastically reducing the number
of iterations needed for convergence. The pretraining phase of PreFine-Homo
delivers homogenization results up to 1000 times faster than conventional
methods, while the fine-tuning phase further enhances accuracy. Moreover, the
fine-tuning phase grants PreFine-Homo unlimited generalization capabilities,
enabling continuous learning and improvement as data availability increases. We
validate PreFine-Homo by predicting the effective elastic tensor for 3D
periodic materials, specifically Triply Periodic Minimal Surfaces (TPMS). The
results demonstrate that PreFine-Homo achieves high precision, exceptional
efficiency, robust learning capabilities, and strong extrapolation ability,
establishing it as a powerful tool for multiscale homogenization tasks.
| [
{
"version": "v1",
"created": "Mon, 18 Mar 2024 06:47:35 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 17:52:45 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wang",
"Yizheng",
""
],
[
"Li",
"Xiang",
""
],
[
"Yan",
"Ziming",
""
],
[
"Ma",
"Shuaifeng",
""
],
[
"Bai",
"Jinshuai",
""
],
[
"Liu",
"Bokai",
""
],
[
"Rabczuk",
"Timon",
""
],
[
"Liu",
"Yinghua",
""
]
] | TITLE: A Pretraining-Finetuning Computational Framework for Material
Homogenization
ABSTRACT: Homogenization is a fundamental tool for studying multiscale physical
phenomena. Traditional numerical homogenization methods, heavily reliant on
finite element analysis, demand significant computational resources, especially
for complex geometries, materials, and high-resolution problems. To address
these challenges, we propose PreFine-Homo, a novel numerical homogenization
framework comprising two phases: pretraining and fine-tuning. In the
pretraining phase, a Fourier Neural Operator (FNO) is trained on large datasets
to learn the mapping from input geometries and material properties to
displacement fields. In the fine-tuning phase, the pretrained predictions serve
as initial solutions for iterative algorithms, drastically reducing the number
of iterations needed for convergence. The pretraining phase of PreFine-Homo
delivers homogenization results up to 1000 times faster than conventional
methods, while the fine-tuning phase further enhances accuracy. Moreover, the
fine-tuning phase grants PreFine-Homo unlimited generalization capabilities,
enabling continuous learning and improvement as data availability increases. We
validate PreFine-Homo by predicting the effective elastic tensor for 3D
periodic materials, specifically Triply Periodic Minimal Surfaces (TPMS). The
results demonstrate that PreFine-Homo achieves high precision, exceptional
efficiency, robust learning capabilities, and strong extrapolation ability,
establishing it as a powerful tool for multiscale homogenization tasks.
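The pretrain-then-warm-start mechanism can be sketched generically: a learned
prediction seeds the initial iterate of an iterative solver, cutting the
iterations needed for convergence. The FNO itself is omitted; the
conjugate-gradient solve and the synthetic "prediction" below are stand-ins,
not PreFine-Homo's solver.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def solve_with_warm_start(A, b, x_pred):
    """Fine-tuning-phase sketch: use the pretrained prediction x_pred
    as the initial iterate of a conjugate-gradient solve."""
    iters = 0
    def count(_):                 # callback invoked once per CG iteration
        nonlocal iters
        iters += 1
    x, info = cg(A, b, x0=x_pred, callback=count)
    return x, iters

if __name__ == "__main__":
    n = 1000
    A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Laplacian
    b = np.ones(n)
    x_cold, it_cold = solve_with_warm_start(A, b, np.zeros(n))
    # Pretend the network's prediction is already close to the true solution:
    x_pred = x_cold + 1e-3 * np.random.randn(n)
    _, it_warm = solve_with_warm_start(A, b, x_pred)
    print(it_cold, it_warm)       # warm start converges in far fewer iterations
```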
|
2405.14132 | Zexi Li | Zexi Li, Lingzhi Gao, Chao Wu | Text-to-Model: Text-Conditioned Neural Network Diffusion for
Train-Once-for-All Personalization | Preprint | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative artificial intelligence (GenAI) has made significant progress in
understanding world knowledge and generating content from human languages
across various modalities, like text-to-text large language models,
text-to-image stable diffusion, and text-to-video Sora. In this paper, we
investigate the capability of GenAI for text-to-model generation, to see
whether GenAI can comprehend hyper-level knowledge embedded within the
parameters of AI itself. Specifically, we study a practical scenario termed
train-once-for-all personalization, aiming to generate personalized models for
diverse end-users and tasks using text prompts. Inspired by the recent
emergence of neural network diffusion, we present Tina, a text-conditioned
neural network diffusion for train-once-for-all personalization. Tina leverages
a diffusion transformer model conditioned on task descriptions embedded using a
CLIP model. Despite the astronomical number of potential personalized tasks
(e.g., $1.73\times10^{13}$), by our design, Tina demonstrates remarkable
in-distribution and out-of-distribution generalization even trained on small
datasets ($\sim 1000$). We further verify whether and how Tina understands
world knowledge by analyzing its capabilities under zero-shot/few-shot image
prompts, different numbers of personalized classes, prompts of natural language
descriptions, and predicting unseen entities.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 03:11:18 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 16:33:17 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Li",
"Zexi",
""
],
[
"Gao",
"Lingzhi",
""
],
[
"Wu",
"Chao",
""
]
] | TITLE: Text-to-Model: Text-Conditioned Neural Network Diffusion for
Train-Once-for-All Personalization
ABSTRACT: Generative artificial intelligence (GenAI) has made significant progress in
understanding world knowledge and generating content from human languages
across various modalities, like text-to-text large language models,
text-to-image stable diffusion, and text-to-video Sora. In this paper, we
investigate the capability of GenAI for text-to-model generation, to see
whether GenAI can comprehend hyper-level knowledge embedded within the
parameters of AI itself. Specifically, we study a practical scenario termed
train-once-for-all personalization, aiming to generate personalized models for
diverse end-users and tasks using text prompts. Inspired by the recent
emergence of neural network diffusion, we present Tina, a text-conditioned
neural network diffusion for train-once-for-all personalization. Tina leverages
a diffusion transformer model conditioned on task descriptions embedded using a
CLIP model. Despite the astronomical number of potential personalized tasks
(e.g., $1.73\times10^{13}$), by our design, Tina demonstrates remarkable
in-distribution and out-of-distribution generalization even trained on small
datasets ($\sim 1000$). We further verify whether and how Tina understands
world knowledge by analyzing its capabilities under zero-shot/few-shot image
prompts, different numbers of personalized classes, prompts of natural language
descriptions, and predicting unseen entities.
|
2405.17391 | Vitaly Vanchurin | Ekaterina Kukleva and Vitaly Vanchurin | Dataset-learning duality and emergent criticality | 22 pages, 5 figures, 1 table. Improved analysis; main results
unchanged | null | null | null | cs.LG cond-mat.dis-nn cond-mat.stat-mech cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In artificial neural networks, the activation dynamics of non-trainable
variables is strongly coupled to the learning dynamics of trainable variables.
During the activation pass, the boundary neurons (e.g., input neurons) are
mapped to the bulk neurons (e.g., hidden neurons), and during the learning
pass, both bulk and boundary neurons are mapped to changes in trainable
variables (e.g., weights and biases). For example, in feed-forward neural
networks, forward propagation is the activation pass and backward propagation
is the learning pass. We show that a composition of the two maps establishes a
duality map between a subspace of non-trainable boundary variables (e.g.,
dataset) and a tangent subspace of trainable variables (i.e., learning). In
general, the dataset-learning duality is a complex non-linear map between
high-dimensional spaces. We use duality to study the emergence of criticality,
or the power-law distribution of fluctuations of the trainable variables, using
a toy model at learning equilibrium. In particular, we show that criticality
can emerge in the learning system even from the dataset in a non-critical
state, and that the power-law distribution can be modified by changing either
the activation function or the loss function.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 17:44:33 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Aug 2024 15:29:52 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 22:39:21 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Kukleva",
"Ekaterina",
""
],
[
"Vanchurin",
"Vitaly",
""
]
] | TITLE: Dataset-learning duality and emergent criticality
ABSTRACT: In artificial neural networks, the activation dynamics of non-trainable
variables is strongly coupled to the learning dynamics of trainable variables.
During the activation pass, the boundary neurons (e.g., input neurons) are
mapped to the bulk neurons (e.g., hidden neurons), and during the learning
pass, both bulk and boundary neurons are mapped to changes in trainable
variables (e.g., weights and biases). For example, in feed-forward neural
networks, forward propagation is the activation pass and backward propagation
is the learning pass. We show that a composition of the two maps establishes a
duality map between a subspace of non-trainable boundary variables (e.g.,
dataset) and a tangent subspace of trainable variables (i.e., learning). In
general, the dataset-learning duality is a complex non-linear map between
high-dimensional spaces. We use duality to study the emergence of criticality,
or the power-law distribution of fluctuations of the trainable variables, using
a toy model at learning equilibrium. In particular, we show that criticality
can emerge in the learning system even from the dataset in a non-critical
state, and that the power-law distribution can be modified by changing either
the activation function or the loss function.
|
2406.06642 | Lev Telyatnikov | Lev Telyatnikov, Guillermo Bernardez, Marco Montagna, Mustafa Hajij,
Martin Carrasco, Pavlo Vasylenko, Mathilde Papillon, Ghada Zamzmi, Michael T.
Schaub, Jonas Verhellen, Pavel Snopov, Bertran Miquel-Oliver, Manel
Gil-Sorribes, Alexis Molina, Victor Guallar, Theodore Long, Julian Suk,
Patryk Rygiel, Alexander Nikitin, Giordan Escalona, Michael Banf, Dominik
Filipiak, Max Schattauer, Liliya Imasheva, Alvaro Martinez, Halley Fritze,
Marissa Masden, Valentina S\'anchez, Manuel Lecha, Andrea Cavallo, Claudio
Battiloro, Matt Piekenbrock, Mauricio Tec, George Dasoulas, Nina Miolane,
Simone Scardapane, Theodore Papamarkou | TopoBench: A Framework for Benchmarking Topological Deep Learning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | This work introduces TopoBench, an open-source library designed to
standardize benchmarking and accelerate research in topological deep learning
(TDL). TopoBench decomposes TDL into a sequence of independent modules for data
generation, loading, transforming and processing, as well as model training,
optimization and evaluation. This modular organization provides flexibility for
modifications and facilitates the adaptation and optimization of various TDL
pipelines. A key feature of TopoBench is its support for transformations and
lifting across topological domains. Mapping the topology and features of a
graph to higher-order topological domains, such as simplicial and cell
complexes, enables richer data representations and more fine-grained analyses.
The applicability of TopoBench is demonstrated by benchmarking several TDL
architectures across diverse tasks and datasets.
| [
{
"version": "v1",
"created": "Sun, 9 Jun 2024 18:31:19 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 10:42:17 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Telyatnikov",
"Lev",
""
],
[
"Bernardez",
"Guillermo",
""
],
[
"Montagna",
"Marco",
""
],
[
"Hajij",
"Mustafa",
""
],
[
"Carrasco",
"Martin",
""
],
[
"Vasylenko",
"Pavlo",
""
],
[
"Papillon",
"Mathilde",
""
],
[
"Zamzmi",
"Ghada",
""
],
[
"Schaub",
"Michael T.",
""
],
[
"Verhellen",
"Jonas",
""
],
[
"Snopov",
"Pavel",
""
],
[
"Miquel-Oliver",
"Bertran",
""
],
[
"Gil-Sorribes",
"Manel",
""
],
[
"Molina",
"Alexis",
""
],
[
"Guallar",
"Victor",
""
],
[
"Long",
"Theodore",
""
],
[
"Suk",
"Julian",
""
],
[
"Rygiel",
"Patryk",
""
],
[
"Nikitin",
"Alexander",
""
],
[
"Escalona",
"Giordan",
""
],
[
"Banf",
"Michael",
""
],
[
"Filipiak",
"Dominik",
""
],
[
"Schattauer",
"Max",
""
],
[
"Imasheva",
"Liliya",
""
],
[
"Martinez",
"Alvaro",
""
],
[
"Fritze",
"Halley",
""
],
[
"Masden",
"Marissa",
""
],
[
"Sánchez",
"Valentina",
""
],
[
"Lecha",
"Manuel",
""
],
[
"Cavallo",
"Andrea",
""
],
[
"Battiloro",
"Claudio",
""
],
[
"Piekenbrock",
"Matt",
""
],
[
"Tec",
"Mauricio",
""
],
[
"Dasoulas",
"George",
""
],
[
"Miolane",
"Nina",
""
],
[
"Scardapane",
"Simone",
""
],
[
"Papamarkou",
"Theodore",
""
]
] | TITLE: TopoBench: A Framework for Benchmarking Topological Deep Learning
ABSTRACT: This work introduces TopoBench, an open-source library designed to
standardize benchmarking and accelerate research in topological deep learning
(TDL). TopoBench decomposes TDL into a sequence of independent modules for data
generation, loading, transforming and processing, as well as model training,
optimization and evaluation. This modular organization provides flexibility for
modifications and facilitates the adaptation and optimization of various TDL
pipelines. A key feature of TopoBench is its support for transformations and
lifting across topological domains. Mapping the topology and features of a
graph to higher-order topological domains, such as simplicial and cell
complexes, enables richer data representations and more fine-grained analyses.
The applicability of TopoBench is demonstrated by benchmarking several TDL
architectures across diverse tasks and datasets.
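As an example of lifting a graph to a higher-order domain, the clique complex
turns every (k+1)-clique into a k-simplex. A small generic sketch with
networkx (not TopoBench's API; names are illustrative):

```python
import networkx as nx
from itertools import combinations

def clique_complex(G: nx.Graph, max_dim: int = 2):
    """Lift a graph to its clique complex up to dimension max_dim.
    Returns {dim: set of simplices}, each simplex a sorted tuple of nodes."""
    simplices = {d: set() for d in range(max_dim + 1)}
    simplices[0] = {(v,) for v in G.nodes}
    for clique in nx.find_cliques(G):               # maximal cliques
        for d in range(1, max_dim + 1):
            for face in combinations(sorted(clique), d + 1):
                simplices[d].add(face)
    return simplices

if __name__ == "__main__":
    G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3)])
    cx = clique_complex(G)
    print(cx[2])   # {(0, 1, 2)} -- the triangle becomes a 2-simplex
```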
|
2406.09390 | Dominick Reilly | Dominick Reilly, Rajatsubhra Chakraborty, Arkaprava Sinha, Manish
Kumar Govind, Pu Wang, Francois Bremond, Le Xue, Srijan Das | LLAVIDAL: A Large LAnguage VIsion Model for Daily Activities of Living | CVPR 2025 Camera Ready | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Current Large Language Vision Models (LLVMs) trained on web videos perform
well in general video understanding but struggle with fine-grained details,
complex human-object interactions (HOI), and view-invariant representation
learning essential for Activities of Daily Living (ADL). This limitation stems
from a lack of specialized ADL video instruction-tuning datasets and
insufficient modality integration to capture discriminative action
representations. To address this, we propose a semi-automated framework for
curating ADL datasets, creating ADL-X, a multiview, multimodal RGBS
instruction-tuning dataset. Additionally, we introduce LLAVIDAL, an LLVM
integrating videos, 3D skeletons, and HOIs to model ADL's complex
spatiotemporal relationships. For training LLAVIDAL, a simple joint alignment of
all modalities yields suboptimal results; thus, we propose a Multimodal
Progressive (MMPro) training strategy, incorporating modalities in stages
following a curriculum. We also establish ADL MCQ and video description
benchmarks to assess LLVM performance in ADL tasks. Trained on ADL-X, LLAVIDAL
achieves state-of-the-art performance across ADL benchmarks. Code and data will
be made publicly available at: https://adl-x.github.io/.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 17:59:05 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Dec 2024 18:58:34 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 18:54:55 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Reilly",
"Dominick",
""
],
[
"Chakraborty",
"Rajatsubhra",
""
],
[
"Sinha",
"Arkaprava",
""
],
[
"Govind",
"Manish Kumar",
""
],
[
"Wang",
"Pu",
""
],
[
"Bremond",
"Francois",
""
],
[
"Xue",
"Le",
""
],
[
"Das",
"Srijan",
""
]
] | TITLE: LLAVIDAL: A Large LAnguage VIsion Model for Daily Activities of Living
ABSTRACT: Current Large Language Vision Models (LLVMs) trained on web videos perform
well in general video understanding but struggle with fine-grained details,
complex human-object interactions (HOI), and view-invariant representation
learning essential for Activities of Daily Living (ADL). This limitation stems
from a lack of specialized ADL video instruction-tuning datasets and
insufficient modality integration to capture discriminative action
representations. To address this, we propose a semi-automated framework for
curating ADL datasets, creating ADL-X, a multiview, multimodal RGBS
instruction-tuning dataset. Additionally, we introduce LLAVIDAL, an LLVM
integrating videos, 3D skeletons, and HOIs to model ADL's complex
spatiotemporal relationships. For training LLAVIDAL, a simple joint alignment of
all modalities yields suboptimal results; thus, we propose a Multimodal
Progressive (MMPro) training strategy, incorporating modalities in stages
following a curriculum. We also establish ADL MCQ and video description
benchmarks to assess LLVM performance in ADL tasks. Trained on ADL-X, LLAVIDAL
achieves state-of-the-art performance across ADL benchmarks. Code and data will
be made publicly available at: https://adl-x.github.io/.
|
2407.02862 | Nikolaos Fanourakis | Nikolaos Fanourakis and Fatia Lekbour and Guillaume Renton and Vasilis
Efthymiou and Vassilis Christophides | HybEA: Hybrid Models for Entity Alignment | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | Entity Alignment (EA) aims to detect descriptions of the same real-world
entities among different Knowledge Graphs (KG). Several embedding methods have
been proposed to rank potentially matching entities of two KGs according to
their similarity in the embedding space. However, existing EA embedding methods
are challenged by the diverse levels of structural (i.e., neighborhood
entities) and semantic (e.g., entity names and literal property values)
heterogeneity exhibited by real-world KGs, especially when they are spanning
several domains (DBpedia, Wikidata). Existing methods typically address only
one of the two kinds of heterogeneity, depending on the context (mono- vs.
multi-lingual). To address this limitation, we propose a flexible framework
called HybEA, which is a hybrid of two models: a novel attention-based factual
model co-trained with a state-of-the-art structural model. Our experimental
results demonstrate
that HybEA outperforms the state-of-the-art EA systems, achieving a 16% average
relative improvement of Hits@1, ranging from 3.6% up to 40% in 5 monolingual
datasets, with some datasets that can now be considered solved. We also show
that HybEA outperforms state-of-the-art methods in 3 multi-lingual datasets, as
well as on 2 datasets that drop the unrealistic, yet widely adopted, one-to-one
assumption. Overall, HybEA outperforms all (11) baseline methods in all (3)
measures and in all (10) datasets evaluated, with a statistically significant
difference.
| [
{
"version": "v1",
"created": "Wed, 3 Jul 2024 07:22:20 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 17:44:17 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Fanourakis",
"Nikolaos",
""
],
[
"Lekbour",
"Fatia",
""
],
[
"Renton",
"Guillaume",
""
],
[
"Efthymiou",
"Vasilis",
""
],
[
"Christophides",
"Vassilis",
""
]
] | TITLE: HybEA: Hybrid Models for Entity Alignment
ABSTRACT: Entity Alignment (EA) aims to detect descriptions of the same real-world
entities among different Knowledge Graphs (KG). Several embedding methods have
been proposed to rank potentially matching entities of two KGs according to
their similarity in the embedding space. However, existing EA embedding methods
are challenged by the diverse levels of structural (i.e., neighborhood
entities) and semantic (e.g., entity names and literal property values)
heterogeneity exhibited by real-world KGs, especially when they are spanning
several domains (DBpedia, Wikidata). Existing methods typically address only
one of the two kinds of heterogeneity, depending on the context (mono- vs.
multi-lingual). To address this limitation, we propose a flexible framework
called HybEA, which is a hybrid of two models: a novel attention-based factual
model co-trained with a state-of-the-art structural model. Our experimental
results demonstrate
that HybEA outperforms the state-of-the-art EA systems, achieving a 16% average
relative improvement of Hits@1, ranging from 3.6% up to 40% in 5 monolingual
datasets, with some datasets that can now be considered solved. We also show
that HybEA outperforms state-of-the-art methods in 3 multi-lingual datasets, as
well as on 2 datasets that drop the unrealistic, yet widely adopted, one-to-one
assumption. Overall, HybEA outperforms all (11) baseline methods in all (3)
measures and in all (10) datasets evaluated, with a statistically significant
difference.
|
2407.12883 | Hongjin Su | Hongjin Su, Howard Yen, Mengzhou Xia, Weijia Shi, Niklas Muennighoff,
Han-yu Wang, Haisu Liu, Quan Shi, Zachary S. Siegel, Michael Tang, Ruoxi Sun,
Jinsung Yoon, Sercan O. Arik, Danqi Chen, Tao Yu | BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive
Retrieval | 51 pages | null | null | null | cs.CL cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Existing retrieval benchmarks primarily consist of information-seeking
queries (e.g., aggregated questions from search engines) where keyword or
semantic-based retrieval is usually sufficient. However, many complex
real-world queries require in-depth reasoning to identify relevant documents
that go beyond surface form matching. For example, finding documentation for a
coding question requires understanding the logic and syntax of the functions
involved. To better benchmark retrieval on such challenging queries, we
introduce BRIGHT, the first text retrieval benchmark that requires intensive
reasoning to retrieve relevant documents. Our dataset consists of 1,384
real-world queries spanning diverse domains, such as economics, psychology,
mathematics, and coding. These queries are drawn from naturally occurring and
carefully curated human data. Extensive evaluation reveals that even
state-of-the-art retrieval models perform poorly on BRIGHT. The leading model
on the MTEB leaderboard (Muennighoff et al., 2023) SFR-Embedding-Mistral (Meng
et al., 2024), which achieves a score of 59.0 nDCG@10,1 produces a score of
nDCG@10 of 18.3 on BRIGHT. We show that incorporating explicit reasoning about
the query improves retrieval performance by up to 12.2 points. Moreover,
incorporating retrieved documents from the top-performing retriever boosts
question-answering performance. We believe that BRIGHT paves the way for future
research on retrieval systems in more realistic and challenging settings.
| [
{
"version": "v1",
"created": "Tue, 16 Jul 2024 17:58:27 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Oct 2024 17:49:31 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Oct 2024 04:51:21 GMT"
},
{
"version": "v4",
"created": "Wed, 26 Mar 2025 07:37:26 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Su",
"Hongjin",
""
],
[
"Yen",
"Howard",
""
],
[
"Xia",
"Mengzhou",
""
],
[
"Shi",
"Weijia",
""
],
[
"Muennighoff",
"Niklas",
""
],
[
"Wang",
"Han-yu",
""
],
[
"Liu",
"Haisu",
""
],
[
"Shi",
"Quan",
""
],
[
"Siegel",
"Zachary S.",
""
],
[
"Tang",
"Michael",
""
],
[
"Sun",
"Ruoxi",
""
],
[
"Yoon",
"Jinsung",
""
],
[
"Arik",
"Sercan O.",
""
],
[
"Chen",
"Danqi",
""
],
[
"Yu",
"Tao",
""
]
] | TITLE: BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive
Retrieval
ABSTRACT: Existing retrieval benchmarks primarily consist of information-seeking
queries (e.g., aggregated questions from search engines) where keyword or
semantic-based retrieval is usually sufficient. However, many complex
real-world queries require in-depth reasoning to identify relevant documents
that go beyond surface form matching. For example, finding documentation for a
coding question requires understanding the logic and syntax of the functions
involved. To better benchmark retrieval on such challenging queries, we
introduce BRIGHT, the first text retrieval benchmark that requires intensive
reasoning to retrieve relevant documents. Our dataset consists of 1,384
real-world queries spanning diverse domains, such as economics, psychology,
mathematics, and coding. These queries are drawn from naturally occurring and
carefully curated human data. Extensive evaluation reveals that even
state-of-the-art retrieval models perform poorly on BRIGHT. The leading model
on the MTEB leaderboard (Muennighoff et al., 2023), SFR-Embedding-Mistral (Meng
et al., 2024), which achieves a score of 59.0 nDCG@10, produces an nDCG@10 of
only 18.3 on BRIGHT. We show that incorporating explicit reasoning about
the query improves retrieval performance by up to 12.2 points. Moreover,
incorporating retrieved documents from the top-performing retriever boosts
question-answering performance. We believe that BRIGHT paves the way for future
research on retrieval systems in more realistic and challenging settings.
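The "reason, then retrieve" observation can be illustrated with a generic
pipeline: expand the query with model-generated reasoning before scoring
documents. Here reason_about is a placeholder for an LLM call and the ranker
is plain TF-IDF, not any of the systems benchmarked in the paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def reason_about(query: str) -> str:
    """Placeholder for an LLM call that writes out the reasoning steps
    needed to answer the query (relevant concepts, sub-problems, etc.)."""
    return query  # identity here; in practice this would add reasoning text

def retrieve(query: str, docs: list[str], top_k: int = 3):
    expanded = query + " " + reason_about(query)       # reasoning-augmented query
    vec = TfidfVectorizer().fit(docs + [expanded])
    sims = cosine_similarity(vec.transform([expanded]), vec.transform(docs))[0]
    ranked = sims.argsort()[::-1][:top_k]
    return [(docs[i], float(sims[i])) for i in ranked]

if __name__ == "__main__":
    corpus = ["binary search on sorted arrays", "gradient descent convergence",
              "dynamic programming for knapsack"]
    print(retrieve("how to find an item quickly in a sorted list", corpus))
```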
|
2409.00250 | Mingjie Li | Yijian Fan, Zhenbang Yang, Rui Liu, Mingjie Li and Xiaojun Chang | Medical Report Generation Is A Multi-label Classification Problem | Accepted to 2024 IEEE International Conference on Medical Artificial
Intelligence | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Medical report generation is a critical task in healthcare that involves the
automatic creation of detailed and accurate descriptions from medical images.
Traditionally, this task has been approached as a sequence generation problem,
relying on vision-and-language techniques to generate coherent and contextually
relevant reports. However, in this paper, we propose a novel perspective:
rethinking medical report generation as a multi-label classification problem.
By framing the task this way, we leverage the radiology nodes from the commonly
used knowledge graph, which can be better captured through classification
techniques. To verify our argument, we introduce a novel report generation
framework based on BLIP integrated with classified key nodes, which allows for
effective report generation with accurate classification of multiple key
aspects within the medical images. This approach not only simplifies the report
generation process but also significantly enhances performance metrics. Our
extensive experiments demonstrate that leveraging key nodes can achieve
state-of-the-art (SOTA) performance, surpassing existing approaches across two
benchmark datasets. The results underscore the potential of re-envisioning
traditional tasks with innovative methodologies, paving the way for more
efficient and accurate medical report generation.
| [
{
"version": "v1",
"created": "Fri, 30 Aug 2024 20:43:35 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 23:19:47 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Fan",
"Yijian",
""
],
[
"Yang",
"Zhenbang",
""
],
[
"Liu",
"Rui",
""
],
[
"Li",
"Mingjie",
""
],
[
"Chang",
"Xiaojun",
""
]
] | TITLE: Medical Report Generation Is A Multi-label Classification Problem
ABSTRACT: Medical report generation is a critical task in healthcare that involves the
automatic creation of detailed and accurate descriptions from medical images.
Traditionally, this task has been approached as a sequence generation problem,
relying on vision-and-language techniques to generate coherent and contextually
relevant reports. However, in this paper, we propose a novel perspective:
rethinking medical report generation as a multi-label classification problem.
By framing the task this way, we leverage the radiology nodes from the commonly
used knowledge graph, which can be better captured through classification
techniques. To verify our argument, we introduce a novel report generation
framework based on BLIP integrated with classified key nodes, which allows for
effective report generation with accurate classification of multiple key
aspects within the medical images. This approach not only simplifies the report
generation process but also significantly enhances performance metrics. Our
extensive experiments demonstrate that leveraging key nodes can achieve
state-of-the-art (SOTA) performance, surpassing existing approaches across two
benchmark datasets. The results underscore the potential of re-envisioning
traditional tasks with innovative methodologies, paving the way for more
efficient and accurate medical report generation.
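Casting report generation as multi-label classification over key nodes amounts
to a sigmoid head trained with binary cross-entropy; a generic sketch (the
feature dimension, node count, and names are assumptions, not the paper's
architecture):

```python
import torch
import torch.nn as nn

class KeyNodeClassifier(nn.Module):
    """Multi-label head: each output unit scores one radiology key node."""
    def __init__(self, feat_dim: int = 768, n_nodes: int = 14):
        super().__init__()
        self.head = nn.Linear(feat_dim, n_nodes)

    def forward(self, image_features):          # (B, feat_dim) from a vision encoder
        return self.head(image_features)        # raw logits, one per key node

if __name__ == "__main__":
    model = KeyNodeClassifier()
    loss_fn = nn.BCEWithLogitsLoss()            # multi-label: independent sigmoids
    feats = torch.randn(8, 768)
    labels = torch.randint(0, 2, (8, 14)).float()
    loss = loss_fn(model(feats), labels)
    predicted_nodes = torch.sigmoid(model(feats)) > 0.5
    print(loss.item(), predicted_nodes.shape)
```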
|
2409.08681 | Huan Yin | Zehuan Yu, Zhijian Qiao, Wenyi Liu, Huan Yin, and Shaojie Shen | SLIM: Scalable and Lightweight LiDAR Mapping in Urban Environments | Accepted for publication in IEEE Transactions on Robotics. Video:
https://youtu.be/8HQnYMf_BWI Code:
https://github.com/HKUST-Aerial-Robotics/SLIM | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LiDAR point cloud maps are extensively utilized on roads for robot navigation
due to their high consistency. However, dense point clouds face challenges of
high memory consumption and reduced maintainability for long-term operations.
In this study, we introduce SLIM, a scalable and lightweight mapping system for
long-term LiDAR mapping in urban environments. The system begins by
parameterizing structural point clouds into lines and planes. These lightweight
and structural representations meet the requirements of map merging, pose graph
optimization, and bundle adjustment, ensuring incremental management and local
consistency. For long-term operations, a map-centric nonlinear factor recovery
method is designed to sparsify poses while preserving mapping accuracy. We
validate the SLIM system with multi-session real-world LiDAR data from
classical LiDAR mapping datasets, including KITTI, NCLT, HeLiPR and M2DGR. The
experiments demonstrate its capabilities in mapping accuracy, memory efficiency,
and scalability. Map re-use is also verified through map-based robot
localization. Finally, with multi-session LiDAR data, the SLIM system provides
a globally consistent map with low memory consumption (~130 KB/km on KITTI).
| [
{
"version": "v1",
"created": "Fri, 13 Sep 2024 09:50:04 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 05:31:23 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Yu",
"Zehuan",
""
],
[
"Qiao",
"Zhijian",
""
],
[
"Liu",
"Wenyi",
""
],
[
"Yin",
"Huan",
""
],
[
"Shen",
"Shaojie",
""
]
] | TITLE: SLIM: Scalable and Lightweight LiDAR Mapping in Urban Environments
ABSTRACT: LiDAR point cloud maps are extensively utilized on roads for robot navigation
due to their high consistency. However, dense point clouds face challenges of
high memory consumption and reduced maintainability for long-term operations.
In this study, we introduce SLIM, a scalable and lightweight mapping system for
long-term LiDAR mapping in urban environments. The system begins by
parameterizing structural point clouds into lines and planes. These lightweight
and structural representations meet the requirements of map merging, pose graph
optimization, and bundle adjustment, ensuring incremental management and local
consistency. For long-term operations, a map-centric nonlinear factor recovery
method is designed to sparsify poses while preserving mapping accuracy. We
validate the SLIM system with multi-session real-world LiDAR data from
classical LiDAR mapping datasets, including KITTI, NCLT, HeLiPR and M2DGR. The
experiments demonstrate its capabilities in mapping accuracy, memory efficiency,
and scalability. Map re-use is also verified through map-based robot
localization. Finally, with multi-session LiDAR data, the SLIM system provides
a globally consistent map with low memory consumption (~130 KB/km on KITTI).
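Parameterizing a structural point-cloud patch as a plane is commonly done with
a least-squares (SVD) fit; a sketch of that generic building block, not SLIM's
implementation:

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane fit to an (N, 3) patch of LiDAR points.
    Returns (centroid, unit normal); the plane satisfies n . (x - c) = 0."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    n = vt[-1]                     # direction of smallest variance
    return c, n / np.linalg.norm(n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1, 1, size=(200, 2))
    pts = np.c_[xy, 0.05 * rng.normal(size=200)]   # noisy points near the z=0 plane
    c, n = fit_plane(pts)
    print(np.abs(n))               # approximately [0, 0, 1]
```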
|
2409.18253 | Jean-Michel Fortin | Jean-Michel Fortin, Olivier Gamache, William Fecteau, Effie Daum,
William Larriv\'ee-Hardy, Fran\c{c}ois Pomerleau, Philippe Gigu\`ere | UAV-Assisted Self-Supervised Terrain Awareness for Off-Road Navigation | 7 pages, 5 figures, submitted to ICRA 2025 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Terrain awareness is an essential milestone to enable truly autonomous
off-road navigation. Accurately predicting terrain characteristics allows
optimizing a vehicle's path against potential hazards. Recent methods use deep
neural networks to predict traversability-related terrain properties in a
self-supervised manner, relying on proprioception as a training signal.
However, onboard cameras are inherently limited by their point-of-view relative
to the ground, suffering from occlusions and vanishing pixel density with
distance. This paper introduces a novel approach for self-supervised terrain
characterization using an aerial perspective from a hovering drone. We capture
terrain-aligned images while sampling the environment with a ground vehicle,
effectively training a simple predictor for vibrations, bumpiness, and energy
consumption. Our dataset includes 2.8 km of off-road data collected in a forest
environment, comprising 13 484 ground-based images and 12 935 aerial images.
Our findings show that drone imagery improves terrain property prediction by
21.37 % on the whole dataset and 37.35 % in high vegetation, compared to ground
robot images. We conduct ablation studies to identify the main causes of these
performance improvements. We also demonstrate the real-world applicability of
our approach by scouting an unseen area with a drone, planning and executing an
optimized path on the ground.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2024 19:54:24 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 14:02:12 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Fortin",
"Jean-Michel",
""
],
[
"Gamache",
"Olivier",
""
],
[
"Fecteau",
"William",
""
],
[
"Daum",
"Effie",
""
],
[
"Larrivée-Hardy",
"William",
""
],
[
"Pomerleau",
"François",
""
],
[
"Giguère",
"Philippe",
""
]
] | TITLE: UAV-Assisted Self-Supervised Terrain Awareness for Off-Road Navigation
ABSTRACT: Terrain awareness is an essential milestone to enable truly autonomous
off-road navigation. Accurately predicting terrain characteristics allows
optimizing a vehicle's path against potential hazards. Recent methods use deep
neural networks to predict traversability-related terrain properties in a
self-supervised manner, relying on proprioception as a training signal.
However, onboard cameras are inherently limited by their point-of-view relative
to the ground, suffering from occlusions and vanishing pixel density with
distance. This paper introduces a novel approach for self-supervised terrain
characterization using an aerial perspective from a hovering drone. We capture
terrain-aligned images while sampling the environment with a ground vehicle,
effectively training a simple predictor for vibrations, bumpiness, and energy
consumption. Our dataset includes 2.8 km of off-road data collected in a forest
environment, comprising 13 484 ground-based images and 12 935 aerial images.
Our findings show that drone imagery improves terrain property prediction by
21.37 % on the whole dataset and 37.35 % in high vegetation, compared to ground
robot images. We conduct ablation studies to identify the main causes of these
performance improvements. We also demonstrate the real-world applicability of
our approach by scouting an unseen area with a drone, planning and executing an
optimized path on the ground.
|
2410.02604 | Ningya Feng | Ningya Feng, Junwei Pan, Jialong Wu, Baixu Chen, Ximei Wang, Qian Li,
Xian Hu, Jie Jiang, Mingsheng Long | Long-Sequence Recommendation Models Need Decoupled Embeddings | ICLR 2025. First three authors contributed equally. Code is available
at https://github.com/thuml/DARE | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lifelong user behavior sequences are crucial for capturing user interests and
predicting user responses in modern recommendation systems. A two-stage
paradigm is typically adopted to handle these long sequences: a subset of
relevant behaviors is first searched from the original long sequences via an
attention mechanism in the first stage and then aggregated with the target item
to construct a discriminative representation for prediction in the second
stage. In this work, we identify and characterize, for the first time, a
neglected deficiency in existing long-sequence recommendation models: a single
set of embeddings struggles with learning both attention and representation,
leading to interference between these two processes. Initial attempts to
address this issue with some common methods (e.g., linear projections -- a
technique borrowed from language processing) proved ineffective, shedding light
on the unique challenges of recommendation models. To overcome this, we propose
the Decoupled Attention and Representation Embeddings (DARE) model, where two
distinct embedding tables are initialized and learned separately to fully
decouple attention and representation. Extensive experiments and analysis
demonstrate that DARE provides more accurate searches of correlated behaviors
and outperforms baselines with AUC gains up to 0.9% on public datasets and
notable improvements on Tencent's advertising platform. Furthermore, decoupling
embedding spaces allows us to reduce the attention embedding dimension and
accelerate the search procedure by 50% without significant performance impact,
enabling more efficient, high-performance online serving. Code in PyTorch for
experiments, including model analysis, is available at
https://github.com/thuml/DARE.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 15:45:15 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 02:48:49 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 12:45:15 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Feng",
"Ningya",
""
],
[
"Pan",
"Junwei",
""
],
[
"Wu",
"Jialong",
""
],
[
"Chen",
"Baixu",
""
],
[
"Wang",
"Ximei",
""
],
[
"Li",
"Qian",
""
],
[
"Hu",
"Xian",
""
],
[
"Jiang",
"Jie",
""
],
[
"Long",
"Mingsheng",
""
]
] | TITLE: Long-Sequence Recommendation Models Need Decoupled Embeddings
ABSTRACT: Lifelong user behavior sequences are crucial for capturing user interests and
predicting user responses in modern recommendation systems. A two-stage
paradigm is typically adopted to handle these long sequences: a subset of
relevant behaviors is first searched from the original long sequences via an
attention mechanism in the first stage and then aggregated with the target item
to construct a discriminative representation for prediction in the second
stage. In this work, we identify and characterize, for the first time, a
neglected deficiency in existing long-sequence recommendation models: a single
set of embeddings struggles with learning both attention and representation,
leading to interference between these two processes. Initial attempts to
address this issue with some common methods (e.g., linear projections -- a
technique borrowed from language processing) proved ineffective, shedding light
on the unique challenges of recommendation models. To overcome this, we propose
the Decoupled Attention and Representation Embeddings (DARE) model, where two
distinct embedding tables are initialized and learned separately to fully
decouple attention and representation. Extensive experiments and analysis
demonstrate that DARE provides more accurate searches of correlated behaviors
and outperforms baselines with AUC gains up to 0.9% on public datasets and
notable improvements on Tencent's advertising platform. Furthermore, decoupling
embedding spaces allows us to reduce the attention embedding dimension and
accelerate the search procedure by 50% without significant performance impact,
enabling more efficient, high-performance online serving. Code in PyTorch for
experiments, including model analysis, is available at
https://github.com/thuml/DARE.
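The decoupling idea can be sketched in a few lines of PyTorch: one embedding
table drives the attention (search) scores over the behavior sequence, a
second table provides the representation that is aggregated. Dimensions and
names are illustrative, not the production model:

```python
import torch
import torch.nn as nn

class DecoupledTargetAttention(nn.Module):
    """Attention and representation use separate embedding tables (DARE-style sketch)."""
    def __init__(self, n_items: int, d_attn: int = 16, d_repr: int = 64):
        super().__init__()
        self.attn_emb = nn.Embedding(n_items, d_attn)   # small: only for scoring
        self.repr_emb = nn.Embedding(n_items, d_repr)   # large: carries the signal

    def forward(self, behavior_ids, target_id):
        # behavior_ids: (B, L) lifelong behavior sequence, target_id: (B,)
        q = self.attn_emb(target_id).unsqueeze(1)        # (B, 1, d_attn)
        k = self.attn_emb(behavior_ids)                  # (B, L, d_attn)
        scores = torch.softmax((q * k).sum(-1), dim=-1)  # (B, L)
        v = self.repr_emb(behavior_ids)                  # (B, L, d_repr)
        return (scores.unsqueeze(-1) * v).sum(1)         # (B, d_repr) user interest

if __name__ == "__main__":
    m = DecoupledTargetAttention(n_items=1000)
    out = m(torch.randint(0, 1000, (4, 50)), torch.randint(0, 1000, (4,)))
    print(out.shape)   # torch.Size([4, 64])
```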
|
2410.04980 | Lennart Jahn | Lennart Jahn, Sarah Fl\"ugge, Dajie Zhang, Luise Poustka, Sven
B\"olte, Florentin W\"org\"otter, Peter B Marschik and Tomas Kulvicius | Comparison of marker-less 2D image-based methods for infant pose
estimation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this study we compare the performance of available generic- and
infant-pose estimators for a video-based automated general movement assessment
(GMA), and the choice of viewing angle for optimal recordings, i.e.,
conventional diagonal view used in GMA vs. top-down view. We used 4500
annotated video-frames from 75 recordings of infant spontaneous motor functions
from 4 to 26 weeks. To determine which pose estimation method and camera angle
yield the best pose estimation accuracy on infants in a GMA-related setting,
the distance to human annotations and the percentage of correct key-points
(PCK) were computed and compared. The results show that the best performing
generic model trained on adults, ViTPose, also performs best on infants. We see
no improvement from using infant-pose estimators over the generic pose
estimators on our infant dataset. However, when retraining a generic model on
our data, there is a significant improvement in pose estimation accuracy. The
pose estimation accuracy obtained from the top-down view is significantly
better than that obtained from the diagonal view, especially for the detection
of the hip key-points. The results also indicate limited generalization
capabilities of infant-pose estimators to other infant datasets, which hints
that one should be careful when choosing infant pose estimators and using them
on infant datasets which they were not trained on. While the standard GMA
method uses a diagonal view for assessment, pose estimation accuracy
significantly improves using a top-down view. This suggests that a top-down
view should be included in recording setups for automated GMA research.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 12:21:49 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Nov 2024 11:59:22 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 14:45:59 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Jahn",
"Lennart",
""
],
[
"Flügge",
"Sarah",
""
],
[
"Zhang",
"Dajie",
""
],
[
"Poustka",
"Luise",
""
],
[
"Bölte",
"Sven",
""
],
[
"Wörgötter",
"Florentin",
""
],
[
"Marschik",
"Peter B",
""
],
[
"Kulvicius",
"Tomas",
""
]
] | TITLE: Comparison of marker-less 2D image-based methods for infant pose
estimation
ABSTRACT: In this study we compare the performance of available generic- and
infant-pose estimators for a video-based automated general movement assessment
(GMA), and the choice of viewing angle for optimal recordings, i.e.,
conventional diagonal view used in GMA vs. top-down view. We used 4500
annotated video-frames from 75 recordings of infant spontaneous motor functions
from 4 to 26 weeks. To determine which pose estimation method and camera angle
yield the best pose estimation accuracy on infants in a GMA-related setting,
the distance to human annotations and the percentage of correct key-points
(PCK) were computed and compared. The results show that the best performing
generic model trained on adults, ViTPose, also performs best on infants. We see
no improvement from using infant-pose estimators over the generic pose
estimators on our infant dataset. However, when retraining a generic model on
our data, there is a significant improvement in pose estimation accuracy. The
pose estimation accuracy obtained from the top-down view is significantly
better than that obtained from the diagonal view, especially for the detection
of the hip key-points. The results also indicate limited generalization
capabilities of infant-pose estimators to other infant datasets, which hints
that one should be careful when choosing infant pose estimators and using them
on infant datasets which they were not trained on. While the standard GMA
method uses a diagonal view for assessment, pose estimation accuracy
significantly improves using a top-down view. This suggests that a top-down
view should be included in recording setups for automated GMA research.
|
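The infant-pose record above compares estimators by their distance to human annotations and the percentage of correct key-points (PCK). A small sketch of a PCK computation is shown below; normalizing by a per-frame bounding-box diagonal and the 0.1 threshold are assumptions for illustration, not necessarily the study's exact protocol.

```python
import numpy as np

def pck(pred, gt, bbox_diag, alpha=0.1):
    """pred, gt: [N, K, 2] keypoint coordinates; bbox_diag: [N] normalization lengths."""
    dist = np.linalg.norm(pred - gt, axis=-1)        # [N, K] pixel distances
    correct = dist <= alpha * bbox_diag[:, None]     # per-frame threshold
    return correct.mean()                            # fraction of correct keypoints
```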
2410.12138 | Zhuokai Zhao | Chaoqi Wang, Zhuokai Zhao, Chen Zhu, Karthik Abinav Sankararaman,
Michal Valko, Xuefei Cao, Zhaorun Chen, Madian Khabsa, Yuxin Chen, Hao Ma,
Sinong Wang | Preference Optimization with Multi-Sample Comparisons | Code is available at
https://github.com/alecwangcq/multi-sample-alignment | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in generative models, particularly large language models
(LLMs) and diffusion models, have been driven by extensive pretraining on large
datasets followed by post-training. However, current post-training methods such
as reinforcement learning from human feedback (RLHF) and direct alignment from
preference methods (DAP) primarily utilize single-sample comparisons. These
approaches often fail to capture critical characteristics such as generative
diversity and bias, which are more accurately assessed through multiple
samples. To address these limitations, we introduce a novel approach that
extends post-training to include multi-sample comparisons. To achieve this, we
propose Multi-sample Direct Preference Optimization (mDPO) and Multi-sample
Identity Preference Optimization (mIPO). These methods improve traditional DAP
methods by focusing on group-wise characteristics. Empirically, we demonstrate
that multi-sample comparison is more effective in optimizing collective
characteristics~(e.g., diversity and bias) for generative models than
single-sample comparison. Additionally, our findings suggest that multi-sample
comparisons provide a more robust optimization framework, particularly for
datasets with label noise.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 00:59:19 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 06:48:11 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Wang",
"Chaoqi",
""
],
[
"Zhao",
"Zhuokai",
""
],
[
"Zhu",
"Chen",
""
],
[
"Sankararaman",
"Karthik Abinav",
""
],
[
"Valko",
"Michal",
""
],
[
"Cao",
"Xuefei",
""
],
[
"Chen",
"Zhaorun",
""
],
[
"Khabsa",
"Madian",
""
],
[
"Chen",
"Yuxin",
""
],
[
"Ma",
"Hao",
""
],
[
"Wang",
"Sinong",
""
]
] | TITLE: Preference Optimization with Multi-Sample Comparisons
ABSTRACT: Recent advancements in generative models, particularly large language models
(LLMs) and diffusion models, have been driven by extensive pretraining on large
datasets followed by post-training. However, current post-training methods such
as reinforcement learning from human feedback (RLHF) and direct alignment from
preference methods (DAP) primarily utilize single-sample comparisons. These
approaches often fail to capture critical characteristics such as generative
diversity and bias, which are more accurately assessed through multiple
samples. To address these limitations, we introduce a novel approach that
extends post-training to include multi-sample comparisons. To achieve this, we
propose Multi-sample Direct Preference Optimization (mDPO) and Multi-sample
Identity Preference Optimization (mIPO). These methods improve traditional DAP
methods by focusing on group-wise characteristics. Empirically, we demonstrate
that multi-sample comparison is more effective in optimizing collective
characteristics~(e.g., diversity and bias) for generative models than
single-sample comparison. Additionally, our findings suggest that multi-sample
comparisons provide a more robust optimization framework, particularly for
datasets with label noise.
|
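The preference-optimization record above replaces single-sample comparisons with group-wise ones. The sketch below shows one plausible multi-sample DPO-style loss that averages policy-vs-reference log-likelihood ratios over each group of samples before the preference comparison; the exact mDPO/mIPO objectives are defined in the paper, and this is only an assumed simplification.

```python
import torch
import torch.nn.functional as F

def multi_sample_dpo_loss(logp_w, ref_logp_w, logp_l, ref_logp_l, beta=0.1):
    """Each tensor is [B, G]: log-probs of G sampled responses per prompt for the
    preferred (w) and dispreferred (l) groups, under the policy and the reference."""
    ratio_w = (logp_w - ref_logp_w).mean(dim=1)  # group-averaged log ratio
    ratio_l = (logp_l - ref_logp_l).mean(dim=1)
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()
```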
2410.17579 | Mridul Gupta | Mridul Gupta and Samyak Jain and Vansh Ramani and Hariprasad Kodamana
and Sayan Ranu | Bonsai: Gradient-free Graph Condensation for Node Classification | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Graph condensation has emerged as a promising avenue to enable scalable
training of GNNs by compressing the training dataset while preserving essential
graph characteristics. Our study uncovers significant shortcomings in current
graph condensation techniques. First, the majority of the algorithms
paradoxically require training on the full dataset to perform condensation.
Second, due to their gradient-emulating approach, these methods require fresh
condensation for any change in hyperparameters or GNN architecture, limiting
their flexibility and reusability. Finally, they fail to achieve substantial
size reduction due to synthesizing fully-connected, edge-weighted graphs. To
address these challenges, we present Bonsai, a novel graph condensation method
empowered by the observation that \textit{computation trees} form the
fundamental processing units of message-passing GNNs. Bonsai condenses datasets
by encoding a careful selection of \textit{exemplar} trees that maximize the
representation of all computation trees in the training set. This unique
approach establishes Bonsai as the first linear-time, model-agnostic graph
condensation algorithm for node classification that outperforms existing
baselines across $7$ real-world datasets on accuracy, while being $22$ times
faster on average. Bonsai is grounded in rigorous mathematical guarantees on
the adopted approximation strategies making it robust to GNN architectures,
datasets, and parameters.
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2024 06:08:45 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Oct 2024 05:24:53 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 17:09:46 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Mar 2025 06:20:44 GMT"
},
{
"version": "v5",
"created": "Wed, 26 Mar 2025 05:50:10 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Gupta",
"Mridul",
""
],
[
"Jain",
"Samyak",
""
],
[
"Ramani",
"Vansh",
""
],
[
"Kodamana",
"Hariprasad",
""
],
[
"Ranu",
"Sayan",
""
]
] | TITLE: Bonsai: Gradient-free Graph Condensation for Node Classification
ABSTRACT: Graph condensation has emerged as a promising avenue to enable scalable
training of GNNs by compressing the training dataset while preserving essential
graph characteristics. Our study uncovers significant shortcomings in current
graph condensation techniques. First, the majority of the algorithms
paradoxically require training on the full dataset to perform condensation.
Second, due to their gradient-emulating approach, these methods require fresh
condensation for any change in hyperparameters or GNN architecture, limiting
their flexibility and reusability. Finally, they fail to achieve substantial
size reduction due to synthesizing fully-connected, edge-weighted graphs. To
address these challenges, we present Bonsai, a novel graph condensation method
empowered by the observation that \textit{computation trees} form the
fundamental processing units of message-passing GNNs. Bonsai condenses datasets
by encoding a careful selection of \textit{exemplar} trees that maximize the
representation of all computation trees in the training set. This unique
approach establishes Bonsai as the first linear-time, model-agnostic graph
condensation algorithm for node classification that outperforms existing
baselines across $7$ real-world datasets on accuracy, while being $22$ times
faster on average. Bonsai is grounded in rigorous mathematical guarantees on
the adopted approximation strategies making it robust to GNN architectures,
datasets, and parameters.
|
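The Bonsai record above selects exemplar computation trees that maximize representation of all computation trees in the training set, without any gradient computation. The snippet below sketches a generic greedy coverage selection over tree encodings to convey the flavor of such a gradient-free procedure; the encoding, similarity threshold, and objective are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def greedy_exemplars(tree_vecs, budget, sim_threshold=0.9):
    """tree_vecs: [N, d] L2-normalized encodings of computation trees."""
    sim = tree_vecs @ tree_vecs.T                    # pairwise cosine similarity
    covered = np.zeros(len(tree_vecs), dtype=bool)
    chosen = []
    for _ in range(budget):
        gain = ((sim >= sim_threshold) & ~covered).sum(axis=1)
        if gain.max() == 0:                          # everything already covered
            break
        best = int(gain.argmax())
        chosen.append(best)
        covered |= sim[best] >= sim_threshold
    return chosen
```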
2411.03239 | Huan Zheng | Huan Zheng, Wencheng Han, Jianbing Shen | Decoupling Fine Detail and Global Geometry for Compressed Depth Map
Super-Resolution | Accepted by CVPR 2025 & The 1st place award for the ECCV 2024 AIM
Compressed Depth Upsampling Challenge | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recovering high-quality depth maps from compressed sources has gained
significant attention due to the limitations of consumer-grade depth cameras
and the bandwidth restrictions during data transmission. However, current
methods still suffer from two challenges. First, bit-depth compression produces
a uniform depth representation in regions with subtle variations, hindering the
recovery of detailed information. Second, densely distributed random noise
reduces the accuracy of estimating the global geometric structure of the scene.
To address these challenges, we propose a novel framework, termed
geometry-decoupled network (GDNet), for compressed depth map super-resolution
that decouples the high-quality depth map reconstruction process by handling
global and detailed geometric features separately. To be specific, we propose
the fine geometry detail encoder (FGDE), which is designed to aggregate fine
geometry details in high-resolution low-level image features while
simultaneously enriching them with complementary information from
low-resolution context-level image features. In addition, we develop the global
geometry encoder (GGE) that aims at suppressing noise and extracting global
geometric information effectively via constructing compact feature
representation in a low-rank space. We conduct experiments on multiple
benchmark datasets, demonstrating that our GDNet significantly outperforms
current methods in terms of geometric consistency and detail recovery. In the
ECCV 2024 AIM Compressed Depth Upsampling Challenge, our solution won the 1st
place award. Our codes are available at: https://github.com/Ian0926/GDNet.
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2024 16:37:30 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Nov 2024 09:46:39 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 09:09:55 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zheng",
"Huan",
""
],
[
"Han",
"Wencheng",
""
],
[
"Shen",
"Jianbing",
""
]
] | TITLE: Decoupling Fine Detail and Global Geometry for Compressed Depth Map
Super-Resolution
ABSTRACT: Recovering high-quality depth maps from compressed sources has gained
significant attention due to the limitations of consumer-grade depth cameras
and the bandwidth restrictions during data transmission. However, current
methods still suffer from two challenges. First, bit-depth compression produces
a uniform depth representation in regions with subtle variations, hindering the
recovery of detailed information. Second, densely distributed random noise
reduces the accuracy of estimating the global geometric structure of the scene.
To address these challenges, we propose a novel framework, termed
geometry-decoupled network (GDNet), for compressed depth map super-resolution
that decouples the high-quality depth map reconstruction process by handling
global and detailed geometric features separately. To be specific, we propose
the fine geometry detail encoder (FGDE), which is designed to aggregate fine
geometry details in high-resolution low-level image features while
simultaneously enriching them with complementary information from
low-resolution context-level image features. In addition, we develop the global
geometry encoder (GGE) that aims at suppressing noise and extracting global
geometric information effectively via constructing compact feature
representation in a low-rank space. We conduct experiments on multiple
benchmark datasets, demonstrating that our GDNet significantly outperforms
current methods in terms of geometric consistency and detail recovery. In the
ECCV 2024 AIM Compressed Depth Upsampling Challenge, our solution won the 1st
place award. Our codes are available at: https://github.com/Ian0926/GDNet.
|
2411.04752 | Aniket Deroy | Aniket Deroy, Subhankar Maity | RetrieveGPT: Merging Prompts and Mathematical Models for Enhanced
Code-Mixed Information Retrieval | Final and Updated version | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Code-mixing, the integration of lexical and grammatical elements from
multiple languages within a single sentence, is a widespread linguistic
phenomenon, particularly prevalent in multilingual societies. In India, social
media users frequently engage in code-mixed conversations using the Roman
script, especially among migrant communities who form online groups to share
relevant local information. This paper focuses on the challenges of extracting
relevant information from code-mixed conversations, specifically within Roman
transliterated Bengali mixed with English. This study presents a novel approach
to address these challenges by developing a mechanism to automatically identify
the most relevant answers from code-mixed conversations. We have experimented
with a dataset comprising queries and documents from Facebook, and Query
Relevance files (QRels) to aid in this task. Our results demonstrate the
effectiveness of our approach in extracting pertinent information from complex,
code-mixed digital conversations, contributing to the broader field of natural
language processing in multilingual and informal text environments. We use
GPT-3.5 Turbo via prompting, along with the sequential nature of relevant
documents, to frame a mathematical model that helps detect relevant
documents corresponding to a query.
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2024 14:41:01 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2025 08:04:15 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 12:30:49 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Deroy",
"Aniket",
""
],
[
"Maity",
"Subhankar",
""
]
] | TITLE: RetrieveGPT: Merging Prompts and Mathematical Models for Enhanced
Code-Mixed Information Retrieval
ABSTRACT: Code-mixing, the integration of lexical and grammatical elements from
multiple languages within a single sentence, is a widespread linguistic
phenomenon, particularly prevalent in multilingual societies. In India, social
media users frequently engage in code-mixed conversations using the Roman
script, especially among migrant communities who form online groups to share
relevant local information. This paper focuses on the challenges of extracting
relevant information from code-mixed conversations, specifically within Roman
transliterated Bengali mixed with English. This study presents a novel approach
to address these challenges by developing a mechanism to automatically identify
the most relevant answers from code-mixed conversations. We have experimented
with a dataset comprising queries and documents from Facebook, and Query
Relevance files (QRels) to aid in this task. Our results demonstrate the
effectiveness of our approach in extracting pertinent information from complex,
code-mixed digital conversations, contributing to the broader field of natural
language processing in multilingual and informal text environments. We use
GPT-3.5 Turbo via prompting, along with the sequential nature of relevant
documents, to frame a mathematical model that helps detect relevant
documents corresponding to a query.
|
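The RetrieveGPT record above combines LLM-assigned relevance with a mathematical model that exploits the sequential nature of relevant documents. The sketch below shows one assumed form such a combination could take, smoothing per-document LLM scores with their neighbors in conversation order; the weighting scheme is hypothetical and not taken from the paper.

```python
def combined_scores(llm_scores, neighbor_weight=0.3):
    """llm_scores: per-document relevance scores (e.g., from GPT-3.5 Turbo prompting),
    listed in conversation order."""
    smoothed = []
    for i, s in enumerate(llm_scores):
        prev_s = llm_scores[i - 1] if i > 0 else 0.0
        next_s = llm_scores[i + 1] if i + 1 < len(llm_scores) else 0.0
        # Reward documents adjacent to other relevant documents.
        smoothed.append((1 - neighbor_weight) * s
                        + neighbor_weight * 0.5 * (prev_s + next_s))
    return smoothed
```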
2411.11706 | Ruichuan An | Ruichuan An, Sihan Yang, Ming Lu, Renrui Zhang, Kai Zeng, Yulin Luo,
Jiajun Cao, Hao Liang, Ying Chen, Qi She, Shanghang Zhang, Wentao Zhang | MC-LLaVA: Multi-Concept Personalized Vision-Language Model | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current vision-language models (VLMs) show exceptional abilities across
diverse tasks, such as visual question answering. To enhance user experience,
recent studies investigate VLM personalization to understand user-provided
concepts. However, they mainly focus on single-concept personalization,
neglecting the existence and interplay of multiple concepts, which limits
real-world applicability. This paper proposes the first multi-concept
personalization paradigm, MC-LLaVA. Specifically, MC-LLaVA employs a
multi-concept instruction tuning strategy, effectively integrating multiple
concepts in a single training step. To reduce the costs related to joint
training, we propose a personalized textual prompt that uses visual token
information to initialize concept tokens. Additionally, we introduce a
personalized visual prompt during inference, aggregating location confidence
maps for enhanced recognition and grounding capabilities. To advance
multi-concept personalization research, we further contribute a high-quality
instruction tuning dataset. We carefully collect images with multiple
characters and objects from movies and manually generate question-answer
samples for multi-concept scenarios, featuring superior diversity.
Comprehensive qualitative and quantitative experiments demonstrate that
MC-LLaVA can achieve impressive multi-concept personalized responses, paving
the way for VLMs to become better user-specific assistants. The code and
dataset will be publicly available at https://github.com/arctanxarc/MC-LLaVA.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2024 16:33:52 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Dec 2024 13:27:22 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 15:44:01 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"An",
"Ruichuan",
""
],
[
"Yang",
"Sihan",
""
],
[
"Lu",
"Ming",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Zeng",
"Kai",
""
],
[
"Luo",
"Yulin",
""
],
[
"Cao",
"Jiajun",
""
],
[
"Liang",
"Hao",
""
],
[
"Chen",
"Ying",
""
],
[
"She",
"Qi",
""
],
[
"Zhang",
"Shanghang",
""
],
[
"Zhang",
"Wentao",
""
]
] | TITLE: MC-LLaVA: Multi-Concept Personalized Vision-Language Model
ABSTRACT: Current vision-language models (VLMs) show exceptional abilities across
diverse tasks, such as visual question answering. To enhance user experience,
recent studies investigate VLM personalization to understand user-provided
concepts. However, they mainly focus on single-concept personalization,
neglecting the existence and interplay of multiple concepts, which limits
real-world applicability. This paper proposes the first multi-concept
personalization paradigm, MC-LLaVA. Specifically, MC-LLaVA employs a
multi-concept instruction tuning strategy, effectively integrating multiple
concepts in a single training step. To reduce the costs related to joint
training, we propose a personalized textual prompt that uses visual token
information to initialize concept tokens. Additionally, we introduce a
personalized visual prompt during inference, aggregating location confidence
maps for enhanced recognition and grounding capabilities. To advance
multi-concept personalization research, we further contribute a high-quality
instruction tuning dataset. We carefully collect images with multiple
characters and objects from movies and manually generate question-answer
samples for multi-concept scenarios, featuring superior diversity.
Comprehensive qualitative and quantitative experiments demonstrate that
MC-LLaVA can achieve impressive multi-concept personalized responses, paving
the way for VLMs to become better user-specific assistants. The code and
dataset will be publicly available at https://github.com/arctanxarc/MC-LLaVA.
|
2411.16425 | Chen Gao | Linqing Zhong, Chen Gao, Zihan Ding, Yue Liao, Huimin Ma, Shifeng
Zhang, Xu Zhou, Si Liu | TopV-Nav: Unlocking the Top-View Spatial Reasoning Potential of MLLM for
Zero-shot Object Navigation | 10 pages | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Zero-Shot Object Navigation (ZSON) task requires embodied agents to find
a previously unseen object by navigating in unfamiliar environments. Such a
goal-oriented exploration heavily relies on the ability to perceive,
understand, and reason based on the spatial information of the environment.
However, current LLM-based approaches convert visual observations to language
descriptions and reason in the linguistic space, leading to the loss of spatial
information. In this paper, we introduce TopV-Nav, an MLLM-based method that
directly reasons on the top-view map with sufficient spatial information. To
fully unlock the MLLM's spatial reasoning potential in top-view perspective, we
propose the Adaptive Visual Prompt Generation (AVPG) method to adaptively
construct a semantically-rich top-view map. It enables the agent to directly
utilize spatial information contained in the top-view map to conduct thorough
reasoning. Besides, we design a Dynamic Map Scaling (DMS) mechanism to
dynamically zoom the top-view map at preferred scales, enhancing local fine-grained
reasoning. Additionally, we devise a Potential Target Driven (PTD) mechanism to
predict and to utilize target locations, facilitating global and human-like
exploration. Experiments on MP3D and HM3D datasets demonstrate the superiority
of our TopV-Nav.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 14:27:55 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 07:26:43 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Zhong",
"Linqing",
""
],
[
"Gao",
"Chen",
""
],
[
"Ding",
"Zihan",
""
],
[
"Liao",
"Yue",
""
],
[
"Ma",
"Huimin",
""
],
[
"Zhang",
"Shifeng",
""
],
[
"Zhou",
"Xu",
""
],
[
"Liu",
"Si",
""
]
] | TITLE: TopV-Nav: Unlocking the Top-View Spatial Reasoning Potential of MLLM for
Zero-shot Object Navigation
ABSTRACT: The Zero-Shot Object Navigation (ZSON) task requires embodied agents to find
a previously unseen object by navigating in unfamiliar environments. Such a
goal-oriented exploration heavily relies on the ability to perceive,
understand, and reason based on the spatial information of the environment.
However, current LLM-based approaches convert visual observations to language
descriptions and reason in the linguistic space, leading to the loss of spatial
information. In this paper, we introduce TopV-Nav, an MLLM-based method that
directly reasons on the top-view map with sufficient spatial information. To
fully unlock the MLLM's spatial reasoning potential in top-view perspective, we
propose the Adaptive Visual Prompt Generation (AVPG) method to adaptively
construct a semantically-rich top-view map. It enables the agent to directly
utilize spatial information contained in the top-view map to conduct thorough
reasoning. Besides, we design a Dynamic Map Scaling (DMS) mechanism to
dynamically zoom the top-view map at preferred scales, enhancing local fine-grained
reasoning. Additionally, we devise a Potential Target Driven (PTD) mechanism to
predict and to utilize target locations, facilitating global and human-like
exploration. Experiments on MP3D and HM3D datasets demonstrate the superiority
of our TopV-Nav.
|
2411.17130 | Yuanming Li | Yuan-Ming Li, An-Lan Wang, Kun-Yu Lin, Yu-Ming Tang, Ling-An Zeng,
Jian-Fang Hu and Wei-Shi Zheng | TechCoach: Towards Technical-Point-Aware Descriptive Action Coaching | 21 pages, 16 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To guide a learner in mastering action skills, it is crucial for a coach to
1) reason through the learner's action execution and technical points
(TechPoints), and 2) provide detailed, comprehensible feedback on what is done
well and what can be improved. However, existing score-based action assessment
methods are still far from reaching this practical scenario. To bridge this
gap, we investigate a new task termed Descriptive Action Coaching (DescCoach)
which requires the model to provide detailed commentary on what is done well
and what can be improved beyond a simple quality score for action execution. To
this end, we first build a new dataset named EE4D-DescCoach. Through an
automatic annotation pipeline, our dataset goes beyond the existing action
assessment datasets by providing detailed TechPoint-level commentary.
Furthermore, we propose TechCoach, a new framework that explicitly incorporates
TechPoint-level reasoning into the DescCoach process. Central to our method
is the Context-aware TechPoint Reasoner, which enables TechCoach to learn
TechPoint-related quality representation by querying visual context under the
supervision of TechPoint-level coaching commentary. By leveraging the visual
context and the TechPoint-related quality representation, a unified
TechPoint-aware Action Assessor is then employed to provide the overall
coaching commentary together with the quality score. Combining all of these, we
establish a new benchmark for DescCoach and evaluate the effectiveness of our
method through extensive experiments. The data and code will be made publicly
available.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 05:49:25 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 13:09:32 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Li",
"Yuan-Ming",
""
],
[
"Wang",
"An-Lan",
""
],
[
"Lin",
"Kun-Yu",
""
],
[
"Tang",
"Yu-Ming",
""
],
[
"Zeng",
"Ling-An",
""
],
[
"Hu",
"Jian-Fang",
""
],
[
"Zheng",
"Wei-Shi",
""
]
] | TITLE: TechCoach: Towards Technical-Point-Aware Descriptive Action Coaching
ABSTRACT: To guide a learner in mastering action skills, it is crucial for a coach to
1) reason through the learner's action execution and technical points
(TechPoints), and 2) provide detailed, comprehensible feedback on what is done
well and what can be improved. However, existing score-based action assessment
methods are still far from reaching this practical scenario. To bridge this
gap, we investigate a new task termed Descriptive Action Coaching (DescCoach)
which requires the model to provide detailed commentary on what is done well
and what can be improved beyond a simple quality score for action execution. To
this end, we first build a new dataset named EE4D-DescCoach. Through an
automatic annotation pipeline, our dataset goes beyond the existing action
assessment datasets by providing detailed TechPoint-level commentary.
Furthermore, we propose TechCoach, a new framework that explicitly incorporates
TechPoint-level reasoning into the DescCoach process. Central to our method
is the Context-aware TechPoint Reasoner, which enables TechCoach to learn
TechPoint-related quality representation by querying visual context under the
supervision of TechPoint-level coaching commentary. By leveraging the visual
context and the TechPoint-related quality representation, a unified
TechPoint-aware Action Assessor is then employed to provide the overall
coaching commentary together with the quality score. Combining all of these, we
establish a new benchmark for DescCoach and evaluate the effectiveness of our
method through extensive experiments. The data and code will be made publicly
available.
|
2411.17945 | Mohammad Sadil Khan | Sankalp Sinha, Mohammad Sadil Khan, Muhammad Usama, Shino Sam, Didier
Stricker, Sk Aziz Ali, Muhammad Zeshan Afzal | MARVEL-40M+: Multi-Level Visual Elaboration for High-Fidelity Text-to-3D
Content Creation | null | null | null | null | cs.CV cs.AI cs.GR cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Generating high-fidelity 3D content from text prompts remains a significant
challenge in computer vision due to the limited size, diversity, and annotation
depth of the existing datasets. To address this, we introduce MARVEL-40M+, an
extensive dataset with 40 million text annotations for over 8.9 million 3D
assets aggregated from seven major 3D datasets. Our contribution is a novel
multi-stage annotation pipeline that integrates open-source pretrained
multi-view VLMs and LLMs to automatically produce multi-level descriptions,
ranging from detailed (150-200 words) to concise semantic tags (10-20 words).
This structure supports both fine-grained 3D reconstruction and rapid
prototyping. Furthermore, we incorporate human metadata from source datasets
into our annotation pipeline to add domain-specific information in our
annotation and reduce VLM hallucinations. Additionally, we develop MARVEL-FX3D,
a two-stage text-to-3D pipeline. We fine-tune Stable Diffusion with our
annotations and use a pretrained image-to-3D network to generate 3D textured
meshes within 15s. Extensive evaluations show that MARVEL-40M+ significantly
outperforms existing datasets in annotation quality and linguistic diversity,
achieving win rates of 72.41% by GPT-4 and 73.40% by human evaluators. Project
page is available at https://sankalpsinha-cmos.github.io/MARVEL/.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 23:39:43 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 11:06:10 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Sinha",
"Sankalp",
""
],
[
"Khan",
"Mohammad Sadil",
""
],
[
"Usama",
"Muhammad",
""
],
[
"Sam",
"Shino",
""
],
[
"Stricker",
"Didier",
""
],
[
"Ali",
"Sk Aziz",
""
],
[
"Afzal",
"Muhammad Zeshan",
""
]
] | TITLE: MARVEL-40M+: Multi-Level Visual Elaboration for High-Fidelity Text-to-3D
Content Creation
ABSTRACT: Generating high-fidelity 3D content from text prompts remains a significant
challenge in computer vision due to the limited size, diversity, and annotation
depth of the existing datasets. To address this, we introduce MARVEL-40M+, an
extensive dataset with 40 million text annotations for over 8.9 million 3D
assets aggregated from seven major 3D datasets. Our contribution is a novel
multi-stage annotation pipeline that integrates open-source pretrained
multi-view VLMs and LLMs to automatically produce multi-level descriptions,
ranging from detailed (150-200 words) to concise semantic tags (10-20 words).
This structure supports both fine-grained 3D reconstruction and rapid
prototyping. Furthermore, we incorporate human metadata from source datasets
into our annotation pipeline to add domain-specific information in our
annotation and reduce VLM hallucinations. Additionally, we develop MARVEL-FX3D,
a two-stage text-to-3D pipeline. We fine-tune Stable Diffusion with our
annotations and use a pretrained image-to-3D network to generate 3D textured
meshes within 15s. Extensive evaluations show that MARVEL-40M+ significantly
outperforms existing datasets in annotation quality and linguistic diversity,
achieving win rates of 72.41% by GPT-4 and 73.40% by human evaluators. Project
page is available at https://sankalpsinha-cmos.github.io/MARVEL/.
|
2411.18968 | Nardiena A. Pratama | Nardiena A. Pratama, Shaoyang Fan, Gianluca Demartini | Perception of Visual Content: Differences Between Humans and Foundation
Models | 12 pages, 5 figures, 5 tables; updated version for a
Revise-and-Resubmit at ICWSM 2025. This version includes a larger and more
diverse dataset, leading to updated results | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Human-annotated content is often used to train machine learning (ML) models.
However, recently, language and multi-modal foundational models have been used
to replace and scale up human annotators' efforts. This study compares
human-generated and ML-generated annotations of images representing diverse
socio-economic contexts. We aim to understand differences in perception and
identify potential biases in content interpretation. Our dataset comprises
images of people from various geographical regions and income levels, covering
various daily activities and home environments. We compare human and
ML-generated annotations semantically and evaluate their impact on predictive
models. Our results show the highest similarity between ML captions and human
labels from a low-level perspective, i.e., types of words that appear and
sentence structures, but all three annotations are alike in how similar or
dissimilar they perceive images across different regions. Additionally, ML
Captions resulted in best overall region classification performance, while ML
Objects and ML Captions performed best overall for income regression. The
varying performance of annotation sets highlights the notion that all
annotations are important, and that human-generated annotations are yet to be
replaceable.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 07:37:04 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 13:02:34 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Pratama",
"Nardiena A.",
""
],
[
"Fan",
"Shaoyang",
""
],
[
"Demartini",
"Gianluca",
""
]
] | TITLE: Perception of Visual Content: Differences Between Humans and Foundation
Models
ABSTRACT: Human-annotated content is often used to train machine learning (ML) models.
However, recently, language and multi-modal foundational models have been used
to replace and scale-up human annotator's efforts. This study compares
human-generated and ML-generated annotations of images representing diverse
socio-economic contexts. We aim to understand differences in perception and
identify potential biases in content interpretation. Our dataset comprises
images of people from various geographical regions and income levels, covering
various daily activities and home environments. We compare human and
ML-generated annotations semantically and evaluate their impact on predictive
models. Our results show the highest similarity between ML captions and human
labels from a low-level perspective, i.e., types of words that appear and
sentence structures, but all three annotations are alike in how similar or
dissimilar they perceive images across different regions. Additionally, ML
Captions resulted in best overall region classification performance, while ML
Objects and ML Captions performed best overall for income regression. The
varying performance of annotation sets highlights the notion that all
annotations are important, and that human-generated annotations are yet to be
replaceable.
|
2412.01136 | Seongchan Kim | Seongchan Kim, Woojeong Jin, Sangbeom Lim, Heeji Yoon, Hyunwook Choi,
Seungryong Kim | Referring Video Object Segmentation via Language-aligned Track Selection | Project page is available at https://cvlab-kaist.github.io/SOLA | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Referring video object segmentation (RVOS) requires tracking and segmenting
an object throughout a video according to a given natural language expression,
demanding both complex motion understanding and the alignment of visual
representations with language descriptions. Given these challenges, the
recently proposed Segment Anything Model 2 (SAM2) emerges as a potential
candidate due to its ability to generate coherent segmentation mask tracks
across video frames, and provide an inherent spatio-temporal objectness in its
object token representations. In this paper, we introduce SOLA (Selection by
Object Language Alignment), a novel framework that leverages SAM2 object tokens
as compact video-level object representations, which are aligned with language
features through a lightweight track selection module. To effectively
facilitate this alignment, we propose an IoU-based pseudo-labeling strategy,
which bridges the modality gap between SAM2 representations and language
features. Extensive experiments show that SOLA achieves state-of-the-art
performance on the MeViS dataset and demonstrate that SOLA offers an effective
solution for RVOS. Our project page is available at:
https://cvlab-kaist.github.io/SOLA.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 05:20:35 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 08:59:35 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Kim",
"Seongchan",
""
],
[
"Jin",
"Woojeong",
""
],
[
"Lim",
"Sangbeom",
""
],
[
"Yoon",
"Heeji",
""
],
[
"Choi",
"Hyunwook",
""
],
[
"Kim",
"Seungryong",
""
]
] | TITLE: Referring Video Object Segmentation via Language-aligned Track Selection
ABSTRACT: Referring video object segmentation (RVOS) requires tracking and segmenting
an object throughout a video according to a given natural language expression,
demanding both complex motion understanding and the alignment of visual
representations with language descriptions. Given these challenges, the
recently proposed Segment Anything Model 2 (SAM2) emerges as a potential
candidate due to its ability to generate coherent segmentation mask tracks
across video frames, and provide an inherent spatio-temporal objectness in its
object token representations. In this paper, we introduce SOLA (Selection by
Object Language Alignment), a novel framework that leverages SAM2 object tokens
as compact video-level object representations, which are aligned with language
features through a lightweight track selection module. To effectively
facilitate this alignment, we propose an IoU-based pseudo-labeling strategy,
which bridges the modality gap between SAM2 representations with language
features. Extensive experiments show that SOLA achieves state-of-the-art
performance on the MeViS dataset and demonstrate that SOLA offers an effective
solution for RVOS. Our project page is available at:
https://cvlab-kaist.github.io/SOLA.
|
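The SOLA record above trains its track-selection module with an IoU-based pseudo-labeling strategy over SAM2 mask tracks. A minimal sketch of such pseudo-labeling is given below; pooling the IoU over all frames of a track and the 0.5 threshold are assumptions for illustration, not the released code.

```python
import numpy as np

def track_iou(track_masks, gt_masks):
    """track_masks, gt_masks: [T, H, W] boolean masks over T frames."""
    inter = np.logical_and(track_masks, gt_masks).sum()
    union = np.logical_or(track_masks, gt_masks).sum()
    return inter / max(union, 1)

def pseudo_labels(all_track_masks, gt_masks, thr=0.5):
    """Mark a candidate track as positive if its IoU with the referred object is high."""
    return [int(track_iou(m, gt_masks) >= thr) for m in all_track_masks]
```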
2412.01256 | Qun Li | Bikang Pan, Qun Li, Xiaoying Tang, Wei Huang, Zhen Fang, Feng Liu,
Jingya Wang, Jingyi Yu, Ye Shi | NLPrompt: Noise-Label Prompt Learning for Vision-Language Models | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The emergence of vision-language foundation models, such as CLIP, has
revolutionized image-text representation, enabling a broad range of
applications via prompt learning. Despite its promise, real-world datasets
often contain noisy labels that can degrade prompt learning performance. In
this paper, we demonstrate that using mean absolute error (MAE) loss in prompt
learning, named PromptMAE, significantly enhances robustness against noisy
labels while maintaining high accuracy. Though MAE is straightforward and
recognized for its robustness, it is rarely used in noisy-label learning due to
its slow convergence and poor performance outside prompt learning scenarios. To
elucidate the robustness of PromptMAE, we leverage feature learning theory to
show that MAE can suppress the influence of noisy samples, thereby improving
the signal-to-noise ratio and enhancing overall robustness. Additionally, we
introduce PromptOT, a prompt-based optimal transport data purification method
to enhance the robustness further. PromptOT employs text features in
vision-language models as prototypes to construct an optimal transportation
matrix. This matrix effectively partitions datasets into clean and noisy
subsets, allowing for the application of cross-entropy loss to the clean subset
and MAE loss to the noisy subset. Our Noise-Label Prompt Learning method, named
NLPrompt, offers a simple and efficient approach that leverages the expressive
representations and precise alignment capabilities of vision-language models
for robust prompt learning. We validate NLPrompt through extensive experiments
across various noise settings, demonstrating significant performance
improvements.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 08:25:09 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Mar 2025 09:08:24 GMT"
}
] | 2025-03-27T00:00:00 | [
[
"Pan",
"Bikang",
""
],
[
"Li",
"Qun",
""
],
[
"Tang",
"Xiaoying",
""
],
[
"Huang",
"Wei",
""
],
[
"Fang",
"Zhen",
""
],
[
"Liu",
"Feng",
""
],
[
"Wang",
"Jingya",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Shi",
"Ye",
""
]
] | TITLE: NLPrompt: Noise-Label Prompt Learning for Vision-Language Models
ABSTRACT: The emergence of vision-language foundation models, such as CLIP, has
revolutionized image-text representation, enabling a broad range of
applications via prompt learning. Despite its promise, real-world datasets
often contain noisy labels that can degrade prompt learning performance. In
this paper, we demonstrate that using mean absolute error (MAE) loss in prompt
learning, named PromptMAE, significantly enhances robustness against noisy
labels while maintaining high accuracy. Though MAE is straightforward and
recognized for its robustness, it is rarely used in noisy-label learning due to
its slow convergence and poor performance outside prompt learning scenarios. To
elucidate the robustness of PromptMAE, we leverage feature learning theory to
show that MAE can suppress the influence of noisy samples, thereby improving
the signal-to-noise ratio and enhancing overall robustness. Additionally, we
introduce PromptOT, a prompt-based optimal transport data purification method
to enhance the robustness further. PromptOT employs text features in
vision-language models as prototypes to construct an optimal transportation
matrix. This matrix effectively partitions datasets into clean and noisy
subsets, allowing for the application of cross-entropy loss to the clean subset
and MAE loss to the noisy subset. Our Noise-Label Prompt Learning method, named
NLPrompt, offers a simple and efficient approach that leverages the expressive
representations and precise alignment capabilities of vision-language models
for robust prompt learning. We validate NLPrompt through extensive experiments
across various noise settings, demonstrating significant performance
improvements.
|
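The NLPrompt record above argues that an MAE loss makes prompt learning robust to noisy labels. The sketch below illustrates the core contrast in isolation: on one-hot targets the per-sample MAE is bounded, while cross-entropy can grow without bound for confidently mislabeled samples. It is not the paper's full PromptMAE/PromptOT pipeline.

```python
import torch
import torch.nn.functional as F

def mae_loss(logits, labels):
    probs = F.softmax(logits, dim=-1)
    one_hot = F.one_hot(labels, num_classes=logits.size(-1)).float()
    return (probs - one_hot).abs().sum(dim=-1).mean()  # bounded in [0, 2] per sample

def ce_loss(logits, labels):
    return F.cross_entropy(logits, labels)             # unbounded for confident mistakes
```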