id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1908.10993 | Deyan Ginev | Deyan Ginev, Bruce R. Miller | Scientific Statement Classification over arXiv.org | null | Proceedings of the Twelfth Language Resources and Evaluation
Conference, pages 1219--1226, Marseille, France. European Language Resources
Association (2020) | null | 2020.lrec-1.153 | cs.CL cs.AI cs.DL | http://creativecommons.org/publicdomain/zero/1.0/ | We introduce a new classification task for scientific statements and release
a large-scale dataset for supervised learning. Our resource is derived from a
machine-readable representation of the arXiv.org collection of preprint
articles. We explore fifty author-annotated categories and empirically motivate
a task design of grouping 10.5 million annotated paragraphs into thirteen
classes. We demonstrate that the task setup aligns with known success rates
from the state of the art, peaking at a 0.91 F1-score via a BiLSTM
encoder-decoder model. Additionally, we introduce a lexeme serialization for
mathematical formulas, and observe that context-aware models could improve when
also trained on the symbolic modality. Finally, we discuss the limitations of
both data and task design, and outline potential directions towards
increasingly complex models of scientific discourse, beyond isolated
statements.
| [
{
"version": "v1",
"created": "Thu, 29 Aug 2019 00:25:38 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Ginev",
"Deyan",
""
],
[
"Miller",
"Bruce R.",
""
]
] | TITLE: Scientific Statement Classification over arXiv.org
ABSTRACT: We introduce a new classification task for scientific statements and release
a large-scale dataset for supervised learning. Our resource is derived from a
machine-readable representation of the arXiv.org collection of preprint
articles. We explore fifty author-annotated categories and empirically motivate
a task design of grouping 10.5 million annotated paragraphs into thirteen
classes. We demonstrate that the task setup aligns with known success rates
from the state of the art, peaking at a 0.91 F1-score via a BiLSTM
encoder-decoder model. Additionally, we introduce a lexeme serialization for
mathematical formulas, and observe that context-aware models could improve when
also trained on the symbolic modality. Finally, we discuss the limitations of
both data and task design, and outline potential directions towards
increasingly complex models of scientific discourse, beyond isolated
statements.
|
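The record above reports a 0.91 F1-score from a BiLSTM encoder-decoder classifier over paragraph text. As a rough illustration of that model family (not the authors' implementation), the following is a minimal PyTorch sketch of a bidirectional-LSTM paragraph classifier; the vocabulary size, embedding dimension, and thirteen-class output head are assumptions for the example.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Minimal bidirectional-LSTM text classifier (illustrative only)."""

    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256, num_classes=13):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_classes)  # 2x for the two directions

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        x = self.embed(token_ids)
        _, (h_n, _) = self.lstm(x)               # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)  # concatenate forward/backward final states
        return self.head(h)                      # (batch, num_classes) logits

# Toy usage: a batch of two "paragraphs" of 10 token ids each.
model = BiLSTMClassifier()
logits = model(torch.randint(1, 30000, (2, 10)))
print(logits.shape)  # torch.Size([2, 13])
```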
2110.01729 | Julio Castrillon PhD | Julio Enrique Castrillon-Candas, Dingning Liu, Sicheng Yang, Xiaoling
Zhang, Mark Kon | Stochastic tensor space feature theory with applications to robust
machine learning | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we develop a Multilevel Orthogonal Subspace (MOS)
Karhunen-Loeve feature theory based on stochastic tensor spaces, for the
construction of robust machine learning features. Training data is treated as
instances of a random field within a relevant Bochner space. Our key
observation is that separate machine learning classes can reside predominantly
in distinct subspaces. Using the Karhunen-Loeve expansion and a
hierarchical expansion of the first (nominal) class, a MOS is constructed to
detect anomalous signal components, treating the second class as an outlier of
the first. The projection coefficients of the input data into these subspaces
are then used to train a Machine Learning (ML) classifier. These coefficients
become new features from which much clearer separation surfaces can arise for
the underlying classes. Tests on the blood plasma dataset (Alzheimer's Disease
Neuroimaging Initiative) show dramatic increases in accuracy. This is in
contrast to popular ML methods such as Gradient Boosting, RUS Boost, Random
Forest and (Convolutional) Neural Networks.
| [
{
"version": "v1",
"created": "Mon, 4 Oct 2021 22:01:01 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 17:18:40 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Jun 2023 15:22:04 GMT"
},
{
"version": "v4",
"created": "Fri, 25 Aug 2023 18:23:26 GMT"
},
{
"version": "v5",
"created": "Thu, 20 Mar 2025 12:32:40 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Castrillon-Candas",
"Julio Enrique",
""
],
[
"Liu",
"Dingning",
""
],
[
"Yang",
"Sicheng",
""
],
[
"Zhang",
"Xiaoling",
""
],
[
"Kon",
"Mark",
""
]
] | TITLE: Stochastic tensor space feature theory with applications to robust
machine learning
ABSTRACT: In this paper we develop a Multilevel Orthogonal Subspace (MOS)
Karhunen-Loeve feature theory based on stochastic tensor spaces, for the
construction of robust machine learning features. Training data is treated as
instances of a random field within a relevant Bochner space. Our key
observation is that separate machine learning classes can reside predominantly
in distinct subspaces. Using the Karhunen-Loeve expansion and a
hierarchical expansion of the first (nominal) class, a MOS is constructed to
detect anomalous signal components, treating the second class as an outlier of
the first. The projection coefficients of the input data into these subspaces
are then used to train a Machine Learning (ML) classifier. These coefficients
become new features from which much clearer separation surfaces can arise for
the underlying classes. Tests on the blood plasma dataset (Alzheimer's Disease
Neuroimaging Initiative) show dramatic increases in accuracy. This is in
contrast to popular ML methods such as Gradient Boosting, RUS Boost, Random
Forest and (Convolutional) Neural Networks.
|
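The paper above projects data onto a subspace built from a Karhunen-Loeve expansion of the nominal class and feeds the projection coefficients to an ML classifier. As a loose analogue (not the authors' multilevel orthogonal subspace construction), here is a sketch using PCA fit on one class only as a stand-in for the KL expansion; the synthetic data and classifier choice are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic two-class data standing in for the real dataset.
X_nominal = rng.normal(0.0, 1.0, size=(200, 50))   # "first (nominal) class"
X_outlier = rng.normal(0.5, 1.5, size=(200, 50))   # "second class as an outlier"
X = np.vstack([X_nominal, X_outlier])
y = np.array([0] * 200 + [1] * 200)

# Stand-in for the Karhunen-Loeve expansion of the nominal class:
# an orthogonal basis estimated from nominal-class samples only.
kl_basis = PCA(n_components=10).fit(X_nominal)

# Projection coefficients of *all* samples onto that subspace become new features.
coeffs = kl_basis.transform(X)
clf = SVC().fit(coeffs, y)
print("train accuracy on projection features:", clf.score(coeffs, y))
```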
2305.10361 | Eilam Shapira | Eilam Shapira, Omer Madmon, Reut Apel, Moshe Tennenholtz, Roi Reichart | Human Choice Prediction in Language-based Persuasion Games:
Simulation-based Off-Policy Evaluation | null | null | null | null | cs.LG cs.AI cs.GT | http://creativecommons.org/licenses/by/4.0/ | Recent advances in Large Language Models (LLMs) have spurred interest in
designing LLM-based agents for tasks that involve interaction with human and
artificial agents. This paper addresses a key aspect in the design of such
agents: predicting human decisions in off-policy evaluation (OPE). We focus on
language-based persuasion games, where an expert aims to influence the
decision-maker through verbal messages. In our OPE framework, the prediction
model is trained on human interaction data collected from encounters with one
set of expert agents, and its performance is evaluated on interactions with a
different set of experts. Using a dedicated application, we collected a dataset
of 87K decisions from humans playing a repeated decision-making game with
artificial agents. To enhance off-policy performance, we propose a simulation
technique involving interactions across the entire agent space and simulated
decision-makers. Our learning strategy yields significant OPE gains, e.g.,
improving prediction accuracy on the most challenging 15% of cases by 7.1%. Our
code and the large dataset we collected and generated are submitted as
supplementary material and publicly available in our GitHub repository:
https://github.com/eilamshapira/HumanChoicePrediction
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 16:38:11 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 18:58:21 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Nov 2023 13:46:53 GMT"
},
{
"version": "v4",
"created": "Wed, 28 Feb 2024 21:36:54 GMT"
},
{
"version": "v5",
"created": "Thu, 20 Mar 2025 14:27:22 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Shapira",
"Eilam",
""
],
[
"Madmon",
"Omer",
""
],
[
"Apel",
"Reut",
""
],
[
"Tennenholtz",
"Moshe",
""
],
[
"Reichart",
"Roi",
""
]
] | TITLE: Human Choice Prediction in Language-based Persuasion Games:
Simulation-based Off-Policy Evaluation
ABSTRACT: Recent advances in Large Language Models (LLMs) have spurred interest in
designing LLM-based agents for tasks that involve interaction with human and
artificial agents. This paper addresses a key aspect in the design of such
agents: predicting human decisions in off-policy evaluation (OPE). We focus on
language-based persuasion games, where an expert aims to influence the
decision-maker through verbal messages. In our OPE framework, the prediction
model is trained on human interaction data collected from encounters with one
set of expert agents, and its performance is evaluated on interactions with a
different set of experts. Using a dedicated application, we collected a dataset
of 87K decisions from humans playing a repeated decision-making game with
artificial agents. To enhance off-policy performance, we propose a simulation
technique involving interactions across the entire agent space and simulated
decision-makers. Our learning strategy yields significant OPE gains, e.g.,
improving prediction accuracy on the most challenging 15% of cases by 7.1%. Our
code and the large dataset we collected and generated are submitted as
supplementary material and publicly available in our GitHub repository:
https://github.com/eilamshapira/HumanChoicePrediction
|
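The OPE setup described above trains on interactions with one set of expert agents and evaluates on a disjoint set. One minimal way to reproduce that kind of split over an interaction log is to hold out entire experts; the column names below are assumptions for illustration, not the released dataset's schema.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Toy interaction log: each row is one human decision against some expert agent.
n = 1000
rng = np.random.default_rng(0)
expert_id = rng.integers(0, 12, size=n)   # which expert agent produced the messages
features = rng.normal(size=(n, 8))        # interaction features (assumed)
decision = rng.integers(0, 2, size=n)     # the human's binary choice

# Hold out entire experts, so evaluation measures transfer to *unseen* expert policies.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(features, decision, groups=expert_id))

assert set(expert_id[train_idx]).isdisjoint(set(expert_id[test_idx]))
print(len(train_idx), "train rows,", len(test_idx), "test rows from held-out experts")
```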
2306.01176 | Jiamian Wang | Jiamian Wang, Zongliang Wu, Yulun Zhang, Xin Yuan, Tao Lin, Zhiqiang
Tao | Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging | Accepted by NeurIPS 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing reconstruction models in snapshot compressive imaging systems (SCI)
are trained with a single well-calibrated hardware instance, making their
performance vulnerable to hardware shifts and limited in adapting to multiple
hardware configurations. To facilitate cross-hardware learning, previous
efforts attempt to directly collect multi-hardware data and perform centralized
training, which is impractical due to severe user data privacy concerns and
hardware heterogeneity across different platforms/institutions. In this study,
we explicitly consider data privacy and heterogeneity in cooperatively
optimizing SCI systems by proposing a Federated Hardware-Prompt learning
(FedHP) framework. Rather than mitigating the client drift by rectifying the
gradients, which only takes effect on the learning manifold but fails to solve
the heterogeneity rooted in the input data space, FedHP learns a
hardware-conditioned prompter to align inconsistent data distribution across
clients, serving as an indicator of the data inconsistency among different
hardware (e.g., coded apertures). Extensive experimental results demonstrate
that the proposed FedHP coordinates the pre-trained model to multiple hardware
configurations, outperforming prevalent FL frameworks by 0.35 dB under
challenging heterogeneous settings. Moreover, a Snapshot Spectral Heterogeneous
Dataset has been built upon multiple practical SCI systems. Data and code are
available at https://github.com/Jiamian-Wang/FedHP-Snapshot-Compressive-Imaging
| [
{
"version": "v1",
"created": "Thu, 1 Jun 2023 22:21:28 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 00:27:01 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wang",
"Jiamian",
""
],
[
"Wu",
"Zongliang",
""
],
[
"Zhang",
"Yulun",
""
],
[
"Yuan",
"Xin",
""
],
[
"Lin",
"Tao",
""
],
[
"Tao",
"Zhiqiang",
""
]
] | TITLE: Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging
ABSTRACT: Existing reconstruction models in snapshot compressive imaging systems (SCI)
are trained with a single well-calibrated hardware instance, making their
performance vulnerable to hardware shifts and limited in adapting to multiple
hardware configurations. To facilitate cross-hardware learning, previous
efforts attempt to directly collect multi-hardware data and perform centralized
training, which is impractical due to severe user data privacy concerns and
hardware heterogeneity across different platforms/institutions. In this study,
we explicitly consider data privacy and heterogeneity in cooperatively
optimizing SCI systems by proposing a Federated Hardware-Prompt learning
(FedHP) framework. Rather than mitigating the client drift by rectifying the
gradients, which only takes effect on the learning manifold but fails to solve
the heterogeneity rooted in the input data space, FedHP learns a
hardware-conditioned prompter to align inconsistent data distribution across
clients, serving as an indicator of the data inconsistency among different
hardware (e.g., coded apertures). Extensive experimental results demonstrate
that the proposed FedHP coordinates the pre-trained model to multiple hardware
configurations, outperforming prevalent FL frameworks by 0.35 dB under
challenging heterogeneous settings. Moreover, a Snapshot Spectral Heterogeneous
Dataset has been built upon multiple practical SCI systems. Data and code are
available at https://github.com/Jiamian-Wang/FedHP-Snapshot-Compressive-Imaging
|
2307.09420 | Ahmed Abdelkawy | Ahmed Abdelkawy, Islam Alkabbany, Asem Ali and Aly Farag | Measuring Student Behavioral Engagement using Histogram of Actions | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel technique for measuring behavioral
engagement through students' action recognition. The proposed approach
recognizes student actions and then predicts the student's behavioral engagement
level. For student action recognition, we use human skeletons to model student
postures and upper body movements. To learn the dynamics of student upper body,
a 3D-CNN model is used. The trained 3D-CNN model recognizes actions
within every 2-minute video segment, and these actions are then used to build a
histogram of actions that encodes the student actions and their frequencies.
This histogram is used as input to an SVM classifier to classify whether
the student is engaged or disengaged. To evaluate the proposed framework, we
build a dataset consisting of 1414 2-minute video segments annotated with 13
actions and 112 video segments annotated with two engagement levels.
Experimental results indicate that student actions can be recognized with a
top-1 accuracy of 83.63% and that the proposed framework can capture the average engagement
of the class.
| [
{
"version": "v1",
"created": "Tue, 18 Jul 2023 16:37:37 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Abdelkawy",
"Ahmed",
""
],
[
"Alkabbany",
"Islam",
""
],
[
"Ali",
"Asem",
""
],
[
"Farag",
"Aly",
""
]
] | TITLE: Measuring Student Behavioral Engagement using Histogram of Actions
ABSTRACT: In this paper, we propose a novel technique for measuring behavioral
engagement through students' action recognition. The proposed approach
recognizes student actions and then predicts the student's behavioral engagement
level. For student action recognition, we use human skeletons to model student
postures and upper body movements. To learn the dynamics of student upper body,
a 3D-CNN model is used. The trained 3D-CNN model recognizes actions
within every 2-minute video segment, and these actions are then used to build a
histogram of actions that encodes the student actions and their frequencies.
This histogram is used as input to an SVM classifier to classify whether
the student is engaged or disengaged. To evaluate the proposed framework, we
build a dataset consisting of 1414 2-minute video segments annotated with 13
actions and 112 video segments annotated with two engagement levels.
Experimental results indicate that student actions can be recognized with a
top-1 accuracy of 83.63% and that the proposed framework can capture the average engagement
of the class.
|
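The framework above counts recognized actions per video segment into a histogram that an SVM then classifies as engaged or disengaged. A minimal sketch of that second stage follows; the 3D-CNN recognizer is replaced by precomputed action labels, and the 13-action vocabulary is taken from the abstract while the toy data and label coding are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

NUM_ACTIONS = 13  # number of annotated action classes reported in the abstract

def histogram_of_actions(action_labels, num_actions=NUM_ACTIONS):
    """Count how often each action occurs in one 2-minute segment, normalized."""
    hist = np.bincount(action_labels, minlength=num_actions).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy data: recognized action sequences for a few segments plus engagement labels.
rng = np.random.default_rng(0)
segments = [rng.integers(0, NUM_ACTIONS, size=rng.integers(20, 60)) for _ in range(112)]
engaged = rng.integers(0, 2, size=112)  # 0 = disengaged, 1 = engaged (assumed coding)

X = np.stack([histogram_of_actions(s) for s in segments])
clf = SVC(kernel="rbf").fit(X, engaged)
print("training accuracy:", clf.score(X, engaged))
```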
2308.09701 | Joao F. Doriguello | Joao F. Doriguello, Alessandro Luongo, Ewin Tang | Do you know what q-means? | 14 pages. v2: improved the quantum complexity, added references | null | null | null | quant-ph cs.DS cs.LG | http://creativecommons.org/licenses/by/4.0/ | Clustering is one of the most important tools for analysis of large datasets,
and perhaps the most popular clustering algorithm is Lloyd's iteration for
$k$-means. This iteration takes $n$ vectors
$V=[v_1,\dots,v_n]\in\mathbb{R}^{n\times d}$ and outputs $k$ centroids
$c_1,\dots,c_k\in\mathbb{R}^d$; these partition the vectors into clusters based
on which centroid is closest to a particular vector. We present an overall
improved version of the "$q$-means" algorithm, the quantum algorithm originally
proposed by Kerenidis, Landman, Luongo, and Prakash (NeurIPS'19) which performs
$\varepsilon$-$k$-means, an approximate version of $k$-means clustering. Our
algorithm does not rely on quantum linear algebra primitives of prior work, but
instead only uses QRAM to prepare simple states based on the current
iteration's clusters and multivariate quantum amplitude estimation. The time
complexity is
$\widetilde{O}\big(\frac{\|V\|_F}{\sqrt{n}}\frac{k^{5/2}d}{\varepsilon}(\sqrt{k}
+ \log{n})\big)$ and maintains the logarithmic dependence on $n$ while
improving the dependence on most of the other parameters. We also present a
"dequantized" algorithm for $\varepsilon$-$k$-means which runs in
$O\big(\frac{\|V\|_F^2}{n}\frac{k^{2}}{\varepsilon^2}(kd + \log{n})\big)$ time.
Notably, this classical algorithm matches the logarithmic dependence on $n$
attained by the quantum algorithm.
| [
{
"version": "v1",
"created": "Fri, 18 Aug 2023 17:52:12 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 17:47:44 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Doriguello",
"Joao F.",
""
],
[
"Luongo",
"Alessandro",
""
],
[
"Tang",
"Ewin",
""
]
] | TITLE: Do you know what q-means?
ABSTRACT: Clustering is one of the most important tools for analysis of large datasets,
and perhaps the most popular clustering algorithm is Lloyd's iteration for
$k$-means. This iteration takes $n$ vectors
$V=[v_1,\dots,v_n]\in\mathbb{R}^{n\times d}$ and outputs $k$ centroids
$c_1,\dots,c_k\in\mathbb{R}^d$; these partition the vectors into clusters based
on which centroid is closest to a particular vector. We present an overall
improved version of the "$q$-means" algorithm, the quantum algorithm originally
proposed by Kerenidis, Landman, Luongo, and Prakash (NeurIPS'19) which performs
$\varepsilon$-$k$-means, an approximate version of $k$-means clustering. Our
algorithm does not rely on quantum linear algebra primitives of prior work, but
instead only uses QRAM to prepare simple states based on the current
iteration's clusters and multivariate quantum amplitude estimation. The time
complexity is
$\widetilde{O}\big(\frac{\|V\|_F}{\sqrt{n}}\frac{k^{5/2}d}{\varepsilon}(\sqrt{k}
+ \log{n})\big)$ and maintains the logarithmic dependence on $n$ while
improving the dependence on most of the other parameters. We also present a
"dequantized" algorithm for $\varepsilon$-$k$-means which runs in
$O\big(\frac{\|V\|_F^2}{n}\frac{k^{2}}{\varepsilon^2}(kd + \log{n})\big)$ time.
Notably, this classical algorithm matches the logarithmic dependence on $n$
attained by the quantum algorithm.
|
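The abstract above is built around Lloyd's iteration for $k$-means; for reference, one classical Lloyd update (assignment step plus centroid step) looks like the NumPy sketch below. The quantum and dequantized variants in the paper approximate exactly these two steps, so this is only the baseline they speed up.

```python
import numpy as np

def lloyd_iteration(V, centroids):
    """One step of Lloyd's k-means: assign each vector, then recompute centroids."""
    # Squared distances from every vector to every centroid: shape (n, k).
    d2 = ((V[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)
    new_centroids = centroids.copy()
    for j in range(centroids.shape[0]):
        members = V[assign == j]
        if len(members) > 0:
            new_centroids[j] = members.mean(axis=0)
    return new_centroids, assign

rng = np.random.default_rng(0)
V = rng.normal(size=(500, 4))                      # n vectors in R^d
c = V[rng.choice(len(V), size=3, replace=False)]   # k initial centroids
for _ in range(10):
    c, labels = lloyd_iteration(V, c)
print(c)
```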
2312.00267 | Viraj Mehta | Viraj Mehta and Syrine Belakaria and Vikramjeet Das and Ojash Neopane
and Yijia Dai and Ilija Bogunovic and Barbara Engelhardt and Stefano Ermon
and Jeff Schneider and Willie Neiswanger | Sample Efficient Preference Alignment in LLMs via Active Exploration | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Preference-based feedback is important for many applications in machine
learning where evaluation of a reward function is not feasible. Notable recent
examples arise in preference alignment for large language models, including in
reinforcement learning from human feedback (RLHF) and direct preference
optimization (DPO). For many applications of preference alignment, the cost of
acquiring human feedback can be substantial. In this work, we take advantage of
the fact that one can often choose contexts at which to obtain human feedback
to most efficiently identify a good policy, and formalize the setting as an
active contextual dueling bandit problem. We propose an active exploration
algorithm to efficiently select the data and provide theoretical proof that it
has a polynomial worst-case regret bound. We extend the setting and methodology
for practical use in preference alignment of large language models. We provide
two extensions, an online and an offline approach. Our method outperforms the
baselines with limited samples of human preferences on several language models
and four real-world datasets including two new datasets that we contribute to
the literature.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 2023 00:54:02 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 14:23:52 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 14:23:17 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Mehta",
"Viraj",
""
],
[
"Belakaria",
"Syrine",
""
],
[
"Das",
"Vikramjeet",
""
],
[
"Neopane",
"Ojash",
""
],
[
"Dai",
"Yijia",
""
],
[
"Bogunovic",
"Ilija",
""
],
[
"Engelhardt",
"Barbara",
""
],
[
"Ermon",
"Stefano",
""
],
[
"Schneider",
"Jeff",
""
],
[
"Neiswanger",
"Willie",
""
]
] | TITLE: Sample Efficient Preference Alignment in LLMs via Active Exploration
ABSTRACT: Preference-based feedback is important for many applications in machine
learning where evaluation of a reward function is not feasible. Notable recent
examples arise in preference alignment for large language models, including in
reinforcement learning from human feedback (RLHF) and direct preference
optimization (DPO). For many applications of preference alignment, the cost of
acquiring human feedback can be substantial. In this work, we take advantage of
the fact that one can often choose contexts at which to obtain human feedback
to most efficiently identify a good policy, and formalize the setting as an
active contextual dueling bandit problem. We propose an active exploration
algorithm to efficiently select the data and provide theoretical proof that it
has a polynomial worst-case regret bound. We extend the setting and methodology
for practical use in preference alignment of large language models. We provide
two extensions, an online and an offline approach. Our method outperforms the
baselines with limited samples of human preferences on several language models
and four real-world datasets including two new datasets that we contribute to
the literature.
|
2312.06358 | Vivek Gopalakrishnan | Vivek Gopalakrishnan, Neel Dey, Polina Golland | Intraoperative 2D/3D Image Registration via Differentiable X-ray
Rendering | CVPR 2024 | null | 10.1109/cvpr52733.2024.01108 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Surgical decisions are informed by aligning rapid portable 2D intraoperative
images (e.g., X-rays) to a high-fidelity 3D preoperative reference scan (e.g.,
CT). 2D/3D image registration often fails in practice: conventional
optimization methods are prohibitively slow and susceptible to local minima,
while neural networks trained on small datasets fail on new patients or require
impractical landmark supervision. We present DiffPose, a self-supervised
approach that leverages patient-specific simulation and differentiable
physics-based rendering to achieve accurate 2D/3D registration without relying
on manually labeled data. Preoperatively, a CNN is trained to regress the pose
of a randomly oriented synthetic X-ray rendered from the preoperative CT. The
CNN then initializes rapid intraoperative test-time optimization that uses the
differentiable X-ray renderer to refine the solution. Our work further proposes
several geometrically principled methods for sampling camera poses from
$\mathbf{SE}(3)$, for sparse differentiable rendering, and for driving
registration in the tangent space $\mathfrak{se}(3)$ with geodesic and
multiscale locality-sensitive losses. DiffPose achieves sub-millimeter accuracy
across surgical datasets at intraoperative speeds, improving upon existing
unsupervised methods by an order of magnitude and even outperforming supervised
baselines. Our code is available at https://github.com/eigenvivek/DiffPose.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 13:05:54 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Mar 2024 12:24:29 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gopalakrishnan",
"Vivek",
""
],
[
"Dey",
"Neel",
""
],
[
"Golland",
"Polina",
""
]
] | TITLE: Intraoperative 2D/3D Image Registration via Differentiable X-ray
Rendering
ABSTRACT: Surgical decisions are informed by aligning rapid portable 2D intraoperative
images (e.g., X-rays) to a high-fidelity 3D preoperative reference scan (e.g.,
CT). 2D/3D image registration often fails in practice: conventional
optimization methods are prohibitively slow and susceptible to local minima,
while neural networks trained on small datasets fail on new patients or require
impractical landmark supervision. We present DiffPose, a self-supervised
approach that leverages patient-specific simulation and differentiable
physics-based rendering to achieve accurate 2D/3D registration without relying
on manually labeled data. Preoperatively, a CNN is trained to regress the pose
of a randomly oriented synthetic X-ray rendered from the preoperative CT. The
CNN then initializes rapid intraoperative test-time optimization that uses the
differentiable X-ray renderer to refine the solution. Our work further proposes
several geometrically principled methods for sampling camera poses from
$\mathbf{SE}(3)$, for sparse differentiable rendering, and for driving
registration in the tangent space $\mathfrak{se}(3)$ with geodesic and
multiscale locality-sensitive losses. DiffPose achieves sub-millimeter accuracy
across surgical datasets at intraoperative speeds, improving upon existing
unsupervised methods by an order of magnitude and even outperforming supervised
baselines. Our code is available at https://github.com/eigenvivek/DiffPose.
|
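DiffPose drives registration with geodesic losses on the rotation component of $\mathbf{SE}(3)$. As a small self-contained illustration (not the authors' code), the geodesic distance between two rotation matrices can be computed from the trace of their relative rotation:

```python
import numpy as np

def geodesic_distance(R1, R2):
    """Angle (radians) of the relative rotation R1^T R2, i.e. the SO(3) geodesic."""
    R_rel = R1.T @ R2
    # Clip for numerical safety before arccos.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# A 30-degree z-rotation is 30 degrees away from the identity along the geodesic.
print(np.degrees(geodesic_distance(np.eye(3), rot_z(np.radians(30.0)))))  # ~30.0
```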
2312.13016 | Yuming Gu | Yuming Gu, You Xie, Hongyi Xu, Guoxian Song, Yichun Shi, Di Chang,
Jing Yang, Linjie Luo | DiffPortrait3D: Controllable Diffusion for Zero-Shot Portrait View
Synthesis | null | https://openaccess.thecvf.com/content/CVPR2024/html/Gu_DiffPortrait3D_Controllable_Diffusion_for_Zero-Shot_Portrait_View_Synthesis_CVPR_2024_paper.html | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present DiffPortrait3D, a conditional diffusion model that is capable of
synthesizing 3D-consistent photo-realistic novel views from as few as a single
in-the-wild portrait. Specifically, given a single RGB input, we aim to
synthesize plausible but consistent facial details rendered from novel camera
views while retaining both identity and facial expression. In lieu of
time-consuming optimization and fine-tuning, our zero-shot method generalizes
well to arbitrary face portraits with unposed camera views, extreme facial
expressions, and diverse artistic depictions. At its core, we leverage the
generative prior of 2D diffusion models pre-trained on large-scale image
datasets as our rendering backbone, while the denoising is guided with
disentangled attentive control of appearance and camera pose. To achieve this,
we first inject the appearance context from the reference image into the
self-attention layers of the frozen UNets. The rendering view is then
manipulated with a novel conditional control module that interprets the camera
pose by watching a condition image of a crossed subject from the same view.
Furthermore, we insert a trainable cross-view attention module to enhance view
consistency, which is further strengthened with a novel 3D-aware noise
generation process during inference. We demonstrate state-of-the-art results
both qualitatively and quantitatively on our challenging in-the-wild and
multi-view benchmarks.
| [
{
"version": "v1",
"created": "Wed, 20 Dec 2023 13:31:11 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Dec 2023 18:26:21 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Dec 2023 15:56:46 GMT"
},
{
"version": "v4",
"created": "Tue, 19 Mar 2024 23:01:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gu",
"Yuming",
""
],
[
"Xie",
"You",
""
],
[
"Xu",
"Hongyi",
""
],
[
"Song",
"Guoxian",
""
],
[
"Shi",
"Yichun",
""
],
[
"Chang",
"Di",
""
],
[
"Yang",
"Jing",
""
],
[
"Luo",
"Linjie",
""
]
] | TITLE: DiffPortrait3D: Controllable Diffusion for Zero-Shot Portrait View
Synthesis
ABSTRACT: We present DiffPortrait3D, a conditional diffusion model that is capable of
synthesizing 3D-consistent photo-realistic novel views from as few as a single
in-the-wild portrait. Specifically, given a single RGB input, we aim to
synthesize plausible but consistent facial details rendered from novel camera
views while retaining both identity and facial expression. In lieu of
time-consuming optimization and fine-tuning, our zero-shot method generalizes
well to arbitrary face portraits with unposed camera views, extreme facial
expressions, and diverse artistic depictions. At its core, we leverage the
generative prior of 2D diffusion models pre-trained on large-scale image
datasets as our rendering backbone, while the denoising is guided with
disentangled attentive control of appearance and camera pose. To achieve this,
we first inject the appearance context from the reference image into the
self-attention layers of the frozen UNets. The rendering view is then
manipulated with a novel conditional control module that interprets the camera
pose by watching a condition image of a crossed subject from the same view.
Furthermore, we insert a trainable cross-view attention module to enhance view
consistency, which is further strengthened with a novel 3D-aware noise
generation process during inference. We demonstrate state-of-the-art results
both qualitatively and quantitatively on our challenging in-the-wild and
multi-view benchmarks.
|
2401.09769 | Chenghua Gong | Chenghua Gong, Yao Cheng, Jianxiang Yu, Can Xu, Caihua Shan, Siqiang
Luo, Xiang Li | A Survey on Learning from Graphs with Heterophily: Recent Advances and
Future Directions | 64 pages | Frontiers of Computer Science 2025 | 10.1007/s11704-025-41059-z | null | cs.SI cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphs are structured data that model complex relations between real-world
entities. Heterophilic graphs, in which linked nodes tend to have
different labels or dissimilar features, have recently attracted significant
attention and found many real-world applications. Meanwhile, increasing efforts
have been made to advance learning from graphs with heterophily. Various graph
heterophily measures, benchmark datasets, and learning paradigms are emerging
rapidly. In this survey, we comprehensively review existing works on learning
from graphs with heterophily. First, we overview over 500 publications, of
which more than 340 are directly related to heterophilic graphs. After that, we
survey existing metrics of graph heterophily and list recent benchmark
datasets. Further, we systematically categorize existing methods based on a
hierarchical taxonomy including GNN models, learning paradigms and practical
applications. In addition, broader topics related to graph heterophily are also
included. Finally, we discuss the primary challenges of existing studies and
highlight promising avenues for future research.
| [
{
"version": "v1",
"created": "Thu, 18 Jan 2024 07:36:38 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Feb 2024 12:12:21 GMT"
},
{
"version": "v3",
"created": "Wed, 24 Jul 2024 13:49:13 GMT"
},
{
"version": "v4",
"created": "Mon, 30 Sep 2024 05:56:58 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gong",
"Chenghua",
""
],
[
"Cheng",
"Yao",
""
],
[
"Yu",
"Jianxiang",
""
],
[
"Xu",
"Can",
""
],
[
"Shan",
"Caihua",
""
],
[
"Luo",
"Siqiang",
""
],
[
"Li",
"Xiang",
""
]
] | TITLE: A Survey on Learning from Graphs with Heterophily: Recent Advances and
Future Directions
ABSTRACT: Graphs are structured data that model complex relations between real-world
entities. Heterophilic graphs, in which linked nodes tend to have
different labels or dissimilar features, have recently attracted significant
attention and found many real-world applications. Meanwhile, increasing efforts
have been made to advance learning from graphs with heterophily. Various graph
heterophily measures, benchmark datasets, and learning paradigms are emerging
rapidly. In this survey, we comprehensively review existing works on learning
from graphs with heterophily. First, we overview over 500 publications, of
which more than 340 are directly related to heterophilic graphs. After that, we
survey existing metrics of graph heterophily and list recent benchmark
datasets. Further, we systematically categorize existing methods based on a
hierarchical taxonomy including GNN models, learning paradigms and practical
applications. In addition, broader topics related to graph heterophily are also
included. Finally, we discuss the primary challenges of existing studies and
highlight promising avenues for future research.
|
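The survey above catalogues graph-heterophily metrics. The most common one, edge homophily, is simply the fraction of edges joining same-label nodes (heterophilic graphs have low values); a small NumPy sketch for intuition, with a made-up toy graph:

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a label; lower means more heterophilic."""
    src, dst = edges[:, 0], edges[:, 1]
    return float(np.mean(labels[src] == labels[dst]))

# Toy graph: 6 nodes, two classes, mostly cross-class edges (heterophilic).
labels = np.array([0, 0, 0, 1, 1, 1])
edges = np.array([[0, 3], [1, 4], [2, 5], [0, 1], [3, 4]])
h = edge_homophily(edges, labels)
print(f"edge homophily = {h:.2f}, heterophily = {1 - h:.2f}")
```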
2401.10288 | Hyunju Kim | Hyunju Kim and Dongman Lee | Self-supervised New Activity Detection in Sensor-based Smart
Environments | null | null | null | null | cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of ubiquitous computing technology, human activity
analysis based on time series data from a diverse range of sensors enables the
delivery of more intelligent services. Despite the importance of exploring new
activities in real-world scenarios, existing human activity recognition studies
generally rely on predefined known activities and often overlook detecting new
patterns (novelties) that have not been previously observed during training.
Novelty detection in human activities becomes even more challenging due to (1)
diversity of patterns within the same known activity, (2) shared patterns
between known and new activities, and (3) differences in sensor properties of
each activity dataset. We introduce CLAN, a two-tower model that leverages
Contrastive Learning with diverse data Augmentation for New activity detection
in sensor-based environments. CLAN simultaneously and explicitly utilizes
multiple types of strongly shifted data as negative samples in contrastive
learning, effectively learning invariant representations that adapt to various
pattern variations within the same activity. To enhance the ability to
distinguish between known and new activities that share common features, CLAN
incorporates both time and frequency domains, enabling the learning of
multi-faceted discriminative representations. Additionally, we design an
automatic selection mechanism of data augmentation methods tailored to each
dataset's properties, generating appropriate positive and negative pairs for
contrastive learning. Comprehensive experiments on real-world datasets show
that CLAN achieves a 9.24% improvement in AUROC compared to the best-performing
baseline model.
| [
{
"version": "v1",
"created": "Wed, 17 Jan 2024 03:57:36 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 12:01:44 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Kim",
"Hyunju",
""
],
[
"Lee",
"Dongman",
""
]
] | TITLE: Self-supervised New Activity Detection in Sensor-based Smart
Environments
ABSTRACT: With the rapid advancement of ubiquitous computing technology, human activity
analysis based on time series data from a diverse range of sensors enables the
delivery of more intelligent services. Despite the importance of exploring new
activities in real-world scenarios, existing human activity recognition studies
generally rely on predefined known activities and often overlook detecting new
patterns (novelties) that have not been previously observed during training.
Novelty detection in human activities becomes even more challenging due to (1)
diversity of patterns within the same known activity, (2) shared patterns
between known and new activities, and (3) differences in sensor properties of
each activity dataset. We introduce CLAN, a two-tower model that leverages
Contrastive Learning with diverse data Augmentation for New activity detection
in sensor-based environments. CLAN simultaneously and explicitly utilizes
multiple types of strongly shifted data as negative samples in contrastive
learning, effectively learning invariant representations that adapt to various
pattern variations within the same activity. To enhance the ability to
distinguish between known and new activities that share common features, CLAN
incorporates both time and frequency domains, enabling the learning of
multi-faceted discriminative representations. Additionally, we design an
automatic selection mechanism of data augmentation methods tailored to each
dataset's properties, generating appropriate positive and negative pairs for
contrastive learning. Comprehensive experiments on real-world datasets show
that CLAN achieves a 9.24% improvement in AUROC compared to the best-performing
baseline model.
|
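CLAN trains contrastively with strongly shifted augmentations used explicitly as negatives. As a generic illustration of that kind of objective (an InfoNCE-style loss with explicit negative samples, not the authors' exact formulation), under the assumption that anchor, positive, and negative embeddings are already computed:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull anchor toward positive, push away from negatives.

    anchor, positive: (batch, dim); negatives: (batch, n_neg, dim).
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos_sim = (a * p).sum(dim=-1, keepdim=True) / temperature  # (batch, 1)
    neg_sim = torch.einsum("bd,bkd->bk", a, n) / temperature   # (batch, n_neg)
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    # The positive is always at index 0 of the logits.
    targets = torch.zeros(anchor.size(0), dtype=torch.long)
    return F.cross_entropy(logits, targets)

# Toy embeddings: the negatives would come from strongly shifted augmentations.
z_a, z_p = torch.randn(8, 64), torch.randn(8, 64)
z_n = torch.randn(8, 5, 64)
print(contrastive_loss(z_a, z_p, z_n).item())
```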
2402.15216 | Yongzhi Huang | Yongzhi Huang, Fengjun Xi, Liyun Tu, Jinxin Zhu, Haseeb Hassan,
Liyilei Su, Yun Peng, Jingyu Li, Jun Ma, Bingding Huang | Label-efficient multi-organ segmentation with a diffusion model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate segmentation of multiple organs in Computed Tomography (CT) images
plays a vital role in computer-aided diagnosis systems. While various
supervised learning approaches have been proposed recently, these methods
heavily depend on a large amount of high-quality labeled data, which are
expensive to obtain in practice. To address this challenge, we propose a
label-efficient framework using knowledge transfer from a pre-trained diffusion
model for CT multi-organ segmentation. Specifically, we first pre-train a
denoising diffusion model on 207,029 unlabeled 2D CT slices to capture
anatomical patterns. Then, the model backbone is transferred to the downstream
multi-organ segmentation task, followed by fine-tuning with few labeled data.
In fine-tuning, two fine-tuning strategies, linear classification and
fine-tuning decoder, are employed to enhance segmentation performance while
preserving learned representations. Quantitative results show that the
pre-trained diffusion model is capable of generating diverse and realistic
256x256 CT images (Fr\'echet inception distance (FID): 11.32, spatial Fr\'echet
inception distance (sFID): 46.93, F1-score: 73.1%). Compared to
state-of-the-art methods for multi-organ segmentation, our method achieves
competitive performance on the FLARE 2022 dataset, particularly in limited
labeled data scenarios. After fine-tuning with 1% and 10% labeled data, our
method achieves dice similarity coefficients (DSCs) of 71.56% and 78.51%,
respectively. Remarkably, the method achieves a DSC score of 51.81% using only
four labeled CT slices. These results demonstrate the efficacy of our approach
in overcoming the limitations of supervised learning approaches that are highly
dependent on large-scale labeled data.
| [
{
"version": "v1",
"created": "Fri, 23 Feb 2024 09:25:57 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 02:42:26 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Huang",
"Yongzhi",
""
],
[
"Xi",
"Fengjun",
""
],
[
"Tu",
"Liyun",
""
],
[
"Zhu",
"Jinxin",
""
],
[
"Hassan",
"Haseeb",
""
],
[
"Su",
"Liyilei",
""
],
[
"Peng",
"Yun",
""
],
[
"Li",
"Jingyu",
""
],
[
"Ma",
"Jun",
""
],
[
"Huang",
"Bingding",
""
]
] | TITLE: Label-efficient multi-organ segmentation with a diffusion model
ABSTRACT: Accurate segmentation of multiple organs in Computed Tomography (CT) images
plays a vital role in computer-aided diagnosis systems. While various
supervised learning approaches have been proposed recently, these methods
heavily depend on a large amount of high-quality labeled data, which are
expensive to obtain in practice. To address this challenge, we propose a
label-efficient framework using knowledge transfer from a pre-trained diffusion
model for CT multi-organ segmentation. Specifically, we first pre-train a
denoising diffusion model on 207,029 unlabeled 2D CT slices to capture
anatomical patterns. Then, the model backbone is transferred to the downstream
multi-organ segmentation task, followed by fine-tuning with few labeled data.
In fine-tuning, two fine-tuning strategies, linear classification and
fine-tuning decoder, are employed to enhance segmentation performance while
preserving learned representations. Quantitative results show that the
pre-trained diffusion model is capable of generating diverse and realistic
256x256 CT images (Fr\'echet inception distance (FID): 11.32, spatial Fr\'echet
inception distance (sFID): 46.93, F1-score: 73.1%). Compared to
state-of-the-art methods for multi-organ segmentation, our method achieves
competitive performance on the FLARE 2022 dataset, particularly in limited
labeled data scenarios. After fine-tuning with 1% and 10% labeled data, our
method achieves dice similarity coefficients (DSCs) of 71.56% and 78.51%,
respectively. Remarkably, the method achieves a DSC score of 51.81% using only
four labeled CT slices. These results demonstrate the efficacy of our approach
in overcoming the limitations of supervised learning approaches that are highly
dependent on large-scale labeled data.
|
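The record above compares two fine-tuning strategies on a pre-trained backbone: linear classification (train only a new head) and fine-tuning the decoder. The freezing logic itself is generic PyTorch; a minimal sketch with a placeholder model, whose module names and class count are assumptions rather than the paper's architecture:

```python
import torch.nn as nn

# Placeholder stand-in for a pre-trained encoder-decoder segmentation network.
model = nn.ModuleDict({
    "encoder": nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU()),
    "decoder": nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU()),
    "head": nn.Conv2d(16, 5, 1),  # e.g. 5 organ classes (assumed)
})

def set_finetune_mode(model, mode):
    """'linear': train only the head; 'decoder': train decoder + head; 'full': train all."""
    for name, module in model.items():
        trainable = (
            mode == "full"
            or name == "head"
            or (mode == "decoder" and name == "decoder")
        )
        for p in module.parameters():
            p.requires_grad = trainable

set_finetune_mode(model, "linear")
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable params")
```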
2403.03029 | Anmol Goel | Anmol Goel, Nico Daheim, Christian Montag, Iryna Gurevych | Socratic Reasoning Improves Positive Text Rewriting | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Reframing a negative into a positive thought is at the crux of several
cognitive approaches to mental health and psychotherapy that could be made more
accessible by large language model-based solutions. Such reframing is typically
non-trivial and requires multiple rationalization steps to uncover the
underlying issue of a negative thought and transform it to be more positive.
However, this rationalization process is currently neglected by both datasets
and models which reframe thoughts in one step. In this work, we address this
gap by augmenting open-source datasets for positive text rewriting with
synthetically-generated Socratic rationales using a novel framework called
\textsc{SocraticReframe}. SocraticReframe uses a sequence of question-answer
pairs to rationalize the thought rewriting process. We show that such Socratic
rationales significantly improve positive text rewriting for different
open-source LLMs according to both automatic and human evaluations guided by
criteria from psychotherapy research. We validate our framework and the
synthetic rationalizations with expert judgements from domain experts and
psychology students in an IRB-approved annotation study. Our findings highlight
the potential of utilizing the synergy between LLM reasoning and established
psychotherapy techniques to build assistive solutions for reframing negative
thoughts.
| [
{
"version": "v1",
"created": "Tue, 5 Mar 2024 15:05:06 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 13:43:29 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Goel",
"Anmol",
""
],
[
"Daheim",
"Nico",
""
],
[
"Montag",
"Christian",
""
],
[
"Gurevych",
"Iryna",
""
]
] | TITLE: Socratic Reasoning Improves Positive Text Rewriting
ABSTRACT: Reframing a negative into a positive thought is at the crux of several
cognitive approaches to mental health and psychotherapy that could be made more
accessible by large language model-based solutions. Such reframing is typically
non-trivial and requires multiple rationalization steps to uncover the
underlying issue of a negative thought and transform it to be more positive.
However, this rationalization process is currently neglected by both datasets
and models which reframe thoughts in one step. In this work, we address this
gap by augmenting open-source datasets for positive text rewriting with
synthetically-generated Socratic rationales using a novel framework called
\textsc{SocraticReframe}. SocraticReframe uses a sequence of question-answer
pairs to rationalize the thought rewriting process. We show that such Socratic
rationales significantly improve positive text rewriting for different
open-source LLMs according to both automatic and human evaluations guided by
criteria from psychotherapy research. We validate our framework and the
synthetic rationalizations with expert judgements from domain experts and
psychology students in an IRB-approved annotation study. Our findings highlight
the potential of utilizing the synergy between LLM reasoning and established
psychotherapy techniques to build assistive solutions for reframing negative
thoughts.
|
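SocraticReframe augments a rewriting example with a sequence of question-answer pairs that rationalize the transformation. The released format is not shown here, so the following is only a hypothetical template illustrating how such rationales might be interleaved before the positive rewrite; all field names and the example content are invented for illustration.

```python
def build_socratic_example(negative_thought, qa_pairs, positive_rewrite):
    """Assemble one training example: thought, Socratic Q-A rationale, then rewrite.

    The layout below is an illustrative assumption, not the released dataset format.
    """
    lines = [f"Negative thought: {negative_thought}"]
    for i, (q, a) in enumerate(qa_pairs, start=1):
        lines.append(f"Q{i}: {q}")
        lines.append(f"A{i}: {a}")
    lines.append(f"Positive rewrite: {positive_rewrite}")
    return "\n".join(lines)

example = build_socratic_example(
    "I failed the exam, so I'm useless.",
    [("What does one exam actually measure?", "One performance on one day, not my worth."),
     ("What could I do differently next time?", "Start revising earlier and ask for help.")],
    "One exam went badly, and I now know how to prepare better next time.",
)
print(example)
```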
2403.11371 | Baolu Li | Baolu Li and Jinlong Li and Xinyu Liu and Runsheng Xu and Zhengzhong
Tu and Jiacheng Guo and Xiaopeng Li and Hongkai Yu | V2X-DGW: Domain Generalization for Multi-agent Perception under Adverse
Weather Conditions | accepted by ICRA 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Current LiDAR-based Vehicle-to-Everything (V2X) multi-agent perception
systems have shown significant success in 3D object detection. While these
models perform well in the clean weather they were trained on, they struggle in
unseen adverse weather conditions due to the domain gap. In this paper, we propose a
Domain Generalization based approach, named \textit{V2X-DGW}, for LiDAR-based
3D object detection in multi-agent perception systems under adverse weather
conditions. Our research aims to not only maintain favorable multi-agent
performance in the clean weather but also promote the performance in the unseen
adverse weather conditions by learning only on the clean weather data. To
realize the Domain Generalization, we first introduce the Adaptive Weather
Augmentation (AWA) to mimic the unseen adverse weather conditions, and then
propose two alignments for generalizable representation learning: Trust-region
Weather-invariant Alignment (TWA) and Agent-aware Contrastive Alignment (ACA).
To evaluate this research, we add Fog, Rain, Snow conditions on two publicized
multi-agent datasets based on physics-based models, resulting in two new
datasets: OPV2V-w and V2XSet-w. Extensive experiments demonstrate that our
V2X-DGW achieved significant improvements under unseen adverse weather conditions. The
code is available at https://github.com/Baolu1998/V2X-DGW.
| [
{
"version": "v1",
"created": "Sun, 17 Mar 2024 23:29:41 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Mar 2024 19:50:51 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Mar 2024 00:55:04 GMT"
},
{
"version": "v4",
"created": "Fri, 29 Mar 2024 14:19:56 GMT"
},
{
"version": "v5",
"created": "Tue, 24 Sep 2024 15:57:10 GMT"
},
{
"version": "v6",
"created": "Wed, 19 Mar 2025 18:34:05 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Li",
"Baolu",
""
],
[
"Li",
"Jinlong",
""
],
[
"Liu",
"Xinyu",
""
],
[
"Xu",
"Runsheng",
""
],
[
"Tu",
"Zhengzhong",
""
],
[
"Guo",
"Jiacheng",
""
],
[
"Li",
"Xiaopeng",
""
],
[
"Yu",
"Hongkai",
""
]
] | TITLE: V2X-DGW: Domain Generalization for Multi-agent Perception under Adverse
Weather Conditions
ABSTRACT: Current LiDAR-based Vehicle-to-Everything (V2X) multi-agent perception
systems have shown significant success in 3D object detection. While these
models perform well in the clean weather they were trained on, they struggle in
unseen adverse weather conditions due to the domain gap. In this paper, we propose a
Domain Generalization based approach, named \textit{V2X-DGW}, for LiDAR-based
3D object detection in multi-agent perception systems under adverse weather
conditions. Our research aims to not only maintain favorable multi-agent
performance in the clean weather but also promote the performance in the unseen
adverse weather conditions by learning only on the clean weather data. To
realize the Domain Generalization, we first introduce the Adaptive Weather
Augmentation (AWA) to mimic the unseen adverse weather conditions, and then
propose two alignments for generalizable representation learning: Trust-region
Weather-invariant Alignment (TWA) and Agent-aware Contrastive Alignment (ACA).
To evaluate this research, we add Fog, Rain, Snow conditions on two publicized
multi-agent datasets based on physics-based models, resulting in two new
datasets: OPV2V-w and V2XSet-w. Extensive experiments demonstrate that our
V2X-DGW achieved significant improvements under unseen adverse weather conditions. The
code is available at https://github.com/Baolu1998/V2X-DGW.
|
2403.19612 | Dmitrii Zhemchuzhnikov | Dmitrii Zhemchuzhnikov and Sergei Grudinin | ILPO-NET: Network for the invariant recognition of arbitrary volumetric
patterns in 3D | null | Machine Learning and Knowledge Discovery in Databases. Research
Track. ECML PKDD 2024. Lecture Notes in Computer Science(), vol 14944.
Springer, Cham | 10.1007/978-3-031-70359-1_21 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Effective recognition of spatial patterns and learning their hierarchy is
crucial in modern spatial data analysis. Volumetric data applications seek
techniques ensuring invariance not only to shifts but also to pattern
rotations. While traditional methods can readily achieve translational
invariance, rotational invariance poses multiple challenges and remains an
active area of research. Here, we present ILPO-Net (Invariant to Local Patterns
Orientation Network), a novel approach that handles arbitrarily shaped patterns
with the convolutional operation inherently invariant to local spatial pattern
orientations using the Wigner matrix expansions. Our architecture seamlessly
integrates the new convolution operator and, when benchmarked on diverse
volumetric datasets such as MedMNIST and CATH, demonstrates superior
performance over the baselines with significantly reduced parameter counts - up
to 1000 times fewer in the case of MedMNIST. Beyond these demonstrations,
ILPO-Net's rotational invariance paves the way for other applications across
multiple disciplines. Our code is publicly available at
https://gricad-gitlab.univ-grenoble-alpes.fr/GruLab/ILPO/-/tree/main/ILPONet.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2024 17:32:01 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Apr 2024 14:44:23 GMT"
},
{
"version": "v3",
"created": "Wed, 24 Apr 2024 14:26:52 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhemchuzhnikov",
"Dmitrii",
""
],
[
"Grudinin",
"Sergei",
""
]
] | TITLE: ILPO-NET: Network for the invariant recognition of arbitrary volumetric
patterns in 3D
ABSTRACT: Effective recognition of spatial patterns and learning their hierarchy is
crucial in modern spatial data analysis. Volumetric data applications seek
techniques ensuring invariance not only to shifts but also to pattern
rotations. While traditional methods can readily achieve translational
invariance, rotational invariance poses multiple challenges and remains an
active area of research. Here, we present ILPO-Net (Invariant to Local Patterns
Orientation Network), a novel approach that handles arbitrarily shaped patterns
with the convolutional operation inherently invariant to local spatial pattern
orientations using the Wigner matrix expansions. Our architecture seamlessly
integrates the new convolution operator and, when benchmarked on diverse
volumetric datasets such as MedMNIST and CATH, demonstrates superior
performance over the baselines with significantly reduced parameter counts - up
to 1000 times fewer in the case of MedMNIST. Beyond these demonstrations,
ILPO-Net's rotational invariance paves the way for other applications across
multiple disciplines. Our code is publicly available at
https://gricad-gitlab.univ-grenoble-alpes.fr/GruLab/ILPO/-/tree/main/ILPONet.
|
2404.18212 | Noam Rotstein | Navve Wasserman, Noam Rotstein, Roy Ganz, Ron Kimmel | Paint by Inpaint: Learning to Add Image Objects by Removing Them First | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image editing has advanced significantly with the introduction of
text-conditioned diffusion models. Despite this progress, seamlessly adding
objects to images based on textual instructions without requiring user-provided
input masks remains a challenge. We address this by leveraging the insight that
removing objects (Inpaint) is significantly simpler than its inverse process of
adding them (Paint), attributed to inpainting models that benefit from
segmentation mask guidance. Capitalizing on this realization, by implementing
an automated and extensive pipeline, we curate a filtered large-scale image
dataset containing pairs of images and their corresponding object-removed
versions. Using these pairs, we train a diffusion model to invert the
inpainting process, effectively adding objects into images. Unlike other
editing datasets, ours features natural target images instead of synthetic ones
while ensuring source-target consistency by construction. Additionally, we
utilize a large Vision-Language Model to provide detailed descriptions of the
removed objects and a Large Language Model to convert these descriptions into
diverse, natural-language instructions. Our quantitative and qualitative
results show that the trained model surpasses existing models in both object
addition and general editing tasks. Visit our project page for the released
dataset and trained models at https://rotsteinnoam.github.io/Paint-by-Inpaint.
| [
{
"version": "v1",
"created": "Sun, 28 Apr 2024 15:07:53 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 13:48:18 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 06:59:54 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wasserman",
"Navve",
""
],
[
"Rotstein",
"Noam",
""
],
[
"Ganz",
"Roy",
""
],
[
"Kimmel",
"Ron",
""
]
] | TITLE: Paint by Inpaint: Learning to Add Image Objects by Removing Them First
ABSTRACT: Image editing has advanced significantly with the introduction of
text-conditioned diffusion models. Despite this progress, seamlessly adding
objects to images based on textual instructions without requiring user-provided
input masks remains a challenge. We address this by leveraging the insight that
removing objects (Inpaint) is significantly simpler than its inverse process of
adding them (Paint), attributed to inpainting models that benefit from
segmentation mask guidance. Capitalizing on this realization, by implementing
an automated and extensive pipeline, we curate a filtered large-scale image
dataset containing pairs of images and their corresponding object-removed
versions. Using these pairs, we train a diffusion model to invert the
inpainting process, effectively adding objects into images. Unlike other
editing datasets, ours features natural target images instead of synthetic ones
while ensuring source-target consistency by construction. Additionally, we
utilize a large Vision-Language Model to provide detailed descriptions of the
removed objects and a Large Language Model to convert these descriptions into
diverse, natural-language instructions. Our quantitative and qualitative
results show that the trained model surpasses existing models in both object
addition and general editing tasks. Visit our project page for the released
dataset and trained models at https://rotsteinnoam.github.io/Paint-by-Inpaint.
|
2405.09682 | Yachan Guo | Yachan Guo, Yi Xiao, Danna Xue, Jose Luis Gomez Zurita, Antonio M.
Lopez | UDA4Inst: Unsupervised Domain Adaptation for Instance Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instance segmentation is crucial for autonomous driving but is hindered by
the lack of annotated real-world data due to expensive labeling costs.
Unsupervised Domain Adaptation (UDA) offers a solution by transferring
knowledge from labeled synthetic data to unlabeled real-world data. While UDA
methods for synthetic to real-world domains (synth-to-real) show remarkable
performance in tasks such as semantic segmentation and object detection, very
few have been proposed for instance segmentation in vision-based autonomous
driving. Moreover, existing methods rely on suboptimal baselines, which
severely limits performance. We introduce \textbf{UDA4Inst}, a powerful
framework for synth-to-real UDA in instance segmentation. Our framework
enhances instance segmentation through \textit{Semantic Category Training} and
\textit{Bidirectional Mixing Training}. With the Semantic Category Training
method, semantically related classes are grouped and trained separately,
enabling the generation of higher-quality pseudo-labels and improved
segmentation performance. We further propose a bidirectional cross-domain data
mixing strategy that combines instance-wise and patch-wise mixing techniques to
effectively utilize data from both source and target domains, producing
realistic composite images that improve the model's generalization performance.
Extensive experiments demonstrate the effectiveness of our methods. Our
approach establishes a new state-of-the-art on the SYNTHIA->Cityscapes
benchmark with mAP 31.3. Notably, we are the first to report results on
multiple novel synth-to-real instance segmentation datasets, using UrbanSyn and
Synscapes as source domains while Cityscapes and KITTI360 serve as target
domains. Our code will be released soon.
| [
{
"version": "v1",
"created": "Wed, 15 May 2024 19:53:52 GMT"
},
{
"version": "v2",
"created": "Wed, 22 May 2024 16:37:01 GMT"
},
{
"version": "v3",
"created": "Fri, 5 Jul 2024 10:53:07 GMT"
},
{
"version": "v4",
"created": "Fri, 3 Jan 2025 19:25:26 GMT"
},
{
"version": "v5",
"created": "Thu, 20 Mar 2025 05:31:41 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Guo",
"Yachan",
""
],
[
"Xiao",
"Yi",
""
],
[
"Xue",
"Danna",
""
],
[
"Zurita",
"Jose Luis Gomez",
""
],
[
"Lopez",
"Antonio M.",
""
]
] | TITLE: UDA4Inst: Unsupervised Domain Adaptation for Instance Segmentation
ABSTRACT: Instance segmentation is crucial for autonomous driving but is hindered by
the lack of annotated real-world data due to expensive labeling costs.
Unsupervised Domain Adaptation (UDA) offers a solution by transferring
knowledge from labeled synthetic data to unlabeled real-world data. While UDA
methods for synthetic to real-world domains (synth-to-real) show remarkable
performance in tasks such as semantic segmentation and object detection, very
few have been proposed for instance segmentation in vision-based autonomous
driving. Moreover, existing methods rely on suboptimal baselines, which
severely limits performance. We introduce \textbf{UDA4Inst}, a powerful
framework for synth-to-real UDA in instance segmentation. Our framework
enhances instance segmentation through \textit{Semantic Category Training} and
\textit{Bidirectional Mixing Training}. With the Semantic Category Training
method, semantically related classes are grouped and trained separately,
enabling the generation of higher-quality pseudo-labels and improved
segmentation performance. We further propose a bidirectional cross-domain data
mixing strategy that combines instance-wise and patch-wise mixing techniques to
effectively utilize data from both source and target domains, producing
realistic composite images that improve the model's generalization performance.
Extensive experiments demonstrate the effectiveness of our methods. Our
approach establishes a new state-of-the-art on the SYNTHIA->Cityscapes
benchmark with mAP 31.3. Notably, we are the first to report results on
multiple novel synth-to-real instance segmentation datasets, using UrbanSyn and
Synscapes as source domains while Cityscapes and KITTI360 serve as target
domains. Our code will be released soon.
|
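A hedged sketch of the instance-wise cross-domain mixing idea described in the UDA4Inst abstract above (the bidirectional and patch-wise variants are not reproduced): labelled instances are cut from a source-domain image and pasted onto a target-domain image, carrying their masks along as labels for the mixed image.

```python
# Minimal instance-wise mixing sketch (not the authors' implementation).
import numpy as np

def instance_mix(src_img, src_masks, tgt_img):
    """Paste every source instance onto the target image.

    src_img  : (H, W, 3) source-domain image
    src_masks: list of (H, W) boolean instance masks from the source labels
    tgt_img  : (H, W, 3) target-domain image
    Returns the mixed image and the pasted instance masks.
    """
    mixed = tgt_img.copy()
    pasted = []
    for m in src_masks:
        mixed[m] = src_img[m]          # copy source pixels of this instance
        pasted.append(m.copy())        # the mask is the label in the mixed image
    return mixed, pasted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.integers(0, 255, (128, 128, 3), dtype=np.uint8)   # e.g. synthetic frame
    tgt = rng.integers(0, 255, (128, 128, 3), dtype=np.uint8)   # e.g. real frame
    mask = np.zeros((128, 128), dtype=bool)
    mask[30:60, 40:90] = True                                   # one toy instance
    mixed, labels = instance_mix(src, [mask], tgt)
    print(mixed.shape, len(labels))
```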
2406.03146 | Erik Landolsi | Erik Landolsi, Fredrik Kahl | Tiny models from tiny data: Textual and null-text inversion for few-shot
distillation | 24 pages (13 main pages + references and appendix) | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Few-shot learning deals with problems such as image classification using very
few training examples. Recent vision foundation models show excellent few-shot
transfer abilities, but are large and slow at inference. Using knowledge
distillation, the capabilities of high-performing but slow models can be
transferred to tiny, efficient models. However, common distillation methods
require a large set of unlabeled data, which is not available in the few-shot
setting. To overcome this lack of data, there has been a recent interest in
using synthetic data. We expand on this line of research by presenting a novel
diffusion model inversion technique (TINT) combining the diversity of textual
inversion with the specificity of null-text inversion. Using this method in a
few-shot distillation pipeline leads to state-of-the-art accuracy among small
student models on popular benchmarks, while being significantly faster than
prior work. Popular few-shot benchmarks involve evaluation over a large number
of episodes, which is computationally cumbersome for methods involving
synthetic data generation. We also present a theoretical analysis on how the
accuracy estimator variance depends on the number of episodes and query
examples, and use these results to lower the computational effort required for
method evaluation. Finally, to further motivate the use of generative models in
few-shot distillation, we demonstrate that our method outperforms training on
real data mined from the dataset used in the original diffusion model training.
Source code is available at https://github.com/pixwse/tiny2.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2024 11:01:42 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 12:04:41 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Landolsi",
"Erik",
""
],
[
"Kahl",
"Fredrik",
""
]
] | TITLE: Tiny models from tiny data: Textual and null-text inversion for few-shot
distillation
ABSTRACT: Few-shot learning deals with problems such as image classification using very
few training examples. Recent vision foundation models show excellent few-shot
transfer abilities, but are large and slow at inference. Using knowledge
distillation, the capabilities of high-performing but slow models can be
transferred to tiny, efficient models. However, common distillation methods
require a large set of unlabeled data, which is not available in the few-shot
setting. To overcome this lack of data, there has been a recent interest in
using synthetic data. We expand on this line of research by presenting a novel
diffusion model inversion technique (TINT) combining the diversity of textual
inversion with the specificity of null-text inversion. Using this method in a
few-shot distillation pipeline leads to state-of-the-art accuracy among small
student models on popular benchmarks, while being significantly faster than
prior work. Popular few-shot benchmarks involve evaluation over a large number
of episodes, which is computationally cumbersome for methods involving
synthetic data generation. We also present a theoretical analysis on how the
accuracy estimator variance depends on the number of episodes and query
examples, and use these results to lower the computational effort required for
method evaluation. Finally, to further motivate the use of generative models in
few-shot distillation, we demonstrate that our method outperforms training on
real data mined from the dataset used in the original diffusion model training.
Source code is available at https://github.com/pixwse/tiny2.
|
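The abstract above mentions an analysis of how the accuracy-estimator variance depends on the number of episodes and query examples. The toy simulation below (not the paper's derivation) illustrates the effect empirically: the standard deviation of the benchmark estimate shrinks as either quantity grows.

```python
# Self-contained illustration: per-episode accuracy is modelled as a binomial
# proportion around an episode-specific "true" accuracy, and the benchmark
# estimate is the mean over episodes.
import numpy as np

def simulate_estimator_std(n_episodes, n_query, n_repeats=2000, seed=0):
    rng = np.random.default_rng(seed)
    true_acc = rng.uniform(0.6, 0.9, size=(n_repeats, n_episodes))  # episode difficulty varies
    correct = rng.binomial(n_query, true_acc)        # correct queries per episode
    episode_acc = correct / n_query
    benchmark_estimate = episode_acc.mean(axis=1)    # mean accuracy over episodes
    return benchmark_estimate.std()

for n_episodes in (100, 600):
    for n_query in (15, 75):
        std = simulate_estimator_std(n_episodes, n_query)
        print(f"episodes={n_episodes:4d} queries={n_query:3d} -> std~{std:.4f}")
```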
2406.11624 | \"Omer \c{S}ahin Ta\c{s} | Omer Sahin Tas and Royden Wagner | Words in Motion: Extracting Interpretable Control Vectors for Motion
Transformers | ICLR 2025 camera-ready. Our implementation is available at
github.com/kit-mrt/future-motion | null | null | null | cs.LG cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Transformer-based models generate hidden states that are difficult to
interpret. In this work, we analyze hidden states and modify them at inference,
with a focus on motion forecasting. We use linear probing to analyze whether
interpretable features are embedded in hidden states. Our experiments reveal
high probing accuracy, indicating latent space regularities with functionally
important directions. Building on this, we use the directions between hidden
states with opposing features to fit control vectors. At inference, we add our
control vectors to hidden states and evaluate their impact on predictions.
Remarkably, such modifications preserve the feasibility of predictions. We
further refine our control vectors using sparse autoencoders (SAEs). This leads
to more linear changes in predictions when scaling control vectors. Our
approach enables mechanistic interpretation as well as zero-shot generalization
to unseen dataset characteristics with negligible computational overhead.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2024 15:07:55 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Oct 2024 22:39:55 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Dec 2024 11:47:49 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Mar 2025 12:06:17 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Tas",
"Omer Sahin",
""
],
[
"Wagner",
"Royden",
""
]
] | TITLE: Words in Motion: Extracting Interpretable Control Vectors for Motion
Transformers
ABSTRACT: Transformer-based models generate hidden states that are difficult to
interpret. In this work, we analyze hidden states and modify them at inference,
with a focus on motion forecasting. We use linear probing to analyze whether
interpretable features are embedded in hidden states. Our experiments reveal
high probing accuracy, indicating latent space regularities with functionally
important directions. Building on this, we use the directions between hidden
states with opposing features to fit control vectors. At inference, we add our
control vectors to hidden states and evaluate their impact on predictions.
Remarkably, such modifications preserve the feasibility of predictions. We
further refine our control vectors using sparse autoencoders (SAEs). This leads
to more linear changes in predictions when scaling control vectors. Our
approach enables mechanistic interpretation as well as zero-shot generalization
to unseen dataset characteristics with negligible computational overhead.
|
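A minimal sketch of the control-vector idea summarized above, assuming the simplest "difference of means" construction (the released implementation at github.com/kit-mrt/future-motion may differ, e.g. in its SAE refinement): fit a direction from hidden states with opposing interpretable features and add it to hidden states at inference.

```python
import torch

def fit_control_vector(h_pos: torch.Tensor, h_neg: torch.Tensor) -> torch.Tensor:
    """h_pos, h_neg: (N, D) hidden states for the two opposing feature groups."""
    direction = h_pos.mean(dim=0) - h_neg.mean(dim=0)
    return direction / direction.norm()              # unit-norm control direction

def apply_control(hidden: torch.Tensor, v: torch.Tensor, scale: float) -> torch.Tensor:
    """Shift hidden states along the control direction at inference time."""
    return hidden + scale * v

torch.manual_seed(0)
h_fast = torch.randn(256, 64) + 1.0                  # toy "high speed" hidden states
h_slow = torch.randn(256, 64) - 1.0                  # toy "low speed" hidden states
v = fit_control_vector(h_fast, h_slow)
steered = apply_control(torch.randn(8, 64), v, scale=3.0)
print(v.shape, steered.shape)
```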
2406.12179 | Roman Beliy | Roman Beliy, Navve Wasserman, Amit Zalcher, Michal Irani | The Wisdom of a Crowd of Brains: A Universal Brain Encoder | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Image-to-fMRI encoding is important for both neuroscience research and
practical applications. However, such "Brain-Encoders" have typically been
trained per-subject and per fMRI-dataset, and are thus restricted to very limited
training data. In this paper we propose a Universal Brain-Encoder, which can be
trained jointly on data from many different subjects/datasets/machines. What
makes this possible is our new voxel-centric Encoder architecture, which learns
a unique "voxel-embedding" per brain-voxel. Our Encoder trains to predict the
response of each brain-voxel on every image, by directly computing the
cross-attention between the brain-voxel embedding and multi-level deep image
features. This voxel-centric architecture allows the functional role of each
brain-voxel to naturally emerge from the voxel-image cross-attention. We show
the power of this approach to (i) combine data from multiple different subjects
(a "Crowd of Brains") to improve each individual brain-encoding, (ii) quick &
effective Transfer-Learning across subjects, datasets, and machines (e.g.,
3-Tesla, 7-Tesla), with few training examples, and (iii) use the learned
voxel-embeddings as a powerful tool to explore brain functionality (e.g., what
is encoded where in the brain).
| [
{
"version": "v1",
"created": "Tue, 18 Jun 2024 01:17:07 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 23:24:48 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Beliy",
"Roman",
""
],
[
"Wasserman",
"Navve",
""
],
[
"Zalcher",
"Amit",
""
],
[
"Irani",
"Michal",
""
]
] | TITLE: The Wisdom of a Crowd of Brains: A Universal Brain Encoder
ABSTRACT: Image-to-fMRI encoding is important for both neuroscience research and
practical applications. However, such "Brain-Encoders" have typically been
trained per-subject and per fMRI-dataset, and are thus restricted to very limited
training data. In this paper we propose a Universal Brain-Encoder, which can be
trained jointly on data from many different subjects/datasets/machines. What
makes this possible is our new voxel-centric Encoder architecture, which learns
a unique "voxel-embedding" per brain-voxel. Our Encoder trains to predict the
response of each brain-voxel on every image, by directly computing the
cross-attention between the brain-voxel embedding and multi-level deep image
features. This voxel-centric architecture allows the functional role of each
brain-voxel to naturally emerge from the voxel-image cross-attention. We show
the power of this approach to (i) combine data from multiple different subjects
(a "Crowd of Brains") to improve each individual brain-encoding, (ii) quick &
effective Transfer-Learning across subjects, datasets, and machines (e.g.,
3-Tesla, 7-Tesla), with few training examples, and (iii) use the learned
voxel-embeddings as a powerful tool to explore brain functionality (e.g., what
is encoded where in the brain).
|
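A schematic version of the voxel-centric architecture described above: each brain voxel has a learned embedding that cross-attends over deep image features to predict that voxel's response. Dimensions, the image feature extractor, and the attention configuration are illustrative assumptions rather than the paper's choices.

```python
import torch
import torch.nn as nn

class VoxelCrossAttentionEncoder(nn.Module):
    def __init__(self, n_voxels, d_model=128, n_heads=4):
        super().__init__()
        self.voxel_emb = nn.Embedding(n_voxels, d_model)   # one query per voxel
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.readout = nn.Linear(d_model, 1)               # scalar response per voxel

    def forward(self, img_feats):
        # img_feats: (B, n_img_tokens, d_model) multi-level image features
        B = img_feats.shape[0]
        q = self.voxel_emb.weight.unsqueeze(0).expand(B, -1, -1)   # (B, V, d)
        attended, _ = self.attn(q, img_feats, img_feats)
        return self.readout(attended).squeeze(-1)          # (B, V) predicted responses

model = VoxelCrossAttentionEncoder(n_voxels=1000)
feats = torch.randn(2, 49, 128)                            # stand-in image features
print(model(feats).shape)                                   # torch.Size([2, 1000])
```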
2406.18992 | Lijie Hu | Lijie Hu, Tianhao Huang, Huanyi Xie, Xilin Gong, Chenyang Ren, Zhengyu
Hu, Lu Yu, Ping Ma, and Di Wang | Semi-supervised Concept Bottleneck Models | 16 pages | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Concept Bottleneck Models (CBMs) have garnered increasing attention due to
their ability to provide concept-based explanations for black-box deep learning
models while achieving high final prediction accuracy using human-like
concepts. However, the training of current CBMs is heavily dependent on the
precision and richness of the annotated concepts in the dataset. These concept
labels are typically provided by experts, which can be costly and require
significant resources and effort. Additionally, concept saliency maps
frequently misalign with input saliency maps, causing concept predictions to
correspond to irrelevant input features - an issue related to annotation
alignment. To address these limitations, we propose a new framework called
SSCBM (Semi-supervised Concept Bottleneck Model). Our SSCBM is suitable for
practical situations where annotated data is scarce. By leveraging joint
training on both labeled and unlabeled data and aligning the unlabeled data at
the concept level, we effectively solve these issues. We proposed a strategy to
generate pseudo labels and an alignment loss. Experiments demonstrate that our
SSCBM is both effective and efficient. With only 10% labeled data, our model's
concept and task accuracy on average across four datasets are only 2.44% and
3.93% lower, respectively, compared to the best baseline in the fully
supervised learning setting.
| [
{
"version": "v1",
"created": "Thu, 27 Jun 2024 08:33:35 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 03:57:55 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 20:33:22 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hu",
"Lijie",
""
],
[
"Huang",
"Tianhao",
""
],
[
"Xie",
"Huanyi",
""
],
[
"Gong",
"Xilin",
""
],
[
"Ren",
"Chenyang",
""
],
[
"Hu",
"Zhengyu",
""
],
[
"Yu",
"Lu",
""
],
[
"Ma",
"Ping",
""
],
[
"Wang",
"Di",
""
]
] | TITLE: Semi-supervised Concept Bottleneck Models
ABSTRACT: Concept Bottleneck Models (CBMs) have garnered increasing attention due to
their ability to provide concept-based explanations for black-box deep learning
models while achieving high final prediction accuracy using human-like
concepts. However, the training of current CBMs is heavily dependent on the
precision and richness of the annotated concepts in the dataset. These concept
labels are typically provided by experts, which can be costly and require
significant resources and effort. Additionally, concept saliency maps
frequently misalign with input saliency maps, causing concept predictions to
correspond to irrelevant input features - an issue related to annotation
alignment. To address these limitations, we propose a new framework called
SSCBM (Semi-supervised Concept Bottleneck Model). Our SSCBM is suitable for
practical situations where annotated data is scarce. By leveraging joint
training on both labeled and unlabeled data and aligning the unlabeled data at
the concept level, we effectively solve these issues. We proposed a strategy to
generate pseudo labels and an alignment loss. Experiments demonstrate that our
SSCBM is both effective and efficient. With only 10% labeled data, our model's
concept and task accuracy on average across four datasets are only 2.44% and
3.93% lower, respectively, compared to the best baseline in the fully
supervised learning setting.
|
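An illustrative, heavily simplified sketch of the semi-supervised idea above: pseudo concept labels for unlabelled samples are taken from their nearest labelled neighbour in embedding space and combined with the supervised concept loss. The paper's actual pseudo-labelling and saliency-alignment losses are more involved and are not reproduced here.

```python
import torch
import torch.nn.functional as F

def pseudo_concept_labels(z_unlab, z_lab, c_lab):
    """Assign each unlabelled embedding the concept vector of its nearest
    labelled embedding (cosine similarity)."""
    sim = F.normalize(z_unlab, dim=1) @ F.normalize(z_lab, dim=1).T   # (U, L)
    nearest = sim.argmax(dim=1)
    return c_lab[nearest]                                             # (U, n_concepts)

def semi_supervised_concept_loss(c_pred_lab, c_lab, c_pred_unlab, c_pseudo, w=0.5):
    sup = F.binary_cross_entropy_with_logits(c_pred_lab, c_lab)
    unsup = F.binary_cross_entropy_with_logits(c_pred_unlab, c_pseudo)
    return sup + w * unsup

torch.manual_seed(0)
z_lab, z_unlab = torch.randn(32, 16), torch.randn(128, 16)
c_lab = (torch.rand(32, 10) > 0.5).float()          # 10 binary concepts
c_pseudo = pseudo_concept_labels(z_unlab, z_lab, c_lab)
loss = semi_supervised_concept_loss(torch.randn(32, 10), c_lab,
                                    torch.randn(128, 10), c_pseudo)
print(c_pseudo.shape, float(loss))
```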
2407.18908 | Boyi Li | Boyi Li and Ligeng Zhu and Ran Tian and Shuhan Tan and Yuxiao Chen and
Yao Lu and Yin Cui and Sushant Veer and Max Ehrlich and Jonah Philion and
Xinshuo Weng and Fuzhao Xue and Linxi Fan and Yuke Zhu and Jan Kautz and
Andrew Tao and Ming-Yu Liu and Sanja Fidler and Boris Ivanovic and Trevor
Darrell and Jitendra Malik and Song Han and Marco Pavone | Wolf: Dense Video Captioning with a World Summarization Framework | null | null | null | null | cs.LG cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Wolf, a WOrLd summarization Framework for accurate video
captioning. Wolf is an automated captioning framework that adopts a
mixture-of-experts approach, leveraging complementary strengths of Vision
Language Models (VLMs). By utilizing both image and video models, our framework
captures different levels of information and summarizes them efficiently. Our
approach can be applied to enhance video understanding, auto-labeling, and
captioning. To evaluate caption quality, we introduce CapScore, an LLM-based
metric to assess the similarity and quality of generated captions compared to
the ground truth captions. We further build four human-annotated datasets in
three domains: autonomous driving, general scenes, and robotics, to facilitate
comprehensive comparisons. We show that Wolf achieves superior captioning
performance compared to state-of-the-art approaches from the research community
(VILA1.5, CogAgent) and commercial solutions (Gemini-Pro-1.5, GPT-4V). For
instance, in comparison with GPT-4V, Wolf improves CapScore both quality-wise
by 55.6% and similarity-wise by 77.4% on challenging driving videos. Finally,
we establish a benchmark for video captioning and introduce a leaderboard,
aiming to accelerate advancements in video understanding, captioning, and data
alignment. Webpage: https://wolfv0.github.io/.
| [
{
"version": "v1",
"created": "Fri, 26 Jul 2024 17:59:09 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 17:56:05 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Li",
"Boyi",
""
],
[
"Zhu",
"Ligeng",
""
],
[
"Tian",
"Ran",
""
],
[
"Tan",
"Shuhan",
""
],
[
"Chen",
"Yuxiao",
""
],
[
"Lu",
"Yao",
""
],
[
"Cui",
"Yin",
""
],
[
"Veer",
"Sushant",
""
],
[
"Ehrlich",
"Max",
""
],
[
"Philion",
"Jonah",
""
],
[
"Weng",
"Xinshuo",
""
],
[
"Xue",
"Fuzhao",
""
],
[
"Fan",
"Linxi",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Kautz",
"Jan",
""
],
[
"Tao",
"Andrew",
""
],
[
"Liu",
"Ming-Yu",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Ivanovic",
"Boris",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Malik",
"Jitendra",
""
],
[
"Han",
"Song",
""
],
[
"Pavone",
"Marco",
""
]
] | TITLE: Wolf: Dense Video Captioning with a World Summarization Framework
ABSTRACT: We propose Wolf, a WOrLd summarization Framework for accurate video
captioning. Wolf is an automated captioning framework that adopts a
mixture-of-experts approach, leveraging complementary strengths of Vision
Language Models (VLMs). By utilizing both image and video models, our framework
captures different levels of information and summarizes them efficiently. Our
approach can be applied to enhance video understanding, auto-labeling, and
captioning. To evaluate caption quality, we introduce CapScore, an LLM-based
metric to assess the similarity and quality of generated captions compared to
the ground truth captions. We further build four human-annotated datasets in
three domains: autonomous driving, general scenes, and robotics, to facilitate
comprehensive comparisons. We show that Wolf achieves superior captioning
performance compared to state-of-the-art approaches from the research community
(VILA1.5, CogAgent) and commercial solutions (Gemini-Pro-1.5, GPT-4V). For
instance, in comparison with GPT-4V, Wolf improves CapScore both quality-wise
by 55.6% and similarity-wise by 77.4% on challenging driving videos. Finally,
we establish a benchmark for video captioning and introduce a leaderboard,
aiming to accelerate advancements in video understanding, captioning, and data
alignment. Webpage: https://wolfv0.github.io/.
|
2408.05421 | Ahmed Abdelkawy | Ahmed Abdelkawy, Asem Ali, and Aly Farag | EPAM-Net: An Efficient Pose-driven Attention-guided Multimodal Network
for Video Action Recognition | null | Neurocomputing, Volume 633, 7 June 2025, 129781 | 10.1016/j.neucom.2025.129781 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Existing multimodal-based human action recognition approaches are
computationally intensive, limiting their deployment in real-time applications.
In this work, we present a novel and efficient pose-driven attention-guided
multimodal network (EPAM-Net) for action recognition in videos. Specifically,
we propose eXpand temporal Shift (X-ShiftNet) convolutional architectures for
RGB and pose streams to capture spatio-temporal features from RGB videos and
their skeleton sequences. The X-ShiftNet tackles the high computational cost of
the 3D CNNs by integrating the Temporal Shift Module (TSM) into an efficient 2D
CNN, enabling efficient spatiotemporal learning. Then skeleton features are
utilized to guide the visual network stream, focusing on keyframes and their
salient spatial regions using the proposed spatial-temporal attention block.
Finally, the predictions of the two streams are fused for final classification.
The experimental results show that our method, with a significant reduction in
floating-point operations (FLOPs), outperforms and competes with the
state-of-the-art methods on NTU RGB-D 60, NTU RGB-D 120, PKU-MMD, and Toyota
SmartHome datasets. The proposed EPAM-Net provides up to a 72.8x reduction in
FLOPs and up to a 48.6x reduction in the number of network parameters. The code
will be available at
https://github.com/ahmed-nady/Multimodal-Action-Recognition.
| [
{
"version": "v1",
"created": "Sat, 10 Aug 2024 03:15:24 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 15:21:00 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Abdelkawy",
"Ahmed",
""
],
[
"Ali",
"Asem",
""
],
[
"Farag",
"Aly",
""
]
] | TITLE: EPAM-Net: An Efficient Pose-driven Attention-guided Multimodal Network
for Video Action Recognition
ABSTRACT: Existing multimodal-based human action recognition approaches are
computationally intensive, limiting their deployment in real-time applications.
In this work, we present a novel and efficient pose-driven attention-guided
multimodal network (EPAM-Net) for action recognition in videos. Specifically,
we propose eXpand temporal Shift (X-ShiftNet) convolutional architectures for
RGB and pose streams to capture spatio-temporal features from RGB videos and
their skeleton sequences. The X-ShiftNet tackles the high computational cost of
the 3D CNNs by integrating the Temporal Shift Module (TSM) into an efficient 2D
CNN, enabling efficient spatiotemporal learning. Then skeleton features are
utilized to guide the visual network stream, focusing on keyframes and their
salient spatial regions using the proposed spatial-temporal attention block.
Finally, the predictions of the two streams are fused for final classification.
The experimental results show that our method, with a significant reduction in
floating-point operations (FLOPs), outperforms and competes with the
state-of-the-art methods on NTU RGB-D 60, NTU RGB-D 120, PKU-MMD, and Toyota
SmartHome datasets. The proposed EPAM-Net provides up to a 72.8x reduction in
FLOPs and up to a 48.6x reduction in the number of network parameters. The code
will be available at
https://github.com/ahmed-nady/Multimodal-Action-Recognition.
|
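The Temporal Shift Module referenced above, in its standard minimal form (the X-ShiftNet wiring and the attention blocks of EPAM-Net are not shown): a fraction of channels is shifted one step forward and another fraction one step backward along the temporal axis, giving a 2D CNN access to neighbouring frames at no extra FLOP cost.

```python
import torch

def temporal_shift(x: torch.Tensor, fold_div: int = 8) -> torch.Tensor:
    """x: (N, T, C, H, W) clip features; returns the temporally shifted tensor."""
    n, t, c, h, w = x.shape
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                    # shift left (future -> current)
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]    # shift right (past -> current)
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]               # remaining channels unchanged
    return out

clip = torch.randn(2, 8, 64, 14, 14)                        # (batch, frames, channels, H, W)
print(temporal_shift(clip).shape)                            # torch.Size([2, 8, 64, 14, 14])
```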
2408.07726 | Santhanakrishnan Narayanan | Nikita Makarov, Santhanakrishnan Narayanan, Constantinos Antoniou | Development of a graph neural network surrogate for travel demand
modelling | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | As urban environments grow, the modelling of transportation systems becomes
increasingly complex. This paper advances the field of travel demand modelling
by introducing advanced Graph Neural Network (GNN) architectures as surrogate
models, addressing key limitations of previous approaches. Building on prior
work with Graph Convolutional Networks (GCNs), we introduce GATv3, a new Graph
Attention Network (GAT) variant that mitigates over-smoothing through residual
connections, enabling deeper and more expressive architectures. Additionally,
we propose a fine-grained classification framework that improves predictive
stability while achieving numerical precision comparable to regression,
offering a more interpretable and efficient alternative. To enhance model
performance, we develop a synthetic data generation strategy, which expands the
augmented training dataset without overfitting. Our experiments demonstrate
that GATv3 significantly improves classification performance, while the GCN
model shows unexpected dominance in fine-grained classification when
supplemented with additional training data. The results highlight the
advantages of fine-grained classification over regression for travel demand
modelling tasks and reveal new challenges in extending GAT-based architectures
to complex transport scenarios. Notably, GATv3 appears well-suited for
classification-based transportation applications, such as section control and
congestion warning systems, which require a higher degree of differentiation
among neighboring links. These findings contribute to refining GNN-based
surrogates, offering new possibilities for applying GATv3 and fine-grained
classification in broader transportation challenges.
| [
{
"version": "v1",
"created": "Wed, 14 Aug 2024 14:18:47 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 10:47:07 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Makarov",
"Nikita",
""
],
[
"Narayanan",
"Santhanakrishnan",
""
],
[
"Antoniou",
"Constantinos",
""
]
] | TITLE: Development of a graph neural network surrogate for travel demand
modelling
ABSTRACT: As urban environments grow, the modelling of transportation systems becomes
increasingly complex. This paper advances the field of travel demand modelling
by introducing advanced Graph Neural Network (GNN) architectures as surrogate
models, addressing key limitations of previous approaches. Building on prior
work with Graph Convolutional Networks (GCNs), we introduce GATv3, a new Graph
Attention Network (GAT) variant that mitigates over-smoothing through residual
connections, enabling deeper and more expressive architectures. Additionally,
we propose a fine-grained classification framework that improves predictive
stability while achieving numerical precision comparable to regression,
offering a more interpretable and efficient alternative. To enhance model
performance, we develop a synthetic data generation strategy, which expands the
augmented training dataset without overfitting. Our experiments demonstrate
that GATv3 significantly improves classification performance, while the GCN
model shows unexpected dominance in fine-grained classification when
supplemented with additional training data. The results highlight the
advantages of fine-grained classification over regression for travel demand
modelling tasks and reveal new challenges in extending GAT-based architectures
to complex transport scenarios. Notably, GATv3 appears well-suited for
classification-based transportation applications, such as section control and
congestion warning systems, which require a higher degree of differentiation
among neighboring links. These findings contribute to refining GNN-based
surrogates, offering new possibilities for applying GATv3 and fine-grained
classification in broader transportation challenges.
|
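A hedged sketch of a GAT layer with a residual connection, the mechanism the abstract above credits with mitigating over-smoothing. It is built on torch_geometric's GATConv; the actual GATv3 layer may differ in normalisation and head handling.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class ResidualGATLayer(torch.nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        # concat=False averages the heads so the output keeps dimension `dim`
        self.conv = GATConv(dim, dim, heads=heads, concat=False)

    def forward(self, x, edge_index):
        # residual connection: helps deeper stacks resist over-smoothing
        return x + F.elu(self.conv(x, edge_index))

x = torch.randn(6, 32)                                   # 6 road-network nodes
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 2, 3, 4, 5, 0]])          # toy ring graph
layer = ResidualGATLayer(32)
print(layer(x, edge_index).shape)                        # torch.Size([6, 32])
```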
2408.12629 | Zhenyu Lu | Zhenyu Lu, Hao Tang | Continual Gesture Learning without Data via Synthetic Feature Sampling | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-Free Class Incremental Learning (DFCIL) aims to enable models to
continuously learn new classes while retaining knowledge of old classes, even
when the training data for old classes is unavailable. Although DFCIL has been
explored primarily with image datasets, this study investigates it for
skeleton-based gesture classification due to its
significant real-world implications, particularly considering the growing
prevalence of VR/AR headsets where gestures serve as the primary means of
control and interaction. In this work, we made an intriguing observation:
skeleton models trained with base classes (even very limited) demonstrate strong
generalization capabilities to unseen classes without requiring additional
training. Building on this insight, we developed Synthetic Feature Replay (SFR)
that can sample synthetic features from class prototypes to replay for old
classes and augment for new classes (under a few-shot setting). Our proposed
method showcases significant advancements over the state-of-the-art, achieving
up to 15% enhancements in mean accuracy across all steps and largely mitigating
the accuracy imbalance between base classes and new classes.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 18:44:15 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 20:54:43 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Lu",
"Zhenyu",
""
],
[
"Tang",
"Hao",
""
]
] | TITLE: Continual Gesture Learning without Data via Synthetic Feature Sampling
ABSTRACT: Data-Free Class Incremental Learning (DFCIL) aims to enable models to
continuously learn new classes while retaining knowledge of old classes, even
when the training data for old classes is unavailable. Although DFCIL has been
explored primarily with image datasets, this study investigates it for
skeleton-based gesture classification due to its
significant real-world implications, particularly considering the growing
prevalence of VR/AR headsets where gestures serve as the primary means of
control and interaction. In this work, we made an intriguing observation:
skeleton models trained with base classes (even very limited) demonstrate strong
generalization capabilities to unseen classes without requiring additional
training. Building on this insight, we developed Synthetic Feature Replay (SFR)
that can sample synthetic features from class prototypes to replay for old
classes and augment for new classes (under a few-shot setting). Our proposed
method showcases significant advancements over the state-of-the-art, achieving
up to 15% enhancements in mean accuracy across all steps and largely mitigating
the accuracy imbalance between base classes and new classes.
|
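A minimal sketch of synthetic feature replay as described above: store a prototype (mean) and per-dimension spread for each old class, then sample synthetic features around the prototype to replay while learning new classes. The Gaussian sampling distribution is an assumption, not taken from the paper.

```python
import numpy as np

def class_statistics(features, labels):
    stats = {}
    for c in np.unique(labels):
        f = features[labels == c]
        stats[c] = (f.mean(axis=0), f.std(axis=0) + 1e-6)
    return stats

def sample_synthetic(stats, n_per_class, rng):
    feats, labels = [], []
    for c, (mu, sigma) in stats.items():
        feats.append(rng.normal(mu, sigma, size=(n_per_class, mu.shape[0])))
        labels.append(np.full(n_per_class, c))
    return np.concatenate(feats), np.concatenate(labels)

rng = np.random.default_rng(0)
old_feats = rng.normal(size=(200, 64))                  # features of old-class samples
old_labels = rng.integers(0, 5, size=200)               # 5 old gesture classes
stats = class_statistics(old_feats, old_labels)
replay_x, replay_y = sample_synthetic(stats, n_per_class=20, rng=rng)
print(replay_x.shape, replay_y.shape)                   # (100, 64) (100,)
```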
2408.13226 | Jingyu Liu | Jingyu Liu, Minquan Wang, Ye Ma, Bo Wang, Aozhu Chen, Quan Chen, Peng
Jiang, Xirong Li | D&M: Enriching E-commerce Videos with Sound Effects by Key Moment
Detection and SFX Matching | Accepted by AAAI 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Videos showcasing specific products are increasingly important for
E-commerce. Key moments naturally exist as the first appearance of a specific
product, presentation of its distinctive features, the presence of a buying
link, etc. Adding proper sound effects (SFX) to these key moments, or video
decoration with SFX (VDSFX), is crucial for enhancing user engagement.
Previous studies on adding SFX to videos perform video-to-SFX matching at a
holistic level, lacking the ability to add SFX to a specific moment.
Meanwhile, previous studies on video highlight detection or video
moment retrieval consider only moment localization, leaving moment to SFX
matching untouched. By contrast, we propose in this paper D&M, a unified method
that accomplishes key moment detection and moment to SFX matching
simultaneously. Moreover, for the new VDSFX task we build a large-scale dataset
SFX-Moment from an E-commerce platform. For a fair comparison, we build
competitive baselines by extending a number of current video moment detection
methods to the new task. Extensive experiments on SFX-Moment show the superior
performance of the proposed method over the baselines.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2024 17:01:35 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Feb 2025 16:46:03 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 03:05:15 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Liu",
"Jingyu",
""
],
[
"Wang",
"Minquan",
""
],
[
"Ma",
"Ye",
""
],
[
"Wang",
"Bo",
""
],
[
"Chen",
"Aozhu",
""
],
[
"Chen",
"Quan",
""
],
[
"Jiang",
"Peng",
""
],
[
"Li",
"Xirong",
""
]
] | TITLE: D&M: Enriching E-commerce Videos with Sound Effects by Key Moment
Detection and SFX Matching
ABSTRACT: Videos showcasing specific products are increasingly important for
E-commerce. Key moments naturally exist as the first appearance of a specific
product, presentation of its distinctive features, the presence of a buying
link, etc. Adding proper sound effects (SFX) to these key moments, or video
decoration with SFX (VDSFX), is crucial for enhancing user engagement.
Previous studies on adding SFX to videos perform video-to-SFX matching at a
holistic level, lacking the ability to add SFX to a specific moment.
Meanwhile, previous studies on video highlight detection or video
moment retrieval consider only moment localization, leaving moment to SFX
matching untouched. By contrast, we propose in this paper D&M, a unified method
that accomplishes key moment detection and moment to SFX matching
simultaneously. Moreover, for the new VDSFX task we build a large-scale dataset
SFX-Moment from an E-commerce platform. For a fair comparison, we build
competitive baselines by extending a number of current video moment detection
methods to the new task. Extensive experiments on SFX-Moment show the superior
performance of the proposed method over the baselines.
|
2408.14329 | Ghazal Alinezhad Noghre | Armin Danesh Pazho, Shanle Yao, Ghazal Alinezhad Noghre, Babak Rahimi
Ardabili, Vinit Katariya, Hamed Tabkhi | Towards Adaptive Human-centric Video Anomaly Detection: A Comprehensive
Framework and A New Benchmark | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human-centric Video Anomaly Detection (VAD) aims to identify human behaviors
that deviate from normal. At its core, human-centric VAD faces substantial
challenges, such as the complexity of diverse human behaviors, the rarity of
anomalies, and ethical constraints. These challenges limit access to
high-quality datasets and highlight the need for a dataset and framework
supporting continual learning. Moving towards adaptive human-centric VAD, we
introduce the HuVAD (Human-centric privacy-enhanced Video Anomaly Detection)
dataset and a novel Unsupervised Continual Anomaly Learning (UCAL) framework.
UCAL enables incremental learning, allowing models to adapt over time, bridging
traditional training and real-world deployment. HuVAD prioritizes privacy by
providing de-identified annotations and includes seven indoor/outdoor scenes,
offering over 5x more pose-annotated frames than previous datasets. Our
standard and continual benchmarks utilize a comprehensive set of metrics,
demonstrating that UCAL-enhanced models achieve superior performance in 82.14%
of cases, setting a new state-of-the-art (SOTA). The dataset can be accessed at
https://github.com/TeCSAR-UNCC/HuVAD.
| [
{
"version": "v1",
"created": "Mon, 26 Aug 2024 14:55:23 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 18:13:10 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Pazho",
"Armin Danesh",
""
],
[
"Yao",
"Shanle",
""
],
[
"Noghre",
"Ghazal Alinezhad",
""
],
[
"Ardabili",
"Babak Rahimi",
""
],
[
"Katariya",
"Vinit",
""
],
[
"Tabkhi",
"Hamed",
""
]
] | TITLE: Towards Adaptive Human-centric Video Anomaly Detection: A Comprehensive
Framework and A New Benchmark
ABSTRACT: Human-centric Video Anomaly Detection (VAD) aims to identify human behaviors
that deviate from normal. At its core, human-centric VAD faces substantial
challenges, such as the complexity of diverse human behaviors, the rarity of
anomalies, and ethical constraints. These challenges limit access to
high-quality datasets and highlight the need for a dataset and framework
supporting continual learning. Moving towards adaptive human-centric VAD, we
introduce the HuVAD (Human-centric privacy-enhanced Video Anomaly Detection)
dataset and a novel Unsupervised Continual Anomaly Learning (UCAL) framework.
UCAL enables incremental learning, allowing models to adapt over time, bridging
traditional training and real-world deployment. HuVAD prioritizes privacy by
providing de-identified annotations and includes seven indoor/outdoor scenes,
offering over 5x more pose-annotated frames than previous datasets. Our
standard and continual benchmarks utilize a comprehensive set of metrics,
demonstrating that UCAL-enhanced models achieve superior performance in 82.14%
of cases, setting a new state-of-the-art (SOTA). The dataset can be accessed at
https://github.com/TeCSAR-UNCC/HuVAD.
|
2408.16939 | Mohammadamin Banayeeanzade | Amin Banayeeanzade, Mahdi Soltanolkotabi, Mohammad Rostami | Theoretical Insights into Overparameterized Models in Multi-Task and
Replay-Based Continual Learning | TMLR camera-ready version | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Multi-task learning (MTL) is a machine learning paradigm that aims to improve
the generalization performance of a model on multiple related tasks by training
it simultaneously on those tasks. Unlike MTL, where the model has instant
access to the training data of all tasks, continual learning (CL) involves
adapting to new sequentially arriving tasks over time without forgetting the
previously acquired knowledge. Despite the wide practical adoption of CL and
MTL and extensive literature on both areas, there remains a gap in the
theoretical understanding of these methods when used with overparameterized
models such as deep neural networks. This paper studies the overparameterized
linear models as a proxy for more complex models. We develop theoretical
results describing the effect of various system parameters on the model's
performance in an MTL setup. Specifically, we study the impact of model size,
dataset size, and task similarity on the generalization error and knowledge
transfer. Additionally, we present theoretical results to characterize the
performance of replay-based CL models. Our results reveal the impact of buffer
size and model capacity on the forgetting rate in a CL setup and help shed
light on some of the state-of-the-art CL methods. Finally, through extensive
empirical evaluations, we demonstrate that our theoretical findings are also
applicable to deep neural networks, offering valuable guidance for designing
MTL and CL models in practice.
| [
{
"version": "v1",
"created": "Thu, 29 Aug 2024 23:22:40 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 18:13:46 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Banayeeanzade",
"Amin",
""
],
[
"Soltanolkotabi",
"Mahdi",
""
],
[
"Rostami",
"Mohammad",
""
]
] | TITLE: Theoretical Insights into Overparameterized Models in Multi-Task and
Replay-Based Continual Learning
ABSTRACT: Multi-task learning (MTL) is a machine learning paradigm that aims to improve
the generalization performance of a model on multiple related tasks by training
it simultaneously on those tasks. Unlike MTL, where the model has instant
access to the training data of all tasks, continual learning (CL) involves
adapting to new sequentially arriving tasks over time without forgetting the
previously acquired knowledge. Despite the wide practical adoption of CL and
MTL and extensive literature on both areas, there remains a gap in the
theoretical understanding of these methods when used with overparameterized
models such as deep neural networks. This paper studies the overparameterized
linear models as a proxy for more complex models. We develop theoretical
results describing the effect of various system parameters on the model's
performance in an MTL setup. Specifically, we study the impact of model size,
dataset size, and task similarity on the generalization error and knowledge
transfer. Additionally, we present theoretical results to characterize the
performance of replay-based CL models. Our results reveal the impact of buffer
size and model capacity on the forgetting rate in a CL setup and help shed
light on some of the state-of-the-art CL methods. Finally, through extensive
empirical evaluations, we demonstrate that our theoretical findings are also
applicable to deep neural networks, offering valuable guidance for designing
MTL and CL models in practice.
|
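For readers unfamiliar with the setting, the following is the generic overparameterized linear-regression building block that analyses of this kind typically start from (the paper's precise multi-task and replay assumptions may differ): with more parameters than samples, the least-squares problem is interpolated by the minimum-norm solution, whose generalization error is then studied as a function of model size, dataset size, and task similarity.

```latex
% Generic single-task building block (illustrative; not the paper's exact model):
\begin{align}
  y &= X w^{*} + \varepsilon, \qquad X \in \mathbb{R}^{n \times d},\; d > n,\\
  \hat{w} &= \arg\min_{w}\;\|w\|_{2} \quad \text{s.t.}\quad X w = y
           \;=\; X^{\top}\!\left(X X^{\top}\right)^{-1} y,\\
  \mathcal{E}(\hat{w}) &= \mathbb{E}_{x}\!\left[\big(x^{\top}\hat{w} - x^{\top}w^{*}\big)^{2}\right].
\end{align}
```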
2409.00101 | Wei-Bang Jiang | Wei-Bang Jiang, Yansen Wang, Bao-Liang Lu, Dongsheng Li | NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap
between Language and EEG Signals | The Thirteenth International Conference on Learning Representations | The Thirteenth International Conference on Learning
Representations, 2025 | null | null | eess.SP cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements for large-scale pre-training with neural signals such as
electroencephalogram (EEG) have shown promising results, significantly boosting
the development of brain-computer interfaces (BCIs) and healthcare. However,
these pre-trained models often require full fine-tuning on each downstream task
to achieve substantial improvements, limiting their versatility and usability,
and leading to considerable resource wastage. To tackle these challenges, we
propose NeuroLM, the first multi-task foundation model that leverages the
capabilities of Large Language Models (LLMs) by regarding EEG signals as a
foreign language, endowing the model with multi-task learning and inference
capabilities. Our approach begins with learning a text-aligned neural tokenizer
through vector-quantized temporal-frequency prediction, which encodes EEG
signals into discrete neural tokens. These EEG tokens, generated by the frozen
vector-quantized (VQ) encoder, are then fed into an LLM that learns causal EEG
information via multi-channel autoregression. Consequently, NeuroLM can
understand both EEG and language modalities. Finally, multi-task instruction
tuning adapts NeuroLM to various downstream tasks. We are the first to
demonstrate that, by specific incorporation with LLMs, NeuroLM unifies diverse
EEG tasks within a single model through instruction tuning. The largest variant
NeuroLM-XL has record-breaking 1.7B parameters for EEG signal processing, and
is pre-trained on a large-scale corpus comprising approximately 25,000 hours of
EEG data. When evaluated on six diverse downstream datasets, NeuroLM showcases the
huge potential of this multi-task learning paradigm.
| [
{
"version": "v1",
"created": "Tue, 27 Aug 2024 12:07:09 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Feb 2025 08:36:36 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 08:26:21 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Jiang",
"Wei-Bang",
""
],
[
"Wang",
"Yansen",
""
],
[
"Lu",
"Bao-Liang",
""
],
[
"Li",
"Dongsheng",
""
]
] | TITLE: NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap
between Language and EEG Signals
ABSTRACT: Recent advancements for large-scale pre-training with neural signals such as
electroencephalogram (EEG) have shown promising results, significantly boosting
the development of brain-computer interfaces (BCIs) and healthcare. However,
these pre-trained models often require full fine-tuning on each downstream task
to achieve substantial improvements, limiting their versatility and usability,
and leading to considerable resource wastage. To tackle these challenges, we
propose NeuroLM, the first multi-task foundation model that leverages the
capabilities of Large Language Models (LLMs) by regarding EEG signals as a
foreign language, endowing the model with multi-task learning and inference
capabilities. Our approach begins with learning a text-aligned neural tokenizer
through vector-quantized temporal-frequency prediction, which encodes EEG
signals into discrete neural tokens. These EEG tokens, generated by the frozen
vector-quantized (VQ) encoder, are then fed into an LLM that learns causal EEG
information via multi-channel autoregression. Consequently, NeuroLM can
understand both EEG and language modalities. Finally, multi-task instruction
tuning adapts NeuroLM to various downstream tasks. We are the first to
demonstrate that, by specific incorporation with LLMs, NeuroLM unifies diverse
EEG tasks within a single model through instruction tuning. The largest variant
NeuroLM-XL has record-breaking 1.7B parameters for EEG signal processing, and
is pre-trained on a large-scale corpus comprising approximately 25,000 hours of
EEG data. When evaluated on six diverse downstream datasets, NeuroLM showcases the
huge potential of this multi-task learning paradigm.
|
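The discrete tokenization step described above relies on vector quantization; below is a generic VQ lookup with a straight-through estimator. Codebook size, dimensions, and the temporal-frequency prediction objective are assumptions, not taken from the paper.

```python
import torch

def vector_quantize(z: torch.Tensor, codebook: torch.Tensor):
    """z: (N, D) encoder outputs, codebook: (K, D). Returns token ids and
    quantized vectors with a straight-through gradient estimator."""
    d = torch.cdist(z, codebook)                 # (N, K) pairwise distances
    ids = d.argmin(dim=1)                        # discrete token per segment
    z_q = codebook[ids]
    z_q = z + (z_q - z).detach()                 # straight-through for training
    return ids, z_q

torch.manual_seed(0)
codebook = torch.randn(512, 64)                  # 512-entry neural "vocabulary"
eeg_segments = torch.randn(32, 64)               # encoded EEG patches
tokens, quantized = vector_quantize(eeg_segments, codebook)
print(tokens[:8], quantized.shape)
```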
2409.07725 | Quanjun Li | Kaizhe Fan, Quanjun Li | GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional
Contrastive Learning | I am requesting the withdrawal of my paper due to errors identified
in the methodology and experimental results. Specifically, there are
inaccuracies in the analysis section that may lead to misleading conclusions | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph representation learning has emerged as a powerful tool for preserving
graph topology when mapping nodes to vector representations, enabling various
downstream tasks such as node classification and community detection. However,
most current graph neural network models face the challenge of requiring
extensive labeled data, which limits their practical applicability in
real-world scenarios where labeled data is scarce. To address this challenge,
researchers have explored Graph Contrastive Learning (GCL), which leverages
enhanced graph data and contrastive learning techniques. While promising,
existing GCL methods often struggle with effectively capturing both local and
global graph structures, and balancing the trade-off between node-level and
graph-level representations. In this work, we propose Graph Representation
Embedding Enhanced via Multidimensional Contrastive Learning (GRE2-MDCL). Our
model introduces a novel triple network architecture with a multi-head
attention GNN as the core. GRE2-MDCL first globally and locally augments the
input graph using SVD and LAGNN techniques. It then constructs a
multidimensional contrastive loss, incorporating cross-network, cross-view, and
neighbor contrast, to optimize the model. Extensive experiments on benchmark
datasets Cora, Citeseer, and PubMed demonstrate that GRE2-MDCL achieves
state-of-the-art performance, with average accuracies of 82.5%, 72.5%, and
81.6% respectively. Visualizations further show tighter intra-cluster
aggregation and clearer inter-cluster boundaries, highlighting the
effectiveness of our framework in improving upon baseline GCL models.
| [
{
"version": "v1",
"created": "Thu, 12 Sep 2024 03:09:05 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 02:10:52 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Fan",
"Kaizhe",
""
],
[
"Li",
"Quanjun",
""
]
] | TITLE: GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional
Contrastive Learning
ABSTRACT: Graph representation learning has emerged as a powerful tool for preserving
graph topology when mapping nodes to vector representations, enabling various
downstream tasks such as node classification and community detection. However,
most current graph neural network models face the challenge of requiring
extensive labeled data, which limits their practical applicability in
real-world scenarios where labeled data is scarce. To address this challenge,
researchers have explored Graph Contrastive Learning (GCL), which leverages
enhanced graph data and contrastive learning techniques. While promising,
existing GCL methods often struggle with effectively capturing both local and
global graph structures, and balancing the trade-off between node-level and
graph-level representations. In this work, we propose Graph Representation
Embedding Enhanced via Multidimensional Contrastive Learning (GRE2-MDCL). Our
model introduces a novel triple network architecture with a multi-head
attention GNN as the core. GRE2-MDCL first globally and locally augments the
input graph using SVD and LAGNN techniques. It then constructs a
multidimensional contrastive loss, incorporating cross-network, cross-view, and
neighbor contrast, to optimize the model. Extensive experiments on benchmark
datasets Cora, Citeseer, and PubMed demonstrate that GRE2-MDCL achieves
state-of-the-art performance, with average accuracies of 82.5%, 72.5%, and
81.6% respectively. Visualizations further show tighter intra-cluster
aggregation and clearer inter-cluster boundaries, highlighting the
effectiveness of our framework in improving upon baseline GCL models.
|
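As a concrete reference point for the contrastive objective discussed above, here is a plain NT-Xent loss between two augmented views of the same nodes; the paper's multidimensional loss additionally combines cross-network, cross-view, and neighbour terms, which are not reproduced.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of the same N nodes under two graph augmentations."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.T / tau                        # (N, N) cross-view similarities
    targets = torch.arange(z1.size(0))           # positives lie on the diagonal
    # symmetric loss over both view directions
    return 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.T, targets))

torch.manual_seed(0)
view1, view2 = torch.randn(64, 128), torch.randn(64, 128)
print(float(nt_xent(view1, view2)))
```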
2409.09849 | Ankush Dhawan | Ankush Kundan Dhawan and Camille Chungyoun and Karina Ting and Monroe
Kennedy III | Dynamic Layer Detection of Thin Materials using DenseTact Optical
Tactile Sensors | 7 pages, 9 figures, submitted to IROS 2025 | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Manipulation of thin materials is critical for many everyday tasks and
remains a significant challenge for robots. While existing research has made
strides in tasks like material smoothing and folding, many studies struggle
with common failure modes (crumpled corners/edges, incorrect grasp
configurations) that a preliminary step of layer detection can solve. We
present a novel method for classifying the number of grasped material layers
using a custom gripper equipped with DenseTact 2.0 optical tactile sensors.
After grasping a thin material, the gripper performs an anthropomorphic rubbing
motion while collecting optical flow, 6-axis wrench, and joint state data.
Using this data in a transformer-based network achieves a test accuracy of
98.21% in correctly classifying the number of grasped cloth layers, and 81.25%
accuracy in classifying layers of grasped paper, showing the effectiveness of
our dynamic rubbing method. Evaluating different inputs and model architectures
highlights the usefulness of tactile sensor information and a transformer model
for this task. A comprehensive dataset of 568 labeled trials (368 for cloth and
200 for paper) was collected and made open-source along with this paper. Our
project page is available at
https://armlabstanford.github.io/dynamic-cloth-detection.
| [
{
"version": "v1",
"created": "Sun, 15 Sep 2024 19:57:32 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 02:01:44 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Dhawan",
"Ankush Kundan",
""
],
[
"Chungyoun",
"Camille",
""
],
[
"Ting",
"Karina",
""
],
[
"Kennedy",
"Monroe",
"III"
]
] | TITLE: Dynamic Layer Detection of Thin Materials using DenseTact Optical
Tactile Sensors
ABSTRACT: Manipulation of thin materials is critical for many everyday tasks and
remains a significant challenge for robots. While existing research has made
strides in tasks like material smoothing and folding, many studies struggle
with common failure modes (crumpled corners/edges, incorrect grasp
configurations) that a preliminary step of layer detection can solve. We
present a novel method for classifying the number of grasped material layers
using a custom gripper equipped with DenseTact 2.0 optical tactile sensors.
After grasping a thin material, the gripper performs an anthropomorphic rubbing
motion while collecting optical flow, 6-axis wrench, and joint state data.
Using this data in a transformer-based network achieves a test accuracy of
98.21% in correctly classifying the number of grasped cloth layers, and 81.25%
accuracy in classifying layers of grasped paper, showing the effectiveness of
our dynamic rubbing method. Evaluating different inputs and model architectures
highlights the usefulness of tactile sensor information and a transformer model
for this task. A comprehensive dataset of 568 labeled trials (368 for cloth and
200 for paper) was collected and made open-source along with this paper. Our
project page is available at
https://armlabstanford.github.io/dynamic-cloth-detection.
|
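A schematic sequence classifier of the kind the abstract above describes, assuming concatenated per-timestep features from the optical-flow, wrench, and joint-state streams; feature sizes and hyper-parameters are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class LayerCountClassifier(nn.Module):
    def __init__(self, feat_dim=32, d_model=64, n_classes=4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)   # e.g. 0, 1, 2, or 3 grasped layers

    def forward(self, seq):                      # seq: (B, T, feat_dim)
        h = self.encoder(self.proj(seq))
        return self.head(h.mean(dim=1))          # mean-pool over the rubbing motion

model = LayerCountClassifier()
batch = torch.randn(8, 50, 32)                   # 8 trials, 50 timesteps each
print(model(batch).shape)                         # torch.Size([8, 4])
```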
2409.16502 | Ruslan Rakhimov | Gennady Sidorov, Malik Mohrat, Denis Gridusov, Ruslan Rakhimov, Sergey
Kolyubin | GSplatLoc: Grounding Keypoint Descriptors into 3D Gaussian Splatting for
Improved Visual Localization | Project website at https://gsplatloc.github.io/ | null | null | null | cs.CV cs.AI cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | Although various visual localization approaches exist, such as scene
coordinate regression and camera pose regression, these methods often struggle
with optimization complexity or limited accuracy. To address these challenges,
we explore the use of novel view synthesis techniques, particularly 3D Gaussian
Splatting (3DGS), which enables the compact encoding of both 3D geometry and
scene appearance. We propose a two-stage procedure that integrates dense and
robust keypoint descriptors from the lightweight XFeat feature extractor into
3DGS, enhancing performance in both indoor and outdoor environments. The coarse
pose estimates are directly obtained via 2D-3D correspondences between the 3DGS
representation and query image descriptors. In the second stage, the initial
pose estimate is refined by minimizing the rendering-based photometric warp
loss. Benchmarking on widely used indoor and outdoor datasets demonstrates
improvements over recent neural rendering-based localization methods, such as
NeRFMatch and PNeRFLoc.
| [
{
"version": "v1",
"created": "Tue, 24 Sep 2024 23:18:32 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 14:11:44 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 12:57:03 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Sidorov",
"Gennady",
""
],
[
"Mohrat",
"Malik",
""
],
[
"Gridusov",
"Denis",
""
],
[
"Rakhimov",
"Ruslan",
""
],
[
"Kolyubin",
"Sergey",
""
]
] | TITLE: GSplatLoc: Grounding Keypoint Descriptors into 3D Gaussian Splatting for
Improved Visual Localization
ABSTRACT: Although various visual localization approaches exist, such as scene
coordinate regression and camera pose regression, these methods often struggle
with optimization complexity or limited accuracy. To address these challenges,
we explore the use of novel view synthesis techniques, particularly 3D Gaussian
Splatting (3DGS), which enables the compact encoding of both 3D geometry and
scene appearance. We propose a two-stage procedure that integrates dense and
robust keypoint descriptors from the lightweight XFeat feature extractor into
3DGS, enhancing performance in both indoor and outdoor environments. The coarse
pose estimates are directly obtained via 2D-3D correspondences between the 3DGS
representation and query image descriptors. In the second stage, the initial
pose estimate is refined by minimizing the rendering-based photometric warp
loss. Benchmarking on widely used indoor and outdoor datasets demonstrates
improvements over recent neural rendering-based localization methods, such as
NeRFMatch and PNeRFLoc.
|
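The coarse stage above boils down to pose-from-correspondences; the sketch below uses OpenCV's PnP-RANSAC on synthetic 2D-3D matches. The descriptor matching against the 3DGS representation and the photometric refinement stage are omitted.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])      # pinhole intrinsics
dist = np.zeros(5)                                               # no lens distortion

# synthetic scene points and a ground-truth pose to recover
pts_3d = rng.uniform(-1, 1, size=(50, 3)) + np.array([0, 0, 5.0])
rvec_gt = np.array([[0.1], [0.2], [0.05]])
tvec_gt = np.array([[0.3], [-0.1], [0.5]])
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_gt, tvec_gt, K, dist)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts_3d.astype(np.float32), pts_2d.astype(np.float32), K, dist)
print(ok, rvec.ravel(), tvec.ravel())           # close to the ground-truth pose
```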
2409.17385 | Ruining Yang | Ruining Yang and Yi Xu and Yun Fu and Lili Su | SSTP: Efficient Sample Selection for Trajectory Prediction | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trajectory prediction is a core task in autonomous driving. However, training
advanced trajectory prediction models on large-scale datasets is both
time-consuming and computationally expensive. In addition, the imbalanced
distribution of driving scenarios often biases models toward data-rich cases,
limiting performance in safety-critical, data-scarce conditions. To address
these challenges, we propose the Sample Selection for Trajectory Prediction
(SSTP) framework, which constructs a compact yet balanced dataset for
trajectory prediction. SSTP consists of two main stages: (1) Extraction, in
which a pretrained trajectory prediction model computes gradient vectors for
each sample to capture their influence on parameter updates; and (2) Selection,
where a submodular function is applied to greedily choose a representative
subset that covers diverse driving scenarios. This approach significantly
reduces the dataset size and mitigates scenario imbalance, without sacrificing
prediction accuracy and even improving in high-density cases. We evaluate our
proposed SSTP on the Argoverse 1 and Argoverse 2 benchmarks using a wide range
of recent state-of-the-art models. Our experiments demonstrate that SSTP
achieves comparable performance to full-dataset training using only half the
data while delivering substantial improvements in high-density traffic scenes
and significantly reducing training time. Importantly, SSTP exhibits strong
generalization and robustness, and the selected subset is model-agnostic,
offering a broadly applicable solution.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2024 22:00:11 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 03:32:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yang",
"Ruining",
""
],
[
"Xu",
"Yi",
""
],
[
"Fu",
"Yun",
""
],
[
"Su",
"Lili",
""
]
] | TITLE: SSTP: Efficient Sample Selection for Trajectory Prediction
ABSTRACT: Trajectory prediction is a core task in autonomous driving. However, training
advanced trajectory prediction models on large-scale datasets is both
time-consuming and computationally expensive. In addition, the imbalanced
distribution of driving scenarios often biases models toward data-rich cases,
limiting performance in safety-critical, data-scarce conditions. To address
these challenges, we propose the Sample Selection for Trajectory Prediction
(SSTP) framework, which constructs a compact yet balanced dataset for
trajectory prediction. SSTP consists of two main stages: (1) Extraction, in
which a pretrained trajectory prediction model computes gradient vectors for
each sample to capture their influence on parameter updates; and (2) Selection,
where a submodular function is applied to greedily choose a representative
subset that covers diverse driving scenarios. This approach significantly
reduces the dataset size and mitigates scenario imbalance, without sacrificing
prediction accuracy and even improving in high-density cases. We evaluate our
proposed SSTP on the Argoverse 1 and Argoverse 2 benchmarks using a wide range
of recent state-of-the-art models. Our experiments demonstrate that SSTP
achieves comparable performance to full-dataset training using only half the
data while delivering substantial improvements in high-density traffic scenes
and significantly reducing training time. Importantly, SSTP exhibits strong
generalization and robustness, and the selected subset is model-agnostic,
offering a broadly applicable solution.
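As an illustration of the Selection stage, the following sketch runs a greedy facility-location selection (one common submodular objective; the paper does not necessarily use this exact function) over random stand-in gradient embeddings.

```python
import numpy as np

def greedy_facility_location(grads: np.ndarray, budget: int) -> list:
    """Greedily pick `budget` samples whose gradients best cover the whole set."""
    g = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12)
    sim = g @ g.T                                  # pairwise cosine similarity
    selected, coverage = [], np.zeros(len(grads))  # coverage: best sim to any pick
    for _ in range(budget):
        gains = np.maximum(sim, coverage).sum(axis=1) - coverage.sum()
        gains[selected] = -np.inf                  # never re-pick a sample
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, sim[best])
    return selected

rng = np.random.default_rng(0)
gradient_vectors = rng.normal(size=(1000, 64))     # hypothetical per-sample gradients
subset = greedy_facility_location(gradient_vectors, budget=100)
print(len(subset), subset[:5])
```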
|
2409.17993 | Junchen Yu | Junchen Yu, Si-Yuan Cao, Runmin Zhang, Chenghao Zhang, Zhu Yu, Shujie
Chen, Bailin Yang, Hui-liang Shen | SSHNet: Unsupervised Cross-modal Homography Estimation via Problem
Reformulation and Split Optimization | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose a novel unsupervised cross-modal homography estimation learning
framework, named Split Supervised Homography estimation Network (SSHNet).
SSHNet reformulates the unsupervised cross-modal homography estimation into two
supervised sub-problems, each addressed by its specialized network: a
homography estimation network and a modality transfer network. To realize
stable training, we introduce an effective split optimization strategy to train
each network separately within its respective sub-problem. We also formulate an
extra homography feature space supervision to enhance feature consistency,
further boosting the estimation accuracy. Moreover, we employ a simple yet
effective distillation training technique to reduce model parameters and
improve cross-domain generalization ability while maintaining comparable
performance. The training stability of SSHNet enables its cooperation with
various homography estimation architectures. Experiments reveal that the SSHNet
using IHN as homography estimation network, namely SSHNet-IHN, outperforms
previous unsupervised approaches by a significant margin. Even compared to
supervised approaches MHN and LocalTrans, SSHNet-IHN achieves 47.4% and 85.8%
mean average corner errors (MACEs) reduction on the challenging OPT-SAR
dataset.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2024 16:04:31 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Sep 2024 02:35:47 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Feb 2025 13:46:50 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Mar 2025 14:31:10 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yu",
"Junchen",
""
],
[
"Cao",
"Si-Yuan",
""
],
[
"Zhang",
"Runmin",
""
],
[
"Zhang",
"Chenghao",
""
],
[
"Yu",
"Zhu",
""
],
[
"Chen",
"Shujie",
""
],
[
"Yang",
"Bailin",
""
],
[
"Shen",
"Hui-liang",
""
]
] | TITLE: SSHNet: Unsupervised Cross-modal Homography Estimation via Problem
Reformulation and Split Optimization
ABSTRACT: We propose a novel unsupervised cross-modal homography estimation learning
framework, named Split Supervised Homography estimation Network (SSHNet).
SSHNet reformulates the unsupervised cross-modal homography estimation into two
supervised sub-problems, each addressed by its specialized network: a
homography estimation network and a modality transfer network. To realize
stable training, we introduce an effective split optimization strategy to train
each network separately within its respective sub-problem. We also formulate an
extra homography feature space supervision to enhance feature consistency,
further boosting the estimation accuracy. Moreover, we employ a simple yet
effective distillation training technique to reduce model parameters and
improve cross-domain generalization ability while maintaining comparable
performance. The training stability of SSHNet enables its cooperation with
various homography estimation architectures. Experiments reveal that the SSHNet
using IHN as homography estimation network, namely SSHNet-IHN, outperforms
previous unsupervised approaches by a significant margin. Even compared to
supervised approaches MHN and LocalTrans, SSHNet-IHN achieves 47.4% and 85.8%
mean average corner errors (MACEs) reduction on the challenging OPT-SAR
dataset.
|
2410.05638 | Emam Hossain | Emam Hossain, Md Osman Gani, Devon Dunmire, Aneesh Subramanian, Hammad
Younas | Time Series Classification of Supraglacial Lakes Evolution over
Greenland Ice Sheet | Published in 2024 International Conference on Machine Learning and
Applications (ICMLA). [DOI: https://doi.org/10.1109/ICMLA61862.2024.00072] | 2024 International Conference on Machine Learning and Applications
(ICMLA), Miami, FL, USA, pp. 490-497 | 10.1109/ICMLA61862.2024.00072 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Greenland Ice Sheet (GrIS) has emerged as a significant contributor to
global sea level rise, primarily due to increased meltwater runoff.
Supraglacial lakes, which form on the ice sheet surface during the summer
months, can impact ice sheet dynamics and mass loss; thus, better understanding
these lakes' seasonal evolution and dynamics is an important task. This study
presents a computationally efficient time series classification approach that
uses Gaussian Mixture Models (GMMs) of the Reconstructed Phase Spaces (RPSs) to
identify supraglacial lakes based on their seasonal evolution: 1) those that
refreeze at the end of the melt season, 2) those that drain during the melt
season, and 3) those that become buried, remaining liquid insulated a few
meters beneath the surface. Our approach uses time series data from the
Sentinel-1 and Sentinel-2 satellites, which utilize microwave and visible
radiation, respectively. Evaluated on a GrIS-wide dataset, the RPS-GMM model,
trained on a single representative sample per class, achieves 85.46% accuracy
with Sentinel-1 data alone and 89.70% with combined Sentinel-1 and Sentinel-2
data. This performance significantly surpasses existing machine learning and
deep learning models, which require large amounts of training data. The results
demonstrate the robustness of the RPS-GMM model in capturing the complex
temporal dynamics of supraglacial lakes with minimal training data.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 02:42:15 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 22:40:56 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hossain",
"Emam",
""
],
[
"Gani",
"Md Osman",
""
],
[
"Dunmire",
"Devon",
""
],
[
"Subramanian",
"Aneesh",
""
],
[
"Younas",
"Hammad",
""
]
] | TITLE: Time Series Classification of Supraglacial Lakes Evolution over
Greenland Ice Sheet
ABSTRACT: The Greenland Ice Sheet (GrIS) has emerged as a significant contributor to
global sea level rise, primarily due to increased meltwater runoff.
Supraglacial lakes, which form on the ice sheet surface during the summer
months, can impact ice sheet dynamics and mass loss; thus, better understanding
these lakes' seasonal evolution and dynamics is an important task. This study
presents a computationally efficient time series classification approach that
uses Gaussian Mixture Models (GMMs) of the Reconstructed Phase Spaces (RPSs) to
identify supraglacial lakes based on their seasonal evolution: 1) those that
refreeze at the end of the melt season, 2) those that drain during the melt
season, and 3) those that become buried, remaining liquid insulated a few
meters beneath the surface. Our approach uses time series data from the
Sentinel-1 and Sentinel-2 satellites, which utilize microwave and visible
radiation, respectively. Evaluated on a GrIS-wide dataset, the RPS-GMM model,
trained on a single representative sample per class, achieves 85.46% accuracy
with Sentinel-1 data alone and 89.70% with combined Sentinel-1 and Sentinel-2
data. This performance significantly surpasses existing machine learning and
deep learning models, which require large amounts of training data. The results
demonstrate the robustness of the RPS-GMM model in capturing the complex
temporal dynamics of supraglacial lakes with minimal training data.
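A minimal sketch of the RPS-GMM idea, assuming synthetic lake time series and scikit-learn's GaussianMixture: embed each series with a time-delay (reconstructed phase space) embedding, fit one GMM per class from a single representative series, and classify a query by log-likelihood. All parameters and series here are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def phase_space(x: np.ndarray, dim: int = 3, lag: int = 10) -> np.ndarray:
    """Time-delay embedding: rows are (x_t, x_{t+lag}, ..., x_{t+(dim-1)*lag})."""
    n = len(x) - (dim - 1) * lag
    return np.stack([x[i * lag : i * lag + n] for i in range(dim)], axis=1)

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 400)
train = {  # one representative (synthetic) series per hypothetical class
    "refreezing": np.sin(t) + 0.05 * rng.normal(size=t.size),
    "draining": np.exp(-0.2 * t) * np.sin(3 * t) + 0.05 * rng.normal(size=t.size),
}
models = {c: GaussianMixture(n_components=4, random_state=0).fit(phase_space(x))
          for c, x in train.items()}

query = np.sin(t + 0.3) + 0.05 * rng.normal(size=t.size)  # same attractor as "refreezing"
scores = {c: m.score(phase_space(query)) for c, m in models.items()}
print(max(scores, key=scores.get), scores)                # pick the most likely class
```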
|
2410.10491 | Aritra Bhowmik | Aritra Bhowmik, Mohammad Mahdi Derakhshani, Dennis Koelma, Yuki M.
Asano, Martin R. Oswald, Cees G. M. Snoek | TWIST & SCOUT: Grounding Multimodal LLM-Experts by Forget-Free Tuning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Spatial awareness is key to enable embodied multimodal AI systems. Yet,
without vast amounts of spatial supervision, current Multimodal Large Language
Models (MLLMs) struggle at this task. In this paper, we introduce TWIST &
SCOUT, a framework that equips pre-trained MLLMs with visual grounding ability
without forgetting their existing image and language understanding skills. To
this end, we propose TWIST, a twin-expert stepwise tuning module that modifies
the decoder of the language model using one frozen module pre-trained on image
understanding tasks and another learnable one for visual grounding tasks. This
allows the MLLM to retain previously learned knowledge and skills, while
acquiring what is missing. To fine-tune the model effectively, we generate a
high-quality synthetic dataset we call SCOUT, which mimics human reasoning in
visual grounding. This dataset provides rich supervision signals, describing a
step-by-step multimodal reasoning process, thereby simplifying the task of
visual grounding. We evaluate our approach on several standard benchmark
datasets, encompassing grounded image captioning, zero-shot localization, and
visual grounding tasks. Our method consistently delivers strong performance
across all tasks, while retaining the pre-trained image understanding
capabilities.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 13:35:47 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 15:32:47 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Bhowmik",
"Aritra",
""
],
[
"Derakhshani",
"Mohammad Mahdi",
""
],
[
"Koelma",
"Dennis",
""
],
[
"Asano",
"Yuki M.",
""
],
[
"Oswald",
"Martin R.",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] | TITLE: TWIST & SCOUT: Grounding Multimodal LLM-Experts by Forget-Free Tuning
ABSTRACT: Spatial awareness is key to enable embodied multimodal AI systems. Yet,
without vast amounts of spatial supervision, current Multimodal Large Language
Models (MLLMs) struggle at this task. In this paper, we introduce TWIST &
SCOUT, a framework that equips pre-trained MLLMs with visual grounding ability
without forgetting their existing image and language understanding skills. To
this end, we propose TWIST, a twin-expert stepwise tuning module that modifies
the decoder of the language model using one frozen module pre-trained on image
understanding tasks and another learnable one for visual grounding tasks. This
allows the MLLM to retain previously learned knowledge and skills, while
acquiring what is missing. To fine-tune the model effectively, we generate a
high-quality synthetic dataset we call SCOUT, which mimics human reasoning in
visual grounding. This dataset provides rich supervision signals, describing a
step-by-step multimodal reasoning process, thereby simplifying the task of
visual grounding. We evaluate our approach on several standard benchmark
datasets, encompassing grounded image captioning, zero-shot localization, and
visual grounding tasks. Our method consistently delivers strong performance
across all tasks, while retaining the pre-trained image understanding
capabilities.
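A hypothetical PyTorch sketch of a twin-expert layer in the spirit of TWIST: a frozen expert preserves pre-trained behavior while a learnable expert adds grounding capacity. The per-token gate and all dimensions are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class TwinExpertFFN(nn.Module):
    """One frozen expert (retains pre-trained skills) plus one learnable expert."""
    def __init__(self, d_model: int = 256, d_ff: int = 1024):
        super().__init__()
        self.frozen_expert = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                           nn.Linear(d_ff, d_model))
        self.grounding_expert = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                              nn.Linear(d_ff, d_model))
        for p in self.frozen_expert.parameters():
            p.requires_grad = False                 # forget-free: never updated
        self.gate = nn.Linear(d_model, 1)           # assumed per-token mixing weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.gate(x))
        return x + (1 - w) * self.frozen_expert(x) + w * self.grounding_expert(x)

layer = TwinExpertFFN()
tokens = torch.randn(2, 16, 256)                    # (batch, sequence, hidden)
print(layer(tokens).shape)                          # torch.Size([2, 16, 256])
```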
|
2410.11374 | Yoonjeon Kim | Yoonjeon Kim, Soohyun Ryu, Yeonsung Jung, Hyunkoo Lee, Joowon Kim,
June Yong Yang, Jaeryong Hwang, Eunho Yang | Preserve or Modify? Context-Aware Evaluation for Balancing Preservation
and Modification in Text-Guided Image Editing | accepted to CVPR 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The development of vision-language and generative models has significantly
advanced text-guided image editing, which seeks the preservation of core
elements in the source image while implementing modifications based on the
target text. However, existing metrics have a context-blindness problem,
indiscriminately applying the same evaluation criteria on completely different
pairs of source image and target text, biasing towards either modification or
preservation. Directional CLIP similarity, the only metric that considers both
source image and target text, is also biased towards modification aspects and
attends to irrelevant editing regions of the image. We propose AugCLIP, a
context-aware metric that adaptively coordinates preservation and modification
aspects, depending on the specific context of a given source image and target
text. This is done by deriving the CLIP representation of an ideally edited
image that preserves the source image with the necessary modifications to align
with the target text. More specifically, using a multi-modal large language model,
AugCLIP augments the textual descriptions of the source and target, then
calculates a modification vector through a hyperplane that separates source and
target attributes in CLIP space. Extensive experiments on five benchmark
datasets, encompassing a diverse range of editing scenarios, show that AugCLIP
aligns remarkably well with human evaluation standards, outperforming existing
metrics. The code is available at https://github.com/augclip/augclip_eval.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 08:12:54 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Dec 2024 07:35:20 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 07:36:52 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Kim",
"Yoonjeon",
""
],
[
"Ryu",
"Soohyun",
""
],
[
"Jung",
"Yeonsung",
""
],
[
"Lee",
"Hyunkoo",
""
],
[
"Kim",
"Joowon",
""
],
[
"Yang",
"June Yong",
""
],
[
"Hwang",
"Jaeryong",
""
],
[
"Yang",
"Eunho",
""
]
] | TITLE: Preserve or Modify? Context-Aware Evaluation for Balancing Preservation
and Modification in Text-Guided Image Editing
ABSTRACT: The development of vision-language and generative models has significantly
advanced text-guided image editing, which seeks the preservation of core
elements in the source image while implementing modifications based on the
target text. However, existing metrics have a context-blindness problem,
indiscriminately applying the same evaluation criteria on completely different
pairs of source image and target text, biasing towards either modification or
preservation. Directional CLIP similarity, the only metric that considers both
source image and target text, is also biased towards modification aspects and
attends to irrelevant editing regions of the image. We propose AugCLIP, a
context-aware metric that adaptively coordinates preservation and modification
aspects, depending on the specific context of a given source image and target
text. This is done by deriving the CLIP representation of an ideally edited
image that preserves the source image with the necessary modifications to align
with the target text. More specifically, using a multi-modal large language model,
AugCLIP augments the textual descriptions of the source and target, then
calculates a modification vector through a hyperplane that separates source and
target attributes in CLIP space. Extensive experiments on five benchmark
datasets, encompassing a diverse range of editing scenarios, show that AugCLIP
aligns remarkably well with human evaluation standards, outperforming existing
metrics. The code is available at https://github.com/augclip/augclip_eval.
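The modification-vector construction can be sketched as follows, with random vectors standing in for CLIP embeddings and a linear SVM providing the separating hyperplane; the exact hyperplane-fitting procedure and scaling are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
d = 512                                              # assumed CLIP embedding size
src_attr = rng.normal(0.0, 1.0, size=(40, d))        # augmented source descriptions
tgt_attr = rng.normal(0.5, 1.0, size=(40, d))        # augmented target descriptions

X = np.vstack([src_attr, tgt_attr])
y = np.array([0] * len(src_attr) + [1] * len(tgt_attr))
hyperplane = LinearSVC(C=1.0).fit(X, y)
mod_vec = hyperplane.coef_[0] / np.linalg.norm(hyperplane.coef_[0])  # modification direction

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

src_img = rng.normal(size=d)                         # source image embedding
ideal_edit = src_img + 1.5 * mod_vec                 # "ideally edited" representation
candidate = src_img + 1.4 * mod_vec + 0.1 * rng.normal(size=d)  # edited image to score
print("score vs. ideal edit:", cosine(candidate, ideal_edit))
```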
|
2410.13924 | Guangda Ji | Guangda Ji, Silvan Weder, Francis Engelmann, Marc Pollefeys, Hermann
Blum | ARKit LabelMaker: A New Scale for Indoor 3D Scene Understanding | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural network performance scales with both model size and data volume, as
shown in both language and image processing. This requires scaling-friendly
architectures and large datasets. While transformers have been adapted for 3D
vision, a `GPT-moment' remains elusive due to limited training data. We
introduce ARKit LabelMaker, a large-scale real-world 3D dataset with dense
semantic annotation that is more than three times larger than the prior largest
dataset. Specifically, we extend ARKitScenes with automatically generated dense
3D labels using an extended LabelMaker pipeline, tailored for large-scale
pre-training. Training on our dataset improves accuracy across architectures,
achieving state-of-the-art 3D semantic segmentation scores on ScanNet and
ScanNet200, with notable gains on tail classes. Our code is available at
https://labelmaker.org and our dataset at
https://huggingface.co/datasets/labelmaker/arkit_labelmaker.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 14:44:35 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 10:16:27 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Ji",
"Guangda",
""
],
[
"Weder",
"Silvan",
""
],
[
"Engelmann",
"Francis",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Blum",
"Hermann",
""
]
] | TITLE: ARKit LabelMaker: A New Scale for Indoor 3D Scene Understanding
ABSTRACT: Neural network performance scales with both model size and data volume, as
shown in both language and image processing. This requires scaling-friendly
architectures and large datasets. While transformers have been adapted for 3D
vision, a `GPT-moment' remains elusive due to limited training data. We
introduce ARKit LabelMaker, a large-scale real-world 3D dataset with dense
semantic annotation that is more than three times larger than the prior largest
dataset. Specifically, we extend ARKitScenes with automatically generated dense
3D labels using an extended LabelMaker pipeline, tailored for large-scale
pre-training. Training on our dataset improves accuracy across architectures,
achieving state-of-the-art 3D semantic segmentation scores on ScanNet and
ScanNet200, with notable gains on tail classes. Our code is available at
https://labelmaker.org and our dataset at
https://huggingface.co/datasets/labelmaker/arkit_labelmaker.
|
2410.19464 | Jiajun Zhang | Jiajun Zhang, Boyang Qiang, Xiaoyu Guo, Weiwei Xing, Yue Cheng, Witold
Pedrycz | LOCAL: Learning with Orientation Matrix to Infer Causal Structure from
Time Series Data | 16 pages, 7 figures | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering the underlying Directed Acyclic Graph (DAG) from time series
observational data is highly challenging due to the dynamic nature and complex
nonlinear interactions between variables. Existing methods typically search for
the optimal DAG by optimizing an objective function but face scalability
challenges, as their computational demands grow exponentially with the
dimensional expansion of variables. To this end, we propose LOCAL, a highly
efficient, easy-to-implement, and constraint-free method for recovering dynamic
causal structures. LOCAL is the first attempt to formulate a quasi-maximum
likelihood-based score function for learning the dynamic DAG equivalent to the
ground truth. Building on this, we introduce two adaptive modules that enhance
the algebraic characterization of acyclicity: Asymptotic Causal Mask Learning
(ACML) and Dynamic Graph Parameter Learning (DGPL). ACML constructs causal
masks using learnable priority vectors and the Gumbel-Sigmoid function,
ensuring DAG formation while optimizing computational efficiency. DGPL
transforms causal learning into decomposed matrix products, capturing dynamic
causal structure in high-dimensional data and improving interpretability.
Extensive experiments on synthetic and real-world datasets demonstrate that
LOCAL significantly outperforms existing methods and highlight LOCAL's
potential as a robust and efficient method for dynamic causal discovery.
| [
{
"version": "v1",
"created": "Fri, 25 Oct 2024 10:48:41 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Oct 2024 01:44:41 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 12:59:43 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Mar 2025 03:32:03 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhang",
"Jiajun",
""
],
[
"Qiang",
"Boyang",
""
],
[
"Guo",
"Xiaoyu",
""
],
[
"Xing",
"Weiwei",
""
],
[
"Cheng",
"Yue",
""
],
[
"Pedrycz",
"Witold",
""
]
] | TITLE: LOCAL: Learning with Orientation Matrix to Infer Causal Structure from
Time Series Data
ABSTRACT: Discovering the underlying Directed Acyclic Graph (DAG) from time series
observational data is highly challenging due to the dynamic nature and complex
nonlinear interactions between variables. Existing methods typically search for
the optimal DAG by optimizing an objective function but face scalability
challenges, as their computational demands grow exponentially with the
dimensional expansion of variables. To this end, we propose LOCAL, a highly
efficient, easy-to-implement, and constraint-free method for recovering dynamic
causal structures. LOCAL is the first attempt to formulate a quasi-maximum
likelihood-based score function for learning the dynamic DAG equivalent to the
ground truth. Building on this, we introduce two adaptive modules that enhance
the algebraic characterization of acyclicity: Asymptotic Causal Mask Learning
(ACML) and Dynamic Graph Parameter Learning (DGPL). ACML constructs causal
masks using learnable priority vectors and the Gumbel-Sigmoid function,
ensuring DAG formation while optimizing computational efficiency. DGPL
transforms causal learning into decomposed matrix products, capturing dynamic
causal structure in high-dimensional data and improving interpretability.
Extensive experiments on synthetic and real-world datasets demonstrate that
LOCAL significantly outperforms existing methods and highlight LOCAL's
potential as a robust and efficient method for dynamic causal discovery.
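A small PyTorch sketch of the ACML idea, under the assumption that the mask logits are differences of a learnable priority vector relaxed with Gumbel-Sigmoid noise; the exact parameterization in the paper may differ.

```python
import torch

def gumbel_sigmoid(logits: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log1p(-u)            # logistic (Gumbel-difference) noise
    return torch.sigmoid((logits + noise) / tau)

d = 5                                                 # number of variables
priority = torch.nn.Parameter(torch.randn(d))         # learnable priority vector
logits = priority.unsqueeze(1) - priority.unsqueeze(0)        # score for edge i -> j
soft_mask = gumbel_sigmoid(logits) * (1 - torch.eye(d))       # relaxed mask used in training
hard_mask = (logits > 0).float() * (1 - torch.eye(d))         # acyclic by strict ordering
print(hard_mask)
```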
|
2410.20081 | Viswanath Sivakumar | Viswanath Sivakumar, Jeffrey Seely, Alan Du, Sean R Bittner, Adam
Berenzweig, Anuoluwapo Bolarinwa, Alexandre Gramfort, Michael I Mandel | emg2qwerty: A Large Dataset with Baselines for Touch Typing using
Surface Electromyography | Published at NeurIPS 2024 Datasets and Benchmarks Track | null | null | null | cs.LG cs.HC eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Surface electromyography (sEMG) non-invasively measures signals generated by
muscle activity with sufficient sensitivity to detect individual spinal neurons
and richness to identify dozens of gestures and their nuances. Wearable
wrist-based sEMG sensors have the potential to offer low friction, subtle,
information rich, always available human-computer inputs. To this end, we
introduce emg2qwerty, a large-scale dataset of non-invasive electromyographic
signals recorded at the wrists while touch typing on a QWERTY keyboard,
together with ground-truth annotations and reproducible baselines. With 1,135
sessions spanning 108 users and 346 hours of recording, this is the largest
such public dataset to date. These data demonstrate non-trivial, but well
defined hierarchical relationships both in terms of the generative process,
from neurons to muscles and muscle combinations, as well as in terms of domain
shift across users and user sessions. Applying standard modeling techniques
from the closely related field of Automatic Speech Recognition (ASR), we show
strong baseline performance on predicting key-presses using sEMG signals alone.
We believe the richness of this task and dataset will facilitate progress in
several problems of interest to both the machine learning and neuroscientific
communities. Dataset and code can be accessed at
https://github.com/facebookresearch/emg2qwerty.
| [
{
"version": "v1",
"created": "Sat, 26 Oct 2024 05:18:48 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Nov 2024 16:29:43 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 15:51:46 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Sivakumar",
"Viswanath",
""
],
[
"Seely",
"Jeffrey",
""
],
[
"Du",
"Alan",
""
],
[
"Bittner",
"Sean R",
""
],
[
"Berenzweig",
"Adam",
""
],
[
"Bolarinwa",
"Anuoluwapo",
""
],
[
"Gramfort",
"Alexandre",
""
],
[
"Mandel",
"Michael I",
""
]
] | TITLE: emg2qwerty: A Large Dataset with Baselines for Touch Typing using
Surface Electromyography
ABSTRACT: Surface electromyography (sEMG) non-invasively measures signals generated by
muscle activity with sufficient sensitivity to detect individual spinal neurons
and richness to identify dozens of gestures and their nuances. Wearable
wrist-based sEMG sensors have the potential to offer low friction, subtle,
information rich, always available human-computer inputs. To this end, we
introduce emg2qwerty, a large-scale dataset of non-invasive electromyographic
signals recorded at the wrists while touch typing on a QWERTY keyboard,
together with ground-truth annotations and reproducible baselines. With 1,135
sessions spanning 108 users and 346 hours of recording, this is the largest
such public dataset to date. These data demonstrate non-trivial, but well
defined hierarchical relationships both in terms of the generative process,
from neurons to muscles and muscle combinations, as well as in terms of domain
shift across users and user sessions. Applying standard modeling techniques
from the closely related field of Automatic Speech Recognition (ASR), we show
strong baseline performance on predicting key-presses using sEMG signals alone.
We believe the richness of this task and dataset will facilitate progress in
several problems of interest to both the machine learning and neuroscientific
communities. Dataset and code can be accessed at
https://github.com/facebookresearch/emg2qwerty.
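Since the baselines borrow standard ASR machinery, a hedged sketch of a CTC objective over sEMG feature windows looks like this; shapes and vocabulary size are illustrative, not the dataset's actual configuration.

```python
import torch
import torch.nn as nn

T, N, C = 200, 4, 30                                 # time steps, batch, |keys| + blank
log_probs = torch.randn(T, N, C).log_softmax(dim=-1) # stand-in for network outputs
targets = torch.randint(1, C, (N, 25))               # padded key-press label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 25, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(float(loss))                                   # minimized during training
```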
|
2411.01411 | Amit Misra | Amit Misra, Kevin White, Simone Fobi Nsutezo, William Straka, and Juan
Lavista | Mapping Global Floods with 10 Years of Satellite Radar Data | 18 pages, 8 figures, under review | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Floods cause extensive global damage annually, making effective monitoring
essential. While satellite observations have proven invaluable for flood
detection and tracking, comprehensive global flood datasets spanning extended
time periods remain scarce. In this study, we introduce a novel deep learning
flood detection model that leverages the cloud-penetrating capabilities of
Sentinel-1 Synthetic Aperture Radar (SAR) satellite imagery, enabling
consistent flood extent mapping through cloud cover and in both day and
night conditions. By applying this model to 10 years of SAR data, we create a
unique, longitudinal global flood extent dataset with predictions unaffected by
cloud coverage, offering comprehensive and consistent insights into
historically flood-prone areas over the past decade. We use our model
predictions to identify historically flood-prone areas in Ethiopia and
demonstrate real-time disaster response capabilities during the May 2024 floods
in Kenya. Additionally, our longitudinal analysis reveals potential increasing
trends in global flood extent over time, although further validation is
required to explore links to climate change. To maximize impact, we provide
public access to both our model predictions and a code repository, empowering
researchers and practitioners worldwide to advance flood monitoring and enhance
disaster response strategies.
| [
{
"version": "v1",
"created": "Sun, 3 Nov 2024 02:44:32 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 00:26:25 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Misra",
"Amit",
""
],
[
"White",
"Kevin",
""
],
[
"Nsutezo",
"Simone Fobi",
""
],
[
"Straka",
"William",
""
],
[
"Lavista",
"Juan",
""
]
] | TITLE: Mapping Global Floods with 10 Years of Satellite Radar Data
ABSTRACT: Floods cause extensive global damage annually, making effective monitoring
essential. While satellite observations have proven invaluable for flood
detection and tracking, comprehensive global flood datasets spanning extended
time periods remain scarce. In this study, we introduce a novel deep learning
flood detection model that leverages the cloud-penetrating capabilities of
Sentinel-1 Synthetic Aperture Radar (SAR) satellite imagery, enabling
consistent flood extent mapping through cloud cover and in both day and
night conditions. By applying this model to 10 years of SAR data, we create a
unique, longitudinal global flood extent dataset with predictions unaffected by
cloud coverage, offering comprehensive and consistent insights into
historically flood-prone areas over the past decade. We use our model
predictions to identify historically flood-prone areas in Ethiopia and
demonstrate real-time disaster response capabilities during the May 2024 floods
in Kenya. Additionally, our longitudinal analysis reveals potential increasing
trends in global flood extent over time, although further validation is
required to explore links to climate change. To maximize impact, we provide
public access to both our model predictions and a code repository, empowering
researchers and practitioners worldwide to advance flood monitoring and enhance
disaster response strategies.
|
2411.02344 | Md Rifat Arefin | Md Rifat Arefin, Gopeshh Subbaraj, Nicolas Gontier, Yann LeCun, Irina
Rish, Ravid Shwartz-Ziv, Christopher Pal | Seq-VCR: Preventing Collapse in Intermediate Transformer Representations
for Enhanced Reasoning | null | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Decoder-only Transformers often struggle with complex reasoning tasks,
particularly arithmetic reasoning requiring multiple sequential operations. In
this work, we identify representation collapse in the model's intermediate
layers as a key factor limiting their reasoning capabilities. To address this,
we propose Sequential Variance-Covariance Regularization (Seq-VCR), which
enhances the entropy of intermediate representations and prevents collapse.
Combined with dummy pause tokens as substitutes for chain-of-thought (CoT)
tokens, our method significantly improves performance in arithmetic reasoning
problems. In the challenging $5 \times 5$ integer multiplication task, our
approach achieves $99.5\%$ exact match accuracy, outperforming models of the
same size (which yield $0\%$ accuracy) and GPT-4 with five-shot CoT prompting
($44\%$). We also demonstrate superior results on arithmetic expression and
longest increasing subsequence (LIS) datasets. Our findings highlight the
importance of preventing intermediate layer representation collapse to enhance
the reasoning capabilities of Transformers and show that Seq-VCR offers an
effective solution without requiring explicit CoT supervision.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 18:14:07 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 17:37:44 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Arefin",
"Md Rifat",
""
],
[
"Subbaraj",
"Gopeshh",
""
],
[
"Gontier",
"Nicolas",
""
],
[
"LeCun",
"Yann",
""
],
[
"Rish",
"Irina",
""
],
[
"Shwartz-Ziv",
"Ravid",
""
],
[
"Pal",
"Christopher",
""
]
] | TITLE: Seq-VCR: Preventing Collapse in Intermediate Transformer Representations
for Enhanced Reasoning
ABSTRACT: Decoder-only Transformers often struggle with complex reasoning tasks,
particularly arithmetic reasoning requiring multiple sequential operations. In
this work, we identify representation collapse in the model's intermediate
layers as a key factor limiting their reasoning capabilities. To address this,
we propose Sequential Variance-Covariance Regularization (Seq-VCR), which
enhances the entropy of intermediate representations and prevents collapse.
Combined with dummy pause tokens as substitutes for chain-of-thought (CoT)
tokens, our method significantly improves performance in arithmetic reasoning
problems. In the challenging $5 \times 5$ integer multiplication task, our
approach achieves $99.5\%$ exact match accuracy, outperforming models of the
same size (which yield $0\%$ accuracy) and GPT-4 with five-shot CoT prompting
($44\%$). We also demonstrate superior results on arithmetic expression and
longest increasing subsequence (LIS) datasets. Our findings highlight the
importance of preventing intermediate layer representation collapse to enhance
the reasoning capabilities of Transformers and show that Seq-VCR offers an
effective solution without requiring explicit CoT supervision.
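A hypothetical sketch of a variance-covariance penalty on intermediate hidden states, in the spirit of Seq-VCR; the exact formulation and weighting used in the paper may differ.

```python
import torch

def var_cov_penalty(h: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """h: (num_tokens, dim) hidden states taken from an intermediate layer."""
    h = h - h.mean(dim=0)
    std = torch.sqrt(h.var(dim=0) + eps)
    variance_term = torch.relu(1.0 - std).mean()           # keep per-dimension variance up
    cov = (h.T @ h) / (h.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    covariance_term = (off_diag ** 2).sum() / h.shape[1]   # decorrelate dimensions
    return variance_term + covariance_term

hidden = torch.randn(32 * 16, 128)        # e.g. flattened (batch, seq_len) hidden states
print(float(var_cov_penalty(hidden)))     # added to the task loss with a weight in practice
```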
|
2411.04713 | Sijie Zhu | Xin Gu, Ming Li, Libo Zhang, Fan Chen, Longyin Wen, Tiejian Luo, Sijie
Zhu | Multi-Reward as Condition for Instruction-based Image Editing | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-quality training triplets (instruction, original image, edited image)
are essential for instruction-based image editing. Predominant training
datasets (e.g., InsPix2Pix) are created using text-to-image generative models
(e.g., Stable Diffusion, DALL-E) which are not trained for image editing.
Accordingly, these datasets suffer from inaccurate instruction following, poor
detail preserving, and generation artifacts. In this paper, we propose to
address the training data quality issue with multi-perspective reward data
instead of refining the ground-truth image quality. 1) We first design a
quantitative metric system based on best-in-class LVLM (Large Vision Language
Model), i.e., GPT-4o in our case, to evaluate the generation quality from 3
perspectives, namely, instruction following, detail preserving, and generation
quality. For each perspective, we collect a quantitative score in the range
$0\sim 5$ and descriptive textual feedback on the specific failure points in
ground-truth edited
images, resulting in a high-quality editing reward dataset, i.e.,
RewardEdit20K. 2) We further propose a novel training framework to seamlessly
integrate the metric output, regarded as multi-reward, into editing models to
learn from the imperfect training triplets. During training, the reward scores
and text descriptions are encoded as embeddings and fed into both the latent
space and the U-Net of the editing models as auxiliary conditions. 3) We also
build a challenging evaluation benchmark with real-world images/photos and
diverse editing instructions, named Real-Edit. Experiments indicate that our
multi-reward conditioned model outperforms its no-reward counterpart on two
popular editing pipelines, i.e., InsPix2Pix and SmartEdit. Code is released at
https://github.com/bytedance/Multi-Reward-Editing.
| [
{
"version": "v1",
"created": "Wed, 6 Nov 2024 05:02:29 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 00:04:47 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gu",
"Xin",
""
],
[
"Li",
"Ming",
""
],
[
"Zhang",
"Libo",
""
],
[
"Chen",
"Fan",
""
],
[
"Wen",
"Longyin",
""
],
[
"Luo",
"Tiejian",
""
],
[
"Zhu",
"Sijie",
""
]
] | TITLE: Multi-Reward as Condition for Instruction-based Image Editing
ABSTRACT: High-quality training triplets (instruction, original image, edited image)
are essential for instruction-based image editing. Predominant training
datasets (e.g., InsPix2Pix) are created using text-to-image generative models
(e.g., Stable Diffusion, DALL-E) which are not trained for image editing.
Accordingly, these datasets suffer from inaccurate instruction following, poor
detail preserving, and generation artifacts. In this paper, we propose to
address the training data quality issue with multi-perspective reward data
instead of refining the ground-truth image quality. 1) We first design a
quantitative metric system based on best-in-class LVLM (Large Vision Language
Model), i.e., GPT-4o in our case, to evaluate the generation quality from 3
perspectives, namely, instruction following, detail preserving, and generation
quality. For each perspective, we collect a quantitative score in the range
$0\sim 5$ and descriptive textual feedback on the specific failure points in
ground-truth edited
images, resulting in a high-quality editing reward dataset, i.e.,
RewardEdit20K. 2) We further propose a novel training framework to seamlessly
integrate the metric output, regarded as multi-reward, into editing models to
learn from the imperfect training triplets. During training, the reward scores
and text descriptions are encoded as embeddings and fed into both the latent
space and the U-Net of the editing models as auxiliary conditions. 3) We also
build a challenging evaluation benchmark with real-world images/photos and
diverse editing instructions, named Real-Edit. Experiments indicate that our
multi-reward conditioned model outperforms its no-reward counterpart on two
popular editing pipelines, i.e., InsPix2Pix and SmartEdit. Code is released at
https://github.com/bytedance/Multi-Reward-Editing.
|
2411.07563 | Dongrui Han | Dongrui Han, Mingyu Cui, Jiawen Kang, Xixin Wu, Xunying Liu, Helen
Meng | Improving Grapheme-to-Phoneme Conversion through In-Context Knowledge
Retrieval with Large Language Models | accepted by ISCSLP 2024 | null | 10.1109/ISCSLP63861.2024.10800392 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Grapheme-to-phoneme (G2P) conversion is a crucial step in Text-to-Speech
(TTS) systems, responsible for mapping graphemes to corresponding phonetic
representations. However, it faces ambiguity problems, where the same grapheme
can represent multiple phonemes depending on context, posing a challenge for
G2P conversion. Inspired by the remarkable success of Large Language Models
(LLMs) in handling context-aware scenarios, contextual G2P conversion systems
with LLMs' in-context knowledge retrieval (ICKR) capabilities are proposed to
promote disambiguation capability. The efficacy of incorporating ICKR into G2P
conversion systems is demonstrated thoroughly on the Librig2p dataset. In
particular, the best contextual G2P conversion system using ICKR outperforms
the baseline with weighted average phoneme error rate (PER) reductions of 2.0%
absolute (28.9% relative). Using GPT-4 in the ICKR system yields a further
increase of 3.5% absolute (3.8% relative) on the Librig2p dataset.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 05:38:43 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Han",
"Dongrui",
""
],
[
"Cui",
"Mingyu",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Wu",
"Xixin",
""
],
[
"Liu",
"Xunying",
""
],
[
"Meng",
"Helen",
""
]
] | TITLE: Improving Grapheme-to-Phoneme Conversion through In-Context Knowledge
Retrieval with Large Language Models
ABSTRACT: Grapheme-to-phoneme (G2P) conversion is a crucial step in Text-to-Speech
(TTS) systems, responsible for mapping graphemes to corresponding phonetic
representations. However, it faces ambiguity problems, where the same grapheme
can represent multiple phonemes depending on context, posing a challenge for
G2P conversion. Inspired by the remarkable success of Large Language Models
(LLMs) in handling context-aware scenarios, contextual G2P conversion systems
with LLMs' in-context knowledge retrieval (ICKR) capabilities are proposed to
promote disambiguation capability. The efficacy of incorporating ICKR into G2P
conversion systems is demonstrated thoroughly on the Librig2p dataset. In
particular, the best contextual G2P conversion system using ICKR outperforms
the baseline with weighted average phoneme error rate (PER) reductions of 2.0%
absolute (28.9% relative). Using GPT-4 in the ICKR system yields a further
increase of 3.5% absolute (3.8% relative) on the Librig2p dataset.
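Purely as an illustration of in-context knowledge retrieval, a prompt for a polyphonic word could be assembled as follows; the lexicon, retrieval step, and formatting are invented, not the paper's.

```python
lexicon = {  # invented retrieved entries: (context sentence, phoneme string)
    "read": [("I will read the book", "R IY D"),
             ("I read it yesterday", "R EH D")],
}

def build_icl_prompt(word: str, sentence: str) -> str:
    examples = "\n".join(f"Word: {word} | Context: {ctx} | Phonemes: {ph}"
                         for ctx, ph in lexicon.get(word, []))
    return f"{examples}\nWord: {word} | Context: {sentence} | Phonemes:"

print(build_icl_prompt("read", "She will read the report tomorrow"))
```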
|
2411.08402 | Xun Huang | Xun Huang, Jinlong Wang, Qiming Xia, Siheng Chen, Bisheng Yang, Xin
Li, Cheng Wang, Chenglu Wen | V2X-R: Cooperative LiDAR-4D Radar Fusion for 3D Object Detection with
Denoising Diffusion | Accepted by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current Vehicle-to-Everything (V2X) systems have significantly enhanced 3D
object detection using LiDAR and camera data. However, these methods suffer
from performance degradation in adverse weather conditions. The weather-robust
4D radar provides Doppler and additional geometric information, raising the
possibility of addressing this challenge. To this end, we present V2X-R, the
first simulated V2X dataset incorporating LiDAR, camera, and 4D radar. V2X-R
contains 12,079 scenarios with 37,727 frames of LiDAR and 4D radar point
clouds, 150,908 images, and 170,859 annotated 3D vehicle bounding boxes.
Subsequently, we propose a novel cooperative LiDAR-4D radar fusion pipeline for
3D object detection and implement it with various fusion strategies. To achieve
weather-robust detection, we additionally propose a Multi-modal Denoising
Diffusion (MDD) module in our fusion pipeline. MDD utilizes the weather-robust
4D radar features as a condition to prompt the diffusion model to denoise noisy
LiDAR features. Experiments show that our LiDAR-4D radar fusion pipeline
demonstrates superior performance in the V2X-R dataset. Over and above this,
our MDD module further improves the performance of the basic fusion model by up
to 5.73%/6.70% in foggy/snowy conditions while barely disrupting normal
performance. The dataset and code will be publicly available at:
https://github.com/ylwhxht/V2X-R.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 07:41:47 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Nov 2024 16:54:54 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 10:36:44 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Mar 2025 03:55:02 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Huang",
"Xun",
""
],
[
"Wang",
"Jinlong",
""
],
[
"Xia",
"Qiming",
""
],
[
"Chen",
"Siheng",
""
],
[
"Yang",
"Bisheng",
""
],
[
"Li",
"Xin",
""
],
[
"Wang",
"Cheng",
""
],
[
"Wen",
"Chenglu",
""
]
] | TITLE: V2X-R: Cooperative LiDAR-4D Radar Fusion for 3D Object Detection with
Denoising Diffusion
ABSTRACT: Current Vehicle-to-Everything (V2X) systems have significantly enhanced 3D
object detection using LiDAR and camera data. However, these methods suffer
from performance degradation in adverse weather conditions. The weather-robust
4D radar provides Doppler and additional geometric information, raising the
possibility of addressing this challenge. To this end, we present V2X-R, the
first simulated V2X dataset incorporating LiDAR, camera, and 4D radar. V2X-R
contains 12,079 scenarios with 37,727 frames of LiDAR and 4D radar point
clouds, 150,908 images, and 170,859 annotated 3D vehicle bounding boxes.
Subsequently, we propose a novel cooperative LiDAR-4D radar fusion pipeline for
3D object detection and implement it with various fusion strategies. To achieve
weather-robust detection, we additionally propose a Multi-modal Denoising
Diffusion (MDD) module in our fusion pipeline. MDD utilizes the weather-robust
4D radar features as a condition to prompt the diffusion model to denoise noisy
LiDAR features. Experiments show that our LiDAR-4D radar fusion pipeline
demonstrates superior performance in the V2X-R dataset. Over and above this,
our MDD module further improves the performance of the basic fusion model by up
to 5.73%/6.70% in foggy/snowy conditions while barely disrupting normal
performance. The dataset and code will be publicly available at:
https://github.com/ylwhxht/V2X-R.
|
2411.10411 | Onay Urfalioglu | Markus Karmann, Onay Urfalioglu | Repurposing Stable Diffusion Attention for Training-Free Unsupervised
Interactive Segmentation | Accepted by CVPR 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent progress in interactive point prompt based Image Segmentation allows
for a significant reduction in the manual effort to obtain high quality semantic
labels. State-of-the-art unsupervised methods use self-supervised pre-trained
models to obtain pseudo-labels which are used in training a prompt-based
segmentation model. In this paper, we propose a novel unsupervised and
training-free approach based solely on the self-attention of Stable Diffusion.
We interpret the self-attention tensor as a Markov transition operator, which
enables us to iteratively construct a Markov chain. Pixel-wise counting of the
required number of iterations along the Markov chain to reach a relative
probability threshold yields a Markov-iteration-map, which we simply call a
Markov-map. Compared to the raw attention maps, we show that our proposed
Markov-map has less noise, sharper semantic boundaries and more uniform values
within semantically similar regions. We integrate the Markov-map in a simple
yet effective truncated nearest neighbor framework to obtain interactive point
prompt based segmentation. Despite being training-free, we experimentally show
that our approach yields excellent results in terms of Number of Clicks (NoC),
even outperforming state-of-the-art training based unsupervised methods in most
of the datasets. Code is available at https://github.com/mkarmann/m2n2.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2024 18:29:59 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 16:15:14 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Karmann",
"Markus",
""
],
[
"Urfalioglu",
"Onay",
""
]
] | TITLE: Repurposing Stable Diffusion Attention for Training-Free Unsupervised
Interactive Segmentation
ABSTRACT: Recent progress in interactive point prompt based Image Segmentation allows
for a significant reduction in the manual effort to obtain high quality semantic
labels. State-of-the-art unsupervised methods use self-supervised pre-trained
models to obtain pseudo-labels which are used in training a prompt-based
segmentation model. In this paper, we propose a novel unsupervised and
training-free approach based solely on the self-attention of Stable Diffusion.
We interpret the self-attention tensor as a Markov transition operator, which
enables us to iteratively construct a Markov chain. Pixel-wise counting of the
required number of iterations along the Markov chain to reach a relative
probability threshold yields a Markov-iteration-map, which we simply call a
Markov-map. Compared to the raw attention maps, we show that our proposed
Markov-map has less noise, sharper semantic boundaries and more uniform values
within semantically similar regions. We integrate the Markov-map in a simple
yet effective truncated nearest neighbor framework to obtain interactive point
prompt based segmentation. Despite being training-free, we experimentally show
that our approach yields excellent results in terms of Number of Clicks (NoC),
even outperforming state-of-the-art training based unsupervised methods in most
of the datasets. Code is available at https://github.com/mkarmann/m2n2.
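The Markov-map construction can be sketched directly from the description above; here a random matrix stands in for the Stable Diffusion self-attention tensor, and the threshold and iteration budget are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32 * 32                                    # tokens = flattened latent pixels
A = rng.random((n, n)) ** 4                    # surrogate for a self-attention map
P = A / A.sum(axis=1, keepdims=True)           # row-normalized Markov transition operator

click = 500                                    # index of the user's point prompt
p = np.zeros(n)
p[click] = 1.0
threshold = 1.0 / n                            # assumed relative probability threshold
markov_map = np.full(n, np.inf)                # per-pixel iteration count

for it in range(1, 51):                        # iterate the Markov chain
    p = p @ P
    newly_reached = (p >= threshold) & np.isinf(markov_map)
    markov_map[newly_reached] = it

print("fraction of pixels reached:", np.isfinite(markov_map).mean())
```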
|
2411.10867 | Aarush Sinha | Vipula Rawte, Sarthak Jain, Aarush Sinha, Garv Kaushik, Aman Bansal,
Prathiksha Rumale Vishwanath, Samyak Rajesh Jain, Aishwarya Naresh Reganti,
Vinija Jain, Aman Chadha, Amit P. Sheth, Amitava Das | ViBe: A Text-to-Video Benchmark for Evaluating Hallucination in Large
Multimodal Models | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advances in Large Multimodal Models (LMMs) have expanded their
capabilities to video understanding, with Text-to-Video (T2V) models excelling
in generating videos from textual prompts. However, they still frequently
produce hallucinated content, revealing AI-generated inconsistencies. We
introduce ViBe (https://vibe-t2v-bench.github.io/): a large-scale dataset of
hallucinated videos from open-source T2V models. We identify five major
hallucination types: Vanishing Subject, Omission Error, Numeric Variability,
Subject Dysmorphia, and Visual Incongruity. Using ten T2V models, we generated
and manually annotated 3,782 videos from 837 diverse MS COCO captions. Our
proposed benchmark includes a dataset of hallucinated videos and a
classification framework using video embeddings. ViBe serves as a critical
resource for evaluating T2V reliability and advancing hallucination detection.
We establish classification as a baseline, with the TimeSFormer + CNN ensemble
achieving the best performance (0.345 accuracy, 0.342 F1 score). While the
proposed baselines achieve only modest accuracy, this highlights the difficulty of
automated hallucination detection and the need for improved methods. Our
research aims to drive the development of more robust T2V models and evaluate
their outputs based on user preferences.
| [
{
"version": "v1",
"created": "Sat, 16 Nov 2024 19:23:12 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 18:53:09 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Rawte",
"Vipula",
""
],
[
"Jain",
"Sarthak",
""
],
[
"Sinha",
"Aarush",
""
],
[
"Kaushik",
"Garv",
""
],
[
"Bansal",
"Aman",
""
],
[
"Vishwanath",
"Prathiksha Rumale",
""
],
[
"Jain",
"Samyak Rajesh",
""
],
[
"Reganti",
"Aishwarya Naresh",
""
],
[
"Jain",
"Vinija",
""
],
[
"Chadha",
"Aman",
""
],
[
"Sheth",
"Amit P.",
""
],
[
"Das",
"Amitava",
""
]
] | TITLE: ViBe: A Text-to-Video Benchmark for Evaluating Hallucination in Large
Multimodal Models
ABSTRACT: Recent advances in Large Multimodal Models (LMMs) have expanded their
capabilities to video understanding, with Text-to-Video (T2V) models excelling
in generating videos from textual prompts. However, they still frequently
produce hallucinated content, revealing AI-generated inconsistencies. We
introduce ViBe (https://vibe-t2v-bench.github.io/): a large-scale dataset of
hallucinated videos from open-source T2V models. We identify five major
hallucination types: Vanishing Subject, Omission Error, Numeric Variability,
Subject Dysmorphia, and Visual Incongruity. Using ten T2V models, we generated
and manually annotated 3,782 videos from 837 diverse MS COCO captions. Our
proposed benchmark includes a dataset of hallucinated videos and a
classification framework using video embeddings. ViBe serves as a critical
resource for evaluating T2V reliability and advancing hallucination detection.
We establish classification as a baseline, with the TimeSFormer + CNN ensemble
achieving the best performance (0.345 accuracy, 0.342 F1 score). While the
proposed baselines achieve only modest accuracy, this highlights the difficulty of
automated hallucination detection and the need for improved methods. Our
research aims to drive the development of more robust T2V models and evaluate
their outputs based on user preferences.
|
2411.14743 | Zhengrui Guo | Zhengrui Guo, Conghao Xiong, Jiabo Ma, Qichen Sun, Lishuang Feng,
Jinzhuo Wang, Hao Chen | FOCUS: Knowledge-enhanced Adaptive Visual Compression for Few-shot Whole
Slide Image Classification | Accepted by CVPR'2025 | null | null | null | cs.CV cs.AI q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Few-shot learning presents a critical solution for cancer diagnosis in
computational pathology (CPath), addressing fundamental limitations in data
availability, particularly the scarcity of expert annotations and patient
privacy constraints. A key challenge in this paradigm stems from the inherent
disparity between the limited training set of whole slide images (WSIs) and the
enormous number of contained patches, where a significant portion of these
patches lacks diagnostically relevant information, potentially diluting the
model's ability to learn and focus on critical diagnostic features. While
recent works attempt to address this by incorporating additional knowledge,
several crucial gaps hinder further progress: (1) despite the emergence of
powerful pathology foundation models (FMs), their potential remains largely
untapped, with most approaches limiting their use to basic feature extraction;
(2) current language guidance mechanisms attempt to align text prompts with
vast numbers of WSI patches all at once, struggling to leverage rich
pathological semantic information. To this end, we introduce the
knowledge-enhanced adaptive visual compression framework, dubbed FOCUS, which
uniquely combines pathology FMs with language prior knowledge to enable a
focused analysis of diagnostically relevant regions by prioritizing
discriminative WSI patches. Our approach implements a progressive three-stage
compression strategy: we first leverage FMs for global visual redundancy
elimination, and integrate compressed features with language prompts for
semantic relevance assessment, then perform neighbor-aware visual token
filtering while preserving spatial coherence. Extensive experiments on
pathological datasets spanning breast, lung, and ovarian cancers demonstrate
its superior performance in few-shot pathology diagnosis. Codes are available
at https://github.com/dddavid4real/FOCUS.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 05:36:38 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 12:16:47 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Guo",
"Zhengrui",
""
],
[
"Xiong",
"Conghao",
""
],
[
"Ma",
"Jiabo",
""
],
[
"Sun",
"Qichen",
""
],
[
"Feng",
"Lishuang",
""
],
[
"Wang",
"Jinzhuo",
""
],
[
"Chen",
"Hao",
""
]
] | TITLE: FOCUS: Knowledge-enhanced Adaptive Visual Compression for Few-shot Whole
Slide Image Classification
ABSTRACT: Few-shot learning presents a critical solution for cancer diagnosis in
computational pathology (CPath), addressing fundamental limitations in data
availability, particularly the scarcity of expert annotations and patient
privacy constraints. A key challenge in this paradigm stems from the inherent
disparity between the limited training set of whole slide images (WSIs) and the
enormous number of contained patches, where a significant portion of these
patches lacks diagnostically relevant information, potentially diluting the
model's ability to learn and focus on critical diagnostic features. While
recent works attempt to address this by incorporating additional knowledge,
several crucial gaps hinder further progress: (1) despite the emergence of
powerful pathology foundation models (FMs), their potential remains largely
untapped, with most approaches limiting their use to basic feature extraction;
(2) current language guidance mechanisms attempt to align text prompts with
vast numbers of WSI patches all at once, struggling to leverage rich
pathological semantic information. To this end, we introduce the
knowledge-enhanced adaptive visual compression framework, dubbed FOCUS, which
uniquely combines pathology FMs with language prior knowledge to enable a
focused analysis of diagnostically relevant regions by prioritizing
discriminative WSI patches. Our approach implements a progressive three-stage
compression strategy: we first leverage FMs for global visual redundancy
elimination, and integrate compressed features with language prompts for
semantic relevance assessment, then perform neighbor-aware visual token
filtering while preserving spatial coherence. Extensive experiments on
pathological datasets spanning breast, lung, and ovarian cancers demonstrate
its superior performance in few-shot pathology diagnosis. Codes are available
at https://github.com/dddavid4real/FOCUS.
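A minimal sketch of language-guided patch compression, with random vectors standing in for foundation-model patch features and the text-prompt embedding; FOCUS's actual relevance assessment and neighbor-aware filtering are richer than this simple top-k rule.

```python
import numpy as np

rng = np.random.default_rng(0)
patch_feats = rng.normal(size=(5000, 768))     # hypothetical WSI patch embeddings
text_feat = rng.normal(size=768)               # hypothetical language-prior embedding

def top_k_relevant(patches: np.ndarray, text: np.ndarray, k: int) -> np.ndarray:
    p = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    t = text / np.linalg.norm(text)
    relevance = p @ t                           # cosine similarity to the prompt
    return np.argsort(relevance)[::-1][:k]      # indices of the k most relevant patches

kept = top_k_relevant(patch_feats, text_feat, k=512)
print(kept.shape, kept[:5])
```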
|
2411.16154 | Sizai Hou | Sizai Hou, Songze Li and Duanyi Yao | DeDe: Detecting Backdoor Samples for SSL Encoders via Decoders | To appear on CVPR 2025 | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | Self-supervised learning (SSL) is pervasively exploited in training
high-quality upstream encoders with a large amount of unlabeled data. However,
it is found to be susceptible to backdoor attacks merely via polluting a small
portion of training data. The victim encoders associate triggered inputs with
target embeddings, e.g., mapping a triggered cat image to an airplane
embedding, such that the downstream tasks inherit unintended behaviors when the
trigger is activated. Emerging backdoor attacks have shown great threats across
different SSL paradigms such as contrastive learning and CLIP, yet limited
research is devoted to defending against such attacks, and existing defenses
fall short in detecting advanced stealthy backdoors. To address the
limitations, we propose a novel detection mechanism, DeDe, which detects the
activation of backdoor mappings caused by triggered inputs on victim encoders.
Specifically, DeDe trains a decoder for any given SSL encoder using an
auxiliary dataset (which can be out-of-distribution or even slightly poisoned),
so that for any triggered input that misleads the encoder into the target
embedding, the decoder generates an output image significantly different from
the input. DeDe leverages the discrepancy between the input and the decoded
output to identify potential backdoor misbehavior during inference. We
empirically evaluate DeDe on both contrastive learning and CLIP models against
various types of backdoor attacks. Our results demonstrate promising detection
effectiveness over various advanced attacks and superior performance compared
to state-of-the-art detection methods.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 07:26:22 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 07:05:27 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hou",
"Sizai",
""
],
[
"Li",
"Songze",
""
],
[
"Yao",
"Duanyi",
""
]
] | TITLE: DeDe: Detecting Backdoor Samples for SSL Encoders via Decoders
ABSTRACT: Self-supervised learning (SSL) is pervasively exploited in training
high-quality upstream encoders with a large amount of unlabeled data. However,
it is found to be susceptible to backdoor attacks merely via polluting a small
portion of training data. The victim encoders associate triggered inputs with
target embeddings, e.g., mapping a triggered cat image to an airplane
embedding, such that the downstream tasks inherit unintended behaviors when the
trigger is activated. Emerging backdoor attacks have shown great threats across
different SSL paradigms such as contrastive learning and CLIP, yet limited
research is devoted to defending against such attacks, and existing defenses
fall short in detecting advanced stealthy backdoors. To address the
limitations, we propose a novel detection mechanism, DeDe, which detects the
activation of backdoor mappings caused by triggered inputs on victim encoders.
Specifically, DeDe trains a decoder for any given SSL encoder using an
auxiliary dataset (which can be out-of-distribution or even slightly poisoned),
so that for any triggered input that misleads the encoder into the target
embedding, the decoder generates an output image significantly different from
the input. DeDe leverages the discrepancy between the input and the decoded
output to identify potential backdoor misbehavior during inference. We
empirically evaluate DeDe on both contrastive learning and CLIP models against
various types of backdoor attacks. Our results demonstrate promising detection
effectiveness over various advanced attacks and superior performance compared
to state-of-the-art detection methods.
|
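Illustrative sketch for the DeDe record above: the abstract describes flagging triggered inputs by comparing each input with the image reconstructed by a decoder trained on top of the suspect encoder. The snippet is a generic reconstruction-discrepancy check, assuming pre-trained `encoder` and `decoder` callables and a `threshold` tuned on clean data; it is not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def detect_backdoor_samples(encoder, decoder, images, threshold):
    """Flag inputs whose decoder reconstruction diverges strongly from the input.

    encoder: suspect SSL encoder, images -> embeddings.
    decoder: decoder trained on an auxiliary dataset, embeddings -> images.
    images: float tensor of shape (N, C, H, W).
    Returns a boolean tensor of shape (N,), True = suspected triggered sample.
    """
    reconstructions = decoder(encoder(images))
    errors = F.mse_loss(reconstructions, images, reduction="none")
    errors = errors.flatten(start_dim=1).mean(dim=1)   # per-sample reconstruction error
    return errors > threshold
```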
2411.18941 | Hongda Liu | Hongda Liu, Yunfan Liu, Min Ren, Hao Wang, Yunlong Wang, Zhenan Sun | Revealing Key Details to See Differences: A Novel Prototypical
Perspective for Skeleton-based Action Recognition | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In skeleton-based action recognition, a key challenge is distinguishing
between actions with similar trajectories of joints due to the lack of
image-level details in skeletal representations. Recognizing that the
differentiation of similar actions relies on subtle motion details in specific
body parts, we direct our approach to focus on the fine-grained motion of local
skeleton components. To this end, we introduce ProtoGCN, a Graph Convolutional
Network (GCN)-based model that breaks down the dynamics of entire skeleton
sequences into a combination of learnable prototypes representing core motion
patterns of action units. By contrasting the reconstruction of prototypes,
ProtoGCN can effectively identify and enhance the discriminative representation
of similar actions. Without bells and whistles, ProtoGCN achieves
state-of-the-art performance on multiple benchmark datasets, including NTU
RGB+D, NTU RGB+D 120, Kinetics-Skeleton, and FineGYM, which demonstrates the
effectiveness of the proposed method. The code is available at
https://github.com/firework8/ProtoGCN.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 06:18:31 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 15:57:02 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Liu",
"Hongda",
""
],
[
"Liu",
"Yunfan",
""
],
[
"Ren",
"Min",
""
],
[
"Wang",
"Hao",
""
],
[
"Wang",
"Yunlong",
""
],
[
"Sun",
"Zhenan",
""
]
] | TITLE: Revealing Key Details to See Differences: A Novel Prototypical
Perspective for Skeleton-based Action Recognition
ABSTRACT: In skeleton-based action recognition, a key challenge is distinguishing
between actions with similar trajectories of joints due to the lack of
image-level details in skeletal representations. Recognizing that the
differentiation of similar actions relies on subtle motion details in specific
body parts, we direct our approach to focus on the fine-grained motion of local
skeleton components. To this end, we introduce ProtoGCN, a Graph Convolutional
Network (GCN)-based model that breaks down the dynamics of entire skeleton
sequences into a combination of learnable prototypes representing core motion
patterns of action units. By contrasting the reconstruction of prototypes,
ProtoGCN can effectively identify and enhance the discriminative representation
of similar actions. Without bells and whistles, ProtoGCN achieves
state-of-the-art performance on multiple benchmark datasets, including NTU
RGB+D, NTU RGB+D 120, Kinetics-Skeleton, and FineGYM, which demonstrates the
effectiveness of the proposed method. The code is available at
https://github.com/firework8/ProtoGCN.
|
2412.03911 | Chamuditha Jayanga Galappaththige | Chamuditha Jayanga Galappaththige, Jason Lai, Lloyd Windrim, Donald
Dansereau, Niko Suenderhauf, Dimity Miller | Multi-View Pose-Agnostic Change Localization with Zero Labels | Accepted at CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous agents often require accurate methods for detecting and localizing
changes in their environment, particularly when observations are captured from
unconstrained and inconsistent viewpoints. We propose a novel label-free,
pose-agnostic change detection method that integrates information from multiple
viewpoints to construct a change-aware 3D Gaussian Splatting (3DGS)
representation of the scene. With as few as 5 images of the post-change scene,
our approach can learn an additional change channel in a 3DGS and produce
change masks that outperform single-view techniques. Our change-aware 3D scene
representation additionally enables the generation of accurate change masks for
unseen viewpoints. Experimental results demonstrate state-of-the-art
performance in complex multi-object scenes, achieving a 1.7x and 1.5x
improvement in Mean Intersection Over Union and F1 score respectively over
other baselines. We also contribute a new real-world dataset to benchmark
change detection in diverse challenging scenes in the presence of lighting
variations.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 06:28:54 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 09:35:49 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Galappaththige",
"Chamuditha Jayanga",
""
],
[
"Lai",
"Jason",
""
],
[
"Windrim",
"Lloyd",
""
],
[
"Dansereau",
"Donald",
""
],
[
"Suenderhauf",
"Niko",
""
],
[
"Miller",
"Dimity",
""
]
] | TITLE: Multi-View Pose-Agnostic Change Localization with Zero Labels
ABSTRACT: Autonomous agents often require accurate methods for detecting and localizing
changes in their environment, particularly when observations are captured from
unconstrained and inconsistent viewpoints. We propose a novel label-free,
pose-agnostic change detection method that integrates information from multiple
viewpoints to construct a change-aware 3D Gaussian Splatting (3DGS)
representation of the scene. With as few as 5 images of the post-change scene,
our approach can learn an additional change channel in a 3DGS and produce
change masks that outperform single-view techniques. Our change-aware 3D scene
representation additionally enables the generation of accurate change masks for
unseen viewpoints. Experimental results demonstrate state-of-the-art
performance in complex multi-object scenes, achieving a 1.7x and 1.5x
improvement in Mean Intersection Over Union and F1 score respectively over
other baselines. We also contribute a new real-world dataset to benchmark
change detection in diverse challenging scenes in the presence of lighting
variations.
|
2412.08460 | Fermin Orozco | Fermin Orozco, Pedro Porto Buarque de Gusm\~ao, Hongkai Wen, Johan
Wahlstr\"om, Man Luo | Federated Learning for Traffic Flow Prediction with Synthetic Data
Augmentation | 11 pages, 7 figures, 6 tables, ACM format | null | null | null | cs.LG cs.AI cs.DC | http://creativecommons.org/licenses/by/4.0/ | Deep-learning based traffic prediction models require vast amounts of data to
learn embedded spatial and temporal dependencies. The inherent privacy and
commercial sensitivity of such data has encouraged a shift towards
decentralised data-driven methods, such as Federated Learning (FL). Under a
traditional Machine Learning paradigm, traffic flow prediction models can
capture spatial and temporal relationships within centralised data. In reality,
traffic data is likely distributed across separate data silos owned by multiple
stakeholders. In this work, a cross-silo FL setting is motivated to facilitate
stakeholder collaboration for optimal traffic flow prediction applications.
This work introduces an FL framework, referred to as FedTPS, to generate
synthetic data to augment each client's local dataset by training a
diffusion-based trajectory generation model through FL. The proposed framework
is evaluated on a large-scale real-world ride-sharing dataset using various FL
methods and Traffic Flow Prediction models, including a novel prediction model
we introduce, which leverages Temporal and Graph Attention mechanisms to learn
the Spatio-Temporal dependencies embedded within regional traffic flow data.
Experimental results show that FedTPS outperforms multiple other FL baselines
with respect to global model performance.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 15:25:38 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 13:29:36 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Orozco",
"Fermin",
""
],
[
"de Gusmão",
"Pedro Porto Buarque",
""
],
[
"Wen",
"Hongkai",
""
],
[
"Wahlström",
"Johan",
""
],
[
"Luo",
"Man",
""
]
] | TITLE: Federated Learning for Traffic Flow Prediction with Synthetic Data
Augmentation
ABSTRACT: Deep-learning based traffic prediction models require vast amounts of data to
learn embedded spatial and temporal dependencies. The inherent privacy and
commercial sensitivity of such data has encouraged a shift towards
decentralised data-driven methods, such as Federated Learning (FL). Under a
traditional Machine Learning paradigm, traffic flow prediction models can
capture spatial and temporal relationships within centralised data. In reality,
traffic data is likely distributed across separate data silos owned by multiple
stakeholders. In this work, a cross-silo FL setting is motivated to facilitate
stakeholder collaboration for optimal traffic flow prediction applications.
This work introduces an FL framework, referred to as FedTPS, to generate
synthetic data to augment each client's local dataset by training a
diffusion-based trajectory generation model through FL. The proposed framework
is evaluated on a large-scale real-world ride-sharing dataset using various FL
methods and Traffic Flow Prediction models, including a novel prediction model
we introduce, which leverages Temporal and Graph Attention mechanisms to learn
the Spatio-Temporal dependencies embedded within regional traffic flow data.
Experimental results show that FedTPS outperforms multiple other FL baselines
with respect to global model performance.
|
2412.08949 | Xinyue Liu | Xinyue Liu, Jianyuan Wang, Biao Leng, Shuo Zhang | Multimodal Industrial Anomaly Detection by Crossmodal Reverse
Distillation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge distillation (KD) has been widely studied in unsupervised
Industrial Image Anomaly Detection (AD), but its application to unsupervised
multimodal AD remains underexplored. Existing KD-based methods for multimodal
AD that use fused multimodal features to obtain teacher representations face
challenges. Anomalies in one modality may not be effectively captured in the
fused teacher features, leading to detection failures. Besides, these methods
do not fully leverage the rich intra- and inter-modality information. In this
paper, we propose Crossmodal Reverse Distillation (CRD) based on Multi-branch
design to realize Multimodal Industrial AD. By assigning independent branches
to each modality, our method enables finer detection of anomalies within each
modality. Furthermore, we enhance the interaction between modalities during the
distillation process by designing Crossmodal Filter and Amplifier. With the
idea of crossmodal mapping, the student network is allowed to better learn
normal features while anomalies in all modalities are ensured to be effectively
detected. Experiments on the MVTec 3D-AD dataset demonstrate
that our method achieves state-of-the-art performance in multimodal anomaly
detection and localization.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 05:26:50 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 02:17:32 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Liu",
"Xinyue",
""
],
[
"Wang",
"Jianyuan",
""
],
[
"Leng",
"Biao",
""
],
[
"Zhang",
"Shuo",
""
]
] | TITLE: Multimodal Industrial Anomaly Detection by Crossmodal Reverse
Distillation
ABSTRACT: Knowledge distillation (KD) has been widely studied in unsupervised
Industrial Image Anomaly Detection (AD), but its application to unsupervised
multimodal AD remains underexplored. Existing KD-based methods for multimodal
AD that use fused multimodal features to obtain teacher representations face
challenges. Anomalies in one modality may not be effectively captured in the
fused teacher features, leading to detection failures. Besides, these methods
do not fully leverage the rich intra- and inter-modality information. In this
paper, we propose Crossmodal Reverse Distillation (CRD) based on Multi-branch
design to realize Multimodal Industrial AD. By assigning independent branches
to each modality, our method enables finer detection of anomalies within each
modality. Furthermore, we enhance the interaction between modalities during the
distillation process by designing Crossmodal Filter and Amplifier. With the
idea of crossmodal mapping, the student network is allowed to better learn
normal features while anomalies in all modalities are ensured to be effectively
detected. Experiments on the MVTec 3D-AD dataset demonstrate
that our method achieves state-of-the-art performance in multimodal anomaly
detection and localization.
|
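Illustrative sketch for the CRD record above: the abstract assigns one teacher-student branch per modality and scores anomalies from their feature discrepancies. The snippet shows a generic per-modality discrepancy map (assuming all branches output features at the same spatial resolution); the paper's crossmodal filter and amplifier modules are not represented.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def multimodal_anomaly_map(teachers, students, inputs):
    """Sum per-modality teacher/student feature discrepancies into one anomaly map.

    teachers, students: dicts mapping modality name -> feature extractor.
    inputs: dict mapping modality name -> tensor of shape (N, C, H, W).
    Assumes every branch outputs features with the same spatial size (h, w).
    """
    score = None
    for modality, x in inputs.items():
        t_feat = teachers[modality](x)                 # (N, D, h, w)
        s_feat = students[modality](x)
        # 1 - cosine similarity over channels, per spatial location: (N, 1, h, w).
        d = 1.0 - F.cosine_similarity(t_feat, s_feat, dim=1, eps=1e-6).unsqueeze(1)
        score = d if score is None else score + d
    return score
```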
2412.10116 | Zican Shi | Zican Shi, Jing Hu, Jie Ren, Hengkang Ye, Xuyang Yuan, Yan Ouyang, Jia
He, Bo Ji, Junyu Guo | HS-FPN: High Frequency and Spatial Perception FPN for Tiny Object
Detection | 13 pages,12 figures,7 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The introduction of Feature Pyramid Network (FPN) has significantly improved
object detection performance. However, substantial challenges remain in
detecting tiny objects, as their features occupy only a very small proportion
of the feature maps. Although FPN integrates multi-scale features, it does not
directly enhance or enrich the features of tiny objects. Furthermore, FPN lacks
spatial perception ability. To address these issues, we propose a novel High
Frequency and Spatial Perception Feature Pyramid Network (HS-FPN) with two
innovative modules. First, we designed a high frequency perception module (HFP)
that generates high frequency responses through high pass filters. These high
frequency responses are used as mask weights from both spatial and channel
perspectives to enrich and highlight the features of tiny objects in the
original feature maps. Second, we developed a spatial dependency perception
module (SDP) to capture the spatial dependencies that FPN lacks. Our
experiments demonstrate that detectors based on HS-FPN exhibit competitive
advantages over state-of-the-art models on the AI-TOD dataset for tiny object
detection.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 12:59:12 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Dec 2024 06:49:13 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 08:09:25 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Shi",
"Zican",
""
],
[
"Hu",
"Jing",
""
],
[
"Ren",
"Jie",
""
],
[
"Ye",
"Hengkang",
""
],
[
"Yuan",
"Xuyang",
""
],
[
"Ouyang",
"Yan",
""
],
[
"He",
"Jia",
""
],
[
"Ji",
"Bo",
""
],
[
"Guo",
"Junyu",
""
]
] | TITLE: HS-FPN: High Frequency and Spatial Perception FPN for Tiny Object
Detection
ABSTRACT: The introduction of Feature Pyramid Network (FPN) has significantly improved
object detection performance. However, substantial challenges remain in
detecting tiny objects, as their features occupy only a very small proportion
of the feature maps. Although FPN integrates multi-scale features, it does not
directly enhance or enrich the features of tiny objects. Furthermore, FPN lacks
spatial perception ability. To address these issues, we propose a novel High
Frequency and Spatial Perception Feature Pyramid Network (HS-FPN) with two
innovative modules. First, we designed a high frequency perception module (HFP)
that generates high frequency responses through high pass filters. These high
frequency responses are used as mask weights from both spatial and channel
perspectives to enrich and highlight the features of tiny objects in the
original feature maps. Second, we developed a spatial dependency perception
module (SDP) to capture the spatial dependencies that FPN lacks. Our
experiments demonstrate that detectors based on HS-FPN exhibit competitive
advantages over state-of-the-art models on the AI-TOD dataset for tiny object
detection.
|
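Illustrative sketch for the HS-FPN record above: the abstract describes turning high-frequency responses into mask weights that highlight tiny objects. The snippet uses a simple blur-subtraction high-pass filter as a stand-in; the paper's actual HFP module design is not specified here.

```python
import torch
import torch.nn.functional as F

def high_frequency_spatial_mask(features, kernel_size=3):
    """Reweight an FPN feature map by a crude high-frequency response.

    features: tensor of shape (N, C, H, W).
    A low-pass (average-pooled) copy is subtracted to approximate a high-pass
    filter; its sigmoid becomes a spatial mask emphasizing fine-detail regions.
    """
    pad = kernel_size // 2
    low_pass = F.avg_pool2d(features, kernel_size, stride=1, padding=pad)
    high_pass = features - low_pass
    mask = torch.sigmoid(high_pass.mean(dim=1, keepdim=True))   # (N, 1, H, W)
    return features + features * mask                           # residual reweighting
```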
2412.12318 | Shuzhou Yuan | Shuzhou Yuan, Jingyi Sun, Ran Zhang, Michael F\"arber, Steffen Eger,
Pepa Atanasova, Isabelle Augenstein | Graph-Guided Textual Explanation Generation Framework | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Natural language explanations (NLEs) are commonly used to provide plausible
free-text explanations of a model's reasoning about its predictions. However,
recent work has questioned their faithfulness, as they may not accurately
reflect the model's internal reasoning process regarding its predicted answer.
In contrast, highlight explanations--input fragments critical for the model's
predicted answers--exhibit measurable faithfulness. Building on this
foundation, we propose G-Tex, a Graph-Guided Textual Explanation Generation
framework designed to enhance the faithfulness of NLEs. Specifically, highlight
explanations are first extracted as faithful cues reflecting the model's
reasoning logic toward answer prediction. They are subsequently encoded through
a graph neural network layer to guide the NLE generation, which aligns the
generated explanations with the model's underlying reasoning toward the
predicted answer. Experiments on T5 and BART using three reasoning datasets
show that G-Tex improves NLE faithfulness by up to 12.18% compared to baseline
methods. Additionally, G-Tex generates NLEs with greater semantic and lexical
similarity to human-written ones. Human evaluations show that G-Tex can
decrease redundant content and enhance the overall quality of NLEs. Our work
presents a novel method for explicitly guiding NLE generation to enhance
faithfulness, serving as a foundation for addressing broader criteria in NLE
and generated text.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 19:35:55 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Feb 2025 07:08:17 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 15:13:26 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yuan",
"Shuzhou",
""
],
[
"Sun",
"Jingyi",
""
],
[
"Zhang",
"Ran",
""
],
[
"Färber",
"Michael",
""
],
[
"Eger",
"Steffen",
""
],
[
"Atanasova",
"Pepa",
""
],
[
"Augenstein",
"Isabelle",
""
]
] | TITLE: Graph-Guided Textual Explanation Generation Framework
ABSTRACT: Natural language explanations (NLEs) are commonly used to provide plausible
free-text explanations of a model's reasoning about its predictions. However,
recent work has questioned their faithfulness, as they may not accurately
reflect the model's internal reasoning process regarding its predicted answer.
In contrast, highlight explanations--input fragments critical for the model's
predicted answers--exhibit measurable faithfulness. Building on this
foundation, we propose G-Tex, a Graph-Guided Textual Explanation Generation
framework designed to enhance the faithfulness of NLEs. Specifically, highlight
explanations are first extracted as faithful cues reflecting the model's
reasoning logic toward answer prediction. They are subsequently encoded through
a graph neural network layer to guide the NLE generation, which aligns the
generated explanations with the model's underlying reasoning toward the
predicted answer. Experiments on T5 and BART using three reasoning datasets
show that G-Tex improves NLE faithfulness by up to 12.18% compared to baseline
methods. Additionally, G-Tex generates NLEs with greater semantic and lexical
similarity to human-written ones. Human evaluations show that G-Tex can
decrease redundant content and enhance the overall quality of NLEs. Our work
presents a novel method for explicitly guiding NLE generation to enhance
faithfulness, serving as a foundation for addressing broader criteria in NLE
and generated text.
|
2412.13176 | Andrea Dunn Beltran | Andrea Dunn Beltran, Daniel Rho, Stephen Pizer, Marc Niethammer, Roni
Sengupta | NFL-BA: Improving Endoscopic SLAM with Near-Field Light Bundle
Adjustment | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simultaneous Localization And Mapping (SLAM) from endoscopy videos can enable
autonomous navigation, guidance to unsurveyed regions, blindspot detections,
and 3D visualizations, which can significantly improve patient outcomes and
endoscopy experience for both physicians and patients. Existing dense SLAM
algorithms often assume distant and static lighting and optimize scene geometry
and camera parameters by minimizing a photometric rendering loss, often called
Photometric Bundle Adjustment. However, endoscopy videos exhibit dynamic
near-field lighting due to the co-located light and camera moving extremely
close to the surface. In addition, low texture surfaces in endoscopy videos
cause photometric bundle adjustment of the existing SLAM frameworks to perform
poorly compared to indoor/outdoor scenes. To mitigate this problem, we
introduce Near-Field Lighting Bundle Adjustment Loss (NFL-BA) which explicitly
models near-field lighting as a part of Bundle Adjustment loss and enables
better performance for low texture surfaces. Our proposed NFL-BA can be applied
to any neural-rendering based SLAM framework. We show that replacing the
traditional photometric bundle adjustment loss with our proposed NFL-BA yields
improvements for both neural implicit SLAM and 3DGS SLAM. In addition to
producing state-of-the-art tracking and mapping results on the colonoscopy C3VD
dataset, we also show improvement on real colonoscopy videos. See results at
https://asdunnbe.github.io/NFL-BA/
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 18:54:28 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 18:38:44 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Beltran",
"Andrea Dunn",
""
],
[
"Rho",
"Daniel",
""
],
[
"Pizer",
"Stephen",
""
],
[
"Niethammer",
"Marc",
""
],
[
"Sengupta",
"Roni",
""
]
] | TITLE: NFL-BA: Improving Endoscopic SLAM with Near-Field Light Bundle
Adjustment
ABSTRACT: Simultaneous Localization And Mapping (SLAM) from endoscopy videos can enable
autonomous navigation, guidance to unsurveyed regions, blindspot detections,
and 3D visualizations, which can significantly improve patient outcomes and
endoscopy experience for both physicians and patients. Existing dense SLAM
algorithms often assume distant and static lighting and optimize scene geometry
and camera parameters by minimizing a photometric rendering loss, often called
Photometric Bundle Adjustment. However, endoscopy videos exhibit dynamic
near-field lighting due to the co-located light and camera moving extremely
close to the surface. In addition, low texture surfaces in endoscopy videos
cause photometric bundle adjustment of the existing SLAM frameworks to perform
poorly compared to indoor/outdoor scenes. To mitigate this problem, we
introduce Near-Field Lighting Bundle Adjustment Loss (NFL-BA) which explicitly
models near-field lighting as a part of Bundle Adjustment loss and enables
better performance for low texture surfaces. Our proposed NFL-BA can be applied
to any neural-rendering based SLAM framework. We show that replacing the
traditional photometric bundle adjustment loss with our proposed NFL-BA yields
improvements for both neural implicit SLAM and 3DGS SLAM. In addition to
producing state-of-the-art tracking and mapping results on the colonoscopy C3VD
dataset, we also show improvement on real colonoscopy videos. See results at
https://asdunnbe.github.io/NFL-BA/
|
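Illustrative sketch for the NFL-BA record above: the abstract models co-located near-field lighting inside the photometric bundle-adjustment loss. The snippet uses a generic inverse-square falloff with a cosine shading term as a placeholder lighting model; the paper's actual formulation may differ, and all tensor names are assumptions.

```python
import torch
import torch.nn.functional as F

def near_field_photometric_loss(albedo, depth, normals, view_dirs, target):
    """Photometric loss under a simple co-located near-field light model.

    albedo:    (..., 3) rendered surface color.
    depth:     (...,)   distance from the camera/light to the surface.
    normals:   (..., 3) unit surface normals.
    view_dirs: (..., 3) unit directions from the surface toward the camera/light.
    target:    (..., 3) observed pixel colors.
    """
    cos_term = (normals * view_dirs).sum(dim=-1, keepdim=True).clamp(min=0.0)
    falloff = 1.0 / (depth.unsqueeze(-1) ** 2 + 1e-6)   # inverse-square attenuation
    predicted = albedo * cos_term * falloff
    return F.l1_loss(predicted, target)
```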
2412.18011 | Mathieu Ravaut | Hailin Chen, Fangkai Jiao, Mathieu Ravaut, Nawshad Farruque, Xuan Phi
Nguyen, Chengwei Qin, Manan Dey, Bosheng Ding, Caiming Xiong, Shafiq Joty,
Yingbo Zhou | StructTest: Benchmarking LLMs' Reasoning through Compositional
Structured Outputs | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancement of large language models (LLMs) demands robust,
unbiased, and scalable evaluation methods. However, human annotations are
costly to scale, model-based evaluations are susceptible to stylistic biases,
and target-answer-based benchmarks are vulnerable to data contamination and
cheating. To address these limitations, we propose StructTest, a novel
benchmark that evaluates LLMs on their ability to follow compositional
instructions and generate structured outputs, providing an unbiased,
cost-effective, and difficult-to-cheat evaluation framework. Assessments are
conducted deterministically using a rule-based evaluator, which can be easily
extended to new tasks and datasets. By testing structured outputs across
diverse domains including Summarization, Code, HTML, and Math, and evaluating
17 popular LLMs, we demonstrate that StructTest remains challenging even for
top-performing models like Deepseek-V3/R1 and GPT-4o, establishing it as a
robust proxy for measuring reasoning capabilities. We believe StructTest offers
a critical and complementary approach to achieving objective and comprehensive
model evaluation.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 22:08:40 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 19:37:12 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chen",
"Hailin",
""
],
[
"Jiao",
"Fangkai",
""
],
[
"Ravaut",
"Mathieu",
""
],
[
"Farruque",
"Nawshad",
""
],
[
"Nguyen",
"Xuan Phi",
""
],
[
"Qin",
"Chengwei",
""
],
[
"Dey",
"Manan",
""
],
[
"Ding",
"Bosheng",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Joty",
"Shafiq",
""
],
[
"Zhou",
"Yingbo",
""
]
] | TITLE: StructTest: Benchmarking LLMs' Reasoning through Compositional
Structured Outputs
ABSTRACT: The rapid advancement of large language models (LLMs) demands robust,
unbiased, and scalable evaluation methods. However, human annotations are
costly to scale, model-based evaluations are susceptible to stylistic biases,
and target-answer-based benchmarks are vulnerable to data contamination and
cheating. To address these limitations, we propose StructTest, a novel
benchmark that evaluates LLMs on their ability to follow compositional
instructions and generate structured outputs, providing an unbiased,
cost-effective, and difficult-to-cheat evaluation framework. Assessments are
conducted deterministically using a rule-based evaluator, which can be easily
extended to new tasks and datasets. By testing structured outputs across
diverse domains including Summarization, Code, HTML, and Math, and evaluating
17 popular LLMs, we demonstrate that StructTest remains challenging even for
top-performing models like Deepseek-V3/R1 and GPT-4o, establishing it as a
robust proxy for measuring reasoning capabilities. We believe StructTest offers
a critical and complementary approach to achieving objective and comprehensive
model evaluation.
|
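Illustrative sketch for the StructTest record above: the abstract describes deterministic, rule-based checking of compositional formatting instructions. The toy check below (an invented instruction about bullet counts and lengths, not one of the benchmark's real rules) shows the flavor of such an evaluator.

```python
def follows_bullet_instruction(output: str, n_bullets: int, max_words: int) -> bool:
    """Check: exactly n_bullets lines, each starting '- ' with <= max_words words."""
    lines = [line.strip() for line in output.splitlines() if line.strip()]
    bullets = [line for line in lines if line.startswith("- ")]
    if len(bullets) != n_bullets or len(bullets) != len(lines):
        return False   # wrong bullet count, or stray non-bullet lines
    return all(len(b[2:].split()) <= max_words for b in bullets)


# A compliant three-bullet answer passes; a prose answer fails.
assert follows_bullet_instruction("- one\n- two\n- three", n_bullets=3, max_words=5)
assert not follows_bullet_instruction("Here is a paragraph instead.", 3, 5)
```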
2412.20392 | Zhifang Zhang | Zhifang Zhang, Shuo He, Haobo Wang, Bingquan Shen, Lei Feng | Defending Multimodal Backdoored Models by Repulsive Visual Prompt Tuning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multimodal contrastive learning models (e.g., CLIP) can learn high-quality
representations from large-scale image-text datasets, yet they exhibit
significant vulnerabilities to backdoor attacks, raising serious safety
concerns. In this paper, we disclose that CLIP's vulnerabilities primarily stem
from its excessive encoding of class-irrelevant features, which can compromise
the model's visual feature resistivity to input perturbations, making it more
susceptible to capturing the trigger patterns inserted by backdoor attacks.
Inspired by this finding, we propose Repulsive Visual Prompt Tuning (RVPT), a
novel defense approach that employs specially designed deep visual prompt
tuning and feature-repelling loss to eliminate excessive class-irrelevant
features while simultaneously optimizing cross-entropy loss to maintain clean
accuracy. Unlike existing multimodal backdoor defense methods that typically
require the availability of poisoned data or involve fine-tuning the entire
model, RVPT leverages few-shot downstream clean samples and only tunes a small
number of parameters. Empirical results demonstrate that RVPT tunes only 0.27\%
of the parameters relative to CLIP, yet it significantly outperforms
state-of-the-art baselines, reducing the attack success rate from 67.53\% to
2.76\% against SoTA attacks and effectively generalizing its defensive
capabilities across multiple datasets.
| [
{
"version": "v1",
"created": "Sun, 29 Dec 2024 08:09:20 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 13:29:43 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhang",
"Zhifang",
""
],
[
"He",
"Shuo",
""
],
[
"Wang",
"Haobo",
""
],
[
"Shen",
"Bingquan",
""
],
[
"Feng",
"Lei",
""
]
] | TITLE: Defending Multimodal Backdoored Models by Repulsive Visual Prompt Tuning
ABSTRACT: Multimodal contrastive learning models (e.g., CLIP) can learn high-quality
representations from large-scale image-text datasets, yet they exhibit
significant vulnerabilities to backdoor attacks, raising serious safety
concerns. In this paper, we disclose that CLIP's vulnerabilities primarily stem
from its excessive encoding of class-irrelevant features, which can compromise
the model's visual feature resistivity to input perturbations, making it more
susceptible to capturing the trigger patterns inserted by backdoor attacks.
Inspired by this finding, we propose Repulsive Visual Prompt Tuning (RVPT), a
novel defense approach that employs specially designed deep visual prompt
tuning and feature-repelling loss to eliminate excessive class-irrelevant
features while simultaneously optimizing cross-entropy loss to maintain clean
accuracy. Unlike existing multimodal backdoor defense methods that typically
require the availability of poisoned data or involve fine-tuning the entire
model, RVPT leverages few-shot downstream clean samples and only tunes a small
number of parameters. Empirical results demonstrate that RVPT tunes only 0.27\%
of the parameters relative to CLIP, yet it significantly outperforms
state-of-the-art baselines, reducing the attack success rate from 67.53\% to
2.76\% against SoTA attacks and effectively generalizing its defensive
capabilities across multiple datasets.
|
2501.00895 | Liu Chenyang | Chenyang Liu, Keyan Chen, Rui Zhao, Zhengxia Zou, and Zhenwei Shi | Text2Earth: Unlocking Text-driven Remote Sensing Image Generation with a
Global-Scale Dataset and a Foundation Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative foundation models have advanced large-scale text-driven natural
image generation, becoming a prominent research trend across various vertical
domains. However, in the remote sensing field, there is still a lack of
research on large-scale text-to-image (text2image) generation technology.
Existing remote sensing image-text datasets are small in scale and confined to
specific geographic areas and scene types. Besides, existing text2image methods
have struggled to achieve global-scale, multi-resolution controllable, and
unbounded image generation. To address these challenges, this paper presents
two key contributions: the Git-10M dataset and the Text2Earth foundation model.
Git-10M is a global-scale image-text dataset comprising 10.5 million image-text
pairs, 5 times larger than the previous largest one. The dataset covers a wide
range of geographic scenes and contains resolution information, significantly
surpassing existing datasets in both size and diversity. Building on Git-10M,
we propose Text2Earth, a 1.3 billion parameter generative foundation model
based on the diffusion framework to model global-scale remote sensing scenes.
Text2Earth integrates a resolution guidance mechanism, enabling users to
specify image resolutions. A dynamic condition adaptation strategy is proposed
for training and inference to improve image quality. Text2Earth excels in
zero-shot text2image generation and demonstrates robust generalization and
flexibility across multiple tasks, including unbounded scene construction,
image editing, and cross-modal image generation. This robust capability
surpasses previous models restricted to the basic fixed size and limited scene
types. On the previous benchmark dataset, Text2Earth outperforms previous
models with an improvement of +26.23 FID and +20.95% Zero-shot Cls-OA
metric. Our project page is https://chen-yang-liu.github.io/Text2Earth
| [
{
"version": "v1",
"created": "Wed, 1 Jan 2025 16:56:43 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 13:03:26 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Liu",
"Chenyang",
""
],
[
"Chen",
"Keyan",
""
],
[
"Zhao",
"Rui",
""
],
[
"Zou",
"Zhengxia",
""
],
[
"Shi",
"Zhenwei",
""
]
] | TITLE: Text2Earth: Unlocking Text-driven Remote Sensing Image Generation with a
Global-Scale Dataset and a Foundation Model
ABSTRACT: Generative foundation models have advanced large-scale text-driven natural
image generation, becoming a prominent research trend across various vertical
domains. However, in the remote sensing field, there is still a lack of
research on large-scale text-to-image (text2image) generation technology.
Existing remote sensing image-text datasets are small in scale and confined to
specific geographic areas and scene types. Besides, existing text2image methods
have struggled to achieve global-scale, multi-resolution controllable, and
unbounded image generation. To address these challenges, this paper presents
two key contributions: the Git-10M dataset and the Text2Earth foundation model.
Git-10M is a global-scale image-text dataset comprising 10.5 million image-text
pairs, 5 times larger than the previous largest one. The dataset covers a wide
range of geographic scenes and contains resolution information, significantly
surpassing existing datasets in both size and diversity. Building on Git-10M,
we propose Text2Earth, a 1.3 billion parameter generative foundation model
based on the diffusion framework to model global-scale remote sensing scenes.
Text2Earth integrates a resolution guidance mechanism, enabling users to
specify image resolutions. A dynamic condition adaptation strategy is proposed
for training and inference to improve image quality. Text2Earth excels in
zero-shot text2image generation and demonstrates robust generalization and
flexibility across multiple tasks, including unbounded scene construction,
image editing, and cross-modal image generation. This robust capability
surpasses previous models restricted to the basic fixed size and limited scene
types. On the previous benchmark dataset, Text2Earth outperforms previous
models with an improvement of +26.23 FID and +20.95% Zero-shot Cls-OA
metric. Our project page is https://chen-yang-liu.github.io/Text2Earth
|
2501.04004 | Xiang Xu | Xiang Xu and Lingdong Kong and Hui Shuai and Liang Pan and Ziwei Liu
and Qingshan Liu | LiMoE: Mixture of LiDAR Representation Learners from Automotive Scenes | CVPR 2025; 27 pages, 17 figures, 10 tables; Project Page at
https://ldkong.com/LiMoE | null | null | null | cs.CV cs.LG cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | LiDAR data pretraining offers a promising approach to leveraging large-scale,
readily available datasets for enhanced data utilization. However, existing
methods predominantly focus on sparse voxel representation, overlooking the
complementary attributes provided by other LiDAR representations. In this work,
we propose LiMoE, a framework that integrates the Mixture of Experts (MoE)
paradigm into LiDAR data representation learning to synergistically combine
multiple representations, such as range images, sparse voxels, and raw points.
Our approach consists of three stages: i) Image-to-LiDAR Pretraining, which
transfers prior knowledge from images to point clouds across different
representations; ii) Contrastive Mixture Learning (CML), which uses MoE to
adaptively activate relevant attributes from each representation and distills
these mixed features into a unified 3D network; iii) Semantic Mixture
Supervision (SMS), which combines semantic logits from multiple representations
to boost downstream segmentation performance. Extensive experiments across
eleven large-scale LiDAR datasets demonstrate the effectiveness and superiority
of our approach. The code has been made publicly accessible.
| [
{
"version": "v1",
"created": "Tue, 7 Jan 2025 18:59:58 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 13:53:48 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Xu",
"Xiang",
""
],
[
"Kong",
"Lingdong",
""
],
[
"Shuai",
"Hui",
""
],
[
"Pan",
"Liang",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Liu",
"Qingshan",
""
]
] | TITLE: LiMoE: Mixture of LiDAR Representation Learners from Automotive Scenes
ABSTRACT: LiDAR data pretraining offers a promising approach to leveraging large-scale,
readily available datasets for enhanced data utilization. However, existing
methods predominantly focus on sparse voxel representation, overlooking the
complementary attributes provided by other LiDAR representations. In this work,
we propose LiMoE, a framework that integrates the Mixture of Experts (MoE)
paradigm into LiDAR data representation learning to synergistically combine
multiple representations, such as range images, sparse voxels, and raw points.
Our approach consists of three stages: i) Image-to-LiDAR Pretraining, which
transfers prior knowledge from images to point clouds across different
representations; ii) Contrastive Mixture Learning (CML), which uses MoE to
adaptively activate relevant attributes from each representation and distills
these mixed features into a unified 3D network; iii) Semantic Mixture
Supervision (SMS), which combines semantic logits from multiple representations
to boost downstream segmentation performance. Extensive experiments across
eleven large-scale LiDAR datasets demonstrate the effectiveness and superiority
of our approach. The code has been made publicly accessible.
|
2501.05488 | Matt Schwartz | Patrick Dermyer, Angad Kalra, Matt Schwartz | EndoDINO: A Foundation Model for GI Endoscopy | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | In this work, we present EndoDINO, a foundation model for GI endoscopy tasks
that achieves strong generalizability by pre-training on a well-curated image
dataset sampled from the largest known GI endoscopy video dataset in the
literature. Specifically, we pre-trained ViT models with 1B, 307M, and 86M
parameters using datasets ranging from 100K to 10M curated images. Using
EndoDINO as a frozen feature encoder, we achieved state-of-the-art performance
in anatomical landmark classification, polyp segmentation, and Mayo endoscopic
scoring (MES) for ulcerative colitis with only simple decoder heads.
| [
{
"version": "v1",
"created": "Wed, 8 Jan 2025 18:57:05 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Dermyer",
"Patrick",
""
],
[
"Kalra",
"Angad",
""
],
[
"Schwartz",
"Matt",
""
]
] | TITLE: EndoDINO: A Foundation Model for GI Endoscopy
ABSTRACT: In this work, we present EndoDINO, a foundation model for GI endoscopy tasks
that achieves strong generalizability by pre-training on a well-curated image
dataset sampled from the largest known GI endoscopy video dataset in the
literature. Specifically, we pre-trained ViT models with 1B, 307M, and 86M
parameters using datasets ranging from 100K to 10M curated images. Using
EndoDINO as a frozen feature encoder, we achieved state-of-the-art performance
in anatomical landmark classification, polyp segmentation, and Mayo endoscopic
scoring (MES) for ulcerative colitis with only simple decoder heads.
|
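Illustrative sketch for the EndoDINO record above: the abstract evaluates the model as a frozen feature encoder with simple decoder heads. The snippet shows the generic frozen-encoder plus linear-head pattern (a linear probe for classification); the paper's actual heads for segmentation and MES scoring are not reproduced.

```python
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    """Simple classification head on top of a frozen foundation-model encoder."""

    def __init__(self, encoder, feature_dim, num_classes):
        super().__init__()
        self.encoder = encoder.eval()            # frozen feature extractor
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        self.head = nn.Linear(feature_dim, num_classes)

    def forward(self, images):
        with torch.no_grad():
            features = self.encoder(images)      # (N, feature_dim) pooled features
        return self.head(features)
```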
2501.06187 | Tsai-Shien Chen | Tsai-Shien Chen, Aliaksandr Siarohin, Willi Menapace, Yuwei Fang, Kwot
Sin Lee, Ivan Skorokhodov, Kfir Aberman, Jun-Yan Zhu, Ming-Hsuan Yang, Sergey
Tulyakov | Multi-subject Open-set Personalization in Video Generation | CVPR 2025. Project page:
https://snap-research.github.io/open-set-video-personalization/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video personalization methods allow us to synthesize videos with specific
concepts such as people, pets, and places. However, existing methods often
focus on limited domains, require time-consuming optimization per subject, or
support only a single subject. We present Video Alchemist $-$ a video model
with built-in multi-subject, open-set personalization capabilities for both
foreground objects and background, eliminating the need for time-consuming
test-time optimization. Our model is built on a new Diffusion Transformer
module that fuses each conditional reference image and its corresponding
subject-level text prompt with cross-attention layers. Developing such a large
model presents two main challenges: dataset and evaluation. First, as paired
datasets of reference images and videos are extremely hard to collect, we
sample selected video frames as reference images and synthesize a clip of the
target video. However, while models can easily denoise training videos given
reference frames, they fail to generalize to new contexts. To mitigate this
issue, we design a new automatic data construction pipeline with extensive
image augmentations. Second, evaluating open-set video personalization is a
challenge in itself. To address this, we introduce a personalization benchmark
that focuses on accurate subject fidelity and supports diverse personalization
scenarios. Finally, our extensive experiments show that our method
significantly outperforms existing personalization methods in both quantitative
and qualitative evaluations.
| [
{
"version": "v1",
"created": "Fri, 10 Jan 2025 18:59:54 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 17:59:56 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chen",
"Tsai-Shien",
""
],
[
"Siarohin",
"Aliaksandr",
""
],
[
"Menapace",
"Willi",
""
],
[
"Fang",
"Yuwei",
""
],
[
"Lee",
"Kwot Sin",
""
],
[
"Skorokhodov",
"Ivan",
""
],
[
"Aberman",
"Kfir",
""
],
[
"Zhu",
"Jun-Yan",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Tulyakov",
"Sergey",
""
]
] | TITLE: Multi-subject Open-set Personalization in Video Generation
ABSTRACT: Video personalization methods allow us to synthesize videos with specific
concepts such as people, pets, and places. However, existing methods often
focus on limited domains, require time-consuming optimization per subject, or
support only a single subject. We present Video Alchemist $-$ a video model
with built-in multi-subject, open-set personalization capabilities for both
foreground objects and background, eliminating the need for time-consuming
test-time optimization. Our model is built on a new Diffusion Transformer
module that fuses each conditional reference image and its corresponding
subject-level text prompt with cross-attention layers. Developing such a large
model presents two main challenges: dataset and evaluation. First, as paired
datasets of reference images and videos are extremely hard to collect, we
sample selected video frames as reference images and synthesize a clip of the
target video. However, while models can easily denoise training videos given
reference frames, they fail to generalize to new contexts. To mitigate this
issue, we design a new automatic data construction pipeline with extensive
image augmentations. Second, evaluating open-set video personalization is a
challenge in itself. To address this, we introduce a personalization benchmark
that focuses on accurate subject fidelity and supports diverse personalization
scenarios. Finally, our extensive experiments show that our method
significantly outperforms existing personalization methods in both quantitative
and qualitative evaluations.
|
2501.12281 | Qishen Zhou | Qishen Zhou, Yifan Zhang, Michail A. Makridis, Anastasios Kouvelas,
Yibing Wang, Simon Hu | MoGERNN: An Inductive Traffic Predictor for Unobserved Locations in
Dynamic Sensing Networks | null | Transportation Research Part C: Emerging Technologies, Volume 174,
2025, 105080, ISSN 0968-090X | 10.1016/j.trc.2025.105080 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a partially observed road network, how can we predict the traffic state
of unobserved locations? While deep learning approaches show exceptional
performance in traffic prediction, most assume sensors at all locations of
interest, which is impractical due to financial constraints. Furthermore, these
methods typically require costly retraining when sensor configurations change.
We propose MoGERNN, an inductive spatio-temporal graph representation model, to
address these challenges. Inspired by the Mixture of Experts approach in Large
Language Models, we introduce a Mixture of Graph Expert (MoGE) block to model
complex spatial dependencies through multiple graph message aggregators and a
sparse gating network. This block estimates initial states for unobserved
locations, which are then processed by a GRU-based Encoder-Decoder that
integrates a graph message aggregator to capture spatio-temporal dependencies
and predict future states. Experiments on two real-world datasets show MoGERNN
consistently outperforms baseline methods for both observed and unobserved
locations. MoGERNN can accurately predict congestion evolution even in areas
without sensors, offering valuable information for traffic management.
Moreover, MoGERNN is adaptable to dynamic sensing networks, maintaining
competitive performance even compared to its retrained counterpart. Tests with
different numbers of available sensors confirm its consistent superiority, and
ablation studies validate the effectiveness of its key modules.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 16:52:42 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhou",
"Qishen",
""
],
[
"Zhang",
"Yifan",
""
],
[
"Makridis",
"Michail A.",
""
],
[
"Kouvelas",
"Anastasios",
""
],
[
"Wang",
"Yibing",
""
],
[
"Hu",
"Simon",
""
]
] | TITLE: MoGERNN: An Inductive Traffic Predictor for Unobserved Locations in
Dynamic Sensing Networks
ABSTRACT: Given a partially observed road network, how can we predict the traffic state
of unobserved locations? While deep learning approaches show exceptional
performance in traffic prediction, most assume sensors at all locations of
interest, which is impractical due to financial constraints. Furthermore, these
methods typically require costly retraining when sensor configurations change.
We propose MoGERNN, an inductive spatio-temporal graph representation model, to
address these challenges. Inspired by the Mixture of Experts approach in Large
Language Models, we introduce a Mixture of Graph Expert (MoGE) block to model
complex spatial dependencies through multiple graph message aggregators and a
sparse gating network. This block estimates initial states for unobserved
locations, which are then processed by a GRU-based Encoder-Decoder that
integrates a graph message aggregator to capture spatio-temporal dependencies
and predict future states. Experiments on two real-world datasets show MoGERNN
consistently outperforms baseline methods for both observed and unobserved
locations. MoGERNN can accurately predict congestion evolution even in areas
without sensors, offering valuable information for traffic management.
Moreover, MoGERNN is adaptable to dynamic sensing networks, maintaining
competitive performance even compared to its retrained counterpart. Tests with
different numbers of available sensors confirm its consistent superiority, and
ablation studies validate the effectiveness of its key modules.
|
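Illustrative sketch for the MoGERNN record above: the abstract describes a Mixture of Graph Expert block in which a sparse gating network weights multiple graph message aggregators. The snippet sketches top-k gating over a list of abstract expert modules; every name, shape, and the dense expert evaluation are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MixtureOfGraphExperts(nn.Module):
    """Blend several graph aggregators with a sparse (top-k) softmax gate."""

    def __init__(self, experts, feature_dim, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(experts)    # each expert: (x, adj) -> (N, nodes, d)
        self.gate = nn.Linear(feature_dim, len(experts))
        self.top_k = top_k

    def forward(self, x, adj):
        # Gate on a global summary of the observed node features: (N, num_experts).
        scores = self.gate(x.mean(dim=1))
        # Keep only each sample's top-k experts; mask the rest before the softmax.
        kth = scores.topk(self.top_k, dim=-1).values[..., -1:]
        weights = torch.softmax(scores.masked_fill(scores < kth, float("-inf")), dim=-1)
        # Run every expert (dense here for clarity) and take the gated combination.
        expert_out = torch.stack([e(x, adj) for e in self.experts], dim=-1)  # (N, nodes, d, E)
        return (expert_out * weights[:, None, None, :]).sum(dim=-1)
```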
2501.12372 | Yeounoh Chung | Yeounoh Chung, Gaurav T. Kakkar, Yu Gan, Brenton Milne, Fatma Ozcan | Is Long Context All You Need? Leveraging LLM's Extended Context for
NL2SQL | 13 pages, 6 figures, VLDB 2025 | null | null | null | cs.DB cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models (LLMs) have demonstrated impressive capabilities across
a range of natural language processing tasks. In particular, improvements in
reasoning abilities and the expansion of context windows have opened new
avenues for leveraging these powerful models. NL2SQL is challenging in that the
natural language question is inherently ambiguous, while the SQL generation
requires a precise understanding of complex data schema and semantics. One
approach to this semantic ambiguity is to provide more and sufficient
contextual information.
In this work, we explore the performance and the latency trade-offs of the
extended context window (a.k.a., long context) offered by Google's
state-of-the-art LLM (\textit{gemini-1.5-pro}). We study the impact of various
contextual information, including column example values, question and SQL query
pairs, user-provided hints, SQL documentation, and schema. To the best of our
knowledge, this is the first work to study how the extended context window and
extra contextual information can help NL2SQL generation with respect to both
accuracy and latency cost. We show that long context LLMs are robust and do not
get lost in the extended contextual information. Additionally, our long-context
NL2SQL pipeline based on Google's \textit{gemini-pro-1.5} achieves strong
performance on various benchmark datasets without finetuning and expensive
self-consistency based techniques.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 18:52:15 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Feb 2025 02:00:46 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Feb 2025 23:39:12 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Mar 2025 23:17:42 GMT"
},
{
"version": "v5",
"created": "Thu, 20 Mar 2025 17:39:13 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chung",
"Yeounoh",
""
],
[
"Kakkar",
"Gaurav T.",
""
],
[
"Gan",
"Yu",
""
],
[
"Milne",
"Brenton",
""
],
[
"Ozcan",
"Fatma",
""
]
] | TITLE: Is Long Context All You Need? Leveraging LLM's Extended Context for
NL2SQL
ABSTRACT: Large Language Models (LLMs) have demonstrated impressive capabilities across
a range of natural language processing tasks. In particular, improvements in
reasoning abilities and the expansion of context windows have opened new
avenues for leveraging these powerful models. NL2SQL is challenging in that the
natural language question is inherently ambiguous, while the SQL generation
requires a precise understanding of complex data schema and semantics. One
approach to this semantic ambiguity is to provide more and sufficient
contextual information.
In this work, we explore the performance and the latency trade-offs of the
extended context window (a.k.a., long context) offered by Google's
state-of-the-art LLM (\textit{gemini-1.5-pro}). We study the impact of various
contextual information, including column example values, question and SQL query
pairs, user-provided hints, SQL documentation, and schema. To the best of our
knowledge, this is the first work to study how the extended context window and
extra contextual information can help NL2SQL generation with respect to both
accuracy and latency cost. We show that long context LLMs are robust and do not
get lost in the extended contextual information. Additionally, our long-context
NL2SQL pipeline based on Google's \textit{gemini-pro-1.5} achieves strong
performance on various benchmark datasets without finetuning and expensive
self-consistency based techniques.
|
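Illustrative sketch for the long-context NL2SQL record above: the abstract studies packing schema, column example values, question-SQL pairs, hints, and SQL documentation into an extended context. The snippet assembles such a prompt; the section names and ordering are assumptions, not the paper's template.

```python
def build_nl2sql_prompt(question, schema_ddl, column_examples=None,
                        fewshot_pairs=None, hints=None, sql_docs=None):
    """Assemble a long-context NL2SQL prompt from optional context blocks."""
    sections = [
        "You are given a database schema. Write a SQL query answering the question.",
        "### Schema\n" + schema_ddl,
    ]
    if column_examples:   # e.g. {"users.country": ["US", "DE", "JP"]}
        rows = [f"{col}: {', '.join(vals)}" for col, vals in column_examples.items()]
        sections.append("### Example column values\n" + "\n".join(rows))
    if fewshot_pairs:     # e.g. [("How many users?", "SELECT COUNT(*) FROM users;")]
        shots = [f"Q: {q}\nSQL: {s}" for q, s in fewshot_pairs]
        sections.append("### Solved examples\n" + "\n\n".join(shots))
    if hints:
        sections.append("### Hints\n" + "\n".join(hints))
    if sql_docs:
        sections.append("### SQL documentation\n" + sql_docs)
    sections.append(f"### Question\n{question}\n\nSQL:")
    return "\n\n".join(sections)
```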
2501.15598 | Sichen Zhu | Sichen Zhu, Yuchen Zhu, Molei Tao, Peng Qiu | Diffusion Generative Modeling for Spatially Resolved Gene Expression
Inference from Histology Images | Accepted to ICLR 2025 | null | null | null | cs.CV cs.AI cs.LG q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatial Transcriptomics (ST) allows a high-resolution measurement of RNA
sequence abundance by systematically connecting cell morphology depicted in
Hematoxylin and Eosin (H&E) stained histology images to spatially resolved gene
expressions. ST is a time-consuming, expensive yet powerful experimental
technique that provides new opportunities to understand cancer mechanisms at a
fine-grained molecular level, which is critical for uncovering new approaches
for disease diagnosis and treatments. Here, we present $\textbf{Stem}$
($\textbf{S}$pa$\textbf{T}$ially resolved gene $\textbf{E}$xpression inference
with diffusion $\textbf{M}$odel), a novel computational tool that leverages a
conditional diffusion generative model to enable in silico gene expression
inference from H&E stained images. Through better capturing the inherent
stochasticity and heterogeneity in ST data, $\textbf{Stem}$ achieves
state-of-the-art performance on spatial gene expression prediction and
generates biologically meaningful gene profiles for new H&E stained images at
test time. We evaluate the proposed algorithm on datasets with various tissue
sources and sequencing platforms, where it demonstrates clear improvement over
existing approaches. $\textbf{Stem}$ generates high-fidelity gene expression
predictions that share similar gene variation levels as ground truth data,
suggesting that our method preserves the underlying biological heterogeneity.
Our proposed pipeline opens up the possibility of analyzing existing, easily
accessible H&E stained histology images from a genomics point of view without
physically performing gene expression profiling and empowers potential
biological discovery from H&E stained histology images.
| [
{
"version": "v1",
"created": "Sun, 26 Jan 2025 16:52:27 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhu",
"Sichen",
""
],
[
"Zhu",
"Yuchen",
""
],
[
"Tao",
"Molei",
""
],
[
"Qiu",
"Peng",
""
]
] | TITLE: Diffusion Generative Modeling for Spatially Resolved Gene Expression
Inference from Histology Images
ABSTRACT: Spatial Transcriptomics (ST) allows a high-resolution measurement of RNA
sequence abundance by systematically connecting cell morphology depicted in
Hematoxylin and Eosin (H&E) stained histology images to spatially resolved gene
expressions. ST is a time-consuming, expensive yet powerful experimental
technique that provides new opportunities to understand cancer mechanisms at a
fine-grained molecular level, which is critical for uncovering new approaches
for disease diagnosis and treatments. Here, we present $\textbf{Stem}$
($\textbf{S}$pa$\textbf{T}$ially resolved gene $\textbf{E}$xpression inference
with diffusion $\textbf{M}$odel), a novel computational tool that leverages a
conditional diffusion generative model to enable in silico gene expression
inference from H&E stained images. Through better capturing the inherent
stochasticity and heterogeneity in ST data, $\textbf{Stem}$ achieves
state-of-the-art performance on spatial gene expression prediction and
generates biologically meaningful gene profiles for new H&E stained images at
test time. We evaluate the proposed algorithm on datasets with various tissue
sources and sequencing platforms, where it demonstrates clear improvement over
existing approaches. $\textbf{Stem}$ generates high-fidelity gene expression
predictions that share similar gene variation levels as ground truth data,
suggesting that our method preserves the underlying biological heterogeneity.
Our proposed pipeline opens up the possibility of analyzing existing, easily
accessible H&E stained histology images from a genomics point of view without
physically performing gene expression profiling and empowers potential
biological discovery from H&E stained histology images.
|
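The record above describes a conditional diffusion generative model that infers gene expression from H&E image features. As a rough illustration of that idea only, the sketch below shows one DDPM-style noise-prediction training step in PyTorch; the network shape, gene count, conditioning dimension, and timestep handling are assumptions for illustration, not the authors' Stem implementation.

```python
# Minimal sketch (not the authors' code): one conditional DDPM training step for
# denoising gene-expression vectors given an H&E patch embedding as condition.
import torch
import torch.nn as nn

class CondEpsNet(nn.Module):
    """Predicts the noise added to a gene-expression vector, conditioned on a
    timestep feature and an image-derived embedding."""
    def __init__(self, n_genes=250, cond_dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_genes + 1 + cond_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, n_genes),
        )

    def forward(self, x_t, t, cond):
        t_feat = t.float().unsqueeze(-1) / 1000.0          # crude timestep feature
        return self.net(torch.cat([x_t, t_feat, cond], dim=-1))

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, x0, cond, optimizer):
    """x0: (B, n_genes) expression targets; cond: (B, cond_dim) H&E embeddings."""
    B = x0.shape[0]
    t = torch.randint(0, T, (B,))
    eps = torch.randn_like(x0)
    a = alpha_bar[t].unsqueeze(-1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps            # forward diffusion
    loss = ((model(x_t, t, cond) - eps) ** 2).mean()        # eps-prediction MSE
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```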
2501.15890 | Karahan Sar{\i}ta\c{s} | Karahan Sar{\i}ta\c{s}, Peter Dayan, Tingke Shen, Surabhi S Nath | Complexity in Complexity: Understanding Visual Complexity Through
Structure, Color, and Surprise | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Understanding how humans perceive visual complexity is a key area of study in
visual cognition. Previous approaches to modeling visual complexity assessments
have often resulted in intricate, difficult-to-interpret algorithms that employ
numerous features or sophisticated deep learning architectures. While these
complex models achieve high performance on specific datasets, they often
sacrifice interpretability, making it challenging to understand the factors
driving human perception of complexity. Recently, Shen et al. (2024) proposed
an interpretable segmentation-based model that accurately predicted complexity
across various datasets, supporting the idea that complexity can be explained
simply. In this work, we investigate the failure of their model to capture
structural, color and surprisal contributions to complexity. To this end, we
propose Multi-Scale Sobel Gradient (MSG) which measures spatial intensity
variations, Multi-Scale Unique Color (MUC) which quantifies colorfulness across
multiple scales, and surprise scores generated using a Large Language Model. We
test our features on existing benchmarks and a novel dataset (Surprising Visual
Genome) containing surprising images from Visual Genome. Our experiments
demonstrate that modeling complexity accurately is not as simple as previously
thought, requiring additional perceptual and semantic factors to address
dataset biases. Our model improves predictive performance while maintaining
interpretability, offering deeper insights into how visual complexity is
perceived and assessed. Our code, analysis and data are available at
https://github.com/Complexity-Project/Complexity-in-Complexity.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2025 09:32:56 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Feb 2025 19:36:23 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 12:06:51 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Sarıtaş",
"Karahan",
""
],
[
"Dayan",
"Peter",
""
],
[
"Shen",
"Tingke",
""
],
[
"Nath",
"Surabhi S",
""
]
] | TITLE: Complexity in Complexity: Understanding Visual Complexity Through
Structure, Color, and Surprise
ABSTRACT: Understanding how humans perceive visual complexity is a key area of study in
visual cognition. Previous approaches to modeling visual complexity assessments
have often resulted in intricate, difficult-to-interpret algorithms that employ
numerous features or sophisticated deep learning architectures. While these
complex models achieve high performance on specific datasets, they often
sacrifice interpretability, making it challenging to understand the factors
driving human perception of complexity. Recently, Shen et al. (2024) proposed
an interpretable segmentation-based model that accurately predicted complexity
across various datasets, supporting the idea that complexity can be explained
simply. In this work, we investigate the failure of their model to capture
structural, color and surprisal contributions to complexity. To this end, we
propose Multi-Scale Sobel Gradient (MSG) which measures spatial intensity
variations, Multi-Scale Unique Color (MUC) which quantifies colorfulness across
multiple scales, and surprise scores generated using a Large Language Model. We
test our features on existing benchmarks and a novel dataset (Surprising Visual
Genome) containing surprising images from Visual Genome. Our experiments
demonstrate that modeling complexity accurately is not as simple as previously
thought, requiring additional perceptual and semantic factors to address
dataset biases. Our model improves predictive performance while maintaining
interpretability, offering deeper insights into how visual complexity is
perceived and assessed. Our code, analysis and data are available at
https://github.com/Complexity-Project/Complexity-in-Complexity.
|
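The record above introduces two hand-crafted complexity features, Multi-Scale Sobel Gradient (MSG) and Multi-Scale Unique Color (MUC). The sketch below is a plausible re-implementation of such multi-scale features with OpenCV and NumPy; the scale factors, color quantization, and mean aggregation are assumptions rather than the paper's exact definitions.

```python
# Illustrative re-implementations (not the authors' code) of MSG and MUC.
import cv2
import numpy as np

def msg(image_bgr, scales=(1.0, 0.5, 0.25)):
    """Mean Sobel gradient magnitude averaged over several image scales."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    vals = []
    for s in scales:
        g = cv2.resize(gray, None, fx=s, fy=s) if s != 1.0 else gray
        gx = cv2.Sobel(g, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(g, cv2.CV_64F, 0, 1, ksize=3)
        vals.append(np.sqrt(gx ** 2 + gy ** 2).mean())
    return float(np.mean(vals))

def muc(image_bgr, scales=(1.0, 0.5, 0.25), bins_per_channel=16):
    """Number of distinct (quantized) colors, averaged over several scales."""
    vals = []
    for s in scales:
        img = cv2.resize(image_bgr, None, fx=s, fy=s) if s != 1.0 else image_bgr
        q = (img // (256 // bins_per_channel)).reshape(-1, 3)   # quantize each channel
        vals.append(len(np.unique(q, axis=0)))
    return float(np.mean(vals))
```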
2501.18532 | Anmol Goel | Anmol Goel, Yaxi Hu, Iryna Gurevych, Amartya Sanyal | Differentially Private Steering for Large Language Model Alignment | ICLR 2025 Camera Ready; Code: https://github.com/UKPLab/iclr2025-psa | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Aligning Large Language Models (LLMs) with human values and away from
undesirable behaviors (such as hallucination) has become increasingly
important. Recently, steering LLMs towards a desired behavior via activation
editing has emerged as an effective method to mitigate harmful generations at
inference-time. Activation editing modifies LLM representations by preserving
information from positive demonstrations (e.g., truthful) and minimising
information from negative demonstrations (e.g., hallucinations). When these
demonstrations come from a private dataset, the aligned LLM may leak private
information contained in those private samples. In this work, we present the
first study of aligning LLM behavior with private datasets. Our work proposes
the Private Steering for LLM Alignment (PSA) algorithm to edit LLM activations
with differential privacy (DP) guarantees. We conduct extensive experiments on
seven different benchmarks with open-source LLMs of different sizes (0.5B to
7B) and model families (LlaMa, Qwen, Mistral and Gemma). Our results show that
PSA achieves DP guarantees for LLM alignment with minimal loss in performance,
including alignment metrics, open-ended text generation quality, and
general-purpose reasoning. We also develop the first Membership Inference
Attack (MIA) for evaluating and auditing the empirical privacy for the problem
of LLM steering via activation editing. Our experiments support the theoretical
guarantees by showing improved guarantees for our PSA algorithm compared to
several existing non-private techniques.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2025 17:58:36 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 09:58:49 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Goel",
"Anmol",
""
],
[
"Hu",
"Yaxi",
""
],
[
"Gurevych",
"Iryna",
""
],
[
"Sanyal",
"Amartya",
""
]
] | TITLE: Differentially Private Steering for Large Language Model Alignment
ABSTRACT: Aligning Large Language Models (LLMs) with human values and away from
undesirable behaviors (such as hallucination) has become increasingly
important. Recently, steering LLMs towards a desired behavior via activation
editing has emerged as an effective method to mitigate harmful generations at
inference-time. Activation editing modifies LLM representations by preserving
information from positive demonstrations (e.g., truthful) and minimising
information from negative demonstrations (e.g., hallucinations). When these
demonstrations come from a private dataset, the aligned LLM may leak private
information contained in those private samples. In this work, we present the
first study of aligning LLM behavior with private datasets. Our work proposes
the Private Steering for LLM Alignment (PSA) algorithm to edit LLM activations
with differential privacy (DP) guarantees. We conduct extensive experiments on
seven different benchmarks with open-source LLMs of different sizes (0.5B to
7B) and model families (LlaMa, Qwen, Mistral and Gemma). Our results show that
PSA achieves DP guarantees for LLM alignment with minimal loss in performance,
including alignment metrics, open-ended text generation quality, and
general-purpose reasoning. We also develop the first Membership Inference
Attack (MIA) for evaluating and auditing the empirical privacy for the problem
of LLM steering via activation editing. Our experiments support the theoretical
guarantees by showing improved guarantees for our PSA algorithm compared to
several existing non-private techniques.
|
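The record above aligns LLM activations with differential privacy guarantees. As a hedged sketch of the general idea (not the PSA algorithm itself), the snippet below estimates a steering vector from positive/negative activation differences and privatizes it with a standard clipping-plus-Gaussian-noise mechanism; the clipping norm, (epsilon, delta) budget, and single-layer treatment are illustrative assumptions.

```python
# Generic DP steering-vector sketch: clip per-demonstration contributions and add
# Gaussian noise calibrated to the mean's sensitivity. Not the PSA algorithm.
import numpy as np

def dp_steering_vector(pos_acts, neg_acts, clip_norm=1.0, epsilon=1.0, delta=1e-5,
                       seed=0):
    """pos_acts, neg_acts: (N, d) activations from positive/negative demonstrations."""
    rng = np.random.default_rng(seed)
    diffs = pos_acts - neg_acts                              # per-demonstration contrast
    norms = np.linalg.norm(diffs, axis=1, keepdims=True)
    diffs = diffs * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))  # clip
    mean = diffs.mean(axis=0)
    # Gaussian mechanism: L2 sensitivity of the clipped mean is clip_norm / N
    sigma = (clip_norm / len(diffs)) * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return mean + rng.normal(0.0, sigma, size=mean.shape)

# At inference, the privatized vector would be added to the chosen layer's
# residual stream, e.g. hidden = hidden + alpha * steering_vector.
```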
2501.19140 | Dariusz Pojda | Agnieszka Anna Tomaka and Dariusz Pojda and Micha{\l} Tarnawski and
Leszek Luchowski | Transformation trees -- documentation of multimodal image registration | 28 pages, 15 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal image registration plays a key role in creating digital patient
models by combining data from different imaging techniques into a single
coordinate system. This process often involves multiple sequential and
interconnected transformations, which must be well-documented to ensure
transparency and reproducibility. In this paper, we propose the use of
transformation trees as a method for structured recording and management of
these transformations. This approach has been implemented in the dpVision
software and uses a dedicated .dpw file format to store hierarchical
relationships between images, transformations, and motion data. Transformation
trees allow precise tracking of all image processing steps, reduce the need to
store multiple copies of the same data, and enable the indirect registration of
images that do not share common reference points. This improves the
reproducibility of the analyses and facilitates later processing and
integration of images from different sources. The practical application of this
method is demonstrated with examples from orthodontics, including the
integration of 3D face scans, intraoral scans, and CBCT images, as well as the
documentation of mandibular motion. Beyond orthodontics, this method can be
applied in other fields that require systematic management of image
registration processes, such as maxillofacial surgery, oncology, and
biomechanical analysis. Maintaining long-term data consistency is essential for
both scientific research and clinical practice. It enables easier comparison of
results in longitudinal studies, improves retrospective analysis, and supports
the development of artificial intelligence algorithms by providing standardized
and well-documented datasets. The proposed approach enhances data organization,
allows for efficient analysis, and facilitates the reuse of information in
future studies and diagnostic procedures.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 13:49:16 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 12:43:48 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Tomaka",
"Agnieszka Anna",
""
],
[
"Pojda",
"Dariusz",
""
],
[
"Tarnawski",
"Michał",
""
],
[
"Luchowski",
"Leszek",
""
]
] | TITLE: Transformation trees -- documentation of multimodal image registration
ABSTRACT: Multimodal image registration plays a key role in creating digital patient
models by combining data from different imaging techniques into a single
coordinate system. This process often involves multiple sequential and
interconnected transformations, which must be well-documented to ensure
transparency and reproducibility. In this paper, we propose the use of
transformation trees as a method for structured recording and management of
these transformations. This approach has been implemented in the dpVision
software and uses a dedicated .dpw file format to store hierarchical
relationships between images, transformations, and motion data. Transformation
trees allow precise tracking of all image processing steps, reduce the need to
store multiple copies of the same data, and enable the indirect registration of
images that do not share common reference points. This improves the
reproducibility of the analyses and facilitates later processing and
integration of images from different sources. The practical application of this
method is demonstrated with examples from orthodontics, including the
integration of 3D face scans, intraoral scans, and CBCT images, as well as the
documentation of mandibular motion. Beyond orthodontics, this method can be
applied in other fields that require systematic management of image
registration processes, such as maxillofacial surgery, oncology, and
biomechanical analysis. Maintaining long-term data consistency is essential for
both scientific research and clinical practice. It enables easier comparison of
results in longitudinal studies, improves retrospective analysis, and supports
the development of artificial intelligence algorithms by providing standardized
and well-documented datasets. The proposed approach enhances data organization,
allows for efficient analysis, and facilitates the reuse of information in
future studies and diagnostic procedures.
|
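The record above documents chains of registrations as a transformation tree. The sketch below shows the core mechanic under generic assumptions: each node stores a 4x4 transform to its parent, and any two images can be registered indirectly by composing transforms through a shared ancestor. The node class and example frames are illustrative; the dpVision .dpw format is not reproduced.

```python
# Minimal transformation-tree sketch: compose rigid transforms along tree paths.
import numpy as np

class TransformNode:
    def __init__(self, name, to_parent=None, parent=None):
        self.name = name
        self.to_parent = np.eye(4) if to_parent is None else np.asarray(to_parent, float)
        self.parent = parent

    def to_root(self):
        """Compose 4x4 homogeneous transforms from this node up to the tree root."""
        T = np.eye(4)
        node = self
        while node is not None:
            T = node.to_parent @ T
            node = node.parent
        return T

def register(src: TransformNode, dst: TransformNode):
    """Transform mapping src coordinates into dst coordinates (indirect registration)."""
    return np.linalg.inv(dst.to_root()) @ src.to_root()

# Example: a CBCT scan and a face scan both attached to a common head frame.
head = TransformNode("head")
cbct = TransformNode("CBCT", parent=head)
T_face = np.eye(4); T_face[:3, 3] = [0.0, 12.5, -40.0]      # example offset (mm)
face = TransformNode("face_scan", to_parent=T_face, parent=head)
T_cbct_to_face = register(cbct, face)    # obtained without re-running registration
```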
2502.00379 | Alexander Nikulin | Alexander Nikulin, Ilya Zisman, Denis Tarasov, Nikita Lyubaykin,
Andrei Polubarov, Igor Kiselev, Vladislav Kurenkov | Latent Action Learning Requires Supervision in the Presence of
Distractors | Preprint. In review. Edit: Accepted by ICLR 2025 Workshop on World
Models: Understanding, Modelling and Scaling | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recently, latent action learning, pioneered by Latent Action Policies (LAPO),
has shown remarkable pre-training efficiency on observation-only data,
offering potential for leveraging vast amounts of video available on the web
for embodied AI. However, prior work has focused on distractor-free data, where
changes between observations are primarily explained by ground-truth actions.
Unfortunately, real-world videos contain action-correlated distractors that may
hinder latent action learning. Using the Distracting Control Suite (DCS), we
empirically investigate the effect of distractors on latent action learning and
demonstrate that LAPO struggles in such scenarios. We propose LAOM, a simple LAPO
modification that improves the quality of latent actions by 8x, as measured by
linear probing. Importantly, we show that providing supervision with
ground-truth actions, as few as 2.5% of the full dataset, during latent action
learning improves downstream performance by 4.2x on average. Our findings
suggest that integrating supervision during Latent Action Models (LAM) training
is critical in the presence of distractors, challenging the conventional
pipeline of first learning LAM and only then decoding from latent to
ground-truth actions.
| [
{
"version": "v1",
"created": "Sat, 1 Feb 2025 09:35:51 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 20:57:58 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Nikulin",
"Alexander",
""
],
[
"Zisman",
"Ilya",
""
],
[
"Tarasov",
"Denis",
""
],
[
"Lyubaykin",
"Nikita",
""
],
[
"Polubarov",
"Andrei",
""
],
[
"Kiselev",
"Igor",
""
],
[
"Kurenkov",
"Vladislav",
""
]
] | TITLE: Latent Action Learning Requires Supervision in the Presence of
Distractors
ABSTRACT: Recently, latent action learning, pioneered by Latent Action Policies (LAPO),
has shown remarkable pre-training efficiency on observation-only data,
offering potential for leveraging vast amounts of video available on the web
for embodied AI. However, prior work has focused on distractor-free data, where
changes between observations are primarily explained by ground-truth actions.
Unfortunately, real-world videos contain action-correlated distractors that may
hinder latent action learning. Using the Distracting Control Suite (DCS), we
empirically investigate the effect of distractors on latent action learning and
demonstrate that LAPO struggles in such scenarios. We propose LAOM, a simple LAPO
modification that improves the quality of latent actions by 8x, as measured by
linear probing. Importantly, we show that providing supervision with
ground-truth actions, as few as 2.5% of the full dataset, during latent action
learning improves downstream performance by 4.2x on average. Our findings
suggest that integrating supervision during Latent Action Models (LAM) training
is critical in the presence of distractors, challenging the conventional
pipeline of first learning LAM and only then decoding from latent to
ground-truth actions.
|
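The record above argues that a small amount of action supervision helps latent action learning under distractors. The sketch below is a generic latent action model with an inverse-dynamics encoder, a forward model, and an action head that is only supervised on the few labelled pairs; layer sizes, the latent dimension, and the loss weighting are assumptions, not the LAOM configuration.

```python
# Generic latent-action-model sketch with optional action supervision.
import torch
import torch.nn as nn

class LatentActionModel(nn.Module):
    def __init__(self, obs_dim, latent_dim=16, act_dim=6):
        super().__init__()
        self.idm = nn.Sequential(nn.Linear(2 * obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))       # (o_t, o_t+1) -> z
        self.fdm = nn.Sequential(nn.Linear(obs_dim + latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, obs_dim))          # (o_t, z) -> o_t+1
        self.act_head = nn.Linear(latent_dim, act_dim)             # z -> action

    def forward(self, o_t, o_tp1):
        z = self.idm(torch.cat([o_t, o_tp1], dim=-1))
        o_pred = self.fdm(torch.cat([o_t, z], dim=-1))
        return z, o_pred

def loss_fn(model, o_t, o_tp1, actions=None, sup_weight=1.0):
    z, o_pred = model(o_t, o_tp1)
    loss = ((o_pred - o_tp1) ** 2).mean()            # self-supervised dynamics loss
    if actions is not None:                          # supervised on e.g. ~2.5% of data
        loss = loss + sup_weight * ((model.act_head(z) - actions) ** 2).mean()
    return loss
```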
2502.02257 | Tao Zhang | Tao Zhang, Jinyong Wen, Zhen Chen, Kun Ding, Shiming Xiang, Chunhong
Pan | UNIP: Rethinking Pre-trained Attention Patterns for Infrared Semantic
Segmentation | ICLR 2025. 27 pages, 13 figures, 21 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Pre-training techniques significantly enhance the performance of semantic
segmentation tasks with limited training data. However, the efficacy under a
large domain gap between pre-training (e.g. RGB) and fine-tuning (e.g.
infrared) remains underexplored. In this study, we first benchmark the infrared
semantic segmentation performance of various pre-training methods and reveal
several phenomena distinct from the RGB domain. Next, our layerwise analysis of
pre-trained attention maps uncovers that: (1) There are three typical attention
patterns (local, hybrid, and global); (2) Pre-training tasks notably influence
the pattern distribution across layers; (3) The hybrid pattern is crucial for
semantic segmentation as it attends to both nearby and foreground elements; (4)
The texture bias impedes model generalization in infrared tasks. Building on
these insights, we propose UNIP, a UNified Infrared Pre-training framework, to
enhance the pre-trained model performance. This framework uses the
hybrid-attention distillation NMI-HAD as the pre-training target, a large-scale
mixed dataset InfMix for pre-training, and a last-layer feature pyramid network
LL-FPN for fine-tuning. Experimental results show that UNIP outperforms various
pre-training methods by up to 13.5\% in average mIoU on three infrared
segmentation tasks, evaluated using fine-tuning and linear probing metrics.
UNIP-S achieves performance on par with MAE-L while requiring only 1/10 of the
computational cost. Furthermore, UNIP significantly surpasses state-of-the-art
(SOTA) infrared or RGB segmentation methods and demonstrates broad potential
for application in other modalities, such as RGB and depth. Our code is
available at https://github.com/casiatao/UNIP.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 12:08:20 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 13:55:08 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhang",
"Tao",
""
],
[
"Wen",
"Jinyong",
""
],
[
"Chen",
"Zhen",
""
],
[
"Ding",
"Kun",
""
],
[
"Xiang",
"Shiming",
""
],
[
"Pan",
"Chunhong",
""
]
] | TITLE: UNIP: Rethinking Pre-trained Attention Patterns for Infrared Semantic
Segmentation
ABSTRACT: Pre-training techniques significantly enhance the performance of semantic
segmentation tasks with limited training data. However, the efficacy under a
large domain gap between pre-training (e.g. RGB) and fine-tuning (e.g.
infrared) remains underexplored. In this study, we first benchmark the infrared
semantic segmentation performance of various pre-training methods and reveal
several phenomena distinct from the RGB domain. Next, our layerwise analysis of
pre-trained attention maps uncovers that: (1) There are three typical attention
patterns (local, hybrid, and global); (2) Pre-training tasks notably influence
the pattern distribution across layers; (3) The hybrid pattern is crucial for
semantic segmentation as it attends to both nearby and foreground elements; (4)
The texture bias impedes model generalization in infrared tasks. Building on
these insights, we propose UNIP, a UNified Infrared Pre-training framework, to
enhance the pre-trained model performance. This framework uses the
hybrid-attention distillation NMI-HAD as the pre-training target, a large-scale
mixed dataset InfMix for pre-training, and a last-layer feature pyramid network
LL-FPN for fine-tuning. Experimental results show that UNIP outperforms various
pre-training methods by up to 13.5\% in average mIoU on three infrared
segmentation tasks, evaluated using fine-tuning and linear probing metrics.
UNIP-S achieves performance on par with MAE-L while requiring only 1/10 of the
computational cost. Furthermore, UNIP significantly surpasses state-of-the-art
(SOTA) infrared or RGB segmentation methods and demonstrates broad potential
for application in other modalities, such as RGB and depth. Our code is
available at https://github.com/casiatao/UNIP.
|
2502.06759 | Gaetano Rossiello | Gaetano Rossiello, Nhan Pham, Michael Glass, Junkyu Lee, Dharmashankar
Subramanian | Rationalization Models for Text-to-SQL | Published at ICLR 2025 Workshop on Reasoning and Planning for LLMs | null | null | null | cs.CL cs.AI cs.DB | http://creativecommons.org/licenses/by/4.0/ | We introduce a framework for generating Chain-of-Thought (CoT) rationales to
enhance text-to-SQL model fine-tuning. These rationales consist of intermediate
SQL statements and explanations, serving as incremental steps toward
constructing the final SQL query. The process begins with manually annotating a
small set of examples, which are then used to prompt a large language model in
an iterative, dynamic few-shot knowledge distillation procedure from a teacher
model. A rationalization model is subsequently trained on the validated
decomposed queries, enabling extensive synthetic CoT annotations for
text-to-SQL datasets. To evaluate the approach, we fine-tune small language
models with and without these rationales on the BIRD dataset. Results indicate
that step-by-step query generation improves execution accuracy, especially for
moderately and highly complex queries, while also enhancing explainability.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 18:38:57 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Feb 2025 17:12:34 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 17:37:30 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Mar 2025 13:46:48 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Rossiello",
"Gaetano",
""
],
[
"Pham",
"Nhan",
""
],
[
"Glass",
"Michael",
""
],
[
"Lee",
"Junkyu",
""
],
[
"Subramanian",
"Dharmashankar",
""
]
] | TITLE: Rationalization Models for Text-to-SQL
ABSTRACT: We introduce a framework for generating Chain-of-Thought (CoT) rationales to
enhance text-to-SQL model fine-tuning. These rationales consist of intermediate
SQL statements and explanations, serving as incremental steps toward
constructing the final SQL query. The process begins with manually annotating a
small set of examples, which are then used to prompt a large language model in
an iterative, dynamic few-shot knowledge distillation procedure from a teacher
model. A rationalization model is subsequently trained on the validated
decomposed queries, enabling extensive synthetic CoT annotations for
text-to-SQL datasets. To evaluate the approach, we fine-tune small language
models with and without these rationales on the BIRD dataset. Results indicate
that step-by-step query generation improves execution accuracy, especially for
moderately and highly complex queries, while also enhancing explainability.
|
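The record above distills step-by-step SQL rationales from a teacher model using a small set of seed annotations. The helper below sketches how such a few-shot rationalization prompt might be assembled; the prompt wording, example format, and field names are hypothetical and do not reproduce the paper's annotation protocol.

```python
# Hypothetical few-shot prompt builder for eliciting intermediate SQL steps.
from textwrap import dedent

def build_rationale_prompt(question, schema, seed_examples):
    """seed_examples: list of dicts with 'question', 'steps' (list of
    (intermediate_sql, explanation) pairs) and 'final_sql'."""
    parts = ["Decompose each question into intermediate SQL steps with short "
             "explanations, then give the final SQL query.\n"]
    for ex in seed_examples:
        parts.append(f"Question: {ex['question']}")
        for i, (sql, why) in enumerate(ex["steps"], 1):
            parts.append(f"Step {i}: {sql}  -- {why}")
        parts.append(f"Final SQL: {ex['final_sql']}\n")
    parts.append(dedent(f"""\
        Schema: {schema}
        Question: {question}
        Step 1:"""))
    return "\n".join(parts)
```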
2502.06825 | Minxiao Chen | Minxiao Chen, Haitao Yuan, Nan Jiang, Zhihan Zheng, Sai Wu, Ao Zhou,
Shangguang Wang | RLOMM: An Efficient and Robust Online Map Matching Framework with
Reinforcement Learning | Accepted by SIGMOD 2025 | null | null | null | cs.LG cs.DB | http://creativecommons.org/licenses/by/4.0/ | Online map matching is a fundamental problem in location-based services,
aiming to incrementally match trajectory data step-by-step onto a road network.
However, existing methods fail to meet the needs for efficiency, robustness,
and accuracy required by large-scale online applications, making this task
still challenging. This paper introduces a novel framework that achieves high
accuracy and efficient matching while ensuring robustness in handling diverse
scenarios. To improve efficiency, we begin by modeling the online map matching
problem as an Online Markov Decision Process (OMDP) based on its inherent
characteristics. This approach helps efficiently merge historical and real-time
data, reducing unnecessary calculations. Next, to enhance robustness, we design
a reinforcement learning method, enabling robust handling of real-time data
from dynamically changing environments. In particular, we propose a novel model
learning process and a comprehensive reward function, allowing the model to
make reasonable current matches from a future-oriented perspective, and to
continuously update and optimize during the decision-making process based on
feedback. Lastly, to address the heterogeneity between trajectories and roads,
we design distinct graph structures, facilitating efficient representation
learning through graph and recurrent neural networks. To further align
trajectory and road data, we introduce contrastive learning to decrease their
distance in the latent space, thereby promoting effective integration of the
two. Extensive evaluations on three real-world datasets confirm that our method
significantly outperforms existing state-of-the-art solutions in terms of
accuracy, efficiency and robustness.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 11:26:32 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 14:07:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chen",
"Minxiao",
""
],
[
"Yuan",
"Haitao",
""
],
[
"Jiang",
"Nan",
""
],
[
"Zheng",
"Zhihan",
""
],
[
"Wu",
"Sai",
""
],
[
"Zhou",
"Ao",
""
],
[
"Wang",
"Shangguang",
""
]
] | TITLE: RLOMM: An Efficient and Robust Online Map Matching Framework with
Reinforcement Learning
ABSTRACT: Online map matching is a fundamental problem in location-based services,
aiming to incrementally match trajectory data step-by-step onto a road network.
However, existing methods fail to meet the needs for efficiency, robustness,
and accuracy required by large-scale online applications, making this task
still challenging. This paper introduces a novel framework that achieves high
accuracy and efficient matching while ensuring robustness in handling diverse
scenarios. To improve efficiency, we begin by modeling the online map matching
problem as an Online Markov Decision Process (OMDP) based on its inherent
characteristics. This approach helps efficiently merge historical and real-time
data, reducing unnecessary calculations. Next, to enhance robustness, we design
a reinforcement learning method, enabling robust handling of real-time data
from dynamically changing environments. In particular, we propose a novel model
learning process and a comprehensive reward function, allowing the model to
make reasonable current matches from a future-oriented perspective, and to
continuously update and optimize during the decision-making process based on
feedback. Lastly, to address the heterogeneity between trajectories and roads,
we design distinct graph structures, facilitating efficient representation
learning through graph and recurrent neural networks. To further align
trajectory and road data, we introduce contrastive learning to decrease their
distance in the latent space, thereby promoting effective integration of the
two. Extensive evaluations on three real-world datasets confirm that our method
significantly outperforms existing state-of-the-art solutions in terms of
accuracy, efficiency and robustness.
|
2502.07058 | Zixin Tang | Zixin Tang, Chieh-Yang Huang, Tsung-Che Li, Ho Yin Sam Ng, Hen-Hsen
Huang, Ting-Hao 'Kenneth' Huang | Using Contextually Aligned Online Reviews to Measure LLMs' Performance
Disparities Across Language Varieties | Accepted by 2025 Annual Conference of the Nations of the Americas
Chapter of the Association for Computational Linguistics (NAACL), theme track | null | null | null | cs.CL cs.HC | http://creativecommons.org/licenses/by/4.0/ | A language can have different varieties. These varieties can affect the
performance of natural language processing (NLP) models, including large
language models (LLMs), which are often trained on data from widely spoken
varieties. This paper introduces a novel and cost-effective approach to
benchmark model performance across language varieties. We argue that
international online review platforms, such as Booking.com, can serve as
effective data sources for constructing datasets that capture comments in
different language varieties from similar real-world scenarios, like reviews
for the same hotel with the same rating using the same language (e.g., Mandarin
Chinese) but different language varieties (e.g., Taiwan Mandarin, Mainland
Mandarin). To prove this concept, we constructed a contextually aligned dataset
comprising reviews in Taiwan Mandarin and Mainland Mandarin and tested six LLMs
in a sentiment analysis task. Our results show that LLMs consistently
underperform in Taiwan Mandarin.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 21:49:35 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Feb 2025 04:55:27 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 15:01:11 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Tang",
"Zixin",
""
],
[
"Huang",
"Chieh-Yang",
""
],
[
"Li",
"Tsung-Che",
""
],
[
"Ng",
"Ho Yin Sam",
""
],
[
"Huang",
"Hen-Hsen",
""
],
[
"Huang",
"Ting-Hao 'Kenneth'",
""
]
] | TITLE: Using Contextually Aligned Online Reviews to Measure LLMs' Performance
Disparities Across Language Varieties
ABSTRACT: A language can have different varieties. These varieties can affect the
performance of natural language processing (NLP) models, including large
language models (LLMs), which are often trained on data from widely spoken
varieties. This paper introduces a novel and cost-effective approach to
benchmark model performance across language varieties. We argue that
international online review platforms, such as Booking.com, can serve as
effective data sources for constructing datasets that capture comments in
different language varieties from similar real-world scenarios, like reviews
for the same hotel with the same rating using the same language (e.g., Mandarin
Chinese) but different language varieties (e.g., Taiwan Mandarin, Mainland
Mandarin). To prove this concept, we constructed a contextually aligned dataset
comprising reviews in Taiwan Mandarin and Mainland Mandarin and tested six LLMs
in a sentiment analysis task. Our results show that LLMs consistently
underperform in Taiwan Mandarin.
|
2502.12454 | He Zhang | He Zhang and Xinyi Fu | Benchmarking Zero-Shot Facial Emotion Annotation with Large Language
Models: A Multi-Class and Multi-Frame Approach in DailyLife | 10 pages | null | null | null | cs.CV cs.AI cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study investigates the feasibility and performance of using large
language models (LLMs) to automatically annotate human emotions in everyday
scenarios. We conducted experiments on the DailyLife subset of the publicly
available FERV39k dataset, employing the GPT-4o-mini model for rapid, zero-shot
labeling of key frames extracted from video segments. Under a seven-class
emotion taxonomy ("Angry," "Disgust," "Fear," "Happy," "Neutral," "Sad,"
"Surprise"), the LLM achieved an average precision of approximately 50%. In
contrast, when limited to ternary emotion classification
(negative/neutral/positive), the average precision increased to approximately
64%. Additionally, we explored a strategy that integrates multiple frames
within 1-2 second video clips to enhance labeling performance and reduce costs.
The results indicate that this approach can slightly improve annotation
accuracy. Overall, our preliminary findings highlight the potential application
of zero-shot LLMs in human facial emotion annotation tasks, offering new
avenues for reducing labeling costs and broadening the applicability of LLMs in
complex multimodal environments.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 02:36:16 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhang",
"He",
""
],
[
"Fu",
"Xinyi",
""
]
] | TITLE: Benchmarking Zero-Shot Facial Emotion Annotation with Large Language
Models: A Multi-Class and Multi-Frame Approach in DailyLife
ABSTRACT: This study investigates the feasibility and performance of using large
language models (LLMs) to automatically annotate human emotions in everyday
scenarios. We conducted experiments on the DailyLife subset of the publicly
available FERV39k dataset, employing the GPT-4o-mini model for rapid, zero-shot
labeling of key frames extracted from video segments. Under a seven-class
emotion taxonomy ("Angry," "Disgust," "Fear," "Happy," "Neutral," "Sad,"
"Surprise"), the LLM achieved an average precision of approximately 50%. In
contrast, when limited to ternary emotion classification
(negative/neutral/positive), the average precision increased to approximately
64%. Additionally, we explored a strategy that integrates multiple frames
within 1-2 second video clips to enhance labeling performance and reduce costs.
The results indicate that this approach can slightly improve annotation
accuracy. Overall, our preliminary findings highlight the potential application
of zero-shot LLMs in human facial emotion annotation tasks, offering new
avenues for reducing labeling costs and broadening the applicability of LLMs in
complex multimodal environments.
|
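The record above compares seven-class and ternary emotion labelling. The snippet below sketches that evaluation: collapse the seven classes to negative/neutral/positive and compute macro-averaged precision with scikit-learn; the mapping of "Surprise" to the positive polarity is an assumption.

```python
# Collapse seven emotion classes to a ternary scheme and score precision.
from sklearn.metrics import precision_score

TERNARY = {"Angry": "negative", "Disgust": "negative", "Fear": "negative",
           "Sad": "negative", "Neutral": "neutral",
           "Happy": "positive", "Surprise": "positive"}   # polarity of Surprise assumed

def macro_precision(y_true, y_pred, ternary=False):
    if ternary:
        y_true = [TERNARY[y] for y in y_true]
        y_pred = [TERNARY[y] for y in y_pred]
    return precision_score(y_true, y_pred, average="macro", zero_division=0)
```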
2502.12509 | Kangda Wei | Kangda Wei, Xi Shi, Jonathan Tong, Sai Ramana Reddy, Anandhavelu
Natarajan, Rajiv Jain, Aparna Garimella, Ruihong Huang | LegalCore: A Dataset for Event Coreference Resolution in Legal Documents | Need company internal approval before public release | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recognizing events and their coreferential mentions in a document is
essential for understanding semantic meanings of text. The existing research on
event coreference resolution is mostly limited to news articles. In this paper,
we present the first dataset for the legal domain, LegalCore, which has been
annotated with comprehensive event and event coreference information. The legal
contract documents we annotated in this dataset are several times longer than
news articles, with an average length of around 25k tokens per document. The
annotations show that legal documents have dense event mentions and feature
both short-distance and super long-distance coreference links between event
mentions. We further benchmark mainstream Large Language Models (LLMs) on this
dataset for both event detection and event coreference resolution tasks, and
find that this dataset poses significant challenges for state-of-the-art
open-source and proprietary LLMs, which perform significantly worse than a
supervised baseline. We will publish the dataset as well as the code.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 03:47:53 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 19:36:00 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Mar 2025 16:53:11 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Mar 2025 16:45:57 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wei",
"Kangda",
""
],
[
"Shi",
"Xi",
""
],
[
"Tong",
"Jonathan",
""
],
[
"Reddy",
"Sai Ramana",
""
],
[
"Natarajan",
"Anandhavelu",
""
],
[
"Jain",
"Rajiv",
""
],
[
"Garimella",
"Aparna",
""
],
[
"Huang",
"Ruihong",
""
]
] | TITLE: LegalCore: A Dataset for Event Coreference Resolution in Legal Documents
ABSTRACT: Recognizing events and their coreferential mentions in a document is
essential for understanding semantic meanings of text. The existing research on
event coreference resolution is mostly limited to news articles. In this paper,
we present the first dataset for the legal domain, LegalCore, which has been
annotated with comprehensive event and event coreference information. The legal
contract documents we annotated in this dataset are several times longer than
news articles, with an average length of around 25k tokens per document. The
annotations show that legal documents have dense event mentions and feature
both short-distance and super long-distance coreference links between event
mentions. We further benchmark mainstream Large Language Models (LLMs) on this
dataset for both event detection and event coreference resolution tasks, and
find that this dataset poses significant challenges for state-of-the-art
open-source and proprietary LLMs, which perform significantly worse than a
supervised baseline. We will publish the dataset as well as the code.
|
2502.13056 | Gurinder Singh | Gurinder Singh, Hongni Jin, and Kenneth M. Merz Jr | Benchmarking MedMNIST dataset on real quantum hardware | null | null | null | null | quant-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum machine learning (QML) has emerged as a promising domain to leverage
the computational capabilities of quantum systems to solve complex
classification tasks. In this work, we present the first comprehensive QML
study by benchmarking MedMNIST, a diverse collection of medical imaging
datasets, on real 127-qubit IBM quantum hardware to evaluate the feasibility
and performance of quantum models (without any classical neural networks) in
practical applications. This study explores recent advancements in quantum
computing such as device-aware quantum circuits, error suppression, and
mitigation for medical image classification. Our methodology comprises four
stages: preprocessing, generation of noise-resilient and hardware-efficient
quantum circuits, optimization/training of quantum circuits on classical
hardware, and inference on real IBM quantum hardware. Firstly, we process all
input images in the preprocessing stage to reduce the spatial dimension due to
quantum hardware limitations. We generate hardware-efficient quantum circuits
using backend properties, making them expressive enough to learn complex patterns
for medical image classification. After classical optimization of QML models,
we perform inference on real quantum hardware. We also incorporate advanced
error suppression and mitigation techniques in our QML workflow, including
dynamical decoupling (DD), gate twirling, and matrix-free measurement
mitigation (M3) to mitigate the effects of noise and improve classification
performance. The experimental results showcase the potential of quantum
computing for medical imaging and establish a benchmark for future advancements
in QML applied to healthcare.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 17:02:41 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 19:21:51 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Singh",
"Gurinder",
""
],
[
"Jin",
"Hongni",
""
],
[
"Merz",
"Kenneth M.",
"Jr"
]
] | TITLE: Benchmarking MedMNIST dataset on real quantum hardware
ABSTRACT: Quantum machine learning (QML) has emerged as a promising domain to leverage
the computational capabilities of quantum systems to solve complex
classification tasks. In this work, we present the first comprehensive QML
study by benchmarking MedMNIST, a diverse collection of medical imaging
datasets, on real 127-qubit IBM quantum hardware to evaluate the feasibility
and performance of quantum models (without any classical neural networks) in
practical applications. This study explores recent advancements in quantum
computing such as device-aware quantum circuits, error suppression, and
mitigation for medical image classification. Our methodology comprises four
stages: preprocessing, generation of noise-resilient and hardware-efficient
quantum circuits, optimization/training of quantum circuits on classical
hardware, and inference on real IBM quantum hardware. Firstly, we process all
input images in the preprocessing stage to reduce the spatial dimension due to
quantum hardware limitations. We generate hardware-efficient quantum circuits
using backend properties, making them expressive enough to learn complex patterns
for medical image classification. After classical optimization of QML models,
we perform inference on real quantum hardware. We also incorporate advanced
error suppression and mitigation techniques in our QML workflow, including
dynamical decoupling (DD), gate twirling, and matrix-free measurement
mitigation (M3) to mitigate the effects of noise and improve classification
performance. The experimental results showcase the potential of quantum
computing for medical imaging and establish a benchmark for future advancements
in QML applied to healthcare.
|
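The record above reduces image dimensionality before encoding on quantum hardware. The sketch below shows one plausible version of that preprocessing step: average-pool a MedMNIST image to a small grid and map pixel intensities to rotation angles, one per qubit. The target grid size and the angle mapping are assumptions, not the paper's exact pipeline.

```python
# Average-pool a grayscale image and map intensities to per-qubit rotation angles.
import numpy as np

def downsample(img, out=(4, 4)):
    """Average-pool a (H, W) grayscale image to out=(h, w)."""
    h, w = img.shape
    bh, bw = h // out[0], w // out[1]
    img = img[: bh * out[0], : bw * out[1]].astype(np.float64)
    return img.reshape(out[0], bh, out[1], bw).mean(axis=(1, 3))

def angle_encoding(img, out=(4, 4)):
    """Map pooled pixel intensities in [0, 255] to rotation angles in [0, pi]."""
    small = downsample(img, out) / 255.0
    return np.pi * small.flatten()        # one RY angle per qubit

# e.g. a 28x28 MedMNIST image -> 16 angles -> a 16-qubit feature map
```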
2502.13308 | Yixin Liu | Junjun Pan, Yixin Liu, Xin Zheng, Yizhen Zheng, Alan Wee-Chung Liew,
Fuyi Li, Shirui Pan | A Label-Free Heterophily-Guided Approach for Unsupervised Graph Fraud
Detection | 9 pages, 3 figures. Accepted by AAAI 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Graph fraud detection (GFD) has rapidly advanced in protecting online
services by identifying malicious fraudsters. Recent supervised GFD research
highlights that heterophilic connections between fraudsters and users can
greatly impact detection performance, since fraudsters tend to camouflage
themselves by building more connections to benign users. Despite the promising
performance of supervised GFD methods, the reliance on labels limits their
applications to unsupervised scenarios. Additionally, accurately capturing
complex and diverse heterophily patterns without labels poses a further
challenge. To fill the gap, we propose a Heterophily-guided Unsupervised Graph
fraud dEtection approach (HUGE) for unsupervised GFD, which contains two
essential components: a heterophily estimation module and an alignment-based
fraud detection module. In the heterophily estimation module, we design a novel
label-free heterophily metric called HALO, which captures the critical graph
properties for GFD, enabling its outstanding ability to estimate heterophily
from node attributes. In the alignment-based fraud detection module, we develop
a joint MLP-GNN architecture with ranking loss and asymmetric alignment loss.
The ranking loss aligns the predicted fraud score with the relative order of
HALO, providing an extra robustness guarantee by comparing heterophily among
non-adjacent nodes. Moreover, the asymmetric alignment loss effectively
utilizes structural information while alleviating the feature-smooth effects of
GNNs. Extensive experiments on 6 datasets demonstrate that HUGE significantly
outperforms competitors, showcasing its effectiveness and robustness.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 22:07:36 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Feb 2025 06:15:48 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 03:59:44 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Pan",
"Junjun",
""
],
[
"Liu",
"Yixin",
""
],
[
"Zheng",
"Xin",
""
],
[
"Zheng",
"Yizhen",
""
],
[
"Liew",
"Alan Wee-Chung",
""
],
[
"Li",
"Fuyi",
""
],
[
"Pan",
"Shirui",
""
]
] | TITLE: A Label-Free Heterophily-Guided Approach for Unsupervised Graph Fraud
Detection
ABSTRACT: Graph fraud detection (GFD) has rapidly advanced in protecting online
services by identifying malicious fraudsters. Recent supervised GFD research
highlights that heterophilic connections between fraudsters and users can
greatly impact detection performance, since fraudsters tend to camouflage
themselves by building more connections to benign users. Despite the promising
performance of supervised GFD methods, the reliance on labels limits their
applications to unsupervised scenarios. Additionally, accurately capturing
complex and diverse heterophily patterns without labels poses a further
challenge. To fill the gap, we propose a Heterophily-guided Unsupervised Graph
fraud dEtection approach (HUGE) for unsupervised GFD, which contains two
essential components: a heterophily estimation module and an alignment-based
fraud detection module. In the heterophily estimation module, we design a novel
label-free heterophily metric called HALO, which captures the critical graph
properties for GFD, enabling its outstanding ability to estimate heterophily
from node attributes. In the alignment-based fraud detection module, we develop
a joint MLP-GNN architecture with ranking loss and asymmetric alignment loss.
The ranking loss aligns the predicted fraud score with the relative order of
HALO, providing an extra robustness guarantee by comparing heterophily among
non-adjacent nodes. Moreover, the asymmetric alignment loss effectively
utilizes structural information while alleviating the feature-smooth effects of
GNNs. Extensive experiments on 6 datasets demonstrate that HUGE significantly
outperforms competitors, showcasing its effectiveness and robustness.
|
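The record above aligns predicted fraud scores with the relative order of a label-free heterophily estimate (HALO). The snippet below sketches such a ranking objective with PyTorch's margin ranking loss over sampled node pairs; the pair sampling, margin, and the HALO values themselves are assumptions.

```python
# Pairwise ranking sketch: push fraud-score order to match a heterophily order.
import torch
import torch.nn.functional as F

def heterophily_ranking_loss(scores, halo, margin=0.1, n_pairs=1024):
    """scores, halo: (N,) tensors; pairs of (possibly non-adjacent) nodes are
    sampled and the score ordering is aligned with the HALO ordering."""
    N = scores.shape[0]
    i = torch.randint(0, N, (n_pairs,))
    j = torch.randint(0, N, (n_pairs,))
    target = torch.sign(halo[i] - halo[j])          # +1 / -1 ordering signal
    mask = target != 0                              # drop ties
    return F.margin_ranking_loss(scores[i][mask], scores[j][mask],
                                 target[mask], margin=margin)
```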
2502.15540 | Milad Sefidgaran | Milad Sefidgaran and Abdellatif Zaidi and Piotr Krasnowski | Generalization Guarantees for Representation Learning via Data-Dependent
Gaussian Mixture Priors | Accepted as a Spotlight Paper at ICLR 2025 | null | null | null | stat.ML cs.IT cs.LG math.IT | http://creativecommons.org/licenses/by/4.0/ | We establish in-expectation and tail bounds on the generalization error of
representation learning type algorithms. The bounds are in terms of the
relative entropy between the distribution of the representations extracted from
the training and "test'' datasets and a data-dependent symmetric prior, i.e.,
the Minimum Description Length (MDL) of the latent variables for the training
and test datasets. Our bounds are shown to reflect the "structure" and
"simplicity'' of the encoder and significantly improve upon the few existing
ones for the studied model. We then use our in-expectation bound to devise a
suitable data-dependent regularizer; and we investigate thoroughly the
important question of the selection of the prior. We propose a systematic
approach to simultaneously learning a data-dependent Gaussian mixture prior and
using it as a regularizer. Interestingly, we show that a weighted attention
mechanism emerges naturally in this procedure. Our experiments show that our
approach outperforms the now popular Variational Information Bottleneck (VIB)
method as well as the recent Category-Dependent VIB (CDVIB).
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 15:43:31 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 22:37:44 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Sefidgaran",
"Milad",
""
],
[
"Zaidi",
"Abdellatif",
""
],
[
"Krasnowski",
"Piotr",
""
]
] | TITLE: Generalization Guarantees for Representation Learning via Data-Dependent
Gaussian Mixture Priors
ABSTRACT: We establish in-expectation and tail bounds on the generalization error of
representation learning type algorithms. The bounds are in terms of the
relative entropy between the distribution of the representations extracted from
the training and "test'' datasets and a data-dependent symmetric prior, i.e.,
the Minimum Description Length (MDL) of the latent variables for the training
and test datasets. Our bounds are shown to reflect the "structure" and
"simplicity'' of the encoder and significantly improve upon the few existing
ones for the studied model. We then use our in-expectation bound to devise a
suitable data-dependent regularizer; and we investigate thoroughly the
important question of the selection of the prior. We propose a systematic
approach to simultaneously learning a data-dependent Gaussian mixture prior and
using it as a regularizer. Interestingly, we show that a weighted attention
mechanism emerges naturally in this procedure. Our experiments show that our
approach outperforms the now popular Variational Information Bottleneck (VIB)
method as well as the recent Category-Dependent VIB (CDVIB).
|
2502.18435 | Yizhe Zhang | Yizhe Zhang, Richard Bai, Zijin Gu, Ruixiang Zhang, Jiatao Gu,
Emmanuel Abbe, Samy Bengio, Navdeep Jaitly | Reversal Blessing: Thinking Backward May Outpace Thinking Forward in
Multi-choice Questions | null | null | null | null | cs.CL cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language models usually use left-to-right (L2R) autoregressive factorization.
However, L2R factorization may not always be the best inductive bias.
Therefore, we investigate whether alternative factorizations of the text
distribution could be beneficial in some tasks. We investigate right-to-left
(R2L) training as a compelling alternative, focusing on multiple-choice
questions (MCQs) as a test bed for knowledge extraction and reasoning. Through
extensive experiments across various model sizes (2B-8B parameters) and
training datasets, we find that R2L models can significantly outperform L2R
models on several MCQ benchmarks, including logical reasoning, commonsense
understanding, and truthfulness assessment tasks. Our analysis reveals that
this performance difference may be fundamentally linked to multiple factors
including calibration, computability and directional conditional entropy. We
ablate the impact of these factors through controlled simulation studies using
arithmetic tasks, where the impacting factors can be better disentangled. Our
work demonstrates that exploring alternative factorizations of the text
distribution can lead to improvements in LLM capabilities and provides
theoretical insights into optimal factorization towards approximating human
language distribution, and when each reasoning order might be more
advantageous.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 18:30:25 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 03:25:21 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhang",
"Yizhe",
""
],
[
"Bai",
"Richard",
""
],
[
"Gu",
"Zijin",
""
],
[
"Zhang",
"Ruixiang",
""
],
[
"Gu",
"Jiatao",
""
],
[
"Abbe",
"Emmanuel",
""
],
[
"Bengio",
"Samy",
""
],
[
"Jaitly",
"Navdeep",
""
]
] | TITLE: Reversal Blessing: Thinking Backward May Outpace Thinking Forward in
Multi-choice Questions
ABSTRACT: Language models usually use left-to-right (L2R) autoregressive factorization.
However, L2R factorization may not always be the best inductive bias.
Therefore, we investigate whether alternative factorizations of the text
distribution could be beneficial in some tasks. We investigate right-to-left
(R2L) training as a compelling alternative, focusing on multiple-choice
questions (MCQs) as a test bed for knowledge extraction and reasoning. Through
extensive experiments across various model sizes (2B-8B parameters) and
training datasets, we find that R2L models can significantly outperform L2R
models on several MCQ benchmarks, including logical reasoning, commonsense
understanding, and truthfulness assessment tasks. Our analysis reveals that
this performance difference may be fundamentally linked to multiple factors
including calibration, computability and directional conditional entropy. We
ablate the impact of these factors through controlled simulation studies using
arithmetic tasks, where the impacting factors can be better disentangled. Our
work demonstrates that exploring alternative factorizations of the text
distribution can lead to improvements in LLM capabilities and provides
theoretical insights into optimal factorization towards approximating human
language distribution, and when each reasoning order might be more
advantageous.
|
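The record above evaluates right-to-left (R2L) factorization. Mechanically, R2L training only requires reversing token order before the usual next-token objective, as in the sketch below; the MCQ scoring helper is a hypothetical illustration of evaluating an option's joint probability under the reversed model.

```python
# Minimal R2L sketch: reverse token order, then train/evaluate as usual.
import torch

def r2l_batch(token_ids: torch.Tensor) -> torch.Tensor:
    """token_ids: (B, T) L2R token ids -> reversed copies for R2L training."""
    return torch.flip(token_ids, dims=[1])

def r2l_mcq_score(logprob_fn, question_ids, answer_ids):
    """Score one MCQ option under the reverse factorization: feed the reversed
    (question + answer) sequence to a model trained on reversed text and sum
    its next-token log-probabilities (logprob_fn is a placeholder)."""
    seq = torch.cat([question_ids, answer_ids], dim=-1)
    return logprob_fn(torch.flip(seq, dims=[-1]))
```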
2502.18637 | Dominik Va\v{s}inka | Dominik Va\v{s}inka, Filip Jur\'a\v{n}, Jarom\'ir B\v{e}hal, and
Miroslav Je\v{z}ek | From Stars to Molecules: AI Guided Device-Agnostic Super-Resolution
Imaging | 10 pages, 7 figures | null | null | null | physics.optics astro-ph.IM quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Super-resolution imaging has revolutionized the study of systems ranging from
molecular structures to distant galaxies. However, existing super-resolution
methods require extensive calibration and retraining for each imaging setup,
limiting their practical deployment. We introduce a device-agnostic
deep-learning framework for super-resolution imaging of point-like emitters
that eliminates the need for calibration data or explicit knowledge of optical
system parameters. Our model is trained on a diverse, numerically simulated
dataset encompassing a broad range of imaging conditions, enabling
generalization across different optical setups. Once trained, it reconstructs
super-resolved images directly from a single resolution-limited camera frame
with superior accuracy and computational efficiency compared to
state-of-the-art methods. We experimentally validate our approach using a
custom microscopy setup with ground-truth emitter positions. We also
demonstrate its versatility on astronomical and single-molecule localization
microscopy datasets, achieving unprecedented resolution without prior
information. Our findings establish a pathway toward universal,
calibration-free super-resolution imaging, expanding its applicability across
scientific disciplines.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 20:54:27 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 17:15:36 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Vašinka",
"Dominik",
""
],
[
"Juráň",
"Filip",
""
],
[
"Běhal",
"Jaromír",
""
],
[
"Ježek",
"Miroslav",
""
]
] | TITLE: From Stars to Molecules: AI Guided Device-Agnostic Super-Resolution
Imaging
ABSTRACT: Super-resolution imaging has revolutionized the study of systems ranging from
molecular structures to distant galaxies. However, existing super-resolution
methods require extensive calibration and retraining for each imaging setup,
limiting their practical deployment. We introduce a device-agnostic
deep-learning framework for super-resolution imaging of point-like emitters
that eliminates the need for calibration data or explicit knowledge of optical
system parameters. Our model is trained on a diverse, numerically simulated
dataset encompassing a broad range of imaging conditions, enabling
generalization across different optical setups. Once trained, it reconstructs
super-resolved images directly from a single resolution-limited camera frame
with superior accuracy and computational efficiency compared to
state-of-the-art methods. We experimentally validate our approach using a
custom microscopy setup with ground-truth emitter positions. We also
demonstrate its versatility on astronomical and single-molecule localization
microscopy datasets, achieving unprecedented resolution without prior
information. Our findings establish a pathway toward universal,
calibration-free super-resolution imaging, expanding its applicability across
scientific disciplines.
|
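The record above trains on numerically simulated frames spanning diverse imaging conditions. The sketch below generates such synthetic training frames with a Gaussian PSF approximation, random emitter counts, background, and Poisson noise; all parameter ranges are assumptions.

```python
# Synthetic point-emitter frames under a Gaussian PSF with shot noise.
import numpy as np

def simulate_frame(size=64, max_emitters=8, seed=0):
    rng = np.random.default_rng(seed)
    n = rng.integers(1, max_emitters + 1)
    xs, ys = rng.uniform(0, size, n), rng.uniform(0, size, n)
    brightness = rng.uniform(200, 2000, n)
    sigma = rng.uniform(1.0, 3.0)                      # unknown PSF width
    yy, xx = np.mgrid[0:size, 0:size]
    frame = np.zeros((size, size))
    for x, y, b in zip(xs, ys, brightness):
        frame += b * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    frame += rng.uniform(5, 50)                        # background level
    frame = rng.poisson(frame).astype(np.float64)      # shot noise
    return frame, np.stack([xs, ys], axis=1)           # image + ground-truth positions
```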
2503.00057 | Ranjan Sapkota | Ranjan Sapkota, Manoj Karkee | Improved YOLOv12 with LLM-Generated Synthetic Data for Enhanced Apple
Detection and Benchmarking Against YOLOv11 and YOLOv10 | 8 pages, 5 Figures, 2 Tables | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | This study evaluated the performance of the YOLOv12 object detection model
and compared it against YOLOv11 and YOLOv10 for apple detection
in commercial orchards, with model training completed entirely on
synthetic images generated by Large Language Models (LLMs). The YOLOv12n
configuration achieved the highest precision at 0.916, the highest recall at
0.969, and the highest mean Average Precision (mAP@50) at 0.978. In comparison,
the YOLOv11 series was led by YOLO11x, which achieved the highest precision at
0.857, recall at 0.85, and mAP@50 at 0.91. For the YOLOv10 series, YOLOv10b and
YOLOv10l both achieved the highest precision at 0.85, with YOLOv10n achieving
the highest recall at 0.8 and mAP@50 at 0.89. These findings demonstrated that
YOLOv12, when trained on realistic LLM-generated datasets, surpassed its
predecessors in key performance metrics. The technique also offered a
cost-effective solution by reducing the need for extensive manual data
collection in the agricultural field. In addition, this study compared the
computational efficiency of all versions of YOLOv12, v11 and v10, where
YOLOv11n reported the lowest inference time at 4.7 ms, compared to YOLOv12n's
5.6 ms and YOLOv10n's 5.9 ms. Although YOLOv12 is new and more accurate than
YOLOv11, and YOLOv10, YOLO11n still stays the fastest YOLO model among YOLOv10,
YOLOv11 and YOLOv12 series of models. (Index: YOLOv12, YOLOv11, YOLOv10,
YOLOv13, YOLOv14, YOLOv15, YOLOE, YOLO Object detection)
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 20:24:01 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 18:04:39 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Sapkota",
"Ranjan",
""
],
[
"Karkee",
"Manoj",
""
]
] | TITLE: Improved YOLOv12 with LLM-Generated Synthetic Data for Enhanced Apple
Detection and Benchmarking Against YOLOv11 and YOLOv10
ABSTRACT: This study evaluated the performance of the YOLOv12 object detection model
and compared it against YOLOv11 and YOLOv10 for apple detection
in commercial orchards, with model training completed entirely on
synthetic images generated by Large Language Models (LLMs). The YOLOv12n
configuration achieved the highest precision at 0.916, the highest recall at
0.969, and the highest mean Average Precision (mAP@50) at 0.978. In comparison,
the YOLOv11 series was led by YOLO11x, which achieved the highest precision at
0.857, recall at 0.85, and mAP@50 at 0.91. For the YOLOv10 series, YOLOv10b and
YOLOv10l both achieved the highest precision at 0.85, with YOLOv10n achieving
the highest recall at 0.8 and mAP@50 at 0.89. These findings demonstrated that
YOLOv12, when trained on realistic LLM-generated datasets, surpassed its
predecessors in key performance metrics. The technique also offered a
cost-effective solution by reducing the need for extensive manual data
collection in the agricultural field. In addition, this study compared the
computational efficiency of all versions of YOLOv12, v11 and v10, where
YOLOv11n reported the lowest inference time at 4.7 ms, compared to YOLOv12n's
5.6 ms and YOLOv10n's 5.9 ms. Although YOLOv12 is newer and more accurate than
YOLOv11 and YOLOv10, YOLOv11n remains the fastest model among the YOLOv10,
YOLOv11, and YOLOv12 series. (Index: YOLOv12, YOLOv11, YOLOv10,
YOLOv13, YOLOv14, YOLOv15, YOLOE, YOLO Object detection)
|
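As an illustrative aside on the record above: fine-tuning and validating a small YOLO detector on a custom apple-detection dataset with the Ultralytics Python API could look roughly like the sketch below. The checkpoint alias and the `apples.yaml` dataset file are placeholders, not artifacts from the study, and the study's actual training configuration is not reproduced here.

```python
# Hedged sketch (not the study's code): fine-tune and evaluate a small YOLO model
# on a custom detection dataset via the Ultralytics API. "apples.yaml" is a
# hypothetical dataset config; the yolo12n.pt alias assumes a recent Ultralytics release.
from ultralytics import YOLO

model = YOLO("yolo12n.pt")                       # swap in yolo11n.pt / yolov10n.pt to compare
model.train(data="apples.yaml", epochs=100, imgsz=640)

metrics = model.val()                            # evaluates on the validation split
print("precision:", metrics.box.mp)              # mean precision over classes
print("recall:   ", metrics.box.mr)              # mean recall over classes
print("mAP@50:   ", metrics.box.map50)           # headline metric reported in the abstract
```

Re-running the same loop per model family yields the precision/recall/mAP comparison the abstract summarizes.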
2503.01130 | Runmao Yao | Runmao Yao, Yi Du, Zhuoqun Chen, Haoze Zheng, Chen Wang | AirRoom: Objects Matter in Room Reidentification | Paper accepted at CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Room reidentification (ReID) is a challenging yet essential task with
numerous applications in fields such as augmented reality (AR) and homecare
robotics. Existing visual place recognition (VPR) methods, which typically rely
on global descriptors or aggregate local features, often struggle in cluttered
indoor environments densely populated with man-made objects. These methods tend
to overlook the crucial role of object-oriented information. To address this,
we propose AirRoom, an object-aware pipeline that integrates multi-level
object-oriented information (from global context to object patches, object
segmentation, and keypoints) utilizing a coarse-to-fine retrieval approach.
Extensive experiments on four newly constructed datasets (MPReID, HMReID,
GibsonReID, and ReplicaReID) demonstrate that AirRoom outperforms
state-of-the-art (SOTA) models across nearly all evaluation metrics, with
improvements ranging from 6% to 80%. Moreover, AirRoom exhibits significant
flexibility, allowing various modules within the pipeline to be substituted
with different alternatives without compromising overall performance. It also
shows robust and consistent performance under diverse viewpoint variations.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 03:20:08 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 01:13:23 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yao",
"Runmao",
""
],
[
"Du",
"Yi",
""
],
[
"Chen",
"Zhuoqun",
""
],
[
"Zheng",
"Haoze",
""
],
[
"Wang",
"Chen",
""
]
] | TITLE: AirRoom: Objects Matter in Room Reidentification
ABSTRACT: Room reidentification (ReID) is a challenging yet essential task with
numerous applications in fields such as augmented reality (AR) and homecare
robotics. Existing visual place recognition (VPR) methods, which typically rely
on global descriptors or aggregate local features, often struggle in cluttered
indoor environments densely populated with man-made objects. These methods tend
to overlook the crucial role of object-oriented information. To address this,
we propose AirRoom, an object-aware pipeline that integrates multi-level
object-oriented information (from global context to object patches, object
segmentation, and keypoints) utilizing a coarse-to-fine retrieval approach.
Extensive experiments on four newly constructed datasets (MPReID, HMReID,
GibsonReID, and ReplicaReID) demonstrate that AirRoom outperforms
state-of-the-art (SOTA) models across nearly all evaluation metrics, with
improvements ranging from 6% to 80%. Moreover, AirRoom exhibits significant
flexibility, allowing various modules within the pipeline to be substituted
with different alternatives without compromising overall performance. It also
shows robust and consistent performance under diverse viewpoint variations.
|
2503.01448 | Xiangjun Tang | Xiangjun Tang, Biao Zhang and Peter Wonka | Generative Human Geometry Distribution | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Realistic human geometry generation is an important yet challenging task,
requiring both the preservation of fine clothing details and the accurate
modeling of clothing-pose interactions. Geometry distributions, which can model
the geometry of a single human as a distribution, provide a promising
representation for high-fidelity synthesis. However, applying geometry
distributions for human generation requires learning a dataset-level
distribution over numerous individual geometry distributions. To address the
resulting challenges, we propose a novel 3D human generative framework that,
for the first time, models the distribution of human geometry distributions.
Our framework operates in two stages: first, generating the human geometry
distribution, and second, synthesizing high-fidelity humans by sampling from
this distribution. We validate our method on two tasks: pose-conditioned 3D
human generation and single-view-based novel pose generation. Experimental
results demonstrate that our approach achieves the best quantitative results in
terms of realism and geometric fidelity, outperforming state-of-the-art
generative methods.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 11:55:19 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 08:48:44 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Tang",
"Xiangjun",
""
],
[
"Zhang",
"Biao",
""
],
[
"Wonka",
"Peter",
""
]
] | TITLE: Generative Human Geometry Distribution
ABSTRACT: Realistic human geometry generation is an important yet challenging task,
requiring both the preservation of fine clothing details and the accurate
modeling of clothing-pose interactions. Geometry distributions, which can model
the geometry of a single human as a distribution, provide a promising
representation for high-fidelity synthesis. However, applying geometry
distributions for human generation requires learning a dataset-level
distribution over numerous individual geometry distributions. To address the
resulting challenges, we propose a novel 3D human generative framework that,
for the first time, models the distribution of human geometry distributions.
Our framework operates in two stages: first, generating the human geometry
distribution, and second, synthesizing high-fidelity humans by sampling from
this distribution. We validate our method on two tasks: pose-conditioned 3D
human generation and single-view-based novel pose generation. Experimental
results demonstrate that our approach achieves the best quantitative results in
terms of realism and geometric fidelity, outperforming state-of-the-art
generative methods.
|
2503.01754 | Guande Wu | Guande Wu, Huan Song, Yawei Wang, Qiaojing Yan, Yijun Tian, Lin Lee
Cheong, Panpan Xu | SDRT: Enhance Vision-Language Models by Self-Distillation with Diverse
Reasoning Traces | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Reasoning is increasingly crucial for various tasks. While chain-of-thought
prompting enables large language models to leverage reasoning effectively,
harnessing the reasoning capabilities of Vision-Language Models (VLMs) remains
challenging. To solve this problem, we propose a novel self-distillation
framework that enhances the reasoning capabilities of the model. The proposed
framework introduces several key innovations. We start by employing a prompt
library tailored to visual reasoning tasks to generate diverse in-context
questions and utilize a two-step reasoning procedure to derive reasoning-guided
responses. These responses are then used for self-distillation, enabling the
model to internalize the reasoning process. Additionally, we improve the model
architecture with several innovative components, including an intervention
adapter for efficient parameter updates, a cross-modal skip connection to
facilitate information exchange between modalities, and an ensemble learning
algorithm to integrate diverse reasoning from multiple in-context questions.
Extensive experiments show that our method significantly improves the baseline
performance across five VQA datasets.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 17:24:42 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 08:05:25 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 18:35:44 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wu",
"Guande",
""
],
[
"Song",
"Huan",
""
],
[
"Wang",
"Yawei",
""
],
[
"Yan",
"Qiaojing",
""
],
[
"Tian",
"Yijun",
""
],
[
"Cheong",
"Lin Lee",
""
],
[
"Xu",
"Panpan",
""
]
] | TITLE: SDRT: Enhance Vision-Language Models by Self-Distillation with Diverse
Reasoning Traces
ABSTRACT: Reasoning is increasingly crucial for various tasks. While chain-of-thought
prompting enables large language models to leverage reasoning effectively,
harnessing the reasoning capabilities of Vision-Language Models (VLMs) remains
challenging. To solve this problem, we propose a novel self-distillation
framework that enhances the reasoning capabilities of the model. The proposed
framework introduces several key innovations. We start by employing a prompt
library tailored to visual reasoning tasks to generate diverse in-context
questions and utilize a two-step reasoning procedure to derive reasoning-guided
responses. These responses are then used for self-distillation, enabling the
model to internalize the reasoning process. Additionally, we improve the model
architecture with several innovative components, including an intervention
adapter for efficient parameter updates, a cross-modal skip connection to
facilitate information exchange between modalities, and an ensemble learning
algorithm to integrate diverse reasoning from multiple in-context questions.
Extensive experiments show that our method significantly improves the baseline
performance across five VQA datasets.
|
2503.02593 | Yanlong Xu | Yanlong Xu, Haoxuan Qu, Jun Liu, Wenxiao Zhang, Xun Yang | CMMLoc: Advancing Text-to-PointCloud Localization with
Cauchy-Mixture-Model Based Framework | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The goal of point cloud localization based on linguistic description is to
identify a 3D position using textual description in large urban environments,
which has potential applications in various fields, such as determining the
location for vehicle pickup or goods delivery. Ideally, for a textual
description and its corresponding 3D location, the objects around the 3D
location should be fully described in the text description. However, in
practical scenarios, e.g., vehicle pickup, passengers usually describe only the
part of the most significant and nearby surroundings instead of the entire
environment. In response to this $\textbf{partially relevant}$ challenge, we
propose $\textbf{CMMLoc}$, an uncertainty-aware
$\textbf{C}$auchy-$\textbf{M}$ixture-$\textbf{M}$odel ($\textbf{CMM}$) based
framework for text-to-point-cloud $\textbf{Loc}$alization. To model the
uncertain semantic relations between text and point cloud, we integrate CMM
constraints as a prior during the interaction between the two modalities. We
further design a spatial consolidation scheme to enable adaptive aggregation of
different 3D objects with varying receptive fields. To achieve precise
localization, we propose a cardinal direction integration module alongside a
modality pre-alignment strategy, helping capture the spatial relationships
among objects and bringing the 3D objects closer to the text modality.
Comprehensive experiments validate that CMMLoc outperforms existing methods,
achieving state-of-the-art results on the KITTI360Pose dataset. Codes are
available in this GitHub repository https://github.com/kevin301342/CMMLoc.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:17:17 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 02:11:25 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 00:06:14 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Xu",
"Yanlong",
""
],
[
"Qu",
"Haoxuan",
""
],
[
"Liu",
"Jun",
""
],
[
"Zhang",
"Wenxiao",
""
],
[
"Yang",
"Xun",
""
]
] | TITLE: CMMLoc: Advancing Text-to-PointCloud Localization with
Cauchy-Mixture-Model Based Framework
ABSTRACT: The goal of point cloud localization based on linguistic description is to
identify a 3D position using textual description in large urban environments,
which has potential applications in various fields, such as determining the
location for vehicle pickup or goods delivery. Ideally, for a textual
description and its corresponding 3D location, the objects around the 3D
location should be fully described in the text description. However, in
practical scenarios, e.g., vehicle pickup, passengers usually describe only
part of the most significant and nearby surroundings instead of the entire
environment. In response to this $\textbf{partially relevant}$ challenge, we
propose $\textbf{CMMLoc}$, an uncertainty-aware
$\textbf{C}$auchy-$\textbf{M}$ixture-$\textbf{M}$odel ($\textbf{CMM}$) based
framework for text-to-point-cloud $\textbf{Loc}$alization. To model the
uncertain semantic relations between text and point cloud, we integrate CMM
constraints as a prior during the interaction between the two modalities. We
further design a spatial consolidation scheme to enable adaptive aggregation of
different 3D objects with varying receptive fields. To achieve precise
localization, we propose a cardinal direction integration module alongside a
modality pre-alignment strategy, helping capture the spatial relationships
among objects and bringing the 3D objects closer to the text modality.
Comprehensive experiments validate that CMMLoc outperforms existing methods,
achieving state-of-the-art results on the KITTI360Pose dataset. Codes are
available in this GitHub repository https://github.com/kevin301342/CMMLoc.
|
2503.03644 | Shuo Li | Xiaojun Bi, Shuo Li, Ziyue Wang, Fuwen Luo, Weizheng Qiao, Lu Han,
Ziwei Sun, Peng Li, Yang Liu | DongbaMIE: A Multimodal Information Extraction Dataset for Evaluating
Semantic Understanding of Dongba Pictograms | Our dataset can be obtained from:
https://github.com/thinklis/DongbaMIE | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Dongba pictographs are the only pictographs still in use in the world. They
have pictorial ideographic features, and their symbols carry rich cultural and
contextual information. Due to the lack of relevant datasets, existing research
has difficulty in advancing the study of semantic understanding of Dongba
pictographs. To this end, we propose \textbf{DongbaMIE}, the first multimodal
dataset for semantic understanding and extraction of Dongba pictographs,
consisting of Dongba pictograph images and corresponding Chinese semantic
annotations. DongbaMIE contains 23,530 sentence-level and 2,539 paragraph-level
images, covering four semantic dimensions: objects, actions, relations, and
attributes. We systematically evaluate multimodal large language models
(MLLMs), such as GPT-4o, Gemini-2.0, and Qwen2-VL. Experimental results show
that the best F1 scores of the proprietary models GPT-4o and Gemini on the object
extraction task are only 3.16 and 3.11, respectively. The open-source model
Qwen2-VL achieves only 11.49 after supervised fine-tuning. These results suggest
that current MLLMs still face significant challenges in accurately recognizing
diverse semantic information in Dongba pictographs.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 16:20:53 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 11:36:33 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 12:16:23 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Bi",
"Xiaojun",
""
],
[
"Li",
"Shuo",
""
],
[
"Wang",
"Ziyue",
""
],
[
"Luo",
"Fuwen",
""
],
[
"Qiao",
"Weizheng",
""
],
[
"Han",
"Lu",
""
],
[
"Sun",
"Ziwei",
""
],
[
"Li",
"Peng",
""
],
[
"Liu",
"Yang",
""
]
] | TITLE: DongbaMIE: A Multimodal Information Extraction Dataset for Evaluating
Semantic Understanding of Dongba Pictograms
ABSTRACT: Dongba pictographs are the only pictographs still in use in the world. They
have pictorial ideographic features, and their symbols carry rich cultural and
contextual information. Due to the lack of relevant datasets, existing research
has difficulty in advancing the study of semantic understanding of Dongba
pictographs. To this end, we propose \textbf{DongbaMIE}, the first multimodal
dataset for semantic understanding and extraction of Dongba pictographs,
consisting of Dongba pictograph images and corresponding Chinese semantic
annotations. DongbaMIE contains 23,530 sentence-level and 2,539 paragraph-level
images, covering four semantic dimensions: objects, actions, relations, and
attributes. We systematically evaluate multimodal large language models
(MLLMs), such as GPT-4o, Gemini-2.0, and Qwen2-VL. Experimental results show
that the best F1 scores of the proprietary models GPT-4o and Gemini on the object
extraction task are only 3.16 and 3.11, respectively. The open-source model
Qwen2-VL achieves only 11.49 after supervised fine-tuning. These results suggest
that current MLLMs still face significant challenges in accurately recognizing
diverse semantic information in Dongba pictographs.
|
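As an illustrative aside on the record above: extraction quality of the kind reported there (per-dimension F1) is commonly computed as set-level precision/recall/F1 between predicted and gold annotations. The sketch below is a generic micro-averaged scorer of my own, not the DongbaMIE evaluation code.

```python
# Minimal sketch of set-level F1 scoring for an extraction task: for each image,
# compare the set of predicted objects against the gold annotations, then micro-average.
from typing import Dict, Set

def micro_f1(preds: Dict[str, Set[str]], gold: Dict[str, Set[str]]) -> float:
    tp = fp = fn = 0
    for image_id, gold_objs in gold.items():
        pred_objs = preds.get(image_id, set())
        tp += len(pred_objs & gold_objs)
        fp += len(pred_objs - gold_objs)
        fn += len(gold_objs - pred_objs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy usage: one true positive, one false positive, one false negative -> F1 = 0.5
print(micro_f1({"img1": {"horse", "river"}}, {"img1": {"horse", "sun"}}))
```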
2503.04997 | Paul Krassnig | Paul J. Krassnig and Dieter P. Gruber | ISP-AD: A Large-Scale Real-World Dataset for Advancing Industrial
Anomaly Detection with Synthetic and Real Defects | 26 pages, 6 figures, this preprint has been submitted to the Journal
of Intelligent Manufacturing, the dataset is available at
https://doi.org/10.5281/zenodo.14911043 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic visual inspection using machine learning-based methods plays a key
role in achieving zero-defect policies in industry. Research on anomaly
detection approaches is constrained by the availability of datasets that
represent complex defect appearances and imperfect imaging conditions, which
are typical of industrial processes. Recent benchmarks indicate that most
publicly available datasets are biased towards optimal imaging conditions,
leading to an overestimation of the methods' applicability to real-world
industrial scenarios. To address this gap, we introduce the Industrial Screen
Printing Anomaly Detection dataset (ISP-AD). It presents challenging small and
weakly contrasted surface defects embedded within structured patterns
exhibiting high permitted design variability. To the best of our knowledge, it
is the largest publicly available industrial dataset to date, including both
synthetic and real defects collected directly from the factory floor. In
addition to the evaluation of defect detection performance of recent
unsupervised anomaly detection methods, experiments on a mixed supervised
training approach, incorporating both synthesized and real defects, were
conducted. Even small amounts of injected real defects prove beneficial for
model generalization. Furthermore, starting from training on purely synthetic
defects, emerging real defective samples can be efficiently integrated into
subsequent scalable training. Research findings indicate that supervision by
means of both synthetic and accumulated real defects can complement each other,
meeting demanding industrial inspection requirements such as low false positive
rates and high recall. The presented unsupervised and supervised dataset splits
are designed to emphasize research on unsupervised, self-supervised, and
supervised approaches, enhancing their applicability to industrial settings.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 21:56:31 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 08:40:35 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Krassnig",
"Paul J.",
""
],
[
"Gruber",
"Dieter P.",
""
]
] | TITLE: ISP-AD: A Large-Scale Real-World Dataset for Advancing Industrial
Anomaly Detection with Synthetic and Real Defects
ABSTRACT: Automatic visual inspection using machine learning-based methods plays a key
role in achieving zero-defect policies in industry. Research on anomaly
detection approaches is constrained by the availability of datasets that
represent complex defect appearances and imperfect imaging conditions, which
are typical of industrial processes. Recent benchmarks indicate that most
publicly available datasets are biased towards optimal imaging conditions,
leading to an overestimation of the methods' applicability to real-world
industrial scenarios. To address this gap, we introduce the Industrial Screen
Printing Anomaly Detection dataset (ISP-AD). It presents challenging small and
weakly contrasted surface defects embedded within structured patterns
exhibiting high permitted design variability. To the best of our knowledge, it
is the largest publicly available industrial dataset to date, including both
synthetic and real defects collected directly from the factory floor. In
addition to the evaluation of defect detection performance of recent
unsupervised anomaly detection methods, experiments on a mixed supervised
training approach, incorporating both synthesized and real defects, were
conducted. Even small amounts of injected real defects prove beneficial for
model generalization. Furthermore, starting from training on purely synthetic
defects, emerging real defective samples can be efficiently integrated into
subsequent scalable training. Research findings indicate that supervision by
means of both synthetic and accumulated real defects can complement each other,
meeting demanding industrial inspection requirements such as low false positive
rates and high recall. The presented unsupervised and supervised dataset splits
are designed to emphasize research on unsupervised, self-supervised, and
supervised approaches, enhancing their applicability to industrial settings.
|
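As an illustrative aside on the record above: one simple way to realize mixed supervised training over a large synthetic-defect set and a small real-defect set is to concatenate the two datasets and oversample the real samples. The sketch below uses standard PyTorch utilities with stand-in tensors and invented weights; it is not the authors' training pipeline.

```python
# Hedged sketch: combine a large synthetic-defect dataset with a small set of real
# defects and oversample the real samples, using standard PyTorch utilities.
import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler, TensorDataset

synthetic = TensorDataset(torch.randn(10000, 3, 64, 64), torch.randint(0, 2, (10000,)))  # stand-in data
real      = TensorDataset(torch.randn(200, 3, 64, 64),  torch.randint(0, 2, (200,)))

combined = ConcatDataset([synthetic, real])
# Give each real sample a higher sampling weight so the small real subset is not drowned out.
weights = torch.cat([torch.full((len(synthetic),), 1.0), torch.full((len(real),), 10.0)])
sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)

loader = DataLoader(combined, batch_size=32, sampler=sampler)
images, labels = next(iter(loader))
print(images.shape, labels.shape)  # torch.Size([32, 3, 64, 64]) torch.Size([32])
```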
2503.07459 | Xiangru Tang | Xiangru Tang, Daniel Shao, Jiwoong Sohn, Jiapeng Chen, Jiayi Zhang,
Jinyu Xiang, Fang Wu, Yilun Zhao, Chenglin Wu, Wenqi Shi, Arman Cohan, Mark
Gerstein | MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for
Complex Medical Reasoning | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have shown impressive performance on existing
medical question-answering benchmarks. This high performance makes it
increasingly difficult to meaningfully evaluate and differentiate advanced
methods. We present MedAgentsBench, a benchmark that focuses on challenging
medical questions requiring multi-step clinical reasoning, diagnosis
formulation, and treatment planning-scenarios where current models still
struggle despite their strong performance on standard tests. Drawing from seven
established medical datasets, our benchmark addresses three key limitations in
existing evaluations: (1) the prevalence of straightforward questions where
even base models achieve high performance, (2) inconsistent sampling and
evaluation protocols across studies, and (3) lack of systematic analysis of the
interplay between performance, cost, and inference time. Through experiments
with various base models and reasoning methods, we demonstrate that the latest
thinking models, DeepSeek R1 and OpenAI o3, exhibit exceptional performance in
complex medical reasoning tasks. Additionally, advanced search-based agent
methods offer promising performance-to-cost ratios compared to traditional
approaches. Our analysis reveals substantial performance gaps between model
families on complex questions and identifies optimal model selections for
different computational constraints. Our benchmark and evaluation framework are
publicly available at https://github.com/gersteinlab/medagents-benchmark.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 15:38:44 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 01:30:56 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Tang",
"Xiangru",
""
],
[
"Shao",
"Daniel",
""
],
[
"Sohn",
"Jiwoong",
""
],
[
"Chen",
"Jiapeng",
""
],
[
"Zhang",
"Jiayi",
""
],
[
"Xiang",
"Jinyu",
""
],
[
"Wu",
"Fang",
""
],
[
"Zhao",
"Yilun",
""
],
[
"Wu",
"Chenglin",
""
],
[
"Shi",
"Wenqi",
""
],
[
"Cohan",
"Arman",
""
],
[
"Gerstein",
"Mark",
""
]
] | TITLE: MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for
Complex Medical Reasoning
ABSTRACT: Large Language Models (LLMs) have shown impressive performance on existing
medical question-answering benchmarks. This high performance makes it
increasingly difficult to meaningfully evaluate and differentiate advanced
methods. We present MedAgentsBench, a benchmark that focuses on challenging
medical questions requiring multi-step clinical reasoning, diagnosis
formulation, and treatment planning-scenarios where current models still
struggle despite their strong performance on standard tests. Drawing from seven
established medical datasets, our benchmark addresses three key limitations in
existing evaluations: (1) the prevalence of straightforward questions where
even base models achieve high performance, (2) inconsistent sampling and
evaluation protocols across studies, and (3) lack of systematic analysis of the
interplay between performance, cost, and inference time. Through experiments
with various base models and reasoning methods, we demonstrate that the latest
thinking models, DeepSeek R1 and OpenAI o3, exhibit exceptional performance in
complex medical reasoning tasks. Additionally, advanced search-based agent
methods offer promising performance-to-cost ratios compared to traditional
approaches. Our analysis reveals substantial performance gaps between model
families on complex questions and identifies optimal model selections for
different computational constraints. Our benchmark and evaluation framework are
publicly available at https://github.com/gersteinlab/medagents-benchmark.
|
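As an illustrative aside on the record above: the performance-cost-latency analysis it mentions reduces to aggregating per-question result records by method. The sketch below shows such an aggregation with invented field names and toy values; it is not the benchmark's own harness.

```python
# Illustrative aggregator for a QA benchmark: accuracy, total cost, and mean latency
# per method, computed from a list of per-question result records (fields are invented).
from collections import defaultdict

results = [
    {"method": "cot",   "correct": True,  "cost_usd": 0.002, "latency_s": 1.1},
    {"method": "cot",   "correct": False, "cost_usd": 0.002, "latency_s": 1.3},
    {"method": "agent", "correct": True,  "cost_usd": 0.020, "latency_s": 6.4},
]

by_method = defaultdict(list)
for r in results:
    by_method[r["method"]].append(r)

for method, rows in by_method.items():
    acc = sum(r["correct"] for r in rows) / len(rows)
    cost = sum(r["cost_usd"] for r in rows)
    latency = sum(r["latency_s"] for r in rows) / len(rows)
    print(f"{method}: accuracy={acc:.2f}, total_cost=${cost:.3f}, mean_latency={latency:.1f}s")
```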
2503.07645 | Hongyuan Yang | Hongyuan Yang, Siqi Peng, Akihiro Yamamoto | BicliqueEncoder: An Efficient Method for Link Prediction in Bipartite
Networks using Formal Concept Analysis and Transformer Encoder | 33 pages, 8 figures | null | null | null | cs.LG cs.SI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We propose a novel and efficient method for link prediction in bipartite
networks, using \textit{formal concept analysis} (FCA) and the Transformer
encoder. Link prediction in bipartite networks finds practical applications in
various domains such as product recommendation in online sales, and prediction
of chemical-disease interaction in medical science. Since for link prediction,
the topological structure of a network contains valuable information, many
approaches focus on extracting structural features and then utilizing them for
link prediction. Bi-cliques, as a type of structural feature of bipartite
graphs, can be utilized for link prediction. Although several link prediction
methods utilizing bi-cliques have been proposed and perform well in rather
small datasets, all of them face challenges with scalability when dealing with
large datasets since they demand substantial computational resources. This
limits the practical utility of these approaches in real-world applications. To
overcome the limitation, we introduce a novel approach employing iceberg
concept lattices and the Transformer encoder. Our method requires fewer
computational resources, making it suitable for large-scale datasets while
maintaining high prediction performance. We conduct experiments on five large
real-world datasets that exceed the capacity of previous bi-clique-based
approaches to demonstrate the efficacy of our method. Additionally, we perform
supplementary experiments on five small datasets to compare with the previous
bi-clique-based methods for bipartite link prediction and demonstrate that our
method is more efficient than the previous ones.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 04:47:37 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 15:31:27 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yang",
"Hongyuan",
""
],
[
"Peng",
"Siqi",
""
],
[
"Yamamoto",
"Akihiro",
""
]
] | TITLE: BicliqueEncoder: An Efficient Method for Link Prediction in Bipartite
Networks using Formal Concept Analysis and Transformer Encoder
ABSTRACT: We propose a novel and efficient method for link prediction in bipartite
networks, using \textit{formal concept analysis} (FCA) and the Transformer
encoder. Link prediction in bipartite networks finds practical applications in
various domains such as product recommendation in online sales, and prediction
of chemical-disease interaction in medical science. Since for link prediction,
the topological structure of a network contains valuable information, many
approaches focus on extracting structural features and then utilizing them for
link prediction. Bi-cliques, as a type of structural feature of bipartite
graphs, can be utilized for link prediction. Although several link prediction
methods utilizing bi-cliques have been proposed and perform well in rather
small datasets, all of them face challenges with scalability when dealing with
large datasets since they demand substantial computational resources. This
limits the practical utility of these approaches in real-world applications. To
overcome the limitation, we introduce a novel approach employing iceberg
concept lattices and the Transformer encoder. Our method requires fewer
computational resources, making it suitable for large-scale datasets while
maintaining high prediction performance. We conduct experiments on five large
real-world datasets that exceed the capacity of previous bi-clique-based
approaches to demonstrate the efficacy of our method. Additionally, we perform
supplementary experiments on five small datasets to compare with the previous
bi-clique-based methods for bipartite link prediction and demonstrate that our
method is more efficient than the previous ones.
|
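As an illustrative aside on the record above: formal concept analysis over a bipartite object-attribute context enumerates closed pairs (formal concepts), which correspond to maximal bicliques of the bipartite graph. The brute-force sketch below illustrates the notion on a toy context only; the paper's iceberg-lattice construction is far more scalable and is not reproduced here.

```python
# Brute-force formal concept enumeration on a toy object-attribute context.
# A formal concept is a pair (A, B) with A = objects sharing every attribute in B
# and B = attributes shared by every object in A; these are the maximal bicliques.
from itertools import combinations

context = {          # object -> set of attributes (toy bipartite incidence)
    "u1": {"i1", "i2"},
    "u2": {"i2", "i3"},
    "u3": {"i1", "i2", "i3"},
}
attributes = set().union(*context.values())

def extent(B):   # objects having every attribute in B
    return {o for o, attrs in context.items() if B <= attrs}

def intent(A):   # attributes shared by every object in A
    return set.intersection(*(context[o] for o in A)) if A else set(attributes)

concepts = set()
for r in range(len(attributes) + 1):
    for B in combinations(sorted(attributes), r):
        A = extent(set(B))
        concepts.add((frozenset(A), frozenset(intent(A))))

for A, B in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(A), sorted(B))
```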
2503.08144 | Fei Wang | Fei Wang, Chengcheng Chen, Hongyu Chen, Yugang Chang, Weiming Zeng | Bring Remote Sensing Object Detect Into Nature Language Model: Using SFT
Method | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, large language models (LLMs) and vision-language models (VLMs) have
achieved significant success, demonstrating remarkable capabilities in
understanding various images and videos, particularly in classification and
detection tasks. However, due to the substantial differences between remote
sensing images and conventional optical images, these models face considerable
challenges in comprehension, especially in detection tasks. Directly prompting
VLMs with detection instructions often leads to unsatisfactory results. To
address this issue, this letter explores the application of VLMs for object
detection in remote sensing images. Specifically, we constructed supervised
fine-tuning (SFT) datasets using publicly available remote sensing object
detection datasets, including SSDD, HRSID, and NWPU-VHR-10. In these new
datasets, we converted annotation information into JSON-compliant natural
language descriptions, facilitating more effective understanding and training
for the VLM. We then evaluate the detection performance of various fine-tuning
strategies for VLMs and derive optimized model weights for object detection in
remote sensing images. Finally, we evaluate the model's prior knowledge
capabilities using natural language queries. Experimental results demonstrate
that, without modifying the model architecture, remote sensing object detection
can be effectively achieved using natural language alone. Additionally, the
model exhibits the ability to perform certain vision question answering (VQA)
tasks. Our datasets and related code will be released soon.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 08:02:54 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 13:21:00 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wang",
"Fei",
""
],
[
"Chen",
"Chengcheng",
""
],
[
"Chen",
"Hongyu",
""
],
[
"Chang",
"Yugang",
""
],
[
"Zeng",
"Weiming",
""
]
] | TITLE: Bring Remote Sensing Object Detect Into Nature Language Model: Using SFT
Method
ABSTRACT: Recently, large language models (LLMs) and vision-language models (VLMs) have
achieved significant success, demonstrating remarkable capabilities in
understanding various images and videos, particularly in classification and
detection tasks. However, due to the substantial differences between remote
sensing images and conventional optical images, these models face considerable
challenges in comprehension, especially in detection tasks. Directly prompting
VLMs with detection instructions often leads to unsatisfactory results. To
address this issue, this letter explores the application of VLMs for object
detection in remote sensing images. Specifically, we constructed supervised
fine-tuning (SFT) datasets using publicly available remote sensing object
detection datasets, including SSDD, HRSID, and NWPU-VHR-10. In these new
datasets, we converted annotation information into JSON-compliant natural
language descriptions, facilitating more effective understanding and training
for the VLM. We then evaluate the detection performance of various fine-tuning
strategies for VLMs and derive optimized model weights for object detection in
remote sensing images. Finally, we evaluate the model's prior knowledge
capabilities using natural language queries. Experimental results demonstrate
that, without modifying the model architecture, remote sensing object detection
can be effectively achieved using natural language alone. Additionally, the
model exhibits the ability to perform certain vision question answering (VQA)
tasks. Our datasets and related code will be released soon.
|
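As an illustrative aside on the record above: converting detection annotations into JSON-compliant natural-language SFT records could look like the sketch below. The prompt wording, field names, and [x1, y1, x2, y2] box convention are assumptions for illustration, not the paper's exact format.

```python
# Hedged sketch: turn one image's bounding-box annotations into a JSON-compliant
# instruction-tuning record. Prompt text, field names, and the box convention are
# illustrative assumptions only.
import json

def to_sft_record(image_path: str, annotations: list) -> dict:
    response = {
        "objects": [
            {"category": a["category"], "bbox": [round(v, 1) for v in a["bbox"]]}
            for a in annotations
        ]
    }
    return {
        "image": image_path,
        "instruction": "Detect all objects in this remote sensing image and "
                       "return them as JSON with category and bbox fields.",
        "output": json.dumps(response, ensure_ascii=False),
    }

record = to_sft_record(
    "scenes/harbor_001.png",
    [{"category": "ship", "bbox": [34.0, 58.5, 120.2, 96.7]}],
)
print(json.dumps(record, indent=2))
```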
2503.08923 | Anand Menon | Anand Menon, Samit S Miftah, Shamik Kundu, Souvik Kundu, Amisha
Srivastava, Arnab Raha, Gabriel Theodor Sonnenschein, Suvadeep Banerjee,
Deepak Mathaikutty, Kanad Basu | Enhancing Large Language Models for Hardware Verification: A Novel
SystemVerilog Assertion Dataset | 29 Pages | null | null | null | cs.LG cs.CR cs.PL | http://creativecommons.org/licenses/by/4.0/ | Hardware verification is crucial in modern SoC design, consuming around 70%
of development time. SystemVerilog assertions ensure correct functionality.
However, existing industrial practices rely on manual efforts for assertion
generation, which becomes increasingly untenable as hardware systems become
complex. Recent research shows that Large Language Models (LLMs) can automate
this process. However, proprietary SOTA models like GPT-4o often generate
inaccurate assertions and require expensive licenses, while smaller open-source
LLMs need fine-tuning to manage HDL code complexities. To address these issues,
we introduce **VERT**, an open-source dataset designed to enhance SystemVerilog
assertion generation using LLMs. VERT enables researchers in academia and
industry to fine-tune open-source models, outperforming larger proprietary ones
in both accuracy and efficiency while ensuring data privacy through local
fine-tuning and eliminating costly licenses. The dataset is curated by
systematically augmenting variables from open-source HDL repositories to
generate synthetic code snippets paired with corresponding assertions.
Experimental results demonstrate that fine-tuned models like Deepseek Coder
6.7B and Llama 3.1 8B outperform GPT-4o, achieving up to 96.88% improvement
over base models and 24.14% over GPT-4o on platforms including OpenTitan, CVA6,
OpenPiton and Pulpissimo. VERT is available at
https://github.com/AnandMenon12/VERT.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 22:13:26 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Menon",
"Anand",
""
],
[
"Miftah",
"Samit S",
""
],
[
"Kundu",
"Shamik",
""
],
[
"Kundu",
"Souvik",
""
],
[
"Srivastava",
"Amisha",
""
],
[
"Raha",
"Arnab",
""
],
[
"Sonnenschein",
"Gabriel Theodor",
""
],
[
"Banerjee",
"Suvadeep",
""
],
[
"Mathaikutty",
"Deepak",
""
],
[
"Basu",
"Kanad",
""
]
] | TITLE: Enhancing Large Language Models for Hardware Verification: A Novel
SystemVerilog Assertion Dataset
ABSTRACT: Hardware verification is crucial in modern SoC design, consuming around 70%
of development time. SystemVerilog assertions ensure correct functionality.
However, existing industrial practices rely on manual efforts for assertion
generation, which becomes increasingly untenable as hardware systems become
complex. Recent research shows that Large Language Models (LLMs) can automate
this process. However, proprietary SOTA models like GPT-4o often generate
inaccurate assertions and require expensive licenses, while smaller open-source
LLMs need fine-tuning to manage HDL code complexities. To address these issues,
we introduce **VERT**, an open-source dataset designed to enhance SystemVerilog
assertion generation using LLMs. VERT enables researchers in academia and
industry to fine-tune open-source models, outperforming larger proprietary ones
in both accuracy and efficiency while ensuring data privacy through local
fine-tuning and eliminating costly licenses. The dataset is curated by
systematically augmenting variables from open-source HDL repositories to
generate synthetic code snippets paired with corresponding assertions.
Experimental results demonstrate that fine-tuned models like Deepseek Coder
6.7B and Llama 3.1 8B outperform GPT-4o, achieving up to 96.88% improvement
over base models and 24.14% over GPT-4o on platforms including OpenTitan, CVA6,
OpenPiton and Pulpissimo. VERT is available at
https://github.com/AnandMenon12/VERT.
|
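As an illustrative aside on the record above: systematic variable augmentation of HDL snippets paired with assertions can be approximated by applying one consistent identifier renaming to both halves of a pair, so the pair stays semantically aligned. The regex-based sketch below is a naive illustration with invented names; a real curation pipeline would rely on a proper HDL parser.

```python
# Naive illustration of variable-renaming augmentation for an HDL snippet and its
# paired SystemVerilog assertion: the same renaming map is applied to both strings.
import re

def rename_identifiers(code: str, mapping: dict) -> str:
    for old, new in mapping.items():
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

snippet = "always @(posedge clk) if (rst) count <= 0; else count <= count + 1;"
assertion = "assert property (@(posedge clk) rst |=> count == 0);"
mapping = {"clk": "clk_i", "rst": "rst_i", "count": "pkt_cnt"}

print(rename_identifiers(snippet, mapping))
print(rename_identifiers(assertion, mapping))
```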
2503.09091 | Chen Zhao | Dong Li, Guihong Wan, Xintao Wu, Xinyu Wu, Xiaohui Chen, Yi He,
Christine G. Lian, Peter K. Sorger, Yevgeniy R. Semenov, Chen Zhao | Multi-Modal Foundation Models for Computational Pathology: A Survey | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foundation models have emerged as a powerful paradigm in computational
pathology (CPath), enabling scalable and generalizable analysis of
histopathological images. While early developments centered on uni-modal models
trained solely on visual data, recent advances have highlighted the promise of
multi-modal foundation models that integrate heterogeneous data sources such as
textual reports, structured domain knowledge, and molecular profiles. In this
survey, we provide a comprehensive and up-to-date review of multi-modal
foundation models in CPath, with a particular focus on models built upon
hematoxylin and eosin (H&E) stained whole slide images (WSIs) and tile-level
representations. We categorize 32 state-of-the-art multi-modal foundation
models into three major paradigms: vision-language, vision-knowledge graph, and
vision-gene expression. We further divide vision-language models into
non-LLM-based and LLM-based approaches. Additionally, we analyze 28 available
multi-modal datasets tailored for pathology, grouped into image-text pairs,
instruction datasets, and image-other modality pairs. Our survey also presents
a taxonomy of downstream tasks, highlights training and evaluation strategies,
and identifies key challenges and future directions. We aim for this survey to
serve as a valuable resource for researchers and practitioners working at the
intersection of pathology and AI.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 06:03:33 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 16:43:54 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Li",
"Dong",
""
],
[
"Wan",
"Guihong",
""
],
[
"Wu",
"Xintao",
""
],
[
"Wu",
"Xinyu",
""
],
[
"Chen",
"Xiaohui",
""
],
[
"He",
"Yi",
""
],
[
"Lian",
"Christine G.",
""
],
[
"Sorger",
"Peter K.",
""
],
[
"Semenov",
"Yevgeniy R.",
""
],
[
"Zhao",
"Chen",
""
]
] | TITLE: Multi-Modal Foundation Models for Computational Pathology: A Survey
ABSTRACT: Foundation models have emerged as a powerful paradigm in computational
pathology (CPath), enabling scalable and generalizable analysis of
histopathological images. While early developments centered on uni-modal models
trained solely on visual data, recent advances have highlighted the promise of
multi-modal foundation models that integrate heterogeneous data sources such as
textual reports, structured domain knowledge, and molecular profiles. In this
survey, we provide a comprehensive and up-to-date review of multi-modal
foundation models in CPath, with a particular focus on models built upon
hematoxylin and eosin (H&E) stained whole slide images (WSIs) and tile-level
representations. We categorize 32 state-of-the-art multi-modal foundation
models into three major paradigms: vision-language, vision-knowledge graph, and
vision-gene expression. We further divide vision-language models into
non-LLM-based and LLM-based approaches. Additionally, we analyze 28 available
multi-modal datasets tailored for pathology, grouped into image-text pairs,
instruction datasets, and image-other modality pairs. Our survey also presents
a taxonomy of downstream tasks, highlights training and evaluation strategies,
and identifies key challenges and future directions. We aim for this survey to
serve as a valuable resource for researchers and practitioners working at the
intersection of pathology and AI.
|
2503.10745 | Alexander Swerdlow | Ayush Jain, Alexander Swerdlow, Yuzhou Wang, Sergio Arnaud, Ada
Martin, Alexander Sax, Franziska Meier, Katerina Fragkiadaki | Unifying 2D and 3D Vision-Language Understanding | The first two authors contributed equally | null | null | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Progress in 3D vision-language learning has been hindered by the scarcity of
large-scale 3D datasets. We introduce UniVLG, a unified architecture for 2D and
3D vision-language understanding that bridges the gap between existing
2D-centric models and the rich 3D sensory data available in embodied systems.
Our approach initializes most model weights from pre-trained 2D models and
trains on both 2D and 3D vision-language data. We propose a novel
language-conditioned mask decoder shared across 2D and 3D modalities to ground
objects effectively in both RGB and RGB-D images, outperforming box-based
approaches. To further reduce the domain gap between 2D and 3D, we incorporate
2D-to-3D lifting strategies, enabling UniVLG to utilize 2D data to enhance 3D
performance. With these innovations, our model achieves state-of-the-art
performance across multiple 3D vision-language grounding tasks, demonstrating
the potential of transferring advances from 2D vision-language learning to the
data-constrained 3D domain. Furthermore, co-training on both 2D and 3D data
enhances performance across modalities without sacrificing 2D capabilities. By
removing the reliance on 3D mesh reconstruction and ground-truth object
proposals, UniVLG sets a new standard for realistic, embodied-aligned
evaluation. Code and additional visualizations are available at
https://univlg.github.io .
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:56:22 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 16:24:10 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Jain",
"Ayush",
""
],
[
"Swerdlow",
"Alexander",
""
],
[
"Wang",
"Yuzhou",
""
],
[
"Arnaud",
"Sergio",
""
],
[
"Martin",
"Ada",
""
],
[
"Sax",
"Alexander",
""
],
[
"Meier",
"Franziska",
""
],
[
"Fragkiadaki",
"Katerina",
""
]
] | TITLE: Unifying 2D and 3D Vision-Language Understanding
ABSTRACT: Progress in 3D vision-language learning has been hindered by the scarcity of
large-scale 3D datasets. We introduce UniVLG, a unified architecture for 2D and
3D vision-language understanding that bridges the gap between existing
2D-centric models and the rich 3D sensory data available in embodied systems.
Our approach initializes most model weights from pre-trained 2D models and
trains on both 2D and 3D vision-language data. We propose a novel
language-conditioned mask decoder shared across 2D and 3D modalities to ground
objects effectively in both RGB and RGB-D images, outperforming box-based
approaches. To further reduce the domain gap between 2D and 3D, we incorporate
2D-to-3D lifting strategies, enabling UniVLG to utilize 2D data to enhance 3D
performance. With these innovations, our model achieves state-of-the-art
performance across multiple 3D vision-language grounding tasks, demonstrating
the potential of transferring advances from 2D vision-language learning to the
data-constrained 3D domain. Furthermore, co-training on both 2D and 3D data
enhances performance across modalities without sacrificing 2D capabilities. By
removing the reliance on 3D mesh reconstruction and ground-truth object
proposals, UniVLG sets a new standard for realistic, embodied-aligned
evaluation. Code and additional visualizations are available at
https://univlg.github.io .
|
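As an illustrative aside on the record above: the simplest form of 2D-to-3D lifting is pinhole back-projection of a depth map into a camera-frame point cloud. The NumPy sketch below shows that standard operation with dummy depth and intrinsics; it is not UniVLG's lifting strategy verbatim.

```python
# Standard pinhole back-projection: lift a depth map (H, W) to an (H*W, 3) point
# cloud in the camera frame, given intrinsics fx, fy, cx, cy. Illustrative only.
import numpy as np

def lift_depth_to_points(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))      # pixel coordinates (x along columns)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((480, 640), 2.0)                        # dummy 2 m depth everywhere
points = lift_depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)                                     # (307200, 3)
```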
2503.11031 | Anirban Chandra | Anirban Chandra, Marius Koch, Suraj Pawar, Aniruddha Panda, Kamyar
Azizzadenesheli, Jeroen Snippe, Faruk O. Alpak, Farah Hariri, Clement
Etienam, Pandu Devarakota, Anima Anandkumar, Detlef Hohl | Fourier Neural Operator based surrogates for $CO_2$ storage in realistic
geologies | null | null | null | null | physics.comp-ph cs.AI physics.geo-ph | http://creativecommons.org/licenses/by/4.0/ | This study aims to develop surrogate models for accelerating decision making
processes associated with carbon capture and storage (CCS) technologies.
Selection of sub-surface $CO_2$ storage sites often necessitates expensive and
involved simulations of $CO_2$ flow fields. Here, we develop a Fourier Neural
Operator (FNO) based model for real-time, high-resolution simulation of $CO_2$
plume migration. The model is trained on a comprehensive dataset generated from
realistic subsurface parameters and offers $O(10^5)$ computational acceleration
with minimal sacrifice in prediction accuracy. We also explore super-resolution
experiments to improve the computational cost of training the FNO based models.
Additionally, we present various strategies for improving the reliability of
predictions from the model, which is crucial while assessing actual geological
sites. This novel framework, based on NVIDIA's Modulus library, will allow
rapid screening of sites for CCS. The discussed workflows and strategies can be
applied to other energy solutions like geothermal reservoir modeling and
hydrogen storage. Our work scales scientific machine learning models to
realistic 3D systems that are more consistent with real-life subsurface
aquifers/reservoirs, paving the way for next-generation digital twins for
subsurface CCS applications.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 02:58:24 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 15:44:45 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chandra",
"Anirban",
""
],
[
"Koch",
"Marius",
""
],
[
"Pawar",
"Suraj",
""
],
[
"Panda",
"Aniruddha",
""
],
[
"Azizzadenesheli",
"Kamyar",
""
],
[
"Snippe",
"Jeroen",
""
],
[
"Alpak",
"Faruk O.",
""
],
[
"Hariri",
"Farah",
""
],
[
"Etienam",
"Clement",
""
],
[
"Devarakota",
"Pandu",
""
],
[
"Anandkumar",
"Anima",
""
],
[
"Hohl",
"Detlef",
""
]
] | TITLE: Fourier Neural Operator based surrogates for $CO_2$ storage in realistic
geologies
ABSTRACT: This study aims to develop surrogate models for accelerating decision making
processes associated with carbon capture and storage (CCS) technologies.
Selection of sub-surface $CO_2$ storage sites often necessitates expensive and
involved simulations of $CO_2$ flow fields. Here, we develop a Fourier Neural
Operator (FNO) based model for real-time, high-resolution simulation of $CO_2$
plume migration. The model is trained on a comprehensive dataset generated from
realistic subsurface parameters and offers $O(10^5)$ computational acceleration
with minimal sacrifice in prediction accuracy. We also explore super-resolution
experiments to improve the computational cost of training the FNO based models.
Additionally, we present various strategies for improving the reliability of
predictions from the model, which is crucial while assessing actual geological
sites. This novel framework, based on NVIDIA's Modulus library, will allow
rapid screening of sites for CCS. The discussed workflows and strategies can be
applied to other energy solutions like geothermal reservoir modeling and
hydrogen storage. Our work scales scientific machine learning models to
realistic 3D systems that are more consistent with real-life subsurface
aquifers/reservoirs, paving the way for next-generation digital twins for
subsurface CCS applications.
|
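As an illustrative aside on the record above: the core of a Fourier Neural Operator is a spectral convolution that mixes a truncated set of Fourier modes with learned complex weights. The minimal 1D block below is a didactic sketch of that idea, not the CO$_2$ surrogate architecture itself.

```python
# Minimal 1D spectral convolution in the spirit of a Fourier Neural Operator:
# FFT -> keep the lowest `modes` frequencies -> learned complex mixing -> inverse FFT.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_channels * out_channels)
        self.weight = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:     # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)                             # (batch, in, grid//2 + 1)
        out_ft = torch.zeros(
            x.size(0), self.weight.size(1), x_ft.size(-1), dtype=torch.cfloat, device=x.device
        )
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weight
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))

layer = SpectralConv1d(in_channels=3, out_channels=8, modes=16)
print(layer(torch.randn(4, 3, 128)).shape)  # torch.Size([4, 8, 128])
```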
2503.13028 | Tony Danjun Wang | Tony Danjun Wang, Lennart Bastian, Tobias Czempiel, Christian
Heiliger, Nassir Navab | Beyond Role-Based Surgical Domain Modeling: Generalizable
Re-Identification in the Operating Room | 26 pages, 14 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Surgical domain models improve workflow optimization through automated
predictions of each staff member's surgical role. However, mounting evidence
indicates that team familiarity and individuality impact surgical outcomes. We
present a novel staff-centric modeling approach that characterizes individual
team members through their distinctive movement patterns and physical
characteristics, enabling long-term tracking and analysis of surgical personnel
across multiple procedures. To address the challenge of inter-clinic
variability, we develop a generalizable re-identification framework that
encodes sequences of 3D point clouds to capture shape and articulated motion
patterns unique to each individual. Our method achieves 86.19% accuracy on
realistic clinical data while maintaining 75.27% accuracy when transferring
between different environments - a 12% improvement over existing methods. When
used to augment markerless personnel tracking, our approach improves accuracy
by over 50%. Through extensive validation across three datasets and the
introduction of a novel workflow visualization technique, we demonstrate how
our framework can reveal novel insights into surgical team dynamics and space
utilization patterns, advancing methods to analyze surgical workflows and team
coordination.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 10:30:26 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 12:08:07 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wang",
"Tony Danjun",
""
],
[
"Bastian",
"Lennart",
""
],
[
"Czempiel",
"Tobias",
""
],
[
"Heiliger",
"Christian",
""
],
[
"Navab",
"Nassir",
""
]
] | TITLE: Beyond Role-Based Surgical Domain Modeling: Generalizable
Re-Identification in the Operating Room
ABSTRACT: Surgical domain models improve workflow optimization through automated
predictions of each staff member's surgical role. However, mounting evidence
indicates that team familiarity and individuality impact surgical outcomes. We
present a novel staff-centric modeling approach that characterizes individual
team members through their distinctive movement patterns and physical
characteristics, enabling long-term tracking and analysis of surgical personnel
across multiple procedures. To address the challenge of inter-clinic
variability, we develop a generalizable re-identification framework that
encodes sequences of 3D point clouds to capture shape and articulated motion
patterns unique to each individual. Our method achieves 86.19% accuracy on
realistic clinical data while maintaining 75.27% accuracy when transferring
between different environments - a 12% improvement over existing methods. When
used to augment markerless personnel tracking, our approach improves accuracy
by over 50%. Through extensive validation across three datasets and the
introduction of a novel workflow visualization technique, we demonstrate how
our framework can reveal novel insights into surgical team dynamics and space
utilization patterns, advancing methods to analyze surgical workflows and team
coordination.
|
2503.13344 | Shashikant Verma | Shashikant Verma, Harish Katti, Soumyaratna Debnath, Yamuna Swamy,
Shanmuganathan Raman | STEP: Simultaneous Tracking and Estimation of Pose for Animals and
Humans | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce STEP, a novel framework utilizing Transformer-based
discriminative model prediction for simultaneous tracking and estimation of
pose across diverse animal species and humans. We are inspired by the fact that
the human brain exploits spatiotemporal continuity and performs concurrent
localization and pose estimation despite the specialization of brain areas for
form and motion processing. Traditional discriminative models typically require
predefined target states for determining model weights, a challenge we address
through Gaussian Map Soft Prediction (GMSP) and Offset Map Regression Adapter
(OMRA) Modules. These modules remove the necessity of keypoint target states as
input, streamlining the process. Our method starts with a known target state in
the initial frame of a given video sequence. It then seamlessly tracks the
target and estimates keypoints of anatomical importance as output for
subsequent frames. Unlike prevalent top-down pose estimation methods, our
approach doesn't rely on per-frame target detections due to its tracking
capability. This facilitates a significant advancement in inference efficiency
and potential applications. We train and validate our approach on datasets
encompassing diverse species. Our experiments demonstrate superior results
compared to existing methods, opening doors to various applications, including
but not limited to action recognition and behavioral analysis.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 16:22:00 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 10:11:27 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Verma",
"Shashikant",
""
],
[
"Katti",
"Harish",
""
],
[
"Debnath",
"Soumyaratna",
""
],
[
"Swamy",
"Yamuna",
""
],
[
"Raman",
"Shanmuganathan",
""
]
] | TITLE: STEP: Simultaneous Tracking and Estimation of Pose for Animals and
Humans
ABSTRACT: We introduce STEP, a novel framework utilizing Transformer-based
discriminative model prediction for simultaneous tracking and estimation of
pose across diverse animal species and humans. We are inspired by the fact that
the human brain exploits spatiotemporal continuity and performs concurrent
localization and pose estimation despite the specialization of brain areas for
form and motion processing. Traditional discriminative models typically require
predefined target states for determining model weights, a challenge we address
through Gaussian Map Soft Prediction (GMSP) and Offset Map Regression Adapter
(OMRA) Modules. These modules remove the necessity of keypoint target states as
input, streamlining the process. Our method starts with a known target state in
the initial frame of a given video sequence. It then seamlessly tracks the
target and estimates keypoints of anatomical importance as output for
subsequent frames. Unlike prevalent top-down pose estimation methods, our
approach doesn't rely on per-frame target detections due to its tracking
capability. This facilitates a significant advancement in inference efficiency
and potential applications. We train and validate our approach on datasets
encompassing diverse species. Our experiments demonstrate superior results
compared to existing methods, opening doors to various applications, including
but not limited to action recognition and behavioral analysis.
|
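As an illustrative aside on the record above: a Gaussian-map target of the kind suggested by the GMSP module places one 2D Gaussian per keypoint. The sketch below generates such heatmaps; the map size and sigma are arbitrary choices, not values from the paper.

```python
# Illustrative generation of Gaussian keypoint heatmaps: one 2D Gaussian per keypoint,
# peaking at 1.0 at the keypoint location.
import numpy as np

def gaussian_heatmaps(keypoints: np.ndarray, height: int, width: int, sigma: float = 4.0) -> np.ndarray:
    ys, xs = np.mgrid[0:height, 0:width]
    maps = np.zeros((len(keypoints), height, width), dtype=np.float32)
    for k, (x, y) in enumerate(keypoints):
        maps[k] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma**2))
    return maps

kps = np.array([[32.0, 48.0], [100.0, 20.0]])   # (x, y) keypoint locations
heat = gaussian_heatmaps(kps, height=128, width=128)
print(heat.shape, float(heat[0, 48, 32]))       # (2, 128, 128) 1.0  (peak at the keypoint)
```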
2503.13801 | Weicao Deng | Weicao Deng, Binpu Shi, Min Li, Osvaldo Simeone | SCAN-BEST: Efficient Sub-6GHz-Aided Near-field Beam Selection with
Formal Reliability Guarantees | 13 pages, 11 figures | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems
continue to incorporate larger antenna arrays, the range of near-field
propagation expands, making it more likely for users close to the transmitter
to fall within the near-field regime. Traditional far-field beam training
methods are no longer effective in this context. Additionally, near-field beam
training presents challenges, since the training codebook must account for both
angular and distance dimensions, leading to large codebook sizes. To reduce the
in-band training overhead, we propose the Sub-6G Channel-Aided Near-field BEam
SelecTion (SCAN-BEST) framework, which is motivated by the spatial-temporal
congruence between sub-6 GHz (sub-6G) and mmWave channels. SCAN-BEST utilizes
preprocessed sub-6G channel estimates as input, and employs a convolutional
neural network (CNN) to predict the probability of each beam being optimal
within the near-field beam training codebook. Given the prediction uncertainty
arising from the variance between sub-6G and mmWave channels, we introduce a
conformal risk control (CRC)-based module that generates a set of beam
candidates for further limited in-band training, enabling the final beam
selection to formally meet a user-defined target coverage rate. Numerical results
confirm the theoretical properties of SCAN-BEST in terms of the achieved
coverage rate of the beam candidates and various metrics. Moreover, SCAN-BEST
enjoys good scalability and robustness to various sub-6G system configurations,
including to the sizes of calibration datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 01:16:16 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 00:09:02 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Deng",
"Weicao",
""
],
[
"Shi",
"Binpu",
""
],
[
"Li",
"Min",
""
],
[
"Simeone",
"Osvaldo",
""
]
] | TITLE: SCAN-BEST: Efficient Sub-6GHz-Aided Near-field Beam Selection with
Formal Reliability Guarantees
ABSTRACT: As millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems
continue to incorporate larger antenna arrays, the range of near-field
propagation expands, making it more likely for users close to the transmitter
to fall within the near-field regime. Traditional far-field beam training
methods are no longer effective in this context. Additionally, near-field beam
training presents challenges, since the training codebook must account for both
angular and distance dimensions, leading to large codebook sizes. To reduce the
in-band training overhead, we propose the Sub-6G Channel-Aided Near-field BEam
SelecTion (SCAN-BEST) framework, which is motivated by the spatial-temporal
congruence between sub-6 GHz (sub-6G) and mmWave channels. SCAN-BEST utilizes
preprocessed sub-6G channel estimates as input, and employs a convolutional
neural network (CNN) to predict the probability of each beam being optimal
within the near-field beam training codebook. Given the prediction uncertainty
arising from the variance between sub-6G and mmWave channels, we introduce a
conformal risk control (CRC)-based module that generates a set of beam
candidates for further limited in-band training, enabling the final beam
selection to formally meet a user-defined target coverage rate. Numerical results
confirm the theoretical properties of SCAN-BEST in terms of the achieved
coverage rate of the beam candidates and various metrics. Moreover, SCAN-BEST
enjoys good scalability and robustness to various sub-6G system configurations,
including the sizes of calibration datasets.
|
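The conformal step described in the SCAN-BEST abstract can be illustrated with a simplified split-conformal sketch: a threshold is calibrated on held-out examples so that the resulting beam candidate set contains the optimal beam with probability at least 1 - alpha. The function names and the choice of nonconformity score (one minus the CNN probability of the true best beam) are illustrative assumptions, not the paper's exact conformal risk control procedure.

import numpy as np

def calibrate_beam_threshold(cal_probs, cal_best_beams, alpha=0.1):
    # cal_probs: (n, B) CNN beam probabilities; cal_best_beams: (n,) indices of the true best beams.
    n = len(cal_best_beams)
    scores = 1.0 - cal_probs[np.arange(n), cal_best_beams]   # nonconformity scores
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)     # finite-sample corrected quantile level
    return np.quantile(scores, level, method="higher")

def beam_candidate_set(test_probs, threshold):
    # Keep every beam whose nonconformity score falls below the calibrated threshold.
    return np.where(1.0 - test_probs <= threshold)[0]

With this construction, larger calibration sets tighten the threshold estimate, which is consistent with the abstract's remark about robustness to calibration dataset size.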
2503.14258 | Weihang Su | Weihang Su, Baoqing Yue, Qingyao Ai, Yiran Hu, Jiaqi Li, Changyue
Wang, Kaiyuan Zhang, Yueyue Wu, Yiqun Liu | JuDGE: Benchmarking Judgment Document Generation for Chinese Legal
System | null | null | null | null | cs.CL cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces JuDGE (Judgment Document Generation Evaluation), a
novel benchmark for evaluating the performance of judgment document generation
in the Chinese legal system. We define the task as generating a complete legal
judgment document from the given factual description of the case. To facilitate
this benchmark, we construct a comprehensive dataset consisting of factual
descriptions from real legal cases, paired with their corresponding full
judgment documents, which serve as the ground truth for evaluating the quality
of generated documents. This dataset is further augmented by two external legal
corpora that provide additional legal knowledge for the task: one comprising
statutes and regulations, and the other consisting of a large collection of
past judgment documents. In collaboration with legal professionals, we
establish a comprehensive automated evaluation framework to assess the quality
of generated judgment documents across various dimensions. We evaluate various
baseline approaches, including few-shot in-context learning, fine-tuning, and a
multi-source retrieval-augmented generation (RAG) approach, using both general
and legal-domain LLMs. The experimental results demonstrate that, while RAG
approaches can effectively improve performance in this task, there is still
substantial room for further improvement. All the code and datasets are
available at: https://github.com/oneal2000/JuDGE.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 13:48:18 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 15:09:51 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Su",
"Weihang",
""
],
[
"Yue",
"Baoqing",
""
],
[
"Ai",
"Qingyao",
""
],
[
"Hu",
"Yiran",
""
],
[
"Li",
"Jiaqi",
""
],
[
"Wang",
"Changyue",
""
],
[
"Zhang",
"Kaiyuan",
""
],
[
"Wu",
"Yueyue",
""
],
[
"Liu",
"Yiqun",
""
]
] | TITLE: JuDGE: Benchmarking Judgment Document Generation for Chinese Legal
System
ABSTRACT: This paper introduces JuDGE (Judgment Document Generation Evaluation), a
novel benchmark for evaluating the performance of judgment document generation
in the Chinese legal system. We define the task as generating a complete legal
judgment document from the given factual description of the case. To facilitate
this benchmark, we construct a comprehensive dataset consisting of factual
descriptions from real legal cases, paired with their corresponding full
judgment documents, which serve as the ground truth for evaluating the quality
of generated documents. This dataset is further augmented by two external legal
corpora that provide additional legal knowledge for the task: one comprising
statutes and regulations, and the other consisting of a large collection of
past judgment documents. In collaboration with legal professionals, we
establish a comprehensive automated evaluation framework to assess the quality
of generated judgment documents across various dimensions. We evaluate various
baseline approaches, including few-shot in-context learning, fine-tuning, and a
multi-source retrieval-augmented generation (RAG) approach, using both general
and legal-domain LLMs. The experimental results demonstrate that, while RAG
approaches can effectively improve performance in this task, there is still
substantial room for further improvement. All the code and datasets are
available at: https://github.com/oneal2000/JuDGE.
|
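A minimal sketch of the multi-source retrieval-augmented generation baseline described in the JuDGE abstract: retrieve the most similar statutes and past judgments for a factual description, then assemble them into a single generation prompt. TF-IDF retrieval, the corpus variable names, and the prompt template are assumptions for illustration; the benchmark's actual retrievers and LLM prompts may differ, and Chinese text would additionally require tokenization.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_k(query, corpus, k=2):
    # Rank corpus entries by TF-IDF cosine similarity to the query.
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(corpus + [query])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [corpus[i] for i in sims.argsort()[::-1][:k]]

def build_prompt(facts, statutes, past_judgments):
    # Concatenate retrieved context from both external corpora with the case facts.
    statute_ctx = "\n".join(top_k(facts, statutes))
    case_ctx = "\n".join(top_k(facts, past_judgments))
    return (f"Relevant statutes:\n{statute_ctx}\n\n"
            f"Similar past judgments:\n{case_ctx}\n\n"
            f"Facts of the case:\n{facts}\n\n"
            f"Draft the full judgment document:")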
2503.14295 | Baiqin Wang | Baiqin Wang, Xiangyu Zhu, Fan Shen, Hao Xu, Zhen Lei | PC-Talk: Precise Facial Animation Control for Audio-Driven Talking Face
Generation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in audio-driven talking face generation have made great
progress in lip synchronization. However, current methods often lack sufficient
control over facial animation such as speaking style and emotional expression,
resulting in uniform outputs. In this paper, we focus on improving two key
factors: lip-audio alignment and emotion control, to enhance the diversity and
user-friendliness of talking videos. Lip-audio alignment control focuses on
elements like speaking style and the scale of lip movements, whereas emotion
control is centered on generating realistic emotional expressions, allowing for
modifications in multiple attributes such as intensity. To achieve precise
control of facial animation, we propose a novel framework, PC-Talk, which
enables lip-audio alignment and emotion control through implicit keypoint
deformations. First, our lip-audio alignment control module facilitates precise
editing of speaking styles at the word level and adjusts lip movement scales to
simulate varying vocal loudness levels, maintaining lip synchronization with
the audio. Second, our emotion control module generates vivid emotional facial
features with pure emotional deformation. This module also enables the fine
modification of intensity and the combination of multiple emotions across
different facial regions. Our method demonstrates outstanding control
capabilities and achieves state-of-the-art performance on both HDTF and MEAD
datasets in extensive experiments.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 14:35:48 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 10:27:54 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wang",
"Baiqin",
""
],
[
"Zhu",
"Xiangyu",
""
],
[
"Shen",
"Fan",
""
],
[
"Xu",
"Hao",
""
],
[
"Lei",
"Zhen",
""
]
] | TITLE: PC-Talk: Precise Facial Animation Control for Audio-Driven Talking Face
Generation
ABSTRACT: Recent advancements in audio-driven talking face generation have made great
progress in lip synchronization. However, current methods often lack sufficient
control over facial animation such as speaking style and emotional expression,
resulting in uniform outputs. In this paper, we focus on improving two key
factors: lip-audio alignment and emotion control, to enhance the diversity and
user-friendliness of talking videos. Lip-audio alignment control focuses on
elements like speaking style and the scale of lip movements, whereas emotion
control is centered on generating realistic emotional expressions, allowing for
modifications in multiple attributes such as intensity. To achieve precise
control of facial animation, we propose a novel framework, PC-Talk, which
enables lip-audio alignment and emotion control through implicit keypoint
deformations. First, our lip-audio alignment control module facilitates precise
editing of speaking styles at the word level and adjusts lip movement scales to
simulate varying vocal loudness levels, maintaining lip synchronization with
the audio. Second, our emotion control module generates vivid emotional facial
features with pure emotional deformation. This module also enables the fine
modification of intensity and the combination of multiple emotions across
different facial regions. Our method demonstrates outstanding control
capabilities and achieves state-of-the-art performance on both HDTF and MEAD
datasets in extensive experiments.
|
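The implicit keypoint deformation idea in the PC-Talk abstract can be sketched as intensity-weighted displacement fields blended over facial regions. This is a toy illustration, not PC-Talk's actual module: the data layout (K x 2 keypoints), the per-region weight vectors, and the linear blending rule are all assumptions.

import numpy as np

def blend_emotion_deformations(keypoints, emotion_offsets, intensities, region_weights):
    # keypoints: (K, 2) neutral keypoints; emotion_offsets: name -> (K, 2) displacement field;
    # intensities: name -> scalar in [0, 1]; region_weights: name -> (K,) per-keypoint weights.
    deformed = keypoints.astype(float).copy()
    for name, offset in emotion_offsets.items():
        weight = intensities.get(name, 0.0) * region_weights.get(name, np.ones(len(keypoints)))
        deformed += weight[:, None] * offset
    return deformed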
2503.14523 | Xinyuan Song | Siyi Wu, Leyi Zhao, Haotian Ma, Xinyuan Song | SDF-TopoNet: A Two-Stage Framework for Tubular Structure Segmentation
via SDF Pre-training and Topology-Aware Fine-Tuning | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate segmentation of tubular and curvilinear structures, such as blood
vessels, neurons, and road networks, is crucial in various applications. A key
challenge is ensuring topological correctness while maintaining computational
efficiency. Existing approaches often employ topological loss functions based
on persistent homology, such as Betti error, to enforce structural consistency.
However, these methods suffer from high computational costs and are insensitive
to pixel-level accuracy, often requiring additional loss terms like Dice or MSE
to compensate. To address these limitations, we propose \textbf{SDF-TopoNet},
an improved topology-aware segmentation framework that enhances both
segmentation accuracy and training efficiency. Our approach introduces a novel
two-stage training strategy. In the pre-training phase, we utilize the signed
distance function (SDF) as an auxiliary learning target, allowing the model to
encode topological information without directly relying on computationally
expensive topological loss functions. In the fine-tuning phase, we incorporate
a dynamic adapter alongside a refined topological loss to ensure topological
correctness while mitigating overfitting and computational overhead. We
evaluate our method on five benchmark datasets. Experimental results
demonstrate that SDF-TopoNet outperforms existing methods in both topological
accuracy and quantitative segmentation metrics, while significantly reducing
training complexity.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 23:54:38 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 01:43:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wu",
"Siyi",
""
],
[
"Zhao",
"Leyi",
""
],
[
"Ma",
"Haotian",
""
],
[
"Song",
"Xinyuan",
""
]
] | TITLE: SDF-TopoNet: A Two-Stage Framework for Tubular Structure Segmentation
via SDF Pre-training and Topology-Aware Fine-Tuning
ABSTRACT: Accurate segmentation of tubular and curvilinear structures, such as blood
vessels, neurons, and road networks, is crucial in various applications. A key
challenge is ensuring topological correctness while maintaining computational
efficiency. Existing approaches often employ topological loss functions based
on persistent homology, such as Betti error, to enforce structural consistency.
However, these methods suffer from high computational costs and are insensitive
to pixel-level accuracy, often requiring additional loss terms like Dice or MSE
to compensate. To address these limitations, we propose \textbf{SDF-TopoNet},
an improved topology-aware segmentation framework that enhances both
segmentation accuracy and training efficiency. Our approach introduces a novel
two-stage training strategy. In the pre-training phase, we utilize the signed
distance function (SDF) as an auxiliary learning target, allowing the model to
encode topological information without directly relying on computationally
expensive topological loss functions. In the fine-tuning phase, we incorporate
a dynamic adapter alongside a refined topological loss to ensure topological
correctness while mitigating overfitting and computational overhead. We
evaluate our method on five benchmark datasets. Experimental results
demonstrate that SDF-TopoNet outperforms existing methods in both topological
accuracy and quantitative segmentation metrics, while significantly reducing
training complexity.
|
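The signed distance function used as the auxiliary pre-training target in the SDF-TopoNet abstract can be computed from a binary segmentation mask with standard distance transforms. A minimal sketch, assuming SciPy and the sign convention positive outside / negative inside; the paper's exact normalization may differ.

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # mask: boolean array, True inside the tubular structure.
    outside = distance_transform_edt(~mask)   # distance to the structure for background pixels
    inside = distance_transform_edt(mask)     # distance to the background for foreground pixels
    sdf = outside - inside                    # positive outside, negative inside, ~0 on the boundary
    return sdf / max(np.abs(sdf).max(), 1e-8) # scale to [-1, 1] as a regression-friendly target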