title (stringlengths 28-135) | abstract (stringlengths 0-12k) | introduction (stringlengths 0-12k) |
---|---|---|
Maggioni_Tunable_Convolutions_With_Parametric_Multi-Loss_Optimization_CVPR_2023
|
Abstract
Behavior of neural networks is irremediably determined
by the specific loss and data used during training. How-
ever it is often desirable to tune the model at inference
time based on external factors such as preferences of the
user or dynamic characteristics of the data. This is espe-
cially important to balance the perception-distortion trade-
off of ill-posed image-to-image translation tasks. In this
work, we propose to optimize a parametric tunable con-
volutional layer, which includes a number of different ker-
nels, using a parametric multi-loss, which includes an equal
number of objectives. Our key insight is to use a shared
set of parameters to dynamically interpolate both the ob-
jectives and the kernels. During training, these parame-
ters are sampled at random to explicitly optimize all possi-
ble combinations of objectives and consequently disentan-
gle their effect into the corresponding kernels. During in-
ference, these parameters become interactive inputs of the
model hence enabling reliable and consistent control over
the model behavior. Extensive experimental results demon-
strate that our tunable convolutions effectively work as a
drop-in replacement for traditional convolutions in existing
neural networks at virtually no extra computational cost,
outperforming state-of-the-art control strategies in a wide
range of applications, including image denoising, deblur-
ring, super-resolution, and style transfer.
|
1. Introduction
Neural networks are commonly trained by optimizing a
set of learnable weights against a pre-defined loss function,
often composed of multiple competing objectives which are
delicately balanced together to capture complex behaviors
from the data. Specifically, in vision, and in image restora-
tion in particular, many problems are ill-posed, i.e. admit a
potentially infinite number of valid solutions [15]. Thus, se-
lecting an appropriate loss function is necessary to constrain
neural networks to a specific inference behavior [39, 64].
However, any individual and fixed loss defined empirically
before training is inherently incapable of generating opti-
mal results for any possible input [43]. A classic example is
the difficulty in finding a good balance for the perception-
distortion trade-off [5, 64], as shown in the illustrative ex-
ample of Fig. 1.
Figure 1. We propose a framework to build a single neural net-
work that can be tuned at inference without retraining by interact-
ing with controllable parameters, e.g. to balance the perception-
distortion tradeoff in image restoration tasks (axis: perceptual
quality vs. fidelity).
The solution to this problem is to design a
mechanism to reliably control (i.e. tune) neural networks at
inference time. This comes with several advantages, namely
providing a flexible behavior without the need to retrain the
model, correcting failure cases on the fly, and balancing
competing objectives according to user preference.
Existing approaches to control neural networks, com-
monly based on weights [54, 55] or feature [53, 63] mod-
ulation, are fundamentally limited to consider only two ob-
jectives, and furthermore require the addition of a new set
of layers or parameters for every additional loss consid-
ered. Different approaches, specific to image restoration
tasks, first train a network conditioned on the true degra-
dation parameter of the image, e.g. noise standard deviation
or blur size, and then, at inference time, propose to interact
with these parameters to modulate the effects of the restora-
tion [17, 24, 50]. However this leads the network to an un-
defined state when asked to operate in regimes correspond-
ing to combinations of input and parameters unseen during
training [27].
In this work, we introduce a novel framework to reliably
and consistently tune model behavior at inference time. We
propose a parametric dynamic layer, called tunable convo-
lution, consisting of p individual kernels (and biases) which
we optimize using a parametric dynamic multi-loss, con-
sisting of p individual objectives. Different parameters can
be used to obtain different combinations of kernels and ob-
jectives by linear interpolation. The key insight of our work
is to establish an explicit link between the p kernels and
objectives using a shared set of p parameters. Specifically,
during training, these parameters are randomly sampled to
explicitly optimize the complete loss landscape identified
by all combinations of the p objectives. As a result, dur-
ing inference, each individual objective is disentangled into
a different kernel, and thus its influence can be controlled
by interacting with the corresponding parameter of the tun-
able convolution. In contrast to previous approaches, our
strategy is capable of handling an arbitrary number of ob-
jectives, and by explicitly optimizing all their intermediate
combinations, it allows tuning the overall network behavior
in a predictable and intuitive fashion. Furthermore, our tun-
able layer can be used as a drop-in replacement for standard
layers in existing neural networks with negligible difference
in computational cost.
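The mechanism above is concrete enough to sketch. Below is a minimal PyTorch sketch, not the authors' code, of a tunable convolution whose p kernels and biases are interpolated by shared parameters, together with a parametric multi-loss interpolated by the same parameters; sampling the parameters from a Dirichlet distribution is an assumption standing in for the paper's random sampling over all combinations of objectives.

```python
# Minimal sketch (not the authors' code) of a tunable convolution with p
# kernels/biases interpolated by shared parameters w, and a parametric
# multi-loss interpolated by the same w.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TunableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k, p):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(p, out_ch, in_ch, k, k) * 0.02)
        self.bias = nn.Parameter(torch.zeros(p, out_ch))
        self.pad = k // 2

    def forward(self, x, w):  # w: (p,) convex weights shared across all tunable layers
        kernel = torch.einsum("p,poikl->oikl", w, self.weight)
        bias = torch.einsum("p,po->o", w, self.bias)
        return F.conv2d(x, kernel, bias, padding=self.pad)

def parametric_multi_loss(pred, target, w, objectives):
    # Interpolate the p objectives with the same parameters w used by the layers.
    return sum(wi * obj(pred, target) for wi, obj in zip(w, objectives))

# Toy training step: sample w at random (Dirichlet is an assumed choice) so that
# all combinations of the objectives get optimized during training.
p = 2
net = TunableConv2d(3, 3, 3, p)
objectives = [nn.L1Loss(), nn.MSELoss()]   # stand-ins for e.g. fidelity / perceptual terms
x, y = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
w = torch.distributions.Dirichlet(torch.ones(p)).sample()
parametric_multi_loss(net(x, w), y, w, objectives).backward()
# At inference, w is no longer sampled but supplied interactively by the user.
```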
In summary the main contributions of our work are:
• A novel plug-and-play tunable convolution capable of
reliably controlling neural networks through the use of in-
teractive parameters;
• A unique parametric multi-loss optimization strategy
dictating how tunable convolution should be optimized
to disentangle the different objectives into the different
tunable kernels;
• Extensive experimental validation across several
image-to-image translation tasks demonstrating state-
of-the-art performance for tunable inference.
|
Muhle_Learning_Correspondence_Uncertainty_via_Differentiable_Nonlinear_Least_Squares_CVPR_2023
|
Abstract
We propose a differentiable nonlinear least squares
framework to account for uncertainty in relative pose es-
timation from feature correspondences. Specifically, we
introduce a symmetric version of the probabilistic normal
epipolar constraint, and an approach to estimate the co-
variance of feature positions by differentiating through the
camera pose estimation procedure. We evaluate our ap-
proach on synthetic, as well as the KITTI and EuRoC real-
world datasets. On the synthetic dataset, we confirm that
our learned covariances accurately approximate the true
noise distribution. In real world experiments, we find that
our approach consistently outperforms state-of-the-art non-
probabilistic and probabilistic approaches, regardless of
the feature extraction algorithm of choice.
|
1. Introduction
Estimating the relative pose between two images given
mutual feature correspondences is a fundamental problem
in computer vision. It is a key component of structure from
motion (SfM) and visual odometry (VO) methods which in
turn fuel a plethora of applications from autonomous vehi-
cles or robots to augmented and virtual reality.
Project Page: https://dominikmuhle.github.io/dnls covs/
Estimating the relative pose – rotation and translation
– between two images, is often formulated as a geomet-
ric problem that can be solved by estimating the essen-
tial matrix [42] for calibrated cameras, or the fundamental
matrix [24] for uncalibrated cameras. Related algorithms
like the eight-point algorithm [23, 42] provide fast solu-
tions. However, essential matrix based approaches suffer
from issues such as solution multiplicity [18, 24] and planar
degeneracy [33]. The normal epipolar constraint (NEC) [34]
addresses these issues and leads to more accurate relative
poses [33].
Neither of the aforementioned algorithms takes into ac-
count the quality of feature correspondences – an impor-
tant cue that potentially improves pose estimation accuracy.
Instead, feature correspondences are classified into inliers
and outliers through a RANSAC scheme [11]. However,
keypoint detectors [12, 56] for feature correspondences or
tracking algorithms [63] yield imperfect points [40] that ex-
hibit a richer family of error distributions, as opposed to an
inlier-outlier distribution family. Algorithms that make use
of feature correspondence quality have been proposed for
essential/fundamental matrix estimation [7, 53] and for the
NEC [48], respectively.
While estimating the relative pose can be formulated as
a classical optimization problem [15, 33], the rise in popu-
larity of deep learning has led to several works augmenting
VO or visual simultaneous localisation and mapping (VS-
LAM) pipelines with learned components.
(a) covariances from [48] per pixel
(b) points and covariances [48]
(c) learned covariances per pixel (Ours)
(d) points and learned covariances (Ours)
Figure 2. Comparison between covariances used in [48] (first row) and our learned covariances (second row). The first column shows a
dense color coded (s, α, β mapped to HLS with γ correction) representation for each pixel, while the second column shows subsampled
keypoints and their corresponding (enlarged) covariances. The higher saturation in (a) shows that the covariances are more anisotropic.
The learned covariances (c) show a more fine-grained detail in the scale (brightness) and less blurring than the covariances in (a).
GN-Net [67]
learns robust feature representations for direct methods like
DSO [15]. For feature based methods Superpoint [12] pro-
vides learned features, while Superglue [57] uses graph neu-
ral networks to find corresponding matches between feature
points in two images. DSAC introduces a differential re-
laxation to RANSAC that allows gradient flow through the
otherwise non-differentiable operation. In [53] a network
learns to re-weight correspondences for estimating the fun-
damental matrix. PixLoc [58] estimates the pose from an
image and a 3D model based on direct alignment.
In this work we combine the predictive power of deep
learning with the precision of geometric modeling for
highly accurate relative pose estimation. Estimating the
noise distributions for the feature positions of different fea-
ture extractors allows us to incorporate this information into
relative pose estimation. Instead of modeling the noise for
each feature extractor explicitly, we present a method to
learn these distributions from data, using the same domain
that the feature extractors work with - images. We achieve
this based on the following technical contributions:
• We introduce a symmetric version of the probabilistic
normal epipolar constraint (PNEC), that more accurately
models the geometry of relative pose estimation with un-
certain feature positions.
• We propose a learning strategy to minimize the rela-
tive pose error by learning feature position uncertainty
through differentiable nonlinear least squares (DNLS),
see Fig. 1.
• We show with synthetic experiments, that using the gradi-
ent from the relative pose error leads to meaningful esti-
mates of the positional uncertainty that reflect the correct
error distribution.
• We validate our approach on real-world data in a vi-
sual odometry setting and compare our method to non-probabilistic relative pose estimation algorithms, namely
Nistér 5pt [50], and NEC [33], as well as to the PNEC
with non-learned covariances [48].
• We show that our method is able to generalize to different
feature extraction algorithms such as SuperPoint [12] and
feature tracking approaches on real-world data.
• We release the code for all experiments and the training
setup to facilitate future research.
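As a rough illustration of the differentiable nonlinear least squares idea referred to above (learning positional uncertainty by backpropagating a pose error through the solver), the sketch below unrolls a few covariance-weighted Gauss-Newton steps on a toy 2D alignment problem. It is a generic stand-in, not the PNEC or the released code; all names and the toy residual are illustrative.

```python
# Generic sketch (not the PNEC or the released code) of covariance-weighted,
# differentiable nonlinear least squares: residuals are weighted by per-point
# 2x2 covariances, a few Gauss-Newton steps are unrolled in PyTorch, and a
# pose loss is backpropagated into the covariances (a stand-in for a network
# that would predict them from images).
import math
import torch

def fit_pose_2d(x, y, covs, iters=10):
    """Toy problem: estimate a 2D rotation angle and translation from x -> y."""
    params = torch.zeros(3, dtype=x.dtype)            # (theta, tx, ty)
    info = torch.linalg.inv(covs)                     # (N, 2, 2) information matrices
    for _ in range(iters):
        c, s = torch.cos(params[0]), torch.sin(params[0])
        R = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
        r = (x @ R.T + params[1:]) - y                # (N, 2) residuals
        dR = torch.stack([torch.stack([-s, -c]), torch.stack([c, -s])])
        J = torch.zeros(x.shape[0], 2, 3, dtype=x.dtype)
        J[:, :, 0] = x @ dR.T                         # d r / d theta
        J[:, 0, 1] = 1.0                              # d r / d tx
        J[:, 1, 2] = 1.0                              # d r / d ty
        H = torch.einsum("nia,nij,njb->ab", J, info, J)   # J^T W J
        g = torch.einsum("nia,nij,nj->a", J, info, r)     # J^T W r
        params = params - torch.linalg.solve(H, g)    # Gauss-Newton update
    return params

# Synthetic correspondences and (here isotropic) covariances that require grad,
# standing in for the output of an uncertainty-prediction network.
x = torch.rand(50, 2)
ang = 0.1
R_true = torch.tensor([[math.cos(ang), -math.sin(ang)],
                       [math.sin(ang),  math.cos(ang)]])
y = x @ R_true.T + torch.tensor([0.3, -0.2]) + 0.01 * torch.randn(50, 2)
covs = (0.01 ** 2) * torch.eye(2).repeat(50, 1, 1)
covs.requires_grad_(True)
pose = fit_pose_2d(x, y, covs)
pose.square().sum().backward()                        # stand-in for a pose error
print(covs.grad.shape)                                # gradients reach the covariances
```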
|
Li_Patch-Based_3D_Natural_Scene_Generation_From_a_Single_Example_CVPR_2023
|
Abstract
We target a 3D generative model for general natural
scenes that are typically unique and intricate. Lacking the
necessary volumes of training data, along with the diffi-
culties of having ad hoc designs in presence of varying
scene characteristics, renders existing setups intractable.
Inspired by classical patch-based image models, we advo-
cate for synthesizing 3D scenes at the patch level, given a
single example. At the core of this work lies important al-
gorithmic designs w.r.t the scene representation and gener-
ative patch nearest-neighbor module, that address unique
challenges arising from lifting classical 2D patch-based
framework to 3D generation. These design choices, on a
collective level, contribute to a robust, effective, and effi-
cient model that can generate high-quality general natural
scenes with both realistic geometric structure and visual ap-
pearance, in large quantities and varieties, as demonstrated
upon a variety of exemplar scenes. Data and code can be
found at http://wyysf-98.github.io/Sin3DGen .
*Joint first authors
†Corresponding author
|
1. Introduction
3D scene generation generally carries the generation of
both realistic geometric structure and visual appearance. A
wide assortment of scenes on earth, or digital ones across
the internet, exhibiting artistic characteristics and ample
variations over geometry and appearance, can be easily
listed. Being able to populate these intriguing scenes in the
virtual universe has been a long pursuit in the community.
Research has taken several routes, among which a preva-
lent one is learning to extract common patterns of the ge-
ometry or appearance from homogeneous scene samples,
such as indoor scenes [14, 25, 34, 37, 59, 63, 71, 72, 75], ter-
rains [15, 19, 21, 26], urban scenes [12, 30, 44], etc. Another
line learns to generate single objects [6, 7, 16, 25, 33, 35, 45,
74]. A dominant trend has recently emerged that learns 3D
generative models to jointly synthesize 3D structures and
appearances via differentiable rendering [4, 5, 8, 18, 43, 50].
Nevertheless, all these learning setups are limited in their
ability to generalize in terms of varied scene types. While a
more promising direction is the exemplar-based one, where
one or a few exemplars featuring the scene of interest are
provided, algorithm designs tailored for certain scene types
in existing methods [38–40, 73] again draw clear boundaries
of scene characteristics they can handle.
This work seeks to generate general natural scenes,
wherein the geometry and appearance of constituents are
often tightly entangled and contribute jointly to unique fea-
tures. This uniqueness hinders one from collecting suf-
ficient homogeneous samples for learning common fea-
tures, directing us to the exemplar-based paradigm. On the
other hand, varying characteristics across different exem-
plar scenes restrain us from having ad hoc designs for a cer-
tain scene type (e.g., terrains). Hence, we resort to clas-
sical patch-based algorithms, which date long before the
deep learning era and prevail in several image generation
tasks even today [13, 17, 20]. Specifically, given an input
3D scene, we synthesize novel scenes at the patch level and
particularly adopt the multi-scale generative patch-based
framework introduced in [17], where the core is a Gener-
ative Patch Nearest-Neighbor module that maximizes the
bidirectional visual summary [54] between the input and
output. Nevertheless, key design questions yet remain in
the 3D generation: What representation to work with? And
how to synthesize effectively and efficiently?
In this work, we exploit a grid-based radiance field –
Plenoxels [69], which boasts great visual effects, for rep-
resenting the input scene. While its simplicity and regu-
lar structure benefit patch-based algorithms, important de-
signs must be adopted. Specifically, we construct the exem-
plar pyramid via coarse-to-fine training Plenoxels on im-
ages of the input scene, instead of trivially downsampling
a pretrained high-resolution one. Furthermore, we trans-
form the high-dimensional, unbounded, and noisy features
of the Plenoxels-based exemplar at each scale into more
well-defined and compact geometric and appearance fea-
tures, improving the robustness and efficiency in the subse-
quent patch matching.
On the other end, we employ heterogeneous representa-
tions for the synthesis inside the generative nearest neigh-
bor module. Specifically, the patch matching and blending
operate in tandem at each scale to gradually synthesize an
intermediate value-based scene, which will be eventually
converted to a coordinate-based counterpart at the end. The
benefits are several-fold: a) the transition between consec-
utive generation scales, where the value range of exemplar
features may fluctuate, is more stable; b) the transformed
features in the synthesized output are inverted to the original
"renderable" Plenoxels features; so that c) the visual real-
ism in the Plenoxels-based exemplar is preserved intact.
Last, working on voxels with patch-based algorithms nec-
essarily leads to a high computational cost. So we use an
exact-to-approximate patch nearest-neighbor module in the
pyramid, which keeps the search space under a manageable
range while introducing negligible compromise on the vi-
sual summary optimality. These designs, on a collective
level, essentially lay a solid foundation for an effective and efficient 3D generative model.
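To make the generative patch nearest-neighbor step more concrete, here is a minimal sketch, not the released Sin3DGen code, of one matching-and-replacement pass over a feature voxel grid using the normalized patch-similarity score of [17]; overlapping-patch blending, the exact-to-approximate search, and the coarse-to-fine pyramid are omitted.

```python
# Minimal sketch (not the released Sin3DGen code) of one generative patch
# nearest-neighbor step on a feature voxel grid: every query patch of the
# current synthesis is replaced by its nearest exemplar patch, with the
# normalized score of [17] encouraging a complete bidirectional visual
# summary. Non-overlapping patches are used here to avoid blending logic.
import torch

def patch_nn_step(synth, exemplar, patch=4, alpha=1e-2):
    """synth, exemplar: (C, D, H, W) grids with all sides divisible by `patch`."""
    c, d, h, w = synth.shape

    def to_patches(vol):  # -> (num_patches, c * patch^3)
        p = (vol.reshape(c, d // patch, patch, h // patch, patch, w // patch, patch)
                .permute(1, 3, 5, 0, 2, 4, 6))
        return p.reshape(-1, c * patch ** 3)

    q, k = to_patches(synth), to_patches(exemplar)
    dist = torch.cdist(q, k)                          # (Nq, Nk) patch distances
    # Normalize each exemplar patch's column by its best match so that no single
    # exemplar patch can cheaply serve every query (completeness term).
    dist = dist / (alpha + dist.min(dim=0, keepdim=True).values)
    matched = k[dist.argmin(dim=1)]                   # nearest exemplar patch per query
    out = matched.reshape(d // patch, h // patch, w // patch, c, patch, patch, patch)
    return out.permute(3, 0, 4, 1, 5, 2, 6).reshape(c, d, h, w)

# Toy usage with random grids standing in for the transformed geometry and
# appearance features described above.
exemplar = torch.rand(8, 16, 16, 16)
synth = exemplar + 0.3 * torch.randn_like(exemplar)   # coarse guess to be refined
synth = patch_nn_step(synth, exemplar)
```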
To our knowledge, our method is the first 3D generative
model that can generate 3D general natural scenes from a
single example, with both realistic geometry and visual ap-
pearance, in large quantities and varieties. We validate the
efficacy of our method on random scene generation with
an array of exemplars featuring a variety of general natural
scenes, and show the superiority by comparing to baseline
methods. The importance of each design choice is also val-
idated. Extensive experiments also demonstrate the versa-
tility of our method in several 3D modeling applications.
|
Ma_Towards_Better_Gradient_Consistency_for_Neural_Signed_Distance_Functions_via_CVPR_2023
|
Abstract
Neural signed distance functions (SDFs) have shown re-
markable capability in representing geometry with detail-
s. However, without signed distance supervision, it is still
a challenge to infer SDFs from point clouds or multi-view
images using neural networks. In this paper, we claim that
gradient consistency in the field, indicated by the parallelis-
m of level sets, is the key factor affecting the inference ac-
curacy. Hence, we propose a level set alignment loss to
evaluate the parallelism of level sets, which can be mini-
mized to achieve better gradient consistency. Our novel-
ty lies in that we can align all level sets to the zero lev-
el set by constraining gradients at queries and their pro-
jections on the zero level set in an adaptive way. Our in-
sight is to propagate the zero level set to everywhere in the
field through consistent gradients to eliminate uncertainty
in the field that is caused by the discreteness of 3D point
clouds or the lack of observations from multi-view images.
Our proposed loss is a general term which can be used up-
on different methods to infer SDFs from 3D point clouds
and multi-view images. Our numerical and visual compar-
isons demonstrate that our loss can significantly improve
the accuracy of SDFs inferred from point clouds or multi-
view images under various benchmarks. Code and data
are available at https://github.com/mabaorui/
TowardsBetterGradient.
|
1. Introduction
Signed distance functions (SDFs) have shown remark-
able abilities in representing high fidelity 3D geometry [6,
14,16,28,32,36,38,41,42,45,51,52,54,55,62,63,67–69, 76].
Equal contribution.
†The corresponding author is Yu-Shen Liu. This work was supported
by National Key R&D Program of China (2022YFC3800600), the Nation-
al Natural Science Foundation of China (62272263, 62072268), and in part
by Tsinghua-Kuaishou Institute of Future Media Data.
Current methods mainly use neural networks to learn SDFs
as a mapping from 3D coordinates to signed distances. Us-
ing gradient descent, we can train neural networks by ad-
justing parameters to minimize errors to either signed dis-
tance ground truth [9, 31, 45, 51, 52] or signed distances in-
ferred from 3D point clouds [1, 2,11,22,38,60,77] or multi-
view images [19, 24,66–69, 73,74,76]. However, factors
like the discreteness in point clouds and the lack of obser-
vations in multi-view images result in 3D ambiguity, which
makes inferring SDFs without ground truth signed distances
remain a challenge.
Recent solutions [1, 23,32,60,68] impose additional con-
straints on gradients with respect to input coordinates. The
gradients determine the rate of change of signed distances in
a field, which is vital for the accuracy of SDFs. Specifically,
Eikonal term [1, 23,32] is widely used to learn SDFs, which
constrains the norm of gradients to be one at any location
in the field. This regularization ensures that the networks
predict valid signed distances. NeuralPull [38] constrains the
directions of gradients to pull arbitrary queries onto the sur-
face. One issue here is that these methods merely constrain
gradients at single locations, without considering gradient
consistency to their corresponding projections on different
level sets. This results in inconsistent gradients in the field,
indicated by level sets with poor parallelism, which drasti-
cally decreases the accuracy of inferred SDFs.
To resolve this issue, we introduce a level set alignment
loss to pursue better gradient consistency for SDF inference
without ground truth signed distances. Our loss is a general
term which can be used to train different networks for learn-
ing SDFs from either 3D point clouds or multi-view images.
Our key idea is to constrain gradients at corresponding lo-
cations on different level sets of the inferred SDF to be con-
sistent. We achieve this by minimizing the cosine distance
between the gradient of a query and the gradient of its pro-
jection on the zero level set. Minimizing our loss is equivalent
to aligning all level sets onto a reference, i.e. the zero lev-
el set, in a pairwise way. This enables us to propagate the
zero level set to everywhere in the field, which eliminates
uncertainty in the field that is caused by the discreteness of
3D point clouds or the lack of observations from multi-view
images. Moreover, we introduce an adaptive weight to fo-
cus more on the gradient consistency nearer to the zero level
set. We evaluate our loss upon the latest methods in surface
reconstruction and multi-view 3D reconstruction under the
widely used benchmarks. Our improvements over baselines
justify not only our effectiveness but also the importance
of gradient consistency to the inference of signed distance
fields. Our contributions are listed below.
i) We introduce a level set alignment loss to achieve bet-
ter gradient consistency for inferring SDFs without
signed distance ground truth.
ii) We justify the importance of gradient consistency to
the accuracy of SDFs inferred from 3D point cloud-
s and multi-view images, and show that aligning level
sets together is an effective way of learning more con-
sistent gradients for eliminating 3D ambiguity.
iii) We show our superiority over the state-of-the-art meth-
ods in surface reconstruction from point clouds and
multi-view 3D reconstruction under the widely used
benchmarks.
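The level set alignment loss described in this introduction can be sketched compactly. The following PyTorch snippet is illustrative rather than the released code: it projects each query onto the zero level set along the gradient direction and penalizes the cosine distance between the two gradients; the exponential weighting used here to focus near the zero level set is an assumed placeholder for the paper's adaptive weight.

```python
# Illustrative PyTorch sketch (not the released code) of the level set
# alignment loss: project each query onto the zero level set along the
# gradient direction, then penalize the cosine distance between the gradient
# at the query and the gradient at its projection. The exp(-k|d|) weight is
# only an assumed placeholder for the paper's adaptive weighting.
import torch
import torch.nn.functional as F

def sdf_grad(sdf, x):
    d = sdf(x)                                          # (N, 1) signed distances
    g = torch.autograd.grad(d.sum(), x, create_graph=True)[0]   # (N, 3) gradients
    return d, g

def level_set_alignment_loss(sdf, queries, k=10.0):
    q = queries.detach().requires_grad_(True)
    d, g = sdf_grad(sdf, q)
    proj = q - d * F.normalize(g, dim=-1)               # projection onto the zero level set
    _, g_proj = sdf_grad(sdf, proj)
    cos = F.cosine_similarity(g, g_proj, dim=-1)
    weight = torch.exp(-k * d.detach().abs().squeeze(-1))  # focus near the surface
    return (weight * (1.0 - cos)).mean()

# Toy usage: a small MLP stands in for the SDF network being trained.
sdf = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Softplus(beta=100),
                          torch.nn.Linear(64, 1))
queries = torch.rand(1024, 3) * 2 - 1
level_set_alignment_loss(sdf, queries).backward()
```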
|
Otani_Toward_Verifiable_and_Reproducible_Human_Evaluation_for_Text-to-Image_Generation_CVPR_2023
|
Abstract
Human evaluation is critical for validating the perfor-
mance of text-to-image generative models, as this highly
cognitive process requires deep comprehension of text and
images. However, our survey of 37 recent papers reveals
that many works rely solely on automatic measures ( e.g.,
FID) or perform poorly described human evaluations that
are not reliable or repeatable. This paper proposes a stan-
dardized and well-defined human evaluation protocol to fa-
cilitate verifiable and reproducible human evaluation in fu-
ture works. In our pilot data collection, we experimentally
show that the current automatic measures are incompatible
with human perception in evaluating the performance of the
text-to-image generation results. Furthermore, we provide
insights for designing human evaluation experiments reli-
ably and conclusively. Finally, we make several resources
publicly available to the community to facilitate easy and
fast implementations.
|
1. Introduction
Text-to-image synthesis has seen substantial develop-
ment in recent years. Several new models have been in-
troduced with remarkable results. The majority of the
works validate their models using automatic measures, such
as FID [13] and recently proposed CLIPScore [12], even
though many papers point out problems with these mea-
sures. The most popular measure, FID, is criticized for
misalignment with human perception [9]. For example, im-
age resizing and compression hardly degrade the percep-
tual quality but induce high variations in the FID score [28],
while CLIPScore can inflate for a model trained to optimize
text-to-image alignment in the CLIP space [27].
This empirical evidence of the misalignment of the au-
tomatic measures motivates human evaluation of perceived
quality.
Figure 1. Conventionally, researchers have used different proto-
cols for human evaluation, and setup details are often unclear. We
aim to build a standardized human evaluation. (The figure con-
trasts conventional practice, with ad-hoc task designs such as pair-
wise comparison, ranking, or scoring, missing details, and no qual-
ity checks, against the proposed standardized evaluation: open-
source, an empirically validated protocol, annotation quality as-
sessment, and a recommended report template.)
However, according to our study of 37 recent pa-
pers, the current practices in human evaluation face signifi-
cant challenges in reliability and reproducibility. We mainly
identified the following two problematic practices. Firstly,
evaluation protocols vary significantly from one paper to
another. For example, some employ relative evaluation by
simultaneously showing annotators two or more samples for
comparison, and others collect scores of individual samples
based on certain criteria. Secondly, important details of ex-
perimental configurations and collected data are often omit-
ted. For example, the number of annotators who rate each
sample is mostly missing, and sometimes even the number
of assessed samples is not reported. These inconsistencies
in the evaluation protocol make future analysis almost im-
possible. We also find that recent papers do not analyze the
quality of the collected human annotations. Therefore, we
cannot assess how reliable the evaluation result is. It is also
difficult to know which is a good way to evaluate text-to-
image synthesis among various evaluation protocols.
The natural language generation (NLG) community has
extensively explored human evaluation. Human evaluation
is typically done in a crowdsourcing platform, and there are
many well-known practices. Yet, quality control is an open
challenge [16]. Recently, a platform for benchmarking mul-
tiple NLG tasks was launched [18]. The platform offers a
web form where researchers can submit their model predic-
tions. The platform automatically enqueues a human evalu-
ation task on AMT, allowing a fair comparison for its users.
We address the lack of a standardized evaluation proto-
col in text-to-image generation. To this end, we carefully
design an evaluation protocol using crowdsourcing and em-
pirically validate the protocol. We also provide recommen-
dations for reporting a configuration and evaluation result.
We evaluate state-of-the-art generative models with our
protocol and provide an in-depth analysis of collected hu-
man ratings. We also investigate automatic measures, i.e.,
FID and CLIPScore, by checking the agreement between
the measures and human evaluation. The results reveal the
critical limitations of these two automatic measures.
Findings and resources Our main findings can be sum-
marized as follows:
•Reliability of prior human evaluation is question-
able. We provide insights about the required numbers
of prompts and human ratings to be conclusive.
•FID is inconsistent with human perception. This is
already known at least empirically, and our experiment
supports it.
•CLIPScore is already saturated. State-of-the-art
generative models are already on par with authentic
images in terms of CLIPScores.
These findings motivate us to build a standardized pro-
tocol for human evaluation for better verifiability and re-
producibility, making it easier to draw reliable conclusions. For
continuous development, we open-source the following re-
sources for the community.
• Implementation of human evaluation on a crowdsourc-
ing platform, i.e., Amazon Mechanical Turk (AMT)1,
which allows researchers to evaluate their generative
models with a standardized protocol.
• Template for reporting human evaluation results. This
template will enhance their transparency.
• Human ratings by our protocol on multiple datasets:
MS-COCO [23], DrawBench [30], and PartiPrompts
[37]. These per-image ratings, and not only their statis-
tics, will facilitate designing automatic measures.
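As a small illustration of the first finding (that too few prompts or ratings make a comparison inconclusive), the sketch below, which is not part of the released resources and uses purely synthetic ratings, bootstraps per-prompt scores of two hypothetical models and reports how often their ranking replicates at different prompt budgets.

```python
# Synthetic illustration (not part of the released resources) of why the
# prompt budget matters: bootstrap per-prompt ratings of two models and see
# how often their ranking replicates. All ratings below are fake.
import numpy as np

rng = np.random.default_rng(0)

def ranking_stability(ratings_a, ratings_b, n_prompts, n_boot=2000):
    """Fraction of bootstrap resamples (over prompts) in which model A stays ahead."""
    idx_all = np.arange(len(ratings_a))
    wins = 0
    for _ in range(n_boot):
        idx = rng.choice(idx_all, size=n_prompts, replace=True)
        wins += ratings_a[idx].mean() > ratings_b[idx].mean()
    return wins / n_boot

# Fake 5-point ratings for 500 prompts; model A is slightly better on average.
a = np.clip(rng.normal(3.6, 1.0, 500), 1, 5)
b = np.clip(rng.normal(3.4, 1.0, 500), 1, 5)
for n in (25, 100, 400):
    print(n, ranking_stability(a, b, n))   # stability grows with the prompt budget
```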
|
Lu_TransFlow_Transformer_As_Flow_Learner_CVPR_2023
|
Abstract
Optical flow is an indispensable building block for
various important computer vision tasks, including mo-
tion estimation, object tracking, and disparity measure-
ment. In this work, we propose TransFlow, a pure trans-
former architecture for optical flow estimation. Com-
pared to dominant CNN-based methods, TransFlow
demonstrates three advantages. First, it provides more
accurate correlation and trustworthy matching in flow
estimation by utilizing spatial self-attention and cross-
attention mechanisms between adjacent frames to effec-
tively capture global dependencies; Second, it recovers
more compromised information ( e.g., occlusion and mo-
tion blur) in flow estimation through long-range tempo-
ral association in dynamic scenes; Third, it enables a
concise self-learning paradigm and effectively eliminates
the complex and laborious multi-stage pre-training pro-
cedures. We achieve state-of-the-art results on the
Sintel, KITTI-15, as well as several downstream tasks,
including video object detection, interpolation and sta-
bilization. Given its efficacy, we hope TransFlow could
serve as a flexible baseline for optical flow estimation.
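The spatial self-attention and cross-attention between adjacent frames mentioned above can be illustrated with a toy module. The sketch below is not the TransFlow architecture; it only shows how tokens of one frame can attend globally to the other frame before a toy flow head, with all layer sizes chosen arbitrarily.

```python
# Toy sketch (not the TransFlow architecture) of spatial self-attention plus
# cross-attention between two adjacent frames, followed by a trivial flow head.
import torch
import torch.nn as nn

class CrossFrameAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 2)              # per-token 2D flow (toy decoder)

    def forward(self, feat1, feat2):
        """feat1, feat2: (B, C, H, W) features of frame t and frame t+1."""
        b, c, h, w = feat1.shape
        t1 = feat1.flatten(2).transpose(1, 2)      # (B, H*W, C) tokens of frame t
        t2 = feat2.flatten(2).transpose(1, 2)      # (B, H*W, C) tokens of frame t+1
        t1 = t1 + self.self_attn(t1, t1, t1)[0]    # global spatial dependencies in frame t
        t1 = t1 + self.cross_attn(t1, t2, t2)[0]   # matching against the whole next frame
        flow = self.head(t1)                       # (B, H*W, 2)
        return flow.transpose(1, 2).reshape(b, 2, h, w)

# Toy usage with random backbone features.
f1, f2 = torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32)
flow = CrossFrameAttention()(f1, f2)               # (1, 2, 32, 32) dense flow field
```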
|
1. Introduction
With the renaissance of connectionism, rapid
progress has been made in optical flow. Till now, most
of the state-of-the-art flow learners were built upon Con-
volutional Neural Networks (CNNs) [5,7,16,32,40,44].
Despite their diversified model designs and tantalizing
results, existing CNN-based methods commonly rely on
spatial locality and compute displacements from all-pair
correlation volume for flow prediction (Fig. 1(a)).
Figure 1. Conceptual comparison of flow estimation methods. Ex-
isting CNN-based methods regress flow via local spatial convolu-
tions, while TransFlow relies on Transformer to perform global
matching (both spatial and temporal). Panel (a): the CNN-based
pipeline, with tedious pre-training (FlyingChairs, FlyingThings,
Sintel) and temporally disconnected prediction. Panel (b): the
TransFlow encoder-decoder pipeline, with concise pre-training,
spatial globalization, temporal association, and reconstruction on
the target domain. The demo video can be found at
https://youtu.be/xbnyj9wspqA.
Very
recently, the vast success of Transformer [42, 47, 48]
stimulates the emergence of attention-based paradigms
for various tasks in language, vision, speech, and more.
It appears to form a unanimous endeavor in the deep
learning community to develop unified methodologies
for solving problems in different areas. Towards unify-
ing methodologies, less inductive biases [42] are intro-
duced for a specific problem, which urges the models to
learn useful knowledge purely from the input data.
Jumping on the bandwagon of unifying architecture,
we study applying Vision Transformer [13] to the task
of optical flow. The following question naturally arises:
What are the major limitations of existing CNN-based
approaches? Tackling this question can provide in-
sights into the model design of optical flow, and mo-
tivate us to rethink the task from an attention-driven
view. First, the concurrent CNN-based methods demon-
strate inefficiency in modeling global spatial dependen-
cies due to the intrinsic locality of the convolution op-
eration. It usually requires a large number of CNN lay-
ers to capture the correlations between two pixels that
are spatially far away. Second, CNN-based flow learn-
ers generally model the flow between only two consecu-
tive frames, and fail to explore temporal associations in
the neighboring contexts, resulting in weak prediction
in the presence of significant photometric and geometric
changes. Third, the existing training strategy usually re-
quires a tedious pipeline . Performance guarantees heav-
ily rely on excessive pre-training on extra datasets ( e.g.,
FlyingChairs [14], FlyingThings [33], etc). Without ad-
equate pre-training procedures, the model typically
|
Qu_A_Characteristic_Function-Based_Method_for_Bottom-Up_Human_Pose_Estimation_CVPR_2023
|
Abstract
Most recent methods formulate the task of human pose
estimation as a heatmap estimation problem, and use the
overall L2 loss computed from the entire heatmap to opti-
mize the heatmap prediction. In this paper, we show that
in bottom-up human pose estimation where each heatmap
often contains multiple body joints, using the overall L2
loss to optimize the heatmap prediction may not be the op-
timal choice. This is because, minimizing the overall L2
loss cannot always lead the model to locate all the body
joints across different sub-regions of the heatmap more ac-
curately. To cope with this problem, from a novel perspec-
tive, we propose a new bottom-up human pose estimation
method that optimizes the heatmap prediction via minimiz-
ing the distance between two characteristic functions re-
spectively constructed from the predicted heatmap and the
groundtruth heatmap. Our analysis presented in this pa-
per indicates that the distance between these two charac-
teristic functions is essentially the upper bound of the L2
losses w.r.t. sub-regions of the predicted heatmap. There-
fore, via minimizing the distance between the two character-
istic functions, we can optimize the model to provide a more
accurate localization result for the body joints in different
sub-regions of the predicted heatmap. We show the effec-
tiveness of our proposed method through extensive exper-
iments on the COCO dataset and the CrowdPose dataset.
|
1. Introduction
Human pose estimation aims to locate the body joints of
each person in a given RGB image. It is relevant to vari-
ous applications, such as action recognition [7, 43], person
Re-ID [28], and human object interaction [35].
*Corresponding Author
For tack-
ling human pose estimation, most of the recent methods fall
into two major categories: top-down methods and bottom-
up methods. Top-down methods [24, 32, 33, 39, 44] generally
use a human detector to detect all the people in the image,
and then perform single-person pose estimation for each
detected subject separately. In contrast, bottom-up meth-
ods [5,6,16,17,22,23,25,26] usually locate the body joints
of all people in the image at the same time. Hence, bottom-
up methods, the main focus of this paper, are often a more
efficient choice compared to top-down methods, especially
when there are many people in the input image [5].
In existing works, it is common to regard human pose
estimation as a heatmap prediction problem, since this can
preserve the spatial structure of the input image throughout
the encoding and decoding process [12]. During the gen-
eral optimization process, the groundtruth (GT) heatmaps
Hg are first constructed via putting 2D Gaussian blobs cen-
tered at the GT coordinates of the body joints. After that,
these constructed GT heatmaps are used to supervise the
predicted heatmaps Hp via the overall L2 loss L2^overall cal-
culated (averaged) over the whole heatmap. Specifically,
denoting the area of the heatmap as A, we have
L2^overall = ∥Hp − Hg∥²₂ / A.
We argue that using the overall L2 loss to supervise the
predicted heatmap may not be the optimal choice in bottom-
up methods where each heatmap often contains multiple
body joints from multiple people in various sub-regions,
as shown in Fig. 1(b). This is because, a smaller overall L2
loss calculated over the whole heatmap cannot always lead
the model to locate all the body joints across different sub-
regions in the heatmap more accurately. As illustrated in
Fig. 1(a), the predicted heatmap #2 has a smaller overall L2
loss compared to the predicted heatmap #1. However, the
predicted heatmap #2 locates the body joint in the top-
right sub-region wrongly, whereas the predicted heatmap
#1 locates body joints in both the top-right and bottom-
left sub-regions correctly. This is because, while the de-
crease of the overall L2 loss can be achieved when the L2
loss w.r.t. each sub-region either decreases or remains the
same (e.g., from predicted heatmap #0 to predicted heatmap
#1), it can also be achieved when there is a decrease of L2
loss w.r.t. certain sub-regions and an increase of L2 loss for
some other sub-regions (e.g., from predicted heatmap #1 to
predicted heatmap #2).
Figure 1. (a) Illustration of heatmaps. The predicted heatmap #2 with a smaller overall L2 loss locates the body joint in the top-right
sub-region wrongly, while the predicted heatmap #1 with a larger overall L2 loss locates body joints in both the top-right and bottom-left
sub-regions correctly. (b) Output of a commonly used bottom-up method, HrHRNet-W32 [6]. As shown, it misses the left ankle in the dashed
sub-region of image (i) completely, and misidentifies the right knee in the dashed sub-region of image (ii). This indicates that accurately
localizing the body joints of multiple people in a single heatmap is a challenging problem. (Best viewed in color.)
This indicates that, in bottom-up
methods, the decrease of the overall L2 loss does not al-
ways lead to a more accurate localization result for the body
joints in different sub-regions of the predicted heatmap at
the same time. Besides, we also show some results of a
commonly used bottom-up method, HrHRNet-W32 [6], in
Fig. 1(b). As shown, it may even miss or misidentify certain
body joints when there are a number of people in the input
image. This indicates that it is quite difficult to accurately
locate all body joints of all people in the predicted heatmap.
To tackle the above-mentioned problem in bottom-up
methods, in this paper, rather than using the overall L2 loss
to supervise the whole heatmap, we instead aim to optimize
the body joints over sub-regions of the predicted heatmap at
the same time . To this end, from a new perspective, we ex-
press the predicted and GT heatmaps as characteristic func-
tions, and minimize the difference between these functions,
allowing different sub-regions of the predicted heatmap to
be optimized at the same time.
More specifically, we first construct two distributions re-
spectively from the predicted heatmap and the GT heatmap.
After that, we obtain two characteristic functions of these
two distributions and optimize the heatmap prediction via
minimizing the distance between these two characteristic
functions. We analyze in Sec. 3.3 that the distance between
the two characteristic functions is the upper bound of the L2 losses w.r.t. sub-regions in the predicted heatmap. There-
fore, via minimizing the distance between the two charac-
teristic functions, our method can locate body joints in dif-
ferent sub-regions more accurately at the same time, and
thus achieve superior performance.
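A compact way to see the proposed supervision is sketched below: each heatmap is normalized into a distribution over pixel locations, its characteristic function is sampled on a frequency grid via a 2D FFT, and the loss is the distance between the two complex-valued functions. The exact construction and distance used in the paper may differ; this is only an illustration.

```python
# Illustrative sketch (the paper's exact construction and distance may differ):
# normalize each heatmap into a distribution over pixel locations, evaluate its
# characteristic function on a grid of frequencies with a 2D FFT, and penalize
# the difference between the two complex-valued functions.
import torch

def characteristic_function_loss(pred, gt, eps=1e-8):
    """pred, gt: (B, K, H, W) non-negative heatmaps for K joints."""
    def to_charfn(h):
        p = h.clamp(min=0)
        p = p / (p.sum(dim=(-2, -1), keepdim=True) + eps)  # distribution over pixels
        return torch.fft.fft2(p)                            # sampled characteristic function
    diff = to_charfn(pred) - to_charfn(gt)
    return diff.abs().pow(2).mean()

# Toy usage: a GT heatmap with two Gaussian blobs and a noisy prediction.
yy, xx = torch.meshgrid(torch.arange(64.), torch.arange(64.), indexing="ij")
gt = (torch.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / 18)
      + torch.exp(-((xx - 45) ** 2 + (yy - 50) ** 2) / 18))[None, None]
pred = (gt + 0.05 * torch.rand_like(gt)).requires_grad_(True)
characteristic_function_loss(pred, gt).backward()
```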
The contributions of our work are summarized as fol-
lows. 1) From a new perspective, we supervise the pre-
dicted heatmap using the distance between the character-
istic functions of the predicted and GT heatmaps. 2) We
analyze (in Sec. 3.3) that the L2 losses w.r.t. sub-regions
of the predicted heatmap are upper-bounded by the dis-
tance between the characteristic functions. 3) Our proposed
method achieves state-of-the-art performance on the evalu-
ation benchmarks [19, 21].
|
Maralan_Computational_Flash_Photography_Through_Intrinsics_CVPR_2023
|
Abstract
Flash is an essential tool as it often serves as the sole
controllable light source in everyday photography. How-
ever, the use of flash is a binary decision at the time a
photograph is captured with limited control over its char-
acteristics such as strength or color. In this work, we study
the computational control of the flash light in photographs
taken with or without flash. We present a physically moti-
vated intrinsic formulation for flash photograph formation
and develop flash decomposition and generation methods
for flash and no-flash photographs, respectively. We demon-
strate that our intrinsic formulation outperforms alterna-
tives in the literature and allows us to computationally con-
trol flash in in-the-wild images.
|
1. Introduction
Flash is an essential tool as it often serves as the sole con-
trollable light source in everyday photography available on
a wide range of cameras. Although most cameras measure
the environment light to adjust the flash strength, whether to
use flash or not is a binary decision for the casual photog-
rapher without any control over flash characteristics such
as strength or color temperature. In addition to hardware
constraints, this limitation also stems from the typical use
case of flash in situations that do not allow lengthy experimentation with illumination. This makes the computational
control of the flash a desirable application in photography.
Using the superposition of illumination principle, a flash
photograph can be modeled as the summation of flash and
ambient illuminations in linear RGB:
P = A + F, (1)
where P, A, and F represent the flash photograph, the am-
bient illumination, and the flash illumination respectively.
To computationally control the flash post-capture, the am-
bient and flash illuminations need to be decomposed. Sim-
ilarly, to add flash light to a no-flash photograph, the flash
illumination needs to be generated. In this work, we tackle
both flash illumination decomposition and generation prob-
lems and show that regardless of the original photograph
being captured with or without flash, we can generate and
control the flash photograph computationally.
We start by modeling the flash photograph formation
through image intrinsics. Intrinsic image decomposition
models the image in terms of reflectance and shading:
I = R·S, (2)
where I, R, and S represent the input image, the re-
flectance map, and the shading respectively. This repre-
sentation separates the effects of lighting in the scene from
the illumination-invariant surface properties. With lighting
being the only difference between the ambient and flash il-
luminations, both the decomposition and generation prob-
lems are more conveniently modeled on shading. Assum-
ing a mono-color ambient illumination and a single-channel
shading representation, we re-write Eq. 1 using Eq. 2:
P = R·cA·SA + R·SF = R·(cA·SA + SF), (3)
where cA is a 3-dimensional vector representing the color
temperature of the ambient illumination assuming the flash
photograph P is white-balanced for the flash light. Fig. 2
visualizes our representation.
Figure 2. We can model the flash photograph formation in terms of separate shading maps for flash and ambient lights with a shared albedo.
We develop a system for flash decomposition where we
estimate cA, SA, and SF from a flash photograph and com-
bine them with the shared reflectance Rto compute the final
flash and ambient illumination estimations. We show that
this physically-inspired modeling of the flash photograph
allows us to generate high-quality decomposition results.
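For concreteness, the sketch below shows how the intrinsic components of Eq. 3 recombine once a network has estimated them, and how the flash term can then be rescaled or re-colored; it is an illustration of the formation model, not the authors' pipeline, and all tensor shapes are assumptions.

```python
# Illustration of the formation model (Eq. 1-3), not the authors' pipeline:
# recombine estimated intrinsic components into ambient, flash, and composite
# images, and rescale or re-color the flash term computationally.
import torch

def compose_flash_photo(R, S_A, S_F, c_A, flash_gain=1.0, flash_color=None):
    """R: (3,H,W) reflectance; S_A, S_F: (1,H,W) shadings; c_A: (3,) ambient color."""
    ambient = R * c_A.view(3, 1, 1) * S_A                    # A = R * c_A * S_A
    if flash_color is None:
        flash_color = torch.ones(3)                          # white-balanced flash
    flash = R * flash_color.view(3, 1, 1) * (flash_gain * S_F)   # F = R * S_F (scaled)
    return ambient + flash, ambient, flash                   # P = A + F

# Toy usage with random maps standing in for network estimates.
R = torch.rand(3, 64, 64)
S_A, S_F = torch.rand(1, 64, 64), torch.rand(1, 64, 64)
c_A = torch.tensor([1.0, 0.8, 0.6])                          # warm ambient light
P, A, flashed = compose_flash_photo(R, S_A, S_F, c_A)
P_half, _, _ = compose_flash_photo(R, S_A, S_F, c_A, flash_gain=0.5)  # weaker flash
```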
Flash decomposition is an under-constrained problem
where both A and F are partially observed within the in-
put P. Flash generation, on the other hand, is a more
challenging problem that amounts to full scene relighting
as, in this scenario, only A is given as input for estimat-
ing F. In addition to our intrinsic modeling, we formu-
late a self-supervision strategy for the generation problem
through the cyclic relationship between decomposition and
generation. We show that with a combined supervised and
self-supervised strategy, we can successfully simulate flash
photographs from no-flash input.
|
Qu_Towards_Robust_Tampered_Text_Detection_in_Document_Image_New_Dataset_CVPR_2023
|
Abstract
Recently, tampered text detection in document images has
attracted increasing attention due to its essential role in
information security. However, detecting visually consis-
tent tampered text in photographed document images is still
a main challenge. In this paper, we propose a novel frame-
work to capture more fine-grained clues in complex scenar-
ios for tampered text detection, termed as Document Tam-
pering Detector (DTD), which consists of a Frequency Per-
ception Head (FPH) to compensate for the deficiencies caused
by the inconspicuous visual features, and a Multi-view It-
erative Decoder (MID) for fully utilizing the information
of features in different scales. In addition, we design a
new training paradigm, termed as Curriculum Learning for
Tampering Detection (CLTD), which can address the con-
fusion during the training procedure and thus improve
the robustness for image compression and the ability to
generalize. To further facilitate the tampered text detec-
tion in document images, we construct a large-scale docu-
ment image dataset, termed as DocTamper, which contains
170,000 document images of various types. Experiments
demonstrate that our proposed DTD outperforms previous
state-of-the-art by 9.2%, 26.3% and 12.3% in terms of
F-measure on the DocTamper testing set, and the cross-
domain testing sets of DocTamper-FCD and DocTamper-
SCD, respectively. Codes and dataset will be available at
https://github.com/qcf-568/DocTamper.
|
1. Introduction
Document images are one of the most essential media
for information transmission in modern society, which con-
tains amounts of sensitive and privacy information such
as telephone numbers. With the rapid development of
image editing technologies, such sensitive text informa-
tion can be tampered with more easily for malicious pur-
poses such as fraud, causing serious information security
risks [33, 42, 48, 50].
Figure 1. Tampered text in document images usually has rela-
tively small areas and few visual tampering clues.
Therefore, detecting tampering in doc-
ument images has become an important research topic in re-
cent years [18,47]. It is crucial to develop effective methods
to examine whether a document image is modified, mean-
while identifying the exact location of the tampered text.
Most text tampering methods in document images can be
generally categorized into three types: (1) Splicing, which
copies regions from one image and pastes them into other images;
(2) Copy-move, which shifts the spatial locations of objects
within images; (3) Generation, which replaces regions of
images with visually plausible but different contents, as
shown in Fig. 1. Though tampering detection in natural
images has been studied for years [14, 49], it differs a lot
from that in document images. For natural images, tam-
pering detection mainly relies on the relatively obvious vi-
sual tampered clues on edge or surface of the object, which
hardly exist in documents, especially for copy-move and
splicing [1, 47]. This is because document images mostly
have the same background color, and text within clusters
usually has the same font and size. Therefore, the tampered
text regions cannot be effectively detected based only on vi-
sual clues. To this end, in this paper we propose to incorpo-
rate both visual and frequency clues to improve the ability
on identifying the tampered text regions in documents.
Recently, some promising methods have been proposed
for tampered text detection [8,18,47] by analysing the text’s
appearance on scanned documents. Though significant
progress has been achieved on simple and clean documents,
detecting elaborately tampered text regions in various pho-
tographed documents is still an open challenge.
In this paper, we propose a multi-modality Transformer-
based method, termed as Document Tampering Detector
(DTD), for Document Image Tampering Detection (DITD).
The proposed model utilizes features from both visual do-
main and frequency domain. The former are extracted
by the Visual Perception Head with the original image as in-
put. For the latter one, different from the previous work [43]
that leveraged the high-pass filtered results of RGB images,
we utilize the DCT coefficients as the input of our model’s
Frequency Perception Head to obtain the corresponding em-
bedding. Through a fusion module with a concatenation op-
eration and an attention module, the features in these two
modules are incorporated effectively and then fed into a
Swin-Transformer [27] based encoder. Finally, we intro-
duce a new Multi-view Iterative Decoder to progressively
perceive the tampered text regions.
From our experiments, we find image compression can
cover up some of the tampering clues, and models usually lack
robustness to it at the start. Training on randomly compressed
images confuses models, and they cannot
work well on the challenging DITD tasks. Therefore, we
further propose a new training paradigm, termed as Curricu-
lum Learning for Tampering Detection (CLTD), to train the
models in an easy-to-hard manner. In this way, the model
can first learn how to spot tampering clues accurately and
then gradually gain the robustness to image compression.
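The easy-to-hard idea behind CLTD can be illustrated with a simple compression schedule; the sketch below is an assumption about one possible schedule (linearly lowering the minimum JPEG quality over training), not the schedule used in the paper.

```python
# Assumed illustration of the easy-to-hard idea behind CLTD (the paper's actual
# schedule may differ): early epochs see nearly uncompressed images, and the
# minimum JPEG quality is lowered as training progresses.
import io
import random
from PIL import Image

def curriculum_jpeg(img: Image.Image, epoch: int, total_epochs: int) -> Image.Image:
    progress = epoch / max(total_epochs - 1, 1)
    min_quality = int(100 - 25 * progress)       # 100 -> 75 over training (assumed range)
    quality = random.randint(min_quality, 100)
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# Toy usage: the same document crop gets compressed harder in later epochs.
doc = Image.new("RGB", (256, 256), "white")
easy = curriculum_jpeg(doc, epoch=0, total_epochs=30)
hard = curriculum_jpeg(doc, epoch=29, total_epochs=30)
```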
As large-scale tampered document datasets are lacking, we
introduce a new method to create realistic tampered text
data and construct a large-scale dataset, termed as DocTam-
per, with 170k tampered document images of diverse types.
We conduct sufficient experiments on both our proposed
DocTamper and the T-SROIE dataset [47]. Both the qualita-
tive and quantitative results demonstrate that our DTD can
significantly outperform previous state-of-the-art methods.
In summary, our main contributions are as follows:
• We introduce DTD, a powerful multi-modality model
for tampered text detection in document images.
• We propose CLTD, a new training paradigm to en-
hance the generalization ability and robustness of the
proposed tampering detection model.
• We propose a novel data synthesis method to gener-
ate realistic tampered documents efficiently with only unlabeled document images.
• We construct a comprehensive large-scale dataset with
various scenarios and tampering methods to further fa-
cilitate the research on tampered text detection task.
|
Qiu_CafeBoost_Causal_Feature_Boost_To_Eliminate_Task-Induced_Bias_for_Class_CVPR_2023
|
Abstract
Continual learning requires a model to incrementally
learn a sequence of tasks and aims to predict well on all
the learned tasks so far, which notoriously suffers from the
catastrophic forgetting problem. In this paper, we find a new
type of bias appearing in continual learning, coined as task-
induced bias. We place continual learning into a causal
framework, based on which we find the task-induced bias
is reduced naturally by two underlying mechanisms in task
and domain incremental learning. However, these mecha-
nisms do not exist in class incremental learning (CIL), in
which each task contains a unique subset of classes. To
eliminate the task-induced bias in CIL, we devise a causal
intervention operation so as to cut off the causal path that
causes the task-induced bias, and then implement it as a
causal debias module that transforms biased features into
unbiased ones. In addition, we propose a training pipeline
to incorporate the novel module into existing methods and
jointly optimize the entire architecture. Our overall ap-
proach does not rely on data replay, and is simple and
convenient to plug into existing methods. Extensive em-
pirical study on CIFAR-100 and ImageNet shows that our
approach can improve accuracy and reduce forgetting of
well-established methods by a large margin.
|
1. Introduction
Deep learning has been widely applied in many fields
like computer vision, social media analytics, natural lan-
guage processing, etc. The standard paradigm of deep
learning is to train models on a prepared dataset, of which
all data are accessible in the whole training stage. However,
in most real-world applications, a sequence of tasks arrives
incrementally, which requires models to learn continuously
from a new task, namely continual learning [27]. Although
fine-tuning achieves this goal somewhat, such a naïve method
forgets obtained knowledge of previous tasks after learning
a new task, dubbed as catastrophic forgetting [18].
*Corresponding authors.
Figure 1. Illustration of CIL, task-induced bias and their detrimen-
tal effects. (a) Illustration of CIL. (b) Task-induced bias between
class features. Two classes in a task are correlated by a task iden-
tifier. (c) Task-induced bias between images and labels. Images
and labels in a task are correlated by a task identifier. In (b)(c), the
first column describes perspectives of task-induced bias, the sec-
ond shows the underlying causal effect causing task-induced bias,
and the last illustrates adverse influence of task-induced bias.
Much effort has been devoted as a remedy for alleviating
this problem in literature [2, 3, 11–14, 16, 33]. One general
direction is to preserve knowledge representations of pre-
vious tasks by knowledge distillation [10] and by storing a
few observed exemplars for replay [22]. [11] discovers the
class imbalance between old classes and new ones is a key
challenge of large-scale applications in class incremental
learning (CIL), in which classes of a new task never appear
in the previous tasks, as illustrated in Fig. 1 (a). [2,7,31,35]
all follow this direction and propose effective methods to
overcome the class imbalance. However, these methods ig-
nore bias problems in continual learning. In this work, we
consider the problem of CIL from a novel perspective of bias. We think that catastrophic forgetting is a reflection of a new type of bias, coined task-induced bias.
Deep neural networks are easily misled by language
bias [1] and vision bias [6, 9, 29], which are both derived
from spurious correlations of two or more items in differ-
ent positions of a sentence or an image. In contrast to these
two biases, the task-induced bias occurs across different im-
ages in a common task. In CIL, data of specific classes
are accessible only in a certain task if replay buffer is not
applied. A deep neural network is capable of learning cor-
relations among images of different categories or between
images and labels as much as it can, which includes a few
spurious correlations, namely biases, leading to a limited
generalization. On the one hand, a portion of these image
correlations can be shared beneficially in this task, but these
benefits cannot be shared among different tasks. For example, in
Fig. 1 (b), color is sufficient to discriminate elephants and
leopards in a specific task since elephants are grey while
leopards are orange, and thus deep models have no moti-
vation to capture more complex characteristics, such as stripes.
This is detrimental to the feature extractor because it cap-
tures a few details that cannot generalize among all tasks.
On the other hand, as shown in Fig. 1 (c), a classifier will
be biased to new classes in the current task [11,31,35], be-
cause the training data contain only the new classes and predicting a new class is more likely to be correct.
To better understand the task-induced bias in continual
learning, we model three continual learning scenarios [27]
into a common causal inference framework, which merely
contains three necessary variables: image, label, and task
identifier. Thanks to the framework, we discover that the
task-induced bias is minor in task incremental and domain
incremental learning due to innate mechanisms that cut off
an inappropriate causal path, which partially explains why the two scenarios are easier to tackle than CIL [27]. Because this causal path does exist in CIL, we seek an active causal intervention to close it. To implement
the intervention, we devise a simple but effective causal
debias module, which is established on attention mecha-
nisms [28]. Subsequently, we incorporate this module into
existing CIL architectures and propose a training pipeline
to optimize it. Our overall approach transforms causally bi-
ased features into unbiased ones, and hence we refer to it as
Causal Feature Boost (CafeBoost). Finally, we systemat-
ically compare accuracy and forgetting of well-established
methods with ours in various settings. Experimental results
show that CafeBoost yields significant and consistent performance boosts for existing methods on CIFAR-100 and ImageNet (0.67%–12.08% in accuracy).
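The paper only states that the causal debias module is built on attention mechanisms [28] and is plugged between an existing backbone and classifier; below is a minimal, hypothetical sketch of how such a module could be wired. The module name, feature dimensions, and the use of learned "intervention" tokens are our own assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an attention-based causal debias module (not the
# authors' code). Assumption: biased backbone features are refined by attending
# over a small set of learned intervention tokens, so that the refined feature
# no longer depends on which task the sample came from.
import torch
import torch.nn as nn

class CausalDebiasModule(nn.Module):
    def __init__(self, feat_dim=512, num_tokens=16, num_heads=8):
        super().__init__()
        # Learned tokens standing in for the confounder values we intervene on.
        self.intervention_tokens = nn.Parameter(torch.randn(num_tokens, feat_dim))
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, feats):                      # feats: (B, feat_dim)
        q = feats.unsqueeze(1)                     # (B, 1, D) query = biased feature
        kv = self.intervention_tokens.unsqueeze(0).expand(feats.size(0), -1, -1)
        debiased, _ = self.attn(q, kv, kv)         # attend over intervention tokens
        return self.norm(feats + debiased.squeeze(1))  # residual refinement

# Plugged between a backbone and the incremental classifier:
backbone_feats = torch.randn(4, 512)
unbiased = CausalDebiasModule()(backbone_feats)    # (4, 512), fed to the classifier
```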
|
Li_MSeg3D_Multi-Modal_3D_Semantic_Segmentation_for_Autonomous_Driving_CVPR_2023
|
Abstract
LiDAR and camera are two modalities available for
3D semantic segmentation in autonomous driving. The
popular LiDAR-only methods severely suffer from inferior
segmentation on small and distant objects due to insufficient
laser points, while the robust multi-modal solution is
under-explored, where we investigate three crucial inherent
difficulties: modality heterogeneity, limited sensor field
of view intersection, and multi-modal data augmentation.
We propose a multi-modal 3D semantic segmentation
model (MSeg3D) with joint intra-modal feature extraction
and inter-modal feature fusion to mitigate the modality
heterogeneity. The multi-modal fusion in MSeg3D consists
of geometry-based feature fusion GF-Phase, cross-modal
feature completion, and semantic-based feature fusion
SF-Phase on all visible points. The multi-modal data
augmentation is reinvigorated by applying asymmetric
transformations on LiDAR point cloud and multi-camera
images individually, which benefits the model training
with diversified augmentation transformations. MSeg3D
achieves state-of-the-art results on nuScenes, Waymo, and
SemanticKITTI datasets. Under the malfunctioning multi-
camera input and the multi-frame point clouds input,
MSeg3D still shows robustness and improves the LiDAR-
only baseline. Our code is publicly available at https:
//github.com/jialeli1/lidarseg3d .
|
1. Introduction
Scene understanding for safe autonomous driving can
be achieved through semantic segmentation using camera
2D images and LiDAR 3D point clouds, which densely
classifies each smallest sensing unit of the modality. The
image-based 2D semantic segmentation has been developed
with massive solid studies [12, 34, 61, 63]. The camera
image has rich appearance information about the object but
severely suffers from illumination, varying object scales,
and indirect applications in the 3D world. Another modal-
ity, LiDAR point cloud, drives 3D semantic segmentation
with laser points [1, 3, 11, 37]. Unfortunately, irregular laser points are too sparse to capture the details of objects.
The inaccurate segmentation appears especially on small
and distant objects. The other under-explored direction is
using multi-modal data to increase both the robustness and
accuracy in 3D semantic segmentation [67].
Despite the conceptual superiority, the development of
multi-modal segmentation model is still nontrivial, lagging
behind the single-modal methods [27]. We rationally
attribute the difficulties to the three following aspects. i)
Heterogeneity between modalities. Due to sparse points
and dense pixels, point cloud feature extractors [14, 31]
and image feature extractors [15, 44, 47] are developed
distinctly. Separate intra-modal feature extractors are
used to address the heterogeneity [13, 20, 25, 42, 51],
but the lack of joint optimization leads to suboptimal
features from irrelevant network parameters. ii) Limited
intersection on the field of view (FOV) between sensors.
Only the points falling into the intersected FOV are
geometrically associated with multi-modal data, while
simply considering the intersected multi-modal data is not
sufficient to be practically applicable. Performing fusion
solely in the limited FOV intersection like [20,67] results in
unsatisfactory overall segmentation performance as shown
in Fig. 1. iii) Multi-modal data augmentation. For
example, PMF [67] uses only several 2D augmentations
for spatially aligned point cloud projection image and
camera RGB image. Under the constraint of modal
alignment or 2D representation of point cloud, the multi-
modal segmentation works [20, 25, 67] discard many useful
and critical point cloud augmentation transformations with
sacrificed perception performance [36, 48].
Accordingly, we propose a top-performing multi-modal
3D semantic segmentation method termed MSeg3D, in-
herently motivated by addressing the aforementioned three
technical difficulties. i) Unlike separately extracting modal
features in existing methods [13, 25, 42, 51], we jointly
optimize intra-modal feature extraction and inter-modal
feature fusion to drive maximum correlation and com-
plementarity between heterogeneous modalities. ii) To
overcome the disregarded multi-modal fusion outside FOV
intersection [25, 67], we propose a cross-modal feature
completion and a semantic-based feature fusion phase
SF-Phase to collaborate with the geometry-based feature
fusion phase GF-Phase. For points outside the FOV
intersection, the former completes the missing camera
features using predicted pseudo-camera features, under the
explicit guidance of cross-modal supervision. For all the
points outside and inside the FOV intersection, the latter SF-
Phase leverages the multi-head attention [41] to model the
semantic relation between point and interested categories
so that we can attentively fuse the semantic embeddings
aggregated from all the visible fields to each point. iii) The
challenging multi-modal data augmentation is reinvigorated
by decomposing it into asymmetric transformations in the LiDAR world, the camera world, and the local cameras, which enables flexible permutations to enrich training samples.
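The SF-Phase sketched above can be read as multi-head attention [41] between per-point features and per-category semantic embeddings aggregated from all visible fields. The snippet below is our own simplification under that reading; tensor shapes, module names, and the residual wiring are assumptions rather than the released MSeg3D code.

```python
# Rough sketch of semantic-based feature fusion (SF-Phase) as we understand it:
# each point queries category-level semantic embeddings via multi-head attention.
import torch
import torch.nn as nn

class SemanticFusion(nn.Module):
    def __init__(self, point_dim=64, sem_dim=64, num_heads=4):
        super().__init__()
        self.proj = nn.Linear(point_dim, sem_dim)
        self.attn = nn.MultiheadAttention(sem_dim, num_heads, batch_first=True)

    def forward(self, point_feats, sem_feats):
        # point_feats: (B, N, point_dim) per-point features (LiDAR + real or
        #                                pseudo-camera features)
        # sem_feats:   (B, C, sem_dim)   per-category semantic embeddings
        #                                aggregated from all visible fields
        q = self.proj(point_feats)                 # points act as queries
        fused, _ = self.attn(q, sem_feats, sem_feats)
        return q + fused                           # residual semantic refinement

points = torch.randn(2, 4096, 64)
semantics = torch.randn(2, 20, 64)
out = SemanticFusion()(points, semantics)          # (2, 4096, 64)
```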
As the proposed components are accumulated in Fig. 1, mIoU and mIoU1 increase significantly while the gap between mIoU and mIoU1 gradually decreases. Our
contributions are four-fold: i) We propose a multi-modal
segmentation model MSeg3D with joint intra-modal feature
extraction and inter-modal feature fusion, achieving state-
of-the-art 3D segmentation performance on the competitive
nuScenes [3], Waymo [37], and SemanticKITTI [1] datasets
for autonomous driving. The proposed framework won 2nd
place in the Waymo 3D semantic segmentation challenge
at CVPR 2022. ii) We propose a cross-modal feature
completion and a semantic-based feature fusion phase. To
our best knowledge, it is the first time to address the
overlooked and inapplicable multi-modal fusion outside the
sensor FOV intersection. iii) By applying augmentation
transformations asymmetrically on point cloud and images,
the proposed asymmetrical multi-modal data augmentation
significantly increases the diversity of multi-modal samples
for training model with robust improvements. iv) Extensive
experimental analyses on the improvement and robustness
of our method clearly validate our designs.
|
Meuleman_Progressively_Optimized_Local_Radiance_Fields_for_Robust_View_Synthesis_CVPR_2023
|
Abstract
We present an algorithm for reconstructing the radiance
field of a large-scale scene from a single casually cap-
tured video. The task poses two core challenges. First,
most existing radiance field reconstruction approaches rely
on accurate pre-estimated camera poses from Structure-
from-Motion algorithms, which frequently fail on in-the-
wild videos. Second, using a single, global radiance field
with finite representational capacity does not scale to longer
trajectories in an unbounded scene. For handling unknown
poses, we jointly estimate the camera poses with radiance
field in a progressive manner. We show that progressive optimization significantly improves the robustness of the re-
construction. For handling large unbounded scenes, we dy-
namically allocate new local radiance fields trained with
frames within a temporal window. This further improves
robustness (e.g., performs well even under moderate pose
drifts) and allows us to scale to large scenes. Our extensive
evaluation on the TANKS AND TEMPLES dataset and our
collected outdoor dataset, STATIC HIKES, show that our
approach compares favorably with the state-of-the-art.
|
1. Introduction
Dense scene reconstruction for photorealistic view syn-
thesis has many critical applications, for example, in
VR/AR (virtual traveling, preserving of important cultural
artifacts), video processing (stabilization and special ef-
fects), and mapping (real-estate, human-level maps). Re-
cently, rapid progress has been made in increasing the fi-
delity of reconstructions using radiance fields [22]. Unlike
most traditional methods, radiance fields can model com-
mon phenomena such as view-dependent appearance, semi-
transparency, and intricate micro-details.
Challenges. In this paper, we aim to create radiance field
reconstructions of large-scale scenes that are acquired us-
ing a single handheld camera since this is arguably the most
practical way of capturing them outside the realm of pro-
fessional applications. In this setting, we are faced with
two main challenges: (1) estimating accurate camera tra-
jectory of a long path and (2) reconstructing the large-scale
radiance fields of scenes. Resolving them together is dif-
ficult because changes in observation can be explained by
either camera motion or the radiance field’s ability to model
view-dependent appearance. For this reason, many radiance
field estimation techniques assume that the accurate poses
are known in advance (typically fixed during the radiance
field optimization). However, in practice, one has to use a
separate method, such as Structure-from-Motion (SfM), for
estimating the camera poses in a pre-processing step. Un-
fortunately, SfM is not robust in the handheld video setting.
It frequently fails because, unlike radiance fields, it does not
model view-dependent appearance and struggles in the ab-
sence of highly textured features and in the presence of even
slight dynamic motion (such as swaying tree branches).
To remove the dependency on known camera poses, sev-
eral approaches propose jointly optimizing camera poses
and radiance fields [11,17, 41]. These methods perform well
when dealing with a few frames and a good pose initializa-
tion. However, as shown in our experiments, they have dif-
ficulty estimating long trajectories of a video camera from
scratch and often fall into local minima.
Our work. In this paper, we propose a joint pose and
radiance field estimation method. We design our method
by drawing inspiration from classical incremental SfM al-
gorithms and keyframe-based SLAM systems for improving
the robustness. The core of our approach is to process the
video sequence progressively using overlapping local radi-
ance fields. More specifically, we progressively estimate the
poses of input frames while updating the radiance fields. To
model large-scale unbounded scenes, we dynamically in-
stantiate local radiance fields. The increased locality and
progressive optimization yield several major advantages:
• Our method scales to processing arbitrarily long
videos without loss of accuracy and without hitting
memory limitations.
• Increased robustness because the impact of misestima-
tions is locally bounded.
• Increased sharpness because we use multiple radiance
fields to model local details of the scene (see Figures 1 and 2b).
Figure 2. Space parameterization. (a) NDC, used by NeRF [22] for forward-facing scenes, maps a frustum to a unit cube volume. While a sensible approach for forward-facing cameras, it is only able to represent a small portion of a scene as the frustum cannot be extended beyond a field of view of 120° or so without significant distortion. (b) Mip-NeRF360’s [4] space contraction squeezes the background and fits the entire space into a sphere of radius 2. It is designed for inward-facing 360 scenes and cannot scale to long trajectories. (c) Our approach allocates several radiance fields along the camera trajectory. Each radiance field maps the entire space to a [−2, 2] cube (Equation (5)) and, with each having its own center for contraction (Equation (7)), the high-resolution uncontracted space follows the camera trajectory, so our approach can adapt to any camera path.
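The exact forms of Equations (5) and (7) referenced in the caption are not reproduced here; the sketch below shows one common contraction of this kind (a mip-NeRF-360-style squash re-centered on each local field's own center), which matches the described behaviour but is only an illustrative stand-in.

```python
# Illustrative per-field space contraction: points are re-centered on a local
# radiance field's own center and squashed into a [-2, 2] cube. The paper's
# Equations (5) and (7) may differ; this follows the mip-NeRF 360-style form.
import torch

def contract_to_local_cube(x, center, scale=1.0):
    """x: (N, 3) world points; center: (3,) center of this local radiance field."""
    y = (x - center) / scale                     # express points in local coordinates
    n = y.norm(dim=-1, keepdim=True).clamp(min=1e-9)
    # Inside the unit ball, keep points as-is; outside, squash radius into (1, 2).
    contracted = torch.where(n <= 1.0, y, (2.0 - 1.0 / n) * (y / n))
    return contracted                            # lies inside the [-2, 2] cube

centers = torch.tensor([[0.0, 0.0, 0.0], [5.0, 0.0, 1.0]])   # one center per local field
pts = torch.randn(1024, 3) * 10
local0 = contract_to_local_cube(pts, centers[0])
```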
We validate our method on the TANKS AND TEMPLES dataset. We also collect a new dataset, STATIC HIKES, of
twelve outdoor scenes using four consumer cameras to eval-
uate our method. These sequences are challenging due to
long handheld camera trajectories, motion blur, and com-
plex appearance.
Our contributions. We present a new method for re-
constructing the radiance field of a large-scale scene, which
contains the following contributions:
• We propose to progressively estimate the camera poses
and radiance fields, leading to significantly improved
robustness.
• We show that multiple overlapping local radiance
fields improve visual quality and support modeling
large-scale unbounded scenes.
• We contribute a newly collected video dataset that
presents new challenges not covered by existing view
synthesis datasets.
Limitations. Our work aims to synthesize novel views
from the reconstructed radiance fields. While we jointly es-
timate the poses in the pipeline, we do not perform global
bundle adjustment and loop closure (i.e., not a complete
SLAM system). We leave this important direction for fu-
ture work.
|
Mei_LightPainter_Interactive_Portrait_Relighting_With_Freehand_Scribble_CVPR_2023
|
Abstract
Recent portrait relighting methods have achieved realis-
tic results of portrait lighting effects given a desired light-
ing representation such as an environment map. However,
these methods are not intuitive for user interaction and
lack precise lighting control. We introduce LightPainter, a
scribble-based relighting system that allows users to inter-
actively manipulate portrait lighting effect with ease. This
is achieved by two conditional neural networks, a delighting
module that recovers geometry and albedo optionally con-
ditioned on skin tone, and a scribble-based module for re-
lighting. To train the relighting module, we propose a novel
scribble simulation procedure to mimic real user scribbles,
which allows our pipeline to be trained without any human
annotations. We demonstrate high-quality and flexible por-
trait lighting editing capability with both quantitative and
qualitative experiments. User study comparisons with com-
mercial lighting editing tools also demonstrate consistent
user preference for our method.
|
1. Introduction
Lighting is a fundamental aspect of portrait photography, as light shapes reality and gives the work depth, color-
fulness and excitement. Professional photographers [17,38]
spend hours designing lighting such that shadow and high-
light are distributed accurately on the subject to achieve the
desired photographic look. Getting the exact lighting setups
requires years of training, expensive equipment, environ-
ment setup, timing, and costly teamwork. Recently, portrait
relighting techniques [20, 22, 33, 43, 45, 48, 53, 61, 63] al-
low users to apply a different lighting condition to a portrait
photo. These methods require a given lighting condition:
some use an exemplar image [42, 43], which lacks precise
lighting control and requires exhaustive image search to find
the specific style; some use a high dynamic range (HDR)
environment map [33,45,48,53] that is difficult and unintu-
itive to interpret or edit.
Hand-drawn sketches and scribbles have been shown to
be good for user interaction and thus are widely used in vari-
ous image editing applications [6,9,10,30,32,57]. Inspired
by this, we propose LightPainter , a scribble-based inter-
active portrait relighting system. As shown in Figure 1,
LightPainter is an intuitive and flexible lighting editing sys-
tem that only requires casual scribbles drawn on the input.
Unlike widely-used lighting representations such as envi-
ronment maps and spherical harmonics, it is non-trivial to
interpret free-hand scribbles as lighting effects for a number
of challenges.
The first challenge is simulating scribbles to mimic real
free-hand input as it is impractical to collect a large num-
ber of human inputs. In addition, unlike other sketch-based
editing tasks [6,9,10,30,32,57] where sketches can be com-
puted from edges or orientation maps, there is no conven-
tional way to connect scribbles with lighting effects. To
address such challenge, we propose a scribble simulation
algorithm that can generate a diverse set of synthetic scrib-
bles that mimic real human inputs. For an interactive re-
lighting task, scribbles should be flexible and expressive:
easy to draw and accurately reflecting the lighting effect,
such as changes in local shading and color. Compared to a
shading map, scribbles are often “incomplete”: users tend
to sparsely place the scribbles on a few key areas on the
face. Therefore, we propose to use a set of locally con-
nected “shading stripes” to describe local shading patterns,
including shape, intensity, and color, and use them to simu-
late scribbles. To this end, we simulate scribbles by starting
from a full shading map and applying a series of operations
to generate coarse and sparse shading stripes. We show that
training with our synthetic scribbles enables the system to
generalize well to real user scribbles from human inputs,
with which our model can generate high-quality results with
desirable lighting effects.
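The paper's exact sequence of operations for carving shading stripes out of a full shading map is not given here; the following is one plausible, hypothetical version of such a simulation step. The number of strokes, their widths, and the random stroke shapes are our assumptions.

```python
# Hypothetical scribble-simulation step in the spirit described above: start from
# the full shading map and keep only a few coarse, stroke-like stripes of it.
import numpy as np

def simulate_scribbles(shading, num_strokes=5, width=9, rng=None):
    """shading: (H, W, 3) float shading map -> sparse scribble map + binary mask."""
    rng = np.random.default_rng() if rng is None else rng
    H, W, _ = shading.shape
    mask = np.zeros((H, W), dtype=bool)
    for _ in range(num_strokes):
        # Random roughly-horizontal stroke across a band of the image.
        row = rng.integers(0, H)
        col0, col1 = sorted(rng.integers(0, W, size=2))
        r0, r1 = max(0, row - width // 2), min(H, row + width // 2 + 1)
        mask[r0:r1, col0:col1] = True
    scribbles = np.where(mask[..., None], shading, 0.0)   # sparse "shading stripes"
    return scribbles, mask

shading_map = np.random.rand(256, 256, 3).astype(np.float32)
scribbles, mask = simulate_scribbles(shading_map)
```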
The second challenge is how to effectively use local and
noisy scribbles to robustly represent portrait lighting that
is often a global effect. LightPainter uses a carefully de-
signed network architecture and training strategy to han-
dle these discrepancies. Specifically, we introduce a two-
step relighting pipeline to process sparse scribbles. The
first stage produces a plausible completion of the shading
map from the input scribbles and the geometry; the sec-
ond stage refines the shading and renders the appearance
with a learned albedo map. We propose a carefully de-
signed neural network with an augmented receptive field.
Compared with commonly-used UNet for portrait relight-
ing [21, 31, 33, 48], our design can better handle the sparse
scribbles and achieve geometry-consistent relighting.
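As a reading aid, the two-step pipeline described above can be pictured as two networks chained together: one completes a dense shading map from sparse scribbles plus geometry, and one refines the shading and renders the relit portrait with a predicted albedo. The placeholder networks and input choices below are ours; the actual architectures (with augmented receptive fields) are not reproduced.

```python
# Schematic of the described two-step relighting pipeline with placeholder nets.
import torch
import torch.nn as nn

class ShadingCompletion(nn.Module):                 # stage one (placeholder net)
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + 3, 3, kernel_size=3, padding=1)
    def forward(self, scribbles, normals):
        return self.net(torch.cat([scribbles, normals], dim=1))

class RefineAndRender(nn.Module):                   # stage two (placeholder net)
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + 3, 3, kernel_size=3, padding=1)
    def forward(self, shading, albedo):
        return self.net(torch.cat([shading, albedo], dim=1))

scribbles = torch.rand(1, 3, 256, 256)              # sparse user scribble map
normals   = torch.rand(1, 3, 256, 256)              # geometry from the delighting module
albedo    = torch.rand(1, 3, 256, 256)              # albedo from the delighting module
shading = ShadingCompletion()(scribbles, normals)   # plausible dense shading completion
relit   = RefineAndRender()(shading, albedo)        # refined shading + rendered portrait
```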
Last, there is one major challenge in portrait relight-
ing that originates from the ill-posed nature of the intrin-
sic decomposition problem. That is to decouple albedo and
shading from an image. It is also difficult to address with
a learning framework due to the extreme scarcity of real-
istic labeled data and infinite possible lighting conditions
for a scene. In the context of portrait relighting, it means recovering the true skin tone of a portrait subject is very
challenging [12, 49]. Instead of trying to collect a balanced
large-scale light-stage [8] dataset to capture the continuous
and subtle variations in different skin tones, we propose an
alternative solution dubbed SkinFill . We draw inspiration
from the standard makeup routine and design SkinFill to al-
low users to specify skin tone in our relighting pipeline. We
use a tone map , a per-pixel skin tone representation, to con-
dition the albedo prediction to follow the exact skin tone as
desired. This also naturally enables additional user control
at inference time.
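A minimal sketch of the tone-map conditioning described above is given below: a user-specified skin tone is broadcast over the skin region to form a per-pixel tone map and concatenated with the image before albedo prediction. The shapes, the concatenation-based conditioning, and the stand-in network are our assumptions.

```python
# Minimal sketch of conditioning albedo prediction on a per-pixel tone map, in
# the spirit of SkinFill. Wiring and shapes are illustrative assumptions.
import torch
import torch.nn as nn

albedo_net = nn.Sequential(              # stand-in for the delighting module
    nn.Conv2d(3 + 3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

image = torch.rand(1, 3, 256, 256)
skin_mask = (torch.rand(1, 1, 256, 256) > 0.5).float()            # toy skin region
target_tone = torch.tensor([0.72, 0.55, 0.45]).view(1, 3, 1, 1)   # user-chosen tone
tone_map = target_tone * skin_mask                                # per-pixel skin tone
albedo = albedo_net(torch.cat([image, tone_map], dim=1))          # tone-conditioned albedo
```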
Similar to prior work [33, 45, 62], we train our system
with a light stage [8] dataset. With our novel designs, Light-
Painter is a user-friendly system that enables creative and
interactive portrait lighting editing. We demonstrate the
simple and intuitive workflow of LightPainter through a
thorough user study. We show it generates relit portraits
with superior photo-realism and higher fidelity compared to
state-of-the-art methods. We summarize our contributions
as follows:
• We propose LightPainter, a novel scribble-based por-
trait relighting system that offers flexible user control,
allowing users to easily design portrait lighting effects.
• We introduce a novel scribble simulation algorithm
that can automatically generate realistic scribbles for
training. Combining it with a carefully designed neural
relighting module, our system can robustly generalize
to real user input.
• We introduce SkinFill to allow users to specify skin
tone in the relighting pipeline, which allows data-
efficient training and offers additional control to ad-
dress potential skin tone data bias.
|
Oksuz_Towards_Building_Self-Aware_Object_Detectors_via_Reliable_Uncertainty_Quantification_and_CVPR_2023
|
Abstract
The current approach for testing the robustness of object
detectors suffers from serious deficiencies such as improper
methods of performing out-of-distribution detection and us-
ing calibration metrics which do not consider both locali-
sation and classification quality. In this work, we address
these issues, and introduce the Self-Aware Object Detection
(SAOD) task, a unified testing framework which respects
and adheres to the challenges that object detectors face in
safety-critical environments such as autonomous driving.
Specifically, the SAOD task requires an object detector to: be robust to domain shift; obtain reliable uncertainty es-
timates for the entire scene; and provide calibrated con-
fidence scores for the detections. We extensively use our
framework, which introduces novel metrics and large scale
test datasets, to test numerous object detectors in two dif-
ferent use-cases, allowing us to highlight critical insights
into their robustness performance. Finally, we introduce
a simple baseline for the SAOD task, enabling researchers
to benchmark future proposed methods and move towards
robust object detectors which are fit for purpose. Code is
available at: https://github.com/fiveai/saod .
|
1. Introduction
The safe and reliable usage of an object detector in safety-critical systems such as autonomous driving [10, 65, 73] depends not only on its accuracy, but also critically on other robustness aspects which are often only considered as an addition or not at all. These aspects represent its ability to be ro-
bust to domain shift, obtain well-calibrated predictions and
yield reliable uncertainty estimates at the image-level, en-
abling it to flag the scene for human intervention instead of
making unreliable predictions. Consequently, the develop-
ment of object detectors for safety critical systems requires
a thorough evaluation framework which also accounts for
these robustness aspects, a feature lacking in current evalu-
ation methodologies.
Figure 1. (Top) The vanilla object detection task vs. (Bottom) the self-aware object detection (SAOD) task. Different from the vanilla approach, the SAOD task requires the detector to: predict ˆa ∈ {0, 1} representing whether the image X is accepted or not for further processing; yield accurate and calibrated detections; and be robust to domain shift. Accordingly, for SAOD we evaluate on ID, domain-shift and OOD data using our novel DAQ measure. Here, {ˆci, ˆbi, ˆpi}N are the predicted set of detections.
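One simple way to realise the image-level decision ˆa mentioned in the caption is to pool detection-level confidences into a single image-level score and threshold it. The pooling rule (mean of the top-k scores) and the threshold below are illustrative placeholders, not the paper's procedure.

```python
# Illustrative image-level accept/reject decision built from detection-level
# confidences of one image; the pooling rule and threshold are placeholders.
import numpy as np

def image_level_accept(det_scores, k=3, threshold=0.35):
    """det_scores: confidences of the detections in one image -> a_hat in {0, 1}."""
    if len(det_scores) == 0:
        return 0                                   # nothing detected: flag the scene
    top_k = np.sort(np.asarray(det_scores))[::-1][:k]
    image_conf = float(top_k.mean())               # image-level confidence estimate
    return int(image_conf >= threshold)            # 1 = accept, 0 = human intervention

print(image_level_accept([0.9, 0.8, 0.2]))         # likely accepted
print(image_level_accept([0.1, 0.05]))             # likely rejected
```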
Whilst object detectors are able to obtain uncertainty at
the detection level, they do not naturally produce uncertainty at the image level. This has led researchers to often evaluate uncertainty by performing out-of-distribution (OOD) detection at the detection level [13, 21], which cannot be clearly defined, thereby creating confusion between OOD and in-distribution (ID) data. This leads to
an improper evaluation, as defining OOD at the detection
level is non-trivial due to the presence of known-unknowns
or background objects [12]. Furthermore, the test sets for
OOD in such evaluations are small, typically containing
around 1-2K images [13, 21].
Moreover, as there is no direct access to the labels of the
test sets and the evaluation servers only report accuracy [18,
43], researchers have no choice but to use small validation
sets as testing sets to report robustness performance, such
as calibration and performance under domain shift. As a
result, either the training set [11, 59]; the validation set [13,
21]; or a subset of the validation set [36] is employed for
cross-validation, leading to an unideal usage of the dataset
splits and a poor choice of the hyper-parameters.
Finally, prior work typically focuses on only one of: cal-
ibration [35, 36]; OOD detection [13]; domain-shift [45,
68, 71, 72]; or leveraging uncertainty to improve accuracy
[5, 9, 20, 23, 70], with no prior work taking a holistic ap-
proach by evaluating all of them. Specifically for cali
|
Ma_Dynamic_Aggregated_Network_for_Gait_Recognition_CVPR_2023
|
Abstract
Gait recognition is beneficial for a variety of applica-
tions, including video surveillance, crime scene investi-
gation, and social security, to mention a few. However,
gait recognition often suffers from multiple exterior fac-
tors in real scenes, such as carrying conditions, wear-
ing overcoats, and diverse viewing angles. Recently, var-
ious deep learning-based gait recognition methods have
achieved promising results, but they tend to extract one of
the salient features using fixed-weighted convolutional net-
works, do not adequately consider the relationships within gait fea-
tures in key regions, and ignore the aggregation of complete
motion patterns. In this paper, we propose a new perspec-
tive that actual gait features include global motion patterns
in multiple key regions, and each global motion pattern is
composed of a series of local motion patterns. To this end,
we propose a Dynamic Aggregation Network (DANet) to
learn more discriminative gait features. Specifically, we
create a dynamic attention mechanism between the features
of neighboring pixels that not only adaptively focuses on
key regions but also generates more expressive local motion
patterns. In addition, we develop a self-attention mecha-
nism to select representative local motion patterns and fur-
ther learn robust global motion patterns. Extensive exper-
iments on three popular public gait datasets, i.e., CASIA-
B, OUMVLP , and Gait3D, demonstrate that the proposed
method can provide substantial improvements over the cur-
rent state-of-the-art methods.1
|
1. Introduction
Gait recognition aims to retrieve the same identity at a
long distance, and has been widely used throughout social
security [28], video surveillance [4, 15, 49], crime investi-
gation [25], and so on.
Figure 1. The features of each pixel are mapped as a vector with both magnitude and phase components. The magnitude represents contextual information, while the phase direction is used to construct dynamic attention models for the key regions. The convolution operation is denoted by “∗”, and the blue circles in the diagrams represent the key regions learned by the dynamic attention.
1 Code available at https://github.com/XKMar/FastGait
Compared with action recognition [17, 53, 54] and person re-identification [2, 55, 60, 61], the
gait recognition task is one of the most challenging fine-
grained label classification problems. On the one hand, sil-
houette data is a binary image of a person suffering from
the limitations of the segmentation algorithm [26, 62, 63],
with occasional holes and broken edges. On the other hand,
gait recognition is also impacted by various exterior factors
in real scenes, such as carrying conditions, wearing coats,
and diverse viewing angles. Different angles and clothing
conditions will greatly change the silhouette appearance of
the same person, resulting in the intra-class variance being
much greater than inter-class. We ask: How to learn more
robust features adaptively for each person under the influ-
ence of various external factors ? We attempt to answer this
question from the following perspectives:
(i) Local Motion Patterns. Gait, or the act of walking,
is essentially the coordinated movement of body parts. In a
gait sequence, we observe that each part has a unique rep-
resentative motion pattern, and each motion pattern is com-
posed of a set of localized sub-movements.
Figure 2. Comparison of the actual motion pattern with the Max-based method, the Mean-based method, and the Global Motion Pattern Aggregator (GMPA) module. The black curve represents a single periodic action that is affected by disturbances, whereas the green curve represents a synthesized periodic action consisting of distinct local motion patterns selected by the GMPA.
Therefore, it
is critical to accurately locate the discriminative parts and
obtain representative local motion patterns under the inter-
ference of various external factors. However, previous gait-
based approaches [7, 8, 13, 14, 20, 24, 33] simply use con-
volutional networks with non-linear activation to model the
dynamic movements. Once the network has been trained,
the parameters and the non-linear function can only focus
on the fixed patterns. To this end, we propose to encode the
features of each pixel as a vector with magnitude and phase,
as shown in Fig. 1, which allows learning the dynamic at-
tention mapping functions among the neighboring pixel of
focusing. By modeling the relationship, the network can
further focus on local motion patterns in key regions.
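Our reading of this mechanism, not the released FastGait code, is sketched below: each pixel feature is split into a magnitude and a phase half, and attention over a local neighborhood is driven by phase agreement while magnitudes carry the content. The channel split, neighborhood size, and weighting rule are assumptions.

```python
# Sketch of phase-driven local attention over neighboring pixels (our reading).
import torch
import torch.nn.functional as F

def local_phase_attention(feat, kernel_size=3):
    """feat: (B, 2*C, H, W) with channels split into magnitude and phase halves."""
    B, C2, H, W = feat.shape
    C = C2 // 2
    mag, phase = feat[:, :C], feat[:, C:]
    k = kernel_size
    # Gather each pixel's k*k neighborhood for both components.
    mag_n = F.unfold(mag, k, padding=k // 2).view(B, C, k * k, H * W)
    pha_n = F.unfold(phase, k, padding=k // 2).view(B, C, k * k, H * W)
    center = phase.view(B, C, 1, H * W)
    # Attention weight = agreement between the center phase and neighbor phases.
    w = torch.softmax((center * pha_n).sum(1, keepdim=True), dim=2)   # (B, 1, k*k, HW)
    out = (w * mag_n).sum(2).view(B, C, H, W)      # phase-weighted magnitude mixing
    return out

x = torch.randn(2, 64, 32, 22)                      # 32-dim magnitude + 32-dim phase
y = local_phase_attention(x)                        # (2, 32, 32, 22)
```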
(ii) Global Motion Patterns. Gait is a periodic move-
ment. We assume that the actual motion pattern is a one-
dimensional signal, as shown in Fig. 2, whereby the local
motion patterns are the points on the signal. Therefore, it is
essential to use a series of local motion patterns to further
fit the actual motion patterns for obtaining discriminative
gait features. However, recent gait-based methods [8,20,33]
only use Max- or Mean-based methods to extract one of the
significant local features. These methods are susceptible to
disturbances and cannot fit the actual motion patterns. Ac-
cording to the Nyquist-Shannon sampling theorem [37, 39]
in signal processing theory, when a continuous signal is
sampled at a frequency greater than twice its highest frequency, the information of the original signal is retained
intact. In this regard, we propose to construct a global at-
tention model and use it to dynamically select a preset num-
ber of distinguishable local motion patterns (green arrows),
while excluding the effect of noise (red arrows). By se-
lecting sufficient discriminative local motion patterns, the
network can further obtain robust global motion patterns.
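The aggregation idea can be illustrated as below: instead of Max or Mean pooling over time, every local motion pattern is scored by a small attention head and only a preset number of the most discriminative ones are kept before pooling. The scoring head and hyper-parameters are placeholders for the paper's GMPA, not its actual design.

```python
# Sketch of selecting a preset number of local motion patterns before pooling.
import torch
import torch.nn as nn

class TopKMotionAggregator(nn.Module):
    def __init__(self, dim=128, num_selected=8):
        super().__init__()
        self.score = nn.Linear(dim, 1)             # attention-style relevance score
        self.num_selected = num_selected

    def forward(self, local_patterns):             # (B, T, dim) local motion patterns
        scores = self.score(local_patterns).squeeze(-1)        # (B, T)
        idx = scores.topk(self.num_selected, dim=1).indices    # keep the best T' frames
        chosen = torch.gather(local_patterns, 1,
                              idx.unsqueeze(-1).expand(-1, -1, local_patterns.size(-1)))
        weights = torch.softmax(torch.gather(scores, 1, idx), dim=1)
        return (weights.unsqueeze(-1) * chosen).sum(1)         # (B, dim) global pattern

seq = torch.randn(4, 30, 128)                       # 30 local motion patterns per clip
global_pattern = TopKMotionAggregator()(seq)        # (4, 128)
```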
Driven by this analysis, we propose a novel and ef-
fective Dynamic Aggregated Network (DANet) for gait
recognition. As shown in Fig. 3, DANet consists of twowell-designed components, i.e., Local Conv-Mixing Block
(LCMB) and Global Motion Patterns Aggregator (GMPA).
Firstly , we encode the features of each pixel into the com-
plex domain including magnitude and phase, where the
magnitude term represents the contextual information and
the phase term is used to establish the relationship between
each vector. The local motion pattern is generated by ag-
gregating the magnitude and phase of the vectors in the
neighboring regions of focus. Secondly , we use the self-
attentive mechanism in the GMPA model to dynamically se-
lect sufficient discriminative local motion patterns and fur-
ther learn to fit the actual gait patterns. Finally , with our
proposed modules, we obtain the most representative sta-
ble gait features for each person and outperform the state-
of-the-art (SOTA) methods, especially under the most chal-
lenging condition of cross-dressing.
Our main contributions can be summarized as follows:
• We propose a novel LCMB to extract the represen-
tative local motion patterns, which can dynamically
model the relationships among the features of neigh-
boring pixels and then accurately locate key regions.
• We design an effective GMPA to select the discrimi-
native local motion patterns and then aggregate them
to obtain a robust global representation. To the best
of our knowledge, it is the first attempt to explore the
potential of self-attention model in this task.
• Experimental results are illustrated to demonstrate the
effectiveness of the proposed method, outperforming
the SOTA method on CASIA-B [56], OUMVLP [41]
and Gait3D [59] datasets. In addition, many rigorous
ablation experiments on CASIA-B [56] further vali-
dated the effectiveness of each component in DANet.
|
Maheshwari_Transfer4D_A_Framework_for_Frugal_Motion_Capture_and_Deformation_Transfer_CVPR_2023
|
Abstract
Animating a virtual character based on a real perfor-
mance of an actor is a challenging task that currently re-
quires expensive motion capture setups and additional effort
by expert animators, rendering it accessible only to large pro-
duction houses. The goal of our work is to democratize this
task by developing a frugal alternative termed “Transfer4D”
that uses only commodity depth sensors and further reduces
animators’ effort by automating the rigging and animation
transfer process. Our approach can transfer motion from an
incomplete, single-view depth video to a semantically similar
target mesh, unlike prior works that make a stricter assump-
tion on the source to be noise-free and watertight. To handle
sparse, incomplete videos from depth video inputs and varia-
tions between source and target objects, we propose to use
skeletons as an intermediary representation between motion
capture and transfer. We propose a novel unsupervised skele-
ton extraction pipeline from a single-view depth sequence
that incorporates additional geometric information, result-
ing in superior performance in motion reconstruction and
transfer in comparison to the contemporary methods and
making our approach generic. We use non-rigid reconstruc-
tion to track motion from the depth sequence, and then we rig
the source object using skinning decomposition. Finally, therig is embedded into the target object for motion retargeting.
|
1. Introduction
The growing demand for immersive animated content
originates primarily from its widespread applications in en-
tertainment, metaverse, education, and augmented/virtual
reality to name a few. Production of animation content re-
quires modeling the geometry of the characters of interest,
rigging a skeleton to determine each character’s degrees
of freedom, and then animating their motion. The latter
step is often performed by transferring the motion captured
from a real performer, whether a human actor, an animal,
or even a puppet. The intermediate rigging step is tedious
and labor-intensive, while motion capture requires an expen-
sive multi-camera system; both factors hinder the large-scale
adoption of 3D content creation. We aim to democratize
content creation by (i) replacing expensive motion capture
setups with a single monocular depth sensor, and (ii) by au-
tomatically generating character rigs to transfer motion to
non-isometric objects.
By the term animation transfer we refer to the process of
transferring the motion captured from a real performer to an
articulated character modelled as a polygon mesh. Specif-
ically, we work in a setting where the source is a point
cloud obtained from a single-view commodity sensor such
as Kinect/Realsense, and the target is a user-defined poly-
gon mesh without a predefined skeletal rig. These choices
make our approach more practical for deployment in future.
In this setting, performing automatic animation transfer re-
quires reconstructing the source motion from the point cloud,
and finding a correspondence between points on the source
and target shapes, before the source performance can be
remapped to the target character. Furthermore, many appli-
cations often require animation content not just of humans
but of other kinds of characters and objects such as animals,
birds, and articulated mechanical objects, precluding the use
of predefined human body models. As we discuss below,
existing techniques have limited applicability under all these
constraints.
Monocular motion systems generally require a template
before registering the incoming video. Templates are cre-
ated by 360◦static registration of the source object [19, 52]
or by creating a parametric model from a large corpus of
meshes [2, 23, 28, 35, 39, 65]. Unfortunately, these meth-
ods do not generalize to unknown objects. As a parallel
approach, methods that rely on neural implicit representa-
tions [8, 37, 51, 59 –61] are comparatively high on computa-
tion cost for working on a single video making them unsuit-
able for frugal animation transfer. Dynamic reconstruction
based methods [26, 33, 42, 43, 50] provide promising results,
but tracking error could cause structural artifacts thereby
resulting in issues of shape correspondence or skeleton ex-
traction.
Without using a predefined template, establishing cor-
respondence between parts of the partial source and the
complete target objects becomes challenging. Automatic cor-
respondence methods like [4, 11, 17, 45, 53, 62] do not work
in our setting where input is sparse, noisy and incomplete. A
family of approaches based on surface level correspondence
such as the functional maps [17,21,27,31,40,47] incorporate
geometry; however, these approaches are data intensive and
do not account for the underlying shape topologies explicitly
which are critical to matching generic shapes.
Recently, a deep learning approach, Morig [57] was pro-
posed to automatically rig character meshes and capture
the motion of performing characters from single-view point
cloud streams. However, Morig requires supervision at all
stages. Furthermore, Liao et al. [24] introduced a skeleton-
free approach to transfer poses between 3D characters. The
target object is restricted to bipedal characters in T-pose, and
the mocap data is limited to human motion.
To address the above shortcomings, we propose Trans-
fer4D , a frugal motion capture and a retargeting framework.
We extract the skeleton motion of the source object from
a monocular depth video in an unsupervised fashion and
retarget the motion to a similar target virtual model. Instead of relying on a predefined template, we directly extract the
skeleton from the incoming video. Furthermore, by directly
embedding the skeleton into the target object, we bypass
establishing dense (surface) correspondence between the
source and target that can prove to be computationally ex-
pensive and noisy.
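The rigging step mentioned in the abstract relies on skinning decomposition; a minimal linear blend skinning (LBS) residual that such a decomposition would minimize is sketched below. The toy data, shapes, and the absence of the paper's actual solver and constraints are all deliberate simplifications.

```python
# Linear blend skinning (LBS) reconstruction residual: given per-frame bone
# transforms and per-vertex weights, rebuild the tracked vertices and measure
# the error. Toy data only; the paper's solver is not reproduced.
import numpy as np

def lbs_reconstruct(rest_verts, weights, rotations, translations):
    """rest_verts: (V, 3); weights: (V, B); rotations: (B, 3, 3); translations: (B, 3)."""
    # Per-bone transformed copies of every vertex: (B, V, 3)
    per_bone = np.einsum('bij,vj->bvi', rotations, rest_verts) + translations[:, None, :]
    # Blend with skinning weights: (V, 3)
    return np.einsum('vb,bvi->vi', weights, per_bone)

V, B = 500, 12
rest = np.random.randn(V, 3)
w = np.random.rand(V, B); w /= w.sum(1, keepdims=True)           # convex skinning weights
R = np.stack([np.eye(3)] * B); t = np.random.randn(B, 3) * 0.01  # near-identity frame
tracked = rest + 0.01 * np.random.randn(V, 3)                    # from non-rigid tracking
residual = np.linalg.norm(lbs_reconstruct(rest, w, R, t) - tracked, axis=1).mean()
```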
Key Contributions: (1) To the best of our knowledge, Trans-
fer4D is the first approach for intra-category motion transfer
technique from a monocular depth video to a target virtual
object (Fig. 1). This research aids efforts towards the democratization of motion-capture systems and reduces animators’ drudgery through automatic rigging and motion transfer. (2) To transfer motion from a single-view incomplete mesh,
we propose a novel unsupervised skeleton extraction algo-
rithm. We construct the skeleton based on rigidity constraints
from the motion and structural cues from the curve skeleton.
See Sec. 4 for details.
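The rigidity cue mentioned in contribution (2) can be pictured as follows: two tracked points belong to the same rigid part if their mutual distance stays nearly constant over time, and clustering the resulting affinity yields candidate rigid parts. The clustering choice and parameters below are ours, and the curve-skeleton cues the paper also uses are omitted.

```python
# Hypothetical sketch of a motion-rigidity cue for unsupervised skeleton parts.
import numpy as np
from sklearn.cluster import SpectralClustering

def rigid_part_labels(traj, num_parts=4, sigma=0.05):
    """traj: (T, N, 3) tracked point trajectories -> (N,) rigid-part labels."""
    # Pairwise distances at every frame: (T, N, N)
    d = np.linalg.norm(traj[:, :, None, :] - traj[:, None, :, :], axis=-1)
    deviation = d.std(axis=0)                      # rigid pairs keep distances constant
    affinity = np.exp(-(deviation / sigma) ** 2)
    return SpectralClustering(n_clusters=num_parts,
                              affinity='precomputed').fit_predict(affinity)

trajectories = np.random.randn(20, 200, 3) * 0.1   # stand-in for tracked depth points
labels = rigid_part_labels(trajectories)           # one rigid-part label per point
```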
|
Mo_Continuous_Intermediate_Token_Learning_With_Implicit_Motion_Manifold_for_Keyframe_CVPR_2023
|
Abstract
Deriving sophisticated 3D motions from sparse
keyframes is a particularly challenging problem, due to
continuity and exceptional skeletal precision. The action
features are often derivable accurately from the full series
of keyframes, and thus, leveraging the global context with
transformers has been a promising data-driven embedding
approach. However, existing methods often take interpolated intermediate frames, produced by basic interpolation of the keyframes, as inputs for continuity, which results in a trivial local minimum during training. In this paper,
we propose a novel framework to formulate latent motion
manifolds with keyframe-based constraints, from which the
continuous nature of intermediate token representations is
considered. Particularly, our proposed framework consists
of two stages for identifying a latent motion subspace,
i.e., a keyframe encoding stage and an intermediate token
generation stage, and a subsequent motion synthesis stage
to extrapolate and compose motion data from manifolds.
Through our extensive experiments conducted on both the
LaFAN1 and CMU Mocap datasets, our proposed method
demonstrates both superior interpolation accuracy and
high visual similarity to ground truth motions.
|
1. Introduction
Pose-to-pose keyframing is a fundamental principle of
character animation, and animation processes often rely
on key pose definitions to efficiently construct motions
[6, 11, 30]. In computer animation, keyframes are tempo-
rally connected via interpolation algorithms, which derive
intermediate pose attributes to produce smooth transitions
between key poses. However, human motion is often com-
plex and difficult to be effectively represented by sparse
keyframe sequences alone.
Figure 1. An example of motion interpolation by our method (first row), given the keyframes of a hopping motion (in blue), compared with the ground truth (second row).
Code available at: https://github.com/MiniEval/CITL
While this can be addressed
by producing denser sequences of key poses, this approach
is laborious for animators, thereby increasing the cost of
keyframed animation processes. Even with Motion Capture
(MoCap) workflows, artists must often resort to keyfram-
ing in order to clean artifacts, impose motion constraints,
or introduce motion features irreplicable by motion capture
performers.
Learning-based motion interpolation methods have re-
cently been proposed as an acceleration of the keyframed
animation process, by automatically deriving details within
keyframe transitions as shown in Figure 1. Various ma-
chine learning methods have been explored to enable more
realistic interpolation solutions from high quality MoCap
databases, e.g. by using recurrent networks [14, 15, 40] or
transformer-based approaches [10, 26, 31]. Guiding data-
driven interpolation with real motions is particularly attrac-
tive for keyframe animation workflows, as realistic motions
often require the greatest amount of keyframing, by virtue
of their subtle motion details and physical constraints.
Naturally, as a sequence in-painting problem, motion
interpolation can be formulated as a masked sequence-to-
sequence task, which the recent popular transformer ap-
proach is expected to learn effectively [4, 38, 42, 43]. How-
ever, sequential learning of masked continuous attributes
with transformers is largely impaired by the conventional
masked tokens for intermediate data. A token is defined
as an individual data element on the extendable axis of a
sequence, namely the temporal axis for motions. In cur-
rent sequence modelling formulations, a token is usually
represented by a one-hot vocabulary vector to specify in-
dividual words or masked elements, which poses a limita-
tion on continuous attributes. Since continuous attributes
can be assigned any real value, there exists no value by
which a masking token can be defined without correspond-
ing to an otherwise valid input. Previous approaches have
employed transformer decoder-level mask tokens and lin-
ear interpolation (LERP)-based tokens have been explored
to work around this issue [10, 16, 31]. However, these ap-
proaches have innate incompatibilities with the transformer
architecture. Singular mask token representations, regard-
less of their point of introduction, result in discontinuous
hidden representations, which are antithetical to the evalua-
tion of continuous motion data. On the other hand, the use
of LERP as a pre- or post-processing step necessarily intro-
duces an accurate starting estimate to the solution, which
transformer models are prone to becoming over-reliant on
[24, 45]. To fully address these limitations, we propose
a novel transformer-based framework that learns to model
keyframe sequences into latent motion manifold represen-
tations for intermediate tokens, which reflects the smooth
and continuous nature of human motion.
As illustrated in Figure 2, our proposed framework incor-
porates three stages with transformers to convert a keyframe
sequence into a complete motion sequence: Stage-I is a
keyframe encoding stage to formulate the overall motion
patterns from the keyframe sequence into keyframe context
tokens as a guidance for further modelling; Stage-II is an in-
termediate token generation stage , where temporal indices
are mapped into intermediate token representations with the
keyframe context tokens, which serve as an implicit latent
motion manifold constraint; and Stage-III, a motion synthe-
sis stage , takes the obtained intermediate tokens by inject-
ing them within the keyframe token sequence, and interpo-
lating them to derive a refined motion sequence estimation.
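A schematic of the three cooperating stages is given below, with each stage reduced to a placeholder transformer or MLP. The dimensions, positional encodings, pose parameterization, and losses of the actual model are not reproduced; everything beyond the three-stage structure is assumed.

```python
# Schematic of the three-stage design: keyframe encoding -> intermediate token
# generation from temporal indices + keyframe context -> motion synthesis.
import torch
import torch.nn as nn

dim = 128
encoder   = nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, 4, batch_first=True), 2)
token_gen = nn.Sequential(nn.Linear(dim + 1, dim), nn.ReLU(), nn.Linear(dim, dim))
synth     = nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, 4, batch_first=True), 2)
to_pose   = nn.Linear(dim, 66)                     # e.g. 22 joints x 3 DoF (assumed)

keyframes = torch.randn(1, 5, dim)                 # Stage I input: embedded key poses
key_ctx = encoder(keyframes)                       # keyframe context tokens
t = torch.linspace(0, 1, 30).view(1, 30, 1)        # temporal indices of all frames
ctx = key_ctx.mean(dim=1, keepdim=True).expand(-1, 30, -1)
inter_tokens = token_gen(torch.cat([ctx, t], -1))  # Stage II: manifold-constrained tokens
full_seq = torch.cat([key_ctx, inter_tokens], 1)   # inject among the keyframe tokens
motion = to_pose(synth(full_seq)[:, key_ctx.size(1):])   # Stage III: refined sequence
```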
With this framework, our transformer-based approach
exhibits two key advantages over existing approaches that
enable its high-quality motion interpolation: a) Manifold
learning allows our framework to establish temporal con-
tinuity in its latent representation space, and b) The la-
tent motion manifold constrains our transformer model
to concentrate its attention exclusively towards motion
keyframes, as opposed to intermediate tokens derived from
non-keyframe poses, such as those derived from LERP,
thereby forcing a necessary alignment between the known
and unknown tokens adaptively.
In addition, we identify an adverse link between con-
tinuous features and normalisation methods with per-token
re-centering. Specifically, layer normalisation (LayerNorm)
[1], which is commonly used in transformer architectures,
constrains the biases of token features based on their in-
dividual distributions. Though this is well-known to beeffective with linguistic models [25, 42], continuous data
inherently contain biases that should be leveraged at se-
quence level. Therefore, we introduce a sequence-level
re-centering (Seq-RC) technique, where positional pose at-
tributes of keyframes are recentred based on their distribu-
tion throughout a motion sequence, and root-mean-square
normalisation (RMSNorm) [47] layers are then employed
to perform magnitude-only normalisation. Though RM-
SNorm was initially proposed as only a speedup to Lay-
erNorm, our observations demonstrate that Seq-RC leads to
superior performance in terms of accuracy and visual simi-
larity to MoCap sequences.
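The Seq-RC idea can be sketched directly: positional pose attributes are re-centered with statistics of the whole motion sequence rather than per token, and the subsequent RMSNorm [47] only rescales magnitudes, so sequence-level biases survive normalisation. The shapes below are illustrative.

```python
# Sketch of sequence-level re-centering (Seq-RC) followed by RMSNorm.
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-8):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))
        self.eps = eps
    def forward(self, x):                           # magnitude-only normalisation
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).sqrt()
        return self.scale * x / rms

positions = torch.randn(2, 60, 66)                  # (batch, frames, positional attrs)
seq_mean = positions.mean(dim=1, keepdim=True)      # Seq-RC: one centre per sequence
recentred = positions - seq_mean
normed = RMSNorm(66)(recentred)                     # no per-token re-centering applied
```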
In summary, our paper’s key contributions are threefold:
1. We propose a novel transformer-based architecture
consisting of three cooperative stages. It constrains the
evaluation of unknown intermediate representations of
continuous attributes to the guidance of keyframe con-
text tokens in a learned latent manifold.
2. We devise sequence-level re-centering (Seq-RC) nor-
malisation to effectively operate with real scalar at-
tributes with minimal accuracy loss.
3. Extensive comparisons and ablation results obtained
on LaFAN1 and CMU Mocap strongly demonstrate the
superiority of our method over the state-of-the-art.
|
Li_Multi-View_Inverse_Rendering_for_Large-Scale_Real-World_Indoor_Scenes_CVPR_2023
|
Abstract
We present an efficient multi-view inverse rendering
method for large-scale real-world indoor scenes that re-
constructs global illumination and physically-reasonable
SVBRDFs. Unlike previous representations, where the
global illumination of large scenes is simplified as multiple
environment maps, we propose a compact representation
called Texture-based Lighting (TBL). It consists of a 3D mesh
and HDR textures, and efficiently models direct and infinite-
bounce indirect lighting of the entire large scene. Based on
TBL, we further propose a hybrid lighting representation
with precomputed irradiance, which significantly improves
the efficiency and alleviates the rendering noise in the ma-
terial optimization. To physically disentangle the ambiguity
between materials, we propose a three-stage material opti-
mization strategy based on the priors of semantic segmen-
tation and room segmentation. Extensive experiments show
that the proposed method outperforms the state-of-the-art
quantitatively and qualitatively, and enables physically-
reasonable mixed-reality applications such as material
editing, editable novel view synthesis and relighting. The
project page is at https://lzleejean.github.io/TexIR.
|
1. Introduction
Inverse rendering aims to reconstruct geometry, material
and illumination of an object or a scene from images. These
properties are essential to downstream applications such as
scene editing, editable novel view synthesis and relighting.
However, decomposing such properties from the images
is extremely ill-posed, because different configurations of
such properties often lead to similar appearance. With re-
cent advances in differentiable rendering and implicit neural
representation, several approaches have achieved significant
success on small-scale object-centric scenes with explicit or
implicit priors [7,32,34,43,50,51,55,57,58]. However, in-
verse rendering of large-scale indoor scenes has not been
well solved.
Figure 1. Given a set of posed sparse-view images for a large-
scale scene, we reconstruct global illumination and SVBRDFs.
The recovered properties are able to produce convincing results
for several mixed-reality applications such as material editing, ed-
itable novel view synthesis and relighting. Note that we change
roughness of all walls, and albedo of all floors. The detailed spec-
ular reflectance shows that our method successfully decomposes
physically-reasonable SVBRDFs and lighting. Please refer to sup-
plementary videos for more animations.
There are two main challenges for large-scale indoor
scenes. 1) Modelling the physically-correct global illu-
mination. There are far more complex lighting effects,
e.g., inter-reflection and cast shadows, in large-scale in-
door scenes than object-centric scenes due to complex oc-
clusions, materials and local light sources. Although the
widely-used image-based lighting (IBL) is able to effi-
ciently model direct and indirect illumination, it only rep-
resents the lighting of a certain position [13, 17, 18, 42].
The spatial consistency of per-pixel or per-voxel IBL rep-
resentations [26, 29, 47, 59] is difficult to ensure. Moreover,
such incompact representations require large memory. Pa-
rameterized lights [16, 27] such as point light, area light
and directional light are naturally globally-consistent, but
modeling the expensive global light transport will be in-
evitable [1,37,58]. Thus, simple lighting representations ap-
plied in previous works are unsuitable in large-scale scenes.
2)Disentangling the ambiguity between materials. Differ-
ent configurations of materials often lead to similar appear-
ance, and to add insult to injury, there are an abundance of
objects with complex and diverse materials in large-scale
scenes. In object-centric scenes, dense views distributed
on the hemisphere are helpful for alleviating the ambigu-
ity [14, 22, 32, 35, 55, 56]. However, only sparse views are
available in large-scale scenes, which more easily lead to
ambiguous predictions [37, 51].
In this work, we present TexIR, an efficient inverse ren-
dering method for large-scale indoor scenes. The aforementioned challenges are tackled individually as follows.
1) We model the infinite-bounce global illumination of the
entire scene with a novel compact lighting representation,
called TBL. The TBL is able to efficiently represent the
infinite-bounce global illumination of any position within
the large scene. Such a compact and explicit representation
provides more physically-accurate and spatially-varying il-
lumination to guide the material estimation. Directly op-
timizing materials with TBL leads to expensive compu-
tation costs caused by high samples of the monte carlo
sampling. Therefore, we precompute the irradiance based
on our TBL, which significantly accelerates the expensive
computation in the material optimization process. 2) To
ameliorate the ambiguity between materials, we introduce a
segmentation-based three-stage material optimization strat-
egy. Specifically, we optimize a coarse albedo based on
a Lambertian assumption in the first stage. In the second stage, we integrate semantic priors to guide the propagation of physically-correct roughness across regions with the same semantics. In the last stage, we fine-tune both albedo and
roughness based on the priors of semantic segmentation and
room segmentation. By leveraging such priors, physically-
reasonable albedo and roughness are disentangled globally.
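Since the irradiance pre-computation above is what makes TBL-based material optimization tractable, the following is a minimal Python sketch of cosine-weighted Monte Carlo irradiance estimation at a set of surface points. Here query_tbl_radiance is a hypothetical placeholder for a lighting query against the proposed TBL, not the authors' actual interface, and the sample count is an assumption.

import numpy as np

def cosine_sample_hemisphere(n, rng):
    # Cosine-weighted directions in a local frame whose z-axis is the surface normal.
    u1, u2 = rng.random(n), rng.random(n)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    return np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)], axis=-1)

def precompute_irradiance(points, normals, query_tbl_radiance, n_samples=256, seed=0):
    # Monte Carlo estimate E = pi * mean(L_i) under cosine-weighted hemisphere sampling.
    rng = np.random.default_rng(seed)
    irradiance = np.zeros((len(points), 3))
    for i, (p, n) in enumerate(zip(points, normals)):
        local_dirs = cosine_sample_hemisphere(n_samples, rng)
        # Build an orthonormal basis (t, b, n) and rotate the samples into world space.
        helper = np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
        t = np.cross(n, helper); t /= np.linalg.norm(t)
        b = np.cross(n, t)
        world_dirs = local_dirs @ np.stack([t, b, n])
        radiance = query_tbl_radiance(p, world_dirs)   # (n_samples, 3) incoming radiance
        irradiance[i] = np.pi * radiance.mean(axis=0)
    return irradiance

In such a scheme the precomputed table would be looked up during material optimization instead of re-tracing the lighting representation at every iteration, which is the efficiency gain the hybrid representation above aims at.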
To summarize, the main contributions of our method are
as follows:
1. A compact lighting representation for large-scale
scenes, where the infinite-bounce global illumination
of the entire large scene can be handled efficiently.
2. A segmentation-based material optimization strategy
to globally and physically disentangle the ambiguity
between albedo and roughness of the entire scene.
3. A hybrid lighting representation based on the proposed
TBL and precomputed irradiance to improve the effi-
ciency in the material optimization process.
|
Li_SteerNeRF_Accelerating_NeRF_Rendering_via_Smooth_Viewpoint_Trajectory_CVPR_2023
|
Abstract
Neural Radiance Fields (NeRF) have demonstrated su-
perior novel view synthesis performance but are slow at ren-
dering. To speed up the volume rendering process, many ac-
celeration methods have been proposed at the cost of large
memory consumption. To push the frontier of the efficiency-
memory trade-off, we explore a new perspective to accel-
erate NeRF rendering, leveraging a key fact that the view-
point change is usually smooth and continuous in interac-
tive viewpoint control. This allows us to leverage the in-
formation of preceding viewpoints to reduce the number of
rendered pixels as well as the number of sampled points
along the ray of the remaining pixels. In our pipeline, a
low-resolution feature map is rendered first by volume ren-
dering, then a lightweight 2D neural renderer is applied
to generate the output image at target resolution leverag-
ing the features of preceding and current frames. We show
that the proposed method can achieve competitive render-
ing quality while reducing the rendering time with little
memory overhead, enabling 30FPS at 1080P image reso-
lution with a low memory footprint.
|
1. Introduction
Novel View Synthesis (NVS) is a long-standing problem
in computer vision and computer graphics with applications
in navigation [40], telepresence [60], and free-viewpoint
video [51]. Given a set of posed images, the goal is to ren-
der the scene from unseen viewpoints to facilitate viewpoint
control interactively.
Recently, Neural Radiance Fields (NeRF) have emerged
as a popular representation for NVS due to the capacity to
render high-quality images from novel viewpoints. NeRF
represents a scene as a continuous function, parameterized
by a multilayer perceptron (MLP), that maps a continuous
3D position and a viewing direction to a density and view-
dependent radiance [23]. A 2D image is then obtained via
volume rendering, i.e., accumulating colors along each ray.
Figure 1. Illustration . We exploit smooth viewpoint trajectory to
accelerate NeRF rendering, achieved by performing volume ren-
dering at a low resolution and recovering the target image guided
by multiple viewpoints. Our method enables fast rendering with a
low memory footprint.
However, NeRF is slow at rendering as it needs to query
the MLP millions of times to render a single image, pre-
venting NeRF from interactive view synthesis. Many recent
works have focused on improving the rendering speed of
NeRF, yet there is a trade-off between rendering speed and
memory cost. State-of-the-art acceleration approaches typi-
cally achieve fast rendering at the expense of large memory
consumption [13, 56], e.g., by pre-caching the intermedi-
ate output of the MLP, leading to hundreds of megabytes
to represent a single scene. While there are some attempts
to accelerate NeRF rendering with a low memory foot-
print [16,25], the performance has yet to reach cache-based
methods. In practice, it is desired to achieve faster rendering
at a lower memory cost.
To push the frontier of this trade-off, we propose to speed
up NeRF rendering from a new perspective, leveraging the
critical fact that the viewpoint trajectory is usually smooth
and continuous in interactive control. Unlike existing NeRF
acceleration methods that reduce the rendering time of each
viewpoint individually , we accelerate the rendering by ex-
ploiting the information overlap between multiple consecu-
tive viewpoints. Fig. 1 illustrates our SteerNeRF, a simple
yet effective framework leveraging the SmooTh viEwpoint
trajEctoRy to speed up NeRF rendering. Here, “steer” also
refers to a user smoothly controlling the movement of a
camera during interactive real-time rendering.
Exploiting the smooth view trajectory, we can acceler-
ate volume rendering by reducing the number of sample
points while maintaining image fidelity using efficient 2D
neural rendering. More specifically, our method comprises
a rendering buffer, neural feature fields, and a lightweight
2D neural renderer. We first render a low-resolution feature
map at a given viewpoint via volume rendering. The sam-
pling range along each ray is reduced by fetching a depth
map from the rendering buffer and projecting it to the cur-
rent view. This effectively reduces the volume rendering
computation as both the number of pixels and the number
of samples for the remaining pixels are reduced. Next, we
combine preceding and current feature maps to recover the
image at the target resolution using a 2D neural renderer,
i.e., a 2D convolutional neural network. The neural feature
fields and the 2D neural renderer are trained jointly end-to-
end. The combination of low-resolution volume rendering
and high-resolution neural rendering leads to fast render-
ing, yet maintains high fidelity and temporal consistency at
a low memory cost.
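As a rough, hedged illustration of the depth-guided range reduction described above (not the paper's implementation), the NumPy sketch below warps a depth map from the preceding view into the current view and converts it into per-pixel near/far sampling bounds; the nearest-pixel splatting and the fixed relative margin are assumptions made for brevity.

import numpy as np

def depth_guided_bounds(depth_prev, K, T_prev2cur, margin=0.05):
    # Warp a previous-view depth map into the current view (nearest-pixel splatting with a
    # z-buffer) and turn it into per-pixel [near, far] bounds for volume rendering.
    h, w = depth_prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T.astype(np.float64)
    pts_prev = (np.linalg.inv(K) @ pix) * depth_prev.reshape(1, -1)   # back-project to 3D
    pts_cur = T_prev2cur[:3, :3] @ pts_prev + T_prev2cur[:3, 3:4]      # previous -> current frame
    z = pts_cur[2]
    front = z > 1e-6
    proj = K @ pts_cur[:, front]
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    inb = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth_cur = np.full((h, w), np.inf)
    np.minimum.at(depth_cur, (v[inb], u[inb]), z[front][inb])
    known = np.isfinite(depth_cur)
    near = np.where(known, depth_cur * (1.0 - margin), 0.0)    # fall back to the full range
    far = np.where(known, depth_cur * (1.0 + margin), np.inf)  # where no depth was splatted
    return near, far

Rays whose bounds collapse to a narrow interval need only a handful of samples, which is where the reduction in volume-rendering cost comes from.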
We summarize our contributions as follows. i) We pro-
vide a new perspective on NeRF rendering acceleration
based on the assumption of smooth viewpoint trajectory.
Our method is orthogonal to existing NeRF rendering accel-
eration methods and can be combined with existing work to
achieve real-time rendering at a low memory footprint. ii)
To fully exploit information from preceding viewpoints, we
propose a simple framework that combines low-resolution
volume rendering and high-resolution 2D neural render-
ing. With end-to-end joint training, the proposed frame-
work maintains high image fidelity. iii) Our experiments
on synthetic and real-world datasets show that our method
achieves a rendering speed of nearly 100 FPS at an image
resolution of 800×800pixels and 30 FPS at 1920×1080
pixels. It is faster than other low-memory NeRF accel-
eration methods and narrows the speed gap between low-
memory and cache-based methods.
|
Nguyen_Micron-BERT_BERT-Based_Facial_Micro-Expression_Recognition_CVPR_2023
|
Abstract
Micro-expression recognition is one of the most chal-
lenging topics in affective computing. It aims to recog-
nize tiny facial movements difficult for humans to per-
ceive in a brief period, i.e., 0.25 to 0.5 seconds. Re-
cent advances in pre-training deep Bidirectional Trans-
formers (BERT) have significantly improved self-supervised
learning tasks in computer vision. However, the standard
BERT in vision problems is designed to learn only from
full images or videos, and the architecture cannot accu-
rately detect details of facial micro-expressions. This pa-
per presents Micron-BERT ( µ-BERT), a novel approach to
facial micro-expression recognition. The proposed method
can automatically capture these movements in an unsu-
pervised manner based on two key ideas. First, we em-
ploy Diagonal Micro-Attention (DMA) to detect tiny dif-
ferences between two frames. Second, we introduce a
new Patch of Interest (PoI) module to localize and high-
light micro-expression interest regions and simultaneously
reduce noisy backgrounds and distractions. By incorpo-
rating these components into an end-to-end deep network,
the proposed µ-BERT significantly outperforms all previ-
ous work in various micro-expression tasks. µ-BERT can
be trained on a large-scale unlabeled dataset, i.e., up to
8 million images, and achieves high accuracy on new un-
seen facial micro-expression datasets. Empirical experi-
ments show µ-BERT consistently outperforms state-of-the-
art performance on four micro-expression benchmarks, in-
cluding SAMM, CASME II, SMIC, and CASME3, by sig-
nificant margins. Code will be available at https://
github.com/uark-cviu/Micron-BERT
|
1. Introduction
Facial expressions are a complex mixture of conscious
reactions directed toward given stimuli. They involve ex-
periential, behavioral, and physiological elements. Be-
cause they are crucial to understanding human reactions,
this topic has been widely studied in various application do-
mains [5]. In general, facial expression problems can be
classified into two main categories, macro-expression, and
Figure 1. Given two frames from a high-speed video, the proposed
µ-BERT method can localize and highlight the regions of micro-
movements. Best viewed in color.
micro-expression. The main differences between the two
are facial expression intensities, and duration [2]. In partic-
ular, macro-expressions happen spontaneously, cover large
movement areas in a given face, e.g., mouth, eyes, cheeks,
and typically last from 0.5 to 4 seconds. Humans can
usually recognize these expressions. By contrast, micro-
expressions are involuntary occurrences, have low inten-
sity, and last between 5 milliseconds and half a second. In-
deed, micro-expressions are challenging to identify and are
mostly detectable only by experts. Micro-expression under-
standing is essential in numerous applications, primarily lie
detection, which is crucial in criminal analysis.
Micro-expression identification requires both semantics
and micro-movement analysis. Since they are difficult to
observe through human eyes, a high-speed camera, usu-
ally with 200 frames per second (FPS) [6, 15, 51], is typi-
cally used to capture the required video frames. Previous
work [11] tried to understand this micro information using
MagNet [29] to amplify small motions between two frames,
e.g., onset and apex frames. However, these methods still
have limitations in terms of accuracy and robustness. In
summary, the contributions of this work are four-fold:
• A novel Facial Micro-expression Recognition (MER)
via Pre-training of Deep Bidirectional Transformers
approach (Micron-BERT or µ-BERT) is presented to
tackle the problem in a self-supervised learning man-
ner. The proposed method aims to identify and localize
micro-movements in faces accurately.
• As detecting the tiny moment changes in faces is an essential input to the MER module, a new Diagonal Micro Attention (DMA) mechanism is proposed to precisely identify small movements in faces between two consecutive video frames (a loose illustrative sketch follows this list).
• A new Patch of Interest (POI) module is introduced
to efficiently spot facial regions containing the micro-
expressions. Unlike prior methods, it is trained
in an unsupervised manner without using any facial la-
bels, such as facial bounding boxes or landmarks.
• The proposed µ-BERT framework is designed in a
self-supervised learning manner and trained in an end-
to-end deep network. Indeed, it consistently achieves
State-of-the-Art (SOTA) results in various standard
micro-expression benchmarks, including CASME II
[50], CASME3 [14], SAMM [6] and SMIC [15]. It
achieves high recognition accuracy on new unseen
subjects of various gender, age, and ethnicity.
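The exact DMA formulation is given in the paper body; the hedged PyTorch sketch below, referenced from the DMA contribution above, only loosely illustrates the general idea of comparing patch tokens of two frames through attention and reading a per-patch change score from the diagonal. All names, and the use of (1 − diagonal attention) as a movement score, are illustrative assumptions rather than the authors' DMA module.

import torch

def diagonal_micro_attention(tokens_onset, tokens_apex):
    # tokens_*: (B, N, D) patch embeddings of two frames of the same face video.
    # Returns a per-patch change score in [0, 1]; higher suggests more micro-movement.
    d = tokens_onset.shape[-1]
    attn = torch.softmax(tokens_apex @ tokens_onset.transpose(1, 2) / d ** 0.5, dim=-1)  # (B, N, N)
    diag = torch.diagonal(attn, dim1=1, dim2=2)   # (B, N): attention of each patch to itself
    return 1.0 - diag                             # weak self-correspondence -> likely movement

# Toy usage: re-weight apex-frame tokens by their estimated movement.
B, N, D = 2, 196, 128
onset, apex = torch.randn(B, N, D), torch.randn(B, N, D)
weights = diagonal_micro_attention(onset, apex)   # (B, N)
highlighted = apex * weights.unsqueeze(-1)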
|
Nunes_Temporal_Consistent_3D_LiDAR_Representation_Learning_for_Semantic_Perception_in_CVPR_2023
|
Abstract
Semantic perception is a core building block in au-
tonomous driving, since it provides information about the
drivable space and location of other traffic participants.
For learning-based perception, often a large amount of di-
verse training data is necessary to achieve high perfor-
mance. Data labeling is usually a bottleneck for developing
such methods, especially for dense prediction tasks, e.g., se-
mantic segmentation or panoptic segmentation. For 3D Li-
DAR data, the annotation process demands even more effort
than for images. Especially in autonomous driving, point
clouds are sparse, and an object's appearance depends on its
distance from the sensor, making it harder to acquire large
amounts of labeled training data. This paper aims at taking
an alternative path proposing a self-supervised representa-
tion learning method for 3D LiDAR data. Our approach
exploits the vehicle motion to match objects across time
viewed in different scans. We then train a model to max-
imize the point-wise feature similarities from points of the
associated object in different scans, which enables to learn
a consistent representation across time. The experimental
results show that our approach performs better than previ-
ous state-of-the-art self-supervised representation learning
methods when fine-tuning to different downstream tasks. We
furthermore show that with only 10% of labeled data, a net-
work pre-trained with our approach can achieve better per-
formance than the same network trained from scratch with
all labels for semantic segmentation on SemanticKITTI.1
|
1. Introduction
Semantic perception is essential for safe interaction be-
tween autonomous vehicles and their surrounding. For
learning-based perception, a massive amount of training
data is usually required for training high-performance mod-
els. However, the data annotation is the bottleneck of col-
lecting such large training sets, especially for dense predic-
tion tasks, such as semantic segmentation [60, 80], instance
1Code: https://github.com/PRBonn/TARL
[Figure 1 plots: mIoU (%) and PQ (%) vs. percentage of labels, comparing TARL pre-training against training from scratch for semantic and panoptic segmentation.]
Figure 1. Our pre-training (TARL) can reduce the amount of nec-
essary labels during fine-tuning on SemanticKITTI [4, 22]. Our
method requires only 10% of labels to surpass the network trained
from scratch with the full dataset for semantic segmentation. For
panoptic segmentation, our method requires only half of the labels.
segmentation [46, 73], and panoptic segmentation [45, 79],
which require fine-grained labels. In the context of au-
tonomous driving, recent approaches exploit 3D LiDAR
data [20, 54, 55, 61], where the data annotation process is
more complex than on 2D RGB images due to the sparsity
of the point cloud and object appearance varying with its
distance to the sensor.
Recent self-supervised representation learning meth-
ods [10, 11, 13–15, 25, 27, 76] tackle the lack of annotated
data with a pre-training step requiring no labels. Those
methods use data augmentation to generate different views
from one data sample and train the network to learn an
embedding space able to have similar representations for
the generated augmented views. Other approaches [52, 63,
65, 69, 78] propose optimizing the pixel embedding space
to learn a dense representation suited to be fine-tuned to
more fine-grained tasks. For 3D point cloud data, recent
approaches [21, 42, 49, 56] focus on synthetic point clouds
of single objects to learn a robust representation for object
classification. Other approaches [31,36,50,67,74,75] target
real-world data representation, such as LiDAR or RGB-D
data, but fewer target autonomous driving scenarios.
In this paper, we propose a novel temporal association
representation learning (TARL) method for 3D LiDAR data
designed for autonomous driving data. We exploit the ve-
hicle motion to extract different views of the same objects
across time. Then, we compute point-wise features from
the objects in the point cloud and use a Transformer en-
coder [2] as a projection head to learn a temporal asso-
ciation from the object representation, embedding the ob-
jects dynamics. We conduct extensive experiments to com-
pare our approach with state-of-the-art methods and show
that our approach surpasses previous self-supervised pre-
training methods [50, 67, 75] when fine-tuning to different
downstream tasks and datasets [4, 8, 22, 64]. In summary,
our key contributions are:
• We propose a novel self-supervised pre-training for 3D LiDAR data able to learn a robust and temporally-consistent representation by clustering together points from the same object viewed at different points in time (a minimal sketch of such a temporal contrastive objective follows this list).
• We achieve better performance than previous state-of-
the-art methods on different downstream tasks, i.e., se-
mantic segmentation, panoptic segmentation, and ob-
ject detection.
• With our pre-training, we require only 10% of labels
to surpass the network trained from scratch for seman-
tic segmentation using the whole training set on Se-
manticKITTI (see Fig. 1).
• Our self-supervised pre-training produces representa-
tions more suited for transfer learning than supervised
pre-training, achieving better performance when fine-
tuning to a different dataset.
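As referenced in the first contribution above, the sketch below gives a minimal, hedged approximation of a temporal contrastive objective in the spirit of TARL: pooled features of the same object segment observed in two different scans are pulled together with a standard InfoNCE loss, other objects acting as negatives. The mean pooling, temperature, and symmetric cross-entropy are illustrative assumptions, not the paper's exact loss.

import torch
import torch.nn.functional as F

def temporal_object_infonce(feats_t0, feats_t1, temperature=0.1):
    # feats_t0, feats_t1: (K, D) per-object features (e.g., mean-pooled point features)
    # for the same K objects observed in two different scans. Row i in both tensors is
    # the same physical object; every other row acts as a negative.
    z0 = F.normalize(feats_t0, dim=-1)
    z1 = F.normalize(feats_t1, dim=-1)
    logits = z0 @ z1.t() / temperature          # (K, K) cosine similarities
    targets = torch.arange(z0.shape[0], device=z0.device)
    # Symmetric cross-entropy: each object should match itself across time.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Example with random features for 8 temporally associated object segments.
loss = temporal_object_infonce(torch.randn(8, 96), torch.randn(8, 96))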
|
Prinzler_DINER_Depth-Aware_Image-Based_NEural_Radiance_Fields_CVPR_2023
|
Abstract
We present Depth-aware Image-based NEural Radiance
fields (DINER). Given a sparse set of RGB input views, we
predict depth and feature maps to guide the reconstruction
of a volumetric scene representation that allows us to ren-
der 3D objects under novel views. Specifically, we propose
novel techniques to incorporate depth information into fea-
ture fusion and efficient scene sampling. In comparison to
the previous state of the art, DINER achieves higher synthe-
sis quality and can process input views with greater dispar-
ity. This allows us to capture scenes more completely with-
out changing capturing hardware requirements and ulti-
mately enables larger viewpoint changes during novel view
synthesis. We evaluate our method by synthesizing novel
views, both for human heads and for general objects, and
observe significantly improved qualitative results and in-
creased perceptual metrics compared to the previous state
of the art. The code is publicly available through the Project
Webpage .
|
1. Introduction
In the past few years, we have seen immense progress in
digitizing humans for virtual and augmented reality applications. Especially with the introduction of neural rendering
and neural scene representations [ 43,44], we see 3D digital
humans that can be rendered under novel views while being
controlled via face and body tracking [ 3,11,14,15,24,32,
37,47,50,51,58]. Another line of research reproduces gen-
eral 3D objects from few input images without aiming for
control over expressions and poses [ 5,25,38,41,46,56]. We
argue that this offers significant advantages in real-world
applications like video-conferencing with holographic dis-
plays: (i) it is not limited to heads and bodies but can also
reproduce objects that humans interact with, (ii) even for
unseen extreme expressions, fine texture details can be syn-
thesized since they can be transferred from the input im-
ages, (iii) only little capturing hardware is required e.g.
four webcams suffice, and (iv) the approach can generalize
across identities such that new participants could join the
conference ad hoc without requiring subject-specific opti-
mization. Because of these advantages, we study the sce-
nario of reconstructing a volumetric scene representation
for novel view synthesis from sparse camera views. Specif-
ically, we assume an input of four cameras with high rel-
ative distances to observe large parts of the scene. Based
on these images, we condition a neural radiance field [ 27]
which can be rendered under novel views including view-
dependent effects. We refer to this approach as image-based
neural radiance fields . It implicitly requires estimating the
scene geometry from the source images. However, we ob-
serve that even for current state-of-the-art methods the ge-
ometry estimation often fails and significant synthesis arti-
facts occur when the distance between the source cameras
becomes large because they rely on implicit correspondence
search between the different views. Recent research demon-
strates the benefits of exploiting triangulated landmarks to
guide the correspondence search [ 25]. However, landmarks
have several drawbacks: They only provide sparse guid-
ance, are limited to specific classes, and the downstream
task is bounded by the quality of the keypoint estimation,
which is known to deteriorate for profile views.
To this end, we propose DINER to compute an image-
based neural radiance field that is guided by estimated dense
depth. This has significant advantages: depth maps are not
restricted to specific object categories, provide dense guid-
ance, and are easy to attain via either a commodity depth
sensor or off-the-shelf depth estimation methods. Specif-
ically, we leverage a state-of-the-art depth estimation net-
work [ 8] to predict depth maps for each of the source views
and employ an encoder network that regresses pixel-aligned
feature maps. DINER exploits the depth maps in two im-
portant ways: (i) we condition the neural radiance field
on the deviation between sample location and depth esti-
mates which provides strong prior information about visual
opacity, and (ii) we focus sampling on the estimated sur-
faces to improve sampling efficiency. Furthermore, we im-
prove the extrapolation capabilities of image-based NeRFs
by padding and positionally encoding the input images be-
fore applying the feature extractor. Our model is trained on
many different scenes and at inference time, four input im-
ages suffice to reconstruct the target scene in one inference
step. As a result, compared to the previous state of the art,
DINER can reconstruct 3D scenes from more distinct source
views with better visual quality, while allowing for larger
viewpoint changes during novel view synthesis. We eval-
uate our method on the large-scale FaceScape dataset [ 54]
on the task of novel view synthesis for human heads from
only four highly diverse source views and on general ob-
jects in the DTU dataset [ 18]. For both datasets, our model
outperforms all baselines by a significant margin.
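To make the first of the two depth uses above concrete in the simplest terms, the sketch below appends each sample's signed deviation from the source view's predicted depth to the pixel-aligned feature that conditions the radiance field. The concrete feature layout is an illustrative assumption, not DINER's implementation.

import torch

def depth_deviation_feature(sample_depths, predicted_depth, pixel_features):
    # sample_depths: (R, S) depth of each sample along R rays (in the source camera);
    # predicted_depth: (R,) depth estimate at the corresponding source pixels;
    # pixel_features: (R, C) pixel-aligned image features.
    # Returns (R, S, C + 1) conditioning vectors with the signed depth deviation appended.
    deviation = sample_depths - predicted_depth.unsqueeze(-1)          # (R, S)
    feats = pixel_features.unsqueeze(1).expand(-1, sample_depths.shape[1], -1)
    return torch.cat([feats, deviation.unsqueeze(-1)], dim=-1)

# Example: 1024 rays, 32 samples per ray, 64-channel features.
cond = depth_deviation_feature(torch.rand(1024, 32) * 4.0, torch.rand(1024) * 4.0, torch.randn(1024, 64))

The deviation acts as a strong visual-opacity prior: samples far in front of or behind the predicted surface can be treated as likely empty space.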
In summary, DINER is a novel method that produces vol-
umetric scene reconstructions from few source views with
higher quality and completeness than the previous state of
the art. Specifically, we contribute:
• an effective approach to condition image-based NeRFs
on depth maps predicted from the RGB input,
• a novel depth-guided sampling strategy that increases
efficiency,
• and a method to improve the extrapolation capabilities
of image-based NeRFs by padding and positionally en-
coding the source images prior to feature extraction.
|
Maiya_NIRVANA_Neural_Implicit_Representations_of_Videos_With_Adaptive_Networks_and_CVPR_2023
|
Abstract
Implicit Neural Representations (INR) have recently been shown to be a powerful tool for high-quality video compression. However, existing works are limited as they do not exploit the temporal redundancy in videos, leading to long encoding times. Additionally, these methods have fixed
architectures which do not scale to longer videos or higher
resolutions. To address these issues, we propose NIRVANA,
which treats videos as groups of frames and fits separate
networks to each group performing patch-wise prediction.
The video representation is modeled autoregressively, with
networks fit on a current group initialized using weights
from the previous group’s model. To enhance efficiency, we
quantize the parameters during training, requiring no post-
hoc pruning or quantization. When compared with previ-
ous works on the benchmark UVG dataset, NIRVANA im-
proves encoding quality from 37.36 to 37.70 (in terms of
PSNR) and the encoding speed by 12 ×, while maintain-
ing the same compression rate. In contrast to prior video
INR works which struggle with larger resolution and longer
videos, we show that our algorithm scales naturally due
to its patch-wise and autoregressive design. Moreover, our
method achieves variable bitrate compression by adapting
to videos with varying inter-frame motion. NIRVANA also
achieves 6 ×decoding speed scaling well with more GPUs,
making it practical for various deployment scenarios.1.
|
1. Introduction
In the information age today, where petabytes of con-
tent is generated and consumed every hour, the ability to
compress data fast and reliably is important. Not only
does compression make data cheaper for server hosting, it
1The project site can be found here
Figure 1. Overview of NIRVANA: Prior video INR works per-
form either pixel-wise or frame-wise prediction. We instead per-
form spatio-temporal patch-wise prediction and fit individual neu-
ral networks to groups of frames (clips) which are initialized using
networks trained on the previous group. Such an autoregressive
patch-wise approach exploits both spatial and temporal redundan-
cies present in videos while promoting scalability and adaptability
to varying video content, resolution or duration.
makes content accessible to population/regions with low-
bandwidth. Conventionally, such compression is achieved
through codecs like JPEG [44] for images and HEVC [39],
A V1 [8] for videos, each of which compresses data via tar-
getted hand-crafted algorithms. These techniques achieve
acceptable trade-offs, leading to their widespread usage.
With the rise of deep learning, machine learning-based
codecs [3,4,30,42] showed that it is possible to achieve bet-
ter performance in some aspects than conventional codecs.
However, these techniques often require large networks as
they attempt to generalize to compress all data from the
distribution. Furthermore, such generalization is contin-
gent on the training dataset used by these models, leading
to poor performance for Out-of-Distribution (OOD) data
across different domains [46] or even when the resolution
changes [5]. This greatly limits their real-world practicality, especially if the input data to be compressed is sig-
nificantly different from what the model has seen during
training. In recent years, a new paradigm, Implicit Neu-
ral Representations (INR), emerged to solve the drawbacks
of model-learned compression methods. Rather than at-
tempting to generalize to all data from a particular distri-
bution, its key idea is to train a network that specifically
fits to a signal, which can be an image [36], video [6], or
even a 3D scene [29]. With this conceptual shift, a neu-
ral network is no longer just a predictive tool, rather it is
now an efficient storage of data. Treating the neural net-
work as the data, INRs translate the data compression task
to that of model compression. Such a continuous function
mapping further benefits downstream tasks such as image
super-resolution [23], denoising [31], and inpainting [36].
Despite these advances, videos vary widely in both spa-
tial resolutions and temporal lengths, making it challenging
for networks to encode videos in a practical setting. To-
wards solving this task, SIREN [36], attempted to learn a
direct mapping from 3D spatio-temporal coordinates of a
video to each pixel’s color values. While simple, this is
computationally inefficient and does not factor in the spatio-
temporal redundancies within the video. Later, NeRV [6]
proposed to map 1D temporal coordinates in the form of
frame indices directly to generate a whole frame. While this
improves the reconstruction quality, such a mapping still
does not capture the temporal redundancies between frames
as it treats each frame individually as a separate image en-
coding task. Finally, mapping only the temporal coordinate
also means one would need to modify the architecture in
order to encode videos of different spatial resolutions.
To address the above issues, we propose NIRVANA, a
method that exploits spatio-temporal redundancies to en-
code videos of arbitrary lengths and resolutions. Rather
than performing a pixel-wise prediction ( e.g., SIREN) or
a whole-frame prediction ( e.g., NeRV), we predict patches ,
which allows our model to adapt to videos of different spa-tial resolutions without modifying the architecture. Our
method takes in the centroids of patches (patch coordinates)
(xp, yp)as inputs and outputs a corresponding patch vol-
ume. Since patches can be arranged for different resolu-
tions, we do not require any architectural modification when
the input video resolution changes. Furthermore, to exploit
the temporal nature of videos, we propose to train individ-
ual, small models for each group of video frames (“clips”)
in an autoregressive manner: the model weights for pre-
dicting each frame group is initialized from the weights of
the model for the previous frame group. Apart from the
obvious advantage that we can scale to longer sequences
without changing the model architecture, this design ex-
ploits the temporal nature of videos that, intuitively, frame
groups with similar visual information ( e.g., static video
frames) would have similar weights, allowing us to further
perform residual encoding for greater compression gains.
This adaptive nature, that static videos gain greater com-
pression than dynamic ones, is a big advantage over NeRV
where the compression for identical frames remain fixed as
it models each frame individually. To obtain further com-
pression gains, we employ recent advances in the literature
to add entropy regularization for quantization, and encode
model weights for each video during training [17]. This fur-
ther adapts the compression level to the complexity of each
video, and avoids any post-hoc pruning and fine-tuning as
in NeRV , which can be slow.
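A bare-bones sketch of the autoregressive group-by-group fitting described above follows; make_model and patch_coords_and_targets are hypothetical helpers standing in for NIRVANA's patch-wise INR and data preparation, and the optimizer settings are placeholders rather than the paper's configuration.

import copy
import torch
import torch.nn.functional as F

def fit_video_autoregressive(frame_groups, make_model, patch_coords_and_targets, fit_steps=500, lr=1e-3):
    # frame_groups: list of tensors, one group ("clip") of frames each.
    # make_model(): builds a fresh patch-wise INR mapping patch coordinates to patch pixels.
    # patch_coords_and_targets(group): returns (coords, targets) for that group (assumed helper).
    models, prev = [], None
    for group in frame_groups:
        model = make_model() if prev is None else copy.deepcopy(prev)  # warm start from previous group
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        coords, targets = patch_coords_and_targets(group)
        for _ in range(fit_steps):
            opt.zero_grad()
            loss = F.mse_loss(model(coords), targets)
            loss.backward()
            opt.step()
        models.append(model)
        prev = model
    return models

Under this scheme, only the (quantized) difference between consecutive groups' weights would need to be stored, which is where the residual encoding mentioned above comes in.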
Finally, we show that despite its autoregressive nature,
our model is linearly parallelizable with the number of
GPUs by chunking each video into disjoint groups to be
processed. This strategy largely improves the speed while
maintaining the superior compression gains of our method,
making it practical for various deployment scenarios.
We evaluate NIRVANA on the benchmark UVG dataset [28]. We show that NIRVANA reaches the same levels of PSNR and Bits-per-Pixel (BPP) compression rate with almost 12× the encoding speed of NeRV. We verify that our
algorithm adapts to varying spatial and temporal scales by
providing results on videos in the UVG dataset with 4K res-
olution at 120fps, as well as on significantly longer videos
from the YouTube-8M dataset, both of which are challeng-
ing extensions which have not been attempted on for this
task. We show that our algorithm outperforms the base-
line with much smaller encoding times and that it naturally
scales with no performance degradation. We conduct abla-
tion studies to show the effectiveness of various components
of our algorithm in achieving high levels of reconstruction
quality and understand the sources of improvements.
Our contributions are summarized below:
• We present NIRVANA, a patch-wise autoregressive
video INR framework which exploits both spatial and
temporal redundancies in videos to achieve high levels
of encoding speedups (12 ×) at similar reconstruction
quality and compression rates.
• We achieve a 6 ×speedup in decoding times and scale
well with increasing number of GPUs, making it prac-
tical in various deployment scenarios.
• Our framework adapts to varying video resolutions and
lengths without performance degradations. Different
from prior works, it achieves adaptive bitrate compres-
sion based on the amount of inter-frame motion for
each video.
|
Park_BiFormer_Learning_Bilateral_Motion_Estimation_via_Bilateral_Transformer_for_4K_CVPR_2023
|
Abstract
A novel 4K video frame interpolator based on bilateral
transformer (BiFormer) is proposed in this paper, which
performs three steps: global motion estimation, local mo-
tion refinement, and frame synthesis. First, in global
motion estimation, we predict symmetric bilateral motion
fields at a coarse scale. To this end, we propose Bi-
Former, the first transformer-based bilateral motion esti-
mator. Second, we refine the global motion fields effi-
ciently using blockwise bilateral cost volumes (BBCVs).
Third, we warp the input frames using the refined motion
fields and blend them to synthesize an intermediate frame.
Extensive experiments demonstrate that the proposed Bi-
Former algorithm achieves excellent interpolation perfor-
mance on 4K datasets. The source codes are available at
https://github.com/JunHeum/BiFormer .
|
1. Introduction
Video frame interpolation (VFI) is a low-level vision task
to increase the frame rate of a video, in which two (or more)
successive input frames are used to interpolate intermedi-
ate frames. Its applications include video enhancement [ 3],
video compression [4,5], slow-motion generation [6], and
view synthesis [ 7,8]. Attempts have been made to develop
effective VFI methods [ 1,2,6,9–27]. Especially, with the
advances in optical flow estimation [ 28–37], motion-based
VFI methods provide remarkable performances. But, VFI
for high-resolution videos, e.g. 4K videos, remains chal-
lenging due to diverse factors, such as large motions and
small objects, hindering accurate optical flow estimation.
Most of these VFI methods are optimized for the
Vimeo90K dataset [ 3] of a low spatial resolution ( 448×
256), so they tend to yield poor results on 4K videos [ 2].
It is important to develop effective VFI techniques for 4K
videos, which are widely used nowadays. 4K videos are,
however, difficult to interpolate, for they contain large mo-
tions as in Figure 1. To cope with large motions, many opti-
cal flow estimators adopt coarse-to-fine strategies [ 31–33].
At a coarse scale, large motions can be handled more effi-
ciently. But, motion errors at the coarse scale may propa-
gate to a finer scale, making fine-scale results unreliable.
To reduce such errors, the transformer can be a power-
ful solution, as demonstrated by recent optical flow esti-
mators [ 36,37]. However, these estimators cannot be di-
rectly used for VFI, in which the motion fields from an in-
termediate frame I_t, 0 < t < 1, to input frames I_0 and I_1 should be estimated. For such bilateral motion estima-
tion [ 1,2,24,38,39], a novel technique is required to adopt
the transformer because the source frame I_t is not available.
In this paper, we propose a novel 4K VFI algorithm using
the bilateral transformer (BiFormer) based on bilateral cross
attention. First, we estimate global motion fields at a coarse
scale via BiFormer. Second, we refine these global motion
fields into final motion fields at a fine scale, by employing
a motion upsampler recurrently. Last, we warp the two in-
put frames using the final motion fields, respectively, and
blend the two warped frames to synthesize an intermediate
frame. Experimental results demonstrate that the proposed
BiFormer algorithm provides the best performance on 4K
benchmark datasets.
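The final synthesis step described above amounts to backward-warping both inputs with the estimated bilateral motion fields and blending them; a minimal PyTorch sketch is below. The learned blending is replaced here by a caller-provided weight map, which is an assumption for illustration rather than the paper's frame-synthesis network.

import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    # img: (B, C, H, W); flow: (B, 2, H, W) motion from the target frame to `img`.
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().to(img.device)          # (2, H, W)
    coords = grid.unsqueeze(0) + flow                                    # sample locations in `img`
    # Normalize to [-1, 1] for grid_sample (x first, then y).
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack([coords_x, coords_y], dim=-1), align_corners=True)

def synthesize_intermediate(i0, i1, flow_t0, flow_t1, blend_weight):
    # Blend the two backward-warped inputs; blend_weight in [0, 1], shape (B, 1, H, W).
    w0 = backward_warp(i0, flow_t0)
    w1 = backward_warp(i1, flow_t1)
    return blend_weight * w0 + (1.0 - blend_weight) * w1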
The work has the following major contributions:
•We propose the first transformer-based bilateral mo-
tion estimator, called BiFormer, for VFI.
•We develop blockwise bilateral cost volumes (BBCVs)
to refine motion fields at 4K resolution efficiently.
•The proposed BiFormer algorithm outperforms the
state-of-the-art VFI methods [ 1,2,14,22–24,40] on
three 4K benchmark datasets [ 2,41,42].
|
Moon_Bringing_Inputs_to_Shared_Domains_for_3D_Interacting_Hands_Recovery_CVPR_2023
|
Abstract
Despite recent achievements, existing 3D interacting
hands recovery methods have shown results mainly in mo-
tion capture (MoCap) environments, not on in-the-wild
(ITW) ones. This is because collecting 3D interacting hands
data in the wild is extremely challenging, even for the 2D
data. We present InterWild, which brings MoCap and ITW
samples to shared domains for robust 3D interacting hands
recovery in the wild with a limited amount of ITW 2D/3D
interacting hands data. 3D interacting hands recovery con-
sists of two sub-problems: 1) 3D recovery of each hand and
2) 3D relative translation recovery between two hands. For
the first sub-problem, we bring MoCap and ITW samples to
a shared 2D scale space. Although ITW datasets provide
a limited amount of 2D/3D interacting hands, they contain
large-scale 2D single hand data. Motivated by this, we use
a single hand image as an input for the first sub-problem
regardless of whether two hands are interacting. Hence, in-
teracting hands of MoCap datasets are brought to the 2D
scale space of single hands of ITW datasets. For the sec-
ond sub-problem, we bring MoCap and ITW samples to a
shared appearance-invariant space. Unlike the first sub-
problem, 2D labels of ITW datasets are not helpful for the
second sub-problem due to the 3D translation’s ambiguity.
Hence, instead of relying on ITW samples, we amplify the
generalizability of MoCap samples by taking only a geo-
metric feature without an image as an input for the second
sub-problem. As the geometric feature is invariant to ap-
pearances, MoCap and ITW samples do not suffer from a
huge appearance gap between the two datasets. The code
is publicly available1.
|
1. Introduction
3D interacting hands recovery aims to reconstruct a sin-
gle person’s interacting right and left hands in the 3D space.
The recent introduction of a large-scale motion capture
1https://github.com/facebookresearch/InterWild
[Figure 1 panels: (a) the number of single-hand vs. interacting-hand samples in MSCOCO; (b) the failure case of the 2D-based weak supervision for the 3D relative translation between two hands.]
Figure 1. (a) Only a very small amount of 2D interacting
hands are available in ITW datasets despite a relaxed threshold
of intersection-over-union (IoU). We consider two hands are in-
teracting if the IoU between the two hands’ boxes is bigger than
0.1. (b) The 2D-based weak supervision from ITW datasets often
results in a wrong 3D relative translation ( the orange arrow ).
(MoCap) dataset [20] motivated many 3D interacting hands
recovery methods [4, 6, 14, 25, 32].
Although they have shown robust results on MoCap
datasets, none of them explicitly tackled robustness on in-
the-wild (ITW) datasets. Simply training networks on Mo-
Cap datasets and testing them on ITW datasets leads to unstable results due to a huge domain gap between MoCap and
ITW datasets. The most representative domain gap is an ap-
pearance gap. For example, images in InterHand2.6M [20]
(IH2.6M) have black backgrounds and artificial illumina-
tions, far from those of ITW datasets. The fundamental so-
lution for this is collecting large-scale ITW data with 3D
groundtruths (GTs); however, this is extremely challenging.
For example, capturing 3D data requires tens of calibrated
Figure 2. Mini-batch comparison for the first sub-problem (i.e.,
estimation of separate 3D meshes of left and right hands).
and synchronized cameras. Preparing such a setup at di-
verse places in the wild requires a huge amount of manual
effort. Furthermore, collecting even large-scale ITW 2D in-
teracting hand data with manual annotation is greatly chal-
lenging due to the severe occlusions and self-similarities.
Due to such challenges, there is no large-scale ITW 2D/3D
interacting hand dataset.
Nevertheless, ITW datasets provide large-scale 2D
single-hand data, as shown in Fig. 1 (a). Utilizing such
large-scale 2D single-hand data of ITW datasets can be
an orthogonal research direction to the 2D/3D interacting
hands data collection in the wild. Mixed-batch training
is the most dominant approach to utilize 2D data of ITW
datasets for the 3D human recovery [3, 11, 12, 17, 19, 24].
During the mixed-batch training, half of the samples in a
mini-batch are taken from MoCap datasets and the rest of
the samples from ITW datasets. The MoCap samples are
fully supervised with 3D GTs, and the ITW samples are
weakly supervised with 2D GTs. The ITW samples make
networks exposed to diverse appearances, which leads to
successful generalization to unseen ITW images. The 2D-
based weak supervision is enabled by the MANO [23] hand
model, which produces a 3D hand mesh from pose and
shape parameters in a differentiable way. To be specific,
3D joint coordinates, extracted from the 3D mesh, are pro-
jected to the 2D space using an estimated 3D global trans-
lation ( i.e., 3D translation from the camera to the hand root
joint) and fixed virtual camera intrinsics. Then, the pro-
jected 2D joint coordinates are supervised with the 2D GTs.
In this way, the 2D GTs weakly supervise MANO parame-
ters, which can make all vertices of the 3D mesh fit to the
2D GTs.
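As a concrete reading of the weak supervision just described, the sketch below projects regressed 3D joints with an estimated global translation and fixed virtual intrinsics, then penalizes the deviation from the 2D ground truth. The focal length, principal point, and L1 loss are illustrative assumptions, not the paper's exact settings.

import torch

def weak_2d_supervision(joints_3d, global_transl, joints_2d_gt, focal=1000.0, princpt=(128.0, 128.0)):
    # joints_3d: (B, J, 3) root-relative joints from the regressed MANO mesh;
    # global_transl: (B, 3) estimated 3D global translation; joints_2d_gt: (B, J, 2).
    cam = joints_3d + global_transl.unsqueeze(1)                 # move joints into the camera frame
    x = focal * cam[..., 0] / cam[..., 2] + princpt[0]
    y = focal * cam[..., 1] / cam[..., 2] + princpt[1]
    proj_2d = torch.stack([x, y], dim=-1)                        # (B, J, 2) projected joints
    return torch.nn.functional.l1_loss(proj_2d, joints_2d_gt)

A wrong global translation makes the projection, and hence the gradient, wrong as well, which is exactly the failure mode for relative translation illustrated in Fig. 1 (b).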
However, naively re-training networks of previous 3D
interacting hands recovery methods [4, 6, 14, 25, 32] with
the mixed-batch training does not result in robust results.
3D interacting hands mesh recovery consists of two sub-
problems: 1) estimation of separate 3D right and left hands
and 2) estimation of 3D relative translation between two
hands. For the first sub-problem, previous works [4, 6, 14,
25, 32] take an image of two hands when hands are in-
Figure 3. Mini-batch comparison for the second sub-problem ( i.e.,
estimation of 3D relative translation between two hands).
teracting, and an image of a single hand when hands are
not interacting (Fig. 2 (a)). As ITW datasets mostly con-
tain single-hand data (Fig. 1 (a)), most samples from ITW
datasets contain a single hand during the mixed-batch train-
ing. The problem is that the images of two hands from Mo-
Cap datasets have very different 2D hand scale distribution
compared to that of single-hand images from ITW datasets,
as shown in Fig. 5. For example, when two hands are in-
cluded in the input image, the 2D scale of each hand is much
smaller than that from a cropped image of a single hand.
Unlike the first sub-problem, the second sub-problem
hardly gets benefits from the 2D-based weak supervision
from ITW datasets. Fig. 1 (b) shows the failure case of the
2D-based weak supervision. When the 3D global transla-
tion, estimated for the 2D-based weak supervision, is wrong
(1⃝in the figure), the 3D relative translation is supervised
to be wrong one ( the orange arrow in the figure). The
wrong 3D global translation also can happen to the first sub-
problem; however, the critical difference is that the 3D scale
of the 3D relative translation ( i.e., output of the second sub-
problem) is very weakly constrained, while the 3D scale
of hands ( i.e., output of the first sub-problem) are strongly
constrained by the shape parameter of MANO [23]. For
example, we can place two hands close or far freely based
on how they are interacting with each other, while the size
of adults’ hands is usually around 15 cm. As there is no
such strong constraint to the relative translation, the relative
translation can be an arbitrary value and is prone to wrong
supervision ( the orange arrow in the figure). Please note
that estimating a 3D global translation from a single image
is highly ambiguous and can often be wrong, as the
camera position is not provided in the input image. In this
regard, we observed that the 2D-based weak supervision for
the 3D relative translation, used in IntagHand [14] (Fig. 3
(a)) deteriorates results, which is shown in the experimental
section. However, without the 2D-based weak supervision
of ITW datasets, the network is trained only on images of
MoCap datasets like previous works [4,6,25,32] (Fig. 3 (a)),
which results in generalization failure due to the appearance
gap between MoCap and ITW datasets.
We present InterWild, a framework for 3D interacting
hands mesh recovery in the wild. For the first sub-problem,
InterWild takes a cropped single-hand image regardless of
whether two hands are interacting or not, as shown in Fig. 2
(b). In this way, the 2D scales of interacting hands are
normalized to those of a single hand. Such normaliza-
tion brings all single and interacting hands to the shared
2D scale space; hence, large-scale single-hand data of ITW
datasets can be much more helpful compared to the coun-
terpart without the normalization.
For the second sub-problem, InterWild takes geometric
features without images, as shown in Fig. 3 (b). In par-
ticular, the output of the second sub-problem ( i.e., 3D rela-
tive translation) is fully supervised only for MoCap samples
to prevent the 2D-based weak supervision of ITW samples
from deteriorating the 3D relative translation. The geomet-
ric features are invariant to appearances, such as colors and
illuminations, which can reduce the huge appearance gap
between MoCap and ITW datasets and bring samples from
two datasets to a shared appearance-invariant space. There-
fore, although the estimated 3D relative translation is super-
vised only on MoCap datasets and is not supervised on ITW
datasets, our InterWild produces robust 3D relative transla-
tions on ITW datasets.
We show that our InterWild produces highly robust 3D
interacting hand meshes from ITW images. As 3D interact-
ing hands recovery in the wild is barely studied, we hope
that our work can provide useful insights for future research. To support further study, we release our code and trained models.
Our contributions can be summarized as follows.
• We present InterWild, a framework for the 3D interact-
ing hands recovery in the wild.
• For the separate left and right 3D hands, Inter-
Wild takes a cropped single-hand image regardless of
whether hands are interacting or not so that all hands
are brought to a shared 2D scale space.
• For the 3D relative translation between two hands, In-
terWild takes only geometric features, which are in-
variant to appearances.
|
Pham_HyperCUT_Video_Sequence_From_a_Single_Blurry_Image_Using_Unsupervised_CVPR_2023
|
Abstract
We consider the challenging task of training models for
image-to-video deblurring, which aims to recover a se-
quence of sharp images corresponding to a given blurry
image input. A critical issue disturbing the training of
an image-to-video model is the ambiguity of the frame or-
dering since both the forward and backward sequences
are plausible solutions. This paper proposes an effective
self-supervised ordering scheme that allows training high-
quality image-to-video deblurring models. Unlike previous
methods that rely on order-invariant losses, we assign an
explicit order for each video sequence, thus avoiding the
order-ambiguity issue. Specifically, we map each video se-
quence to a vector in a latent high-dimensional space so
that there exists a hyperplane such that for every video se-
quence, the vectors extracted from it and its reversed se-
quence are on different sides of the hyperplane. The side
of the vectors will be used to define the order of the corre-
sponding sequence. Last but not least, we propose a real-
image dataset for the image-to-video deblurring problem
that covers a variety of popular domains, including face,
hand, and street. Extensive experimental results confirm
the effectiveness of our method. Code and data are avail-
able at https://github.com/VinAIResearch/
HyperCUT.git
|
1. Introduction
Motion blur artifacts occur when the camera’s shutter
speed is slower than the object’s motion. This can be stud-
ied by considering the image capturing process, in which
the camera shutter is opened to allow light to pass to the
camera sensor. This process can be formulated as:
y = g\left(\frac{1}{\tau}\int_{0}^{\tau} x(t)\,dt\right) \approx g\left(\frac{1}{N+1}\sum_{k=0}^{N} x_k\right), \qquad (1)
Figure 1. We tackle the order ambiguity issue by forcing the frame
sequence to follow a pre-defined order. To find such an order, we
map the frame sequences into a high dimensional space so that
they are separable. The side (left or right of the hyperplane) is
used to define the order of the frame sequence.
where y is the resulting image, x(t) is the signal captured by the sensor at time t, g is the camera response function, and τ is the camera exposure time. For simplicity, we omit the camera response function in the notation. The image y can also be approximated by averaging N+1 uniform samples of the signal x, denoted as x_k with k = 0, ..., N. For long
exposure duration or rapid movement, these samples can be
notably different, causing motion blur artifacts.
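Following Eq. (1), synthesizing a blurry image from a sharp sequence reduces to averaging the frames (with the camera response g omitted, as in the text); a two-line NumPy sketch:

import numpy as np

def synthesize_blur(frames):
    # frames: (N + 1, H, W, 3) sharp samples x_0..x_N; returns the blurry image y of Eq. (1).
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)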
Image deblurring seeks to remove the blur artifacts to im-
prove the quality of the captured image. This task has many
practical applications, and it has been extensively studied in
the computer vision literature. However, existing methods
often formulate the deblurring task as an image-to-image
mapping problem, where only one sharp image is sought
for a given blurry input image, even though a blurry image
corresponds to a sequence of sharp images. The image-to-
image approach can improve the aesthetic look of a blurry
image, but it is insufficient for many applications, espe-
cially the applications that require recovering the motion of
objects, e.g., for iris or finger tracking. In this paper, we
tackle another important task, image-to-video deblur-
ring, which we will refer to as blur2vid.
Image-to-video deblurring, however, is a nontrivial task
that requires learning a set of deblurring mapping functions
{f_k} so that f_k(y) ≈ x_k. A naive approach is to minimize
the squared difference between the predicted sharp image
and the ground truth target, i.e.,
f_k = \operatorname*{arg\,min}_{f} \; \mathbb{E}_{x,y}\left[\,\|f(y) - x_k\|_2^2\,\right]. \qquad (2)
However, this approach suffers from the order-ambiguity is-
sue [5]. Considering Eq. (1), the same blurry image y is formed regardless of the order of the corresponding sampled sharp frames. For example, both {x_0, ..., x_N} and the reversed sequence are valid solutions. Thus, x_k, x_{N−k}, and possibly other x_h's are valid ‘ground truth’ targets for f_k(y). Consequently, optimizing Eq. (2) will lead to a solution where f_k is mapped to the average of x_k and x_{N−k}. This issue has also
been observed in the work of [13] for future video frame
prediction. This also explains why most existing deblurring
methods cannot be directly used to recover any frame other
than the middle one. To tackle this issue, Jin et al. [5] intro-
duced the order-invariant loss, Eq. (3), which computed the
total loss on frames at symmetric indexes (i.e., kandN−k).
However, this loss does not fully resolve the issue of having
multiple solutions, as will be demonstrated in Sec. 2.2.
This paper proposes a new scheme to solve the order am-
biguity issue. Unlike the order-invariant loss [5] or motion
guidance [30], we solve this problem directly by explicitly
assigning which frame sequence is backward or forward.
In other words, each sequence is assigned an order label
0or1so that its label is opposite to the label of its re-
verse. Then the ambiguity issue can be tackled by forcing
the model to learn to generate videos with the order label
“0”. We introduce HyperCUT as illustrated in Fig. 1 to
find such an order. Specifically, we find a mapping Hthat
maps all frame sequences into a high-dimensional space so
that all pairs of vectors representing two temporal symmet-
ric sequences are separable by a hyperplane. We dub this
hyperplane HyperCUT. Each frame sequence’s order label
is defined as the side of its corresponding vector w.r.t. the
hyperplane. We find the mapping Hby representing it as
a neural network and training it in an unsupervised manner
using a contrastive loss.
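A heavily simplified sketch of this ordering idea follows: a sequence and its temporal reverse are mapped to embeddings, a contrastive-style loss pushes their signed distances to a learned hyperplane to opposite signs, and the order label of any sequence is the side it falls on. The specific margin loss and the linear hyperplane head are assumptions for illustration, not the paper's exact HyperCUT objective.

import torch.nn as nn
import torch.nn.functional as F

class OrderHead(nn.Module):
    # Maps a flattened frame sequence to an embedding and a signed distance to a hyperplane.
    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))
        self.hyperplane = nn.Linear(emb_dim, 1)      # w·h + b

    def forward(self, seq):
        return self.hyperplane(self.encoder(seq))    # (B, 1) signed side of the hyperplane

def ordering_loss(head, seq, seq_reversed, margin=1.0):
    # Push a sequence and its reverse to opposite sides: the product of their signed
    # distances should be at most -margin.
    s_fwd, s_bwd = head(seq), head(seq_reversed)
    return F.relu(margin + s_fwd * s_bwd).mean()

def order_label(head, seq):
    # Order label 0/1 = which side of the hyperplane the sequence falls on.
    return (head(seq) > 0).long()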
Previously, there existed no real blur2vid dataset, so another contribution of this paper is the introduction of a new dataset called Real blur2vid (RB2V). RB2V was captured
by a beam splitter system, similar to [18, 29, 31]. It con-
sists of three subsets for three categories: street, face, and
hand. We will use the last two to demonstrate the potential
applications of the blur2vidtask in motion tracking.
In short, our contributions are summarized as follows:
• We introduce HyperCUT, which is used to solve the order ambiguity issue for the task of extracting a sharp video sequence from a blurry image.
• We build a new dataset for the task, covering three cat-
egories: street, face, and hand. This is the first real and
large-scale dataset for image-to-video deblurring.
• We demonstrate two potential real-world applications
of image-to-video deblurring.
|
Li_ToThePoint_Efficient_Contrastive_Learning_of_3D_Point_Clouds_via_Recycling_CVPR_2023
|
Abstract
Recent years have witnessed significant developments in
point cloud processing, including classification and seg-
mentation. However, supervised learning approaches need
a lot of well-labeled data for training, and annotation is
labor- and time-intensive. Self-supervised learning, on the
other hand, uses unlabeled data, and pre-trains a back-
bone with a pretext task to extract latent representations to
be used with the downstream tasks. Compared to 2D im-
ages, self-supervised learning of 3D point clouds is under-
explored. Existing models, for self-supervised learning of
3D point clouds, rely on a large number of data sam-
ples, and require significant amount of computational re-
sources and training time. To address this issue, we pro-
pose a novel contrastive learning approach, referred to as
ToThePoint. Different from traditional contrastive learning
methods, which maximize agreement between features ob-
tained from a pair of point clouds formed only with dif-
ferent types of augmentation, ToThePoint also maximizes
the agreement between the permutation invariant features
and features discarded after max pooling. We first per-
form self-supervised learning on the ShapeNet dataset, and
then evaluate the performance of the network on different
downstream tasks. In the downstream task experiments,
performed on the ModelNet40, ModelNet40C, Scanob-
jectNN and ShapeNet-Part datasets, our proposed ToThe-
Point achieves competitive, if not better results compared to
the state-of-the-art baselines, and does so with significantly
less training time (200 times faster than baselines).
|
1. Introduction
In recent years, self-supervised methods, which pretrain
a backbone with pretext tasks to extract useful latent rep-
resentations, have become increasingly effective [16]. For
example, self-supervised tasks can be set to distinguish pos-
itive and negative samples or restore damaged images, and
Figure 1. A running example of ToThePoint. Raw 3D point
cloud data is streamed through two branches; in each branch, nor-
malization and data augmentation are performed followed by tra-
ditional max-pooling operations. In our recycling mechanism, the
N×M dimensional features are sorted and a row of features is
randomly selected (from the remaining rows after max-pooling)
as the recycled aligned features to assist the representation of per-
mutation invariant features. The four features extracted from the
two branches are next subjected to two stages of contrastive learn-
ing. The learning result is then mapped onto the hypersphere.
these self-supervised pre-training tasks have been proven to
provide rich latent feature representations for downstream
tasks to improve their performance [5, 8, 10]. For the
tasks, for which dataset labeling is difficult, such as detec-
tion [33], segmentation [12] or video tracking tasks [27],
unsupervised pre-training can be especially helpful by alle-
viating the issue of insufficient labelled data. Moreover, it
has been shown that self-supervised pre-training combined
with supervised training provides better performance than
traditional fully supervised learning by itself [11, 34, 36].
With the ever increasing availability of LiDAR sensors and
stereo cameras, more and more point cloud data can be and
have been captured. However, annotating this data is dif-
ficult, providing additional incentive for self-supervised al-
gorithms developed for 3D point clouds.
There have been some works exploring self-supervised
representation learning from point clouds, mainly based on
generative models [29], reconstruction [20, 26] and other
pretext tasks [34]. However, existing methods require large
amounts, even millions of data samples, for self-supervised
pre-training [9], making them computationally more expen-
sive and time-consuming. Among traditional point cloud
networks, PointNet [18] is a pioneering, end-to-end 3D
point cloud analysis work. It obtains permutation-invariant
features by adopting the max-pooling operation. There have
been many subsequent works adopting this structure [14,
19]. Yet, the max-pooling operation discards a large num-
ber of points and their features. Chen et al. [4] have shown
that these discarded features are still useful and, when recy-
cled, can boost performance; they proposed recycling to im-
prove the performance of fully-supervised 3D point cloud
processing tasks, including classification and segmentation.
In this work, different from [4], we perform recycling
differently, and also use the discarded point cloud features
as a feature augmentation method for contrastive learning.
This augmentation approach can allow having less train-
ing samples for self-supervised training, i.e. it can enable
the self-supervised pre-training of a point cloud network
without requiring large amounts of point cloud data. We
achieve this by making good use of the point cloud features
discarded by the max-pooling module of the point cloud
network. Performing self-supervised learning with a small
amount of point cloud data can also allow downstream tasks
to get a competitive result.
We propose ToThePoint to accelerate self-supervised
pretraining of 3D point cloud features, as shown by the ex-
ample in Fig. 1. Compared to previous baselines, which re-
quire a large number of training samples and longer training
time, our proposed work achieves its accuracy levels with
only a fraction of samples during pre-training. The goal of
our work is to introduce the distribution of the maximum
aggregated features and the recycled point cloud features
into the hypersphere space through a contrastive learning
method. The maximum aggregated feature and the recycled
point cloud feature from the same sample are regarded as a
cluster. Contrastive learning is used to make the maximum
aggregated feature become the centroid of the cluster, so
that the maximum aggregated feature can better represent
the sample.
Contributions. The main contributions of this work in-
clude the following:
• We first demonstrate that the point cloud features, dis-
carded by the max-pooling module of a point cloud net-
work, can be recycled and used as a feature augmentation
method for contrastive learning.
• We propose a two-branch contrastive learning framework,
which incorporates a cross-branch contrastive learning
loss and an intra-branch contrastive learning loss.• We perform extensive experiments to evaluate our pro-
posed method on three downstream tasks, namely object
classification, few-shot learning, and part segmentation
on synthetic and real datasets of varying scales. The re-
sults show that our method achieves competitive if not
better results compared to the state-of-the-art baselines,
and does so with significantly less training time and fewer
training samples.
• We perform ablation studies analyzing the effects of in-
dividual loss terms and their combinations on the perfor-
mance.
|
Mao_Leapfrog_Diffusion_Model_for_Stochastic_Trajectory_Prediction_CVPR_2023
|
Abstract
To model the indeterminacy of human behaviors, stochas-
tic trajectory prediction requires a sophisticated multi-modal
distribution of future trajectories. Emerging diffusion models
have revealed their tremendous representation capacities in
numerous generation tasks, showing potential for stochastic
trajectory prediction. However, expensive time consumption
prevents diffusion models from real-time prediction, since
a large number of denoising steps are required to assure
sufficient representation ability. To resolve the dilemma, we
present LEapfrog Diffusion model (LED), a novel diffusion-
based trajectory prediction model, which provides real-time,
precise, and diverse predictions. The core of the proposed
LED is to leverage a trainable leapfrog initializer to directly
learn an expressive multi-modal distribution of future tra-
jectories, which skips a large number of denoising steps,
significantly accelerating inference speed. Moreover, the
leapfrog initializer is trained to appropriately allocate cor-
related samples to provide a diversity of predicted future tra-
jectories, significantly improving prediction performances.
Extensive experiments on four real-world datasets, including
NBA/NFL/SDD/ETH-UCY, show that LED consistently im-
proves performance and achieves 23.7%/21.9% ADE/FDE
improvement on NFL. The proposed LED also speeds up the
inference 19.3/30.8/24.3/25.1 times compared to the stan-
dard diffusion model on NBA/NFL/SDD/ETH-UCY, satisfy-
ing real-time inference needs. Code is available at https:
//github.com/MediaBrain-SJTU/LED .
|
1. Introduction
Trajectory prediction aims to predict the future trajecto-
ries for one or multiple interacting agents conditioned on
their past movements. This task plays a significant role in
numerous applications, such as autonomous driving [24],
drones [10], surveillance systems [45], human-robot inter-
action systems [5], and interactive robotics [20]. Recently,
lots of fascinating research progresses have been made from
Figure 1. Leapfrog diffusion model uses the leapfrog initializer to
estimate the denoised distribution and substitute a long sequence of
traditional denoising steps, accelerating inference and maintaining
representation capacity.
many aspects, including temporal encoding [6, 13, 46, 53],
interaction modeling [1,15,18,43,49], and rasterized predic-
tion [11, 12, 26, 48]. In practice, to capture multiple possi-
bilities of future trajectories, a real-world prediction system
needs to produce multiple future trajectories. This leads to
the emergence of stochastic trajectory prediction, aiming to
precisely model the distribution of future trajectories.
Previous works have proposed a series of deep gen-
erative models for stochastic trajectory prediction. For
example, [15, 18] exploit generative adversarial net-
works (GANs) to model the future trajectory distribu-
tion; [27, 38, 49] consider the conditional variational auto-
encoders (CVAEs) structure; and [3] uses the conditional
normalizing flow to relax the Gaussian prior in CVAEs and
learn more representative priors. Recently, with the great suc-
cess in image generation [17, 33] and audio synthesis [4, 21],
denoising diffusion probabilistic models have been applied
to time-series analysis and trajectory prediction, and show
promising prediction performances [14, 44]. Compared to
many other generative models, diffusion models have advan-
tages in stable training and modeling sophisticated distribu-
tions through sufficient denoising steps [8].
However, there are two critical problems in diffusion mod-
els for stochastic trajectory prediction. First, the real-time
inference is time-consuming [14]. To ensure the representa-
tion ability and generate high-quality samples, an adequate
number of denoising steps are required in standard diffu-
sion models, which costs more computational time. For
example, experiments show that on the NBA dataset, dif-
fusion models need about 100 denoising steps to achieve
decent prediction performances, which would take ∼886ms
to predict; while the next frame comes every 200ms. Sec-
ond, as mentioned in [2], a limited number of independent
and identically distributed samples might not be able to cap-
ture sufficient modalities in the underlying distribution of a
generative model. Empirically, a few independent sampled
trajectories could miss some important future possibilities
due to the lack of appropriate sample allocation, significantly
deteriorating prediction performances.
In this work, we propose leapfrog diffusion model (LED),
a novel diffusion-based trajectory prediction model, which
significantly accelerates the inference speed and enables
adaptive and appropriate allocations of multiple correlated
predictions, providing sufficient diversity in predictions. The
core idea of the proposed LED is to learn a rough, yet suffi-
ciently expressive distribution to initialize denoised future
trajectories; instead of using a plain Gaussian distribution
as in standard diffusion models. Specifically, our forward
diffusion process is the same as standard diffusion models,
which assures that the ultimate representation ability is pris-
tine; while in the reverse denoising process, we leverage a
powerful initializer to produce correlated diverse samples
and leapfrog or skip a large number of denoising steps; and
then, use only a few denoising steps to refine the distribution.
To implement such a leapfrog initializer, we consider
a reparameterization to alleviate the learning burden. We
disassemble a denoised distribution into three parts: mean
trajectory, variance, and sample positions under the normal-
ized distribution. To estimate these three, we design three
corresponding trainable modules, each of which leverages
both a social encoder and a temporal encoder to learn the
social-temporal features and produce accurate estimation.
Furthermore, all the sample positions are simultaneously
generated based on the same social-temporal features, en-
abling appropriate sample allocations to provide diversity.
To evaluate the effectiveness of the proposed method, we
conduct experiments on four trajectory prediction datasets:
NBA, NFL Football Dataset, Standford Drones Dataset, and
ETH-UCY . The quantitative results show we outperform the
previous methods and achieve state-of-the-art performance.
Specifically, compared to MID [14], the proposed leapfrog
diffusion model reduces the average prediction time from
∼886ms to ∼46ms on the NBA dataset, while achieving a
15.6%/13.4% ADE/FDE improvement.
The main contributions are concluded as follows,
•We propose a novel LEapfrog Diffusion model (LED),
which is a denoising-diffusion-based stochastic trajectory
prediction model. It achieves precise and diverse predictions
with fast inference speed.
•We propose a novel trainable leapfrog initializer to
directly model sophisticated denoised distributions, accelerating inference speed, and adaptively allocating the sample
diversity, improving prediction performance.
•We conduct extensive experiments on four datasets
including NBA, NFL, SDD, and ETH-UCY . Results show
that i) our approach consistently achieves state-of-the-art
performance on all datasets; and ii) our method speeds up
the inference by around 20 times compared to the standard
diffusion model, satisfying real-time prediction needs.
|
Phung_Wavelet_Diffusion_Models_Are_Fast_and_Scalable_Image_Generators_CVPR_2023
|
Abstract
Diffusion models are rising as a powerful solution for
high-fidelity image generation, which exceeds GANs in
quality in many circumstances. However, their slow train-
ing and inference speed is a huge bottleneck, blocking them
from being used in real-time applications. A recent Diffu-
sionGAN method significantly decreases the models’ run-
ning time by reducing the number of sampling steps from
thousands to several, but their speeds still largely lag be-
hind the GAN counterparts. This paper aims to reduce
the speed gap by proposing a novel wavelet-based diffu-
sion scheme. We extract low-and-high frequency compo-
nents from both image and feature levels via wavelet decom-
position and adaptively handle these components for faster
processing while maintaining good generation quality. Fur-
thermore, we propose to use a reconstruction term, which
effectively boosts the model training convergence. Exper-
imental results on CelebA-HQ, CIFAR-10, LSUN-Church,
and STL-10 datasets prove our solution is a stepping-stone
to offering real-time and high-fidelity diffusion models. Our
code and pre-trained checkpoints are available at https:
//github.com/VinAIResearch/WaveDiff.git .
|
1. Introduction
Despite being introduced recently, diffusion models have
grown tremendously and drawn many research interests.
Such models revert the diffusion process to generate clean,
high-quality outputs from random noise inputs. These tech-
niques are applied in various data domains and applications
but show the most remarkable success in image-generation
tasks. Diffusion models can beat the state-of-the-art gen-
erative adversarial networks (GANs) in generation quality
on various datasets [4, 37]. More notably, diffusion mod-
els provide a better mode coverage [14, 22, 40] and a flexi-
ble way to handle different types of conditional inputs such
as semantic maps, text, representations, and images [36].
Figure 1. Comparisons between our method and other GAN and
diffusion techniques in terms of FID and sampling time on the
CIFAR-10 (32×32) dataset. Our method is 2.5× faster than
DDGAN [49], the fastest up-to-date diffusion method, and ap-
proaches the real-time speed of StyleGAN methods [19, 20, 56]
while still achieving comparable FID scores.
Thanks to this capability, they offer various applications
such as text-to-image generation, image-to-image transla-
tion, image inpainting, image restoration, and more. Recent
diffusion-based text-to-image generative models [1, 34, 37]
allow users to generate unbelievably realistic images just
by text inputs, opening a new era of AI-based digital art and
promising applications to various other domains.
While showing great potential, diffusion models have a
very slow running speed, a critical weakness blocking them
from being widely adopted like GANs. The foundation
work Denoising Diffusion Probabilistic Models (DDPMs)
[13] requires a thousand sampling steps to produce the de-
sired output quality, taking minutes to generate a single im-
age. Many techniques have been proposed to reduce the
inference time [25, 39], mainly via reducing the sampling
steps. However, the fastest algorithm before Diffusion-
GAN still takes seconds to produce a 32 ×32 image, which
is about 100 times slower than GAN. DiffusionGAN [49]
made a breakthrough in accelerating inference speed by com-
bining Diffusion and GANs in a single system, which ul-
timately reduces the sampling steps to 4 and the inference
time to generate a 32×32 image to a fraction of a second. It
makes DiffusionGAN the fastest existing diffusion model.
Still, it is at least 4 times slower than the StyleGAN coun-
terpart, and the speed gap consistently grows when increas-
ing the output resolution. Moreover, DiffusionGAN still re-
quires a long training time and a slow convergence, confirm-
ing that diffusion models are not yet ready for large-scale or
real-time applications.
This paper aims to bridge the speed gap by introducing
a novel wavelet-based diffusion scheme. Our solution re-
lies on discrete wavelet transform, which decomposes each
input into four sub-bands for low- (LL) and high-frequency
(LH, HL, HH) components. We apply that transform on
both image and feature levels. This allows us to signifi-
cantly reduce both training and inference times while keep-
ing the output quality relatively unchanged. On the image
level, we obtain a high speed boost by reducing the spatial
resolution four times. On the feature level, we stress the im-
portance of wavelet information on different blocks of the
generator. With such a design, we can obtain considerable
performance improvement while inducing only a marginal
computing overhead.
Our proposed Wavelet Diffusion provides state-of-the-
art training and inference speed while maintaining high
generative quality, thoroughly confirmed via experiments
on standard benchmarks including CIFAR-10, STL-10,
CelebA-HQ, and LSUN-Church. Our models significantly
reduce the speed gap between diffusion models and GANs,
targeting large-scale and real-time systems.
In summary, our contributions are as following:
• We propose a novel Wavelet Diffusion framework
that takes advantage of the dimensional reduction
of Wavelet subbands to accelerate Diffusion Models
while maintaining good visual quality of generated re-
sults through high-frequency components.
• We employ wavelet decomposition in both image and
feature space to improve generative models’ robust-
ness and execution speed.
• Our proposed Wavelet Diffusion provides state-of-
the-art training and inference speed, which serves as
a stepping-stone to facilitating real-time and high-
fidelity diffusion models.
|
Qu_Learning_To_Segment_Every_Referring_Object_Point_by_Point_CVPR_2023
|
Abstract
Referring Expression Segmentation (RES) can facili-
tate pixel-level semantic alignment between vision and lan-
guage. Most of the existing RES approaches require mas-
sive pixel-level annotations, which are expensive and ex-
haustive. In this paper, we propose a new partially super-
vised training paradigm for RES, i.e., training using abun-
dant referring bounding boxes and only a few ( e.g., 1%)
pixel-level referring masks. To maximize the transferabil-
ity from the REC model, we construct our model based on
the point-based sequence prediction model. We propose
the co-content teacher-forcing to make the model explicitly
associate the point coordinates (scale values) with the re-
ferred spatial features, which alleviates the exposure bias
caused by the limited segmentation masks. To make the
most of referring bounding box annotations, we further pro-
pose the resampling pseudo points strategy to select more
accurate pseudo-points as supervision. Extensive experi-
ments show that our model achieves 52.06% in terms of ac-
curacy (versus 58.93% in the fully supervised setting) on Re-
fCOCO+@testA, when only using 1% of the mask anno-
tations. Code is available at https://github.com/
qumengxue/Partial-RES.git .
|
1. Introduction
Referring Expression Segmentation (RES) aims to gen-
erate a segmentation mask for the object referred to by the
language expression in the image. It allows pixel-level se-
mantic alignment between language and vision, which is
meaningful to many multi-modal tasks and can be applied
to various practical applications, e.g., video/image editing
with sentences. Benefiting from the development of deep
learning techniques, significant progress [31, 9, 12, 2, 19,
Figure 1. Comparison of MDETR and SeqTR. On the left is the
streamlined framework of the two, and on the right is their IoU
performance on RefCOCOg@val . Best viewed in color.
20, 27, 54, 9] has been made in this field and achieved re-
markable performance.
The success of current methods partially attributes to the
large-scale training dataset with accurate pixel-level masks.
However, labeling masks for a huge amount of images is
often costly in terms of both human effort and finance, and
thus hardly be scaled up. Therefore, it is meaningful to
explore a new training paradigm for RES where extensive
pixel-level annotations are not necessary. It is well known
that bounding box annotations are much cheaper and easier
to be collected compared with pixel-level masks. Thus, we
try to study this question: is it possible to learn an accurate
RES model using abundant box annotations and only a few
mask annotations?
To answer this question, for the first time, we ex-
plore a new partially supervised training paradigm for RES
(Partial-RES). Intuitively, a naive pipeline for this partially
supervised setting is to first train a Referring Expression
Comprehension (REC) model and then transfer it to the
RES task by fine-tuning on a limited number of data with
mask annotations. However, due to the difference between
the REC and RES tasks, prevalent models ( e.g., Mask R-
CNN [14], MDETR [24]) have to simultaneously optimize
two different heads, with one for detection prediction and
the other for segmentation prediction. Such a structure
leads to a severe issue during the fine-tuning
stage, i.e., a randomly initialized mask head cannot get
well optimized given only a few mask annotations (hun-
dreds of images). As shown in Figure 1, it is not easy to
optimize MDETR with only 1% mask-annotated data even
transferred from a well-trained REC model.
Recently, a contour-based method named SeqTR has
been proposed for unifying REC and RES. The idea is to
use a sequence model (usually a Transformer decoder) to
sequentially generate contour points of the referred object.
The predictions are two points (top-left and bottom-right
corner points of the bounding box) in the REC task while
dozens of points (contour) are in the RES task. As shown
in Fig. 1, both boxes and masks are converted to point se-
quences and optimized using the same simple cross-entropy
loss in SeqTR, which ensures the consistency between REC
and RES. It incorporates two optimization heads into a uni-
fied one, making the knowledge learned with four points
(i.e., detection) be naturally transferred to predict multiple
ones ( i.e., segmentation). Inspired by SeqTR, we have the
conjecture that sequence prediction might be a better solu-
tion for partially supervised training.
In this paper, we investigate methods for achieving bet-
ter partially supervised training for RES sequence predic-
tion models. Sequence-to-sequence prediction models are
typically trained with Teacher-Forcing (TF) [39], wherein
the model utilizes the ground truth token as input to pre-
dict the next token during the training stage. However, the
model can only predict the next state by taking its previous
output as input in inference, which is known as the expo-
sure bias [34]. This issue is even more severe in the par-
tially supervised setting due to the very limited ground truth
data. Consequently, we introduce the Co-Content Teacher-
Forcing (CCTF), which combines the ground truth point
coordinates together with the spatial visual feature of the
pointed row or column. In contrast to the previous sequence
model SeqTR, our CCTF explicitly associates the point co-
ordinates (scale values) with the referred spatial region, pro-
viding a more natural approach for visual grounding.
Furthermore, we estimate the referring region via our
proposed Point-Modulated Cross-Attention, to ensure the
decoder attends to those region content while generating the
point contour sequences. To fully utilize the data without
mask annotation in Partial-RES, we retrain the model with
the generated pseudo labels, and we present a Resampling
Pseudo Points (RPP) Strategy. Unlike most pseudo-label
works that directly use the network’s predictions as labels,
we select the appropriate pseudo masks with Dice coeffi-
cient and then resample the predicted points in a uniform way to regularize the contour sequence labels.
With extensive experiments, our method displays signif-
icant improvement compared to SeqTR [54] baselines on
all three benchmarks, i.e., at an average of 3.5%, 2.4%, and
3.0% on 1%, 5% and 10% mask-labeled data. With 10%
mask annotated data, our method achieves 97% of the fully
supervised performance on RefCOCO+@val . We are also
able to achieve 88% of the fully supervised performance
only with 1% mask-labeled data on RefCOCO+@testA .
|
Ning_HOICLIP_Efficient_Knowledge_Transfer_for_HOI_Detection_With_Vision-Language_Models_CVPR_2023
|
Abstract
Human-Object Interaction (HOI) detection aims to lo-
calize human-object pairs and recognize their interactions.
Recently, Contrastive Language-Image Pre-training (CLIP)
has shown great potential in providing interaction prior for
HOI detectors via knowledge distillation. However, such
approaches often rely on large-scale training data and suf-
fer from inferior performance under few/zero-shot scenar-
ios. In this paper, we propose a novel HOI detection frame-
work that efficiently extracts prior knowledge from CLIP
and achieves better generalization. In detail, we first intro-
duce a novel interaction decoder to extract informative re-
gions in the visual feature map of CLIP via a cross-attention
mechanism, which is then fused with the detection backbone
by a knowledge integration block for more accurate human-
object pair detection. In addition, prior knowledge in CLIP
text encoder is leveraged to generate a classifier by embed-
ding HOI descriptions. To distinguish fine-grained interac-
tions, we build a verb classifier from training data via visual
semantic arithmetic and a lightweight verb representation
adapter. Furthermore, we propose a training-free enhance-
ment to exploit global HOI predictions from CLIP . Exten-
sive experiments demonstrate that our method outperforms
the state of the art by a large margin on various settings, e.g.
+4.04 mAP on HICO-Det. The source code is available in
https://github.com/Artanic30/HOICLIP.
|
1. Introduction
Human-Object Interaction (HOI) detection, which aims
to localize human-object pairs and identify their interac-
tions, is a core task towards a comprehensive understanding
of visual scenes. It has attracted increasing interest in recent
years for its key role in a wide range of applications, such
(a) Data Efficiency Comparison
(b) Verb Long-tail Problem
Figure 1. Data efficiency comparison and verb distribution
analysis. In panel (a), we increase training data from 5%to100%
and show the result of HOICLIP and GEN-VLKT. In panel (b),
the dots indicate the mean mAP and length of vertical line indicate
the variance of mAP for verbs grouped by sample number.
as assistive robots, visual surveillance and video analysis
[3,4,9,11]. Thanks to the development of end-to-end object
detectors [5], recent research [23,28,29,37,49] has made re-
markable progress in localizing human-object instances in
interaction. Nonetheless, the problem of identifying inter-
action classes between human-object pairs remains partic-
ularly challenging. Conventional strategies [7, 23, 37, 49]
simply learn a multi-label classifier and typically require
large-scale annotated data for training. As such, they of-
ten suffer from long-tailed class distributions and a lack of
generalization ability to unseen interactions.
Recently, Contrastive Vision-Language Pre-training [33]
has been explored to address such open-vocabulary and
zero-shot learning problems as its learned visual and lin-
guistic representations demonstrate strong transfer ability
in various downstream tasks. In particular, recent work
on open-vocabulary detection utilizes knowledge distilla-
tion to transfer CLIP’s object representation to object detec-
tors [10,12,14,31,45,52]. Such a strategy has been adopted
in the work of HOI detection, including GEN-VLKT [28]
and EoID [43], which leverage CLIP’s knowledge to tackle
the long-tail and zero-shot learning in the HOI tasks.
Despite their promising results, it remains an open ques-
tion how to effectively transfer CLIP knowledge to the HOI
recognition task, which involves compositional concepts com-
posed of visual objects and interactions. First, as pointed
out in [8,35], the commonly-adopted teacher-student distil-
lation objective is not aligned with improving the general-
ization of student models. In addition, as shown in Figure 1,
we empirically observe that the knowledge distillation in
learning HOIs (e.g., GEN-VLKT) typically requires a sub-
stantial amount of training data, which indicates its low data
efficiency . Furthermore, knowledge distillation often suffers
from performance degradation in zero-shot generalization
as it lacks training signal for unseen classes which is critical
to inherit knowledge from the teacher model.
To address those challenges, we propose a novel strat-
egy, dubbed HOICLIP, for transferring CLIP knowledge to
the HOI detection task in this work. Our design ethos is
to directly retrieve learned knowledge from CLIP instead
of relying on distillation and to mine the prior knowledge
from multiple aspects by exploiting the compositional na-
ture of the HOI recognition. Moreover, to cope with the
long-tail and zero-shot learning in verb recognition under
a low-data regime, we develop a verb representation based
on visual semantic arithmetic, which does not require large
amount of training data as in knowledge distillation based
methods. Our methods enable us to improve the data ef-
ficiency in HOI representation learning and achieve better
generalization as well as robustness.
Specifically, our HOICLIP framework learns to retrieve
the prior knowledge from the CLIP model from three as-
pects: 1) Spatial feature . As the feature location is key
to the detection task, we fully exploit the visual represen-
tation in CLIP and extract features only from informative
image regions. To this end, we utilize CLIP’s feature map
with spatial dimensions, and develop a transformer-based
interaction decoder that learns a localized interaction fea-
ture with cross-modal attention. 2) Verb feature . To address
the long-tailed verb-class problem as shown in Figure 1, we
develop a verb classifier focusing on learning a better rep-
resentation for the verbs. Our verb classifier consists of a
verb feature adapter [13,20,36,51] and a set of class weights
computed via visual semantic arithmetic [38]. We enhance
the HOI prediction by fusing the outputs of the verb clas-
sifier and the common interaction classifier. 3) Linguistic
feature . To cope with the very rare and unseen class for
HOI prediction, we adopt a prompt-based linguistic repre-
sentation for HOIs and build a zero-shot classifier for the
HOI classification [42]. This classifier branch requires no
training and we integrate its output with the HOI classifier
during model inference.
We evaluate our HOICLIP on two representative HOI
detection datasets, HICO-DET [6] and V-COCO [15]. To
validate HOICLIP, we perform extensive experiments under
fully-supervised, zero-shot, and data-efficient settings. The experimental results demonstrate the superior-
ity of our methods: HOICLIP achieves competitive per-
formance across all three settings, outperforming previous
state-of-the-art methods on the zero-shot setting by 4.04
mAP and improving the data efficiency significantly.
The main contributions of our paper can be summarized
as follows:
• To our best knowledge, HOICLIP is the first work
to utilize query-based knowledge retrieval for efficient
knowledge transfer from the pre-trained CLIP model
to HOI detection tasks.
• We develop a fine-grained transfer strategy, leveraging
regional visual feature of HOIs via cross attention and
a verb representation via visual semantic arithmetic for
more expressive HOI representation.
• We further improve the performance of HOICLIP by
exploiting zero-shot CLIP knowledge without addi-
tional training.
|
Min_NeurOCS_Neural_NOCS_Supervision_for_Monocular_3D_Object_Localization_CVPR_2023
|
Abstract
Monocular 3D object localization in driving scenes is a
crucial task, but challenging due to its ill-posed nature. Esti-
mating 3D coordinates for each pixel on the object surface
holds great potential as it provides dense 2D-3D geomet-
ric constraints for the underlying PnP problem. However,
high-quality ground truth supervision is not available in
driving scenes due to sparsity and various artifacts of Li-
dar data, as well as the practical infeasibility of collecting
per-instance CAD models. In this work, we present Neu-
rOCS, a framework that uses instance masks and 3D boxes
as input to learn 3D object shapes by means of differentiable
rendering, which further serves as supervision for learning
dense object coordinates. Our approach rests on insights
in learning a category-level shape prior directly from real
driving scenes, while properly handling single-view ambi-
guities. Furthermore, we study and make critical design
choices to learn object coordinates more effectively from
an object-centric view. Altogether, our framework leads to
new state-of-the-art in monocular 3D localization that ranks
1st on the KITTI-Object [ 16] benchmark among published
monocular methods.
|
1. Introduction
Localization of surrounding vehicles in 3D space is an
important problem in autonomous driving. While Lidar
[32,59,72] and stereo [ 32,59,72] methods have achieved
strong performances, monocular 3D localization remains a
challenge despite recent progress in the field [ 4,35,43].
Monocular 3D object localization can be viewed as a
form of 3D reconstruction, with the goal to estimate the 3D
extent of an object from single images. While single-view
3D reconstruction is challenging due to its ill-posed nature,
learned priors combined with differentiable rendering [ 65]
have recently emerged as a powerful technique, which has
the potential to improve 3D localization as well. Indeed, re-
searchers [ 2,31,78,79] have applied it as a means for object
pose optimization in 3D localization. However, difficulties
remain due to pose optimization being ambiguous under
Figure 1. NeurOCS learns a category-level shape model in real driv-
ing scenes to provide dense and clean NOCS supervision through
differentiable rendering, leading to new state-of-the-art 3D object
localization performance.
challenging photometric conditions (such as textureless vehi-
cle surfaces) and geometric occlusions ubiquitous in driving
scenes. The question of how differentiable rendering may
be best explored in 3D localization remains under-studied.
Our work proposes a framework to unleash its potential – in-
stead of using differentiable rendering for pose optimization,
we use it with annotated ground truth pose to provide high-
quality supervision for image-based shape learning, leading
to a new state-of-the-art in 3D localization performance.
Our framework relies on the machinery of Perspective-
n-Point (PnP) [ 33] pose estimation, which uses 2D-3D con-
straints to explicitly leverage geometric principles that lend
itself well to generalization. In particular, learning 3D ob-
ject coordinates for every visible pixel on the object surface,
known as normalized object coordinate space (NOCS) [ 66],
provides a dense set of constraints. However, despite NOCS-
based pose estimation dominating indoor benchmarks [ 21],
their use in real driving scenes has been limited primarily
due to lack of supervision – it is nontrivial to obtain accurate
per-instance reconstructions or CAD models in road scenes.
Lidar or its dense completion [ 23] are natural alternatives
as pseudo ground truth [ 7], but they are increasingly sparse
or noisy on distant objects with reflective surfaces. Syn-
thetic data is a potential source of supervision [ 78,79], but
is inherently restricted by domain gap to real scenes.
In this work, we propose NeurOCS that leverages neural
rendering to obtain effective NOCS supervision. Firstly, we
propose to learn category-level shape reconstruction from
real driving scenes with object masks and 3D object boxes
using Neural Radiance Field (NeRF) [ 44]. The shape re-
construction is then rendered into NOCS maps that serve as
the pseudo ground-truth for a network dedicated to regressing
NOCS from images. Specifically, NeurOCS learns category-
level shape representation as a latent grid [ 50,51] with low-
rank structure, consisting of a canonical latent grid plus
several deformation bases to account for instance variations.
With single-view ambiguities handled by a KL regulariza-
tion [ 26] and dense shape prior, we show that the NOCS
supervision so obtained yields strong 3D localization per-
formance, even when the shape model is trained without
using Lidar data or any CAD models. Our NOCS supervi-
sion is illustrated in Fig. 1in comparison with Lidar and
its dense completion. We also note that NeRF rendering is
only required during training, without adding computational
overhead to inference. We show NeurOCS is complementary
to direct 3D box regression [ 43,55], and their fusion further
boosts the performance.
Further, we study crucial design choices in image-
conditioned NOCS regression. For example, as opposed to
learning NOCS in a scene-centric manner with the full image
as network input, we learn in an object-centric view by crop-
ping objects without scene context, which is demonstrated
to especially benefit the localization of distant or occluded
objects. Our extensive experiments study key choices that en-
able NeurOCS to achieve top-ranked accuracy for monocular
3D localization on the KITTI-Object benchmark [ 16].
In summary, our contributions include:
•We propose a framework to obtain neural NOCS supervi-
sion through differentiable rendering of the category-level
shape representation learned in real driving scenes.
•We drive the learning with deformable shape representa-
tion as latent grids with careful regularizations, as well as
effective NOCS learning from an object-centric view.
•Our insights translate to state-of-the-art performance
which achieves a top rank in KITTI benchmark [ 16].
|
Mohammadi_Ranking_Regularization_for_Critical_Rare_Classes_Minimizing_False_Positives_at_CVPR_2023
|
Abstract
In many real-world settings, the critical class is rare and
a missed detection carries a disproportionately high cost.
For example, tumors are rare and a false negative diagno-
sis could have severe consequences on treatment outcomes;
fraudulent banking transactions are rare and an undetected
occurrence could result in significant losses or legal penal-
ties. In such contexts, systems are often operated at a high
true positive rate, which may require tolerating high false
positives. In this paper, we present a novel approach to ad-
dress the challenge of minimizing false positives for systems
that need to operate at a high true positive rate. We propose
a ranking-based regularization ( RankReg ) approach that is
easy to implement, and show empirically that it not only ef-
fectively reduces false positives, but also complements con-
ventional imbalanced learning losses. With this novel tech-
nique in hand, we conduct a series of experiments on three
broadly explored datasets (CIFAR-10&100 and Melanoma)
and show that our approach lifts the previous state-of-the-
art performance by notable margins.
|
1. Introduction
The cost of error is often asymmetric in real-world sys-
tems that involve rare classes or events. For example, in
medical imaging, incorrectly diagnosing a tumor as benign
(a false negative) could lead to cancer being detected later
at a more advanced stage, when survival rates are much
worse. This would be a higher cost of error than incor-
rectly diagnosing a benign tumour as potentially cancerous
(a false positive). In banking, misclassifying a fraudulent
transaction as legitimate may be more costly in terms of
financial losses or legal penalties than misclassifying a le-
gitimate transaction as fraudulent (a false positive). In both
of these examples, the critical class is rare, and a missed de-
tection carries a disproportionately high cost. In such situa-
tions, systems are often operated at high true positive rates,
Figure 1. Which optimization option is preferred in operational
contexts with critical positives? We propose a novel regularizer for
systems that need to operate at a high true positive rate (TPR). Our
approach prioritizes reducing false positives at a high TPR when
presented with different options that equally improve the base ob-
jective, e.g. area under the ROC curve (AUC). In this toy example,
option 2 is preferred because, with a suitable threshold depicted
by the dashed line, all positives can be detected (100% TPR) with
only one false positive ( i.e., 25% FPR at 100% TPR), better than
option 1 where two false positives need be tolerated. Our regular-
izer is consistent with this preference: ℓ_reg is lower for option 2
than option 1 ( i.e., 13 vs. 17).
even though this may require tolerating high false positive
rates. Unfortunately, false positives can undermine user
confidence in the system and responding to them could in-
cur other costs (e.g. additional medical imaging tests).
In this paper, we present a novel approach to address
the challenge of minimizing false positives for systems that
need to operate at a high true positive rate. Surprisingly, this
high-stakes operational setting has rarely been studied by
the research community. In contrast to conventional imbal-
anced classification methods, we propose a general method
for inducing a deep neural network to prioritize the reduc-
tion of false positives at a high true positive rate. To re-
main as broadly applicable as possible, we make minimal
assumptions on the architecture and optimization details of
the deep neural network. Our key insight is that the false
positive rate at a high true positive rate is determined by how
the least confident positives are ranked by the network. Our
plug-and-play solution adds a simple yet effective ranking-
based regularization term to the usual neural network train-
ing objective. The regularization term places an increasing
penalty on positive samples the lower they are ranked in
a sorted list of the network’s classification scores, which
works to push up the scores of the hardest positives.
Contributions. The main contributions of this paper are as
follows:
• We present a novel plug-and-play regularization term
that induces a deep neural network to prioritize the re-
duction of false positives in operational contexts where
a high true positive rate is required.
• Our regularizer is generic and can be easily combined
with other methods for imbalanced learning.
• We conduct extensive experiments on three public
benchmarks to show how the proposed regularization
term is complementary to conventional imbalanced
learning losses, and achieves state-of-the-art perfor-
mance in the high true positive rate operational setting.
|
Ni_NUWA-LIP_Language-Guided_Image_Inpainting_With_Defect-Free_VQGAN_CVPR_2023
|
Abstract
Language-guided image inpainting aims to fill the defec-
tive regions of an image under the guidance of text while
keeping the non-defective regions unchanged. However, di-
rectly encoding the defective images is prone to have an
adverse effect on the non-defective regions, giving rise to dis-
torted structures on non-defective parts. To better adapt
the text guidance to the inpainting task, this paper pro-
poses NÜWA-LIP, which involves defect-free VQGAN (DF-
VQGAN) and a multi-perspective sequence-to-sequence mod-
ule (MP-S2S). To be specific, DF-VQGAN introduces relative
estimation to carefully control the receptive spreading, as
well as symmetrical connections to protect structure details
unchanged. For harmoniously embedding text guidance into
the locally defective regions, MP-S2S is employed by ag-
gregating the complementary perspectives from low-level
pixels, high-level tokens as well as the text description. Ex-
periments show that our DF-VQGAN effectively aids the
inpainting process while avoiding unexpected changes in
non-defective regions. Results on three open-domain bench-
marks demonstrate the superior performance of our method
against state-of-the-arts. Our code, datasets, and model will
be made publicly available1.
|
1. Introduction
The task of image inpainting, which aims to fill missing
pixels in the defective regions with photo-realistic struc-
tures, is as ancient as art itself [2]. Despite its practical
applications [18, 32, 34], such as image manipulation, image
completion, and object removal, the task poses significant
challenges, including the effective extraction of valid fea-
tures from defective input and the generation of semantically
consistent results.
With the remarkable success of vision-language learning,
language-guided image inpainting has become a promis-
ing topic [24, 28, 33], which enables the generation of con-
trollable results with the guidance of text description (see
the completed results in Fig. 1). Recently, multimodal pre-
[Figure 1 panels: a defective image and completed results for prompts such as "Birds", "A small bear eating apples", "Cute teddy bear laying on the ground", "A laptop somebody lost", "A black dog with red necklace in the sun", and "Grass land without anything".]
Figure 1. Language-guided inpainting results via NÜWA-LIP.
Text descriptions provide effective guidance for inpainting the de-
fective image with desired objects. More examples are in the suppl.
training methods based on diffusion and autoregressive mod-
els have exhibited impressive capabilities in synthesizing
various and photo-realistic images, such as Stable Diffu-
sion [21], Parti [29] and NÜWA [25]. In particular, NÜWA
has demonstrated a promising capability for language-guided
image generation, suggesting the potential for combining
this pre-training schema with VQVAE and Transformer for
language-guided image inpainting.
It is worth noting that this work focuses on addressing the
challenge of processing defective images with some regions
filled with zeros (see the first image in Fig. 1). This setting
has been widely adopted in previous works [24, 28, 33] and
is consistent with real-world corrupted image inpainting. To
learn unified representations of the vision and language, it
is crucial to ensure that the representation of non-defective
regions is accurate and remains unaffected by defective parts.
This requirement exacerbates the challenges associated with
our method, as it involves the effective extraction of valid
features from defective input.
However, existing pre-trained image generative mod-
els [21, 25, 29] are usually trained on non-defective images.
When used in the image inpainting task, these models en-
code the whole image and fuse the features from defective
regions into the representations of non-defective parts, re-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
14183
[Figure 2 diagram: with relative estimation, the average distance to GT tokens drops from 16.93 to 3.52 (no receptive spreading); with symmetrical connections, the average FID drops from 2.04 to 0.80 (no information losing).]
Figure 2. Illustration of Receptive Spreading and Information Losing. As for receptive spreading, we can observe an obvious change and
a blurry boundary on the reconstructed non-defective region. In terms of information losing, some unexpected changes are also apparent in
the output of the non-defective regions.
sulting in what we call receptive spreading of the defective
region. These types of approaches can severely limit the ac-
curacy of modeling non-defective regions, especially when
the defective region is large, and make it difficult to match
language descriptions. Furthermore, in the VQVAE-based
model, the known non-defective regions are challenging to
reconstruct exactly the same as the original image due to the
compression of the image into discrete tokens. We refer to
this as information losing in the non-defective region (see
Fig. 2 for an illustration).
To enable effective adaptation of text descriptions to these
types of defective images, we propose NÜWA-LIP, which
leverages a novel defect-free VQGAN (DF-VQGAN) and a
multi-perspective sequence-to-sequence module (MP-S2S).
In contrast to VQGAN [8], DF-VQGAN incorporates the
relative estimation to decouple defective and non-defective
regions. This helps to control receptive spreading and ob-
tain accurate visual representations for vision-and-language
(VL) learning in MP-S2S. To retain the information of non-
defective regions, symmetrical connections replenish the
lost information from the features in the encoding procedure.
Additionally, MP-S2S further enhances visual information
from complementary perspectives, including low-level pix-
els, high-level tokens, and the text description.
Moreover, we construct three datasets to evaluate the per-
formance of language-guided image inpainting and conduct
a comprehensive comparison with existing methods. Experi-
ments demonstrate that N ¨UWA-LIP outperforms competing
methods by a significant margin on all three benchmarks.
An ablation study is further conducted to evaluate the effec-
tiveness of each component in our N ¨UWA-LIP.
The main contributions are summarized as follows:
•To effectively encode the defective input, we propose a
DF-VQGAN, which introduces relative estimation to
control receptive spreading and symmetrical connec-
tions to retain the information of non-defective regions.
•We propose a multi-perspective sequence-to-sequence
module for enhancing visual information from comple-
mentary perspectives of pixel, token, and text domains.
•We build three open-domain datasets for evaluating
language-guided image inpainting. Experiments show that NÜWA-LIP achieves state-of-the-art performance
in comparison with the competing methods.
|
Li_Spatial-Then-Temporal_Self-Supervised_Learning_for_Video_Correspondence_CVPR_2023
|
Abstract
In low-level video analyses, effective representations are
important to derive the correspondences between video
frames. These representations have been learned in a self-
supervised fashion from unlabeled images or videos, us-
ing carefully designed pretext tasks in some recent studies.
However, the previous work concentrates on either spatial-
discriminative features or temporal-repetitive features, with
little attention to the synergy between spatial and tempo-
ral cues. To address this issue, we propose a spatial-then-
temporal self-supervised learning method. Specifically, we
firstly extract spatial features from unlabeled images via
contrastive learning, and secondly enhance the features by
exploiting the temporal cues in unlabeled videos via recon-
structive learning. In the second step, we design a global
correlation distillation loss to ensure the learning not to
forget the spatial cues, and a local correlation distillation
loss to combat the temporal discontinuity that harms the re-
construction. The proposed method outperforms the state-
of-the-art self-supervised methods, as established by the
experimental results on a series of correspondence-based
video analysis tasks. Also, we performed ablation studies
to verify the effectiveness of the two-step design as well as
the distillation losses.
|
1. Introduction
Learning representations for video correspondence is a
fundamental problem in computer vision, which is closely
related to different downstream tasks, including optical flow
estimation [10, 17], video object segmentation [3, 37], key-
point tracking [53], etc. However, supervising such a rep-
resentation requires a large number of dense annotations,
which is unaffordable. Thus, most approaches acquire
information from simulations [10, 33] or limited annota-
tions [39, 55], which result in poor generalization in dif-
[Figure 1 columns: query point, spatial, spatial-then-temporal, and temporal features.]
Figure 1. Matching results given the query point. While
temporal-repetitive features fail to handle video correspondence
with dramatic appearance changes and deformations (the first two
rows), spatial-discriminative features are incompetent to recognize
the temporal repetition and are misled by distractors with similar
appearance (the last two rows). The green/red bounding boxes in-
dicate correct/wrong matching. ( Zoom in for best view )
ferent downstream tasks. Recently, self-supervised feature
learning is gaining significant momentum. Several pretext
tasks [15, 19, 24, 27, 48, 52] are designed, mostly concen-
trating on either spatial feature learning or temporal feature
learning for space-time visual correspondence.
With the objective of learning the representations that
are invariant to the appearance changes, spatial feature
learning provides video correspondence with discrimina-
tive and robust appearance cues, especially when facing
severe temporal discontinuity, i.e., occlusions, appearance
changes, and deformations. Most recently, as mentioned
in [50], the contrastive models [6,15,52] pre-trained on im-
age data show competitive performance against dedicated
methods for video correspondence. Thus, it is shown to
be a better way for learning spatial representations in terms
of quality and data efficiency, compared with the meth-
ods [19, 27, 40, 47, 54, 59] using large-scale video datasets
for training. Despite these promising results, as shown in
Figure 1, the learned spatial representations are misled by
distractors with similar appearance, which indicates a
poor ability to recognize temporal patterns.
In another line, temporal feature learning focuses on
learning the temporal-repetitive features that occur consis-
tently over time. With the temporal consistency assump-
tion [2], the pixel repetition across video motivates recent
studies to exploit the temporal cues via a reconstruction
task [24, 30, 34, 58], where the query pixel in the target
frame can be reconstructed by leveraging the information
of adjacent reference frames within a local range. Then a
reconstruction loss is applied to minimize the photometric
error between the raw frame and its reconstruction. Nev-
ertheless, temporal-repetitive features highly depend on the
consistency of the pixels across the video. It thus can be
easily influenced by the temporal discontinuity caused by
the dramatic appearance changes, occlusions, and deforma-
tions (see Figure 1).
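To make the reconstruction pretext concrete, the sketch below shows one common instantiation of this idea, in which each target-frame pixel is reconstructed as an attention-weighted copy of reference-frame pixels and a photometric loss penalizes the error; the tensor names, shapes, and the temperature value are illustrative assumptions rather than the exact formulation used in this paper.

```python
# Minimal sketch of reconstruction-based temporal feature learning (PyTorch).
# feat_tgt / feat_ref are encoder feature maps, img_tgt / img_ref the raw
# frames downsampled to the same spatial resolution as the features.
import torch
import torch.nn.functional as F

def frame_reconstruction_loss(feat_tgt, feat_ref, img_tgt, img_ref, temperature=0.07):
    """feat_*: (B, C, H, W) features; img_*: (B, 3, H, W) aligned frames."""
    B, C, H, W = feat_tgt.shape
    q = F.normalize(feat_tgt.flatten(2), dim=1)            # (B, C, HW) queries from the target frame
    k = F.normalize(feat_ref.flatten(2), dim=1)            # (B, C, HW) keys from the reference frame
    v = img_ref.flatten(2)                                  # (B, 3, HW) values: reference-frame colors

    affinity = torch.bmm(q.transpose(1, 2), k) / temperature     # (B, HW, HW) feature similarities
    attn = affinity.softmax(dim=-1)                               # each target pixel attends to reference pixels
    recon = torch.bmm(v, attn.transpose(1, 2)).view(B, 3, H, W)   # copy reference colors to rebuild the target

    return F.l1_loss(recon, img_tgt)                              # photometric reconstruction error
```

In practice the attention is usually restricted to a local window around each query pixel, which is what makes such a loss sensitive to severe temporal discontinuity.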
In light of the above observation, we believe the video
correspondence relies on both spatial-discriminative and
temporal-repetitive features. Thus, we propose a novel
spatial-then-temporal pretext task to achieve synergy be-
tween spatial and temporal cues. Specifically, we firstly
learn spatial features from unlabeled images via contrastive
learning and secondly improve the features by exploiting
the temporal repetition in unlabeled videos with frame re-
construction. While such an implementation brings to-
gether the advantages of spatial and temporal feature learn-
ing, there are still some problems. First of all, the previous
studies [18, 19, 27, 47, 54] propose to learn coarse-grained
features for video correspondence. With the objective of
reconstructive learning, the video frames need to be down-
sampled to align with the coarse-grained features [24], re-
sulting in severe temporal discontinuity for the pixels to be
reconstructed. Thus, the frame reconstruction loss becomes
invalid. Second, in the context of sequential training, di-
rectly training with only new data and objective functions
will degrade the discriminative features learned before.
To tackle the first problem, we firstly exploit temporal
cues by frame reconstruction at different pyramid levels
of the encoder. We observe that temporal repetition ben-
efits from a relatively small down-sampling rate. Hence,
the model will learn better temporal-persistent features at
the fine-grained pyramid level. To distill the knowledge
from it, we design a local correlation distillation loss that
supports explicit learning of the final correlation map in the
region with high uncertainty, which is achieved by taking
the more fine-grained local correlation map as pseudo la-
bels. This leads to better temporal representations on the
coarse feature map. At the same time, we regard the model
pre-trained in the first step as the teacher. Then a global correlation distillation loss is proposed to retain the spatial
cues. Eventually, we can obtain better temporal represen-
tations without losing the discriminative appearance cues
acquired in the first step.
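The two distillation terms can be sketched as follows; the KL form, the entropy-based selection of uncertain queries, and the assumption that the fine- and coarse-level correlation maps have been resampled to a common size are simplifications for illustration, not the paper's exact losses.

```python
# Illustrative sketch of global / local correlation distillation (PyTorch).
# corr_* are soft correlation (affinity) maps of shape (B, N, M): N query
# positions in the target frame, M reference positions, rows summing to 1.
import torch

def kl_rows(p_teacher, p_student, eps=1e-8):
    """Row-wise KL(teacher || student) between two soft correlation maps, shape (B, N)."""
    return (p_teacher * (torch.log(p_teacher + eps) - torch.log(p_student + eps))).sum(-1)

def global_correlation_distillation(corr_student, corr_teacher_step1):
    # Retain the spatial cues: match the student's correlation map to that of
    # the frozen model pre-trained in the first (spatial) step.
    return kl_rows(corr_teacher_step1, corr_student).mean()

def local_correlation_distillation(corr_coarse, corr_fine, top_ratio=0.25):
    # Use the fine-grained correlation map as a pseudo label, but only for the
    # queries where the coarse-level prediction is most uncertain.
    entropy = -(corr_coarse * torch.log(corr_coarse + 1e-8)).sum(-1)     # (B, N)
    k = max(1, int(top_ratio * entropy.shape[1]))
    idx = entropy.topk(k, dim=1).indices                                 # most uncertain queries
    per_query = kl_rows(corr_fine.detach(), corr_coarse)                 # (B, N)
    return per_query.gather(1, idx).mean()
```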
To sum up, our main contributions include: (i) We pro-
pose a spatial-then-temporal pretext task for self-supervised
video correspondence, which achieves synergy between
spatially discriminative and temporally repetitive features.
(ii) We propose the local correlation distillation loss to fa-
cilitate the learning of temporal features in the second step
while retaining the appearance sensitivity learned in the
first step by the proposed global correlation distillation loss.
(iii) We verify our approach in a series of correspondence-
related tasks. Our approach consistently outperforms pre-
vious state-of-the-art self-supervised methods and is even
comparable with task-specific fully-supervised algorithms.
|
Meunier_Unsupervised_Space-Time_Network_for_Temporally-Consistent_Segmentation_of_Multiple_Motions_CVPR_2023
|
Abstract
Motion segmentation is one of the main tasks in com-
puter vision and is relevant for many applications. The op-
tical flow (OF) is the input generally used to segment every
frame of a video sequence into regions of coherent motion.
Temporal consistency is a key feature of motion segmenta-
tion, but it is often neglected. In this paper, we propose an
original unsupervised spatio-temporal framework for mo-
tion segmentation from optical flow that fully investigates
the temporal dimension of the problem. More specifically,
we have defined a 3D network for multiple motion segmen-
tation that takes as input a sub-volume of successive opti-
cal flows and delivers accordingly a sub-volume of coher-
ent segmentation maps. Our network is trained in a fully
unsupervised way, and the loss function combines a flow
reconstruction term involving spatio-temporal parametric
motion models, and a regularization term enforcing tempo-
ral consistency on the masks. We have also devised a simple
temporal linkage of the predicted segments. In addition, we
have proposed a flexible and efficient way of coding U-nets.
We report experiments on several VOS benchmarks with
convincing quantitative results, while not using appearance
and not training with any ground-truth data. We also high-
light through visual results the distinctive contribution of
the short- and long-term temporal consistency brought by
our OF segmentation method.
|
1. Introduction
Motion segmentation is a key topic in computer vision
that arises as soon as videos are processed. It may be a
goal in itself. More frequently, it is a prerequisite for dif-
ferent objectives as independent moving object detection,
object tracking, or motion recognition, to name a few. It is
also widely leveraged in video object segmentation (VOS),
but most often coupled with appearance. Motion segmen-
tation is supposed to rely on optical flow as input. Indeed,
the optical flow carries all the information on the movement between two successive images of the video.
Clearly, the motion segmentation problem has a strong
temporal dimension, as motion is generally consistent
throughout the video, at least in part of it within video shots.
The use of one optical flow field at each given time instant
may be sufficient to get the segmentation at frame tof the
video. However, extending the temporal processing win-
dow can be beneficial. Introducing temporal consistency
in the motion segmentation framework is certainly useful
from an algorithmic perspective: it may allow to correct lo-
cal errors or to predict the segmentation map at the next
time instant. Beyond that, temporal consistency is an in-
trinsic property of motion that is essential to involve in the
formulation of the motion segmentation problem.
In this paper, we propose an original method for multiple
motion segmentation from optical flow, exhibiting temporal
consistency, while ensuring accuracy and robustness. To the
best of our knowledge, our optical flow segmentation (OFS)
method is the first one to involve short- and long-term tem-
poral consistency. We are considering a fully unsupervised
method, which avoids tedious or even infeasible man-
ual annotation and provides better generalization across
any type of video sequence.
The main contributions of our work are as follows. We
adopt an explicit space-time approach. More specifically,
our network takes as input a sub-volume of successive opti-
cal flows and delivers accordingly a sub-volume of coherent
segmentation maps. Our network is trained in a completely
unsupervised manner, without any manual annotation or
ground truth data of any kind. The loss function combines
a flow reconstruction term involving spatio-temporal para-
metric motion models defined over the flow sub-volume,
and a regularization term enforcing temporal consistency on
the masks of the sub-volume. Our method also introduces a
latent representation of each segment motion and enables an
easy temporal linkage between predictions. In addition, we
have designed a flexible and efficient coding of U-nets.
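For illustration, a loss of this general shape can be sketched as below, assuming the network outputs soft masks over K motion segments for a sub-volume of T flows and that a parametric (e.g., affine or quadratic) flow has already been evaluated for each segment; variable names, shapes, and the weighting are hypothetical.

```python
# Sketch of an unsupervised space-time motion-segmentation loss (PyTorch).
# masks:      (B, K, T, H, W) soft segment masks, summing to 1 over K
# flow:       (B, 2, T, H, W) input optical-flow sub-volume
# param_flow: (B, K, 2, T, H, W) flow generated by the parametric motion model of each segment
import torch

def motion_segmentation_loss(masks, flow, param_flow, lambda_tc=0.1):
    # Flow-reconstruction term: each pixel's flow should be explained by the
    # parametric motion model of the segment it is (softly) assigned to.
    residual = (flow.unsqueeze(1) - param_flow).abs().sum(dim=2)    # (B, K, T, H, W)
    recon = (masks * residual).sum(dim=1).mean()

    # Temporal-consistency term: masks should vary smoothly between
    # consecutive time steps of the sub-volume.
    temporal = (masks[:, :, 1:] - masks[:, :, :-1]).abs().mean()

    return recon + lambda_tc * temporal
```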
The rest of the paper is organized as follows. Section 2 is
devoted to related work. In Section 3, we describe our unsu-
pervised 3D network for multiple motion segmentation em-
bedding temporal consistency. Section 4 collects details on
our implementation. In Section 5, we report results on sev-
eral VOS benchmarks with a comparison to several existing
methods. Finally, Section 6 contains concluding remarks.
|
Lu_Markerless_Camera-to-Robot_Pose_Estimation_via_Self-Supervised_Sim-to-Real_Transfer_CVPR_2023
|
Abstract
Solving the camera-to-robot pose is a fundamental re-
quirement for vision-based robot control, and is a process
that takes considerable effort and care to make accurate.
Traditional approaches require modification of the robot via
markers, and subsequent deep learning approaches enabled
markerless feature extraction. Mainstream deep learning
methods only use synthetic data and rely on Domain Ran-
domization to fill the sim-to-real gap, because acquiring
the 3D annotation is labor-intensive. In this work, we
go beyond the limitation of 3D annotations for real-world
data. We propose an end-to-end pose estimation framework
that is capable of online camera-to-robot calibration and
a self-supervised training method to scale the training to
unlabeled real-world data. Our framework combines deep
learning and geometric vision for solving the robot pose,
and the pipeline is fully differentiable. To train the Camera-
to-Robot Pose Estimation Network (CtRNet), we leverage
foreground segmentation and differentiable rendering for
image-level self-supervision. The pose prediction is visu-
alized through a renderer and the image loss with the input
image is back-propagated to train the neural network. Our
experimental results on two public real datasets confirm the
effectiveness of our approach over existing works. We also
integrate our framework into a visual servoing system to
demonstrate the promise of real-time precise robot pose es-
timation for automation tasks.
|
1. Introduction
The majority of modern robotic automation utilizes cam-
eras for rich sensory information about the environment
to infer tasks to be completed and provide feedback for
closed-loop control. The leading paradigm for converting
the valuable environment information to the robot’s frame
of reference for manipulation is position-based visual ser-
voing (PBVS) [4]. At a high level, PBVS converts 3D en-
vironmental information inferred from the visual data (e.g.
Figure 1. Comparison of speed and accuracy (based on AUC metric) for existing image-based robot pose estimation methods: Aruco Marker, DREAM-F, DREAM-H, DREAM-Q, Opt. Keypoints, RoboPose, RoboPose (online), Diff. Rendering, and Ours.
the pose of an object to be grasped) and transforms it to
the robot coordinate frame where all the robot geometry is
known (e.g. kinematics) using the camera-to-robot pose.
Examples of robotic automation using the PBVS range from
bin sorting [35] to tissue manipulation in surgery [31].
Calibrating camera-to-robot pose typically requires a
significant amount of care and effort. Traditionally, the
camera-to-robot pose is calibrated with externally attached
fiducial markers ( e.g. Aruco Marker [14], AprilTag [38]).
The 2D location of the marker can be extracted from the im-
age and the corresponding 3D location on the robot can be
calculated with forward kinematics. Given a set of 2D-3D cor-
respondences, the camera-to-robot pose can be solved using
Perspective-n-Point (PnP) methods [13, 30]. The procedure
usually requires multiple runs with different robot configu-
rations and, once calibrated, the robot base and the camera
are assumed to be static. The inability to calibrate online
limits the potential applications of vision-based robot con-
trol in the real world, where minor bumps or simply shifting
due to repetitive use will cause calibrations to be thrown off,
not to mention that real-world environmental factors like vibra-
tion, humidity, and temperature are non-constant. Having
flexibility on the camera and robot is more desirable so that
the robot can interact with an unstructured environment.
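As a concrete reference for this classical pipeline, the sketch below recovers the camera-to-robot transform from a few marker correspondences with OpenCV's PnP solver; the marker detections, the forward-kinematics output, and the intrinsics are assumed to be given.

```python
# Sketch of classical marker-based camera-to-robot calibration via PnP (OpenCV).
# pts_2d: detected marker centers in the image (pixels); pts_3d: the same
# points expressed in the robot base frame, e.g. from forward kinematics.
import cv2
import numpy as np

def solve_camera_to_robot(pts_3d, pts_2d, K, dist_coeffs=None):
    """pts_3d: (N, 3) array in the robot frame; pts_2d: (N, 2) pixel coordinates; K: 3x3 intrinsics."""
    dist_coeffs = np.zeros(5) if dist_coeffs is None else dist_coeffs
    ok, rvec, tvec = cv2.solvePnP(pts_3d.astype(np.float64), pts_2d.astype(np.float64),
                                  K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)               # rotation mapping robot-frame points into the camera frame
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()    # homogeneous camera-from-robot transform
    return T
```

With enough well-distributed, non-degenerate correspondences the transform is recovered up to measurement noise, but it must be recomputed whenever the camera or robot base moves, which is exactly the limitation discussed above.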
Deep learning, known as the current state-of-the-art ap-
proach for image feature extraction, brings promising ways
for markerless camera-to-robot calibration. Current ap-
proaches to robot pose estimation are mainly classified into
two categories: keypoint-based methods [27–29,34,43] and
rendering-based methods [16,26]. Keypoint-based methods
are the most popular approach for pose estimation because
of the fast inference speed. However, the performance is
limited to the accuracy of the keypoint detector which is
often trained in simulation such that the proposed methods
can generalize across different robotic designs. Therefore,
the performance is ultimately hampered by the sim-to-real
gap, which is a long-standing challenge in computer vision
and robotics [55].
Rendering-based methods can achieve better perfor-
mance by using the shape of the entire robot as observation,
which provides dense correspondence for pose estimation.
The approaches in this category usually employ an iterative
refinement process and require a reasonable initialization
for the optimization loop to converge [32]. Because iteratively
rendering and comparing is time- and energy-
consuming, rendering-based methods are more suitable for
offline estimation where the robot and camera are held sta-
tionary. In more dynamic scenarios, such as a mobile robot,
the slow computation time makes the rendering-based meth-
ods impractical to use.
In this work, we propose CtRNet, an end-to-end frame-
work for robot pose estimation which, at inference, uses
keypoints for the fast inference speed and leverages the
high performance of rendering-based methods for training
to overcome the sim-to-real gap previous keypoint-based
methods faced. Our framework contains a segmentation
module to generate a binary mask of the robot and a keypoint
detection module that extracts point features for pose es-
timation. Since segmenting the robot from the background
is a simpler task than estimating the robot pose and localiz-
ing point features on robot body parts, we leverage fore-
ground segmentation to provide supervision for the pose
estimation. Toward this direction, we first pretrained the
network on synthetic data, which should have acquired es-
sential knowledge about segmenting the robot. Then, a
self-supervised training pipeline is proposed to transfer our
model to the real world without manual labels. We connect
the pose estimation to foreground segmentation with a dif-
ferentiable renderer [24,33]. The renderer generates a robot
silhouette image of the estimated pose and directly com-
pares it to the segmentation result. Since the entire frame-
work is differentiable, the parameters of the neural network
can be optimized by back-propagating the image loss.
Contributions. Our main contribution is the novel
framework for image-based robot pose estimation together
with a scalable self-training pipeline that utilizes unlim-
ited real-world data to further improve the performance
without any manual annotations. Since the keypoint de-
tector is trained with image-level supervision, we effec-
tively encompass the benefits from both keypoint-based and
rendering-based methods, where previous methods were di-
vided. As illustrated in the Fig. 1, our method maintains
high inference speed while matching the performance of the
rendering-based methods. Moreover, we integrate the CtR-
Net into a robotic system for PBVS and demonstrate the
effectiveness on real-time robot pose estimation.
|
Pathak_Sequential_Training_of_GANs_Against_GAN-Classifiers_Reveals_Correlated_Knowledge_Gaps_CVPR_2023
|
Abstract
Modern Generative Adversarial Networks (GANs) gen-
erate realistic images remarkably well. Previous work has
demonstrated the feasibility of “GAN-classifiers” that are
distinct from the co-trained discriminator, and operate on
images generated from a frozen GAN. That such classifiers
work at all affirms the existence of “knowledge gaps” (out-of-
distribution artifacts across samples) present in GAN train-
ing. We iteratively train GAN-classifiers and train GANs that
“fool” the classifiers (in an attempt to fill the knowledge gaps),
and examine the effect on GAN training dynamics, output
quality, and GAN-classifier generalization. We investigate
two settings, a small DCGAN architecture trained on low
dimensional images (MNIST), and StyleGAN2, a SOTA GAN
architecture trained on high dimensional images (FFHQ).
We find that the DCGAN is unable to effectively fool a held-
out GAN-classifier without compromising the output quality.
However, the StyleGAN2 can fool held-out classifiers with
no change in output quality, and this effect persists over
multiple rounds of GAN/classifier training which appears to
reveal an ordering over optima in the generator parameter
space. Finally, we study different classifier architectures and
show that the architecture of the GAN-classifier has a strong
influence on the set of its learned artifacts.
|
1. Introduction
GAN [ 8] architectures like StyleGAN2 [ 17] generate
high-resolution images that appear largely indistinguishable
from real images to the untrained eye [ 14,18,24]. While
there are many positive applications, the ability to generate
large amounts of realistic images is also a source of concern
given its potential application in scaled abuse and misin-
formation. In particular, GAN-generated human faces are
widely available (e.g., thispersondoesnotexist.com) and have
been used for creating fake identities on the internet [12].
Detection of GAN-generated images is an active research
area (see [9] for a survey of approaches), with some using custom methods and others using generic CNN-based
classifiers. Such classifiers are distinct from the discrimina-
tor networks that are trained alongside the generator in the
archetypal GAN setup. Given the adversarial nature of the
training loss for GANs, the existence of the GAN-classifiers
suggests consistent generator knowledge gaps (i.e., artifacts
present across samples that distinguish generated images
from those of the underlying distribution) left by discrimi-
nators during training. Specialized classifiers [ 31] are able
to detect images sampled from held-out GAN instances and
even from held-out GAN architectures. These generalization
capabilities imply that the knowledge gaps are consistent
not only across samples from a GAN generator but across
independent GAN generator instances.
In this work we modify the GAN training loss in order
to fool a GAN-classifier in addition to the co-trained dis-
criminator, and examine the effect on training dynamics and
output quality. We conduct multiple rounds of training in-
dependent pools (initialized differently) of GANs followed
by GAN-classifiers, and gain new insights into the GAN
optimization process. We investigate two different settings:
in the first setting, we choose the low-dimensional domain
of handwritten digits (MNIST [ 19]), using a small DCGAN
[25] architecture and a vanilla GAN-classifier architecture.
For the second setting, we choose a high-dimensional do-
main of human faces (FFHQ [ 16]) with StyleGAN2 (SG2)
as a SOTA GAN architecture, and three different GAN-
classifier architectures (ResNet-50 [ 10], Inception-v3 [ 28],
and MobileNetV2 [ 27]). Our findings in this paper are as
follows:
•Samples drawn from a GAN instance exhibit a space of
“artifacts” that are exploited by the classifiers, and this
space is strongly correlated with those of other GAN
generator instances. This effect is present in both the
DCGAN and SG2 settings.
•Upon introducing the need to fool held-out classifiers,
the DCGAN is unable to generate high quality outputs.
•In the high dimensional setting, however, SG2 gener-
ators can easily fool held-out trained classifiers, and
move to a new artifact space. Strikingly, we find that
the artifact space is correlated among the new popula-
tion of generators as it was in the original population.
This correlation appears to persist in subsequent rounds
as new classifiers are introduced that are adapted to the
new artifact spaces.
•MobileNetV2 classifier instances in the SG2 setting
appear unable to learn all of the artifacts available for
them to exploit. Instead, MobileNetV2 instances form
clusters based on the subset of artifacts learned. We
hypothesize this being an effect of classifier capacity.
•An SG2 generator trained to reliably fool unseen classi-
fier instances from a given architecture is not guaran-
teed to fool classifiers from another architecture. There-
fore, the artifacts learned by a given classifier depend
strongly on the classifier’s architecture.
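The modified generator objective can be sketched as below; this is a generic non-saturating formulation with a frozen, externally trained GAN-classifier added as a second adversary, and the weighting term lambda_cls as well as the sign convention of the classifier logits are assumptions, not values taken from the paper.

```python
# Sketch of a generator update that tries to fool both the co-trained
# discriminator D and a frozen GAN-classifier C (PyTorch).
import torch
import torch.nn.functional as F

def generator_loss(G, D, C, z, lambda_cls=1.0):
    fake = G(z)
    # Non-saturating GAN loss against the co-trained discriminator:
    # D(fake) should be pushed towards the "real" label (1).
    d_logits = D(fake)
    loss_adv = F.binary_cross_entropy_with_logits(d_logits, torch.ones_like(d_logits))
    # Extra adversary: the frozen GAN-classifier C outputs a logit for
    # "GAN-generated", so the generator is rewarded for driving it to "real" (0).
    c_logits = C(fake)
    loss_cls = F.binary_cross_entropy_with_logits(c_logits, torch.zeros_like(c_logits))
    return loss_adv + lambda_cls * loss_cls
```

In the multi-round setup described above, the classifier would stay frozen while the generators train, and a new classifier would then be fitted to the updated generator outputs for the next round.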
|
Ming_Deep_Dive_Into_Gradients_Better_Optimization_for_3D_Object_Detection_CVPR_2023
|
Abstract
Intersection-over-Union (IoU) is the most popular metric
to evaluate regression performance in 3D object detection.
Recently, some methods have also applied IoU to the
optimization of 3D bounding box regression. However,
we demonstrate through experiments and mathematical
proof that the 3D IoU loss suffers from abnormal gradients
w.r.t. angular error and object scale, which further lead
to slow convergence and a suboptimal regression process,
respectively. In this paper, we propose a Gradient-
Corrected IoU (GCIoU) loss to achieve fast and accurate
3D bounding box regression. Specifically, a gradient
correction strategy is designed to endow 3D IoU loss
with a reasonable gradient. It ensures that the model
converges quickly in the early stage of training, and helps
to achieve fine-grained refinement of bounding boxes in
the later stage. To solve suboptimal regression of 3D IoU
loss for objects at different scales, we introduce a gradient
rescaling strategy to adaptively optimize the step size.
Finally, we integrate GCIoU Loss into multiple models
to achieve stable performance gains and faster model
convergence. Experiments on KITTI dataset demonstrate
superiority of the proposed method. The code is available
at https://github.com/ming71/GCIoU-loss.
|
1. Introduction
Autonomous driving has received extensive attention in
recent years due to its broad application scenarios. 3D ob-
ject detection is a very important and promising topic of
autonomous driving. Currently, LiDAR is the most com-
monly used sensor to obtain representations of objects in the
real world for 3D object detection [4,21,33,35,40,44]. The
Figure 1. Visualization of some problems in current 3D IoU loss.
(a) Gradient changes of 3D IoU loss w.r.t. angular error. Gradients
are small under large angular errors. (b) Convergence of bounding
boxes of different scales under supervision of 3D IoU loss. The
optimization for multi-scale objects is different. IoU Loss con-
verges slower for large objects.
sparse point cloud data from LiDAR scanners can provide
depth information of objects, which is critical for accurate
3D object detection.
Intersection-over-Union (IoU) is the most popular metric
used to measure the detection accuracy and evaluate per-
formance of the model. But in the training phase, current
detectors usually use the Lnorm-based loss function to op-
timize the bounding box regression [6, 24, 34, 43, 50]. The
inconsistency between training loss and evaluation metric
would lead to a misaligned optimization process. Specifi-
cally, a lower loss value does not guarantee better detection
performance (higher IoU), which has also been discussed in
some previous work [25,26,38,41,48,49]. For example, Yu
et al. [41] pointed out that IoU, as an evaluation metric in
2D object detection, can naturally be used in the loss function
to achieve consistency between training and testing. Zhou
et al. [49] extended the case to 3D object detection and de-
signed 3D IoU loss for autonomous driving.
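For readers less familiar with IoU-based regression losses, the sketch below computes the IoU of axis-aligned 3D boxes and uses 1 - IoU as the loss; it deliberately ignores the yaw angle, whose handling is precisely what makes the full rotated 3D IoU, and its gradients, difficult, as discussed next.

```python
# Sketch: IoU loss for axis-aligned 3D boxes (PyTorch). Rotation is ignored
# here; handling the yaw angle is what complicates the full 3D IoU loss.
import torch

def axis_aligned_iou_3d(boxes_a, boxes_b, eps=1e-7):
    """boxes_*: (N, 6) tensors of (x1, y1, z1, x2, y2, z2) corners."""
    lt = torch.max(boxes_a[:, :3], boxes_b[:, :3])            # element-wise max of lower corners
    rb = torch.min(boxes_a[:, 3:], boxes_b[:, 3:])            # element-wise min of upper corners
    inter = (rb - lt).clamp(min=0).prod(dim=1)                # intersection volume
    vol_a = (boxes_a[:, 3:] - boxes_a[:, :3]).prod(dim=1)
    vol_b = (boxes_b[:, 3:] - boxes_b[:, :3]).prod(dim=1)
    return inter / (vol_a + vol_b - inter + eps)

def iou_loss_3d(pred, target):
    # Minimizing 1 - IoU directly optimizes the evaluation metric.
    return (1.0 - axis_aligned_iou_3d(pred, target)).mean()
```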
IoU-based loss and its variants are studied and widely
used in 2D object detection [9, 16–18, 25, 36, 48]. There are
some work to introduce 3D IoU into 3D object detection
and achieve performance improvement [12, 13, 15, 32, 49].
However, the IoU loss in 3D object detection is quite differ-
ent from that in 2D object detection. Firstly, the 3D bound-
ing boxes have higher degree of freedom than 2D horizontal
bounding boxes, which makes the IoU calculation between
two 3D bounding boxes very complicated. Secondly, the
IoU-based losses generally converge slowly. The larger
search space and higher degrees-of-freedom of 3D bound-
ing box parameters further increases the difficulty of model
convergence. There are also some previous work devoted
to the improvements on 3D IoU loss [26, 47]. For in-
stance, Rotation-robust Intersection over Union (RIoU) [47]
uses a projection operation to estimate the intersection area
to simplify the calculation of 3D IoU, which is both ro-
bust and feasible for back-propagation. Recently, Rotation-
Decoupled IoU (RDIoU) [26] decouples the rotation vari-
able to solve the negative coupling effect of rotation on the
3D IoU.
However, most 3D IoU-based losses directly apply
3D IoU or its variants to supervise bounding box re-
gression, and do not give an in-depth analysis of the loss
gradient changes during training. As shown in Fig. 1(a),
we found that the 3D IoU loss suffers from abnormal gra-
dient changes during training. Firstly, when the angular er-
ror between the predicted box and the ground-truth (GT) box is
large, the 3D IoU loss produces small gradients for optimiza-
tion, which leads to a very slow convergence. Secondly, as
the model converges, the angular error decreases, but there
is a period where the gradient increases abnormally. In this
case, the angle prediction is difficult to be refined, and it
even leads to the loss oscillation (see Fig. 1(a)). Further-
more, IoU is generally considered to be scale-invariant, but
we observed that the optimization of the 3D IoU loss per-
forms differently for objects of different scales. As shown
in Fig. 1(b), for large scale objects, more iterations are re-
quired to make the bounding box regressed well.
In this paper, we give a detailed discussion of the gradient
changes of the 3D IoU loss from both experimental results
and mathematical analysis. Then, we propose a gradient-
correction IoU (GCIoU) loss to solve the mentioned prob-
lems. GCIoU loss contains two parts, gradient correction
strategy and gradient rescaling strategy. Gradient correction
strategy corrects the gradient of 3D IoU loss w.r.t. angu-
lar error to ensure smooth angle convergence. The GCIoU
loss is then obtained by integrating the expected gradients.Further, to optimize the convergence of objects of different
scales, we use a gradient rescaling strategy to modify the
gradient of IoU loss w.r.t. scale to adaptively update the
variables. Experimental results on KITTI dataset demon-
strate the superiority of our method.
The main contributions of this paper are summarized as
follows:
•We found the abnormal gradient changes of 3D IoU
loss w.r.t. angular error during the optimization pro-
cess through experiments, and further gave analysis
and proof for it.
•We observed different performances of the 3D IoU loss
for the optimization of objects of different scales, and
concluded that it is also caused by abnormal gradients
via mathematical analysis.
•The gradient-correction IoU (GCIoU) loss is proposed
to alleviate abnormal gradients and accelerate conver-
gence of 3D IoU based loss. The parameter update
strategy is then adjusted to adaptively optimize multi-
scale objects with different steps.
|
Qin_Ground-Truth_Free_Meta-Learning_for_Deep_Compressive_Sampling_CVPR_2023
|
Abstract
Compressive sampling (CS) is an efficient technique for
imaging. This paper proposes a ground-truth (GT) free
meta-learning method for CS, which leverages both ex-
ternal and internal deep learning for unsupervised high-
quality image reconstruction. The proposed method first
trains a deep neural network (NN) via external meta-
learning using only CS measurements, and then efficiently
adapts the trained model to a test sample for exploit-
ing sample-specific internal characteristics for performance
gain. The meta-learning and model adaptation are built on
an improved Stein’s unbiased risk estimator (iSURE) that
provides efficient computation and effective guidance for
accurate prediction in the range space of the adjoint of the
measurement matrix. To improve the learning and adap-
tion on the null space of the measurement matrix, a modi-
fied model-agnostic meta-learning scheme and a null-space
consistency loss are proposed. In addition, a bias tuning
scheme for unrolling NNs is introduced for further acceler-
ation of model adaption. Experimental results have demon-
strated that the proposed GT-free method performs well and
can even compete with supervised methods.
|
1. Introduction
CS is an imaging technique that captures an image by
collecting a limited number of measurements using a spe-
cific measurement matrix [5]. This technique has a wide
range of applications, including magnetic resonance (MR)
imaging, computed tomography (CT), and many others.
One of the main challenges in CS is reconstructing an image from
its limited measurements. Let x ∈ R^N denote an image of
interest, y ∈ C^M its CS measurement vector with M ≪ N,
and Φ the measurement matrix. Then the CS reconstruction
needs to solve an under-determined system:
y = Φx + ε,  (1)
where ε ∈ C^M denotes measurement noise. As Φ is under-
determined, a direct inversion suffers from solution ambi-
guity and noise sensitivity.
Supervised learning is one popular approach for CS reconstruction (CSR),
which trains an end-to-end NN for CSR over a dataset with
pairs of a GT image and its measurements; see e.g. [11,
42, 45, 47]. However, supervised CSR has two limitations.
Firstly, it is often expensive or even impractical to collect a
sufficient number of GT images in practice. Consequently,
the NN trained using a limited amount of external training
data may not generalize well. Secondly, the training data
may be biased or insufficient to cover the full range of pat-
terns and characteristics of the images being reconstructed,
resulting in poor generalization performance.
Recently, many works ( e.g. [6, 7, 27, 50]) have sought to
address the first limitation of supervised CSR by developing
GT-free external learning techniques, which train an end-to-
end NN without accessing GT images. Quite a few meth-
ods along this line are based on Stein's unbiased risk
estimator (SURE) [37]. While they have shown promise,
there remains a noticeable performance gap between these
GT-free techniques and existing supervised methods.
To address the second limitation of supervised CSR,
there are also some works on self-supervised internal learn-
ing, which train a deep NN model directly on a test sam-
ple to exploit sample-specific internal characteristics; see
e.g. [15, 30, 39]. These methods are based on the deep im-
age prior (DIP) [38] induced by the structures of a convo-
lutional NN (CNN). The downside of these methods is their
computational efficiency. Since each test example requires
learning an individual model, processing a large number of
test samples can be time-consuming and overwhelming.
Motivated by the pros and cons of existing works on ex-
ternal and internal learning, this paper aims at developing a
GT-free joint external and internal learning method for CSR
that takes the full advantages of both:
• Same as existing external learning methods, it trains a
model using an external dataset in an offline manner. The
advantage is that the resulting model trained without GTs
can compete well against its supervised counterparts.
• Same as existing internal learning methods, it is GT-free
and its model is adaptive to test samples to reduce dataset
bias. The advantage is that it is much more computation-
ally efficient when processing many test samples.
This paper proposes a GT-free meta-learning method for
CSR that achieves these goals. Meta-learning is about
“learning to learn”, which develops a learning model that
can learn new concepts or quickly adapt to new environments
with efficient updates to an existing model. Different from
existing works which access GT images for meta-learning
(e.g. [21, 40]), we consider a GT-free environment where
only measurement data is available during training. In the
training stage, our proposed method trains a deep NN model
so that not only it can reconstruct high-quality images, but
also its weights are suitable for efficient test-time model
adaption ( i.e. internal learning). Then, in the test stage, the
meta-trained model can be efficiently adapted to each test
sample for further performance improvement.
Since no GT images are exposed to meta-learning and
model adaptation, the key challenge is how to train the
model to make accurate predictions solely based on mea-
surement data. Motivated by SURE-based unsupervised
learning, we propose a SURE-motivated self-supervised
loss function, called iSURE (“i” for “improved”). Simi-
lar to SURE, iSURE provides an unbiased estimate of the
mean squared error in range(Φ^H) (the range space of the adjoint
of Φ), but using a noisier input. The gradients derived from
iSURE result in a more computationally efficient iteration
scheme than existing SURE-based loss functions. Further-
more, the noise injection in iSURE also helps to alleviate
potential overfitting during model training and adaptation.
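As background, the sketch below shows a standard Monte-Carlo SURE loss for a Gaussian-denoising network, with the divergence term estimated from a single random probe; it is meant only to illustrate the family of self-supervised estimators that iSURE builds on and improves, not the iSURE loss itself.

```python
# Sketch of a Monte-Carlo SURE loss for Gaussian denoising (PyTorch).
# f: denoising network; y: (B, ...) noisy observations with known noise std sigma.
import torch

def mc_sure_loss(f, y, sigma, eps=1e-3):
    """Unbiased estimate (up to a constant) of the MSE between f(y) and the clean signal."""
    out = f(y)
    n = y[0].numel()
    # Data-fidelity part of SURE.
    fidelity = ((out - y) ** 2).flatten(1).mean(dim=1)
    # Divergence of f at y, estimated with one random probe (Monte-Carlo trick).
    b = torch.randn_like(y)
    div = (b * (f(y + eps * b) - out)).flatten(1).sum(dim=1) / (eps * n)
    return (fidelity + 2.0 * sigma ** 2 * div - sigma ** 2).mean()
```

Training with such a loss uses only noisy data, which is what makes the whole pipeline GT-free; the price is the extra forward pass required for the divergence estimate, one of the costs a more efficient scheme such as iSURE aims to avoid.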
Built upon the iSURE, an unsupervised training scheme
is developed with the integration of model-agnostic meta-
learning (MAML) [12]. MAML is a gradient-based meta-
learning algorithm, which trains a model such that a small
number of gradient updates on it will give a model to per-
form well on a new task. In our method, each target task is
defined as the self-supervised reconstruction on a test sam-
ple via iSURE. Similar to other SURE variants, the iSURE
defined on range( H)does not address the solution ambi-
guity caused by non-empty null()(null space of ). For-
tunately, the DIP from a CNN imposes implicit regulariza-
tion on the output, partially addressing such ambiguity. To
further reduce ambiguity, an ensemble-based sub-process is
introduced into MAML. The basic idea is that the CNN with
re-corrupted inputs in iSURE is likely to generate estimates
with some degree of diversity on null(Φ), and the ensemble
of these estimates will refine the prediction in null(Φ).
The adaption that only updates the gradients via iSURE ignores the influence on the prediction on null(Φ). Thus,
an additional loss on the prediction consistency in null(Φ)
between the original and adapted models is adopted, which
provides guidance on reducing the solution ambiguity in
null(Φ) during adaption. To further accelerate the adapta-
tion process, we use an unrolling CNN and propose a bias-
tuning scheme that only adapts the bias parameters of the
unrolling CNN. The motivation stems from the similarity
between the bias parameters in an unrolling CNN and the
threshold values in a shrinkage-based scheme for CSR.
To summarize, this paper proposes a GT-free meta-
learning method for CSR, with the following contributions:
• An iSURE loss function for GT-free meta-learning and
model adaption, which results in a more efficient and ef-
fective scheme for mitigating overfitting compared to ex-
isting SURE-based loss functions;
• A meta-learning process based on MAML and iSURE
for learning CSR from external GT-less data, with an en-
semble sub-process for improving null-space learning;
• A model adaption scheme using iSURE and null-space
consistency, which exploits internal characteristics of a
test sample for more accurate prediction;
• A bias-tuning scheme for accelerating the adaption of
unrolling CNNs, with little reconstruction accuracy loss.
The experiments demonstrate that the proposed method out-
performs GT-free learning methods and competes with su-
pervised methods, while being faster than internal methods.
|
Park_Temporal_Interpolation_Is_All_You_Need_for_Dynamic_Neural_Radiance_CVPR_2023
|
Abstract
Temporal interpolation often plays a crucial role in learning
meaningful representations in dynamic scenes. In this pa-
per, we propose a novel method to train spatiotemporal neu-
ral radiance fields of dynamic scenes based on temporal
interpolation of feature vectors. Two feature interpolation
methods are suggested depending on underlying represen-
tations, neural networks or grids. In the neural representa-
tion, we extract features from space-time inputs via multi-
ple neural network modules and interpolate them based on
time frames. The proposed multi-level feature interpolation
network effectively captures features of both short-term and
long-term time ranges. In the grid representation, space-
time features are learned via four-dimensional hash grids,
which remarkably reduces training time. The grid repre-
sentation shows more than 100 ×faster training speed than
the previous neural-net-based methods while maintainingthe rendering quality. Concatenating static and dynamic
features and adding a simple smoothness term further im-
prove the performance of our proposed models. Despite the
simplicity of the model architectures, our method achieved
state-of-the-art performance both in rendering quality for
the neural representation and in training speed for the grid
representation.
|
1. Introduction
3D reconstruction and photo-realistic rendering have
been long-lasting problems in the fields of computer vision
and graphics. Along with the advancements of deep learn-
ing, differentiable rendering [11, 14] or neural rendering,
has emerged to bridge the gap between the two problems.
Recently proposed Neural Radiance Field (NeRF) [18] has
finally unleashed the era of neural rendering. Using NeRF,
one is able to reconstruct accurate 3D structures from multiple
2D images and to synthesize photo-realistic images from
unseen viewpoints. Tiny neural networks are sufficient to
save and retrieve complex 3D scenes, which can be trained
in a self-supervised manner given 2D images and camera
parameters.
Meanwhile, as our world typically involves dynamic
changes, it is crucial to reconstruct the captured scene
through 4D spatiotemporal space. Since it is often not pos-
sible to capture scenes at different viewpoints simultane-
ously, reconstructing dynamic scenes from images is in-
herently an under-constrained problem. While NeRF was
originally designed to deal with only static scenes, there
have been a few approaches in the literature that extend
NeRF to dynamic scenes [13,22,23,25], which are referred to
as dynamic NeRFs. Inspired by the non-rigid structure-
from-motion algorithms [1, 4] that reconstruct 3D struc-
tures of deformable objects, most previous works solved
the under-constrained setting by estimating scene deforma-
tions [21,22,25] or 3D scene flows [5,9,12] for each frame.
However, since the parameters of deformation estima-
tion modules are jointly optimized with the NeRF network si-
multaneously, it is questionable whether the modules can accu-
rately estimate deformations or scene flows in accordance
with its design principles. In many dynamic scenes, it is
challenging to resolve the ambiguities whether a point was
newly appeared, moved, or changed its color. It is expected
that those ambiguities and deformation estimation can be
separately solved within a single network, but in practice, it
is hard to let the network implicitly learn the separation,
especially for general dynamic scenes without any prior
knowledge of deformation.
On the other hand, grid representations in NeRF train-
ing [7, 19, 24] have attracted a lot of attention, mainly
due to their fast training speed. Simple trilinear interpola-
tion is enough to fill in the 3D space between grid points.
While the representation can be directly adopted to dynamic
NeRFs together with the warping estimation module [6], it
still requires additional neural networks that affect training
and rendering speed.
Motivated by the aforementioned analyses, we present
a simple yet effective architecture for training dynamic
NeRFs. The key idea in this paper is to apply feature in-
terpolation to the temporal domain instead of using warp-
ing functions or 3D scene flows. While the feature inter-
polation in 2D or 3D space has been thoroughly studied,
to the best of our knowledge, a feature interpolation method
in the temporal domain for dynamic NeRF has not been pro-
posed yet. We propose two multi-level feature interpolation
methods depending on feature representation which is ei-
ther neural nets or hash grids [19]. An overview of the two
representations, namely the neural representation and the
grid representation, is illustrated in Fig. 1. In addition, noting that 3D shapes deform smoothly over time in dy-
namic scenes, we additionally introduce a simple smooth-
ness term that encourages feature similarity between ad-
jacent frames (a minimal sketch follows the contribution list below). We let the neural networks or the feature
grids learn meaningful representations implicitly without
imposing any constraints or restrictions but the smoothness
regularizer, which grants the flexibility to deal with various
types of deformations. Extensive experiments on both syn-
thetic and real-world datasets validate the effectiveness of
the proposed method. We summarized the main contribu-
tions of the proposed method as follows:
• We propose a simple yet effective feature extraction
network that interpolates two feature vectors along the
temporal axis. The proposed interpolation scheme out-
performs existing methods without having a deforma-
tion or flow estimation module.
• We integrate temporal interpolation into hash-grid rep-
resentation [19], which remarkably accelerates train-
ing, being more than 100× faster compared to the neu-
ral network models.
• We propose a smoothness regularizer which effectively
improves the overall performance of dynamic NeRFs.
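The sketch referenced above illustrates the temporal-interpolation idea in its simplest neural form: a learnable table of per-keyframe feature vectors, linear interpolation in time, and a smoothness penalty between adjacent keyframes. The table size, feature dimension, and loss weighting are illustrative assumptions; the paper's multi-level networks and 4D hash grids are more elaborate.

```python
# Sketch of temporal feature interpolation with a smoothness regularizer (PyTorch).
import torch
import torch.nn as nn

class TemporalFeatureTable(nn.Module):
    """A learnable (T_key, C) table of per-keyframe features, queried at a continuous time t."""
    def __init__(self, num_keyframes=16, feat_dim=64):
        super().__init__()
        self.feats = nn.Parameter(torch.randn(num_keyframes, feat_dim) * 0.01)

    def forward(self, t):
        """t: (B,) normalized times in [0, 1] -> (B, C) linearly interpolated features."""
        pos = t.clamp(0, 1) * (self.feats.shape[0] - 1)
        i0 = pos.floor().long()
        i1 = (i0 + 1).clamp(max=self.feats.shape[0] - 1)
        w = (pos - i0.float()).unsqueeze(-1)
        return (1 - w) * self.feats[i0] + w * self.feats[i1]

    def smoothness_loss(self):
        # Encourage feature similarity between adjacent keyframes.
        return (self.feats[1:] - self.feats[:-1]).pow(2).mean()
```

The interpolated feature would then be combined with positional encodings of the spatial coordinates and fed to the radiance-field network, with the smoothness term added to the rendering loss at a small weight.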
|
Mirza_ActMAD_Activation_Matching_To_Align_Distributions_for_Test-Time-Training_CVPR_2023
|
Abstract
Test-Time-Training (TTT) is an approach to cope with
out-of-distribution (OOD) data by adapting a trained model
to distribution shifts occurring at test-time. We propose to
perform this adaptation via Activation Matching (ActMAD):
We analyze activations of the model and align activation
statistics of the OOD test data to those of the training data.
In contrast to existing methods, which model the distribution
of entire channels in the ultimate layer of the feature ex-
tractor, we model the distribution of each feature in multiple
layers across the network. This results in a more fine-grained
supervision and makes ActMAD attain state-of-the-art per-
formance on CIFAR-100C and ImageNet-C. ActMAD is also
architecture- and task-agnostic, which lets us go beyond
image classification, and score 15.4% improvement over
previous approaches when evaluating a KITTI-trained ob-
ject detector on KITTI-Fog. Our experiments highlight that
ActMAD can be applied to online adaptation in realistic
scenarios, requiring little data to attain its full performance.
|
1. Introduction
When evaluated in laboratory conditions, deep networks
outperform humans in numerous visual recognition tasks [6,
8, 9]. But this impressive performance is predicated on
the assumption that the test and training data come from the
same distribution. In many practical scenarios, satisfying
this requirement is either impossible or impractical. For
example, a perception system of an autonomous vehicle
can be exposed not only to fog, snow, and rain, but also to
rare conditions including smoke, sand storms, or substances
distributed on the road in consequence of a traffic accident.
Unfortunately, distribution shifts between the training and
test data can incur a significant performance penalty [1, 20,
(Plot: mean average precision (%) over time (seconds) under Fog, Snow, Rain, and Clear, comparing Baseline Clear (90.7%), NORM, TTT, DUA, and our ActMAD.)
Figure 1. ActMAD is well suited for online test-time adaptation
irrespective of network architecture and task. Our intended applica-
tion is object detection on board a vehicle driving in dynamically
changing and unpredictable weather conditions. Here, we report the
class-averaged Mean Average Precision (mAP@50) over adapta-
tion time1, attained by continuously adapting a KITTI [12]-trained
YOLOv3 [29] detector. Clear refers to the weather condition of the
original dataset, mostly acquired in sunny weather, while Fog,Rain,
andSnow are generated by perturbing the original images [15, 18].
Gray vertical lines mark transitions between weather conditions.
25,27,34,36]. To address this shortcoming, test-time-training
(TTT) methods attempt to adapt a deep network to the actual
distribution of the test data, at test-time.
Existing TTT techniques limit the performance drop pro-
voked by the shift in test distribution, but the goal of on-
line, data-efficient, source-free, and task-agnostic adapta-
tion remains elusive: the methods that update network pa-
rameters by self-supervision on test data [10, 25, 32] can
only be used if the network was first trained in a multi-
1Time (seconds) for adaptation is hardware specific. For reference, we
run these experiments on an RTX-3090 NVIDIA graphics card and it takes
∼75s to adapt to 3741 images (one weather condition) in the KITTI test set.
task setup, the approaches that minimise the entropy of
test predictions [24, 34], or use pseudo-labels, or pseudo-
prototypes [19,20], are not easily adapted to object detection
or regression, and the ones that update statistics of the batch
normalization layers [27, 30] are limited to architectures that
contain them. As a consequence of these limitations, while
existing methods fare well in image classification, none of
them is well suited for the scenario where the need for on-
line adaptation is the most acute, that is, object detection on
board a car driving in changing weather conditions.
Our goal is to lift the limitations of the current methods
and propose a truly versatile, task-, and architecture-agnostic
technique, that would extend TTT beyond image classifi-
cation and enable its deployment in object detection for
automotive applications. To that end, we revisit feature align-
ment, the classical domain adaptation technique [7, 31, 38],
also adopted by several TTT algorithms [20, 25, 27, 30]. It
consists in aligning the distribution of test set features to that
of the training set. By contrast to previous methods, that
align the distributions of entire channels in the ultimate layer
of the feature extractor, ActMAD brings the feature align-
ment approach to another level by individually aligning the
distribution of each feature in multiple feature maps across
the network. On the one hand, this makes the alignment
location-aware. For example, the features from the bottom
of the image, most often representing road and other vehi-
cles, are not mixed with features from the top part, more
likely to describe trees, buildings, or the sky. On the other
hand, it results in a more fine-grained supervision of the
adapted network. Our ablation studies show that ActMAD
owes most of its performance to these two contributions. Ad-
ditionally, while several authors suggested aligning higher-
order moments [20,38], we demonstrate that aligning means
and variances should be preferred when working with small
batches. Unlike methods that only update mean and vari-
ance [27, 30], or affine parameters of batch normalization
layers [20, 27, 30], ActMAD updates all network parameters.
Our approach is architecture- and task-agnostic, and does not
require access to the training data or training labels, which
is important in privacy-sensitive applications. It does require
the knowledge of activation statistics of the training set, but
this requirement can be easily satisfied by collecting the data
during training, or by computing them on unlabelled data
without distribution shift.
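A rough sketch of location-aware activation matching is given below; the choice of hooked layers, the L1 distance, and the dictionary-based bookkeeping of training statistics are assumptions made for illustration.

```python
# Rough sketch of location-aware activation matching for test-time training (PyTorch).
# train_mu / train_var: dicts mapping layer names to (C, H, W) tensors of
# feature-wise statistics collected on training-distribution data.
import torch
import torch.nn.functional as F

def actmad_style_loss(activations, train_mu, train_var):
    """activations: dict layer_name -> (B, C, H, W) activations of the current test batch."""
    loss = 0.0
    for name, act in activations.items():
        mu = act.mean(dim=0)                      # per-location mean over the test batch
        var = act.var(dim=0, unbiased=False)      # per-location variance over the test batch
        loss = loss + F.l1_loss(mu, train_mu[name]) + F.l1_loss(var, train_var[name])
    return loss
```

At test time one would collect the activations of several layers with forward hooks, compute this loss on an out-of-distribution batch, and update all network parameters with a small learning rate, which is what allows the same recipe to be applied to classifiers and detectors alike.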
Our contribution consists in a new method to use Acti-
vation Matching to Align Distributions for TTT, which we
abbreviate as ActMAD. Its main technical novelty is that
we model the distribution of each point in the feature map,
across multiple feature maps in the network. While most
previous works focus on combining different approaches
to test-time adaptation, ActMAD is solely based on feature
alignment. This is necessary for our method to be applicable
across different architectures and tasks, but we show that discarding the bells and whistles, while aligning activation
distributions in a location-aware manner, results in matching
or surpassing state-of-the-art performance on a number of
TTT benchmarks. Figure 1 presents the performance of Act-
MAD in a simulated online adaptation scenario, in which an
object detector runs onboard a vehicle driven in changing
weather conditions. Most existing methods cannot be used
in this setup, because they cannot run online, or because they
cannot be used for object detection. Note, that our method
recovers, and even exceeds, the performance of the initial
network once the weather cycle goes back to the conditions
in which the detector was trained.
|
Marrie_SLACK_Stable_Learning_of_Augmentations_With_Cold-Start_and_KL_Regularization_CVPR_2023
|
Abstract
Data augmentation is known to improve the generaliza-
tion capabilities of neural networks, provided that the set
of transformations is chosen with care, a selection often
performed manually. Automatic data augmentation aims at
automating this process. However, most recent approaches
still rely on some prior information; they start from a small
pool of manually-selected default transformations that are
either used to pretrain the network or forced to be part of
the policy learned by the automatic data augmentation al-
gorithm. In this paper, we propose to directly learn the
augmentation policy without leveraging such prior knowl-
edge. The resulting bilevel optimization problem becomes
more challenging due to the larger search space and the
inherent instability of bilevel optimization algorithms. To
mitigate these issues (i) we follow a successive cold-start
strategy with a Kullback-Leibler regularization, and (ii) we
parameterize magnitudes as continuous distributions. Our
approach leads to competitive results on standard bench-
marks despite a more challenging setting, and generalizes
beyond natural images.1
|
1. Introduction
Data augmentation, which encourages predictions to be
stable with respect to particular image transformations, has
become an essential component in visual recognition sys-
tems. While the data augmentation process is conceptually
simple, choosing the optimal set of image transformations
for a given task or dataset is challenging. For instance,
designing a good set for ImageNet [4] or even CIFAR-
10/100 [13] has been the result of a long-standing research
effort. Whereas data augmentation strategies that have been
chosen by hand for ImageNet have been used successfully
for many recognition tasks involving natural images, they
may fail to generalize to other domains such as medical
imaging, remote sensing or hyperspectral imaging.
1Project page: https://europe.naverlabs.com/slack
This has motivated automating the design of data aug-
mentation strategies [10,12,14–16,19,27,31]. Those are of-
ten represented as a stochastic policy that randomly draws a
combination of transformations along with their magnitudes
from a large predefined set, each time an image is sampled.
The goal becomes to learn strategies that effectively com-
pose multiple transformations, which is a challenging task
given the large search space of augmentations.
A natural framework for learning the parameters of this
policy is that of bilevel optimization. Intuitively, one looks
for the best possible policy such that a neural network
trained with this policy on a training set (inner problem)
generalizes well on a distinct validation set (outer problem).
Optimizing the resulting formulation is challenging as the
outer problem depends on the solution of the inner problem.
Classical techniques for solving this bilevel problem, such
as unrolled optimization, can become highly unstable as the
network weights become progressively suboptimal for the
current policy during the learning process.
Moreover, augmentations are often non-differentiable
in the parameters of the policy, thus requiring techniques
other than direct differentiation, such as Bayesian optimiza-
tion [15], gradient approximations ( e.g. RELAX [7]), or the
score method / REINFORCE [28] algorithm. While these
techniques bypass the differentiability issues, they can suf-
fer from large bias or variance. As a result, learning aug-
mentation policies is a difficult problem whose challenges
are exacerbated by the inherent instability of the optimiza-
tion techniques developed to solve bilevel problems, such
as unrolled optimization [1].
A standard way to improve stability and make the auto-
matic data augmentation problem simpler is to reduce the
search space. This is often achieved by learning the pol-
icy on top of “default” transformations such as Cutout [5],
random cropping and resizing, or color jittering, all known
to be well-suited to natural images which compose stan-
dard benchmarks such as CIFAR or ImageNet, or by dis-
carding transformations known to be harmful such as In-
vert. Fixing some of the transformations and removing oth-
ers mitigate the challenges inherent to learning a compo-
sition of transformations.
Figure 1. For different domains of the DomainNet dataset [21] (one per line), we show an image from that domain (left) and that image transformed using the three most likely (middle) and the three least likely (right) augmentations for that domain, as estimated by SLACK.
TrivialAugment [19] also shows
that state-of-the-art results can be achieved on these pre-
vious benchmarks simply by directly applying the policy
classically used for initializing auto-augmentation models,
up to minor modifications. Moreover, all methods rely on
carefully chosen ranges that constrain the transformation’s
magnitudes. Despite its effectiveness, manually selecting
default transformations and magnitude ranges restricts the
applicability of such policies to natural images and prevents
generalisation to other domains.
In this paper, our goal is to choose augmentation strate-
gies without relying on default transformations nor on hand-
selected magnitude ranges known to suit common bench-
marks. To achieve this objective, we first introduce a simple
interpretable model for the augmentation policies which al-
lows learning both the frequency by which a given augmen-
tation is selected and the magnitude by which it is applied.
Then, we propose a method for learning these augmenta-
tion policies by solving a bilevel optimization problem. Our
method relies on the REINFORCE technique for computing
the gradient of the policy and on unrolled optimization for
learning the policy, both of which can result in instabilities
and yield high variance estimates.
To address these issues, we introduce an efficient multi-
stage algorithm with a cold-start strategy and a Kullback-
Leibler (KL) regularization that are designed to improve the
stability of the process for learning the data augmentation
policy. More precisely, the algorithm first pre-trains a net-
work with a data augmentation policy uniformly sampling
over all transformations. Then, each stage uses a “cold-
start” strategy by restarting from the pre-trained network
and performs incremental updates of the current policy.
This multi-stage approach with cold start prevents the
network from becoming progressively suboptimal as the
policy is updated using unrolled optimization. The KL reg-
ularization defines a trust region for the policy to compen-
sate for the possibly high variance of gradient estimates ob-
tained using the REINFORCE technique and encourages
exploration during training, preventing collapse to trivial
solutions. This regularization is inspired by proximal point
algorithms in convex optimization [22], which have also
been successful in reinforcement learning tasks [23].
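The sketch below illustrates, under simplifying assumptions, what such a KL-regularized REINFORCE update on the transformation-selection probabilities could look like; the validation_loss placeholder stands in for the outer objective, and all hyper-parameters are arbitrary, so this is not the exact SLACK update:

```python
# Hedged sketch of a REINFORCE-style policy update with a KL trust region
# toward the policy at the start of the stage (simplified for illustration).
import torch

num_transforms = 5
logits = torch.zeros(num_transforms, requires_grad=True)
anchor = torch.softmax(torch.zeros(num_transforms), dim=0)  # policy at the stage's cold start
opt = torch.optim.SGD([logits], lr=0.1)
kl_weight = 0.1

def validation_loss(transform_idx):
    # Placeholder for the outer objective: in practice, the validation loss of a
    # network trained with the sampled augmentation.
    return torch.rand(())

for _ in range(10):
    probs = torch.softmax(logits, dim=0)
    idx = torch.multinomial(probs, 1).item()          # sample a transformation
    reward = -validation_loss(idx)                    # lower loss -> higher reward
    # REINFORCE surrogate: -log pi(idx) * reward, plus KL(pi || anchor) regularization.
    loss = -torch.log(probs[idx]) * reward.detach() \
           + kl_weight * torch.sum(probs * (torch.log(probs) - torch.log(anchor)))
    opt.zero_grad()
    loss.backward()
    opt.step()
```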
By combining the regularized multi-stage approach with
our interpretable model of the augmentation policies, we
obtained the proposed SLACK method, which stands for
Stable Learning of Augmentations with Cold-start and
Kullback-Leibler regularization . SLACK is an efficient
data augmentation learning method that is able to ad-
dress the challenging bilevel optimization problem of learn-
ing a stochastic data augmentation policy without relying
strongly on prior knowledge. Figure 1 illustrates the trans-
formations found by SLACK to be most important / detri-
mental on a dataset of different domains including non-
natural images.
To summarize, our contribution is threefold. (i) We pro-
pose a simple and interpretable model of the policies which
allows learning both frequency and magnitudes of the aug-
mentations. (ii) We propose a regularized multi-stage strat-
egy to improve the stability of the bilevel optimization algo-
rithm used for solving the data augmentation learning prob-
lem. (iii) We evaluate our method on challenging experi-
mental settings, and show that it finds competitive augmen-
tation strategies on natural images without resorting to prior
information and generalizes to other domains.
|
Lu_Content-Aware_Token_Sharing_for_Efficient_Semantic_Segmentation_With_Vision_Transformers_CVPR_2023
|
Abstract
This paper introduces Content-aware Token Sharing
(CTS), a token reduction approach that improves the com-
putational efficiency of semantic segmentation networks
that use Vision Transformers (ViTs). Existing works have
proposed token reduction approaches to improve the effi-
ciency of ViT-based image classification networks, but these
methods are not directly applicable to semantic segmenta-
tion, which we address in this work. We observe that, for
semantic segmentation, multiple image patches can share a
token if they contain the same semantic class, as they con-
tain redundant information. Our approach leverages this by
employing an efficient, class-agnostic policy network that
predicts if image patches contain the same semantic class,
and lets them share a token if they do. With experiments, we
explore the critical design choices of CTS and show its ef-
fectiveness on the ADE20K, Pascal Context and Cityscapes
datasets, various ViT backbones, and different segmentation
decoders. With Content-aware Token Sharing, we are able
to reduce the number of processed tokens by up to 44%,
without diminishing the segmentation quality.
|
1. Introduction
In recent years, many works have proposed replacing
Convolutional Neural Networks (CNNs) with Vision Trans-
formers (ViTs) [13] to solve various computer vision tasks,
such as image classification [1, 13, 22, 23], object detec-
tion [3, 23, 56] and semantic segmentation [6, 23, 28, 37,
47, 54]. ViTs1apply multiple layers of multi-head self-
attention [41] to a set of tokens generated from fixed-size
image patches. ViT-based models now achieve state-of-the-
art results for various tasks, and it is found that ViTs are es-
pecially well-suited for pre-training on large datasets [1,16],
which in turn yields significant improvements on downstream tasks.
*Both authors contributed equally.
1 In this work we use the term ViTs for the complete family of vision transformers that purely apply global self-attention.
Figure 1. Content-aware token sharing (CTS). Standard ViT-based segmentation networks turn fixed-size patches into tokens, and process all of them. To improve efficiency, we propose to let semantically similar patches share a token, and achieve considerable efficiency boosts without decreasing the segmentation quality.
However, the use of global self-attention,
which is key in achieving good results, means that the com-
putational complexity of the model is quadratic with respect
to the input tokens. As a result, these models become par-
ticularly inefficient for dense tasks like semantic segmenta-
tion, which are typically applied to larger images than for
image classification.
Recent works address these efficiency concerns in two
different ways. Some works propose new ViT-based archi-
tectures to improve efficiency, which either make a combi-
nation of global and local attention [23, 48] or introduce a
pyramid-like structure inspired by CNNs [14,23,44]. Alter-
natively, to reduce the burden of the quadratic complexity,
some works aim to reduce the number of tokens that are pro-
cessed by the network, by either discarding [29, 39, 49] or
merging [2, 21, 30] the least relevant tokens. These works,
which all address the image classification task, find that a
similar accuracy can be achieved when taking into account
only a subset of all tokens, thereby improving efficiency.
However, these methods, which are discussed further in
Section 2, are not directly applicable to semantic segmenta-
tion. First, we cannot simply discard certain tokens, as each
token represents an image region for which the semantic
segmentation task requires predictions. Second, existing to-
ken merging approaches allow any combination of tokens
to be merged, through multiple stages of the network. As
a result, ‘unmerging’ and spatial reorganization of tokens,
which is necessary because semantic segmentation requires
a prediction for each original token, is non-trivial.
In this work, we present a much simpler, yet highly ef-
fective and generally applicable token reduction approach
for ViT-based semantic segmentation networks. The goal of
this approach is to improve the efficiency without decreas-
ing the segmentation quality. Token reduction methods for
image classification already found that merging redundant
tokens only results in a limited accuracy drop [2, 21, 30].
For semantic segmentation, we observe that there are many
neighboring patches that contain the same semantic class,
and that the information contained by these patches is likely
redundant. Therefore, we hypothesize that neighboring
patches containing the same class can share a token with-
out negatively impacting the segmentation quality of a se-
mantic segmentation network. To leverage this, we propose
an approach that reduces tokens by (1) using a policy that
identifies which patches can share a token before any tokens
enter the ViT, and (2) grouping only rectangular neighbor-
ing regions, which allows for straightforward reassembling
of tokens at the output of the ViT backbone. The main chal-
lenge that we tackle is to find this policy without introduc-
ing a heavy computational burden.
We propose to tackle this policy challenge by turning
the problem into a simple binary classification task that
can be solved by a highly efficient CNN model, our pol-
icy network . Concretely, it predicts whether 2 ×2 neighbor-
ing patches contain the same class, and lets these patches
share a token if they do. Our complete approach, which
we call Content-aware Token Sharing (CTS), first applies
this policy network, then lets patches share tokens accord-
ing to the predicted policy, feeds the reduced set of tokens
through the ViT, and finally makes a semantic segmenta-
tion prediction using these tokens (see Section 3). CTS has
several advantages by design. First, it does not require any
modifications to the ViT architecture or its training strategy
for pre-training or fine-tuning. As a result, it is compati-
ble with all backbones that purely use global self-attention,
as well as any semantic segmentation decoder or advanced
pre-training strategy, e.g., BEiT [1]. Second, by reducing
the number of tokens before inputting them to the ViT, the
efficiency improvement is larger than when token reduction
is applied gradually in successive stages of the ViT, as done
in existing methods for image classification [2, 49]. Both
advantages are facilitated by our policy network, which
is trained separately from the ViT, and is so efficient that
it only introduces a marginal computational overhead for
large ViTs applied to high-resolution images.
With experiments detailed in Section 4, we show that our
CTS approach can reduce the total number of processed to-
kens by at least 30% without decreasing the segmentation
quality, for multiple datasets. We also show that this holds
for transformer backbones of many different sizes, initial-
ized with different pre-trained weights, and using various
segmentation decoders. For more detailed results, we refer
to Section 5. With this work, we aim to provide a foundation
for future research on efficient transformer architectures for
per-pixel prediction tasks. The code for this work is avail-
able through https://tue-mps.github.io/CTS .
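A minimal sketch of the token-sharing step (our illustration, not the released CTS code) is given below: given per-patch embeddings on an H x W grid and a binary policy over 2x2 superpatches, shareable superpatches are averaged into a single token, while an index map records how to scatter per-token outputs back to patches:

```python
# Minimal sketch of content-aware token sharing on a patch grid.
import torch

def share_tokens(patch_tokens, policy):
    """patch_tokens: (H, W, C); policy: (H//2, W//2) bool, True = share."""
    H, W, C = patch_tokens.shape
    tokens = []
    index_map = torch.empty(H, W, dtype=torch.long)  # token index per patch (for un-sharing)
    for i in range(0, H, 2):
        for j in range(0, W, 2):
            block = patch_tokens[i:i + 2, j:j + 2].reshape(-1, C)
            if policy[i // 2, j // 2]:
                index_map[i:i + 2, j:j + 2] = len(tokens)
                tokens.append(block.mean(dim=0))          # one shared token
            else:
                for k in range(4):
                    index_map[i + k // 2, j + k % 2] = len(tokens)
                    tokens.append(block[k])               # four separate tokens
    return torch.stack(tokens), index_map

x = torch.randn(8, 8, 16)
policy = torch.rand(4, 4) > 0.5
tokens, index_map = share_tokens(x, policy)
print(tokens.shape)  # (M, 16) with M <= 64; index_map lets outputs be scattered back
```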
To summarize, the contributions of this work are:
• A generally applicable token sharing framework that
lets semantically similar neighboring image patches
share a token, improving the efficiency of ViT-based
semantic segmentation networks without reducing the
segmentation quality.
• A content-aware token sharing policy network that ef-
ficiently classifies whether a set of neighboring patches
should share a token or not.
• A comprehensive set of experiments with which we
show the effectiveness of the proposed CTS ap-
proach on the ADE20K [55], Pascal Context [27], and
Cityscapes [10] datasets, for a wide range of different
ViTs and decoders.
|
Ngo_ISBNet_A_3D_Point_Cloud_Instance_Segmentation_Network_With_Instance-Aware_CVPR_2023
|
Abstract
Existing 3D instance segmentation methods are predomi-
nated by the bottom-up design – manually fine-tuned algo-
rithm to group points into clusters followed by a refinement
network. However, by relying on the quality of the clus-
ters, these methods generate susceptible results when (1)
nearby objects with the same semantic class are packed
together, or (2) large objects with loosely connected re-
gions. To address these limitations, we introduce ISBNet, a
novel cluster-free method that represents instances as ker-
nels and decodes instance masks via dynamic convolution.
To efficiently generate high-recall and discriminative ker-
nels, we propose a simple strategy named Instance-aware
Farthest Point Sampling to sample candidates and lever-
age the local aggregation layer inspired by PointNet++ to
encode candidate features. Moreover, we show that pre-
dicting and leveraging the 3D axis-aligned bounding boxes
in the dynamic convolution further boosts performance.
Our method set new state-of-the-art results on ScanNetV2
(55.9), S3DIS (60.8), and STPLS3D (49.2) in terms of AP
and retains fast inference time (237ms per scene on Scan-
NetV2). The source code and trained models are available at
https://github.com/VinAIResearch/ISBNet .
|
1. Introduction
3D instance segmentation (3DIS) is a core problem of
deep learning in the 3D domain. Given a 3D scene repre-
sented by a point cloud, we seek to assign each point with
a semantic class and a unique instance label. 3DIS is an
important 3D perception task and has a wide range of ap-
plications in autonomous driving, augmented reality, and
robot navigation where point cloud data can be leveraged to
complement the information provided by 2D images. Com-
pared to 2D image instance segmentation (2DIS), 3DIS is
arguably harder due to much higher variations in appearance
and spatial extent along with unequal distribution of point
cloud, i.e., dense near object surface and sparse elsewhere.
Thus, it is not trivial to apply 2DIS methods to 3DIS.
Figure 1. In DyCo3D [16], kernel prediction quality is greatly affected by the centroid-based clustering algorithm, which has two issues: (1) mis-grouping nearby instances and (2) over-segmenting a large object into multiple fragments. Our method addresses these issues by instance-aware point sampling, achieving far better results. Each sample point aggregates information from its local context to generate a kernel for predicting its own object mask, and the final instances will be filtered and selected by an NMS.
A typical approach for 3DIS, DyCo3D [ 16], adopts dy-
namic convolution [ 33,37] to predict instance masks. Specif-
ically, points are clustered, voxelized, and passed through a
3D Unet to generate instance kernels for dynamic convolu-
tion with the feature of all points in the scene. This approach
is illustrated in Fig. 2 (a). However, this approach has several
limitations. First, the clustering algorithm heavily relies on
the centroid-offset prediction whose quality deteriorates sig-
nificantly when: (1) objects are densely packed so that two
objects can be mistakenly grouped together as one object, or
(2) large objects whose parts are loosely connected resulting
in different objects when clustered. These two scenarios
are visualized in Fig. 1. Second, the points’ feature mostly
encodes object appearance which is not distinct enough for
separating different instances, especially between objects
having the same semantic class.
To address the limitations of DyCo3D [ 16], we propose
ISBNet, a cluster-free framework for 3DIS with Instance-
aware Farthest Point Sampling and Box-aware Dynamic
Convolution. First, we revisit the Farthest Point Sampling
(FPS) [ 10] and the clustering method in [ 5,16,34] and find
that these algorithms generate considerably low instance
recall. As a result, many objects are omitted in the sub-
sequent stage, leading to poor performance. Motivated by
this, we propose our Instance-aware Farthest Point Sampling
(IA-FPS), which aims to sample query candidates in a 3D
scene with high instance recall. We then introduce our Point
Aggregator, incorporating the IA-FPS with a local aggrega-
tion layer to encode semantic features, shapes, and sizes of
instances into instance-wise features.
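For reference, the sketch below shows vanilla farthest point sampling with an optional candidate mask; restricting the candidates to predicted foreground points is only meant to convey the spirit of Instance-aware FPS and is our assumption, not the paper's exact procedure:

```python
# Vanilla farthest point sampling, optionally restricted to candidate points.
import numpy as np

def farthest_point_sampling(points, num_samples, mask=None):
    """points: (N, 3); mask: optional (N,) bool of candidate (e.g. foreground) points."""
    candidates = np.flatnonzero(mask) if mask is not None else np.arange(len(points))
    pts = points[candidates]
    selected = [0]                                   # start from an arbitrary candidate
    dist = np.linalg.norm(pts - pts[0], axis=1)
    for _ in range(num_samples - 1):
        nxt = int(np.argmax(dist))                   # farthest from the selected set so far
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return candidates[np.array(selected)]

pts = np.random.rand(1000, 3)
fg = pts[:, 0] > 0.5                                 # stand-in for a foreground prediction
idx = farthest_point_sampling(pts, 16, mask=fg)
print(idx.shape)  # (16,)
```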
Additionally, the 3D bounding box of the object is an
existing supervision but has not yet been explored in the
3D instance segmentation task. Therefore, we add an auxil-
iary branch to our model to jointly predict the axis-aligned
bounding box and the binary mask of each instance. The
ground-truth axis-aligned bounding box is deduced from the
existing instance mask label. Unlike Mask-DINO [ 25] and
CondInst [ 33], where the auxiliary bounding box prediction
is just used as a regularization of the learning process, we
leverage it as an extra geometric cue in the dynamic convo-
lution, thus further boosting the performance of the instance
segmentation task.
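The following sketch illustrates dynamic-convolution mask decoding with box-derived geometric features appended to the point features, as described above; the feature dimensions and the use of simple box offsets are assumptions made for illustration:

```python
# Hedged sketch of dynamic convolution for mask decoding: each candidate
# instance predicts a kernel that is applied (here as a linear map) to the
# per-point features to produce its binary mask logits.
import torch

N, C, K = 2048, 32, 16                         # points, feature dim, instance candidates
point_feats = torch.randn(N, C)
box_geo = torch.randn(N, 6)                    # e.g. per-point offsets to a predicted axis-aligned box
feats = torch.cat([point_feats, box_geo], dim=1)   # (N, C + 6) box-aware point features
kernels = torch.randn(K, C + 6)                # one dynamic kernel per sampled instance
mask_logits = feats @ kernels.t()              # (N, K): per-point logits for each instance
masks = mask_logits.sigmoid() > 0.5
print(masks.shape)                             # (2048, 16)
```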
To evaluate the performance of our approach, we conduct
extensive experiments on three challenging datasets: Scan-
NetV2 [ 8], S3DIS [ 1], and STPLS3D [ 4]. ISBNet not only
achieves the highest accuracy among these three datasets,
surpassing the strongest method by +2.7/3.4/3.0 on Scan-
NetV2, S3DIS, and STPLS3D, but also demonstrates to be
highly efficient, running at 237ms per scene on ScanNetV2.
In summary, the contributions of our work are as follows:
•We propose ISBNet, a cluster-free paradigm for 3DIS,
that leverages Instance-aware Farthest Point Sampling and
Point Aggregator to generate an instance feature set.
•We first introduce using the axis-aligned bounding box
as an auxiliary supervision and propose the Box-aware
Dynamic Convolution to decode instance binary masks.
•ISBNet achieves state-of-the-art performance on three dif-
ferent datasets: ScanNetV2, S3DIS, and STPLS3D with-
out comprehensive modifications of the model architecture
and hyper-parameter tuning for each dataset.
In the following, Sec. 2 reviews prior work; Sec. 3 speci-
fies our approach; and Sec. 4 presents our implementation
details and experimental results. Sec. 5 concludes with some
remarks and discussions.
|
Peng_Trajectory-Aware_Body_Interaction_Transformer_for_Multi-Person_Pose_Forecasting_CVPR_2023
|
Abstract
Multi-person pose forecasting remains a challenging
problem, especially in modeling fine-grained human body
interaction in complex crowd scenarios. Existing methods
typically represent the whole pose sequence as a tempo-
ral series, yet overlook interactive influences among peo-
ple based on skeletal body parts. In this paper, we propose
a novel Trajectory-Aware BodyInteraction Trans former
(TBIFormer ) for multi-person pose forecasting via effec-
tively modeling body part interactions. Specifically, we con-
struct a Temporal Body Partition Module that transforms
all the pose sequences into a Multi-Person Body-Part se-
quence to retain spatial and temporal information based on
body semantics. Then, we devise a Social Body Interaction
Self-Attention (SBI-MSA) module, utilizing the transformed
sequence to learn body part dynamics for inter- and intra-
individual interactions. Furthermore, different from prior
Euclidean distance-based spatial encodings, we present a
novel and efficient Trajectory-Aware Relative Position En-
coding for SBI-MSA to offer discriminative spatial infor-
mation and additional interactive clues. On both short-
and long-term horizons, we empirically evaluate our frame-
work on CMU-Mocap, MuPoTS-3D as well as synthesized
datasets (6 ∼10 persons), and demonstrate that our method
greatly outperforms the state-of-the-art methods.
†Corresponding author.
|
1. Introduction
Recent years have seen a proliferation of work on the
topic of human motion prediction [4,6, 7,13, 24,25,28, 34],
which aims to forecast future poses based on past obser-
vations. Similarly, understanding and forecasting human
motion plays a critical role in the field of artificial intelli-
gence and computer vision, especially for robot planning,
autonomous driving, and video surveillance [8, 14, 21, 44].
Although encouraging progress has been achieved, the cur-
rent methods are mostly based on local pose dynamics fore-
casting without considering global position changes of body
joints (global body trajectory) and often tackle the prob-
lem of single humans in isolation while overlooking human-
human interaction. Actually, in real-world scenarios, each
person may interact with one or more people, ranging from
low to high levels of interactivity with instantaneous and
deferred mutual influences [2, 31]. As illustrated in Fig. 1
(a), two individuals are pushing and shoving with high in-
teraction, whilst a third individual is strolling with no or low
interaction. Thus, accurately forecasting pose dynamics and
trajectory and comprehensively considering complex social
interactive factors are imperative for understanding human
behavior in multi-person motion prediction. However, ex-
isting solutions do not efficiently address these challenging
factors. For example, Guo et al . [15] propose a collabo-
rative prediction task and perform future motion prediction
for only two interacted dancers, which inevitably ignores
low interaction influence on one’s future behavior. Wang
et al. [39] use local and global Transformers to learn indi-
vidual motion and social interactions separately in a crowd
scene. The aforementioned methods ignore the interactive
influences of body parts and only learn temporal and social
relationships without modeling fine-grained body interac-
tion, which makes it difficult to capture complex interaction
dependencies.
To solve this issue, we propose a novel Transformer-
based framework, termed TBIFormer, which consists of
multiple stacked TBIFormer blocks and a Transformer de-
coder. In particular, each TBIFormer block contains a
Social Body Interaction Multi-Head Self-Attention (SBI-
MSA) module, which aims at learning body part dynam-
ics across inter- and intra-individuals and capturing fine-
grained skeletal body interaction dependencies in complex
crowd scenarios as shown in Fig. 1 (b). More specifically,
SBI-MSA learns body parts dynamics across temporal and
social dimensions by measuring motion similarity of body
parts rather than pose similarity of the entire body. In addi-
tion, a Trajectory-Aware Relative Position Encoding is in-
troduced for SBI-MSA as a contextual bias to provide addi-
tional interactive clues and discriminative spatial informa-
tion, which is more robust and accurate than the Euclidean
distance-based spatial encodings.
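As an illustration of attention with an additive positional bias in this spirit, the sketch below adds a pairwise bias to the attention logits over body-part tokens; the bias is naively derived from trajectory distances here, which is our placeholder rather than the paper's trajectory-aware encoding:

```python
# Hedged sketch of multi-head self-attention over body-part tokens with an
# additive relative-position bias on the attention logits.
import torch
import torch.nn.functional as F

def attention_with_bias(x, bias, num_heads=4):
    """x: (N, C) body-part tokens; bias: (N, N) relative-position bias."""
    N, C = x.shape
    d = C // num_heads
    qkv = torch.nn.Linear(C, 3 * C)(x).reshape(N, 3, num_heads, d).permute(1, 2, 0, 3)
    q, k, v = qkv[0], qkv[1], qkv[2]                       # each (heads, N, d)
    attn = q @ k.transpose(-2, -1) / d ** 0.5 + bias       # bias added to the logits
    out = (F.softmax(attn, dim=-1) @ v).transpose(0, 1).reshape(N, C)
    return out

tokens = torch.randn(30, 64)                               # e.g. 6 body parts x 5 persons
traj = torch.randn(30, 2)                                  # per-token trajectory descriptor
bias = -torch.cdist(traj, traj)                            # closer trajectories -> larger bias
print(attention_with_bias(tokens, bias).shape)             # (30, 64)
```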
In order to feed the TBIFormer a pose sequence contain-
ing both temporal and spatial information, an intuitive way
is to retain body joints in time series. However, this strat-
egy will suffer from noisy joints caused by noisy sensor in-
puts or inaccurate estimations. In this work, we propose a
Temporal Body Partition Module (TBPM) that, based on
human body semantics, transforms the original pose se-
quence into a new one, enhancing the network’s capacity
for modeling interactive body parts. Then, we concatenate
the transformed sequences for all people one by one to gen-
erate a Multi-Person Body Part (MPBP) sequence for input
of TBIFormer blocks, which enables the model to capture
dependencies of interacting body parts between individu-
als. TBIFormer makes MPBP sequence suitable for motion
prediction by utilizing positional and learnable encodings to
indicate to whom each body part and timestamp belongs.
Finally, a Transformer decoder is used to further con-
sider the relations between the current and historical context
across individuals’ body parts toward predicting smooth and
accurate multi-person poses and trajectories. For multi-
person motion prediction (with 2 ∼3 persons), we evaluate
our method on multiple datasets, including CMU-Mocap
[9] with UMPM [35] augmented and MuPoTS-3D [30].
Besides, we extend our experiment by mixing the above
datasets with the 3DPW [38] dataset to perform prediction
in a more complex scene (with 6 ∼10 persons). Our method
outperforms the state-of-the-art approaches for both short-
and long-term predictions by a large margin, with 14.4%∼16.5% accuracy improvement for the short-term (≤1.0s) and 6.5%∼18.2% accuracy improvement for the long-term (1.0s∼3.0s).
To summarize, our key contributions are as follows: 1)
We propose a novel Transformer-based framework for ef-
fective multi-person pose forecasting and devise a Tempo-
ral Body Partition Module that transforms the original pose
sequence into a Multi-Person Body-Part sequence to retain
both temporal and spatial information. 2) We present a
novel Social Body Interaction Multi-Head Self-Attention
(SBI-MSA) that learns body part dynamics across inter-
and intra-individuals and captures complex interaction de-
pendencies. 3) A novel Trajectory-Aware Relative Posi-
tion Encoding is introduced for SBI-MSA to provide dis-
criminative spatial information and additional interactive
clues. 4) On multiple multi-person motion datasets, the
proposed TBIFormer significantly outperforms the state-of-
the-art methods.
|
Petrov_Object_Pop-Up_Can_We_Infer_3D_Objects_and_Their_Poses_CVPR_2023
|
Abstract
The intimate entanglement between objects affordances
and human poses is of large interest, among others, for
behavioural sciences, cognitive psychology, and Computer
Vision communities. In recent years, the latter has de-
veloped several object-centric approaches: starting from
items, learning pipelines synthesizing human poses and dy-
namics in a realistic way, satisfying both geometrical and
functional expectations. However, the inverse perspective
is significantly less explored: Can we infer 3D objects and
their poses from human interactions alone? Our investiga-
tion follows this direction, showing that a generic 3D hu-
man point cloud is enough to pop up an unobserved ob-
ject, even when the user is just imitating a functionality
(e.g., looking through a binocular) without involving a tan-
gible counterpart. We validate our method qualitatively and
quantitatively, with synthetic data and sequences acquired
for the task, showing applicability for XR/VR.
|
1. Introduction
Complex interactions with the world are among the
unique skills distinguishing humans from other living be-
ings. Even though our perception might be imperfect (we
cannot hear ultrasonic sounds or see ultraviolet light [49]),
our cognitive representation is enriched with a functional
perspective, i.e., potential ways of interacting with objects
or, as introduced by Gibson and colleagues, the affordance
of the objects [19]. Several behavioural studies confirmed
the centrality of this concept [2, 9, 43], which plays a fun-
damental role also for kids’ development [2]. Computer
Vision is well-aware that the function of an object com-
plements its appearance [21], and exploited this in tasks
like human and object reconstruction [62]. Previous litera-
ture approaches the interaction analysis from an object per-
spective (i.e., given an object, analyze the human interac-
tion) [8,22,63], building object-centric priors [73], generat-
ing realistic grasps given the object [13, 34], reconstructing
hand-object interactions [8, 22, 63]. Namely, objects induce
functionality, so a human interaction (e.g., a mug suggests
Figure 2. Human-Centric vs. Object-Centric inference. A common perspective in the prior work is to infer affordances or human pose starting from an object. We explore the inverse, Human-Centric perspective of the human-object interaction relationship: given the human, we are interested in predicting the object.
a drinking action; a handle a grasping one). For the first
time, our work reverts the perspective, suggesting that ana-
lyzing human motion and behaviour is naturally a human-
centric problem (i.e., given a human interaction, what kind
of functionality is it suggesting, Fig. 2). Moving the first
step in this new research direction, we pose a fundamen-
tal question: Can we infer 3D objects and their poses from
human interactions alone?
At first sight, the problem seems particularly hard and
significantly under-constrained since several geometries
might fit the same action. However, the human body com-
plements this information in several ways: physical rela-
tions, characteristic poses, or body dynamics serve as valu-
able proxies for the involved functionality, as suggested by
Fig. 1. Such hints are so powerful that we can easily imag-
ine the kind of object and its location, even if such an object
does not exist. Furthermore, even if the given pose might
fit several possible solutions, our mind naturally comes to
the most natural one indicated by the observed behaviour .
It also suggests that solely focusing on the contact region
(an approach often preferred by previous works) is insuf-
ficient in this new viewpoint. The reification principle of
Gestalt psychology [37] highlights that ”the whole” arises
from ”the parts” and their relation. Similarly, in Fig. 3, the
hand grasps in B) pop up a binocular in our mind because
we naturally consider the relationship with the other body
parts.
Finally, moving to a human-centric perspective in
human-object interaction is critical for human studies and
daily-life applications. Modern systems for AR/VR [24],
and digital interaction [26] are both centered on humans,
often manipulating objects that do not have a real-world
counterpart. Learning to decode an object from human be-
haviour enables unprecedented applications.
To answer our question, we deploy a first straightforward
Figure 3. Reification principle. Gestalt theory suggests our perception considers the whole before the single parts (A). This served as inspiration for our work: the object can arise considering the body parts as a whole (B).
and effective pipeline to ”pop up” a rigid object from a 3D
human point cloud. Starting from the input human point
cloud and a class, we train an end-to-end pipeline to in-
fer object location. In case a temporal sequence of point
clouds is available, we suggest post-processing to avoid jit-
tering and inconsistencies in the predictions, showing the
relevance of this information to handle ambiguous poses.
We show promising results on previously unaddressed tasks
in digital and real-world scenes. Finally, our method allows
us to analyze different features of human behaviour, high-
light their contribution to object retrieval, and point to ex-
citing directions for future works.
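As a purely hypothetical illustration of such a predictor (the architecture, layer sizes, and translation-only output below are our assumptions, not the authors' model), a PointNet-style encoder can pool the human point cloud, fuse a class embedding, and regress an object location:

```python
# Hypothetical sketch: human point cloud + object class -> object location.
import torch
import torch.nn as nn

class ObjectPopUpSketch(nn.Module):
    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.class_emb = nn.Embedding(num_classes, 32)
        self.head = nn.Sequential(nn.Linear(feat_dim + 32, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, human_points, obj_class):
        # human_points: (B, N, 3); obj_class: (B,) long
        feat = self.point_mlp(human_points).max(dim=1).values   # permutation-invariant pooling
        return self.head(torch.cat([feat, self.class_emb(obj_class)], dim=-1))

model = ObjectPopUpSketch()
center = model(torch.randn(2, 1024, 3), torch.tensor([3, 7]))
print(center.shape)  # (2, 3) predicted object location per sample
```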
In summary, our main contributions are:
1. We formulate a novel problem, changing the perspec-
tive taken by previous works in the field, and open to a
yet unexplored research direction;
2. We introduce a method capable of predicting the object
starting from an input human point cloud;
3. We analyze different components of the human-object
relationship: the contribution of different pieces of in-
teractions (hands, body, time sequence), the point-wise
saliency of input points, and the confusion produced by
objects with similar functions.
All the code will be made publicly available.
|
Ma_OTAvatar_One-Shot_Talking_Face_Avatar_With_Controllable_Tri-Plane_Rendering_CVPR_2023
|
Abstract
Controllability, generalizability and efficiency are the
major objectives of constructing face avatars represented
by neural implicit field. However, existing methods have
not managed to accommodate the three requirements si-
multaneously. They either focus on static portraits, restrict-
ing the representation ability to a specific subject, or suffer
from substantial computational cost, limiting their flexibil-
ity. In this paper, we propose One-shot Talking face Avatar
(OTAvatar), which constructs face avatars by a general-
ized controllable tri-plane rendering solution so that each
personalized avatar can be constructed from only one por-
trait as the reference. Specifically, OTAvatar first inverts a
portrait image to a motion-free identity code. Second, the
identity code and a motion code are utilized to modulate
an efficient CNN to generate a tri-plane formulated volume,
which encodes the subject in the desired motion. Finally,
volume rendering is employed to generate an image in any
view. The core of our solution is a novel decoupling-by-
inverting strategy that disentangles identity and motion in
the latent code via optimization-based inversion. Benefit-
ing from the efficient tri-plane representation, we achieve
controllable rendering of generalized face avatar at 35FPS
on A100. Experiments show promising performance of
cross-identity reenactment on subjects out of the training
set and better 3D consistency. The code is available at
https://github.com/theEricMa/OTAvatar .
|
1. Introduction
Neural rendering has achieved remarkable progress and
promising results in 3D reconstruction. Thanks to the dif-
ferentiability in neuron computation, the neural rendering
*Equal contribution.
†Corresponding author.
Figure 1. OTAvatar animation results. The source subjects in HDTF [43] dataset are animated by OTAvatar using a single portrait as the reference. We use the pose and expression coefficients of 3DMM to represent motion and drive the avatar. Note that these subjects are not included in the training data of OTAvatar.
methods bypass the expensive cost of high-fidelity digi-
tal 3D modeling, thus attracting the attention of many re-
searchers from academia and industry. In this work, we aim
to generate a talking face avatar via neural rendering tech-
niques, which is controllable by driving signals like video
and audio segments. Such animation is expected to inherit
the identity from reference portraits, while the expression
and pose can keep in sync with the driving signals.
The early works about talking face focus on expression
animation consistent with driving signals in constrained
frontal face images [ 5,7,27]. It is then extended to in-the-
wild scenes with pose variations [ 18,35,46,47]. Many
previous works are 2D methods, where the image warping
adopted to stimulate the motion in 3D space is learned in
2D space [ 29,31,32,36,40]. These methods tend to col-
lapse under large pose variation. In contrast, there are 3D
methods that can address the pose problem, since the head
pose can be treated as a novel view. Both explicit [ 11,20]
and implicit 3D representations [ 23] are introduced to face
rendering [ 15,17,21,22,25,28,45]. However, these meth-
ods either overfit to a single identity or fail to produce high-
quality animation for different identities.
In this work, we propose a one-shot talking face avatar
(OTAvatar), which can generate mimic expressions with
good 3D consistency and be generalized to different iden-
tities with only one portrait reference. Some avatar anima-
tion examples are shown in Fig. 1. Given a single reference
image, OTAvatar can drive the subject with motion signals
to generate corresponding face images. We realize it un-
der the framework of volume rendering. Current methods
usually render static subjects [ 22,28]. Although there are
works [ 12,15,25,26,45] proposed to implement dynamic
rendering for face avatars, they need one model per subject.
Therefore, the generalization is poor. HeadNeRF [ 17] is a
similar work to us. However, its reconstruction results are
unnatural in the talking face case.
Our method is built on a 3D generative model pre-trained
on a large-scale face database to guarantee identity general-
ization ability. Besides, we employ a motion controller to
decouple the motion and identity in latent space when per-
forming the optimization-based GAN inversion, to make the
motion controllable and transferable to different identities.
The network architecture is compact to ensure inference ef-
ficiency. In the utilization of our OTAvatar, given a single
reference image, we fix the motion code predicted by the
controller and only optimize the identity code so that a new
avatar of the reference image can be constructed. Such a
disentanglement enables the rendering of any desired mo-
tion by simply alternating the motion-related latent code to
be that of the specific motion representation.
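A minimal sketch of this inversion step, with a toy linear generator standing in for the frozen tri-plane renderer, is shown below; only the identity code is optimized while the motion code stays fixed, and all shapes and hyper-parameters are placeholders:

```python
# Hedged sketch of optimization-based inversion with a fixed motion code.
import torch

generator = torch.nn.Linear(64 + 32, 3 * 16 * 16)          # stand-in for the frozen renderer
for p in generator.parameters():
    p.requires_grad_(False)

reference = torch.rand(3 * 16 * 16)                        # target portrait (flattened)
motion_code = torch.randn(32)                              # predicted by the motion controller, kept fixed
identity_code = torch.zeros(64, requires_grad=True)        # the only free variable
opt = torch.optim.Adam([identity_code], lr=1e-2)

for _ in range(200):
    rendering = generator(torch.cat([identity_code, motion_code]))
    loss = torch.nn.functional.mse_loss(rendering, reference)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, swapping `motion_code` for a new driving motion re-animates the
# same identity without further optimization.
```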
The major contributions of this work can be summarized
as follows.
• We make the first attempt for one-shot 3D face recon-
struction and motion-controllable rendering by taming
a pre-trained 3D generative model for motion control.
• We propose to decouple motion-related and motion-
free latent code in inversion optimization by prompt-
ing the motion fraction of latent code ahead of the op-
timization using a decoupling-by-inverting strategy.
• Our method can photo-realistically render any identity
with the desired expression and pose at 35FPS. The
experiment shows promising results of natural motion
and 3D consistency on both 2D and 3D datasets.
|
Li_Weakly_Supervised_Class-Agnostic_Motion_Prediction_for_Autonomous_Driving_CVPR_2023
|
Abstract
Understanding the motion behavior of dynamic environ-
ments is vital for autonomous driving, leading to increas-
ing attention in class-agnostic motion prediction in LiDAR
point clouds. Outdoor scenes can often be decomposed into
mobile foregrounds and static backgrounds, which enables
us to associate motion understanding with scene parsing.
Based on this observation, we study a novel weakly su-
pervised motion prediction paradigm, where fully or par-
tially (1%, 0.1%) annotated foreground/background binary
masks are used for supervision, rather than using expen-
sive motion annotations. To this end, we propose a two-
stage weakly supervised approach, where the segmentation
model trained with the incomplete binary masks in Stage1
will facilitate the self-supervised learning of the motion
prediction network in Stage2 by estimating possible mov-
ing foregrounds in advance. Furthermore, for robust self-
supervised motion learning, we design a Consistency-aware
Chamfer Distance loss by exploiting multi-frame informa-
tion and explicitly suppressing potential outliers. Compre-
hensive experiments show that, with fully or partially bi-
nary masks as supervision, our weakly supervised models
surpass the self-supervised models by a large margin and
perform on par with some supervised ones. This further
demonstrates that our approach achieves a good compro-
mise between annotation effort and performance.
|
1. Introduction
Understanding the dynamics of surrounding environ-
ments is vital for autonomous driving [27]. Particularly,
motion prediction, which generates the future positions of
objects from previous information, plays an important role
in path planning and navigation.
Classical approaches [7, 9, 47] achieve motion predic-
tion by object detection, tracking, and trajectory forecast-
Corresponding author: G. Lin. (e-mail: gslin@ntu.edu.sg)
Figure 1. Illustration of our weak supervision concept. Outdoor scenes can be decomposed into mobile foregrounds and static backgrounds, which enables us to achieve motion learning with fully or partially annotated FG/BG masks as weak supervision to replace expensive ground truth motion data. (a) Ground truth motion data (moving points are colored by their motion, static points are gray); (b) fully annotated foreground/background masks (purple: FG; cyan: BG); (c) partially annotated foreground/background masks.
ing. These detection-based approaches may fail when en-
countering unknown categories not included in training
data [42]. To address this issue, many approaches [8,41,42]
propose to directly estimate class-agnostic motion from
bird’s eye view (BEV) map of point clouds and achieve a
good trade-off between accuracy and computational cost.
However, sensors can not capture motion information in
complex environments [27], which makes motion data
scarce and expensive. Therefore, most existing real-world
motion data are produced by semi-supervised learning
methods with auxiliary information, e.g., KITTI [10, 27],
or bootstrapped from human-annotated object detection and
tracking data, e.g., Waymo [16]. To circumvent the de-
pendence on motion annotations, PillarMotion [26] utilizes
point clouds and camera images for self-supervised motion
learning. Although achieving promising results, there is
still a large performance gap between the self-supervised
method, PillarMotion, and fully supervised methods.
Outdoor scenes can often be decomposed into moving
objects and backgrounds [27], which enables us to asso-
ciate motion understanding with scene parsing. As shown
in Fig. 1 (a) and (b), with ego-motion compensation, motion
only exists in foreground points. Therefore, if we distin-
guish the mobile foregrounds from the static backgrounds,
we can focus on mining valuable dynamic motion super-
vision from these potentially moving foreground objects,
leading to more effective self-supervised motion learning.
Based on this intuition, we propose a novel weakly super-
vised paradigm, where expensive motion annotations are
replaced by fully or partially (1%, 0.1%) annotated fore-
ground/background (FG/BG) masks to achieve a good com-
promise between annotation effort and performance. To
this end, we design a two-stage weakly supervised motion
prediction approach, where we train a FG/BG segmenta-
tion network with partially annotated masks in Stage1 and
train a motion prediction network in Stage2. Specifically, in
Stage2, the segmentation network from Stage1 will gener-
ate foreground points for training samples, so that the mo-
tion prediction network can be trained on these foreground
points in a self-supervised manner.
In self-supervised 3D motion learning [18, 26, 44],
Chamfer distance (CD) is preferred. However, the CD
is sensitive to outliers [37]. Unfortunately, outliers are
common in our setting. This is partly due to the view-
changes, occlusions, and noise of point clouds and also
due to the possible errors in the FG points estimated by
the FG/BG segmentation network. To alleviate the impact
of outliers, we propose a novel Consistency-aware Cham-
fer Distance (CCD) loss. Different from the typical CD
loss, our CCD loss exploits supervision from multi-frame
point clouds and leverages multi-frame consistency to mea-
sure the confidence of points. By assigning uncertain points
lower weights, our CCD loss suppresses potential outliers.
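The sketch below shows a single-direction Chamfer term with per-point confidence weights, which conveys how uncertain points can be down-weighted; how the actual CCD loss derives those weights from multi-frame consistency is not reproduced here, so `weights` is a placeholder:

```python
# Sketch of a confidence-weighted (single-direction) Chamfer term.
import torch

def weighted_chamfer(pred, target, weights):
    """pred: (N, 3) warped points; target: (M, 3); weights: (N,) in [0, 1]."""
    d = torch.cdist(pred, target)          # (N, M) pairwise distances
    nearest = d.min(dim=1).values          # distance of each predicted point to its nearest target
    return (weights * nearest).sum() / weights.sum().clamp(min=1e-6)

pred = torch.rand(500, 3)
target = torch.rand(600, 3)
weights = torch.ones(500)                  # in the CCD loss these would reflect multi-frame consistency
print(weighted_chamfer(pred, target, weights))
```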
Our main contributions can be summarized as follows:
• Without using expensive motion data, we propose a
weakly supervised motion prediction paradigm with
fully or partially annotated foreground/background
(FG/BG) masks as supervision to achieve a good com-
promise between annotation effort and performance.
To the best of our knowledge, this is the first work on
weakly supervised class-agnostic motion prediction.
• By associating motion understanding with scene pars-
ing, we present a two-stage weakly supervised mo-
tion prediction approach, where the FG/BG segmen-
tation generated from Stage1 will facilitate the self-
supervised motion learning in Stage2.
• We design a novel Consistency-aware Chamfer Dis-
tance loss, where multi-frame information is used to
suppress potential outliers for robust self-supervised
motion learning.
• With FG/BG masks as weak supervision, our weakly
supervised models outperform the self-supervised
models by a large margin, and perform on par with
some supervised ones.
|
Ojha_Towards_Universal_Fake_Image_Detectors_That_Generalize_Across_Generative_Models_CVPR_2023
|
Abstract
With generative models proliferating at a rapid rate,
there is a growing need for general purpose fake image
detectors. In this work, we first show that the existing
paradigm, which consists of training a deep network for
real-vs-fake classification, fails to detect fake images from
newer breeds of generative models when trained to detect
GAN fake images. Upon analysis, we find that the result-
ing classifier is asymmetrically tuned to detect patterns that
make an image fake. The real class becomes a ‘sink’ class
holding anything that is not fake, including generated im-
ages from models not accessible during training. Build-
ing upon this discovery, we propose to perform real-vs-fake
classification without learning ; i.e., using a feature space
not explicitly trained to distinguish real from fake images.
We use nearest neighbor and linear probing as instantia-
tions of this idea. When given access to the feature space of
a large pretrained vision-language model, the very simple
baseline of nearest neighbor classification has surprisingly
good generalization ability in detecting fake images from a
wide variety of generative models; e.g., it improves upon the
SoTA [50] by +15.07 mAP and+25.90% acc when tested
on unseen diffusion and autoregressive models. Our code,
models, and data can be found at https://github.com/Yuheng-Li/UniversalFakeDetect
|
1. Introduction
The digital world finds itself being flooded with many
kinds of fake images these days. Some could be natural
images that are doctored using tools like Adobe Photoshop
[1, 49], while others could have been generated through a
machine learning algorithm. With the rise and maturity of
deep generative models [22,29,42], fake images of the latter
kind have caught our attention. They have raised excitement
because of the quality of images one can generate with ease.
They have, however, also raised concerns about their use
for malicious purposes [4]. To make matters worse, there
*Equal contribution
Figure 1. Using images from just one generative model, can we
detect images from a different type of generative model as fake?
is no longer a single source of fake images that needs to
be dealt with: for example, synthesized images could take
the form of realistic human faces generated using generative
adversarial networks [29], or they could take the form of
complex scenes generated using diffusion models [42, 45].
One can be almost certain that there will be more modes
of fake images coming in the future. With such a diversity,
our goal in this work is to develop a general purpose fake
detection method which can detect whether any arbitrary
image is fake, given access to only one kind of generative
model during training; see Fig. 1.
A common paradigm has been to frame fake image de-
tection as a learning based problem [10, 50], in which a
training set of fake and real images are assumed to be avail-
able. A deep network is then trained to perform real vs fake
binary classification. During test time, the model is used
to detect whether a test image is real or fake. Impressively,
this strategy results in an excellent generalization ability of
the model to detect fake images from different algorithms
within the same generative model family [50]; e.g., a clas-
sifier trained using real/fake images from ProGAN [28] can
accurately detect fake images from StyleGAN [29] (both
being GAN variants). However, to the best of our knowl-
edge, prior work has not thoroughly explored generalizabil-
ity across different families of generative models, especially
to ones not seen during training; e.g., will the GAN fake
classifier be able to detect fake images from diffusion mod-
els as well? Our analysis in this work shows that existing
methods do not attain that level of generalization ability.
Specifically, we find that these models work (or fail to
work) in a rather interesting manner. Whenever an image
contains the (low-level) fingerprints [25, 50, 52, 53] particu-
lar to the generative model used for training (e.g., ProGAN),
the image gets classified as fake. Anything else gets classi-
fied as real. There are two implications: (i) even if diffu-
sion models have a fingerprint of their own, as long as it is
not very similar to GAN’s fingerprint, their fake images get
classified as real; (ii) the classifier doesn’t seem to look for
features of the real distribution when classifying an image
as real; instead, the real class becomes a ‘sink class’ which
hosts anything that is not GAN’s version of fake image. In
other words, the decision boundary for such a classifier will
be closely bound to the particular fake domain.
We argue that the reason that the classifier’s decision
boundary is unevenly bound to the fake image class is be-
cause it is easy for the classifier to latch onto the low-level
image artifacts that differentiate fake images from real im-
ages. Intuitively, it would be easier to learn to spot the fake
pattern, rather than to learn all the ways in which an image
could be real. To rectify this undesirable behavior, we pro-
pose to perform real-vs-fake image classification using fea-
tures that are not trained to separate fake from real images.
As an instantiation of this idea, we perform classification
using the fixed feature space of a CLIP-ViT [24, 41] model
pre-trained on internet-scale image-text pairs. We explore
both nearest neighbor classification as well as linear prob-
ing on those features.
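A minimal sketch of the nearest-neighbor baseline is given below; `encode` is a stand-in for a frozen pre-trained encoder such as CLIP-ViT, and the Gaussian "images" are placeholders for the real and fake feature banks:

```python
# Sketch of nearest-neighbor real-vs-fake classification in a fixed feature space.
import numpy as np

def encode(images):
    # Placeholder: in practice, features from a frozen pre-trained encoder.
    return images.reshape(len(images), -1)

rng = np.random.default_rng(0)
real_bank = encode(rng.normal(0.0, 1.0, (100, 8, 8)))
fake_bank = encode(rng.normal(0.5, 1.0, (100, 8, 8)))

def predict_is_fake(image):
    f = encode(image[None])[0]
    d_real = np.min(np.linalg.norm(real_bank - f, axis=1))
    d_fake = np.min(np.linalg.norm(fake_bank - f, axis=1))
    return d_fake < d_real                 # closer to the fake bank -> classified as fake

print(predict_is_fake(rng.normal(0.5, 1.0, (8, 8))))
```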
We empirically show that our approach can achieve
significantly better generalization ability in detecting
fake images. For example, when training on real/fake
images associated with ProGAN [28] and evaluat-
ing on unseen diffusion and autoregressive model
(LDM+Glide+Guided+DALL-E) images, we obtain im-
provements over the SoTA [50] by (i) +15.05 mAP and
+25.90% acc with nearest neighbor and (ii) +19.49 mAP
and +23.39% acc with linear probing. We also study the in-
gredients that make a feature space effective for fake image
detection. For example, can we use any image encoder’s
feature space? Does it matter what domain of fake/real im-
ages we have access to? Our key takeaways are that while
our approach is robust to the breed of generative model
one uses to create the feature bank (e.g., GAN data can be
used to detect diffusion models’ images and vice versa), one
needs the image encoder to be trained on internet-scale data
(e.g., ImageNet [21] does not work).
In sum, our main contributions are: (1) We analyze the
limitations of existing deep learning based methods in de-
tecting fake images from unseen breeds of generative mod-
els. (2) After empirically demonstrating prior methods’
ineffectiveness, we present our theory of what could be
wrong with the existing paradigm. (3) We use that analy-
sis to present two very simple baselines for real/fake image
detection: nearest neighbor and linear classification. Our
approach results in state-of-the-art generalization perfor-
mance, which even the oracle version of the baseline (tun-ing its confidence threshold on the test set ) fails to reach.
(4) We thoroughly study the key ingredients of our method
which are needed for good generalizability.
|
Meng_On_Distillation_of_Guided_Diffusion_Models_CVPR_2023
|
Abstract
Classifier-free guided diffusion models have recently been
shown to be highly effective at high-resolution image genera-
tion, and they have been widely used in large-scale diffusion
*Work partially done during an internship at Google
frameworks including DALL·E 2, Stable Diffusion and Ima-
gen. However, a downside of classifier-free guided diffusion
models is that they are computationally expensive at infer-
ence time since they require evaluating two diffusion models,
a class-conditional model and an unconditional model, tens
to hundreds of times. To deal with this limitation, we pro-
pose an approach to distilling classifier-free guided diffusion
models into models that are fast to sample from: Given a
pre-trained classifier-free guided model, we first learn a sin-
gle model to match the output of the combined conditional
and unconditional models, and then we progressively distill
that model to a diffusion model that requires much fewer
sampling steps. For standard diffusion models trained on the
pixel-space, our approach is able to generate images visually
comparable to that of the original model using as few as 4
sampling steps on ImageNet 64x64 and CIFAR-10, achieving
FID/IS scores comparable to that of the original model while
being up to 256 times faster to sample from. For diffusion
models trained on the latent-space ( e.g., Stable Diffusion),
our approach is able to generate high-fidelity images using
as few as 1 to 4 denoising steps, accelerating inference by
at least 10-fold compared to existing methods on ImageNet
256x256 and LAION datasets. We further demonstrate the
effectiveness of our approach on text-guided image editing
and inpainting, where our distilled model is able to generate
high-quality results using as few as 2-4 denoising steps.
|
1. Introduction
Denoising diffusion probabilistic models (DDPMs) [ 4,37,
39,40] have achieved state-of-the-art performance on image
generation [ 22,26–28,31], audio synthesis [ 11], molecular
generation [ 44], and likelihood estimation [ 10]. Classifier-
free guidance [ 6] further improves the sample quality of
diffusion models and has been widely used in large-scale
diffusion model frameworks including GLIDE [ 23], Stable
Diffusion [ 28], DALL ·E2[ 26], and Imagen [ 31]. How-
ever, one key limitation of classifier-free guidance is its low
sampling efficiency—it requires evaluating two diffusion
models tens to hundreds of times to generate one sample.
This limitation has hindered the application of classifier-free
guidance models in real-world settings. Although distillation
approaches have been proposed for diffusion models [ 33,38],
these approaches are not directly applicable to classifier-free
guided diffusion models. To deal with this issue, we propose
a two-stage distillation approach to improving the sampling
efficiency of classifier-free guided models. In the first stage,
we introduce a single student model to match the combined
output of the two diffusion models of the teacher. In the sec-
ond stage, we progressively distill the model learned from
the first stage to a fewer-step model using the approach intro-
duced in [ 33]. Using our approach, a single distilled model is
able to handle a wide range of different guidance strengths,
allowing for the trade-off between sample quality and di-
versity efficiently. To sample from our model, we consider
existing deterministic samplers in the literature [ 33,38] and
further propose a stochastic sampling process.
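For reference, the standard classifier-free guidance combination that the first-stage student is trained to match can be sketched as follows; the two placeholder networks and the w-conditioned student mentioned in the comment are illustrative assumptions:

```python
# Sketch of the classifier-free-guided teacher output used as the distillation target.
import torch

def eps_cond(x, t, c):    # conditional noise prediction (placeholder network)
    return torch.tanh(x + c)

def eps_uncond(x, t):     # unconditional noise prediction (placeholder network)
    return torch.tanh(x)

def guided_eps(x, t, c, w):
    """Standard classifier-free guidance with strength w."""
    return (1.0 + w) * eps_cond(x, t, c) - w * eps_uncond(x, t)

x = torch.randn(4, 8)
target = guided_eps(x, t=0.5, c=1.0, w=2.0)   # distillation target for the student
# A single w-conditioned student eps_student(x, t, c, w) is then regressed onto
# `target` across randomly sampled guidance strengths w.
print(target.shape)
```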
Our distillation framework can not only be applied to stan-
dard diffusion models trained on the pixel-space [4, 36, 39],
but also diffusion models trained on the latent-space of an au-
toencoder [ 28,35](e.g., Stable Diffusion [ 28]). For diffusion
models directly trained on the pixel-space, our experiments
on ImageNet 64x64 and CIFAR-10 show that the proposed
distilled model can generate samples visually comparable to
that of the teacher using only 4 steps and is able to achieve
comparable FID/IS scores as the teacher model using as few
as 4 to 16 steps on a wide range of guidance strengths (see
Fig. 2). For diffusion models trained on the latent-space of an encoder [28, 35], our approach is able to achieve comparable visual quality to the base model using as few as 1 to 4 sampling steps (at least 10× fewer steps than the base model) on ImageNet 256×256 and LAION 512×512, matching the
performance of the teacher (as evaluated by FID) with only
2-4 sampling steps. To the best of our knowledge, our work
is the first to demonstrate the effectiveness of distillation
for both pixel-space and latent-space classifier-free diffusion
models. Finally, we apply our method to text-guided image
inpainting and text-guided image editing tasks [ 20], where
we reduce the total number of sampling steps to as few as 2-4
steps, demonstrating the potential of the proposed framework
in style-transfer and image-editing applications [ 20,41].
Figure 2. Class-conditional samples from our two-stage (deterministic) approach on ImageNet 64x64 for diffusion models trained on the pixel-space. By varying the guidance weight w, our distilled model is able to trade-off between sample diversity and quality, while achieving good results using as few as one sampling step.
|
Li_OVTrack_Open-Vocabulary_Multiple_Object_Tracking_CVPR_2023
|
Abstract
The ability to recognize, localize and track dynamic ob-
jects in a scene is fundamental to many real-world appli-
cations, such as self-driving and robotic systems. Yet, tra-
ditional multiple object tracking (MOT) benchmarks rely
only on a few object categories that hardly represent the
multitude of possible objects that are encountered in the real
world. This leaves contemporary MOT methods limited to
a small set of pre-defined object categories. In this paper,
we address this limitation by tackling a novel task, open-
vocabulary MOT, that aims to evaluate tracking beyond pre-
defined training categories. We further develop OVTrack, an
open-vocabulary tracker that is capable of tracking arbitrary
object classes. Its design is based on two key ingredients:
First, leveraging vision-language models for both classifi-
cation and association via knowledge distillation; second,
a data hallucination strategy for robust appearance feature
learning from denoising diffusion probabilistic models. The
result is an extremely data-efficient open-vocabulary tracker
that sets a new state-of-the-art on the large-scale, large-
vocabulary TAO benchmark, while being trained solely on
static images.
|
1. Introduction
Multiple Object Tracking (MOT) aims to recognize, lo-
calize and track objects in a given video sequence. It is a
cornerstone of dynamic scene analysis and vital for many
real-world applications such as autonomous driving, aug-
mented reality, and video surveillance. Traditionally, MOT
benchmarks [9, 11, 19, 64, 71] define a set of semantic cate-
gories that constitute the objects to be tracked in the training
and testing data distributions. The potential of traditional
MOT methods [3, 4, 33, 43] is therefore limited by the tax-
onomies of those benchmarks. As a consequence, contem-
porary MOT methods struggle with unseen events, leading
to a gap between evaluation performance and real-world deployment.
*Equal contribution.
Figure 1. OVTrack. We approach the task of open-vocabulary multiple object tracking. During training, we leverage vision-language (VL) models both for generating samples and knowledge distillation. During testing, we track both base and novel classes unseen during training by querying a vision-language model.
To bridge this gap, previous works have tackled MOT in
an open-world context. In particular, Ošep et al. [46, 48]
approach generic object tracking by first segmenting the
scene and performing tracking before classification. Other
works have used class agnostic localizers [10, 49] to per-
form MOT on arbitrary objects. Recently, Liu et al. [37]
defined open-world tracking, a task that focuses on the eval-
uation of previously unseen objects. In particular, it requires
any-object tracking as a stage that precedes object classifica-
tion. This setup comes with two inherent difficulties. First,
in an open-world context, densely annotating all objects
is prohibitively expensive. Second, without a pre-defined
taxonomy of categories, the notion of what is an object is am-
biguous. As a consequence, Liu et al. resort to recall-based
evaluation, which is limited in two ways. Penalizing false
positives (FP) becomes impossible, i.e. we cannot measure
the tracker precision. Moreover, by evaluating tracking in a
class-agnostic manner, we lose the ability to evaluate how
well a tracker can infer the semantic category of an object.
In this paper, we propose open-vocabulary MOT as an
effective solution to these problems. Similar to open-world
MOT, open-vocabulary MOT aims to track multiple objects
beyond the pre-defined training categories. However, instead
of dismissing the classification problem and resorting to
recall-based evaluation, we assume that at test time we are
given the classes of objects we are interested in. This allows
us to apply existing closed-set tracking metrics [34, 70] that
capture both precision and recall, while still evaluating the
tracker’s ability to track arbitrary objects during inference.
We further present the first Open-Vocabulary Tracker,
OVTrack (see Fig. 1). To this end, we identify and ad-
dress two fundamental challenges to the design of an open-
vocabulary multi-object tracker. The first is that closed-set
MOT methods are simply not capable of extending their
pre-defined taxonomies. The second is data availability, i.e.
scaling video data annotation to a large vocabulary of classes
is extremely costly. Inspired by existing works in open-
vocabulary detection [1, 13, 20, 76], we replace our classi-
fier with an embedding head, which allows us to measure
similarities of localized objects to an open vocabulary of
semantic categories. In particular, we distill knowledge from
CLIP [52] into our model by aligning the image feature
representations of object proposals with the corresponding
CLIP image and text embeddings.
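A rough sketch of this kind of vision-language distillation is given below; the tensor names and the particular combination of an image-embedding alignment term and a text-embedding classification term are illustrative assumptions, not the exact OVTrack objectives.

```python
import torch
import torch.nn.functional as F

def vl_distillation_loss(proposal_emb, clip_image_emb, clip_text_emb, labels, tau=0.07):
    """Align detector proposal embeddings with frozen CLIP embeddings.
    proposal_emb:   (N, D) features from the tracker's embedding head
    clip_image_emb: (N, D) CLIP image embeddings of the cropped proposals
    clip_text_emb:  (C, D) CLIP text embeddings of the C base-class prompts
    labels:         (N,)   ground-truth class indices of the proposals
    """
    p = F.normalize(proposal_emb, dim=-1)
    # image-level term: pull proposal features toward CLIP image features
    img_term = (1.0 - (p * F.normalize(clip_image_emb, dim=-1)).sum(dim=-1)).mean()
    # text-level term: classify proposals against the text embeddings
    logits = p @ F.normalize(clip_text_emb, dim=-1).t() / tau
    txt_term = F.cross_entropy(logits, labels)
    return img_term + txt_term
```

At test time, swapping `clip_text_emb` for embeddings of arbitrary prompts is what enables classification (and association) beyond the base categories.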
Beyond detection, association is the core of modern MOT
methods. It is driven by two affinity cues: motion and ap-
pearance. In an open-vocabulary context, motion cues are
brittle since arbitrary scenery contains complex and diverse
camera and object motion patterns. In contrast, diverse ob-
jects usually exhibit heterogeneous appearance. However,
relying on appearance cues requires robust representations
that generalize to novel object categories. We find that CLIP
feature distillation helps in learning better appearance rep-
resentations for improved association. This is especially
intriguing since object classification and appearance model-
ing are usually distinct in the MOT pipeline [3, 16, 65].
Learning robust appearance features also requires strong
supervision that captures object appearance changes in dif-
ferent viewpoints, background, and lighting. To approach
the data availability problem, we utilize the recent success of
denoising diffusion probabilistic models (DDPMs) in image
synthesis [54,59] and propose an effective data hallucination
strategy tailored to appearance modeling. In particular, from
a static image, we generate both simulated positive and nega-
tive instances along with random background perturbations.
The main contributions are summarized as follows:
1.We define the task of open-vocabulary MOT and pro-
vide a suitable benchmark setting on the large-scale,
large-vocabulary MOT benchmark TAO [9].
2.We develop OVTrack, the first open-vocabulary multi-
object tracker. It leverages vision-language models to improve both classification and association compared to closed-set trackers.
Figure 2. OVTrack qualitative results. We condition our tracker on text prompts unseen during training, namely ‘heron’, ‘hippo’ and ‘drone’, and successfully track the corresponding objects in the videos. The box color depicts object identity.
3.We propose an effective data hallucination strategy that
allows us to address the data availability problem in
open-vocabulary settings by leveraging DDPMs.
Owing to its thoughtful design, OVTrack sets a new state-
of-the-art on the challenging TAO benchmark [9], outper-
forming existing trackers by a significant margin while be-
ing trained on static images only . In addition, OVTrack
is capable of tracking arbitrary object classes (see Fig. 2),
overcoming the limitation of closed-set trackers.
|
Nassar_ProtoCon_Pseudo-Label_Refinement_via_Online_Clustering_and_Prototypical_Consistency_for_CVPR_2023
|
Abstract
Confidence-based pseudo-labeling is among the domi-
nant approaches in semi-supervised learning (SSL). It re-
lies on including high-confidence predictions made on un-
labeled data as additional targets to train the model. We
propose ProtoCon, a novel SSL method aimed at the less-
explored label-scarce SSL where such methods usually un-
derperform. ProtoCon refines the pseudo-labels by lever-
aging their nearest neighbours’ information. The neigh-
bours are identified as the training proceeds using an on-
line clustering approach operating in an embedding space
trained via a prototypical loss to encourage well-formed
clusters. The online nature of ProtoCon allows it to
utilise the label history of the entire dataset in one train-
ing cycle to refine labels in the following cycle without the
need to store image embeddings. Hence, it can seamlessly
scale to larger datasets at a low cost. Finally, ProtoCon
addresses the poor training signal in the initial phase of
training (due to fewer confident predictions) by introduc-
ing an auxiliary self-supervised loss. It delivers significant
gains and faster convergence over state-of-the-art across 5
datasets, including CIFARs, ImageNet and DomainNet.
|
1. Introduction
Semi-supervised Learning (SSL) [10, 40] leverages un-
labeled data to guide learning from a small amount of la-
beled data, thereby providing a promising alternative to
costly human annotations. In recent years, SSL frontiers
have seen substantial advances through confidence-based
pseudo-labeling [21, 22, 38, 42, 43]. In these methods,
a model iteratively generates pseudo-labels for unlabeled
samples which are then used as targets to train the model.
To overcome confirmation bias [1, 27], i.e., the model being
biased by training on its own wrong predictions, these meth-
ods only retain samples with high confidence predictions
for pseudo-labeling; thus ensuring that only reliable sam-
ples are used to train the model. While confidence works
well in moderately labeled data regimes, it usually struggles in label-scarce settings1. This is primarily because the model becomes over-confident about the more distinguishable classes [17, 28] faster than others, leading to a collapse.
Figure 1. ProtoCon refines a pseudo-label of a given sample by knowledge of its neighbours in a prototypical embedding space. Neighbours are identified in an online manner using constrained K-means clustering. Best viewed zoomed in.
In this work, we propose ProtoCon, a novel method
which addresses such a limitation in label-scarce SSL. Its
key idea is to complement confidence with a label refine-
ment strategy to encourage more accurate pseudo-labels.
To that end, we perform the refinement by adopting a co-
training [5] framework: for each image, we obtain two dif-
ferent labels and combine them to obtain our final pseudo-
label. The first is the model’s softmax prediction, whereas
the second is an aggregate pseudo-label describing the im-
age’s neighbourhood based on the pseudo-labels of other
images in its vicinity. However, a key requirement for
the success of co-training is to ensure that the two labels
are obtained using sufficiently different image representa-
tions [40] to allow the model to learn based on their dis-
agreements. As such, we employ a non-linear projection
to map our encoder’s representation into a different embed-
1We denote settings with fewer than 10 images per class as “label-scarce.”
ding space. We train this projector jointly with the model
with a prototypical consistency objective to ensure it learns
a different, yet relevant, mapping for our images. Then we
define the neighbourhood pseudo-label based on the vicin-
ity in that embedding space. In essence, we minimise a
sample bias by smoothing its pseudo-label in class space
via knowledge of its neighbours in the prototypical space.
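The sketch below illustrates this refinement under simplifying assumptions: the neighbourhood pseudo-label of a sample is taken to be the mean prediction of its cluster, and the two labels are blended with a fixed weight. The exact aggregation and weighting used by ProtoCon may differ.

```python
import torch

def refine_pseudo_labels(probs, cluster_ids, alpha=0.5):
    """Blend each sample's own prediction with its neighbourhood pseudo-label.
    probs:       (N, C) softmax predictions of the classifier
    cluster_ids: (N,)   LongTensor of cluster indices in the prototypical space
    alpha:       weight of the sample's own prediction
    """
    num_clusters = int(cluster_ids.max().item()) + 1
    num_classes = probs.shape[1]
    # mean prediction per cluster = neighbourhood (cluster) pseudo-label
    cluster_sum = torch.zeros(num_clusters, num_classes).index_add_(0, cluster_ids, probs)
    cluster_cnt = torch.bincount(cluster_ids, minlength=num_clusters).clamp(min=1)
    neighbourhood = cluster_sum / cluster_cnt.unsqueeze(1).float()
    refined = alpha * probs + (1.0 - alpha) * neighbourhood[cluster_ids]
    return refined / refined.sum(dim=1, keepdim=True)
```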
Additionally, we design our method to be fully online,
enabling us to scale to large datasets at a low cost. We
identify neighbours in the embedding space on-the-fly as
the training proceeds by leveraging online K-means clus-
tering. This alleviates the need to store expensive image
embeddings [22], or to utilise offline nearest neighbour re-
trieval [23, 48]. However, applying naive K-means risks
collapsing to only a few imbalanced clusters making it less
useful for our purpose. Hence, we employ a constrained
objective [6] lower bounding each cluster size; thereby, en-
suring that each sample has enough neighbours in its clus-
ter. We show that the online nature of our method allows it
to leverage the entire prediction history in one epoch to re-
fine labels in the subsequent epoch at a fraction of the cost
required by other methods and with a better performance.
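As a minimal illustration of the online clustering component (with the cluster-size lower bound omitted for brevity), a mini-batch K-means update might look as follows; this is a generic streaming update, not ProtoCon's constrained formulation.

```python
import torch

def online_kmeans_step(centroids, feats, momentum=0.95):
    """One streaming K-means update on a mini-batch of embeddings.
    centroids: (K, D) current cluster centres (updated in place)
    feats:     (B, D) projected embeddings of the current batch
    Returns the batch assignments used to build neighbourhood pseudo-labels.
    """
    dists = torch.cdist(feats, centroids)          # (B, K) pairwise distances
    assign = dists.argmin(dim=1)                   # nearest-centre assignment
    for k in assign.unique():
        batch_mean = feats[assign == k].mean(dim=0)
        centroids[k] = momentum * centroids[k] + (1.0 - momentum) * batch_mean
    return assign
```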
ProtoCon’s final ingredient addresses another limita-
tion of confidence-based methods: since the model only re-
tains high confident samples for pseudo-labeling, the initial
phase of the training usually suffers from a weak training
signal due to fewer confident predictions. In effect, this
leads to only learning from the very few labeled samples
which destabilises the training potentially due to overfit-
ting [25]. To boost the initial training signal, we adopt a
self-supervised instance-consistency [9, 15] loss applied on
samples that fall below the threshold. Our choice of loss
is more consistent with the classification task as opposed
to contrastive instance discrimination losses [11, 16] which
treat each image as its own class. This helps our method to
converge faster without loss of accuracy.
We demonstrate ProtoCon’s superior performance against comparable state-of-the-art methods on 5 datasets including CIFAR, ImageNet and DomainNet. Notably, ProtoCon achieves 2.2% and 1% improvements on the SSL
ImageNet protocol with 0.2% and 1% of the labeled data,
respectively. Additionally, we show that our method ex-
hibits faster convergence and more stable initial train-
ing compared to baselines, thanks to our additional self-
supervised loss. In summary, our contributions are:
• We propose a memory-efficient method addressing
confirmation bias in label-scarce SSL via a novel la-
bel refinement strategy based on co-training.
• We improve training dynamics and convergence of
confidence-based methods by adopting self-supervised
losses to the SSL objective.
• We show state-of-the-art results on 5 SSL benchmarks.
|
Li_Towards_Benchmarking_and_Assessing_Visual_Naturalness_of_Physical_World_Adversarial_CVPR_2023
|
Abstract
Physical world adversarial attack is a highly practical
and threatening attack, which fools real world deep learn-
ing systems by generating conspicuous and maliciously
crafted real world artifacts. In physical world attacks, eval-
uating naturalness is highly emphasized since humans can
easily detect and remove unnatural attacks. However, cur-
rent studies evaluate naturalness in a case-by-case fash-
ion, which suffers from errors, bias and inconsistencies.
In this paper, we take the first step to benchmark and as-
sess visual naturalness of physical world attacks, taking
autonomous driving scenario as the first attempt. First,
to benchmark attack naturalness, we contribute the first
Physical Attack Naturalness (PAN) dataset with human rat-
ing and gaze. PAN verifies several insights for the first
time: naturalness is (disparately) affected by contextual
features (i.e., environmental and semantic variations) and
correlates with behavioral feature (i.e., gaze signal). Sec-
ond, to automatically assess attack naturalness that aligns
with human ratings, we further introduce Dual Prior Align-
ment (DPA) network, which aims to embed human knowl-
edge into model reasoning process. Specifically, DPA imi-
tates human reasoning in naturalness assessment by rating
prior alignment and mimics human gaze behavior by atten-
tive prior alignment. We hope our work fosters researches
to improve and automatically assess naturalness of physi-
cal world attacks. Our code and dataset can be found at
https://github.com/zhangsn-19/PAN.
|
1. Introduction
Extensive evidences have revealed the vulnerability of
deep neural networks (DNNs) towards adversarial attacks
[5, 17, 27, 37, 53, 72–74] in digital and physical worlds.
Different from digital world attacks which make pixelwise
perturbations, physical world adversarial attacks are es-
pecially dangerous, which fail DNNs by crafting specif-
†Corresponding author.
Figure 1. Overview of our work. To solve the problem in physical
world attack naturalness evaluation, we provide PAN dataset to
support this research. Based on PAN, we provide insights and
naturalness assessment methods of visual naturalness.
ically designed daily artifacts with adversarial capability
[2,13,31,48,51,59,70]. However, physical world attacks are
often conspicuous, allowing humans to easily identify and
remove such attacks in real-world scenarios. To sidestep
such defense, in 48 physical world attacks we surveyed¹, 20
papers (42%) emphasize their attack is natural and stealthy
to humans [9, 11, 22, 31, 49, 55].
Despite the extensive attention on visual naturalness,
studies on natural attacks follow an inconsistent and case-
by-case evaluation. Of the 20 surveyed papers claiming to be natural or stealthy², (1) 11 papers perform no experiment to
validate their claim. (2) 11 papers claim their attack closely
imitates natural image, but it was unclear if arbitrary natural
image indicates naturalness. (3) 5 papers validate natural-
ness by human experiments, yet follow very different eval-
uation schemes and oftentimes neglect the gap between ex-
isting attacks and natural images. These problems raise our
question: how natural indeed are physical world attacks?
¹See this survey in supplementary materials.
²A work can have multiple limitations.
In this paper, we take the first attempt to evaluate vi-
sual naturalness of physical world attacks in autonomous
driving [24], a field of attack with increasing attention
[11, 22, 55, 58, 70]. Since the factors and methods stud-
ied in our work are common in physical world attacks and
not limited to autonomous driving, our methods and find-
ings also have the potential to be applied to other scenarios.
The overview of our work is summarized in Fig. 1. To
benchmark attack naturalness, we contribute Physical At-
tack Naturalness (PAN) dataset, the first dataset to study
this problem. Specifically, PAN contains 2,688 images in
autonomous driving, with 5 widely used attacks, 2 benign
patterns ( i.e., no attacks) for comparison, 5 types of envi-
ronmental variations and 2 types of diversity enhancement
(semantic and model diversity). Data was collected from
126 participants, containing their subjective ratings as an
indicator of naturalness, and their gaze signal for all images
as an indicator of the selective attention area of humans when
they make naturalness ratings [66].
PAN provides a plethora of insights for the first time.
First, we find contextual features have significant effect
on naturalness, including semantic variations (using natu-
ral image to constrain attack) and environmental variations
(illumination, pitch/yaw angles, etc). Properly selecting en-
vironmental and semantic factors can improve naturalness
up to 34.73% and 8.09%, respectively. Second, we find
contextual features have a disparate impact on the naturalness of different attacks: some attacks might look more natural un-
der certain variations, which can lead to biased subjective
evaluation even under identical settings. Third, we find nat-
uralness is related to behavioral feature (i.e., human gaze). Specifically, we find attacks are considered less natural if human gaze is more centralized and focuses more on the vehicle (with statistical significance at p < .001). This correla-
tion suggests modelling and guiding human gaze can be a
feasible direction to improve attack naturalness.
Finally, since manually collecting naturalness ratings re-
quires human participation and can be laborious as well as
costly, based on PAN dataset, we propose Dual Prior Align-
ment (DPA), an objective naturalness assessment algorithm
that gives a cheap and fast naturalness estimate of physical
world attacks. DPA aims to improve attack result by em-
bedding human knowledge into the model. Specifically, to
align with human reasoning process, rating prior alignment
mimics the uncertainty and hidden desiderata when human
rates naturalness. To align with human attention, attentive
prior alignment corrects spurious correlations in models by
aligning model attention with human gaze. Extensive ex-
periments on the PAN dataset and the DPA method show that training DPA on PAN outperforms the best method trained on other datasets by 64.03%; based on PAN, DPA
improves 3.42% in standard assessment and 11.02% in gen-
eralization compared with the best baseline. We also make early attempts to improve naturalness by DPA.
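As one plausible instantiation of the attentive prior alignment idea (not necessarily the exact DPA loss), the model's spatial attention map can be matched to the recorded gaze heatmap with a distribution-matching term:

```python
import torch

def attentive_prior_alignment(model_attn, human_gaze, eps=1e-8):
    """Align a model attention map with a human gaze heatmap.
    model_attn, human_gaze: (B, H, W) non-negative maps.
    Returns KL(gaze || attention), averaged over the batch.
    """
    p = model_attn.flatten(1)
    q = human_gaze.flatten(1)
    p = p / (p.sum(dim=1, keepdim=True) + eps)   # normalize to distributions
    q = q / (q.sum(dim=1, keepdim=True) + eps)
    return (q * (torch.log(q + eps) - torch.log(p + eps))).sum(dim=1).mean()
```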
Our contributions can be summarized as follows:
We take the first step to evaluate naturalness of physi-
cal world attacks, taking autonomous driving as a first
attempt. Our methods and findings have the potential
to be applied to other scenarios.
We contribute PAN dataset, the first dataset that sup-
ports studying the naturalness of physical world at-
tacks via human rating and human gaze. PAN encour-
ages subsequent research on enhancing and assessing
naturalness of physical world attacks.
Based on PAN, we unveil insights of how contextual
and behavioral features affect attack naturalness.
To automatically assess image naturalness, we propose
the DPA method that embeds human behavior into model reasoning, resulting in better results and generalization.
|
Long_NeuralUDF_Learning_Unsigned_Distance_Fields_for_Multi-View_Reconstruction_of_Surfaces_CVPR_2023
|
Abstract
We present a novel method, called NeuralUDF , for re-
constructing surfaces with arbitrary topologies from 2D im-
ages via volume rendering. Recent advances in neural ren-
dering based reconstruction have achieved compelling re-
sults. However, these methods are limited to objects with
closed surfaces since they adopt Signed Distance Function
(SDF) as surface representation which requires the target
shape to be divided into inside and outside. In this pa-
per, we propose to represent surfaces as the Unsigned Dis-
tance Function (UDF) and develop a new volume rendering
scheme to learn the neural UDF representation. Specifi-
cally, a new density function that correlates the property of
UDF with the volume rendering scheme is introduced for
robust optimization of the UDF fields. Experiments on the
DTU and DeepFashion3D datasets show that our method
not only enables high-quality reconstruction of non-closed
shapes with complex topologies, but also achieves compa-
rable performance to the SDF based methods on the re-
construction of closed surfaces. Visit our project page at
https://www.xxlong.site/NeuralUDF/ .
|
1. Introduction
Reconstructing high-quality surfaces from multi-view
images is a long-standing problem in computer vision and
computer graphics. Neural implicit fields have become
an emerging trend in recent advances due to its superior
capability of representing surfaces of complex geometry.
NeRF [35] and its variants [2, 26, 27, 36, 40, 52] have re-
cently achieved compelling results in novel view synthesis.
For each point in the 3D space, NeRF-based methods learn
two neural implicit functions based on volume rendering: a
volume density function and a view-dependent color func-
tion. Despite their successes in novel view synthesis, NeRF-
based methods still struggle to faithfully reconstruct the ac-
*Corresponding authors.
†This work was conducted during an internship at Tencent Games.
Figure 1. We show three groups of multi-view reconstruction re-
sults generated by our proposed NeuralUDF and NeuS [46] re-
spectively. Our method is able to faithfully reconstruct the high-
quality geometries for both the closed and open surfaces, while
NeuS can only model shapes as closed surfaces, thus leading to
inconsistent topologies and erroneous geometries.
curate scene geometry from multi-view inputs, because of
the difficulty in extracting high-quality surfaces from the
representation of volume density.
V olSDF [49] and NeuS [46] incorporate the Signed Dis-
tance Function (SDF) into volume rendering to facilitate
high-quality surface reconstruction in a NeRF framework.
However, as a continuous function with clearly defined in-
side/outside, SDF is limited to modeling only closed water-
tight surfaces. Although there have been efforts to modify
the SDF representation by learning an additional truncation
function (e.g., 3PSDF [5], TSDF [44]), their surface rep-
resentations are still built upon the definition of SDF, thus
they are not suitable for representing complex topologies.
We therefore propose to employ the Unsigned Distance
Function (UDF) to represent surfaces in volume rendering.
With a surface represented by its zero level set without
signs, UDF is a unified representation with higher-degree
of freedom for both closed and open surfaces, thus making
it possible to reconstruct shapes with arbitrary topologies.
There are two major challenges in learning a neural UDF
field by volume rendering. First, UDF is not occlusion-
aware, while the formulation of volume rendering with dis-
tance fields requires estimating surface occlusion for points.
Considering a camera ray intersecting with a surface, SDF
assumes that the surface is closed and distinguishes the in-
side/outside with signs, so that a negative SDF value on the
ray clearly indicates an occluded point inside the surface.
In contrast, UDF does not impose the closed surface as-
sumption and always gives non-negative values along the
ray. Hence, the UDF value of a point alone cannot be used
to infer occlusion.
The second challenge is that the UDF is not differen-
tiable at its zero-level sets. The non-differentiability at the
zero-level sets imposes barriers in the learning of UDF field.
The gradients are ill-defined near the iso-surface, leading to
difficulty in optimization. Consequently, the distance field
surrounding the iso-surface is not accurate and the exact
zero-level set of a learned UDF cannot be identified in a
stable manner.
In this paper, we present a novel method for learning
neural UDF fields by volume rendering. We introduce a
density function that correlates the property of the UDF rep-
resentation with the volume rendering process, which effec-
tively tackles the aforementioned challenges induced by the
unsigned representation and enables robust learning of sur-
faces with arbitrary topologies. Experiments on DTU [15]
and DeepFashion3D [53] datasets show that our method not
only enables the high-quality reconstruction of non-closed
shapes with complex topologies, but also achieves compa-
rable performance to the SDF based methods on the recon-
struction of closed surfaces.
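For intuition, the toy renderer below shows how a density derived from unsigned distances plugs into standard volume rendering along a ray; the simple Laplace-style bump used here is only a stand-in, since the paper's actual density function is specifically designed to handle the occlusion and differentiability issues discussed above.

```python
import torch

def render_ray(udf_vals, colors, deltas, beta=0.05):
    """Composite colors along one ray using a UDF-derived density.
    udf_vals: (S,)   unsigned distances at the S ray samples
    colors:   (S, 3) predicted radiance at the samples
    deltas:   (S,)   spacing between consecutive samples
    """
    sigma = torch.exp(-udf_vals / beta) / beta       # peaks near the zero level set
    alpha = 1.0 - torch.exp(-sigma * deltas)         # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:1]), 1.0 - alpha + 1e-7])[:-1], dim=0)
    weights = alpha * trans                          # accumulated rendering weights
    return (weights.unsqueeze(-1) * colors).sum(dim=0)
```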
Our contributions can be summarized as:
• We incorporate the UDF into volume rendering, which
extends the representation of the underlying geometry
of neural radiance fields.
• We introduce an effective density function that corre-
lates the property of the UDF with the volume render-
ing process, thus enabling robust optimization of the
distance fields.
• Our method achieves SOTA results for reconstructing
high-quality surfaces with various topologies (closed
or open) using the UDF in volume rendering.
|
Pehlivan_StyleRes_Transforming_the_Residuals_for_Real_Image_Editing_With_StyleGAN_CVPR_2023
|
Abstract
We present a novel image inversion framework and a
training pipeline to achieve high-fidelity image inversion
with high-quality attribute editing. Inverting real images
into StyleGAN’s latent space is an extensively studied prob-
lem, yet the trade-off between the image reconstruction fi-
delity and image editing quality remains an open challenge.
The low-rate latent spaces are limited in their expressive-
ness power for high-fidelity reconstruction. On the other
hand, high-rate latent spaces result in degradation in edit-
ing quality. In this work, to achieve high-fidelity inversion,
we learn residual features in higher latent codes that lower
latent codes were not able to encode. This enables preserv-
ing image details in reconstruction. To achieve high-quality
editing, we learn how to transform the residual features
for adapting to manipulations in latent codes. We train
the framework to extract residual features and transform
them via a novel architecture pipeline and cycle consistency
losses. We run extensive experiments and compare our
method with state-of-the-art inversion methods. Qualitative
metrics and visual comparisons show significant improve-
ments. Code: https://github.com/hamzapehlivan/StyleRes
|
1. Introduction
Generative Adversarial Networks (GANs) achieve high
quality synthesis of various objects that are hard to distin-
guish from real images [14, 21, 22, 41, 43]. These networks
also have an important property that they organize their la-
tent space in a semantically meaningful way; as such, via
latent editing, one can manipulate an attribute of a gener-
ated image. This property makes GANs a promising tech-
nology for image attribute editing and not only for gener-
ated images but also for real images. However, for real im-
ages, one also needs to find the corresponding latent code
that will generate the particular real image. For this pur-
pose, different GAN inversion methods are proposed, aim-
ing to project real images to pretrained GAN latent space
[15, 30, 31, 33, 37]. Even though this is an extensively studied problem with significant progress, the trade-off between image reconstruction fidelity and image editing quality remains an open challenge.
Figure 1. Comparison of our method with e4e, HFGI, and HyperStyle for the pose, bob cut hairstyle, smile removal, and color change edits. Our method achieves high fidelity to the input and high quality edits.
The trade-off between image reconstruction fidelity and
image editing quality is referred to as the distortion-
editability trade-off [32]. Both are essential for real im-
age editing. However, it is shown that the low-rate latent
spaces are limited in their expressiveness power, and not
every image can be inverted with high fidelity reconstruc-
tion [1, 27, 32, 34]. For that reason, higher bit encodings
and more expressive style spaces are explored for image in-
version [1, 2]. Although with these techniques, images can
be reconstructed with better fidelity, the editing quality de-
creases since there is no guarantee that projected codes will
naturally lie in the generator’s latent manifold.
In this work, we propose a framework that achieves high
fidelity input reconstruction and significantly improved ed-
itability compared to the state-of-the-art. We learn residual
features in higher-rate latent codes that are missing in the
reconstruction of encoded features. This enables us to re-
construct image details and background information which
are difficult to reconstruct via low rate latent encodings. Our
architecture is single stage and learns the residuals based on
the encoded features from the encoder and generated fea-
tures from the pretrained GANs. We also learn a module
to transform the higher-latent codes if needed based on the
generated features (e.g. when the low-rate latent codes are
manipulated). This way, when low-rate latent codes are
edited for attribute manipulation, the decoded features can
adapt to the edits to reconstruct details. While the attributes
are not edited, the encoder can be trained with image recon-
struction and adversarial losses. On the other hand, when
the image is edited, we cannot use image reconstruction loss
to regularize the network to preserve the details. To guide
the network to learn correct transformations based on the
generated features, we train the model with adversarial loss
and cycle consistency constraint; that is, after we edit the
latent code and generate an image, we reverse the edit and
aim at reconstructing the original image. Since we do not
want our method to be limited to predefined edits, during
training, we simulate edits by randomly interpolating them
with sampled latent codes.
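A schematic version of this cycle constraint is sketched below with a placeholder encoder/generator interface (the encoder returns a low-rate code plus residual features, and the generator consumes both); the way the edit target is sampled is simplified relative to the paper.

```python
import torch

def cycle_consistency_loss(encoder, generator, x, w_sampled, alpha=0.5):
    """Simulate an edit, reverse it, and require reconstruction of the input.
    x:         input image batch
    w_sampled: randomly sampled latent codes used as the edit target
    """
    w, residual = encoder(x)                       # low-rate code + residual features
    delta = alpha * (w_sampled - w)                # simulated edit direction
    x_edit = generator(w + delta, residual)        # edited image
    w_back, residual_back = encoder(x_edit)        # re-encode the edited image
    x_cycle = generator(w_back - delta, residual_back)  # reverse the edit
    return torch.mean((x_cycle - x) ** 2)
```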
The closest to our approach is HFGI [34], which also
learns higher-rate encodings. Our framework is different as
we learn a single-stage architecture designed to learn fea-
tures that are missing from low-rate encodings and we learn
how to transform them based on edits. As shown in Fig.
1, our framework achieves significantly better results than
HFGI and other methods in editing quality. In summary,
our main contributions are:
• We propose a single-stage framework that achieves high-
fidelity input embedding and editing. Our framework
achieves that with a novel encoder architecture.
• We propose to guide the image projection with cycle con-
sistency and adversarial losses. We edit encoded images
by taking a step toward a randomly sampled latent code.
We expect to reconstruct the original image when the edit
is reversed. This way, edited images preserve the details
of the input image, and edits become high quality.
• We conduct extensive experiments to show the effective-
ness of our framework and achieve significant improvements over state-of-the-art for both reconstruction and
real image attribute manipulations.
|
Li_Spectral_Enhanced_Rectangle_Transformer_for_Hyperspectral_Image_Denoising_CVPR_2023
|
Abstract
Denoising is a crucial step for hyperspectral image
(HSI) applications. Though witnessing the great power of
deep learning, existing HSI denoising methods suffer from
limitations in capturing the non-local self-similarity. Trans-
formers have shown potential in capturing long-range de-
pendencies, but few attempts have been made with specifi-
cally designed Transformer to model the spatial and spec-
tral correlation in HSIs. In this paper, we address these
issues by proposing a spectral enhanced rectangle Trans-
former, driving it to explore the non-local spatial similar-
ity and global spectral low-rank property of HSIs. For
the former, we exploit the rectangle self-attention horizon-
tally and vertically to capture the non-local similarity in
the spatial domain. For the latter, we design a spectral
enhancement module that is capable of extracting global
underlying low-rank property of spatial-spectral cubes to
suppress noise, while enabling the interactions among non-
overlapping spatial rectangles. Extensive experiments have
been conducted on both synthetic noisy HSIs and real noisy
HSIs, showing the effectiveness of our proposed method in
terms of both objective metric and subjective visual quality.
The code is available at https://github.com/MyuLi/SERT.
|
1. Introduction
With sufficient spectral information, hyperspectral im-
ages (HSIs) can provide more detailed characteristics to
distinguish from different materials compared to RGB im-
ages. Thus, HSIs have been widely applied to face recog-
nition [37, 38], vegetation detection [4], medical diagno-
sis [43], etc. With scanning designs [2] and massive wave-
bands, the photon numbers in individual bands are limited.
HSI is easily degraded by various noise. Apart from poor
visual effects, such undesired degradation also negatively
affects the downstream applications. To obtain better visual
effects and performance in HSI vision tasks, denoising is a
fundamental step for HSI analysis and processing.
†Equal Contribution, ∗Corresponding Author
Similar to RGB images, HSIs have self-similarity in
the spatial domain, suggesting that similar pixels can be
grouped and denoised together. Moreover, since hyperspec-
tral imaging systems are able to acquire images at a nomi-
nal spectral resolution, HSIs have inner correlations in the
spectral domain. Thus, it is important to consider both spa-
tial and spectral domains when designing denoising meth-
ods for HSI. Traditional model-based HSI denoising meth-
ods [10, 17, 21] employ handcrafted priors to explore the
spatial and spectral correlations by iteratively solving the
optimization problem. Among these works, total variation
[20, 21, 52] prior, non-local similarity [19], low-rank [8, 9]
property, and sparsity [42] regularization are frequently uti-
lized. The performance of these methods relies on the ac-
curacy of handcrafted priors. In practical HSI denoising,
model-based methods are generally time-consuming and
have limited generalization ability in diverse scenarios.
To obtain robust learning for noise removal, deep learn-
ing methods [7,35,41,49] are applied to HSI denoising and
achieve impressive restoration performance. However, most
of these works utilize convolutional neural networks for fea-
ture extraction and depend on local filter response to sepa-
rate noise and signal in a limited receptive field.
Recently, vision Transformers have emerged with com-
petitive results in both high-level tasks [16, 39] and low-
level tasks [1,13,50], showing the strong capability of mod-
eling long-range dependencies in image regions. To di-
minish the unaffordable computation cost that grows quadratically with image size, many works have investigated the efficient design of spatial attention [11, 46, 47]. Swin Transformer [28] split feature maps into shifted square windows. CSWin Transformer [15] developed a stripe window across the feature maps to enlarge the attention area. As HSI usually
has large feature maps, exploring the similarity beyond
the noisy pixel can cause an unnecessary calculation burden. Thus, how to efficiently model the non-local spatial similarity is still challenging for an HSI denoising Transformer.
HSIs usually lie in a spectral low-rank subspace [9],
which can maintain the distinguished information and sup-
press noise. This indicates that the non-local spatial simi-
larity and low-rank spectral statistics should be jointly uni-
tized for HSI denoising. However, existing HSI denoising
methods [24, 45] mainly utilize the low-rank characteris-
tics through matrix factorization, which is based on a single
HSI and requires a long time to solve. The global low-rank
property in large datasets is hardly considered.
In this paper, we propose a Spectral Enhanced Rectangle
Transformer (SERT) for HSI denoising. To reinforce
model capacity with reasonable cost, we develop a multi-
shape rectangle self-attention module to comprehensively
explore the non-local spatial similarity. Besides, we ag-
gregate the most informative spectral statistics to suppress
noise in our spectral enhancement module, which projects
the spatial-spectral cubes into low-rank vectors with the as-
sistance of a global spectral memory unit. The spectral
enhancement module also provides interactions between
the non-overlapping spatial rectangles. With our proposed
Transformer, the spatial non-local similarity and global
spectral low-rank property are jointly considered to benefit
the denoising process. Experimental results show that our
method significantly outperforms the state-of-the-art meth-
ods in both simulated data and real noisy HSIs.
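To illustrate the rectangle attention idea, the helper below partitions a feature map into non-overlapping rectangles so that self-attention can be computed inside each one; merging the horizontal and vertical branches and the spectral enhancement module are omitted, and the rectangle shapes are assumed to divide the feature map evenly.

```python
import torch

def rectangle_partition(x, rect_h, rect_w):
    """Split a feature map into non-overlapping rectangles for window attention.
    x: (B, H, W, C) with H divisible by rect_h and W divisible by rect_w.
    Swapping rect_h and rect_w switches between horizontal and vertical rectangles.
    """
    B, H, W, C = x.shape
    x = x.view(B, H // rect_h, rect_h, W // rect_w, rect_w, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, rect_h * rect_w, C)
    return windows  # (B * num_rectangles, rect_h * rect_w, C)
```

Self-attention over `windows` then models similarity within each rectangle at a cost that grows with the rectangle size rather than the full feature-map size.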
Overall, our contributions can be summarized as follows:
• We propose a spectral enhanced rectangle Transformer
for HSI denoising, which can well exploit both the
non-local spatial similarity and global spectral low-
rank property of noisy images.
• We present a multi-shape rectangle spatial self-
attention module to effectively explore the comprehen-
sive spatial self-similarity in HSI.
• A spectral enhancement module with memory blocks
is employed to extract the informative low-rank vec-
tors from HSI cube patches and suppress the noise.
|
Li_Self-Supervised_Blind_Motion_Deblurring_With_Deep_Expectation_Maximization_CVPR_2023
|
Abstract
When taking a picture, any camera shake during the
shutter time can result in a blurred image. Recovering a
sharp image from the one blurred by camera shake is a chal-
lenging yet important problem. Most existing deep learning
methods use supervised learning to train a deep neural net-
work (DNN) on a dataset of many pairs of blurred/latent
images. In contrast, this paper presents a dataset-free deep
learning method for removing uniform and non-uniform
blur effects from images of static scenes. Our method in-
volves a DNN-based re-parametrization of the latent image,
and we propose a Monte Carlo Expectation Maximization
(MCEM) approach to train the DNN without requiring any
latent images. The Monte Carlo simulation is implemented
via Langevin dynamics. Experiments showed that the pro-
posed method outperforms existing methods significantly in
removing motion blur from images of static scenes.
|
1. Introduction
Motion blur occurs when the camera shakes during the
shutter time, resulting in a blurring effect. Blur is uniform
when the scene depth is constant and the camera moves along the im-
age plane. For other camera movements, the blur is non-
uniform. In dynamic scenes with moving objects, the blur
is also non-uniform. Different types of motion blur are illus-
trated in Figure 1. This paper aims to address the problem
of removing uniform and non-uniform motion blur caused
by camera shake from an image. Removing motion blur
from an image is a blind deblurring problem. It is a chal-
lenging task as it requires estimating two unknowns, the latent image and the blurring operator, from a single input.
Deep learning, particularly supervised learning, has re-
cently emerged as a powerful tool for solving various im-
age restoration problems, including blind deblurring. Many
of these works rely on supervised learning, as seen in
e.g. [1–15]. Typically, these supervised deep learning meth-
ods train a deep neural network (DNN) on a large number
of training samples, which consist of pairs of latent/blur im-
ages. Furthermore, to address general blurring effects, most
Figure 1. Different motion-blurring effects. (a)–(b) Uniform and
non-uniform blurring caused by camera shake; (c) Non-uniform
blurring of the dynamic scene (not addressed in this paper).
methods take a physics-free approach. In other words, these
methods directly learn a model that maps a blurred image to
a latent image without using any prior information about the
blurring process.
The advantage of a physics-free supervised learning
method is its ability to handle many types of motion blur
effects. However, it has a significant disadvantage: to
achieve good generalization, the training dataset must cover
all motion-blurring effects. Because motion blur is de-
termined by both 3D camera motion (six parameters) and
scene depth, which can vary significantly among images,
an enormous number of training samples are required to ad-
equately cover the motion blur effects. This task can be very
costly and challenging. One possible solution is to synthe-
size blurred images. However, as shown in [ 16], a model
trained on samples synthesized using existing techniques
(e.g. [17]) does not generalize well to real-world images.
Some approaches consider the physics of motion blur.
Phong et al. [18] proposed learning a family of blurring op-
erators in an encoded blur kernel space, and Li et al. [19]
proposed learning a more general class of degradation op-
erators from input images. However, these physics-aware
methods also rely on supervised learning and thus face the
same dataset limitations as the physics-free methods.
1.1. Discussion on existing dataset-free methods
Motivated by the challenge of practical data collec-
tion, there is a growing interest in relaxing the require-
ment for training data when developing deep learning solu-
tions for motion deblurring. Some approaches require spe-
cific data acquisition, such as multiple frames of the same
scene [ 20], while others are semi-supervised, relying on un-
paired training samples with ground truth images for train-
ing a GAN [ 21]. There are also a few works on dataset-free
deep learning methods for uniform blind image deblurring;
see e.g. [22–24].
When training a DNN to deblur an image without see-
ing ground truth images, it is necessary to incorporate prior
knowledge of the physics of motion blur. However, existing
dataset-free methods [ 22] for blind deblurring are limited to
uniform blur, where the blur process is modeled by a convolution: $g = k \otimes f$, where $f$ denotes the latent image, $g$ denotes the input, and $k$ denotes the blur kernel. Uniform mo-
tion blur only occurs when the scene depth is constant and
camera motion is limited to in-image translation, making it
not applicable to more complex camera motion. Moreover,
these methods have a lot of room for improvement, as they
do not achieve competitive performance compared to state-
of-the-art non-learning methods.
1.2. Main idea
In this paper, our goal is to develop a dataset-free deep
learning method for removing motion blur from a single im-
age, caused by general camera shake. Similar to existing
dataset-free methods, when training a DNN to deblur an
image without seeing any truth image, some prior knowl-
edge about the physics of motion blur needs to be utilized.
In this paper, we limit our study to recovering images of
static scenes without any moving objects. In our proposed
approach, we utilize the so-called space-variant overlap-
add (SVOLA) formulation [25, 26] to model motion blur of static scenes. This formulation describes the relationship between a blurred image $g$ and its corresponding latent image $f$ as follows:
$$g = \mathcal{F}(f, \mathcal{K}) + n = \sum_{i=1}^{P} k_i \otimes \big(w(\cdot - c_i) \odot P_i f\big) + n, \qquad (1)$$
Here, $\odot$ denotes entry-wise multiplication, $\otimes$ denotes convolution, and $P_i$ is a mask operator that extracts the $i$-th patch from the image. $k_i$ is the $i$-th kernel and $w(\cdot - c_i)$ is a window function that is translated to align with the center $c_i$ of the $i$-th image patch. The window function $w$ is normalized such that $\sum_{i=1}^{P} w(\cdot - c_i) = 1$, for example, using the 2D Modified Bartlett-Hanning window [27]. When all $P$ kernels $\{k_i\}_{i=1}^{P}$ are the same, the SVOLA model degenerates to the case of uniform blurring:
$$g = k \otimes f. \qquad (2)$$
For an SVOLA-based model, there are two unknowns: the latent image $f$ and the kernel set $\mathcal{K} = \{k_j\}_{j=1}^{P}$.
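A toy implementation of the SVOLA forward model in Eq. (1) is given below (noise omitted, patch masks folded into the window functions, and cross-correlation used in place of true convolution for simplicity); it is meant only to make the formulation concrete, not to reproduce the paper's implementation.

```python
import torch
import torch.nn.functional as F

def svola_blur(f, kernels, windows):
    """Blur each windowed region with its own kernel and sum the contributions.
    f:       (1, 1, H, W) latent image
    kernels: list of P tensors of shape (1, 1, k, k) with odd k
    windows: list of P tensors of shape (H, W); the translated window
             functions w(.-c_i), which sum to one over the image
    """
    g = torch.zeros_like(f)
    for k_i, w_i in zip(kernels, windows):
        patch = f * w_i                              # w(.-c_i) ⊙ P_i f
        pad = k_i.shape[-1] // 2
        g = g + F.conv2d(patch, k_i, padding=pad)    # k_i ⊗ (...)
    return g                                         # noise term n omitted
```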
Similar to existing works, such as Double-DIP for im-
age decomposition and Ren et al. for uniform deblurring,
we re-parameterize the latent image and kernel set using two DNNs. This DNN-based re-parametrization is moti-
vated by the implicit prior induced by convolutional neural
networks (CNNs), known as the deep image prior (DIP).
However, the regularization effect induced by DIP alone is
not sufficient to avoid likely overfitting. One approach is to
introduce additional regularization drawn inspiration from
traditional non-learning methods.
Discussion on MAP-relating methods. For simplicity,
consider the case of uniform blur where $g = k \otimes f$. As the ML (maximum likelihood) estimator of the pair $(k, f)$ given by
$$\max_{k,f}\; \log p(g \mid k, f) \qquad (3)$$
does not resolve the solution ambiguity of blind deblurring, most non-learning methods are based on the maximum a posteriori (MAP) estimator, which estimates $(k, f)$ by
$$\max_{k,f}\; \log p(k, f \mid g) = \min_{k,f}\; -\log p(g \mid k, f) - \log p(f) - \log p(k).$$
An MAP estimator requires the definition of two prior distributions: $p(k)$ and $p(f)$. A commonly used prior distribution for motion deblurring assumes that $f$ follows a Laplacian distribution: $\log p(f) \propto -\|\nabla f\|_1$, also known as total variation (TV) regularization. Such a TV-based MAP estimator is proposed in [22] for blind uniform deblurring.
There are two concerns about a TV-relating MAP esti-
mator. One is the pre-defined TV regularization for latent
images limits the benefit of data adaptivity brought by a
DNN. The other is the possible convergence to an incor-
rect local minimum far away from the truth $(f, k)$ or even a degenerate trivial solution $(g, \delta)$. Indeed, the second is-
sue has been extensively discussed in existing works; see
e.g. [28–30].
From MAP estimator of $(f, \mathcal{K})$ to EM algorithm for ML estimator of $\mathcal{K}$. Besides MAP, many other statisti-
cal inference schemes have also been successfully used for
blind uniform deblurring, e.g. variational Bayesian infer-
ence [ 30,31]; and EM algorithm [ 32]. EM is an iterative
scheme to find maximum likelihood (ML) estimate with the
introduction of latent variables. For blind deblurring, EM
aims at finding ML estimate of the marginal likelihood of
the unknown parameter $\mathcal{K}$ only by marginalizing over the image $f$ (latent variable).
Our approach. Inspired by the effectiveness of the EM al-
gorithm and marginal likelihood optimization for uniform
deblurring in terms of performance and stability, we pro-
pose to use the EM algorithm as a guide to develop a self-
supervised learning approach. Specifically, we introduce
a dataset-free deep learning method for both uniform and
non-uniform blind motion deblurring, which is based on
the Monte Carlo expectation maximization (MCEM) algo-
rithm. In summary, our method is built upon the efficient
EM algorithm in DNN-based representation of latent image
and blurring operator.
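To give a flavour of the Monte Carlo E-step, the toy sampler below runs Langevin dynamics on log p(f | g, k) for the uniform case with Gaussian noise, with the image prior dropped for brevity; the actual method works with the DNN re-parametrization inside the full MCEM loop, so this is only a schematic of one ingredient.

```python
import torch
import torch.nn.functional as F

def langevin_sample_latent(f0, g, k, sigma=0.01, step=1e-4, n_steps=20):
    """Draw an approximate sample of the latent image f given g ≈ k ⊗ f.
    f0: (1, 1, H, W) initial latent image estimate
    g:  (1, 1, H, W) observed blurred image
    k:  (1, 1, s, s) current blur kernel estimate (odd s)
    """
    f = f0.clone().requires_grad_(True)
    pad = k.shape[-1] // 2
    for _ in range(n_steps):
        # Gaussian log-likelihood of the observation under the current f
        log_lik = -((F.conv2d(f, k, padding=pad) - g) ** 2).sum() / (2 * sigma ** 2)
        grad, = torch.autograd.grad(log_lik, f)
        with torch.no_grad():
            # Langevin update: gradient ascent plus injected Gaussian noise
            f = f + step * grad + (2 * step) ** 0.5 * torch.randn_like(f)
        f.requires_grad_(True)
    return f.detach()
```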
1.3. Main contribution
In this paper, we present a self-supervised deep learning
approach for restoring motion-blurred images. Our main
contributions can be summarized as follows:
1. The first dataset-free deep learning method for removing
general motion blur (uniform and non-uniform) from im-
ages due to camera shake. To our knowledge, all existing
dataset-free methods are limited to uniform motion blur.
2. The first approach that combines DNN-based re-
parametrization and EM algorithm, bridging the gap be-
tween classical non-learning algorithms and deep learn-
ing. The proposed MCEM-based deep learning method
can see its applications in other image recovery tasks.
3. A powerful method that significantly outperforms exist-
ing solutions for blind motion deblurring. Our method
demonstrates superior performance in recovering images
affected by both uniform and non-uniform motion blur.
|
Pak_B-Spline_Texture_Coefficients_Estimator_for_Screen_Content_Image_Super-Resolution_CVPR_2023
|
Abstract
Screen content images (SCIs) include many informative
components, e.g., texts and graphics. Such content creates
sharp edges or homogeneous areas, making a pixel distribu-
tion of SCI different from the natural image. Therefore, we
need to properly handle the edges and textures to minimize
information distortion of the contents when a display de-
vice’s resolution differs from SCIs. To achieve this goal, we
propose an implicit neural representation using B-splines
for screen content image super-resolution (SCI SR) with ar-
bitrary scales. Our method extracts scaling, translating,
and smoothing parameters of B-splines. The following multi-
layer perceptron (MLP) uses the estimated B-splines to re-
cover high-resolution SCI. Our network outperforms both
a transformer-based reconstruction and an implicit Fourier
representation method in almost every upscaling factor, thanks to
the positive constraint and compact support of the B-spline
basis. Moreover, our SR results are recognized as cor-
rect text letters with the highest confidence by a pre-trained
scene text recognition network. Source code is available at
https://github.com/ByeongHyunPak/btc .
|
1. Introduction
With the rapid development of multimedia applications,
screen content images (SCIs) have become common in peo-
ple’s daily life. Many users interact with SCIs through
various display terminals, so resolution mismatch between
a display device and SCIs occurs frequently. In this re-
gard, we need to consider a flexible reconstruction at ar-
bitrary magnification from low-resolution (LR) SCI to its
high-resolution (HR). As in Figs. 1a and 1b, SCI has dis-
continuous tone contents, whereas natural image (NI) has
smooth and continuous textures. Such characteristics are
observed as a Gaussian distribution in the naturalness value
of NIs [25] and sharp fluctuations in the naturalness value
*Equal contribution.
†Corresponding author.
(a) Screen content image
(b) Natural image
(c) Naturalness value distributions for 500 images per class
Figure 1. Comparison on naturalness value distribution [25]
between screen content images and natural images.
of SCIs in Fig. 1c. This observation leads to a screen con-
tent image super-resolution (SCI SR) method considering
such distributional properties. However, most SR meth-
ods [4, 5, 8–10, 28, 29] are applied to NIs.
Recently, Yang et al . proposed a novel SCI SR
method based on a transformer, implicit transformer super-
resolution network (ITSRN) [26]. Since ITSRN evalu-
ates each pixel value by a point-to-point implicit function
through a transformer architecture, it outperforms CNN-
based methods [28, 29]. However, even though ITSRN rep-
resents SCI’s characteristics (e.g., sharp edges or homogeneous
areas) continuously, it has a large model size leading to in-
efficient memory consumption and slow inference time.
Meanwhile, Chen et al . first introduced implicit neu-
ral representation (INR) to single image super-resolution
(SISR) [4]. The implicit neural function enables arbitrary
scale super-resolution by jointly combining the continuous
query points and the encoded feature of the input LR image.
Nevertheless, such implicit neural function, implemented
with a multi-layer perceptron (MLP), is biased to learn the
low-frequency components, called spectral bias [15].
Lee and Jin suggested the local texture estimator (LTE)
upon INR to overcome the above problem [8]. LTE es-
timates the frequencies and corresponding amplitude fea-
tures from the input LR image and feeds them into an MLP
with the Fourier representation. Here, projecting input into
a high-dimensional space with the sinusoids in Fourier rep-
resentation allows the implicit neural function to learn high-
frequency details. However, since LTE expresses a signal
with a finite sum of sinusoids, it has a risk for the recon-
structed values to under/overshoot at the discontinuities of
SCIs, called the Gibbs phenomenon. This phenomenon of-
ten produces incorrect information about SCIs. Thus, we
need to restore HR SCIs with fewer parameters, fewer com-
putation costs, and less distortion of contents.
In this paper, we propose a B-spline Texture Coeffi-
cients estimator (BTC) utilizing INR to represent SCIs
continuously. BTC predicts scaling ( coefficients ), trans-
lating ( knots ), and smoothing ( dilations ) parameters of B-
splines from the LR image. Then, inspired by Lee and
Jin [8], we project the query point’s coordinate into the
high-dimensional space with 2D B-spline representations
and feed them into MLP. Since the B-spline basis has a posi-
tive constraint and compact support, BTC preserves discon-
tinuities well without under/overshooting.
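To make the B-spline encoding concrete, the sketch below shows one way the estimated coefficients (scaling), knots (translating), and dilations (smoothing) could be combined into 2D B-spline features for a query coordinate before the MLP; the quadratic basis, the tensor shapes, and the function names are illustrative assumptions rather than the authors' implementation.

```python
import torch

def bspline_basis(t: torch.Tensor) -> torch.Tensor:
    """Quadratic B-spline basis: non-negative, compactly supported on |t| < 1.5."""
    a = torch.abs(t)
    out = torch.zeros_like(t)
    out = torch.where(a < 0.5, 0.75 - a ** 2, out)
    out = torch.where((a >= 0.5) & (a < 1.5), 0.5 * (1.5 - a) ** 2, out)
    return out

def bspline_features(rel_coord, coeffs, knots, dilations):
    """
    rel_coord: (N, 2)    query coordinate relative to the nearest LR pixel
    coeffs:    (N, C)    scaling parameters predicted by the encoder
    knots:     (N, C, 2) translating parameters
    dilations: (N, C, 2) smoothing parameters (kept positive)
    returns:   (N, C)    features fed to the MLP decoder
    """
    # Separable 2D basis: product of 1D quadratic B-splines along x and y.
    t = (rel_coord.unsqueeze(1) - knots) / dilations.clamp(min=1e-3)  # (N, C, 2)
    basis_2d = bspline_basis(t).prod(dim=-1)                          # (N, C)
    return coeffs * basis_2d

# toy usage
N, C = 4, 16
feat = bspline_features(torch.rand(N, 2), torch.rand(N, C),
                        torch.rand(N, C, 2), torch.rand(N, C, 2) + 0.5)
print(feat.shape)  # torch.Size([4, 16])
```

Because the basis is non-negative and vanishes outside a small window, each feature only responds near its knot, which is the property the positive-constraint and compact-support arguments above rely on.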
Our main contributions are: (I) We propose a B-spline
Texture Coefficients estimator (BTC), which estimates B-
spline features ( i.e., coefficients, knots, and dilations) for
SCI SR. (II) With a 2D B-spline representation, we achieve
better performances with fewer parameters and less mem-
ory consumption. (III) We demonstrate that B-spline rep-
resentation is robust to over/undershooting aliasing when
reconstructing HR SCIs, owing to positive constraint and
compact support of the B-spline basis function.
|
Melas-Kyriazi_RealFusion_360deg_Reconstruction_of_Any_Object_From_a_Single_Image_CVPR_2023
|
Abstract
We consider the problem of reconstructing a full 360◦
photographic model of an object from a single image of it.
We do so by fitting a neural radiance field to the image,
but find this problem to be severely ill-posed. We thus take
an off-the-shelf conditional image generator based on diffu-
sion and engineer a prompt that encourages it to “dream
up” novel views of the object. Using the recent DreamFu-
sion method, we fuse the given input view, the conditional
prior, and other regularizers into a final, consistent recon-
struction. We demonstrate state-of-the-art reconstruction
results on benchmark images when compared to prior meth-
ods for monocular 3D reconstruction of objects. Qualita-
tively, our reconstructions provide a faithful match of the
input view and a plausible extrapolation of its appearance
and 3D shape, including to the side of the object not visible in the image.
|
1. Introduction
We consider the problem of obtaining a 360◦photo-
graphic reconstruction of anyobject given a single image
of it. The challenge is that a single image does not con-
tain sufficient information for 3D reconstruction. Without
access to multiple views, an image only provides weak ev-
idence about the 3D shape of the object, and only for one
side of it. Even so, there is proof that this task can be solved:
any skilled 3D artist can take a picture of almost any object
and, given sufficient time and effort, create a plausible 3D
model of it. The artist can do so by tapping into their vast
knowledge of the natural world and the objects it contains,
making up for the information missing from the image.
Hence, monocular 3D reconstruction requires combining
visual geometry with a powerful statistical model of the 3D
world. Diffusion-based 2D image generators like DALL-E
2 [30], Imagen [35], and Stable Diffusion [33] are able to
generate high-quality images from ambiguous inputs such
as text, showing that powerful priors for 2D images can be
learned. However, extending them to 3D is not easy be-
cause, while one can access billions of 2D images for train-
ing [36], the same cannot be said for 3D data.
A simpler approach is to extract or distill 3D informa-
tion from an existing 2D generator. A 2D image genera-
tor can in fact be used to sample or validate multiple views
of a given object, which can then be used to perform 3D
reconstruction. This idea was already demonstrated with
GAN-based generators for simple data like faces and syn-
thetic objects [2, 6, 8, 24, 25, 47]. Better 2D generators have
since resulted in better results, culminating in methods such
as DreamFusion [27], which can produce high-quality 3D
models from an existing 2D generator and text.
In this paper, we port distillation approaches from text-
based generation to monocular 3D reconstruction. This is
not a trivial change because conditioning generation on an
image provides a much more fine-grained specification of
the object than text. This in turn requires the 2D diffusion
model to hallucinate new views of a specific object instead
of some object of a given type. The latter is difficult because
the coverage of generator models is limited [1], meaning
that not every version of an object is captured well by the
model. We find empirically that this is a key problem.
We address this issue by introducing RealFusion, a new
method for 3D reconstruction from a single image. We ex-
press the object’s 3D geometry and appearance by means of
a neural radiance field. Then, we fit the radiance field to the
given input image by minimizing the usual rendering loss.
At the same time, we sample random other views of the
object, and constrain them with the diffusion prior, using a
technique similar to DreamFusion.
We find that, due to the coverage issue, this idea does
not work well out of the box, but can be improved via ad-
equately conditioning the 2D diffusion model. The idea
is to configure the prior to “dream up” or sample images
that may plausibly constitute other views of the given ob-
ject. We do so by automatically engineering the diffusion
prompt from random augmentations of the given image. In
this manner, the diffusion model provides sufficiently strong
constraints to allow meaningful 3D reconstruction.
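As a rough illustration of how the reconstruction loss and the diffusion prior could be combined in a single optimization step, here is a heavily simplified sketch in the spirit of DreamFusion-style score distillation; the radiance field, renderer, and diffusion denoiser are stubbed with placeholders, the noise schedule is omitted, and all names and loss weights are assumptions rather than RealFusion's actual code.

```python
import torch

# Stand-ins so the sketch runs; a real system would use an InstantNGP NeRF and a
# frozen latent diffusion U-Net conditioned on the engineered prompt embedding.
class TinyRadianceField(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                                       torch.nn.Linear(64, 3))

    def render(self, pose):
        # fake "rendering": query a few points that depend on the camera pose
        coords = torch.rand(64, 3) + pose
        return self.net(coords).reshape(1, 3, 8, 8).sigmoid()

def denoiser(noisy, t, prompt_embedding):
    # placeholder for the frozen diffusion model; predicts zero noise
    return torch.zeros_like(noisy)

nerf = TinyRadianceField()
opt = torch.optim.Adam(nerf.parameters(), lr=1e-3)
input_image = torch.rand(1, 3, 8, 8)               # the single given view
input_pose = torch.zeros(3)
prompt_embedding = torch.rand(1, 77, 768)          # embedding of the engineered prompt

for step in range(2):
    opt.zero_grad()
    # (1) reconstruction loss on the given view
    loss_rec = torch.nn.functional.mse_loss(nerf.render(input_pose), input_image)

    # (2) score distillation on a random novel view (noise schedule omitted)
    novel = nerf.render(torch.randn(3))
    noise = torch.randn_like(novel)
    eps_pred = denoiser(novel + noise, torch.randint(1, 1000, (1,)), prompt_embedding)
    # SDS-style gradient: (eps_pred - noise) is treated as constant w.r.t. the prior
    loss_sds = ((eps_pred - noise).detach() * novel).sum()

    (loss_rec + 0.1 * loss_sds).backward()
    opt.step()
```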
In addition to setting the prompt correctly, we also add
some regularizers: shading the underlying geometry and
randomly dropping out texture (also similar to DreamFu-
sion), smoothing the normals of the surface, and fitting the
model in a coarse-to-fine fashion, capturing first the overall
structure of the object and only then the fine-grained details.
We also focus on efficiency and base our model on Instant-
NGP [23]. In this manner, we achieve reconstructions in the span of hours instead of days if we were to adopt traditional
MLP-based NeRF models.
We assess our approach by using random images cap-
tured in the wild as well as existing benchmark datasets.
Note that we do not train a fully-fledged 2D-to-3D model
and we are not limited to specific object categories; rather,
we perform reconstruction on an image-by-image basis us-
ing a pretrained 2D generator as a prior. Nonetheless, we
can surpass quantitatively and qualitatively previous single-
image reconstructors, including Shelf-Supervised Mesh
Prediction [50], which uses supervision tailored specifically
for 3D reconstruction.
Qualitatively, we obtain plausible 3D reconstructions
that are a good match for the provided input image (Fig. 1).
Our reconstructions are not perfect, as the diffusion prior
clearly does its best to explain the available image evidence
but cannot always match all the details. Even so, we be-
lieve that our results convincingly demonstrate the viability
of this approach and trace a path for future improvements.
To summarize, we make the following contributions :
(1) We propose RealFusion, a method that can extract from
a single image of an object a 360◦photographic 3D recon-
struction without assumptions on the type of object imaged
or 3D supervision of any kind; (2) We do so by leveraging
an existing 2D diffusion image generator via a new single-
image variant of textual inversion; (3) We also introduce
new regularizers and provide an efficient implementation
using InstantNGP; (4) We demonstrate state-of-the-art re-
construction results on a number of in-the-wild images and
images from existing datasets when compared to alternative
approaches.
|
Pan_Stitchable_Neural_Networks_CVPR_2023
|
Abstract
The public model zoo containing enormous powerful
pretrained model families ( e.g., ResNet/DeiT) has reached
an unprecedented scope than ever, which significantly con-
tributes to the success of deep learning. As each model fam-
ily consists of pretrained models with diverse scales ( e.g.,
DeiT-Ti/S/B), it naturally arises a fundamental question of
how to efficiently assemble these readily available models
in a family for dynamic accuracy-efficiency trade-offs at
runtime. To this end, we present Stitchable Neural Net-
works (SN-Net), a novel scalable and efficient framework
for model deployment. It cheaply produces numerous net-
works with different complexity and performance trade-offs
given a family of pretrained neural networks, which we call
anchors. Specifically, SN-Net splits the anchors across the
blocks/layers and then stitches them together with simple
stitching layers to map the activations from one anchor to
another. With only a few epochs of training, SN-Net effec-
tively interpolates between the performance of anchors with
varying scales. At runtime, SN-Net can instantly adapt to
dynamic resource constraints by switching the stitching po-
sitions. Extensive experiments on ImageNet classification
demonstrate that SN-Net can obtain on-par or even bet-
ter performance than many individually trained networks
while supporting diverse deployment scenarios. For exam-
ple, by stitching Swin Transformers, we challenge hundreds
of models in Timm model zoo with a single network. We be-
lieve this new elastic model framework can serve as a strong
baseline for further research in wider communities.
|
1. Introduction
The vast computational resources available and large
amount of data have driven researchers to build tens of thou-
sands of powerful deep neural networks with strong per-
formance, which have largely underpinned the most recent
breakthroughs in machine learning and much broader arti-
ficial intelligence. Up to now, there are ∼81k models on
HuggingFace [53] and ∼800 models on Timm [52] that
are ready to be downloaded and executed without the over-
head of reproducing. Despite the large model zoo, a model
family ( e.g., DeiT-Ti/S/B [48]) that contains pretrained
models with functionally similar architectures but different
scales only covers a coarse-grained level of model complex-
ity/performance, where each model only targets a specific
resource budget ( e.g., FLOPs). Moreover, the model fam-
ily is not flexible to adapt to dynamic resource constraints
since each individual model is not re-configurable due to the
fixed computational graph. In reality, we usually need to
deploy models to diverse platforms with different resource
Figure 2. One Stitchable Neural Network vs. 200 models in the Timm model zoo [52] (plot: FLOPs (G) vs. ImageNet Top-1 (%) for Swin-Ti/S/B, ours as a single model, and the Timm model zoo). It shows an example of SN-Net by stitching ImageNet-22K pretrained Swin-Ti/S/B. Compared to each individual network, SN-Net is able to instantly switch network topology at runtime and covers a wide range of computing resource budgets. Larger and darker dots indicate a larger model with more parameters and higher complexity.
constraints ( e.g., energy, latency, on-chip memory). For in-
stance, a mobile app in Google Play has to support tens of
thousands of unique Android devices, from a high-end Sam-
sung Galaxy S22 to a low-end Nokia X5. Therefore, given
a family of pretrained models in the model zoo, a funda-
mental research question naturally arises: how to effectively
utilise these off-the-shelf pretrained models to handle di-
verse deployment scenarios for Green AI [45]?
To answer this question, a naive solution is to train indi-
vidual models with different accuracy-efficiency trade-offs
from scratch. However, such a method has a linearly in-
creased training and time cost with the number of possi-
ble cases. Therefore, one may consider the existing scal-
able deep learning frameworks, such as model compres-
sion and neural architecture search (NAS), to obtain mod-
els at different scales for diverse deployment requirements.
Specifically, network compression approaches such as prun-
ing [20,22,25], quantization [32,42,62] and knowledge dis-
tillation [7,43,47] aim to obtain a small model from a large
and well-trained network, which however only target one
specific resource budget (see Figure 1 (a)), thus not flexible
to meet the requirements of real-world deployment scenar-
ios. On the other hand, one-shot NAS [28, 37], a typical
NAS framework that decouples training and specialization
stages, seeks to train an over-parameterized supernet that
supports many sub-networks for run-time dynamics (see
Figure 1 (b)), but training the supernet is extremely time-
consuming and computationally expensive ( e.g., 1,200 GPU
hours on 32 V100 GPUs in OFA [4]). To summarize, the
existing scalable deep learning frameworks are still limited
within a single model design space, which cannot inherit the
rich knowledge from pretrained model families in a model
zoo for better flexibility and accuracy. Besides, they also require complicated training strategies to guarantee a good
model performance.
In this work, we present Stitchable Neural Network (SN-
Net), a novel scalable deep learning framework for efficient
model design and deployment which quickly stitches an
off-the-shelf pretrained model family with much less train-
ing effort to cover a fine-grained level of model complex-
ity/performance for a wide range of deployment scenarios
(see Figure 1 (c)). Specifically, SN-Net is motivated by
the previous observations [2, 10, 21] that the typical min-
ima reached by SGD can be stitched to each other with low
loss penalty, which implies architectures of the same model
family pretrained on the same task can be stitched. Based
on this insight, SN-Net directly selects the well-performed
pretrained models in a model family as “anchors”, and then
inserts a few simple stitching layers at different positions
to transform the activations from one anchor to its nearest
anchor in terms of complexity. In this way, SN-Net nat-
urally interpolates a path between neighbouring anchors of
different accuracy-efficiency trade-offs, and thus can handle
dynamic resource constraints with a single neural network
at runtime . An example is shown in Figure 2, where a single
Swin-based SN-Net is able to do what hundreds of models
can do with only 50 epochs training on ImageNet-1K.
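A minimal sketch of the core mechanism, assuming (as the text suggests) that a stitching layer is a simple learned projection, here a 1x1 convolution, that maps activations from the front blocks of one anchor into the activation space of the remaining blocks of another; the module names and the toy "anchors" below are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class StitchingLayer(nn.Module):
    """Maps activations from anchor A's front blocks to anchor B's remaining blocks."""
    def __init__(self, dim_a: int, dim_b: int):
        super().__init__()
        self.proj = nn.Conv2d(dim_a, dim_b, kernel_size=1)  # cheap 1x1 projection

    def forward(self, x):
        return self.proj(x)

class Stitch(nn.Module):
    """front(A) -> stitching layer -> back(B); front/back are pretrained pieces."""
    def __init__(self, front_a: nn.Module, back_b: nn.Module, dim_a: int, dim_b: int):
        super().__init__()
        self.front_a, self.back_b = front_a, back_b
        self.stitch = StitchingLayer(dim_a, dim_b)

    def forward(self, x):
        return self.back_b(self.stitch(self.front_a(x)))

# toy usage with two hypothetical "anchors" of different widths
small_front = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
large_back = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                           nn.Flatten(), nn.Linear(64, 10))
model = Stitch(small_front, large_back, dim_a=32, dim_b=64)
print(model(torch.rand(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```

Moving the split point between the two anchors changes the complexity/accuracy of the resulting network, which is what allows a single SN-Net to cover many resource budgets at runtime.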
We systematically study the design principles for SN-
Net, including the choice of anchors, the design of stitching
layers, the stitching direction and strategy, along with a suf-
ficiently simple but effective training strategy. With com-
prehensive experiments, we show that SN-Net demonstrates
promising advantages: 1) Compared to the existing preva-
lent scalable deep learning frameworks (Figure 1), SN-Net
is a new universal paradigm which breaks the limit of a sin-
gle pretrained model or supernet design by extending the
design space into a large number of model families in the
model zoo, forming a “many-to-many” pipeline. 2) Differ-
ent from NAS training that requires complex optimization
techniques [4,61], training SN-Net is as easy as training in-
dividual models while getting rid of the huge computational
cost of training from scratch. 3) The final performance of
stitches is almost predictable due to the interpolation-like
performance curve between anchors, which implies that we
can selectively train a number of stitches prior to training
based on different deployment scenarios.
In a nutshell, we summarize our contributions as follows:
• We introduce Stitchable Neural Networks, a new uni-
versal framework for elastic deep learning by directly
utilising the pretrained model families in model zoo
via model stitching.
• We provide practical principles to design and train SN-
Net, laying down the foundations for future research.
• Extensive experiments demonstrate that compared to
training individual networks from scratch, e.g., a single
DeiT-based [48] SN-Net can achieve flexible accuracy-
efficiency trade-offs at runtime while reducing 22×
training cost and local disk storage.
|
Pautrat_DeepLSD_Line_Segment_Detection_and_Refinement_With_Deep_Image_Gradients_CVPR_2023
|
Abstract
Line segments are ubiquitous in our human-made world
and are increasingly used in vision tasks. They are com-
plementary to feature points thanks to their spatial extent
and the structural information they provide. Traditional line
detectors based on the image gradient are extremely fast
and accurate, but lack robustness in noisy images and chal-
lenging conditions. Their learned counterparts are more
repeatable and can handle challenging images, but at the
cost of a lower accuracy and a bias towards wireframe lines.
We propose to combine traditional and learned approaches
to get the best of both worlds: an accurate and robust line
detector that can be trained in the wild without ground truth
lines. Our new line segment detector, DeepLSD, processes
images with a deep network to generate a line attraction field,
before converting it to a surrogate image gradient magni-
tude and angle, which is then fed to any existing handcrafted
line detector. Additionally, we propose a new optimization
tool to refine line segments based on the attraction field and
vanishing points. This refinement improves the accuracy of
current deep detectors by a large margin. We demonstrate
the performance of our method on low-level line detection
metrics, as well as on several downstream tasks using multi-
ple challenging datasets. The source code and models are
available at https://github.com/cvg/DeepLSD .
|
1. Introduction
Line segments are ubiquitous in human-made environ-
ments and encode the underlying scene structure in a com-
pact way. As such, line features have been used in mul-
tiple vision tasks: 3D reconstruction and Structure-from-
Motion (SfM) [17, 32, 34], Simultaneous Localization and
Mapping [13, 15, 25, 37, 62], visual localization [14], track-
ing [38], vanishing point estimation [49], etc. Thanks to
their spatial extent and presence even in textureless areas,
they offer a good complement to feature points [14, 15, 37].
All these applications require a robust and accurate de-
tector to extract line features from images. Traditionally,
line segments are extracted from the image gradient using
Figure 1. Line detection in the wild (panels: LSD [51], HAWP [54], line distance field, ours). Top row: on challenging
images, handcrafted methods such as LSD [51] suffer from noisy
image gradients, while current learned methods like HAWP [54]
were trained on wireframe images and generalize poorly. Bottom
row: we combine deep learning to regress a line attraction field
and a handcrafted detector to get both accurate and robust lines.
handcrafted heuristics, such as in the Line Segment Detector
(LSD) [51]. These methods are fast and very accurate since
they rely on low-level details of the image. However, they
can suffer from a lack of robustness in challenging condi-
tions such as in low illumination, where the image gradient
is noisy. They also miss global knowledge from the scene
and will detect any set of pixels with the same gradient
orientation, including uninteresting and noisy lines.
Recently, deep networks offer new possibilities to tackle
these drawbacks. This resurgence of line detection methods
was initiated by the deep wireframe methods aiming at in-
ferring the line structure of indoor scenes [19, 30, 53, 54, 61].
Since then, more generic deep line segment detectors have
been proposed [10, 16, 20, 28, 50], including joint line detec-
tors and descriptors [1,36,56]. These methods can, in theory,
be trained on challenging images and, thus, gain robustness
where classical methods fail. As they require a large recep-
tive field to be able to handle the extent of line segments in
an image, they can also encode some image context and can
distinguish between noisy and relevant lines. On the other
hand, most of these methods are fully supervised and there
exists currently only a single dataset with ground truth lines,
the Wireframe dataset [19]. Initially designed for wireframe
parsing, this dataset is biased towards structural lines and
is limited to indoor scenes. Therefore, it is not a suitable
training set for generic line detectors, as illustrated in Fig-
ure 1. Additionally, similarly as with feature points [31, 45],
current deep detectors are lacking accuracy and are still out-
performed by handcrafted methods on easy images. The
exact localization of line endpoints is often hard to obtain,
as lines can be fragmented and suffer from partial occlusion.
Many applications using lines consequently consider infinite
lines and ignore the endpoints [34].
Based on this assessment, we propose in this work to keep
the best of both worlds: use deep learning to process the
image and discard unnecessary details, then use handcrafted
methods to detect the line segments. We thus retain the
benefits of deep learning, namely, to abstract the image and
gain more robustness to illumination and noise, while at the
same time retaining the accuracy of classical methods. We
achieve this goal by following the tracks of two previous
methods that used a dual representation of line segments
with attraction fields [53, 54]. The latter are continuous
representations that are well-suited for deep learning, and
we show how to leverage them as input to the traditional line
detectors. Contrary to these two previous methods, we do
not rely on ground truth lines to train our line attraction field,
but propose instead to bootstrap existing methods to create
a high-quality pseudo ground truth. Thus, our network can
be trained on any dataset and be specialized towards specific
applications, which we show in our experiments.
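As an illustration of how an attraction field could be handed back to a handcrafted detector, the hypothetical sketch below converts a predicted line distance field and line angle field into a surrogate gradient magnitude and orientation; the exponential fall-off with distance and the function name are assumptions made for the example, not necessarily the paper's exact mapping.

```python
import numpy as np

def surrogate_gradient(distance_field, angle_field, tau=1.0):
    """
    distance_field: (H, W) distance to the closest line, in pixels
    angle_field:    (H, W) line orientation, in radians
    Returns a surrogate gradient magnitude and angle that a handcrafted
    detector such as LSD can consume in place of real image gradients.
    """
    # High "gradient" close to a line, decaying smoothly with distance.
    magnitude = np.exp(-distance_field / tau)
    # Image gradients are perpendicular to the line direction.
    grad_angle = np.mod(angle_field + np.pi / 2.0, np.pi)
    return magnitude, grad_angle

# toy usage on random fields
H, W = 64, 64
mag, ang = surrogate_gradient(np.random.rand(H, W) * 5.0,
                              np.random.rand(H, W) * np.pi)
print(mag.shape, ang.shape)
```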
We additionally propose a novel optimization procedure
to refine the detected line segments. This refinement is based
on the attraction field output by the proposed network, as
well as on vanishing points, optimized together with the seg-
ments. Not only can this optimization be used to effectively
improve the accuracy of our prediction, but it can also be
applied to other deep line detectors.
In summary, we propose the following contributions:
•We propose a method bootstrapping current detectors to
create ground truth line attraction fields on any image.
•We introduce an optimization procedure that can simulta-
neously refine line segments and vanishing points . This
optimization can be used as a stand-alone refinement to
improve the accuracy of any existing deep line detector.
•We set a new record in several downstream tasks requir-
ing line segments by combining the robustness of deep
learning approaches with the precision of handcrafted
methods in a single pipeline.
|
Meng_NeAT_Learning_Neural_Implicit_Surfaces_With_Arbitrary_Topologies_From_Multi-View_CVPR_2023
|
Abstract
Recent progress in neural implicit functions has set new
state-of-the-art in reconstructing high-fidelity 3D shapes
from a collection of images. However, these approaches are
limited to closed surfaces as they require the surface to be
represented by a signed distance field. In this paper, we pro-
pose NeAT, a new neural rendering framework that can learn
implicit surfaces with arbitrary topologies from multi-view
images. In particular, NeAT represents the 3D surface as a
level set of a signed distance function (SDF) with a validity
branch for estimating the surface existence probability at the
query positions. We also develop a novel neural volume ren-
dering method, which uses SDF and validity to calculate the
volume opacity and avoids rendering points with low valid-
ity. NeAT supports easy field-to-mesh conversion using the
classic Marching Cubes algorithm. Extensive experiments
on DTU [20], MGN [4], and Deep Fashion 3D [19] datasets
indicate that our approach is able to faithfully reconstruct
both watertight and non-watertight surfaces. In particular,
NeAT significantly outperforms the state-of-the-art methods in the task of open surface reconstruction both quantitatively
and qualitatively.
|
1. Introduction
3D reconstruction from multi-view images is a funda-
mental problem in computer vision and computer graphics.
Recent advances in neural implicit functions [26, 36, 48, 55]
have brought impressive progress in achieving high-fidelity
reconstruction of complex geometry even with sparse views.
They use differentiable rendering to render the inferred im-
plicit surface into images which are compared with the input
images for network supervision. This provides a promis-
ing alternative of learning 3D shapes directly from 2D im-
ages without 3D data. However, existing neural render-
ing methods represent surfaces as signed distance function
(SDF) [27, 55] or occupancy field [36, 38], limiting their
output to closed surfaces. This leads to a barrier in recon-
structing a large variety of real-world objects with open
boundaries, such as 3D garments, walls of a scanned 3D
scene, etc. The recently proposed NDF [10], 3PSDF [6]
and GIFS [56] introduce new implicit representations sup-
porting 3D geometry with arbitrary topologies, including
both closed and open surfaces. However, none of these rep-
resentations are compatible with existing neural rendering
frameworks. Leveraging neural implicit rendering to recon-
struct non-watertight shapes, i.e., shapes with open surfaces,
from multi-view images remains a virgin land.
We fill this gap by presenting NeAT, a Neural render-
ing framework that reconstructs surfaces with Arbitrary
Topologies using multi-view supervision. Unlike previous
neural rendering frameworks only using color and SDF pre-
dictions, we propose a validity branch to estimate the surface
existence probability at the query positions, thus avoiding
rendering 3D points with low validity as shown in Figure
2. In contrast to 3PSDF [6] and GIFS [56], our validity
estimation is a differentiable process. It is compatible with
the volume rendering framework while maintaining its flexi-
bility in representing arbitrary 3D topologies. To correctly
render both closed and open surfaces, we introduce a sign
adjustment scheme to render both sides of surfaces, while
maintaining unbiased weights and occlusion-aware proper-
ties as previous volume renderers. In addition, to reconstruct
intricate geometry, a specially tailored regularization mecha-
nism is proposed to promote the formation of open surfaces.
By minimizing the difference between the rendered and the
ground-truth pixels, we can faithfully reconstruct both the
validity and SDF field from the input images. At reconstruc-
tion time, the predicted validity value along with the SDF
value can be readily converted to 3D mesh with the clas-
sic field-to-mesh conversion techniques, e.g., the Marching
Cubes Algorithm [30].
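A minimal sketch of the rendering idea, assuming a NeuS-style conversion from SDF samples to opacity and letting the predicted validity simply gate that opacity so that low-validity points contribute nothing to the composited color; the formulas, shapes, and function names are illustrative, not the paper's exact renderer.

```python
import torch

def sdf_to_alpha(sdf, inv_s=64.0):
    """Map SDF samples along a ray to interval opacities via a logistic CDF (NeuS-style)."""
    cdf = torch.sigmoid(sdf * inv_s)                                     # (S,)
    return ((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-5)).clamp(0.0, 1.0)    # (S-1,)

def render_ray(sdf, validity, color):
    """
    sdf:      (S,)     signed distance at S samples along the ray
    validity: (S-1,)   surface-existence probability per interval, in [0, 1]
    color:    (S-1, 3) predicted radiance per interval
    """
    alpha = sdf_to_alpha(sdf) * validity        # validity gates the opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-7])[:-1], dim=0)
    weights = alpha * trans                     # low-validity points get ~zero weight
    return (weights[:, None] * color).sum(dim=0)

# toy usage: a ray crossing the zero level set halfway along its samples
S = 16
pixel = render_ray(torch.linspace(0.5, -0.5, S), torch.rand(S - 1), torch.rand(S - 1, 3))
print(pixel)  # composited RGB value, shape (3,)
```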
We evaluate NeAT in the task of multi-view reconstruc-
tion on a large variety of challenging shapes, including both
closed and open surfaces. NeAT can consistently outperform
the current state-of-the-art methods both qualitatively and
quantitatively. We also show that NeAT can provide efficient
supervision for learning complex shape priors that can be
used for reconstructing non-watertight surface only from a
single image. Our contributions can be summarized as:
•A neat neural rendering scheme of implicit surface,
coded NeAT , that introduces a novel validity branch,
and, for the first time , can faithfully reconstruct surfaces
with arbitrary topologies from multi-view images.
•A specially tailored learning paradigm for NeAT with
effective regularization for open surfaces.
•NeAT sets the new state-of-the-art on multi-view re-
construction on open surfaces across a wide range of
benchmarks.
|
Ni_Conditional_Image-to-Video_Generation_With_Latent_Flow_Diffusion_Models_CVPR_2023
|
Abstract
Conditional image-to-video (cI2V) generation aims to
synthesize a new plausible video starting from an image
(e.g., a person’s face) and a condition ( e.g., an action class
label like smile). The key challenge of the cI2V task lies
in the simultaneous generation of realistic spatial appear-
ance and temporal dynamics corresponding to the given
image and condition. In this paper, we propose an ap-
proach for cI2V using novel latent flow diffusion models
(LFDM) that synthesize an optical flow sequence in the la-
tent space based on the given condition to warp the given
image. Compared to previous direct-synthesis-based works,
our proposed LFDM can better synthesize spatial details
and temporal motion by fully utilizing the spatial content of
the given image and warping it in the latent space accord-
ing to the generated temporally-coherent flow. The training
of LFDM consists of two separate stages: (1) an unsuper-
vised learning stage to train a latent flow auto-encoder for
spatial content generation, including a flow predictor to es-
timate latent flow between pairs of video frames, and (2) a
conditional learning stage to train a 3D-UNet-based diffu-
sion model (DM) for temporal latent flow generation. Un-
like previous DMs operating in pixel space or latent feature
space that couples spatial and temporal information, the
DM in our LFDM only needs to learn a low-dimensional
latent flow space for motion generation, thus being more
computationally efficient. We conduct comprehensive ex-
periments on multiple datasets, where LFDM consistently
outperforms prior arts. Furthermore, we show that LFDM
can be easily adapted to new domains by simply finetun-
ing the image decoder. Our code is available at https:
//github.com/nihaomiao/CVPR23_LFDM .
|
1. Introduction
Image-to-video (I2V) generation is an appealing topic
and has many potential applications, such as artistic cre-
Figure 1. Examples of generated video frames and latent flow sequences using our proposed LFDM, for conditions such as “draw circle clockwise”, “fold wings”, and “surprise”. The first column shows the given images $x_0$ and conditions $y$. The latent flow maps are backward optical flow to $x_0$ in the latent space. We use the color coding scheme in [4] to visualize flow, where the color indicates the direction and magnitude of the flow.
ation, entertainment and data augmentation for machine
learning [30]. Given a single image $x_0$ and a condition $y$, conditional image-to-video (cI2V) generation aims to synthesize a realistic video with frames 0 to $K$, $\hat{x}_0^K = \{x_0, \hat{x}_1, \ldots, \hat{x}_K\}$, starting from the given frame $x_0$ and satisfying the condition $y$. Similar to conditional image syn-
thesis works [43, 82], most existing cI2V generation meth-
ods [16, 19, 21, 30, 36, 77] directly synthesize each frame in
the whole video based on the given image x0and condition
y. However, they often struggle with simultaneously pre-
serving spatial details and keeping temporal coherence in
the generated frames. In this paper, we propose novel latent
Figure 2. The video generation (i.e., inference) process of LFDM. The generated flow sequence $\hat{f}_1^K$ and occlusion map sequence $\hat{m}_1^K$ have the same spatial size as the image latent map $z_0$. The brighter regions in $\hat{m}_1^K$ mean those are regions less likely to be occluded.
flow diffusion models, termed LFDM , which mitigate this
issue by synthesizing a latent optical flow sequence condi-
tioned on $y$, to warp the image $x_0$ in the latent space for gen-
erating new videos (see Fig. 1 for an example). Unlike pre-
vious direct-synthesis or warp-free based methods, the spa-
tial content of the given image can be consistently reused by
our warp-based LFDM through the generated temporally-
coherent flow. So LFDM can better preserve subject ap-
pearance, ensure motion continuity and also generalize to
unseen images, as shown in Sec. 4.3.
To disentangle the generation of spatial content and tem-
poral dynamics, the training of LFDM is designed to in-
clude two separate stages. In stage one, inspired by recent
motion transfer works [50,62,75,76,79], a latent flow auto-
encoder (LFAE) is trained in an unsupervised fashion. It
first estimates the latent optical flow between two frames
from the same video, a reference frame and a driving frame.
Then the reference frame is warped with predicted flow and
LFAE is trained by minimizing the reconstruction loss be-
tween this warped frame and the driving frame. In stage
two, a 3D U-Net [12] based diffusion model (DM) is trained
using paired condition yand latent flow sequence extracted
from training videos using trained LFAE. Conditioned on y
and image x0, the DM learns how to produce temporally co-
herent latent flow sequences through 3D convolutions. Dif-
ferent from previous DMs learning in a high-dimensional
pixel space [20, 24, 27, 65, 73, 84] or latent feature space
[57] that couples spatial and temporal information, the DM
in LFDM operates in a simple andlow-dimensional latent
flow space which only describes motion dynamics. So the
diffusion generation process can be more computationally
efficient. Benefiting from the decoupled training strategy,
LFDM can be easily adapted to new domains. In Sec. 4.3,
we show that using the temporal latent flow produced by
DM trained in the original domain, LFDM can animate facial images from a new domain and generate better spatial
details if the image decoder is finetuned.
During inference, as Fig. 2 shows, we first adopt the DM trained in stage two to generate a latent flow sequence $\hat{f}_1^K$ conditioned on $y$ and the given image $x_0$. To generate the occluded regions in new frames, the DM also produces an occlusion map sequence $\hat{m}_1^K$. Then image $x_0$ is warped with $\hat{f}_1^K$ and $\hat{m}_1^K$ in the latent space frame-by-frame to generate the video $\hat{x}_1^K$. By keeping warping the given image $x_0$ instead of previously synthesized frames, we can avoid artifact
accumulation. More details will be introduced in Sec. 3.3.
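A hypothetical sketch of this warp-and-decode loop: the diffusion model and the image decoder are stubbed with placeholders, backward warping of the latent map uses grid_sample, and the occlusion map is applied as a soft mask; all names, shapes, and the masking scheme are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def warp_latent(z0, flow, occlusion):
    """
    z0:        (1, C, h, w) latent map of the given image
    flow:      (1, 2, h, w) backward optical flow to z0 (in pixels)
    occlusion: (1, 1, h, w) soft mask in [0, 1]; brighter = less occluded
    """
    _, _, h, w = z0.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float()[None]          # (1, h, w, 2)
    grid = grid + flow.permute(0, 2, 3, 1)                      # add backward flow
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0
    warped = F.grid_sample(z0, grid, align_corners=True)
    return warped * occlusion                                   # down-weight occluded regions

# toy inference loop: a stubbed DM yields K flow/occlusion maps, a stubbed decoder
# maps warped latents back to frames.
C, h, w, K = 4, 16, 16, 5
z0 = torch.rand(1, C, h, w)
flows = torch.randn(K, 1, 2, h, w)        # stands in for the generated flow sequence
occs = torch.rand(K, 1, 1, h, w)          # stands in for the generated occlusion maps
decode = lambda z: z.mean(dim=1)          # placeholder for the image decoder
frames = [decode(warp_latent(z0, flows[k], occs[k])) for k in range(K)]
print(len(frames), frames[0].shape)       # 5 torch.Size([1, 16, 16])
```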
Our contributions are summarized as follows:
• We propose novel latent flow diffusion models
(LFDM) to achieve image-to-video generation by syn-
thesizing a temporally-coherent flow sequence in the
latent space based on the given condition to warp the
given image. To the best of our knowledge, we are the
first to apply diffusion models to generate latent flow
for conditional image-to-video tasks.
• A novel two-stage training strategy is proposed for
LFDM to decouple the generation of spatial content
and temporal dynamics, which includes training a la-
tent flow auto-encoder in stage one and a conditional
3D U-Net based diffusion model in stage two. This
disentangled training process also enables LFDM to
be easily adapted to new domains.
• We conduct extensive experiments on multiple
datasets, including videos of facial expression, human
action and gesture, where our proposed LFDM consis-
tently outperforms previous state-of-the-art methods.
|
Metzer_Latent-NeRF_for_Shape-Guided_Generation_of_3D_Shapes_and_Textures_CVPR_2023
|
Abstract
Text-guided image generation has progressed rapidly in
recent years, inspiring major breakthroughs in text-guided
shape generation. Recently, it has been shown that using
score distillation, one can successfully text-guide a NeRF
model to generate a 3D object. We adapt the score distilla-
tion to the publicly available, and computationally efficient,
Latent Diffusion Models, which apply the entire diffusion
process in a compact latent space of a pretrained autoen-
coder. As NeRFs operate in image space, a na ¨ıve solution
for guiding them with latent score distillation would require
encoding to the latent space at each guidance step. Instead,
we propose to bring the NeRF to the latent space, result-
ing in a Latent-NeRF . Analyzing our Latent-NeRF , we show
that while Text-to-3D models can generate impressive re-
sults, they are inherently unconstrained and may lack the
ability to guide or enforce a specific 3D structure. To as-
sist and direct the 3D generation, we propose to guide our
Latent-NeRF using a Sketch-Shape: an abstract geometry
that defines the coarse structure of the desired object. Then,
we present means to integrate such a constraint directly into
a Latent-NeRF . This unique combination of text and shape
guidance allows for increased control over the generation
process. We also show that latent score distillation can be
successfully applied directly on 3D meshes. This allows
for generating high-quality textures on a given geometry.
Our experiments validate the power of our different forms
of guidance and the efficiency of using latent rendering.
|
1. Introduction
Text-guided image generation has seen tremendous suc-
cess in recent years, primarily due to the breathtaking de-
velopment in Language-Image models [ 25,28,36] and dif-
fusion models [ 14,21,37–40]. These breakthroughs have
also resulted in fast progression for text-guided shape gen-
eration [ 9,29,53]. Most recently, it has been shown [ 35] that
one can directly use score distillation from a 2D diffusion
model to guide the generation of a 3D object represented as
a Neural Radiance Field (NeRF) [ 30].
While Text-to-3D can generate impressive results, it is
inherently unconstrained and may lack the ability to guide
or enforce a 3D structure. In this paper, we show how to in-
troduce shape-guidance to the generation process to guide
it toward a specific shape, thus allowing increased control
over the generation process. Our method builds upon two
models, a NeRF model [ 30], and a Latent Diffusion Model
(LDM) [ 39]. Latent Models, which apply the entire diffu-
sion process in a compact latent space, have recently gained
popularity due to their efficiency and publicly available pre-
trained checkpoints. As score distillation was previously
applied only on RGB diffusion models, we first present two
key modifications to the NeRF model that are better paired
with guidance from a latent model. First, instead of repre-
senting our NeRF in the standard RGB space, we propose
aLatent-NeRF which operates directly in the latent space
of the LDM, thus avoiding the burden of encoding a ren-
dered RGB image to a latent space for each and every guid-
ing step. Secondly, we show that after training, one can
easily transform a Latent-NeRF back into a regular NeRF.
This allows further refinement in RGB space, where we can
also introduce shading constraints or apply further guidance
from RGB diffusion models [ 40]. This is achieved by intro-
ducing a learnable linear layer that can be optionally added
to a trained Latent-NeRF, where the linear layer is initial-
ized using an approximate mapping between the latent and
RGB values [ 45].
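A small sketch of such an optional latent-to-RGB head: a 1x1 linear projection from the 4-channel latent rendering to RGB, initialized from an approximate linear mapping and then fine-tuned; the placeholder matrix below merely stands in for the published approximation [45] and does not contain its actual values.

```python
import torch
import torch.nn as nn

class LatentToRGB(nn.Module):
    """Optional 1x1 linear head that turns a 4-channel latent rendering into RGB."""
    def __init__(self, approx_matrix=None):
        super().__init__()
        self.proj = nn.Conv2d(4, 3, kernel_size=1, bias=False)
        if approx_matrix is not None:
            # initialize from a known approximate latent->RGB mapping, then fine-tune
            with torch.no_grad():
                self.proj.weight.copy_(approx_matrix.view(3, 4, 1, 1))

    def forward(self, latent):                  # latent: (B, 4, H, W)
        return self.proj(latent).clamp(-1, 1)   # rough RGB in [-1, 1]

# placeholder 3x4 matrix standing in for the published approximation
approx = torch.randn(3, 4) * 0.3
head = LatentToRGB(approx)
rgb = head(torch.randn(2, 4, 64, 64))
print(rgb.shape)  # torch.Size([2, 3, 64, 64])
```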
Our first form of shape-guidance is applied using a
coarse 3D model, which we call a Sketch-Shape . Given a
Sketch-Shape, we apply soft constraint during the NeRF
optimization process to guide its occupancy based on the
given shape. Easily combined with Latent-NeRF optimiza-
tion, the additional constraint can be tuned to meet a desired
level of strictness. Using a Sketch-Shape allows users to
define their base geometry, where Latent-NeRF then refines
the shape and introduces texture based on a guiding prompt.
We further present Latent-Paint , another form of shape-
guidance where the generation process is applied directly
on a given 3D mesh, and we have not only the structure but
also the exact parameterization of the input mesh. This is
achieved by representing a texture map in the latent space
and propagating the guidance gradients directly to the tex-
ture map through the rendered mesh. By doing so, we allow
for the first time to colorize a mesh using guidance from a
pretrained diffusion model and enjoy its expressiveness.
We evaluate our different forms of guidance under a va-
riety of scenarios and show that together with our latent-
based guidance, they offer a compelling solution for con-
strained shape and texture generation.
|
Peng_On_the_Convergence_of_IRLS_and_Its_Variants_in_Outlier-Robust_CVPR_2023
|
Abstract
Outlier-robust estimation involves estimating some pa-
rameters ( e.g., 3D rotations) from data samples in the pres-
ence of outliers, and is typically formulated as a non-convex
and non-smooth problem. For this problem, the classical
method called iteratively reweighted least-squares (IRLS)
and its variants have shown impressive performance. This
paper makes several contributions towards understanding
why these algorithms work so well. First, we incorporate
majorization and graduated non-convexity (GNC) into the
IRLS framework and prove that the resulting IRLS vari-
ant is a convergent method for outlier-robust estimation.
Moreover, in the robust regression context with a constant
fraction of outliers, we prove this IRLS variant converges
to the ground truth at a global linear and local quadratic
rate for a random Gaussian feature matrix with high prob-
ability. Experiments corroborate our theory and show that
the proposed IRLS variant converges within 5-10 iterations
for typical problem instances of outlier-robust estimation,
while state-of-the-art methods need at least 30 iterations. A
basic implementation of our method is provided: https:
//github.com/liangzu/IRLS-CVPR2023
...attempts to analyze this difficulty [ caused by infinite
weights of IRLS for the ℓp-loss ] have a long history of
proofs and counterexamples to incorrect claims.
Khurrum Aftab & Richard Hartley [1]
|
1. Introduction
1.1. The Outlier-Robust Estimation Problem
Many parameter estimation problems can be stated in
the following general form. We are given some function
$r : \mathcal{C} \times \mathcal{D} \to [0, \infty)$, called the residual function. Here $\mathcal{D}$ is the domain of data samples $d_1, \ldots, d_m$, and $\mathcal{C} \subset \mathbb{R}^n$ is the constraint set where our (ground truth) variable $v^*$ lies; $\mathcal{C}$ can be convex, such as an affine subspace, or non-convex, such as a special orthogonal group $SO(3)$. We aim to recover $v^*$ from the data $d_i$'s. A simple example is linear regression, where a sample $d_i = (a_i, y_i)$ consists of a feature vector $a_i \in \mathbb{R}^n$ and a scalar response $y_i \in \mathbb{R}$, and the residual function is $r(v, d_i) := |a_i^\top v - y_i|$.
The sample $d_i$ is called an inlier if $r(v^*, d_i) \approx 0$. It is called an outlier if the residual $r(v^*, d_i)$ is large (vaguely speaking). If all samples are inliers, one usually prefers solving the following problem as a means to estimate $v^*$:
$$\min_{v \in \mathcal{C}} \; \sum_{i=1}^{m} r(v, d_i)^2 \qquad (1)$$
Problem (1) is called least-squares , and is known since Leg-
endre [41] and Gauss [29] in the linear regression context.
Even before that, Boscovich [13] suggested to minimize (1)
without the square. This unsquared version is called least
absolute deviation , and is more robust to outliers than (1).
Consider the following formulation for outlier-robust es-
timation (i.e., a specific type of M-estimators [35, 55]):
$$\min_{v \in \mathcal{C}} \; \sum_{i=1}^{m} \rho\big(r(v, d_i)\big) \qquad (2)$$
Here $\rho : \mathbb{R} \to \mathbb{R}$ is some outlier-robust loss (the unsquared version of (1) corresponds to $\rho(r) = |r|$ in (2)). Among many possible losses $\rho$ [19, 23], we discuss two particular choices. The first is the $\ell_p$-loss $\rho(r) = |r|^p / p$, $p \in (0, 1]$; it has been used in several research fields, e.g., geometric vision [1, 19], compressed sensing [17, 22, 36], matrix recovery [37, 44, 45], and subspace clustering [26]. The other loss is due to Huber [35]: $\rho(r) = \min\{r^2, c^2\}$, with $c > 0$ a hyper-parameter; it has later been named as Talwar [21, 48], Huber-type skipped mean [30], truncated quadratic [6, 11], and truncated least-squares (TLS) [4, 68, 74]. Both losses
are highly robust to outliers but make solving (2) diffi-
cult, e.g., the objective of (2) becomes non-smooth or non-
convex. This motivates the need to develop efficient and
provably correct solvers for (2) with either of the two losses.
1.2. IRLS and Its Variants in Vision & Optimization
The General Principle of IRLS. As its name suggests, it-
eratively reweighted least-squares (IRLS) is a general algo-
rithmic paradigm that alternates between defining a weight
for each sample and solving a weighted least squares prob-
lem. Specifically, IRLS initializes a variable $v^{(0)} \in \mathcal{C}$, and,
for $t = 0, 1, \ldots$, alternates between the following two steps:
$$\text{Update weights } w_i^{(t+1)} \text{ based on } v^{(t)}, \quad \forall i = 1, \ldots, m \qquad (3)$$
$$\text{Solve: } v^{(t+1)} \leftarrow \operatorname*{argmin}_{v \in \mathcal{C}} \; \sum_{i=1}^{m} w_i^{(t+1)} \, r(v, d_i)^2 \qquad (4)$$
This ba
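To make steps (3)-(4) concrete in the simplest setting, here is a generic textbook IRLS sketch for robust linear regression with the $\ell_p$-loss on an unconstrained variable, using the standard weight choice $w_i = \max(|r_i|, \epsilon)^{p-2}$ (the small floor sidesteps the infinite-weight issue mentioned in the epigraph); this is plain IRLS, not the majorization/GNC-based variant proposed in the paper.

```python
import numpy as np

def irls_lp(A, y, p=1.0, iters=20, eps=1e-6):
    """IRLS for min_v sum_i |a_i^T v - y_i|^p with an unconstrained v (C = R^n)."""
    v = np.linalg.lstsq(A, y, rcond=None)[0]        # least-squares initialization
    for _ in range(iters):
        r = np.abs(A @ v - y)                        # residuals r(v, d_i)
        w = np.maximum(r, eps) ** (p - 2)            # step (3): weight update
        Aw = A * w[:, None]                          # step (4): weighted least squares,
        v = np.linalg.solve(A.T @ Aw, Aw.T @ y)      # i.e. solve (A^T W A) v = A^T W y
    return v

# toy usage: a third of the samples are corrupted by large outliers
rng = np.random.default_rng(0)
n, m = 5, 200
v_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = A @ v_true
y[: m // 3] += 10.0 * rng.standard_normal(m // 3)
v_hat = irls_lp(A, y, p=1.0)
print(np.linalg.norm(v_hat - v_true))                # typically small despite the outliers
```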
|
Moon_Query-Dependent_Video_Representation_for_Moment_Retrieval_and_Highlight_Detection_CVPR_2023
|
Abstract
Recently, video moment retrieval and highlight detec-
tion (MR/HD) are being spotlighted as the demand for video
understanding is drastically increased. The key objective
of MR/HD is to localize the moment and estimate clip-wise
accordance level, i.e., saliency score, to the given text query.
Although the recent transformer-based models brought some
advances, we found that these methods do not fully exploit
the information of a given query. For example, the relevance
between text query and video contents is sometimes neglected
when predicting the moment and its saliency. To tackle this
issue, we introduce Query-Dependent DETR (QD-DETR), a
detection transformer tailored for MR/HD. As we observe
the insignificant role of a given query in transformer architec-tures, our encoding module starts with cross-attention layers
to explicitly inject the context of text query into video repre-
sentation. Then, to enhance the model’s capability of exploit-
ing the query information, we manipulate the video-query
pairs to produce irrelevant pairs. Such negative (irrelevant)
video-query pairs are trained to yield low saliency scores,
which in turn, encourages the model to estimate precise ac-
cordance between query-video pairs. Lastly, we present an
input-adaptive saliency predictor which adaptively defines
the criterion of saliency scores for the given video-query
pairs. Our extensive studies verify the importance of build-
ing the query-dependent representation for MR/HD. Specif-
ically, QD-DETR outperforms state-of-the-art methods on
QVHighlights, TVSum, and Charades-STA datasets. Codes
are available at github.com/wjun0830/QD-DETR .
|
1. Introduction
Along with the advance of digital devices and platforms,
video is now one of the most desired data types for con-
sumers [ 2,48]. Although the large information capacity of
videos might be beneficial in many aspects, e.g., informative
Figure 1. Comparison of highlight-ness (saliency score) when relevant and non-relevant queries are given (example positive query: “Man with curly hair speaks directly to camera”; negative query: “Kids exercise in front of parked cars”). We found that the existing work only uses queries to play an insignificant role, thereby may not be capable of detecting negative queries and video-query relevance; saliency scores for clips in ground-truth (GT) moments are low and equivalent for positive and negative queries. On the other hand, query-dependent representations of QD-DETR result in corresponding saliency scores to the video-query relevance and precisely localized moments.
and entertaining, inspecting the videos is time-consuming,
so that it is hard to capture the desired moments [ 1,2].
Indeed, the need to retrieve user-requested or highlight
moments within videos is greatly raised. Numerous research
efforts were put into the search for the requested moments
in the video [ 1,12,13,30] and summarizing the video high-
lights [ 4,33,47,57]. Recently, Moment-DETR [ 23] further
spotlighted the topic by proposing a QVHighlights dataset
that enables the model to perform both tasks, retrieving the
moments with their highlight-ness, simultaneously.
When describing the moment, one of the most favored
types of query is the natural language sentence (text) [ 1].
While early methods utilized convolution networks [ 14,44,
59], recent approaches have shown that deploying the atten-
tion mechanism of transformer architecture is more effective
to fuse the text query into the video representation. For
example, Moment-DETR [ 23] introduced the transformer
architecture which processes both text and video tokens as
input by modifying the detection transformer (DETR), and
UMT [ 31] proposed transformer architectures to take multi-
modal sources, e.g., video and audio. Also, they utilized
the text queries in the transformer decoder. Although they
brought breakthroughs in the field of MR/HD with seminal
architectures, they overlooked the role of the text query. To
validate our claim, we investigate the Moment-DETR [ 23]i n
terms of the impact of text query in MR/HD (Fig. 1). Given
the video clips with a relevant positive query and an irrel-
evant negative query, we observe that the baseline often neglects the given text query when estimating the query-
relevance scores, i.e., saliency scores, for each video clip.
To this end, we propose Query-Dependent DETR (QD-
DETR) that produces query-dependent video representation.
Our key focus is to ensure that the model’s prediction for each clip is highly dependent on the query. First, to fully
utilize the contextual information in the query, we revise the
transformer encoder to be equipped with cross-attention lay-
ers at the very first layers. By inserting a video as the query
and a text as the key and value of the cross-attention layers,
our encoder enforces the engagement of the text query in
extracting video representation. Then, in order to not only inject a lot of textual information into the video feature but also make it fully exploited, we leverage the negative video-query
pairs generated by mixing the original pairs. Specifically,
the model is learned to suppress the saliency scores of such
negative (irrelevant) pairs. Our expectation is the increased
contribution of the text query in prediction since the videos
will be sometimes required to yield high saliency scores and
sometimes low ones depending on whether the text query is relevant or not. Lastly, to apply the dynamic criterion to mark highlights for each instance, we deploy a saliency
token to represent the entire video and utilize it as an input-
adaptive saliency criterion. With all components combined,
our QD-DETR produces query-dependent video representa-
tion by integrating source and query modalities. This further
allows the use of positional queries [ 29] in the transformer
decoder. Overall, our superior performances over the exist-
ing approaches validate the significance of the role of text
query for MR/HD.
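A compact, hypothetical sketch of the query-injection idea: a cross-attention layer that uses video clip tokens as queries and text tokens as keys and values, followed by a saliency score computed against a learnable saliency token; layer sizes, the residual/normalization choices, and the scoring function are assumptions made for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class QueryInjection(nn.Module):
    """Cross-attention: video clips attend to text tokens (video = query, text = key/value)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.saliency_token = nn.Parameter(torch.randn(1, 1, dim))

    def forward(self, video, text):
        # video: (B, L_v, dim) clip features; text: (B, L_t, dim) query-word features
        fused, _ = self.attn(query=video, key=text, value=text)
        fused = self.norm(video + fused)                # query-dependent clip representation
        # input-adaptive saliency: score each clip against the video-level saliency token
        tok = self.saliency_token.expand(video.size(0), -1, -1)
        saliency = (fused * tok).sum(dim=-1) / fused.size(-1) ** 0.5   # (B, L_v)
        return fused, saliency

# toy usage; mismatched (negative) video-text pairs would be trained to get low saliency
enc = QueryInjection()
clips, words = torch.rand(2, 75, 256), torch.rand(2, 12, 256)
fused, sal = enc(clips, words)
print(fused.shape, sal.shape)  # torch.Size([2, 75, 256]) torch.Size([2, 75])
```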
|
Nag_Unbiased_Scene_Graph_Generation_in_Videos_CVPR_2023
|
Abstract
The task of dynamic scene graph generation (SGG) from
videos is complicated and challenging due to the inherent
dynamics of a scene, temporal fluctuation of model predictions,
and the long-tailed distribution of the visual relationships in
addition to the already existing challenges in image-based SGG.
Existing methods for dynamic SGG have primarily focused on
capturing spatio-temporal context using complex architectures
without addressing the challenges mentioned above, especially
the long-tailed distribution of relationships. This often leads
to the generation of biased scene graphs. To address these
challenges, we introduce a new framework called TEMPURA:
TEmporal consistency and Memory Prototype guided UnceR-
tainty Attenuation for unbiased dynamic SGG. TEMPURA
employs object-level temporal consistencies via transformer-
based sequence modeling, learns to synthesize unbiased
relationship representations using memory-guided training, and
attenuates the predictive uncertainty of visual relations using
a Gaussian Mixture Model (GMM). Extensive experiments
demonstrate that our method achieves significant (up to 10% in
some cases) performance gain over existing methods highlight-
ing its superiority in generating more unbiased scene graphs.
Code: https://github.com/sayaknag/unbiasedSGG.git
|
1. Introduction
Scene graphs provide a holistic scene understanding that can
bridge the gap between vision and language [25,31]. This has
made image scene graphs very popular for high-level reasoning
tasks such as captioning [14, 37], image retrieval [49, 60],
human-object interaction (HOI) [40], and visual question
answering (VQA) [23]. Although significant strides have
been made in scene graph generation (SGG) from static
images [12,31,33,37,39,42,56,62 –64], research on dynamic
SGG is still in its nascent stage.
Dynamic SGG involves grounding visual relationships
jointly in space and time. It is aimed at obtaining a structured
representation of a scene in each video frame along with
learning the temporal evolution of the relationships between
each pair of objects. Such a detailed and structured form of
video understanding is akin to how humans perceive real-world
Figure 1. (a) Long-tailed distribution of the predicate classes in Ac-
tion Genome [25]. (b) Visual relationship or predicate classification
performance of two state-of-the-art dynamic SGG methods, namely
STTran [10] and TRACE [57], falls off significantly for the tail classes.
activities [4,25,43] and with the exponential growth of video
data, it is necessary to make similar strides in dynamic SGG.
In recent years, a handful of works have attempted to
address dynamic SGG [10, 16, 24, 38, 57], with a majority of
them leveraging the superior sequence processing capability
of transformers [1,5,19,26,44,53,58]. These methods simply
focused on designing more complex models to aggregate
spatio-temporal contextual information in a video but fail
to address the data imbalance of the relationship/predicate
classes, and although their performance is encouraging under
the Recall@k metric, this metric is biased toward the highly
populated classes. An alternative metric was proposed in [7,56]
called mean-Recall@k which quantifies how SGG models
perform over all the classes and not just the high-frequent ones.
Fig 1a shows the long-tailed distribution of predicate classes
in the benchmark Action Genome [25] dataset and Fig 1b
highlights the failure of some existing state-of-the-art methods
in classifying the relationships/predicates in the tail of the
distribution. The high recall values in prior works suggest that
they may have a tendency to overfit on popular predicate classes
(e.g.in front of / not looking at ), without considering how the
performances on rare classes (e.g. eating/wiping ) are getting
impacted [12]. Predicates lying in the tails often provide more
informative depictions of underlying actions and activities in
the video. Thus, it is important to be able to measure a model’s
long-term performance not only on the frequently occurring
relationships but also on the infrequently occurring ones.
Data imbalance is, however, not the only challenge in
dynamic SGG. As shown in Fig. 2 and Fig. 3, several other
factors, including noisy annotations, motion blur, temporal
fluctuations of predictions, and a need to focus on only
(a) Incomplete annotations and multiple correct predicates. (b) Triplet variability (multiple possible object pairs for the same relationship).
Figure 2. Noisy scene graph annotations in Action Genome [25]
increase the uncertainty of predicted scene graphs.
active objects that are involved in an action contribute to the
bias in training dynamic SGG models [3]. As a result, the
visual relationship predictions have high uncertainty, thereby
increasing the challenge of dynamic SGG manyfold.
In this paper, we address these sources of bias in dynamic
SGG and propose methods to compensate for them. We
identify missing annotations, multi-label mapping, and triplet
(<subject-predicate-object>) variability (Fig 2) as labeling
noise, which, coupled with the inherent temporal fluctuations
in a video, constitutes data noise that can be modeled
as aleatoric uncertainty [11]. Another form of uncertainty,
called epistemic uncertainty, relates to misleading model
predictions due to a lack of sufficient observations [28] and is
more prevalent for long-tailed data [22]. To address the bias
in training SGG models [3] and generate more unbiased scene
graphs, it is necessary to model and attenuate the predictive
uncertainty of an SGG model. While multi-model deep
ensembles [13,32] can be effective, they are computationally
expensive for large-scale video understanding. Therefore, we
employ the concepts of single model uncertainty based on
Mixture Density Networks (MDN) [9, 28, 54] and design the
predicate classification head as a Gaussian Mixture Model
(GMM) [8, 9]. The GMM-based predicate classification loss
penalizes the model if the predictive uncertainty of a sample is
high, thereby attenuating the effects of noisy SGG annotations.
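As a rough illustration of how an uncertainty-aware head can attenuate label noise, the PyTorch sketch below predicts a mean logit and a per-class variance and marginalizes the softmax over sampled logit noise. This is a generic MDN-style formulation under our own naming assumptions, not TEMPURA's exact GMM loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyAwareHead(nn.Module):
    """Sketch of an MDN-style predicate head: predicts a mean logit and a
    per-class variance, then marginalizes over Gaussian logit noise so that
    high-uncertainty (likely noisy) samples contribute a softer loss."""

    def __init__(self, dim, num_predicates, num_samples=10):
        super().__init__()
        self.mu = nn.Linear(dim, num_predicates)        # mean logits
        self.log_var = nn.Linear(dim, num_predicates)   # aleatoric variance
        self.num_samples = num_samples

    def forward(self, feats, labels=None):
        mu, log_var = self.mu(feats), self.log_var(feats)
        if labels is None:
            return mu
        std = torch.exp(0.5 * log_var)
        # Monte-Carlo estimate of E[softmax(mu + sigma * eps)]
        probs = 0.0
        for _ in range(self.num_samples):
            eps = torch.randn_like(std)
            probs = probs + F.softmax(mu + std * eps, dim=-1)
        probs = probs / self.num_samples
        return F.nll_loss(torch.log(probs + 1e-8), labels)
```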
Due to the long-tailed bias of SGG datasets, the predicate
embeddings learned by existing dynamic SGG frameworks
significantly underfit to the data-poor classes. Since each
object pair can have multiple correct predicates (Fig 2),
many relationship classes share similar visual characteristics.
Exploiting this factor, we propose a memory-guided training
strategy to debias the predicate embeddings by facilitating
knowledge transfer from the data-rich to the data-poor classes
sharing similar characteristics. This approach is inspired by
recent advances in meta-learning and memory-guided training
for low-shot, and long-tail image recognition [17,45,48,51,65],
whereby a memory bank, composed of a set of prototypical
abstractions [51] each compressing information about a
predicate class, is designed. We propose a progressive memory
computation approach and an attention-based information diffu-
sion strategy [58]. Backpropagating through this approach
teaches the model to generate more balanced predicate
representations that generalize to all the classes.
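A minimal sketch of such attention-based diffusion from a prototype memory bank is given below; the function and tensor names are assumptions, and the paper's progressive memory computation is more involved than this single step.

```python
import torch
import torch.nn.functional as F

def memory_diffusion(predicate_feats, memory_bank, temperature=1.0):
    """Sketch of attention-based information diffusion: each predicate
    embedding queries a bank of per-class prototypes and receives a
    weighted mixture of them, transferring knowledge from data-rich to
    data-poor classes that share similar prototypes.

    predicate_feats: (N, D) relationship embeddings
    memory_bank:     (C, D) one prototype per predicate class
    """
    scale = temperature * memory_bank.shape[-1] ** 0.5
    attn = predicate_feats @ memory_bank.t() / scale
    weights = F.softmax(attn, dim=-1)      # (N, C) similarity to each prototype
    diffused = weights @ memory_bank       # (N, D) memory-enhanced features
    return predicate_feats + diffused      # residual keeps the original signal
```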
Figure 3. Occlusion and motion blur caused by moving objects in
videos renders off-the-shelf object detectors such as FasterRCNN [47]
ineffective in producing consistent object classification.
Finally, to ensure the correctness of a generated graph,
accurate classification of both nodes (objects) and edges (pred-
icates) is crucial. While existing dynamic SGG methods focus
on innovative visual relation classification [10,24,38], object
classification is typically based on proposals from off-the-shelf
object detectors [47]. These detectors may fail to compensate
for dynamic nuances in videos such as motion blur, abrupt
background changes, occlusion, etc. leading to inconsistent
object classification. While some works use bulky tracking
algorithms to address this issue [57], we propose a simpler yet
highly effective learning-based approach combining the superior
sequence processing capability of transformers [58], with the
discriminative power of contrastive learning [18] to ensure more
temporally consistent object classification. Therefore, com-
bining the principles of temporally consistent object detection,
uncertainty-aware learning, and memory-guided training, we
design our framework called TEMPURA: TEmporal consis-
tency and Memory Prototype guided UnceRtainty Attenuation
for unbiased dynamic SGG. To the best of our knowledge,
|
Poirier-Ginter_Robust_Unsupervised_StyleGAN_Image_Restoration_CVPR_2023
|
Abstract
GAN-based image restoration inverts the generative
process to repair images corrupted by known degrada-
tions. Existing unsupervised methods must be carefully
tuned for each task and degradation level. In this work,
we make StyleGAN image restoration robust: a single
set of hyperparameters works across a wide range of
degradation levels. This makes it possible to handle
combinations of several degradations, without the need
to retune. Our proposed approach relies on a 3-phase
progressive latent space extension and a conservative
optimizer, which avoids the need for any additional reg-
ularization terms. Extensive experiments demonstrate
robustness on inpainting, upsampling, denoising, and
deartifacting at varying degradation levels, outperform-
ing other StyleGAN-based inversion techniques. Our
approach also favorably compares to diffusion-based
restoration by yielding much more realistic inversion
results. Code is available at the above URL.
|
1. Introduction
Image restoration, the task of recovering a high quality
image from a degraded input, is a long-standing problem
in image processing and computer vision. Since differ-
ent restoration tasks—such as denoising, upsampling, deartifacting, etc.—can be quite distinct, many recent
approaches [9, 10,37,54,61,62] propose to solve them
in a supervised learning paradigm by leveraging curated
datasets specifically designed for the task at hand. Un-
fortunately, designing task-specific approaches requires
retraining large networks on each task separately.
In parallel, the advent of powerful generative models
has enabled the emergence of unsupervised restoration
methods [42], which do not require task-specific training.
The idea is to invert the generative process to recover
a clean image. Assuming a known (or approximate)
degradation model, the optimization procedure there-
fore attempts to recover an image that both: 1) closely
matches the target degraded image after undergoing a
similar degradation model (fidelity); and 2) lies in the
space of realistic images learned by the GAN (realism).
In the recent literature, StyleGAN [29 –31] has been
found to be particularly effective for unsupervised image
restoration [13 –15,39,40] because of the elegant design
of its latent space. Indeed, these approaches leverage
style inversion techniques [3, 4] to solve for a latent vec-
tor that, when given to the generator, creates an image
close to the degraded target. Unfortunately, this only
works when such a match actually exists in the model
distribution, which is rarely the case in practice. Hence,
effective methods extend the learned latent space to cre-
ate additional degrees of freedom to admit more images;
this creates the need for additional regularization losses.
Hyperparameters must therefore carefully be tuned for
each specific task and degradation level.
In this work, we make unsupervised StyleGAN image
restoration-by-inversion robust to the type and intensity
of degradations. Our proposed method employs the same
hyperparameters across all tasks and levels and does not
rely on any regularization loss. Our approach leans on
two key ideas. First, we rely on a 3-phase progressive
latent space extension: we begin by optimizing over the
learned (global) latent space, then expand it across in-
dividual layers of the generator, and finally expand it
further across individual filters—where optimization at
each phase is initialized with the result of the previous
one. Second, we rely on a conservative, normalized gra-
dient descent (NGD) [58] optimizer which is naturally
constrained to stay close to its initial point compared to
more sophisticated approaches such as Adam [33]. This
combination of prudent optimization over a progressively
richer latent space avoids additional regularization terms
altogether and keeps the overall procedure simple and
constant across all tasks. We evaluate our method on
upsampling, inpainting, denoising and deartifacting on
a wide range of degradation levels, achieving state-of-
the-art results on most scenarios even when baselines are
optimized on each independently. We also show that our
approach outperforms existing techniques on composi-
tions of these tasks without changing hyperparameters.
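The conservative, normalized gradient descent update described above can be sketched as follows, assuming a list of latent tensors with requires_grad=True and a differentiable restoration loss; the function name and step size are illustrative, not the paper's exact settings.

```python
import torch

def ngd_step(latents, loss, lr=0.05):
    """Sketch of a normalized gradient descent update on StyleGAN latent
    parameters: the gradient is rescaled to unit norm before the step, so
    every iteration moves a fixed distance and the iterate stays close to
    its initialization without extra regularization terms."""
    grads = torch.autograd.grad(loss, latents)
    with torch.no_grad():
        for p, g in zip(latents, grads):
            p -= lr * g / (g.norm() + 1e-8)
```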
•We propose a robust 3-phase StyleGAN image restora-
tion framework. Our optimization technique maintains:
1) strong realism when degradation levels are high; and
2) high fidelity when they are low. Our method is fully
unsupervised, requires no per-task training, and can
handle different tasks at different levels without having
to adjust hyperparameters.
•We demonstrate the effectiveness of the proposed
method under diverse and composed degradations.
We develop a benchmark of synthetic image restora-
tion tasks—making their degradation levels easy to
control—with care taken to avoid unrealistic assump-
tions. Our method outperforms existing unsuper-
vised [13, 40] and diffusion-based [32] approaches.
|
Qiu_REC-MV_REconstructing_3D_Dynamic_Cloth_From_Monocular_Videos_CVPR_2023
|
Abstract
Reconstructing dynamic 3D garment surfaces with open
boundaries from monocular videos is an important problem
as it provides a practical and low-cost solution for clothes
digitization. Recent neural rendering methods achieve
high-quality dynamic clothed human reconstruction results
from monocular video, but these methods cannot separate
the garment surface from the body. Moreover, despite ex-
isting garment reconstruction methods based on feature
curve representation demonstrating impressive results for
garment reconstruction from a single image, they struggle
to generate temporally consistent surfaces for the video in-
put. To address the above limitations, in this paper, we for-
mulate this task as an optimization problem of 3D garment
feature curves and surface reconstruction from monocular
video. We introduce a novel approach, called REC-MV , to
jointly optimize the explicit feature curves and the implicit
signed distance field (SDF) of the garments. Then the open
garment meshes can be extracted via garment template reg-
istration in the canonical space. Experiments on multiple
casually captured datasets show that our approach outper-
forms existing methods and can produce high-quality dy-
namic garment surfaces. The source code is available at
https://github.com/GAP-LAB-CUHK-SZ/REC-MV .
|
1. Introduction
High-fidelity clothes digitization plays an essential role
in various human-related vision applications such as vir-
tual shopping, film, and gaming. In our daily life, hu-
mans are constantly in motion, and their clothes move
along with them. To model this very common scenario, it
is indispensable to capture dynamic garments in real applica-
tions. Thanks to the rapid development of mobile devices
in terms of digital cameras, processors, and storage, shoot-
ing a monocular video in the wild becomes highly conve-
nient and accessible for general customers. In this paper,
*Equal contribution.
†Corresponding author: [email protected] .
Figure 1. Can we extract dynamic 3D garments from monocu-
lar videos ? The answer is Yes! By jointly optimizing the dynamic
feature curves and garment surface followed by non-rigid template
registration, our method can reconstruct high-fidelity and tempo-
rally consistent garment meshes with open boundaries.
our goal is definite – extracting dynamic 3D garments from
monocular videos , which is significantly meaningful and
valuable for practical applications, but is yet an uncultivated
land with many challenges.
We attempt to seek a new solution to this open prob-
lem and start by revisiting existing works from two main-
streams. i) Leveraging the success of neural rendering
methods [ 35,37,57], several works are able to recon-
struct dynamic clothed humans from monocular videos
[8,19,29,47,49], by representing the body surface with
an implicit function in the canonical space and apply skin-
ning based deformation for motion modeling. One naive
way to achieve our goal is to first reconstruct the clothed human
with these methods and then separate the garments from hu-
man bodies. However, such a separation job requires la-
borious and non-trivial processing by professional artists,
which is neither straightforward nor feasible for general
application scenarios. ii) As for garment reconstruction,
many methods [ 5,10,20,61,62] make it possible to recon-
struct high-quality garment meshes from single-view im-
ages in the wild. Specifically, ReEF [ 62] estimates 3D fea-
ture curves *and an implicit surface field [ 34] for non-rigid
garment template registration. Nonetheless, these meth-
ods struggle to produce temporally consistent surfaces when
taking videos as inputs.
The above discussion motivates us to combine the mer-
its of both the dynamic surface modeling in recent neu-
ral rendering methods and the explicit curve representation
for garment modeling. To this end, we try to delineate
a new path towards our goal: optimizing dynamic explicit
feature curves and implicit garment surface from monocu-
lar videos , to extract temporally consistent garment meshes
with open boundaries. We represent the explicit curves and
implicit surface in the canonical space with skinning-based
motion modeling, and optimize them by 2D supervision au-
tomatically extracted from the video ( e.g., image intensities,
garment masks, and visible feature curves). After that, the
open garment meshes can be extracted by a garment tem-
plate registration in the canonical space (see Fig. 1).
We strive to probe this path as follows: (1) As a feature
curve is a point set whose deformation has a high degree
of freedom, directly optimizing the per-point offsets often
leads to undesired self-intersection and spike artifacts. To
better regularize the deformation of curves, we introduce
an intersection-free curve deformation method to maintain
the order of feature curves. (2) We optimize the 3D fea-
ture curves using 2D projection loss measured by the esti-
mated 2D visible curves, where the key challenge is to ac-
curately compute the visibility of curves. To address this
problem, we propose a surface-aware curve visibility esti-
mation method based on the implicit garment surface and
z-buffer. (3) To ensure the accuracy of curve visibility es-
timation during the optimization process, the curves should
always be right on the garment surface. We therefore in-
troduce a progressive curve and surface evolution strategy
to jointly update the curves and surface while imposing the
on-surface regularization for curves.
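The surface-aware visibility test can be sketched roughly as follows, assuming curve points already expressed in camera coordinates, a pinhole intrinsic matrix, and a z-buffer depth map rendered from the implicit garment surface; all names and tolerances here are assumptions rather than the paper's exact formulation.

```python
import torch

def curve_visibility(curve_pts, K, depth_map, eps=5e-3):
    """Sketch of surface-aware visibility: project 3D curve points into the
    image and mark a point visible when its depth does not exceed the
    z-buffer depth rendered from the garment surface at the same pixel,
    i.e. when it is not occluded by the surface.

    curve_pts: (N, 3) points in camera space; depth_map: (H, W) rendered z-buffer."""
    H, W = depth_map.shape
    uv = (K @ curve_pts.t()).t()               # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]
    u = uv[:, 0].round().long().clamp(0, W - 1)
    v = uv[:, 1].round().long().clamp(0, H - 1)
    surf_depth = depth_map[v, u]
    return curve_pts[:, 2] <= surf_depth + eps  # (N,) boolean visibility mask
```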
To summarize, the main contributions of this work are:
• We introduce REC-MV , to our best knowledge, the
first method to reconstruct dynamic and open loose
garments from the monocular video.
• We propose a new approach for joint optimization
of explicit feature curves and implicit garment sur-
face from monocular video, based on carefully de-
signed intersection-free curve deformation, surface-
aware curve visibility estimation, and progressive
curve and surface evolution methods.
• Extensive evaluations on casually captured monocular
videos demonstrate that our method outperforms exist-
ing methods.
*feature curves of the garment ( e.g., necklines, hemlines) can provide criti-
cal cues for determining the shape contours of the garment.
|
Panek_Visual_Localization_Using_Imperfect_3D_Models_From_the_Internet_CVPR_2023
|
Abstract
Visual localization is a core component in many appli-
cations, including augmented reality (AR). Localization al-
gorithms compute the camera pose of a query image w.r.t. a
scene representation, which is typically built from images.
This often requires capturing and storing large amounts of
data, followed by running Structure-from-Motion (SfM) al-
gorithms. An interesting, and underexplored, source of data
for building scene representations are 3D models that are
readily available on the Internet, e.g., hand-drawn CAD
models, 3D models generated from building footprints, or
from aerial images. These models make it possible to perform visual
localization right away without the time-consuming scene
capturing and model building steps. Yet, it also comes
with challenges as the available 3D models are often imper-
fect reflections of reality. E.g., the models might only have
generic or no textures at all, might only provide a simple
approximation of the scene geometry, or might be stretched.
This paper studies how the imperfections of these models
affect localization accuracy. We create a new benchmark
for this task and provide a detailed experimental evaluation
based on multiple 3D models per scene. We show that 3D
models from the Internet show promise as an easy-to-obtain
scene representation. At the same time, there is significant
room for improvement for visual localization pipelines. To
foster research on this interesting and challenging task, we
release our benchmark at v-pnk.github.io/cadloc.
|
1. Introduction
Visual localization is the task of estimating the pre-
cise position and orientation, i.e., the camera pose, from
which a given query image was taken. Localization is a
core capability for many applications, including self-driving
cars [35], drones [54], and augmented reality (AR) [16,57].
Visual localization algorithms mainly differ in the way
they represent the scene, e.g., explicitly as a 3D model [27,
28, 42, 53, 56, 57, 73, 75, 76, 78, 83, 92, 107] or a set of
Figure 1. We evaluate the use of 3D models from the Internet for
visual localization. Such models can differ significantly from the
real world in terms of geometry and appearance.
posed images [10,66,108,110], or implicitly via the weights
of a neural network [5, 12–14, 17, 46, 47, 59, 101, 104],
and in the way they estimate the camera pose, e.g., based
on 2D-3D [27, 28, 73, 75, 78, 92, 107] or 2D-2D [10, 110]
matches or as a weighted combination of a set of base
poses [46, 47, 59, 66, 81, 104]. Existing visual localization
approaches typically use RGB(-D) images to construct their
scene representations. Yet, capturing sufficiently many im-
ages and estimating their camera poses, e.g., via Structure-
from-Motion (SfM) [82,90], are challenging tasks by them-
selves, especially for non-technical users unfamiliar with
how to best capture images for the reconstruction task.
As shown in [4, 63, 85, 88, 109], modern local features
such as [21,23,91,111] are capable of matching real images
with renderings of 3D models, even though they were never
explicitly trained for this task. This opens up a new way of
obtaining the scene representations needed for visual local-
ization: rather than building the representation from images
captured in the scene, e.g., obtained via a service such as
Google Street View, crowd-sourcing [99], or from photo-
sharing websites [52], one can simply download a ready-
made 3D model from the Internet ( cf. Fig. 1), e.g., from 3D
model sharing websites such as Sketchfab and 3D Ware-
house. This removes the need to run a SfM system such as
COLMAP [82] or RealityCapture on image data, which can
be a very complicated step, especially for non-experts such
as artists designing AR applications. As such, this approach
has the potential to significantly simplify the deployment of
visual localization-based applications.
However, using readily available 3D models from the In-
ternet to define the scene representation also comes with
its own set of challenges ( cf. Fig. 2): (1) fidelity of ap-
pearance : the 3D models might not be colored / textured,
thus resulting in very abstract representations that are hard
to match to real images [63]. Even if a model is textured,
the texture might be generic and repetitive rather than based
on the real appearance of the scene. In other cases, the tex-
ture might be based on real images, but severely distorted
or stretched if these images were captured from drones or
planes. (2) fidelity of geometry : some 3D models might be
obtained via SfM and Multi-View Stereo (MVS), resulting
in 3D models that accurately represent the underlying scene
geometry. Yet, this does not always need to be the case.
For example, some models might be obtained by extruding
building outlines, resulting in a very coarse model of the
scene geometry. Others might be created by artists by hand,
resulting in visually plausible models with overly simplified
geometry, or with wrong aspect ratios, e.g., a model might
be too high compared to the width of the building.
Naturally, the imperfections listed above negatively af-
fect localization accuracy. The goal of this work is to quan-
tify the relation between model inaccuracy and localiza-
tion accuracy. This will inform AR application designers
which 3D models are likely to provide precise pose esti-
mates. Looking at Fig. 2, humans seem to be able to estab-
lish correspondences between the models, even if the mod-
els are very coarse and untextured. Similarly, humans are
able to point out correspondences between coarse and un-
textured models and real images that can be used for pose
estimation in the context of visual localization. As such, we
expect that it is possible to teach the same to a machine. We
thus hope that this paper will help researchers to develop
algorithms that close the gap between human and machine
performance for this challenging matching task.
In summary, this paper makes the following contribu-
tions: (1) we introduce the challenging and interesting task
of visual localization w.r.t. 3D models downloaded from
the Internet. ( 2) we provide a new benchmark for this
task that includes multiple scenes and 3D models of dif-
ferent levels of fidelity of appearance and geometry. ( 3) we
present detailed experiments evaluating how these different
levels of fidelity affect localization performance. We show
that 3D models from the Internet represent a promising new
category of scene representations. Still, our results show
that there is significant room for improvement, especially
for less realistic models (which often are very compact to
store). ( 4) We make our benchmark publicly available (v-
pnk.github.io/cadloc) to foster research on visual localiza-
tion algorithms capable of handling this challenging task.
|
Man_BEV-Guided_Multi-Modality_Fusion_for_Driving_Perception_CVPR_2023
|
Abstract
Integrating multiple sensors and addressing diverse
tasks in an end-to-end algorithm are challenging yet crit-
ical topics for autonomous driving. To this end, we in-
troduce BEVGuide, a novel Bird’s Eye-View (BEV) repre-
sentation learning framework, representing the first attempt
to unify a wide range of sensors under direct BEV guid-
ance in an end-to-end fashion. Our architecture accepts in-
put from a diverse sensor pool, including but not limited
to Camera, Lidar and Radar sensors, and extracts BEV
feature embeddings using a versatile and general trans-
former backbone. We design a BEV-guided multi-sensor
attention block to take queries from BEV embeddings and
learn the BEV representation from sensor-specific features.
BEVGuide is efficient due to its lightweight backbone de-
sign and highly flexible as it supports almost any input sen-
sor configurations. Extensive experiments demonstrate that
our framework achieves exceptional performance in BEV
perception tasks with a diverse sensor set. Project page is
at https://yunzeman.github.io/BEVGuide.
|
1. Introduction
The recent research in Bird’s Eye-View (BEV) percep-
tion and multi-sensor fusion has stimulated rapid progress
for autonomous driving. The BEV coordinates naturally
unify various downstream object-level and scene-level per-
ception tasks, while joint learning with multiple sensors
minimizes uncertainty, resulting in more robust and accu-
rate predictions. However, existing work still exhibits fun-
damental limitations. On the one hand, fusion strategies
often necessitate explicit space transformations, which can
be ill-posed and prone to errors. On the other hand, existing
techniques utilizing BEV representations rely on ad-hoc de-
signs and support a limited set of sensors ( i.e., cameras and
Lidar). These constraints impede the evolution of a more
general and flexible multi-sensor architecture for BEV 3D
perception, which inspires the design of our work.
More specifically, as different sensors always lie in dif-
ferent coordinate systems, prior approaches usually trans-
form features of each sensor into the same space prior to
Figure 1. BEVGuide takes input from a sensor combination
and learns BEV feature representation using a portable and gen-
eral BEV-guided multi-sensor attention module. In principle,
BEVGuide is able to take a wide variety of sensors and perform
any BEV perception task.
fusion. For example, some work prioritizes one sensor over
another [1, 42, 43]. However, such fusion architectures tend
to be inflexible and heavily reliant on the presence of the
primary sensor – should the primary sensor be unavailable
or malfunction, the entire pipeline collapses. Alternatively,
other work transforms all sensors into the same space (3D
or BEV space) using provided or estimated geometric con-
straints [10, 22, 25]. Such methods usually require an ex-
plicit depth estimation from camera images, which is sus-
ceptible to errors due to the ill-posed nature of the image
modality. Moreover, errors that arise during the transfor-
mation process may propagate into subsequent feature fu-
sion stages, ultimately impacting downstream tasks. Our
approach seeks to streamline this process by employing the
BEV space to directly guide the fusion of multiple sensor
feature maps within their native spaces .
Simultaneously, in addition to camera and Lidar sensors
which bring about rich semantic information and 3D fea-
tures respectively, we emphasize the integration of Radar
sensors , which deliver unique velocity information and ro-
bust signals in extreme weather conditions but have re-
ceived considerably less attention in research compared
with other sensing modalities. Among the limited literature
that involves Radar learning, some work focuses on utiliz-
ing the velocity measurement for prediction [39, 45], while
others treat Radar points as an additional 3D information
source to aid detection and tracking [7,10,31]. Our method
aims to accomplish perception tasks in the BEV space and
analyze the synergy among all three types of sensors.
BEVGuide is a general and flexible multi-modality fu-
sion framework designed for BEV perception. A paradigm
of it is shown in Figure 1. It accommodates any poten-
tial sensor configurations within the sensor pool, including
but not limited to camera, Lidar, and Radar sensors. For
any new sensor, the core fusion block simply requires a
sensor-specific feature embedding, which can be obtained
from any backbone encoder, whether pretrained or yet to be
trained. The fusion block consists of a BEV-guided sensor-
agnostic attention module. We split the BEV space into
small patches with position-aware embeddings, through
which the model queries and fuses all sensor features to
generate a unified BEV representation. By employing po-
sitional encoding to encapsulate geometric constraints, we
avoid error-prone explicit feature space transformations, en-
abling the model to focus on positions of interest across
sensors. Designed in this manner, the core fusion module
is modality-agnostic and can potentially support any sen-
sor configurations in real-world applications. We evaluate
our model in BEV scene segmentation and velocity estima-
tion tasks, where BEVGuide achieves leading results across
various sensor configurations. Moreover, we observe that
BEVGuide exhibits great robustness in different weather
and lighting conditions, facilitated by the inclusion of dif-
ferent sensors.
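A simplified sketch of the BEV-guided, sensor-agnostic attention idea is shown below, assuming each sensor branch already produces flattened token features of a shared dimension; the class and argument names are assumptions and this is not BEVGuide's actual implementation.

```python
import torch
import torch.nn as nn

class BEVGuidedAttention(nn.Module):
    """Sketch of a BEV-guided, sensor-agnostic fusion block: learnable
    position-aware BEV patch embeddings act as queries and cross-attend to
    the concatenated tokens of every available sensor, so no explicit view
    transformation of the sensor features is needed."""

    def __init__(self, dim, num_bev_patches, num_heads=8):
        super().__init__()
        self.bev_queries = nn.Parameter(torch.randn(num_bev_patches, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, sensor_tokens):
        # sensor_tokens: list of (B, N_i, dim) features, one entry per sensor
        tokens = torch.cat(sensor_tokens, dim=1)            # (B, sum N_i, dim)
        q = self.bev_queries.unsqueeze(0).expand(tokens.shape[0], -1, -1)
        bev, _ = self.attn(q, tokens, tokens)                # (B, P, dim) BEV embedding
        return bev
```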
The main contributions of this paper are as follows.
(1) We propose BEVGuide, a comprehensive and versatile
multi-modality fusion architecture designed for BEV per-
ception. (2) We underscore the significance of Radar sen-
sors in velocity flow estimation and BEV perception tasks in
general, offering an insightful analysis in comparison with
camera and Lidar sensors. (3) We present a map-guided
multi-sensor cross-attention learning module that is gen-
eral, sensor-agnostic, and easily extensible. (4) BEVGuide
achieves state-of-the-art performance in various sensor con-
figurations for BEV scene segmentation and velocity flow
estimation tasks. And in principle, BEVGuide is compati-
ble with a wide range of other BEV perception tasks.
|
Messikommer_Data-Driven_Feature_Tracking_for_Event_Cameras_CVPR_2023
|
Abstract
Because of their high temporal resolution, increased re-
silience to motion blur, and very sparse output, event cam-
eras have been shown to be ideal for low-latency and low-
bandwidth feature tracking, even in challenging scenarios.
Existing feature tracking methods for event cameras are
either handcrafted or derived from first principles but re-
quire extensive parameter tuning, are sensitive to noise,
and do not generalize to different scenarios due to unmod-
eled effects. To tackle these deficiencies, we introduce the
first data-driven feature tracker for event cameras, which
leverages low-latency events to track features detected in
a grayscale frame. We achieve robust performance via a
novel frame attention module, which shares information
across feature tracks. By directly transferring zero-shot
from synthetic to real data, our data-driven tracker outper-
forms existing approaches in relative feature age by up to
120 % while also achieving the lowest latency. This perfor-
mance gap is further increased to 130 % by adapting our
tracker to real data with a novel self-supervision strategy.
Multimedia Material: A video is available at https://youtu.be/dtkXvNXcWRY and code at https://github.com/uzh-rpg/deep_ev_tracker
|
1. Introduction
Despite many successful implementations in the real
world, existing feature trackers are still primarily con-
strained by the hardware performance of standard cameras.
To begin with, standard cameras suffer from a bandwidth-
latency trade-off, which noticeably limits their performance
under rapid movements: at low frame rates, they have
minimal bandwidth but at the expense of an increased la-
tency; furthermore, low frame rates lead to large appearance
changes between consecutive frames, significantly increas-
ing the difficulty of tracking features. At high frame rates,
the latency is reduced at the expense of an increased band-
width overhead and power consumption for downstream
*equal contribution.
Figure 1. Our method leverages the high-temporal resolution of
events to provide stable feature tracks in high-speed motion in
which standard frames suffer from motion blur. To achieve this,
we propose a novel frame attention module that combines the in-
formation across feature tracks.
systems. Another problem with standard cameras is mo-
tion blur, which is prominent in high-speed low-lit scenar-
ios, see Fig. 1. These issues are becoming more prominent
with the current commodification of AR/VR devices.
Event cameras have been shown to be an ideal comple-
ment to standard cameras to address the bandwidth-latency
trade-off [16, 17]. Event cameras are bio-inspired vision
sensors that asynchronously trigger information whenever
the brightness change at an individual pixel exceeds a pre-
defined threshold. Due to this unique working principle,
event cameras output sparse event streams with a temporal
resolution in the order of microseconds and feature a high-
dynamic range and low power consumption. Since events
are primarily triggered in correspondence of edges, event
cameras present minimal bandwidth. This makes them ideal
for overcoming the shortcomings of standard cameras.
Existing feature trackers for event cameras have shown
unprecedented results with respect to latency and tracking
robustness in high-speed and high-dynamic range scenar-
ios [4, 17]. Nonetheless, until now, event-based trackers
have been developed based on classical model assumptions,
which typically result in poor tracking performance in the
presence of noise. They either rely on iterative optimiza-
tion of motion parameters [17, 26, 48] or employ a simple
classification for possible translations of a feature [4], and thus
do not generalize to different scenarios due to unmodeled
effects. Moreover, they usually feature complex model pa-
rameters, requiring extensive manual hand-tuning to adapt
to different event cameras and new scenes.
To tackle these deficiencies, we propose the first data-
driven feature tracker for event cameras, which leverages
the high-temporal resolution of event cameras in combi-
nation with standard frames to maximize tracking perfor-
mance. Using a neural network, our method tracks features
by localizing a template patch from a grayscale image in
subsequent event patches. The network architecture fea-
tures a correlation volume for the assignment and employs
recurrent layers for long-term consistency. To increase the
tracking performance, we introduce a novel frame attention
module, which shares information across feature tracks in
one image. We first train on a synthetic optical flow dataset
and then finetune it with our novel self-supervision scheme
based on 3D point triangulation using camera poses.
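To illustrate the correlation-based assignment at the heart of such a tracker, the sketch below correlates a template descriptor with an event-patch feature map and reads off a displacement with a soft-argmax; it is a deliberate simplification of the actual recurrent architecture, with illustrative names.

```python
import torch
import torch.nn.functional as F

def soft_argmax_displacement(template_feat, event_feats):
    """Sketch of correlation-based localization: correlate a template
    descriptor with every location of an event-patch feature map and take
    a soft-argmax over the correlation map to get a sub-pixel displacement.

    template_feat: (B, C) descriptor of the grayscale template patch
    event_feats:   (B, C, H, W) feature map of the current event patch"""
    B, C, H, W = event_feats.shape
    corr = (event_feats * template_feat[:, :, None, None]).sum(1)   # (B, H, W)
    prob = F.softmax(corr.view(B, -1), dim=-1).view(B, H, W)
    ys = torch.linspace(-1, 1, H, device=corr.device)
    xs = torch.linspace(-1, 1, W, device=corr.device)
    dy = (prob.sum(2) * ys).sum(1)     # expected y offset in [-1, 1]
    dx = (prob.sum(1) * xs).sum(1)     # expected x offset in [-1, 1]
    return torch.stack([dx, dy], dim=-1)
```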
Our tracker outperforms state-of-the-art baselines by up
to 5.5% and 130.2% on the Event Camera Dataset bench-
mark [32] and the recently published EDS dataset [22], re-
spectively. This performance is achieved without requir-
ing extensive manual hand-tuning of parameters. Moreover,
without optimizing the code for deployment, our method
achieves faster inference than existing methods. Finally, we
show how the combination of our method with the well-
established frame-based tracker KLT [30] leverages the best
of both worlds for high-speed scenarios. This combination
of standard and event cameras paves the path for the concept
of sparingly triggering frames based on the tracking quality,
which is a critical tool for future applications where runtime
and power consumption are essential.
|
Li_ProxyFormer_Proxy_Alignment_Assisted_Point_Cloud_Completion_With_Missing_Part_CVPR_2023
|
Abstract
Problems such as equipment defects or limited view-
points will lead the captured point clouds to be incomplete.
Therefore, recovering the complete point clouds from the
partial ones plays a vital role in many practical tasks, and
one of the keys lies in the prediction of the missing part. In
this paper, we propose a novel point cloud completion ap-
proach namely ProxyFormer that divides point clouds into
existing (input) and missing (to be predicted) parts and each
part communicates information through its proxies. Specif-
ically, we fuse information into point proxy via feature and
position extractor, and generate features for missing point
proxies from the features of existing point proxies. Then,
in order to better perceive the position of missing points,
we design a missing part sensitive transformer, which con-
verts random normal distribution into reasonable position
information, and uses proxy alignment to refine the missing
proxies. It makes the predicted point proxies more sensitive
to the features and positions of the missing part, and thus
makes these proxies more suitable for subsequent coarse-to-
fine processes. Experimental results show that our method
outperforms state-of-the-art completion networks on sev-
eral benchmark datasets and has the fastest inference speed.
|
1. Introduction
3D data is used in many different fields, including
autonomous driving, robotics, remote sensing, and more
[5, 12, 14, 17, 43]. Point cloud has a very uniform struc-
ture, which avoids the irregularity and complexity of com-
position. However, in practical applications, due to the oc-
clusion of objects, the difference in the reflectivity of the
target surface material, and the limitation of the resolution
and viewing angle of the visual sensor, the collected point
cloud data is often incomplete. The resultant missing geo-
metric and semantic information will affect the subsequent
3D tasks [26]. Therefore, how to use a limited amount of in-
*Corresponding author.
Figure 1. Visual comparison of point cloud completion results.
Compared with GRNet [38] and PoinTr [41]. ProxyFormer com-
pletely retains the partial input (blue bounding box) and restores
the missing part with details (purple bounding box).
complete data to complete point cloud and restore the orig-
inal shape has become a hot research topic, and is of great
significance to downstream tasks [3, 4, 10, 19, 34, 39].
With the great success of PointNet [23] and PointNet++
[24], direct processing of 3D coordinates has become a
mainstream method for point cloud analysis. In recent
years, there have been many point cloud completion meth-
ods [1, 11, 37, 38, 41, 42, 48], and the emergence of these
networks has also greatly promoted the development of this
area. Many methods [1,38,42] adopt the common encoder-
decoder structure, which usually gets a global feature from the
incomplete input by a pooling operation and maps this feature
back to the point space to obtain a complete one. This kind
of feature can predict the approximate shape of the complete
point cloud. However, there are two drawbacks: (1) The
global feature is extracted from the partial input and thus lacks
the ability to represent the details of the missing part; (2)
These methods discard the original incomplete point cloud
and regenerate the complete shape after extracting features,
which will cause the shape of the original part to deform
to a certain extent. Methods like [11, 41] attempt to predict
the missing part separately, but they do not consider the fea-
ture connection between the existing and the missing parts,
which are still not ideal solutions to the first drawback. The
results of GRNet [38] and PoinTr [41] in Fig. 1 illustrate
the existence of these problems. GRNet failed to keep the
ring on the light stand while PoinTr incorrectly predicted
the straight edge of the lampshade as a curved edge. Be-
sides, some methods [16, 37, 41, 48] are based on the trans-
former structure and use the attention mechanism for fea-
ture correlation calculation. However, this also brings up
two other problems: (3) In addition to the feature, the po-
sition encoding also has a great influence on the effect of
the transformer. Existing transformer-based methods, ei-
ther directly using 3D coordinates [9, 48] or using MLP to
upscale the coordinates [37,41], the position information of
the point cloud cannot be well represented; (4) It also leads
to the problem of excessive parameters or calculation. Fur-
thermore, we also note that most of the current supervised
methods do not make full use of the known data. During the
training process, the point cloud data we can obtain includes
incomplete input and Ground Truth (GT). This pair of data
can indeed support the point cloud completion task well,
but in fact we can derive a third piece of data from these two,
namely the missing part of the point cloud, so as to increase
our prior knowledge.
In order to solve the above-mentioned problems, we pro-
pose a novel point cloud completion network dubbed Prox-
yFormer , which completely preserves the incomplete input
and has better detail recovery capability as shown in Fig.
1. Firstly, we design a feature and position extractor to
convert the point cloud to proxies, with a particular atten-
tion to the representation of point position. Then, we let
the proxies of the partial input interact with the generated
missing part proxies through a newly proposed missing part
sensitive transformer, instead of using the global feature ex-
tracted from incomplete input alone as in prior methods.
After mapping proxies back to the point space, we splice it
with the incomplete input points to 100% preserve the orig-
inal data. During training, we use the true missing part of
the point cloud to increase prior knowledge and for predic-
tion error refinement. Overall, the main contributions of our
work are as follows:
• We design a Missing Part Sensitive Transformer,
which focuses on the geometry structure and details
of the missing part. We also propose a new position
encoding method that aggregates both the coordinates
and features from neighboring points.
• We introduce Proxy Alignment into the training pro-
cess. We convert the true missing part into proxies,
which are used to enhance the prior knowledge while
refining the predicted missing proxies.
• Our proposed method ProxyFormer discards the transformer decoder adopted in most transformer-based
completion methods such as PoinTr, and achieves
SOTA performance compared to various baselines
while having considerably fewer parameters and the
fastest inference speed in terms of GFLOPs.
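As a purely illustrative follow-up to the contributions above, the sketch below shows one way a position encoding can aggregate both the coordinates and the features of neighboring points around each proxy; it is an assumption-laden simplification, not ProxyFormer's exact layer.

```python
import torch
import torch.nn as nn

class NeighborPositionEncoding(nn.Module):
    """Sketch of a position encoding that pools the relative offsets and
    features of the k nearest points around each proxy center through a
    small shared MLP (illustrative only)."""

    def __init__(self, feat_dim, out_dim, k=8):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim))

    def forward(self, centers, points, feats):
        # centers: (M, 3) proxy coordinates; points: (N, 3); feats: (N, C)
        k = min(self.k, points.shape[0])
        dists = torch.cdist(centers, points)                 # (M, N)
        idx = dists.topk(k, largest=False).indices           # (M, k) nearest neighbors
        rel = points[idx] - centers.unsqueeze(1)              # (M, k, 3) relative offsets
        neigh = torch.cat([rel, feats[idx]], dim=-1)           # (M, k, 3 + C)
        return self.mlp(neigh).max(dim=1).values               # (M, out_dim) pooled encoding
```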
|
Li_Task-Specific_Fine-Tuning_via_Variational_Information_Bottleneck_for_Weakly-Supervised_Pathology_Whole_CVPR_2023
|
Abstract
While Multiple Instance Learning (MIL) has shown
promising results in digital Pathology Whole Slide Image
(WSI) analysis, such a paradigm still faces performance
and generalization problems due to high computational
costs and limited supervision of Gigapixel WSIs. To deal
with the computation problem, previous methods utilize a
frozen model pretrained from ImageNet to obtain repre-
sentations, however, it may lose key information owing to
the large domain gap and hinder the generalization ability
without image-level training-time augmentation. Though
Self-supervised Learning (SSL) proposes viable represen-
tation learning schemes, the downstream task-specific fea-
tures via partial label tuning are not explored. To alleviate
this problem, we propose an efficient WSI fine-tuning frame-
work motivated by the Information Bottleneck theory. The
theory enables the framework to find the minimal sufficient
statistics of WSI, thus supporting us to fine-tune the back-
bone into a task-specific representation only depending on
WSI-level weak labels. The WSI-MIL problem is further
analyzed to theoretically deduce our fine-tuning method.
We evaluate the method on five pathological WSI datasets
on various WSI heads. The experimental results show sig-
nificant improvements in both accuracy and generalization
compared with previous works. Source code will be avail-
able at https://github.com/invoker-LL/WSI-finetuning.
|
1. Introduction
Digital Pathology or microscopic images have been
widely used for the diagnosis of cancers such as Breast
Cancer [13] and Prostate Cancer [6]. However, the reading
of Whole Slide Images (WSIs) with gigapixel resolution is
*Corresponding author.
(Panels and v-scores: a. ImageNet-1k, 0.285; b. Full Supervision, 0.745; c. Self-supervised, 0.394; d. IB Fine-tuning (ours), 0.538.)
Figure 1. T-SNE visualization of different representations on
patches. Our method converts chaotic ImageNet-1K and SSL fea-
tures into a more task-specific and separable distribution. The
cluster evaluation measurement, v-scores, show weakly super-
vised fine-tuned features are closer to full supervision com-
pared to others. a. ImageNet-1k pretraining. b. Full patch supervi-
sion. c. Self-supervised Learning. d. Fine-tuning with WSI labels.
time-consuming which poses an urgent need for automatic
computer-assisted diagnosis. Though computers can boost
the speed of the diagnosis process, the enormous size of res-
olution, over 100M [45], makes it infeasible to acquire pre-
cise and exhaustive annotations for model training, let alone
the current hardware can hardly support the parallel train-
ing on all patches of a WSI. Hence, an annotation-efficient
learning scheme with light computation is increasingly de-
sirable to cope with those problems. In pathology WSI anal-
ysis, the heavy annotation cost is usually alleviated by Mul-
tiple Instance Learning (MIL) with only WSI-level weak
supervision, which makes a comprehensive decision on a
series of instances as a bag sample [19, 31]. Intuitively, all
small patches in the WSI are regarded as instances to con-
stitute a bag sample [7, 30, 38], where the WSI’s category
corresponds to the max lesion level of all patch instances.
However, most methods pay much effort to design WSI
architectures while overlooking the instance-level represen-
tation ability. Because of the computational limitation, the
gradient at the WSI level cannot be back-propagated in
parallel to instance encoders over the more than 10k in-
stances of a bag. Thus, parameters of the pretrained back-
bone from ImageNet-1k (IN-1K) are frozen to obtain in-
variant embeddings. Due to the large domain gap between
IN-1K and pathological images, some essential informa-
tion may be discarded by layers of frozen convolutional fil-
ters, which constrains the accuracy of previous WSI meth-
ods. To address the constraint, recent works [9, 25] make
efforts to learn a good feature representation at the patch-
level by leveraging Self-supervised Learning (SSL). How-
ever, such task-agnostic features are dominated by the proxy
objective of SSL, e.g., Contrastive Learning in [8, 11, 17]
may push apart two instances within
the same category, and thus only performs slightly better than
IN-1K pretraining in WSI classification. Nearly all SSL
methods [8, 11, 16, 17] proposed for natural image recogni-
tion utilize a small portion of annotations to reach promising
fine-tuning accuracy relative to full supervision, which is
higher than Linear Probing [17] by a large margin.
These findings inspire us to design a fine-tuning
scheme for WSI analysis to convert IN-1K or SSL task-
agnostic representations into task-specifics. Motivated by
the Information Bottleneck (IB) theory [1, 2], we argue
that pretrained representations are limited for downstream tasks; therefore
fine-tuning is necessary for WSI analysis. In addition,
we develop a solution based on Variational IB to tackle
the dilemma of fine-tuning and computational limitation
by its minimal sufficient statistics and attribution proper-
ties [1, 23]. The differences among the above three feature
representations are depicted in Figure 1, where the feature
representation under full patch-level supervision is consid-
ered as the upper bound.
Our main contributions are threefold: 1) We propose a
simple agent task of WSI-MIL by introducing an IB module
that distills over 10k redundant instances within a bag into
less than 1k of the most supported instances. Thus the paral-
lel computation cost of gradient-based training on Gigapixel
Images is reduced more than tenfold. By learning and mak-
ing classification on the simplified bag, we find that the
information loss is negligible due to the low-rank property
of pathological WSI, and the distilled bag makes it possi-
ble to train a WSI-MIL model with the feature extractor on
patches end-to-end, thus boosting the final performance. 2)
We argue that the performance can be further improved by
combining with the SSL pretraining since we could convert
the task-agnostic representation from SSL into a task-specific one by well-designed fine-tuning. The proposed framework
only relies on annotations at WSI levels, which is similar to
recent SSL approaches [8,11,16,17]. Note that our method
only utilizes less than a 1% fraction of full patch annotation
to achieve competitive accuracy compared to counterparts.
3) Versatile training-time augmentations can be incorpo-
rated with our proposed fine-tuning scheme, thus resulting
in better generalization in various real-world or simulated
datasets with domain shift, which previous works ignore
to validate. These empirical results show that our method
advances accuracy and generalization simultaneously, and
thus would be more practical for real-world applications.
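For intuition only, the sketch below shows one way an IB-style scorer could distill a huge bag into its most supportive instances so that the patch encoder can be fine-tuned end-to-end; the top-k selection and sigmoid gating are our own simplifications and do not reproduce the paper's variational IB objective.

```python
import torch
import torch.nn as nn

class InstanceDistiller(nn.Module):
    """Sketch of IB-style bag distillation: a lightweight scorer ranks all
    patch embeddings of a slide and only the top-k most supportive
    instances are kept, so the patch encoder and the WSI head can be
    trained jointly at a fraction of the memory cost (illustrative only)."""

    def __init__(self, dim, keep=1024):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.keep = keep

    def forward(self, instance_feats):            # (N, dim), N can exceed 10k
        scores = self.scorer(instance_feats).squeeze(-1)
        k = min(self.keep, instance_feats.shape[0])
        idx = scores.topk(k).indices
        # gating by the (sigmoid) scores keeps the selection differentiable w.r.t. the scorer
        return instance_feats[idx] * torch.sigmoid(scores[idx]).unsqueeze(-1)
```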
|
Piccinelli_iDisc_Internal_Discretization_for_Monocular_Depth_Estimation_CVPR_2023
|
Abstract
Monocular depth estimation is fundamental for 3D scene
understanding and downstream applications. However, even
under the supervised setup, it is still challenging and ill-
posed due to the lack of full geometric constraints. Although
a scene can consist of millions of pixels, there are fewer
high-level patterns. We propose iDisc to learn those patterns
with internal discretized representations. The method im-
plicitly partitions the scene into a set of high-level patterns.
In particular, our new module, Internal Discretization (ID),
implements a continuous-discrete-continuous bottleneck to
learn those concepts without supervision. In contrast to
state-of-the-art methods, the proposed model does not en-
force any explicit constraints or priors on the depth output.
The whole network with the ID module can be trained end-
to-end, thanks to the bottleneck module based on attention.
Our method sets the new state of the art with significant
improvements on NYU-Depth v2 and KITTI, outperform-
ing all published methods on the official KITTI benchmark.
iDisc can also achieve state-of-the-art results on surface
normal estimation. Further, we explore the model gener-
alization capability via zero-shot testing. We observe the
compelling need to promote diversification in the outdoor
scenario. Hence, we introduce splits of two autonomous
driving datasets, DDAD and Argoverse. Code is available
at http://vis.xyz/pub/idisc.
|
1. Introduction
Depth estimation is essential in computer vision, espe-
cially for understanding geometric relations in a scene. This
task consists in predicting the distance between the projec-
tion center and the 3D point corresponding to each pixel.
Depth estimation finds direct significance in downstream
applications such as 3D modeling, robotics, and autonomous
cars. Some research [62] shows that depth estimation is
a crucial prompt to be leveraged for action reasoning and
execution. In particular, we tackle the task of monocular
depth estimation (MDE). MDE is an ill-posed problem due
to its inherent scale ambiguity: the same 2D input image can
correspond to an infinite number of 3D scenes.
(a) Input image
(b) Output depth
(c) Intermediate representations
(d) Internal discretization
Figure 1. We propose iDisc which implicitly enforces an internal
discretization of the scene via a continuous-discrete-continuous
bottleneck. Supervision is applied to the output depth only, i.e., the
fused intermediate representations in (c), while the internal discrete
representations are implicitly learned by the model. (d) displays
some actual internal discretization patterns captured from the input,
e.g., foreground, object relationships, and 3D planes. Our iDisc
model is able to predict high-quality depth maps by capturing scene
interactions and structure.
State-of-the-art (SotA) methods typically involve convo-
lutional networks [12, 13, 24] or, since the advent of vision
Transformer [11], transformer architectures [3, 41, 54, 59].
Most methods either impose geometric constraints on the
image [22, 33, 38, 55], namely, planarity priors or explicitly
discretize the continuous depth range [3,4,13]. The latter can
be viewed as learning frontoparallel planes. These imposed
priors inherently limit the expressiveness of the respective
models, as they cannot model arbitrary depth patterns, ubiq-
uitous in real-world scenes.
We instead propose a more general depth estimation
model, called iDisc, which does not explicitly impose any
constraint on the final prediction. We design an Internal
Discretization (ID) of the scene which is in principle depth-
agnostic. Our assumption behind this ID is that each scene
can be implicitly described by a set of concepts or patterns,
such as objects, planes, edges, and perspectivity relation-
ships. The specific training signal determines which patterns
to learn (see Fig. 1).
We design a continuous-to-discrete bottleneck through
which the information is passed in order to obtain such inter-
nal scene discretization, namely the underlying patterns. In
the bottleneck, the scene feature space is partitioned via
learnable and input-dependent quantizers, which in turn
transfer the information onto the continuous output space.
The ID bottleneck introduced in this work is a general con-
cept and can be implemented in several ways. Our partic-
ular ID implementation employs attention-based operators,
leading to an end-to-end trainable architecture and input-
dependent framework. More specifically, we implement
the continuous-to-discrete operation via “transposed” cross-
attention, where transposed refers to applying softmax on
the output dimension. This softmax formulation enforces
the input features to be routed to the internal discrete rep-
resentations (IDRs) in an exclusive fashion, thus defining
an input-dependent soft clustering of the feature space. The
discrete-to-continuous transformation is implemented via
cross-attention. Supervision is only applied to the final out-
put, without any assumptions or regularization on the IDRs.
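A rough sketch of the continuous-to-discrete step: attention logits between IDR queries and pixel features are normalized over the IDR (output) dimension rather than over the pixels, which is the "transposed" softmax described above. The row renormalization and names below are assumptions of this sketch, not the paper's exact operator.

```python
import torch
import torch.nn.functional as F

def transposed_cross_attention(idrs, pixel_feats, scale=None):
    """Sketch of the continuous-to-discrete step: every pixel distributes
    its mass across the internal discrete representations (softmax over the
    IDR dimension), and each IDR is then updated as the weighted mean of
    the pixels routed to it.

    idrs:        (M, D) internal discrete representations (queries)
    pixel_feats: (N, D) continuous image features (keys/values)"""
    scale = scale or idrs.shape[-1] ** 0.5
    logits = idrs @ pixel_feats.t() / scale           # (M, N)
    routing = F.softmax(logits, dim=0)                # softmax over the M IDRs
    routing = routing / (routing.sum(dim=1, keepdim=True) + 1e-8)
    return routing @ pixel_feats                      # (M, D) updated IDRs
```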
We test iDisc on multiple indoor and outdoor datasets
and probe its robustness via zero-shot testing. As of to-
day, there is too little variety in MDE benchmarks for the
outdoor scenario, since the only established benchmark is
KITTI [17]. Moreover, we observe that all methods fail on
outdoor zero-shot testing, suggesting that the KITTI dataset
is not diverse enough and leads to overfitting, thus implying
that it is not indicative of generalized performance. Hence,
we find it compelling to establish a new benchmark setup
for the MDE community by proposing two new train-test
splits of more diverse and challenging high-quality outdoor
datasets: Argoverse1.1 [8] and DDAD [18].
Our main contributions are as follows: (i) we introduce
the Internal Discretization module, a novel architectural com-
ponent that adeptly represents a scene by combining under-
lying patterns; (ii) we show that it is a generalization of SotA
methods involving depth ordinal regression [3, 13]; (iii) we
propose splits of two raw outdoor datasets [8, 18] with high-
quality LiDAR measurements. We extensively test iDisc on
six diverse datasets and, owing to the ID design, our model
consistently outperforms SotA methods and presents better
transferability. Moreover, we apply iDisc to surface nor-
mal estimation showing that the proposed module is general
enough to tackle generic real-valued dense prediction tasks.
|
Pajouheshgar_DyNCA_Real-Time_Dynamic_Texture_Synthesis_Using_Neural_Cellular_Automata_CVPR_2023
|
Abstract
Current Dynamic Texture Synthesis (DyTS) models can
synthesize realistic videos. However, they require a slow it-
erative optimization process to synthesize a single fixed-size
short video, and they do not offer any post-training control
over the synthesis process. We propose Dynamic Neural
Cellular Automata (DyNCA), a framework for real-time
and controllable dynamic texture synthesis. Our method
is built upon the recently introduced NCA models and can
synthesize infinitely long and arbitrary-sized realistic video
textures in real time. We quantitatively and qualitatively
evaluate our model and show that our synthesized videos
appear more realistic than the existing results. We improve
the SOTA DyTS performance by 2∼4 orders of magni-
tude. Moreover, our model offers several real-time video
controls including motion speed, motion direction, and an
editing brush tool. We exhibit our trained models in an on-
line interactive demo that runs on local hardware and is
accessible on personal computers and smartphones.
|
1. Introduction
Textures are everywhere. We perceive them as spa-
tially repetitive patterns. Dynamic Textures are textures that
change over time and induce a sense of motion. Flames,
sea waves, and fluttering branches are everyday examples.
Understanding and computationally modeling dynamic tex-
tures is an intriguing problem, as these patterns are observed
in most natural scenes.
The goal of Dynamic Texture Synthesis (DyTS) [5–8, 10, 12, 23, 24, 27, 29, 32] is to generate perceptually-equivalent samples of an exemplar video texture3. Ap-
plications of DyTS include the creation of special effects
for backdrops and video games [21], dynamic style trans-
fer [24], and creating cinemagraphs [12].
The state-of-the-art (SOTA) dynamic texture synthesis
1We use the same flow visualization as Baker et al. [2].
2Link to the demo: https://dynca.github.io
3We use ”video texture” and ”dynamic texture” interchangeably.
Figure 2. Overview of DyNCA. Starting from a seed state, DyNCA iteratively updates it, generating an image sequence. We extract images from this sequence and compare them with an appearance target as well as a motion target to obtain the DyNCA training objectives. After training, DyNCA can adapt to seeds of different heights and widths, and synthesize videos with arbitrary lengths. Sequentially applying DyNCA updates on the seed synthesizes dynamic texture videos in real time.
methods [8, 24, 27, 29, 32] follow the same trend. They aim
to find better optimization strategies to iteratively update a
randomly initialized video until the synthesized video re-
sembles the exemplar dynamic texture. Although the SOTA
methods are able to synthesize acceptable quality videos,
their optimization process is very slow and requires several
hours to generate a single fixed-resolution short video on a
high-end GPU. Moreover, these methods do not offer any
post-training control over the synthesized video.
In this paper we propose Dynamic Neural Cellular
Automata ( DyNCA ), a model for dynamic texture synthe-
sis that is fast to train, and once trained, can synthesize
infinitely-long, arbitrary-resolution dynamic texture videos
in real time on a low-end GPU. Moreover, our method en-
ables several real-time video editing controls, including mo-
tion direction and motion speed. Through quantitative and
qualitative analysis we demonstrate that our synthesized
videos achieve better quality and realism than the previous
results in the literature.
Our model builds upon the recently introduced Neural
Cellular Automata (NCA) model [18,19]. While Niklasson
et al. [19] train the NCA with the goal of synthesizing static
textures only, the NCA model is able to spontaneously gen-
erate randomly moving patterns. As an inherent property
of NCA, these spontaneous motions are, however, unstruc-
tured and uncontrolled. We modify the architecture and the
training scheme of NCA so that it can learn to synthesize
video textures that have the desired motion and appearance.
In short, our DyNCA model acts as a stochastic Partial
Differential Equation (PDE), parameterized by a small neu-
ral network. DyNCA starts from a constant initial state
called seed, and then iteratively evolves the state accord-
ing to its trainable PDE update rule to generate a sequence
of images. This image sequence is then evaluated against
the appearance exemplar and the motion target to calculate
the loss functions for the optimization of DyNCA, as illus-
trated in Figure 2. We allow the user to specify the desiredmotion either by a motion vector field or an exemplar dy-
namic texture video. Moreover, by using a different target
for the appearance and the motion, our model can perform
dynamic style transfer, as shown in Figure 1. Our contribu-
tions summarized are:
• Our DyNCA model, once trained, can synthesize dy-
namic texture videos in real time.
• Our synthesized videos are on-par with or even better
than the existing results in terms of realism and quality.
• After training, our DyNCA model can synthesize
infinitely-long videos with arbitrary frame sizes.
• Our DyNCA framework enables several real-time in-
teractive video editing controls including speed, direc-
tion, a brush tool, and local coordinate transformation.
• We can perform real-time dynamic style transfer by
learning appearance and motion from distinct sources.
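To make the seed-and-update loop described above concrete, the following is a minimal sketch of a generic NCA-style rollout; the update network, channel count, and step counts are placeholders chosen for illustration, not DyNCA's actual architecture or training setup.

```python
import torch
import torch.nn as nn

class TinyNCA(nn.Module):
    """A generic NCA-style update rule: perceive a local neighbourhood, predict a residual."""
    def __init__(self, channels=12):
        super().__init__()
        self.perceive = nn.Conv2d(channels, 3 * channels, kernel_size=3, padding=1)
        self.update = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, state):
        return state + self.update(torch.relu(self.perceive(state)))

def rollout(nca, seed, n_frames, steps_per_frame):
    """Iteratively evolve the seed state; the first three channels are read out as RGB frames."""
    state, frames = seed, []
    for _ in range(n_frames):
        for _ in range(steps_per_frame):
            state = nca(state)
        frames.append(state[:, :3].clamp(0, 1))
    return torch.stack(frames, dim=1)  # (B, n_frames, 3, H, W)

# Example: evolve a 64x64 constant seed into 8 frames; any seed size works at test time.
video = rollout(TinyNCA(), torch.zeros(1, 12, 64, 64), n_frames=8, steps_per_frame=32)
```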
|
Naeem_I2MVFormer_Large_Language_Model_Generated_Multi-View_Document_Supervision_for_Zero-Shot_CVPR_2023
|
Abstract
Recent works have shown that unstructured text (doc-
uments) from online sources can serve as useful auxiliary
information for zero-shot image classification. However,
these methods require access to a high-quality source like
Wikipedia and are limited to a single source of information.
Large Language Models (LLM) trained on web-scale text
show impressive abilities to repurpose their learned knowl-
edge for a multitude of tasks. In this work, we provide a
novel perspective on using an LLM to provide text supervi-
sion for a zero-shot image classification model. The LLM is
provided with a few text descriptions from different annota-
tors as examples. The LLM is conditioned on these exam-
ples to generate multiple text descriptions for each class (re-
ferred to as views). Our proposed model, I2MVFormer,
learns multi-view semantic embeddings for zero-shot image
classification with these class views. We show that each text
view of a class provides complementary information allow-
ing a model to learn a highly discriminative class embed-
ding. Moreover, we show that I2MVFormer is better at con-
suming the multi-view text supervision from LLM compared
to baseline models. I2MVFormer establishes a new state-of-
the-art on three public benchmark datasets for zero-shot im-
age classification with unsupervised semantic embeddings.
Code available at https://github.com/ferjad/I2DFormer
|
1. Introduction
In Zero-Shot Learning (ZSL), we task an image classifi-
cation model trained on a set of seen classes to generalize
to a disjoint set of unseen classes using shared auxiliary in-
formation. While there has been great progress made in the
field, most works treat the auxiliary information to be fixed
to a set of human-labeled attributes [16, 37, 48, 53]. While
powerful, these attributes are hard to annotate and expen-
sive to scale [46,58]. Unsupervised alternatives to attributes
rely on pretrained word embeddings which provide limited
1First and second author contributed equally.
Figure 1. Different annotators focus on different attributes when describing a class. Large Language Models prompted with each of these annotations as k-shot examples can reveal complementary information about a class for zero-shot image classification. We refer to multiple LLM-generated descriptions as views of a class.
information about a class. Recent works [7,20,35,39] show
that text documents from internet sources like Wikipedia
can provide great auxiliary information for ZSL. Since these
web documents describe a queried class in detail, they pro-
vide more information for the ZSL model compared to word
embeddings. However, these methods only rely on a single
source of text documents like Wikipedia, which might not
sufficiently represent all classes a model is faced with. Mul-
tiple sources of text documents of a class can provide com-
plementary information for the ZSL model. For example,
in the case of birds, one source might focus more on the
patterns of the feather, while another source might better
describe the belly and the face of the bird. However, find-
ing multiple good sources of text documents for each class
requires additional annotation effort.
Large Language Models (LLM) [5, 12, 63] trained on
web-scale text have shown impressive abilities of using
their learned information to solve a multitude of tasks.
These models can be conditioned with a k-shot prompt to
generalize to a wide set of applications [5, 31, 60] using
knowledge from multiple sources they were trained on. In
this work, we aim to generate multiple text descriptions of a
class, that we recall as “views” hereinafter, with an LLM us-
ing a k-shot prompting strategy. We show that the LLM can
act as a mixture of annotators conditioned on different anno-
tation styles to generate complementary information about a
class. Moreover, we propose a novel model, I2MVFormer,
which utilizes our memory-efficient summary modules to
extract discriminative information from each view of a class
with the aim of learning a multi-view class embedding.
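As a loose sketch of the k-shot prompting idea, the snippet below builds one prompt per annotator and queries a generic text-completion function; the prompt wording and the llm_generate callable are placeholders, not the exact prompt or API used in the paper.

```python
def build_kshot_prompt(examples, query_class):
    """Assemble a k-shot prompt from (class name, description) pairs written by one annotator."""
    lines = [f"Describe {name}?\n{desc}\n" for name, desc in examples]
    lines.append(f"Describe {query_class}?\n")
    return "\n".join(lines)

def generate_views(llm_generate, annotator_examples, query_class):
    """One view per annotator: each prompt conditions the LLM on a different annotation style."""
    return [llm_generate(build_kshot_prompt(ex, query_class)) for ex in annotator_examples]

# Usage with any prompt -> text function (e.g. a hosted LLM endpoint):
# views = generate_views(llm_generate,
#                        [[("Mallard", "Mallard's beak is yellow in color ...")],
#                         [("House Wren", "House wren's underpart color ranges ...")]],
#                        "Blue Jay")
```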
Our contributions in this work are as follows. 1) We pro-
vide the first study into using an LLM to generate auxil-
iary information for zero-shot image classification. More-
over, we propose a prompting strategy to extract multiple
descriptions from an LLM that reveal complementary in-
formation about a class. 2) We propose I2MVFormer, a
novel transformer-based model for zero-shot image classi-
fication which exploits multiple complementary sources of
text supervision to learn a class embedding. I2MVFormer
utilizes our Single-View Summary (SVSummary) module
to extract rich discriminative information from each class
view. This information is utilized by our Multi-View
Summary (MVSummary) module to represent a class-level
set of tokens from multiple views. The multi-view to-
kens are aligned with the image to maximize global and
local compatibility between the images and the multiple
views. 3) Our I2MVFormer achieves significant perfor-
mance gains to establish a new state-of-the-art (SOTA)
in unsupervised class embeddings in ZSL on three public
benchmarks AWA2 [22], CUB [48] and FLO [36].
|
Shetty_PLIKS_A_Pseudo-Linear_Inverse_Kinematic_Solver_for_3D_Human_Body_CVPR_2023
|
Abstract
We introduce PLIKS (Pseudo-Linear Inverse Kinematic
Solver) for reconstruction of a 3D mesh of the human body
from a single 2D image. Current techniques directly regress
the shape, pose, and translation of a parametric model from
an input image through a non-linear mapping with mini-
mal flexibility to any external influences. We approach the
task as a model-in-the-loop optimization problem. PLIKS
is built on a linearized formulation of the parametric SMPL
model. Using PLIKS, we can analytically reconstruct the
human model via 2D pixel-aligned vertices. This enables
us with the flexibility to use accurate camera calibration
information when available. PLIKS offers an easy way to
introduce additional constraints such as shape and transla-
tion. We present quantitative evaluations which confirm that
PLIKS achieves more accurate reconstruction with greater
than 10% improvement compared to other state-of-the-art
methods with respect to the standard 3D human pose and
shape benchmarks while also obtaining a reconstruction er-
ror improvement of 12.9 mm on the newer AGORA dataset.
|
1. Introduction
Estimating human surface meshes and poses from single
images is one of the core research directions in computer vi-
sion, allowing for multiple applications in computer graph-
ics, robotics and augmented reality [14, 46]. Since humans
have complex body articulations and the scene parameters
are typically unknown, we are essentially dealing with an
ill-posed problem that is difficult to solve in general.
Thanks to models such as SMPL [35] and SMPL-X [44], additional constraints on body shape and pose became available, making the problem somewhat more tractable.
Most state-of-the-art methods [19, 21, 24, 27, 52] directly
regress the shape and pose parameters from a given input
image. These approaches rely completely on neural net-
Figure 1. The network predicts a pixel-aligned vertex map (u, v, d), which is used to obtain an initial pose estimate. A closed-form solution then solves the inverse kinematics between the 2D pixel-aligned vertex map (u, v) and a pseudo-parametric model, given the detected bounding-box camera intrinsics and the initial pose estimate.
works, while making several assumptions about the image
generation process. One typical assumption is the use of a
simplified camera model such as the weak perspective cam-
era. In this scenario, the camera is assumed to be far away
from the subject, which is generally realized by setting a
large focal length constant for all images. A weak perspec-
tive camera can be described based on three parameters, two
with respect to translation in the horizontal and vertical di-
rections, and the third being scale. While these methods can
estimate plausible shape and pose parameters, it can happen
that the resulting meshes are either misaligned in the 2D
image space or in the 3D object space. This is because the
underlying optimization problem is often not constrained
enough such that it is difficult for the underlying networks
to optimize between the 2D re-projection loss and the 3D
loss.
Some existing methods [23, 25, 31] propose a
workaround by tackling the problem using a hybrid
approach involving learning-based and optimization-based
techniques while incorporating a full perspective cam-
era [23]. Optimization-based approaches are, however,
prone to local minima, and they are computationally
expensive. In [25], the authors propose to regress the
SMPL parameters by conditioning on features from a
CamCalib network meant to predict the camera parameters.
Unfortunately, this camera prediction network needs a
specialized dataset to train on, which is very hard to acquire
in practice. It also prevents end-to-end learning.
On the other hand, recent non-parametric or model-free
approaches [33, 40] directly regress the mesh vertex coor-
dinates based on their 2D projections, aligning well to the
input image. However, by ignoring the effects of a perspec-
tive camera, even these methods suffer from the same limi-
tations as the parametric models.
Motivated by the above observations, we present a novel
approach, named PLIKS, for 3D human shape and pose
estimation that incorporates the perspective camera while
analytically solving for all the parameters of the paramet-
ric model. The pipeline of our approach comprises of two
modules, namely the mesh regressor and PLIKS. The mesh
regressor provides a mapping between an image and the
3D vertices of the SMPL model. Given a single image,
any off-the-shelf Convolution Neural Network (CNN) can
be used for feature extraction. The extracted features can
then be used to obtain a mesh representation either by us-
ing 1D CNNs [40], GraphCNNs [28], or even transform-
ers [33]. This way, correspondences to the image space can
be found and a relative depth estimate can be computed.
From the image-aligned mesh prediction, we can roughly
estimate the rotations with respect to a template mesh in
canonical space with the application of Inverse Kinematics
(IK), denoted in this work as the Approximate Rotation Es-
timator (ARE). Finally, we reformulate the SMPL model
as a linear system of equations, with which we can use the
2D pixel-aligned vertex maps and any known camera intrin-
sic parameters to fully estimate the model without the need
for any additional optimization. As our approach is end-
to-end differentiable and fits the model within the training
loop, it is self-improving in nature. The proposed approach
is benchmarked against various 3D human pose and shape
datasets, and significantly outperforms other state-of-the-art
approaches.
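The core computational idea, recovering parameters from a linear system in closed form, can be illustrated with a generic least-squares solve; the A and b below are stand-ins for the linearized model and the pixel-aligned observations, not PLIKS' actual reformulation of SMPL.

```python
import torch

def solve_linear_model(A, b):
    """Closed-form least-squares solution of A x ≈ b via the normal equations.
    A: (m, n) linearized model, b: (m,) observations; returns x: (n,)."""
    AtA = A.T @ A
    Atb = A.T @ b
    return torch.linalg.solve(AtA, Atb)

# Toy usage: recover 10 parameters from 200 noisy linear observations.
A = torch.randn(200, 10)
x_true = torch.randn(10)
b = A @ x_true + 0.01 * torch.randn(200)
x_hat = solve_linear_model(A, b)
```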
To summarize, the contribution of our paper is the fol-
lowing: (1) We bridge the gap between the 2D pixel-aligned
vertex maps and the parametric model by reformulating the
SMPL model as a linear system of equations. Since the pro-
posed approach is fully differentiable, we can perform end-
to-end training. (2) We propose a 3D human body estima-
tion framework that reconstructs the 3D body without rely-
ing on weak-perspective assumptions. (3) We show that our
approach can improve upon other state-of-the-art methods when evaluated across various 3D human pose and shape
benchmarks.
|
Rahman_Learning_Partial_Correlation_Based_Deep_Visual_Representation_for_Image_Classification_CVPR_2023
|
Abstract
Visual representation based on covariance matrix has
demonstrated its efficacy for image classification by char-
acterising the pairwise correlation of different channels in
convolutional feature maps. However, pairwise correla-
tion will become misleading once there is another channel
correlating with both channels of interest, resulting in the
“confounding” effect. For this case, “partial correlation”
which removes the confounding effect shall be estimated in-
stead. Nevertheless, reliably estimating partial correlation
requires solving a symmetric positive definite matrix op-
timisation, known as sparse inverse covariance estimation
(SICE). How to incorporate this process into CNN remains
an open issue. In this work, we formulate SICE as a novel
structured layer of CNN. To ensure end-to-end trainabil-
ity, we develop an iterative method to solve the above ma-
trix optimisation during forward and backward propagation
steps. Our work obtains a partial correlation based deep vi-
sual representation and mitigates the small sample problem
often encountered by covariance matrix estimation in CNN.
Computationally, our model can be effectively trained with
GPU and works well with a large number of channels of
advanced CNNs. Experiments show the efficacy and supe-
rior classification performance of our deep visual represen-
tation compared to covariance matrix based counterparts.
|
1. Introduction
Learning effective visual representation is a central issue
in computer vision. In the past two decades, describing im-
ages with local features and pooling them to a global rep-
resentation has shown promising performance. As one of
the pooling methods, covariance matrix based pooling has
attracted much attention due to its exploitation of second-
order correlation information of features. A variety of tasks
*Corresponding author. Code: https://github.com/csiro-robotics/iSICE
Figure 1. Understanding the partial correlation (a 3D toy case). Unlike the ordinary covariance (pairwise correlation of, say, $x$ and $y$ corresponding to channels), partial correlation between variables $x$ and $y$ removes the influence of the confounding variable $z$. Let the number of samples $n=3$ and channels $d=3$. For the 3D case, $x$ and $y$ are projected onto a plane perpendicular to $z$. Then $\rho_{xy}=\cos\varphi_{xy}$ (and $\rho_{xz}$ and $\rho_{yz}$ can be computed by analogy). Projected “residuals” $r_x$ and $r_y$ are computed as indicated in the plot, $w'_x=\arg\min_{w_x}\sum_{i=1}^{3}(x_i-w_x^\top z_i)$ where $z_i=[z_i,1]^\top$ (and $w'_y$ is computed by analogy). The green box: for $d>3$, the computation of partial correlation requires covariance inversion [7].
such as fine-grained image classification [27], image seg-
mentation [16], generic image classification [24, 26, 34],
image set classification [44], action recognition [18], few-
shot classification [50] and few-shot detection [52–54] have
benefited from the covariance matrix based representation.
A few pioneering works have integrated covariance ma-
trix as a pooling method within convolutional neural net-
works (CNN) and investigated associated issues such as
matrix function backpropagation [16], matrix normalisa-
tion [23,28,38], compact matrix estimation [11,49] and ker-
nel based extension [9]. The above works further improved
visual representations based on covariance matrix.
Despite the above progress, covariance matrix merely
measures the pairwise correlation (more accurately, covari-
ance) of two variables without taking any other variables
into account. This can be easily verified because its ( i,j)-
th entry solely depends on the i-th and j-th variables on a
Figure 2. Proposed iterative sparse inverse covariance estimation (iSICE) method in a CNN pipeline.
sample set. In Statistics, it is known that such a pairwise
correlation will give misleading results once a third vari-
able is correlated with both variables of interest due to the
“confounding” effect. For this situation, the partial correla-
tion is the right measure to use. It regresses out the effects
of other variables from the two variables and then calculates
the correlation of their residuals instead. Partial correlation
can be conveniently obtained by computing inverse covari-
ance matrix, also known as the precision matrix [7] in the
statistical community. Figure 1 illustrates the geometrical
interpretation of partial correlation.
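The standard relation between the precision matrix and partial correlations is easy to state in code. The NumPy sketch below is a generic illustration of that relation (with a small ridge term assumed for invertibility), not the CNN layer proposed in this paper: the partial correlation between channels i and j is the suitably normalised negative (i, j) entry of the inverse covariance.

```python
import numpy as np

def partial_correlation(X, eps=1e-5):
    """Partial correlation matrix of d channels observed over n samples.

    X: (n, d) data matrix.  With P the precision (inverse covariance) matrix,
    the (i, j) partial correlation equals -P_ij / sqrt(P_ii * P_jj).
    The eps * I ridge keeps the covariance invertible when n is small.
    """
    cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    P = np.linalg.inv(cov)
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R
```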
The above observation motivates us to investigate a vi-
sual representation for image classification based on the in-
verse covariance matrix. After all, it has better theoreti-
cal support on characterising the essential relationship of
variables ( e.g., the channels in a convolutional feature map)
when other variables are present. Note that inverse covari-
ance matrix can be used for many vision tasks but in this pa-
per, we investigate it from the perspective of image classifi-
cation. Nevertheless, reliably estimating inverse covariance
matrix from the local descriptors of a CNN feature map is
a challenging task. This is primarily due to the small spa-
tial size of the feature map, i.e., sample size, and a higher
number of channels, i.e., feature dimensions, and this issue
becomes more pronounced for advanced CNN models. An
unreliable estimate of inverse covariance matrix will criti-
cally affect its effectiveness as a visual representation. One
might argue that by increasing the size of input images or
using a dimension reduction layer to reduce the number of
feature channels, such an issue could be resolved. In this
paper, we investigate this issue from the perspective of ro-
bust precision matrix estimation.
To achieve our goal, we explore the use of sparsity prior
for inverse covariance matrix estimation in the literature.
Specifically, the general principle of “bet on sparsity” [12]
is adopted in estimating the structure of high-dimensional
data, and this leads to an established technique called sparse
inverse covariance estimation (SICE) [10]. It solves an op-
timisation in the space of symmetric and positive definite
(SPD) matrix to estimate the inverse covariance matrix by
imposing the sparsity prior on its entries. SICE is designed
to handle the small sample problem and it is known for its excellent effectiveness to that end [10]. An initial attempt to apply SICE for visual representation is based on hand-crafted or pre-extracted features of small size and an off-the-
propagate through SICE due to optimisation of the SPD ma-
trix with the imposed non-smooth sparsity term [51].
Our work is the first one that truly integrates SICE into
CNN for end-to-end training. Clearly, such an integration
will fully take advantage of the feature learning capability
of CNN and the partial correlation offered by inverse co-
variance matrix. On the other hand, realising such an inte-
gration is not trivial. Unlike covariance matrix, which is ob-
tained by simple arithmetic operations, SICE is obtained by
solving an SPD matrix based optimisation. How to incorpo-
rate this optimisation process into CNN as a layer is an is-
sue. Furthermore, this SICE optimisation needs to be solved
for each training image during both forward and backward
phases to generate a visual representation. Directly solving
this optimisation within CNN will not be practical even for
a medium-sized SICE problem.
To efficiently integrate SICE into CNN, we propose a fast
end-to-end training method for SICE by taking inspiration
from Newton-Schulz iteration [14]. Our method solves the
SICE optimisation with a smooth convex cost function by
re-parameterising the non-smooth term in the original SICE
cost function (see Eq. (1)), and it can therefore be optimised
with standard optimisation techniques such as gradient de-
scend. Furthermore, we effectively enforce the SPD con-
straint during optimisation so that the obtained SICE solu-
tion remains SPD as desired. Figure 2 shows our “Iterative
Sparse Inverse Covariance Estimation (iSICE)”. In contrast
to SICE, iSICE works with end-to-end trainable deep learn-
ing models. Our iSICE involves simple matrix arithmetic
operations fully compatible with GPU. It can approximately
solve large SICE problems within CNN e fficiently.
Our main contributions are summarised as follows.
1. To more precisely characterise the relationship of fea-
tures for visual representation, this paper proposes to
integrate sparse inverse covariance estimation (SICE)
process into CNNs as a novel layer. To achieve this, we
develop a method based on Newton-Schulz iteration
and box constraints for ℓ1penalty to solve the SICE
optimisation with CNN and maintain the end-to-end
training efficiency. To the best of our knowledge, our
iSICE is the first end-to-end SICE solution for CNN.
2. Our iSICE method requires a minimal change of net-
work architecture. Therefore, it can readily be inte-
grated with existing works to replace those using deep
network models to learn covariance matrix based vi-
sual representation. The iSICE is fully compatible
with GPU and can be easily implemented with mod-
ern deep learning libraries.
3. As the objective of SICE is a combination of a log det term (which may change rapidly) and a sparsity term (which changes slowly), achieving the balance between both terms during optimisation by gradient descent is hard. To this end, we propose a minor contribution: a simple modulating network whose goal is to adapt the learning rate and sparsity penalty on the fly.
Experiments on multiple image classification datasets
show the effectiveness of our proposed iSICE method.
|
Shin_Local_Connectivity-Based_Density_Estimation_for_Face_Clustering_CVPR_2023
|
Abstract
Recent graph-based face clustering methods predict the
connectivity of an enormous number of edges, including false positive
edges that link nodes with different classes. However, those
false positive edges, which connect negative node pairs,
have the risk of integration of different clusters when their
connectivity is incorrectly estimated. This paper proposes a
novel face clustering method to address this problem. The
proposed clustering method employs density-based cluster-
ing, which maintains edges that have higher density. For
this purpose, we propose a reliable density estimation algo-
rithm based on local connectivity between K nearest neighbors (KNN). We effectively exclude negative pairs from the
KNN graph based on the reliable density while maintaining
sufficient positive pairs. Furthermore, we develop a pair-
wise connectivity estimation network to predict the connec-
tivity of the selected edges. Experimental results demon-
strate that the proposed clustering method significantly out-
performs the state-of-the-art clustering methods on large-
scale face clustering datasets and fashion image clustering
datasets. Our code is available at https://github.
com/illian01/LCE-PCENet
|
1. Introduction
Recently, with the release of large labeled face image
datasets [7, 9, 10], there are great progresses of face recog-
nition [4, 13, 14, 22]. These data-driven approaches still de-
mand massive annotated face data for improving face recog-
nition models. Face clustering, which aims to divide enor-
mous face images into different clusters, is essential to re-
duce annotation costs. Also, face clustering can be used in
real-world applications, including photo management and
organization of large-scale face images in social media, as
well as data collection.
Traditional clustering methods such as K-Means [16]
and DBSCAN [5] do not require training steps, but they
∗Corresponding author
depend on specific conditions on data distributions and
are sensitive to hyper-parameters. Also, they work well
on small-scale data but are vulnerable to large-scale data. Thus, they are not effective for face image data, which contains large-scale images and diverse distributions in general. Recent works [2, 6, 12, 17, 19, 24, 26] construct the KNN
graph and estimate the connectivity between nodes using
deep neural networks with supervised-learning, where the
connectivity represents the probability whether two nodes
belong to the same cluster. For instance, in [19, 24], graph
convolution networks (GCNs) are employed to estimate the
connectivity between nodes linked with edges in the KNN
graph. Also, the transformer [21] is adopted to exploit
the relationship among Knearest neighbors and estimate
the connectivity between neighbor nodes [17]. Thus, these
methods [17,19,24] estimate the connectivity between most
edges in the KNN graph, including many false positive
edges, which link nodes with different classes (negative
pairs). However, clustering performance is significantly de-
graded when the connectivity between negative pairs is in-
correctly estimated.
Some methods [2,12,26] select a small number of edges
from the KNN graph to exclude negative pairs and perform
classification on the selected pairs only. A GCN-based con-
fidence estimator is designed to select edges that link nodes
with higher confidence [26]. Density-based methods [2,12]
estimate density for each node to choose edges that are di-
rected to cluster centers. They pick only one pair for each
node based on the estimated density and determine whether
each pair belongs to the same cluster through pairwise clas-
sification. They achieve high precision performance by re-
ducing negative pairs for classification candidates, but a
small number of classification candidates yield relatively
low recall performance.
Due to the imperfection of the pairwise classification, the
more negative pairs are selected as classification candidates,
the more nodes with different classes are likely to be merged
in the clustering process. On the other hand, insufficient
positive pairs may degrade recall scores since some nodes
that belong to the same class cannot be merged. Thus, it is
essential to reduce negative pairs while maintaining suffi-
cient positive pairs as the pairwise classification candidates
for high clustering performance.
In this paper, we propose a novel density estimation
method that explores the local connectivity and similarity
between K nearest neighbors. First, we construct a KNN
graph, where each face image becomes a node, based on the
cosine similarity between nodes. Then, we develop a lo-
cal connectivity estimation network (LCENet), which takes
the features of each node and its K nearest neighbors as an input and provides the local connectivity probability between the pivot node and its K nearest neighbors. We then
combine the local connectivity and similarity to estimate
the reliable density. We refine the KNN graph based on
the node density by selecting edges toward cluster centers
to exclude false positive edges from the KNN graph. In the
graph refinement, we perform density-based and similarity-
based edge selection to reduce negative pairs while increas-
ing positive pairs. Given the reconstructed graph, we de-
velop a pairwise connectivity estimation network (PCENet)
based on intra-class and inter-class similarities to determine
whether or not two linked nodes belong to the same clus-
ter. Finally, we employ breadth-first search (BFS) to ob-
tain clustering results. Experimental results demonstrate
that the proposed clustering method significantly outper-
forms the state-of-the-art clustering methods on large-scale
face clustering dataset [7, 25] and fashion image clustering
dataset [15].
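To make the graph-construction step concrete, the sketch below builds a cosine-similarity KNN graph and a naive similarity-based density score; the actual method replaces this naive score with the LCENet-based density that also uses learned local connectivity, which is omitted here.

```python
import numpy as np

def build_knn_graph(feats, k=10):
    """Cosine-similarity KNN graph.  feats: (N, D) face features.
    Returns neighbour indices (N, k) and their similarities (N, k), excluding self-edges."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T
    np.fill_diagonal(sim, -np.inf)               # drop self-similarity
    nbrs = np.argsort(-sim, axis=1)[:, :k]       # top-k most similar nodes per node
    return nbrs, np.take_along_axis(sim, nbrs, axis=1)

def naive_density(nbr_sims):
    """Mean similarity to the k nearest neighbours as a crude density proxy:
    nodes near cluster centres tend to score higher than boundary nodes."""
    return nbr_sims.mean(axis=1)
```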
To summarize, this work has three main contributions.
• We develop LCENet to compute the reliable node den-
sity, which effectively excludes negative pairs from
theKNN graph while maintaining sufficient positive
pairs.
• We design PCENet based on intra-class and inter-
class similarities to effectively determine whether two
linked nodes belong to the same cluster.
• The proposed clustering method outperforms the state-
of-the-arts significantly on various datasets [7, 15, 25].
|
Ru_Token_Contrast_for_Weakly-Supervised_Semantic_Segmentation_CVPR_2023
|
Abstract
Weakly-Supervised Semantic Segmentation (WSSS) us-
ing image-level labels typically utilizes Class Activation
Map (CAM) to generate the pseudo labels. Limited by the
local structure perception of CNN, CAM usually cannot
identify the integral object regions. Though the recent Vi-
sion Transformer (ViT) can remedy this flaw, we observe it
also brings the over-smoothing issue, i.e., the final patch to-
kens incline to be uniform. In this work, we propose Token
Contrast (ToCo) to address this issue and further explore
the virtue of ViT for WSSS. Firstly, motivated by the obser-
vation that intermediate layers in ViT can still retain se-
mantic diversity, we designed a Patch Token Contrast mod-
ule (PTC). PTC supervises the final patch tokens with the
pseudo token relations derived from intermediate layers, al-
lowing them to align the semantic regions and thus yield
more accurate CAM. Secondly, to further differentiate the
low-confidence regions in CAM, we devised a Class Token
Contrast module (CTC) inspired by the fact that class tokens
in ViT can capture high-level semantics. CTC facilitates the
representation consistency between uncertain local regions
and global objects by contrasting their class tokens. Exper-
iments on the PASCAL VOC and MS COCO datasets show
the proposed ToCo can remarkably surpass other single-
stage competitors and achieve comparable performance
with state-of-the-art multi-stage methods. Code is available
athttps://github.com/rulixiang/ToCo .
|
1. Introduction
To reduce the expensive annotation costs of deep seman-
tic segmentation models, weakly-supervised semantic seg-
mentation (WSSS) is proposed to predict pixel-level predic-
tions with only weak and cheap annotations, such as image-
level labels [2], points [4], scribbles [51] and bounding
*Corresponding author. This work was done when Lixiang Ru was a
research intern at JD Explore Academy.
Figure 1. The generated CAM and the pairwise cosine similarity of patch tokens (sim. map). Our method can address the over-smoothing issue well and produce accurate CAM. Here we use ViT-Base.
boxes [23]. Among all these annotation forms, the image-
level label is the cheapest and contains the least informa-
tion. This work also falls in the field of WSSS using only
image-level labels.
Prevalent works of WSSS using image-level labels typ-
ically derive Class Activation Map (CAM) [53] or its vari-
ants [35] as pseudo labels. The pseudo labels are then pro-
cessed with alternative refinement methods [1, 2] and used
to train regular semantic segmentation models. However,
CAM is usually flawed since it typically only identifies the
most discriminative semantic regions, severely weakening
the final performance of semantic segmentation [1, 19, 43].
The recent works [15, 34, 47] show one reason is that pre-
vious methods usually generate CAM with CNN, in which
convolution only perceives local features and fails to acti-
vate the integral object regions. To ameliorate this problem
and generate more accurate pseudo labels for WSSS, these
works propose solutions based on the recent Vision Trans-
former (ViT) architecture [12], which inherently models the
global feature interactions with self-attention blocks.
However, as demonstrated in [29, 42], self-attention in
ViT is essentially a low-pass filter, which inclines to re-
duce the variance of input signals. Therefore, stacking self-
attention blocks is equivalent to repeatedly performing spa-
tial smoothing operations, which encourages the patch to-
kens in ViT to be uniform [16, 36], i.e., over-smoothing.
We observe that the over-smoothing issue particularly im-
pairs the WSSS task, since CAM used to derive pseudo
labels relies on the output features ( i.e. patch tokens). As
shown in Figure 1, due to over-smoothing, the pairwise co-
sine similarities of the patch tokens are close to 1, suggest-
ing the learned representations of different patch tokens are
almost uniform. The generated CAM thus tends to assign
different image regions with the monotonous semantic la-
bel. Though several recent works have explored the ViT
architecture for WSSS [34, 39, 47], they typically overlook
the over-smoothing issue of patch tokens, leaving this prob-
lem unresolved.
In this work, we empirically observe that ViT smooths
the patch tokens progressively, i.e. the learned representa-
tions in intermediate layers can still preserve the seman-
tic diversity. Therefore, we propose a Patch Token Con-
trast (PTC) module to address the over-smoothing issue by
supervising the final patch tokens with intermediate layer
knowledge. Specifically, in the PTC module, we simply add
an additional classifier in an intermediate layer to extract
the auxiliary CAM and the corresponding pseudo pairwise
token relations. By supervising the pairwise cosine simi-
larities of final patch tokens with the pseudo relations, PTC
can finely counter the over-smoothing issue and thus pro-
duce high-fidelity CAM. As shown in Figure 1, our method
can generate CAM that aligns well with the semantic object
regions. The pairwise cosine similarities also coincide with
the corresponding semantics. In addition, to further differ-
entiate the uncertain regions in generated CAM, inspired by
the property that the class token in ViT can inherently ag-
gregate high-level semantics [6,15], we also propose a Class
Token Contrast (CTC) module. In CTC, we first randomly
crop local images from uncertain regions (background re-
gions), and minimize (maximize) the representation differ-
ence between the class tokens of local and global images.
As a result, CTC can facilitate the local-to-global represen-
tation consistency of semantic objects and the discrepancy
between foreground and background, benefiting the integral
and accurate object activation in CAM. Finally, based on the
proposed PTC and CTC, we build Token Contrast (ToCo)
for WSSS and extend it to the single-stage WSSS frame-
work [34].
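A rough sketch of the patch token contrast idea is given below: pairwise cosine similarities of the final patch tokens are pushed toward binary pseudo relations derived from an intermediate layer. The way pseudo relations are derived here (argmax over an auxiliary CAM), the temperature, and the loss form are illustrative assumptions, not ToCo's exact formulation.

```python
import torch
import torch.nn.functional as F

def patch_token_contrast(final_tokens, mid_cam, temperature=0.1):
    """final_tokens: (B, N, C) patch tokens from the last ViT block.
    mid_cam: (B, N, K) class activation scores from an intermediate layer,
    used only to derive pseudo pairwise relations (no gradient flows through them)."""
    with torch.no_grad():
        labels = mid_cam.argmax(dim=-1)                               # (B, N) pseudo labels
        rel = (labels.unsqueeze(2) == labels.unsqueeze(1)).float()    # (B, N, N) pseudo relations

    tokens = F.normalize(final_tokens, dim=-1)
    sim = tokens @ tokens.transpose(1, 2)                             # pairwise cosine similarity
    # Pull same-label pairs toward similarity 1 and different-label pairs toward 0.
    return F.binary_cross_entropy_with_logits(sim / temperature, rel)
```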
Overall, our contributions in this work include the fol-
lowing aspects.
• We propose Patch Token Contrast (PTC) to address the
over-smoothing issue in ViT. By supervising the final to-
kens with intermediate knowledge, PTC can counter the
patch uniformity and significantly promote the quality of
pseudo labels for WSSS.
• We propose Class Token Contrast (CTC), which contrasts
the representation of global foregrounds and local uncer-
tain regions (background) and facilitates the completeness of object activation in CAM.
• The experiments on the PASCAL VOC [14] and MS
COCO dataset [26] show that the proposed ToCo can sig-
nificantly outperform SOTA single-stage WSSS methods
and achieve comparable performance with multi-stage
competitors.
|
Shen_Equiangular_Basis_Vectors_CVPR_2023
|
Abstract
We propose Equiangular Basis Vectors (EBVs) for clas-
sification tasks. In deep neural networks, models usually
end with a k-way fully connected layer with softmax to
handle different classification tasks. The learning objective of these methods can be summarized as mapping the
learned feature representations to the samples’ label space.
While in metric learning approaches, the main objective is
to learn a transformation function that maps training data
points from the original space to a new space where simi-
lar points are closer while dissimilar points become farther
apart. Different from previous methods, our EBVs generate
normalized vector embeddings as “predefined classifiers”
which are required not only to have equal status with each other, but also to be as orthogonal as possible. By minimizing the spherical distance between the embedding of an input and its categorical EBV during training, the predictions
can be obtained by identifying the categorical EBV with
the smallest distance during inference. Various experiments
on the ImageNet-1K dataset and other downstream tasks
demonstrate that our method outperforms the general fully
connected classifier while it does not introduce huge addi-
tional computation compared with classical metric learning
methods. Our EBVs won the first place in the 2022 DIGIX
Global AI Challenge, and our code is open-source and avail-
able at https://github.com/NJUST-VIPGroup/
Equiangular-Basis-Vectors .
|
1. Introduction
The pattern classification field developed around the end
of the twentieth century aims to deal with the specific prob-
lem of assigning input signals to two or more classes [ 56].
In recent years, deep learning models have brought break-
throughs in processing image, video, audio, text, and other
*Corresponding author. This work was supported by National Key
R&D Program of China (2021YFA1001100), National Natural Science
Foundation of China under Grant (62272231), Natural Science Foundation
of Jiangsu Province of China under Grant (BK20210340), the Fundamental
Research Funds for the Central Universities (No. NJ2022028), and CAAI-
Huawei MindSpore Open Fund.
Figure 1. Comparisons between typical classification paradigms and our proposed Equiangular Basis Vectors (EBVs). (a) A general classifier ends with k-way fully connected layers with softmax. When adding more categories, the trainable parameters of the classifier grow linearly. (b) Taking triplet embedding [60] as an example of classical metric learning methods, the complexity is $O(M^3)$ when given $M$ images, and it grows to $O((M+m')^3)$ when adding a new category with $m'$ images. (c) Our proposed EBVs. EBVs predefine fixed normalized vector embeddings for different categories, and these embeddings are not changed during the training stage. The trainable parameters of the network do not change with the growth of the number of categories, while the complexity only grows from $O(M)$ to $O(M+m')$.
data [ 10,19,27,58]. Aided by the rapid gains in hardware,
deep learning methods today can easily overfit one million
images [9] and easily overcome the obstacle posed by the quality of handcrafted features in previous pattern classification
tasks. Many approaches based on deep learning spring up
like mushrooms and had been used to solve classification
problems in various scenarios and settings such as remote
sensing [ 38], few-shot [ 52], long-tailed [ 72], etc.
Figure 1 illustrates two typical classification paradigms.
Nowadays, a large amount of deep learning methods [ 38,72]
adopt a trainable fully connected layer with softmax as the
classifier. However, since the number of categories is fixed,
the trainable parameters of the classifier rise as the number
of categories becomes larger. For example, the memory
consumption of a fully connected layer $W \in \mathbb{R}^{d \times N}$ linearly scales up with the growth of the category number $N$, and so
is the cost to compute the matrix multiplication between the
fully connected layer and the d-dimensional features. While
some other methods based on classical metric learning [ 23,
30,60–62] have to consider all the training samples and
design positive/negative pairs and then optimize a class center for each category, which requires a significant amount of extra computation for large-scale datasets, especially for
those pre-training tasks.
In this paper, we propose Equiangular Basis V ectors
(EBVs) to replace the fully connected layer associated with
softmax within classification tasks in deep neural net-
works. EBVs predefine fixed normalized vector embeddings
with equal status (equal angles) which will not be changed
during the training stage. Specifically, EBVs pre-set a d-
dimensional unit hypersphere, and for each category in the
classification task, EBVs assign the category a d-dimensional
normalized embedding on the surface of the hypersphere and
we term these embeddings basis vectors. The spherical
distance of each basis vector pair satisfies an artificially
made rule to make the relationship between any two vec-
tors as close to orthogonal as possible. In order to keep the
trainable parameters of the deep neural networks constant
with the growth of the category number N, we then propose
the definition of EBVs based on Tammes Problem [ 55] and
Equiangular Lines [ 57] in Section 3.2.
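One simple way to pre-compute such basis vectors before classifier training is to optimise N unit vectors so that all pairwise cosine similarities stay below a small bound; the heuristic below is an illustrative sketch under that assumption, not necessarily the exact construction used in Section 3.2.

```python
import torch
import torch.nn.functional as F

def make_equiangular_basis(num_classes, dim, max_cos=0.05, steps=2000, lr=0.01):
    """Optimise num_classes unit vectors in R^dim so that every pairwise |cosine|
    is pushed below max_cos, i.e. the vectors share equal status and are nearly
    orthogonal.  The returned vectors stay fixed during classifier training."""
    W = torch.randn(num_classes, dim, requires_grad=True)
    opt = torch.optim.Adam([W], lr=lr)
    for _ in range(steps):
        w = F.normalize(W, dim=1)
        cos = w @ w.T - torch.eye(num_classes)              # zero out the diagonal
        loss = torch.relu(cos.abs() - max_cos).sum()        # penalise pairs that are too correlated
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.normalize(W.detach(), dim=1)

# Example: 1000 class embeddings on a 100-dimensional unit hypersphere.
# ebvs = make_equiangular_basis(1000, 100)
```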
The learning objective of each category in our proposed
EBVs is also different from previous classification methods.
Compared with deep models that end with a fully connected
layer to handle the classification tasks [ 19,27], the meaning
of the parameter weights within the fully connected layer in
EBVs is not the relevance of a feature representation to a par-
ticular category but a fixed matrix that embeds feature representations into a new space. Also, compared with regression
methods [ 34,51], EBVs do not need to learn the unified repre-
sentations for different categories and optimize the distance
between the representation of input images and category cen-
ters, which helps reduce the computation spent on learning the extra unified representations. In contrast to clas-
sical metric learning approaches [ 14,23,50,61], our EBVs
do not need to measure the similarity among different train-
ing samples or constrain the distances between categories, which would introduce a large amount of extra computation for large-scale datasets. In our proposed method,
the representations of different images learned by EBVs will
be embedded into a normalized hypersphere and the learningobjective is altered to minimize the spherical distance ofthe learned representations with different predefined basis
vectors. In addition, the spherical distance between each
predefined basis vector is carefully constrained so that there
is no need to spend extra cost in the optimization of these
basis vectors. To quantitatively prove both the effectiveness
and efficiency of our proposed EBVs, we evaluate EBVs on diverse computer vision tasks with large-scale datasets,
including classification on ImageNet-1K, object detection
on COCO, as well as semantic segmentation on ADE20K.
|
Singh_Polynomial_Implicit_Neural_Representations_for_Large_Diverse_Datasets_CVPR_2023
|
Abstract
Implicit neural representations (INR) have gained sig-
nificant popularity for signal and image representation for
many end-tasks, such as superresolution, 3D modeling, and
more. Most INR architectures rely on sinusoidal positional
encoding, which accounts for high-frequency information in
data. However, the finite encoding size restricts the model’s
representational power. Higher representational power is
needed to go from representing a single given image to repre-
senting large and diverse datasets. Our approach addresses
this gap by representing an image with a polynomial function
and eliminates the need for positional encodings. Therefore,
to achieve a progressively higher degree of polynomial rep-
resentation, we use element-wise multiplications between
features and affine-transformed coordinate locations after
every ReLU layer. The proposed method is evaluated quali-
tatively and quantitatively on large datasets like ImageNet.
The proposed Poly-INR model performs comparably to state-
of-the-art generative models without any convolution, nor-
malization, or self-attention layers, and with far fewer train-
able parameters. With much fewer training parameters and
higher representative power, our approach paves the way for broader adoption of INR models for generative mod-
eling tasks in complex domains. The code is available at
https://github.com/Rajhans0/Poly_INR
|
1. Introduction
Deep learning-based generative models are a very ac-
tive area of research with numerous advancements in recent
years [8, 13, 24]. Most widely, generative models are based
on convolutional architectures. However, recent develop-
ments such as implicit neural representations (INR) [29, 43]
represent an image as a continuous function of its coordinate
locations, where each pixel is synthesized independently.
Such a function is approximated by using a deep neural
network. INR provides flexibility for easy image transforma-
tions and high-resolution up-sampling through the use of a
coordinate grid. Thus, INRs have become very effective for
3D scene reconstruction and rendering from very few train-
ing images [3, 27 –29, 56]. However, they are usually trained
to represent a single given scene, signal, or image. Recently,
INRs have been implemented as a generative model to gener-
ate entire image datasets [1,46]. They perform comparably to
CNN-based generative models on perfectly curated datasets
like human faces [22]; however, they have yet to be scaled
to large, diverse datasets like ImageNet [7].
INR generally consists of a positional encoding module
and a multi-layer perceptron model (MLP). The positional
encoding in INR is based on sinusoidal functions, often re-
ferred to as Fourier features. Several methods [29, 43, 49]
have shown that using MLP without sinusoidal positional
encoding generates blurry outputs, i.e., only preserves low-
frequency information. One can, however, remove the positional encoding by replacing the ReLU activation with a periodic or non-periodic activation function in the MLP [6, 37, 43]. Yet in INR-based GANs [1], using a periodic activation function in the MLP leads to subpar performance compared to positional encoding with a ReLU-based MLP.
Sitzmann et al. [43] demonstrate that ReLU-based MLP
fails to capture the information contained in higher deriva-
tives. This failure to incorporate higher derivative informa-
tion is due to ReLU’s piece-wise linear nature, and second
or higher derivatives of ReLU are typically zero. This can
be further interpreted in terms of the Taylor series expansion
of a given function. The higher derivative information of a
function is included in the coefficients of a higher-order poly-
nomial derived from the Taylor series. Hence, the inability
to generate high-frequency information is due to the ineffec-
tiveness of the ReLU-based MLP model in approximating
higher-order polynomials.
Sinusoidal positional encoding with MLP has been widely
used, but the capacity of such INR can be limiting for two
reasons. First, the size of the embedding space is limited;
hence only a finite and fixed combination of periodic func-
tions can be used, limiting its application to smaller datasets.
Second, such an INR design needs to be mathematically co-
herent. These INR models can be interpreted as a non-linear
combination of periodic functions where periodic functions
define the initial part of the network, and the later part is
often a ReLU-based non-linear function. Contrary to this,
classical transforms (Fourier, sine, or cosine) represent an
image by a linear summation of periodic functions. However,
using just a linear combination of the positional embedding
in a neural network is also limiting, making it difficult to
represent large and diverse datasets. Therefore, instead of
using periodic functions, this work models an image as a
polynomial function of its coordinate location.
The main advantage of polynomial representation is the
easy parameterization of polynomial coefficients with MLP
to represent large datasets like ImageNet. However, conven-
tionally MLP can only approximate lower-order polynomials.
One can use a polynomial positional embedding of the form
$x^p y^q$ in the first layer to enable the MLP to approximate
higher order. However, such a design is limiting, as a fixed
embedding size incorporates only fixed polynomial degrees.
In addition, we do not know the importance of each polyno-
mial degree beforehand for a given image. Hence, we do not
progressively increase the degree of the polynomial with the
depth of MLP. We achieve this by element-wise multiplica-
tion between the feature and affine transformed coordinate
location, obtained after every ReLU layer. The affine pa-
rameters are parameterized by the latent code sampled from
a known distribution. This way, our network learns the re-
quired polynomial order and represents complex datasets
with considerably fewer trainable parameters. In particular,
the key highlights are summarized as follows:
•We propose a Poly-INR model based on polynomial
functions and design a MLP model to approximate
higher-order polynomials.
•Poly-INR as a generative model performs compara-
bly to the state-of-the-art CNN-based GAN model
(StyleGAN-XL [42]) on the ImageNet dataset with
3−4× fewer trainable parameters (depending on output
resolution).
•Poly-INR outperforms the previously proposed INR
models on the FFHQ dataset [22], using a significantly
smaller model.
•We present various qualitative results demonstrating the
benefit of our model for interpolation, inversion, style-
mixing, high-resolution sampling, and extrapolation.
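The layer described above (an element-wise product between ReLU features and an affine-transformed coordinate grid, repeated with depth) can be sketched as follows; the class names, layer widths, and latent dimensionality are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of progressively raising polynomial degree with depth.
# PolyINRBlock/PolyINRGenerator names and sizes are illustrative only.
import torch
import torch.nn as nn

class PolyINRBlock(nn.Module):
    def __init__(self, feat_dim, latent_dim, coord_dim=2):
        super().__init__()
        self.feat_dim, self.coord_dim = feat_dim, coord_dim
        self.linear = nn.Linear(feat_dim, feat_dim)                  # feature path
        self.affine = nn.Linear(latent_dim, (coord_dim + 1) * feat_dim)

    def forward(self, feats, coords, z):
        B = coords.shape[0]
        params = self.affine(z)                                      # (B, (C+1)*F)
        W = params[:, : self.coord_dim * self.feat_dim].view(B, self.coord_dim, self.feat_dim)
        b = params[:, self.coord_dim * self.feat_dim:].unsqueeze(1)  # (B, 1, F)
        coord_feat = coords @ W + b                                  # (B, N, F)
        # Element-wise product raises the polynomial degree by one per block.
        return torch.relu(self.linear(feats)) * coord_feat

class PolyINRGenerator(nn.Module):
    def __init__(self, depth=4, feat_dim=64, latent_dim=128):
        super().__init__()
        self.inp = nn.Linear(2, feat_dim)
        self.blocks = nn.ModuleList(
            [PolyINRBlock(feat_dim, latent_dim) for _ in range(depth)])
        self.out = nn.Linear(feat_dim, 3)                            # RGB output

    def forward(self, coords, z):
        feats = self.inp(coords)
        for blk in self.blocks:
            feats = blk(feats, coords, z)
        return self.out(feats)

# Usage: a 32x32 coordinate grid in [-1, 1]^2 and a random latent code.
xy = torch.stack(torch.meshgrid(torch.linspace(-1, 1, 32),
                                torch.linspace(-1, 1, 32), indexing="ij"), dim=-1)
coords = xy.reshape(1, -1, 2)
rgb = PolyINRGenerator()(coords, torch.randn(1, 128))                # (1, 1024, 3)
```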
|
Rana_Hybrid_Active_Learning_via_Deep_Clustering_for_Video_Action_Detection_CVPR_2023
|
Abstract
In this work, we focus on reducing the annotation cost
for video action detection which requires costly frame-wise
dense annotations. We study a novel hybrid active learning
(AL) strategy which performs efficient labeling using both
intra-sample andinter-sample selection. The intra-sample
selection leads to labeling of fewer frames in a video as op-
posed to inter-sample selection which operates at video level.
This hybrid strategy reduces the annotation cost from two dif-
ferent aspects leading to significant labeling cost reduction.
The proposed approach utilizes Clustering-Aware Uncertainty Scoring (CLAUS), a novel label acquisition strategy which relies on both informativeness and diversity for sample se-
lection. We also propose a novel Spatio-Temporal Weighted
(STeW) loss formulation, which helps in model training un-
der limited annotations. The proposed approach is evaluated
on UCF-101-24 and J-HMDB-21 datasets demonstrating
its effectiveness in significantly reducing the annotation cost
where it consistently outperforms other baselines. Project
details available at https://tinyurl.com/hybridclaus
|
1. Introduction
Video action detection requires spatio-temporal annota-
tions which include bounding-box or pixel-wise annotation
on each frame of the video in addition to video level anno-
tations [22, 27, 35, 50, 66]. The cost of such dense annotation is much higher compared to the classification task, where only video-level annotations are sufficient [8, 18, 59, 60]. In this
work, we study how this high annotation cost for spatio-
temporal detection can be reduced with minimal perfor-
mance trade-off.
The existing works on label efficient learning for videos
are mainly focused on classification task [10, 23, 57]. Video
action detection methods focus on weakly-supervised or
semi-supervised methods to save annotation costs [12,31,39,
40, 64]. Weakly-supervised methods use partial annotations
such as point-level [39], video-level [3, 12], and temporal
annotations [9, 64]. Similarly, semi-supervised methods use
Figure 1. Comparison of the proposed CLAUS based AL method with random selection for video action detection. The plots show v-mAP and f-mAP @ 0.5 vs. % frames annotated for (a-b) UCF-101-24 and (c-d) J-HMDB-21 for different annotation amounts. The green line represents model performance with 90% annotations.
unlabeled samples with the help of pseudo-labeling [40] and
prediction consistency [31]. Such approaches have been
effective for classification tasks, however spatio-temporal
detection is more challenging under limited annotations with
inferior performance compared to fully supervised methods.
One of the main limitations of these methods is the lack of se-
lection criteria which can guide in labeling only informative
samples. To overcome this limitation, we investigate the use
of active learning for label efficient video action detection.
Traditional active learning (AL) approaches typically fo-
cus on the classification task where selection is performed at the
sample level [19, 34, 61]. In video action detection, a frame-
level spatio-temporal localization is required in addition to
video-level class prediction. Therefore, an active learning strat-
egy should also consider detection on every frame within a
video apart from video-level decisions. We argue that frame-
level informativeness is also crucial for spatio-temporal de-
tection along with video-level informativeness. Motivated
by this, we explore a hybrid active learning strategy which
performs both intra-sample andinter-sample selection. The
intra-sample selection targets informative frames within a
video and inter-sample selection aims at informative sam-
ples at video-level. This hybrid approach results in efficient
labeling by significantly reducing the annotation costs.
Informativeness and diversity, both are important for sam-
ple selection in active learning [4]. The proposed approach
utilizes Clustering-Aware Uncertainty Scoring (CLAUS), a
novel clustering assisted AL strategy which considers both
these aspects for sample selection. It relies on model un-
certainty for informative sample selection and clustering for
reducing redundancy. Clustering is jointly performed on fea-
ture space while model training where diversity is enforced
based on cluster assignments. Moreover, the intra-sample
selection will lead to limited annotations making model train-
ing difficult. To overcome this, we propose a novel training
objective, Spatio-Temporal Weighted (STeW) loss, which re-
lies on temporal continuity for pseudo labels and helps in
learning under limited annotations.
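As a rough illustration of how uncertainty and cluster-based diversity can be combined for selection, consider the sketch below; the round-robin scheme, the pre-computed uncertainty scores, and the function name are our simplification, not the exact CLAUS scoring.

```python
# Simplified sketch: fill the budget by taking, in turn, the most uncertain
# unlabeled sample from each cluster, so selections stay diverse.
import numpy as np

def select_samples(uncertainty, cluster_ids, budget):
    """uncertainty: (N,) scores; cluster_ids: (N,) ints; returns chosen indices."""
    budget = min(budget, len(uncertainty))
    selected = []
    clusters = np.unique(cluster_ids)
    # Members of each cluster, sorted by decreasing uncertainty.
    per_cluster = {c: sorted(np.where(cluster_ids == c)[0],
                             key=lambda i: -uncertainty[i]) for c in clusters}
    while len(selected) < budget:
        for c in clusters:
            if per_cluster[c]:
                selected.append(per_cluster[c].pop(0))
            if len(selected) == budget:
                break
    return np.array(selected)

# Usage with toy scores for 10 unlabeled videos grouped into 3 clusters.
unc = np.random.rand(10)
cid = np.array([0, 0, 1, 1, 1, 2, 2, 2, 0, 1])
print(select_samples(unc, cid, budget=4))
```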
We make the following contributions in this work: 1)
novel hybrid AL strategy that selects frames and videos
based on informativeness and diversity; 2) clustering-based
selection criteria that enables diversity in sample selection;
3) novel training objective for effective utilization of limited
labels using temporal continuity. We evaluate the proposed
approach on UCF-101-24 and JHMDB-21 and demonstrate
that it outperforms other AL baselines and achieves compa-
rable performance with model trained on 90% annotations
at a fraction (5% vs 90%) of the annotation cost (Figure 1).
|
Saxena_Re-GAN_Data-Efficient_GANs_Training_via_Architectural_Reconfiguration_CVPR_2023
|
Abstract Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires a vast number of training images. Recent research on GAN tickets reveals that dense GANs models contain sparse sub-networks or "lottery tickets" that, when trained separately, yield better results under limited data. However, finding GANs tickets requires an expensive process of train-prune-retrain. In this paper, we propose Re-GAN, a data-efficient GANs training that dynamically reconfigures GANs architecture during training to explore different sub-network structures in training time. Our method repeatedly prunes unimportant connections to regularize GANs network and regrows them to reduce the risk of prematurely pruning important connections. Re-GAN stabilizes the GANs models with less data and offers an alternative to the existing GANs tickets and progressive growing methods. We demonstrate that Re-GAN is a generic training methodology which achieves stability on datasets of varying sizes, domains, and resolutions (CIFAR-10, Tiny-ImageNet, and multiple few-shot generation datasets) as well as different GANs architectures (SNGAN, ProGAN, StyleGAN2 and AutoGAN). Re-GAN also improves performance when combined with the recent augmentation approaches. Moreover, Re-GAN requires fewer floating-point operations (FLOPs) and less training time by removing the unimportant connections during GANs training while maintaining comparable or even generating higher-quality samples. When compared to state-of-the-art StyleGAN2, our method outperforms without requiring any additional fine-tuning step. Code can be found at this link: https://github.com/IntellicentAI-Lab/Re-GAN
|
1. Introduction
In recent years, Generative Adversarial Networks (GANs) [4]–[7] have made great strides in generating high-fidelity images. GANs models serve as the backbone of several vision applications, such as data augmentation [5], [8], [9], domain adaptation [10], [11], and image-to-image translation [14]–[16]. The success of GANs methods largely depends on a massive quantity of diverse training data, which is often time-consuming and challenging to collect [17]. Figure 1 shows how the performance of the StyleGAN2 [18] model drastically declines under limited training data. As a
result, various new methods [1], [19], [20] have emerged to deal with the problem of insufficient data. Dynamic data augmentation [1], [19]–[21] fills in the gap and stabilizes GANs training with less data. Very recently, [22], [23] introduced the lottery ticket hypothesis (LTH) in GANs (called "GANs tickets"), complementary to the existing augmentation techniques. LTH identifies sparse sub-networks (called "winning tickets") that can be trained in isolation to match or even surpass the performance of unpruned models. [24] demonstrated that an identified GANs ticket can be used as a sparse structural prior to alleviate the problem of limited data in GANs. However, identifying these winning tickets requires many iterations of a time-consuming and computationally expensive train-prune-retrain process. This results in higher training time and more floating-point operations (FLOPs) than training dense GANs models, such as StyleGAN2 [18] and BigGAN [5]. In addition, these methods train a full-scale model before pruning, and then, after the pruning process, they engage in an extra fine-tuning step to improve performance. Given this perspective, we ask: Is there any way to achieve training efficiency w.r.t. both data and computation in GANs while preserving or even improving its performance?
One potential solution is network pruning during training, which allows the exploration of different sub-network structures at training time. Network structure exploration during training has been shown to be effective in a variety of domains [25], [26], and its properties have been the subject of a significant amount of research [27], [28]. However, network pruning has never been introduced to GANs training; as a result, the exploration of different sub-network structures during GANs training remains unexplored. To address this gap in the literature, we investigate and introduce network pruning, i.e., of connections, in GANs training by dynamically reconfiguring the GANs architecture to allow the exploration of different sub-network structures at training time, dubbed Re-GAN. However, it is common knowledge that the learning capabilities of the two competing networks, a generator (G) and a discriminator (D), need to be carefully kept in equilibrium. Hence, to build Re-GAN, the first question is: how to explore different network structures during GANs training? Network pruning during training regularizes G to allow a robust gradient flow through G. This stabilizes GANs models under limited training data and improves training efficiency. Re-GAN repeatedly prunes and grows the connections during the training process to reduce the risk of pruning important connections prematurely and to prevent the model from losing its representational capabilities early in training. As a result, network growing provides a second opportunity to reinitialize pruned connections by reusing information from previously explored sub-network structures.
Figure 2: Conventional GANs training has a fixed connectivity space. Re-GAN uses network pruning and growing during training to make the connectivity space flexible, which helps in the propagation of robust gradients. Best viewed in color.
The second question is: how to explore different sub-network structures in G and D simultaneously? On the one hand, if we employ a pretrained D (or G) and prune solely for G (or D), it can quickly incur an imbalance between the capabilities of D and G (particularly in the early stage of training), resulting in slow convergence. While it is possible to prune G and D simultaneously, empirical experiments show that doing so significantly degrades the initially unstable GANs training, resulting in highly fluctuating training curves and, in many cases, a failure to converge. As a trade-off, we propose expanding D as per standard GANs training while applying pruning exclusively to G's architecture. Additionally, our method is robust, working well with a wide range of GANs architectures (ProGAN [29], SNGAN [30], StyleGAN2, and AutoGAN [31], [32]) and datasets (CIFAR-10 [3], Tiny-ImageNet [33], Flickr Faces HQ [34], and many few-shot generation datasets). We find that exploring different sub-network structures during training accounts for a significant decrease in FID score compared to the vanilla DCGAN [35] architecture, without a pre-trained model or fine-tuning the pruned model (see Figure 2). Our method delivers higher performance in less training time compared to state-of-the-art (SOTA) methods on most available datasets, without the additional hyperparameters that the progressive growing method introduces, such as training schedules and learning rates for different generation stages (resolutions). This robustness allows Re-GAN to be easily generalized to unseen datasets. To the best of our knowledge, Re-GAN is the first attempt to incorporate network pruning during GANs training. Our technical innovations are as follows:
• We conduct the first in-depth study on taking a unified approach to incorporating pruning in GANs training without pre-training a large model or fine-tuning the pruned model.
• Our method repeatedly prunes and grows the connections during training to reduce the possibility of pruning important connections and helps the model maintain its representation ability early in training. Thus, network growing gives another chance to reinitialize pruned connections from the explored network sub-structures.
• Extensive experiments are conducted across a wide range of GANs architectures and demonstrate that Re-GAN can be easily applied to these GANs architectures to improve their performance, both in regular and low-data regime setups. For example, for the identified winning GANs ticket, ProGAN, and StyleGAN2 on full CIFAR-10, we achieve 70.23%, 18.81%, and 19% training FLOPs savings, respectively, with improved generated sample quality for both full and 10% training data. Re-GAN presents a viable alternative to the GANs tickets and progressive growing techniques. Additionally, the performance of Re-GAN is enhanced when integrated with recent augmentation techniques.
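The prune-and-grow cycle can be sketched as magnitude-based masking of the generator's weight tensors; the pruning criterion, random regrowth rule, and schedule below are simplified illustrations rather than Re-GAN's exact procedure.

```python
# Sketch of a prune-and-grow cycle on a generator's weight tensors.
# Magnitude pruning and random regrowth are simplified stand-ins.
import torch
import torch.nn as nn

@torch.no_grad()
def prune_generator(gen: nn.Module, sparsity: float):
    """Zero out the smallest-magnitude fraction of each weight tensor."""
    masks = {}
    for name, p in gen.named_parameters():
        if p.dim() < 2:                                   # skip biases / norm params
            continue
        k = int(sparsity * p.numel())
        if k == 0:
            masks[name] = torch.ones_like(p, dtype=torch.bool)
            continue
        thresh = p.abs().flatten().kthvalue(k).values
        mask = p.abs() > thresh
        p.mul_(mask.to(p.dtype))                          # prune in place
        masks[name] = mask
    return masks

@torch.no_grad()
def grow_generator(gen: nn.Module, masks, grow_frac: float):
    """Re-enable a random fraction of pruned connections with a small init."""
    for name, p in gen.named_parameters():
        if name not in masks:
            continue
        pruned = (~masks[name]).nonzero(as_tuple=False)
        n_grow = int(grow_frac * len(pruned))
        if n_grow == 0:
            continue
        idx = pruned[torch.randperm(len(pruned))[:n_grow]]
        p[tuple(idx.t())] = 0.01 * torch.randn(n_grow, device=p.device)
        masks[name][tuple(idx.t())] = True

# Usage inside a training loop (illustrative schedule). A full implementation
# would also re-apply the masks after every optimizer step so pruned weights
# stay zero between prune/grow events.
gen = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 784))
masks = None
for step in range(1, 3001):
    # ... usual G/D updates would go here ...
    if step % 1000 == 0:
        masks = prune_generator(gen, sparsity=0.3)
    elif masks is not None and step % 1000 == 500:
        grow_generator(gen, masks, grow_frac=0.5)
```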
|
Ramanathan_PACO_Parts_and_Attributes_of_Common_Objects_CVPR_2023
|
Abstract
Object models are gradually progressing from predict-
ing just category labels to providing detailed descriptions
of object instances. This motivates the need for large
datasets which go beyond traditional object masks and pro-
vide richer annotations such as part masks and attributes.
Hence, we introduce PACO: Parts and Attributes of Com-
mon Objects. It spans 75 object categories, 456 object-
part categories and 55 attributes across image (LVIS) and
video (Ego4D) datasets. We provide 641K part masks an-
notated across 260K object boxes, with roughly half of
them exhaustively annotated with attributes as well. We
design evaluation metrics and provide benchmark results
for three tasks on the dataset: part mask segmentation, ob-
ject and part attribute prediction and zero-shot instance de-
tection. Dataset, models, and code are open-sourced at
https://github.com/facebookresearch/paco.
|
1. Introduction
Today, tasks requiring fine-grained understanding of
objects like open vocabulary detection [8, 14, 20, 51],
GQA [17], and referring expressions [3, 21, 32] are gaining
importance besides traditional object detection. Represent-
ing objects through category labels is no longer sufficient.
A complete object description requires more fine-grained
properties like object parts and their attributes, as shown by
the queries in Fig. 1.
Currently, there are no large benchmark datasets for
common objects with joint annotation of part masks, ob-
ject attributes and part attributes (Fig. 1). Such datasets
are found only in specific domains like clothing [18, 47],
birds [42] and pedestrian description [25]. Current datasets
with part masks for common objects [2, 15, 50] are lim-
ited in the number of object instances with parts (59K for
ADE20K [2], Tab. 1). On the attributes side, there exist
large-scale datasets like Visual Genome [23], VAW [35] and
COCO-attributes [34] that provide object-level attributes.
However, none have part-level attribute annotations.
In this work, we enable research for the joint task of ob-
ject detection, part segmentation, and attribute recognition,
by designing a new dataset: PACO. With video object de-
scription becoming more widely studied as well [19], we
construct both an image dataset (sourced from LVIS [13])
and a video dataset (sourced from Ego4D [11]) as part of
PACO. Overall, PACO has 641K part masks annotated in
77K images for 260K object instances across 75 object
classes and 456 object-specific part classes. It has an order
of magnitude more objects with parts, compared to recently
introduced PartImageNet dataset [15]. PACO further pro-
vides annotations for 55different attributes for both objects
and parts. We conducted user studies and multi-round man-
ual curation to identify high-quality vocabulary of parts and
attributes.
Along with the dataset, we provide three associated
benchmark tasks to help the community evaluate its
progress over time. These tasks include: a) part segmen-
tation, b) attribute detection for objects and object-parts and
c) zero-shot instance detection with part/attribute queries.
The first two tasks are aimed at benchmarking stand alone
capabilities of part and attribute understanding. The third
task evaluates models directly for a downstream task.
While building the dataset and benchmarks, we navigate
some key design choices: (a) Should we evaluate parts and
attributes conditioned on the object or independent of the
objects (e.g., evaluating “leg” vs. “dog-leg”, “red” vs. “red
cup”)? (b) How do we keep annotation workload limited
without compromising fair benchmarking?
To answer the first question, we observed that the same
semantic part can visual
|
Singh_High-Fidelity_Guided_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2023
|
Abstract
Controllable image synthesis with user scribbles has
gained huge public interest with the recent advent of text-
conditioned latent diffusion models. The user scribbles con-
trol the color composition while the text prompt provides
control over the overall image semantics. However, we note
that prior works in this direction suffer from an intrinsic
domain shift problem wherein the generated outputs often
lack details and resemble simplistic representations of the
target domain. In this paper, we propose a novel guided im-
age synthesis framework, which addresses this problem by
modelling the output image as the solution of a constrained
optimization problem. We show that while computing an
exact solution to the optimization is infeasible, an approx-
imation of the same can be achieved while just requiring a
single pass of the reverse diffusion process. Additionally, we show that by simply defining a cross-attention based
correspondence between the input text tokens and the user
stroke-painting, the user is also able to control the seman-
tics of different painted regions without requiring any con-
ditional training or finetuning. Human user study results
show that the proposed approach outperforms the previous
state-of-the-art by over 85.32% on the overall user satis-
faction scores. Project page for our paper is available at
https://1jsingh.github.io/gradop .
|
1. Introduction
Guided image synthesis with user scribbles has gained
widespread public attention with the recent advent of large-
scale language-image (LLI) models [23, 26, 28, 30, 40]. A
novice user can gain significant control over the final image
contents by combining text-based conditioning with unsu-
pervised guidance from a reference image (usually a coarse
stroke painting). The text prompt controls the overall image
semantics, while the provided coarse stroke painting allows
the user to define the color composition in the output scene.
Existing methods often attempt to achieve this
through two means. The first category leverages condi-
tional training using semantic segmentation maps [8, 28,
39]. However, the conditional training itself is quite time-
consuming and requires a large scale collection of dense se-
mantic segmentation labels across diverse data modalities.
The second category, typically leverages an inversion based
approach for mapping the input stroke painting to the tar-
get data manifold without requiring any paired annotations.
For instance, a popular solution by [22, 35] introduces the
painting based generative prior by considering a noisy ver-
sion of the original image as the start of the reverse diffusion
process. However, the use of an inversion based approach
causes an intrinsic domain shift problem if the domain gap
between the provided stroke painting and the target domain
is too high. In particular, we observe that the resulting out-
puts often lack details and resemble simplistic representa-
tions of the target domain. For instance, in Fig. 1, we notice
that while the target domain consists of realistic photos of a
landscape, the generated outputs resemble simple pictorial
arts which are not very realistic. Iteratively reperforming
the guided synthesis with the generated outputs [4] seems
to improve realism but it is costly, some blurry details still
persist (refer Fig. 4), and the generated outputs tend to lose
faithfulness to the reference with each successive iteration.
To address this, we propose a diffusion-based guided im-
age synthesis framework which models the output image as
the solution of a constrained optimization problem (Sec. 3).
Given a reference painting y, the constrained optimization
is posed so as to find a solution xwith two constraints: 1)
upon painting xwith an autonomous painting function we
should recover a painting similar to reference y, and, 2) the
output xshould lie in the target data subspace defined by the
text prompt ( i.e., if the prompt says “photo” then we want
the output images to be realistic photos instead of cartoon-
like representations of the same concept). Subsequently, we
show that while the computation of an exact solution for
this optimization is infeasible, a practical approximation of
the same can be achieved through simple gradient descent.
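A rough sketch of such gradient-based guidance inside a single reverse-diffusion pass is shown below; the blur-based painting proxy, the `denoise_step` interface, and the guidance weight are placeholders for illustration, not the paper's exact formulation.

```python
# Illustrative sketch: steer each reverse-diffusion step toward a reference
# stroke painting y by descending a faithfulness loss on the current estimate.
import torch
import torch.nn.functional as F

def paint_fn(x, size=16):
    """Crude painting proxy: keep only the coarse color composition."""
    low = F.interpolate(x, size=(size, size), mode="area")
    return F.interpolate(low, size=x.shape[-2:], mode="bilinear", align_corners=False)

def guided_reverse_step(x_t, t, y_ref, denoise_step, lam=0.1):
    """One reverse step with gradient guidance toward the reference painting."""
    x_t = x_t.detach().requires_grad_(True)
    x_prev, x0_pred = denoise_step(x_t, t)        # model's usual reverse step
    loss = F.mse_loss(paint_fn(x0_pred), y_ref)   # faithfulness constraint
    grad = torch.autograd.grad(loss, x_t)[0]
    return (x_prev - lam * grad).detach()

# Toy usage with a dummy denoiser that simply shrinks the sample.
def dummy_denoise_step(x_t, t):
    x0_pred = 0.9 * x_t
    return 0.95 * x_t, x0_pred

y = torch.rand(1, 3, 64, 64)          # reference stroke painting
x = torch.randn(1, 3, 64, 64)         # start of the reverse process
for t in reversed(range(50)):
    x = guided_reverse_step(x, t, y, dummy_denoise_step, lam=0.05)
```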
Finally, while the proposed optimization allows the user
to generate image outputs with high realism and faithful-
ness (with reference y), the fine-grain semantics of differ-
ent painting regions are inferred implicitly by the diffusion
model. Such inference is typically dependent on the gener-
ative priors learned by the diffusion model, and might not
accurately reflect the user’s intent in drawing a particular
region. For instance, in Fig. 1, we see that the light blue regions can be inferred as blue-green grass instead of a river.
To address this, we show that by simply defining a cross-
attention based correspondence between the input text to-
kens and user stroke-painting, the user can control seman-
tics of different painted regions without requiring semantic-
segmentation based conditional training or finetuning.
|
Shin_NIPQ_Noise_Proxy-Based_Integrated_Pseudo-Quantization_CVPR_2023
|
Abstract
Straight-through estimator (STE), which enables the
gradient flow over the non-differentiable function via
approximation, has been favored in studies related to
quantization-aware training (QAT). However, STE incurs
unstable convergence during QAT, resulting in notable
quality degradation in low precision. Recently, pseudo-
quantization training has been proposed as an alternative
approach to updating the learnable parameters using the
pseudo-quantization noise instead of STE. In this study,
we propose a novel noise proxy-based integrated pseudo-
quantization (NIPQ) that enables unified support of pseudo-
quantization for both activation and weight by integrating
the idea of truncation on the pseudo-quantization frame-
work. NIPQ updates all of the quantization parameters
(e.g., bit-width and truncation boundary) as well as the net-
work parameters via gradient descent without STE insta-
bility. According to our extensive experiments, NIPQ out-
performs existing quantization algorithms in various vision
and language applications by a large margin.
|
1. Introduction
Neural network quantization is a representative opti-
mization technique that reduces the memory footprint by
storing the activation and weight in a low-precision do-
main. In addition, when hardware acceleration is available
(e.g., low-precision arithmetics [30, 41, 43, 48] or bit-serial
operations [14, 18, 42]), it also brings a substantial perfor-
mance boost. These advantages make network inference
affordable in large-scale servers as well as embedded de-
vices [2, 32], which has helped popularize it in various ap-
plications. However, quantization has a critical disadvan-
tage, quality degradation due to limited degrees of freedom.
Figure 1. The proposed NIPQ as an alternative to QAT with STE.
The way to train the networks accurately within limited pre-
cision is critical and receiving much attention these days.
To mitigate the accuracy degradation, quantization-
aware training (QAT) has emerged that trains a neural net-
work with quantization operators to adapt to low precision.
While the quantization operator is not differentiable, the
straight-through estimator (STE) [5] allows the backprop-
agation of the quantized data based on linear approxima-
tion [19, 51]. This approximation works well in redundant
networks with moderate precision (>4-bit). Thus, not only
early studies [9, 34, 51] but also advanced ones [8, 16, 46]
have proposed diverse QAT schemes based on STE and
shown that popular neural networks (i.e., ResNet-18 [20])
can be quantized into 4-bit without accuracy loss.
However, STE-based QAT bypasses the approximated
gradient, not the true gradient, and many studies have
pointed out that it can incur instability and bias during train-
ing [19,28,29,31,35,39]. For instance, PROFIT [31] points
out that the instability is a major source of accuracy drop
for the optimized networks (e.g., MobileNet-v2/v3 [22,36]).
More recently, an alternative training scheme, pseudo-
quantization training (PQT) based on pseudo-quantization
noise (PQN), has been proposed [3, 10, 37] to address the
instability of STE-based QAT. During PQT, the behavior
of the quantization operator is simulated via PQN, and the
learnable parameters are updated based on the proxy of
quantization. While those studies are applied only to the
weight, they can stabilize the training process significantly
compared to QAT with STE and show the potential of PQT.
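As a rough illustration of the difference between STE-based fake quantization and a pseudo-quantization-noise proxy, consider the sketch below; the uniform quantizer, fixed step size, and function names are generic simplifications rather than the specific NIPQ (or LSQ) parameterization.

```python
# Sketch contrasting STE-based fake quantization with pseudo-quantization
# noise (PQN). NIPQ additionally learns truncation bounds and bit-width.
import torch

def fake_quant_ste(w, step):
    """Round to the grid; the gradient passes through unchanged (STE)."""
    w_q = torch.round(w / step) * step
    return w + (w_q - w).detach()          # forward: w_q, backward: identity

def fake_quant_pqn(w, step):
    """Simulate quantization with additive uniform noise of the same width."""
    noise = (torch.rand_like(w) - 0.5) * step
    return w + noise                       # fully differentiable proxy

# Usage: both proxies mimic b-bit quantization of weights in [-1, 1].
bits = 4
step = 2.0 / (2 ** bits - 1)
w = torch.randn(8, requires_grad=True).clamp(-1, 1)
print(fake_quant_ste(w, step))
print(fake_quant_pqn(w, step))
```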
Nevertheless, the existing PQT algorithms have room for
improvement. Various STE-based studies [8, 16, 51] have
shown that truncation contributes significantly to reduc-
ing quantization errors, but even the advanced PQT stud-
ies [10, 37] use a naive min-max quantization. Integrating
the truncation on the PQT framework can greatly reduce
quantization error and enable us to exploit a PQT scheme
for activation, which has an input-dependent distribution
and requires a static quantization range. In addition, there
is no theoretical support for whether PQT guarantees the
optimal convergence of the quantization parameters. Intu-
itive interpretation exists, but proof of whether quantization
parameters are optimized after PQT has yet to be provided.
In this paper, we propose a novel PQT method
(Figure 1) called Noise proxy-based Integrated Pseudo-
Quantization (NIPQ) that quantizes all activation and
weight based on PQN. We present a novel idea, called Noise
proxy, that shares the same quantization hyper-parameters
(e.g., truncation boundary and bit-width) with the existing
STE-based algorithm LSQ(+) [6,16]. However, noise proxy
allows updating the quantization parameters, as well as the
network parameters, via gradient descent with PQN instead
of STE. Subsequently, an arbitrary network can be opti-
mized in a mixed-precision representation without STE in-
stability. Our key contributions are summarized as follows:
• NIPQ is the first PQT that integrates truncation in ad-
dition to discretization. This extension not only further
reduces the weight quantization error but also enables
PQT for activation quantization.
• NIPQ optimizes an arbitrary network into the mixed-
precision with awareness of the given resource con-
straint without human intervention.
• We provide theoretical analysis showing that NIPQ up-
dates the quantization hyperparameters toward mini-
mizing the quantization error.
• We provide extensive experimental results to validate
the utility of NIPQ. It outperforms all existing mixed-
precision quantization schemes by a large margin.
|
Shao_Prompting_Large_Language_Models_With_Answer_Heuristics_for_Knowledge-Based_Visual_CVPR_2023
|
Abstract
Knowledge-based visual question answering (VQA) re-
quires external knowledge beyond the image to answer the
question. Early studies retrieve required knowledge from
explicit knowledge bases (KBs), which often introduces
irrelevant information to the question, hence restricting the
performance of their models. Recent works have sought to
use a large language model (i.e., GPT-3 [3]) as an implicit
knowledge engine to acquire the necessary knowledge for
answering. Despite the encouraging results achieved by
these methods, we argue that they have not fully activated
the capacity of GPT-3 as the provided input information is
insufficient. In this paper, we present Prophet—a concep-
tually simple framework designed to prompt GPT-3 with
answer heuristics for knowledge-based VQA. Specifically,
we first train a vanilla VQA model on a specific knowledge-
based VQA dataset without external knowledge. After that,
we extract two types of complementary answer heuristics
from the model: answer candidates and answer-aware
examples. Finally, the two types of answer heuristics
are encoded into the prompts to enable GPT-3 to better
comprehend the task thus enhancing its capacity. Prophet
significantly outperforms all existing state-of-the-art meth-
ods on two challenging knowledge-based VQA datasets,
OK-VQA and A-OKVQA, delivering 61.1% and 55.7%
accuracies on their testing sets, respectively.
|
1. Introduction
Recent advances in deep learning have enabled substan-
tial progress in visual question answering (VQA) which re-
quires a machine to answer free-form questions by reason-
ing about given images. Benefiting from large-scale vision-
Figure 1. Conceptual comparisons of three knowledge-based VQA frameworks using a frozen GPT-3 model [3]. While PICa [43], KAT [11], and REVIVE [22] directly feed the caption (C) and question (Q) into GPT-3 as the prompt, we argue that the information they provide for GPT-3 is insufficient and thus cannot fully activate GPT-3’s potential. In contrast, our Prophet learns a vanilla VQA model without external knowledge to produce answer heuristics, which endows GPT-3 with richer and more task-specific information for answer prediction.
language pretraining, the state-of-the-art methods have even
surpassed human level on several representative bench-
marks [1,41,48]. Despite the success of these methods, their
reasoning abilities are far from satisfactory, especially when
external knowledge is required to answer the questions.
In this situation, the task of knowledge-based VQA is
introduced to validate models’ abilities to leverage external
knowledge. Early knowledge-based VQA benchmarks
additionally provide structured knowledge bases (KBs) and
annotate required knowledge facts for all the questions
[38, 39]. More recently, benchmarks emphasizing on open-
domain knowledge have been established [29, 32], which
means KBs are no longer provided and any external knowl-
edge resource can be used for answering. We focus on the
task with open-domain knowledge in this paper.
A straightforward solution for knowledge-based VQA
is to retrieve knowledge entries from explicit KBs, e.g.,
Wikipedia and ConceptNet [23]. Then a KB-augmented
VQA model performs joint reasoning over the retrieved
knowledge, image, and question to predict the answer [7,
8, 28, 42, 51]. However, the performance of these retrieval-
based approaches is limited for two reasons: (i) the required
knowledge may not be successfully retrieved from the KBs;
and (ii) even if the required knowledge is retrieved, plenty
of irrelevant knowledge is inevitably introduced, which
hampers the learning of VQA models.
Apart from those studies using explicit KBs, another line
of research resorts to pretrained large language models, e.g.,
GPT-3 [3], as implicit knowledge engines for knowledge
acquisition. A pioneering work by PICa employs the frozen
GPT-3 model to answer the question with a formatted prompt
as its input [43]. Given a testing image-question pair,
PICa first translates the image into a caption using an off-
the-shelf captioning model. The question, caption, and a
few in-context examples are then integrated into a textual
prompt that can induce GPT-3 to predict the answer directly.
Thanks to the powerful knowledge reasoning ability of
GPT-3, PICa achieves significant performance improve-
ments compared to those retrieval-based methods using
explicit KBs. Inspired by PICa, KAT [11] and REVIVE
[22] learn KB-augmented VQA models to exploit both the
implicit knowledge from GPT-3 and explicit knowledge
from KBs for answer prediction. The synergy of the two
knowledge resources brings further improvements to their
models. Despite the promising results achieved by these
methods, they have not fully activated GPT-3 due to the
following limitations:
(i) The generated captions cannot cover all the necessary
information in the image. Consider the example in
Figure 1, the caption “a group of people walk in a city
square” contribute nothing to answering the question
“what fruit comes from these trees”. In this situation,
GPT-3 has to make an aimless and biased guess to
answer the question.
(ii) GPT-3 employs a few-shot learning paradigm that
requires a few in-context examples to adapt to new
tasks. Therefore, the choice of these examples is
critical to model performance. As reported in [43],
all its example selection strategies achieve far inferior
performance to the oracle strategy that uses the simi-
larity of ground-truth answers.
We ask: Is it possible to endow GPT-3 with some heuristics
to enhance its capacity for knowledge-based VQA?
In this paper, we present Prophet —a conceptually
simple framework designed to prompt GPT-3 with
answer heuristics for knowledge-based VQA. By answer
heuristics, we mean some promising answers that are
presented in a proper manner in the prompt. Specifically,
we introduce two types of answer heuristics, namely answer candidates and answer-aware examples, to
overcome the limitations in (i) and (ii), respectively. Given
a testing input consisting of an image and a question, the
answer candidates refer to a list of promising answers to
the testing input, where each answer is associated with a
confidence score. The answer-aware examples refer to a list
of in-context examples, where each example has a similar
answer to the testing input. Interestingly, these two types of
answer heuristics can be simultaneously obtained from any
vanilla VQA model trained on a specific knowledge-based
VQA dataset. A schematic of Prophet is illustrated at the
bottom of Figure 1.
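A rough illustration of how answer candidates and answer-aware examples could be encoded into a prompt is sketched below; the template wording, the `build_prompt` helper, and the toy confidence values are illustrative and not Prophet's exact prompt format.

```python
# Illustrative prompt assembly for answer-heuristic prompting.
def build_prompt(test_caption, test_question, candidates, examples):
    """candidates: list of (answer, confidence); examples: list of dicts."""
    lines = ["Please answer the question according to the context and the"
             " answer candidates. Each candidate has a confidence score.\n"]
    for ex in examples:                      # answer-aware in-context examples
        cand = ", ".join(f"{a} ({c:.2f})" for a, c in ex["candidates"])
        lines.append(f"Context: {ex['caption']}\n"
                     f"Question: {ex['question']}\n"
                     f"Candidates: {cand}\n"
                     f"Answer: {ex['answer']}\n")
    cand = ", ".join(f"{a} ({c:.2f})" for a, c in candidates)
    lines.append(f"Context: {test_caption}\n"
                 f"Question: {test_question}\n"
                 f"Candidates: {cand}\n"
                 f"Answer:")
    return "\n".join(lines)

# Usage with the running example from Figure 1 (toy candidates/confidences).
prompt = build_prompt(
    test_caption="a group of people walk in a city square",
    test_question="what fruit comes from these trees?",
    candidates=[("oranges", 0.64), ("apples", 0.21), ("dates", 0.09)],
    examples=[{"caption": "a fruit stand with citrus on display",
               "question": "what fruit is shown?",
               "candidates": [("oranges", 0.81), ("lemons", 0.10)],
               "answer": "oranges"}])
print(prompt)
```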
Without bells and whistles, Prophet surpasses all previ-
ous state-of-the-art single-model results on the challenging
OK-VQA and A-OKVQA datasets [29, 32], including the
heavily-engineered Flamingo-80B model trained on 1.8B
image-text pairs [1]. Moreover, Prophet is friendly to most
researchers, as our results can be reproduced using a single
GPU and an affordable number of GPT-3 invocations.
|
Rajasegaran_On_the_Benefits_of_3D_Pose_and_Tracking_for_Human_CVPR_2023
|
Abstract
In this work we study the benefits of using tracking and
3D poses for action recognition. To achieve this, we take the
Lagrangian view on analysing actions over a trajectory of
human motion rather than at a fixed point in space. Taking
this stand allows us to use the tracklets of people to pre-
dict their actions. In this spirit, first we show the benefits of
using 3D pose to infer actions, and study person-person in-
teractions. Subsequently, we propose a Lagrangian Action
Recognition model by fusing 3D pose and contextualized
appearance over tracklets. To this end, our method achieves
state-of-the-art performance on the AVA v2.2 dataset on
both pose only settings and on standard benchmark settings.
When reasoning about the action using only pose cues, our
pose model achieves +10.0 mAP gain over the correspond-
ing state-of-the-art while our fused model has a gain of +2.8
mAP over the best state-of-the-art model. Code and results
are available at: https://brjathu.github.io/LART
|
1. Introduction
In fluid mechanics, it is traditional to distinguish between
the Lagrangian and Eulerian specifications of the flow field.
Quoting the Wikipedia entry, “Lagrangian specification of
the flow field is a way of looking at fluid motion where
the observer follows an individual fluid parcel as it moves
through space and time. Plotting the position of an indi-
vidual parcel through time gives the pathline of the parcel.
This can be visualized as sitting in a boat and drifting down
a river. The Eulerian specification of the flow field is a way
of looking at fluid motion that focuses on specific locations
in the space through which the fluid flows as time passes.
This can be visualized by sitting on the bank of a river and
watching the water pass the fixed location. ”
These concepts are very relevant to how we analyze
videos of human activity. In the Eulerian viewpoint, we
would focus on feature vectors at particular locations, either
(x, y)or(x, y, z ), and consider evolution over time while
staying fixed in space at the location. In the Lagrangian
viewpoint, we would track, say a person over space-time
and track the associated feature vector across space-time. While the older literature for activity recognition, e.g.,
[11, 18, 53] typically adopted the Lagrangian viewpoint,
ever since the advent of neural networks based on 3D space-
time convolution, e.g., [50], the Eulerian viewpoint became
standard in state-of-the-art approaches such as SlowFast
Networks [16]. Even after the switch to transformer archi-
tectures [12, 52] the Eulerian viewpoint has persisted. This
is noteworthy because the tokenization step for transform-
ers gives us an opportunity to freshly examine the question,
“What should be the counterparts of words in video analy-
sis?” . Dosovitskiy et al. [10] suggested that image patches
were a good choice, and the continuation of that idea to
video suggests that spatiotemporal cuboids would work for
video as well.
On the contrary, in this work we take the Lagrangian
viewpoint for analysing human actions. This specifies that
we reason about the trajectory of an entity over time. Here,
the entity can be low-level, e.g., a pixel or a patch, or high-
level, e.g., a person. Since, we are interested in understand-
ing human actions, we choose to operate on the level of
“humans-as-entities” . To this end, we develop a method
that processes trajectories of people in video and uses them
to recognize their action. We recover these trajectories by
capitalizing on a recently introduced 3D tracking method
PHALP [43] and HMR 2.0 [19]. As shown in Figure 1
PHALP recovers person tracklets from video by lifting peo-
ple to 3D, which means that we can both link people over
a series of frames and get access to their 3D representa-
tion. Given these 3D representations of people ( i.e., 3D
pose and 3D location), we use them as the basic content of
each token. This allows us to build a flexible system where
the model, here a transformer, takes as input tokens corre-
sponding to the different people with access to their identity,
3D pose and 3D location. Having 3D location of the peo-
ple in the scene allow us to learn interaction among people.
Our model relying on this tokenization can benefit from 3D
tracking and pose, and outperforms previous baseline that
only have access to pose information [8, 45].
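A rough sketch of this tokenization, including a learned mask token for frames with missing detections, is given below; the feature dimensions, the mask-token handling, and the concatenation-based fusion are illustrative choices rather than the paper's exact architecture.

```python
# Sketch of building per-person token sequences from tracklets and running a
# transformer over them to predict per-frame actions.
import torch
import torch.nn as nn

POSE_DIM, LOC_DIM, APP_DIM, D_MODEL, N_ACTIONS = 226, 3, 256, 512, 80

class TrackActionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(POSE_DIM + LOC_DIM + APP_DIM, D_MODEL)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, D_MODEL))
        enc = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=4)
        self.head = nn.Linear(D_MODEL, N_ACTIONS)

    def forward(self, pose, loc, app, detected):
        # pose: (B,T,POSE_DIM), loc: (B,T,3), app: (B,T,APP_DIM)
        # detected: (B,T) bool, False where the tracker has no detection.
        tokens = self.proj(torch.cat([pose, loc, app], dim=-1))
        tokens = torch.where(detected.unsqueeze(-1), tokens,
                             self.mask_token.expand_as(tokens))
        return self.head(self.encoder(tokens))      # per-frame action logits

# Usage: a single 16-frame tracklet with one missing detection.
B, T = 1, 16
detected = torch.ones(B, T, dtype=torch.bool)
detected[0, 5] = False
model = TrackActionModel()
logits = model(torch.randn(B, T, POSE_DIM), torch.randn(B, T, LOC_DIM),
               torch.randn(B, T, APP_DIM), detected)  # (1, 16, 80)
```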
While the change in human pose over time is a strong
signal, some actions require more contextual information
about the appearance and the scene. Therefore, it is im-
portant to also fuse pose with appearance information from
Figure 1. Overview of our method: Given a video, first, we track every person using a tracking algorithm (e.g., PHALP [43]). Then every detection in the track is tokenized to represent a human-centric vector (e.g., pose, appearance). To represent 3D pose we use SMPL [35] parameters and estimated 3D location of the person; for contextualized appearance we use MViT [12] (pre-trained on MaskFeat [59]) features. Then we train a transformer network to predict actions using the tracks. Note that, at the second frame we do not have a detection for the blue person; at these places we pass a mask token to in-fill the missing detections.
humans and the scene, coming directly from pixels. To
achieve this, we also use the state-of-the-art models for ac-
tion recognition [12, 34] to provide complementary infor-
mation from the contextualized appearance of the humans
and the scene in a Lagrangian framework. Specifically, we
densely run such models over the trajectory of each tracklet
and record the contextualized appearance features localized
around the tracklet. As a result, our tokens include explicit
information about the 3D pose of the people and densely
sampled appearance information from the pixels, processed
by action recognition backbones [12]. Our complete system
outperforms the previous state of the art by a large margin
of 2.8 mAP on the challenging AVA v2.2 dataset.
Overall, our main contribution is introducing an ap-
proach that highlights the effects of tracking and 3D poses
for human action understanding. To this end, in this
work, we propose a Lagrangian Action Recognition with
Tracking ( LART ) approach, which utilizes the tracklets of
people to predict their action. Our baseline version lever-
ages tracklet trajectories and 3D pose representations of the
people in the video to outperform previous baselines utiliz-
ing pose information. Moreover, we demonstrate that the
proposed Lagrangian viewpoint of action recognition can
be easily combined with traditional baselines that rely only
on appearance and context from the video, achieving signif-
icant gains compared to the dominant paradigm.
|
Singh_Depth_Estimation_From_Camera_Image_and_mmWave_Radar_Point_Cloud_CVPR_2023
|
Abstract
We present a method for inferring dense depth from a
camera image and a sparse noisy radar point cloud. We
first describe the mechanics behind mmWave radar point
cloud formation and the challenges that it poses, i.e. am-
biguous elevation and noisy depth and azimuth components
that yields incorrect positions when projected onto the im-
age, and how existing works have overlooked these nuances
in camera-radar fusion. Our approach is motivated by these
mechanics, leading to the design of a network that maps
each radar point to the possible surfaces that it may project
onto in the image plane. Unlike existing works, we do not
process the raw radar point cloud as an erroneous depth
map, but query each raw point independently to associate
it with likely pixels in the image – yielding a semi-dense
radar depth map. To fuse radar depth with an image, we
propose a gated fusion scheme that accounts for the confi-
dence scores of the correspondence so that we selectively
combine radar and camera embeddings to yield a dense
depth map. We test our method on the NuScenes benchmark
and show a 10.3% improvement in mean absolute error and
a 9.1% improvement in root-mean-square error over the
best method. Code: https://github.com/nesl/
radar-camera-fusion-depth .
|
1. Introduction
Understanding the 3-dimensional (3D) structure of the
scene surrounding us can support a variety of spatial tasks
such as navigation [33] and manipulation [9]. To perform
these tasks, an agent is generally equipped with multiple
sensors, including optical i.e., RGB camera and range i.e.,
lidar, radar. The images from a camera are “dense” in that
they provide an intensity value at each pixel. Yet, they are
also sparse in that much of the image does not allow for the
Figure 1. Depth estimation using a mmWave radar and a camera.
(a) RGB image. (b) Semi-dense depth generated from associating
the radar point cloud to probable image pixels. (c) Predicted depth.
Boxes highlight mapping of radar points to objects in the scene.
establishing of unique correspondence due to occlusions or
the aperture problem to recover the 3D structure lost to the
image formation process. On the other hand, range sensors
are typically sparse in returns, but provide the 3D coordi-
nates for a subset of points in the scene i.e., a point cloud.
The goal then is to leverage the complementary properties
of both sensor observations – an RGB image and a radar
point cloud that is synchronized with the frame – to recover
the dense 3D scene i.e., camera-radar depth estimation.
While sensor platforms that pair lidar with camera have
been of recent interest i.e., in autonomous vehicles, they are
expensive in cost, heavy in payload, and have high energy
and bandwidth consumption [36] – limiting their applica-
tions at the edge [37]. On the other hand, mmWave [20]
radars are orders of magnitude cheaper, light weight, and
power efficient. Over the last few years, developments
in mmWave radars and antenna arrays [18] have signifi-
cantly advanced the performance of these sensors. Radars
are already ubiquitous in automotive vehicles as they en-
able services such as cruise control and collision warn-
ing [10]. Methods to perform 3D reconstruction with cam-
era and radar are also synergistic with the joint communica-
tion and sensing (JCAS) paradigm in 6G cellular communi-
cation [1, 43, 57], where cellular base-stations will not only
be the hub of communication, but also act as radars, to sense
the environment.
The challenge, however, is that a mmWave radar is a
point scatterer and only a very small subset of points (50
to 80 per frame [6]) in the scene, often noisy due to its
large beam width, are reflected back into the radar’s re-
ceiver. Compared to the returns of a lidar, this is 1000x
more sparse. Additionally, most radars used in automotive
vehicles either do not have enough antenna elements along
the elevation axis or do not process the radar returns along
the elevation axis. Hence, the elevation obtained from them
is either too noisy or completely erroneous (see Sec. 3).
As a result, camera-radar depth estimation requires (i)
mapping noisy radar points without elevation components
to their 3D coordinates (and calibrating their 2D im-
age coordinates i.e., radar-to-camera correspondence) and
(ii) fusing the associated sparse points with images to ob-
tain the dense depth. Existing works have projected the
radar points onto the image and “extended” the elevation
or y-coordinate in the image space as a vertical line [28] or
relied on multiple camera images to compute the optical-
flow which in-turn has been used to learn the radar-to-pixel
mapping [30]. These approaches overlook that radar returns
have noisy depth, azimuth and erroneous elevation. They
also assume access to multiple consecutive image and radar
frames, so that they may use the extra points to densify radar
returns in both the close (from past frames) and far (from
future frames) regions. In the scenario of obtaining instan-
taneous depth for a given frame, the requirement of future
frames makes it infeasible; if delays are permitted, then an
order of hundreds of milliseconds in latency is incurred.
Instead, we propose to estimate depth from a single radar
and image frame by first learning a one to many mapping of
correspondence between each radar point and the probable
surfaces in the image that it belongs to. Each radar point is
corresponded to a region within the image (based on empir-
ical error of projection due to noise) via a ROI alignment
mechanism – yielding a semi-dense radar depth map. The
information in the radar depth map is further modulated by
a gated fusion mechanism to learn the error modes in the
correspondence (due to possible noisy returns) and adap-
tively weight its contribution for image-radar fusion. The
result of which is used to augment the image information
and decoded to a dense depth map.
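A minimal sketch of such confidence-gated fusion is shown below; the channel sizes, the sigmoid gate, and the `GatedFusion` module name are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of confidence-gated fusion between image and radar-depth embeddings.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, img_ch=64, radar_ch=32):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(img_ch + radar_ch + 1, radar_ch, 3, padding=1),
            nn.Sigmoid())                       # per-pixel, per-channel gate
        self.fuse = nn.Conv2d(img_ch + radar_ch, img_ch, 3, padding=1)

    def forward(self, img_feat, radar_feat, conf):
        # conf: (B,1,H,W) confidence of the radar-to-pixel correspondence.
        g = self.gate(torch.cat([img_feat, radar_feat, conf], dim=1))
        gated_radar = g * radar_feat            # suppress unreliable radar depth
        return self.fuse(torch.cat([img_feat, gated_radar], dim=1))

# Usage on a toy quarter-resolution feature map.
B, H, W = 2, 56, 100
out = GatedFusion()(torch.randn(B, 64, H, W), torch.randn(B, 32, H, W),
                    torch.rand(B, 1, H, W))     # (2, 64, 56, 100)
```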
Our contributions are: (i) to the best of our knowledge,
the first approach to learn radar to camera correspondence
using a single radar scan and a single camera image for
mapping arbitrary number of ambiguous and noisy radar
points to the object surfaces in the image, (ii) a method to
introduce confidence scores of the mapping for fusing radar
and image modalities, and (iii) a learned gated fusion be-
tween radar and image to adaptively modulate the trade-off
between the noisy radar depth and image information. (iv)
We outperform the best method that uses multiple image
and radar frames by 10.3% in mean absolute error (MAE)
and 9.1% in root-mean-square error (RMSE) to achieve the
state of the art on the NuScenes [6] benchmark, despite only using a single image and radar frame.
|
Rao_TranSG_Transformer-Based_Skeleton_Graph_Prototype_Contrastive_Learning_With_Structure-Trajectory_Prompted_CVPR_2023
|
Abstract
Person re-identification (re-ID) via 3D skeleton data is
an emerging topic with prominent advantages. Existing
methods usually design skeleton descriptors with raw body
joints or perform skeleton sequence representation learn-
ing. However, they typically cannot concurrently model dif-
ferent body-component relations, and rarely explore useful
semantics from fine-grained representations of body joints.
In this paper, we propose a generic Transformer-based
Skeleton Graph prototype contrastive learning (TranSG)
approach with structure-trajectory prompted reconstruc-
tion to fully capture skeletal relations and valuable spatial-
temporal semantics from skeleton graphs for person re-ID.
Specifically, we first devise the Skeleton Graph Transformer
(SGT) to simultaneously learn body and motion relations
within skeleton graphs, so as to aggregate key correlative
node features into graph representations. Then, we propose
the Graph Prototype Contrastive learning (GPC) to mine
the most typical graph features (graph prototypes) of each
identity, and contrast the inherent similarity between graph
representations and different prototypes from both skeleton
and sequence levels to learn discriminative graph repre-
sentations. Last, a graph Structure-Trajectory Prompted
Reconstruction (STPR) mechanism is proposed to exploit
the spatial and temporal contexts of graph nodes to prompt
skeleton graph reconstruction, which facilitates capturing
more valuable patterns and graph semantics for person re-
ID. Empirical evaluations demonstrate that TranSG signif-
icantly outperforms existing state-of-the-art methods. We
further show its generality under different graph model-
ing, RGB-estimated skeletons, and unsupervised scenar-
ios. Our codes are available at https://github.com/Kali-
Hac/TranSG.
Figure 1. TranSG temporally and spatially masks skeleton graphs to prompt their reconstruction, while integrating full skeletal relations into contrastive learning of typical features for person re-ID.
|
1. Introduction
Person re-identification (re-ID) is a challenging task of
retrieving and matching a specific person across varying
views or scenarios, which has empowered many vital ap-
plications such as security authentication, human tracking,
and robotics [1–4]. Recently driven by economical, non-
obtrusive and accurate skeleton-tracking devices like Kinect
[5], person re-ID via 3D skeletons has attracted surging at-
tention in both academia and industry [2, 6–14]. Unlike
conventional methods that rely on visual appearance fea-
tures ( e.g., colors, silhouettes), skeleton-based person re-
ID methods model unique body and motion representations
with 3D positions of key body joints, which typically enjoy
smaller data sizes, lighter models, and more robust perfor-
mance under scale and view variations [2, 15].
Most existing methods manually extract anthropometric
descriptors and gait attributes from body-joint coordinates
[7–10, 13], or leverage sequence learning paradigms such
as Long Short-Term Memory (LSTM) [16] to model body
and motion features with skeleton sequences [2, 6, 14, 17].
However, these methods rarely explore the inherent body
relations within skeletons ( e.g., inter-joint motion correla-
tions), thus largely ignoring some valuable skeleton pat-
terns. To fill this gap, a few recent works [11, 18] con-
struct skeleton graphs to model body-component relations
in terms of structure and action. These methods typically re-
quire multi-stage non-parallel relation modeling, e.g., [18]
models collaborative relations conditioned on the structural
relations, and thus cannot simultaneously mine different underlying relations. On the other hand, they usu-
ally leverage sequence-level instances such as averaged fea-
tures of sequential graphs [11] for representation learning,
which inherently limits their ability to exploit richer graph
semantics from fine-grained ( e.g., node-level) representa-
tions. For example, there usually exist strong correlations
between nearby body-joint nodes within a local spatial-
temporal graph context, which can prompt learning unique
and recognizable skeleton patterns for person re-ID.
To address the above challenges, we propose a general
Transformer-based Skeleton Graph prototype contrastive
learning (TranSG) paradigm with structure-trajectory
prompted reconstruction as shown in Fig. 1, which inte-
grates different relational features of skeleton graphs and
contrasts representative graph features to learn discrimina-
tive representations for person re-ID. Specifically, we first
model 3D skeletons as graphs, and propose the Skeleton
Graph Transformer (SGT) to perform full-relation learning
of body-joint nodes, so as to simultaneously aggregate key
relational features of body structure and motion into effec-
tive graph representations. Second, a Graph Prototype Con-
trastive learning (GPC) approach is proposed to contrast
and learn the most representative skeleton graph features
(defined as “graph prototypes” ) of each identity. By pulling
together both sequence-level and skeleton-level graph repre-
sentations to their corresponding prototypes while pushing
apart representations of different prototypes, we encourage
the model to capture discriminative graph features and high-
level skeleton semantics ( e.g., identity-associated patterns)
for person re-ID. Last, motivated by the inherent structural
correlations and pattern continuity of body joints [2], we de-
vise a graph Structure-Trajectory Prompted Reconstruction
(STPR) mechanism to exploit the spatial-temporal context
(i.e., graph structure and trajectory) of skeleton graphs to
prompt the skeleton graph reconstruction, which facilitates
learning richer graph semantics and more effective graph
representations for the person re-ID task.
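To make the graph prototype contrastive idea above concrete, below is a minimal sketch of an InfoNCE-style prototype-contrastive objective, assuming graph representations with identity labels and per-identity prototypes taken as mean representations; the temperature and all names are illustrative and are not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(reps, labels, prototypes, temperature=0.1):
    """Pull each graph representation toward its identity prototype and push it
    away from the prototypes of other identities (InfoNCE-style).

    reps:       (N, D) graph representations (skeleton- or sequence-level)
    labels:     (N,)   identity index of each representation
    prototypes: (K, D) one prototype vector per identity
    """
    reps = F.normalize(reps, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    logits = reps @ prototypes.t() / temperature        # (N, K) similarities
    return F.cross_entropy(logits, labels)

# Toy usage: prototypes taken as the mean representation of each identity.
N, D, K = 8, 16, 4
reps = torch.randn(N, D)
labels = torch.randint(0, K, (N,))
prototypes = torch.stack([reps[labels == k].mean(0) if (labels == k).any()
                          else torch.zeros(D) for k in range(K)])
loss = prototype_contrastive_loss(reps, labels, prototypes)
```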
Our key contributions can be summarized as follows:
• We present a generic TranSG paradigm to learn ef-
fective representations from skeleton graphs for per-
son re-ID. To the best of our knowledge, TranSG is
the first transformer paradigm that unifies skeletal re-
lation learning and skeleton graph contrastive learning
specifically for skeleton-based person re-ID.
• We devise a skeleton graph transformer (SGT) to fully
capture relations in skeleton graphs and integrate key
correlative node features into graph representations.
• We propose the graph prototype contrastive learning (GPC) to contrast and learn the most representa-
tive graph features and identity-related semantics from
both skeleton and sequence levels for person re-ID.
• We devise the graph structure-trajectory prompted re-
construction (STPR) to exploit spatial-temporal graph
contexts for reconstruction, so as to capture more key
graph semantics and unique features for person re-ID.
Extensive experiments on five public benchmarks show
that TranSG prominently outperforms existing state-of-the-
art methods and is highly scalable to be applied to different
graph modeling, RGB-estimated or unlabeled skeleton data.
|
Ren_Masked_Jigsaw_Puzzle_A_Versatile_Position_Embedding_for_Vision_Transformers_CVPR_2023
|
Abstract
Position Embeddings (PEs), an arguably indispensable
component in Vision Transformers (ViTs), have been shown
to improve the performance of ViTs on many vision tasks.
However, PEs have a potentially high risk of privacy leak-
age since the spatial information of the input patches is
exposed. This caveat naturally raises a series of interesting
questions about the impact of PEs on accuracy, privacy, pre-
diction consistency, etc. To tackle these issues, we propose a
Masked Jigsaw Puzzle (MJP) position embedding method.
In particular, MJP first shuffles the selected patches via our
block-wise random jigsaw puzzle shuffle algorithm, and their
corresponding PEs are occluded. Meanwhile, for the non-
occluded patches, the PEs remain the original ones but their
spatial relation is strengthened via our dense absolute lo-
calization regressor. The experimental results reveal that 1)
PEs explicitly encode the 2D spatial relationship and lead
to severe privacy leakage problems under gradient inversion
attack; 2) Training ViTs with the naively shuffled patches can
alleviate the problem, but it harms the accuracy; 3) Under
a certain shuffle ratio, the proposed MJP not only boosts
the performance and robustness on large-scale datasets ( i.e.,
ImageNet-1K and ImageNet-C, -A/O) but also improves the
privacy preservation ability under typical gradient attacks
by a large margin. The source code and trained models are
available at https://github.com/yhlleo/MJP .
|
1. Introduction
Transformers [38] demonstrated their overwhelming
power on a broad range of language tasks ( e.g., text classifi-
cation, machine translation, or question answering [22, 38]),
and the vision community follows it closely and extends it for
vision tasks, such as image classification [7,37], object detec-
tion [2, 51], segmentation [47], and image generation [3, 24].
Most of the previous ViT-based methods focus on design-
ing different pre-training objectives [9, 11, 12] or variants
of self-attention mechanisms [28, 39, 40]. By contrast, PEs
Figure 1. Low-dimensional projection of position embeddings from DeiT-S [37]. (a) The 2D UMAP projection; it shows that reverse-diagonal indices have the same order as the input patch positions. (b) The 3D PCA projection; it also shows that the position information is well captured by PEs. Note that the embedding of index 1
(highlighted in red ) corresponds to the [CLS] embedding that does
not embed any positional information.
receive less attention from the research community and have
not been well studied yet. In fact, apart from the attention
mechanism, how to embed the position information into the
self-attention mechanism is also one indispensable research
topic in Transformer. It has been demonstrated that with-
out the PEs, the pure language Transformer encoders ( e.g.,
BERT [6] and RoBERTa [25]) may not well capture the
meaning of positions [43]. As a consequence, the meaning
of a sentence cannot be well represented [8]. A similar phenomenon can also be observed in the vision community. Dosovitskiy et al. [7] reveal that removing PEs causes performance degradation. Moreover, Lu et al. [29] analyzed
this issue from the perspective of user privacy and demon-
strated that the PEs place the model at severe privacy risk
since it leaks the clues of reconstructing sequential patches
back to images. Hence, it is very interesting and necessary
to understand how the PEs affect the accuracy, privacy, and
consistency in vision tasks. Here the consistency means
whether the predictions of the transformed/shuffled image
are consistent with the ones of the original image.
To study the aforementioned effects of PEs, the key is to
figure out what explicitly PEs learn about positions from
input patches. To answer this question, we project the
high-dimensional PEs into the 2D and 3D spaces using
Figure 2. (a) The original input patches; (b) totally randomly shuffled input patches; (c) partially randomly shuffled input patches; (d) an overview of the proposed MJP. Note that we show the randomly shuffled patches and their corresponding unknown position embedding in green, and the rest in blue. DAL denotes the self-supervised dense absolute localization regression constraint.
Uniform Manifold Approximation & Projection (UMAP) [30]
and PCA, respectively. Then for the first time, we visually
demonstrate that the PEs can learn the 2D spatial relation-
ship very well from the input image patches (the relation is
visualized in Fig. 1). We can see that the PEs are distributed
in the same order as the input patch positions. Therefore,
we can easily obtain the actual spatial position of the input
patches by analyzing the PEs. Now it explains why PEs can
bring the performance gain for ViTs [7]. This is because the
spatial relation in ViTs works similarly to the inherent
inductive bias in CNNs ( i.e., it models the local visual struc-
ture) [45]. However, these correctly learned spatial relations
are unfortunately the exact key factor resulting in the privacy
leakage [29].
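As a rough illustration of the projection analysis above, the following sketch reduces a ViT's learned position embeddings to 2D/3D with PCA (and, optionally, UMAP if the umap-learn package is installed); the tensor shape and the [CLS]-at-index-0 convention are assumptions in the style of timm/DeiT models, and the random tensor merely stands in for trained embeddings.

```python
import torch
from sklearn.decomposition import PCA

# Hypothetical: `pos_embed` stands in for a trained ViT's position embeddings,
# shaped (1, 1 + num_patches, dim) with the [CLS] embedding at index 0
# (as in timm-style DeiT models); random values keep this snippet runnable.
num_patches, dim = 14 * 14, 384
pos_embed = torch.randn(1, 1 + num_patches, dim)

patch_pe = pos_embed[0, 1:].detach().cpu().numpy()   # drop the [CLS] embedding
coords_2d = PCA(n_components=2).fit_transform(patch_pe)
coords_3d = PCA(n_components=3).fit_transform(patch_pe)

# Optional, if umap-learn is installed:
# import umap
# coords_umap = umap.UMAP(n_components=2).fit_transform(patch_pe)
```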
Based on these observations, one straightforward idea to
protect the user privacy is to provide ViTs with the randomly
transformed ( i.e., shuffled) input data. The underlying intu-
ition is that the original correct spatial relation within input
patches will be violated via such a transformation. There-
fore, we transform the previous visually recognizable input
image patches x shown in Fig. 2(a) to their unrecognizable counterpart x̃ depicted in Fig. 2(b) during training. The ex-
perimental results show that such a strategy can effectively
alleviate the privacy leakage problem. This is reasonable
since the reconstruction of the original input data during the
attack is misled by the incorrect spatial relation. However,
the side-effect is that this leads to a severe accuracy drop.
Meanwhile, we noticed that such a naive transformation
strategy actually boosts the consistency [31, 34, 44], albeit at the cost of accuracy. Note that here the consistency denotes whether the predictions of the original and transformed (i.e.,
shuffled) images are consistent. Given the original input
patches x and its corresponding transformed (i.e., shuffled) counterpart x̃, we say that the predictions are consistent if arg max P(F(x)) = arg max P(F(x̃)), where F refers to the ViT model and P denotes the predicted logits.
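A minimal sketch of this consistency criterion, assuming a classification model and a pre-shuffled copy of the input batch; the names are illustrative.

```python
import torch

@torch.no_grad()
def prediction_consistency(model, x, x_shuffled):
    """Fraction of samples whose top-1 prediction is unchanged after the
    patch-shuffling transform, i.e. argmax P(F(x)) == argmax P(F(x_tilde))."""
    pred_orig = model(x).argmax(dim=-1)
    pred_shuf = model(x_shuffled).argmax(dim=-1)
    return (pred_orig == pred_shuf).float().mean().item()
```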
These observations hint that there might be a trade-off
solution that makes ViTs take the best from both worlds
(i.e., both the accuracy and the consistency). Hence, we propose the Masked Jigsaw Puzzle (MJP) position embedding
method. Specifically, there are four core procedures in the
MJP: (1) We first utilize a block-wise masking method [1]
to randomly select a partial of the input sequential patches;
(2) Next, we apply jigsaw puzzle to the selected patches
(i.e., shuffle the orders); (3) After that, we use a shared un-
known position embedding for the shuffled patches instead
of using their original PEs; (4) To well maintain the posi-
tion prior of the unshuffled patches, we introduce a dense
absolute localization (DAL) regressor to strengthen their
spatial relationship in a self-supervised manner. We simply
demonstrate the idea of the first two procedures in Fig. 2(c),
and an overview of the proposed MJP method is available in
Fig. 2(d). In summary, our main contributions are:
•We demonstrate that although PEs can boost the accuracy,
the consistency against image patch shuffling is harmed.
Therefore, we argue that studying PEs is a valuable re-
search topic for the community.
•We propose a simple yet efficient Masked Jigsaw Puzzle
(MJP) position embedding method which is able to find a
balance among accuracy, privacy, and consistency.
•Extensive experimental results show that MJP boosts the
accuracy on regular large-scale datasets ( e.g., ImageNet-
1K [32]) and the robustness largely on ImageNet-C [18],
-A/O [19]. One additional bonus of MJP is that it can
improve the privacy preservation ability under typical gra-
dient attacks by a large margin.
|
Raistrick_Infinite_Photorealistic_Worlds_Using_Procedural_Generation_CVPR_2023
|
Abstract
We introduce Infinigen, a procedural generator of photo-
realistic 3D scenes of the natural world. Infinigen is entirely
procedural: every asset, from shape to texture, is generated
from scratch via randomized mathematical rules, using no
external source and allowing infinite variation and composi-
tion. Infinigen offers broad coverage of objects and scenes
in the natural world including plants, animals, terrains, and
natural phenomena such as fire, cloud, rain, and snow. In-
finigen can be used to generate unlimited, diverse training
data for a wide range of computer vision tasks including
object detection, semantic segmentation, optical flow, and
3D reconstruction. We expect Infinigen to be a useful re-
source for computer vision research and beyond. Please visit
infinigen.org for videos, code and pre-generated data.
|
1. Introduction
Data, especially large-scale labeled data, has been a criti-
cal driver of progress in computer vision. At the same time,
data has also been a major challenge, as many important
vision tasks remain starved of high-quality data. This is es-
pecially true for 3D vision, where accurate 3D ground truth
is difficult to acquire for real images.
Synthetic data from computer graphics is a promis-
ing solution to this data challenge. Synthetic data can
be generated in unlimited quantity with high-quality la-
bels. Synthetic data has been used in a wide range of
tasks [10, 18,44,46,52,55,65], with notable successes in 3D
vision, where models trained on synthetic data can perform
well on real images zero-shot [31, 51,75–78, 82].
Despite its great promise, the use of synthetic data in
computer vision remains much less common than real
images. We hypothesize that a key reason is the limited
diversity of 3D assets: for synthetic data to be maximally
useful, it needs to capture the diversity and complexity of the
real world, but existing freely available synthetic datasets
are mostly restricted to a fairly narrow set of objects and
shapes, often driving scenes (e.g. [35, 65]) or human-made
objects in indoor environments (e.g. [25, 53]).
In this work, we seek to substantially expand the cover-
age of synthetic data, particularly objects and scenes from
the natural world. We introduce Infinigen, a procedural
generator of photorealistic 3D scenes of the natural world.
Compared to existing sources of synthetic data, Infinigen is
unique due to the combination of the following properties:
•Procedural: Infinigen is not a finite collection of 3D
assets or synthetic images; instead, it is a generator
that can create infinitely many distinct shapes, textures,
materials, and scene compositions. Every asset, from
shape to texture, is entirely procedural, generated from
scratch via randomized mathematical rules that allow
infinite variation and composition. This sets it apart
from datasets or dataset generators that rely on external
assets.
•Diverse: Infinigen offers a broad coverage of objects
and scenes in the natural world, including plants,
animals, terrains, and natural phenomena such as fire,
cloud, rain, and snow.
•Photorealistic: Infinigen creates highly photorealistic
3D scenes. It achieves high photorealism by procedu-
rally generating not only coarse structures but also fine
details in geometry and texture.
•Real geometry: unlike in video game assets, which
often use texture maps to fake geometrical details (e.g.
a surface appears rugged but is in fact flat), all geomet-
ric details in Infinigen are real. This ensures accurate
geometric ground truth for 3D reconstruction tasks.
•Free and open-source: Infinigen builds on top of
Blender [17], a free and open-source graphics tool.
Infinigen’s code is released for free under the GPL
license, same as Blender. Anyone can freely use
Infinigen to obtain unlimited assets and renders‡.
‡The output of GPL code is generally not covered by GPL. See www.gnu.org/licenses/gpl-faq.en.html#WhatCaseIsOutputGPL
Figure 1. Randomly generated, non cherry-picked images produced by our system. From top left to bottom right: Forest, River, Underwater,
Caves, Coast, Desert, Mountain and Plains. See the supplement for larger, higher resolution samples.
Infinigen focuses on the natural world for two reasons.
First, accurate perception of natural objects is demanded
by many applications, including geological survey, drone
navigation, ecological monitoring, rescue robots, agriculture
automation, but existing synthetic datasets have limited cov-
erage of the natural world. Second, we hypothesize that the
natural world alone can be sufficient for pretraining powerful
“foundation models”—the human visual system evolved
entirely in the natural world; exposure to human-made ob-
jects was likely unnecessary.
Infinigen is useful in many ways. It can serve as a genera-
tor of unlimited training data for a wide range of computer
vision tasks, including object detection, semantic segmen-
tation, pose estimation, 3D reconstruction, view synthesis,
and video generation. Because users have access to all the
procedural rules and parameters underlying each 3D scene,
Infinigen can be easily customized to generate a large variety
of task-specific ground truth. Infinigen can also serve as a
generator of 3D assets, which can be used to build simulated
environments for training physical robots as well as virtual
embodied agents. The same 3D assets are also useful for 3D
printing, game development, virtual reality, film production,
and content creation in general.
We construct Infinigen on top of Blender [17], a graphics
system that provides many useful primitives for procedural
generation. Utilizing these primitives we design and imple-
ment a library of procedural rules to cover a wide range of
natural objects and scenes. In addition, we develop utili-
ties that facilitate creation of procedural rules and enable
all Blender users including non-programmers to contribute;
the utilities include a transpiler that automatically converts Blender node graphs (intuitive visual representation of pro-
cedural rules often used by Blender artists) to Python code.
We also develop utilities to render synthetic images and ex-
tract common ground truth labels including depth, occlusion
boundaries, surface normals, optical flow, object category,
bounding boxes, and instance segmentation. Constructing
Infinigen involves substantial software engineering: the lat-
est main branch of Infinigen codebase consists of 40,485
lines of code.
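For readers who want to extract similar ground-truth labels themselves, a minimal sketch using Blender's Python API (bpy) is shown below; the default view-layer name, the chosen passes, and the output path are assumptions about a stock Blender scene, not Infinigen's actual rendering pipeline.

```python
import bpy  # only available inside Blender's bundled Python interpreter

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

view_layer = scene.view_layers["ViewLayer"]   # default layer name (assumption)
view_layer.use_pass_z = True                  # depth
view_layer.use_pass_normal = True             # surface normals
view_layer.use_pass_vector = True             # motion vectors / optical flow
view_layer.use_pass_object_index = True       # per-object index for instance masks

scene.render.filepath = "/tmp/render.png"     # illustrative output path
bpy.ops.render.render(write_still=True)
```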
In this paper, we provide a detailed description of our
procedural system. We also perform experiments to validate
the quality of the generated synthetic data; our experiments
suggest that data from Infinigen is indeed useful, especially
for bridging gaps in the coverage of natural objects. Finally,
we provide an analysis on computational costs including a
detailed profiling of the generation pipeline.
We expect Infinigen to be a useful resource for computer
vision research and beyond. In future work, we intend to
make Infinigen a living project that, through open-source
collaboration with the whole community, will expand to
cover virtually everything in the visual world.
|
Shen_DiGA_Distil_To_Generalize_and_Then_Adapt_for_Domain_Adaptive_CVPR_2023
|
Abstract
Domain adaptive semantic segmentation methods com-
monly utilize stage-wise training, consisting of a warm-up
and a self-training stage. However, this popular approach
still faces several challenges in each stage: for warm-up,
the widely adopted adversarial training often results in lim-
ited performance gain, due to blind feature alignment; for
self-training, finding proper categorical thresholds is very
tricky. To alleviate these issues, we first propose to re-
place the adversarial training in the warm-up stage by a
novel symmetric knowledge distillation module that only ac-
cesses the source domain data and makes the model do-
main generalizable. Surprisingly, this domain generaliz-
able warm-up model brings substantial performance im-
provement, which can be further amplified via our pro-
posed cross-domain mixture data augmentation technique.
Then, for the self-training stage, we propose a threshold-
free dynamic pseudo-label selection mechanism to ease the
aforementioned threshold problem and make the model bet-
ter adapted to the target domain. Extensive experiments
demonstrate that our framework achieves remarkable and
consistent improvements compared to the prior arts on
popular benchmarks. Codes and models are available at
https://github.com/fy-vision/DiGA
|
1. Introduction
Semantic segmentation [7,42,45,82] is an essential com-
ponent in autonomous driving [20], image editing [54, 79],
medical imaging [60], etc. However, for images in a spe-
cific domain, training deep neural networks [35, 37, 38, 62]
for semantic segmentation often requires a vast amount of
pixel-wisely annotated data, which is expensive and la-
borious. Therefore, domain adaptive semantic segmen-
tation ,i.e. learning semantic segmentation from a la-
belled source domain (either virtual data or an existing
dataset) and then performing unsupervised domain adapta-
tion (UDA) [3,15,21,46,63] to the target domain, becomes
an important research topic. Yet the remaining challenge is
the severe model performance degradation caused by the vi-
sual domain gap between source and target domain data. In
this work, we tackle this domain adaptive semantic segmen-
tation problem, with the goal of aligning correct categorical
information pixel-wisely from the labelled source domain
onto the unlabelled target domain.
Currently, stage-wise training, composed of a warm-up
and a self-training stage, has been widely adopted in do-
main adaptive semantic segmentation [1,41,50,80,81,84] as
it stabilizes the domain adaptive learning [81] and thus re-
duces the performance drop across domains. However, the
gap is far from being closed and, in this work, we identify
that there is still a large space to improve in both warm-up
and self-training.
Regarding warm-up , recent works in this field [41, 50,
80, 81, 84] mostly adopt adversarial training [21, 67, 68] as
their basic strategy, which usually contributes to limited
adaptation improvements. Without knowing the target do-
main labels, this adversarial learning proposes to align the
overall feature distributions across domains. Note that this
alignment is class-unaware and fails to guarantee the fea-
tures from the same semantic category are well aligned be-
tween the source and target domain, thus being sub-optimal
as a warm-up strategy.
In contrast, we take an alternative perspective to improve
the warm-up: simply enhancing the model’s domain gener-
alizability without considering target data. To be specific,
we introduce a pixel-wise symmetric knowledge distillation
technique. The benefits are threefold: i. knowledge distil-
lation is performed on the source domain where ground-
truths are available, so the learning becomes class-aware,
which avoids the blind alignment as observed in adversarial
training; ii. the soft labels created in the process of distilla-
tion can effectively avoid the model overfitting to domain-
specific bias [71] and help to learn more generalized fea-
tures across domains; iii. our symmetric proposal ensures
the bidirectional consistency between the original source
view and its augmented view, leading to more generaliz-
Figure 1. A systematic overview of the DiGA framework. For the
warm-up stage, instead of aligning features between the two do-
mains from the beginning, we propose to first make the model
generalizable to an unseen domain through our symmetric distil-
lation scheme, which can be achieved even without access to the
target domain data. Coupled with our target-aware CrDoMix data
augmentation technique, a warm-up model of higher quality can be
obtained. To make the model better adapted to the target domain,
a threshold-free self-training stage is empowered by checking the
consensus between feature-induced labels and probability-based
labels that are initialized from the warm-up model. CrDoMix also
contributes to the update of domain-generalized class centroids to
strengthen the self-training stage.
able model performance in the warm-up stage. Our method
achieves a significant improvement from 45.2 to 48.9 mIoU
compared to adversarial training; meanwhile, it outperforms
existing arts on domain generalized segmentation.
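A minimal sketch of a pixel-wise symmetric distillation term of this kind, written as a bidirectional KL between the softened predictions of the original source view and its augmented view; the temperature and the exact normalization are illustrative assumptions rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def symmetric_pixel_distillation(logits_orig, logits_aug, tau=2.0):
    """Bidirectional (symmetric) pixel-wise KL between the predictions of the
    original source view and its augmented view.

    logits_orig, logits_aug: (B, C, H, W) segmentation logits.
    """
    p = F.log_softmax(logits_orig / tau, dim=1)
    q = F.log_softmax(logits_aug / tau, dim=1)
    kl_pq = F.kl_div(q, p.exp(), reduction='batchmean')  # pull aug toward orig
    kl_qp = F.kl_div(p, q.exp(), reduction='batchmean')  # pull orig toward aug
    return 0.5 * (kl_pq + kl_qp) * tau * tau
```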
We further observe that making the data augmentation
target-aware can help the model explore target domain char-
acteristics and improve the adaptation. Hence, we propose
cross-domain mixture (CrDoMix) data augmentation to bet-
ter condition our warm-up model to the next stage.
In terms of self-training on the target domain, many
works [1, 41, 50, 80, 81, 84] manage to optimize their self-
training stage by finding proper thresholds for pseudo-
labelling, which is onerous and has so far yielded limited gains. In practice, however, self-training methods often get
trapped into a performance bottleneck because the search
for categorical thresholds is regarded as a trade-off be-
tween quantity and quality. Larger thresholds lead to in-
sufficient learning, whereas smaller ones introduce noisy
pseudo-labels in training.
To handle this, we propose bilateral-consensus pseudo-
supervision, a threshold-free technique which selects
pseudo-labels dynamically by checking the consensus be-
tween feature-induced and probability-based labels. The
feature-induced labels come from pixel-to-centroid voting,
focusing on local contexts of an input. The probability-
based labels from the warm-up model are better at captur-
ing global and regional contexts by the design nature of se-
mantic segmentation architectures. Hence, by checking the
consensus of these two types of labels generated by differ-
ent mechanisms, the obtained pseudo-labels provide reli-
able and comprehensive estimation of target domain labels.
Thus, an efficient self-training stage is enabled, leading to
substantial performance gain against prior arts.
In this work, we present DiGA, a novel framework for domain adaptive semantic segmentation (see Fig. 1). Our contributions can be summarized as follows:
• We introduce pixel-wise symmetric knowledge distil-
lation solely on the source domain, which results in a
stronger warm-up model and turns out to be a better
option than its adversarial counterpart.
• We introduce cross-domain mixture (CrDoMix), a
novel data augmentation technique that brings further
improvement to our warm-up model performance;
• We propose bilateral-consensus pseudo-supervision,
empowering efficient self-training while abandoning
categorical thresholds;
• Our method achieves remarkable and consistent per-
formance gain over prior arts on popular benchmarks,
e.g., GTA5- and Synthia-to-Cityscapes adaptation.
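To make the threshold-free bilateral-consensus pseudo-supervision described above more concrete, here is a minimal sketch that keeps a pixel's pseudo-label only where the probability-based prediction agrees with a nearest-centroid (feature-induced) prediction; the cosine-similarity voting and all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def bilateral_consensus_pseudo_labels(probs, feats, centroids, ignore_index=255):
    """Threshold-free pseudo-label selection by consensus of two label sources.

    probs:     (B, C, H, W) softmax predictions of the warm-up model
    feats:     (B, D, H, W) pixel features of the target image
    centroids: (C, D)       class feature centroids (e.g. built on the source domain)
    """
    prob_label = probs.argmax(dim=1)                        # (B, H, W) probability-based labels

    feats = F.normalize(feats, dim=1)
    centroids = F.normalize(centroids, dim=1)
    sim = torch.einsum('bdhw,cd->bchw', feats, centroids)   # pixel-to-centroid cosine voting
    feat_label = sim.argmax(dim=1)                          # (B, H, W) feature-induced labels

    pseudo = prob_label.clone()
    pseudo[prob_label != feat_label] = ignore_index         # keep only consensus pixels
    return pseudo
```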
|
Shi_Transformer_Scale_Gate_for_Semantic_Segmentation_CVPR_2023
|
Abstract
Effectively encoding multi-scale contextual information
is crucial for accurate semantic segmentation. Most of the
existing transformer-based segmentation models combine
features across scales without any selection, where features
on sub-optimal scales may degrade segmentation outcomes.
Leveraging from the inherent properties of Vision Trans-
formers, we propose a simple yet effective module, Trans-
former Scale Gate (TSG), to optimally combine multi-scale
features. TSG exploits cues in self and cross attentions in
Vision Transformers for the scale selection. TSG is a highly
flexible plug-and-play module, and can easily be incorpo-
rated with any encoder-decoder-based hierarchical vision
Transformer. Extensive experiments on the Pascal Context,
ADE20K and Cityscapes datasets demonstrate that the pro-
posed feature selection strategy achieves consistent gains.
|
1. Introduction
Semantic segmentation aims to segment all objects in-
cluding ‘things’ and ‘stuff’ in an image and determine their
categories. It is a challenging task in computer vision, and
serves as a foundation for many higher-level tasks, such as
scene understanding [15,34], object recognition [23,29] and
vision+language [30, 33]. In recent years, Vision Trans-
formers based on encoder-decoder architectures have be-
come a new paradigm for semantic segmentation. The en-
coder consists of a series of multi-head self-attention mod-
ules to capture features of image patches, while the decoder
has both self- and cross-attention modules to generate seg-
mentation masks. Earlier works [47] usually use Vision
Transformers designed for image classification to tackle se-
mantic segmentation, and only encode single-scale features.
However, different from image classification where we only
need to recognize one object in an image, semantic segmen-
tation is generally expected to extract multiple objects of
different sizes. It is hard to segment and recognize these
varying-sized objects with only single-scale features. Some methods [20, 37] attempt to leverage multi-scale
features to solve this problem. They first use hierarchical
transformers such as Swin Transformer [20] and PVT [37]
to extract multi-scale image features, and then combine
them, e.g. by the pyramid pooling module (PPM) [46] or
the seminal feature pyramid network (FPN) [18] borrowed
from CNNs. We argue that such feature combinations can-
not effectively select an appropriate scale for each image
patch. Features on sub-optimal scales may impact the seg-
mentation performance. To address this issue, CNN-based
methods [3, 8, 14, 32] design learnable models to select the
optimal scales. Nevertheless, these models are complex, ei-
ther using intricate mechanisms [3,8,14] or requiring scale labels [32], which decreases network efficiency and may cause over-fitting.
In this paper, we exploit the inherent characteristics of
Vision Transformers to guide the feature selection process.
Specifically, our design is inspired from the following ob-
servations: (1)As shown in Fig. 1(a), the self-attention
module in the transformer encoder learns the correlations
among image patches. If an image patch is correlated to
many patches, small-scale (low-resolution) features may be
preferred for segmenting this patch, since small-scale fea-
tures have large effective receptive fields [42] to involve as
many of these patches as possible, and vice versa. (2)In
the transformer decoder, the cross-attention module mod-
els correlations between patch-query pairs, as shown in
Fig. 1(b), where the queries are object categories. If an
image patch is correlated to multiple queries, it indicates
that this patch may contain multiple objects and need large-
scale (high-resolution) features for fine-grained segmenta-
tion. On the contrary, an image patch correlated to only a
few queries may need small-scale (low-resolution) features
to avoid over-segmentations. (3)The above observations
can guide design choices for not only category-query-based
decoders but also object-query-based decoders. In object-
query-based decoders, each query token corresponds to an
object instance. Then, the cross-attention module extracts
relationships between patch-object pairs. Many high cross-
attention values also indicate that the image patch may con-
Figure 1. Illustration of Vision Transformers for semantic segmentation. (a) The image is divided into multiple patches and input into the encoder. The encoder contains L_enc blocks and outputs features for every image patch. (b) The decoder takes learnable query tokens as inputs, where each query corresponds to an object category. The decoder with L_dec blocks outputs query embeddings. Finally,
segmentation results are generated by multiplying the image patch features and the query embeddings. In this paper, we exploit inner
properties of these attention maps for multi-scale feature selection.
tain multiple objects and require high-resolution features.
Note that these are general observations, which do not mean
that these are the only relationships between transformer
correlation maps and feature scales.
From these observations and analyses, we propose a
novel Transformer Scale Gate (TSG) module that takes cor-
relation maps in self- and cross-attention modules as inputs,
and predicts weights of multi-scale features for each image
patch. Our TSG is a simple design, with a few lightweight
linear layers, and thus can be plug-and-play across diverse
architectures. We further extend TSG to a TSGE module
and a TSGD module, which leverage our TSG weights to
optimize multi-scale features in transformer encoders and
decoders, respectively. TSGE employs a pyramid struc-
ture to refine multi-scale features in the encoder by the
self-attention guidance, while TSGD fuses these features in
the decoder based on the cross-attention guidance. Experi-
mental results on three datasets, Pascal Context, ADE20K,
Cityscapes show that the proposed modules consistently
achieve gains, up to 4.3% in terms of mIoU, compared with
Swin Transformer based baseline [20].
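A minimal sketch of a scale-gating head in the spirit of TSG: simple per-patch statistics of an attention map are mapped by a lightweight linear layer to softmax weights over S scales, which then fuse multi-scale patch features; the chosen statistics and shapes are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ScaleGate(nn.Module):
    """Predict per-patch weights over S feature scales from attention cues."""
    def __init__(self, num_scales, stat_dim=2):
        super().__init__()
        self.fc = nn.Linear(stat_dim, num_scales)

    def forward(self, attn, multi_scale_feats):
        # attn: (B, N, N) patch-to-patch (or patch-to-query) attention map
        # multi_scale_feats: (B, S, N, D) features already resampled to N patches
        stats = torch.stack([attn.mean(-1), attn.amax(-1)], dim=-1)   # (B, N, 2)
        weights = self.fc(stats).softmax(dim=-1)                      # (B, N, S)
        return torch.einsum('bns,bsnd->bnd', weights, multi_scale_feats)

# Toy usage
B, N, D, S = 2, 196, 64, 3
gate = ScaleGate(num_scales=S)
fused = gate(torch.rand(B, N, N), torch.randn(B, S, N, D))            # (B, N, D)
```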
Our main contributions can be summarized as follows:
(1)To the best of our knowledge, this is the first work to
exploit inner properties of transformer attention maps for
multi-scale feature selection in semantic segmentation. We
analyze the properties of Vision Transformers and design
TSG for the selection. (2)We propose TSGE and TSGD
in the encoder and decoder in transformers, respectively,which leverage our TSG to improve the semantic segmen-
tation performance. (3)Our extensive experiments and ab-
lations show that the proposed modules obtain significant
improvements on three semantic segmentation datasets.
|
Rempe_Trace_and_Pace_Controllable_Pedestrian_Animation_via_Guided_Trajectory_Diffusion_CVPR_2023
|
Abstract
We introduce a method for generating realistic pedes-
trian trajectories and full-body animations that can be con-
trolled to meet user-defined goals. We draw on recent ad-
vances in guided diffusion modeling to achieve test-time
controllability of trajectories, which is normally only asso-
ciated with rule-based systems. Our guided diffusion model
allows users to constrain trajectories through target way-
points, speed, and specified social groups while accounting
for the surrounding environment context. This trajectory
diffusion model is integrated with a novel physics-based hu-
manoid controller to form a closed-loop, full-body pedes-
trian animation system capable of placing large crowds in
a simulated environment with varying terrains. We fur-
ther propose utilizing the value function learned during RL
training of the animation controller to guide diffusion to
produce trajectories better suited for particular scenarios
such as collision avoidance and traversing uneven terrain.
Video results are available on the project page .
|
1. Introduction
Synthesizing high-level human behavior, in the form
of 2D positional trajectories, is at the core of modeling
pedestrians for applications like autonomous vehicles, ur-
ban planning, and architectural design. An important fea-
ture of such synthesis is controllability – generating tra-
jectories that meet user-defined objectives, edits, or con-
straints. For example, a user may place specific waypoints
for characters to follow, specify social groups for pedestri-
ans to travel in, or define a social distance to maintain.
Attaining controllability is straightforward for algorith-
mic or rule-based models of human behavior, since they
have built-in objectives. In the simplest case, human tra-
jectories can be determined by the shortest paths between
control points [11], but more sophisticated heuristics have
also been developed for pedestrians [2,14], crowds [22,46],
and traffic [29, 53]. Unfortunately, algorithmically gener-
ated trajectories often appear unnatural. Learning-based ap-
proaches, on the other hand, can improve naturalness by
mimicking real-world data. These methods often focus on
short-term trajectory prediction using a single forward pass
of a neural network [1, 10, 49, 61]. However, the ability to
control these models is limited to sampling from an out-
put trajectory distribution [34, 58] or using an expensive la-
tent space traversal [45]. As a result, learning-based meth-
ods can predict implausible motions such as collisions with
obstacles or between pedestrians. This motivates another
notion of controllability – maintaining realistic trajectories
during agent-agent and agent-environment interactions.
In this work, we are particularly interested in using con-
trollable pedestrian trajectory models for character anima-
tion. We envision a simple interface where a user provides
high-level objectives, such as waypoints and social groups,
and a system converts them to physics-based full-body hu-
man motion. Compared to existing kinematic motion mod-
els [19, 27, 42], physics-based methods have the potential
to produce high-quality motion with realistic subtle behav-
iors during transitions, obstacle avoidance, traversing un-
even terrains, etc. Although there exist physics-based ani-
mation models [12, 27, 39–41, 57], controlling their behav-
ior requires using task-specific planners that need to be re-
trained for new tasks, terrains, and character body shapes.
We develop a generative model of trajectories that is data
driven, controllable, and tightly integrated with a physics-
based animation system for full-body pedestrian simulation
(Fig. 1). Our method enables generating pedestrian trajec-
tories that are realistic and amenable to user-defined objec-
tives at test time. We use this trajectory generator as a plan-
ner for a physics-based pedestrian controller, resulting in a
closed-loop controllable pedestrian animation system.
For trajectory generation, we introduce a TRAjectory Diffusion Model for Controllable PEdestrians (TRACE).
Inspired by recent successes in generating trajectories
through denoising [9, 20, 64], TRACE generates the future
trajectory for each pedestrian in a scene and accounts for
the surrounding context through a spatial grid of learned
map features that is queried locally during denoising. We
leverage classifier-free sampling [17] to allow training on
mixed annotations ( e.g., with and without a semantic map),
which improves controllability at test time by trading off
sample diversity with compliance to conditioning. User-
controlled sampling from TRACE is achieved through test-
time guidance [7, 17, 18], which perturbs the output at each
step of denoising towards the desired objective. We extend
prior work [20] by introducing several analytical loss func-
tions for pedestrians and re-formulating trajectory guidance
to operate on clean trajectory outputs from the model [18],
improving sample quality and adherence to user objectives.
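A minimal sketch of such test-time guidance on clean trajectory outputs: at each denoising step a differentiable objective (here, reaching a waypoint at the final time step) is evaluated on the predicted clean trajectory, and its gradient perturbs that prediction; the model interface, schedules, and scales are assumptions, not TRACE's implementation.

```python
import torch

def guided_denoise_step(model, x_t, t, waypoint, guide_scale=1.0):
    """One guidance-perturbed step for a batch of 2D trajectories x_t of shape (B, T, 2).

    `model(x_t, t)` is a placeholder assumed to predict the clean trajectory x0.
    """
    with torch.enable_grad():
        x_t = x_t.detach().requires_grad_(True)
        x0_pred = model(x_t, t)                              # predicted clean trajectory
        loss = ((x0_pred[:, -1, :] - waypoint) ** 2).sum()   # waypoint objective
        grad, = torch.autograd.grad(loss, x_t)
    # Perturb the clean prediction toward lower guidance loss; a diffusion
    # sampler would then re-noise this perturbed x0 to obtain x_{t-1}.
    return (x0_pred - guide_scale * grad).detach()

# Toy usage with a stand-in model that simply echoes its input.
model = lambda x, t: x
x0_guided = guided_denoise_step(model, torch.randn(4, 20, 2), t=10,
                                waypoint=torch.tensor([5.0, 0.0]))
```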
For character animation, we develop a general-purpose
Pedestrian Animation ControllER (PACER) capable of
driving physics-simulated humanoids with diverse body
types to follow trajectories from a high-level planner. We
focus on (1) motion quality: PACER learns from a small
motion database to create natural and realistic locomotion
through adversarial motion learning [40,41]; (2) terrain and
social awareness: trained in diverse terrains with other hu-
manoids, PACER learns to move through stairs, slopes, un-
even surfaces, and to avoid obstacles and other pedestrians;
(3) diverse body shapes: by training on different body types,
PACER draws on years of simulation experience to control
a wide range of characters; (4) compatibility with high-level
planners: PACER accepts 2D waypoints and can be a plug-
in model for any 2D trajectory planner.
We demonstrate a controllable pedestrian animation sys-
tem using TRACE as a high-level planner for PACER, the
low-level animator. The planner and controller operate in
a closed loop through frequent re-planning according to
simulation results. We deepen their connection by guiding TRACE with the value function learned during RL training
of PACER to improve animation quality in varying tasks.
We evaluate TRACE on synthetic [2] and real-world pedes-
trian data [3, 26, 38], demonstrating its flexibility to user-
specified and plausibility objectives while synthesizing re-
alistic motion. Furthermore, we show that our animation
system is capable and robust with a variety of tasks, terrains,
and characters. In summary, we contribute (1) a diffusion
model for pedestrian trajectories that is readily controlled at
test time through guidance, (2) a general-purpose pedestrian
animation controller for diverse body types and terrains, and
(3) a pedestrian animation system that integrates the two to
drive simulated characters in a controllable way.
|
Ramrakhya_PIRLNav_Pretraining_With_Imitation_and_RL_Finetuning_for_ObjectNav_CVPR_2023
|
Abstract
We study ObjectGoal Navigation – where a virtual robot
situated in a new environment is asked to navigate to an ob-
ject. Prior work [1] has shown that imitation learning (IL)
using behavior cloning (BC) on a dataset of human demon-
strations achieves promising results. However, this has lim-
itations – 1) BC policies generalize poorly to new states,
since the training mimics actions not their consequences,
and 2) collecting demonstrations is expensive. On the other
hand, reinforcement learning (RL) is trivially scalable, but
requires careful reward engineering to achieve desirable
behavior. We present PIRLNav, a two-stage learning scheme
for BC pretraining on human demonstrations followed by
RL-finetuning. This leads to a policy that achieves a suc-
cess rate of 65.0% on OBJECT NAV (+5.0% absolute over previous state-of-the-art).
Using this BC →RL training recipe, we present a rigorous
empirical analysis of design choices. First, we investigate
whether human demonstrations can be replaced with ‘free’
(automatically generated) sources of demonstrations, e.g.
shortest paths (SP) or task-agnostic frontier exploration (FE)
trajectories. We find that BC →RL on human demonstrations
outperforms BC →RL on SP and FE trajectories, even when
controlled for the same BC-pretraining success on TRAIN ,
and even on a subset of VAL episodes where BC-pretraining
success favors the SP or FE policies. Next, we study how
RL-finetuning performance scales with the size of the BC
pretraining dataset. We find that as we increase the size of
the BC-pretraining dataset and get to high BC accuracies,
the improvements from RL-finetuning are smaller, and that
90% of the performance of our best BC →RL policy can be
achieved with less than half the number of BC demonstra-
tions. Finally, we analyze failure modes of our OBJECT NAV
policies, and present guidelines for further improving them.
Project page: ram81.github.io/projects/pirlnav .
|
1. Introduction
Since the seminal work of Winograd [2], designing embod-
ied agents that have a rich understanding of the environment
Figure 1. OBJECT NAV success rates of agents trained using be-
havior cloning (BC) vs. BC-pretraining followed by reinforcement
learning (RL) (in blue). RL from scratch ( i.e. BC= 0) fails to get off-
the-ground. With more BC demonstrations, BC success increases,
and it transfers to even higher RL-finetuning success. But the differ-
ence between RL-finetuning vs. BC-pretraining success (in orange)
plateaus and starts to decrease beyond a certain point, indicating
diminishing returns with each additional BC demonstration.
they are situated in, can interact with humans (and other
agents) via language, and the environment via actions has
been a long-term goal in AI [3 –12]. We focus on Object-
Goal Navigation [13, 14], wherein an agent situated in a
new environment is asked to navigate to any instance of an
object category (‘find a plant’, ‘find a bed’, etc.); see Fig. 2.
OBJECT NAVis simple to explain but difficult for today’s
techniques to accomplish. First, the agent needs to be able
to ground the tokens in the language instruction to physical
objects in the environment ( e.g. what does a ‘plant’ look
like?). Second, the agent needs to have rich semantic priors
to guide its navigation to avoid wasteful exploration ( e.g.
the microwave is likely to be found in the kitchen, not the
washroom). Finally, it has to keep track of where it has been
in its internal memory to avoid redundant search.
Humans are adept at OBJECT NAV. Prior work [1] collected
a large-scale dataset of 80khuman demonstrations for OB-
JECT NAV, where human subjects on Mechanical Turk tele-
operated virtual robots and searched for objects in novel
houses. This first provided a human baseline on OBJECT -
NAV of 88.9% success rate on the Matterport3D (MP3D)
Figure 2. OBJECT NAV trajectories for policies trained with BC →RL on 1) Human Demonstrations, 2) Shortest Paths, and 3) Frontier
Exploration Demonstrations.
dataset [15] (on the VAL split, for 21 object categories, with a maximum of 500 steps) compared to the 35.4% success rate of the best
performing method [1]. This dataset was then used to train
agents via imitation learning (specifically, behavior cloning).
While this approach achieved state-of-the-art results (35.4% success rate on the MP3D VAL dataset), it has two clear limitations.
First, behavior cloning (BC) is known to suffer from poor
generalization to out-of-distribution states not seen during
training, since the training emphasizes imitating actions not
accomplishing their goals. Second and more importantly,
it is expensive and thus not scalable. Specifically, Ram-
rakhya et al. [1] collected 80k demonstrations on 56 scenes in the Matterport3D dataset, which took ∼2894 hours of human teleoperation and cost roughly $50k. A few months after [1]
was released, a new higher-quality dataset called HM3D-
Semantics v0.1 [16] became available with 120 annotated 3D scenes, and a few months after that HM3D-Semantics v0.2 added 96 additional scenes. Scaling Ramrakhya et al.’s
approach to continuously incorporate new scenes involves
replicating that entire effort again and again.
On the other hand, training with reinforcement learning (RL)
is trivially scalable once annotated 3D scans are available.
However, as demonstrated in Maksymets et al . [17], RL
requires careful reward engineering, the reward function typ-
ically used for OBJECT NAV actually penalizes exploration
(even though the task requires it), and the existing RL poli-
cies overfit to the small number of available environments.
Our primary technical contribution is PIRLNav, an approach
for pretraining with BC and finetuning with RL for OBJECT -
NAV. BC pretrained policies provide a reasonable starting
point for ‘bootstrapping’ RL and make the optimization eas-
ier than learning from scratch. In fact, we show that BC
pretraining even unlocks RL with sparse rewards. Sparse
rewards are simple (do not involve any reward engineer-
ing) and do not suffer from the unintended consequences
described above. However, learning from scratch with sparse
rewards is typically out of reach since most random action
trajectories result in no positive rewards.
While combining IL and RL has been studied in prior
work [18 –22], the main technical challenge in the context
of modern neural networks is that imitation pretraining re-
sults in weights for the policy (or actor), but not a value
function (or critic). Thus, naively initializing a new RL pol-
icy with these BC-pretrained policy weights often leads to
catastrophic failures due to destructive policy updates early
on during RL training, especially for actor-critic RL meth-
ods [23]. To overcome this challenge, we present a two-stage
learning scheme involving a critic-only learning phase first
that gradually transitions over to training both the actor and
critic. We also identify a set of practical recommendations
for this recipe to be applied to OBJECT NAV. This leads to a
PIRLNav policy that advances the state of the art on OBJECT NAV from a 60.0% success rate (in [24]) to 65.0% (+5.0%, 8.3% relative improvement).
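A minimal sketch of the two-stage schedule described above, where the BC-pretrained actor is frozen while only the critic is updated, after which both are trained jointly; the hard switch, optimizer choice, and step counts are illustrative assumptions (the paper describes a gradual transition), and the rollout/loss computation is left as a placeholder.

```python
import torch
import torch.nn as nn

def two_stage_finetune(actor: nn.Module, critic: nn.Module, rl_update_fn,
                       total_updates=20000, critic_only_updates=2000):
    """Stage 1: freeze the BC-pretrained actor and train only the critic.
    Stage 2: unfreeze the actor and train actor and critic jointly.
    A hard switch here stands in for the gradual transition used in the paper."""
    optimizer = torch.optim.Adam(
        list(actor.parameters()) + list(critic.parameters()), lr=2.5e-4)
    for update in range(total_updates):
        critic_only = update < critic_only_updates
        for p in actor.parameters():
            p.requires_grad_(not critic_only)
        # `rl_update_fn` is a placeholder: it should collect rollouts with the
        # current policy, compute the actor-critic loss, and return it.
        loss = rl_update_fn(actor, critic)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```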
Next, using this BC →RL training recipe, we conduct an
empirical analysis of design choices. Specifically, an ingre-
dient we investigate is whether human demonstrations can
be replaced with ‘free’ (automatically generated) sources
of demonstrations for OBJECT NAV,e.g. (1) shortest paths
(SP) between the agent’s start location and the closest object
instance, or (2) task-agnostic frontier exploration [25] (FE)
of the environment followed by shortest path to goal-object
upon observing it. We ask and answer the following:
1.‘Do human demonstrations capture any unique
OBJECT NAV-specific behaviors that shortest paths and
frontier exploration trajectories do not?’ Yes. We find
that BC / BC →RL on human demonstrations outperforms
BC / BC →RL on shortest paths and frontier exploration
trajectories respectively. When we control the number of
demonstrations from each source such that BC success on
TRAIN is the same, RL-finetuning when initialized from
BC on human demonstrations still outperforms the other
two.
2.‘How does performance after RL scale with BC dataset
size?’ We observe diminishing returns from RL-finetuning
as we scale the BC dataset size. This suggests that, by effectively leveraging the trade-off curve between pretraining dataset size and performance after RL-finetuning, we can get close to state-of-the-art results without investing in a large dataset of BC demonstrations.
3.‘Does BC on frontier exploration demonstrations present
similar scaling behavior as BC on human demonstrations?’
No. We find that as we scale frontier exploration demon-
strations past 70k trajectories, the performance plateaus.
Finally, we present an analysis of the failure modes of our
OBJECT NAVpolicies and present a set of guidelines for fur-
ther improving them. Our policy’s primary failure modes
are: a) Dataset issues: comprising of missing goal annota-
tions, and navigation meshes blocking the path, b) Naviga-
tion errors: primarily failure to navigate between floors, c)
Recognition failures: where the agent does not identify the
goal object during an episode, or confuses the specified goal
with a semantically-similar object.
|
Ruan_MM-Diffusion_Learning_Multi-Modal_Diffusion_Models_for_Joint_Audio_and_Video_CVPR_2023
|
Abstract
We propose the first joint audio-video generation framework
that simultaneously delivers engaging watching and listening experiences through high-quality, realistic videos. To
generate joint audio-video pairs, we propose a novel
Multi-Modal Diffusion model ( i.e.,MM-Diffusion ), with
two-coupled denoising autoencoders. In contrast to existing
single-modal diffusion models, MM-Diffusion consists of a
sequential multi-modal U-Net for a joint denoising process
by design. Two subnets for audio and video learn to grad-
ually generate aligned audio-video pairs from Gaussian
noises. To ensure semantic consistency across modalities,
we propose a novel random-shift based attention block
bridging over the two subnets, which enables efficient
cross-modal alignment, and thus reinforces the audio-video
fidelity for each other. Extensive experiments show superior
results in unconditional audio-video generation, and zero-
shot conditional tasks (e.g., video-to-audio). In particular,
we achieve the best FVD and FAD on Landscape and
AIST++ dancing datasets. Turing tests of 10k votes further
demonstrate dominant preferences for our model. The code
and pre-trained models can be downloaded at https:
//github.com/researchmm/MM-Diffusion .
|
1. Introduction
AI-powered content generation in image, video, and
audio domains has attracted extensive attention in recent
years. For example, DALL ·E 2 [33] and DiffWave [19] can
create vivid art images and produce high-fidelity audio, re-
spectively. However, such generated content can only pro-
vide single-modality experiences either in vision or audi-
tion. There remains a large gap to the abundant human-created content on the Web, which is often multi-modal and provides engaging experiences through both sight and hearing. In this paper, we take
one natural step forward to study a novel multi-modality
generation task, in particular focusing on joint audio-video
generation in the open domain.
Recent advances in generative models have been
achieved by using diffusion models [14, 40]. From task-
level perspectives, these models can be divided into two cat-
egories: unconditional and conditional diffusion models. In
particular, unconditional diffusion models generate images
and videos by taking the noises sampled from Gaussian dis-
tributions [14] as input. Conditional models usually import
the sampled noises combined with embedding features from
one modality, and generate the other modality as outputs,
such as text-to-image [30, 33, 37], text-to-video [13, 39],
audio-to-video [53], etc. However, most of the existing dif-
fusion models can only generate single-modality content.
How to utilize diffusion models for multi-modality genera-
tion still remains rarely explored.
The challenges of designing multimodal diffusion mod-
els mainly lie in the following two aspects. First, video and
audio are two distinct modalities with different data pat-
terns. In particular, videos are usually represented by 3D
signals indicating RGB values in both spatial ( i.e., height ×
width) and temporal dimensions, while audio is in 1D wave-
form digits across the temporal dimension. How to process
them in parallel within one joint diffusion model remains a
problem. Second, video and audio are synchronous in tem-
poral dimension in real videos, which requires models to be
able to capture the relevance between these two modalities
and encourage their mutual influence on each other.
To solve the above challenges, we propose the first
Multi-Modal Diffusion model ( i.e.,MM-Diffusion ) con-
sisting of two-coupled denoising autoencoders for joint
audio-video generation. Less-noisy samples from each
modality (e.g., audio) at time step t−1 are generated by
implicitly denoising the outputs from both modalities (au-
dio and video) at time step t. Such a design enables a joint
distribution over both modalities to be learned. To further
learn the semantic synchronousness, we propose a novel
cross-modal attention block to ensure the generated video
frames and audio segments can be correlated at each mo-
ment. We design an efficient random-shift mechanism that
conducts cross-attention between a given video frame and a
randomly-sampled audio segment in a neighboring period,
which greatly reduces temporal redundancies in video and
audio and facilitates cross-modal interactions efficiently.
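A minimal sketch of a random-shift cross-modal attention of this kind, where each video-frame token attends only to a small, randomly shifted window of audio tokens around its temporally aligned position; the window size, single-head attention, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def random_shift_cross_attention(video_tokens, audio_tokens, window=4):
    """video_tokens: (B, Tv, D) one token per frame; audio_tokens: (B, Ta, D).

    For every frame, attend only to a window of audio tokens starting at a
    randomly shifted position near the frame's aligned audio time step.
    """
    B, Tv, D = video_tokens.shape
    Ta = audio_tokens.shape[1]
    out = torch.empty_like(video_tokens)
    for t in range(Tv):
        center = int(round(t * Ta / Tv))                       # temporally aligned index
        shift = torch.randint(-window, window + 1, (1,)).item()
        start = max(0, min(Ta - window, center + shift))
        ctx = audio_tokens[:, start:start + window]            # (B, window, D)
        attn = F.softmax(video_tokens[:, t:t+1] @ ctx.transpose(1, 2) / D ** 0.5, dim=-1)
        out[:, t] = (attn @ ctx).squeeze(1)
    return out
```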
To verify the proposed MM-Diffusion model, we con-
duct extensive experiments on Landscape dataset [21], and
AIST++ dancing dataset [22]. Evaluation results over
SOTA modality-specific (video or audio) unconditional
generation models show the superiority of our model, with
significant visual and audio gains of 25.0% and 32.9% in FVD and FAD, respectively, on the Landscape dataset. Supe-
rior performance can also be observed on the AIST++ dataset [22], with large gains of 56.7% and 37.7% in FVD and
FAD, respectively, over previous SOTA models. We further
demonstrate the capability of zero-shot conditional gener-
ation for our model, without any task-driven fine-tuning.
Moreover, Turing tests of 10k votes further verify the high-
fidelity performance of our results for common users.
|