title | abstract | introduction
---|---|---|
Wang_RODIN_A_Generative_Model_for_Sculpting_3D_Digital_Avatars_Using_CVPR_2023
|
Abstract
This paper presents a 3D diffusion model that automat-
ically generates 3D digital avatars represented as neural
radiance fields (NeRFs). A significant challenge for 3D
diffusion is that the memory and processing costs are pro-
hibitive for producing high-quality results with rich details.
To tackle this problem, we propose the roll-out diffusion net-
work (RODIN), which takes a 3D NeRF model represented
as multiple 2D feature maps and rolls them out onto a single 2D feature plane within which we perform 3D-aware
diffusion. The RODIN model brings much-needed compu-
tational efficiency while preserving the integrity of 3D dif-
fusion by using 3D-aware convolution that attends to pro-
jected features in the 2D plane according to their origi-
nal relationships in 3D. We also use latent conditioning to
orchestrate the feature generation with global coherence,
leading to high-fidelity avatars and enabling semantic edit-
ing based on text prompts. Finally, we use hierarchical syn-
thesis to further enhance details. The 3D avatars generated
by our model compare favorably with those produced by ex-
isting techniques. We can generate highly detailed avatars
with realistic hairstyles and facial hair. We also demon-
strate 3D avatar generation from image or text, as well as
text-guided editability.
†Intern at Microsoft Research. ‡Corresponding author. ∗Equal contribution. Project Webpage: https://3d-avatar-diffusion.microsoft.com
|
1. Introduction
Generative models [2, 34] are one of the most promising
ways to analyze and synthesize visual data including 2D
images and 3D models. At the forefront of generative mod-
eling is the diffusion model [14, 24, 61], which has shown
phenomenal generative power for images [19,47,50,52] and
videos [23,59]. Indeed, we are witnessing a 2D content cre-
ation revolution driven by the rapid advances of diffusion
and generative modeling. In this paper, we aim to expand
the applicability of diffusion to 3D digital avatars. We use
“digital avatars” to refer to the traditional avatars manually
created by 3D artists, as opposed to the recently emerging
photorealistic avatars [8, 43]. The reason for focusing on
digital avatars is twofold. On the one hand, digital avatars
are widely used in movies, games, the metaverse, and the
3D industry in general. On the other hand, the available
digital avatar data is very scarce as each avatar must be
painstakingly created by a specialized 3D artist using a so-
phisticated creation pipeline [20, 35], especially for mod-
eling hair and facial hair. All this leads to a compelling
scenario for generative modeling.
We present a diffusion model for automatically produc-
ing digital avatars represented as NeRFs [38], with each
point describing its color radiance and density within the
3D volume. The core challenge in generating these avatars
is the prohibitive memory and computational cost required
by high-quality avatars with rich details. Without rich de-
tails, an avatar will always be somewhat “toy-like”. To
tackle this challenge, we develop RODIN, the roll-out dif-
fusion network. We take a NeRF model represented as mul-
tiple 2D feature maps and roll out these maps onto a single
2D feature plane and perform 3D-aware diffusion within
this plane. Specifically, we use the tri-plane representa-
tion [9], which represents a volume by three orthogonal fea-
ture planes. By simply rolling out the feature maps, RODIN
can perform 3D-aware diffusion using an efficient 2D ar-
chitecture and by drawing power from RODIN’s three key
ingredients.
The first is the 3D-aware convolution. The 2D CNN
processing used in conventional 2D diffusion cannot ef-
fectively handle feature maps originating from orthogonal
planes. Rather than treating the features as plain 2D input,
the 3D-aware convolution explicitly accounts for the fact
that a 2D feature in one plane (of the tri-plane) is a projec-
tion from a piece of 3D data and is hence intrinsically as-
sociated with the same data’s projected features in the other
two planes. To encourage cross-plane communication, we
involve all these associated features in the convolution and
synchronize their detail synthesis according to their 3D re-
lationship.
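To make this concrete, here is a minimal PyTorch sketch of one way a 3D-aware convolution over rolled-out tri-plane features could look. It is not the authors' operator: the mean axis-pooling, the plane layout (XY, XZ, YZ), the class name `TriPlaneAwareConv`, and the channel counts are illustrative assumptions. The key property it illustrates is that each plane's convolution also sees a broadcast summary of the features it shares a 3D coordinate with on the other two planes.

```python
import torch
import torch.nn as nn


class TriPlaneAwareConv(nn.Module):
    """Hypothetical 3D-aware convolution over tri-plane features.

    Each plane is augmented with axis-pooled features from the other two
    planes before a plain 2D convolution, so a location (x, y) on the XY
    plane also "sees" the features it projects to on the XZ and YZ planes.
    """

    def __init__(self, channels: int):
        super().__init__()
        # one conv per plane, acting on [own, pooled-from-plane-A, pooled-from-plane-B]
        self.conv_xy = nn.Conv2d(3 * channels, channels, 3, padding=1)
        self.conv_xz = nn.Conv2d(3 * channels, channels, 3, padding=1)
        self.conv_yz = nn.Conv2d(3 * channels, channels, 3, padding=1)

    def forward(self, p_xy, p_xz, p_yz):
        # p_xy: (B, C, X, Y), p_xz: (B, C, X, Z), p_yz: (B, C, Y, Z)
        B, C, X, Y = p_xy.shape
        Z = p_xz.shape[-1]

        # axis-pooled summaries of each plane
        xz_over_z = p_xz.mean(dim=3)   # (B, C, X)
        yz_over_z = p_yz.mean(dim=3)   # (B, C, Y)
        xy_over_y = p_xy.mean(dim=3)   # (B, C, X)
        xy_over_x = p_xy.mean(dim=2)   # (B, C, Y)
        xz_over_x = p_xz.mean(dim=2)   # (B, C, Z)
        yz_over_y = p_yz.mean(dim=2)   # (B, C, Z)

        # XY plane attends to projections sharing its x (from XZ) and y (from YZ)
        xy_in = torch.cat([
            p_xy,
            xz_over_z.unsqueeze(-1).expand(B, C, X, Y),
            yz_over_z.unsqueeze(2).expand(B, C, X, Y),
        ], dim=1)
        # XZ plane attends to projections sharing its x (from XY) and z (from YZ)
        xz_in = torch.cat([
            p_xz,
            xy_over_y.unsqueeze(-1).expand(B, C, X, Z),
            yz_over_y.unsqueeze(2).expand(B, C, X, Z),
        ], dim=1)
        # YZ plane attends to projections sharing its y (from XY) and z (from XZ)
        yz_in = torch.cat([
            p_yz,
            xy_over_x.unsqueeze(-1).expand(B, C, Y, Z),
            xz_over_x.unsqueeze(2).expand(B, C, Y, Z),
        ], dim=1)

        return self.conv_xy(xy_in), self.conv_xz(xz_in), self.conv_yz(yz_in)


if __name__ == "__main__":
    layer = TriPlaneAwareConv(8)
    planes = [torch.randn(2, 8, 16, 16) for _ in range(3)]
    print([o.shape for o in layer(*planes)])  # three (2, 8, 16, 16) tensors
```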
The second ingredient is latent conditioning. We use a
latent vector to orchestrate the feature generation so that it is
globally coherent in 3D, leading to better quality avatars and
enabling semantic editing. We do this by using the avatars
in the training dataset to train an additional image encoder
which extracts a semantic latent vector serving as the condi-
tional input to the diffusion model. This latent conditioning
essentially acts as an autoencoder in orchestrating the fea-
ture generation. For semantic editability, we adopt a frozen
CLIP image encoder [46] that shares the latent space with
text prompts.
The final ingredient is hierarchical synthesis. We start
by generating at low resolution (64×64), followed by a
diffusion-based upsampling that yields a higher resolution
(256×256). When training the diffusion upsampler, we find it instrumental to impose an image-level loss computed in a patch-wise manner.
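A patch-wise image-level penalty of this kind can be sketched as follows; the patch size, the number of patches, and the plain L1 criterion are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F


def patchwise_image_loss(pred, target, patch_size=64, num_patches=8):
    """Image-level loss averaged over randomly sampled aligned patches.

    pred, target: (B, 3, H, W) rendered and ground-truth images.
    Returns the mean L1 error over `num_patches` random crops per batch.
    """
    B, _, H, W = pred.shape
    losses = []
    for _ in range(num_patches):
        top = torch.randint(0, H - patch_size + 1, (1,)).item()
        left = torch.randint(0, W - patch_size + 1, (1,)).item()
        p = pred[:, :, top:top + patch_size, left:left + patch_size]
        t = target[:, :, top:top + patch_size, left:left + patch_size]
        losses.append(F.l1_loss(p, t))
    return torch.stack(losses).mean()


if __name__ == "__main__":
    x, y = torch.rand(2, 3, 256, 256), torch.rand(2, 3, 256, 256)
    print(patchwise_image_loss(x, y).item())
```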
RODIN supports several application scenarios. We can
use the model to generate an unlimited number of avatars
from scratch, each different from the others as well as those
in the training data. As shown in Figure 1, we can generate
highly detailed avatars with realistic hairstyles and facial
hair styled as beards, mustaches, goatees, and sideburns.
Hairstyle and facial hair are essential parts of personal iden-
tity yet have been notoriously difficult to model well with
existing approaches. The RODIN model also allows avatar
customization, with the resulting avatar capturing the visual
characteristics of a person portrayed in an image or a textual description. Finally, our framework supports text-guided
semantic editing.
Our work shows that 3D diffusion holds great model-
ing power, and this power can be effectively unleashed by
rolling out the feature maps onto a 2D plane, leading to rich
details, including those highly desirable but extremely dif-
ficult to produce with existing techniques. It is worth not-
ing that while this paper focuses on RODIN’s application to
avatars, the design of RODIN is not avatar specific. Indeed,
we believe RODIN is applicable to general 3D scenes.
|
Wu_Co-Salient_Object_Detection_With_Uncertainty-Aware_Group_Exchange-Masking_CVPR_2023
|
Abstract
The traditional definition of co-salient object detection
(CoSOD) task is to segment the common salient objects in
a group of relevant images. Existing CoSOD models by-
default adopt the group consensus assumption. This brings
about model robustness defect under the condition of ir-
relevant images in the testing image group, which hinders
the use of CoSOD models in real-world applications. To
address this issue, this paper presents a group exchange-
masking (GEM) strategy for robust CoSOD model learn-
ing. With two group of image containing different types of
salient object as input, the GEM first selects a set of images
from each group by the proposed learning based strategy,
then these images are exchanged. The proposed feature ex-
traction module considers both the uncertainty caused by
the irrelevant images and group consensus in the remain-
ing relevant images. We design a latent variable genera-
tor branch built on a conditional variational autoencoder to generate uncertainty-based global stochastic fea-
tures. A CoSOD transformer branch is devised to capture
the correlation-based local features that contain the group
consistency information. Finally, the outputs of the two branches are concatenated and fed into a transformer-based decoder,
producing robust co-saliency prediction. Extensive evalua-
tions on co-saliency detection with and without irrelevant
images demonstrate the superiority of our method over a
variety of state-of-the-art methods.
|
1. Introduction
Co-salient object detection (CoSOD) aims to segment the
common salient objects in a group of relevant images.
By detecting the co-salient object in a group of images,
the images’ background and redundant content are removed, which helps the downstream tasks such as object tracking [38], co-segmentation [48] and video co-localization [14], to name a few.

∗Corresponding author. This work is supported in part by National Key Research and Development Program of China under Grant No. 2018AAA0100400, in part by the NSFC under Grant Nos. 62276141, 61872189.

Figure 1. (a) When there exists an irrelevant image in the test group, the state-of-the-art CoSOD models such as DCFM [39] and GCoNet [9] tend to generate false positive predictions for the noisy image, yet our method can achieve an accurate prediction due to the use of the group exchange-masking strategy in (b).
The group consensus assumption is widely used by exist-
ing CoSOD models, i.e., these models presume that all im-
ages in the same group contain the common salient tar-
gets. The widely-used CoSOD benchmark datasets, such as
COCO-SEG [33], CoCA [50], and CoSOD3k [8] organize
the training and testing images that contain the same ob-
ject as a group. Many existing CoSOD models consider the
group consensus characteristic in modeling. For example,
early works such as [2, 10, 13] extract hand-crafted features
for inter-image co-object correspondence discovery. The
deep learning models proposed in [15, 20, 36, 39, 44, 46, 48]
use one group of relevant images as training data input
for consensus representation learning. Among them, a va-
riety of novel model design techniques have been devel-
oped to make full use of the group consistency characteris-
tic, such as the low-rank feature learning [46], co-attention
model [20] and intra-saliency correlation learning [15]. The
issues of the group consensus assumption are partially stud-
ied in recent literature [9], which further models the inter-
group separability for more discriminative feature learning
by a group collaborating module.
In this paper, we find that the group consensus assump-
tion also restricts the CoSOD model’s robustness against
the images without common object. As illustrated by Fig-
ure 1(a), state-of-the-art CoSOD models tend to output false
positive predictions for the irrelevant image. This issue
hinders the use of CoSOD models in real-world applica-
tions where the testing inputs are likely to contain irrele-
vant images. To enhance the model’s robustness, we pro-
pose a learning framework called group exchange-masking
(GEM). The GEM is illustrated by Figure 1(b). Given two
image groups that contain different types of co-salient ob-
jects, we exchange several images between one group and
the other. Those exchanged images are called noisy images.
The number of noisy images is chosen to be less than the
number of remaining relevant images in the group, so that
the co-salient object in the noisy images forms a negative
object but not the dominant co-salient object. The “mask-
ing” strategy refers to the label regeneration of the noisy
images. Because a noisy image contains no co-salient object, its original ground-truth object is masked in the regenerated label. The
learning objective is to correctly predict both the co-salient
objects in the original relevant images and the added noisy
images.
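The data-side effect of group exchange-masking can be illustrated with the sketch below. The paper selects the exchanged images with a learned strategy; here the selection is random purely for illustration, and the tensor shapes and function name are assumptions.

```python
import torch


def group_exchange_masking(group_a, masks_a, group_b, masks_b, k=1):
    """Swap k images between two CoSOD groups and mask the swapped labels.

    group_*: (N, 3, H, W) images of a group sharing one co-salient class.
    masks_*: (N, 1, H, W) ground-truth co-saliency masks.
    Swapped ("noisy") images keep their pixels but get an all-zero mask,
    since they no longer share the group's co-salient object. Random
    selection here stands in for the paper's learned selection strategy.
    """
    idx_a = torch.randperm(group_a.shape[0])[:k]
    idx_b = torch.randperm(group_b.shape[0])[:k]

    new_a, new_ma = group_a.clone(), masks_a.clone()
    new_b, new_mb = group_b.clone(), masks_b.clone()

    # exchange the selected images across groups
    new_a[idx_a], new_b[idx_b] = group_b[idx_b], group_a[idx_a]
    # "masking": regenerate labels of the noisy images as empty masks
    new_ma[idx_a] = 0.0
    new_mb[idx_b] = 0.0
    return (new_a, new_ma), (new_b, new_mb)


if __name__ == "__main__":
    ga, ma = torch.rand(5, 3, 64, 64), torch.rand(5, 1, 64, 64)
    gb, mb = torch.rand(5, 3, 64, 64), torch.rand(5, 1, 64, 64)
    (na, nma), (nb, nmb) = group_exchange_masking(ga, ma, gb, mb, k=2)
    print(nma.sum(dim=(1, 2, 3)))  # exchanged entries sum to 0
```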
Adding noisy images to the training image group brings
uncertainty into the CoSOD model learning, since there is now some probability that an image contains no expected common object. We design a dual-path image feature extrac-
tion module to model the group uncertainty in addition
to the group consensus property. Specifically, we design
a latent variable generator branch (LVGB) to extract the
uncertainty-based global image features. The LVGB mod-
ule is motivated by the conditional variational autoencoder
(CVAE) [32] that is widely used to address the uncertainty
in vision applications including image background model-
ing [19], RGB-D saliency detection [43] and image recon-
struction [41]. In parallel with LVGB, we feed the im-
age group into a CoSOD Transformer Branch (CoSOD-
TB). The CoSOD-TB partitions each image group into lo-
cal patches, and the attention mechanism in the transformer
enables this branch to model patch-wise correlation-based
local features. As a result, the group consistency informa-
tion can be captured by this branch. The outputs of the two
branches are concatenated and fed into a transformer-based
decoder for co-saliency prediction. The proposed model has
the following technical contributions.
• A robust CoSOD model learning mechanism, called
group exchange-masking is proposed. By exchanging
images between two groups, we augment the training
data with irrelevant images as noise to enhance the model’s robustness. This is different from the traditional CoSOD model learning frameworks that use groups of relevant images as training data.
• We propose a dual-path feature extraction module
composed of the LVGB and the CoSOD-TB. The
LVGB is designed to model the uncertainty of co-
salient object existence. The CoSOD-TB is for the
consensus feature extraction of the salient object in the
relevant images.
• Extensive evaluations on three benchmark datasets,
including CoSal2015 [42], CoCA [15], and
CoSOD3k [7] show the superiority of the proposed model over the state-of-the-art methods in terms of all evaluation metrics. Besides, the proposed
model demonstrates good robustness for dealing with
noisy data without co-salient objects.
|
Wang_Gradient-Based_Uncertainty_Attribution_for_Explainable_Bayesian_Deep_Learning_CVPR_2023
|
Abstract
Predictions made by deep learning models are prone
to data perturbations, adversarial attacks, and out-of-
distribution inputs. To build a trusted AI system, it is there-
fore critical to accurately quantify the prediction uncertain-
ties. While current efforts focus on improving uncertainty
quantification accuracy and efficiency, there is a need to
identify uncertainty sources and take actions to mitigate
their effects on predictions. Therefore, we propose to de-
velop explainable and actionable Bayesian deep learning
methods to not only perform accurate uncertainty quan-
tification but also explain the uncertainties, identify their
sources, and propose strategies to mitigate the uncertainty
impacts. Specifically, we introduce a gradient-based uncer-
tainty attribution method to identify the most problematic
regions of the input that contribute to the prediction uncer-
tainty. Compared to existing methods, the proposed UA-
Backprop has competitive accuracy, relaxed assumptions,
and high efficiency. Moreover, we propose an uncertainty
mitigation strategy that leverages the attribution results as
attention to further improve the model performance. Both
qualitative and quantitative evaluations are conducted to
demonstrate the effectiveness of our proposed methods.
|
1. Introduction
Despite significant progress in many fields, conventional
deep learning models cannot effectively quantify their pre-
diction uncertainties, resulting in overconfidence in un-
known areas and the inability to detect attacks caused by
data perturbations and out-of-distribution inputs. Left unad-
dressed, this may cause disastrous consequences for safety-
critical applications, and lead to untrustworthy AI models.
The predictive uncertainty can be divided into epistemic
uncertainty and aleatoric uncertainty [16]. Epistemic uncertainty reflects the model’s lack of knowledge about the input. High epistemic uncertainty arises in regions where
there are few or no observations. Aleatoric uncertainty mea-
sures the inherent stochasticity in the data. Inputs with high
noise are expected to have high aleatoric uncertainty. Con-
ventional deep learning models, such as deterministic clas-
sification models that output softmax probabilities, can only
estimate the aleatoric uncertainty.
Bayesian deep learning (BDL) offers a principled frame-
work for estimating both aleatoric and epistemic uncertain-
ties. Unlike the traditional point-estimated models, BDL
constructs the posterior distribution of model parameters.
By sampling predictions from various models derived from
the parameter posterior, BDL avoids overfitting and al-
lows for systematic quantification of predictive uncertain-
ties. However, current BDL methods primarily concentrate
on enhancing the accuracy and efficiency of uncertainty
quantification, while failing to explicate the precise loca-
tions of the input data that cause predictive uncertainties and
take suitable measures to reduce the effects of uncertainties
on model predictions.
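For reference, a common way to obtain such uncertainties from a BDL model is to draw Monte Carlo predictions from the parameter posterior and decompose the predictive entropy. The sketch below uses MC dropout as a stand-in for posterior sampling; it is a generic recipe, not the specific method of this paper.

```python
import torch
import torch.nn.functional as F


def predictive_uncertainty(model, x, num_samples=20, eps=1e-12):
    """Decompose predictive uncertainty from Monte Carlo posterior samples.

    `model(x)` is assumed to return logits and to be stochastic across calls
    (e.g. MC dropout or weights drawn from an approximate posterior).
    Returns (total, aleatoric, epistemic) entropies per input, in nats.
    """
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(num_samples)])  # (S, B, K)
    mean_probs = probs.mean(dim=0)                                                  # (B, K)

    total = -(mean_probs * (mean_probs + eps).log()).sum(dim=-1)        # entropy of the mean
    aleatoric = -(probs * (probs + eps).log()).sum(dim=-1).mean(dim=0)  # mean of entropies
    epistemic = total - aleatoric                                       # mutual information
    return total, aleatoric, epistemic


if __name__ == "__main__":
    net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.Dropout(0.5), torch.nn.Linear(32, 5))
    net.train()  # keep dropout active so forward passes differ
    t, a, e = predictive_uncertainty(net, torch.randn(4, 10))
    print(t, a, e)
```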
Uncertainty attribution (UA) aims to generate an uncer-
tainty map of the input data to identify the most problem-
atic regions that contribute to the prediction uncertainty. It
evaluates the contribution of each pixel to the uncertainty,
thereby increasing the transparency and interpretability of
BDL models. Previous attribution methods are mainly de-
veloped for classification attribution (CA) with determinis-
tic neural networks (NNs) to find the contribution of im-
age pixels to the classification score. Unlike UA, directly
leveraging the gradient-based CA methods for detecting
problematic regions is unreliable. While CA explains the
model’s classification process, assuming its predictions are
confident, UA intends to identify the sources of input im-
perfections that contribute to the high predictive uncertain-
ties. Moreover, CA methods are often class-discriminative
since the classification score depends on the predicted class. As a result, they often fail to explain inputs with wrong predictions.
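As a point of comparison, the simplest gradient-based attribution one could apply to an uncertainty score is a gradient-times-input map, sketched below. This is a generic baseline, not the proposed UA-Backprop procedure, and the toy uncertainty function is purely illustrative.

```python
import torch


def uncertainty_saliency(uncertainty_fn, x):
    """Attribute a scalar uncertainty to input pixels via input gradients.

    uncertainty_fn: maps a batch of images (B, C, H, W) to one scalar
    uncertainty per image, e.g. the predictive entropy of a BDL model.
    Returns a (B, H, W) map of |gradient * input| summed over channels.
    """
    x = x.detach().clone().requires_grad_(True)
    u = uncertainty_fn(x)                        # (B,)
    u.sum().backward()
    return (x.grad * x).abs().sum(dim=1)         # gradient x input, per pixel


if __name__ == "__main__":
    # toy "uncertainty": entropy of a softmax over global-average-pooled channels
    def toy_uncertainty(imgs):
        logits = imgs.mean(dim=(2, 3))           # (B, C) pseudo-logits
        p = torch.softmax(logits, dim=-1)
        return -(p * (p + 1e-12).log()).sum(dim=-1)

    maps = uncertainty_saliency(toy_uncertainty, torch.rand(2, 3, 8, 8))
    print(maps.shape)  # torch.Size([2, 8, 8])
```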
|
Wang_Images_Speak_in_Images_A_Generalist_Painter_for_In-Context_Visual_CVPR_2023
|
Abstract
In-context learning, as a new paradigm in NLP, allows
the model to rapidly adapt to various tasks with only a hand-
ful of prompts and examples. But in computer vision, the
difficulties for in-context learning lie in that tasks vary sig-
nificantly in the output representations, thus it is unclear
how to define the general-purpose task prompts that the vision model can understand and transfer to out-of-domain tasks.

*Equal contribution. Correspondence to [email protected]. This work was done when Wen Wang was an intern at BAAI.

In this work, we present Painter, a generalist model
which addresses these obstacles with an “image”-centric
solution, that is, to redefine the output of core vision tasks
as images, and specify task prompts as also images. With
this idea, our training process is extremely simple, which
performs standard masked image modeling on the stitch of
input and output image pairs. This makes the model capable
of performing tasks conditioned on visible image patches.
Thus, during inference, we can adopt a pair of input and
output images from the same task as the input condition, to
indicate which task to perform. Without bells and whistles,
our generalist Painter can achieve competitive performance
compared to well-established task-specific models, on seven
representative vision tasks ranging from high-level visual
understanding to low-level image processing. In addition,
Painter significantly outperforms recent generalist models
on several challenging tasks.
|
1. Introduction
Training one generalist model that can execute diverse
tasks simultaneously, and even can perform a new task given
a prompt and very few examples, is one important step closer
to artificial general intelligence. In NLP, the emergence of
in-context learning [2, 7, 18] presents a new path in this
direction, which uses language sequences as the general
interface and allows the model to rapidly adapt to various
language-centric tasks with only a handful of prompts and
examples.
Thus far, in computer vision, in-context learning is rarely
explored, and it remains unclear how to achieve it. This can
be attributed to the differences between two modalities. One
difference is that NLP tasks mainly consist of language un-
derstanding and generation, so their output spaces can be
unified as sequences of discrete language tokens. But vision
tasks are the abstractions of raw visual input to varied gran-
ularity and angles. Thus, vision tasks vary significantly in
output representations, leading to various task-specific loss
functions and architecture designs. The second difference is
that the output space of NLP tasks is the same as that of the input. Thus, the task instruction and the example’s input/out-
put, which are all language-token sequences, can be directly
used as the input condition (also denoted as the task prompt),
which can be processed straightforwardly by the large lan-
guage model. However, in computer vision, it is unclear
how to define general-purpose task prompts or instructions
that the vision model can understand and transfer to out-of-
domain tasks. Several recent attempts [6, 9, 10, 27, 35, 41]
tackle these difficulties by following the solutions in NLP.
They more-or-less convert vision problems into NLP ones
via discretizing the continuous output spaces of vision tasks,
and using the language or specially-designed discrete tokens
as the task prompts.
However, we believe that images speak in images, i.e.,
image itself is a natural interface for general-purpose visual
perception. In this work, we address the above obstacles
with a vision-centric solution. The core observation is that
most dense-prediction vision problems can be formulated as
image inpainting, i.e.:
Given an input image, prediction is to inpaint the desired
but missing output “image”.
Thus, we need a 3-channel tensor representation that appears as an “image” for the output of the vision tasks,
and specify the task prompts using a pair of images. Here
we showcase several representative vision tasks for train-
ing, including depth estimation, human keypoint detection,
semantic segmentation, instance segmentation, image de-
noising, image deraining, and image enhancement, and unify
their output spaces using a 3-channel tensor, a.k.a. “out-
put image”. We carefully design the data format for each
task, such as instance mask and per-pixel discrete labels
of panoptic segmentation, per-pixel continuous values of
depth estimation, and high-precision coordinates of pose
estimation. Including more tasks is very straightforward, as
we only need to construct new data pairs and add them to
the training set, without modifications to either the model
architecture or loss function.
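As an illustration of this unification, the sketch below encodes per-pixel continuous depth as a 3-channel "output image" by normalizing and replicating channels. Painter's actual per-task data formats are more elaborate, so this particular encoding, the depth range, and the function names are assumptions.

```python
import torch


def depth_to_image(depth, d_min=0.0, d_max=10.0):
    """Encode a per-pixel depth map (B, H, W) as a 3-channel "output image".

    A deliberately simple encoding: depth is normalised to [0, 1] and
    replicated over three channels, illustrating the idea of forcing every
    task output into the same 3-channel image space.
    """
    norm = ((depth - d_min) / (d_max - d_min)).clamp(0.0, 1.0)
    return norm.unsqueeze(1).repeat(1, 3, 1, 1)          # (B, 3, H, W)


def image_to_depth(image, d_min=0.0, d_max=10.0):
    """Invert the encoding by averaging the three channels."""
    return image.mean(dim=1) * (d_max - d_min) + d_min   # (B, H, W)


if __name__ == "__main__":
    d = torch.rand(2, 32, 32) * 10.0
    print((image_to_depth(depth_to_image(d)) - d).abs().max())  # near zero
```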
Based on this unification, we train a generalist Painter
model with an extremely simple training process. During
training, we stitch two images from the same task into a
larger image, and so do their corresponding output images.
Then we apply masked image modeling (MIM) on pixels of
the output image, with the input image being the condition.
With such a learning process, we enable the model to per-
form tasks conditioned on visible image patches, that is, the
capability of in-context prediction with the visual signal as
context.
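The training recipe can be sketched roughly as follows: two input/output examples of one task are stitched into a larger canvas, patches are masked only on the output side, and a reconstruction loss is applied to the masked pixels. The canvas layout, patch size, masking ratio, and smooth-L1 criterion below are assumptions, not Painter's exact configuration.

```python
import torch
import torch.nn.functional as F


def stitch_and_mask(inp1, out1, inp2, out2, patch=16, mask_ratio=0.75):
    """Build one Painter-style training canvas and its loss mask (sketch).

    Each argument is a (3, H, W) image. The two examples from the same task
    are stacked vertically; inputs form the left column, their "output
    images" the right column. Random patches are masked only on the output
    column, which is where the reconstruction loss is applied.
    """
    left = torch.cat([inp1, inp2], dim=1)      # (3, 2H, W)
    right = torch.cat([out1, out2], dim=1)     # (3, 2H, W)
    canvas = torch.cat([left, right], dim=2)   # (3, 2H, 2W)

    _, H2, W2 = canvas.shape
    grid = torch.zeros(H2 // patch, W2 // patch, dtype=torch.bool)
    out_cols = slice(W2 // (2 * patch), W2 // patch)        # patch columns of the output half
    rand = torch.rand(grid.shape[0], W2 // (2 * patch))
    grid[:, out_cols] = rand < mask_ratio

    mask = grid.repeat_interleave(patch, 0).repeat_interleave(patch, 1)  # (2H, 2W) pixel mask
    return canvas, mask


def mim_loss(predicted_canvas, target_canvas, mask):
    """Reconstruction loss restricted to the masked output pixels."""
    diff = F.smooth_l1_loss(predicted_canvas, target_canvas, reduction="none")
    m = mask.to(diff.dtype)
    return (diff * m).sum() / (m.sum() * diff.shape[0]).clamp(min=1)


if __name__ == "__main__":
    imgs = [torch.rand(3, 64, 64) for _ in range(4)]
    canvas, mask = stitch_and_mask(*imgs)
    pred = torch.rand_like(canvas)
    print(canvas.shape, mask.float().mean().item(), mim_loss(pred, canvas, mask).item())
```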
Thus the trained model is capable of in-context infer-
ence. That is, we directly use the input/output paired images
from the same task as the input condition to indicate which
task to perform. Examples of in-context inference are illus-
trated in Figure 1, consisting of seven in-domain examples
(seven rows at top) and three out-of-domain examples (three
rows at bottom). This definition of task prompts does not
require a deep understanding of language instructions as needed
by almost all previous approaches, and makes it very flexible
for performing both in-domain and out-of-domain vision
tasks.
Without bells and whistles, our model can achieve com-
petitive performance compared to well-established task-
specific models, on several fundamental vision tasks across
high-level visual understanding to low-level image process-
ing, namely, depth estimation on NYUv2 [37], semantic
segmentation on ADE-20K [54], human keypoint detection
on COCO [31], panoptic segmentation on COCO [31], and
three low-level image restoration tasks. Notably, on depth
estimation of NYUv2, our model achieves state-of-the-art
performance, outperforming by large margins previous best results that rely on heavy and specialized designs of architectures and loss functions. Compared to other generalist
models, Painter yields significant improvements on several
challenging tasks.
|
Wang_SunStage_Portrait_Reconstruction_and_Relighting_Using_the_Sun_as_a_CVPR_2023
|
Abstract
A light stage uses a series of calibrated cameras and
lights to capture a subject’s facial appearance under vary-
ing illumination and viewpoint. This captured information
is crucial for facial reconstruction and relighting. Unfortu-
nately, light stages are often inaccessible: they are expen-
sive and require significant technical expertise for construc-
tion and operation. In this paper, we present SunStage: a
lightweight alternative to a light stage that captures com-
parable data using only a smartphone camera and the sun.
Our method only requires the user to capture a selfie video
outdoors, rotating in place, and uses the varying angles
between the sun and the face as guidance in joint recon-
struction of facial geometry, reflectance, camera pose, and
lighting parameters. Despite the in-the-wild un-calibrated
setting, our approach is able to reconstruct detailed facial
appearance and geometry, enabling compelling effects such
as relighting, novel view synthesis, and reflectance editing.
|
1. Introduction
A light stage [11] acquires the shape and material prop-
erties of a face in high detail using a series of images cap-
tured under synchronized cameras and lights. This captured
information can be used to synthesize novel images of the
subject under arbitrary lighting conditions or from arbitrary
viewpoints. This process enables a number of visual effects,
such as creating digital replicas of actors that can be used in
movies [1] or high-quality postproduction relighting [46].
In many cases, however, it is infeasible to get access to a light stage for capturing a particular subject, be-
cause light stages are not easy to find: they are expensive
and require significant technical expertise (often teams of
people) to build and operate. In these cases, hope is not
lost — one can turn to methods that are trained on light
stage data, with the intention of generalizing to new sub-
jects. These methods do not require the subject to be cap-
tured by a light stage but instead use a machine learning
model trained on a collection of previously acquired light
stage captures to enable the same applications as a light
stage, but from only one or several images of a new sub-
ject [6, 25, 30,38,40,50,52]. Unfortunately, these methods
have difficulty faithfully reproducing and editing the ap-
pearance of new subjects, as they lack much of the signal
necessary to resolve the ambiguities of single-view recon-
struction, i.e., a single image of a face can be reasonably
explained by different combinations of geometry, illumina-
tion, and reflectance.
In this paper, we propose an intermediate solution — one
that allows for personalized, high-quality capture of a given
subject, but without the need for expensive, calibrated cap-
ture equipment. Our method, which we dub SunStage, uses
only a handheld smartphone camera and the sun to simu-
late a minimalist light stage, enabling the reconstruction of
individually-tailored geometry and reflectance without spe-
cialized equipment. Our capture setup only requires the user
to hold the camera at arm’s length and rotate in place, al-
lowing the face to be observed under varying angles of inci-
dent sunlight, which causes specular highlights to move and
shadows to swing across the face. This provides strong sig-
nals for the reconstruction of facial geometry and spatially-
varying reflectance properties. The reconstructed face and
scene parameters estimated by our system can be used to
realistically render the subject in new, unseen lighting con-
ditions — even with complex details like self-occluding cast
shadows, which are typically missing in purely image-based
relighting techniques, i.e., those that do not explicitly model
geometry. In addition to relighting, we also show applica-
tions in view synthesis, correcting facial perspective distor-
tion, and editing skin reflectance.
Our contributions include: (1) a novel capture technique
for personalized facial scanning without custom equip-
ment, (2) a system for optimization and disentanglement of
scene parameters (geometry, materials, lighting, and camera
poses) from an unaligned, handheld video, and (3) multiple
portrait editing applications that produce photorealistic re-
sults, using as input only a single selfie video.
|
Xie_GP-VTON_Towards_General_Purpose_Virtual_Try-On_via_Collaborative_Local-Flow_Global-Parsing_CVPR_2023
|
Abstract
Image-based Virtual Try-ON aims to transfer an in-shop
garment onto a specific person. Existing methods employ
a global warping module to model the anisotropic defor-
mation for different garment parts, which fails to preserve
the semantic information of different parts when receiving
challenging inputs (e.g., intricate human poses, difficult gar-
ments). Moreover, most of them directly warp the input
garment to align with the boundary of the preserved re-
gion, which usually requires texture squeezing to meet the
boundary shape constraint and thus leads to texture dis-
tortion. The above inferior performance hinders existing
methods from real-world applications. To address these
problems and take a step towards real-world virtual try-on,
we propose a General-Purpose Virtual Try-ON framework,
named GP-VTON, by developing an innovative Local-Flow
Global-Parsing (LFGP) warping module and a Dynamic Gradient Truncation (DGT) training strategy. Specifically,
compared with the previous global warping mechanism,
LFGP employs local flows to warp garment parts individ-
ually, and assembles the local warped results via the global
garment parsing, resulting in reasonable warped parts and
a semantic-correct intact garment even with challenging in-
puts. On the other hand, our DGT training strategy dynam-
ically truncates the gradient in the overlap area and the
warped garment is no longer required to meet the bound-
ary constraint, which effectively avoids the texture squeez-
ing problem. Furthermore, our GP-VTON can be easily
extended to the multi-category scenario and jointly trained by
using data from different garment categories. Extensive ex-
periments on two high-resolution benchmarks demonstrate
our superiority over the existing state-of-the-art methods.1
1Corresponding author is Xiaodan Liang. Code is available at gp-vton.
|
1. Introduction
The problem of Virtual Try-ON (VTON), aiming to
transfer a garment onto a specific person, is of particu-
lar importance for today’s e-commerce and the fu-
ture metaverse. Compared with the 3D-based solutions [2,
14, 16, 28, 34] which rely upon 3D scanning equipment or
labor-intensive 3D annotations, the 2D image-based meth-
ods [1, 9, 12, 17, 19, 20, 29, 30, 32, 36, 38–40, 43], which
operate directly on the images, are more practical for real-world scenarios and thus have been intensively ex-
plored in the past few years.
Although the pioneering 2D image-based VTON meth-
ods [12, 19, 29] can synthesize compelling results on the
widely used benchmarks [6, 8, 18], there still exist some
deficiencies preventing their use in real-world scenarios, which we argue are mainly three-fold. First, existing
methods have strict constraints on the input images, and are
prone to generate artifacts when receiving challenging in-
puts. To be specific, as shown in the 1st row of Fig. 1(A),
when the pose of the input person is intricate, existing
methods [12, 19] fail to preserve the semantic information
of different garment parts, resulting in the indistinguish-
able warped sleeves. Besides, as shown in the 2nd row of
Fig. 1(A), if the input garment is a long sleeve without ob-
vious seam between the sleeve and torso, existing methods
will generate adhesive artifact between the sleeve and torso.
Second, most of the existing methods directly squeeze the
input garment to make it align with the preserved region,
leading to the distorted texture around the preserved region
(e.g., the 3rd row of Fig. 1(A)). Third, most of the exist-
ing works only focus on the upper-body try-on and neglect
other garment categories (i.e., lower-body, dresses), which
further limits their scalability for real-world scenarios.
To relieve the input constraint for VTON systems and
fully exploit their application potential, in this paper, we
take a step forwards and propose a unified framework,
named GP-VTON, for the General-Purpose Virtual Try-
ON, which can generate realistic try-on results even for
the challenging scenario (Fig. 1(A)) (e.g., intricate human
poses, difficult garment inputs, etc.), and can be easily ex-
tended to the multi-category scenario (Fig. 1(B)).
The innovations of our GP-VTON lie in a novel Local-
Flow Global-Parsing (LFGP) warping module and a Dy-
namic Gradient Truncation (DGT) training strategy for the
warping network, which enable the network to generate
high fidelity deformed garments, and further facilitate our
GP-VTON to generate photo-realistic try-on results.
Specifically, most of the existing methods employ a neural
network to model garment deformation by introducing the
Thin Plate Splines (TPS) transformation [3] or the appear-
ance flow [44] into the network, and training the network
in a weakly supervised manner (i.e., without ground truth
for the deformation function). However, both of the TPS-based methods [6, 18, 30, 36, 40] and the flow-based meth-
ods [1, 12, 17, 19, 29] directly learn a global deformation
field, therefore fail to represent complicated non-rigid gar-
ment deformation that requires diverse transformation for
different garment parts. Taking the intricate pose case in
Fig. 1(A) as an example, existing methods [12, 19] cannot simultaneously guarantee accurate deformation for the torso region and sleeve region, and lead to excessively distorted
sleeves. In contrast, our LFGP warping module chooses to
learn diverse local deformation fields for different garment
parts, which is capable of individually warping each gar-
ment part, and generating semantic-correct warped garment
even for intricate pose case. Besides, since each local defor-
mation field merely affects one corresponding garment part,
garment texture from other parts is agnostic to the current
deformation field and will not appear in the current local
warped result. Therefore, the garment adhesion problem in
the complex garment scenario can be completely addressed
(as demonstrated in the 2nd row of Fig. 1(A)). However,
directly assembling the local warped parts together can not
obtain realistic warped garments, because there would be
overlap among different warped parts. To deal with this, our
LFGP warping module collaboratively estimates a global
garment parsing to fuse different local warped parts, result-
ing in a complete and unambiguous warped garment.
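A simplified sketch of this local-warp-then-assemble idea is given below: each garment part is warped by its own sampling grid and the warped parts are fused by a softmax over the predicted global parsing. Using absolute sampling grids instead of flow offsets, a soft (rather than hard) assembly, and K = 3 parts are simplifying assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F


def assemble_warped_garment(garment_parts, local_flows, parsing_logits):
    """Warp garment parts with part-specific flows and fuse them by parsing.

    garment_parts: (B, K, 3, H, W) flat garment split into K parts
                   (e.g. left sleeve, right sleeve, torso).
    local_flows:   (B, K, H, W, 2) per-part sampling grids in [-1, 1],
                   as expected by F.grid_sample.
    parsing_logits:(B, K, H, W) predicted global garment parsing deciding
                   which warped part is visible at each pixel.
    Returns the assembled warped garment (B, 3, H, W).
    """
    B, K, _, H, W = garment_parts.shape
    warped = torch.stack([
        F.grid_sample(garment_parts[:, k], local_flows[:, k], align_corners=True)
        for k in range(K)
    ], dim=1)                                               # (B, K, 3, H, W)

    weights = parsing_logits.softmax(dim=1).unsqueeze(2)    # (B, K, 1, H, W)
    return (warped * weights).sum(dim=1)                    # soft per-pixel assembly


if __name__ == "__main__":
    B, K, H, W = 1, 3, 64, 48
    parts = torch.rand(B, K, 3, H, W)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    identity = torch.stack([xs, ys], dim=-1).expand(B, K, H, W, 2)  # identity sampling grid
    parsing = torch.randn(B, K, H, W)
    print(assemble_warped_garment(parts, identity, parsing).shape)  # (1, 3, 64, 48)
```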
On the other hand, the warping network in existing meth-
ods [12,19,29] takes as inputs the flat garment and the mask
of the preserved region (i.e., region to be preserved during
the try-on procedure, such as the lower garment for upper-
body VTON), and force the input garment to align with the
boundary of the preserved region (e.g., the junction of the
upper and lower garment), which usually require garment
squeezing to meet the shape constraint and lead to texture
distortion around the garment junction (please refer to the
3rd row of Fig. 1(A)). An effective solution to this problem
is exploiting the gradient truncation strategy for the network
training, in which the warped garment will be processed
by the preserved mask before calculating the warping loss
and the gradient in the preserved region will not be back-
propagated. By using such a strategy, the warped garment
is no longer required to strictly align with the preserved
boundary, which largely avoids the garment squeezing and
texture distortion. However, due to the poor supervision
of warped garments in the preserved region, directly em-
ploying the gradient truncation for all training data will lead
to excessive freedom for the deformation field, which usu-
ally results in texture stretching in the warped results. To
tackle this problem, we propose a Dynamic Gradient Trun-
cation (DGT) training strategy which dynamically conducts
gradient truncation for different training samples accord-
ing to the disparity of height-width ratio between the flat
garment and the warped garment. By introducing the dy-
namic mechanism, our LFGP warping module can alleviate
the texture stretching problem and obtain realistic warped garments with better texture preservation.

Figure 2. Overview of GP-VTON. The LFGP warping module aims to estimate the local flows $\{f_k\}_{k=1}^{3}$ and the global garment parsing $S'$, which are used to warp the different garment parts $\{G_k\}_{k=1}^{3}$ and to assemble the warped parts $\{G'_k\}_{k=1}^{3}$ into the intact garment $G'$, respectively. The generator $G$ takes as inputs $G'$ and the other person-related conditions to generate the final try-on result $I'$.
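The gradient-truncation mechanism described above can be sketched as a masked warping loss with a per-sample switch; the ratio threshold, the L1 criterion, and the exact truncation rule below are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F


def dgt_warping_loss(warped, target, preserved_mask, flat_hw_ratio, warped_hw_ratio,
                     ratio_threshold=0.1):
    """Warping loss with Dynamic Gradient Truncation (simplified sketch).

    warped, target: (B, 3, H, W) warped garment and its ground truth.
    preserved_mask: (B, 1, H, W), 1 inside the preserved region (e.g. the
                    lower garment for upper-body try-on), 0 elsewhere.
    flat_hw_ratio, warped_hw_ratio: (B,) height-width ratios used to decide
                    per sample whether to truncate.
    When the ratio disparity is large, pixels inside the preserved region
    are excluded from the loss, so no gradient flows back from that region
    and the warp is not forced to squeeze against the preserved boundary.
    """
    disparity = (flat_hw_ratio - warped_hw_ratio).abs()                  # (B,)
    truncate = (disparity > ratio_threshold).float().view(-1, 1, 1, 1)   # (B, 1, 1, 1)

    # keep = 1 everywhere, except inside the preserved region of truncated samples
    keep = 1.0 - truncate * preserved_mask
    diff = F.l1_loss(warped, target, reduction="none")
    return (diff * keep).sum() / keep.sum().clamp(min=1) / warped.shape[1]


if __name__ == "__main__":
    B, H, W = 2, 64, 48
    warped = torch.rand(B, 3, H, W, requires_grad=True)
    target = torch.rand(B, 3, H, W)
    mask = torch.zeros(B, 1, H, W)
    mask[:, :, H // 2:, :] = 1.0                                  # lower half preserved
    loss = dgt_warping_loss(warped, target, mask,
                            torch.tensor([1.3, 1.0]), torch.tensor([1.0, 1.0]))
    loss.backward()
    print(warped.grad[0, :, H // 2:, :].abs().sum())  # zero: gradients truncated there
```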
Overall, our contributions can be summarized as fol-
lows: (1) We propose a unified try-on framework, named
GP-VTON, to generate photo-realistic results for diverse
scenarios. (2) We propose a novel LFGP warping mod-
ule to generate semantic-correct deformed garments even
with challenging inputs. (3) We introduce a simple, yet ef-
fective DGT training strategy for the warping network to
obtain distortion-free deformed garments. (4) Extensive ex-
periments on two challenging high-resolution benchmarks
show the superiority of GP-VTON over existing SOTAs.
|
Xu_Grid-Guided_Neural_Radiance_Fields_for_Large_Urban_Scenes_CVPR_2023
|
Abstract
Purely MLP-based neural radiance fields (NeRF-based
methods) often suffer from underfitting with blurred ren-
derings on large-scale scenes due to limited model capac-
ity. Recent approaches propose to geographically divide the
scene and adopt multiple sub-NeRFs to model each region
individually, leading to linear scale-up in training costs and
the number of sub-NeRFs as the scene expands. An alterna-
tive solution is to use a feature grid representation, which is
computationally efficient and can naturally scale to a large
scene with increased grid resolutions. However, the feature
grid tends to be less constrained and often reaches subop-
timal solutions, producing noisy artifacts in renderings, es-
pecially in regions with complex geometry and texture. In
this work, we present a new framework that realizes high-
fidelity rendering on large urban scenes while being com-
putationally efficient. We propose to use a compact multi-
resolution ground feature plane representation to coarsely
capture the scene, and complement it with positional encod-
ing inputs through another NeRF branch for rendering in a
joint learning fashion. We show that such an integration can utilize the advantages of two alternative solutions: a light-weight NeRF is sufficient, under the guidance of the feature grid representation, to render photorealistic novel views with fine details; and the jointly optimized ground feature planes can meanwhile gain further refinements, forming a more accurate and compact feature space and output much more natural rendering results.
|
1. Introduction
Large urban scene modeling has been drawing lots of
research attention with the recent emergence of neural radi-
ance fields (NeRF) due to its photorealistic rendering and
model compactness [3, 34, 56, 59, 62, 64]. Such model-
ing can enable a variety of practical applications, including
autonomous vehicle simulation [23, 39, 67], aerial survey-
ing [6,15], and embodied AI [35,61]. NeRF-based methods
have shown impressive results on object-level scenes with
their continuity prior benefited from the MLP architecture
andhigh-frequency details with the globally shared posi-
tional encodings. However, they often fail to model large
and complex scenes. These methods suffer from underfit-
ting due to limited model capacity and only produce blurred
renderings without fine details [56,62,64]. BlockNeRF [58]
and MegaNeRF [62] propose to geographically divide ur-
ban scenes and assign each region a different sub-NeRF to
learn in parallel. Subsequently, when the scale and com-
plexity of the target scene increases, they inevitably suffer
from a trade-off between the number of sub-NeRFs and the
capacity required by each sub-NeRF to fully capture all the
fine details in each region. Another stream of work adopts grid-based representations, modeling the target scene with a grid of features [27, 30, 36, 55, 68, 69]. These methods are gen-
erally much faster during rendering and more efficient when
the scene scales up. However, as each cell of the feature grid
is individually optimized in a locally encoded manner, the
resulting feature grids tend to be less continuous across the
scene compared to NeRF-based methods. Although more
flexibility is intuitively beneficial for capturing fine details,
the lack of inherent continuity makes this representation
vulnerable to suboptimal solutions with noisy artifacts, as
demonstrated in Fig. 1.
To effectively reconstruct large urban scenes with im-
plicit neural representations, in this work, we propose a two-
branch model architecture that takes a unified scene repre-
sentation that integrates both grid-based andNeRF-based
approaches under a joint learning scheme. Our key insight
is that these two types of representations can be used com-
plementary to each other: while feature grids can easily fit
local scene content with explicit and independently learned
features, NeRF introduces an inherent global continuity on
the learned scene content with its shareable MLP weights
across all 3D coordinate inputs. NeRF can also encourage
capturing high-frequency scene details by matching the po-
sitional encodings as Fourier features with the bandwidth of
details. However, unlike feature grid representation, NeRF
is less effective in compacting large scene contents into its
globally shared latent coordinate space.
Concretely, we first model the target scene with a fea-
ture grid in a pre-train stage, which coarsely captures scene
geometry and appearance. The coarse feature grid is then
used to 1) guide NeRF’s point sampling to let it concentrate
around the scene surface; and 2) supply NeRF’s positional
encodings with extra features about the scene geometry and
appearance at sampled positions. Under such guidance,
NeRF can effectively and efficiently pick up finer details
in a drastically compressed sampling space. Moreover, as
coarse-level geometry and appearance information are ex-
plicitly provided to NeRF, a light-weight MLP is sufficient
to learn a mapping from global coordinates to volume den-
sities and color values. The coarse feature grids get further
optimized with gradients from the NeRF branch in the sec-
ond joint-learning stage, which regularizes them to produce
more accurate and natural rendering results when applied
in isolation. To further reduce the memory footprint and learn a reliable feature grid for large urban scenes, we adopt a com-
pact factorization of the 3D feature grid to approximate it
without losing representation capacity. Based on the obser-
vation that essential semantics such as the urban layouts are
mainly distributed on the ground (i.e., the xy-plane), we pro-
pose to factorize the 3D feature grid into 2D ground feature
planes spanning the scene and a vertically shared feature
vector along the z-axis. The benefits are manifold: 1) The
memory is reduced from $O(N^3)$ to $O(N^2)$. 2) The learned feature grid is enforced to be disentangled into highly compact ground feature planes, offering explicit and informative
scene layouts. Extensive experiments show the effective-
ness of our unified model and scene representation. When
rendering novel views in practice, users are allowed to use
either the grid branch at a faster rendering speed, or the
NeRF branch, with more high-frequency details and spatial
smoothness, yet at the cost of a relatively slower rendering.
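A minimal sketch of the factorized feature lookup is shown below: a learned 2D ground plane is combined with a feature vector shared along the vertical axis, so memory grows as O(N^2) instead of O(N^3). The resolutions, channel count, and the element-wise product used to combine the two factors are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroundPlaneFeatureGrid(nn.Module):
    """Factorized feature grid for large urban scenes (simplified sketch).

    A full 3D grid of resolution N^3 is approximated by a 2D ground
    feature plane (xy) of resolution N^2 combined with a feature vector
    shared along the vertical z-axis.
    """

    def __init__(self, channels=32, plane_res=256, z_res=64):
        super().__init__()
        self.plane = nn.Parameter(0.1 * torch.randn(1, channels, plane_res, plane_res))
        self.z_line = nn.Parameter(0.1 * torch.randn(1, channels, z_res, 1))

    def forward(self, xyz):
        """xyz: (B, 3) points with coordinates normalised to [-1, 1]."""
        B = xyz.shape[0]
        xy = xyz[:, :2].reshape(1, B, 1, 2)                    # sample the ground plane
        z = torch.stack([torch.zeros_like(xyz[:, 2]), xyz[:, 2]], dim=-1).reshape(1, B, 1, 2)
        f_plane = F.grid_sample(self.plane, xy, align_corners=True).reshape(-1, B).t()  # (B, C)
        f_z = F.grid_sample(self.z_line, z, align_corners=True).reshape(-1, B).t()      # (B, C)
        return f_plane * f_z                                   # per-point grid feature


if __name__ == "__main__":
    grid = GroundPlaneFeatureGrid()
    pts = torch.rand(1024, 3) * 2 - 1
    print(grid(pts).shape)  # torch.Size([1024, 32])
```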
|
Wang_Scene-Aware_Egocentric_3D_Human_Pose_Estimation_CVPR_2023
|
Abstract
Egocentric 3D human pose estimation with a single
head-mounted fisheye camera has recently attracted atten-
tion due to its numerous applications in virtual and aug-
mented reality. Existing methods still struggle in challeng-
ing poses where the human body is highly occluded or is
closely interacting with the scene. To address this issue, we
propose a scene-aware egocentric pose estimation method
that guides the prediction of the egocentric pose with scene
constraints. To this end, we propose an egocentric depth
estimation network to predict the scene depth map from a
wide-view egocentric fisheye camera while mitigating the
occlusion of the human body with a depth-inpainting net-
work. Next, we propose a scene-aware pose estimation
network that projects the 2D image features and estimated
depth map of the scene into a voxel space and regresses
the 3D pose with a V2V network. The voxel-based fea-
ture representation provides the direct geometric connec-
tion between 2D image features and scene geometry, and
further facilitates the V2V network to constrain the pre-
dicted pose based on the estimated scene geometry. To en-
able the training of the aforementioned networks, we also
generated a synthetic dataset, called EgoGTA, and an in-
the-wild dataset based on EgoPW, called EgoPW-Scene.
The experimental results of our new evaluation sequences
show that the predicted 3D egocentric poses are accurate
and physically plausible in terms of human-scene interac-
tion, demonstrating that our method outperforms the state-
of-the-art methods both quantitatively and qualitatively.
|
1. Introduction
Egocentric 3D human pose estimation with head- or
body-mounted cameras has been extensively researched recently
because it allows capturing the person moving around in a
large space, while the traditional pose estimation methods
can only record in a fixed volume. With this advantage, the
egocentric pose estimation methods show great potential in
various applications, including the xR technologies and mobile interaction applications.

Figure 1. Previous egocentric pose estimation methods like EgoPW predict body poses that may suffer from body floating issue (the first row) or body-environment penetration issue (the second row). Our method predicts accurate and plausible poses complying with the scene constraints. The red skeletons are the ground truth poses and the green skeletons are the predicted poses.
In this work, we estimate the full 3D body pose from
a single head-mounted fisheye camera. A number of
works have been proposed, including Mo2Cap2[39], xR-
egopose [32], Global-EgoMocap [36], and EgoPW [35].
These methods have made significant progress in estimat-
ing egocentric poses. However, when taking account of the
interaction between the human body and the surrounding
environment, they still suffer from artifacts that contrast the
physics plausibility, including body-environment penetra-
tions or body floating (see the EgoPW results in Fig. 1),
which is mostly ascribed to the ambiguity caused by the
self-occluded and highly distorted human body in the ego-
centric view. This problem will render restrictions on sub-
sequent applications including action recognition, human-
object interaction recognition, and motion forecasting.
To address this issue, we propose a scene-aware pose
estimation framework that leverages the scene context to
constrain the prediction of an egocentric pose. This frame-
work produces accurate and physically plausible 3D human
body poses from a single egocentric image, as illustrated in
Fig. 1. Thanks to the wide-view fisheye camera mounted
on the head, the scene context can be easily obtained even
with only one egocentric image. To this end, we train an
egocentric depth estimator to predict the depth map of the
surrounding scene. In order to mitigate the occlusion caused
by the human body, we predict the depth map including the
visible human and leverage a depth-inpainting network to
recover the depth behind the human body.
Next, we combine the projected 2D pose features and
scene depth in a common voxel space and regress the 3D
body pose heatmaps with a V2V network [22]. The 3D
voxel representation projects the 2D poses and depth in-
formation from the distorted fisheye camera space to the
canonical space, and further provides direct geometric con-
nection between 2D image features and 3D scene geome-
try. This aggregation of 2D image features and 3D scene
geometry helps the V2V network learn the rela-
tive position and potential interactions between the human
body joints and the surrounding environment and further
enables the prediction of plausible poses under the scene
constraints.
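A simplified sketch of this voxel aggregation step is given below: 2D image features and the estimated scene depth are sampled at precomputed voxel-to-image projections and stacked into a volume that a small 3D CNN (standing in for V2V) can consume. The fisheye projection is assumed to be precomputed from the camera calibration, and the occupancy encoding, shapes, and toy network are illustrative rather than the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_voxel_features(feat_2d, scene_depth, voxel_uv, voxel_depth):
    """Lift 2D image features and estimated scene depth into a voxel volume.

    feat_2d:     (B, C, H, W) 2D pose features from the fisheye image.
    scene_depth: (B, 1, H, W) estimated (inpainted) scene depth map.
    voxel_uv:    (B, D, Hv, Wv, 2) image coordinates of each voxel centre in
                 [-1, 1], assumed precomputed from the calibrated fisheye model.
    voxel_depth: (B, D, Hv, Wv) distance of each voxel centre from the camera.
    Returns a (B, C+1, D, Hv, Wv) volume: sampled features plus a channel
    marking voxels that lie in front of the estimated scene surface.
    """
    B, D, Hv, Wv, _ = voxel_uv.shape
    grid = voxel_uv.reshape(B, D * Hv, Wv, 2)
    feats = F.grid_sample(feat_2d, grid, align_corners=True).reshape(B, -1, D, Hv, Wv)
    surface = F.grid_sample(scene_depth, grid, align_corners=True).reshape(B, 1, D, Hv, Wv)
    free_space = (voxel_depth.unsqueeze(1) < surface).float()   # voxel in front of the scene
    return torch.cat([feats, free_space], dim=1)


class TinyV2V(nn.Module):
    """A toy stand-in for the V2V network regressing per-joint 3D heatmaps."""

    def __init__(self, in_ch, num_joints=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, num_joints, 1),
        )

    def forward(self, volume):
        return self.net(volume)   # (B, J, D, Hv, Wv)


if __name__ == "__main__":
    B, C, H, W, D, Hv, Wv = 1, 16, 128, 128, 24, 24, 24
    vol = build_voxel_features(torch.rand(B, C, H, W), torch.rand(B, 1, H, W) * 5,
                               torch.rand(B, D, Hv, Wv, 2) * 2 - 1, torch.rand(B, D, Hv, Wv) * 5)
    print(TinyV2V(C + 1)(vol).shape)  # torch.Size([1, 15, 24, 24, 24])
```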
Since no available dataset can be used to train these networks, we propose EgoGTA, a synthetic dataset based on
the motion sequences of GTA-IM [3], and EgoPW-Scene,
an in-the-wild dataset based on EgoPW [35]. Both of the
datasets contain body pose labels and scene depth map la-
bels for each egocentric frame.
To better evaluate the relationship between estimated
egocentric pose and scene geometry, we collected a new
test dataset containing ground truth joint positions in the
egocentric view. The evaluation results on the new dataset,
along with results on datasets in Wang et al. [36] and
Mo2Cap2[39] demonstrate that our method significantly
outperforms existing methods both quantitatively and qual-
itatively. We also qualitatively evaluate our method on in-
the-wild images. The predicted 3D poses are accurate and
plausible even in challenging real-world scenes. To summa-
rize, our contributions are listed as follows:
• The first scene-aware egocentric human pose estima-
tion framework that predicts accurate and plausible
egocentric pose with the awareness of scene context;
• Synthetic and in-the-wild egocentric datasets contain-
ing egocentric pose labels and scene geometry labels;1
• New depth estimation and inpainting networks to
predict the scene depth map behind the human body;
• By leveraging a voxel-based representation of body
pose features and scene geometry jointly, our method
outperforms the previous approaches and generates plausible poses considering the scene context.

1 Datasets are released in our project page. Meta did not access or process the data and is not involved in the dataset release.
|
Wu_Spatiotemporal_Self-Supervised_Learning_for_Point_Clouds_in_the_Wild_CVPR_2023
|
Abstract
Self-supervised learning (SSL) has the potential to ben-
efit many applications, particularly those where manually
annotating data is cumbersome. One such situation is the
semantic segmentation of point clouds. In this context, ex-
isting methods employ contrastive learning strategies and
define positive pairs by performing various augmentation
of point clusters in a single frame. As such, these meth-
ods do not exploit the temporal nature of LiDAR data. In
this paper, we introduce an SSL strategy that leverages pos-
itive pairs in both the spatial and temporal domain. To this
end, we design (i) a point-to-cluster learning strategy that
aggregates spatial information to distinguish objects; and
(ii) a cluster-to-cluster learning strategy based on unsu-
pervised object tracking that exploits temporal correspon-
dences. We demonstrate the benefits of our approach via
extensive experiments performed by self-supervised train-
ing on two large-scale LiDAR datasets and transferring the
resulting models to other point cloud segmentation bench-
marks. Our results evidence that our method outperforms
the state-of-the-art point cloud SSL methods.1
|
1. Introduction
Semantic segmentation from LiDAR point clouds can
be highly beneficial in practical applications, e.g., for self-
driving vehicles to safely interact with their surroundings.
Nowadays, state-of-the-art methods [13, 36, 46] achieve
this with deep neural networks. While effective, the
training of such semantic segmentation networks requires
large amounts of annotated data, which is prohibitively
costly to acquire, particularly for point-level LiDAR anno-
tations [45]. By contrast, with the rapid proliferation of self-
driving vehicles, large amounts of unlabeled LiDAR data
are generated. Here, we develop a method to exploit such
unlabeled data in a self-supervised learning framework.
Self-supervised learning (SSL) aims to learn features
without any human annotations [1, 2, 22, 26, 33, 35, 40, 45]
but so that they can be effectively used for fine-tuning on a
downstream task with a small number of labeled samples.

1 Our code and pretrained models will be found at https://github.com/YanhaoWu/STSSL. Correspondence to Ke Wei.

Figure 1. Our method vs existing ones. (Top) Previous methods create positive pairs for SSL by applying different augmentations, τ1 and τ2 (e.g., random flipping, clipping), to a single frame. (Bottom) By contrast, we leverage both spatial and temporal information via a point-to-cluster and an inter-frame SSL strategy. Points in the same color are from the same cluster in the latent space.
This is achieved by defining a pre-task that does not re-
quire annotations. While many pre-tasks have been pro-
posed [27], contrastive learning has nowadays become a
highly popular choice [30,33,40,41,45]. In general, it aims
to maximize the similarity of positive pairs while potentially
minimizing that of negative ones.

Figure 2. Cars in the same frame but under different illumination angles. Note that the main source of difference between the two instance point clouds arises from the different illumination angles.

In this context, most of the point cloud SSL literature focuses on indoor scenes, for
which relatively dense point clouds are available. Unfor-
tunately, for outdoor scenes, such as the ones we consider
here, the data is more complex and much sparser, and cre-
ating effective pairs remains a challenge.
Several approaches [33, 45] have nonetheless been pro-
posed to perform SSL on outdoor LiDAR point cloud data.
As illustrated in the top portion of Fig. 1, they construct
positive pairs of point clusters or scenes by applying aug-
mentations to a single frame. As such, they neglect the tem-
poral information of the LiDAR data. By contrast, in this
paper, we introduce an SSL approach to LiDAR point cloud
segmentation based on extracting effective positive pairs in
both the spatial and temporal domain.
To achieve this without requiring any pose sensor as
in [24, 40], we introduce (i) a point-to-cluster (P2C) SSL
strategy that maximizes the similarity between the features
encoding a cluster and those of its individual points, thus
encouraging the points belonging to the same object to be
close in feature space; (ii) a cluster-level inter-frame self-
supervised learning strategy that tracks an object across
consecutive frames in an unsupervised manner and encour-
ages feature similarity between the different frames. These
two strategies are depicted in the bottom portion of Fig. 1.
Note that the illumination angle of one object seen in two
different frames typically differs. As shown in Fig. 2, this
is also the main source of difference between two objects
of the same class in the same frame. Therefore, our inter-
frame SSL strategy lets us encode not only temporal infor-
mation, but also the fact that points from different objects
from the same class should be close to each other in feature
space. As simulating different illumination angles via data
augmentation is challenging, our approach yields positive
pairs that better reflect the intra-class variations in LiDAR
point clouds than existing single-frame methods [33, 45].
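To make the point-to-cluster idea concrete, the following is a minimal sketch of an InfoNCE-style point-to-cluster objective, assuming per-point backbone features and cluster assignments obtained from an unsupervised clustering step; the function name, temperature, and prototype construction are illustrative and not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def point_to_cluster_loss(point_feats, cluster_ids, temperature=0.1):
    """Pull each point feature toward the prototype (mean feature) of its own
    cluster and away from the prototypes of other clusters.

    point_feats : (N, D) per-point features from the backbone
    cluster_ids : (N,)   integer cluster label per point, from unsupervised clustering
    """
    point_feats = F.normalize(point_feats, dim=1)
    clusters = torch.unique(cluster_ids)                     # sorted unique cluster labels
    protos = torch.stack([point_feats[cluster_ids == c].mean(dim=0) for c in clusters])
    protos = F.normalize(protos, dim=1)
    logits = point_feats @ protos.t() / temperature          # (N, K) point-to-cluster similarity
    targets = torch.searchsorted(clusters, cluster_ids)      # index of each point's own cluster
    return F.cross_entropy(logits, targets)
```

A cluster-to-cluster temporal term of the same flavor could then compare cluster prototypes of tracked objects across consecutive frames.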
Our contributions can be summarized as follows:
• We introduce an SSL strategy for point cloud segmen-
tation based only on positive pairs. It does not require
any external information, such as pose, GPS, and IMU.
• We propose a novel Point-to-Cluster (P2C) training
paradigm that combines the advantages of point-level
and cluster-level representations to learn a structured
point-level embedding space.
• We introduce the use of cluster-level inter-frame self-
supervised learning on point clouds generated by a Li-
DAR sensor, which introduces a new way to integrate
temporal information into SSL.
Our experiments on several datasets, including KITTI [17],
nuScenes [5], SemanticKITTI [4] and SemanticPOSS [34],
evidence that our method outperforms the state-of-the-art
SSL techniques for point cloud data.
|
VS_Instance_Relation_Graph_Guided_Source-Free_Domain_Adaptive_Object_Detection_CVPR_2023
|
Abstract
Unsupervised Domain Adaptation (UDA) is an effective
approach to tackle the issue of domain shift. Specifically,
UDA methods try to align the source and target representa-
tions to improve generalization on the target domain. Fur-
ther, UDA methods work under the assumption that the
source data is accessible during the adaptation process.
However, in real-world scenarios, the labelled source data
is often restricted due to privacy regulations, data transmis-
sion constraints, or proprietary data concerns. The Source-
Free Domain Adaptation (SFDA) setting aims to alleviate
these concerns by adapting a source-trained model for the
target domain without requiring access to the source data.
In this paper, we explore the SFDA setting for the task of
adaptive object detection. To this end, we propose a novel
training strategy for adapting a source-trained object de-
tector to the target domain without source data. More pre-
cisely, we design a novel contrastive loss to enhance the
target representations by exploiting the objects relations for
a given target domain input. These object instance rela-
tions are modelled using an Instance Relation Graph (IRG)
network, which is then used to guide the contrastive repre-
sentation learning. In addition, we utilize a student-teacher
framework to effectively distill knowledge from the source-trained
model to the target domain. Extensive experiments on multiple ob-
ject detection benchmark datasets show that the proposed
approach is able to efficiently adapt source-trained object
detectors to the target domain, outperforming state-of-the-
art domain adaptive detection methods. Code and models
are provided in https://viudomain.github.io/irg-sfda-web/ .
|
1. Introduction
In recent years, object detection has seen tremendous
advancements due to the rise of deep networks [ 12,42,
44,45,53,79]. The major contributor to this success is
the availability of large-scale annotated detection datasets
[10,13,15,43,73], as it enables the supervised training
of deep object detector models. However, these models
Figure 1. Left: Supervised training of detection model on the source domain. Right: Source-Free Domain Adaptation where a source-trained model is adapted to the target domain in the absence of source data with pseudo-label self-training and proposed Instance Relation Graph (IRG) network guided contrastive loss.
often have poor generalization when deployed in visual
domains not encountered during training. In such cases,
most works in the literature follow the Unsupervised Do-
main Adaptation (UDA) setting to improve generalization
[7,14,23,24,57,62]. Specifically, UDA methods aim to
minimize the domain discrepancy by aligning the feature
distribution of the detector model between source and tar-
get domain [ 9,19,28,56,59]. To perform feature align-
ment, UDA methods require simultaneous access to the la-
beled source and unlabeled target data. However in practi-
cal scenarios, the access to source data is often restricted
due to concerns related to privacy/safety, data transmis-
sion, data proprietary etc. For example, consider a detec-
tion model trained on large-scale source data, that performs
poorly when deployed in new devices having data with dif-
ferent visual domains. In such cases, it is far more effi-
cient to transmit the source-trained detector model ( ∼500-
1000MB) for adaptation rather than transmitting the source
data (∼10-100GB) to these new devices [ 27,37]. Moreover,
transmitting only source-trained model alleviates many pri-
vacy/safety, data proprietary concerns as well [ 41,47,70].
Hence, adapting the source-trained model to the target do-
main without having access to source data is essential in the
case of practical deployment of detection models. This mo-
tivates us to study Source-Free Domain Adaptation (SFDA)
setting for adapting object detectors (illustrated in Fig. 1).
The SFDA is a more challenging setting than UDA.
Specifically, on top of having no labels for the target data,
(a) Prediction (b) Ground truth
Figure 2. (a) Object predictions by Cityscapes-trained model
on the FoggyCityscapes image. (b) Corresponding ground truth.
Here, the proposals around the bus instance have inconsistent pre-
dictions, indicating that instance features are prone to large shift
in the feature space, for a small shift in the proposal location.
the source data is not accessible during adaptation. There-
fore, most SFDA methods for detection consider train-
ing with pseudo-labels generated by source-trained model
[27,40]. During our initial SFDA training experiments,
we identified two key challenges. Firstly, noisy pseudo-
labels generated by the source-trained model due to domain
shift can result in suboptimal distillation of target domain
information into the source-trained model [ 11,46]. Sec-
ondly, Fig. 2shows object proposals for an image from
FoggyCityscapes, predicted by a detector model trained on
Cityscapes. Here, all the proposals have Intersection-over-
Union>0.9 with respective ground-truth bounding boxes
and each proposal is assigned a prediction with a confidence
score. Noticeably, the proposals around the bus instance
have different predictions, e.g., car with 18%, truck with
93%, and bus with 29% confidence. This indicates that the
pooled features are prone to a large shift in the feature space
for a small shift in the proposal location. This is because,
the source-trained model representations would tend to be
biased towards source data, resulting in weak representa-
tion for the target data. Therefore, we consider two major
challenges in SFDA training: 1) effectively distilling target do-
main information into the source-trained model; and 2) enhancing
the target domain feature representations.
Motivated by [ 46], we utilize the mean-teacher [ 61] frame-
work to effectively distill target domain knowledge into the
source-trained model. However, the key challenge of en-
hancing the target domain feature representations remained.
To address this, we turned to contrastive representation
learning (CRL) methods, which have been shown to learn high-
quality representations from images in an unsupervised
manner [ 5,6,69]. CRL methods achieve this by forcing
representations to be similar under multiple views (or aug-
mentations) of an anchor image and dissimilar to all other
images. In classification, the CRL methods assume that
each image contains only one object. On the contrary,
for object detection, each image is highly likely to have
multiple object instances. Furthermore, the CRL train-
ing also requires large batch sizes and multiple views to
Figure 3. (a) Class agnostic object proposals generated by Region Proposal Network (RPN). (b) Cropping out RPN proposals will provide multiple contrastive views of an object instance. We utilize this to improve target domain feature representations through RPN-view contrastive learning. However, as RPN proposals are class agnostic, it is challenging to form positive (same class)/negative pairs (different class), which is essential for CRL.
learn high-quality representations, which incurs a very high
GPU/memory cost, as detection models are computation-
ally expensive. To circumvent these issues, we propose
an alternative strategy which exploits the architecture of
the detection model like Faster-RCNN [ 54]. Interestingly,
the proposals generated by the Region Proposal Network
(RPN) of a Faster-RCNN essentially provide multiple views
for any object instance as shown in Fig. 3(a). In other
words, the RPN module provides instance augmentation
for free , which could be exploited for CRL, as shown in
Fig. 3(b). However, RPN predictions are class agnos-
tic and without the ground-truth annotations for target do-
main, it is impossible to know which of these proposals
would form positive (same class)/negative pairs (different
class), which is essential for CRL. To this end, we propose
a Graph Convolution Network (GCN) based network that
models the inter-instance relations for generated RPN pro-
posals. Specifically, each node corresponds to a proposal
and the edges represent the similarity relations between the
proposals. These learned similarity relations are utilized to
extract information regarding which proposals would form
positive/negative pairs and are used to guide CRL. By doing
so, we show that such graph-guided contrastive representa-
tion learning is able to enhance representations for the target
data. Our contributions are summarized as follows:
• We investigate the problem of source-free domain adap-
tation for object detection and identify some of the major
challenges that need to be addressed.
• We introduce an Instance Relation Graph (IRG) frame-
work to model the relationship between proposals gener-
ated by the region proposal network.
• We propose a novel contrastive loss which is guided by
the IRG network to improve the feature representations
for the target data.
• The effectiveness of the proposed method is evaluated on
multiple object detection benchmarks comprising visu-
ally distinct domains. Our method outperforms existing
source-free domain adaptation methods and many unsu-
pervised domain adaptation methods.
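As a concrete illustration of the graph-guided pairing described above, the sketch below builds a toy relation matrix over RPN proposal features with a single graph-convolution step and uses a similarity threshold to pick positive/negative pairs for a contrastive loss; the layer sizes, threshold, and loss form are assumptions for illustration, not the paper's exact IRG design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceRelationGraph(nn.Module):
    """Toy graph module over RPN proposal features: one graph-convolution step
    followed by a pairwise similarity head used to pick contrastive pairs."""

    def __init__(self, feat_dim=256, hidden_dim=128):
        super().__init__()
        self.gcn = nn.Linear(feat_dim, hidden_dim)

    def forward(self, proposal_feats):
        # proposal_feats: (P, D) RoI-pooled features of class-agnostic proposals
        x = F.normalize(proposal_feats, dim=1)
        adj = F.softmax(x @ x.t(), dim=1)           # soft adjacency from feature similarity
        x = F.relu(self.gcn(adj @ proposal_feats))  # aggregate neighbours, then transform
        x = F.normalize(x, dim=1)
        return x @ x.t()                            # (P, P) learned relation matrix

def graph_guided_contrastive_loss(feats, relation, pos_thresh=0.8, temperature=0.2):
    """Treat proposal pairs with a high learned relation as positives, the rest as negatives."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature
    pos_mask = (relation > pos_thresh).float()
    pos_mask.fill_diagonal_(0)                      # a proposal is not its own positive
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    denom = pos_mask.sum(dim=1).clamp(min=1)
    return -(pos_mask * log_prob).sum(dim=1).div(denom).mean()
```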
|
Wen_Highly_Confident_Local_Structure_Based_Consensus_Graph_Learning_for_Incomplete_CVPR_2023
|
Abstract
Graph-based multi-view clustering has attracted exten-
sive attention because of the powerful clustering-structure
representation ability and noise robustness. Considering
the reality of a large amount of incomplete data, in this pa-
per, we propose a simple but effective method for incomplete
multi-view clustering based on consensus graph learning,
termed as HCLS CGL. Unlike existing methods that uti-
lize graph constructed from raw data to aid in the learn-
ing of consistent representation, our method directly learns
a consensus graph across views for clustering. Specifi-
cally, we design a novel confidence graph and embed it to
form a confidence structure driven consensus graph learn-
ing model. Our confidence graph is based on an intu-
itive similar-nearest-neighbor hypothesis, which does not
require any additional information and can help the model
to obtain a high-quality consensus graph for better cluster-
ing. Numerous experiments are performed to confirm the
effectiveness of our method.
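As a rough illustration of the similar-nearest-neighbor idea behind the confidence graph, the sketch below scores each edge by the fraction of k-nearest neighbors two samples share; the choice of k, the Euclidean base distance, and the normalization are assumptions, not the paper's exact construction.

```python
import numpy as np

def confidence_graph(X, k=10):
    """Score each edge (i, j) by the fraction of k-nearest neighbours that
    samples i and j share (shared-nearest-neighbour confidence).

    X : (n, d) feature matrix of one view
    returns : (n, n) confidence matrix with entries in [0, 1]
    """
    # pairwise squared Euclidean distances
    sq = np.sum(X**2, axis=1, keepdims=True)
    dist = sq + sq.T - 2.0 * X @ X.T
    np.fill_diagonal(dist, np.inf)                 # exclude a sample from its own k-NN list
    knn = np.argsort(dist, axis=1)[:, :k]          # (n, k) neighbour indices

    n = X.shape[0]
    member = np.zeros((n, n), dtype=np.int64)      # member[i, j] = 1 if j is a k-NN of i
    member[np.arange(n)[:, None], knn] = 1

    shared = member @ member.T                     # counts of shared neighbours
    return shared / float(k)
```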
|
1. Introduction
In today’s world, graphs are common tools to mine the
intrinsic structure of data. Graph learning, as a power-
ful data analysis approach, has attracted increasing atten-
tion over the past few years [38, 42]. More and more ma-
chine learning and data mining tasks attempt to adopt graph
learning to enhance the ability of structural representation
and performance [35, 41, 43]. As an emerging data analy-
sis and representation task, multi-view clustering also needs
to deeply mine the geometric structure information of data
samples [15,36,44]. That is also the core focus of this paper.
As we all know, multi-view data is collected from differ-
ent sources or perspectives, which is a diverse description of
the target, and this diversity endows the data with stronger
*Corresponding author.†Co-first authors.
Figure 1. Our key motivation: A novel confidence graph whose edges are computed by the nodes’ shared nearest neighbors.
discriminative and expressive power [19, 48]. For example,
in the webpage recommendation tasks, we can assess the
attributes of webpages from different elements such as au-
dio, video, image, and text; in the autonomous driving sce-
nario, ultrasonic radar, optical camera, and infrared radar
together provide data basis for road traffic analysis. These
different styles of multi-view data play an extremely impor-
tant role in real-world applications and provide a decision-
making basis for plentiful downstream tasks [4, 18]. To
better mine and utilize multi-view data, researchers have
proposed abundant multi-view learning methods. These
methods can be broadly divided into the following cate-
gories according to different technical routes: multiple ker-
nel kmeans based methods, canonical correlation analysis
(CCA) based methods, subspace learning based methods,
spectral clustering based methods, nonnegative matrix fac-
torization (NMF) based methods, and deep learning based
methods [2, 3, 5]. However, these conventional multi-view
clustering methods are built on the premise that each sam-
ple has all complete views. Obviously, this idealized as-
sumption is often violated in practice. Missing data is al-
most inevitable due to human oversight or the reasons from
the application scenario itself [17]. Numerous solutions
have been proposed for this incomplete multi-view cluster-
ing (IMC) problem. The classic PMVC (partial multi-view
clustering) [17] seeks a common space to connect two in-
complete views. Liu et al. proposed a matrix factoriza-
tion based model dubbed as LSIMVC [20], which directly
learns a consistent low-dimensional representation of multi-
view feature data and introduces prior indicator information
to he
|
Wang_Out-of-Distributed_Semantic_Pruning_for_Robust_Semi-Supervised_Learning_CVPR_2023
|
Abstract
Recent advances in robust semi-supervised learning
(SSL) typically filter out-of-distribution (OOD) information
at the sample level. We argue that an overlooked problem of robust SSL is its corrupted information at the semantic level,
practically limiting the development of the field. In this pa-
per, we take an initial step to explore and propose a unified
framework termed OOD Semantic Pruning (OSP), which aims at pruning OOD semantics out from in-distribution
(ID) features. Specifically, (i) we propose an aliasing OOD
matching module to pair each ID sample with an OOD sample with semantic overlap. (ii) We design a soft orthogonal-
ity regularization, which first transforms each ID feature by suppressing its semantic component that is collinear with the
paired OOD sample. It then forces the predictions before and after soft orthogonality decomposition to be consistent.
Being practically simple, our method shows a strong performance in OOD detection and ID classification on challenging benchmarks. In particular, OSP surpasses the pre-
vious state-of-the-art by 13.7% on accuracy for ID classifi-
cation and 5.9% on AUROC for OOD detection on the TinyIm-
ageNet dataset. The source code is publicly available at https://github.com/rain305f/OSP.
|
1. Introduction
Deep neural networks have obtained impressive per-
formance on various tasks [ 30,44,46]. Their success is
partially dependent on a large amount of labeled training data, of which the acquisition is expensive and time-consuming [ 20,25,40,45]. A prevailing way to reduce the
dependency on human annotation is semi-supervised learning (SSL). It learns informative semantics using annotation-
*Equal contribution.
†Corresponding author.
Figure 1. (a) Intuitive diagram of OOD Semantic Pruning (OSP), which prunes OOD semantics out from ID features. (b) t-SNE visualization [ 52] from the baseline [ 23]. (c) t-SNE visualization from our OSP model. The colorful dots denote ID features, while the black dots mark OOD features. The dots with the same color represent the features of the same class. Here, our OSP and the baseline are trained on CIFAR100 with 100 labeled data per class and 60% OOD in unlabeled data.
free and acquisition-easy unlabeled data to extend the la-
bel information from limited labeled data and has achieved promising results in various tasks [ 38,50,51,54].
Unfortunately, classical SSL relies on a basic assump-
tion that the labeled and unlabeled data are collected from
the same distribution, which is difficult to hold in real-world applications. In most practical cases, unlabeled data usually
contains classes that are not seen in the labeled data. Exist-
ing works [ 12,17,21,40] have shown that training the SSL
model with these OOD samples in unlabeled data leads to a large degradation in performance. To solve this problem, robust semi-supervised learning (Robust SSL) has been in-
vestigated to train a classification model that performs sta-
bly when the unlabeled set is corrupted by OOD samples.
Typical methods focus on discarding OOD information at
the sample level, that is, detecting and filtering OOD samples to purify the unlabeled set [ 12,17,21,57]. However,
these methods ignore the semantic-level pollution caused
by the classification-useless semantics from OOD samples,
which improperly disturbs the feature distribution learned from ID samples, eventually resulting in weak ID and OOD discrimination and low classification performance. We provide an example to explain such a problem in Fig. 1. As we
can see, due to the semantics of Orchid in OOD examples,
the model pays too much attention to the background and misclassifies the Butterfly as Beetle.
In this paper, we propose the Out-of-distributed Semantic
Pruning (OSP) method to solve the problem mentioned above and achieve effective robust SSL.
Concretely, our OSP approach consists of two main
modules. We first develop an aliasing OOD matching module to pair each ID sample with an OOD sample with which
it has feature aliasing. Secondly, we propose a soft orthogonality regularization, which constrains the predictions of ID
samples to keep consistent before and after soft-orthogonal
decomposition according to their matching OOD samples.
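A minimal sketch of how these two modules could be realized is given below, assuming each ID sample is matched to its most similar OOD feature and that prediction consistency is enforced with a KL term; the nearest-neighbor matching, the projection formula, and the KL consistency are illustrative stand-ins rather than the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def match_ood(id_feats, ood_feats):
    """Pair every ID feature with the OOD feature it is most similar to
    (a stand-in for the aliasing OOD matching module)."""
    sim = F.normalize(id_feats, dim=1) @ F.normalize(ood_feats, dim=1).t()
    return ood_feats[sim.argmax(dim=1)]                 # (N, D) matched OOD features

def prune_ood_component(id_feats, matched_ood):
    """Remove from each ID feature the component collinear with its matched
    OOD feature (soft orthogonal decomposition)."""
    ood_dir = F.normalize(matched_ood, dim=1)
    coeff = (id_feats * ood_dir).sum(dim=1, keepdim=True)
    return id_feats - coeff * ood_dir

def soft_orthogonality_loss(classifier, id_feats, ood_feats):
    """Consistency between predictions made before and after pruning the OOD semantics."""
    pruned = prune_ood_component(id_feats, match_ood(id_feats, ood_feats))
    log_p = F.log_softmax(classifier(id_feats), dim=1)
    q = F.softmax(classifier(pruned), dim=1)
    return F.kl_div(log_p, q, reduction="batchmean")
```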
We evaluate the effectiveness of our OSP in extensive
robust semi-supervised image recognition benchmarks in-
cluding MNIST [ 53], CIFAR10 [ 29], CIFAR100 [ 29] and
TinyImageNet [ 15]. We show that our OSP obtains signifi-
cant improvements compared to state-of-the-art alternatives
(e.g., 13.7% and 15.0% on TinyImageNet with an OOD
ratio of 0.3 and 0.6, respectively). Besides, we also empirically demonstrate that OSP indeed increases the feature discrimination between ID and OOD samples. To summarize, the contributions of this work are as follows:
• To the best of our knowledge, we are the first to exploit
the OOD effects at the semantic level by regularizing ID features to be orthogonal to OOD features.
• We develop an aliasing OOD matching module that
adaptively pairs each ID sample with an OOD sample.
In addition, we propose a soft orthogonality regulariza-
tion to restrict ID and OOD features to be orthogonal.
• We conduct extensive experiments on four datasets,
i.e., MNIST, CIFAR10, CIFAR100, and TIN200, and
achieve new SOTA performance. Moreover, our analysis shows that the superiority of OSP lies in the enhanced discrimination between ID and OOD features.
discrimination between ID and OOD features.
|
Wang_CF-Font_Content_Fusion_for_Few-Shot_Font_Generation_CVPR_2023
|
Abstract
Content and style disentanglement is an effective way to
achieve few-shot font generation. It allows to transfer the
style of the font image in a source domain to the style de-
fined with a few reference images in a target domain. How-
ever, the content feature extracted using a representative
font might not be optimal. In light of this, we propose a con-
tent fusion module (CFM) to project the content feature into
a linear space defined by the content features of basis fonts,
which can take the variation of content features caused
by different fonts into consideration. Our method also al-
lows to optimize the style representation vector of reference
images through a lightweight iterative style-vector refine-
ment (ISR) strategy. Moreover, we treat the 1D projection of
a character image as a probability distribution and leverage
the distance between two distributions as the reconstruc-
tion loss (namely projected character loss, PCL). Compared
to L2 or L1 reconstruction loss, the distribution distance
pays more attention to the global shape of characters. We
*This work was done during an internship at Alibaba Group.
†Corresponding author.
have evaluated our method on a dataset of 300 fonts with
6.5k characters each. Experimental results verify that our
method outperforms existing state-of-the-art few-shot font
generation methods by a large margin. The source code
can be found at https://github.com/wangchi95/CF-Font.
|
1. Introduction
Few-shot font generation aims to produce characters of
a new font by transforming font images from a source do-
main to a target domain according to just a few reference
images. It can greatly reduce the labor of expert design-
ers to create a new style of fonts, especially for logographic
languages that contain multiple characters, such as Chinese
(over 60K characters), Japanese (over 50K characters), and
Korean (over 11K characters), since only several reference
images need to be manually designed. Therefore, font gen-
eration has wide applications in font completion for ancient
books and monuments, personal font generation, etc.
Recently, with the rapid development of convolu-
tional neural networks [22] and generative adversarial net-
works [9] (GAN), pioneers have made great progress in
generating gratifying logographic fonts. Zi2zi [38] intro-
duces pix2pix [14] method to generate complex charac-
ters of logographic languages with high quality, but it can-
not handle those fonts that do not appear in training (un-
seen fonts). For the few-shot font generation, many meth-
ods [3, 7, 31, 32, 34, 42, 47] verify that content and style dis-
entanglement is effective to convert the style of a character
in the source domain, denoted as source character , to the
target style embodied with reference images of seen or un-
seen fonts. The neural networks in these methods usually
have two branches to learn content and style features respec-
tively, and the content features are usually obtained with the
character image from a manually-chosen font, denoted as
source font . However, since it’s a difficult task to achieve
a complete disentanglement between content and style fea-
tures [17, 21], the choice of the font for content-feature en-
coding influences the font generation results substantially.
For instance, Song and Kai are commonly selected as the
source font [20, 28, 31, 42, 43, 47]. While such choices are
effective in many cases, the generated images sometimes
contain artifacts, such as incomplete and unwanted strokes.
The main contribution of this paper is a novel content
feature fusion scheme to mitigate the influence of incom-
plete disentanglement by exploring the synchronization of
content and style features, which significantly enhances the
quality of few-shot font generation. Specifically, we design
a content fusion module (CFM) to take the content features
of different fonts into consideration during training and in-
ference. It is realized by computing the content feature of
a character of a target font through linearly blending con-
tent features of the corresponding characters in the auto-
matically determined basis fonts, and the blending weights
are determined through a carefully designed font-level dis-
tance measure. In this way, we can form a linear cluster
for the content feature of a semantic character, and explore
how to leverage the font-level similarity to seek an opti-
mized content feature in this cluster to improve the quality
of generated characters.
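The sketch below illustrates one way such a content fusion could be computed, assuming the font-level distance is taken between style vectors and turned into blending weights with a softmax; the weighting scheme and tensor shapes are assumptions for illustration, not the exact CFM.

```python
import torch
import torch.nn.functional as F

def content_fusion(basis_content_feats, target_style_vec, basis_style_vecs, temperature=0.1):
    """Blend the content features of M basis fonts into one content feature,
    with blending weights derived from a font-level distance.

    basis_content_feats : (M, C, H, W) content features of one character in M basis fonts
    target_style_vec    : (S,)         style vector of the target font
    basis_style_vecs    : (M, S)       style vectors of the M basis fonts
    """
    # font-level distance -> softmax blending weights (closer fonts weigh more)
    dist = torch.cdist(target_style_vec[None], basis_style_vecs)[0]    # (M,)
    weights = F.softmax(-dist / temperature, dim=0)                    # (M,)
    # linear combination inside the linear cluster of content features
    return (weights[:, None, None, None] * basis_content_feats).sum(dim=0)
```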
In addition, we introduce an iterative style-vector refine-
ment (ISR) strategy to find a better style feature vector for
font-level style representation. For each font, we average
the style vectors of reference images and treat it as a learn-
able parameter. Afterward, we fine-tune the style vector
with a reconstruction loss, which further improves the qual-
ity of the generated fonts.
Most font-generation algorithms [3, 20, 31, 32, 38, 42]
choose L1 loss as the character image reconstruction loss.
However, L1 or L2 loss mainly supervises per-pixel accu-
racy and is easily disturbed by the local misalignment of
details. Hence, we employ a distribution-based projected
character loss (PCL) to measure the shape difference be-
tween characters. Specifically, by treating the 1D projec-
tion of 2D character images as a 1D probability distribution, PCL computes the distribution distance to pay more atten-
tion to the global properties of character shapes, resulting in
a large improvement in skeleton topology transfer results.
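As one concrete instance of such a distribution-based loss, the sketch below projects each character image onto its rows and columns, normalizes the projections into distributions, and compares them with the 1D Wasserstein-1 distance (the mean absolute gap of the CDFs); the excerpt does not pin down which distribution distance is used, so this particular choice is an assumption.

```python
import torch

def projected_character_loss(pred, target, eps=1e-8):
    """Compare two character images by the distance between their 1D projections,
    treated as probability distributions; here the 1D Wasserstein-1 distance.

    pred, target : (B, 1, H, W) grayscale character images in [0, 1]
    """
    loss = 0.0
    for dim in (2, 3):                                  # project onto columns, then rows
        p = pred.sum(dim=dim).flatten(1)                # (B, W) or (B, H)
        q = target.sum(dim=dim).flatten(1)
        p = p / (p.sum(dim=1, keepdim=True) + eps)      # normalize to distributions
        q = q / (q.sum(dim=1, keepdim=True) + eps)
        cdf_gap = torch.cumsum(p - q, dim=1).abs()
        loss = loss + cdf_gap.mean()
    return loss
```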
The CFM can be embedded into the few-shot font gen-
eration task to enhance the quality of generated results. Ex-
tensive experiments verify that our method, referred to as
CF-Font, remarkably outperforms state-of-the-art methods
on both seen and unseen fonts. Fig. 1 reveals that our
method can generate high-quality fonts of various styles.
|
Wang_Multi-Agent_Automated_Machine_Learning_CVPR_2023
|
Abstract
In this paper, we propose multi-agent automated ma-
chine learning (MA2ML) with the aim to effectively han-
dle joint optimization of modules in automated machine
learning (AutoML). MA2ML takes each machine learning
module, such as data augmentation (AUG), neural archi-
tecture search (NAS), or hyper-parameters (HPO), as an
agent and the final performance as the reward, to formu-
late a multi-agent reinforcement learning problem. MA2ML
explicitly assigns credit to each agent according to its
marginal contribution to enhance cooperation among mod-
ules, and incorporates off-policy learning to improve search
efficiency. Theoretically, MA2ML guarantees monotonic
improvement of joint optimization. Extensive experiments
show that MA2ML yields the state-of-the-art top-1 accuracy
on ImageNet under constraints of computational cost, e.g.,
79.7%/80.5% with FLOPs fewer than 600M/800M. Exten-
sive ablation studies verify the benefits of credit assignment
and off-policy learning of MA2ML.
|
1. Introduction
Automated machine learning (AutoML) aims to find
high-performance machine learning (ML) pipelines without
human effort involvement. The main challenge of AutoML
lies in finding optimal solutions in huge search spaces.
In recent years, reinforcement learning (RL) has been
validated to be effective to optimize individual AutoML
modules, such as data augmentation (AUG) [4], neural ar-
chitecture search (NAS) [25, 53, 54], and hyper-parameter
optimization (HPO) [38]. However, when facing the huge
search space (Figure 1 left) and the joint optimization of
these modules, the efficiency and performance challenges
remain.
Through experiments, we observed that among AutoML
modules there exists a cooperative relationship that facilitates
the joint optimization of modules. For example, a small net-
work (ResNet-34) with specified data augmentation and op-
†The work was done during his Master program at Peking University.
‡Correspondence to [email protected]
Figure 1. Search spaces of machine learning pipelines. Left: single agent controls all modules, and the huge search space makes it ineffective to learn. Mid: each agent controls one module, and the learning difficulty is reduced by introducing MA2ML. Right: MA2ML guarantees monotonic improvement of the searched pipeline, where $p_k$ and $R(p_k)$ denote the k-th searched pipeline and its expected performance, respectively.
timized hyper-parameters significantly outperforms a large
one (ResNet-50) with default training settings (76.8% vs.
76.1%). In other words, good AUG and HPO alleviate
the need for NAS to some extent. Accordingly, we pro-
pose multi-agent automated machine learning (MA2ML),
which explores the cooperative relationship towards joint
optimization of ML pipelines. In MA2ML, ML modules
are defined as RL agents (Figure 1 mid), which take ac-
tions to jointly maximize the reward, so that the training ef-
ficiency and test accuracy are significantly improved. Spe-
cially, we introduce credit assignment to differentiate the
contribution of each module, such that all modules can be
simultaneously updated. To handle both continuous ( e.g.,
learning rate) and discrete ( e.g., architecture) action spaces,
MA2ML employs a multi-agent actor-critic method, where
a centralized Q-function is learned to evaluate the joint ac-
tion. Besides, to further improve search efficiency, MA2ML
adopts off-policy learning to exploit historical samples for
policy updates.
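A minimal sketch of marginal-contribution credit assignment with a centralized Q-function is given below; the q_function callable, the action structure, and the counterfactual baseline actions are assumptions for illustration, not MA2ML's exact estimator.

```python
import torch

def marginal_credit(q_function, state, joint_action, baseline_actions):
    """Credit each module (agent) by its marginal contribution: the drop in the
    centralized Q-value when its action is replaced by a baseline action while
    the other modules' actions are kept fixed.

    q_function       : callable (state, joint_action) -> scalar Q-value (assumed)
    joint_action     : list of per-agent actions, e.g. [a_aug, a_nas, a_hpo]
    baseline_actions : same structure, e.g. default or average actions
    """
    q_joint = q_function(state, joint_action)
    credits = []
    for i in range(len(joint_action)):
        counterfactual = list(joint_action)
        counterfactual[i] = baseline_actions[i]     # ablate agent i only
        credits.append(q_joint - q_function(state, counterfactual))
    return torch.stack(credits)                     # one credit per agent
```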
MA2ML is justified theoretically and experimentally.
Theoretically, we prove that MA2ML guarantees mono-
tonic policy improvement (Figure 1 right ),i.e., the per-
formance of the searched pipeline monotonically improves
in expectation. This enables MA2ML to fit the joint op-
timization problem and be adaptive to all modules in the
ML pipeline, potentially achieving full automation. Exper-
imentally, we take the combination of individual RL-based
modules to form MA2ML-Lite, and compare their perfor-
mance on ImageNet [31] and CIFAR-10/100 [20] datasets.
To better balance performance and computational cost, we
add constraints of FLOPs in the experiment on ImageNet.
Experiments show that MA2ML substantially outperforms
MA2ML-Lite w.r.t. both accuracy and sample efficiency,
and MA2ML achieves remarkable accuracy compared with
recent methods.
Our contributions are summarized as follows:
• We propose MA2ML, which utilizes credit assignment
to differentiate the contributions of ML modules, pro-
viding a systematic solution for the joint optimization
of AutoML modules.
• We prove the monotonic improvement of module poli-
cies, which enables MA2ML to fit the joint optimiza-
tion problem and be adaptive to various modules in the
ML pipeline.
• MA2ML yields the state-of-the-art performance under
constraints of computational cost, e.g., 79.7%/80.5%
on ImageNet, with FLOPs fewer than 600M/800M,
validating the superiority of the joint optimization of
MA2ML.
|
Xue_Egocentric_Video_Task_Translation_CVPR_2023
|
Abstract
Different video understanding tasks are typically treated
in isolation, and even with distinct types of curated data
(e.g., classifying sports in one dataset, tracking animals
in another). However, in wearable cameras, the immer-
sive egocentric perspective of a person engaging with the
world around them presents an interconnected web of video
understanding tasks—hand-object manipulations, naviga-
tion in the space, or human-human interactions—that un-
fold continuously, driven by the person’s goals. We argue
that this calls for a much more unified approach. We pro-
pose EgoTask Translation (EgoT2), which takes a collec-
tion of models optimized on separate tasks and learns to
translate their outputs for improved performance on any or
all of them at once. Unlike traditional transfer or multi-
task learning, EgoT2’s “flipped design” entails separate
task-specific backbones and a task translator shared across
all tasks, which captures synergies between even heteroge-
neous tasks and mitigates task competition. Demonstrat-
ing our model on a wide array of video tasks from Ego4D,
we show its advantages over existing transfer paradigms
and achieve top-ranked results on four of the Ego4D 2022
benchmark challenges.1
|
1. Introduction
In recent years, the introduction of large-scale
video datasets ( e.g., Kinetics [ 6,33] and Something-
Something [ 22]) has enabled the application of powerful
deep learning models to video understanding and have
led to dramatic advances. These third-person datasets,
however, have overwhelmingly focused on the single task
of action recognition in trimmed clips [ 12,36,47,64].
Unlike curated third-person videos, our daily life involves
frequent and heterogeneous interactions with other hu-
mans, objects, and environments in the wild. First-person
videos from wearable cameras capture the observer’s
perspective and attention as a continuous stream. As such,
*Work done during an internship at FAIR, Meta AI.
1Project webpage: https : / / vision . cs . utexas . edu /
projects/egot2/ .
Figure 1. Given a set of diverse egocentric video tasks, the proposed EgoT2 leverages synergies among the tasks to improve each individual task performance. The attention maps produced by EgoT2 offer good interpretability on inherent task relations.
they are better equipped to reveal these multi-faceted,
spontaneous interactions. Indeed egocentric datasets, such
as EPIC-Kitchens [ 9] and Ego4D [ 23], provide suites
of tasks associated with varied interactions. However,
while these benchmarks have promoted a broader and
more heterogeneous view of video understanding, they
risk perpetuating the fragmented development of models
specialized for each individual task.
In this work, we argue that the egocentric perspective
offers an opportunity for holistic perception that can ben-
eficially leverage synergies among video tasks to solve all
problems in a unified manner. See Figure 1.
Imagine a cooking scenario where the camera wearer ac-
tively interacts with objects and other people in an environ-
ment while preparing dinner. These interactions relate to
each other: a hand grasping a knife suggests the upcoming
action of cutting; the view of a tomato on a cutting board
suggests that the object is likely to undergo a state transi-
tion from whole to chopped; the conversation may further
reveal the camera wearer’s ongoing and planned actions.
Apart from the natural relation among these tasks, egocen-
tric video’s partial observability (i.e., the camera wearer is
largely out of the field of view) further motivates us to seek
synergistic, comprehensive video understanding to leverage
complementary cues among multiple tasks.
Our goal presents several technical challenges for con-
ventional transfer learning (TL) [ 65] and multi-task learn-
ing (MTL) [ 63]. First, MTL requires training sets where
each sample includes annotations for all tasks [ 15,24,48,
53,55,62], which is often impractical. Second, egocen-
tric video tasks are heterogeneous in nature, requiring dif-
ferent modalities (audio, visual, motion), diverse labels
(e.g., temporal, spatial or semantic), and different tem-
poral granularities ( e.g., action anticipation requires long-
term observations, but object state recognition operates at
a few sparsely sampled frames)—all of which makes a
unified model design problematic and fosters specializa-
tion. Finally, while existing work advocates the use of
a shared encoder across tasks to learn general representa-
tions [ 3,18,26,32,39,44,45,51], the diverse span of ego-
centric tasks poses a hazard to parameter sharing which can
lead to negative transfer [ 21,24,38,53].
To address the above limitations, we propose EgoTask
Translation (EgoT2), a unified learning framework to ad-
dress a diverse set of egocentric video tasks together. EgoT2
is flexible and general in that it can handle separate datasets
for the different tasks; it takes video heterogeneity into ac-
count; and it mitigates negative transfer when tasks are not
strongly related. To be specific, EgoT2 consists of special-
ized models developed for individual tasks and a task trans-
lator that explicitly models inter-task and inter-frame rela-
tions. We propose two distinct designs: (1) task-specific
EgoT2 (EgoT2-s) optimizes a given primary task with the
assistance of auxiliary tasks (Figure 2(c)) while (2) task-
general EgoT2 (EgoT2-g) supports task translation for mul-
tiple tasks at the same time (Figure 2(d)).
Compared with a unified backbone across tasks [ 62],
adopting task-specific backbones preserves peculiarities of
each task ( e.g. different temporal granularities) and miti-
gates negative transfer since each backbone is optimized on
one task. Furthermore, unlike traditional parameter shar-
ing [ 51], the proposed task translator learns to “translate”
all task features into predictions for the target task by se-
lectively activating useful features and discarding irrelevant
ones. The task translator also facilitates interpretability by
explicitly revealing which temporal segments and which
subsets of tasks contribute to improving a given task.
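The sketch below shows a toy version of such a shared task translator, assuming frozen task-specific backbones whose features are projected to a common width and fused by a transformer encoder; the dimensions, number of heads, and mean-pooling head are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TaskTranslator(nn.Module):
    """Toy shared task translator: project frozen per-task backbone features to
    a common width, let a transformer encoder attend across tasks and frames,
    and decode a prediction for the task of interest."""

    def __init__(self, task_dims, d_model=256, num_classes=10, num_layers=2):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in task_dims])
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, task_feats):
        # task_feats: list of (B, T_i, D_i) token sequences, one per task-specific backbone
        tokens = torch.cat([p(f) for p, f in zip(self.proj, task_feats)], dim=1)
        fused = self.encoder(tokens)            # attention across tasks and frames
        return self.head(fused.mean(dim=1))     # prediction for the task of interest
```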
We evaluate EgoT2 on a diverse set of 7 egocentric
perception tasks from the world’s largest egocentric video
benchmark, Ego4D [ 23]. Its heterogeneous tasks extend
beyond mere action recognition to speaker/listener identi-
fication, keyframe localization, object state change classifi-
cation, long-term action anticipation, and others, and provide a perfect fit for our study.
Figure 2. (a) Conventional TL uses a backbone pretrained on the
source task followed by a head transferring supervision to the tar-
get task; (b) Traditional MTL consists of a shared backbone and
several task-specific heads; (c) EgoT2-s adopts task-specific back-
bones and optimizes the task translator for a given primary task;
(d) EgoT2-g jointly optimizes the task translator for all tasks.
Our results reveal inher-
ent task synergies, demonstrate consistent performance im-
provement across tasks, and offer good interpretability in
task translation. Among all four Ego4D challenges involved
in our task setup, EgoT2 outperforms all submissions to
three Ego4D-CVPR’22 challenges and achieves state-of-
the-art performance in one Ego4D-ECCV’22 challenge.
|
Xie_Toward_Stable_Interpretable_and_Lightweight_Hyperspectral_Super-Resolution_CVPR_2023
|
Abstract
For real applications, existing HSI-SR methods are not
only limited to unstable performance under unknown sce-
narios but also suffer from high computation consumption.
In this paper, we develop a new coordination optimiza-
tion framework for stable, interpretable, and lightweight
HSI-SR. Specifically, we create a positive cycle between fu-
sion and degradation estimation under a new probabilis-
tic framework. The estimated degradation is applied to fu-
sion as guidance for a degradation-aware HSI-SR. Under
the framework, we establish an explicit degradation estima-
tion method to tackle the indeterminacy and unstable per-
formance caused by the black-box simulation in previous
methods. Considering the interpretability in fusion, we in-
tegrate spectral mixing prior into the fusion process, which
can be easily realized by a tiny autoencoder, leading to
a dramatic release of the computation burden. Based on
the spectral mixing prior, we then develop a partial fine-
tune strategy to reduce the computation cost further. Com-
prehensive experiments demonstrate the superiority of our
method against the state-of-the-arts under synthetic and
real datasets. For instance, we achieve a 2.3dB promo-
tion on PSNR with 120×model size reduction and 4300×
FLOPs reduction under the CAVE dataset. Code is avail-
able in https://github.com/WenjinGuo/DAEM .
|
1. Introduction
Different from traditional optical images with a few
channels, hyperspectral images (HSIs) with tens to hun-
dreds of bands hold discriminative information about ma-
terials, leading to a great advantage in a wide range of ap-
plications, e.g., the monitoring and management of ecosys-
tems, biodiversity, and disasters [1–8]. However, the physi-
cal limitation in imaging causes a trade-off between spatial
*Equal contribution.
†Corresponding author.
Figure 1. Comparison among recent state-of-the-art (SOTA) meth-
ods and our method. We report the computational efficiency (Size,
FLOPs, and Time) in (a) and the distribution of two measurement
metrics (PSNR, SAM) under various degradations in (b). Obvi-
ously, our method is remarkably superior to others in lightweight,
fidelity, and stability.
and spectral resolution. HSIs suffer from low spa-
tial resolution in real applications. Therefore, hyperspectral
super-resolution (HSI-SR), which aims to promote the spa-
tial resolution of HSIs, has become a significant task, and
typically serves as a necessary pre-processing step for HSI ap-
plications, e.g., detection and classification [9–16].
Because common optical platforms (such as satellites and
airborne platforms) are equipped with both a hyperspectral
sensor and a multispectral sensor, HSIs and multispec-
tral images (MSIs) captured in the same scene are easy to
access. Fusion-based HSI-SR aims to estimate the desired
high resolution HSI (HR-HSI) from the corresponding low
resolution HSI (LR-HSI) and high resolution MSI (HR-
MSI). Naturally, HSI-SR can be modeled as a maximum
a posterior (MAP) estimation:
$$p(Z \mid X, Y) \propto p(X, Y \mid Z, \theta)\, p(Z \mid \phi), \qquad (1)$$
where Z ∈ R^{H×W×B} is the target HR-HSI with H, W and
B as its height, width and number of bands, respectively.
X ∈ R^{h×w×B} is the observed LR-HSI with h and w as
its height and width (h < H, w < W). Y ∈ R^{H×W×b}
is the HR-MSI with b bands (b < B). θ and ϕ are pa-
rameters in the likelihood and the prior, respectively. As
a pre-processing task, HSI-SR faces two challenges, high
fidelity recovery and efficient processing (i.e., low compu-
tational burden and fast processing). Referring to Eq. (1), we
analyse these two ingredients from three perspectives: the
likelihood term, the prior term, and coordination between
them.
Likelihood Term
The likelihood term reflects the degradation process, and
the widely accepted degradation model can be formulated
as:
$$X = (Z \ast C)\downarrow_s + N_X, \qquad Y = Z \times_3 R + N_Y, \qquad (2)$$
where C ∈ R^{s×s} and R ∈ R^{B×b} represent the PSF (point
spread function) and SRF (spectral response function), re-
spectively. ↓_s denotes spatial downsampling by a factor of s,
and ×_3 is the matrix product along the third dimension. Early
methods demand precise degradation parameters to opti-
mize the likelihood [17–19]. However, PSF and SRF are
varied and unknown in real scenarios, hence the high de-
mand for blind HSI-SR. In [20], two convolution blocks are
built to simulate the degradation process, which learn the
degradation in training sets and apply it to testing samples.
Unsupervised blind HSI-SR methods estimate degradation
in each image-pair individually [21–23]. Yao et al. [22] pro-
pose a spatial-spectral consistency to further constrain the
degradation estimation. Despite better applicability, recent
blind HSI-SR methods are limited by the DL-based estima-
tion. On one hand, the implicit modeling of degradation
introduces a black box into the optimization and brings uncertainty,
especially under volatile degradations. On the other hand, degra-
dation estimation is an inverse problem with multiple candi-
date solutions, and the common over-fitting of neural networks leads
to inaccurate and unstable performance.
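For reference, the degradation model of Eq. (2) can be written out as follows (noise terms omitted); the per-band grouped convolution for the PSF and the mode-3 product for the SRF are one straightforward realization, and an odd kernel size is assumed.

```python
import torch
import torch.nn.functional as F

def degrade(hr_hsi, psf, srf, scale):
    """Apply the degradation model of Eq. (2), without the noise terms:
    spatial blur + downsampling gives the LR-HSI, spectral response
    mixing gives the HR-MSI.

    hr_hsi : (1, B, H, W) high-resolution hyperspectral image
    psf    : (s, s)       point spread function kernel (odd s assumed)
    srf    : (B, b)       spectral response function
    """
    num_bands = hr_hsi.shape[1]
    kernel = psf[None, None].repeat(num_bands, 1, 1, 1)           # one blur kernel per band
    blurred = F.conv2d(hr_hsi, kernel, padding=psf.shape[0] // 2, groups=num_bands)
    lr_hsi = blurred[:, :, ::scale, ::scale]                      # spatial downsampling
    hr_msi = torch.einsum("nbhw,bc->nchw", hr_hsi, srf)           # mode-3 product with R
    return lr_hsi, hr_msi
```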
Prior Term
If only considering the likelihood term, HSI-SR is an
ill-posed problem with infinite solutions. The prior term
shrinks the solution space as a regularization. Earlier works
impose prior assumptions through manual induction of
data characteristics, such as low rank [24, 25] and spar-
sity [26–30]. Recent supervised deep learning (DL)-based
methods replace this process through data characterization
by neural networks. These inductive methods suffer a large
performance drop when testing samples differ from training
samples, hence the critical need for general prior assumptions.
As an inherent feature in HSIs, spectral mixing prior sim-
ulates the common spectral mixing phenomenon in imag-
ing. Unmixing-based methods make a breakthrough in fi-
delity [17,18,31,32]. Moreover, to promote the generaliza-
tion, a coupled autoencoder is proposed to simulate the spectral mixing with deep learning toolkits [21, 22]. Despite su-
perior fidelity, recent unsupervised DL-based methods over-
focus on network architecture and underestimate the effect
of prior knowledge in HSIs, resulting in two drawbacks. On
one hand, the over-designed network structures weaken the
role of the prior assumption, which harms the interpretabil-
ity. On the other hand, the complicated network structures
pose a heavy burden on power and memory.
Coordination between Likelihood and Prior
The most recent blind HSI-SR methods generally con-
tain two modules, the fusion module and the degradation
estimator [22,23,33]. From the viewpoint of the MAP problem,
the former aims to recover the HR-HSI from the obser-
vations under the regularization of the prior. The latter mini-
mizes the likelihood term by estimating degradation param-
eters. The unknown HR-HSI and degradation determine the
observations. Naturally, the estimated degradation can be
fed to the fusion module as guidance. However, in recent
methods, the degradation estimator and fusion module only
interact in the backward stage. The learned degradation pa-
rameters are not involved in the updating of fusion module
or prior parameters, which brings two weaknesses. Firstly,
the nearly independent optimization of likelihood and prior
causes more updating steps and slows the convergence. Sec-
ondly, optimization with less interaction induces conflicting op-
timization directions and results in local optima with un-
realistic recovery.
In this paper, we develop a novel coordination optimiza-
tion framework for stable, interpretable, and lightweight
HSI-SR. By integrating the Wald protocol into the MAP
problem, we construct a positive feedback loop between
prior and likelihood. Based on the framework, we explore
an explicit degradation estimation to remedy the unstable
performance of black-box estimation. As for the prior, we
establish a lightweight autoencoder to simulate the spectral
mixing prior in HSIs for an interpretable fusion. Our contri-
butions are summarized as follows:
1. We explore a coordination optimization framework
for HSI-SR with fast convergence and stable performance.
Under the framework, the degradation estimation and fusion
process promote each other concordantly. To the best of our
knowledge, it is the first work to explore the cooperative
relationship between prior and likelihood in HSI-SR.
2. An explicit estimation method is established in HSI-
SR for the first time. By modeling the PSF and SRF with an anisotropic
Gaussian kernel and a Gaussian mixture, a small set of parameters re-
alizes stable and precise estimation.
3. We simulate the general spectral mixing prior with
only one interpretable autoencoder. Compared with recent
complicated models, the network is dramatically more
lightweight. Based on the fusion network, we explore a par-
tial optimization strategy at the test stage, which only updates
the decoder that handles the individual spectral features of
images, greatly reducing computation and time consumption.
|
Wu_OmniObject3D_Large-Vocabulary_3D_Object_Dataset_for_Realistic_Perception_Reconstruction_and_CVPR_2023
|
Abstract
Recent advances in modeling 3D objects mostly rely
on synthetic datasets due to the lack of large-scale real-
scanned 3D databases. To facilitate the development of
3D perception, reconstruction, and generation in the real
world, we propose OmniObject3D , a large vocabulary 3D
object dataset with massive high-quality real-scanned 3D
objects. OmniObject3D has several appealing properties:
1) Large Vocabulary: It comprises 6,000 scanned objects
in 190 daily categories, sharing common classes with pop-
ular 2D datasets (e.g., ImageNet and LVIS), benefiting the
pursuit of generalizable 3D representations. 2) Rich An-
notations: Each 3D object is captured with both 2D and
BCorresponding authors. https://omniobject3d.github.io/
3D sensors, providing textured meshes, point clouds, multi-
view rendered images, and multiple real-captured videos. 3)
Realistic Scans: The professional scanners support high-
quality object scans with precise shapes and realistic ap-
pearances. With the vast exploration space offered by Om-
niObject3D, we carefully set up four evaluation tracks: a)
robust 3D perception, b) novel-view synthesis, c) neural sur-
face reconstruction, and d) 3D object generation. Extensive
studies are performed on these four benchmarks, revealing
new observations, challenges, and opportunities for future
research in realistic 3D vision.
|
1. Introduction
Sensing, understanding, and synthesizing realistic 3D
objects is a long-standing problem in computer vision, with
rapid progress emerging in recent years. However, a major-
ity of the technical approaches rely on unrealistic synthetic
datasets [6, 19, 64] due to the absence of a large-scale real-
world 3D object database. Unfortunately, the appearance and
distribution gaps between synthetic and real data cannot be
compensated for trivially, hindering their real-life applica-
tions. Therefore, it is imperative to equip the community
with a large-scale and high-quality 3D object dataset from
the real world, which can facilitate a variety of 3D vision
tasks and downstream applications.
Recent advances partially fulfill the requirements while
still being unsatisfactory. As shown in Table 1, CO3D [48]
contains 19k videos capturing objects from 50 MS-COCO
categories, while only 20% of the videos are annotated
with accurate point clouds reconstructed by COLMAP [50].
Moreover, they do not provide textured meshes. GSO [16]
has 1k scanned objects while covering only 17 household
classes. AKB-48 [33] focuses on robotics manipulation
with 2k articulated object scans in 48 categories, but the
focus on articulation leads to a relatively narrow semantic
distribution, failing to support general 3D object research.
To boost the research on general 3D object understand-
ing and modeling, we present OmniObject3D : a large-
vocabulary 3D object dataset with massive high-quality,
real-scanned 3D objects. Our dataset has several appealing
properties: 1) Large Vocabulary: It contains 6,000 high-
quality textured meshes scanned from real-world objects,
which, to the best of our knowledge, is the largest among
real-world 3D object datasets with accurate 3D meshes. It
comprises 190 daily categories, sharing common classes
with popular 2D and 3D datasets ( e.g., ImageNet [15],
LVIS [25], and ShapeNet [6]), incorporating most daily
object realms (See Figure 1 and Figure 2). 2) Rich An-
notations: Each 3D object is captured with both 2D and
3D sensors, providing textured 3D meshes, sampled point
clouds, posed multi-view images rendered by Blender [13],
and real-captured video frames with foreground masks and
COLMAP camera poses. 3) Realistic Scans: The object
scans are of high fidelity thanks to the professional scan-
ners, bearing precise shapes with geometric details and re-
alistic appearance with high-frequency textures.
Taking advantage of the vast exploration space offered
by OmniObject3D, we carefully set up four evaluation
tracks: a) robust 3D perception, b) novel-view synthesis,
c) neural surface reconstruction, and d) 3D object genera-
tion. Extensive studies are performed on these benchmarks:
First, the high-quality, real-world point clouds in OmniOb-
ject3D allow us to perform robust 3D perception analysis
on both out-of-distribution (OOD) styles and corruptions,
two major challenges in point cloud OOD generalization.
Furthermore, we provide massive 3D models with multi-
view images and precise 3D meshes for novel-view syn-
thesis and neural surface reconstruction. The broad diver-
Table 1. A comparison between OmniObject3D and other
commonly-used 3D object datasets. R_lvis denotes the ratio of
the 1.2k LVIS [25] categories being covered.
Dataset Real Full Mesh Video # Objs # Cats R_lvis (%)
ShapeNet [6] ✓ 51k 55 4.1
ModelNet [64] ✓ 12k 40 2.4
3D-Future [19] ✓ 16k 34 1.3
ABO [12] ✓ 8k 63 3.5
Toys4K [53] ✓ 4k 105 7.7
CO3D V1 / V2 [48] ✓ ✓ 19 / 40k 50 4.2
DTU [1] ✓ ✓ 124 NA 0
ScanObjectNN [56] ✓ 15k 15 1.3
GSO [16] ✓ ✓ 1k 17 0.9
AKB-48 [33] ✓ ✓ 2k 48 1.8
Ours ✓ ✓ ✓ 6k 190 10.8
sity in shapes and textures offers a comprehensive training
and evaluation source for both scene-specific and general-
izable algorithms. Finally, we equip the community with a
database for large vocabulary and realistic 3D object gener-
ation, which pushes the boundary of existing state-of-the-
art generation methods to real-world 3D objects. The four
benchmarks reveal new observations, challenges, and op-
portunities for future research in realistic 3D vision.
|
Xie_Visibility_Aware_Human-Object_Interaction_Tracking_From_Single_RGB_Camera_CVPR_2023
|
Abstract
Capturing the interactions between humans and their
environment in 3D is important for many applications in
robotics, graphics, and vision. Recent works to reconstruct
the 3D human and object from a single RGB image do not
have consistent relative translation across frames because
they assume a fixed depth. Moreover, their performance
drops significantly when the object is occluded. In this
work, we propose a novel method to track the 3D human,
object, contacts, and relative translation across frames from
a single RGB camera, while being robust to heavy occlu-
sions. Our method is built on two key insights. First, we
condition our neural field reconstructions for human and
object on per-frame SMPL model estimates obtained by
pre-fitting SMPL to a video sequence. This improves neu-
ral reconstruction accuracy and produces coherent relative
translation across frames. Second, human and object mo-
tion from visible frames provides valuable information to
infer the occluded object. We propose a novel transformer-
based neural network that explicitly uses object visibil-
ity and human motion to leverage neighboring frames to
make predictions for the occluded frames. Building on
these insights, our method is able to track both human
and object robustly even under occlusions. Experiments
on two datasets show that our method significantly im-
proves over the state-of-the-art methods. Our code and pre-
trained models are available at: https://virtualhumans.mpi-
inf.mpg.de/VisTracker.
|
1. Introduction
Perceiving and understanding human as well as their in-
teraction with the surroundings has lots of applications in
robotics, gaming, animation and virtual reality etc. Ac-
curate interaction capture is however very hard. Early
works employ high-end systems such as dense camera ar-
rays [8, 17, 37] that allow accurate capture but are expen-
sive to deploy. Recent works [6, 34, 36] reduce the require-
Teaser figure
Figure 1. From a monocular RGB video, our method tracks the
human, object and contacts between them even under occlusions.
ment to multi-view RGBD cameras but it is still compli-
cated to setup the full capture system hence is not friendly
for consumer-level usage. This calls for methods that can
capture human-object interaction from a single RGB cam-
era, which is more convenient and user-friendly.
However, reasoning about the 3D human and object
from monocular RGB images is very challenging. The
lack of depth information makes the predictions suscepti-
ble to depth-scale ambiguity, leading to temporally inco-
herent tracking. Furthermore, the object or human can get
heavily occluded, making inference very hard. Prior work
PHOSA [89] relies on hand-crafted heuristics to reduce the ambiguity, but such a heuristic-based method is neither very accurate nor scalable. More recently, CHORE [79] combines neural field reconstructions with model-based fitting, obtaining promising results. However, CHORE assumes humans are at a fixed depth from the camera and predicts scale alone, thereby losing the important relative translation across frames. Another limitation of CHORE is that it
is not robust under occlusions as little information is avail-
able from single-frame when the object is barely visible.
Hence CHORE often fails in these cases, see Fig. 3.
In this work, we propose the first method that can track
both human and object accurately from monocular RGB
videos. Our approach combines neural field predictions
and model fitting, which has been consistently shown to be
more effective than directly regressing pose [4–6, 79]. In
contrast to existing neural field based reconstruction meth-
ods [60,79], we can do tracking including inference of rela-
tive translation. Instead of assuming a fixed depth, we con-
dition the neural field reconstructions (for object and hu-
man) on per frame SMPL estimates (SMPL-T) including
translation in camera space obtained by pre-fitting SMPL to
the video sequence. This results in coherent translation and
improved neural reconstruction. In addition, we argue that
during human-object interaction, the object motion is highly
correlated with the human motion, which provides us valu-
able information to recover the object pose even when it is
occluded (see Fig. 1 column 3-4). To this end, we propose
a novel transformer based network that leverages the hu-
man motion and object motion from nearby visible frames
to predict the object pose under heavy occlusions.
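As a rough illustration of this idea (and not the authors' actual architecture or code), a temporal transformer that fills in object poses for occluded frames from SMPL-based human motion, noisy per-frame object estimates, and a visibility score could look like the following sketch; all module names, pose parameterizations, and dimensions are assumptions. The key point is that object estimates are down-weighted by visibility, so occluded frames must be explained from the human motion and the neighboring visible frames.

```python
import torch
import torch.nn as nn

class VisibilityAwareObjectPosePredictor(nn.Module):
    """Sketch: predict per-frame object pose from human motion, noisy per-frame
    object estimates, and an object visibility score."""

    def __init__(self, human_dim=72, obj_dim=9, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        # per-frame token = [human pose params, raw object pose estimate, visibility]
        self.embed = nn.Linear(human_dim + obj_dim + 1, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        self.head = nn.Linear(d_model, obj_dim)  # refined object pose per frame

    def forward(self, human_pose, obj_pose_est, visibility):
        # human_pose: (B, T, human_dim), obj_pose_est: (B, T, obj_dim), visibility: (B, T)
        vis = visibility.unsqueeze(-1)
        # zero out unreliable object estimates so occluded frames rely on temporal context
        tokens = torch.cat([human_pose, obj_pose_est * vis, vis], dim=-1)
        feats = self.temporal_encoder(self.embed(tokens))
        return self.head(feats)

if __name__ == "__main__":
    model = VisibilityAwareObjectPosePredictor()
    human = torch.randn(2, 30, 72)               # 30 frames of SMPL pose parameters
    obj = torch.randn(2, 30, 9)                  # e.g., 6D rotation + 3D translation per frame
    vis = (torch.rand(2, 30) > 0.3).float()      # 1 = object visible, 0 = occluded
    print(model(human, obj, vis).shape)          # torch.Size([2, 30, 9])
```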
We evaluate our method on the BEHAVE [6] and InterCap [35] datasets. Experiments show that our method can robustly track the human, the object, and realistic contacts between them even under heavy occlusions, and significantly outperforms the current state-of-the-art method, CHORE [79].
We further ablate the proposed SMPL-T conditioning and
human and visibility aware object pose prediction network
and demonstrate that they are key for accurate human-object
interaction tracking.
In summary, our key contributions include:
• We propose the first method that can jointly track full-
body human interacting with a movable object from a
monocular RGB camera.
• We propose SMPL-T conditioned interaction fields ,
predicted by a neural network that allows consistent
4D tracking of human and object.
• We introduce a novel human and visibility aware ob-
ject pose prediction network along with an object visi-
bility prediction network that can recover object poses
even under heavy occlusions.
• Our code and pretrained models are publicly available
to foster future research in this direction.
|
Xia_SCPNet_Semantic_Scene_Completion_on_Point_Cloud_CVPR_2023
|
Abstract
Training deep models for semantic scene completion (SSC) is challenging due to the sparse and incomplete input, a large quantity of objects of diverse scales as well as the inherent label noise for moving objects. To address the above-mentioned problems, we propose the following three solutions: 1) Redesigning the completion sub-network. We design a novel completion sub-network, which consists of several Multi-Path Blocks (MPBs) to aggregate multi-scale features and is free from the lossy downsampling operations. 2) Distilling rich knowledge from the multi-frame model. We design a novel knowledge distillation objective, dubbed Dense-to-Sparse Knowledge Distillation (DSKD). It transfers the dense, relation-based semantic knowledge from the multi-frame teacher to the single-frame student, significantly improving the representation learning of the single-frame model. 3) Completion label rectification. We propose a simple yet effective label rectification strategy, which uses off-the-shelf panoptic segmentation labels to remove the traces of dynamic objects in completion labels, greatly improving the performance of deep models especially for those moving objects. Extensive experiments are conducted in two public SSC benchmarks, i.e., SemanticKITTI and SemanticPOSS. Our SCPNet ranks 1st on SemanticKITTI semantic scene completion challenge and surpasses the competitive S3CNet [3] by 7.2 mIoU. SCPNet also outperforms previous completion algorithms on the SemanticPOSS dataset. Besides, our method also achieves competitive results on SemanticKITTI semantic segmentation tasks, showing that knowledge learned in the scene completion is beneficial to the segmentation task.
|
1. Introduction
Semantic scene completion (SSC) [20] aims at inferring both geometry and semantics of the scene from an incomplete and sparse observation, which is a crucial component in 3D scene understanding. Performing semantic scene completion in outdoor scenarios is challenging due to the sparse and incomplete input, a large quantity of objects of diverse scales as well as the inherent label noise for those moving objects (see Fig. 1(a)).
†: Corresponding author.
Recent years have witnessed an explosion of methods in the outdoor scene completion field [3, 19, 22, 25, 29, 32]. For example, S3CNet [3] performs 2D and 3D completion tasks jointly and achieves impressive performance on SemanticKITTI [1]. JS3C-Net [29] performs semantic segmentation first and then feeds the segmentation features to the completion sub-network. A coarse-to-fine refinement module is further put forward to improve the completion quality. Although significant progress has been achieved in this area, these methods heavily rely on the voxelwise completion labels and show unsatisfactory completion performance on small, distant objects and crowded scenes. Moreover, the long traces of dynamic objects in the original completion labels will hamper the learning of completion models, which is overlooked in the previous literature [20].
To address the preceding problems, we propose three solutions from the aspects of completion sub-network redesign, distillation of multi-frame knowledge, and completion label rectification. Specifically, we first make a comprehensive overhaul of the completion sub-network. We adopt the completion-first principle and make the completion module directly process the raw voxel features. Besides, we avoid the use of downsampling operations since they inevitably introduce information loss and cause severe misclassification for small objects and crowded scenes. To improve the completion quality on objects of diverse scales, we design Multi-Path Blocks (MPBs) with varied kernel sizes, which aggregate multi-scale features and fully utilize the rich contextual information.
Second, to combat the sparse and incomplete input signals, we make the single-scan student model distill knowledge from the multi-frame teacher model. However,
Figure 1. Left: challenges of semantic scene completion, i.e., (a) sparse and incomplete input, varying completion difficulty for objects of diverse scales and (b) long traces of dynamic objects in completion labels. Traces of cars and persons are highlighted by yellow ellipses. Right: (c) performance comparison of various completion algorithms on the SemanticKITTI semantic scene completion challenge, 2016-2023 (mIoU %): SSCNet-Full (16.1), ESSCNet (17.5), LMSCNet-SS (17.6), TS3D+DNet+SATNet (17.7), UDNet (19.5), Local-DIFs (22.7), SSA-SC (23.5), JS3CNet (23.8), S3CNet (29.5), SCPNet (36.7).
mimicking the probabilistic knowledge of each point/voxel brings marginal gains. Instead, we propose to distill the pairwise similarity information. Considering the sparsity and unordered nature of the features, we align the features using their indices and then force the consistency between the pairwise similarity maps of student features and those of teacher features, making the student benefit from the relational knowledge of the teacher. The resulting Dense-to-Sparse Knowledge Distillation objective is termed DSKD, which is specifically designed for the scene completion task.
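To make the relational objective concrete, here is a minimal sketch of a pairwise-similarity distillation loss between voxel features that have already been aligned by their indices. It is our simplified reading of the idea, not the paper's implementation; the choice of cosine similarity and an MSE penalty is an assumption.

```python
import torch
import torch.nn.functional as F

def pairwise_similarity_distillation(student_feats, teacher_feats):
    """Sketch of a dense-to-sparse, relation-based distillation loss.

    student_feats, teacher_feats: (N, C) features for the same N voxel indices,
    i.e., already aligned so that row i of both tensors describes the same voxel.
    Instead of matching features directly, we match their N x N cosine-similarity
    maps, so the student inherits the teacher's pairwise relations.
    """
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    sim_s = s @ s.t()          # (N, N) student pairwise similarities
    sim_t = t @ t.t()          # (N, N) teacher pairwise similarities
    return F.mse_loss(sim_s, sim_t.detach())

if __name__ == "__main__":
    # e.g., 512 occupied voxels with 64-dim features from the student and the multi-frame teacher
    student = torch.randn(512, 64, requires_grad=True)
    teacher = torch.randn(512, 64)
    loss = pairwise_similarity_distillation(student, teacher)
    loss.backward()
    print(float(loss))
```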
Finally, to address the long traces of those dynamic objects in the completion labels, we propose a simple yet effective label rectification strategy. The core idea is to use off-the-shelf panoptic segmentation labels to remove the traces of dynamic objects in completion labels. The rectified completion labels are more accurate and reliable, greatly improving the completion quality of deep models on those moving objects.
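A minimal sketch of what such a rectification step could look like on voxelized labels is given below. The label conventions (which IDs count as dynamic, label 0 meaning free space, and the panoptic labels already being voxelized into the same grid) are assumptions made for illustration only.

```python
import numpy as np

def rectify_completion_labels(completion_labels, panoptic_labels, dynamic_classes):
    """Sketch: remove traces of moving objects from completion labels.

    completion_labels: (X, Y, Z) int array of accumulated (multi-scan) semantic labels.
    panoptic_labels:   (X, Y, Z) int array of single-scan semantic labels (0 = empty/unknown).
    dynamic_classes:   set of class IDs for movable objects (car, person, ...).

    Voxels labeled as a dynamic class in the accumulated labels but not supported by
    the current scan's labels are treated as motion traces and cleared.
    """
    rectified = completion_labels.copy()
    is_dynamic = np.isin(completion_labels, list(dynamic_classes))
    supported = panoptic_labels == completion_labels
    trace = is_dynamic & ~supported
    rectified[trace] = 0  # 0 = free space in this illustrative convention
    return rectified

if __name__ == "__main__":
    comp = np.random.randint(0, 5, size=(8, 8, 4))
    pano = np.random.randint(0, 5, size=(8, 8, 4))
    out = rectify_completion_labels(comp, pano, dynamic_classes={1, 2})
    print(out.shape, (out != comp).sum(), "voxels rectified")
```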
We conduct experiments on two large-scale outdoor scene completion benchmarks, i.e., SemanticKITTI [1] and SemanticPOSS [16]. Our SCPNet ranks 1st on the SemanticKITTI semantic scene completion challenge¹ and outperforms S3CNet [3] by 7.2 mIoU. SCPNet also achieves better performance than other completion algorithms on the SemanticPOSS dataset. The learned knowledge from the completion task also benefits the segmentation task, making our SCPNet achieve superior performance on the SemanticKITTI semantic segmentation task.
Our contributions are summarized as follows.
• The comprehensive redesign of the completion sub-network. We unveil several key factors to building strong completion networks.
¹ https://codalab.lisn.upsaclay.fr/competitions/7170#results till 2022-11-12 00:00 Pacific Time, and our method is termed SCPNet.
• To cope with the sparsity and incompleteness of the input, we propose to distill the dense relation-based knowledge from the multi-frame model. Note that we are the first to apply knowledge distillation to the semantic scene completion task.
• To address the long traces of moving objects in completion labels, we present the completion label rectification strategy.
• Our SCPNet ranks 1st on the SemanticKITTI semantic scene completion challenge, outperforming the previous SOTA S3CNet [3] by 7.2 mIoU. Competitive performance is also shown on the SemanticPOSS completion task and the SemanticKITTI semantic segmentation task.
|
Wang_Masked_Video_Distillation_Rethinking_Masked_Feature_Modeling_for_Self-Supervised_Video_CVPR_2023
|
Abstract
Benefiting from masked visual modeling, self-supervised
video representation learning has achieved remarkable
progress. However, existing methods focus on learning rep-
resentations from scratch through reconstructing low-level
features like raw pixel values. In this paper, we propose
masked video distillation (MVD), a simple yet effective two-
stage masked feature modeling framework for video repre-
sentation learning: firstly we pretrain an image (or video)
model by recovering low-level features of masked patches,
then we use the resulting features as targets for masked fea-
ture modeling. For the choice of teacher models, we ob-
serve that students taught by video teachers perform bet-
ter on temporally-heavy video tasks, while image teachers
transfer stronger spatial representations for spatially-heavy
video tasks. Visualization analysis also indicates different
teachers produce different learned patterns for students. To
leverage the advantage of different teachers, we design a
spatial-temporal co-teaching method for MVD. Specifically,
we distill student models from both video teachers and im-
age teachers by masked feature modeling. Extensive ex-
perimental results demonstrate that video transformers pre-
trained with spatial-temporal co-teaching outperform mod-
els distilled with a single teacher on a multitude of video
datasets. Our MVD with vanilla ViT achieves state-of-the-
art performance compared with previous methods on sev-
eral challenging video downstream tasks. For example, with
the ViT-Large model, our MVD achieves 86.4% and 76.7%
Top-1 accuracy on Kinetics-400 and Something-Something-
v2, outperforming VideoMAE by 1.2% and 2.4% respec-
tively. When a larger ViT-Huge model is adopted, MVD
achieves the state-of-the-art performance with 77.3% Top-1
accuracy on Something-Something-v2. Code will be avail-
able at https://github.com/ruiwang2021/mvd .
†Corresponding authors
Figure 1. Comparisons of MVD with previous supervised or self-supervised methods on Something-Something v2 (SSv2 Top-1 accuracy versus GFLOPs/video, for MVD (ours), VideoMAE, ST-MAE, OmniMAE, BEVT, MaskFeat, MViTv1, MViTv2, VideoSwin, and Mformer). Each line represents the corresponding model of different sizes.
|
1. Introduction
For self-supervised visual representation learning, recent
masked image modeling (MIM) methods like MAE [31]
and BEiT [2] achieve promising results with vision trans-
formers [17] on various vision downstream tasks. Such a
pretraining paradigm has also been adapted to the video
domain and boosts video transformers by clear margins
compared with supervised pretraining on several video
downstream tasks. Representative masked video modeling
(MVM) works include BEVT [63], VideoMAE [57] and
ST-MAE [21].
Following MAE [31] and BEiT [2], existing masked
video modeling methods [21, 57, 63] pretrain video trans-
formers through reconstructing low-level features, e.g., raw
pixel values or low-level VQVAE tokens. However, using low-level features as reconstruction targets often incurs much noise. Moreover, due to the high redundancy in video data,
it is easy for masked video modeling to learn shortcuts, thus
resulting in limited transfer performance on downstream
tasks. To alleviate this issue, masked video modeling [57]
often uses larger masking ratios.
In this paper, we observe that much better performance
on video downstream tasks can be achieved by conducting
masked feature prediction by using the high-level features
of pretrained MIM and MVM models as masked predic-
tion targets. This can be viewed as two-stage masked video
modeling, where MIM pretrained image models (i.e., an image teacher) or MVM pretrained video models (i.e., a video teacher) are obtained in the first stage, and they fur-
ther act as teachers in the second stage for the student model
via providing the high-level feature targets. Therefore, we
call this method Masked Video Distillation (MVD).
More interestingly, we find that student models distilled
with different teachers in MVD exhibit different proper-
ties on different video downstream tasks. Specifically, stu-
dents distilled from the image teacher perform better on
video tasks that mainly rely on spatial clues, while students
distilled from the video teacher model perform better on
the video downstream tasks where temporal dynamics are
more necessary. We think during the pretraining process
of masked video modeling in the first stage, video teach-
ers have learned spatial-temporal context in their high-level
features. Therefore, when employing such high-level repre-
sentations as prediction targets of masked feature modeling,
it will help encourage the student model to learn stronger
temporal dynamics. By analogy, image teachers provide
high-level features as targets that include more spatial in-
formation, which can help the student model learn more
spatially meaningful representations. We further analyze
the feature targets provided by image teachers and video
teachers, and calculate the cross-frame feature similarity. It
shows that the features provided by the video teachers con-
tain more temporal dynamics.
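For reference, the kind of cross-frame similarity statistic referred to above can be computed as in the following sketch (our own formulation with assumed tensor shapes): a higher average cosine similarity between corresponding tokens of adjacent frames indicates more static, appearance-dominated feature targets, while a lower value indicates more temporal dynamics.

```python
import torch
import torch.nn.functional as F

def mean_cross_frame_similarity(features):
    """features: (T, N, C) per-frame token features from a teacher model
    (T frames, N spatial tokens, C channels). Returns the average cosine
    similarity between corresponding tokens of adjacent frames."""
    f = F.normalize(features, dim=-1)
    sims = (f[:-1] * f[1:]).sum(dim=-1)   # (T-1, N) cosine similarities
    return sims.mean()

if __name__ == "__main__":
    video_teacher_feats = torch.randn(8, 196, 768)   # assumed ViT-like token grid per frame
    print(float(mean_cross_frame_similarity(video_teacher_feats)))
```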
Motivated by the above observation, to leverage the ad-
vantages of video teachers and image teachers, we propose
a simple yet effective spatial-temporal co-teaching strategy
for MVD. In detail, the student model is designed to re-
construct the features coming from both the image teacher
and video teacher with two different decoders, so as to
learn stronger spatial representation and temporal dynam-
ics at the same time. Experiments demonstrate that MVD
with co-teaching from both the image teacher and the video
teacher significantly outperforms MVD only using one sin-
gle teacher on several challenging downstream tasks.
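The co-teaching objective can be pictured roughly as in the sketch below: two lightweight decoders on top of the student encoder regress the image-teacher and video-teacher features at the masked positions, and the two reconstruction losses are summed. This is an illustrative simplification with assumed shapes and module names, not the released MVD code, and it omits patchification and masking details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoTeachingHeads(nn.Module):
    """Sketch: two decoders reconstruct image-teacher and video-teacher features
    from the student encoder's tokens on a masked video clip."""

    def __init__(self, d_student=768, d_image=768, d_video=768, depth=2, nhead=8):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(d_student, nhead, batch_first=True)
        self.image_decoder = nn.Sequential(*[layer() for _ in range(depth)], nn.Linear(d_student, d_image))
        self.video_decoder = nn.Sequential(*[layer() for _ in range(depth)], nn.Linear(d_student, d_video))

    def forward(self, student_tokens, image_target, video_target, mask):
        # student_tokens: (B, L, d_student); *_target: (B, L, d_*); mask: (B, L) bool, True = masked
        pred_img = self.image_decoder(student_tokens)
        pred_vid = self.video_decoder(student_tokens)
        loss_img = F.mse_loss(pred_img[mask], image_target[mask])   # image-teacher branch
        loss_vid = F.mse_loss(pred_vid[mask], video_target[mask])   # video-teacher branch
        return loss_img + loss_vid

if __name__ == "__main__":
    heads = CoTeachingHeads()
    B, L = 2, 1568
    tokens, img_t, vid_t = torch.randn(B, L, 768), torch.randn(B, L, 768), torch.randn(B, L, 768)
    mask = torch.rand(B, L) > 0.1            # a high masking ratio, as in masked video modeling
    print(float(heads(tokens, img_t, vid_t, mask)))
```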
Despite the simplicity, our MVD is super effective and
achieves very strong performance on multiple standard
video recognition benchmarks. For example, on the Kinetics-400 and Something-Something-v2 datasets, compared to the baseline without distillation, MVD with 400 epochs using a teacher model of the same size achieves 1.2% and 2.8% Top-1 accuracy gains on ViT-B. If a larger teacher model ViT-L is used, more significant performance gains (i.e., 1.9%, 4.0%) can be obtained. When ViT-Large is the target student model, our method achieves 86.4% and 76.7% Top-1 accuracy on these two datasets, surpassing the existing state-of-the-art method VideoMAE [57] by 1.2% and 2.4%
respectively. When a larger ViT-Huge model is adopted,
MVD achieves the state-of-the-art performance with 77.3%
Top-1 accuracy on Something-Something-v2.
Our contributions can be summarized as below:
• We find that using MIM pretrained image models and
MVM pretrained video models as teachers to provide
the high-level features for continued masked feature
prediction can learn better video representation. And
representations learned with image teachers and video
teachers show different properties on different down-
stream video datasets.
• We propose masked video distillation together with a
simple yet effective co-teaching strategy, which enjoys
the synergy of image and video teachers.
• We demonstrate strong performance on multiple stan-
dard video recognition benchmarks, surpassing both
the baseline without MVD and prior state-of-the-art
methods by clear margins.
|
Wu_STMixer_A_One-Stage_Sparse_Action_Detector_CVPR_2023
|
Abstract
Traditional video action detectors typically adopt the
two-stage pipeline, where a person detector is first em-
ployed to generate actor boxes and then 3D RoIAlign is
used to extract actor-specific features for classification.
This detection paradigm requires multi-stage training and
inference, and cannot capture context information outside
the bounding box. Recently, a few query-based action de-
tectors are proposed to predict action instances in an end-
to-end manner. However, they still lack adaptability in fea-
ture sampling and decoding, thus suffering from the issues
of inferior performance or slower convergence. In this pa-
per, we propose a new one-stage sparse action detector,
termed STMixer. STMixer is based on two core designs.
First, we present a query-based adaptive feature sampling
module, which endows our STMixer with the flexibility of
mining a set of discriminative features from the entire spa-
tiotemporal domain. Second, we devise a dual-branch fea-
ture mixing module, which allows our STMixer to dynami-
cally attend to and mix video features along the spatial and
the temporal dimension respectively for better feature de-
coding. Coupling these two designs with a video backbone
yields an efficient end-to-end action detector. Without bells
and whistles, our STMixer obtains the state-of-the-art re-
sults on the datasets of AVA, UCF101-24, and JHMDB.
|
1. Introduction
Video action detection [14,18,20,30,32,44,46] is an im-
portant problem in video understanding, which aims to rec-
ognize all action instances present in a video and also local-
ize them in both space and time. It has drawn a significant
amount of research attention, due to its wide applications in
many areas like security and sports analysis.
Since the proposal of large-scale action detection bench-
marks [16, 22], action detection has made remarkable
progress. This progress is partially due to the advances
of video representation learning such as video convolution
*: Equal contribution. : Corresponding author.
Figure 1. Comparison of mAP versus GFLOPs for SlowFast, WOO, and STMixer (SF-R101-NL backbone); CSN, TubeR, and STMixer (CSN-152); and VideoMAE and STMixer (ViT-B, from VideoMAE / VideoMAE V2). We report detection mAP on AVA v2.2. The GFLOPs of CSN, SlowFast, and VideoMAE are the sum of Faster RCNN-R101-FPN detector GFLOPs and classifier GFLOPs. Different methods are marked by different markers and models with the same backbone are marked in the same color. The results of CSN are from [53]. Our STMixer achieves the best effectiveness and efficiency balance.
neural networks [5, 11, 39–41, 45, 50] and video transform-
ers [1, 3, 9, 27, 38, 43, 52].
Most current action detectors adopt the two-stage Faster
R-CNN-alike detection paradigm [31]. They share two ba-
sic designs. First, they use an auxiliary human detector to
generate actor bounding boxes in advance. The training of
the human detector is decoupled from the action classifica-
tion network. Second, in order to predict the action category
for each actor box, the RoIAlign [17] operation is applied on
video feature maps to extract actor-specific features. How-
ever, this two-stage action detection pipeline has several
critical issues. First, it requires multi-stage training of per-
son detector and action classifier, which requires large com-
puting resources. Furthermore, the RoIAlign [17] opera-
tion constrains the video feature sampling inside the actor
bounding box and lacks the flexibility of capturing context
information in its surroundings. To enhance RoI features,
recent works use an extra heavy module that introduces in-
teraction features of context or other actors [29, 36].
Recently sparse query-based object detector [4, 35, 54]
has brought a new perspective on detection tasks. Several
query-based sparse action detectors [6, 53] are proposed.
The key idea is that action instances can be represented as
a set of learnable queries, and detection can be formulated
as a set prediction task, which could be trained by a match-
ing loss. These query-based methods detect action instances
in an end-to-end manner, thus saving computing resources.
However, the current sparse action detectors still lack adapt-
ability in feature sampling or feature decoding, thus suffer-
ing from inferior accuracy or slow convergence issues. For
example, building on the DETR [4] framework, TubeR [53]
adaptively attends action-specific features from single-scale
feature maps but perform feature transformation in a static
mannner. On the contrary, though decoding sampled fea-
tures with dynamic interaction heads, WOO [6] still uses
the 3D RoIAlign [17] operator for feature sampling, which
constrains feature sampling inside the actor bounding box
and fails to take advantage of other useful information in
the entire spatiotemporal feature space.
Following the success of adaptive sparse object detector
AdaMixer [12] in images, we present a new query-based
one-stage sparse action detector, named STMixer. Our goal
is to create a simple action detection framework that can
sample and decode features from the complete spatiotem-
poral video domain in a more flexible manner, while re-
taining the benefits of sparse action detectors, such as end-
to-end training and reduced computational cost. Specifi-
cally, we come up with two core designs. First, to over-
come the aforementioned fixed feature sampling issue, we
present a query-guided adaptive feature sampling module.
This new sampling mechanism endows our STMixer with
the flexibility of mining a set of discriminative features from
the entire spatiotemporal domain and capturing context and
interaction information. Second, we devise a dual-branch
feature mixing module to extract discriminative represen-
tations for action detection. It is composed of an adaptive
spatial mixer and an adaptive temporal mixer in parallel to
focus on appearance and motion information, respectively.
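A highly simplified sketch of how query-guided sampling and dual-branch mixing could be wired together is shown below; it is not the STMixer implementation (which uses dynamic, query-conditioned mixing weights), and the point parameterization, grid_sample-based gathering, and static linear mixers are our own assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSamplerMixer(nn.Module):
    """Sketch: each query regresses sampling points in the video feature volume,
    then decodes them with a spatial mixer (across points within a frame) and a
    temporal mixer (across frames), in the spirit of a dual-branch design."""

    def __init__(self, d_query=256, d_feat=256, num_frames=8, pts_per_frame=4):
        super().__init__()
        self.T, self.P = num_frames, pts_per_frame
        self.offset_head = nn.Linear(d_query, num_frames * pts_per_frame * 3)  # (x, y, t) in [-1, 1]
        self.spatial_mix = nn.Linear(pts_per_frame * d_feat, d_feat)   # mixes points of one frame
        self.temporal_mix = nn.Linear(num_frames * d_feat, d_query)    # mixes across frames

    def forward(self, queries, feat_volume):
        # queries: (B, Q, d_query); feat_volume: (B, C, T, H, W)
        B, Q, _ = queries.shape
        grid = torch.tanh(self.offset_head(queries)).reshape(B, Q, self.T * self.P, 1, 3)
        sampled = F.grid_sample(feat_volume, grid, align_corners=False)   # (B, C, Q, T*P, 1)
        sampled = sampled.squeeze(-1).permute(0, 2, 3, 1)                 # (B, Q, T*P, C)
        per_frame_tokens = sampled.reshape(B, Q, self.T, -1)              # (B, Q, T, P*C)
        per_frame = self.spatial_mix(per_frame_tokens)                    # (B, Q, T, d_feat)
        return self.temporal_mix(per_frame.flatten(2))                    # (B, Q, d_query)

if __name__ == "__main__":
    m = AdaptiveSamplerMixer()
    q = torch.randn(2, 100, 256)                  # 100 learnable queries
    vol = torch.randn(2, 256, 8, 14, 14)          # backbone features: T=8, 14x14 spatial
    print(m(q, vol).shape)                        # torch.Size([2, 100, 256])
```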
Coupling these two designs with a video backbone yields a
simple, neat, and efficient end-to-end action detector, which obtains new state-of-the-art performance on the datasets of AVA [16], UCF101-24 [33], and JHMDB [19]. In sum-
mary, our contribution is threefold:
• We present a new one-stage sparse action detection
framework in videos (STMixer). Our STMixer is easy
to train in an end-to-end manner and efficient to deploy
for action detection in a single stage.
• We devise two flexible designs to yield a powerful ac-
tion detector. The adaptive sampling can select the
discriminative feature points and the adaptive feature
mixing can enhance spatiotemporal representations.
• STMixer achieves a new state-of-the-art performance
on three challenging action detection benchmarks.
|
Xiao_CutMIB_Boosting_Light_Field_Super-Resolution_via_Multi-View_Image_Blending_CVPR_2023
|
Abstract
Data augmentation (DA) is an efficient strategy for im-
proving the performance of deep neural networks. Recent
DA strategies have demonstrated utility in single image
super-resolution (SR). Little research has, however, focused
on the DA strategy for light field SR, in which multi-view
information utilization is required. For the first time in light
field SR, we propose a potent DA strategy called CutMIB to
improve the performance of existing light field SR networks
while keeping their structures unchanged. Specifically, Cut-
MIB first cuts low-resolution (LR) patches from each view
at the same location. Then CutMIB blends all LR patches
to generate the blended patch and finally pastes the blended
patch to the corresponding regions of high-resolution light
field views, and vice versa. By doing so, CutMIB en-
ables light field SR networks to learn from implicit geo-
metric information during the training stage. Experimen-
tal results demonstrate that CutMIB can improve the re-
construction performance and the angular consistency of
existing light field SR networks. We further verify the ef-
fectiveness of CutMIB on real-world light field SR and light
field denoising. The implementation code is available at
https://github.com/zeyuxiao1997/CutMIB .
|
1. Introduction
Light field cameras, which can record spatial and angular
information of light rays, have rapidly become prominent
imaging devices in virtual and augmented reality. Light
fields are suitable for various applications, such as post-
capture refocusing [35, 55], disparity estimation [52], and
foreground occlusion removal [54, 69], thanks to the abun-
dance of 4D spatial-angular information they contain. Com-
mercialized light field cameras generally adopt micro-lens-
array in front of the sensor, which poses an essential trade-
off between the angular and spatial resolutions [29, 35].
Therefore, light field super-resolution (SR) has been an im-
portant and popular topic. Convolutional neural network
*Corresponding author.
Figure 1. Comparisons on the reconstruction fidelity (PSNR, ↑)
and the angular consistency (MSE, ↓) between light fields super-
resolved through different methods. Following [9], we super-
resolve the whole light field of the scene Bicycle from the HCI
dataset to analyze the angular consistency of the super-resolved
results in terms of disparity estimation using SPO [70]. Note that,
CutMIB improves the values of PSNR and lowers the values of
MSE by a large margin as compared to naïve light field SR meth-
ods ( e.g., ATO [27], InterNet [53], IINet [33], and DPT [47]).
(CNN) based and Transformer based methods have recently
shown promising performance for light field SR [7, 8, 10,
26, 31, 33, 47, 52, 53, 56], outperforming traditional non-
learning based methods [1, 38] with noticeable gains. This
performance boost is obtained by training deep methods on
external datasets. Few works have investigated data aug-
mentation (DA) strategies for light field SR, which can im-
prove the model performance without the need for addi-
tional training datasets given that obtaining these light field
data is often time-consuming and expensive [19,23,36,48].
DA has been well studied in high-level vision tasks ( e.g.,
image recognition, image classification, and semantic seg-
mentation) for achieving better network performance and
alleviating the overfitting problem [14,44,49,64,66,71]. For
example, as one of the pioneering strategies, Mixup [66]
Figure 2. Illustrative examples of (a) CutBlur and (b) our proposed CutMIB. CutBlur generates augmented SAIs view-by-view via the “cutting-pasting” operation. CutMIB generates the augmented light field via the “cutting-blending-pasting” operation. The implicit geometric information can be utilized during the training stage.
Figure 3. Analyzing CutBlur and CutMIB from a phase spectrum
perspective. (a) The center view image in a 5×5light field. The
red rectangle denotes the area for the cutting and pasting operation.
(b) The phase spectrum of the original LR center view image. (c)
is the calculated residual map between (b) and (d). (d) The phase
spectrum of the LR center view image with the pasted LR patch us-
ing CutBlur. We cut an HR patch from the HR center view image,
and paste it to the LR center view image. (e) The phase spectrum
of the LR center view image with the pasted blended patch using
CutMIB. We cut all HR patches from the HR light field, blend
them, and then paste the blended patch to the LR center view im-
age. (f) is the calculated residual map between (b) and (e).
blends two images to generate an unseen training sample.
The effectiveness of the DA strategy on light field SR has
received very little attention. Instead, only geometric trans-
formation strategies such as flipping and rotating are used in
light field SR. Recently, Yoo et al. [60] propose CutBlur, a
DA strategy for training a stronger single image SR model,
in which a low-resolution (LR) patch is cutandpasted to the
corresponding high-resolution (HR) image region, and vice
versa. A straightforward way to utilize the DA strategy on
light field SR is to perform CutBlur on each view in a light
field and train single image SR networks view by view, asshown in Figure 2(a). However, the ignorance of the inher-
ent correlation in the spatial-angular domain makes it sub-
optimal. We provide a visual observation using the phase
spectrum since it contains rich texture information [46, 62]
in Figure 3. Specifically, we use CutBlur on the LR center
view (Figure 3(a)) in a 5 ×5 light field, cut an HR patch, and
then paste it to the original LR image, and analyze the phase
spectrum of the processed LR image. We can directly ob-
serve from the calculated residual map in Figure 3(c) that
there is little additional information from the pasted HR
patch using the CutBlur strategy. This encourages us to re-
alize the need for a more effective strategy to exploit patches
from multiple views.
Based on the aforementioned observation, we propose
CutMIB, a novel DA strategy specifically designed for light
field SR, as shown in Figure 2(b). Our CutMIB, which is
inspired by CutBlur [60], first cuts LR patches from differ-
ent views in an LR light field at the same position. The
cut LR patches are then blended to generate the blended
LR patch, which is then pasted to the corresponding areas
of various HR light field views, and vice versa. There-
fore, each augmented light field pair has partially blended
LR and blended HR pixel distributions with a random ra-
tio. By feeding the augmented training pairs into light field
SR networks, these networks can not only learn “how” and
“where” to super-resolve the LR light field ( i.e., benefit
from the cutting-blending operation [60]), but also utilize
the implicit geometric information in multi-view images,
resulting in better performance and higher angular consis-
tency among super-resolved light field views ( i.e., benefit
from the blending operation [2, 5, 17]). Figure 3(f) illus-
trates that pasting the blended HR patch to the LR center
view (Figure 3(a)) results in more additional details in the
pasted area. This demonstrates that our CutMIB can more
effectively use multi-view information in a light field.
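The cutting–blending–pasting operation itself is easy to write down. Below is an illustrative sketch for a light field stored as a (views, channels, height, width) tensor, following the CutBlur convention that the LR light field has been bicubic-upsampled to HR resolution beforehand; simple averaging as the blending function and the patch-size choice are our assumptions, not the official implementation.

```python
import torch

def cutmib(lr_up_views, hr_views, patch_frac=0.25):
    """Sketch of CutMIB on (N_views, C, H, W) tensors.

    1) Cut a patch at the same location from every LR (upsampled) view and every HR view.
    2) Blend the per-view patches by averaging across views.
    3) Paste the blended LR patch into the HR views, and the blended HR patch into
       the LR views, at the same location ("vice versa").
    """
    _, _, h, w = hr_views.shape
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    y = torch.randint(0, h - ph + 1, (1,)).item()
    x = torch.randint(0, w - pw + 1, (1,)).item()

    # cut patches at the same location in every view and blend them across views
    blended_lr = lr_up_views[:, :, y:y + ph, x:x + pw].mean(dim=0)   # (C, ph, pw)
    blended_hr = hr_views[:, :, y:y + ph, x:x + pw].mean(dim=0)

    lr_aug, hr_aug = lr_up_views.clone(), hr_views.clone()
    hr_aug[:, :, y:y + ph, x:x + pw] = blended_lr    # paste blended LR patch into all HR views
    lr_aug[:, :, y:y + ph, x:x + pw] = blended_hr    # ...and vice versa
    return lr_aug, hr_aug

if __name__ == "__main__":
    hr = torch.rand(25, 3, 128, 128)                  # 5x5 angular views at HR resolution
    lr_up = torch.nn.functional.interpolate(          # stand-in for a bicubic-upsampled LR light field
        torch.rand(25, 3, 32, 32), size=(128, 128), mode="bicubic", align_corners=False)
    lr_aug, hr_aug = cutmib(lr_up, hr)
    print(lr_aug.shape, hr_aug.shape)
```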
Thanks to CutMIB, we can improve both the reconstruc-
tion quality and the angular consistency of light field SR
results while maintaining the network structures unchanged
(see Figure 1). Additionally, we verify the effectiveness of
the proposed CutMIB on real-world light field SR and light
field denoising tasks.
Contributions of this paper are summarized as follows:
(1) We propose a novel DA strategy, CutMIB, to improve
the performance of existing light field SR networks. To our
best knowledge, it is the first DA strategy for light field SR.
Through the “cutting-blending-pasting” operation, CutMIB
is designed to efficiently explore the geometric information
in light fields during the training stage.
(2) Extensive experiments demonstrate CutMIB can
boost the reconstruction fidelity and the angular consistency
of existing typical light field SR methods.
(3) We verify the effectiveness of CutMIB on real-world
light field SR and light field denoising tasks.
|
Wang_F2-NeRF_Fast_Neural_Radiance_Field_Training_With_Free_Camera_Trajectories_CVPR_2023
|
Abstract
This paper presents a novel grid-based NeRF called F2-
NeRF (Fast-Free-NeRF) for novel view synthesis, which
enables arbitrary input camera trajectories and only costs
a few minutes for training. Existing fast grid-based NeRF
training frameworks, like Instant-NGP , Plenoxels, DVGO,
or TensoRF , are mainly designed for bounded scenes and
rely on space warping to handle unbounded scenes. Existing
two widely-used space-warping methods are only designed
for the forward-facing trajectory or the 360◦ object-centric
trajectory but cannot process arbitrary trajectories. In this
paper, we delve deep into the mechanism of space warping to
handle unbounded scenes. Based on our analysis, we further
propose a novel space-warping method called perspective
warping, which allows us to handle arbitrary trajectories
in the grid-based NeRF framework. Extensive experiments
demonstrate that F2-NeRF is able to use the same perspec-
tive warping to render high-quality images on two standard
datasets and a new free trajectory dataset collected by us.
Project page: totoro97.github.io/projects/f2-nerf .
|
1. Introduction
The research progress of novel view synthesis has ad-
vanced drastically in recent years since the emergence of the
Neural Radiance Field (NeRF) [24, 42]. Once the training
is done, NeRF is able to render high-quality images from
novel camera poses. The key idea of NeRF is to represent
the scene as a density field and a radiance field encoded
by Multi-layer Perceptron (MLP) networks, and optimize
the MLP networks with the differentiable volume rendering
technique. Though NeRF is able to achieve photo-realistic
rendering results, training a NeRF takes hours or days due to
the slow optimization of deep neural networks, which limits
its application scopes.
Recent works demonstrate that grid-based methods, such
as Plenoxels [57], DVGO [38], TensoRF [6], and Instant-
*Equal contribution.
Figure 1. Top: (a) Forward-facing camera trajectory. (b) 360◦ object-centric camera trajectory. (c) Free camera trajectory. In (c), the camera trajectory is long and contains multiple foreground objects, which is extremely challenging. Bottom: Rendered images of the state-of-the-art fast NeRF training methods (Plenoxels, DVGO, and Instant-NGP, which rely on NDC or inverse-sphere warping) and F2-NeRF on a scene with a free trajectory, alongside the ground truth.
NGP [25], enable fast training a NeRF within a few minutes.
However, the memory consumption of such grid-based rep-
resentations grows in cubic order with the size of the scene.
Though various techniques, such as voxel pruning [38, 57],
tensor decomposition [6] or hash indexing [25], are proposed
to reduce the memory consumption, these methods still can
only process bounded scenes when grids are built in the
original Euclidean space.
To represent unbounded scenes, a commonly-adopted
strategy is to use a space-warping method that maps an un-
bounded space to a bounded space [3, 24, 61]. There are
typically two kinds of warping functions. (1) For forward-
facing scenes (Fig. 1 (a)), the Normalized Device Coordinate
(NDC) warping is used to map an infinitely-far view frus-
tum to a bounded box by squashing the space along the
z-axis [24]; (2) For 360◦ object-centric unbounded scenes
(Fig. 1 (b)), the inverse-sphere warping can be used to map
an infinitely large space to a bounded sphere by the sphere
inversion transformation [3, 61]. Nevertheless, these two
warping methods assume special camera trajectory patterns
and cannot handle arbitrary ones. In particular, when a
trajectory is long and contains multiple objects of interest,
called free trajectories , as shown in Fig. 1 (c), the quality of
rendered images degrades severely.
The performance degradation on free trajectories is
caused by the imbalanced allocation of spatial representation
capacity. Specifically, when the trajectory is narrow and
long, many regions in the scenes are empty and invisible
to any input views. However, the grids of existing methods
are regularly tiled in the whole scene, no matter whether the
space is empty or not. Thus, much representation capacity is
wasted on empty space. Although such wasting can be allevi-
ated by using the progressive empty-voxel-pruning [38, 57],
tensor decomposition [6] or hash indexing [25], it still causes
blurred images due to limited GPU memory. Furthermore, in
the visible spaces, multiple foreground objects in Fig. 1 (c)
are observed with dense and near input views while back-
ground spaces are only covered by sparse and far input views.
In this case, for the optimal use of the spatial representation
of the grid, dense grids should be allocated for the fore-
ground objects to preserve shape details and coarse grids
should be put in background space. However, current grid-
based methods allocate grids evenly in the space, causing
the inefficient use of the representation capacity.
To address the above problems, we propose F2-NeRF
(Fast-Free-NeRF), the first fast NeRF training method that
accommodates free camera trajectories for large, unbounded
scenes. Built upon the framework of Instant-NGP [25], F2-
NeRF can efficiently be trained on unbounded scenes with
diverse camera trajectories and maintains the fast conver-
gence speed of the hash-grid representation.
In F2-NeRF , we give the criterion on a proper warping
function under an arbitrary camera configuration. Based on
this criterion, we develop a general space-warping scheme
called the perspective warping that is applicable to arbitrary camera trajectories. The key idea of perspective warping is to first represent the location of a 3D point p by the concatenation of the 2D coordinates of the projections of p in the input images and then map these 2D coordinates into a compact 3D subspace using Principal Component Analysis (PCA) [51]. We empirically show that the proposed perspec-
tive warping is a generalization of the existing NDC warp-
ing [24] and the inverse sphere warping [3, 61] to arbitrary
trajectories, in the sense that the perspective warping is able to handle arbitrary trajectories while automatically degenerating to these two warping functions in forward-facing scenes or 360◦ object-centric scenes. In order to implement
the perspective warping in a grid-based NeRF framework,
we further propose a space subdivision algorithm to adap-
tively use coarse grids for background regions and fine grids
for foreground regions.
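Conceptually, the warp of a 3D point can be sketched as follows: project the point into a few (assumed pinhole) cameras, concatenate the 2D image coordinates, and reduce the concatenated vector to 3D with a PCA basis fitted on sample points. Everything below (the projection convention, SVD-based PCA, the choice of sample points) is a simplified reading of the idea, not the F2-NeRF implementation.

```python
import numpy as np

def project(points, K, w2c):
    """Pinhole projection of (N, 3) world points with intrinsics K and a 4x4
    world-to-camera matrix; the camera is assumed to look along +z."""
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = (w2c @ pts_h.T).T[:, :3]
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]          # (N, 2) pixel coordinates

def fit_perspective_warp(sample_points, cameras):
    """Fit a PCA basis mapping concatenated 2D projections to a compact 3D space."""
    def lift(pts):
        return np.concatenate([project(pts, K, w2c) for K, w2c in cameras], axis=1)
    feats = lift(sample_points)            # (N, 2 * num_cameras)
    mean = feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
    basis = vt[:3]                         # top-3 principal directions
    return lambda pts: (lift(pts) - mean) @ basis.T   # warp: 3D point -> warped 3D coordinate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    cams = []
    for tx in (-1.0, 0.0, 1.0):            # three assumed cameras offset along x
        w2c = np.eye(4)
        w2c[0, 3] = tx
        cams.append((K, w2c))
    pts = rng.normal(size=(200, 3)) + np.array([0, 0, 4.0])   # points in front of the cameras
    warp = fit_perspective_warp(pts, cams)
    print(warp(pts[:5]).shape)             # (5, 3) warped coordinates
```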
We conduct extensive experiments on the unbounded
forward-facing dataset, the unbounded 360◦ object-centric
dataset, and a new unbounded free trajectory dataset. The
experiments show that F2-NeRF uses the same perspective
warping to render high-quality images on the three datasets
with different trajectory patterns. On the new Free dataset
with free camera trajectories, our method outperforms base-
line grid-based NeRF methods, while only using ∼12 minutes for training on a 2080Ti GPU.
|
Voigtlaender_Connecting_Vision_and_Language_With_Video_Localized_Narratives_CVPR_2023
|
Abstract
We propose Video Localized Narratives, a new form of
multimodal video annotations connecting vision and lan-
guage. In the original Localized Narratives [ 36], annota-
tors speak and move their mouse simultaneously on an im-
age, thus grounding each word with a mouse trace segment.
However, this is challenging on a video. Our new protocol
empowers annotators to tell the story of a video with Local-
ized Narratives, capturing even complex events involving
multiple actors interacting with each other and with sev-
eral passive objects. We annotated 20k videos of the OVIS,
UVO, and Oops datasets, totalling 1.7M words. Based on
this data, we also construct new benchmarks for the video
narrative grounding and video question answering tasks,
and provide reference results from strong baseline models.
Our annotations are available at https://google.
github.io/video-localized-narratives/ .
|
1. Introduction
Vision and language is a very active research area, which
experienced much exciting progress recently [ 1,28,38,39,
48]. At the heart of many developments lie datasets con-
necting still images to captions, while grounding some of
the words in the caption to regions in the image, made with
a variety of protocols [ 6,20,23,25,29,34,36,44,46,56]. The
recent Localized Narratives (ImLNs) [ 36] offer a particu-
larly attractive solution: the annotators describe an image
with their voice while simultaneously moving their mouse
over the regions they are describing. Speaking is natural and
efficient, resulting in long captions that describe the whole
scene. Moreover, the synchronization between the voice
and mouse pointer yields dense visual grounding for every
word. Yet, still images only show one instant in time. Anno-
tating videos would be even more interesting, as they show
entire stories , featuring a flow of events involving multiple
actors and objects interacting with each other.
Directly extending ImLNs to video by letting the anno-
tator move their mouse and talk while the video is playing
would lead to a “race against time”, likely resulting in fol-
lowing only one salient object. In this paper, we propose a better annotation protocol which allows the annotator to
tell the story of the video in a calm environment (Fig. 1-3).
The annotators first watch the video carefully, identify the
main actors (“man”, “ostrich”), and select a few represen-
tative key-frames for each actor. Then, for each actor sep-
arately, the annotators tell a story: describing the events it
is involved in using their voice, while moving the mouse on
the key-frames over the objects and actions they are talking
about. The annotators mention the actor name, its attributes,
and especially the actions it performs, both on other ac-
tors ( e.g., “play with the ostrich”) and on passive objects
(e.g., “grabs the cup of food”). For completeness, the an-
notators also briefly describe the background in a separate
step (bottom row). Working on keyframes avoids the race
against time, and producing a separate narration for each ac-
tor enables disentangling situations and thus cleanly captur-
ing even complex events involving multiple actors interact-
ing with each other and with several passive objects. As in
ImLN, this protocol localizes each word with a mouse trace
segment. We take several additional measures to obtain ac-
curate localizations, beyond what was achieved in [ 36].
We annotated the OVIS [ 37], UVO [ 47], and Oops [ 12]
datasets with Video Localized Narratives (VidLNs). These
datasets cover a general domain, as opposed to only cook-
ing [ 10,62] or first-person videos [ 16]. Moreover, these
videos contain complex scenes with interactions between
multiple actors and passive objects, leading to interesting
stories that are captured by our rich annotations. In to-
tal our annotations span 20k videos, 72k actors, and 1.7
million words. On average, each narrative features tran-
scriptions for 3.5 actors, totaling 75.1 words, including 23.0 nouns, 9.5 verbs, and 8.5 adjectives (Fig. 4). Our analysis
demonstrates the high quality of the data. The text descrip-
tions almost always mention objects/actions that are actu-
ally present in the video, and the mouse traces do lie inside
their corresponding object in most cases.
The richness of our VidLNs makes them a strong ba-
sis for several tasks. We demonstrate this by creating new
benchmarks for the tasks of Video Narrative Grounding
(VNG) and Video Question Answering (VideoQA). The
new VNG task requires a method to localize the nouns in
Figure 1. The five steps of the annotation process for VidLNs (1. watch, 2./3. select actors and key-frames, 4. speak and move the mouse, 5. transcription). We are able to cleanly describe even complex interactions and events by disentangling the storylines of different actors.
<Man> A man wearing a black t-shirt is holding a cup of food in his right hand. He moves around a piece of food in his left hand to play with the ostrich.
<Ostrich> An ostrich is looking at the piece of food held by the man and suddenly grabs the cup of food and starts eating.
<Background> In the background, there are hills, white barriers, a flag, the sky, and soil on the ground.
Figure 2. An example VidLN annotation. Each row shows the story of the video from the perspective of a different actor. Each mouse trace segment drawn on selected key-frames localizes the word highlighted in the same color. More examples are in the supplement.
an input narrative with a segmentation mask on the video
frames. This is a complex task that goes beyond open-
vocabulary semantic segmentation [ 14], as often the text has
multiple identical nouns that need to be disambiguated us-
ing the context provided by other words. We construct two
VNG benchmarks on OVIS and UVO, with a total of 8k
videos involving 45k objects. We also build baseline mod-
els that explicitly attempt this disambiguation and report ex-
periments showing it’s beneficial on this dataset, while also
highlighting these new benchmarks are far from solved.
For VideoQA, we construct a benchmark on the Oops
dataset, consisting of text-output questions ( e.g., “What is
the person in the blue suit doing?”), where the answer
is free-form text ( e.g., “paragliding”), and location-output
questions ( e.g., “Where is the woman that is wearing a black
and white dress?”), where the answer is a spatio-temporal
location in the video. The benchmark features a total of 62k
questions on 9.5k videos. Answering many of these ques-
tions requires a deep understanding of the whole video. We also implement baseline methods to establish initial results
on this task.
|
Wang_Towards_Transferable_Targeted_Adversarial_Examples_CVPR_2023
|
Abstract
Transferability of adversarial examples is critical for
black-box deep learning model attacks. While most existing
studies focus on enhancing the transferability of untargeted
adversarial attacks, few of them studied how to generate
transferable targeted adversarial examples that can mislead
models into predicting a specific class. Moreover, existing
transferable targeted adversarial attacks usually fail to suf-
ficiently characterize the target class distribution, thus suf-
fering from limited transferability. In this paper, we pro-
pose the Transferable Targeted Adversarial Attack (TTAA),
which can capture the distribution information of the tar-
get class from both label-wise and feature-wise perspec-
tives, to generate highly transferable targeted adversarial
examples. To this end, we design a generative adversar-
ial training framework consisting of a generator to produce
targeted adversarial examples, and feature-label dual dis-
criminators to distinguish the generated adversarial exam-
ples from the target class images. Specifically, we design
the label discriminator to guide the adversarial examples
to learn label-related distribution information about the
target class. Meanwhile, we design a feature discrimina-
tor, which extracts the feature-wise information with strong
cross-model consistency, to enable the adversarial exam-
ples to learn the transferable distribution information. Fur-
thermore, we introduce the random perturbation dropping
to further enhance the transferability by augmenting the di-
versity of adversarial examples used in the training process.
Experiments demonstrate that our method achieves excel-
lent performance on the transferability of targeted adver-
sarial examples. The targeted fooling rate reaches 95.13%
when transferred from VGG-19 to DenseNet-121, which
significantly outperforms the state-of-the-art methods.
∗Zhibo Wang is the corresponding author.
|
1. Introduction
As an important branch of artificial intelligence (AI),
deep neural networks (DNNs) contribute to many real-
life applications, e.g., image classification [1, 2], speech
recognition [3, 4], face detection [5, 6], automatic driv-
ing technology [7], etc. Such broad impacts have mo-
tivated a wide range of investigations into the adversar-
ial attacks on DNNs, exploring the vulnerability and un-
certainty of DNNs. For instance, Szegedy et al. showed
that DNNs could be fooled by adversarial examples crafted
by adding human-indistinguishable perturbations to origi-
nal inputs [8]. As successful adversarial examples must be
imperceptible to humans but cause DNNs to make a false
prediction, how to design adversarial attacks to generate
high-quality adversarial examples in a general manner re-
mains challenging.
Adversarial attack methods can be divided into untar-
geted attacks and targeted attacks. Untargeted attacks try
to misguide the model to predict arbitrary incorrect la-
bels, while targeted adversarial attacks expect the generated adversarial examples to trigger the misprediction for
a specific label. The transferability of adversarial examples
is crucial for both untargeted and targeted attacks, espe-
cially in black-box attack scenarios where the target model
is inaccessible. However, most existing studies focus on
enhancing the transferability of untargeted adversarial at-
tacks, through data augmentation [9, 10], model aggrega-
tion [11,12], feature information utilization [13–15], or gen-
erative methods [16–18]. Although some of them could be
extended to the targeted adversarial attacks by simply mod-
ifying the loss function, they could not extract sufficient
transferable information about the target class due to the
overfitting of the source model and the lack of distribution
information of the target class, thus demonstrating limited
transferability.
Some recent works [15, 19–21] studied how to boost the
transferability of targeted adversarial attacks by learning the
target class information from either the label or the feature
perspective, leveraging the label probability distribution and
feature maps respectively. The label-wise information, of-
ten output by the last layer of the classification model, can
effectively reflect the direct correlation between image dis-
tribution and class labels. However, learning with just the
label-wise information is proven to retain high-level seman-
tic information of the original class [22], thus leading to low
cross-model transferability. The feature-wise information,
which can be obtained from the intermediate layer of the
classification model, has been proven [23] to have transfer-
ability due to the mid-level layer of different DNNs follow-
ing similar activation patterns. However, the feature-wise
information is not sensitive to the target label and fails to
trigger the targeted misclassification.
In summary, information extracted from the label or the feature alone cannot generate highly transferable targeted adversarial examples. Meanwhile, most existing tar-
geted attacks assume that the training dataset of the target
model ( i.e., target domain) is accessible and could be lever-
aged by the attackers to train a shadow model in the target
domain as a simulation of the target model, which, however, is not practical in realistic scenarios.
In this paper, we aim to generate highly transferable tar-
geted adversarial examples in a more realistic but challeng-
ing scenario where the attacker cannot access the data of
the target domain. To this end, we are facing two main
challenges. The first challenge is how to generate targeted
adversarial examples with both cross-model and cross-
domain transferability. Cross-domain images usually vary significantly in characteristic distribution (e.g., they have different attributes and even labels), making it difficult for models
to transfer the knowledge to unknown domains. Besides,
when involved models adopt different architectures, do-
main knowledge learned by the source model becomes less
transferable, resulting in more difficulties in cross-model
and cross-domain transferable adversarial attacks. The sec-
ond challenge is how to improve the transferability of targeted adversarial attacks. A targeted adversarial attack needs to distort the ground-truth label and trigger the model to predict the target label simultaneously; this dual objective of obfuscating original class information and recognizing target class information makes the transferability of targeted attacks hard to achieve.
To solve such challenges, we propose Transferable Tar-
geted Adversarial Attack (TTAA) to generate highly trans-
ferable targeted adversarial examples. The main idea of
our method is to capture the distribution information of the
target class from both label-wise and feature-wise perspec-
tives. We design a generative adversarial network, which
generates targeted adversarial examples by a generator and captures the distribution information of the target class by
label-feature dual discrimination, consisting of the label dis-
criminator and the feature discriminator. More specifically,
the label discriminator learns the label-related distribution
information from the label probability distribution and the
feature discriminator extracts transferable distribution in-
formation of the target class via feature maps output by the
intermediate layer of the label discriminator. Meanwhile,
we propose random perturbation dropping to enhance our
training samples, which applies random transformations to
augment the diversity of the adversarial examples used dur-
ing the training process to improve the robustness of the
distribution information of the target class on the adversar-
ial examples. In generative adversarial training, such dis-
tribution information extracted by discriminators guides the
generator to acquire the label-related and transferable dis-
tribution information of the target class. Therefore the tar-
geted adversarial examples achieve both high cross-model
and cross-domain transferability.
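As a rough illustration of how random perturbation dropping could operate, the PyTorch sketch below randomly zeroes square patches of the adversarial perturbation before it is added to the image; the function name, patch size, and drop probability are illustrative assumptions rather than the paper's exact procedure.

```python
import torch

def random_perturbation_dropping(delta, drop_prob=0.3, patch=16):
    """Randomly zero out square patches of an adversarial perturbation.

    delta: (B, C, H, W) perturbation tensor. Grid cells of size `patch` are
    dropped independently with probability `drop_prob` (both values are
    illustrative assumptions, not the paper's configuration).
    """
    B, C, H, W = delta.shape
    gh, gw = H // patch, W // patch
    keep = (torch.rand(B, 1, gh, gw, device=delta.device) > drop_prob).float()
    # Upsample the binary keep-grid to pixel resolution and mask the perturbation.
    mask = torch.nn.functional.interpolate(keep, size=(H, W), mode="nearest")
    return delta * mask

# Example: augment a batch of perturbations before pasting them onto images.
x = torch.rand(4, 3, 224, 224)
delta = 0.03 * torch.randn_like(x)
x_adv = (x + random_perturbation_dropping(delta)).clamp(0, 1)
```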
Our main contributions are summarized as follows.
• We propose a general framework for targeted adver-
sarial attack that works in both non-cross-domain and
cross-domain scenarios. This framework could be eas-
ily integrated with other targeted adversarial attacks to
improve their cross-model and cross-domain targeted
transferability.
• Our proposed Transferable Targeted Adversarial At-
tack could extract the target class distribution from
feature-wise and label-wise levels to promote pertur-
bations to acquire label-related and transferable distri-
bution information of the target class, thus generating
highly transferable targeted adversarial examples.
• Extensive experiments on diverse classification models
demonstrate the superior targeted transferability of ad-
versarial examples generated by the proposed TTAA
as compared to state-of-the-art transferable attacking
methods, no matter whether the attack scenario is
cross-domain or non-cross-domain.
|
Wang_Semi-Supervised_Parametric_Real-World_Image_Harmonization_CVPR_2023
|
Abstract
Learning-based image harmonization techniques are usu-
ally trained to undo synthetic random global transforma-
tions applied to a masked foreground in a single ground
truth photo. This simulated data does not model many of
the important appearance mismatches (illumination, object
boundaries, etc.) between foreground and background in
real composites, leading to models that do not generalize
well and cannot model complex local changes. We propose
a new semi-supervised training strategy that addresses this
problem and lets us learn complex local appearance harmo-
nization from unpaired real composites, where foreground
and background come from different images. Our model is
fully parametric. It uses RGB curves to correct the global
colors and tone and a shading map to model local vari-
ations. Our method outperforms previous work on estab-
lished benchmarks and real composites, as shown in a user study, and processes high-resolution images interactively. Code and project page are available at:
https://kewang0622.github.io/sprih/ .
|
1. Introduction
Image harmonization [12, 22, 23, 26, 28, 32] aims to iron
out visual inconsistencies created when compositing a fore-
ground subject onto a background image that was captured
under different conditions [18, 32], by altering the fore-
ground’s colors, tone, etc., to make the composite more re-
alistic. Despite significant progress, the practicality of to-
day’s most sophisticated learning-based image harmoniza-
tion techniques [3, 4, 9, 10, 13, 14, 16, 32] is limited by a se-
vere domain gap between the synthetic data they are trained
on and real-world composites.
As shown in Figure 2, the standard approach to generat-
ing synthetic training composites applies global transforms
Figure 2. Domain Gap between synthetic and real-world com-
posites. The existing synthetic composites [4] (left), generated by
applying global transforms (e.g., color, brightness), are unable to
simulate many of the appearance mismatches that occur in real
composites (right). This leads to a domain gap: models trained on
synthetic data do not generalize well to real composites. In real
composites (right), the foreground and background are captured
under different conditions. They have different illuminations, the
shadows do not match, and the object’s boundary is inconsistent.
Such mismatches do not happen in the synthetic case (left).
(color, brightness, contrast, etc.) to a masked foreground
subject in a ground truth photo. This is how the iHarmony
Dataset [2, 4] was constructed. A harmonization network is
then trained to recover the ground truth image from the syn-
thetic input. While this approach makes supervised training
possible, it falls short of simulating real composites: the synthetic data does not model mismatches in illumination, shadows, shading, contacts, perspective, boundaries, or low-level image statistics such as noise and lens blur.
However, in real-world composites, the foreground subject
and the background are captured under different conditions,
which can have more diverse and arbitrary differences in
any aspects mentioned above.
We argue that using realistic composites for training is
essential for image harmonization to generalize better to
real-world use cases. Because collecting a large dataset
of artist-created before/after real composite pairs would
be costly and cumbersome, our strategy is to use a semi-
supervised approach instead. We propose a novel dual-
stream training scheme that alternates between two data
streams. Similar to previous work, the first is a supervised
training stream, but crucially, it uses artist-retouched image
pairs. Different from previous datasets, these artistic adjust-
ments include global color editing but also dodge and burn
shading corrections and other local edits.
The second stream is fully unsupervised. It uses a
GAN [8] training procedure, in which the critic compares our harmonized results with a large dataset of realistic im-
age composites. Adversarial training requires no paired
ground truth. The foreground and background for the com-
posite in this dataset are extracted from different images so
that their appearance mismatch is consistent with what the
model would see at test time.
To reap the most benefits from our semi-supervised train-
ing, we also introduce a new model that is fully paramet-
ric. To process a high-resolution input composite at test
time, our proposed network first creates a down-sampled
copy of the image at 512×512resolution, from which it
predicts global RGB curves and a smooth, low-resolution
shading map. We then apply the RGB curves pointwise to the high-resolution input and multiply the result by the upsampled shading map. The shading map enables more realistic
local tonal variations, unlike previous harmonization meth-
ods limited to global tone and color changes, either by con-
struction [14, 16, 31] or because of their training data [4].
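The following PyTorch sketch illustrates how a fully parametric harmonizer of this kind can be applied at full resolution: per-channel curves are looked up pointwise and the upsampled shading map is applied multiplicatively inside the foreground mask. The curve parameterization (K sampled control points with piecewise-linear lookup), the function name, and all shapes are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def apply_parametric_harmonization(image, curves, shading, mask):
    """Apply global RGB curves and a low-res shading map to a composite.

    image:   (B, 3, H, W) high-resolution composite in [0, 1].
    curves:  (B, 3, K) per-channel curves sampled at K control points
             (K and the piecewise-linear form are assumptions).
    shading: (B, 1, h, w) low-resolution multiplicative shading map.
    mask:    (B, 1, H, W) foreground mask; only the foreground is edited.
    """
    B, C, H, W = image.shape
    K = curves.shape[-1]
    # Piecewise-linear lookup of each pixel value in its channel curve.
    idx = image * (K - 1)
    lo = idx.floor().long().clamp(0, K - 2)
    frac = idx - lo.float()
    flat = curves.unsqueeze(-1).unsqueeze(-1).expand(B, C, K, H, W)
    v_lo = flat.gather(2, lo.unsqueeze(2)).squeeze(2)
    v_hi = flat.gather(2, (lo + 1).unsqueeze(2)).squeeze(2)
    curved = v_lo * (1 - frac) + v_hi * frac
    # Upsample the shading map and apply it multiplicatively.
    shade = F.interpolate(shading, size=(H, W), mode="bilinear", align_corners=False)
    harmonized = (curved * shade).clamp(0, 1)
    return image * (1 - mask) + harmonized * mask

# Toy usage: identity curves and a constant shading map leave the image unchanged.
img = torch.rand(1, 3, 1024, 1024)
out = apply_parametric_harmonization(img,
                                     torch.linspace(0, 1, 16).repeat(1, 3, 1),
                                     torch.ones(1, 1, 64, 64),
                                     torch.ones(1, 1, 1024, 1024))
```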
Our parametric approach offers several benefits. First, by
restricting the model’s output space, it regularizes the adver-
sarial training. Unrestricted GAN generators often create
spurious image artifacts or other unrealistic patterns [36].
Second, it exposes intuitive controls for an artist to adjust
and customize the harmonization result post-hoc. This is
unlike the black-box nature of most current learning-based
approaches [3, 4, 9, 10], which output an image directly.
And, third, our parametric model runs at an interactive rate,
even on very high-resolution images (e.g., 4k), whereas sev-
eral state-of-the-art methods [4, 9, 10] are limited to low-
resolution (e.g., 256×256) inputs.
To summarize, we make the following contributions:
• A novel dual-stream semi-supervised training strategy
that, for the first time, enables training from real com-
posites, which contain much richer local appearance
mismatches between foreground and background.
• A parametric harmonization method that can capture
these more complex, local effects (using our shading
map) and produces more diverse and photorealistic
harmonization results.
• State-of-the-art results on both synthetic and real com-
posite test sets in terms of quantitative results and
visual comparisons, together with a new evaluation
benchmark.
|
Xie_Revealing_the_Dark_Secrets_of_Masked_Image_Modeling_CVPR_2023
|
Abstract
Masked image modeling (MIM) as pre-training is shown
to be effective for numerous vision downstream tasks, but
how and where MIM works remain unclear. In this paper,
we compare MIM with the long-dominant supervised pre-
trained models from two perspectives, the visualizations and
the experiments, to uncover their key representational dif-
ferences. From the visualizations, we find that MIM brings
locality inductive bias to all layers of the trained models,
but supervised models tend to focus locally at lower layers
but more globally at higher layers. That may be the rea-
son why MIM helps Vision Transformers that have a very
large receptive field to optimize. Using MIM, the model
can maintain a large diversity on attention heads in all lay-
ers. But for supervised models, the diversity on attention
heads almost disappears from the last three layers and less
diversity harms the fine-tuning performance. From the exper-
iments, we find that MIM models can perform significantly
better on geometric and motion tasks with weak semantics
or fine-grained classification tasks, than their supervised
counterparts. Without bells and whistles, a standard MIM
pre-trained SwinV2-L could achieve state-of-the-art perfor-
mance on pose estimation (78.9 AP on COCO test-dev and
78.0 AP on CrowdPose), depth estimation (0.287 RMSE on
NYUv2 and 1.966 RMSE on KITTI), and video object track-
ing (70.7 SUC on LaSOT). For the semantic understanding
datasets where the categories are sufficiently covered by the
supervised pre-training, MIM models can still achieve highly
competitive transfer performance. With a deeper understand-
ing of MIM, we hope that our work can inspire new and solid
research in this direction. Code will be available at https:
//github.com/zdaxie/MIM-DarkSecrets .
|
1. Introduction
*Equal Contribution. The work is done when Zhenda Xie, Zigang Geng, and Jingcheng Hu are interns at Microsoft Research Asia. †Contact person.
Pre-training of effective and general representations applicable to a wide range of tasks in a domain is the key to the success of deep learning. In computer vision, supervised classification on ImageNet [14] has long been the dominant pre-training task which is manifested to be effective on a
wide range of vision tasks, especially on the semantic un-
derstanding tasks, such as image classification [17, 18, 36,
38, 51], object detection [23, 29, 61, 63], semantic segmenta-
tion [53, 69], video action recognition [7, 52, 65, 67] and so
on. Over the past several years, “masked signal modeling”,
which masks a portion of input signals and tries to predict
these masked signals, serves as a universal and effective self-
supervised pre-training task for various domains, including
language, vision, and speech. After (masked) language mod-
eling repainted the NLP field [15, 49], recently, such task
has also been shown to be a competitive challenger to the su-
pervised pre-training in computer vision [3, 8, 18, 27, 74, 80].
That is, masked image modeling (MIM) pre-trained models
achieve very high fine-tuning accuracy on a wide range of
vision tasks of different nature and complexity.
However, there still remain several questions:
1.What are the key mechanisms that contribute to the
excellent performance of MIM?
2.How transferable are MIM and supervised models
across different types of tasks, such as semantic un-
derstanding, geometric and motion tasks?
To investigate these questions, we compare MIM with su-
pervised models from two perspectives, the visualization
perspective and the experimental perspective, trying to un-
cover key representational differences between these two
pre-training tasks and deeper understand the behaviors of
MIM pre-training.
We start with studying the attention maps of the pre-
trained models. Firstly, we visualize the averaged attention
distance in MIM models, and we find that masked image
modeling brings locality inductive bias to the trained model, in that the models tend to aggregate nearby pixels in part of the attention heads, and the locality strength is highly correlated with the masking ratio and masked patch
size in the pre-training stage. But the supervised models tend
to focus locally at lower layers but more globally at higher
layers.
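For reference, averaged attention distance can be computed roughly as below: each head's attention weights are used to average the spatial distances between query and key patch centers. The patch size, grid size, and the assumption that the class token has been stripped are illustrative; this is a generic sketch rather than the authors' exact measurement code.

```python
import torch

def average_attention_distance(attn, patch_size=16, grid=14):
    """Mean spatial distance (in pixels) attended by each head.

    attn: (heads, N, N) attention probabilities over N = grid*grid patch tokens
    (class token assumed stripped; patch_size/grid are illustrative values).
    """
    ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float() * patch_size
    # Pairwise distances between query and key patch centers: (N, N).
    dist = torch.cdist(coords, coords)
    # Attention-weighted distance, averaged over queries: one value per head.
    return (attn * dist.unsqueeze(0)).sum(-1).mean(-1)

# Example with random attention maps for a 12-head layer.
attn = torch.softmax(torch.randn(12, 196, 196), dim=-1)
print(average_attention_distance(attn))
```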
We next probe how differently the attention heads in MIM
trained Transformer behave. We find that different atten-
tion heads tend to aggregate different tokens on all layers
in MIM models , according to the large KL-divergence on
attention maps of different heads. But for supervised models,
the diversity on attention heads diminishes as the layer goes
deeper and almost disappears in the last three layers. We
drop the last several layers for supervised pre-trained models
during fine-tuning and find that it benefits the fine-tuning per-
formance on downstream tasks; however, this phenomenon is not observed for MIM models. That is, less diversity on attention heads would somewhat harm the performance on downstream tasks.
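A head-diversity measure of this kind can be sketched as the average pairwise KL divergence between the attention distributions of different heads, as below; the exact averaging protocol (over queries and head pairs) is an assumption, not the paper's reported metric code.

```python
import torch

def head_diversity_kl(attn, eps=1e-8):
    """Average pairwise KL divergence between attention heads.

    attn: (heads, N, N) attention probabilities for one layer; a larger value
    indicates heads attending to more diverse sets of tokens.
    """
    h = attn.shape[0]
    p = attn.clamp_min(eps).unsqueeze(1)          # (h, 1, N, N)
    q = attn.clamp_min(eps).unsqueeze(0)          # (1, h, N, N)
    kl = (p * (p.log() - q.log())).sum(-1)        # (h, h, N): KL per query token
    off_diag = ~torch.eye(h, dtype=torch.bool)
    return kl.mean(-1)[off_diag].mean()

attn = torch.softmax(torch.randn(12, 196, 196), dim=-1)
print(head_diversity_kl(attn))
```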
Then we examine the representation structures in the
deep networks of MIM and supervised models via the simi-
larity metric of Centered Kernel Alignment (CKA) [37]. We
surprisingly find that in MIM models, the feature representations of different layers are highly similar, in that their CKA values are all very large (e.g., [0.9, 1.0]). But for supervised models, as in [59], different layers learn different representation structures, in that their CKA similarities vary greatly (e.g., [0.5, 1.0]). To further verify this, we load the
pre-trained weights of randomly shuffled layers during fine-
tuning and find that supervised pre-trained models suffer
more than the MIM models.
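The CKA values referred to here can be computed with the standard linear-CKA formula, sketched below for two layers' features; this is the textbook formula rather than code released with the paper.

```python
import torch

def linear_cka(x, y):
    """Linear Centered Kernel Alignment between two feature matrices.

    x: (n, d1), y: (n, d2) features of the same n tokens/images from two
    layers. Returns a similarity in [0, 1].
    """
    x = x - x.mean(0, keepdim=True)
    y = y - y.mean(0, keepdim=True)
    hsic = (x.T @ y).pow(2).sum()                 # ||Y^T X||_F^2
    norm_x = (x.T @ x).pow(2).sum().sqrt()        # ||X^T X||_F
    norm_y = (y.T @ y).pow(2).sum().sqrt()        # ||Y^T Y||_F
    return hsic / (norm_x * norm_y)

feats_a = torch.randn(512, 768)
feats_b = torch.randn(512, 768)
print(linear_cka(feats_a, feats_b))
```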
From the experimental perspective, a fundamental pre-
training task should be able to benefit a wide range of tasks,
or at least it is important to know for which types of tasks
MIM models work better than the supervised counterparts.
To this end, we conduct a large-scale study by comparing the
fine-tuning performance of MIM and supervised pre-trained
models, on three types of tasks, semantic understanding
tasks, geometric and motion tasks, and the combined tasks
which simultaneously perform both.
For semantic understanding tasks, we select several rep-
resentative and diverse image classification benchmarks, in-
cluding Concept Generalization (CoG) benchmark [62], the
widely-used 12-dataset benchmark [38], as well as a fine-
grained classification dataset iNaturalist-18 [68]. For the
classification datasets whose categories are sufficiently cov-
ered by ImageNet categories (e.g. CIFAR-10/100), super-
vised models can achieve better performance than MIM
models. However, for other datasets, such as fine-grained
classification datasets (e.g., Food, Birdsnap, iNaturalist), or
datasets with different output categories (e.g., CoG), most
of the representation power in supervised models is diffi-
cult to transfer, thus MIM models remarkably outperform
supervised counterparts.
For geometric and motion tasks that require weaker se-
mantics and high-resolution object localization capabilities,
such as pose estimation on COCO [48] and CrowdPose [44],
depth estimation on NYUv2 [64] and KITTI [22], and video
object tracking on GOT10k [32], TrackingNet [55], and LaSOT [20], MIM models outperform supervised counterparts
by large margins. Note that, without bells and whistles,
Swin-L with MIM pre-training could achieve state-of-the-art
performance on these benchmarks, e.g., 80.5AP on COCO
val,78.9AP on COCO test-dev, and 78.0AP on Crowd-
Pose of pose estimation, 0.287RMSE on NYUv2 and 1.966
RMSE on KITTI of depth estimation, and 70.7SUC on
LaSOT of video object tracking.
We select object detection on COCO as the combined task
which simultaneously performs both semantic understanding
and geometric learning. For object detection on COCO,
MIM models outperform supervised counterparts. By investigating the training losses of object classification and localization, we find that MIM models help the localization task converge faster, while supervised models benefit object classification more, since the categories of COCO are fully covered by ImageNet.
In general, MIM models tend to exhibit improved perfor-
mance on geometric/motion tasks with weak semantics or
fine-grained classification tasks compared to their supervised
counterparts. For tasks/datasets where supervised models
excel in transfer, MIM models can still achieve competitive
transfer performance. Masked image modeling appears to
be a promising candidate for a general-purpose pre-trained
model. We hope our paper contributes to this understanding
within the community and stimulates further research in this
direction.
|
Wu_Virtual_Sparse_Convolution_for_Multimodal_3D_Object_Detection_CVPR_2023
|
Abstract
Recently, virtual/pseudo-point-based 3D object detec-
tion that seamlessly fuses RGB images and LiDAR data
by depth completion has gained great attention. However,
virtual points generated from an image are very dense, in-
troducing a huge amount of redundant computation during
detection. Meanwhile, noises brought by inaccurate depth
completion significantly degrade detection precision. This
paper proposes a fast yet effective backbone, termed Vir-
ConvNet , based on a new operator VirConv (Virtual Sparse
Convolution), for virtual-point-based 3D object detection.
VirConv consists of two key designs: (1) StVD (Stochas-
tic Voxel Discard) and (2) NRConv (Noise-Resistant Sub-
manifold Convolution). StVD alleviates the computation
problem by discarding large amounts of nearby redundant
voxels. NRConv tackles the noise problem by encoding
voxel features in both 2D image and 3D LiDAR space. By
integrating VirConv, we first develop an efficient pipeline
VirConv-L based on an early fusion design. Then, we
build a high-precision pipeline VirConv-T based on a trans-
formed refinement scheme. Finally, we develop a semi-
supervised pipeline VirConv-S based on a pseudo-label
framework. On the KITTI car 3D detection test leader-
board, our VirConv-L achieves 85% AP with a fast run-
ning speed of 56ms. Our VirConv-T and VirConv-S attain a high precision of 86.3% and 87.2% AP, and currently
rank 2nd and 1st1, respectively. The code is available at
https://github.com/hailanyi/VirConv .
|
1. Introduction
3D object detection plays a critical role in autonomous
driving [32, 45]. The LiDAR sensor measures the depth
of scene [4] in the form of a point cloud and enables re-
liable localization of objects in various lighting environ-
ments. While LiDAR-based 3D object detection has made
rapid progress in recent years [19, 23, 25, 27, 28, 42, 43, 49],
its performance drops significantly on distant objects, which
inevitably have sparse sampling density in the scans. Unlike
*Corresponding author
1On the date of CVPR deadline, i.e., Nov.11, 2022
Figure 1. Our VirConv-T achieves top average precision (AP) on
both 3D and BEV moderate car detection in the KITTI benchmark
(more details are in Table 1). Our VirConv-L runs fast at 56ms
with competitive AP.
LiDAR scans, color image sensors provide high-resolution
sampling and rich context data of the scene. The RGB im-
age and LiDAR data can complement each other and usu-
ally boost 3D detection performance [1, 6, 20, 21, 24].
Early methods [29–31] extended the features of LiDAR
points with image features, such as semantic mask and 2D
CNN features. They did not increase the number of points;
thus, the distant points still remain sparse. In contrast,
the methods based on virtual/pseudo points (for simplic-
ity, both denoted as virtual points in the following) enrich
the sparse points by creating additional points around the
LiDAR points. For example, MVP [45] creates the vir-
tual points by completing the depth of 2D instance points
from the nearest 3D points. SFD [36] creates the virtual
points based on depth completion networks [16]. The vir-
tual points complete the geometry of distant objects, show-
ing the great potential for high-performance 3D detection.
However, virtual points generated from an image are
generally very dense. Taking the KITTI [9] dataset as an
example, an 1242 ×375 image generates 466k virtual points
(∼27×more than the LiDAR scan points). This brings a
huge computational burden and causes a severe efficiency
issue (see Fig. 2 (f)). Previous work addresses the density
problem by using a larger voxel size [19, 44] or by ran-
domly down-sampling [17] the points. However, applying
such methods to virtual points will inevitably sacrifice use-
Figure 2. The noise problem and density problem of virtual points.
(a) Virtual points in 3D space. (b) Virtual points in 2D space. (c)
Noises (red) in 3D space. (d) Noises (red) distributed on 2D in-
stance boundaries. (e) Virtual points number versus AP improve-
ment along different distances by using V oxel-RCNN [7] with late
fusion (details see Sec. 3.1). (f) Car 3D AP and inference time us-
ing V oxel-RCNN [7] with LiDAR-only, virtual points-only, early
fusion, and late fusion (details see Sec. 3.1), respectively.
ful shape cues from faraway points and result in decreased
detection accuracy.
Another issue is that the depth completion can be inac-
curate, and it brings a large amount of noise in the virtual
points (see Fig. 2 (c)). Since it is very difficult to distinguish
the noises from the background in 3D space, the localiza-
tion precision of 3D detection is greatly degraded. In addi-
tion, the noisy points are non-Gaussian distributed, and can
not be filtered by conventional denoising algorithms [8,12].
Although recent semantic segmentation network [15] show
promising results, they generally require extra annotations.
To address these issues, this paper proposes a VirCon-
vNet pipeline based on a new Virtual Sparse Convolution
(VirConv) operator. Our design builds on two main obser-
vations . (1) First, geometries of nearby objects are often
relatively complete in LiDAR scans. Hence, most virtual
points of nearby objects only bring marginal performance
gain (see Fig. 2 (e)(f)), but increase the computational cost
significantly. (2) Second, noisy points introduced by inac-
curate depth completions are mostly distributed on the in-
stance boundaries (see Fig. 2 (d)). They can be recognized
in 2D images after being projected onto the image plane.
Based on these two observations, we design a StVD
(Stochastic Voxel Discard) scheme to retain those most im-
portant virtual points by a bin-based sampling, namely, dis-
carding a huge number of nearby voxels while retaining far-
away voxels. This can greatly speed up the network com-
putation. We also design a NRConv (Noise-Resistant Sub-
manifold Convolution) layer to encode geometry features
of voxels in both 3D space and 2D image space. The ex-tended receptive field in 2D space allows our NRConv to
distinguish the noise pattern on the instance boundaries in
2D image space. Consequently, the negative impact of noise
can be suppressed.
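A minimal sketch of the bin-based discard idea is given below: points are binned by distance to the sensor and the nearby bins are aggressively subsampled while faraway bins are kept. The NumPy implementation, the number of bins, the distance range, and the per-bin keep ratios are illustrative assumptions, not the paper's actual StVD configuration.

```python
import numpy as np

def stochastic_voxel_discard(points, num_bins=10, max_dist=70.0, keep_ratio=0.1):
    """Bin-based discard of dense nearby virtual points (illustrative sketch).

    points: (N, 3+) array. Points are binned by planar distance; nearby bins
    keep only a small random fraction, faraway bins are kept in full.
    """
    dist = np.linalg.norm(points[:, :2], axis=1)
    bin_idx = np.clip((dist / max_dist * num_bins).astype(int), 0, num_bins - 1)
    kept = []
    for b in range(num_bins):
        idx = np.where(bin_idx == b)[0]
        # Keep all faraway points; randomly keep a fraction of the nearby ones.
        ratio = 1.0 if b >= num_bins // 2 else keep_ratio
        n_keep = max(1, int(len(idx) * ratio)) if len(idx) else 0
        if n_keep:
            kept.append(np.random.choice(idx, n_keep, replace=False))
    return points[np.concatenate(kept)] if kept else points

virtual_points = np.random.rand(100000, 3) * [70, 70, 4]
print(stochastic_voxel_discard(virtual_points).shape)
```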
We develop three multimodal detectors to demon-
strate the superiority of our VirConv: (1) a lightweight
VirConv-L constructed from Voxel-RCNN [7]; (2) a high-
precision VirConv-T based on multi-stage [34] and multi-
transformation [35] design; (3) a semi-supervised VirConv-
S based on a pseudo-label [33] framework. The effective-
ness of our design is verified by extensive experiments on
the widely used KITTI dataset [9] and nuScenes dataset [3].
Our contributions are summarized as follows:
• We propose a VirConv operator, which effectively en-
codes voxel features of virtual points by StVD and
NRConv . The StVD discards a huge number of redun-
dant voxels and substantially speeds up the 3D detec-
tion prominently. The NRConv extends the receptive
field of 3D sparse convolution to the 2D image space
and significantly reduces the impact of noisy points.
• Built upon VirConv, we present three new multimodal
detectors: a VirConv-L, a VirConv-T, and a semi-
supervised VirConv-S for efficient, high-precision,
and semi-supervised 3D detection, respectively.
• Extensive experiments demonstrated the effectiveness
of our design (see Fig. 1). On the KITTI leaderboard,
our VirConv-T and VirConv-S currently rank 2nd and
1st, respectively. Our VirConv-L runs at 56ms with
competitive precision.
|
Wang_Selective_Structured_State-Spaces_for_Long-Form_Video_Understanding_CVPR_2023
|
Abstract
Effective modeling of complex spatiotemporal dependencies
in long-form videos remains an open problem. The recently
proposed Structured State-Space Sequence (S4) model with
its linear complexity offers a promising direction in this
space. However, we demonstrate that treating all image-
tokens equally as done by S4 model can adversely affect
its efficiency and accuracy. To address this limitation, we
present a novel Selective S4 (i.e., S5) model that employs
a lightweight mask generator to adaptively select infor-
mative image tokens resulting in more efficient and accu-
rate modeling of long-term spatiotemporal dependencies in
videos. Unlike previous mask-based token reduction meth-
ods used in transformers, our S5 model avoids the dense
self-attention calculation by making use of the guidance of
the momentum-updated S4 model. This enables our model
to efficiently discard less informative tokens and adapt to
various long-form video understanding tasks more effec-
tively. However, as is the case for most token reduction
methods, the informative image tokens could be dropped in-
correctly. To improve the robustness and the temporal hori-
zon of our model, we propose a novel long-short masked
contrastive learning (LSMCL) approach that enables our
model to predict longer temporal context using shorter in-
put videos. We present extensive comparative results using
three challenging long-form video understanding datasets
(LVU, COIN and Breakfast), demonstrating that our ap-
proach consistently outperforms the previous state-of-the-
art S4 model by up to 9.6%accuracy while reducing its
memory footprint by 23%.
|
1. Introduction
Video understanding is an active research area where a
variety of different models have been explored including
e.g., two-stream networks [19, 20, 52], recurrent neural net-
works [3, 63, 72] and 3-D convolutional networks [59–61].
However, most of these methods have primarily focused on
short-form videos that are typically with a few seconds in
length, and are not designed to model the complex long-
Figure 1. Illustration of long-form videos – Evenly sampled
frames from two long-form videos that have long duration (more
than 1 minute) and distinct categories in the Breakfast [36] dataset
(grayscale frames are shown for better visualization). The video
on top shows the activity of making scrambled eggs, while the
one on the bottom shows the activity of making cereal. These two
videos heavily overlap in terms of objects ( e.g., eggs, saucepan
and stove), and actions ( e.g., picking, whisking and pouring). To
effectively distinguish these two videos, it is important to model
long-term spatiotemporal dependencies, which is also the key in
long-form video understanding.
term spatiotemporal dependencies often found in long-form
videos (see Figure 1 for an illustrative example). The re-
cent vision transformer (ViT) [14] has shown promising ca-
pability in modeling long-range dependencies, and several
variants [1,4,15,41,45,49,65] have successfully adopted the
transformer architecture for video modeling. However, for a
video with T frames and S spatial tokens, the complexity of
standard video transformer architecture is O(S²T²), which
poses prohibitively high computation and memory costs
when modeling long-form videos. Various attempts [54,68]
have been proposed to improve this efficiency, but the ViT
pyramid architecture prevents them from developing long-
term dependencies on low-level features.
In addition to ViT, a recent ViS4mer [29] method has
tried to apply the Structured State-Spaces Sequence (S4)
model [23] as an effective way to model the long-term
video dependencies. However, by introducing simple mask-
ing techniques we empirically reveal that the S4 model can
have different temporal reasoning preferences for different
downstream tasks. This makes applying the same image to-
ken selection method as done by ViS4mer [29] for all long-
form video understanding tasks suboptimal.
To address this challenge, we propose a cost-efficient
adaptive token selection module, termed S 5(i.e., selective
S4) model, which adaptively selects informative image to-
kens for the S4 model, thereby learning discriminative long-
form video representations. Previous token reduction meth-
ods for efficient image transformers [37, 42, 50, 66, 70, 71]
heavily rely on a dense self-attention calculation, which
makes them less effective in practice despite their theoret-
ical guarantees about efficiency gains. In contrast, our S5
model avoids the dense self-attention calculation by lever-
aging S4 features in a Gumbel-Softmax sampling [30] based
mask generator to adaptively select more informative im-
age tokens. Our mask generator leverages S4 feature for its
global sequence-context information and is further guided
by the momentum distillation from the S4 model.
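A minimal sketch of a Gumbel-Softmax token selector of this kind is shown below: a small head maps each (S4-derived) token feature to keep/drop logits, and a straight-through Gumbel-Softmax yields a differentiable binary mask. The module structure, feature dimension, and temperature are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenMaskGenerator(nn.Module):
    """Gumbel-Softmax keep/drop decision per token from sequence features."""

    def __init__(self, dim=512, tau=1.0):
        super().__init__()
        self.head = nn.Linear(dim, 2)   # keep/drop logits per token
        self.tau = tau

    def forward(self, tokens):                    # tokens: (B, N, dim)
        logits = self.head(tokens)                # (B, N, 2)
        # hard=True gives a one-hot sample with a straight-through gradient.
        mask = F.gumbel_softmax(logits, tau=self.tau, hard=True)[..., 0]
        return tokens * mask.unsqueeze(-1), mask  # masked tokens, (B, N) mask

gen = TokenMaskGenerator()
tokens = torch.randn(2, 196, 512)
masked, mask = gen(tokens)
print(mask.float().mean())   # fraction of tokens kept
```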
To further improve the robustness and the temporal pre-
dictability of our S5 model, we introduce a novel long-short
mask contrastive learning (LSMCL) to pre-train our model.
In LSMCL, image tokens are randomly selected from long and short clips, which includes the scenario where less informative image tokens are chosen, and their representations are learned to match each other. As a result, the LSMCL not
only significantly boosts the efficiency compared to the pre-
vious video contrastive learning methods [17, 51, 64], but
also increases the robustness of our S5 model when deal-
ing with the mis-predicted image tokens. We empirically
demonstrate that the S5 model with LSMCL pre-training
can employ shorter clips to achieve performance on par with using longer-range clips without incorporating LSMCL pre-training.
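A contrastive objective between long-clip and short-clip representations can be sketched as a symmetric InfoNCE loss, as below; the token masking itself is assumed to happen upstream, and the temperature, pooling into a single vector per clip, and the symmetric form are illustrative choices rather than the paper's exact LSMCL loss.

```python
import torch
import torch.nn.functional as F

def lsmcl_loss(z_long, z_short, temperature=0.1):
    """Symmetric InfoNCE between long-clip and short-clip representations.

    z_long, z_short: (B, D) pooled features of the long and short masked clips
    of the same videos; matched indices are positives, all other pairs in the
    batch are negatives.
    """
    z_long = F.normalize(z_long, dim=1)
    z_short = F.normalize(z_short, dim=1)
    logits = z_long @ z_short.t() / temperature        # (B, B) similarity matrix
    labels = torch.arange(z_long.shape[0], device=z_long.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

loss = lsmcl_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss)
```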
We summarize our key contributions as the following:
•We propose a Selective S4 (S5) model that leverages
the global sequence-context information from S4 features
to adaptively choose informative image tokens in a task-
specific way.
•We introduce a novel long-short masked contrastive learn-
ing approach (LSMCL) that enables our model to be tol-
erant to the mis-predicted tokens and exploit longer dura-
tion spatiotemporal context by using shorter duration input
videos, leading to improved robustness in the S5 model.
•We demonstrate that two proposed novel techniques (S5
model and LSMCL) are seamlessly suitable and effective
for long-form video understanding, achieving the state-of-
the-art performance on three challenging benchmarks. No-
tably, our method achieves up to 9.6% improvement on
LVU dataset compared to the previous state-of-the-art S4
method, while reducing the memory footprint by 23%.
|
Xie_Exploring_and_Exploiting_Uncertainty_for_Incomplete_Multi-View_Classification_CVPR_2023
|
Abstract
Classifying incomplete multi-view data is inevitable since
arbitrary missing views widely exist in real-world applica-
tions. Although great progress has been achieved, existing
incomplete multi-view methods are still difficult to obtain a
trustworthy prediction due to the relatively high uncertainty
nature of missing views. First, the missing view is of high
uncertainty, and thus it is not reasonable to provide a single
deterministic imputation. Second, the quality of the imputed
data itself is of high uncertainty. To explore and exploit the
uncertainty, we propose an Uncertainty-induced Incomplete
Multi-View Data Classification (UIMC) model to classify
the incomplete multi-view data under a stable and reliable
framework. We construct a distribution and sample multiple
times to characterize the uncertainty of missing views, and
adaptively utilize them according to the sampling quality.
Accordingly, the proposed method realizes more perceivable
imputation and controllable fusion. Specifically, we model
each missing data with a distribution conditioning on the
available views and thus introducing uncertainty. Then an
evidence-based fusion strategy is employed to guarantee the
trustworthy integration of the imputed views. Extensive ex-
periments are conducted on multiple benchmark data sets
and our method establishes a state-of-the-art performance
in terms of both performance and trustworthiness.
|
1. Introduction
Learning from multiple complementary views has the
potential to yield more generalizable models. Benefiting
from the power of deep learning, multi-view learning has
further exhibited remarkable benefits against the single-view
paradigm in clustering [1 –3], classification [4, 5] and rep-
resentation learning [6, 7]. However, real-world data are
usually incomplete. For instance, in the medical field, pa-
tients with the same condition may choose different medical examinations, producing incomplete/unaligned multi-view data; similarly, sensors in cars may at times be out of order and thus only part of the information can be collected. Flexible and trustworthy utilization of incomplete multi-view data is still very challenging due to the high uncertainty of missing views.
‡Equal contribution.
∗Corresponding author.
For the task of incomplete multi-view classification
(IMVC), there have been plenty of studies which could be
roughly categorized into two main lines. The methods [8, 9]
only use available views without imputation to conduct clas-
sification. While the other line [10 –13] reconstructs miss-
ing data based on deep learning methods such as autoen-
coder [14,15] or generative adversarial network (GAN) [16],
and then utilizes the imputed complete data for classification.
Although significant progress has been achieved by exist-
ing IMVC methods, there are still limitations: (1) Methods
that simply neglect missing views are usually ineffective
especially under high missing rates due to the limitation in
exploring the correlation among views; (2) Methods that
impute missing data based on deep learning methods are
short in interpretability, and the deterministic imputation fails to characterize the uncertainty of missing views, resulting in unstable classification; (3) Few IMVC methods can handle
multi-view data with complex missing patterns especially
for the data with more than two views, which makes these
methods inflexible.
In view of above limitations, we propose a simple yet
effective, stable and flexible incomplete multi-view classi-
fication model. First, the proposed model characterizes the
uncertainty of each missing view by imputing a distribution
instead of a deterministic value. The necessity of charac-
terizing the uncertainty for missing data on single view has
been well recognized in recent works [17, 18]. Second, we
conduct sampling multiple times from the above distribution
and each one is combined with the observed views to form
multiple completed multi-view samples. The quality of the
sampled data is of high uncertainty, and thus we adaptively
integrate them according to their quality on the single view
and multi-view fusion. For single view, the uncertainty of
the low-quality sampled data tends to be large and should
not affect the learning of other views. Therefore, we con-
struct an evidence-based classifier for each view to obtain the
opinion including subjective probabilities and uncertainty
masses. While for multi-view fusion, the opinions from mul-
tiple views are integrated based on the Dempster-Shafer (DS) combination rule, which ensures the trustworthy utilization of arbitrary-quality views.
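For reference, the reduced Dempster-Shafer combination of two evidential opinions (per-class belief masses plus an uncertainty mass) can be sketched as below; this follows the rule commonly used in evidential multi-view fusion, and whether UIMC uses exactly this form is an assumption.

```python
import torch

def ds_combine(b1, u1, b2, u2):
    """Reduced Dempster-Shafer combination of two evidential opinions.

    b1, b2: (B, K) belief masses per class; u1, u2: (B, 1) uncertainty masses,
    with b.sum(-1, keepdim=True) + u == 1 for each view.
    """
    # Conflict: mass assigned to incompatible class pairs across the two views.
    total = (b1.unsqueeze(2) * b2.unsqueeze(1)).sum((1, 2))   # sum_{i,j} b1_i b2_j
    conflict = total - (b1 * b2).sum(-1)
    scale = 1.0 / (1.0 - conflict + 1e-8)
    b = scale.unsqueeze(-1) * (b1 * b2 + b1 * u2 + b2 * u1)
    u = scale.unsqueeze(-1) * (u1 * u2)
    return b, u

# Two views' opinions over 3 classes (toy numbers that each sum to 1 with u).
b1, u1 = torch.tensor([[0.6, 0.2, 0.1]]), torch.tensor([[0.1]])
b2, u2 = torch.tensor([[0.5, 0.1, 0.1]]), torch.tensor([[0.3]])
print(ds_combine(b1, u1, b2, u2))
```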
The main contributions of this work are summarized as follows:
•We propose an exploration-and-exploitation strategy for
classifying incomplete multi-view data by characteriz-
ing the uncertainty of missing data, which promotes
both effectiveness and trustworthiness in utilizing im-
puted data. To the best of our knowledge, the proposed
UIMC is the first work asserting uncertainty in incom-
plete multi-view classification.
•To fully exploit the high-quality imputed data and re-
duce the affect of the low-quality imputed views, we
propose to weight the imputed data from two aspects,
avoiding the negative effect on single view and multi-
view fusion. The uncertainty-aware training and fusion
significantly ensure the effectiveness and reliability of
integrating uncertain imputed data.
•We conduct experiments on classification for data with
multiple types of features or modalities, and evaluate
the results with diverse metrics, which validates that the
proposed UIMC outperforms existing methods in the
above tasks and is trustworthy with reliable uncertainty.
|
Wang_Hunting_Sparsity_Density-Guided_Contrastive_Learning_for_Semi-Supervised_Semantic_Segmentation_CVPR_2023
|
Abstract
Recent semi-supervised semantic segmentation methods
combine pseudo labeling and consistency regularization to
enhance model generalization from perturbation-invariant
training. In this work, we argue that adequate supervi-
sion can be extracted directly from the geometry of fea-
ture space. Inspired by density-based unsupervised clus-
tering, we propose to leverage feature density to locate
sparse regions within feature clusters defined by label and
pseudo labels. The hypothesis is that lower-density fea-
tures tend to be under-trained compared with those densely
gathered. Therefore, we propose to apply regularization on
the structure of the cluster by tackling the sparsity to in-
crease intra-class compactness in feature space. With this
goal, we present a Density-Guided Contrastive Learning
(DGCL) strategy to push anchor features in sparse regions
toward cluster centers approximated by high-density posi-
tive keys. The heart of our method is to estimate feature
density which is defined as neighbor compactness. We de-
sign a multi-scale density estimation module to obtain the
density from multiple nearest-neighbor graphs for robust
density modeling. Moreover, a unified training framework is
proposed to combine label-guided self-training and density-
guided geometry regularization to form complementary su-
pervision on unlabeled data. Experimental results on PAS-
CAL VOC and Cityscapes under various semi-supervised
settings demonstrate that our proposed method achieves
state-of-the-art performances. The project is available
athttps://github.com/Gavinwxy/DGCL .
|
1. Introduction
Semantic segmentation, as an essential computer vision
task, has seen significant advances along with the rise of
deep learning [4, 30, 46]. Nevertheless, training segmenta-
tion models requires massive pixel-level annotations which
can be time-consuming and laborious to obtain. Therefore,
*Corresponding author.
Figure 1. Illustration of feature density within clusters. (a) Pixel-
level features of 5 classes extracted from PASCAL VOC 2012 [12]
with prediction confidence over 0.95. (b) Feature density esti-
mated by the averaged distance of 16 nearest neighbor features.
semi-supervised learning is introduced in semantic segmen-
tation and is drawing growing interest. It aims to design a
label-efficient training scheme with limited annotated data
to allow better model generalization by leveraging addi-
tional unlabeled images.
The key in semi-supervised semantic segmentation lies
in mining extra supervision from unlabeled samples. Re-
cent studies focus on learning consistency-based regular-
ization [13, 29, 32, 51] and designing self-training pipelines
[15, 19, 43, 44]. Inspired by the advances in representation
learning [17,24], another line of works [28,40,47–49] intro-
duce contrastive learning on pixel-level features to enhance
inter-class separability. Though previous works have shown
effectiveness, their label-guided learning scheme solely re-
lies on classifier knowledge, while the structure information
of feature space is under-explored. In this work, we argue
that effective supervision can be extracted from the geome-
try of feature clusters to complement label supervision.
Feature density, measured by local compactness, has
shown its potential to reveal feature patterns in unsuper-
vised clustering algorithms such as DBSCAN [11]. The
density-peak assumption [33] states that cluster centers are
more likely located in dense regions. Inversely, features in
sparse areas tend to be less representative within the clus-
ter, so they require extra attention. The sparsity exists even
within features that are confidently predicted by the classi-
fier. As shown in Fig. 1, pixel-level features are extracted
on labeled and unlabeled images from 5 classes in PASCAL
VOC 2012 dataset, all with high prediction confidence. Ev-
ident density variation still exists within each cluster, indi-
cating varying learning difficulty among features, which the
classifier fails to capture.
In this work, we propose a learning strategy named
Density-Guided Contrastive Learning (DGCL) to mine ef-
fective supervision from cluster structure of unlabeled data.
Specifically, we initialize categorical clusters based on la-
beled features and enrich them with unlabeled features
which are confidently predicted. Then, sparsity hunting
is conducted in each in-class feature cluster to locate low-
density features as anchors. Meanwhile, features in dense
regions are selected to approximate the cluster centers and
serve as the positive keys. Then, feature contrast is applied
to push the anchors toward their positive keys, explicitly
shrinking sparse regions to enforce more compact clusters.
The core of our method is feature density estimation. We
measure local density by the average distance between the
target feature and its nearest neighbors. For robust esti-
mation, categorical memory banks are proposed to break
the limitation on mini-batch, so in-class density can be es-
timated in a feature-to-bank style where class distribution
can be approximated globally. When building the nearest
neighbor graph, densities estimated by fewer neighbors tend
to focus on the local region, which prevents capturing true
cluster centers. On the other hand, graphs with too many
neighbors cause over-smoothed estimation, which harms
accurate sparsity mining. Therefore, we propose multi-
scale nearest neighbor graphs to determine the final density
by combining estimations from graphs of different sizes.
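A minimal sketch of such multi-scale density estimation is given below: density is taken as the inverse mean distance to the k nearest neighbors, averaged over several neighborhood sizes. The neighborhood sizes, the inverse-distance definition, and the anchor selection at the end are illustrative assumptions rather than the paper's exact module.

```python
import torch

def multi_scale_density(feats, ks=(8, 16, 32)):
    """Per-feature density as inverse mean k-NN distance, averaged over scales.

    feats: (N, D) L2-normalized pixel features of one class (possibly drawn
    from a memory bank).
    """
    dist = torch.cdist(feats, feats)                       # (N, N) pairwise distances
    densities = []
    for k in ks:
        # The k+1 smallest distances include the point itself (distance 0).
        knn, _ = dist.topk(k + 1, dim=1, largest=False)
        mean_knn = knn[:, 1:].mean(dim=1)                  # drop the self-distance
        densities.append(1.0 / (mean_knn + 1e-8))
    return torch.stack(densities).mean(dim=0)              # (N,) combined density

feats = torch.nn.functional.normalize(torch.randn(1024, 256), dim=1)
density = multi_scale_density(feats)
anchors = density.topk(128, largest=False).indices         # sparsest features as anchors
```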
We evaluate the proposed method on PASCAL VOC
2012 [12] and Cityscapes [8] under various semi-supervised
settings, where our approach achieves state-of-the-art per-
formances. Our contributions are summarized as follows:
• We propose a density-guided contrastive learning strat-
egy to tackle semi-supervised semantic segmentation
by mining effective supervision from the geometry in
feature space.
• We propose a multi-scale density estimation module
combined with dynamic memory banks to capture fea-
ture density robustly.
• We propose a unified learning framework consisting
of label-guided self-training and density-guided fea-
ture learning, in which two schemes complement each
other. Experiments show that our method achieves
state-of-the-art performances.
|
Xie_On_Data_Scaling_in_Masked_Image_Modeling_CVPR_2023
|
Abstract
Scaling properties have been one of the central issues
in self-supervised pre-training, especially the data scalabil-
ity, which has successfully motivated the large-scale self-
supervised pre-trained language models and endowed them
with significant modeling capabilities. However, scaling
properties seem to be unintentionally neglected in the re-
cent trending studies on masked image modeling (MIM),
and some arguments even suggest that MIM cannot ben-
efit from large-scale data. In this work, we try to break
down these preconceptions and systematically study the scal-
ing behaviors of MIM through extensive experiments, with
data ranging from 10% of ImageNet-1K to full ImageNet-
22K, model parameters ranging from 49-million to one-
billion, and training length ranging from 125K to 500K
iterations. Our main findings are twofold: 1) masked image modeling still demands large-scale data in order to scale up computes and
model parameters; 2) masked image modeling cannot bene-
fit from more data under a non-overfitting scenario, which
diverges from the previous observations in self-supervised
pre-trained language models or supervised pre-trained vi-
sion models. In addition, we reveal several intriguing prop-
erties in MIM, such as high sample efficiency in large MIM
models and strong correlation between pre-training vali-
dation loss and transfer performance. We hope that our
findings could deepen the understanding of masked im-
age modeling and facilitate future developments on large-
scale vision models. Code and models will be available at
https://github.com/microsoft/SimMIM .
|
1. Introduction
The work is done when Zhenda Xie, Yutong Lin, and Yixuan Wei are interns at Microsoft Research Asia. †Project co-leaders.
Masked Image Modeling (MIM) [3, 18, 44], which has recently emerged in the field of self-supervised visual pre-training, has attracted widespread interest and extensive applications throughout the community for unleashing the superior modeling capacity of attention-based Transformer architectures [13, 26] and demonstrating excellent sample
efficiency and impressive transfer performance on a vari-
ety of vision tasks. However, most recent practices are
focused on the design of MIM methods, while the study
of scaling properties of MIM is unintentionally neglected,
especially the data scaling property, which successfully mo-
tivated the large-scale self-supervised pre-trained language
models and endowed them with significant modeling capa-
bilities. Although previous works [8, 10, 17, 37, 45] have
explored several conclusions about the scaling properties of
vision models, most of their findings were obtained under a
supervised pre-training scheme or under a contrastive learn-
ing framework, so the extent to which these findings could
be transferred to MIM still needs to be investigated.
Meanwhile, with the emergence of Transformers [41] and
masked language modeling (MLM) [12, 31], the systematic
studies of scaling laws have already been explored in natural
language processing field [21,23,35], which provided ample
guidance for large models in recent years. The core finding
drawn from scaling laws [21] for neural language models
is that the performance has a power-law relationship with
each of the three scale factors – model parameters N, size of
dataset D, and amount of compute C respectively – when not
bottlenecked by the other two . This conclusion implies that
better performance can be obtained by scaling up these three
factors to the extent that the scaling laws are in effect, which
led to the subsequent developments of large scale language
models [16,28,29,32,33] that exhibit excellent modeling ca-
pabilities [4] on most language tasks. Therefore, it is natural
to ask whether MIM possesses the same scaling signatures
for vision models as the MLM method for language models,
so that the scaling up of vision models can catch up with
language models.
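As a toy illustration of what a power-law relationship implies in practice, a power law of the form L = a * C^(-b) appears as a straight line on a log-log plot, so its exponent can be estimated with a simple linear fit; the compute values and losses below are made up for illustration and are not measurements from this or any cited paper.

```python
import numpy as np

# Toy check of a power-law trend L = a * C^(-b): on a log-log scale the trend
# is linear, and the slope of a least-squares fit gives -b.
compute = np.array([1.0, 2.0, 4.0, 8.0, 16.0])      # relative compute (made up)
val_loss = np.array([0.62, 0.55, 0.49, 0.44, 0.39])  # validation loss (made up)
slope, intercept = np.polyfit(np.log(compute), np.log(val_loss), deg=1)
print(f"fitted exponent b ~= {-slope:.3f}, prefactor a ~= {np.exp(intercept):.3f}")
```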
Though masked image modeling and masked language
modeling both belong to masked signal prediction, their prop-
erty differences are also non-negligible due to the different
nature of vision and language. That is, the images are highly
redundant raw signals and words/sentences are semantically
rich tokens, which may result in different abilities of data
utilization, and it is thus debatable whether the observations
from language models could be reproduced in vision models.
Figure 1. The curves of validation loss of pre-training models w.r.t. the relative compute, dataset size and model size. MIM performance
improves smoothly as we increase the relative compute and model size, but does not improve when the dataset size is sufficient to prevent the model
from overfitting. We set the relative compute of SwinV2-S for 125K iterations as the value of 1. Best viewed in color.
Moreover, recent studies [14, 38] have shown that using a
small amount of training data in masked image modeling
can achieve comparable performance to using large datasets,
which also motivates our explorations as it disagrees with
previous findings and intuitions.
In this paper, we systematically investigate the scaling
properties, especially the data scaling capability of masked
image modeling in terms of different dataset sizes ranging
from 10% of ImageNet ( ∼0.1 million) to full ImageNet-22K
(∼14 million), model sizes ranging from 49-million Swin-
V2 Small to 1-billion Swin-V2 giant and training lengths
ranging from 125K to 500K iterations. We use Swin Trans-
former V2 [25] as the vision encoder for its proven train-
ability of large models and applicability to a wide range
of vision tasks, and adopt SimMIM [44] for masked image
modeling pre-training because it has no restrictions on en-
coder architectures. We also conduct experiments with other
MIM frameworks like MAE [18] and other vision encoder
like the widely used ViT [13] to verify the generalizability
of our findings. With these experimental setups, our main
findings could be summarized into two parts: i) Masked image modeling still demands large-scale data in order to scale up computes and model parameters. We empirically find that MIM performance has a power-
law relationship with relative compute and model size when
not bottlenecked by the dataset size (Figure 1). Besides, we
observe that smaller datasets lead to severe overfitting phe-
nomenon for training large models (Figure 2), and the size
of the dataset to prevent model from overfitting increases
clearly as the model increases (Figure 5-Left). Therefore,
from the perspective of scaling up model, MIM still demands
large-scale data. Furthermore, if we train large-scale models
of different lengths, we find that relatively small datasets
are adequate at shorter training lengths, but still suffer from
overfitting at longer training lengths(Figure 5-Right), which
further demonstrate the data scalability of MIM for scaling
up compute.
ii) Masked image modeling cannot benefit from more data under a non-overfitting scenario, which diverges from the previous observations in self-supervised pre-trained language
models or supervised pre-trained vision models. In MIM,
we find that increasing the number of unique samples for
a non-overfitting model does not provide additional bene-
fits to performance (Figure 1). This behavior differs from
previous observations in supervised vision transformers and
self-supervised language models, where the model perfor-
mance increases as the number of unique samples increases.
In addition, we also demonstrate some intriguing proper-
ties about masked image modeling, such as larger models
possessing higher sample efficiency, i.e., fewer optimization
steps are required for larger models to achieve same perfor-
mance (Section 4.2); and the consistency between transfer
and test performance, i.e., test performance could be used to
indicate the results on downstream tasks (Section 4.3). These
observations also correspond to those in previous practices.
On the one hand, these findings confirm the effectiveness of MIM for scaling up model size and compute; on the other hand, they raise new concerns and challenges regarding the data scalability of MIM. We hope that our findings can deepen the under-
standing of masked image modeling and facilitate future
developments on large-scale vision models.
|
Xia_VecFontSDF_Learning_To_Reconstruct_and_Synthesize_High-Quality_Vector_Fonts_via_CVPR_2023
|
Abstract
Font design is of vital importance in the digital con-
tent design and modern printing industry. Developing al-
gorithms capable of automatically synthesizing vector fonts
can significantly facilitate the font design process. How-
ever, existing methods mainly concentrate on raster image
generation, and only a few approaches can directly syn-
thesize vector fonts. This paper proposes an end-to-end
trainable method, VecFontSDF, to reconstruct and synthe-
size high-quality vector fonts using signed distance func-
tions (SDFs). Specifically, based on the proposed SDF-
based implicit shape representation, VecFontSDF learns
to model each glyph as shape primitives enclosed by sev-
eral parabolic curves, which can be precisely converted
to quadratic Bézier curves that are widely used in vec-
tor font products. In this manner, most image generation
methods can be easily extended to synthesize vector fonts.
Qualitative and quantitative experiments conducted on a
publicly-available dataset demonstrate that our method ob-
tains high-quality results on several tasks, including vector
font reconstruction, interpolation, and few-shot vector font
synthesis, markedly outperforming the state of the art.
|
1. Introduction
Traditional vector font designing process relies heavily
on the expertise and effort from professional designers, set-
ting a high barrier for common users. With the rapid de-
velopment of deep generative models in the last few years,
a large amount of effective and powerful methods [1, 7, 32]
have been proposed to synthesize visually-pleasing glyph
images. In the meantime, how to automatically reconstruct
and generate high-quality vector fonts is still considered
as a challenging task in the communities of Computer Vi-
sion and Computer Graphics. Recently, several methods
*Denotes equal contribution.
†Corresponding author. E-mail: [email protected]
This work was supported by National Language Committee of China
(Grant No.: ZDI135-130), Center For Chinese Font Design and Research,
and Key Laboratory of Science, Technology and Standard in Press Industry
(Key Laboratory of Intelligent Press Media Technology).
Figure 1. Examples of results obtained by our method in the tasks of vector font reconstruction (a) and vector font interpolation (b).
based on sequential generative models [3,11,20,30,33] have been reported that treat a vector glyph as a sequence of drawing commands and use Recurrent Neural Networks (RNNs) or Transformers [31] to encode and decode the sequence. How-
ever, this explicit representation of vector graphics is ex-
tremely difficult for the learning and comprehension of deep
neural networks, mainly due to the long-range dependence
and the ambiguity in how to draw the outlines of glyphs.
More recently, DeepVecFont [33] was proposed to use dual-
modality learning to alleviate the problem, showing state-
of-the-art performance on this task. Its key idea is to use a
CNN encoder and an RNN encoder to extract features from
both image and sequence modalities. Despite using richer
information of dual-modality data, it still needs repetitively
random samplings of synthesis results to find the optimal
one and then uses Diffvg [18] to refine the vector glyphs un-
der the guidance of generated images in the inference stage.
Another possible solution for modeling vector graphics is to use implicit functions, which have been widely used
to represent 3D shapes in recent years. For instance,
DeepSDF [25] adopts a neural network to predict the values
of signed distance functions for surfaces, but it fails to con-
vert those SDF values to the explicit representation. BSP-
Net [4] uses hyperplanes to split the space in 2D shapes and
3D meshes, but it generates unsmooth and disordered re-
sults when handling glyphs consisting of numerous curves.
Inspired by the above-mentioned existing methods, we
propose an end-to-end trainable model, VecFontSDF, to
reconstruct and synthesize high-quality vector fonts using
signed distance functions (SDFs). The main idea is to treat
a glyph as several primitives enclosed by parabolic curves
which can be translated to quadratic Bézier curves that are
widely used in common vector formats like SVG and TTF.
Specifically, we use the feature extracted from the input
glyph image by convolutional neural networks and decode
it to the parameters of parabolic curves. Then, for every sampling point, we calculate the value of the signed distance function to its nearest curve and leverage these true SDF values as well as the target glyph image to train the model.
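As a minimal numerical illustration of the kind of supervision described above, the sketch below approximates the (unsigned) distance from a sampling point to a quadratic Bézier curve by densely sampling the curve. The paper's formulation uses parabolic primitives with true signed distances, so the sign (inside versus outside the primitive) and any analytic evaluation are omitted here; the function names and the sampling-based approximation are our own illustrative choices.

import numpy as np

def quadratic_bezier(p0, p1, p2, t):
    # Evaluate a quadratic Bezier curve at parameters t (Bernstein form).
    t = t[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def unsigned_distance_to_bezier(query, p0, p1, p2, samples=256):
    # Approximate the distance from a 2D query point to the curve by dense sampling.
    ts = np.linspace(0.0, 1.0, samples)
    pts = quadratic_bezier(np.asarray(p0, float), np.asarray(p1, float),
                           np.asarray(p2, float), ts)
    return float(np.min(np.linalg.norm(pts - np.asarray(query, float), axis=1)))

# Example: distance from the origin to a simple arch-shaped curve.
d = unsigned_distance_to_bezier([0.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [1.0, 0.0])
print(round(d, 4))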
The work most related to ours is [19], which also aims
to provide an implicit shape representation for glyphs, but
possesses a serious limitation mentioned below. For conve-
nience, we name the representation proposed in [19] IGSR,
standing for “Implicit Glyph Shape Representation”. The
quadratic curves used in IGSR are not strictly parabolic
curves, which cannot be translated to quadratic Bézier
curves. As a consequence, their model is capable of synthe-
sizing high-resolution glyph images but not vector glyphs.
Furthermore, it only uses raster images for supervision
which inevitably leads to inaccurate reconstruction results.
On the contrary, our proposed VecFontSDF learns to recon-
struct and synthesize high-quality vector fonts that consist
of quadratic Bézier curves by training on the corresponding
vector data in an end-to-end manner. Major contributions
of our paper are threefold:
• We design a new implicit shape representation to pre-
cisely reconstruct high-quality vector glyphs, which
can be directly converted into commonly-used vector
font formats (e.g., SVG and TTF).
• We use the true SDF values as a strong supervision
instead of only raster images to produce much more
precise reconstruction and synthesis results compared
to previous SDF-based methods.
• The proposed VecFontSDF can be flexibly integrated
with other generative methods such as latent space in-
terpolation and style transfer. Extensive experiments
have been conducted on these tasks to verify the supe-
riority of our method over other existing approaches,
indicating its effectiveness and broad applications.
|
Wu_Semi-Supervised_Stereo-Based_3D_Object_Detection_via_Cross-View_Consensus_CVPR_2023
|
Abstract
Stereo-based 3D object detection, which aims at detect-
ing 3D objects with stereo cameras, shows great potential
in low-cost deployment compared to LiDAR-based methods
and excellent performance compared to monocular-based
algorithms. However, the impressive performance of stereo-
based 3D object detection is at the huge cost of high-quality
manual annotations, which are hardly attainable for any
given scene. Semi-supervised learning, in which limited an-
notated data and numerous unannotated data are required
to achieve a satisfactory model, is a promising method to
address the problem of data deficiency. In this work, we pro-
pose to achieve semi-supervised learning for stereo-based
3D object detection through pseudo annotation generation
from a temporal-aggregated teacher model, which tempo-
rally accumulates knowledge from a student model. To fa-
cilitate a more stable and accurate depth estimation, we
introduce Temporal-Aggregation-Guided (TAG) disparity
consistency, a cross-view disparity consistency constraint
between the teacher model and the student model for robust
and improved depth estimation. To mitigate noise in pseudo
annotation generation, we propose a cross-view agreement
strategy, in which pseudo annotations should attain a high degree of agreement between the 3D and 2D views, as well as between the binocular views. We perform extensive exper-
iments on the KITTI 3D dataset to demonstrate our pro-
posed method’s capability in leveraging a huge amount of
unannotated stereo images to attain significantly improved
detection results.
|
1. Introduction
3D object detection, as one of the most significant per-
ception tasks in the computer vision community, has wit-
nessed great progress in recent years, especially after the
advent of neural-network-based deep learning. Most state-
of-the-art works which focus on 3D object detection mainly
*Corresponding author.
Figure 1. Illustration of the proposed Temporal-Aggregation-
Guided (TAG) disparity consistency constraint, in which the
teacher model, temporally collecting knowledge from the student
model, leads the disparity estimation of the student model across
the views.
rely on LiDAR data [17,32,33,43,51] to extract accurate 3D
information, such as 3D structure and depth of the points.
However, LiDAR data are costly to harvest and annotate,
and have limited sensing ranges in some cases. Instead,
vision-based methods, which detect 3D objects based on
images only, have drawn more attention in recent years.
While it is convenient to collect image data, there are signif-
icant challenges in applying image-based methods to depth
sensing, which is an ill-posed problem when localizing ob-
jects with only images. This problem is further exacerbated
in monocular-based 3D object detection [24,25,48]. Stereo images, in which image pairs taken from different viewpoints are available, can be used to reconstruct depth information
through pixel-to-pixel correspondence or stereo geometry,
making them more useful for detecting 3D objects without
the introduction of expensive LiDAR data.
With more attention focusing on stereo-based 3D object
detection, the performance of these methods gets continu-
ously improved, and the gap between LiDAR-based meth-
ods and stereo-based methods becomes progressively nar-
rower. However, the improving performance is built on
large-scale manual annotation, which is costly in terms of
both time and human resources. When the amount of an-
notation is limited, the performance of stereo-based meth-
ods deteriorates rapidly. Semi-supervised learning, which
makes use of a limited set of annotated data and plenty of
unannotated data, is a promising method for solving the data
deficiency problem. In this work, we propose an effective
and efficient semi-supervised stereo-based 3D object detec-
tion method to alleviate this limited annotation problem.
With limited annotated stereo images, depth estima-
tion, a key stage for accurately localizing 3D objects, be-
comes increasingly unstable, which in turn leads to poor
object localization. However, the inherent constraint be-
tween the left and right image in each stereo pair can be
adopted to promote the model’s performance in depth es-
timation. Inspired by the left-right disparity consistency
constraint proposed by Godard et al. [11], we propose a Temporal-Aggregation-Guided (TAG) disparity consistency constraint, as shown in Fig. 1. In this method, the disparity estimation of the base model, acting as the student, should follow that of the teacher model, which continuously accumulates knowledge from the base model, across the views. With the proposed TAG method, the base model can
enhance its capability in depth estimation when provided
with extra data without annotations. In addition, to gener-
ate more accurate pseudo annotations for the unannotated
data, we propose a cross-view agreement strategy, which
comprises a 3D-2D agreement constraint, in which pseudo
annotations with high localization consistency in both 2D
view and 3D view are kept, and a left-right agreement con-
straint, in which pseudo annotations with high similarities
in both the left and right view at the feature space are re-
tained for pseudo-supervision. With the proposed cross-
view agreement strategy, the remaining pseudo annotations
can effectively encompass the objects and provide high-
quality supervision on the unannotated data, further improv-
ing the detection performance of the base model.
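A hedged sketch of the two generic ingredients described above is given below: a mean-teacher style exponential-moving-average (EMA) update that temporally aggregates the student's weights into the teacher, and a left-right disparity consistency penalty between the student's and the teacher's predictions. The warping step is a crude placeholder (a real implementation would use differentiable grid sampling over the disparity map), and none of the function names or hyperparameters come from the paper.

import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # Temporally aggregate student knowledge into the teacher (mean-teacher style);
    # called once per training iteration with the two detector copies.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def warp_right_to_left(right_map, disparity):
    # Placeholder horizontal warp: shift the right-view map by the mean (integer) disparity.
    return torch.roll(right_map, shifts=int(disparity.mean().item()), dims=-1)

def tag_consistency_loss(student_disp_left, teacher_disp_right):
    # Penalize disagreement between the student's left-view disparity and the
    # teacher's right-view disparity warped into the left view.
    warped = warp_right_to_left(teacher_disp_right.detach(), student_disp_left)
    return F.smooth_l1_loss(student_disp_left, warped)

left = torch.rand(1, 1, 32, 64)
right = torch.rand(1, 1, 32, 64)
print(tag_consistency_loss(left, right).item())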
We evaluate the proposed method on the KITTI 3D
dataset [10]. With our proposed method, the base model
achieves relative performance gains of up to 18 percentage
points under the evaluation metric of AP_3D and 20 percentage points under the evaluation metric of AP_BEV with the
IoU threshold of 0.7 in the car category when only 5% of
training data are annotated, thus verifying the effectiveness
of our proposed method in making full use of data without
annotations.
We summarize our main contributions as follows:
1) We propose a semi-supervised method for stereo-
based 3D object detection, in which plentiful and easy-
to-access unannotated images are fully utilized to en-
hance the base model.
2) We propose a Temporal-Aggregation-Guided (TAG)
disparity consistency constraint to direct the dispar-
ity estimation of the base model through the teacher
model, with cumulative knowledge from the base
model, across the views.
3) We introduce a cross-view agreement strategy to re-
fine the generated pseudo annotations through enforc-
ing agreements between the 3D view and 2D view, and
between the left view and right view.
4) Our proposed semi-supervised method leads to signifi-
cant performance gains without requiring extensive an-
notations on the KITTI 3D dataset.
|
Wei_Sparsifiner_Learning_Sparse_Instance-Dependent_Attention_for_Efficient_Vision_Transformers_CVPR_2023
|
Abstract
Vision Transformers (ViT) have shown competitive advan-
tages in terms of performance compared to convolutional
neural networks (CNNs), though they often come with high
computational costs. To this end, previous methods explore
different attention patterns by limiting a fixed number of
spatially nearby tokens to accelerate the ViT’s multi-head
self-attention (MHSA) operations. However, such struc-
tured attention patterns limit the token-to-token connections
to their spatial relevance, which disregards learned seman-
tic connections from a full attention mask. In this work,
we propose an approach to learn instance-dependent at-
tention patterns, by devising a lightweight connectivity pre-
dictor module that estimates the connectivity score of each
pair of tokens. Intuitively, two tokens have high connectiv-
ity scores if the features are considered relevant either spa-
tially or semantically. As each token only attends to a small
number of other tokens, the binarized connectivity masks
are often very sparse by nature and therefore provide the
opportunity to reduce network FLOPs via sparse compu-
tations. Equipped with the learned unstructured attention
pattern, sparse attention ViT (Sparsifiner) produces a supe-
rior Pareto frontier between FLOPs and top-1 accuracy on
ImageNet compared to token sparsity. Our method reduces
48%∼69% of the MHSA FLOPs while the accuracy drop is
within 0.4%. We also show that combining attention and
token sparsity reduces ViT FLOPs by over 60%.
|
1. Introduction
Vision Transformers (ViTs) [15] have emerged as a dom-
inant model for fundamental vision tasks, such as image
classification [15], object detection [3], and semantic seg-
mentation [6, 7]. However, scaling ViTs to a large number
of tokens is challenging due to the quadratic computational
complexity of multi-head self-attention (MHSA) [37].
This is particularly disadvantageous for large-scale vi-
sion tasks because processing high-resolution and high-
*Equal contribution.
Figure 1. Comparison of Sparsifiner and fixed attention patterns: (a) local, (b) window, and (c) axial attention patterns are fixed, while (d) the proposed instance-dependent attention pattern is dynamic.
Twins [10] (a), Swin [24] (b), and Axial [18] (c) address quadratic
MHSA complexity using fixed attention patterns, which does not
consider the instance-dependent nature of semantic information in
images. To address this, we propose Sparsifiner (d): an efficient
module for sparse instance-dependent attention pattern prediction.
dimensionality inputs is desirable. For example, input
modalities such as video frames and 3D point clouds have a
large number of tokens even for basic use cases. New algo-
rithms are needed to continue to scale ViTs to larger, more
complex vision tasks.
Prior works have largely taken two approaches to im-
prove the computational efficiency of ViTs: token pruning
and using fixed sparse attention patterns in MHSA. To-
ken pruning methods [29] reduce the number of tokens by
a fixed ratio called the keep rate, but accuracy degrades
quickly when pruning early layers in the network [17, 32,
33]. For example, introducing token pruning into shallower
layers of EViT [17] causes a significant 3.16% top-1 accu-
racy drop on ImageNet [14]. This issue is due to the restric-
tion of pruning an entire token, which amounts to pruning
an entire row and column of the attention matrix at once.
One way to alleviate this is to prune connectivity between
individual tokens in the attention matrix. Existing meth-
ods that take this attention matrix connectivity-pruning ap-
proach use fixed sparse attention patterns [8]. For example,
local and strided fixed attention patterns are used [8, 16], in
combination with randomly-initialized connectivities [42].
However, such fixed attention patterns limit the capacity
of the self-attention connections to a fixed subset of to-
kens (Fig. 1). The fixed nature of these attention patterns
is less effective compared to the direct communication be-
tween tokens in full self-attention. For example, Swin transformer [23,24] has a limited receptive field at shallower layers and needs many layers to model long-range dependencies, and BigBird [42] needs to combine multiple fixed attention patterns to achieve good performance. Rather, it is desirable to design sparse attention algorithms that mimic full self-attention's instance-dependent nature [37], thereby
capturing the variable distribution of semantic information
in the input image content.
To address these challenges, we propose a method called
Sparsifiner that learns to compute sparse connectivity pat-
terns over attention that are both instance-dependent and
unstructured . The instance-dependent nature of the atten-
tion pattern allows each token to use its limited attention
budget of nonzero elements more efficiently compared to
fixed sparse attention patterns. For example, in attention
heads that attend to semantic rather than positional con-
tent [37, 38], tokens containing similar semantic informa-
tion should be considered to have high connectivity scores
despite their spatial distance. Similarly, nearby tokens with
irrelevant semantic relations should have lower connectivity
scores despite their spatial proximity. Furthermore, Spar-
sifiner improves attention pattern flexibility compared to to-
ken pruning by pruning individual connectivities instead of
entire rows and columns of the attention matrix. This al-
lows Sparsifiner to reduce FLOPs in the early layers of the
network without incurring significant top-1 accuracy degra-
dation (§4). By pruning individual connectivities dependent
on image content, Sparsifiner generalizes prior approaches
to sparsifying MHSA in ViTs, and in doing so, produces a
favourable trade-off between accuracy and FLOPs.
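The core mechanism above can be sketched as follows: connectivity scores are predicted from low-rank projections of the queries and keys, the top-k entries per query are kept as a binary mask, and attention logits outside the mask are suppressed. The random projections below stand in for the learned lightweight connectivity predictor, and this dense, masked implementation only illustrates the logic; realizing the FLOP savings requires sparse attention kernels.

import torch

def sparse_attention_with_topk_mask(q, k, v, rank=16, budget=8):
    # q, k, v: (batch, tokens, dim). Estimate connectivity with low-rank projections,
    # keep the top-`budget` keys per query, and apply masked softmax attention.
    b, n, d = q.shape
    wq = torch.randn(d, rank) / d ** 0.5   # illustrative random projections;
    wk = torch.randn(d, rank) / d ** 0.5   # a learned predictor would replace these
    scores = (q @ wq) @ (k @ wk).transpose(-2, -1)           # (b, n, n) connectivity
    topk = scores.topk(budget, dim=-1).indices
    mask = torch.zeros_like(scores, dtype=torch.bool).scatter_(-1, topk, True)
    logits = (q @ k.transpose(-2, -1)) / d ** 0.5
    logits = logits.masked_fill(~mask, float("-inf"))
    return torch.softmax(logits, dim=-1) @ v

x = torch.rand(2, 32, 64)
print(sparse_attention_with_topk_mask(x, x, x).shape)  # torch.Size([2, 32, 64])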
Our contributions are the following:
• We propose a novel efficient algorithm called Spar-
sifiner to predict instance-dependent sparse attention
patterns using low-rank connectivity patterns. Our in-
vestigation into instance-dependent unstructured spar-
sity is to the best of our knowledge novel in the context
of ViTs.
• We show that such learned unstructured attention spar-
sity produces a superior Pareto-optimal tradeoff be-tween FLOPs and top-1 accuracy on ImageNet com-
pared to token sparsity. Furthermore, we show that
Sparsifiner is complementary to token sparsity meth-
ods, and the two approaches can be combined to
achieve superior performance-accuracy tradeoffs.
• We propose a knowledge distillation-based approach
for training Sparsifiner from pretrained ViTs using a
small number of training epochs.
|
Wang_Object-Aware_Distillation_Pyramid_for_Open-Vocabulary_Object_Detection_CVPR_2023
|
Abstract
Open-vocabulary object detection aims to provide ob-
ject detectors trained on a fixed set of object categories
with the generalizability to detect objects described by arbi-
trary text queries. Previous methods adopt knowledge dis-
tillation to extract knowledge from Pretrained Vision-and-
Language Models (PVLMs) and transfer it to detectors.
However, due to the non-adaptive proposal cropping and
single-level feature mimicking processes, they suffer from
information destruction during knowledge extraction and
inefficient knowledge transfer. To remedy these limitations,
we propose an Object-Aware Distillation Pyramid (OADP)
framework, including an Object-Aware Knowledge Extrac-
tion (OAKE) module and a Distillation Pyramid (DP) mech-
anism. When extracting object knowledge from PVLMs, the
former adaptively transforms object proposals and adopts
object-aware mask attention to obtain precise and com-
plete knowledge of objects. The latter introduces global
and block distillation for more comprehensive knowledge
transfer to compensate for the missing relation information
in object distillation. Extensive experiments show that our
method achieves significant improvement compared to cur-
rent methods. Especially on the MS-COCO dataset, our
OADP framework reaches 35.6 mAP^N_50, surpassing the current state-of-the-art method by 3.3 mAP^N_50. Code is released at https://github.com/LutingWang/OADP.
|
1. Introduction
Open-vocabulary object detection (OVD) [49] aims to
endow object detectors with the generalizability to detect
open categories including both base andnovel categories
where only the former are annotated in the training phase.
Pretrained Vision-and-Language Models (PVLMs, e.g.,
CLIP [32] and ALIGN [19]) have witnessed great progress
in recent years, and Knowledge Distillation (KD) [17] has
led to a wave of unprecedented advances transferring the
zero-shot visual recognition ability from PVLMs to detec-
*Corresponding author ([email protected])
Figure 1. An overview of our OADP framework. (a) Directly
applying center crop on proposals may throw informative object
parts away, resulting in ambiguous image regions. In contrast, our
OAKE module extracts complete objects and reduces the influence
of surrounding distractors. (b) Our DP mechanism includes global,
block, and object KD to achieve effective knowledge transfer.
tors [12, 25, 28, 29, 48, 53]. KD typically comprises two es-
sential steps, i.e.,knowledge extraction and then knowledge
transfer . A common practice in OVD is to crop objects with
class-agnostic proposals and use the teacher ( e.g., CLIP vi-
sual encoder) to extract knowledge of the proposals. The
knowledge is then transferred to the detector ( e.g., Mask R-
CNN [15]) via feature mimicking.
Despite significant development, we argue that con-
ventional approaches still have two main limitations: 1)
Dilemma between comprehensiveness and purity during knowledge extraction. As proposals have diverse aspect ratios, the fixed center-crop strategy that squares them may cut out object parts (fig. 1a). Enlarging those proposals via a resizing function may alleviate this problem, but additional surrounding distractors may confuse the teacher and prevent it from extracting accurate proposal knowledge. 2) Missing global scene understanding during knowledge transfer. Conventional
approaches merely concentrate on object-level knowledge
transfer by directly mimicking the teacher’s features of indi-
vidual proposals. As a result, the student cannot fully grasp
the contextual characteristics describing the interweaving of
different objects. In light of the above discussions, we pro-
pose an Object-Aware Distillation Pyramid (OADP) frame-
work to excavate the teacher’s knowledge accurately and
effectively transfer the knowledge to the student.
To preserve the complete information of proposals while
extracting their CLIP image embeddings, we propose
an Object-Aware Knowledge Extraction (OAKE) module.
Concretely, given a proposal, we square it with an adaptive
resizing function to avoid destroying the object structure
and include as much object information as possible. How-
ever, the resizing process inevitably introduces environmen-
tal context, which may contain some distractors that confuse
the teacher. Therefore, we propose to utilize an object to-
ken [OBJ] whose interaction manner during the forward
process is almost the same as the class token [CLS] except
that it only attends to patch tokens covered by the origi-
nal proposal. In this way, the extracted embeddings contain
precise and complete knowledge of the proposal object.
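To make the [OBJ]-token idea above concrete, the sketch below builds a boolean attention mask for a ViT-style encoder in which a hypothetical [OBJ] token attends only to itself and to the patch tokens whose centers fall inside the proposal box. The token layout ([CLS], [OBJ], then a 7x7 patch grid), the patch size, and the center-based overlap test are illustrative assumptions, not the released OADP implementation.

import torch

def obj_token_attention_mask(box, grid=7, patch=32):
    # Token layout assumed: [CLS], [OBJ], then grid*grid patch tokens.
    # The [OBJ] token may attend only to patches overlapping `box` (x1, y1, x2, y2 in pixels).
    n = 2 + grid * grid
    mask = torch.ones(n, n, dtype=torch.bool)      # True = attention allowed
    ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")
    cx = (xs.flatten() + 0.5) * patch              # patch centers in pixels
    cy = (ys.flatten() + 0.5) * patch
    x1, y1, x2, y2 = box
    inside = (cx >= x1) & (cx <= x2) & (cy >= y1) & (cy <= y2)
    mask[1, :] = False                             # reset the [OBJ] row ...
    mask[1, 1] = True                              # ... it attends to itself
    mask[1, 2:] = inside                           # ... and to covered patches only
    return mask

m = obj_token_attention_mask((64, 64, 160, 160))
print(m.shape, int(m[1].sum()))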
To facilitate complete and effective knowledge trans-
fer, we propose a Distillation Pyramid (DP) mechanism
(fig. 1b). As previous works only adopt object distillation
to align the feature space of detectors and PVLMs, the re-
lation between different objects is neglected. Therefore,
we propose global and block distillation to compensate for
the missing relation information in object distillation. For
global distillation, we optimize the L1 distance between the
detector backbone and the CLIP visual encoder so that the
detector learns to encode rich semantics implied in the im-
age scene. However, the CLIP visual encoder is prone to
ignore background information, which may also be valu-
able for detection. Therefore, we take a finer step to divide
the input image into several blocks and optimize the L1 dis-
tance between the block embeddings of the detector and the
CLIP image encoder. Overall, the above three distillation
modules constitute a hierarchical distillation pyramid, al-
lowing for the transfer of more diversified knowledge from
CLIP to the detectors.
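The distillation pyramid described above can be summarized as a weighted sum of L1 feature-mimicking terms at the object, block, and global levels. The sketch below assumes the student and teacher embeddings have already been extracted and size-matched; the dictionary keys, loss weights, and shapes are placeholders rather than the paper's configuration.

import torch
import torch.nn.functional as F

def distillation_pyramid_loss(student, teacher, weights=(1.0, 1.0, 1.0)):
    # `student` and `teacher` are dicts of already-extracted, size-matched embeddings:
    #   "object": (num_proposals, dim), "block": (num_blocks, dim), "global": (dim,)
    w_obj, w_blk, w_glb = weights
    loss = w_obj * F.l1_loss(student["object"], teacher["object"].detach())
    loss = loss + w_blk * F.l1_loss(student["block"], teacher["block"].detach())
    loss = loss + w_glb * F.l1_loss(student["global"], teacher["global"].detach())
    return loss

s = {"object": torch.rand(8, 512), "block": torch.rand(9, 512), "global": torch.rand(512)}
t = {k: torch.rand_like(v) for k, v in s.items()}
print(distillation_pyramid_loss(s, t).item())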
We demonstrate the superiority of our OADP framework
on MS-COCO [27] and LVIS [14] datasets. On MS-COCO,
it improves the state-of-the-art result of mAP^N_50 from 32.3 to 35.6. On the LVIS dataset, our OADP framework reaches 21.9 AP_r on the object detection task and 21.7 AP_r on the instance segmentation task, leading the former methods by more than 1.1 AP_r and 1.9 AP_r respectively.
|
Xiang_SQUID_Deep_Feature_In-Painting_for_Unsupervised_Anomaly_Detection_CVPR_2023
|
Abstract
Radiography imaging protocols focus on particular body
regions, therefore producing images of great similarity
and yielding recurrent anatomical structures across pa-
tients. To exploit this structured information, we propose
the use of Space-aware Memory Queues for In-painting and
Detecting anomalies from radiography images (abbreviated
as SQUID). We show that SQUID can taxonomize the in-
grained anatomical structures into recurrent patterns; and
in the inference, it can identify anomalies (unseen/modified
patterns) in the image. SQUID surpasses 13 state-of-the-
art methods in unsupervised anomaly detection by at least 5
points on two chest X-ray benchmark datasets measured by
the Area Under the Curve (AUC). Additionally, we have cre-
ated a new dataset (DigitAnatomy), which synthesizes the
spatial correlation and consistent shape in chest anatomy.
We hope DigitAnatomy can prompt the development, eval-
uation, and interpretability of anomaly detection methods.
|
1. Introduction
Vision tasks in photographic imaging and radiogra-
phy imaging are different. For example, when identify-
ing objects in photographic images, we assume translation
invariance—a cat is a cat no matter if it appears on the left
or right of the image. In radiography imaging, on the other
hand, the relative location and orientation of a structure are
important characteristics that allow the identification of nor-
mal anatomy and pathological conditions [ 20,83]. Since ra-
diography imaging protocols assess patients in a fairly con-
sistent orientation, the generated images have great similar-
ity across various patients, equipment manufacturers, and
facility locations (see examples in Figure 1d). The consis-
tent and recurrent anatomy facilitates the analysis of numer-
ous critical problems and should be considered a significant
advantage for radiography imaging [ 85]. Several investiga-
*Corresponding author: Zongwei Zhou ( [email protected] )
Figure 1. Anomaly detection in radiography images can be both
easier and harder than in photographic images. It is easier be-
cause radiography images are spatially structured due to consistent
imaging protocols. It is harder because anomalies in radiography
images are subtle and require medical expertise to annotate.
tions have demonstrated the value of this prior knowledge in
enhancing Deep Nets’ performance by adding location fea-
tures, modifying objective functions, and constraining co-
ordinates relative to landmarks in images [ 3,47,49,69,86].
Our work seeks to answer this critical question: Can we
exploit consistent anatomical patterns and their spatial in-
formation to strengthen Deep Nets’ detection of anomalies
from radiography images without manual annotation?
Unsupervised anomaly detection only uses healthy im-
ages for model training and requires no other annotations
such as disease diagnosis or localization [ 5]. As many as
80% of clinical errors occur when the radiologist misses the
abnormality in the first place [ 7]. The impact of anomaly
detection is to reduce that 80% by clearly pointing out to ra-
diologists that there exists a suspicious lesion and then hav-
ing them look at the scan in depth. Unlike previous anomaly
detection methods, we formulate the task as an in-painting
task to exploit the anatomical consistency in appearance,
position, and layout across radiography images. Specif-
ically, we propose Space-aware Memory Queues for In-
painting and Detecting anomalies from radiography images
(abbreviated as SQUID ). During training, our model can
dynamically maintain a visual pattern dictionary by taxono-
mizing recurrent anatomical patterns based on their spatial
locations. Due to the consistency in anatomy, the same body
region across healthy images is expected to express similar
visual patterns, which makes the total number of unique pat-
terns manageable. During inference, since anomaly patterns
do not exist in the dictionary, the generated radiography im-
age is expected to be unrealistic if an anomaly is present.
As a result, the model can identify the anomaly by discrim-
inating the quality of the in-painting task. The success of
anomaly detection rests on two basic assumptions [89]: first, anomalies occur only rarely in the data; second, anomalies differ significantly from the normal patterns.
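As a toy sketch of the space-aware pattern dictionary idea, the class below keeps one small FIFO queue of feature vectors per spatial cell; at test time, a feature is scored by its distance to the nearest memorized pattern of the same cell, so features unlike anything seen at that location receive a large score. This is a drastic simplification of SQUID (which performs feature in-painting with memory queues inside a generator), and all names and sizes are illustrative.

import numpy as np

class SpaceAwareMemory:
    # One small FIFO queue of feature vectors per spatial cell.
    def __init__(self, num_cells, dim, capacity=32):
        self.queues = [np.zeros((0, dim)) for _ in range(num_cells)]
        self.capacity = capacity

    def update(self, cell, feature):
        q = np.vstack([self.queues[cell], feature[None]])[-self.capacity:]
        self.queues[cell] = q

    def anomaly_score(self, cell, feature):
        # Distance to the closest memorized pattern of the same cell.
        q = self.queues[cell]
        if len(q) == 0:
            return float("inf")
        return float(np.min(np.linalg.norm(q - feature, axis=1)))

mem = SpaceAwareMemory(num_cells=4, dim=8)
rng = np.random.default_rng(0)
for _ in range(10):                       # "healthy" training features for cell 0
    mem.update(0, rng.normal(size=8))
print(mem.anomaly_score(0, rng.normal(size=8)))          # in-distribution query
print(mem.anomaly_score(0, rng.normal(size=8) + 5.0))    # shifted query, larger score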
We have conducted experiments on two large-scale, pub-
licly available radiography imaging datasets. Our SQUID
is significantly superior to predominant methods in unsuper-
vised anomaly detection by over 5 points on the ZhangLab
dataset [ 32]; remarkably, we have demonstrated a 10-point
improvement over 13 recent unsupervised anomaly detec-
tion methods on the Stanford CheXpert dataset [ 29]. In
addition, we have created a new dataset (DigitAnatomy) to elucidate the spatial correlation and consistent shape of the chest anatomy in radiography (see Figure 1c). DigitAnatomy is dedicated to easing the development, evaluation, and interpretability of anomaly detection methods.
The qualitative visualization clearly shows the superiority
of our SQUID over the current state-of-the-art methods.
In summary, our contributions include: (I) the best-performing unsupervised anomaly detection method for chest radiography imaging; (II) a synthetic dataset to promote
anomaly detection research; (III) SQUID overcomes lim-
itations in dominant unsupervised anomaly detection meth-
ods [ 1,17,35,61,82] by inventing Space-aware Memory
Queue (§ 3.2), and Feature-level In-painting (§ 3.3).
|
Wang_Glocal_Energy-Based_Learning_for_Few-Shot_Open-Set_Recognition_CVPR_2023
|
Abstract
Few-shot open-set recognition (FSOR) is a challenging
task of great practical value. It aims to categorize a sam-
ple to one of the pre-defined, closed-set classes illustrated
by few examples while being able to reject the sample from
unknown classes. In this work, we approach the FSOR
task by proposing a novel energy-based hybrid model. The
model is composed of two branches, where a classification
branch learns a metric to classify a sample to one of closed-
set classes and the energy branch explicitly estimates the
open-set probability. To achieve holistic detection of open-
set samples, our model leverages both class-wise and pixel-
wise features to learn a glocal energy-based score, in which
a global energy score is learned using the class-wise fea-
tures, while a local energy score is learned using the pixel-
wise features. The model is enforced to assign large energy
scores to samples that are deviated from the few-shot exam-
ples in either the class-wise features or the pixel-wise fea-
tures, and to assign small energy scores otherwise. Exper-
iments on three standard FSOR datasets show the superior
performance of our model.1
|
1. Introduction
In recent years, deep learning has flourished in various
fields with the ever-increasing scale of the training data un-
der the closed-world learning settings, i.e., the training and
test sets share exactly the same set of classes. However,
such settings often do not hold in many real applications.
This is because 1) it is difficult or costly to obtain a large
amount of labeled data, and 2) models deployed in open-
world environments need to constantly deal with samples
from unknown classes. For example, in the application of
deep learning for diagnosing rare diseases, the number of
samples is limited. In this case, the model is prone to over-
fitting, resulting in a significant degradation in performance.
Further, there can be unknown variants of those diseases due
*H. Wang, G. Pang and P. Wang contributed equally in this work.
†Corresponding author.
1Code is available at https://github.com/00why00/Glocal
Figure 1. Two typical errors with existing methods. Our method can detect unknown open-set samples that are similar to the closed-set samples at either the global class level or the local feature level.
to our limited understanding of the diseases. Thus, the mod-
els are required to perform the classification accurately for
the classes illustrated by limited samples, while at the same
time detecting the samples from unknown classes. The lat-
ter ability is important, especially for healthcare or safety-
critical applications, e.g., to alert the unknown cases for hu-
man investigation in the disease diagnosis example, or to
request human intervention for handling unknown objects
in autonomous driving.
Few-shot learning (FSL) [16, 23, 28, 32, 35, 36] and open-set recognition (OSR) [3,7,9,22,29,30] are two techniques
dedicated to solving these two problems, respectively. FSL methods are trained to achieve good generalization on a new task with only a few training samples. However, FSL approaches are developed under a closed-set setting and lack the ability to distinguish classes unseen during training. The goal of OSR, on the other hand, is to recognize
open-set samples while maintaining the classification abil-
ity of closed-set samples. However, its classification ability
is often built upon the availability of a large number of train-
ing samples. Thus, OSR approaches fail to work effectively
when only a few training samples are available.
Few-shot open-set recognition (FSOR), which combines
the FSL and OSR problems, is a largely under-explored
area. FSOR requires the model to utilize only a few train-
ing samples to effectively achieve the ability of both closed-
set classification and open-set recognition. Existing FSOR
methods [6,13,15] are based on the prototype network [32],
which performs classification by measuring the distance be-
tween the prototype of each class and the query embedding
feature of a sample. They improve the original closed-set
classifier in recognizing open-set samples by learning an
additional open-set class using pseudo open-set samples.
However, only the class-wise information of the sample is
considered, and the pixel-wise spatial information of the
sample is ignored. As shown in Figure 1, these methods fail
to distinguish open-set samples from closed-set class sam-
ples that share similar global semantic appearances. Fur-
ther, the optimization objectives of FSL and OSR are differ-
ent from each other. Thus, training using only an open-set
classifier can limit the performance of these models.
To solve the above problems, we propose a novel FSOR
method, called Glocal Energy-based Learning (GEL). Dif-
ferent from previous methods, GEL consists of two classi-
fication components: one for closed-set classification and
one for open-set recognition. Specifically, in addition to using the class-wise features to classify closed-set samples
in the closed-set classifier, GEL leverages both class-wise
and pixel-wise features to learn a new energy-based open-
set classifier, in which a global energy score is learned using
the class-wise features while a local energy score is learned
using the pixel-wise features. GEL is enforced to assign
large energy scores to samples that are deviated from the
few-shot examples in either the class-wise features or the
pixel-wise features, and to assign small energy scores oth-
erwise. In doing so, GEL can detect unknown class samples
that are deviated from the known classes in either high-level
abstractions or fine-grained appearances, as shown in Fig-
ure 1. In summary, this work makes the following three
main contributions:
• We propose a novel FSOR framework that learns glo-
cal open scores for detecting unknown samples from
the class-wise (global) and pixel-wise (local) scales.
• We further propose a novel energy-based FSOR
model, dubbed GEL, that learns glocal energy-based
open scores based on the class-wise and pixel-wise
similarities of query samples to the support set.
• Through extensive experiments on three widely-used
datasets, we show that GEL outperforms state-of-the-
art competing methods and achieves state-of-the-art re-
sults on these benchmarks.
|
Wang_Imagen_Editor_and_EditBench_Advancing_and_Evaluating_Text-Guided_Image_Inpainting_CVPR_2023
|
Abstract
Text-guided image editing can have a transformative im-
pact in supporting creative applications. A key challenge
is to generate edits that are faithful to input text prompts,
while consistent with input images. We present Imagen Ed-
itor, a cascaded diffusion model built by fine-tuning Im-
agen [36] on text-guided image inpainting. Imagen Ed-
itor’s edits are faithful to the text prompts, which is ac-
complished by using object detectors to propose inpaint-
ing masks during training. In addition, Imagen Editor cap-
tures fine details in the input image by conditioning the cas-
caded pipeline on the original high resolution image. To im-
prove qualitative and quantitative evaluation, we introduce
EditBench , a systematic benchmark for text-guided image
inpainting. EditBench evaluates inpainting edits on natu-
ral and generated images exploring objects, attributes, and
scenes. Through extensive human evaluation on EditBench,
we find that object-masking during training leads to across-
the-board improvements in text-image alignment – such that
Imagen Editor is preferred over DALL-E 2 [31] and Stable
Diffusion [33] – and, as a cohort, these models are better
at object-rendering than text-rendering, and handle mate-
rial/color/size attributes better than count/shape attributes.
∗Equal contribution. †Equal advisory contribution.
|
1. Introduction
Text-to-image generation has seen a surge of recent inter-
est [31, 33, 36, 50, 51]. While these generative models are
surprisingly effective, users with specific artistic and de-
sign needs do not typically obtain the desired outcome in a
single interaction with the model. Text-guided image edit-
ingcan enhance the image generation experience by sup-
porting interactive refinement [13, 17, 34, 46]. We focus on
text-guided image inpainting, where a user provides an im-
age, a masked area, and a text prompt and the model fills
the masked area, consistent with both the prompt and the
image context (Fig. 1). This complements mask-free edit-
ing [13, 17, 46] with the precision of localized edits [5, 27].
This paper contributes to the modeling and evaluation of
text-guided image inpainting. Our modeling contribution is
Imagen Editor,2 a text-guided image editor that combines
large scale language representations with fine-grained con-
trol to produce high fidelity outputs. Imagen Editor is a
cascaded diffusion model that extends Imagen [36] through
finetuning for text-guided image inpainting. Imagen Edi-
tor adds image and mask context to each diffusion stage via
three convolutional downsampling image encoders (Fig. 2).
A key challenge in text-guided image inpainting is ensur-
ing that generated outputs are faithful to the text prompts.
The standard training procedure uses randomly masked re-
2https://imagen.research.google/editor/
Figure 2. Imagen Editor is an image editing model built by fine-tuning Imagen. All of the diffusion models, i.e., the base model
and super-resolution (SR) models, condition on high-resolution
1024×1024 image and mask inputs. To this end, new convolu-
tional image encoders are introduced.
gions of input images [27, 35]. We hypothesize that this
leads to weak image-text alignment since randomly cho-
sen regions can often be plausibly inpainted using only the
image context, without much attention to the prompt. We
instead propose a novel object masking technique that en-
courages the model to rely more on the text prompt during
training (Fig. 3). This helps make Imagen Editor more con-
trollable and substantially improves text-image alignment.
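The contrast between random masking and object masking can be sketched as follows: a random rectangular mask often hides background that the prompt never mentions, whereas an object mask hides a detected object so the model must rely on the text to fill the region. The detections argument below stands in for the output of any off-the-shelf detector; none of these helpers are part of the Imagen Editor release.

import numpy as np

def random_box_mask(h, w, rng, frac=0.3):
    # Random rectangular mask: often covers background the prompt never mentions.
    mh, mw = int(h * frac), int(w * frac)
    y, x = rng.integers(0, h - mh), rng.integers(0, w - mw)
    mask = np.zeros((h, w), dtype=bool)
    mask[y:y + mh, x:x + mw] = True
    return mask

def object_mask(h, w, detections, rng):
    # Object mask: hide one detected object so the model must rely on the text prompt.
    # `detections` is a list of (x1, y1, x2, y2) boxes from any off-the-shelf detector.
    if not detections:
        return random_box_mask(h, w, rng)
    x1, y1, x2, y2 = detections[rng.integers(len(detections))]
    mask = np.zeros((h, w), dtype=bool)
    mask[y1:y2, x1:x2] = True
    return mask

rng = np.random.default_rng(0)
print(object_mask(256, 256, [(40, 60, 180, 200)], rng).mean())  # masked fraction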
Observing that there are no carefully-designed standard
datasets for evaluating text-guided image inpainting, we
propose EditBench , a curated evaluation dataset that cap-
tures a wide variety of language, types of images, and lev-
els of difficulty. Each EditBench example consists of (i)
a masked input image, (ii) an input text prompt, and (iii)
a high-quality output image that can be used as reference
for automatic metrics. To provide insight into the relative
strengths and weaknesses of different models, edit prompts
are categorized along three axes: attributes (material, color,
shape, size, count), objects (common, rare, text rendering),
and scenes (indoor, outdoor, realistic, paintings).
Finally, we perform extensive human evaluations on Ed-
itBench, probing Imagen Editor alongside Stable Diffu-
sion (SD) [33], and DALL-E 2 (DL2) [31]. Human an-
notators are asked to judge a) text-image alignment – how
well the prompt is realized (both overall and assessing the
presence of each object/attribute individually) and b) image
quality – visual quality regardless of the text prompt. In
terms of text-image alignment, Imagen Editor trained with
object-masking is preferred in 68% of comparisons with
its counterpart configuration trained with random masking
(a commonly adopted method [27, 35, 41]). Improvements
are across-the-board in all object and attribute categories.
Imagen Editor is also preferred by human annotators rel-
ative to SD and DL2 (78% and 77% respectively). As
a cohort, models are better at object-rendering than text-
rendering, and handle material/color/size attributes better
than count/shape attributes. Comparing automatic evalua-
tion metrics with human judgments, we conclude that while human evaluation remains indispensable, CLIPScore [14] is the most useful metric for hyperparameter tuning and model selection.
Figure 3. Random masks (left) frequently capture background or intersect object boundaries, defining regions that can be plausibly inpainted just from image context alone. Object masks (right) are harder to inpaint from image context alone, encouraging models to rely more on text inputs during training. (Note: this example image was generated by Imagen and is not in the training data.)
In summary, our main contributions are: (i) Imagen Ed-
itor, a new state-of-the-art diffusion model for high fidelity
text-guided image editing (Sec. 3); (ii) EditBench, a man-
ually curated evaluation benchmark for text-guided image
inpainting that assesses fine-grained details such as object-
attribute combinations (Sec. 4); and (iii) a comprehensive
human evaluation on EditBench, highlighting the relative
strengths and weaknesses of current models, and the use-
fulness of various automated evaluation metrics for text-
guided image editing (Sec. 5).
|
Wang_Dynamic_Graph_Learning_With_Content-Guided_Spatial-Frequency_Relation_Reasoning_for_Deepfake_CVPR_2023
|
Abstract
With the rapid development of face synthesis techniques, there is a pressing need to develop powerful face forgery detection methods due to security concerns. Some existing methods attempt to employ auxiliary frequency-aware information combined with CNN backbones to discover forged clues. Due to inadequate information interaction with image content, the extracted frequency features are spatially irrelevant and struggle to generalize well to increasingly realistic counterfeit types. To address this issue, we propose a Spatial-Frequency Dynamic Graph method to exploit relation-aware features in the spatial and frequency domains via dynamic graph learning. To this end, we introduce three well-designed components: 1) a Content-guided Adaptive Frequency Extraction module to mine content-adaptive forged frequency clues; 2) a Multiple Domains Attention Map Learning module to enrich spatial-frequency contextual features with multiscale attention maps; 3) a Dynamic Graph Spatial-Frequency Feature Fusion Network to explore the high-order relation of spatial and frequency features. Extensive experiments on several benchmarks show that our proposed method consistently exceeds the state of the art by a considerable margin.
1. Introduction
Recent years have witnessed the continuous advances in
deepfake creation [ 11,27,36]. Utilizing booming open-
source tools such as Deepfakes [ 41], novices can readily
manipulate the expression and identity of faces to generate
visually untraceable videos. Face forgery technology has stimulated many applications [12,14,44,46] with wide acceptance. However, these techniques can also be abused with ma-
*Chen Chen is the corresponding author. This work is supported by
the National Key R&D Program of China under Grant 2021YFF0602101, and Alibaba Group through the Alibaba Research Intern Program.
Figure 1. The motivation of our proposed approach: (a) the SFDG framework, (b) correlation heatmap, (c) graph reasoning. Our SFDG method (a) delves into the high-order relationships (b) of the spatial and frequency domains via cross-domain graph reasoning (c).
licious intentions to make pornographic movies, fake news
and political rumors. Under such circumstances, there is an urgent need to develop powerful forgery detection methods.
Early face forgery detection methods [7,30,52] treat this challenge as a vanilla binary classification task. They use off-the-shelf backbones to extract a global face feature and a follow-up binary classifier to distinguish real from counterfeit faces. However, as counterfeits become increasingly realistic, it is intractable for these methods to spot subtle and local forgery traces. One recent study [50] reformulates deepfake detection as a fine-grained classification task and designs a multi-attentional framework to extract local discriminative features from multiple attention maps. It is, however, susceptible to common disturbances, and the generalization of its features remains poorly understood. Some other works resort to specific forgery patterns, such as DCT [24,28], SRM [25] and steganalysis features [45], to encourage better classification. Although promising advances have been achieved by these previous works,
they always extract frequency features with hand-crafted filter banks that are content-irrelevant and thus cannot adapt to the changes of complex scenarios. Moreover, they fuse multi-domain information by direct addition or attentional projection. However, these approaches devote little effort to discovering the high-order relation between spatial and frequency features and integrating them in a reasonable way.
In this paper, we provide seminal insights to exploit adaptive frequency features and delve into the interactions of the spatial and frequency domains. To this end, we propose an adaptive extraction, multiscale enhancement, and graph fusion paradigm for deepfake detection via dynamic graph learning, which excavates content-aware frequency clues and the high-order relation of multiple domains.
Firstly, for adaptive frequency extraction, we tailor a Content-guided Adaptive Frequency Extraction (CAFÉ) module with coarse-grained DCT and fine-grained DCT to capture local frequency cues guided by content-aware masks. Different from PEL [9] and F3Net [28], our customized frequency learning protocol allows more combinations of frequency features, which is indispensable for spotting complicated counterfeit patterns.
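A toy version of content-guided frequency extraction is sketched below: a 2D DCT of the whole image provides a coarse-grained spectrum, while per-patch DCTs modulated by a content-derived gate provide fine-grained, spatially varying frequency cues. The luminance-based gate is only a stand-in for the learned content guidance in CAFÉ, and the patch size is arbitrary.

import numpy as np
from scipy.fft import dctn

def content_guided_dct(image, patch=8):
    # image: (H, W) grayscale array in [0, 1]; H and W divisible by `patch`.
    coarse = dctn(image, norm="ortho")                 # coarse-grained spectrum
    h, w = image.shape
    gate = image.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))
    fine = np.zeros_like(image)
    for i in range(0, h, patch):                       # fine-grained, per-patch DCT
        for j in range(0, w, patch):
            block = dctn(image[i:i + patch, j:j + patch], norm="ortho")
            fine[i:i + patch, j:j + patch] = gate[i // patch, j // patch] * block
    return coarse, fine

img = np.random.rand(32, 32)
coarse, fine = content_guided_dct(img)
print(coarse.shape, fine.shape)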
To further enhance the representation of content-guided frequency features, we introduce a Multiple Domains Attention Map Learning (MDAML) module to generate multiscale spatial and frequency attention maps with high-level semantic features. Specifically, we first propose a Multi-Scale Attention Ensemble (MSAE) module, which produces multi-scale semantic attention maps with large receptive fields and endows the spatial and frequency domains with rich contextual information. Moreover, an Attention Map Refinement Block (AMRB) is included in the MSAE module to refine the obtained semantic attention maps, which is conducive to the following feature learning. In comparison with MADD [50], which merely emphasizes the spatial domain, we further introduce a semantic-relevant frequency attention map that retains rich semantic information for the subsequent spatial-frequency relation-discovery paradigm.
Finally, to fully discover the spatial and frequency relationships, we propose a Dynamic Graph-based Spatial-Frequency Feature Fusion network (DG-SF3Net) to formulate the interaction of the two domains via a graph-based relation discovery protocol. Specifically, DG-SF3Net is composed of two ingredients: Dynamic GCN [16] and Graph Information Interaction layers. The former dynamically constructs a kNN graph and performs graph convolution to reason about high-order relationships in the spatial and frequency domains, while the latter is designed to enhance the mutual relation with several graph-weighted MLP-Mixer [40] layers via channel-wise and node-wise interaction.
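A minimal sketch of one dynamic-graph step in this spirit is given below: a kNN graph is rebuilt from the current node features (spatial and frequency nodes concatenated along the node dimension) and an EdgeConv-style aggregation is applied over neighbor differences. The random weight matrix, the value of k, and the max aggregation are illustrative choices, not the DG-SF3Net implementation.

import torch

def dynamic_knn_graph_conv(nodes, k=4):
    # nodes: (num_nodes, dim) spatial/frequency node features.
    # 1) Rebuild a kNN graph from the current features (the "dynamic" part).
    dist = torch.cdist(nodes, nodes)
    dist.fill_diagonal_(float("inf"))
    idx = dist.topk(k, largest=False).indices                   # (num_nodes, k)
    # 2) EdgeConv-style aggregation over neighbor differences.
    neighbors = nodes[idx]                                       # (num_nodes, k, dim)
    edge_feat = torch.cat([nodes.unsqueeze(1).expand_as(neighbors),
                           neighbors - nodes.unsqueeze(1)], dim=-1)
    w = torch.randn(edge_feat.shape[-1], nodes.shape[-1]) * 0.1  # illustrative weights
    return torch.relu(edge_feat @ w).max(dim=1).values           # (num_nodes, dim)

x = torch.rand(16, 32)                    # e.g., 8 spatial + 8 frequency nodes
print(dynamic_knn_graph_conv(x).shape)    # torch.Size([16, 32])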
Our contributions are threefold:
• From a new perspective, we propose a novel Spatial-Frequency Dynamic Graph (SFDG) framework, which exploits relation-aware spatial-frequency features to promote generalized forgery detection.
• We first harness a CAFÉ module for content-aware frequency feature extraction, and then tailor an MDAML scheme to dig deeper into multiscale spatial-frequency attention maps with a rich contextual understanding of forgeries. Finally, a DG-SF3Net module is proposed to discover the multi-domain relationships with a graph-based relation-reasoning approach.
• Our method achieves state-of-the-art performance on
six benchmark datasets. The cross-dataset experiment
and the perturbation analysis show the robustness and
generalization ability of the proposed SFDG method.
|
Wang_NeMo_Learning_3D_Neural_Motion_Fields_From_Multiple_Video_Instances_CVPR_2023
|
Abstract
The task of reconstructing 3D human motion has wide-
ranging applications. The gold standard Motion capture
(MoCap) systems are accurate but inaccessible to the general
public due to their cost, hardware, and space constraints. In
contrast, monocular human mesh recovery (HMR) methods
are much more accessible than MoCap as they take single-
view videos as inputs. Replacing the multi-view MoCap
systems with a monocular HMR method would break the cur-
rent barriers to collecting accurate 3D motion thus making
exciting applications like motion analysis and motion-driven
animation accessible to the general public. However, the
performance of existing HMR methods degrades when the
video contains challenging and dynamic motion that is not
in existing MoCap datasets used for training. This reduces its appeal, as dynamic motion is frequently the target in 3D
motion recovery in the aforementioned applications. Our
study aims to bridge the gap between monocular HMR and
multi-view MoCap systems by leveraging information shared
across multiple video instances of the same action. We in-
troduce the Neural Motion (NeMo) field. It is optimized to
represent the underlying 3D motions across a set of videos
of the same action. Empirically, we show that NeMo can re-
cover 3D motion in sports using videos from the Penn Action
dataset, where NeMo outperforms existing HMR methods in
terms of 2D keypoint detection. To further validate NeMo
using 3D metrics, we collected a small MoCap dataset mim-
icking actions in Penn Action, and show that NeMo achieves
better 3D reconstruction compared to various baselines.
|
1. Introduction
Reconstructing 3D human motion has wide-ranging applications, from the production of animation movies like Avatar [4] and human motion synthesis [17,44,45] to the biomechanical analysis of motion [3,10,16,27,39,40]. Tra-
ditional marker-based MoCap systems work by recording
2D infrared images of light reflected by markers placed on
the human subject. Placing, calibrating, and labeling the
markers are tedious, and the markers can potentially restrict
the range-of-motion of the subject. Alternatively, marker-
lessMoCap systems work with RGB videos and use com-
puter vision techniques to extract the 3D human pose, which
eliminates the need of placing physical markers on human
subjects. Given a set of synchronized video captures from
multiple views, one can run 2D keypoint detection methods
like OpenPose [ 5] and perform triangulation to recover the
3D pose [ 35]. Markerless MoCap approaches, however, are
still restricted by the assumption of having synchronized
multi-view video capture of the same instance of the motion.
Meanwhile, there has been a rapid development of 3D
monocular human pose estimation (HPE) and human mesh
recovery (HMR) methods. These methods aim to recover
the 3D human motion from a single-view video capture. The
accessibility of monocular HMR makes it an attractive alter-
native to existing MoCap systems, especially in situations
where setting up additional hardware is challenging like
when the human subject is on a bike [13] or underwater [9].
The accuracy of monocular methods is generally lower be-
cause single-view input only provides partial information
about the underlying 3D motion. The model needs to over-
come complications like depth ambiguity and self-occlusion.
Like other machine learning systems, HMR models over-
come these difficulties by learning from paired training data.
Because of the difficulty and cost of collecting MoCap data,
paired datasets of videos and 3D motions are scarce. Ad-
ditionally, publicly available MoCap datasets are often
restricted to simple everyday motions like Human3.6M [ 19]
and AMASS [ 12]. As a result, existing HMR methods gen-
eralize less well in domains with less available MoCap data,
such as motions in sports, a dominant application domain of
3D human motion recovery [ 8,15,33,34]. As an example,
see Figure 1 for where existing HMR methods struggled to
capture the dynamic range of athletic motions.
We bridge the gap between multi-view MoCap and
monocular HMR by assuming there is shared and comple-
mentary information in multiple instances of video captures
of the same action, similar to what is in the (same instance)
multi-view setup. These multiple video instances can be
different repetitions of the same action from the same per-
son, or even from different people executing the same action
in different settings (See the left side of Figure 2 for illus-
tration). For application domains like sports, a key feature
of its motions is that they are well-defined and structured. For example, an athlete is often instructed to practice
by performing many repetitions of the same action and the
variation across the repetitions is oftentimes slight. Even
when we look at the executions of the same action from
different athletes, these different motion “instances” often
contain shared information. In this work, we aim to better re-
construct the underlying 3D motion from videos of multiple
instances of the same action in sports.
We parametrize each motion as a neural network. It takes
as input a scalar phase value indicating the phase/progress
of the action and an instance code vector for variation and
outputs human joint angles, root orientation, and translation.
Since the different sequences are not synchronized, and the
actions might progress at slightly different rates (e.g., a faster
versus slower pitch), we use an additional learned phase net-
work for synchronization. The neural network is shared
across all the instances while other components including
the instance codes and phase networks are instance-specific.
All the components are learned jointly. We optimize using
the 2D reprojection loss with respect to the 2D joint key-
points of the input videos, together with a 3D prior loss w.r.t. initial
predictions from HMR methods. We
call the resulting neural network a Neural Motion ( NeMo )
field. NeMo can also be seen as a new test-time optimization
scheme for better domain adaptation of 3D HMR similar
in spirit to SMPLify [ 2,36]. A key difference is that we
leverage shared 3D information at the group level of many
instances to learn a canonical motion and its variations. To
summarize, our contributions are:
•We propose the neural motion (NeMo) field and an
optimization framework that improves 3D HMR results
by jointly reasoning about different video instances of
the same action.
•We optimize NeMo fields on sports actions selected
from the Penn Action dataset [ 49]. Since the Penn
Action dataset only has 2D keypoint annotations, we
collected a small MoCap dataset with 3D groundtruth
where the actor was instructed to mimic these motions,
which we will refer to as our NeMo-MoCap dataset. We
show improved 3D motion reconstruction compared to
various baseline HMR methods using both 3D metrics,
and also improved results on the Penn Action dataset
using 2D metrics.
•The NeMo field also recovers global root transla-
tion. Compared to the recently proposed global HMR
method, recovered global motion from NeMo is sub-
stantially more accurate on our NeMo-MoCap dataset.
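To make the parametrization described above concrete, the following is a minimal, purely illustrative PyTorch sketch of a NeMo-style motion field: a shared MLP maps a phase value and a per-instance latent code to joint angles, root orientation, and root translation, while tiny per-instance phase networks warp raw clip time into a synchronized phase. All module names, layer sizes, and output conventions are assumptions for illustration, not the authors' implementation; the 2D reprojection and 3D prior losses are omitted.

import torch
import torch.nn as nn

class NeMoField(nn.Module):
    """Minimal sketch of a NeMo-style neural motion field (illustrative only).

    A single MLP is shared across all video instances; each instance owns a
    latent code and a tiny phase network that maps raw time t in [0, 1] to a
    synchronized phase value.
    """

    def __init__(self, num_instances, code_dim=32, hidden=256, num_joints=24):
        super().__init__()
        self.codes = nn.Embedding(num_instances, code_dim)  # per-instance codes
        # per-instance phase networks for temporal synchronization
        self.phase_nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
             for _ in range(num_instances)]
        )
        # shared motion MLP: (phase, code) -> joint angles + root orientation + translation
        out_dim = num_joints * 3 + 3 + 3
        self.mlp = nn.Sequential(
            nn.Linear(1 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )
        self.num_joints = num_joints

    def forward(self, t, instance_id):
        # t: (B, 1) raw time within each clip; instance_id: (B,) long tensor
        phase = torch.cat([self.phase_nets[i](t[b:b + 1])
                           for b, i in enumerate(instance_id.tolist())], dim=0)
        code = self.codes(instance_id)
        out = self.mlp(torch.cat([phase, code], dim=-1))
        joints = out[:, : self.num_joints * 3]                                   # axis-angle joint rotations
        root_orient = out[:, self.num_joints * 3: self.num_joints * 3 + 3]
        root_transl = out[:, -3:]
        return joints, root_orient, root_transl

# usage: one forward pass for two frames from two different instances
field = NeMoField(num_instances=4)
t = torch.tensor([[0.25], [0.70]])
ids = torch.tensor([0, 3])
joints, orient, transl = field(t, ids)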
|
Wimbauer_Behind_the_Scenes_Density_Fields_for_Single_View_Reconstruction_CVPR_2023
|
Abstract
Inferring a meaningful geometric scene representation
from a single image is a fundamental problem in computer
vision. Approaches based on traditional depth map predic-
tion can only reason about areas that are visible in the im-
age. Currently, neural radiance fields (NeRFs) can capture
true 3D including color, but are too complex to be gener-
ated from a single image. As an alternative, we propose to
predict an implicit density field from a single image. It maps
every location in the frustum of the image to volumetric den-
sity. By directly sampling color from the available views
instead of storing color in the density field, our scene rep-
resentation becomes significantly less complex compared to
NeRFs, and a neural network can predict it in a single for-
ward pass. The network is trained through self-supervision
from only video data. Our formulation allows volume ren-
dering to perform both depth prediction and novel view syn-
thesis. Through experiments, we show that our method is
able to predict meaningful geometry for regions that are oc-
cluded in the input image. Additionally, we demonstrate the
potential of our approach on three datasets for depth pre-
diction and novel-view synthesis.
|
1. Introduction
The ability to infer information about the geometric
structure of a scene from a single image is of high impor-
tance for a wide range of applications from robotics to augmented reality. While traditional computer vision mainly
focused on reconstruction from multiple images, in the deep
learning age the challenge of inferring a 3D scene from
merely a single image has received renewed attention.
Traditionally, this problem has been formulated as the
task of predicting per-pixel depth values ( i.e. depth maps).
One of the most influential lines of work showed that it is
possible to train neural networks for accurate single-image
depth prediction in a self-supervised way only from video
sequences [14–16, 29, 44, 51, 58, 59, 61]. Despite these ad-
vances, depth prediction methods are not modeling the true
3D of the scene: they model only a single depth value per
pixel. As a result, it is not directly possible to obtain depth
values from views other than the input view without con-
sidering interpolation and occlusion. Further, the predicted
geometric representation of the scenes does not allow rea-
soning about areas that lie behind another object in the im-
age ( e.g. a house behind a tree), inhibiting the applicability
of monocular depth estimation to 3D understanding.
Due to the recent advance of 3D neural fields, the related
task of novel view synthesis has also seen a lot of progress.
Instead of directly reasoning about the scene geometry, the
goal here is to infer a representation that allows rendering
views of the scene from novel viewpoints. While geomet-
ric properties can often be inferred from the representation,
they are usually only a side product and lack visual quality.
Even though neural radiance field [32] based methods
achieve impressive results, they require many training im-
ages per scene and do not generalize to new scenes. To en-
able generalization, efforts have been made to condition the
neural network on global or local scene features. However,
this has only been shown to work well on simple scenes,
for example, scenes containing an object from a single cat-
egory [43, 57]. Nevertheless, obtaining a neural radiance
field from a single image has not been achieved before.
In this work, we tackle the problem of inferring a ge-
ometric representation from a single image by generaliz-
ing the depth prediction formulation to a continuous den-
sity field. Concretely, our architecture contains an encoder-
decoder network that predicts a dense feature map from the
input image. This feature map locally conditions a density
field inside the camera frustum, which can be evaluated at
any spatial point through a multi-layer perceptron (MLP).
The MLP is fed with the coordinates of the point and the
feature sampled from the predicted feature map by repro-
jecting points into the camera view. To train our method,
we rely on simple image reconstruction losses.
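As a rough illustration of this architecture, the sketch below predicts per-point density from a single image: an encoder (a stand-in for the paper's encoder-decoder) produces a feature map, each query point is reprojected into the input view to bilinearly sample its feature, and a deliberately small MLP maps the sampled feature plus coordinates to a non-negative density. The camera convention, layer sizes, and names are assumptions, not the released model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DensityFieldPredictor(nn.Module):
    """Sketch of a single-image density field (illustrative, not the authors' code)."""

    def __init__(self, feat_dim=64, hidden=64):
        super().__init__()
        # stand-in encoder; a real model would use a deep encoder-decoder
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, padding=1),
        )
        # deliberately small MLP: sampled feature + coordinates -> density
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # non-negative volumetric density
        )

    def forward(self, image, points, K):
        # image: (1, 3, H, W); points: (N, 3) in camera coordinates; K: (3, 3) intrinsics
        feat_map = self.encoder(image)
        uv = (K @ points.T).T                              # project points into the input view
        uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-6)
        H, W = image.shape[-2:]
        grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                            2 * uv[:, 1] / (H - 1) - 1], dim=-1)  # normalize to [-1, 1]
        feat = F.grid_sample(feat_map, grid.view(1, -1, 1, 2),
                             align_corners=True).squeeze(-1).squeeze(0).T  # (N, C)
        return self.mlp(torch.cat([feat, points], dim=-1)).squeeze(-1)     # (N,) densities

# usage on random data
model = DensityFieldPredictor()
img = torch.rand(1, 3, 128, 384)
pts = torch.rand(1024, 3) * torch.tensor([10.0, 2.0, 30.0])
K = torch.tensor([[200.0, 0.0, 192.0], [0.0, 200.0, 64.0], [0.0, 0.0, 1.0]])
sigma = model(img, pts, K)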
Our method achieves robust generalization and accu-
rate geometry prediction even in very challenging outdoor
scenes through three key novelties:
1. Color sampling. When performing volume render-
ing, we sample color values directly from the input frames
through reprojection instead of using the MLP to predict
color values. We find that only predicting density drasti-
cally reduces the complexity of the function the network
has to learn. Further, it forces the model to adhere to the
multi-view consistency assumption during training, leading
to more accurate geometry predictions.
2. Shifting capacity to the feature extractor. In many
previous works, an encoder extracts image features to con-
dition local appearance, while a high-capacity MLP is ex-
pected to generalize to multiple scenes. However, on com-
plex and diverse datasets, the training signal is too noisy for
the MLP to learn meaningful priors. To enable robust train-
ing, we significantly reduce the capacity of the MLP and
use a more powerful encoder-decoder that can capture the
entire scene in the extracted features. The MLP then only
evaluates those features locally.
3.Behind the Scenes loss formulation. The continuous
nature of density fields and color sampling allow us to re-
construct a novel view from the colors of any frame, not just
the input frame. By applying a reconstruction loss between
two frames that both observe areas occluded in the input
frame, we train our model to predict meaningful geometry
everywhere in the camera frustum, not just the visible areas.
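To illustrate key novelty 1, the following hedged sketch performs volume rendering in which the per-sample color is gathered from a source frame by reprojection instead of being predicted by the MLP; it can be combined with density predictions such as those from the sketch above. Coordinate conventions and variable names are assumptions made for the example, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def render_ray_colors(sigmas, sample_xyz, src_image, src_K, deltas):
    """Volume rendering where color is sampled from a source frame, not predicted.

    sigmas: (R, S) densities for R rays with S samples each
    sample_xyz: (R, S, 3) sample positions in the source camera's coordinate frame
    src_image: (1, 3, H, W) frame the colors are gathered from
    src_K: (3, 3) intrinsics of that frame; deltas: (R, S) distances between samples
    """
    R, S, _ = sample_xyz.shape
    uv = (src_K @ sample_xyz.reshape(-1, 3).T).T
    uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-6)
    H, W = src_image.shape[-2:]
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).view(1, R, S, 2)
    colors = F.grid_sample(src_image, grid, align_corners=True)      # (1, 3, R, S)
    colors = colors.squeeze(0).permute(1, 2, 0)                       # (R, S, 3)

    # standard alpha compositing with the predicted densities
    alpha = 1.0 - torch.exp(-sigmas * deltas)                         # (R, S)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans                                           # (R, S)
    rgb = (weights.unsqueeze(-1) * colors).sum(dim=1)                 # (R, 3) rendered color
    depth = (weights * sample_xyz[..., 2]).sum(dim=1)                 # expected ray depth
    return rgb, depth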
We demonstrate the potential of our new approach in
a number of experiments on different datasets regarding
the aspects of capturing true 3D, depth estimation, and
novel view synthesis. On KITTI [12] and KITTI-360
[26], we show both qualitatively and quantitatively that
our model can indeed capture true 3D, and that our model achieves state-of-the-art depth estimation accuracy. On
RealEstate10K [45] and KITTI, we achieve competitive
novel view synthesis results, even though our method is
purely geometry-based. Further, we perform thorough ab-
lation studies to highlight the impact of our design choices.
|
Xu_Learning_To_Generate_Image_Embeddings_With_User-Level_Differential_Privacy_CVPR_2023
|
Abstract
Small on-device models have been successfully trained
with user-level differential privacy (DP) for next word pre-
diction and image classification tasks in the past. However,
existing methods can fail when directly applied to learn em-
bedding models using supervised training data with a large
class space. To achieve user-level DP for large image-
to-embedding feature extractors, we propose DP-FedEmb,
a variant of federated learning algorithms with per-user
sensitivity control and noise addition, to train from user-
partitioned data centralized in the datacenter. DP-FedEmb
combines virtual clients, partial aggregation, private local
fine-tuning, and public pretraining to achieve strong pri-
vacy utility trade-offs. We apply DP-FedEmb to train im-
age embedding models for faces, landmarks and natural
species, and demonstrate its superior utility under same
privacy budget on benchmark datasets DigiFace, EMNIST,
GLD and iNaturalist. We further illustrate it is possible to
achieve strong user-level DP guarantees of ϵ < 2 while con-
trolling the utility drop within 5%, when millions of users
can participate in training.
|
1. Introduction
Representation learning, by training deep neural net-
works as feature extractors to generate compact embedding
vectors from images, is a fundamental component in com-
puter vision. Metric learning, a kind of representation learn-
ing using supervised data, has been widely applied to im-
age recognition, clustering, and retrieval [61, 75, 77]. Ma-
chine learning models have the capacity to memorize train-
ing data [10, 11], leading to privacy risks when the models
are deployed. Privacy risk can also be audited by member-
ship inference attacks [9, 63], i.e. detecting whether cer-
tain data was used to train a model and potentially expos-
ing users’ usage behaviors. Defending against such risks is
a critical responsibility when training on privacy-sensitive
data.
*The first two authors contributed equally. Correspondence to Zheng Xu ([email protected]).
Differential Privacy (DP) [23] is an extensively used
quantifiable measurement of privacy risk, now generally ac-
cepted as a standard notion of privacy in both industry and
government [5, 18, 50, 70]. Applied to machine learning,
DP requires a training procedure with explicit randomness,
and guarantees that the distribution over output models is
quantifiably similar given a certain scope of change to the
training dataset. A DP guarantee with respect to the change
of a single arbitrary training example is known as example-
level DP , which provides plausible deniability (in the binary
hypothesis testing sense of [38]) that any single example
(e.g., image) occurred in the training dataset. If we instead
consider how the distribution of output models changes if
the data (including even the number of examples) from any
single user change arbitrarily, we have user-level DP [22].
This ensures model training is quantifiably insensitive to all
of the data from any one user, and hence it is impossible
to tell if a user has participated in training with high con-
fidence. This guarantee can be exponentially stronger than
example-level DP if one user may contribute many exam-
ples to training.
Recently, DP-SGD [1] (essentially, SGD with the ad-
ditional steps of clipping each individual gradient to have
a maximum norm, and adding correspondingly calibrated
noise) has been used to achieve example-level DP for rel-
atively large models in language modeling and image clas-
sification tasks [4, 16, 42, 45, 81], often utilizing techniques
like large batch training and pretraining on public data. DP-
SGD can be modified to guarantee user-level DP, which
is often combined with federated learning algorithms and
called DP-FedAvg [49]. User-level DP has only been stud-
ied for small on-device models that have less than 10 mil-
lion parameters [36, 49, 57].
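For readers unfamiliar with the mechanism referred to above, here is a generic, minimal sketch of one DP-SGD step (per-example gradient clipping followed by calibrated Gaussian noise), providing example-level DP for a toy model. It is not the paper's DP-FedEmb algorithm, which instead operates on user-partitioned data with virtual clients, partial aggregation, and local fine-tuning; all hyper-parameters below are arbitrary.

import torch
import torch.nn as nn

def dp_sgd_step(model, loss_fn, batch_x, batch_y, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD step: clip each per-example gradient, average, add Gaussian noise."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(batch_x, batch_y):                      # per-example gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-6)).clamp(max=1.0)  # clip to a maximum L2 norm
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    n = len(batch_x)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_mult * clip_norm  # correspondingly calibrated noise
            p.add_(-(lr / n) * (s + noise))

# usage with a toy linear model
model = nn.Linear(8, 2)
x = torch.randn(16, 8)
y = torch.randint(0, 2, (16,))
dp_sgd_step(model, nn.CrossEntropyLoss(), x, y)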
We consider user-level DP for relatively large models in
representation learning with supervised data. In our set-
ting, similar to federated learning (FL), the data are user-
partitioned; but in contrast to decentralized FL, we are pri-
marily motivated
|
Xu_Learning_Dynamic_Style_Kernels_for_Artistic_Style_Transfer_CVPR_2023
|
Abstract
Arbitrary style transfer has been demonstrated to be ef-
ficient in artistic image generation. Previous methods ei-
ther globally modulate the content feature ignoring local
details, or overly focus on the local structure details lead-
ing to style leakage. In contrast to the literature, we propose
a new scheme “style kernel” that learns spatially adaptive
kernels for per-pixel stylization, where the convolutional
kernels are dynamically generated from the global style-
content aligned feature and then the learned kernels are
applied to modulate the content feature at each spatial posi-
tion. This new scheme allows flexible both global and local
interactions between the content and style features such that
the wanted styles can be easily transferred to the content im-
age while at the same time the content structure can be eas-
ily preserved. To further enhance the flexibility of our style
transfer method, we propose a Style Alignment Encoding
(SAE) module complemented with a Content-based Gating
Modulation (CGM) module for learning the dynamic style
kernels in focusing regions. Extensive experiments strongly
demonstrate that our proposed method outperforms state-
of-the-art methods and exhibits superior performance in
terms of visual quality and efficiency.
|
1. Introduction
Artistic style transfer [48] refers to a hot computer vi-
sion technology that allows us to recompose the content of
an image in the style of an artistic work. Figure 1 shows
several vivid examples. We might have ever imagined what
a photo might look like if it were painted by a famous artist
like Pablo Picasso or Van Gogh. Now style transfer is the
computer vision technique that turns this into a reality. It
has great potential values in various real-world applications
and therefore attracts a lot of researchers to constantly put
efforts to make progress towards both quality and efficiency.
Most existing style transfer works [6, 15, 18, 26, 33]
either globally modulate the content feature ignoring local
details or overly focus on the local structure details leading
to style leakage. In particular, [15, 18] seek to match global
statistics between content and style images, resulting in in-
consistent stylizations that either remain large parts of the
content unchanged or contain local content with distorted
style patterns. SANet [33] and AdaAttN [31] improve the
content similarity by learning attention maps that match the
semantics between the style and stylized images, but these
methods tend to distort object instances when improper
style patterns are involved. Recently, IEC [5] adopts con-
trastive learning to pull close both the content and style rep-
resentations between input and output images. MAST [16]
seeks to balance the style and content via manifold align-
ment. Although both IEC and MAST have achieved some
success in most cases, they are still far from satisfactory to
well balance two important requirements, i.e., style consis-
tency and structure similarity.
In this paper, we develop a novel style transfer frame-
work in a manner of encoder-decoder structure, as illus-
trated in Figure 2, which contains two important modules:
(1) a Style Alignment Encoding (SAE) module enhanced by
Content-based Gating Modulation (CGM), which is used to
generate content-style aligned features, and (2) a Style Ker-
nel Generation (SKG) module used to transform the output
features of SAE to convolutional kernels. In such a frame-
work, we propose a new style transfer scheme, “style ker-
nel”, which treats the features of the style images as dy-
namic convolutional kernels and then transfer the style in-
formation to the content image by convolving the content
image by the learned dynamic style kernels. This allows
fine-grained local interactions between the style and content
features, the flexibility of which makes both style transfer-
ring and content structure preserving much easier.
To enforce the global correlation between the content
and style features, we simulate the self-attention mecha-
nism of Transformer [41] in the SAE module. This treats
the content and style features as the queries and keys of the
self-attention operator respectively and computes a large at-
tention map, the elements of which measure for each local
content context the similarity to each local style context.
With the attention map, we can aggregate for each pixel of
the content image all the style features in a weighted man-
ner to obtain content-style aligned features. We also design
a Content-based Gating Modulation (CGM) operator that
further thresholds the attention map and zeros the places
where similarities are too small, seeking an adaptive num-
ber of correlated neighbors as a focusing region for each
query point. Then, we use another set of convolutions that
constitute the SKG module to further transform the content-
style aligned features, and view (reshape) the output of SKG
as dynamic style kernels. In order to improve the efficiency
of the dynamic convolutions, SKG predicts separable local
filters (two 1D filters and a bias) instead of a spatial 2D filter
that has more parameters. The learned style kernels already
integrate the information of the given content and style im-
ages. We further apply the learned dynamic style kernels to
the content image feature for transferring the target style to
the content image.
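The following is a simplified, illustrative sketch of the "style kernel" idea as described above: content features attend to style features to obtain a content-style aligned feature, a small head predicts two 1D filters and a bias per spatial position, and these dynamic separable filters are applied to the content feature. The CGM thresholding and the exact SAE/SKG designs are omitted, and all layer shapes are assumptions rather than the authors' architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleKernelSketch(nn.Module):
    """Illustrative per-pixel dynamic 'style kernel' module (not the official SAE/CGM/SKG)."""

    def __init__(self, c=64, k=3):
        super().__init__()
        self.k = k
        self.q = nn.Conv2d(c, c, 1)
        self.kk = nn.Conv2d(c, c, 1)
        self.v = nn.Conv2d(c, c, 1)
        # predicts two 1D filters (k + k taps) and a bias for every spatial position
        self.kernel_head = nn.Conv2d(c, 2 * k + 1, 1)

    def forward(self, content, style):
        B, C, H, W = content.shape
        # cross-attention: content queries attend to style keys/values (global alignment)
        q = self.q(content).flatten(2).transpose(1, 2)           # (B, HW, C)
        k = self.kk(style).flatten(2)                             # (B, C, H'W')
        v = self.v(style).flatten(2).transpose(1, 2)              # (B, H'W', C)
        attn = torch.softmax(q @ k / C ** 0.5, dim=-1)
        aligned = (attn @ v).transpose(1, 2).reshape(B, C, H, W)  # content-style aligned feature

        # per-pixel separable filters: a horizontal 1D filter, a vertical 1D filter, a bias
        params = self.kernel_head(aligned)
        kh = params[:, : self.k].reshape(B, 1, self.k, H * W)
        kv = params[:, self.k: 2 * self.k].reshape(B, 1, self.k, H * W)
        bias = params[:, -1:]

        # apply the dynamic filters to the content feature (same kernel for all channels)
        pad = self.k // 2
        cols = F.unfold(content, (1, self.k), padding=(0, pad)).view(B, C, self.k, H * W)
        out = (cols * kh).sum(2).view(B, C, H, W)                 # horizontal pass
        cols = F.unfold(out, (self.k, 1), padding=(pad, 0)).view(B, C, self.k, H * W)
        out = (cols * kv).sum(2).view(B, C, H, W)                 # vertical pass
        return out + bias

# usage
m = StyleKernelSketch()
stylized = m(torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32))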
We shall emphasize that unlike “style code” in
AdaIN [15], DRB-GAN [48], and AdaConv [2] modeled as
the dynamic parameters ( e.g. mean, variance, convolution
kernel) which are shared over all spatial positions globally
without distinguishing the semantic regions, the learned dy-
namic kernels via our “style kernel” are point-wisely mod-
eled upon the globally style-content aligned feature, and are therefore able to make full use of the point-wise semantic
structure correlation to modulate content feature for better
artistic style transfer. Our learned dynamic style kernels
are good at transferring the globally aggregated and locally
semantic-aligned style features to local regions. This en-
sures our style transfer model that works well on consis-
tent stylization with well preserved structure, overcoming
the shortages of existing style transfer methods which either
globally modulate the content feature ignoring local details
(e.g. AdaIN, WCT [28]) or overly focus on the local struc-
ture details leading to style leakage ( e.g. AdaAttN [31] and
MAST [16]).
The main contributions of this work are threefold:
• We are the first to propose the “style kernel” scheme
for artistic style transfer, which converts the globally
style-content aligned features into point-wise dynamic
convolutional kernels that convolve the content image
features for style transfer.
• We design a novel architecture, i.e. SAE and CGM, to
generate the dynamic style kernels by employing the
correlation between the content and style image adap-
tively.
• Extensive experiments demonstrate that our method
produces visually plausible stylization results with
fine-grained style details and coherence content with
the input content images.
|
Wang_Learning_Transformation-Predictive_Representations_for_Detection_and_Description_of_Local_Features_CVPR_2023
|
Abstract
The task of key-points detection and description is to es-
timate the stable location and discriminative representa-
tion of local features, which is a fundamental task in vi-
sual applications. However, either the rough hard positive
or negative labels generated from one-to-one correspon-
dences among images may bring indistinguishable samples,
like false positives or negatives, which acts as inconsis-
tent supervision. Such resultant false samples mixed with
hard samples prevent neural networks from learning de-
scriptions for more accurate matching. To tackle this chal-
lenge, we propose to learn the transformation-predictive
representations with self-supervised contrastive learning.
We maximize the similarity between corresponding views of
the same 3D point (landmark) by using none of the neg-
ative sample pairs and avoiding collapsing solutions. Fur-
thermore, we adopt self-supervised generation learning and
curriculum learning to soften the hard positive labels into
soft continuous targets. The aggressively updated soft la-
bels contribute to overcoming the training bottleneck (de-
rived from the label noise of false positives) and facili-
tating the model training under a stronger transformation
paradigm. Our self-supervised training pipeline greatly de-
creases the computation load and memory usage, and out-
performs the state of the art on standard image matching bench-
marks by noticeable margins, demonstrating excellent gen-
eralization capability on multiple downstream tasks.
|
1. Introduction
Local visual descriptors are fundamental to various com-
puter vision applications such as camera calibration [37],
3D reconstruction[19], visual simultaneous localization and
mapping (VSLAM) [33], and image retrieval [38]. The
descriptors indicate the representation vector of the patch
around the key-points and can be used to generate dense
*Corresponding authorcorrespondences between images.
The descriptors are highly dependent on effective rep-
resentations, which have typically been trained with the Siamese
architecture and contrastive learning loss [12, 29, 39]. The
core idea of contrastive learning is “learn to compare”:
given an anchor key-point, distinguish a similar (or pos-
itive) sample from a set of dissimilar (or negative ) sam-
ples, in a projected embedding space. The induced repre-
sentations present two key properties: 1) alignment of fea-
tures from positive pairs, 2) and uniformity of representation
on the hypersphere [51]. Negative samples are thus intro-
duced to keep the uniformity property and avoid model col-
lapse, i.e., preventing the convergence to one constant solu-
tion [13]. Therefore, various methods have been proposed
to mine hard negatives[6, 21, 55]. However, these methods
raise the computational load and memory resources usage
heavily [17]. More importantly, within the hard negatives,
some samples are labeled as negatives, but actually have
the same semantics as the anchor (i.e., false negatives).
These false negatives act as inconsistent supervision and
prevent the learning-based models from achieving higher
accuracy [4]. More concretely, false negatives often correspond to
instances located on repetitive textures in structured
scenes, as shown in Figure 1. It is challenging to distinguish
such false negatives from true negatives [39].
The recent active self-supervised learning methods[5, 8,
9, 15] motivate us to rethink the effectiveness of negatives
in descriptor learning. We propose to learn transfor-
mation-predictive representations (TPR) for visual descrip-
tors using only positives while avoiding collapsing solutions.
Furthermore, using no negatives greatly improves the
training efficiency by reducing the scale of the similarity
matrix from O(n^2) to O(n), and reduces the computation
load and memory usage.
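Since the text does not spell out the anti-collapse mechanism at this point, the sketch below shows one standard negative-free formulation (a predictor head with a stop-gradient on the target branch, in the spirit of BYOL/SimSiam) in which only the n corresponding pairs are compared, i.e., O(n) rather than O(n^2) similarities. It is an assumption-laden illustration, not necessarily the paper's exact loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PositiveOnlyDescriptorLoss(nn.Module):
    """Negative-free similarity loss over corresponding descriptors (illustrative).

    Only the n positive pairs are compared, and a predictor head with a
    stop-gradient on the target branch is one common way to avoid the
    collapsed constant solution when no negatives are used.
    """

    def __init__(self, dim=128, hidden=256):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, desc_a, desc_b):
        # desc_a, desc_b: (n, dim) descriptors of corresponding key-points in two views
        p = F.normalize(self.predictor(desc_a), dim=-1)
        z = F.normalize(desc_b.detach(), dim=-1)        # stop-gradient target
        return 2.0 - 2.0 * (p * z).sum(dim=-1).mean()   # mean cosine distance over positives

# usage
loss_fn = PositiveOnlyDescriptorLoss()
a, b = torch.randn(512, 128), torch.randn(512, 128)
loss = loss_fn(a, b)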
To further improve the generalization performance of
descriptors, hard positives (i.e., corresponding pairs with
large-scale transformation) are encouraged as training data
to expose novel patterns. However, recent experiments
have shown that contrastive learning directly applied to more strongly
Figure 1. Illustrative schematic of different negative samples. Easy negatives (green) are easy to distinguish from the query (blue) and are not sufficient for good performance. Hard negatives (orange) are important for better performance and are still semantically dissimilar from the query. False negatives (red) are practically impossible to distinguish and are semantically identical with the query, which is harmful to the performance. The right panel visualizes the embeddings of the different samples (t-SNE) in latent space.
transformed images fails to learn representations effec-
tively [52]. In addition, current contrastive learning meth-
ods label all positives with different transformation strength
as a coarse “1”, which prevents learning refined representa-
tions. We propose to learn transformation-predictive repre-
sentation with soft positive labels from (0,1] instead of “1”
to supervise the learning of local descriptors. Furthermore,
we propose a self-supervised curriculum learning module to
generate controllable stronger positives with gradually re-
fined soft supervision as the network training iterates.
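As a hypothetical illustration of such soft supervision, the function below maps a transformation strength in [0, 1] to a soft positive target in (0, 1] and anneals how soft the targets may become as the curriculum progresses; the mapping and the schedule are invented for the example and are not the paper's formulas.

import torch

def soft_positive_targets(transform_strength, epoch, total_epochs, floor=0.5):
    """Map transformation strength to soft labels in (0, 1] (illustrative assumption).

    Early in training the targets stay close to 1 (easy supervision); as the
    curriculum progresses, strongly transformed pairs receive lower targets so
    the network is not forced to score them as perfect matches.
    """
    progress = min(epoch / max(total_epochs - 1, 1), 1.0)   # curriculum schedule
    softness = progress * (1.0 - floor)                      # how much labels may drop
    targets = 1.0 - softness * transform_strength.clamp(0.0, 1.0)
    return targets.clamp(min=floor)

# usage: stronger transformations get softer targets late in training
strength = torch.tensor([0.1, 0.5, 0.9])
print(soft_positive_targets(strength, epoch=0, total_epochs=100))   # ~[1.00, 1.00, 1.00]
print(soft_positive_targets(strength, epoch=99, total_epochs=100))  # ~[0.95, 0.75, 0.55]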
Finally, our TPR with soft labels is trained on natural
images in a fully self-supervised paradigm. Different from
previous methods trained on datasets with SfM or cam-
era pose information [20, 29, 39, 49], our training datasets
are generally easy to collect and scale up since there is no
extra annotation requirement to capture dense correspon-
dences. Experiments show that our self-supervised method
outperforms the state-of-the-art on standard image match-
ing benchmarks by noticeable margins and shows excel-
lent generalization capability on multiple downstream tasks
(e.g., visual odometry, and localization).
Our contributions to this work are as follows: i) we pro-
pose to learn transformation-predictive representations for
joint local feature learning, using none of the negative sam-
ple pairs and avoiding collapsing solutions. ii) We adopt
self-supervised generation learning and curriculum learn-
ing to soften the hard positives into continuous soft labels,
which can alleviate the false positives and train the model
with stronger transformations. iii) The overall pipeline is
trained with the self-supervised paradigm, and the training
data are computed from random affine transformation and
augmentation on natural images.
|
Vaksman_Patch-Craft_Self-Supervised_Training_for_Correlated_Image_Denoising_CVPR_2023
|
Abstract
Supervised neural networks are known to achieve excel-
lent results in various image restoration tasks. However,
such training requires datasets composed of pairs of cor-
rupted images and their corresponding ground truth tar-
gets. Unfortunately, such data is not available in many ap-
plications. For the task of image denoising in which the
noise statistics is unknown, several self-supervised training
methods have been proposed for overcoming this difficulty.
Some of these require knowledge of the noise model, while
others assume that the contaminating noise is uncorrelated,
both assumptions are too limiting for many practical needs.
This work proposes a novel self-supervised training tech-
nique suitable for the removal of unknown correlated noise.
The proposed approach neither requires knowledge of the
noise model nor access to ground truth targets. The in-
put to our algorithm consists of easily captured bursts of
noisy shots. Our algorithm constructs artificial patch-craft
images from these bursts by patch matching and stitching,
and the obtained crafted images are used as targets for the
training. Our method does not require registration of the
images within the burst. We evaluate the proposed frame-
work through extensive experiments with synthetic and real
image noise.
|
1. Introduction
Supervised neural networks have proven themselves
powerful, achieving impressive results in solving image
restoration problems (e.g., [14–16, 18, 33, 41–43]). In the
commonly deployed supervised training for such tasks, one
needs a dataset consisting of pairs of corrupted and ground
truth images. The degraded images are fed to the network
input, while the ground truth counterparts are used as guid-
ing targets. When the degradation model is known and easy
to implement, one can construct such a dataset by applying
This research was partially supported by the Israel Science Foundation
(ISF) under Grant 335/18 and the Council For Higher Education - Planning
& Budgeting Committee.
the degradation to clean images. However, a problem arises
when the degradation model is unknown. In such cases,
while it is relatively easy to acquire distorted images, ob-
taining their ground truth counterparts can be challenging.
For this reason, there is a need for self-supervised methods
that use corrupted images only in the training phase. More
on these methods is detailed in Section 2.
In this work we focus on the problem of image denoising
with an unknown noise model. More specifically, we as-
sume that the noise is additive, zero mean, but not necessar-
ily Gaussian, and one that could be cross-channel and short-
range spatially correlated1. We additionally assume that the
noise is (mostly) independent of the image and nearly ho-
mogeneous, i.e., having low to moderate spatially variant
statistics. Examples of such noise could be Gaussian corre-
lated noise or real image noise in digital cameras. Several
recent papers propose methods for self-supervised training
under similar such challenging conditions. However, they
all assume an uncorrelated noise or a noise with a known
model, thus limiting their coverage of the need posed.
This work proposes a novel self-supervised training
framework for addressing the problem of image denois-
ing of an unknown correlated noise. The proposed algo-
rithm gets as input bursts of shots, where each frame in
the burst captures nearly the same scene, up to moderate
movements of the camera and objects. Such sequences of
images are easily captured in many digital cameras. Our
algorithm uses one image from the burst as the input, utiliz-
ing the rest of the frames of the same burst for constructing
(noisy) targets for training. For creating these target images,
we harness the concept of patch-craft frames introduced in
PaCNet [35]. Similar to PaCNet, we split the input shot
into fully overlapping patches. For each patch, we find its
nearest neighbor within the rest of the burst images. Note
that, unlike PaCNet, we strictly omit the input shot from
the neighbor search. We proceed by building mpatch-craft
1By short-term we refer to the case in which the auto-correlation func-
tion decays fast, implying that only nearby noise pixels may be highly cor-
related. The correlation range we consider is governed by the patch size in
our algorithm - see Section 3 for more details.
images by stitching the found neighbor patches, where m
is the patch size, and use these frames as denoising targets.
The above can be easily extended by using more than one
nearest neighbor per patch, thereby dramatically enriching
the number of patch-craft frames and their diversity.
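A simplified sketch of this target construction is given below: the input frame is unfolded into fully overlapping patches, each patch's nearest neighbor is searched only among patches from the other burst frames, and the matched patches are folded back into an image (averaged on overlaps). The actual algorithm stitches several distinct patch-craft frames and analyses the target-noise statistics; the averaging and the brute-force search here are simplifications.

import torch
import torch.nn.functional as F

def patch_craft_target(noisy_input, burst_others, patch=7):
    """Build a (simplified) patch-craft target from a burst (illustrative sketch).

    noisy_input: (1, C, H, W) frame to be denoised
    burst_others: (T, C, H, W) remaining burst frames (the input is excluded from the search)
    """
    q = F.unfold(noisy_input, patch).squeeze(0).T                 # (Lq, C*p*p) input patches
    db = F.unfold(burst_others, patch)                            # (T, C*p*p, Lk)
    db = db.permute(0, 2, 1).reshape(-1, q.shape[1])              # (T*Lk, C*p*p) candidate patches

    nn_idx = torch.cdist(q, db).argmin(dim=1)                     # nearest neighbour per patch
    matched = db[nn_idx].T.unsqueeze(0)                           # (1, C*p*p, Lq)

    # fold the matched patches back; divide by the fold of ones to average overlaps
    out_size = noisy_input.shape[-2:]
    target = F.fold(matched, out_size, patch)
    weight = F.fold(torch.ones_like(matched), out_size, patch)
    return target / weight.clamp(min=1e-6)                        # (1, C, H, W) noisy target

# usage on a tiny synthetic burst
x = torch.rand(1, 3, 32, 32)
burst = torch.rand(4, 3, 32, 32)
tgt = patch_craft_target(x, burst)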
The proposed technique for creating artificial target im-
ages is sensitive to possible statistical de-
pendence between the input and the target noise. To com-
bat this flaw, we propose a method for statistical analysis
of the target noise. This analysis suggests simple actions
that reduce dependency between the target noise and the
denoiser’s input, leading to a significant boost in perfor-
mance. We evaluate the proposed framework through ex-
tensive experiments with synthetic and real image noise,
showing that the proposed framework outperforms leading
self-supervised methods. To summarize, the contributions
of this work are the following:
• We propose a novel self-supervised framework for
training an image denoiser, where the noise may
be cross-channel and short-range spatially correlated.
Our approach relies simply on the availability of bursts
of noisy images; the ground truth is unavailable, and
the noise model is unknown.
• We suggest a method for statistical analysis of the tar-
get noise that leads to a boost in performance.
• We demonstrate superior denoising performance com-
pared to leading alternative self-supervised denoising
methods.
|
Xie_Category_Query_Learning_for_Human-Object_Interaction_Classification_CVPR_2023
|
Abstract
Unlike most previous HOI methods that focus on learn-
ing better human-object features, we propose a novel and
complementary approach called category query learning .
Such queries are explicitly associated to interaction cat-
egories, converted to image specific category representa-
tion via a transformer decoder, and learnt via an auxil-
iary image-level classification task. This idea is motivated
by an earlier multi-label image classification method, but
is for the first time applied for the challenging human-
object interaction classification task. Our method is sim-
ple, general and effective. It is validated on three rep-
resentative HOI baselines and achieves new state-of-the-
art results on two benchmarks. Code will be available at
https://github.com/charles-xie/CQL .
|
1. Introduction
Human-Object Interaction (HOI) detection has attracted
a lot of interests in recent years [3, 8–10, 21, 33]. The task
consists of two sub-tasks. The first is human and object de-
tection. It is usually performed by common object detection
methods. The second is interaction classification of each
human-object (HO) pair. This sub-task is very challenging
due to the complex appearance variations in the interaction
categories. See Fig. 1 for examples. It is the focus of most
previous HOI methods, as well as this work.
Most previous HOI methods focus on learning bet-
ter human-object features , including modeling relation
and context via GNN [7, 29, 34, 37] or attention mecha-
nism [8, 34, 46], decoupling localization and classification
[22, 41, 48], leveraging vision-language knowledge [6, 22]
and introducing multi-scale feature to transformer [16].
However, for interaction classification they all adopt the
simple linear classifier that performs the dot product of the
*Corresponding author.
†Work done during internship at MEGVII technology.
‡Work done while working at MEGVII.
Figure 1. Interaction classification is inherently challenging. In (a) (“person fly kite”, “person fly airplane”), “fly” is semantically polysemic, resulting in different objects, poses and relative positions. In (b) (“person hold fork”, “person hold elephant”), when “hold” is associated with different objects, the appearance, scene background, and human poses are largely different.
human-object feature and a static weight vector, which rep-
resents an interaction category.
In this work, we propose a new approach that enhances
the above paradigm and complements most previous HOI
methods. It is motivated by the recent work Query2label
[25], a transformer-based classification network. It pro-
poses a new concept we call category-specific query . Unlike
the queries in other transformer methods, each query is as-
sociated to a specific and fixed image category during train-
ing and inference. This one-to-one binding makes the query
learn to model each category more effectively. The queries
are converted to image specific category representations via
a transformer decoder. This method achieves excellent per-
formance on multi-label image classification task.
We extend this approach for human-object interaction
classification. Essentially, our approach replaces traditional
category representation as a static weight vector in previ-
ous HOI methods with category queries learnt as described
above. The same linear classifier is adopted. Such cate-
gory queries are more effective, and adaptive for different
images, giving rise to better modeling of the complex vari-
ations in each interaction category. This is the crucial dif-
ference between this work and a simple adaption of [25] to
HOI. Notably, this work is the first to address the category
weight representation problem in the HOI community.
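A minimal sketch of this idea is given below, under the assumption of a generic transformer decoder: one learnable query per interaction category is decoded against image-level tokens into image-specific category representations, and interaction logits are the dot products between human-object pair features and these representations (the same linear classifier as before, but with adaptive weights). The dimensions and the number of categories (e.g., 117 verbs as in HICO-DET) are illustrative, not the paper's exact configuration.

import torch
import torch.nn as nn

class CategoryQueryHead(nn.Module):
    """Sketch of category query learning on top of an HOI baseline (illustrative)."""

    def __init__(self, num_categories=117, dim=256, num_layers=2):
        super().__init__()
        # one learnable query per interaction category (fixed one-to-one binding)
        self.category_queries = nn.Embedding(num_categories, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, image_tokens, ho_features):
        # image_tokens: (B, N, dim) backbone feature tokens
        # ho_features:  (B, P, dim) features of detected human-object pairs
        B = image_tokens.shape[0]
        queries = self.category_queries.weight.unsqueeze(0).expand(B, -1, -1)
        # image-specific category representations replace a static classifier weight
        cat_repr = self.decoder(queries, image_tokens)          # (B, K, dim)
        logits = ho_features @ cat_repr.transpose(1, 2)         # (B, P, K) linear classifier
        return logits

# usage
head = CategoryQueryHead()
logits = head(torch.randn(2, 100, 256), torch.randn(2, 5, 256))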
Note that our proposed category specific query is differ-
ent and not related to those queries in other transformer-
based HOI methods [4, 15, 33, 49]. Specifically, category
queries extract image-level features as the category repre-
sentation. The queries in other methods are human-object
instance-level features and category-agnostic.
Our method is simple, lightweight and general. The
overview is in Fig. 2. It is complementary to any off-the-
shelf HOI method that provides human-object features. The
modification of both inference and training is small. The in-
curred additional cost is marginal.
In experiments, our approach is validated on three
representative and strong HOI baseline methods, two
transformer-based methods [22, 33] and a traditional two-
stage method [42]. They are all significantly improved by
our approach. New state-of-the-art results are obtained on
two benchmarks. Specifically, we obtain 36.03 mAP on
HICO-DET. Comprehensive ablation studies and in-depth
discussion are also provided to verify the effectiveness of
implementation details in our approach. It turns out that our
method is more effective on challenging images that con-
tain more human-object instances, a property that is rarely
discussed by previous HOI methods.
|
Wu_Learning_Semantic-Aware_Knowledge_Guidance_for_Low-Light_Image_Enhancement_CVPR_2023
|
Abstract
Low-light image enhancement (LLIE) investigates how
to improve illumination and produce normal-light images.
The majority of existing methods improve low-light images
via a global and uniform manner, without taking into ac-
count the semantic information of different regions. With-
out semantic priors, a network may easily deviate from a
region’s original color. To address this issue, we propose a
novel semantic-aware knowledge-guided framework (SKF)
that can assist a low-light enhancement model in learning
rich and diverse priors encapsulated in a semantic segmen-
tation model. We concentrate on incorporating semantic
knowledge from three key aspects: a semantic-aware em-
bedding module that wisely integrates semantic priors in
feature representation space, a semantic-guided color his-
togram loss that preserves color consistency of various in-
stances, and a semantic-guided adversarial loss that pro-
duces more natural textures by semantic priors. Our SKF
is appealing in acting as a general framework in LLIE
task. Extensive experiments show that models equipped
with the SKF significantly outperform the baselines on mul-
tiple datasets and our SKF generalizes to different models
and scenes well. The code is available at Semantic-Aware-
Low-Light-Image-Enhancement
|
1. Introduction
In the real world, low-light imaging is fairly common due to
unavoidable environmental or technical constraints such as insufficient illumination and limited exposure time. Low-
light images not only have poor visibility for human per-
ception, but also are unsuitable for subsequent multime-
dia computing and downstream vision tasks designed for
high-quality images [4, 9, 36]. Thus, low-light image en-
hancement (LLIE) is proposed to reveal buried details in
low-light images and avoid degraded performance in sub-
sequent vision tasks. Mainstream traditional methods for
LLIE include Histogram Equalization-based methods [2]
and Retinex model-based methods [18].
Recently, many deep learning-based LLIE methods have
been proposed, such as end-to-end frameworks [5,7,34,45,46,48]
and Retinex-based frameworks [29, 41, 43, 44, 49, 53, 54].
Benefiting from their ability in modeling the mapping be-
tween the low-light and high-quality image, deep LLIE
methods commonly achieve better results than traditional
approaches. However, existing methods typically improve
low-light images globally and uniformly, without taking
into account the semantic information of different regions,
which is crucial for enhancement. As shown in Fig. 1(a),
a network that lacks the utilization of semantic priors can
easily deviate from a region’s original hue [22]. Further-
more, studies have demonstrated the significance of incor-
porating semantic priors into low-light enhancement. Fan et
al. [8] utilize a semantic map as a prior and incorporate it into
the feature representation space, thereby enhancing image
quality. Rather than relying on optimizing intermediate fea-
tures, Zheng et al. [58] adopt a novel loss to guarantee the
semantic consistency of the enhanced images. These meth-
ods successfully combine the semantic priors with LLIE
task, demonstrating the superiority of semantic constraints
and guidance. However, their methods fail to fully ex-
ploit the knowledge that semantic segmentation networks
can provide, limiting the performance gain by semantic pri-
ors. Furthermore, the interaction between segmentation and
enhancement is designed for specific methods, limiting the
possibility of incorporating semantic guidance into LLIE
task. Hence, we consider two questions: 1) How can we
obtain rich and readily available semantic knowledge? 2) How
does semantic knowledge contribute to image quality im-
provement in the LLIE task?
We attempt to answer the first question. First, a semantic
segmentation network pre-trained on large-scale datasets is
introduced as a semantic knowledge bank (SKB). The SKB
can provide richer and more diverse semantic priors to im-
prove the capability of enhancement networks. Second, ac-
cording to previous works [8, 19, 58], the available priors
provided by the SKB primarily consist of intermediate fea-
tures and semantic maps. When training an LLIE model, the
SKB yields the above semantic priors and guides the enhance-
ment process. The priors can not only refine image fea-
tures by employing techniques like affinity matrices, spatial
feature transformations [40], and attention mechanisms, but
also guide the design of objective functions by explicitly
incorporating regional information into LLIE task [26].
Then we try to answer the second question. We design
a series of novel methods to integrate semantic knowledge
into LLIE task based on the above answers, formulating
in a novel semantic-aware knowledge-guided framework
(SKF). First, we use the High-Resolution Network [38]
(HRNet) pre-trained on the PASCAL-Context dataset [35]
as the previously mentioned SKB. In order to make use of
intermediate features, we develop a semantic-aware embed-
ding (SE) module. It computes the similarity between the
reference and target features and employs cross-modal in-
teractions between heterogeneous representations. As a re-
sult, we quantify the semantic awareness of image features
as a form of attention and embed semantic consistency in
enhancement network.
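A hedged sketch of such a semantic-aware embedding is shown below: enhancement-network features act as queries and the frozen segmentation network's features act as keys and values in a cross-attention, and the attended semantic information is added back as a residual refinement. The projection layers and dimensions are assumptions, not the released SE module.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAwareEmbedding(nn.Module):
    """Illustrative semantic-aware embedding (SE)-style module (not the official design)."""

    def __init__(self, enh_dim=64, sem_dim=128, dim=64):
        super().__init__()
        self.q = nn.Conv2d(enh_dim, dim, 1)
        self.k = nn.Conv2d(sem_dim, dim, 1)
        self.v = nn.Conv2d(sem_dim, dim, 1)
        self.out = nn.Conv2d(dim, enh_dim, 1)

    def forward(self, enh_feat, sem_feat):
        # enh_feat: (B, enh_dim, H, W); sem_feat: (B, sem_dim, h, w) from the frozen SKB
        sem_feat = F.interpolate(sem_feat, size=enh_feat.shape[-2:], mode='bilinear',
                                 align_corners=False)
        B, _, H, W = enh_feat.shape
        q = self.q(enh_feat).flatten(2).transpose(1, 2)            # (B, HW, d)
        k = self.k(sem_feat).flatten(2)                             # (B, d, HW)
        v = self.v(sem_feat).flatten(2).transpose(1, 2)             # (B, HW, d)
        attn = torch.softmax(q @ k / q.shape[-1] ** 0.5, dim=-1)    # cross-modal similarity
        fused = (attn @ v).transpose(1, 2).reshape(B, -1, H, W)
        return enh_feat + self.out(fused)                           # residual semantic refinement

# usage
se = SemanticAwareEmbedding()
out = se(torch.rand(1, 64, 32, 32), torch.rand(1, 128, 16, 16))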
Second, some methods [20, 55] propose to optimize im-
age enhancement using color histogram in order to preserve
the color consistency of the image rather than simply en-
hancing the brightness globally. The color histogram, on
the other hand, is still a global statistical feature that cannot
guarantee local consistency. Hence, we propose a semantic-
guided color histogram (SCH) loss to refine color consis-
tency. Here, we intend to make use of local geometric
information derived from the scene semantics and global
color information derived from the content. In addition to
preserving the original colors of the enhanced image, it can also add spatial information to the color histogram, enabling
a more nuanced color recovery.
Third, existing loss functions are not well aligned with
human perception and fail to capture an image’s intrinsic
signal structure, resulting in unpleasing visual results. To
improve visual quality, EnlightenGAN [16] employs global
and local image-content consistency and randomly chooses
local patches. However, the discriminator does not know
which regions are likely to be ‘fake’. Thus, we propose
a semantic-guided adversarial (SA) loss. Specifically, the
ability of the discriminator is improved by using the segmen-
tation map to determine the fake areas, which can improve
the image quality further.
The main contributions of our work are as follows:
• We propose a semantic-aware knowledge-guided
framework (SKF) to boost the performance of exist-
ing methods by jointly maintaining color consistency
and improving image quality.
• We propose three key techniques to take full advantage
of semantic priors provided by semantic knowledge
bank (SKB): semantic-aware embedding (SE) mod-
ule, semantic-guided color histogram (SCH) loss, and
semantic-guided adversarial (SA) loss.
• We conduct experiments on LOL/LOL-v2 datasets and
unpaired datasets. The experimental results demon-
strate large performance improvements by our SKF,
verifying its effectiveness in resolving the LLIE task.
|
Wang_Compression-Aware_Video_Super-Resolution_CVPR_2023
|
Abstract
Videos stored on mobile devices or delivered on the In-
ternet are usually in a compressed format with various
unknown compression parameters, yet most video super-
resolution (VSR) methods assume ideal inputs, result-
ing in a large performance gap between experimental set-
tings and real-world applications. Although a few pio-
neering works have recently been proposed to super-resolve
compressed videos, they are not specially designed to deal
with videos of various levels of compression. In this pa-
per, we propose a novel and practical compression-aware
video super-resolution model, which could adapt its video
enhancement process to the estimated compression level.
A compression encoder is designed to model compression
levels of input frames, and a base VSR model is then condi-
tioned on the implicitly computed representation by insert-
ing compression-aware modules. In addition, we propose
to further strengthen the VSR model by taking full advan-
tage of meta data that is embedded naturally in compressed
video streams in the procedure of information fusion. Ex-
tensive experiments are conducted to demonstrate the ef-
fectiveness and efficiency of the proposed method on com-
pressed VSR benchmarks. The codes will be available at
https://github.com/aprBlue/CAVSR
|
1. Introduction
Video super-resolution aims at restoring a sequence
of high-resolution (HR) frames by utilizing the comple-
mentary temporal information within low-resolution (LR)
frames. There have been many efforts [1–3, 7, 13, 15–18,
20, 24, 35, 38, 47] made on this task, especially after the
rise of deep learning. Most of these methods, however,
*T. ISOBE and Y. WANG contributed equally to this work.
†Corresponding author.
Figure 1. Motivation of this work. (a) Application scenario of the VSR task this work focuses on, (b) performance of existing VSR methods on the compressed VSR task, and (c) performance of previous VSR models trained on videos of different compression (CRF15 vs. CRF35).
assume an ideal input such as directly taking either bicu-
bicly downsampled frames or Gaussian smoothed and dec-
imated ones as the degraded inputs. In real world, videos
stored on mobile devices or delivered on the Internet are
all in a compressed format with different compression lev-
els [10, 11, 14, 25, 30, 33, 41, 45]. Unless the compression is
very lightweight, directly applying an existing VSR model
would give unsatisfactory results with magnified compres-
sion artifacts, as shown in Fig. 1(a). One straightforward so-
lution is to first apply a multi-frame decompression method
[5, 9, 41, 45] to remove blocking artifacts and then feed the
enhanced frames to an uncompressed VSR model. How-
ever, as shown in Fig. 1(b), the performance is still not good
with artifacts remaining. In addition, the decompression net-
work usually cannot handle video frames of different com-
pression levels adaptively, which will cause over-smoothing
and accumulate errors in super-resolution stage.
Recently a few pioneering works have been proposed to
investigate the video super-resolution task on compressed
videos. In [46], Yang et al. took into consideration com-
plex degradations in real video scenarios and built a real-
world VSR dataset using iPhone 11 Pro Max. A special
training strategy is proposed to handle misalignment and il-
lumination/color difference. In RealBasicVSR [3], Chan et
al. synthesized real-world-like low-quality videos based on
a combination of several degradations and proposed a two-
stage method with an image pre-cleaning module followed
by an existing VSR model. COMISR [22] and FTVSR [27]
are proposed to address streamed videos compressed in dif-
ferent levels rather than degradations like noise and blur.
Although COMISR and FTVSR have improved perfor-
mance on compressed videos, they are not specially de-
signed to deal with videos of various levels of compres-
sion. Being aware of the compression level of input videos would
allow a model to exert its power on those videos adap-
tively. Otherwise, video frames with light compression
would be oversmoothed while the ones with heavy com-
pression would still retain magnified artifacts, as shown in
Fig. 1(c). Moreover, these methods take only the decoded com-
pressed video frames as input; meta data such as
frame types, motion vectors, and residual maps that are nat-
urally encoded in a compressed video stream are ignored. Mak-
ing full use of such meta data and the decoded video frames
could help further improve super-resolution performance on
compressed videos.
Based on the above observations, we propose a
compression-aware video super-resolution model, in which a com-
pression encoder module is designed to implicitly model the
compression level with the help of the meta data of a com-
pressed video. It would also take into account both frames
and their frame types in computing compression represen-
tation. A base bidirectional recurrent-based VSR model
is then conditioned on that representation by inserting
compression-aware modules such that it could adaptively
deal with videos of different compression levels. To fur-
ther strengthen the power of the base VSR model, we take
advantage of meta data in a further step. Motion vectors
and residual maps are employed to achieve fast and accu-
rate alignment between different time steps, and frame types
are leveraged again to update the hidden state in the bidirectional
recurrent network. Extensive experiments demonstrate that
the specially designed VSR model for compressed videos
performs favorably against state-of-the-art methods.
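To illustrate the conditioning mechanism described above, the sketch below pairs a small compression encoder (frame plus frame-type embedding pooled into a latent representation) with a compression-aware module that modulates VSR features by a representation-dependent scale and shift. The ranking-based supervision of the encoder and the meta-data-guided alignment are omitted, and all names and sizes are assumptions rather than the CAVSR implementation.

import torch
import torch.nn as nn

class CompressionEncoder(nn.Module):
    """Estimate a latent compression representation from a decoded frame and its frame type."""

    def __init__(self, rep_dim=64, num_frame_types=3):  # e.g. I / P / B frames
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.type_embed = nn.Embedding(num_frame_types, 16)
        self.fc = nn.Linear(64 + 16, rep_dim)

    def forward(self, frame, frame_type):
        return self.fc(torch.cat([self.backbone(frame), self.type_embed(frame_type)], dim=-1))

class CompressionAwareModulation(nn.Module):
    """Condition base VSR features on the compression representation via scale-and-shift."""

    def __init__(self, feat_dim=64, rep_dim=64):
        super().__init__()
        self.to_scale = nn.Linear(rep_dim, feat_dim)
        self.to_shift = nn.Linear(rep_dim, feat_dim)

    def forward(self, feat, rep):
        # feat: (B, C, H, W) base VSR feature; rep: (B, rep_dim) compression representation
        scale = self.to_scale(rep).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(rep).unsqueeze(-1).unsqueeze(-1)
        return feat * (1 + scale) + shift

# usage: modulate a VSR feature map according to the estimated compression level
enc, mod = CompressionEncoder(), CompressionAwareModulation()
rep = enc(torch.rand(2, 3, 64, 64), torch.tensor([0, 2]))
out = mod(torch.rand(2, 64, 32, 32), rep)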
Our contributions are summarized as follows:
• A compression encoder to perceive compression levels
of frames is proposed. It is supervised with a ranking-
based loss and the computed compression representa-
tion is used to modulate a base VSR model.
• Meta data that comes naturally with compressed videos is fully explored in the fusion process of spatial
and temporal information to strengthen the power of a
bidirectional RNN-based VSR model.
• Extensive experiments demonstrate the effectiveness
and efficiency of the proposed method on compressed
VSR benchmarks.
|
Wang_ARO-Net_Learning_Implicit_Fields_From_Anchored_Radial_Observations_CVPR_2023
|
Abstract
We introduce anchored radial observations (ARO), a novel
shape encoding for learning implicit field representation of
3D shapes that is category-agnostic and generalizable amid
significant shape variations. The main idea behind our work
is to reason about shapes through partial observations from
a set of viewpoints, called anchors. We develop a general and
unified shape representation by employing a fixed set of an-
chors, via Fibonacci sampling, and designing a coordinate-
based deep neural network to predict the occupancy value of
a query point in space. Differently from prior neural implicit
models that use global shape feature, our shape encoder
operates on contextual, query-specific features. To predict
point occupancy, locally observed shape information from
the perspective of the anchors surrounding the input query
point are encoded and aggregated through an attention mod-
ule, before implicit decoding is performed. We demonstrate
the quality and generality of our network, coined ARO-Net,
on surface reconstruction from sparse point clouds, with
tests on novel and unseen object categories, “one-shape”
training, and comparisons to state-of-the-art neural and
classical methods for reconstruction and tessellation.
|
1. Introduction
Despite the substantial progress made in deep learning in
recent years, transferability and generalizability issues still
persist due to domain shifts and the need to handle diverse,
out-of-distribution test cases. For 3D shape representation
learning, a reoccurring challenge has been the inability of
trained neural models to generalize to unseen object cate-
gories and diverse shape structures, as these models often
overfit to or “memorize" the training data.
In this paper, we introduce a novel shape encoding for
learning an implicit field [34] representation of 3D shapes
that is category-agnostic and generalizable amid significant
shape variations. In Figure 1 (top), we show 3D reconstruc-
tions from sparse point clouds obtained by our approach
that is trained on chairs but tested on airplanes, rifles, and
Figure 1. Neural 3D reconstruction using ARO-Net from sparse point clouds (1,024 or 2,048 points). Top: reconstruction results on airplanes, rifles, and animals when the network was trained only on chairs. Bottom: reconstruction of a variety of shapes when the network was trained on numerous versions of one model, the Fertility. See comparison to other methods in Section 4.
animals. Our model can even reconstruct a variety of shapes
when the training data consists of only one shape, augmented
with rotation and scaling; see Figure 1 (bottom).
The main idea behind our work is to reason about shapes
from partial observations at a set of viewpoints, called an-
chors , and apply this reasoning to learn implicit fields. We
develop a general and unified shape representation by des-
ignating a fixed set of anchors and designing a coordinate-
based neural network to predict the occupancy at a query
point. In contrast to classical neural implicit models such as
IM-Net [8], OccNet [23], and DeepSDF [24], which learn
global shape features for occupancy/distance prediction, our
novel encoding scheme operates on local shape features
obtained by viewing the query point from the anchors.
Specifically, for a given query point x, we collect locally
observable shape information surrounding x, as well as direc-
tional and distance information toward x, from the perspec-
tive of the set of anchors, and encode such information using
(a) Anchors and input shape (b) ARO fixated on query (c) ARO-Net
Figure 2. 2D illustration of ARO and ARO-Net architecture: (a) Input point cloud (in grey) and a set of m fixed anchors (coloured dots). (b) Radial observation O_i from each anchor a_i toward the query point x consists of the close points inside the cone apexed at a_i, with axis r_i = x − a_i. (c) Each radial observation O_i is passed to a PointNet encoder to obtain an embedding feature f_i, which is concatenated with r_i and its norm to form the query-specific ARO encoding of x with respect to a_i. Finally, all the ARO features are decoded into the occupancy value occ(x) through an attention module and several MLPs. For 3D reconstruction, the ARO features are computed for each query point, while the PointNet, attention module, and implicit decoder are fixed during inference – their weights were determined during training.
Figure 3. A somewhat extreme toy example comparing ARO-Net to prior occupancy prediction networks on 3D reconstruction from a sparse point cloud of a cube (a), with training on a single sphere. The results from OccNet [23] (b) and ConvONet [25] (c) show more signs of overfitting to the training sphere than ARO-Net (d); (e) shows the ground truth.
PointNet [27]; see Figure 2(b). The PointNet features are
then aggregated through an attention module, whose output
is fed to an implicit decoder to produce the occupancy value
for x. We call our query-specific shape encoding Anchored Radial Observations, or ARO. The prediction network is
coined ARO-Net, as illustrated in Figure 2.
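To make the ARO gathering step concrete, the sketch below collects, for each anchor, the input points that fall inside a cone whose axis points from the anchor to the query point. The cone half-angle, the fixed observation size k, and the random padding are illustrative assumptions of ours; the downstream PointNet encoder, attention module, and implicit decoder are standard components omitted here.

```python
import torch
import torch.nn.functional as F

def anchored_radial_observations(points, anchors, query, k=64, half_angle_deg=20.0):
    """Sketch of ARO gathering for a single query point (cone angle, k, and padding are assumptions).
    points:  (N, 3) input point cloud
    anchors: (M, 3) fixed anchor positions
    query:   (3,)   query point x
    Returns a list of M tensors of shape (k, 3): the radial observation from each anchor."""
    observations = []
    cos_thresh = torch.cos(torch.deg2rad(torch.tensor(half_angle_deg)))
    for a in anchors:
        axis = query - a                              # r_i = x - a_i
        axis_dir = F.normalize(axis, dim=0)
        rel = points - a                              # vectors from the anchor to every input point
        inside = (F.normalize(rel, dim=1) @ axis_dir) >= cos_thresh
        cone_pts = points[inside]
        if cone_pts.shape[0] == 0:                    # degenerate cone: fall back to the query itself
            cone_pts = query.unsqueeze(0)
        idx = torch.randint(cone_pts.shape[0], (k,))  # sample/pad to a fixed size for the PointNet encoder
        observations.append(cone_pts[idx])
    return observations
```

Each observation would then be encoded by a shared PointNet, concatenated with r_i and its norm, and aggregated by the attention module before implicit decoding, as described in Figure 2.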
The advantages of ARO for learning implicit fields are
three-fold. First, shape inference from partial and local
observations is not bound by barriers set by object categories
or structural variations. Indeed, local shape features are more
prevalent, and hence more generalizable, across categories
than global ones [8, 23, 24]. Second, ARO is query-specific
and the directional and distance information it includes is
intimately tied to the occupancy prediction at the query point,
as explained in Section 3.1. Last but not least, by aggregating
observations from all the anchors, the resulting encoding
is not purely local, like the voxel-level encoding or latent
codes designed to better capture geometric details across
large-scale scenes [3,15,25]. ARO effectively captures more
global and contextual query-specific shape features.
In Figure 3, we demonstrate using a toy example the
difference between ARO-Net and representative neural im-
plicit models based on global (OccNet [23]) and local grid shape encodings (convolutional occupancy network or Con-
vONet [25]), for the 3D reconstruction task. All three net-
works were trained on a single sphere shape, with a single
anchor for ARO inside the sphere. When tested on recon-
structing a cube, the results show that both OccNet and
ConvONet exhibit more memorization of the training sphere
either globally or locally, while the ARO-Net reconstruction
is more faithful to the input without special fine-tuning.
In our current implementation of ARO-Net, we adopt Fi-
bonacci sampling [19] to obtain the fixed set of anchors. We
demonstrate the quality and generalizability of our method
on surface reconstruction from sparse point clouds, with
testing on novel and unseen object categories, as well as
“one-shape” training (see bottom of Figure 1). We report
extensive quantitative and qualitative comparison results to
state-of-the-art methods including both neural models for re-
construction [8, 11, 23, 25, 26] and tessellation [7], as well as
classical schemes such as screened Poisson reconstruction [18].
Finally, we conduct ablation studies to evaluate our choices
for the number of anchors, the selection strategies, and the
decoder architecture: MLP vs. attention modules.
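Since the fixed anchors are obtained via Fibonacci sampling, the short sketch below shows the standard Fibonacci-sphere construction of m near-uniform directions; the sphere radius and how the anchors are placed relative to the normalized input shape are assumptions, as they are not specified in this introduction.

```python
import numpy as np

def fibonacci_sphere_anchors(m: int, radius: float = 1.0) -> np.ndarray:
    """Standard Fibonacci-sphere sampling of m near-uniform anchor positions (radius is an assumption)."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    i = np.arange(m)
    z = 1.0 - 2.0 * (i + 0.5) / m                  # evenly spaced heights in (-1, 1)
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))      # radius of the horizontal circle at each height
    theta = golden_angle * i                       # azimuth advances by the golden angle
    x, y = r * np.cos(theta), r * np.sin(theta)
    return radius * np.stack([x, y, z], axis=1)    # (m, 3) anchor coordinates
```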
|
Wen_Crowd3D_Towards_Hundreds_of_People_Reconstruction_From_a_Single_Image_CVPR_2023
|
Abstract
Image-based multi-person reconstruction in wide-field
large scenes is critical for crowd analysis and security alert.
However, existing methods cannot deal with large scenes
containing hundreds of people, which encounter the chal-
lenges of large number of people, large variations in hu-
man scale, and complex spatial distribution. In this paper,
we propose Crowd3D, the first framework to reconstruct the
3D poses, shapes and locations of hundreds of people with
global consistency from a single large-scene image. The
core of our approach is to convert the problem of complex
crowd localization into pixel localization with the help of
our newly defined concept, Human-scene Virtual Interac-
tion Point (HVIP). To reconstruct the crowd with global
consistency, we propose a progressive reconstruction net-
work based on HVIP by pre-estimating a scene-level cam-
era and a ground plane. To deal with a large number of per-
sons and various human sizes, we also design an adaptive
human-centric cropping scheme. Besides, we contribute
a benchmark dataset, LargeCrowd, for crowd reconstruc-
tion in a large scene. Experimental results demonstrate the
effectiveness of the proposed method. The code and the
dataset are available at http://cic.tju.edu.cn/faculty/likun/projects/Crowd3D.
|
1. Introduction
3D pose, shape and location reconstruction for hundreds
of people in a large scene will help with modeling crowd be-
havior for simulation and security monitoring. However, no
existing methods can achieve this with global consistency.
In this paper, we aim to reconstruct the 3D poses, shapes
and locations of hundreds of people in the global camera
space from a single large-scene image, as shown in Fig. 1.
Although monocular human pose and shape estimation
[15, 37, 43] has been extensively explored over the past
years, estimating global space locations together with hu-
man poses and shapes for multiple people from a single im-
age is still a difficult problem due to the depth ambiguity.
Existing methods [13,35] reconstruct 3D poses, shapes and
relative positions of the reconstructed human meshes by as-
suming a constant focal length. But the methods are limited
to small scenes with a common FoV (Field of View). These
methods cannot regress the people from a whole large-scene
image [41] due to the relatively small and varying human
scales in comparison to the image size. Even with an image
cropping strategy, these methods cannot obtain consistent
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
8937
reconstructions in the global camera space due to indepen-
dent inference from the cropped images. Besides, existing
methods hardly consider the coherence of the reconstructed
people with the outdoor scene, especially with the ground,
since the ground is a common and significant element of
outdoor scenes. Taking a usual urban scene as an example,
these methods may include wrong positions and rotations so
that the reconstructed people do not appear to be standing
or walking on the ground.
In general, there are three challenges in reconstructing
hundreds of people with global consistency from a single
large-scene image: 1) there are a large number of people
with relatively small and highly varying 2D scales; 2) due
to the depth ambiguity from a single view, it is difficult
to directly estimate absolute 3D positions and 3D poses of
people in the large scene; 3) there are no large-scene image
datasets with hundreds of people for supervising crowd re-
construction in large scenes.
In this paper, to address these challenges, we propose
Crowd3D , the first framework for crowd reconstruction
from a single large-scene image. To deal with the large
number of people and various human scales, we propose
an adaptive human-centric cropping scheme for a consistent
scale proportion of people among different cropped images
by leveraging the observation of pyramid-like changes in
the scales of people in large-scene images. To ensure the
globally consistent spatial locations and coherence with the
scene, we propose a progressive ground-guided reconstruc-
tion network Crowd3DNet to reconstruct globally consis-
tent human body meshes from the cropped images by pre-
estimating a global scene-level camera and a ground plane.
To alleviate the ambiguity brought in by directly estimat-
ing absolute 3D locations from a single image, we present a
novel concept called Human-scene Virtual Interaction Point
(HVIP) for effectively converting the 3D crowd spatial lo-
calization problem into a progressive 2D pixel localization
problem with intermediate supervisions. Benefiting from
HVIP, our model can reconstruct the people with various
poses including non-standing.
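The reason a 2D pixel localization combined with a pre-estimated ground plane resolves the depth ambiguity can be illustrated with a small geometric sketch: back-projecting a pixel that is assumed to lie on the ground plane yields a unique 3D point by ray-plane intersection. This is only an illustration of that geometric fact, not the paper's exact HVIP formulation; all names below are ours.

```python
import numpy as np

def backproject_pixel_to_ground(pixel, K, plane_n, plane_d):
    """Illustrative only (not the exact HVIP formulation): lift a 2D pixel assumed to lie on the
    ground plane to a 3D point in camera coordinates.
    pixel:   (u, v) pixel coordinates
    K:       (3, 3) camera intrinsic matrix
    plane_n: (3,) ground-plane normal in camera coordinates
    plane_d: scalar offset such that plane_n . X + plane_d = 0 for points X on the plane"""
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    ray = np.linalg.inv(K) @ uv1          # viewing ray through the pixel (camera at the origin)
    t = -plane_d / float(plane_n @ ray)   # ray-plane intersection parameter
    return t * ray                        # unique 3D ground point; depth ambiguity is resolved by the plane
```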
We also construct LargeCrowd , a benchmark dataset
with over 100K labeled humans (2D bounding boxes, 2D
keypoints, 3D ground plane and HVIPs) in 733 gigapixel
images (19200×6480) of 9 different scenes. To our best
knowledge, this is the first large-scene crowd dataset, which
enables the training and evaluation on large-scene images
with hundreds of people. Experimental results demonstrate
that our method achieves globally consistent crowd recon-
struction in a large scene. Fig. 1 gives an example.
To summarize, our main contributions include:
1) We propose Crowd3D, a multi-person 3D pose, shape
and location estimation framework for large-scale
scenes with hundreds of people. We design an adap-
tive human-centric cropping scheme and a joint local and global strategy to achieve the globally consistent
reconstruction.
2) We propose a progressive reconstruction network with
the newly defined HVIP, to alleviate the depth ambigu-
ity and obtain global reconstructions in harmony with
the scene.
3) We contribute LargeCrowd , a benchmark dataset with
over 100K labeled crowded people in 733 gigapixel
large-scene images (19200×6480), which are valuable
for the training and evaluation of crowd reconstruction
and spatial reasoning in large scenes.
|
Xie_Active_Finetuning_Exploiting_Annotation_Budget_in_the_Pretraining-Finetuning_Paradigm_CVPR_2023
|
Abstract
Given the large-scale data and the high annotation cost,
pretraining-finetuning becomes a popular paradigm in mul-
tiple computer vision tasks. Previous research has covered both the unsupervised pretraining and supervised finetuning in this paradigm, while little attention is paid to exploiting the annotation budget for finetuning. To fill in this gap, we formally define this new active finetuning task, focusing on the selection of samples for annotation in the pretraining-finetuning paradigm. We propose a novel method called ActiveFT for the active finetuning task to select a subset of data distributing similarly with the entire unlabeled pool and maintaining enough diversity by optimizing a parametric model in the continuous space. We prove that the Earth Mover's distance between the distributions of the selected subset and the entire data pool is also reduced in this process. Extensive experiments show the leading performance and high efficiency of ActiveFT, superior to baselines on both image classification and semantic segmentation. Our
code is released at https://github.com/yichen928/ActiveFT .
|
1. Introduction
Recent success of deep learning heavily relies on abundant training data. However, the annotation of large-scale datasets often requires intensive human labor. This dilemma inspires a popular pretraining-finetuning paradigm where
models are pretrained on a large amount of data in an unsu-
pervised manner and finetuned on a small labeled subset.
Existing literature pays significant attention to both the
unsupervised pretraining [ 7,13,14,18] and supervised fine-
tuning [26]. In spite of their notable contributions, these studies build upon an unrealistic assumption that we al-
ready know which samples should be labeled . As shown in
Fig. 1, given a large unlabeled data pool, it is necessary to
pick up the most useful samples to exploit the limited an-
notation budget. In most cases, this selected subset only
Figure 1. Pretraining-Finetuning Paradigm: We focus on the
selection strategy of a small subset from a large unlabeled data
pool for annotation, named the active finetuning task, which has long been under-explored.
counts a small portion (e.g., <10%) of this large unlabeled pool. Despite the long-standing under-exploration, the selection strategy is still crucial since it may significantly affect the final results.
Active learning algorithms [ 34,38,39] seem to be a
potential solution, which aims to select the most suit-
able samples for annotation when models are trained
from scratch. However, their failures in this pretraining-finetuning paradigm are revealed in both [ 4] and our ex-
periments (Sec. 4.1). A possible explanation comes from
the batch-selection strategy of most current active learn-
ing methods. Starting from a random initial set, this strategy repeats the model training and data selection processes
multiple times until the annotation budget runs out. De-
spite their success in from-scratch training, it does not fit
this pretraining-finetuning paradigm well due to the typi-
cally low annotation budget, where too few samples in each
batch lead to harmful bias inside the selection process.
To fill in this gap in the pretraining-finetuning paradigm,
we formulate a new task called active finetuning , concen-
trating on the sample selection for supervised finetuning. In
this paper, a novel method, ActiveFT , is proposed to deal
with this task. Starting from purely unlabeled data, Ac-
tiveFT fetches a proper data subset for supervised finetuning
in a negligible time. Without any redundant heuristics, we
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
23715
directly bring the distribution of the selected subset close to that of the entire unlabeled pool while ensuring the diversity of the selected subset. This goal is achieved by continuous optimization in the high-dimensional feature space, which is mapped with the pretrained model.
We design a parametric model p_θS to estimate the distribution of the selected subset. Its parameter θS is exactly the high-dimensional features of those selected samples. We optimize this model via gradient descent by minimizing our designed loss function. Unlike traditional active learning algorithms, our method can select all the samples from scratch in a single pass without iterative batch selections.
We also mathematically show that the optimization in the continuous space can exactly reduce the earth mover's distance (EMD) [35, 36] between the entire pool and the selected subset in the discrete data sample space.
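The exact loss function is not given in this introduction, so the sketch below uses a stand-in objective (coverage of the pool plus a diversity penalty) purely to illustrate the mechanism: the features of "virtual" selected samples are optimized continuously with gradient descent and then snapped back to the closest real samples. All names and hyperparameters are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def activeft_style_select(pool_feats: torch.Tensor, budget: int, steps: int = 100,
                          lr: float = 1e-2, temp: float = 0.07) -> torch.Tensor:
    """Sketch of selection by continuous optimization in feature space (loss is our stand-in).
    pool_feats: (N, D) L2-normalized features of the unlabeled pool from the pretrained encoder."""
    n = pool_feats.shape[0]
    init = torch.randperm(n)[:budget]
    theta_s = pool_feats[init].clone().requires_grad_(True)   # parameters = features of virtual samples
    opt = torch.optim.Adam([theta_s], lr=lr)
    eye = torch.eye(budget, dtype=torch.bool)
    for _ in range(steps):
        p = F.normalize(theta_s, dim=1)
        coverage = (pool_feats @ p.t()).max(dim=1).values.mean()          # every pool sample near some theta
        pairwise = (p @ p.t() / temp).masked_fill(eye, float("-inf"))
        diversity = torch.logsumexp(pairwise, dim=1).mean()               # penalize selected features collapsing
        loss = -coverage + diversity
        opt.zero_grad()
        loss.backward()
        opt.step()
    # snap the optimized continuous parameters back to the closest real samples
    return (pool_feats @ F.normalize(theta_s, dim=1).t()).argmax(dim=0)
```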
Extensive experiments are conducted to evaluate our
method in the pretraining-finetuning paradigm. After pre-
training the model on ImageNet-1k [ 37], we select subsets
of data from CIFAR-10, CIFAR-100 [ 23], and ImageNet-
1k [ 37] for image classification, as well as ADE20k [ 50] for
semantic segmentation. Results show the significant perfor-
mance gain of our ActiveFT in comparison with baselines.
Our contributions are summarized as follows:
• To our best knowledge, we are the first to iden-
tify the gap of data selection for annotation and
supervised finetuning in the pretraining-finetuning
paradigm, which can cause inefficient use of annotation budgets, as also verified in our empirical study.
Meanwhile, we formulate a new task called active fine-
tuning to fill in this gap.
• We propose a novel method, ActiveFT, to deal with the
active finetuning task through parametric model opti-
mization which theoretically reduces the earth mover’s
distance (EMD) between the distributions of the se-
lected subset and the entire unlabeled pool. To our best knowledge, we are the first to directly optimize samples to be selected in the continuous space for data selection tasks.
• We apply ActiveFT to popular public datasets, achiev-
ing leading performance on both classification and seg-
mentation tasks. In particular, our ablation study re-
sults justify the design of our method to fill in the dataselection gap in the pretraining-finetuning paradigm.The source code will be made public available.
|
Wang_Cooperation_or_Competition_Avoiding_Player_Domination_for_Multi-Target_Robustness_via_CVPR_2023
|
Abstract
Despite incredible advances, deep learning has been
shown to be susceptible to adversarial attacks. Numerous
approaches have been proposed to train robust networks
both empirically and certifiably. However, most of them de-
fend against only a single type of attack, while recent work
takes steps forward in defending against multiple attacks. In
this paper, to understand multi-target robustness, we view
this problem as a bargaining game in which different players
(adversaries) negotiate to reach an agreement on a joint
direction of parameter updating. We identify a phenomenon
named player domination in the bargaining game, namely
that the existing max-based approaches, such as MAX and
MSD, do not converge. Based on our theoretical analysis, we
design a novel framework that adjusts the budgets of differ-
ent adversaries to avoid any player dominance. Experiments
on standard benchmarks show that employing the proposed
framework to the existing approaches significantly advances
multi-target robustness.
|
1. Introduction
Machine learning (ML) models [ 15,47,48] have been
shown to be susceptible to adversarial examples [ 39], where
human-imperceptible perturbations added to a clean example
might arbitrarily change the output of machine learning mod-
els. Adversarial examples are generated by maximizing the
loss within a small perturbation region around a clean exam-
ple,e.g.,`1,`1and`2balls. On the other hand, numerous
heuristic defenses have been proposed to be robust against
Figure 1. Robust accuracy against PGD attacks and AutoAttack
(“AA” in this figure) on CIFAR-10. “All” means that the model successfully defends against the ℓ1, ℓ2, and ℓ∞ (PGD or AutoAttack) attacks simultaneously. Compared with the previously best-known methods, our proposed framework achieves improved performance. “w. ℓ1” and “w. ℓ2” refer to the model trained with our proposed AdaptiveBudget algorithm with ℓ1 or ℓ2 norms, respectively.
adversarial examples, e.g., distillation [ 31], logit-pairing [ 19]
and adversarial training [ 25].
However, most of the existing defenses are only robust
against one type of attack [11, 25, 33, 49], while they fail
to defend against other adversaries. For example, existing
work [18, 26] showed that robustness in the ℓp threat model does not necessarily generalize to other ℓq threat models when p ≠ q. However, for the sake of the safety of ML
systems, it has been argued that one should target robustness
against multiple adversaries simultaneously [ 7].
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
20564
Recently, various methods [ 26,35,41] have been proposed
to address this problem. Multi-target adversarial training,
which targets defending against multiple adversarial per-
turbations, has attracted significant attention: a variational
autoencoder-based model [ 35] learns a classifier robust to
multiple perturbations; after that, MAX and AVG strategies,
which aggregate different adversaries for adversarial training
against multiple threat models, have been shown to enjoy
improved performance [ 41]. To further advance the robust-
ness against multiple adversaries, MSD [ 26] is proposed
and outperformed MAX and AVG by taking the worst case
over all steepest descent directions. These methods follow
a general scheme similar to the (single-target) adversarial
training. They first sample adversarial examples by different
adversaries and then update the model with the aggregation
of the gradients from these adversarial examples.
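The general scheme can be written compactly. The sketch below shows one training step with AVG- or MAX-style aggregation of per-adversary losses; the adversaries (e.g., PGD under different norms and budgets) are assumed to be given callables and are not implemented here, and MSD, which instead takes the worst case over steepest-descent directions at every inner step, is not shown.

```python
import torch

def multi_target_step(model, x, y, adversaries, loss_fn, optimizer, mode="avg"):
    """Sketch of one multi-target adversarial training step (generic scheme, not any paper's exact code).
    `adversaries` is a list of callables; each returns a (detached) adversarial example for its own
    threat model, e.g. PGD under a given norm and budget. Their implementation is assumed, not shown."""
    losses = torch.stack([loss_fn(model(adv(model, x, y)), y) for adv in adversaries])
    if mode == "avg":
        total = losses.mean()   # AVG: average the adversaries' losses
    else:
        total = losses.max()    # MAX: follow the single worst-case adversary
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return losses.detach()
```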
This general scheme for multi-target adversarial training
can be seen as an implementation of a cooperative bargaining
game [ 40]. In this game, different parties have to decide how
to maximize the surplus they jointly get. In the multi-target
adversarial training, we view each party as an adversary,
and they negotiate to reach an agreed gradient direction that
maximizes the overall robustness.
Inspired by the bargaining game modelling for multi-
target adversarial training, we first analyze the convergence
property of existing methods, i.e., MAX [41], MSD [26], and AVG [41], and identify a phenomenon named player
domination. Specifically, it refers to the case where one
player dominates the bargaining game at any time t, and
the gradient at any time tis the same as this player’s gradi-
ent. Furthermore, we notice that under the SVM and linear
model setups, player domination always occurs when using
MAX and MSD, which leads to non-convergence. Based on
such theoretical results, we propose a novel mechanism that
adaptively adjusts the budgets of adversaries to avoid the
player domination. We show that with our proposed mecha-
nism, the overall robust accuracy of MAX, AVG and MSD
improves on three representative datasets. We also illustrate
the performance improvement on CIFAR- 10in Figure 1.
In this paper, we present the first theoretical analysis of
the convergence of multi-target robustness on three algo-
rithms under two models. Building on our theoretical results,
we introduce a new method called AdaptiveBudget , de-
signed to prevent the player domination phenomenon that
can cause MSD and MAX to fail to converge. Our exten-
sive experimental results demonstrate the superiority of our
approach over previous methods.
|
Wang_AttriCLIP_A_Non-Incremental_Learner_for_Incremental_Knowledge_Learning_CVPR_2023
|
Abstract
Continual learning aims to enable a model to incre-
mentally learn knowledge from sequentially arrived data.
Previous works adopt the conventional classification
architecture, which consists of a feature extractor and a
classifier. The feature extractor is shared across sequen-
tially arrived tasks or classes, but one specific group of
weights of the classifier corresponding to one new class
should be incrementally expanded. Consequently, the
parameters of a continual learner gradually increase.
Moreover, as the classifier contains all historically arrived
classes, a certain size of the memory is usually required
to store rehearsal data to mitigate classifier bias and
catastrophic forgetting. In this paper, we propose a non-
incremental learner, named AttriCLIP , to incrementally
extract knowledge of new classes or tasks. Specifically,
AttriCLIP is built upon the pre-trained visual-language
model CLIP . Its image encoder and text encoder are fixed
to extract features from both images and text. Text consists
of a category name and a fixed number of learnable
parameters which are selected from our designed attribute
word bank and serve as attributes. As we compute the
visual and textual similarity for classification, AttriCLIP is
a non-incremental learner. The attribute prompts, which
encode the common knowledge useful for classification, can
effectively mitigate the catastrophic forgetting and avoid
constructing a replay memory. We evaluate our AttriCLIP
and compare it with CLIP-based and previous state-of-the-
art continual learning methods in realistic settings with
domain-shift and long-sequence learning. The results show
that our method performs favorably against previous state-
of-the-arts. The implementation code will be available at
https://gitee.com/mindspore/models/tree/master/research/
cv/AttriCLIP.
|
1. Introduction
In recent years, deep neural networks have achieved re-
markable progress in classification when all the classes (or
tasks) are jointly trained. However, in real scenarios, the
tasks or classes usually sequentially arrive. Continual learn-
ing [13, 21, 28] aims to train a model which incrementally
expands its knowledge so as to deal with all the histori-
cal tasks or classes, behaving as if those tasks or classes
are jointly trained. The conventional continual learning
methods learn sequentially arrived tasks or classes with a
shared model, as shown in Fig. 1(a). Such processing that
fine-tunes the same model in sequence inevitably results in
subsequent values of the parameters overwriting previous
ones [20], which leads to catastrophic forgetting. Besides,
the classification ability on historical data can be easily de-
stroyed by current-stage learning. In the conventional con-
tinual learning methods, a classifier on top of the feature
extractor is employed to perform recognition. As one group
of weights in the classifier is responsible for the prediction
of one specific class, the classifier needs to be expanded
sequentially to make a continual learner able to recognize
novel classes. Moreover, extra replay data is usually re-
quired to reduce the classifier bias and the catastrophic for-
getting of learned features. It is still challenging if we ex-
pect a non-incremental learner, i.e., the trainable parame-
ters of the model do not incrementally increase and no re-
play data is needed to avoid the classifier bias and the catas-
trophic forgetting.
To address the above issues, this paper proposes a con-
tinual learning method named AttriCLIP , which adopts the
frozen encoders of CLIP [19]. It is a typical visual-language
model that conducts image classification by contrasting the
features of images and their descriptive texts. In the face of
increasing instances, we design a prompt tuning scheme for
continual learning. As shown in Fig. 1(b), there are simi-
lar attributes in images with different categories, such as “a
brown-white dog lying on the grass” and “a brown-white
cat lying on the grass”. They belong to different categories
but both have “lying on the grass” and “brown-white” at-
tributes, so the distance between these two images of differ-
ent categories may be close in the feature space. Therefore,
we selectively train different prompts based on the attributes
of the images rather than the categories. In this way, there is
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
3654
Figure 1. (a) Traditional framework for continual learning. The encoder and the classifier are trained by tasks in sequence, some of which
even need extra memory data. In the framework, the model parameters of the current task are fine-tuned from the parameters trained by the
previous last task and then are used for the classification of all seen tasks. The total number of categories the model can classify is fixed in
the classifier. (b) Our proposed AttriCLIP for continual learning. AttriCLIP is based on CLIP, which classifies images by contrasting them
with their descriptive texts. The trainable prompts are selected by the attributes of the current image from a prompt pool. The prompts are
different if the attributes of the image are different. The trained prompts are concatenated with the class name of the image, which serve as
a more accurate supervised signal for image classification than labels.
no problem of knowledge overwriting caused by sequential
training of the same model with increasing tasks.
Specifically, an attribute word bank is constructed as
shown in Fig. 1(b), which consists of a set of (key, prompt)
pairs. The keys represent the local features (attributes) of
images and the prompts represent the descriptive words cor-
responding to the keys. Several prompts are selected ac-
cording to the similarities between their keys and the in-
put image. The selected prompts are trained to capture an
accurate textual description of the attributes of the image.
If images with different labels have similar attributes, it
is also possible to select the same prompts, i.e., the same
prompts can be trained with images of different categories.
Similarly, different prompts can be trained by the images
of the same category. The trained prompts are concate-
nated with the class names and put into the text encoder
to contrast with the image feature from the image encoder.
This process makes our AttriCLIP distinct from all previous
classifier-based frameworks as it serves as a non-incremental learner without the need to store replay data. The goal of
our method is to select the existing attributes of the current
image as the text description in the inference process, to
classify the image. In addition, the structure of AttriCLIP also avoids the growth of classifier parameters as the number of tasks increases, as well as the reliance on memory data that makes traditional continual learning methods inefficient.
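To make the attribute word bank concrete, the sketch below implements a small pool of learnable (key, prompt) pairs with top-k selection by image-key similarity, as described above. The pool size, prompt length, top-k value, and the way selected prompts are flattened before being concatenated with the class-name tokens are our assumptions rather than AttriCLIP's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributePromptBank(nn.Module):
    """Sketch of an attribute word bank: learnable (key, prompt) pairs selected per image."""
    def __init__(self, num_pairs: int, key_dim: int, prompt_len: int, embed_dim: int, topk: int = 3):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_pairs, key_dim))
        self.prompts = nn.Parameter(torch.randn(num_pairs, prompt_len, embed_dim))
        self.topk = topk

    def forward(self, image_feat: torch.Tensor) -> torch.Tensor:
        # image_feat: (B, key_dim) features from the frozen CLIP image encoder
        sim = F.normalize(image_feat, dim=-1) @ F.normalize(self.keys, dim=-1).t()  # (B, num_pairs)
        idx = sim.topk(self.topk, dim=-1).indices      # pick the attributes that match this image
        selected = self.prompts[idx]                   # (B, topk, prompt_len, embed_dim)
        # the flattened attribute prompts are later concatenated with the class-name token
        # embeddings and passed through the frozen text encoder for contrastive classification
        return selected.flatten(1, 2)                  # (B, topk * prompt_len, embed_dim)
```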
The experimental setup of existing continual learning
methods is idealized which divides one dataset into sev-
eral tasks for continual learning. The model can set the
output dimensionality of the classifier according to the to-
tal number of categories in the dataset. In practical appli-
cations, with the continuous accumulation of data, the to-
tal number of categories of samples usually cannot be obtained when the model is established. When the total num-
ber of categories exceeds the preset output dimensionality
of the classifier, the model has to add parameters to the classifier and requires the previous samples for fine-tuning, which
greatly increases the training burden. Therefore, the contin-
ual learning approaches are required to have the ability to
adapt to the categories of freely increasing data, i.e., the
model capacity should not have a category upper limit. In
order to measure such ability of continual learning models,
we propose a Cross-Datasets Continual Learning (CDCL)
setup, which verifies the classification performance of the
model on long-sequence domain-shift tasks. The contribu-
tions of this paper are summarized as follows:
• We establish AttriCLIP, which is a prompt tuning ap-
proach for continual learning based on CLIP. We train
different prompts according to the attributes of images
to avoid knowledge overwriting caused by training the
same model in sequence of classes.
• AttriCLIP contrasts the images and their descriptive
texts based on the learned attributes. This approach
avoids the memory data requirement for fine-tuning
the classifier of increasing size.
• In order to evaluate the performance of the model
on long-sequence domain-shift tasks, we propose a
Cross-Datasets Continual Learning (CDCL) experi-
mental setup. AttriCLIP exhibits excellent perfor-
mance and training efficiency on CDCL.
|
Wu_Pix2map_Cross-Modal_Retrieval_for_Inferring_Street_Maps_From_Images_CVPR_2023
|
Abstract
Self-driving vehicles rely on urban street maps for au-
tonomous navigation. In this paper, we introduce Pix2Map,
a method for inferring urban street map topology directly
from ego-view images, as needed to continually update and
expand existing maps. This is a challenging task, as we
need to infer a complex urban road topology directly from
raw image data. The main insight of this paper is that this
problem can be posed as cross-modal retrieval by learning
a joint, cross-modal embedding space for images and ex-
isting maps, represented as discrete graphs that encode the
topological layout of the visual surroundings. We conduct
our experimental evaluation using the Argoverse dataset
and show that it is indeed possible to accurately retrieve
street maps corresponding to both seen and unseen roads
solely from image data. Moreover, we show that our re-
trieved maps can be used to update or expand existing maps
and even show proof-of-concept results for visual localiza-
tion and image retrieval from spatial graphs.
|
1. Introduction
We propose Pix2Map , a method for inferring road maps
directly from images. More precisely, given camera im-
ages, Pix2Map generates a topological map of the visible
surroundings, represented as a spatial graph. Such maps en-
code both geometric and semantic scene information such
as lane-level boundaries and locations of signs [52] and
serve as powerful priors in virtually all autonomous vehi-
cle stacks. In conjunction with on-the-fly sensory measure-
ments from lidar or camera, such maps can be used for lo-
calization [3] and path planning [39]. As map maintenance
and expansion to novel areas are challenging and expen-
sive, often requiring manual effort [38, 50], automated map
maintenance and expansion has been gaining interest in the
community [9, 10, 27, 32, 33, 35, 38, 40].
Why is it hard? To estimate urban street maps, we need to
Figure 1. Illustration of our proposed Pix2Map for cross-modal retrieval. Given unseen 360° ego-view images collected from seven ring cameras (left), our Pix2Map predicts the local street map by retrieving from the existing street map library (right), represented as an adjacency matrix. The local street maps
can be further used for global high-definition map maintenance
(top).
learn to map continuous images from ring cameras to dis-
crete graphs with varying numbers of nodes and topology
in bird’s eye view (BEV). Prior works that estimate road
topology from monocular images first process images using
Convolutional Neural Networks or Transformers to extract
road lanes and markings [9] or road centerlines [10] from
images. These are used in conjunction with recurrent neural
networks for the generation of polygonal structures [12] or
heuristic post-processing [33] to estimate a spatial graph in
BEV . This is a very difficult learning problem: such meth-
ods need to jointly learn to estimate a non-linear mapping
from image pixels to BEV , as well as to estimate the road
layout and learn to generate a discrete spatial graph.
Pix2Map. Instead, our core insight is to simply sidestep
the problem of graph generation and 3D localization from
monocular images by recasting Pix2Map as a cross-modal
retrieval task: given a set of test-time ego-view images, we
(i) compute their visual embedding and then (ii) retrieve
a graph with the closest graph embedding in terms of co-
sine similarity. Given recent multi-city autonomous vehi-
cle datasets [13], it is straightforward to construct pairs of
ego-view images and street maps, both for training and test-
ing. We train image and graph encoders to operate in the
same embedding space, making use of recent techniques
for cross-modal contrastive learning [44]. Our key techni-
cal contribution is a novel but simple graph encoder , based
on sequential transformers from the language community
(i.e., BERT [18]) that extract fixed-dimensional embeddings
from street maps of arbitrary size and topology.
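The training and retrieval steps follow the standard cross-modal contrastive recipe described above; the sketch below shows a generic symmetric InfoNCE loss over paired (image, graph) embeddings and cosine-similarity retrieval against a graph library. The temperature and function names are ours, and the image and graph encoders are assumed to be given.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(img_emb: torch.Tensor, graph_emb: torch.Tensor, temp: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of paired (image, graph) embeddings."""
    img = F.normalize(img_emb, dim=-1)
    gra = F.normalize(graph_emb, dim=-1)
    logits = img @ gra.t() / temp                      # (B, B) similarity of every image to every graph
    targets = torch.arange(img.shape[0], device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def retrieve_map(img_emb: torch.Tensor, graph_library: torch.Tensor) -> torch.Tensor:
    """Retrieve the index of the closest street-map graph by cosine similarity."""
    sims = F.normalize(img_emb, dim=-1) @ F.normalize(graph_library, dim=-1).t()
    return sims.argmax(dim=-1)
```

Because retrieval only needs graph embeddings, the library can be extended with unpaired graphs that lack camera data, as noted above.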
In fact, we find that even naive nearest-neighbor retrieval
performs comparably to leading techniques for map gener-
ation [9, 10], i.e., returning the graph paired with the best-
matching image in the training set. Moreoever, we demon-
strate that cross-modal retrieval via Pix2Map performs even
better due to its ability to learn graph embeddings that reg-
ularize the output space of graphs. In addition, cross-modal
retrieval has the added benefit of allowing one to expand
the retrieval graph library with unpaired graphs that lack
camera data, leveraging the insight that retrieval need not
be limited to the same (image, graph) training pairs used
for learning the encoders. This suggests that Pix2Map can
be further improved with augmented road graph topologies
that capture potential road graph updates (for which paired
visual data might not yet exist). Beyond mapping, we show
pilot experiments for visual localization, and the inverse
method, Map2Pix , which retrieves a close-matching image
from an image library given a graph. While not our pri-
mary focus, such approaches may be useful for generating
photorealistic simulated worlds [40].
We summarize our main contributions as follows: We
(i) show that dynamic street map construction from cam-
eras can be posed as a cross-modal retrieval task and pro-
pose a contrastive image-graph model based on this fram-
ing. Building on recent advances in multimodal represen-
tation learning, we train a graph encoder and an image en-
coder with a shared latent space. We (ii) demonstrate em-
pirically that this approach is effective and perform ablation
studies to highlight the impacts of architectural decisions.
Our approach outperforms existing graph generation meth-
ods from image cues by a large margin. We (iii) further
show that it is possible to retrieve similar graphs to those in
previously unseen areas without access to the ground truth
graphs for those areas, and demonstrate the generalization
ability to novel observations.
|
Wu_SCoDA_Domain_Adaptive_Shape_Completion_for_Real_Scans_CVPR_2023
|
Abstract
3D shape completion from point clouds is a challeng-
ing task, especially from scans of real-world objects. Con-
sidering the paucity of 3D shape ground truths for real
scans, existing works mainly focus on benchmarking this
task on synthetic data, e.g. 3D computer-aided design mod-
els. However, the domain gap between synthetic and real
data limits the generalizability of these methods. Thus, we
propose a new task, SCoDA, for the domain adaptation of
real scan shape completion from synthetic data. A new
dataset, ScanSalon, is contributed with a bunch of elaborate
3D models created by skillful artists according to scans. To
address this new task, we propose a novel cross-domain
feature fusion method for knowledge transfer and a novel
volume-consistent self-training framework for robust learn-
ing from real data. Extensive experiments prove our method
is effective, bringing an improvement of 6%∼7% in mIoU.
|
1. Introduction
Shape completion and reconstruction from scans is a
practical 3D digitization task that is of great significance in applications of virtual and augmented reality. It takes a
scanned point cloud as input and aims to recover the 3D
shape of the target object. The completion of real scans
is challenging for the poor quality of point clouds and the
deficiency of 3D shape ground truths. Existing methods ex-
ploit synthetic data, e.g., 3D computer-aided design (CAD)
models to alleviate the demand for real object shapes. For
example, authors of [18, 26, 35] simulate the scanning pro-
cess to obtain point clouds from CAD models with paired
ground truth shapes to train learning-based reconstruction
models. However, there still exist distinctions between the
simulated and real scan because the latter is scanned from
a real object with complex scanning noise and occlusion,
which limits the generalization quality.
Considering the underexploration in this field, we
propose a new task, SCoDA, Domain Adaptive Shape
Completion, that aims to transfer the knowledge from the
synthetic domain with rich clean shapes (source domain)
into the shape completion of real scans (target domain), as
illustrated in Fig. 1. To this end, we, for the first time, build
an object-centric dataset, ScanSalon , which consists of real
Scan s with Shape manua lannotati ons. Our ScanSalon con-
tains a bunch of 3D models that are paired with real scans of
objects in six categories: chair, desk/table, sofa, bed, lamp,
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
17630
and car. The 3D models are manually created with high quality by skillful artists for around 10% of the real scans, which
can serve as the evaluation ground truths of shape comple-
tion or as the few labels for semi-supervised domain adapta-
tion. See Fig. 1 for some examples; details of ScanSalon are presented in Sec. 4.
The main challenge of the proposed SCoDA task lies in
the domain gap between synthetic and real point clouds.
Due to the intrinsic complexity of the scanning process, e.g.,
the scanner parameters and object materials, it is difficult to
simulate the scans in terms of sparsity, noise level, etc.
More importantly, real scans are usually incomplete result-
ing from the scene layout and object occlusion during scan-
ning, which can hardly be simulated. Thus, we propose a
novel domain adaptive shape completion approach to trans-
fer the rich knowledge from the synthetic domain to the real
one. At first, the reconstruction module in our approach is
based on an implicit function for continuous shape com-
pletion [18, 54, 58]. For an effective transfer, we observe
that although the local patterns of real scans ( e.g., noise,
sparsity, and incompleteness) are distinct from the simu-
lated synthetic ones, the global topology or structure in a
same category is usually similar between the synthetic and
real data, for example, a chair from either the synthetic or
real domain usually consists of a seat, a backrest, legs, etc.
(see Fig. 1). In other words, the global topology is more
likely to be domain invariant, while the local patterns are
more likely to be domain specific. Accordingly, we propose
a cross-domain feature fusion (CDFF) module to combine
the global features and local features learned in the syn-
thetic and real domain, respectively, which helps recover
both fine details and global structures in the implicit recon-
struction stage. Moreover, a novel volume-consistent self-
training (VCST) framework is developed to encourage self-
supervised learning from the target data. Specifically, we
create two views of real scans by dropping different clus-
ters of points to produce incompleteness of different extent,
and the model is forced to make consistent implicit predic-
tions at each spatial volume, which encourages the model’s
robustness to the incompleteness of real scans.
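A minimal sketch of the volume-consistency idea is given below: two incomplete views of the same real scan are created by dropping points, and the implicit predictions of the two views are forced to agree at the same query locations. The cluster-dropping heuristic, the sigmoid/MSE consistency measure, and the assumed model interface are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def vcst_loss(model, scan: torch.Tensor, query_pts: torch.Tensor, drop_ratio: float = 0.3) -> torch.Tensor:
    """Sketch of a volume-consistency objective for self-training on real scans.
    scan:      (N, 3) real scan point cloud
    query_pts: (Q, 3) query locations where the implicit occupancy is evaluated
    model(scan, query_pts) is assumed to return per-query occupancy logits of shape (Q,)."""
    def drop_random_cluster(points):
        # crude stand-in for structured incompleteness: remove points near a random seed
        seed = points[torch.randint(points.shape[0], (1,))]
        dist = torch.cdist(points, seed).squeeze(-1)
        keep = dist > torch.quantile(dist, drop_ratio)
        return points[keep]

    view_a = drop_random_cluster(scan)
    view_b = drop_random_cluster(scan)
    occ_a = model(view_a, query_pts)
    occ_b = model(view_b, query_pts)
    # force the two incomplete views to agree at every queried volume location
    return F.mse_loss(torch.sigmoid(occ_a), torch.sigmoid(occ_b))
```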
To construct a benchmark on the proposed new task, we
implement some existing solutions to related tasks as base-
lines, and develop extensive experiments for the baselines
and our method on ScanSalon . These experiments also
demonstrate the effectiveness of the proposed method.
In summary, our key contributions are four-fold:
• We propose a new task, SCoDA , namely domain
adaptive shape completion for real scans; A dataset
ScanSalon that contains 800 elaborate 3D models
paired with real scans in 6 categories is contributed.
• A novel cross-domain feature fusion module is de-
signed to combine the knowledge of global shapes and
local patterns learned in the synthetic and real domains, respectively. Such a feature fusion manner may also inspire works in the 2D domain adaptation field.
• A volume-consistent self-training framework is pro-
posed to improve the robustness of shape completion
to the complex incompleteness of real scans.
• A benchmark with multiple methods evaluated is con-
structed for the task of SCoDA based on ScanSalon ;
Extensive experiments also demonstrate the superior-
ity of the proposed method.
|
Xu_CXTrack_Improving_3D_Point_Cloud_Tracking_With_Contextual_Information_CVPR_2023
|
Abstract
3D single object tracking plays an essential role in many
applications, such as autonomous driving. It remains a
challenging problem due to the large appearance varia-
tion and the sparsity of points caused by occlusion and lim-
ited sensor capabilities. Therefore, contextual information
across two consecutive frames is crucial for effective ob-
ject tracking. However, points containing such useful in-
formation are often overlooked and cropped out in existing
methods, leading to insufficient use of important contextual
knowledge. To address this issue, we propose CXTrack, a
novel transformer-based network for 3D object tracking,
which exploits ConteXtual information to improve the track-
ing results. Specifically, we design a target-centric trans-
former network that directly takes point features from two
consecutive frames and the previous bounding box as in-
put to explore contextual information and implicitly propa-
gate target cues. To achieve accurate localization for ob-
jects of all sizes, we propose a transformer-based local-
ization head with a novel center embedding module to dis-
tinguish the target from distractors. Extensive experiments
on three large-scale datasets, KITTI, nuScenes and Waymo
Open Dataset, show that CXTrack achieves state-of-the-art
tracking performance while running at 34 FPS.
|
1. Introduction
Single Object Tracking (SOT) has been a fundamental
task in computer vision for decades, aiming to keep track
of a specific target across a video sequence, given only its
initial status. In recent years, with the development of 3D
data acquisition devices, it has drawn increasing attention
for using point clouds to solve various vision tasks such as
object detection [7, 12, 14, 15, 18] and object tracking [20,
29, 31–33]. In particular, much progress has been made on
point cloud-based object tracking for its huge potential in
applications such as autonomous driving [11,30]. However,
it remains challenging due to the large appearance variation
Figure 1. Comparison of various 3D SOT paradigms. Previous
methods crop the target from the frames to specify the region of
interest, which largely overlook contextual information around the
target. On the contrary, our proposed CXTrack fully exploits con-
textual information to improve the tracking results.
of the target and the sparsity of 3D point clouds caused by
occlusion and limited sensor resolution.
Existing 3D point cloud-based SOT methods can be cat-
egorized into three main paradigms, namely SC3D, P2B
and motion-centric, as shown in Fig. 1. As a pioneering
work, SC3D [6] crops the target from the previous frame,
and compares the target template with a potentially large
number of candidate patches generated from the current
frame, which consumes much time. To address the effi-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
1084
ciency problem, P2B [20] takes the cropped target tem-
plate from the previous frame as well as the complete
search area in the current frame as input, propagates tar-
get cues into the search area and then adopts a 3D re-
gion proposal network [18] to predict the current bound-
ing box. P2B reaches a balance between performance and
speed. Therefore many follow-up works adopt the same
paradigm [3, 8, 9, 22, 29, 31, 33]. However, both SC3D and
P2B paradigms overlook the contextual information across
two consecutive frames and rely entirely on the appear-
ance of the target. As mentioned in previous work [32],
these methods are sensitive to appearance variation caused
by occlusions and tend to drift towards intra-class distrac-
tors. To this end, M2-Track [32] introduces a novel motion-
centric paradigm, which directly takes point clouds from
two frames without cropping as input, and then segments
the target points from their surroundings. After that, these
points are cropped and the current bounding box is es-
timated by explicitly modeling motion between the two
frames. Hence, the motion-centric paradigm still works on
cropped patches that lack contextual information in later lo-
calization. In short, none of these methods could fully uti-
lize the contextual information around the target to predict
the current bounding box, which may degrade tracking per-
formance due to the existence of large appearance variation
and widespread distractors.
To address the above concerns, we propose a novel
transformer-based tracker named CXTrack for 3D SOT,
which exploits contextual information across two consecu-
tive frames to improve the tracking performance. As shown
in Fig. 1, different from paradigms commonly adopted
by previous methods, CXTrack directly takes point clouds
from the two consecutive frames as input, specifies the tar-
get of interest with the previous bounding box and predicts
the current bounding box without any cropping, largely pre-
serving contextual information. We first embed local geo-
metric information of the two point clouds into point fea-
tures using a shared backbone network. Then we integrate
the targetness information into the point features accord-
ing to the previous bounding box and adopt a target-centric
transformer to propagate the target cues into the current
frame while exploring contextual information in the sur-
roundings of the target. After that, the enhanced point fea-
tures are fed into a novel localization head named X-RPN
to obtain the final target proposals. Specifically, X-RPN
adopts a local transformer [25] to model point feature in-
teractions within the target, which achieves a better balance
between handling small and large objects compared with
other localization heads. To distinguish the target from dis-
tractors, we incorporate a novel center embedding module
into X-RPN, which embeds the relative target motion be-
tween two frames for explicit motion modeling. Extensive
experiments on three popular tracking datasets demonstratethat CXTrack significantly outperforms the current state-of-
the-art methods by a large margin while running at real-time
(34 FPS) on a single NVIDIA RTX3090 GPU.
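The paradigm described above (tagging backbone point features with targetness derived from the previous box, jointly processing both frames with a transformer, and predicting the current box) can be sketched as follows. The module sizes, the simple pooled box head, and the omission of X-RPN and the center embedding module are simplifications of ours, not the actual CXTrack implementation.

```python
import torch
import torch.nn as nn

class TargetCentricTracker(nn.Module):
    """High-level sketch of the tracking paradigm; not the real CXTrack architecture."""
    def __init__(self, feat_dim: int = 128, num_layers: int = 4, num_heads: int = 4):
        super().__init__()
        self.targetness_embed = nn.Embedding(2, feat_dim)    # 0: background, 1: inside previous box
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.loc_head = nn.Linear(feat_dim, 7)                # placeholder head: (x, y, z, w, l, h, yaw)

    def forward(self, feats_prev, feats_curr, inside_prev_box):
        # feats_*: (B, N, C) backbone point features; inside_prev_box: (B, N) bool mask for the previous frame
        tagged_prev = feats_prev + self.targetness_embed(inside_prev_box.long())
        tokens = torch.cat([tagged_prev, feats_curr], dim=1)  # contextual information from both frames
        tokens = self.transformer(tokens)
        curr_tokens = tokens[:, feats_prev.shape[1]:]
        return self.loc_head(curr_tokens.mean(dim=1))         # crude pooled box prediction (X-RPN not shown)
```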
In short, our contributions can be summarized as: (1)
a new paradigm for the real-time 3D SOT task, which
fully exploits contextual information across consecutive
frames to improve the tracking accuracy; (2) CXTrack:
a transformer-based tracker that employs a target-centric
transformer architecture to propagate targetness informa-
tion and exploit contextual information; and (3) X-RPN: a
localization head that is robust to intra-class distractors and
achieves a good balance between small and large targets.
|
Wang_Balancing_Logit_Variation_for_Long-Tailed_Semantic_Segmentation_CVPR_2023
|
Abstract
Semantic segmentation usually suffers from a long-tail
data distribution. Due to the imbalanced number of samples
across categories, the features of those tail classes may get
squeezed into a narrow area in the feature space. Towards
a balanced feature distribution, we introduce category-wise
variation into the network predictions in the training phase
such that an instance is no longer projected to a feature
point, but a small region instead. Such a perturbation
is highly dependent on the category scale, which appears
as assigning smaller variation to head classes and larger
variation to tail classes. In this way, we manage to close
the gap between the feature areas of different categories,
resulting in a more balanced representation. It is note-
worthy that the introduced variation is discarded at the
inference stage to facilitate a confident prediction. Although
with an embarrassingly simple implementation, our method
manifests itself in strong generalizability to various datasets
and task settings. Extensive experiments suggest that our
plug-in design lends itself well to a range of state-of-the-art
approaches and boosts the performance on top of them.1
|
1. Introduction
The success of deep models in semantic segmenta-
tion [12, 58, 71, 109] benefits from large-scale datasets.
However, popular datasets for segmentation, such as PAS-
CAL VOC [24] and Cityscapes [18], usually follow a
long-tail distribution, where some categories may have far
fewer samples than others. Considering the particularity of
this task, which targets assigning labels to pixels instead
of images, it is quite difficult to balance the distribution
1Code: https://github.com/grantword8/BLV .
Figure 1. Illustration of logit variation from the feature space,
where each point corresponds to an instance and different colors
stand for different categories. (a) Without logit variation, the
features of tail classes ( e.g., the blue one) may get squeezed into
a narrow area. (b) After introducing logit variation, which is
controlled by the category scale ( i.e., number of training samples
belonging to a particular category), we expand each feature point
to a feature region with random perturbation, resulting in a more
category-balanced feature distribution.
from the aspect of data collection. Taking the scenario
of autonomous driving as an example, a bike is typically
tied to a smaller image region ( i.e., fewer pixels) than a
car, and trains appear more rarely than pedestrians in a
city. Therefore, learning a decent model from long tail data
distributions becomes critical.
A common practice to address such a challenge is to
make better use of the limited samples from tail classes. For
this purpose, previous attempts either balance the sample
quantity ( e.g., oversample the tail classes when organizing
the training batch) [8, 28, 36, 46, 75], or balance the per-
sample importance ( e.g., assign the training penalties re-
garding tail classes with higher loss weights) [19,51,67,69].
Given existing advanced techniques, however, performance
degradation can still be observed in those tail categories.
This work provides a new perspective on improving
long-tailed semantic segmentation. Recall that, in modern
pipelines based on neural networks [12,14,58,92,108,110],
instances are projected to representative features before
being categorized into a certain class. We argue that the features
of tail classes may get squeezed into a narrow area in the
feature space, as the blue region shown in Fig. 1a, because
of the scarcity of samples. To balance the feature distribution, we
propose a simple yet effective approach via introducing bal-
ancing logit variation (BLV) into the network predictions.
Concretely, we perturb each predicted logit with a randomly
sampled noise during training. That way, each instance can
be seen as projected to a feature region, as shown in Fig. 1b,
whose radius is dependent on the noise variance. We
then propose to balance the variation by applying smaller
variance to head classes and larger variance to tail classes so
as to close the feature area gap between different categories.
This newly introduced variation can be viewed as a special
augmentation and discarded in the inference phase to ensure
a reliable prediction.
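As a rough illustration of this idea (a sketch under our own assumptions about the noise schedule, not the released implementation), the perturbation can be realized as zero-mean Gaussian noise whose per-class standard deviation shrinks with the number of training samples and is applied only during training:

```python
import torch

def balanced_logit_variation(logits, class_counts, training=True, scale=1.0):
    """Perturb per-class logits with noise whose variance is larger for tail
    classes (few samples) and smaller for head classes (many samples).
    logits: (B, C, H, W) segmentation logits; class_counts: (C,) sample counts.
    The inverse-frequency schedule below is an illustrative assumption."""
    if not training:
        return logits  # the variation is discarded at inference
    freq = class_counts.float() / class_counts.sum()
    sigma = scale * (1.0 - freq)          # tail classes get a larger sigma (assumption)
    noise = torch.randn_like(logits) * sigma.view(1, -1, 1, 1)
    return logits + noise

# toy usage: 4 classes, class index 3 is a tail class
logits = torch.randn(2, 4, 8, 8)
counts = torch.tensor([10000, 8000, 5000, 50])
perturbed = balanced_logit_variation(logits, counts)
print(perturbed.shape)  # torch.Size([2, 4, 8, 8])
```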
We evaluate our approach on three different settings of
semantic segmentation, including fully supervised [21, 90,
101, 109], semi-supervised [13, 87, 111, 115], and unsu-
pervised domain adaptation [23, 52, 107, 113], where we
improve the baselines consistently. We further show that
our method works well with various state-of-the-art frame-
works [2, 41, 42, 87, 92, 108] and boosts their performance,
demonstrating its strong generalizability.
|
Wei_Inferring_and_Leveraging_Parts_From_Object_Shape_for_Improving_Semantic_CVPR_2023
|
Abstract
Despite the progress in semantic image synthesis, it re-
mains a challenging problem to generate photo-realistic
parts from input semantic map. Integrating part segmen-
tation map can undoubtedly benefit image synthesis, but is
bothersome and inconvenient to be provided by users. To
improve part synthesis, this paper presents a method to infer Parts
from Object ShapE (iPOSE) and leverage it for improving
semantic image synthesis. However, although several part seg-
mentation datasets are available, part annotations are still
not provided for many object categories in semantic image
synthesis. To circumvent this, we resort to a few-shot regime
to learn a PartNet for predicting the object part map with
the guidance of pre-defined support part maps. PartNet
can be readily generalized to handle a new object cate-
gory when a small number (e.g., 3) of support part maps
for this category are provided. Furthermore, part semantic
modulation is presented to incorporate both inferred part
map and semantic map for image synthesis. Experiments
show that our iPOSE not only generates objects with rich
part details, but also enables flexible control of the image synthe-
sis. Moreover, our iPOSE performs favorably against the
state-of-the-art methods in terms of quantitative and qual-
itative evaluation. Our code will be publicly available at
https://github.com/csyxwei/iPOSE .
|
1. Introduction
Semantic image synthesis allows to generate an image
with input semantic map, which provides significant flexi-
bility for controllable image synthesis. Recently, it has at-
tracted intensive attention due to its wide applications, e.g.,
content generation and image editing [28,36], and extensive
benefits for many vision tasks [33].
Although rapid progress has been made in semantic im-
age synthesis [10, 25, 28, 30, 33, 36, 43–45], it is still chal-
lenging to generate photo-realistic parts from the semantic
map. Most methods [28, 33] tackle semantic image syn-
thesis with a spatially adaptive normalization architecture,
while other frameworks are also explored, such as Style-
GAN [19] and diffusion model [43, 45], etc. However, due
to the lack of fine-grained guidance ( e.g., object part infor-
mation), these methods only exploited the object-level se-
mantic information for image synthesis, and usually failed
to generate photo-realistic object parts (see the top row of
Fig. 1(c)). SAFM [25] adopted the shape-aware position
descriptor to exploit the pixel’s position feature inside the
object. However, as illustrated in Fig. 1(a), the obtained de-
scriptor tends to be only region-aware instead of part-aware,
leading to limited improvement in synthesized results (see
the middle row of Fig. 1(c)).
To improve image parts synthesis, one straightforward
solution is to ask the user to provide part segmentation
map and integrate it into semantic image synthesis dur-
ing training and inference. However, it is bothersome and
inconvenient for users to provide it, especially during in-
ference. Fortunately, with the existing part segmentation
datasets [7], we propose a method to infer Parts from Object
ShapE (iPOSE), and leverage it for improving semantic im-
age synthesis. Specifically, based on these datasets, we first
construct an object part dataset that consists of paired (ob-
ject shape, object part map) to train a part prediction net-
work (PartNet). Besides, although the part dataset con-
tains part annotations for several common object categories,
many object categories in semantic image synthesis are still
not covered. To address this issue, we introduce a few-shot
mechanism to our proposed PartNet. As shown in Fig. 1(b),
for each category, a few annotated object part maps are se-
lected as pre-defined supports to guide the part prediction.
A cross-attention block is adopted to aggregate the part infor-
mation from the support part maps. Benefiting from the few-shot
setting, our PartNet can be readily generalized to handle a
new object category not in the object part dataset. Partic-
ularly, we can manually label k object part maps for this
category ( e.g., 3) as supports, and use them to infer the part
map without fine-tuning the part prediction model.
By processing each object in the semantic map, we ob-
tain the part map. With that, we further present a part se-
mantic modulation (PSM) residual block to incorporate the
part map with semantic map to improve the image syn-
thesis. Specifically, the part map is first used to modulate
the normalized activations spatially to inject the part struc-
ture information. Then, to inject the semantic texture in-
formation, the semantic map and a randomly sampled 3D
noise are further used to modulate the features with the
SPADE module [28]. We find that performing part modula-
tion and semantic modulation sequentially disentangles the
structure and texture for image synthesis, and is also benefi-
cial for generating images with realistic parts. Additionally,
to facilitate model training, a global adversarial loss and an
object-level CLIP style loss [54] are further introduced to
encourage the model to generate photo-realistic images.
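To make the two-step modulation described above concrete, the following is a minimal sketch under our own simplifications (the channel sizes and single-convolution parameter predictors are assumptions, and this is not the released PSM block): normalized features are first spatially modulated by the part map and then by the semantic map, in the spirit of SPADE-style conditioning.

```python
import torch
import torch.nn as nn

class TwoStepModulation(nn.Module):
    """Sketch: part-map modulation (structure) followed by semantic-map
    modulation (texture). All layer choices here are illustrative."""
    def __init__(self, feat_ch, part_ch, sem_ch):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feat_ch, affine=False)
        self.part_gamma = nn.Conv2d(part_ch, feat_ch, 3, padding=1)
        self.part_beta = nn.Conv2d(part_ch, feat_ch, 3, padding=1)
        self.sem_gamma = nn.Conv2d(sem_ch, feat_ch, 3, padding=1)
        self.sem_beta = nn.Conv2d(sem_ch, feat_ch, 3, padding=1)

    def forward(self, x, part_map, sem_map):
        h = self.norm(x)
        # inject part structure first, then semantic texture
        h = h * (1 + self.part_gamma(part_map)) + self.part_beta(part_map)
        h = h * (1 + self.sem_gamma(sem_map)) + self.sem_beta(sem_map)
        return h

# toy usage
x = torch.randn(1, 64, 32, 32)
part = torch.randn(1, 8, 32, 32)   # part map channels (assumed)
sem = torch.randn(1, 35, 32, 32)   # semantic map (+ noise) channels (assumed)
print(TwoStepModulation(64, 8, 35)(x, part, sem).shape)  # torch.Size([1, 64, 32, 32])
```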
Experiments show that our iPOSE can generate more
photo-realistic parts from the given semantic map, while
having the flexibility to control the generated objects (as
illustrated in Fig. 1 (c)(d)). Both quantitative and quali-
tative evaluations on different datasets further demonstrate
that our method performs favorably against the state-of-the-
art methods.
The contributions of this work can be summarized as:
• We propose a method iPOSE to infer parts from object
shape and leverage them to improve semantic image
synthesis. Particularly, a PartNet is proposed to predict
the part map based on a few support part maps, which
can be easily generalized to new object categories.
• A part semantic modulation Resblock is presented
to incorporate the predicted part map and semantic map for image synthesis. And global adversarial and
object-level CLIP style losses are further introduced to
generate photo-realistic images.
• Experimental results show that our iPOSE performs fa-
vorably against state-of-the-art methods and can gen-
erate more photo-realistic results with rich part details.
|
Yang_Panoptic_Video_Scene_Graph_Generation_CVPR_2023
|
Abstract
Towards building comprehensive real-world visual per-
ception systems, we propose and study a new problem
called panoptic scene graph generation (PVSG). PVSG
is related to the existing video scene graph generation
(VidSGG) problem, which focuses on temporal interactions
between humans and objects localized with bounding boxes
in videos. However, the limitation of bounding boxes in
detecting non-rigid objects and backgrounds often causes
VidSGG systems to miss key details that are crucial for
comprehensive video understanding. In contrast, PVSG re-
quires nodes in scene graphs to be grounded by more pre-
cise, pixel-level segmentation masks, which facilitate holis-
tic scene understanding. To advance research in this new
area, we contribute a high-quality PVSG dataset, which
consists of 400 videos (289 third-person + 111 egocentric
videos) with totally 150K frames labeled with panoptic seg-
mentation masks as well as fine, temporal scene graphs. We also provide a variety of baseline methods and share useful
design practices for future work.
|
1. Introduction
In the last several years, scene graph generation has re-
ceived increasing attention from the computer vision com-
munity [15, 16, 24, 48–51]. Compared with object-centric
labels like “person” or “bike,” or precise bounding boxes
commonly seen in object detection, scene graphs provide
far richer information in images, such as “a person riding
a bike,” which capture both objects and the pairwise rela-
tionships and/or interactions. A recent trend in the scene
graph community is the shift from static, image-based scene
graphs to temporal, video-level scene graphs [1, 41, 49].
This has marked an important step towards building more
comprehensive visual perception systems.
Compared with individual images, videos clearly contain
more information due to the additional temporal dimension,
Table 1. Comparison between the PVSG dataset and some related datasets . Specifically, we choose three video scene graph generation
(VidSGG) datasets, three video panoptic segmentation (VPS) datasets, and two egocentric video datasets—one for short-term action antic-
ipation (STA) while the other for video object segmentation (VOS). Our PVSG dataset is the first long-video dataset with rich annotations
of panoptic segmentation masks and temporal scene graphs.
Dataset Task #Video Video Hours Avg. Len. View #ObjCls #RelCls Annotation # Seg Frame Year Source
ImageNet-VidVRD [35] VidSGG 1,000 - - 3rd 35 132 Bounding Box - 2017 ILVSRC2016-VID [33]
Action Genome [15] VidSGG 10,000 99 35s 3rd 80 50 Bounding Box - 2019 YFCC100M [42]
VidOR [34] VidSGG 10,000 82 30s 3rd 35 25 Bounding Box - 2020 Charades [36]
Cityscapes-VPS [17] VPS 500 - - vehicle 19 - Panoptic Seg. 3K 2020 -
KITTI-STEP [45] VPS 50 - - vehicle 19 - Panoptic Seg. 18K 2021 -
VIP-Seg [28] VPS 3,536 5 5s 3rd 124 - Panoptic Seg. 85K 2022 -
Ego4D-STA [12] STA 1,498 111 264s ego - - Bounding Box - 2022 -
VISOR [8] VOS 179 36 720s ego 257 2 Semantic Seg. 51K 2022 EPIC-KITCHENS [7]
PVSG PVSG 400 9 77s 3rd + ego 126 57 Panoptic Seg. 150K 2023 VidOR + Ego4D + EPIC-KITCHENS
which largely facilitates high-level understanding of tempo-
ral events (e.g., actions [14]) and is useful for reasoning [59]
and identifying causality [10] as well. However, we ar-
gue that current video scene graph representations based on
bounding boxes still fall short of human visual perception
due to the lack of granularity —which can be addressed with
panoptic segmentation masks . This is echoed by the evo-
lutionary path in visual perception research: from image-
level labels (i.e., classification) to spatial locations (i.e., ob-
ject detection) to more fine-grained, pixel-wise masks (i.e.,
panoptic segmentation [20]).
In this paper, we take scene graphs to the next level by
proposing panoptic video scene graph generation (PVSG) , a
new problem that requires each node in video scene graphs
to be grounded by a pixel-level segmentation mask. Panop-
tic video scene graphs can solve a critical issue exposed
in bounding box-based video scene graphs: both things
and stuff classes (i.e., amorphous regions containing water,
grass, etc.) can be well covered—the latter are crucial for
understanding contexts but cannot be localized with bound-
ing boxes. For instance, if we switch from panoptic video
scene graphs to bounding box-based scene graphs for the
video in Figure 1, some nontrivial relations useful for con-
text understanding like “adult-1 standing on/in ground” and
“adult-2 standing on/in water” will be missing. It is also
worth noting that bounding box-based video scene graph
annotations, at least in current research [15], often miss
small but important details, such as the “candles” on cakes.
To help the community progress in this new area, we
contribute a high-quality PVSG dataset, which consists of
400 videos among which 289 are third-person videos and
111 are egocentric videos. Each video contains an average
length of 76.5 seconds. In total, 152,958 frames are labeled
with fine panoptic segmentation and temporal scene graphs.
There are 126 object classes and 57 relation classes. A more
detailed comparison between our PVSG dataset and some
related datasets is shown in Table 1.
To solve the PVSG problem, we propose a two-stage
framework: the first stage produces a set of features for each mask-based instance tracklet while the second stage gener-
ates video-level scene graphs based on tracklets’ features.
We study two design choices for the first stage: 1) a panop-
tic segmentation model + a tracking module; 2) an end-to-
end video panoptic segmentation model. For the second
scene graph generation stage, we provide four different im-
plementations covering both convolution and Transformer-
based methods.
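A minimal sketch of what the second stage's interface could look like under simplifying assumptions (the pairwise classifier and the feature dimension are illustrative, not any of the benchmarked implementations): given one pooled feature vector per mask-based tracklet from the first stage, relations are scored for every ordered pair of tracklets.

```python
import torch
import torch.nn as nn

class PairwiseRelationHead(nn.Module):
    """Sketch of a stage-two relation classifier: score relations between
    every ordered (subject, object) pair of tracklets from their features."""
    def __init__(self, feat_dim=256, num_relations=57):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 512), nn.ReLU(), nn.Linear(512, num_relations)
        )

    def forward(self, tracklet_feats):
        # tracklet_feats: (N, feat_dim), one vector per tracklet
        n = tracklet_feats.size(0)
        subj = tracklet_feats.unsqueeze(1).expand(n, n, -1)
        obj = tracklet_feats.unsqueeze(0).expand(n, n, -1)
        pair = torch.cat([subj, obj], dim=-1)          # (N, N, 2*feat_dim)
        return self.classifier(pair)                   # (N, N, num_relations) logits

# toy usage: 5 tracklets, 57 relation classes as in the PVSG dataset
print(PairwiseRelationHead()(torch.randn(5, 256)).shape)  # torch.Size([5, 5, 57])
```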
In summary, we make the following contributions to the
scene graph community:
1. A new problem: We identify several issues associated
with current research in scene graph generation and
propose a new problem, which combines video scene
graph generation with panoptic segmentation for holis-
tic video understanding.
2. A new dataset: A high-quality dataset with fine, tem-
poral scene graph annotations and panoptic segmenta-
tion masks is proposed to advance the area of PVSG.
3. New methods and a benchmark: We propose a two-
stage framework to address the PVSG problem and
benchmark a variety of design ideas, from which valu-
able insights on good design practices are drawn for
future work.
|
Yang_Language_in_a_Bottle_Language_Model_Guided_Concept_Bottlenecks_for_CVPR_2023
|
Abstract
Concept Bottleneck Models (CBM) are inherently inter-
pretable models that factor model decisions into human-
readable concepts. They allow people to easily understand
why a model is failing, a critical feature for high-stakes ap-
plications. CBMs require manually specified concepts and
often under-perform their black box counterparts, preventing
their broad adoption. We address these shortcomings and
are first to show how to construct high-performance CBMs
without manual specification of similar accuracy to black
box models. Our approach, Language Guided Bottlenecks
(LaBo), leverages a language model, GPT-3, to define a
large space of possible bottlenecks. Given a problem domain,
LaBo uses GPT-3 to produce factual sentences about cate-
gories to form candidate concepts. LaBo efficiently searches
possible bottlenecks through a novel submodular utility that
promotes the selection of discriminative and diverse informa-
tion. Ultimately, GPT-3’s sentential concepts can be aligned
to images using CLIP , to form a bottleneck layer. Experi-
ments demonstrate that LaBo is a highly effective prior for
concepts important to visual recognition. In the evaluation
with 11 diverse datasets, LaBo bottlenecks excel at few-shot
classification: they are 11.7% more accurate than black
box linear probes at 1 shot and comparable with more data.
Overall, LaBo demonstrates that inherently interpretable
models can be widely applied at similar, or better, perfor-
mance than black box approaches.1
|
1. Introduction
As deep learning systems improve, their applicability
to critical domains is hampered because of a lack of trans-
parency. Efforts to address this have largely focused on
post-hoc explanations [ 47,54,72]. Such explanations can
be problematic because they may be incomplete or unfaith-
ful with respect to the model’s computations [ 49]. Models
1Code and data are available at https://github.com/YueYANG1996/LaBo
Figure 1. Our proposed high-performance Concept Bottleneck
Model alleviates the need for human-designed concepts by prompt-
ing large language models (LLMs) such as GPT-3 [4].
can also be designed to be inherently interpretable, but it is
believed that such models will perform more poorly than
their black box alternatives [ 16]. In this work, we provide
evidence to the contrary. We show how to construct high-
performance interpretable-by-design classifiers by combin-
ing a language model, GPT-3 [ 4], and a language-vision
model, CLIP [ 44].
Our method builds on Concept Bottleneck Models
(CBM) [ 25], which construct predictors through a linear
combination of human-designed concepts. For example, as
seen in Figure 1, a qualified person can design concepts,
such as “nape color,” as intermediate targets for a black box
model before classifying a bird. CBMs provide abstractions
that people can use to understand errors or intervene on,
contributing to increased trust.
Application of CBMs is limited because they require
costly attribute annotations by domain experts and often
under-perform their black box counterparts. In contexts
where CBM performance is competitive with black box al-
ternatives, interpretability properties are sacrificed [ 34,70].
To address both of these challenges, we propose to build
systems that automatically construct CBMs.
OurLanguage Model Guided Concept Bottleneck Model
(LaBo ), Figure 2, allows for the automatic construction of
high-performance CBMs for arbitrary classification prob-
lems without concept annotations. Large language models
Figure 2. We present an overview of our Language-Model-Guided Concept Bottleneck Model (LaBo), which is an interpretable-by-design
image classification system. First, we prompt the large language model (GPT-3) to generate candidate concepts (Sec 3.4). Second, we
employ a submodular function to select concepts from all candidates to construct the bottleneck (Sec 3.2). Third, we apply a pretrained
alignment model (CLIP) to obtain the embeddings of concepts and images, which are used to compute concept scores. Finally, we train a
linear function in which the weight W denotes the concept-class association used to predict targets based on concept scores (Sec 3.3).
(In the figure, concept scores are computed as g(x, C) = x · E_C^T and the final prediction is ŷ = argmax(g(x, C) · σ(W)^T).)
(LLMs) contain significant world knowledge [ 21,42,61],
that can be elicited by inputting a string prefix and allowing
LLMs to complete the string (prompting). For example, in
Figure 1, GPT-3 is prompted about sparrows and completes
with information such as “brown head with white stripes.”
LaBo leverages this by constructing bottlenecks where the
concepts are such GPT-3 generated sentences. Since our
concepts are textual, we use CLIP to score their presence in
an image and form a bottleneck layer out of these scores.
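Concretely, this bottleneck computation reduces to a dot product between image embeddings and concept-sentence embeddings, followed by a learned class-concept weight matrix. The sketch below uses random tensors in place of CLIP outputs and is only an illustration of that flow, not the released code:

```python
import torch
import torch.nn as nn

# stand-ins for CLIP-style, L2-normalized embeddings (assumed dimensions)
num_classes, num_concepts, dim = 10, 500, 512
image_emb = nn.functional.normalize(torch.randn(4, dim), dim=-1)               # x
concept_emb = nn.functional.normalize(torch.randn(num_concepts, dim), dim=-1)  # E_C

concept_scores = image_emb @ concept_emb.t()          # g(x, C): (4, num_concepts)

# learnable class-concept association matrix W, normalized with a softmax
W = nn.Parameter(torch.randn(num_classes, num_concepts))
class_logits = concept_scores @ torch.softmax(W, dim=-1).t()   # (4, num_classes)
pred = class_logits.argmax(dim=-1)
print(pred.shape)  # torch.Size([4])
```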
A key advantage of LaBo is the ability to control the
selection of concepts in the bottleneck by generating can-
didates from the language model. We develop selection
principles targeting both interpretability and classification
accuracy. For example, we prefer smaller bottlenecks that
include shorter sentences that do not include class names.
Furthermore, to maximize performance, we prefer attributes
that CLIP can easily recognize and are highly discriminative.
To account for appearance variation, we select attributes that
cover a variety of information and are not repetitive. We
formulate these factors into a novel sub-modular criterion
that allows us to select good bottlenecks efficiently [ 38].
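Submodular utilities of this kind are commonly maximized with a simple greedy procedure; the following is a generic sketch with a placeholder utility function (the actual criterion combining discriminability, coverage, and recognizability is the one defined in the paper and is not reproduced here):

```python
def greedy_select(candidates, utility, k):
    """Generic greedy maximization of a (sub)modular set utility:
    repeatedly add the candidate with the largest marginal gain."""
    selected = []
    remaining = list(candidates)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda c: utility(selected + [c]) - utility(selected))
        selected.append(best)
        remaining.remove(best)
    return selected

# toy usage with a coverage-style utility over sets of words (illustrative only)
concepts = ["brown head", "white stripes", "black throat", "brown head again"]
utility = lambda S: len(set(w for c in S for w in c.split()))  # rewards diversity
print(greedy_select(concepts, utility, k=2))
```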
We have evaluated LaBo-created bottlenecks on 11 di-
verse image classification tasks, spanning recognition of
common objects [11,26] to skin tumors [64]: fine-grained
types [3,32,39,67], textures [10], actions [59], skin tu-
mors [64], and satellite photographed objects [8].2
2The only dataset specialization we perform is prompt tuning for GPT-3.
Our main finding is that LaBo is a highly effective prior for what
concepts to look for, especially in low data regimes. In eval-
uations comparing with linear probes, LaBo outperforms by
as much as 11.7% at 1-shot and marginally underperforms
given larger data settings. Averaged over many dataset sizes,
LaBo bottlenecks are 1.5% more accurate than linear probes.
In comparison to modifications of CBMs that improve per-
formance by circumventing the bottleneck [ 70], we achieve
similar or better results without breaking the CBM abstrac-
tion. In extensive ablations, we study key trade-offs in bot-
tleneck design and show our selection criteria are crucial and
highlight several other critical design choices.
Human evaluations indicate that our bottlenecks are
largely understandable, visual, and factual. Finally, anno-
tators find our GPT-3 sourced bottlenecks are more factual
and groundable than those constructed from WordNet or
Wikipedia sentences. Overall, our experiments demonstrate
that automatically designed CBMs can be as effective as
black box models while maintaining critical factors con-
tributing to their interpretability.
|
Xu_Zero-Shot_Dual-Lens_Super-Resolution_CVPR_2023
|
Abstract
The asymmetric dual-lens configuration is commonly
available on mobile devices nowadays, which naturally
stores a pair of wide-angle and telephoto images of the
same scene to support realistic super-resolution (SR). Even
on the same device, however, the degradation for model-
ing realistic SR is image-specific due to the unknown ac-
quisition process (e.g., tiny camera motion). In this paper,
we propose a zero-shot solution for dual-lens SR (ZeDuSR),
where only the dual-lens pair at test time is used to learn an
image-specific SR model. As such, ZeDuSR adapts itself to
the current scene without using external training data, and
thus gets rid of generalization difficulty. However, there are
two major challenges to achieving this goal: 1) dual-lens
alignment while keeping the realistic degradation, and 2)
effective usage of highly limited training data. To overcome
these two challenges, we propose a degradation-invariant
alignment method and a degradation-aware training strat-
egy to fully exploit the information within a single dual-lens
pair. Extensive experiments validate the superiority of Ze-
DuSR over existing solutions on both synthesized and real-
world dual-lens datasets. The implementation code is avail-
able at https://github.com/XrKang/ZeDuSR.
|
1. Introduction
Mobile devices such as smartphones are generally
equipped with an asymmetric camera system consisting of
multiple lenses with different focal lengths. As a common
configuration, with a wide-angle lens and a telephoto lens,
one can capture the same scene with different field-of-views
(FoVs). Within the overlapped FoV, the wide-angle and
telephoto images naturally store low-resolution (LR) and
high-resolution (HR) counterparts for learning a realistic
super-resolution (SR) model. This provides a feasible way
to obtain an HR image with a large FoV at the same time,
which is beyond the capability of the original device.
There are a few pioneer works along this direction,
which are referred to as dual-lens/camera/zoomed SR inter-
∗Equal contribution.†Corresponding author.
Figure 1. Degradation kernel (estimated by KernelGAN [ 1]) clus-
tering on wide-angle images from iPhone11 [ 43].
changeably. Beyond traditional methods that simply trans-
fer the telephoto content to the wide-angle view in the
overlapped FoV, recent deep-learning-based methods en-
able resolution enhancement of the full wide-angle image.
Specifically, Wang et al. [43] utilize the overlapped FoV of
dual-lens image pairs to learn a reference-based SR model,
where the wide-angle view is super-resolved by using the
telephoto view as a reference. However, the external train-
ing data are synthesized with the predefined bicubic degra-
dation, which leads to generalization difficulty when the
trained SR model is applied to real devices. To narrow the
domain gap between the training and inference stages, they
further adopt a self-supervised adaptation strategy to fine-
tune the pretrained model on real devices. On the other
hand, Zhang et al. [53] adopt self-supervised learning to
train an SR model directly on a dual-lens dataset to avoid
the predefined degradation, with the assumption of consis-
tent degradation on the same device.
In practice, however, the degradation kernel for model-
ing realistic SR is influenced by not only the camera optics
but also tiny camera motion during the acquisition process
on mobile devices [1,6,36], resulting in the image-specific
degradation for each dual-lens pair, even on the same de-
vice. Figure 1 gives an exemplar analysis, where we es-
timate the degradation kernels of 146 wide-angle images
captured by the dual-lens device iPhone11 [43] and cluster
them using t-SNE [41]. Although these images are captured
by the same device, they exhibit notably different degrada-
tion kernels due to the unknown acquisition process. This
limits the performances of previous dual-lens SR methods
for practical applications, since they assume that the realis-
Figure 2. Overview of dual-lens SR methods and visual comparison of 2× SR on the real-world data.
tic degradation is at least device-consistent.
In this paper, we propose a zero-shot learning solution
for realistic SR on dual-lens devices, termed as ZeDuSR,
which learns an image-specific SR model solely from the
dual-lens pair at test time. As such, ZeDuSR adapts itself
to the current scene under the unknown acquisition process
and gets rid of the generalization difficulty when using ex-
ternal training data. Figure 2 summarizes the main differ-
ences of ZeDuSR from existing dual-lens SR methods with
a real-world reconstruction example. ZeDuSR gives a visu-
ally improved result compared with its competitors, thanks
to the image-specific degradation assumption.
There are two major challenges to achieving the success
of ZeDuSR. First, as also noticed in previous works [43,53],
it is non-trivial to exploit the information of the dual-lens
image pair, due to the spatial misalignment caused by the
physical offset between the two lenses. Moreover, we find
that the alignment process for generating the training data
will introduce additional frequency information, which in-
evitably changes the realistic degradation. To overcome
this challenge, we propose to constrain the alignment pro-
cess in spatial, frequency, and feature domains simulta-
neously to keep the degradation. Specifically, we pro-
pose a degradation-invariant alignment method by leverag-
ing adversarial and contrastive learning. The other chal-
lenge is how to effectively use the highly limited data for
learning an image-specific SR model. To this end, we
design a degradation-aware training strategy to fully ex-
ploit the information within a single dual-lens pair. That
is, we calculate the probability of each location for patch
cropping according to the degradation similarity between
the images before and after alignment. As such, samples
that keep the degradation are assigned a higher probabil-
ity to be selected during training, while the contribution of
degradation-variant samples is reduced.
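A rough sketch of that sampling idea under our own assumptions (how the per-location degradation similarity is computed, and the temperature value, are illustrative choices): locations whose degradation is preserved after alignment receive a higher probability of being chosen as patch centers.

```python
import numpy as np

def patch_sampling_probs(sim_map, temperature=0.1):
    """sim_map: (H, W) per-location degradation similarity in [0, 1] between
    the pre- and post-alignment images (its computation is assumed here).
    Returns a normalized probability map for patch-center sampling."""
    logits = sim_map / temperature
    p = np.exp(logits - logits.max())
    return p / p.sum()

def sample_patch_centers(prob_map, num_patches, rng=np.random.default_rng(0)):
    # draw patch centers according to the degradation-aware probability map
    h, w = prob_map.shape
    idx = rng.choice(h * w, size=num_patches, p=prob_map.ravel())
    return np.stack([idx // w, idx % w], axis=1)   # (num_patches, 2) as (row, col)

# toy usage
sim = np.clip(np.random.rand(64, 64), 0, 1)
centers = sample_patch_centers(patch_sampling_probs(sim), num_patches=8)
print(centers.shape)  # (8, 2)
```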
We evaluate the performance of ZeDuSR on both syn-
thesized and real-world dual-lens datasets. For quantita-
tive evaluation with the HR ground-truth, we adopt a stereo
dataset (Middlebury2021 [35]) and a light field dataset
(HCInew [12], with two views selected) to simulate dual-
lens image pairs with different baselines between the two
lenses, by applying image-specific degradation on one of
the two views. Besides, we also perform the qualita-
tive evaluation on two real-world datasets, i.e., CameraFu-
sion [43] captured by iPhone11 and RealMCVSR [18] cap-
tured by iPhone12, where no additional degradation is in-
troduced beyond the realistic one between the two views.
Extensive experiments on both synthesized and real-world
datasets demonstrate the superiority of our ZeDuSR over
existing solutions including single-image SR, reference-
based SR, and dual-lens SR.
Contributions of this paper are summarized as follows:
• We propose a zero-shot learning solution for realis-
tic SR on dual-lens devices, which assumes image-specific
degradation and adapts itself to the current scene under the
unknown acquisition process.
• We propose a degradation-invariant alignment method
by leveraging adversarial and contrastive learning to con-
strain the alignment process in spatial, frequency, and fea-
ture domains simultaneously.
• We design a degradation-aware training strategy to ef-
fectively exploit the information within the highly limited
training data, i.e., a single dual-lens pair at test time.
• We conduct extensive experiments on both synthesized
and real-world dual-lens datasets to validate the superiority
of our zero-shot solution.
|
Yang_Reconstructing_Animatable_Categories_From_Videos_CVPR_2023
|
Abstract
Building animatable 3D models is challenging due to the
need for 3D scans, laborious registration, and rigging. Re-
cently, differentiable rendering provides a pathway to ob-
tain high-quality 3D models from monocular videos, but
these are limited to rigid categories or single instances. We
present RAC, a method to build category-level 3D models
from monocular videos, disentangling variations over in-
stances and motion over time. Three key ideas are intro-
duced to solve this problem: (1) specializing a category-
level skeleton to instances, (2) a method for latent space
regularization that encourages shared structure across a
category while maintaining instance details, and (3) us-
ing 3D background models to disentangle objects from the
background. We build 3D models for humans, cats and dogs
given monocular videos. Project page: https://gengshan-
y.github.io/rac-www/ .
|
1. Introduction
We aim to build animatable 3D models for deformable
object categories. Prior work has done so for targeted cat-
egories such as people (e.g., SMPL [ 1,30]) and quadruped
animals (e.g., SMAL [ 4]), but such methods appear chal-
lenging to scale due to the need of 3D supervision and reg-
istration. Recently, test-time optimization through differ-
entiable rendering [40,41,44,56,69] provides a pathway
to generate high-quality 3D models of deformable objects
and scenes from monocular videos. However, such models
are typically built independently for each object instance or
scene. In contrast, we would like to build category models
that can generate different instances along with deforma-
tions, given casually-captured video collections.
Though scalable, such data is challenging to leverage
in practice. One challenge is how to learn the morpho-
logical variation of instances within a category. For ex-
ample, huskies and corgis are both dogs, but have dif-
ferent body shapes, skeleton dimensions, and texture ap-
pearance. Such variations are difficult to disentangle from
the variations within a single instance, e.g., as a dog artic-
ulates, stretches its muscles, and even moves into differ-
ent illumination conditions. Approaches for disentangling
such factors require enormous efforts in capture and reg-
istration [ 1,5], and doing so without explicit supervision
remains an open challenge.
Another challenge arises from the impoverished nature
of in-the-wild videos: objects are often partially observ-
able at a limited number of viewpoints, and input signals
such as segmentation masks can be inaccurate for such “in-
the-wild” data. When dealing with partial or impoverished
video inputs, one would want the model to listen to the com-
mon structures learned across a category – e.g., dogs have
two ears. On the other hand, one would want the model to
Figure 2. Disentangling morphologies β and articulation θ. We show different morphologies (body shape and clothing) given the same
rest pose (top) and bouncing pose (bottom).
stay faithful to the input views.
Our approach addresses these challenges by exploiting
three insights: (1) We learn skeletons with constant bone
lengths within a video, allowing for better disentanglement
of between-instance morphology and within-instance artic-
ulation. (2) We regularize the unobserved body parts to be
coherent across instances while remaining faithful to the in-
put views with a novel code-swapping technique. (3) We
make use of a category-level background model that, while
not 3D accurate, produces far better segmentation masks.
We learn animatable 3D models of cats, dogs, and humans
which outperform prior art. Because our models regis-
ter different instances with a canonical skeleton, we also
demonstrate motion transfer across instances.
|
Zhang_Structural_Multiplane_Image_Bridging_Neural_View_Synthesis_and_3D_Reconstruction_CVPR_2023
|
Abstract
The Multiplane Image (MPI), containing a set of fronto-
parallel RGB αlayers, is an effective and efficient represen-
tation for view synthesis from sparse inputs. Yet, its fixed
structure limits the performance, especially for surfaces im-
aged at oblique angles. We introduce the Structural MPI (S-
MPI), where the plane structure approximates 3D scenes
concisely. Conveying RGB αcontexts with geometrically-
faithful structures, the S-MPI directly bridges view synthe-
sis and 3D reconstruction. It can not only overcome the
critical limitations of MPI, i.e., discretization artifacts from
sloped surfaces and abuse of redundant layers, and can
also acquire planar 3D reconstruction. Despite the intu-
ition and demand of applying S-MPI, great challenges are
introduced, e.g., high-fidelity approximation for both RGB α
layers and plane poses, multi-view consistency, non-planar
regions modeling, and efficient rendering with intersected
planes. Accordingly, we propose a transformer-based net-
work based on a segmentation model [4]. It predicts com-
pact and expressive S-MPI layers with their corresponding
masks, poses, and RGB αcontexts. Non-planar regions are
inclusively handled as a special case in our unified frame-
work. Multi-view consistency is ensured by sharing global
proxy embeddings, which encode plane-level features cov-
ering the complete 3D scenes with aligned coordinates. In-
tensive experiments show that our method outperforms both
previous state-of-the-art MPI-based view synthesis methods
and planar reconstruction methods.
|
1. Introduction
Novel view synthesis [30, 51] aims to generate new im-
ages from specifically transformed viewpoints given one or
multiple images. It finds wide applications in augmented or
mixed reality for immersive user experiences.
The advance of neural networks mostly drives the re-
cent progress. NeRF-based methods [28, 30] achieve im-
pressive results but are limited in rendering speed and gen-
*This work was done when Mingfang Zhang was an intern at MSRA.
Figure 1. We propose the Structural Multiplane Image (S-MPI)
representation to bridge the tasks of neural view synthesis and 3D
reconstruction. It consists of a set of posed RGBα images with ge-
ometries approximating the 3D scene. The scene-adaptive S-MPI
overcomes the critical limitations of standard MPI [20], e.g., dis-
cretization artifacts (D) and repeated textures (R), and achieves a
better depth map compared with the previous planar reconstruc-
tion method, PlaneFormer [1].
eralizability. The multiplane image (MPI) representation
[42, 50] shows superior abilities in these two aspects, es-
pecially given extremely sparse inputs. Specifically, neural
networks are utilized to construct MPI layers, containing a
set of fronto-parallel RGB αplanes regularly sampled in a
reference view frustum. Then, novel views are rendered in
real-time through simple homography transformation and
integral over the MPI layers. Unlike NeRF models, MPI
models do not need another training for a new scene.
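For context, the compositing half of that rendering step can be written in a few lines. The sketch below assumes the RGBα layers have already been homography-warped into the target view and are ordered front to back (both assumptions of this illustration, not any particular released implementation):

```python
import torch

def composite_mpi(rgb_layers, alpha_layers):
    """Back-to-front over-compositing of MPI layers already warped into the
    target view. rgb_layers: (D, 3, H, W), alpha_layers: (D, 1, H, W) with
    layer 0 nearest to the camera (an assumed ordering)."""
    # transmittance of everything in front of each layer
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha_layers[:1]), 1.0 - alpha_layers[:-1]], dim=0),
        dim=0,
    )
    weights = alpha_layers * trans                  # (D, 1, H, W)
    return (weights * rgb_layers).sum(dim=0)        # (3, H, W) rendered view

# toy usage with 32 fronto-parallel layers
rgb = torch.rand(32, 3, 64, 64)
alpha = torch.rand(32, 1, 64, 64)
print(composite_mpi(rgb, alpha).shape)  # torch.Size([3, 64, 64])
```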
Nevertheless, standard MPI has underlying limitations.
1) It is sensitive to discretization due to slanted surfaces in
scenes. As all layered planes are parallel to the source im-
age plane, slanted surfaces will be distributed to multiple
MPI layers causing discretization artifacts in novel views,
as shown in Fig. 1 (b). Increasing the number of layers can
improve the representation capability [38] but also increase
memory and computation costs. 2) It easily introduces re-
dundancy. It tends to distribute duplicated textures into dif-
ferent layers to mimic the lighting field [21], which can in-
troduce artifacts with repeated textures as shown in Fig. 1
(b). The essential reason causing the above issues is that the
MPI construction is dependent on source views but neglects
the explicit 3D geometry of the scenes. Intuitively, we raise
a question: Is it possible to construct MPIs adaptive to the
scenes, considering both depths and orientations?
Slanted planes are smartly utilized in 3D reconstruction
to save stereo matching costs [11] or represent the scene
compactly [12,17], especially for man-made environments.
Recent advanced neural networks reach end-to-end planar
reconstruction from images by formulating it as instance
segmentation [25, 26], where planes are directly detected
and segmented from the input image. However, non-planar
regions are often not well-modeled or neglected [1, 25, 40],
resulting in holes or discontinuities in depth maps, as shown
in Fig. 1 (c).
In this paper, we aim to bridge the neural view synthe-
sis and the planar 3D reconstruction, that is, to construct
MPIs adaptive to 3D scenes with planar reconstruction and
achieve high-fidelity view synthesis. The novel representa-
tion we propose is called Structural MPI (S-MPI), which
is fully flexible in both orientations and depth offsets to
approximate the scene geometry, as shown in Fig. 1 (a).
Although our motivation is straightforward, there are great
challenges in the construction of an S-MPI. (1) The network
not only needs to predict RGB αvalues but also the planar
approximation of the scene. (2) It is difficult to correspond
the plane projections across views since they may cover dif-
ferent regions and present different appearances. Recent
plane reconstruction works [1, 18] build matches of planes
after they are detected in each view independently, which
may increase costs and accumulate errors. (3) Non-planar
regions are challenging to model even with free planes. Pre-
vious plane estimation methods [1,40,46] cannot simultane-
ously handle planar and non-planar regions well. (4) In the
rendering process, as an S-MPI contains planes intersecting
with each other, an efficient rendering pipeline needs to be
designed so that the rendering advantages of MPI can be
inherited.
To address these challenges, we propose to build an
S-MPI with an end-to-end transformer-based model for
both planar geometry approximation and view synthesis, in
which planar and non-planar regions are processed jointly.
We follow the idea [25] of formulating plane detection as
instance segmentation and leverage the segmentation net-
work [4]. Our S-MPI transformer uniformly takes planar
and non-planar regions as two structure classes and predicts
their representative attributes, which are for reconstruction(structure class, plane pose and plane mask) and view syn-
thesis (RGB αimage). We term each instance with such
attributes as a proxy . Note that non-planar layers are inclu-
sively handled as fronto-parallel planes with adaptive depth
offsets and the total number of the predicted proxy instances
is adaptive to the scene.
Our model can manipulate both single-view and multi-
view input. It aims to generate a set of proxy embed-
dings in the full extent of the scene, covering all planar
and non-planar regions aligned in a global coordinate frame.
For multi-view input, the proxy embeddings progressively
evolve to cover larger regions and refine plane poses as
the number of views increases. In this way, the predicted
proxy instances are directly aligned, which avoids the so-
phisticated matching in two-stage methods [1, 18]. The
global proxy embeddings are effectively learned with the
ensembled supervision from all local view projections. Our
model achieves state-of-the-art performance for single-view
view synthesis ( 10%↑PSNR) and planar reconstruction
(20%↑recall) in datasets [6, 36] of man-made scenes and
also achieves encouraging results for multi-view input com-
pared to NeRF-based methods with high costs.
In summary, our main contributions are as follows:
• We introduce the Structural MPI representation, con-
sisting of geometrically-faithful RGB αimages to the
3D scene, for both neural view synthesis and 3D re-
construction.
• We propose an end-to-end network to construct S-MPI,
where planar and non-planar regions are uniformly
handled with high-fidelity approximations for both ge-
ometries and light field.
• Our model ensures multi-view consistency of planes
by introducing the global proxy embeddings compre-
hensively encoding the full 3D scene, and they effec-
tively evolve with the ensembled supervision from all
views.
|
Yao_Visual-Language_Prompt_Tuning_With_Knowledge-Guided_Context_Optimization_CVPR_2023
|
Abstract
Prompt tuning is an effective way to adapt the pretrained
visual-language model (VLM) to the downstream task us-
ing task-related textual tokens. Representative CoOp-based
work combines the learnable textual tokens with the class
tokens to obtain specific textual knowledge. However, the
specific textual knowledge is worse generalization to the
unseen classes because it forgets the essential general tex-
tual knowledge having a strong generalization ability. To
tackle this issue, we introduce a novel Knowledge-guided
Context Optimization (KgCoOp) to enhance the generaliza-
tion ability of the learnable prompt for unseen classes. The
key insight of KgCoOp is that the forgetting about essen-
tial knowledge can be alleviated by reducing the discrep-
ancy between the learnable prompt and the hand-crafted
prompt. Especially, KgCoOp minimizes the discrepancy be-
tween the textual embeddings generated by learned prompts
and the hand-crafted prompts. Finally, adding the Kg-
CoOp upon the contrastive loss can make a discriminative
prompt for both seen and unseen tasks. Extensive eval-
uation of several benchmarks demonstrates that the pro-
posed Knowledge-guided Context Optimization is an effi-
cient method for prompt tuning, i.e., achieves better perfor-
mance with less training time.
|
1. Introduction
With the help of the large scale of the image-text as-
sociation pairs, the trained visual-language model (VLM)
contains essential general knowledge, which has a bet-
ter generalization ability for the other tasks. Recently,
many visual-language models have been proposed, such
as Contrastive Language-Image Pretraining (CLIP) [29],
Flamingo [1], ALIGN [18], etc. Although VLM is an ef-
fective model for extracting the visual and text description,
training VLM needs a large scale of high-quality datasets.
However, collecting a large amount of data for training
a task-related model in real visual-language tasks is dif-
ficult. To address the above problem, the prompt tun-
Table 1. Compared to existing methods, the proposed KgCoOp
is an efficient method, obtaining a higher performance with less
training time.
Methods Prompts Base New H Training-time
CLIP hand-crafted 69.34 74.22 71.70 -
CoOp textual 82.63 67.99 74.60 6ms/image
ProGrad textual 82.48 70.75 76.16 22ms/image
CoCoOp textual+visual 80.47 71.69 75.83 160ms/image
KgCoOp textual 80.73 73.6 77.0 6ms/image
ing [4] [10] [19] [22] [28] [30] [33] [38] has been proposed
to adapt the pretrained VLM to downstream tasks, achiev-
ing a fantastic performance on various few-shot or zero-shot
visual recognition tasks.
The prompt tuning1 usually applies task-related textual
tokens to embed task-specific textual knowledge for pre-
diction. The hand-crafted template “a photo of a [Class]”
in CLIP [29] is used to model the textual-based class em-
bedding for zero-shot prediction. By defining the knowl-
edge captured by the fixed (hand-crafted) prompts as the
general textual knowledge2, it has a high generalization
capability on unseen tasks. However, the general textual
knowledge is less able to describe the downstream tasks
because it does not consider the specific knowledge of each task.
To obtain discriminative task-specific knowledge, Context
Optimization(CoOp) [41], Conditional Context Optimiza-
tion(CoCoOp) [40], and ProGrad [42] replace the hand-
crafted prompts with a set of learnable prompts inferred
by the labeled few-shot samples. Formally, the discrimina-
tive knowledge generated by the learned prompts is defined
as the specific textual knowledge . However, CoOp-based
methods have a worse generalization to the unseen classes
with the same task, e.g.,obtaining a worse performance than
CLIP for the unseen classes ( New), shown in Table 1.
As the specific textual knowledge is inferred from the
1In this work, we only consider the textual prompt tuning and do not
involve the visual prompt tuning.
2Inspired from [17], ‘knowledge’in this work denotes the information
contained in the trained model.
Figure 1. For the CoOp-based prompt tuning, the degree of per-
formance degradation ∇_new = (acc_clip − acc_coop)/acc_clip on the New classes is consistent with
the distance ||w_clip − w_coop||_2^2 between the learnable textual embedding w_coop and
the hand-crafted textual embedding w_clip. The larger the distance, the
more severe the performance degradation. acc_clip and acc_coop are the
accuracy of New classes for CLIP and CoOp, respectively. (The plot spans 11 datasets:
ImageNet, Caltech101, OxfordPets, StanfordCars, Flowers, Food101, FGVCAircraft, SUN397, DTD, EuroSAT, and UCF101.)
labeled few-shot samples, it is discriminative for the seen
classes and biased away from the unseen classes, lead-
ing to worse performance on the unseen domain. For
example, non-training CLIP obtains a higher New accu-
racy on the unseen classes than CoOp-based methods, e.g.,
74.22%/63.22%/71.69% for CLIP/CoOP/CoCoOp. The su-
perior performance of CLIP on unseen classes verifies that
its general textual knowledge has a better generalization
for unseen classes. However, the specific textual knowl-
edge inferred by the CoOp-based methods always forgets
the essential general textual knowledge, called catastrophic
knowledge forgetting, i.e., the more severe the catastrophic for-
getting, the larger the performance degradation.
To address this issue, we introduce a novel prompt tun-
ing method Knowledge-guided Context Optimization (Kg-
CoOp) to boost the generality of the unseen class by reduc-
ing the forgetting of the general textual knowledge. The
key insight of KgCoOp is that the forgetting about gen-
eral textual knowledge can be alleviated by reducing the
discrepancy between the learnable prompt and the hand-
crafted prompt. The observed relationship between
the discrepancy of the two prompts and the performance drop
also verifies this insight. As shown in Figure 1, the larger
the distance between textual embeddings generated by the
learnable prompt and the hand-crafted prompt, the more
severe the performance degradation. Formally, the hand-
crafted prompts “a photo of a [Class]” are fed into the text
encoder of CLIP to generate the general textual embedding,
regarded as the general textual knowledge. Otherwise, a set
of learnable prompts is optimized to generate task-specific
textual embedding. Furthermore, Knowledge-guided Con-
text Optimization(KgCoOp) minimizes the euclidean dis-
tance between general textual embeddings and specific tex-
tual embeddings for remembering the essential general tex-
tual knowledge. Similar to the CoOp and CoCoOp, the con-
trastive loss between the task-specific textual and visual em-
beddings is used to optimize the learnable prompts.
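As a generic illustration of such an objective (our own sketch with a hypothetical weighting factor lam and temperature tau, not the authors' released implementation), the training loss combines a contrastive/cross-entropy term with a distance penalty that keeps the learned class embeddings close to the hand-crafted ones:

```python
import torch
import torch.nn.functional as F

def kgcoop_style_loss(img_emb, learned_txt_emb, clip_txt_emb, labels, lam=8.0, tau=0.01):
    """img_emb: (B, d) image features; learned_txt_emb / clip_txt_emb: (C, d)
    class embeddings from the learnable and the hand-crafted prompts.
    lam and tau are illustrative hyperparameters."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(learned_txt_emb, dim=-1)
    logits = img @ txt.t() / tau
    ce = F.cross_entropy(logits, labels)          # contrastive / cross-entropy term
    # knowledge-guided term: euclidean distance between the two sets of embeddings
    kg = (F.normalize(learned_txt_emb, dim=-1)
          - F.normalize(clip_txt_emb, dim=-1)).pow(2).sum(-1).mean()
    return ce + lam * kg

# toy usage: 4 images, 10 classes, 512-d embeddings
loss = kgcoop_style_loss(torch.randn(4, 512),
                         torch.randn(10, 512, requires_grad=True),
                         torch.randn(10, 512),
                         torch.tensor([0, 3, 7, 2]))
print(loss.item() >= 0)
```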
We conduct comprehensive experiments under base-to-
new generalization setting, few-shot classification, and do-
main generalization over 11 image classification datasets
and four types of ImageNets. The evaluation shows that
the proposed KgCoOp is an efficient method: it obtains a higher
performance with less training time, as shown in Ta-
ble 1. In summary, the proposed KgCoOp obtains: 1) higher
performance: KgCoOp obtains a higher final performance
than existing methods. Especially, KgCoOp obtains a clear
improvement on the New class upon the CoOp, CoCoOp,
and ProGrad, demonstrating the rationality and necessity of
considering the general textual knowledge. 2) less training
time: the training time of KgCoOp is the same as CoOp,
which is faster than CoCoOp and ProGrad.
|
Zhang_Revisiting_the_Stack-Based_Inverse_Tone_Mapping_CVPR_2023
|
Abstract
Current stack-based inverse tone mapping (ITM) meth-
ods can recover high dynamic range (HDR) radiance by
predicting a set of multi-exposure images from a single
low dynamic range image. However, there are still some
limitations. On the one hand, these methods estimate a
fixed number of images (e.g., three exposure-up and three
exposure-down), which may introduce unnecessary com-
putational cost or reconstruct incorrect results. On the
other hand, they neglect the connections between the up-
exposure and down-exposure models and thus fail to fully
excavate effective features. In this paper, we revisit the
stack-based ITM approaches and propose a novel method
to reconstruct HDR radiance from a single image, which
only needs to estimate two exposure images. At first, we de-
sign the exposure adaptive block that can adaptively adjust
the exposure based on the luminance distribution of the in-
put image. Secondly, we devise the cross-model attention
block to connect the exposure adjustment models. Thirdly,
we propose an end-to-end ITM pipeline by incorporating
the multi-exposure fusion model. Furthermore, we propose
and open a multi-exposure dataset that indicates the opti-
mal exposure-up/down levels. Experimental results show
that the proposed method outperforms some state-of-the-art
methods.
|
1. Introduction
The luminance distribution in nature spans a wide range,
from the starlight (10^-5 cd/m^2) to the direct sunlight
(10^8 cd/m^2). The low dynamic range (LDR) devices can
not cover the full range of luminance of the real scene, and
thus fail to reproduce the realistic visual experience. High
dynamic range imaging (HDRI) technology [1] [26] can
solve this problem, which takes multiple LDR images of
the same scene with different shutter time, and then gener-
ates the high dynamic range (HDR) image via the multi-
exposure fusion (MEF) method [4] [19] [32]. However,
HDRI cannot handle the images that have already been cap-
Figure 1. (a) The influences of different stack lengths demonstrate
that the stack length is important to the quality of reconstructed
HDR image. However, the previous ITM methods [7] [11] [12]
[10] simply set a fixed length, which cannot be the optimal choice
for every scene. (b) The comparison between the MES predicted
by the state-of-the-art stack-based method [10] and the proposed
method. Our method only needs to estimate two exposure im-
ages to recover more realistic details in highlights and shadows
and achieves a higher HDR-VDP-2.2 Q-score. The HDR images
are tone mapped by [15] for LDR display.
tured, such as a large number of LDR images and videos on
the Internet. The inverse tone mapping (ITM) technique is
thus designed to recover the HDR radiance from a single
LDR image, which is an ill-posed problem because the details in the highlights and shadows are almost entirely lost and difficult to restore. Fortunately, the development of deep learning [6] [7] provides a solution by learning and predicting the distribution of the lost information from a huge number of training examples.
There are two main deep-learning-based ITM ap-
proaches, i.e., direct mapping methods [6] [17] [22] [28]
[34] and stack-based methods [7] [10] [11] [12] [13]. The
direct mapping methods learn an end-to-end model to re-
cover the HDR radiance from the LDR input straightfor-
wardly. By contrast, the stack-based methods simulate the
HDRI technology by increasing and decreasing the expo-
sure value (EV) of the input image to obtain the multi-
exposure stack (MES). Compared to the direct mapping
methods, the stack-based methods simulate the generation
process of HDR images in the real world and perform the
learning stage in the same LDR space, which avoids the complicated transformations between the LDR and HDR domains [10] [11] [13].
The process of previous stack-based ITM methods can be roughly summarized in the following three
main steps: (1) Training an up-exposure model and a down-
exposure model separately and independently. (2) Gen-
erating a fixed number of exposure adjusted images (e.g.,
three up-exposure and three down-exposure images) with
the trained models. (3) Merging these images by a clas-
sic multi-exposure fusion (MEF) approach [4]. However,
each of these three steps has limitations that may cause in-
accurate results. Specifically, (1) the process of increasing
and decreasing the exposure should not be independent. For
instance, when decreasing the exposure, some useful infor-
mation of the under-exposed regions may become subtle,
while the features in the opposite increasing process can
compensate for it. (2) The number of exposure adjustments is important to the quality of the reconstructed HDR images, and a fixed-length MES will cause incomplete information recovery or introduce unnecessary computational cost, as shown in Fig. 1 (a). (3) The classic MEF approaches
cannot be integrated into the entire end-to-end training pro-
cess. Although Kim et al. [10] proposed a differentiable
HDR synthesis layer, it is time-consuming and highly de-
pends on the shutter time and therefore cannot be applied to
general scenes.
In this paper, we propose a novel HDR reconstruction
method with adaptive exposure adjustment, which provides
an effective solution to the existing limitations in the field
of stack-based ITM. First, we design an efficient encoder
equipped with the luminance-guided convolution (LGC)
and cross-model attention block (CMAB) to extract useful
information from local and cross-model features. With the
help of CMAB, valid information on the entire up-exposure
and down-exposure process can be fully explored to help
their reconstruction. Second, the proposed up-exposure and down-exposure models adjust the input LDR image only once to obtain the corresponding optimal exposure-adjusted result. In this way, we avoid the difficulty of determining the length of the MES and obtain the desired exposure adjustment directly. For this purpose, appropriate ground truth is needed to indicate the optimal exposure
level. Therefore, we improve the SICE [2] dataset to form
a new MES dataset with optimal exposure labels. On the
other hand, since the exposure levels of the labels are dif-
ferent, the models need to be able to generate different re-
sults adaptively based on different inputs. Consequently,
we devise the exposure adaptive block (EAB) to extract the
global information and remap the features of the decoder.
The features extracted from EAB are used to normalize the
features in the decoder, which results in the image-adaptive
capability. Third, we propose a lightweight and fast multi-exposure fusion model (MEFM), which merges the exposure-adjusted results with the input image into the desired HDR image and thus makes the whole pipeline end-to-end. Furthermore, we propose a progressive reconstruction loss and a mask-aware generative adversarial loss to avoid artifacts in the restored textures of over/under-exposed re-
gions. As Fig. 1 (b) shows, the proposed method only needs
to estimate two exposure values to recover the lost infor-
mation in the shadows and highlights respectively, which is
more concise and effective. Experiments show that our ITM
algorithm outperforms the state-of-the-art ITM methods in
both quantitative and qualitative evaluations.
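To make the described image-adaptive normalization concrete, the following is a minimal sketch of an exposure-adaptive modulation block written in PyTorch; the module name, layer sizes, and the scale-and-shift form are illustrative assumptions rather than the paper's exact EAB design.

import torch
import torch.nn as nn

# Sketch of an exposure-adaptive block: global statistics of the input LDR
# image are mapped to per-channel scale/shift parameters that remap decoder
# features, giving the decoder an image-adaptive behaviour.
class ExposureAdaptiveBlock(nn.Module):
    def __init__(self, in_channels=3, feat_channels=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global descriptor of the input
        self.mlp = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, 2 * feat_channels, 1),  # -> (gamma, beta)
        )

    def forward(self, ldr_image, decoder_feat):
        # decoder_feat: (B, C, H, W) features inside the up-/down-exposure decoder.
        stats = self.mlp(self.pool(ldr_image))       # (B, 2C, 1, 1)
        gamma, beta = stats.chunk(2, dim=1)          # each (B, C, 1, 1)
        return decoder_feat * (1.0 + gamma) + beta   # image-adaptive remapping

Under this reading, inputs with different luminance distributions produce different scale/shift parameters and therefore different exposure adjustments, which is the image-adaptive capability described above.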
This paper has the following main contributions:
(1). We propose a novel stack-based ITM framework,
which only needs to estimate two exposure images to form
the MES. In this way, the lost information can be recov-
ered more efficiently and precisely. Moreover, the exposure
adaptive block is designed to adaptively adjust the exposure
based on LDR inputs with different luminance distributions.
(2). We connect the up-exposure and down-exposure
models with the designed cross-model attention block,
which can fully extract the effective features of the image
regions with different luminance.
(3). A lightweight and fast multi-exposure fusion network is proposed that merges the generated results and makes the entire training pipeline end-to-end.
(4). A more concise MES dataset is built and released based on the SICE dataset [2], which contains the optimal exposure-up/down labels to train the adaptive exposure adjustment networks.
|
Zara_AutoLabel_CLIP-Based_Framework_for_Open-Set_Video_Domain_Adaptation_CVPR_2023
|
Abstract
Open-set Unsupervised Video Domain Adaptation (OU-
VDA) deals with the task of adapting an action recognition
model from a labelled source domain to an unlabelled tar-
get domain that contains “target-private” categories, which
are present in the target but absent in the source. In this
work we deviate from prior work, which trains a specialized open-set classifier or uses weighted adversarial learning, and instead propose to use pre-trained Language and Vision Models (CLIP). CLIP is well suited for OUVDA due to its rich representation and its zero-shot recognition ca-
pabilities. However, rejecting target-private instances with
the CLIP’s zero-shot protocol requires oracle knowledge
about the target-private label names. To circumvent the
impossibility of the knowledge of label names, we propose
AutoLabel that automatically discovers and generates
object-centric compositional candidate target-private class
names. Despite its simplicity, we show that CLIP when
equipped with AutoLabel can satisfactorily reject the
target-private instances, thereby facilitating better align-
ment between the shared classes of the two domains. The
code is available at https://github.com/gzaraunitn/autolabel.
|
1. Introduction
Recognizing actions in video sequences is an important
task in the field of computer vision, which finds a wide
range of applications in human-robot interaction, sports,
surveillance, and anomalous event detection, among others.
Due to its high importance in numerous practical applica-
tions, action recognition has been heavily addressed using
deep learning techniques [32, 44, 58]. Much of the success in action recognition has notably been achieved in the supervised learning regime [6, 17, 49], and more recently
shown to be promising in the unsupervised regime [20, 34,
56] as well. As constructing large scale annotated and cu-
rated action recognition datasets is both challenging and ex-
pensive, focus has shifted towards adapting a model from a
source domain, with a labelled source dataset, to an unlabelled target domain of interest. However, due to the dis-
crepancy (or domain shift ) between the source and target
domains, naive usage of a source trained model in the target
domain leads to sub-optimal performance [51].
To counter the domain shift and improve the trans-
fer of knowledge from a labelled source dataset to an unla-
belled target dataset, unsupervised video domain adaptation
(UVDA) methods [7, 9, 42] have been proposed in the liter-
ature. Most of the prior literature in UVDA is designed with the assumption that the label spaces in the source and target domains are identical. This is a very strict assumption,
which can easily become void in practice, as the target do-
main may contain samples from action categories that are
not present in the source dataset [43]. In order to make
UVDA methods more useful for practical settings, open-
set unsupervised video domain adaptation (OUVDA) meth-
ods have recently been proposed [5, 8]. The main task in
OUVDA consists in promoting the adaptation between the shared (or known) classes of the two domains by excluding the action categories that are exclusive to the target domain, also called target-private (or unknown) classes. Existing OUVDA prior arts either train a specialized open-set classifier [5] or use a weighted adversarial learning strategy [8] to exclude the target-private classes.
In contrast, we address OUVDA by tapping into the very
rich representations of the open-sourced foundation Lan-
guage and Vision Models (LVMs). In particular, we use
CLIP (Contrastive Language-Image Pre-training) [45], a
foundation model that is trained on web-scale image-text
pairs, as the core element of our framework. We argue that
the LVMs ( e.g., CLIP) naturally lend themselves well to
OUVDA setting due to: (i)the representation learned by
LVMs from webly supervised image-caption pairs comes
encoded with an immense amount of prior about the real-
world, which is (un)surprisingly beneficial in narrowing the
shift in data distributions, even for video data; (ii)the zero-
shot recognition capability of such models facilitates identi-
fication and separation of the target-private classes from the
shared ones, which in turn ensures better alignment between
the known classes of the two domains.
Figure 1. Comparison of AutoLabel with CLIP [45] for zero-shot prediction on target-private instances. (a) CLIP assumes knowledge of the oracle zero-shot class names (ride horse, shoot ball); (b) our proposed AutoLabel automatically discovers the candidate target-private classes (horse person, person ball) and extends the known-classes label set.
Zero-shot inference using CLIP requires multi-modal inputs, i.e., a test video and a set of all possible prompts “A video of {label}”, where label is a class name, for computing the cosine similarity (see Fig. 1a). However, in
the OUVDA scenario, except the shared classes, a priori
knowledge about the target-private classes label names are
not available (the target dataset being unlabelled). Thus, ex-
ploiting zero-shot capability of CLIP to identify the target-
private instances in an unconstrained OUVDA scenario be-
comes a bottleneck. To overcome this issue we propose
AutoLabel , an automatic labelling framework that con-
structs a set of candidate target-private class names, which
are then used by CLIP to potentially identify the target-
private instances in a zero-shot manner.
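For reference, a minimal sketch of CLIP-style zero-shot scoring with an extended label set is given below; it relies on the OpenAI clip package, represents a video by averaging per-frame image embeddings, and uses made-up class names, all of which are simplifying assumptions rather than the paper's implementation.

import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

shared_classes = ["climb", "fencing"]                  # known source labels
candidate_private = ["horse person", "person ball"]    # discovered candidates
label_set = shared_classes + candidate_private
prompts = clip.tokenize([f"A video of {c}" for c in label_set]).to(device)

@torch.no_grad()
def zero_shot_predict(frames):
    """frames: list of frame tensors already processed with `preprocess`."""
    images = torch.stack(frames).to(device)
    v = model.encode_image(images).mean(dim=0, keepdim=True)   # (1, D) video feature
    t = model.encode_text(prompts)                             # (K, D) prompt features
    v = v / v.norm(dim=-1, keepdim=True)
    t = t / t.norm(dim=-1, keepdim=True)
    sims = (v @ t.T).squeeze(0)                                # cosine similarities
    pred = label_set[sims.argmax().item()]
    is_target_private = pred not in shared_classes             # reject if unknown
    return sims, pred, is_target_private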
In detail, the goal of AutoLabel is to augment the set
of shared class names (available from the source dataset)
with a set of candidate target-private class names that best
represent the true target-private class names in the tar-
get dataset at hand (see Fig. 1b). To this end, we use
an external pre-trained image captioning model ViLT [31]
to extract a set of attribute names from every frame in
a video sequence (see Sec. 3.2.1 for details). This is
motivated by the fact that actions are often described by
the constituent objects and actors in a video sequence.
As an example, a video with the prompt “A video of {chopping onion}” can be loosely described by the proxy prompt “A video of {knife}, {onion} and {arm}” crafted from the predicted attribute names. In other words, the attributes “knife”, “onion” and “arm”, when presented to CLIP in a prompt, can elicit a similar response as the true action label “chopping onion”.
Naively expanding the label set using ViLT-predicted attributes can introduce redundancy because: (i) ViLT predicts attributes per frame and thus there can be many distractor object attributes in a video sequence; and (ii) ViLT-predicted attributes for the shared target instances will be
duplicates of the true source action labels. Redundancy
in the shared class names will lead to ambiguity in target-
private instance rejection.
Our proposed framework AutoLabel reduces the redundancy in the effective label set in the following manner. First, it uses unsupervised clustering (e.g., k-means [36]) to cluster the target samples, and then keeps the top-k most frequently occurring attributes among the target samples assigned to each cluster. This step gets rid of the long-tailed set of attributes, which are inconsequential for predicting an action (see Sec. 3.2.2 for details). Second, AutoLabel removes the duplicate sets of attributes that bear resemblance to the source class names (being the same shared underlying class) by using a set
matching technique. At the end of this step, the effective
label set comprises the shared class names and the candi-
date sets of attribute names that represent the target-private
class names (see Sec. 3.2.3 for details). Thus, AutoLabel
unlocks the zero-shot potential of CLIP, which is very
beneficial in unconstrained OUVDA.
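A minimal sketch of this two-step redundancy reduction is shown below; the clustering features, the scikit-learn k-means call, and the word-overlap set-matching rule are assumptions for illustration, not the exact procedures of Sec. 3.2.2 and Sec. 3.2.3.

from collections import Counter
from sklearn.cluster import KMeans

def candidate_label_sets(video_feats, video_attrs, num_clusters=10, top_k=3):
    """video_feats: (N, D) array of per-video features used for clustering.
    video_attrs: list of N attribute lists predicted per video (e.g., by ViLT).
    Returns one attribute set per cluster, e.g., {"horse", "person"}."""
    cluster_ids = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(video_feats)
    label_sets = []
    for c in range(num_clusters):
        counts = Counter(a for cid, attrs in zip(cluster_ids, video_attrs)
                         if cid == c for a in attrs)
        label_sets.append({a for a, _ in counts.most_common(top_k)})
    return label_sets

def remove_duplicates(label_sets, source_names, thresh=0.5):
    """Crude stand-in for set matching: drop candidate sets that overlap too
    much with the known source class names."""
    kept = []
    for s in label_sets:
        overlap = max(len(s & set(src.split())) / max(len(s), 1)
                      for src in source_names)
        if overlap < thresh:
            kept.append(s)
    return kept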
Finally, to transfer knowledge from the source to the tar-
get dataset, we adopt conditional alignment using a simple pseudo-labelling mechanism. In detail, we provide to the CLIP-based encoder the target samples and the extended label set containing the shared and candidate target-private classes. Then we take the top-k pseudo-labelled samples for each predicted class and use them to optimize a supervised loss (see Sec. 3.2.4 for details). Unlike many open-set methods [5, 8] that reject all the target-private instances into a single unknown category, AutoLabel allows us to discriminate even among the target-private classes. Thus, the novelty of our AutoLabel lies not only in facilitating the rejection of target-private classes from the shared ones, but also in opening the door to open-world recognition [1].
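The following sketch illustrates the top-k pseudo-labelling step under the stated assumptions (per-class confidence ranking and a plain cross-entropy loss); it is not the authors' code.

import torch
import torch.nn.functional as F

def topk_pseudo_labels(logits, k=16):
    """logits: (N, C) zero-shot similarities of N target clips over the extended
    label set (shared + candidate target-private classes)."""
    probs = logits.softmax(dim=-1)
    conf, pred = probs.max(dim=-1)                   # confidence and pseudo-label
    keep = torch.zeros_like(conf, dtype=torch.bool)
    for c in pred.unique():
        idx = (pred == c).nonzero(as_tuple=True)[0]
        top = idx[conf[idx].topk(min(k, idx.numel())).indices]
        keep[top] = True                             # keep only confident samples
    return keep, pred

def pseudo_label_loss(logits, keep, pred):
    # Supervised loss on the selected pseudo-labelled subset only.
    return F.cross_entropy(logits[keep], pred[keep])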
In summary, our contributions are: (i) we demonstrate that LVMs like CLIP can be harnessed to address OUVDA and can be an excellent replacement for complicated alignment strategies; (ii) we propose AutoLabel, an automatic labelling framework that discovers candidate target-private class names in order to promote better separation of shared and target-private instances; and (iii) we conduct a thorough experimental evaluation on multiple benchmarks and surpass the existing OUVDA state-of-the-art methods.
|
Yi_NAR-Former_Neural_Architecture_Representation_Learning_Towards_Holistic_Attributes_Prediction_CVPR_2023
|
Abstract
With the wide and deep adoption of deep learning mod-
els in real applications, there is an increasing need to model
and learn the representations of the neural networks them-
selves. These models can be used to estimate attributes
of different neural network architectures such as the accu-
racy and latency, without running the actual training or
inference tasks. In this paper, we propose a neural ar-
chitecture representation model that can be used to esti-
mate these attributes holistically. Specifically, we first pro-
pose a simple and effective tokenizer to encode both the
operation and topology information of a neural network
into a single sequence. Then, we design a multi-stage fu-
sion transformer to build a compact vector representation
from the converted sequence. For efficient model train-
ing, we further propose an information flow consistency
augmentation and correspondingly design an architecture
consistency loss, which brings more benefits with less aug-
mentation samples compared with previous random aug-
mentation strategies. Experiment results on NAS-Bench-
101, NAS-Bench-201, DARTS search space and NNLQP
show that our proposed framework can be used to pre-
dict the aforementioned latency and accuracy attributes of
both cell architectures and whole deep neural networks,
and achieves promising performance. Code is available at
https://github.com/yuny220/NAR-Former.
|
1. Introduction
As an ever-increasing variety of deep neural network models is widely adopted in academic research and real applications, neural architecture representation is emerging as a universal need for predicting model attributes holistically. For example, modern neural architecture search
(NAS) methods can depend on the neural architecture rep-
(1: This work was done while Yun Yi was an intern at Intellifusion. ∗: Corresponding authors.)
resentation to build good model accuracy predictors [5, 23,
24, 26, 31, 34], which estimate model accuracies without
running the expensive training procedure. In order to find
faster execution graphs when deploying models to neural
network accelerators, neural network compilers [3,20] need
it to build network latency predictors, which estimate the real time cost without running the model on the corresponding real hardware. As a straightforward approach to solving these
holistic prediction tasks, a neural architecture representa-
tion model should be built to take the symbolic description
of the network as input, and generate its numerical repre-
sentation which can be easily used with existing modeling
tools to perform the desired downstream tasks.
Some neural architecture representation models have been proposed for solving the individual tasks mentioned above, but we find they are rather application-specific and have obvious drawbacks when used in new tasks. For ex-
ample, the early MLP, LSTM based accuracy prediction
approaches [6, 18, 24, 34] improve the efficiency and per-
formance of NAS, while their prediction performance is limited by the nonexistent or implicit topology encoding. Some later proposed GCN-based
methods [4, 17, 36] achieve better performance of accuracy
prediction than the above methods. However, due to the
use of adjacency matrix, the encoding dimension of these
methods scales quadratically with the depth of the input ar-
chitecture, making them difficult to model large networks.
NNLQP [20] implements latency prediction at the model level, but its GNN-based encoding scheme makes it inadequate at modeling long-range interactions between nodes.
Inspired by the progress in natural language understand-
ing, our network uses a hand designed yet general and ex-
tensible token encoding approach to encode the topology
information and key parameters of the input neural net-
work. Specifically, we design a generic real value tokenizer
to encode neural network nodes’ operation type, location,
and their inputs into vectors using the positional embed-
ding approach that is widely used in transformers [33] and
NeRF [25]. This fixed-length node encoding scheme makes
the network encoding scale linearly with the size of the input network rather than quadratically, as in models that rely on the adjacency matrix as input.
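A minimal sketch of such a tokenizer is given below; the embedding dimension, the padding convention for missing inputs, and the exact per-node fields are illustrative assumptions, but it shows how a fixed-length, positional-embedding-style token per node keeps the encoding linear in the number of nodes.

import math
import torch

def sincos_embed(value, dim=16, base=10000.0):
    """Positional-encoding-style embedding of a scalar into a dim-d vector."""
    freqs = torch.exp(torch.arange(0, dim, 2) * (-math.log(base) / dim))
    angles = value * freqs
    return torch.cat([torch.sin(angles), torch.cos(angles)])

def tokenize_node(op_id, position, input_ids, max_inputs=2, dim=16):
    """One fixed-length token per node: [operation | position | input indices]."""
    parts = [sincos_embed(torch.tensor(float(op_id)), dim),
             sincos_embed(torch.tensor(float(position)), dim)]
    padded = list(input_ids)[:max_inputs] + [-1] * (max_inputs - len(input_ids))
    parts += [sincos_embed(torch.tensor(float(i)), dim) for i in padded]
    return torch.cat(parts)            # shape: ((2 + max_inputs) * dim,)

def tokenize_network(nodes):
    """nodes: list of (op_id, position, input_ids). Returns an (N, token_dim)
    sequence that a transformer can consume directly."""
    return torch.stack([tokenize_node(*n) for n in nodes])

# Example: a 3-node chain  input -> conv -> output  (operation ids are made up).
tokens = tokenize_network([(0, 0, []), (3, 1, [0]), (1, 2, [1])])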
Based on the tokenized input sequence, we design a
transformer based model to further encode and fuse the net-
work descriptions to generate a compact representation, e.g.
a fixed length feature vector. Therefore, long range depen-
dencies between nodes can be established by transformer.
Besides the conventional multi-head self attention based
transformers, we propose to use a multi-stage fusion trans-
former structure at the deep stages of the model to further
fuse and refine the representation in a cascade manner.
For the network representation learning task, we also
propose a data augmentation method called information
flow consistency augmentation. The augmentation per-
mutes the encoding order of nodes in the original network
under our specified conditions without changing the struc-
ture of the network. We find it empirically improves the
performance of our method for downstream tasks.
The main contributions of this paper can be summarized
as the following points:
1. We propose a simple and effective neural network en-
coding approach which tokenizes both operation and
topology information of a neural network node into a
sequence. When it is used to encode the entire net-
work, the tokenized network encoding scheme scales
better than adjacency matrix based ones, and builds the
foundation for seamlessly using transformer structures
for network representation learning.
2. We design a multi-stage fusion transformer to learn
feature representations. Benefiting from the proposed
tokenizer, a concise pure transformer based neural ar-
chitecture representation learning framework (NAR-
Former) is proposed for the first time. Our NAR-
Former makes full use of the capacity of transformers
in handling sequence inputs, and gets promising per-
formance.
3. To facilitate efficient model training, we propose an
information flow consistency augmentation and cor-
respondingly design an architecture consistency loss,
which brings more benefits with less augmentation
samples compared with the existing random augmen-
tation strategy.
We conduct extensive experiments on accuracy predic-
tion, neural architecture search as well as latency prediction.
Experiments demonstrate the effectiveness of the proposed
representation model on processing both cell leveled net-
work components and whole deep neural networks. Specif-
ically, our accuracy predictor based on this representation
achieves highly competitive accuracy performance on cell-
based structures in NAS-Bench-101 [40] and NAS-Bench-201 [10] datasets. Compared with other predictor-based
NAS methods [23, 26], we efficiently find the architecture
with 97.52% accuracy in DARTS [19] by only querying
100 neural architectures. We also conduct latency predic-
tion experiments on neural networks deeper than 200 layers
to demonstrate the universality of our model.
|
Yu_Video_Probabilistic_Diffusion_Models_in_Projected_Latent_Space_CVPR_2023
|
Abstract
Despite the remarkable progress in deep generative
models, synthesizing high-resolution and temporally co-
herent videos still remains a challenge due to their high-
dimensionality and complex temporal dynamics along with
large spatial variations. Recent works on diffusion models
have shown their potential to solve this challenge, yet they
suffer from severe computation- and memory-inefficiency
that limit the scalability. To handle this issue, we propose
a novel generative model for videos, coined projected la-
tent video diffusion model (PVDM), a probabilistic dif-
fusion model which learns a video distribution in a low-
dimensional latent space and thus can be efficiently trained
with high-resolution videos under limited resources. Specifi-
cally, PVDM is composed of two components: (a) an autoen-
coder that projects a given video as 2D-shaped latent vectors
that factorize the complex cubic structure of video pixels and
(b) a diffusion model architecture specialized for our new fac-
torized latent space and the training/sampling procedure to
synthesize videos of arbitrary length with a single model. Ex-
periments on popular video generation datasets demonstrate
the superiority of PVDM compared with previous video syn-
thesis methods; e.g., PVDM obtains the FVD score of 639.7
on the UCF-101 long video (128 frames) generation bench-
mark, which improves 1773.4 of the prior state-of-the-art.
|
1. Introduction
Recent progress in deep generative models has shown promise for synthesizing high-quality, realistic samples in various domains, such as images [9, 27, 41], audio [8, 31, 32], 3D scenes [6, 38, 48], natural languages [2, 5], etc. As
a next step forward, several works have been actively
focusing on the more challenging task of video synthe-
sis [12,18,21,47,55,67]. In contrast to the success in other
domains, the generation quality is yet far from real-world
videos, due to the high-dimensionality and complexity of
videos that contain complicated spatiotemporal dynamics in
high-resolution frames.
Inspired by the success of diffusion models in handling
complex and large-scale image datasets [ 9,40], recent ap-
proaches have attempted to design diffusion models for
videos [ 16,18,21,22,35,66]. Similar to image domains,
these methods have shown great potential to model video
distribution much better with scalability (both in terms of
spatial resolution and temporal durations), even achieving
photorealistic generation results [ 18]. However, they suffer
from severe computation and memory inefficiency, as diffusion models require many iterative steps in the input space to synthesize samples [51]. Such bottlenecks are amplified much more for video due to its cubic RGB array structure.
Meanwhile, recent works in image generation have pro-
posed latent diffusion models to circumvent the computation
and memory inefficiency of diffusion models [ 15,41,59].
Instead of training the model in raw pixels, latent diffusion
models first train an autoencoder to learn a low-dimensional
latent space succinctly parameterizing images [ 10,41,60]
and then models this latent distribution. Intriguingly, the ap-
proach has shown a dramatic improvement in efficiency for
synthesizing samples while even achieving state-of-the-art
generation results [41]. Despite this appealing potential, however, a latent diffusion model designed for videos has so far been overlooked.
Contribution. We present a novel latent diffusion model
for videos, coined projected latent video diffusion model
(PVDM). Specifically, it is a two-stage framework (see Fig-
ure1for the overall illustration):
•Autoencoder: We introduce an autoencoder that repre-
sents a video with three 2D image-like latent vectors by
factorizing the complex cubic array structure of videos.
Specifically, we propose 3D→2D projections of videos along each spatiotemporal direction to encode 3D video pixels as three succinct 2D latent vectors. At a high level, we design one latent vector across the temporal direction to parameterize the common contents of the video (e.g., background), and the latter two vectors to encode the motion of the video. These 2D latent vectors are beneficial for achieving a high-quality and succinct encoding of videos, as well as enabling a compute-efficient diffusion model architecture design due to their image-like structure (see the projection sketch after this list).
Figure 1. Overall illustration of our projected latent video diffusion model (PVDM) framework. PVDM is composed of two components: (a) (left) an autoencoder that maps a video into a 2D image-like latent space, and (b) (right) a diffusion model that operates in this latent space.
•Diffusion model: Based on the 2D image-like latent
space built from our video autoencoder, we design a
new diffusion model architecture to model the video
distribution. Since we parameterize videos as image-
like latent representations, we avoid computation-heavy
3D convolutional neural network architectures that are
conventionally used for handling videos. Instead, our architecture is based on a 2D convolutional diffusion model architecture that has shown its strength in handling images. Moreover, we present joint training of unconditional and frame-conditional generative modeling to generate long videos of arbitrary length.
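As a concrete illustration of the autoencoder's 3D-to-2D factorization referenced above, here is a minimal sketch in which the three projections are plain averages over one axis each; the actual model learns these mappings, so this is only a shape-level illustration.

import torch

def project_video(latent_3d):
    """latent_3d: (B, C, T, H, W) feature volume from a video encoder."""
    z_hw = latent_3d.mean(dim=2)   # (B, C, H, W): averaged over time   -> content
    z_tw = latent_3d.mean(dim=3)   # (B, C, T, W): averaged over height -> motion
    z_th = latent_3d.mean(dim=4)   # (B, C, T, H): averaged over width  -> motion
    return z_hw, z_tw, z_th

# A 16-frame, 32x32 feature volume becomes three image-like latents that a
# 2D-convolutional diffusion model can process.
z = torch.randn(1, 8, 16, 32, 32)
z_hw, z_tw, z_th = project_video(z)   # (1,8,32,32), (1,8,16,32), (1,8,16,32)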
We verify the effectiveness of our method on two popu-
lar datasets for evaluating video generation methods: UCF-
101 [54] and SkyTimelapse [64]. Measured with Inception score (IS; higher is better [44]) on UCF-101, a representative metric for evaluating unconditional video generation, PVDM achieves the state-of-the-art result of 74.40 on UCF-101 in generating 16-frame, 256×256-resolution videos. In terms of Fréchet video distance (FVD; lower is better [58]) on UCF-101 in synthesizing long videos (128 frames) at 256×256 resolution, it significantly improves the score from
1773.4 of the prior state-of-the-art to 639.7. Moreover, com-
pared with recent video diffusion models, our model shows
a strong memory and computation efficiency. For instance,
on a single NVIDIA 3090Ti 24GB GPU, a video diffusion
model [ 21] requires almost full memory ( ≈24GB) to train
at 128×128 resolution with a batch size of 1. On the other
hand, PVDM can be trained with a batch size of 7 at most
per this GPU with 16 frames videos at 256 ×256 resolution.
To our knowledge, the proposed PVDM is the first latent
diffusion model designed for video synthesis. We believe
our work would facilitate video generation research towards
efficient real-time, high-resolution, and long video synthesis
under the limited computational resource constraints.
|
Zhang_MOTRv2_Bootstrapping_End-to-End_Multi-Object_Tracking_by_Pretrained_Object_Detectors_CVPR_2023
|
Abstract
In this paper, we propose MOTRv2, a simple yet effective
pipeline to bootstrap end-to-end multi-object tracking with
a pretrained object detector. Existing end-to-end methods,
e.g. MOTR [43] and TrackFormer [20] are inferior to their
tracking-by-detection counterparts mainly due to their poor
detection performance. We aim to improve MOTR by ele-
gantly incorporating an extra object detector. We first adopt
the anchor formulation of queries and then use an extra ob-
ject detector to generate proposals as anchors, providing
detection prior to MOTR. The simple modification greatly
eases the conflict between joint learning detection and asso-
ciation tasks in MOTR. MOTRv2 keeps the query propagation feature and scales well on large-scale benchmarks.
MOTRv2 ranks 1st (73.4% HOTA on DanceTrack)
in the 1st Multiple People Tracking in Group Dance Chal-
lenge. Moreover, MOTRv2 reaches state-of-the-art perfor-
mance on the BDD100K dataset. We hope this simple and
effective pipeline can provide some new insights to the end-
to-end MOT community. Code is available at https:
//github.com/megvii-research/MOTRv2 .
|
1. Introduction
Multi-object tracking (MOT) aims to predict the trajec-
tories of all objects in the streaming video. It can be divided
into two parts: detection and association. For a long time,
the state-of-the-art performance on MOT has been domi-
nated by tracking-by-detection methods [4, 36, 44, 45] with
good detection performance to cope with various appear-
ance distributions. These trackers [44] first employ an ob-
ject detector (e.g., YOLOX [11]) to localize the objects in
each frame and associate the tracks by ReID features or IoU
matching. The superior performance of those methods par-
tially results from the dataset and metrics biased towards
detection performance. However, as revealed by the Dance-
* The work was done during internship at MEGVII Technology and
supported by National Key R&D Program of China (2020AAA0105200)
and Beijing Academy of Artificial Intelligence (BAAI).
Figure 1. Performance comparison between MOTR (grey bars) and MOTRv2 (orange bars) on the DanceTrack and BDD100K datasets (HOTA, DetA, mMOTA, mIDF1). MOTRv2 improves the performance of MOTR by a large margin under different scenarios.
Track dataset [27], their association strategy remains to be
improved in complex motion.
Recently, MOTR [43], a fully end-to-end framework is
introduced for MOT. The association process is performed
by updating the tracking queries while the new-born ob-
jects are detected by the detect queries. Its association per-
formance on DanceTrack is impressive while the detection
results are inferior to those tracking-by-detection methods,
especially on the MOT17 dataset. We attribute the infe-
rior detection performance to the conflict between the joint
detection and association processes. Since state-of-the-art
trackers [6,9,44] tend to employ extra object detectors, one
natural question is how to incorporate MOTR with an ex-
tra object detector for better detection performance. One
direct way is to perform IoU matching between the predic-
tions of track queries and extra object detector (similar to
TransTrack [28]). In our practice, it only brings marginal
improvements in object detection while disobeying the end-
to-end feature of MOTR.
Inspired by tracking-by-detection methods that take the
detection result as the input, we wonder if it is possible to
feed the detection result as the input and reduce the learning
of MOTR to association. Recently, there have been some ad-
Figure 2. The overall architecture of MOTRv2. The proposals produced by the state-of-the-art detector YOLOX [11] are used to generate the proposal queries, which replace the detect queries in MOTR [43] for detecting new-born objects. The track queries are transferred from the previous frame and used to predict the bounding boxes for tracked objects. The concatenation of proposal queries and track queries, as well as the image features, is input to MOTR to generate the predictions frame by frame.
vances [18, 35] in anchor-based modeling for DETR. For
example, DAB-DETR initializes object queries with the
center points, height, and width of anchor boxes. Sim-
ilar to them, we modify the initialization of both detect
and track queries in MOTR. We replace the learnable po-
sitional embedding (PE) of detect query in MOTR with the
sine-cosine PE [30] of anchors, producing an anchor-based
MOTR tracker. With such anchor-based modeling, propos-
als generated by an extra object detector can serve as the
anchor initialization of MOTR, providing local priors. The
transformer decoder is used to predict the relative offsets
w.r.t. the anchors, making the optimization of the detection
task much easier.
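A minimal sketch of this anchor-based query initialization is given below; the box normalization, feature dimension, and exact box parameterization are assumptions, but it shows how detector proposals can be turned into query positional embeddings via sine-cosine encoding.

import math
import torch

def sincos_pe(x, num_feats=128, temperature=10000.0):
    """x: (N,) values in [0, 1]; returns an (N, num_feats) sine-cosine embedding."""
    dim_t = torch.arange(num_feats // 2, dtype=torch.float32)
    dim_t = temperature ** (2 * dim_t / num_feats)
    pos = x[:, None] * 2 * math.pi / dim_t
    return torch.cat([pos.sin(), pos.cos()], dim=-1)

def anchor_query_embedding(proposals, num_feats=128):
    """proposals: (N, 4) normalized (cx, cy, w, h) boxes from, e.g., YOLOX.
    Returns (N, 4 * num_feats) positional embeddings for the proposal queries."""
    return torch.cat([sincos_pe(proposals[:, i], num_feats) for i in range(4)],
                     dim=-1)

# Proposal queries built this way are concatenated with the track queries
# propagated from the previous frame before entering the transformer decoder.
q_pos = anchor_query_embedding(torch.rand(10, 4))   # (10, 512)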
The proposed MOTRv2 brings many advantages com-
pared to the original MOTR. It greatly benefits from the
good detection performance introduced by the extra object
detector. The detection task is implicitly decoupled from
the MOTR framework, easing the conflict between the de-
tection and association tasks in the shared transformer de-
coder. MOTRv2 learns to track the instances across frames
given the detection results from an extra detector.
MOTRv2 achieves large performance improvements on
the DanceTrack, BDD100K, and MOT17 datasets com-
pared to the original MOTR (see Fig. 1). On the
DanceTrack dataset, MOTRv2 surpasses the tracking-by-
detection counterparts by a large margin ( 14.8% HOTA
compared to OC-SORT [6]), and the AssA metric is 18.8%
higher than the second-best method. On the large-scale
multi-class BDD100K dataset [42], we achieved 43.6%
mMOTA, which is 2.4% better than the previous best solu-
tion Unicorn [41]. MOTRv2 also achieves state-of-the-art
performance on the MOT17 dataset [15, 21]. We hope our
simple and elegant design can serve as a strong baseline for future end-to-end multi-object tracking research.
|
Ye_AccelIR_Task-Aware_Image_Compression_for_Accelerating_Neural_Restoration_CVPR_2023
|
Abstract
Recently, deep neural networks have been successfully
applied for image restoration (IR) (e.g., super-resolution,
de-noising, de-blurring). Despite their promising perfor-
mance, running IR networks requires heavy computation.
A large body of work has been devoted to addressing this
issue by designing novel neural networks or pruning their
parameters. However, the common limitation is that while
images are saved in a compressed format before being en-
hanced by IR, prior work does not consider the impact of
compression on the IR quality.
In this paper, we present AccelIR, a framework that
optimizes image compression considering the end-to-end
pipeline of IR tasks. AccelIR encodes an image through
IR-aware compression that optimizes compression levels
across image blocks within an image according to the im-
pact on the IR quality. Then, it runs a lightweight IR net-
work on the compressed image, effectively reducing IR com-
putation, while maintaining the same IR quality and image
size. Our extensive evaluation using nine IR networks shows
that AccelIR can reduce the computing overhead of super-
resolution, de-noising, and de-blurring by 49%, 29%, and
32% on average, respectively.
|
1. Introduction
Image restoration (IR) is a class of techniques that re-
covers a high-quality image from a lower-quality counter-
part (e.g., super-resolution, de-noising, de-blurring). With
the advances of deep learning, IR has been widely deployed
in various applications such as satellite/medical image en-
hancement [9,42,43,51,52], facial recognition [7,44,65,69],
and video streaming/analytics [15, 27, 66, 67, 70]. Mean-
while, the resolution of images used in these applica-
tions has been rapidly increasing along with the evolution
in client devices (e.g., smartphones [58, 61], TV moni-
tors [57]). Thus, deep neural networks (DNNs) used for
IR need to support higher-resolution images such as 4K
(4096×2160) and even 8K (7680 ×4320).
However, because the computing and memory overhead grows quadratically with the input resolution, applying IR networks to such a large image is computationally expensive [32, 34].
Table 1. Computing overhead and quality of IR under the same compression ratio (1.2 bpp). AccelIR reduces computation by 34-74% while providing the same IR quality and image size.
Task | JPEG + IR (FLOPs / PSNR) | AccelIR (FLOPs / PSNR)
Super-resolution | 1165G / 25.28dB | 298G / 25.30dB
De-noising | 1701G / 30.49dB | 1132G / 30.70dB
De-blurring | 2590G / 30.57dB | 1718G / 30.58dB
Prior work addresses this issue using three
different approaches: 1) designing efficient feature extrac-
tion or up-scaling layers [2, 16, 29, 33, 59, 75], 2) adjusting
network complexities within an image according to the IR
difficulty [32, 34], and 3) pruning network parameters con-
sidering their importance [50]. The common limitation in
the prior studies is that they do not consider the detrimental
impact of image compression on the IR quality, despite the
fact that images in real-world applications are commonly
saved in a compressed format before being enhanced by IR.
In this work, we observe that there is a large opportunity
in optimizing image compression considering the end-to-end pipeline of IR tasks. Compression loss has a signifi-
cant impact on the IR quality, while its impact also greatly
varies according to the image content, even within the same
image. Such heterogeneity offers room for IR-aware image
compression that optimizes compression levels across im-
age blocks within an image according to the impact on the
IR quality. IR-awareness allows us to use a lighter-weight
IR network because the quality enhancement from IR-aware
image compression can compensate for the quality loss due to the reduced network capacity.
Based on this observation, we present AccelIR, the first
IR-aware image compression framework that considers the
end-to-end pipeline of IR tasks, including image compres-
sion. AccelIR aims to reduce IR computation while main-
taining the same IR quality and image size. To enable this,
AccelIR develops a practical IR-aware compression algo-
rithm and adopts a lightweight IR network. AccelIR oper-
ates in two phases: offline profiling and online compression.
In the offline phase, AccelIR clusters image blocks in the
representative datasets [1, 21] into groups. For each group,
it constructs profiles that describe the impact of compres-
sion level on the resulting IR quality and image size. In ad-
dition to the profiles, a lightweight CNN is trained to predict the best-fit group for unseen image blocks. In the online
phase, AccelIR retrieves the profiles for each block within
an image by running the CNN. Our framework then refers
to the block-level profiles to select the optimal compression
level for each block, maximizing the IR quality at the same
image size. Finally, the lightweight IR network is applied.
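The online phase can be pictured with the following sketch; the profile format, the candidate quality levels, and the greedy budget-allocation rule are hypothetical stand-ins for the paper's actual selection algorithm.

def select_block_qualities(blocks, classify, profiles, size_budget, levels):
    """blocks: image blocks; classify: block -> group id (the lightweight CNN);
    profiles[group][q] = (bytes, ir_quality_gain) for quality level q;
    size_budget: total allowed compressed size; levels: e.g., [30, 50, 70, 90]."""
    groups = [classify(b) for b in blocks]
    # Start from the lowest quality everywhere, then greedily spend the
    # remaining byte budget where it buys the most IR-quality gain per byte.
    choice = [levels[0]] * len(blocks)
    size = sum(profiles[g][levels[0]][0] for g in groups)
    improved = True
    while improved:
        improved, best = False, None
        for i, g in enumerate(groups):
            cur = choice[i]
            for q in levels:
                if q <= cur:
                    continue
                d_size = profiles[g][q][0] - profiles[g][cur][0]
                d_gain = profiles[g][q][1] - profiles[g][cur][1]
                if d_size > 0 and size + d_size <= size_budget:
                    ratio = d_gain / d_size
                    if best is None or ratio > best[0]:
                        best = (ratio, i, q, d_size)
        if best is not None and best[0] > 0:
            _, i, q, d_size = best
            choice[i], size, improved = q, size + d_size, True
    return choice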
We evaluate AccelIR using a full system implementation
using JPEG [53] and WebP [68], the most widely used im-
age compression standards. As shown in Table 1, our evalu-
ation using five different super-resolution [2, 16, 36, 37, 56],
two de-noising [71, 73], and two de-blurring networks [12,
72] shows that AccelIR consistently delivers a significant
benefit in a wide range of settings. Compared to applying
IR to images encoded by the standard JPEG and WebP, Ac-
celIR reduces the computing cost of super-resolution, de-
noising, and de-blurring by 35-74%, 24-34%, and 24-34%,
respectively, under the same IR quality and image size. In
addition, AccelIR can support any type of image codec and
is well-fit to serve new IR tasks and networks that are not
shown in the training phase. Thus, AccelIR can be easily
integrated with the existing IR applications.
|
Yang_BEVFormer_v2_Adapting_Modern_Image_Backbones_to_Birds-Eye-View_Recognition_via_CVPR_2023
|
Abstract
We present a novel bird’s-eye-view (BEV) detector with
perspective supervision, which converges faster and bet-
ter suits modern image backbones. Existing state-of-the-
art BEV detectors are often tied to certain depth pre-
trained backbones like VoVNet, hindering the synergy be-
tween booming image backbones and BEV detectors. To
address this limitation, we prioritize easing the optimization
of BEV detectors by introducing perspective view supervi-
sion. To this end, we propose a two-stage BEV detector,
where proposals from the perspective head are fed into the
bird’s-eye-view head for final predictions. To evaluate the
effectiveness of our model, we conduct extensive ablation
studies focusing on the form of supervision and the gener-
ality of the proposed detector. The proposed method is ver-
ified with a wide spectrum of traditional and modern image
backbones and achieves new SoTA results on the large-scale
nuScenes dataset. The code shall be released soon.
|
1. Introduction
Bird’s-eye-view (BEV) recognition models [17, 21, 25, 27,
29, 35, 42] are a class of camera-based models for 3D ob-
ject detection. They have attracted interest in autonomous
driving as they can naturally integrate partial raw observa-
tions from multiple sensors into a unified holistic 3D out-
put space. A typical BEV model is built upon an image
backbone, followed by a view transformation module that
lifts perspective image features into BEV features, which
are further processed by a BEV feature encoder and some
task-specific heads. Although much effort is put into de-
signing the view transformation module [17, 27, 42] and in-
corporating an ever-growing list of downstream tasks [9,27]
into the new recognition framework, the study of image
(*: Equal contribution. B: Corresponding author, email: [email protected].)
backbones in BEV models receives far less attention. As a
cutting-edge and highly demanding field, it is natural to in-
troduce modern image backbones into autonomous driving.
Surprisingly, the research community chooses to stick with
VoVNet [13] to enjoy its large-scale depth pre-training [26].
In this work, we focus on unleashing the full power of mod-
ern image feature extractors for BEV recognition to unlock
the door for future researchers to explore better image back-
bone design in this field.
However, simply employing modern image backbones
without proper pre-training fails to yield satisfactory results.
For instance, an ImageNet [6] pre-trained ConvNeXt-XL
[23] backbone performs just on par with a DDAD-15M pre-
trained VoVNet-99 [26] for 3D object detection, albeit the
latter has 3.5× the parameters of the former. We attribute the struggle of adapting modern image backbones to the following issues: 1) The domain gap between natural images and au-
tonomous driving scenes. Backbones pre-trained on general
2D recognition tasks fall short of perceiving 3D scenes, es-
pecially estimating depth. 2) The complex structure of cur-
rent BEV detectors. Take BEVFormer [17] as an example.
The supervision signals of 3D bounding boxes and object
class labels are separated from the image backbone by the
view encoder and the object decoder, each of which comprises multiple transformer layers. The gradient flow
for adapting general 2D image backbones for autonomous
driving tasks is distorted by the stacked transformer layers.
In order to combat the difficulties mentioned above in
adapting modern image backbones for BEV recognition, we
introduce perspective supervision into BEVFormer, i.e. ex-
tra supervision signals from perspective-view tasks and di-
rectly applied to the backbone. It guides the backbone to
learn 3D knowledge missing in 2D recognition tasks and
overcomes the complexity of BEV detectors, greatly facili-
tating the optimization of the model. Specifically, we build
a perspective 3D detection head [26] upon the backbone,
which takes image features as input and directly predicts
the 3D bounding boxes and class labels of target objects.
The loss of this perspective head, denoted as perspective
loss, is added to the original loss (BEV loss) deriving from
the BEV head as an auxiliary detection loss. The two de-
tection heads are jointly trained with their corresponding
loss terms. Furthermore, we find it natural to combine the
two detection heads into a two-stage BEV detector, BEV-
Former v2 . Since the perspective head is full-fledged, it
could generate high-quality object proposals in the perspec-
tive view, which we use as first-stage proposals. We encode
them into object queries and gather them with the learn-
able ones in the original BEVFormer, forming hybrid object
queries, which are then fed into the second-stage detection
head to generate the final predictions.
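A minimal sketch of the joint supervision and the hybrid queries is shown below; the box parameterization, the linear proposal encoder, and the loss weight are assumptions, not the released implementation.

import torch
import torch.nn as nn

def hybrid_queries(proposal_boxes, learnable_queries, box_encoder):
    """proposal_boxes: (N, 7) first-stage boxes (x, y, z, w, l, h, yaw) from the
    perspective head; learnable_queries: (M, D) embedding weights."""
    proposal_q = box_encoder(proposal_boxes)               # (N, D)
    return torch.cat([proposal_q, learnable_queries], 0)   # (N + M, D)

def joint_loss(bev_loss, persp_loss, persp_weight=1.0):
    # The perspective loss is auxiliary supervision applied close to the backbone.
    return bev_loss + persp_weight * persp_loss

# Example wiring (shapes only).
box_encoder = nn.Linear(7, 256)
queries = hybrid_queries(torch.rand(100, 7), nn.Embedding(900, 256).weight,
                         box_encoder)
total = joint_loss(torch.tensor(1.2), torch.tensor(0.8), persp_weight=0.5)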
We conduct extensive experiments to confirm the effec-
tiveness and necessity of our proposed perspective super-
vision. The perspective loss facilitates the adaptation of
the image backbone, resulting in improved detection per-
formance and faster model convergence. Without this supervision, the model cannot achieve comparable results even if trained with a longer schedule. Consequently,
we successfully adapt modern image backbones to the BEV
model, achieving 63.4% NDS on nuScenes [2] test-set.
Our contributions can be summarized as follows:
• We point out that perspective supervision is key to
adapting general 2D image backbones to the BEV
model. We add this supervision explicitly by a detec-
tion loss in the perspective view.
• We present a novel two-stage BEV detector, BEV-
Former v2. It consists of a perspective 3D and a BEV
detection head, and the proposals of the former are
combined with the object queries of the latter.
• We highlight the effectiveness of our approach by com-
bining it with the latest developed image backbones
and achieving significant improvements over previous
state-of-the-art results on the nuScenes dataset.
|
Xu_MEDIC_Remove_Model_Backdoors_via_Importance_Driven_Cloning_CVPR_2023
|
Abstract
We develop a novel method to remove injected backdoors
in deep learning models. It works by cloning the benign
behaviors of a trojaned model to a new model of the same
structure. It trains the clone model from scratch on a very
small subset of samples and aims to minimize a cloning loss
that denotes the differences between the activations of im-
portant neurons across the two models. The set of important
neurons varies for each input, depending on their magni-
tude of activations and their impact on the classification
result. We theoretically show our method can better recover
benign functions of the backdoor model. Meanwhile, we
prove our method can be more effective in removing back-
doors compared with fine-tuning. Our experiments show that
our technique can effectively remove nine different types of
backdoors with minor benign accuracy degradation, outper-
forming the state-of-the-art backdoor removal techniques
that are based on fine-tuning, knowledge distillation, and
neuron pruning.1
|
1. Introduction
Backdoor attack is a prominent threat to applications
of Deep Learning models. Misbehaviors, e.g., model mis-
classification, can be induced by an input stamped with a
backdoor trigger , such as a patch with a solid color or a
pattern [14, 29]. A large number of various backdoor attacks
have been proposed, targeting different modalities and fea-
turing different attack methods [2, 22, 37, 44, 49, 55]. An
important defense method is hence to remove backdoors
from pre-trained models. For instance, Fine-prune proposed
to remove backdoor by fine-tuning a pre-trained model on
clean inputs [27]. Distillation [23] removes neurons that
are not critical for benign functionalities. Model connectiv-
ity repair (MCR) [56] interpolates weight parameters of a
poisoned model and its finetuned version. ANP [50] adver-
(1: Code is available at https://github.com/qiulingxu/MEDIC)
sarially perturbs neuron weights of a poisoned model and
prunes those neurons that are sensitive to perturbations (and
hence considered compromised). While these techniques are
very effective in their targeted scenarios, they may fall short
in some other attacks. For example, if a trojaned model is
substantially robust, fine-tuning may not be able to remove
the backdoor without lengthy retraining. Distillation may be-
come less effective if a neuron is important for both normal
functionalities and backdoor behaviors. And pruning neu-
rons may degrade model benign accuracy. More discussions
of related work can be found in Section 2.
In this paper, we propose a novel backdoor removal tech-
nique MEDIC2as illustrated in Figure 1 (A). It works by
cloning the benign functionalities of a pre-trained model to a
new sanitized model of the same structure . Given a trojaned
model and a small set of clean samples , MEDIC trains the
clone model from scratch. The training is not only driven
by the cross-entropy loss as in normal model training, but
also by forcing the clone model to generate the same internal
activation values as the original model at the correspond-
ing internal neurons , i.e., steps 1⃝,2⃝, and 4⃝in Figure 1
(A). Intuitively, one can consider it essentially derives the
weight parameters in the clone model by resolving the acti-
vation equivalence constraints. There are a large number of
such constraints even with just a small set of clean samples.
However, such faithful cloning likely copies the backdoor
behaviors as well as it tends to generate the same set of
weight values. Therefore, our cloning is further guided by
importance (step ③). Specifically, for each sample x, we
only force the important neurons to have the same activation
values. A neuron is considered important if (1) it tends to be
substantially activated by the sample, when compared to its
activation statistics over the entire population (the activation
criterion in red in Figure 1 (A)), and (2) the large activation
value has a substantial impact on the classification result (the impact criterion). The latter can be determined by analyzing the gradient of the output with respect to the neuron activation. By
constraining only the important neurons and relaxing the
others, the model focuses on cloning the behaviors related
to the clean inputs and precludes the backdoor. The set of weight values derived by such cloning is largely different
from those in the original model. Intuitively, it is like solving
the same set of variables with only a subset of equivalence
constraints. The solution tends to differ substantially from
that by solving the full set of constraints.
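The two criteria and the resulting cloning objective can be sketched as follows; the thresholding rules and the mean-squared matching term are one plausible reading of the description, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def importance_mask(act, running_mean, running_std, grad, act_k=1.0, grad_tau=1e-4):
    """act, grad: (B, N) activations of one layer and d(output)/d(activation);
    running_mean/std: (N,) population statistics of that layer."""
    activation_crit = act > running_mean + act_k * running_std   # unusually high
    impact_crit = grad.abs() > grad_tau                          # affects the output
    return (activation_crit & impact_crit).float()

def cloning_loss(clone_acts, orig_acts, masks, clone_logits, labels, ce_weight=1.0):
    """Match only the important activations, layer by layer, plus the usual CE loss."""
    match = sum(((c - o.detach()) ** 2 * m).sum() / m.sum().clamp(min=1)
                for c, o, m in zip(clone_acts, orig_acts, masks))
    return match + ce_weight * F.cross_entropy(clone_logits, labels)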
Example. Figure 2 shows an example by visualizing the in-
ternal activations and importance values for a trojaned model
on a public benchmark [35]. Image (a) shows a right-turn
traffic sign stamped with a polygon trigger in yellow, which
induces the classification result of stop sign. Image (c) visu-
alizes the activation values for the trojaned image whereas
(d) shows those for its clean version. The differences be-
tween the two, visualized in (e), show that the neurons fall
into the red box are critical to the backdoor behaviors. A
faithful cloning method would copy the behaviors for these
neurons (and hence the backdoor). In contrast, our method
modulates the cloning process using importance values and
hence precludes the backdoor. The last image shows the
importance values with the bright points denoting important
neurons and the dark ones unimportant. Observe that it pre-
vents copying the behaviors of the compromised neurons
for this example as they are unimportant. One may notice
that the unimportant compromised neurons for this example
may become important for another example. Our method
naturally handles this using per-sample importance values.
□
Our technique is different from fine-tuning as it trains
from scratch. It is also different from knowledge distilla-
tion [25] shown in Figure 1 (B), which aims to copy be-
haviors across different model structures and constrains log-
its equivalence (and sometimes internal activation equiva-
lence [53] as well). In contrast, by using the same model
structure, we have a one-to-one mapping for individual neu-
rons in the two models such that we can enforce strong
constraints on internal equivalence. This allows us to copy
behaviors with only a small set of clean samples.
We evaluate our technique on nine different kinds of
backdoor attacks. We compare with five latest backdoor
removal methods (details in related work). The results show
our method can reduce the attack success rate to 8.5% on
average with only 2% benign accuracy degradation. It con-
sistently outperforms the other baselines by 25%. It usually
takes 15 mins to sanitize a model. We have also conducted
an adaptive attack in which the trigger is composed of ex-
isting benign features in clean samples. It represents the most adversarial setting for MEDIC. Our ablation study shows
all the design choices are critical. For example, without
using importance values, it can only reduce ASR to 36% on
average.
In summary, our main contributions include the following.
•We propose a novel importance-driven cloning method
to remove backdoors. It only requires a small set of
clean samples.
•We theoretically analyze the advantages of the cloning
method.
•We empirically show that MEDIC outperforms the state-
of-the-art methods.
|
Yu_Phase-Shifting_Coder_Predicting_Accurate_Orientation_in_Oriented_Object_Detection_CVPR_2023
|
Abstract
With the vigorous development of computer vision, ori-
ented object detection has gradually been featured. In this
paper, a novel differentiable angle coder named phase-
shifting coder (PSC) is proposed to accurately predict the
orientation of objects, along with a dual-frequency version
(PSCD). By mapping the rotational periodicity of differ-
ent cycles into the phase of different frequencies, we pro-
vide a unified framework for various periodic fuzzy prob-
lems caused by rotational symmetry in oriented object de-
tection. Upon such a framework, common problems in
oriented object detection such as boundary discontinuity
and square-like problems are elegantly solved in a unified
form. Visual analysis and experiments on three datasets
prove the effectiveness and the potentiality of our approach.
When facing scenarios requiring high-quality bounding
boxes, the proposed methods are expected to give a com-
petitive performance. The codes are publicly available at
https://github.com/open-mmlab/mmrotate.
|
1. Introduction
As a fundamental task in computer vision, object de-
tection has been extensively studied. Early researches are
mainly focused on horizontal object detection [33], on the
ground that objects in natural scenes are usually oriented
upward due to gravity. However, in other domains such as
aerial images [2,12,20,23,25], scene text [7,10,13,14,34],
and industrial inspection [11, 19], oriented bounding boxes
are considered more preferable. Upon requirements in these
scenarios, oriented object detection has gradually been fea-
tured.
*Corresponding author is Feipeng Da. This work is supported by Special Project on Basic Research of Frontier Leading Technology of Jiangsu Province of China (BK20192004C).
At present, several solutions around oriented object detection have been developed, among which the most intu-
itive way is to modify horizontal object detectors by adding
an output channel to predict the orientation angle. Such a
solution faces two problems:
1) Boundary problem [24]: Boundary discontinuity
problem is often caused by angular periodicity. Assuming
orientation −π/2 is equivalent to π/2, the network output is
sometimes expected to be −π/2 and sometimes π/2 when
facing the same input. Such a situation makes the network
confused about in which way it should perform regression.
2) Square-like problem [27]: Square-like problem usu-
ally occurs when a square bounding box cannot be uniquely
defined. Specifically, a square box should be equivalent to
a 90° rotated one, but the regression loss between them is
high due to the inconsistency of angle parameters. Such
ambiguity can also seriously confuse the network.
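A toy numerical illustration of the boundary discontinuity (purely illustrative and not tied to any specific detector): two orientations that are physically almost identical incur a near-maximal regression loss under naive angle regression.

```python
import math

pred, target = math.pi / 2 - 0.01, -math.pi / 2       # physically ~0.57 degrees apart (period pi)
naive_l1 = abs(pred - target)                         # ~3.13: the regressor is heavily penalized
true_angular_err = min(naive_l1, math.pi - naive_l1)  # ~0.01 rad: the actual rotational difference
print(naive_l1, true_angular_err)
```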
A more comprehensive introduction to these problems
can be found in previous researches [25, 27]. Also, sev-
eral methods have been proposed to address these problems,
which will be reviewed in Sec. 2.
Through rethinking the above problems, we find that
they can inherently be unified as rotationally symmetric
problems (boundary under 180° and square-like under 90°
rotation), which is quite similar to the periodic fuzzy prob-
lem of the absolute phase acquisition [37] in optical mea-
surement. Inspired by this, we come up with an idea to
utilize phase-shifting coding, a technique widely used in
optical measurement [36], for angle prediction in oriented
object detection. The technique has the potential to solve
both boundary discontinuity and square-like problems:
1) Phase-shifting encodes the measured distance (or par-
allax) into the periodic phase in optical measurement. The
orientation angle can also be encoded into the periodic
phase, and boundary discontinuity is thus inherently solved.
2) Phase-shifting also has the periodic fuzzy problem,
which is similar to the square-like problem, and many solu-
tions exist. For example, the dual-frequency phase-shifting
technique solves the periodic fuzzy problem by mixing
phases of different frequencies (also known as phase un-
wrapping [37]).
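To make the principle concrete, the snippet below encodes an angle into N phase-shifted cosine values and recovers the wrapped phase with the standard N-step formula from optical measurement. It is a simplified sketch of the idea, not the exact PSC/PSCD coder released in mmrotate; the three-step setting and unit frequency are assumptions.

```python
import numpy as np

def psc_encode(theta, n_steps=3, freq=1):
    """Encode an angle (radians) into n phase-shifted cosine measurements."""
    shifts = 2 * np.pi * np.arange(n_steps) / n_steps
    return np.cos(freq * theta + shifts)

def psc_decode(codes, n_steps=3):
    """Recover the (wrapped) phase from the phase-shifted measurements."""
    shifts = 2 * np.pi * np.arange(n_steps) / n_steps
    num = -np.sum(codes * np.sin(shifts))
    den = np.sum(codes * np.cos(shifts))
    return np.arctan2(num, den)  # wrapped to (-pi, pi]; continuity at the boundary comes for free

theta = 0.7
print(psc_decode(psc_encode(theta)))  # ~0.7
```

A dual-frequency variant would encode the angle at two frequencies and combine the decoded phases to remove the residual ambiguity, mirroring phase unwrapping.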
Motivations of this paper:
Based on the above analysis, we believe that the phase-
shifting technique can be modified and adapted to oriented
object detection. What is the principle of the phase-shifting
angle coder? How to integrate this module into a deep
neural network? Will this technique result in better perfor-
mance? These questions are what this paper is for.
Contributions of this paper:
1) We are the first to utilize the phase-shifting coder to
cope with the angle regression problem in the deep learning
area. An integral and stable solution is elaborated on in
this paper. Most importantly, the codes are well-written,
publicly available, and with reproducible results.
2) The performance of the proposed methods is evalu-
ated through extensive experiments. The experimental re-
sults are of high quality: all the listed results are retested
in identical environments to ensure fair comparisons (in-
stead of being copied from other papers).
The rest of this paper is organized as follows:
Section 2 reviews the related methods around oriented
object detection. Section 3 describes the principles of the
phase-shifting coder in detail. Section 4 conducts exper-
iments on several datasets to evaluate the performance of
the proposed methods. Section 5 concludes the paper.
|
Zhang_VQACL_A_Novel_Visual_Question_Answering_Continual_Learning_Setting_CVPR_2023
|
Abstract
Research on continual learning has recently led to a va-
riety of work in unimodal community, however little atten-
tion has been paid to multimodal tasks like visual question
answering (VQA). In this paper, we establish a novel VQA
Continual Learning setting named VQACL, which contains
two key components: a dual-level task sequence where vi-
sual and linguistic data are nested, and a novel composi-
tion testing containing new skill-concept combinations. The
former devotes to simulating the ever-changing multimodal
datastream in real world and the latter aims at measuring
models’ generalizability for cognitive reasoning. Based on
our VQACL, we perform in-depth evaluations of five well-
established continual learning methods, and observe that
they suffer from catastrophic forgetting and have weak gen-
eralizability. To address above issues, we propose a novel
representation learning method, which leverages a sample-
specific and a sample-invariant feature to learn represen-
tations that are both discriminative and generalizable for
VQA. Furthermore, by respectively extracting such repre-
sentation for visual and textual input, our method can ex-
plicitly disentangle the skill and concept. Extensive exper-
imental results illustrate that our method significantly out-
performs existing models, demonstrating the effectiveness
and compositionality of the proposed approach. The code
is available at https://github.com/zhangxi1997/VQACL.
|
1. Introduction
Continual learning [43] has recently gained a lot of at-
tention in the deep learning community because it enables
models to learn continually on a sequence of non-stationary
tasks and is close to the human learning process [2, 36].
However, the vibrant research in continual learning mainly
focuses on unimodal tasks such as image classification [37,
46, 51] and sequence tagging [4, 48], while the demands of
multimodal tasks are largely ignored. In recent years, the volume of
multimodal data has grown tremendously [8, 56, 57]. For
example, tens of millions of texts, images, and videos are
uploaded to social media platforms every day, such as Face-
Figure 1. The illustration of real-world scenario for VQA system,
which may continuously receive new types of questions, fresh
visual concepts, and novel skill-concept compositions.
book and Twitter. To cope with such constantly emerging
real-world data, a practical AI system should be capable of
continually learning from multimodal sources while allevi-
ating the forgetting of previously learned knowledge.
Visual Question Answering (VQA) is a typical multi-
modal task and has drawn increasing interest over the past
few years [12, 49, 60], which can automatically generate
a textual answer given a question and an image. To deal
with ever-changing questions and visual scenes in real life,
applying continual learning to VQA is essential. However,
it is not easy to set up a suitable continual learning setting
for this task. We identify that two vital issues need to be
considered. First, the VQA input comes from both vision
and linguistic modalities, thus the task setting should si-
multaneously tackle continuous data from both modalities
in a holistic manner. For example, as shown in Fig. 1, the
AI system might deal with new types of questions (e.g.,
Where..., Why...) as well as fresh visual concepts (e.g., Lo-
quat, Deer). Second, compositionality [24], a vital property
of cognitive reasoning, should be considered in the VQA
continual learning. The compositionality here denotes the
model’s generalization ability towards novel combinations
of reasoning skills (i.e., question type) and visual concepts
(i.e., image object). As illustrated in Fig. 1, if the system
has been trained on the question type Count (e.g., How many)
with a variety of objects (e.g., Person, Cat, and Donut), as
well as on another question type (e.g., What color) about a new
object (e.g., Truck), then it is expected to answer a novel
question like ‘How many trucks are there?’ , even if the
composition of skill Count and concept Truck has yet to be
seen. Such ability is very crucial when deploying a model
in the real world because it is infeasible to view all possible
skill-concept compositions. Remarkably, several works have
addressed continual learning with VQA [14, 16, 25]. How-
ever, they still apply a classic unimodal-like continual learn-
ing setting for the task by devising a set of VQA tasks sim-
ply based on question type or image scene, which ignores
above two crucial issues: handling continuous multimodal
data simultaneously and testing model’s compositionality.
To achieve these two keypoints, in this paper, we propose
a novel generative VQA continual learning setting named
VQACL based on two well-known datasets: VQA v2 [13]
and NExT-QA [49]. Specifically, as shown in Fig. 2(a), our
VQACL setting consists of a dual-level task sequence. In
the outer level, we set up a sequence of linguistic-driven
tasks to evaluate models’ ability for the ever-changing ques-
tion types. Moreover, to process the continuously shifted
visual contents, for each outer level task, we further con-
struct a series of randomly ordered visual-driven subtasks
according to image object categories in the inner level. Such
dual-level setting is similar to the cognitive process of chil-
dren, who master a skill by trying it on various objects. For
example, when learning to recognize colors, a baby usually
asks all the things surrounding him ‘ what color ’ they are.
Besides, to evaluate models’ compositionality, we construct
a novel composition split. As shown in Fig. 2(b), we remove
a visual-driven subtask from each task in the outer level
during training and utilize it for testing. In this way, the
testing data contain novel skill-concept combinations that
are not seen at the training time. In conclusion, on the
one hand, our VQACL setting requires models to perform
effective multimodal knowledge transfer from old tasks to
new tasks while mitigating catastrophic forgetting [31]. On
the other hand, the model should be capable of generalizing
to novel compositions for cognitive reasoning.
Using the proposed VQACL setting, we establish an
initial set of baselines by adapting several well-known and
state-of-the-art continual learning methods [1, 3, 7, 22, 45]
from image classification to the generative VQA tasks.
The baselines are implemented on an advanced vision-and-
language transformer [9] without pre-training. After bench-
marking these baseline models, we find that few of them
can do well in the novel composition testing, which limits
their wide applications in practice. To enhance the model’s
compositionality, it is critical to learn an excellent repre-
sentation that is discriminative for seen skills or concepts,
and is generalizable to novel skill-concept compositions. To
achieve it, recent static VQA methods [27, 47, 59] always
first learn joint representations for visual and textual inputs,
and then utilize contrastive learning to implicitly disentan-
gle the skill and concept within the joint feature. However,
such implicit disentangling leaves existing models dogged
by the interference between the skill and concept, leading
to suboptimal generalization results. Moreover, the complex
contrastive sample-building process makes these works
difficult to apply to continual learning.
Inspired by the above discussions, we propose a novel repre-
sentation learning method for VQACL, which introduces a
sample-specific (SS) and a sample-invariant (SI) feature to
learn better representations that are both discriminative and
generalizable. To explicitly decouple the reasoning skills
and visual concepts, we learn the SS and SI representation
for visual and textual input separately. Specifically, the
SS feature for each modality is learned through a trans-
former encoder that stacks multiple self-attention layers,
which can encode the most attractive and salient contents
into the SS feature to make it discriminative. For the SI
feature, we resort to prototype learning to aggregate the
object class or question type information into it. Because
the category knowledge is stable and representative across
different scenarios, the SI feature can possess strong gen-
eralizability. Besides, to fit the continual learning setting,
we constantly update the SI feature in training. In this way,
it can capture new typical knowledge while retaining his-
torical experience, helping alleviate the forgetting problem.
In conclusion, combining the SS and SI features, we can
obtain the representation that is conducive to the model’s
compositional discriminability and generalizability.
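A minimal sketch of how the sample-invariant branch could be maintained under continual learning, using per-class prototypes updated with an exponential moving average; the momentum value, tensor shapes, and helper names are illustrative assumptions rather than the released VQACL implementation:

```python
import torch

class PrototypeBank:
    """Sample-invariant (SI) features as class prototypes, updated continually (illustrative)."""

    def __init__(self, num_classes, dim, momentum=0.99):
        self.protos = torch.zeros(num_classes, dim)
        self.initialized = torch.zeros(num_classes, dtype=torch.bool)
        self.m = momentum

    @torch.no_grad()
    def update(self, feats, labels):
        # feats: (B, dim) pooled features of the current batch; labels: (B,) class ids.
        for c in labels.unique():
            mean_c = feats[labels == c].mean(dim=0)
            if self.initialized[c]:
                # EMA retains old-task knowledge while absorbing the new task's statistics.
                self.protos[c] = self.m * self.protos[c] + (1 - self.m) * mean_c
            else:
                self.protos[c] = mean_c
                self.initialized[c] = True

    def lookup(self, labels):
        # Returns the generalizable SI feature to be fused with the sample-specific (SS) one.
        return self.protos[labels]
```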
In summary, the major contributions of our work are
threefold: (1) We introduce a new continual learning set-
ting VQACL to simulate real-world generative VQA. It can
not only simultaneously tackle the continuous data from
vision and linguistic modality, but also test models’ com-
positionality for cognitive reasoning. (2) We propose a sim-
ple but effective representation learning method for contin-
ual VQA, which deploys a discriminative sample-
specific feature and a generalizable sample-invariant feature
to alleviate forgetting and enhance the models’ composition
ability. (3) We re-purpose and evaluate five well-established
methods on our VQACL, and observe that they struggle
to obtain satisfactory results. Remarkably, our model con-
sistently achieves the best performance, demonstrating the
effectiveness and compositionality of our approach.
|
Yang_Behavioral_Analysis_of_Vision-and-Language_Navigation_Agents_CVPR_2023
|
Abstract
To be successful, Vision-and-Language Navigation
(VLN) agents must be able to ground instructions to ac-
tions based on their surroundings. In this work, we develop
a methodology to study agent behavior on a skill-specific
basis – examining how well existing agents ground instruc-
tions about stopping, turning, and moving towards speci-
fied objects or rooms. Our approach is based on gener-
ating skill-specific interventions and measuring changes in
agent predictions. We present a detailed case study analyz-
ing the behavior of a recent agent and then compare multi-
ple agents in terms of skill-specific competency scores. This
analysis suggests that biases from training have lasting ef-
fects on agent behavior and that existing models are able to
ground simple referring expressions. Our comparisons be-
tween models show that skill-specific scores correlate with
improvements in overall VLN task performance.
|
1. Introduction
Following navigation instructions requires coordinating
observations and actions in accordance with the natural lan-
guage. Stopping when told to stop. Turning when told to
turn. And appropriately grounding referring expressions
when an action is conditioned on some aspect of the envi-
ronment. All three of these examples are required when fol-
lowing the instruction “Turn left then go down the hallway
until you see a desk. Walk towards the desk and then stop. ”
In this work, we examine how well current instruction-
following agents can execute different types of these sub-
behaviors which we will refer to as skills .
We situate our study in the popular Vision-and-Language
Navigation (VLN) paradigm [2]. In a VLN episode, an
agent is spawned in a never-before-seen environment and
must navigate to a goal location specified by a natural
language navigation instruction. An agent’s instruction-
following capabilities are typically measured at the episode
level – examining whether an agent reaches near the goal
(success rate [2]), how efficiently it does so (SPL [1]), or
how well its trajectory matches the ground truth path which
the human-generated instruction was based on (nDTW
[15]). These metrics are useful for comparing agents in ag-
gregate, but take a perspective that has little to say about an
agent’s fine grained competencies or what sub-instructions
it is able to ground appropriately.
In this work, we design an experimental paradigm based
on controlled interventions to analyze fine-grained agent be-
haviors. We focus our study on an agent’s ability to ex-
ecute unconditional instructions like stopping or turning,
as well as, conditional instructions that require more vi-
sual grounding like moving towards specified objects and
rooms. Our approach leverages annotations from RxR [13]
to produce truncated trajectory-instruction pairs that can
then be augmented with an additional skill-specific sub-
instruction. We carefully filter these trajectories and gen-
erate template-based sub-instructions to build non-trivial
intervention episodes that evaluate an agent’s ability to
ground skill-specific language to the appropriate actions.
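The measurement itself can be pictured as comparing the agent's action distribution on a truncated episode with and without the appended sub-instruction. The sketch below assumes a hypothetical `agent.predict_action_probs(...)` interface and a discrete action space containing a STOP action; it illustrates the intervention protocol only and is not the released vln-behave code:

```python
STOP = 0  # index of the stop action in the agent's discrete action space (assumed)

def stop_intervention_effect(agent, episode, sub_instruction=" Stop here."):
    """Change in stop probability induced by appending a skill-specific sub-instruction."""
    base_probs = agent.predict_action_probs(episode.instruction,
                                            episode.observation_history)
    edited_probs = agent.predict_action_probs(episode.instruction + sub_instruction,
                                              episode.observation_history)
    # A positive shift means the agent grounds the "stop" language to the stop action.
    return edited_probs[STOP] - base_probs[STOP]

# Aggregating this shift over many truncated episodes yields a skill-specific competency score.
```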
To demonstrate the value of this approach, we present a
case study analyzing the behavior of a contemporary VLN
model [6]. While we find evidence that the model can reli-
ably ground some skill-specific language, our analysis also
reveals that its errors are not random; rather, they re-
flect a systematic bias towards forward actions learned dur-
ing training. For object- or room-seeking skills, we find
only modest relationships between instructions and agent
actions. Finally, we derive aggregate skill-specific scores
and compare across VLN models with different overall task
performance. We find that higher skill-specific scores cor-
relate with higher task performance; however, not all skills
share the same scale of improvement between weaker and
stronger VLN models – suggesting that improvements in
VLN may be fueled by some skills more than others.
Contributions. To summarize this work, we:
- Develop an intervention-based behavioral analysis
paradigm for evaluating the behavior of VLN agents,1
- Provide a case study on a contemporary VLN agent [6],
evaluating fine-grained competencies and biases, and
- Examine the relationships between skill-specific metrics
1https://github.com/Yoark/vln-behave
and overall VLN task performance.
|
Yin_Gloss_Attention_for_Gloss-Free_Sign_Language_Translation_CVPR_2023
|
Abstract
Most sign language translation (SLT) methods to date
require the use of gloss annotations to provide additional
supervision information, however, the acquisition of gloss
is not easy. To solve this problem, we first perform an anal-
ysis of existing models to confirm how gloss annotations
make SLT easier. We find that it can provide two aspects
of information for the model, 1) it can help the model im-
plicitly learn the location of semantic boundaries in contin-
uous sign language videos, 2) it can help the model under-
stand the sign language video globally. We then propose
gloss attention , which enables the model to keep its atten-
tion within video segments that have the same semantics lo-
cally, just as gloss helps existing models do. Furthermore,
we transfer the knowledge of sentence-to-sentence similar-
ity from the natural language model to our gloss atten-
tion SLT network (GASLT) to help it understand sign lan-
guage videos at the sentence level. Experimental results on
multiple large-scale sign language datasets show that our
proposed GASLT model significantly outperforms existing
methods. Our code is provided in https://github.
com/YinAoXiong/GASLT .
|
1. Introduction
Sign languages are the primary means of communication
for an estimated 466 million deaf and hard-of-hearing peo-
ple worldwide [52]. Sign language translation (SLT), a so-
cially important technology, aims to convert sign language
videos into natural language sentences, making it easier for
deaf and hard-of-hearing people to communicate with hear-
ing people. However, the grammatical differences between
sign language and natural language [5, 55] and the unclear
semantic boundaries in sign language videos make it diffi-
cult to establish a mapping relationship between these two
kinds of sequences.
Existing SLT methods can be divided into three cate-
gories, 1) two-stage gloss-supervised methods, 2) end-to-
*Both authors contributed equally to this research.
†Corresponding author.
[Figure 1 attention maps: (a) self-attention + gloss-supervised, (b) self-attention + gloss-free, (c) gloss-attention + gloss-free.]
Figure 1. Visualization of the attention map in the shallow encoder
layer of three different SLT models. As shown in (a), an essential
role of gloss is to provide alignment information for the model
so that it can focus on relatively more important local areas. As
shown in (b), the traditional attention calculation method is diffi-
cult to converge to the correct position after losing the supervision
signal of the gloss. However, our proposed method (c) can still
flexibly maintain the attention in important regions (just like (a))
due to the injection of inductive bias, which can partially replace
the role played by gloss.
end gloss-supervised methods, and 3) end-to-end gloss-free
methods. The first two approaches rely on gloss annota-
tions, chronologically labeled sign language words, to assist
the model in learning alignment and semantic information.
However, the acquisition of gloss is expensive and cumber-
some, as its labeling takes a lot of time for sign language ex-
perts to complete [5]. Therefore, more and more researchers
have recently started to turn their attention to the end-to-
end gloss-free approach [42, 51]. It learns directly to trans-
late sign language videos into natural language sentences
without the assistance of glosses, which makes the approach
more general while making it possible to utilize a broader
range of sign language resources. The gloss attention SLT
network (GASLT) proposed in this paper is a gloss-free SLT
method, which improves the performance of the model and
removes the dependence of the model on gloss supervision
by injecting inductive bias into the model and transferring
knowledge from a powerful natural language model.
A sign language video corresponding to a natural lan-
guage sentence usually consists of many video clips with
complete independent semantics, corresponding one-to-one
with gloss annotations in the semantic and temporal order.
Gloss can provide two aspects of information for the model.
On the one hand, it can implicitly help the model learn
the location of semantic boundaries in continuous sign lan-
guage videos. On the other hand, it can help the model
understand the sign language video globally.
In this paper, the GASLT model we designed obtains
information on these two aspects from other channels to
achieve the effect of replacing gloss. First, we observe that
the semantics of sign language videos are temporally local-
ized, which means that adjacent frames have a high prob-
ability of belonging to the same semantic unit. The visu-
alization results in Figure 1a and the quantitative analysis
results in Table 1 support this view. Inspired by this, we
design a new dynamic attention mechanism called gloss at-
tention to inject inductive bias [49] into the model so that
it tends to pay attention to the content in the local same se-
mantic unit rather than others. Specifically, we first limit
the number of frames that each frame can pay attention to,
and set its initial attention frame to frames around it so that
the model can be biased to focus on locally closer frames.
However, the attention mechanism designed in this way is
static and not flexible enough to handle the information at
the semantic boundary well. We then calculate an offset for
each attention position according to the input query so that
the position of the model’s attention can be dynamically ad-
justed on the original basis. It can be seen that, as shown in
Figure 1c, our model can still focus on the really important
places like Figure 1a after losing the assistance of gloss. In
contrast, as shown in Figure 1b, the original method fails to
converge to the correct position after losing the supervision
signal provided by the gloss.
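A minimal single-head sketch of this local, offset-adjusted attention in PyTorch: each query frame attends only to a small window of neighbouring frames whose positions are shifted by offsets predicted from the query. The window size, the rounding of offsets, and the layer layout are simplifying assumptions; the released GASLT code should be consulted for the exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalOffsetAttention(nn.Module):
    def __init__(self, dim, window=7):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.offset = nn.Linear(dim, window)   # one predicted offset per attended position
        self.window = window
        self.scale = dim ** -0.5

    def forward(self, x):                      # x: (B, T, C) frame features
        B, T, C = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Initial attention positions: a symmetric window around each frame (inductive bias).
        base = torch.arange(T, device=x.device).unsqueeze(1) + \
               torch.arange(-(self.window // 2), self.window // 2 + 1, device=x.device)
        # Dynamic part: query-dependent offsets shift the window, e.g. across semantic boundaries.
        idx = (base.unsqueeze(0) + self.offset(q).round().long()).clamp(0, T - 1)   # (B, T, W)
        k_sel = torch.gather(k.unsqueeze(1).expand(B, T, T, C), 2,
                             idx.unsqueeze(-1).expand(B, T, self.window, C))
        v_sel = torch.gather(v.unsqueeze(1).expand(B, T, T, C), 2,
                             idx.unsqueeze(-1).expand(B, T, self.window, C))
        attn = (q.unsqueeze(2) * k_sel).sum(-1) * self.scale                        # (B, T, W)
        return (F.softmax(attn, dim=-1).unsqueeze(-1) * v_sel).sum(2)               # (B, T, C)
```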
Second, to enable the model to understand the semantics
of sign language videos at the sentence level and disam-
biguate local sign language segments, we transfer knowl-
edge from language models trained with rich natural lan-
guage resources to our model. Considering that there is a
one-to-one semantic correspondence between natural lan-
guage sentences and sign language videos, we can in-
directly obtain the similarity relationships between sign
language videos by inputting natural language sentences
into language models such as sentence bert [56]. Us-
ing this similarity knowledge, we can enable the model
to understand the semantics of sign language videos as a
whole, which can partially replace the second aspect of
the information provided by gloss. Experimental results
on three datasets RWTH-PHOENIX-WEATHER-2014T
(PHOENIX14T) [5], CSL-Daily [70] and SP-10 [66] show
that the translation performance of the GASLT model ex-
ceeds that of existing state-of-the-art methods, which proves
the effectiveness of our proposed method. We also conduct
quantitative analysis and ablation experiments to verify the
accuracy of our proposed ideas and the effectiveness of our
model approach.
To summarize, the contributions of this work are as follows:
• We analyze the role of gloss annotations in sign lan-
guage translation.
• We design a novel attention mechanism and knowl-
edge transfer method to replace the role of gloss in sign
language translation partially.
• Extensive experiments on three datasets show the ef-
fectiveness of our proposed method. A broad range of
new baseline results can guide future research in this
field.
|
Zhang_Towards_Efficient_Use_of_Multi-Scale_Features_in_Transformer-Based_Object_Detectors_CVPR_2023
|
Abstract
Multi-scale features have been proven highly effective
for object detection but often come with huge and even
prohibitive extra computation costs, especially for the re-
cent Transformer-based detectors. In this paper, we pro-
pose Iterative Multi-scale Feature Aggregation (IMFA) – a
generic paradigm that enables efficient use of multi-scale
features in Transformer-based object detectors. The core
idea is to exploit sparse multi-scale features from just a
few crucial locations, and it is achieved with two novel de-
signs. First, IMFA rearranges the Transformer encoder-
decoder pipeline so that the encoded features can be iter-
atively updated based on the detection predictions. Second,
IMFA sparsely samples scale-adaptive features for refined
detection from just a few keypoint locations under the guid-
ance of prior detection predictions. As a result, the sam-
pled multi-scale features are sparse yet still highly ben-
eficial for object detection. Extensive experiments show
that the proposed IMFA boosts the performance of multiple
Transformer-based object detectors significantly yet with
only slight computational overhead.
|
1. Introduction
Detecting objects of vastly different scales has always
been a major challenge in object detection [28]. Fortu-
nately, strong evidence [11, 22, 25, 48, 69, 72] shows that
object detectors can significantly benefit from multi-scale
features while dealing with large scale variation. For
ConvNet-based object detectors like Faster R-CNN [42] and
FCOS [49], Feature Pyramid Network (FPN) [25] and its
variants [12, 18, 19, 30, 48, 69, 70] have become the go-to
components for exploiting multi-scale features.
Other than ConvNet-based object detectors, the recently
proposed DEtection TRansformer (DETR) [4] has estab-
lished a fully end-to-end object detection paradigm with
*marks corresponding author.†marks equal technical contribution.
Project Page: https://github.com/ZhangGongjie/IMFA.
[Figure 1 plot: COCO val2017 AP (%) versus GFLOPs for Cond-DETR, Anchor-DETR, and DAB-DETR under single-scale (SS), dilated-conv (DC), and multi-scale w/ IMFA settings, compared with SMCA-DETR (MS) and Deformable-DETR (MS).]
Figure 1. The proposed Iterative Multi-scale Feature Aggregation
(IMFA) is a generic approach for efficient use of multi-scale fea-
tures in Transformer-based object detectors. It boosts detection
accuracy on multiple object detectors at minimal costs of addi-
tional computational overhead. Results are obtained with ResNet-
50. Best viewed in color.
promising performance. However, naively incorporating
multi-scale features using FPN in these Transformer-based
detectors [4,11,20,29,35,55,66,72] often brings enormous
and even unfeasible computation costs, primarily due to the
poor efficiency of the attention mechanism in processing
high-resolution features. Concretely, to handle a feature
map with a spatial size of H×W, ConvNet requires a com-
putational cost of O(HW), while the complexity of the at-
tention mechanism in Transformer-based object detectors is
O(H2W2). To mitigate this issue, Deformable DETR [72]
and Sparse DETR [43] replace the original global dense
attention with sparse attention. SMCA-DETR [11] re-
stricts most Transformer encoder layers to be scale-specific,
with only one encoder layer to integrate multi-scale fea-
tures. However, as the number of tokens increases quadrati-
cally w.r.t. feature map size (typically 20x ∼80x compared to
single-scale), these methods are still costly in computation
and memory consumption, and rely on special operators like
deformable attention [72] that introduces extra complexity
for deployment. To the best of our knowledge, there is yet
no generic approach that can efficiently exploit multi-scale
features for Transformer-based object detectors.
In this paper, we present Iterative Multi-scale Feature
Aggregation (IMFA) , a concise and effective technique that
can serve as a generic paradigm for efficient use of multi-
scale features in Transformer-based object detectors. The
motivation comes from two key observations: (i)the com-
putation of high-resolution features is highly redundant as
the background usually occupies most of the image space,
thus only a small portion of high-resolution features are
useful to object detection; (ii)unlike ConvNet, the Trans-
former’s attention mechanism does not require grid-shaped
feature maps, which offers the feasibility of aggregating
multi-scale features only from some specific regions that
are likely to contain objects of interest. The two observa-
tions motivate us to sparsely sample multi-scale features
from just a few informative locations and then aggregate
them with encoded image features in an iterative manner.
Concretely, IMFA consists of two novel designs in the
Transformer-based detection pipelines. First , IMFA rear-
ranges the encoder-decoder pipeline so that each encoder
layer is immediately connected to its corresponding de-
coder layer. This design enables iterative update of en-
coded image features along with refined detection predic-
tions. Second , IMFA sparsely samples multi-scale features
from the feature pyramid generated by the backbone, with
the sampling process guided by previous detection predic-
tions. Specifically, motivated by the spatial redundancy
of high-resolution features, IMFA only focuses on a few
promising regions with high likelihood of object occurrence
based on prior predictions. Furthermore, inspired by the
significance of objects’ keypoints for recognition and lo-
calization [39, 59, 66, 71], IMFA first searches several key-
points within each promising region, and then samples use-
ful features around these keypoints at adaptively selected
scales. The sampled features are finally fed to the subse-
quent encoder layer along with the image features encoded
by the previous layer. With the two new designs, the pro-
posed IMFA aggregates only the most crucial multi-scale
features from those informative locations. Since the number
of the aggregated features is small, IMFA introduces min-
imal computational overhead while consistently improving
the detection performance of Transformer-based object de-
tectors. It is noteworthy that IMFA is a generic paradigm for
efficient use of multi-scale features: (i)as shown in Fig. 1,
it can be easily integrated with multiple Transformer-based
object detectors with consistent performance boosts; (ii)as
discussed in Section 5.4, IMFA has the potential to boost
DETR-like models on tasks beyond object detection.
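The sampling step can be pictured as bilinear lookups into the backbone feature pyramid at a few keypoints around each prior prediction, with one scale chosen per keypoint. The sketch below uses `F.grid_sample` for the lookup and assumes the keypoint coordinates and per-keypoint scale assignments are given; the tensor layouts and the scale-selection rule are illustrative assumptions, not the exact IMFA implementation:

```python
import torch
import torch.nn.functional as F

def sample_sparse_multiscale(feature_pyramid, keypoints_xy, levels):
    """Gather sparse multi-scale features at keypoint locations (illustrative sketch).

    feature_pyramid: list of L maps, each (B, C, H_l, W_l)
    keypoints_xy:    (B, N, 2) keypoint coordinates normalized to [0, 1]
    levels:          (B, N) pyramid level selected per keypoint (e.g. from prior box size)
    """
    B, N, _ = keypoints_xy.shape
    grid = keypoints_xy * 2 - 1                          # grid_sample expects [-1, 1]
    sampled = torch.zeros(B, N, feature_pyramid[0].shape[1], device=keypoints_xy.device)
    for l, feat in enumerate(feature_pyramid):
        # (B, C, 1, N) <- bilinear lookup of all keypoints in this level's map.
        out = F.grid_sample(feat, grid.unsqueeze(1), align_corners=False)
        out = out.squeeze(2).permute(0, 2, 1)            # (B, N, C)
        mask = (levels == l).unsqueeze(-1)               # keep only keypoints assigned to level l
        sampled = torch.where(mask, out, sampled)
    return sampled  # appended to the encoded image tokens of the next encoder layer
```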
To summarize, the contributions of this work are threefold.
• We propose a novel DETR-based detection pipeline, where encoded features can be iteratively updated along
with refined detection predictions. This new pipeline al-
lows to leverage intermediate predictions as guidance for
robust and efficient multi-scale feature encoding.
• We propose a sparse sampling strategy for multi-scale
features, which first identifies several promising regions
under the guidance of prior detections, then searches sev-
eral keypoints within each promising region, and finally
samples their features at adaptively selected scales. We
demonstrate that such sparse multi-scale features can sig-
nificantly benefit object detection.
• Based on the two contributions above, we propose Iter-
ative Multi-scale Feature Aggregation (IMFA) – a sim-
ple and generic paradigm that enables efficient use of
multi-scale features in Transformer-based object detec-
tors. IMFA consistently boosts detection performance
on multiple object detectors, yet remains computationally
efficient. This is the pioneering work that investigates a
generic approach for exploiting multi-scale features effi-
ciently in Transformer-based object detectors.
|
Yi_A_Simple_Framework_for_Text-Supervised_Semantic_Segmentation_CVPR_2023
|
Abstract
Text-supervised semantic segmentation is a novel re-
search topic that allows semantic segments to emerge with
image-text contrasting. However, pioneering methods could
be subject to specifically designed network architectures.
This paper shows that a vanilla contrastive language-image
pre-training (CLIP) model is an effective text-supervised se-
mantic segmentor by itself. First, we reveal that a vanilla
CLIP is inferior to localization and segmentation due to
its optimization being driven by densely aligning visual
and language representations. Second, we propose the
locality-driven alignment (LoDA) to address the problem,
where CLIP optimization is driven by sparsely aligning lo-
cal representations. Third, we propose a simple segmenta-
tion (SimSeg) framework. LoDA and SimSeg jointly amelio-
rate a vanilla CLIP to produce impressive semantic segmen-
tation results. Our method outperforms previous state-of-
the-art methods on PASCAL VOC 2012, PASCAL Context
and COCO datasets by large margins. Code and models
are available at github.com/muyangyi/SimSeg .
|
1. Introduction
Semantic segmentation is a fundamental task in com-
puter vision, with the purpose of allocating semantic classes
to the corresponding pixels. Most existing methods for se-
mantic segmentation are restricted by the scale of datasets.
The quantity or category is insufficient due to the high cost
of annotating segmentation masks. Text-supervised seman-
tic segmentation makes a breakthrough for this challenge,
where models are pre-trained with image-text pairs and
zero-shot transferred to semantic segmentation.
Figure 1 illustrates an abstraction of text-supervised
semantic segmentation in comparison with existing task
paradigms. The base domain is denoted as D_B, which
contains the manually labeled samples. The target do-
* Corresponding authors.
†Work was done when the first author was an intern at ByteDance.
[Figure 1 diagram: Zero-Shot, Open-Vocabulary (LSeg, OpenSeg), Weakly Supervised, and Text-Supervised (GroupViT, SimSeg (Ours)) paradigms compared by their required image-text pairs, segmentation annotations, fine-tuned modules, and backbone design.]
Figure 1. A comparison of our proposed approach with existing
paradigms, where D_B, D_T, D_O denote base domain, target do-
main and open domain, respectively. The components in red are
those missing in SimSeg. Illustration inspired by [ 50].
main is denoted as D_T, which contains test samples. The
open domain D_O involves a large variety of linguistic in-
formation. It can provide additional textual descriptions
when segmenting the images. Open-vocabulary meth-
ods ( e.g., LSeg [ 22], OpenSeg [ 17]) use pre-trained vision-
and-language models [ 20,33], but still need annotated sam-
ples to fine-tune. Weakly supervised methods [ 1,2] are
free from mask labels but require image-level class la-
bels (D_T ⊆ D_O). Text-supervision is an annotation-free
scheme, eliminating the need for mask annotations (D_B) or
image-level labels (i.e., D_T ⊈ D_O). Text-supervision lever-
ages massive web image-text pairs and enables generating
segmentation masks in a zero-shot manner. GroupViT [ 44]
is the first work of text-supervision, yet the non-universal
backbone design hinders its flexibility ( e.g., novel backbone
adaptation and multi-task joint learning). We could improve
current methods by creating a simple framework for text-
supervised semantic segmentation. To this end, we target
on the vanilla CLIP [ 33] architecture, a neat dual-stream
contrastive language-image pre-training model.
As the preliminary of this work, we explore the potential
problems of a vanilla CLIP-based segmentor. We mainly
study CLIP developed with Transformer-based encoders
due to their intrinsic properties for segmentation [ 6] and su-
perior performance. CLIP is originally driven by aligning
vision and textual holistic vectors (e.g., [cls] tokens from
Transformer-based encoders), and a simple revision facili-
tates CLIP models for segmentation, i.e., densely aligning
all image patches and caption words. A similarity map,
which describes correlations between all image patches and
one class word , is a coarse categorical segmentation mask
per se . However, we observe two problems that greatly sup-
press the ability of the CLIP-based segmentor: (1) Visual
encoder of the learned CLIP model focuses on contextual
pixels, and (2) image-text contrasting mainly relies on con-
textual words. These problems jointly reveal that the opti-
mization of CLIP is significantly driven by contextual in-
formation. As a consequence, the CLIP-based segmentor
yields poor semantic segmentation results, due to an infe-
rior ability to perceive both contextual and non-contextual
information in complex natural images.
In the following, we attempt to solve the above prob-
lems. One practical strategy is avoiding optimization with
contextual information. For a versatile segmentor, both con-
textual and non-contextual information are essential. Con-
textual and non-contextual pixels should be sparsely aligned
to corresponding text entities. To this end, we propose
a locality-driven alignment (LoDA) strategy for training
CLIP models. Firstly, we propose to select partial features
with the maximum responses, in both image and text modal-
ities. Secondly, we propose to drive the image-text con-
trasting with only selected features. Our proposal success-
fully solves the problems from two aspects: (1) Vision en-
coder perceives main objects, (2) main objects and context
are equally significant in the image-text contrasting. Cou-
pled with LoDA, a simple but effective framework named
SimSeg is proposed to do zero-shot semantic segmentation.
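A rough sketch of the locality-driven alignment idea: instead of contrasting holistic [cls] vectors, keep only the top-k highest-response local tokens from each modality and contrast their pooled features. The response measure (feature norm), the k values, and the pooling are assumptions made for illustration and may differ from the actual LoDA objective:

```python
import torch
import torch.nn.functional as F

def select_top_responses(tokens, k):
    """tokens: (B, N, D) patch or word features; keep the k tokens with the largest response."""
    response = tokens.norm(dim=-1)                       # assumed response measure
    idx = response.topk(k, dim=1).indices                # (B, k)
    return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))

def loda_contrastive_loss(img_tokens, txt_tokens, k_img=16, k_txt=8, tau=0.07):
    # Sparsely align only the selected local features, not the holistic [cls] vectors.
    v = F.normalize(select_top_responses(img_tokens, k_img).mean(dim=1), dim=-1)  # (B, D)
    t = F.normalize(select_top_responses(txt_tokens, k_txt).mean(dim=1), dim=-1)  # (B, D)
    logits = v @ t.t() / tau
    labels = torch.arange(v.shape[0], device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```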
Benefiting from our proposals, a simple CLIP framework is
equipped with impressive zero-shot semantic segmentation
performance. Our contributions are three-fold:
•We reveal the problems of a vanilla CLIP attached with
Transformer-based encoders when producing segmenta-
tion masks. To solve the problems, we propose a training
strategy named locality-driven alignment (LoDA).
•We design a simple but effective text-driven zero-shot
semantic segmentation framework named SimSeg. Our
proposed LoDA and SimSeg jointly allow a simple CLIP
to segment universal categories.
•We achieve remarkable improvements over previous
methods on PASCAL VOC, PASCAL Context and
COCO zero-shot segmentation tasks. Moreover, we pro-
vide extensive analyses and ablations of our proposals.
|
Yu_Range-Nullspace_Video_Frame_Interpolation_With_Focalized_Motion_Estimation_CVPR_2023
|
Abstract
Continuous-time video frame interpolation is a funda-
mental technique in computer vision for its flexibility in
synthesizing motion trajectories and novel video frames at
arbitrary intermediate time steps. Yet, how to infer accu-
rate intermediate motion and synthesize high-quality video
frames are two critical challenges. In this paper, we present
a novel VFI framework with improved treatment for these
challenges. To address the former, we propose focalized
trajectory fitting, which performs confidence-aware motion
trajectory estimation by learning to pay focus to reliable
optical flow candidates while suppressing the outliers. The
second is range-nullspace synthesis, a novel frame renderer
cast as solving an ill-posed problem addressed by learning
decoupled components in orthogonal subspaces. The pro-
posed framework sets new records on 7 of 10 public VFI
benchmarks.
|
1. Introduction
Continuous-time Video Frame Interpolation (VFI) aims
at upsampling the temporal resolution of low frame-rate
videos steplessly by synthesizing the missing frames at ar-
bitrary time steps. It is a fundamental technology for vari-
ous downstream video applications such as streaming [34],
stabilization [5], and compression [35].
A key challenge of this task is to find a continuous map-
ping from arbitrary time step to the latent scene motion to
correctly render the target frame, observing the low frame-
rate video. Typically, it was realized via two stages: motion
trajectory fitting and frame synthesis, e.g.[1,3,4,10,13,17,
21, 29, 36, 38, 39]. In the former, a parametric trajectory
model is fitted from the optical flows extracted from input
frames, which can be resampled at any time step to get in-
termediate motion. For representing such a motion model,
∗This work is done during Zhiyang’s internship at SenseTime.
Correspondence should be addressed to Yu Zhang
([email protected]) and Shunqing Ren ([email protected]).
[Figure 1 panels: (a) consecutive input frames and estimated optical flows; (b) conventional pipeline (resample flows, predict rendering rays, render); (c) the proposed pipeline with flow confidence and a deep range-nullspace solver.]
Figure 1. Motivation of the proposed method. Given input frames
and extracted optical flows (a), common VFI pipeline (b) is to fit
a continuous motion trajectory model, resample flows at the target
time step, based on which rendering rays are predicted to synthe-
size the intermediate frame by reorganizing pixels from the input
frames. Our pipeline (c) proposes improved treatment in two as-
pects: 1) our focalized motion estimation assigns dynamic weights
to the extracted optical flows to suppress the outlier flows and im-
prove fitting accuracy, and the novel range-nullspace solver treats
intermediate frame synthesis as an inverse problem instead of di-
rect rendering. Please see text for more details.
the Taylor polynomial is often adopted with different or-
ders [4,13,36]. However, trajectory-model fitting is prone
to optical flow errors, which are inevitable due to occlusion.
Besides motion fitting, the frame synthesis step faces the
challenges of correctly inferring scene geometry. Particu-
larly, the intermediate flows resampled from the continuous
motion model only depict partial correspondences between
the intermediate frame with input frames, while existing
methods predict complete rendering rays in a data-driven
manner to facilitate full rendering, as shown in Fig. 1 (b).
Though powerful architectures [15] and rendering mecha-
nisms [10] were proposed, frame synthesis still requires the network to fully
encode the scene geometry to predict correct rays, which
may raise the difficulty of architecture design and the bur-
den of learning.
To overcome these issues, we propose a novel framework
with improved motion fitting and frame synthesis compo-
nents. As shown in Fig. 1 (c), the first is focalized trajectory
fitting, which extracts a set of optical flow candidates from
the input video and assigns learned confidence weights to
them. Confidence-aware, differentiable trajectory fitting is
then followed, focalizing only the confident flows and pro-
ducing improved parametric motion models. Such models,
when resampled at the target time step, can produce accu-
rate flows even at the occlusion boundary. For the latter,
we follow the fact that intermediate frame synthesis is an
ill-posed task and propose a deep solver based on the range
nullspace theory [2]. Our solver decomposes the latent in-
termediate frame into several orthogonal image components
and adopts different networks to learn each of them. Such
physics-guided design encourages learning decoupled sub-
tasks for each network, relieving the burden of encoding
scene geometry and benefiting the use of lightweight mod-
els. In particular, our framework sets new records on 7
out of 10 public VFI benchmarks while being parameter-
efficient.
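The focalized fitting step can be viewed as confidence-weighted least squares over the flow candidates: fit a low-order polynomial motion model per pixel while down-weighting unreliable flows. The sketch below solves the weighted normal equations in closed form with learned per-candidate weights supplied externally; the quadratic model order, tensor shapes, and weighting scheme are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def fit_focalized_trajectory(flows, times, conf, order=2):
    """Confidence-weighted polynomial fit of per-pixel motion (illustrative sketch).

    flows: (K, 2, H, W) optical flows from the reference frame to K other frames
    times: (K,) time steps of those frames relative to the reference frame
    conf:  (K, H, W) learned confidence of each flow candidate (outliers get low weight)
    Returns coefficients (order+1, 2, H, W); resample at any t via the basis [1, t, t^2, ...].
    """
    K, _, H, W = flows.shape
    basis = torch.stack([times ** p for p in range(order + 1)], dim=-1)   # (K, P)
    y = flows.permute(2, 3, 0, 1).reshape(H * W, K, 2)                    # (HW, K, 2)
    w = conf.permute(1, 2, 0).reshape(H * W, K, 1)                        # (HW, K, 1)
    A = basis.unsqueeze(0).expand(H * W, K, order + 1)                    # (HW, K, P)
    # Weighted normal equations (A^T W A) c = A^T W y, solved per pixel in a batch;
    # a small ridge term keeps the system well-conditioned when weights vanish.
    AtWA = (A * w).transpose(1, 2) @ A + 1e-6 * torch.eye(order + 1, device=flows.device)
    AtWy = (A * w).transpose(1, 2) @ y
    coeffs = torch.linalg.solve(AtWA, AtWy)                               # (HW, P, 2)
    return coeffs.reshape(H, W, order + 1, 2).permute(2, 3, 0, 1)
```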
We summarize the contributions of this paper as follows.
1) A novel lightweight VFI framework is proposed, which
refreshes the records of 7 out of 10 public VFI benchmarks.
2) The idea of focalized trajectory fitting, which improves
parametric motion estimation in VFI and generates better
resampling quality of intermediate flows. 3) A new perspec-
tive that treats intermediate frame synthesis as an ill-posed
problem, solved with a deep range nullspace solver that de-
couples frame synthesis into several orthogonal tasks.
|
Yang_UniSim_A_Neural_Closed-Loop_Sensor_Simulator_CVPR_2023
|
Abstract
Rigorously testing autonomy systems is essential for
making safe self-driving vehicles (SDV) a reality. It requires
one to generate safety critical scenarios beyond what can
be collected safely in the world, as many scenarios happen
rarely on our roads. To accurately evaluate performance,
we need to test the SDV on these scenarios in closed-loop,
where the SDV and other actors interact with each other
at each timestep. Previously recorded driving logs provide
*Indicates equal contribution.
a rich resource to build these new scenarios from, but for
closed loop evaluation, we need to modify the sensor data
based on the new scene configuration and the SDV’s deci-
sions, as actors might be added or removed and the tra-
jectories of existing actors and the SDV will differ from the
original log. In this paper, we present UniSim, a neural
sensor simulator that takes a single recorded log captured
by a sensor-equipped vehicle and converts it into a realistic
closed-loop multi-sensor simulation. UniSim builds neural
feature grids to reconstruct both the static background and
dynamic actors in the scene, and composites them together
to simulate LiDAR and camera data at new viewpoints, with
actors added or removed and at new placements. To better
handle extrapolated views, we incorporate learnable priors
for dynamic objects, and leverage a convolutional network
to complete unseen regions. Our experiments show UniSim
can simulate realistic sensor data with small domain gap
on downstream tasks. With UniSim, we demonstrate, for the
first time, closed-loop evaluation of an autonomy system on
safety-critical scenarios as if it were in the real world.
|
1. Introduction
While driving along a highway, a car from the left sud-
denly swerves into your lane. You brake hard, avoiding
an accident, but discomforting your passengers. As you
replay the encounter in your mind, you consider how the
scenario would have gone if the other vehicle had acceler-
ated more, if you had slowed down earlier, or if you had
instead changed lanes for a more comfortable drive. Hav-
ing the ability to generate such “what-if” scenarios from a
single recording would be a game changer for developing
safe self-driving solutions. Unfortunately, such a tool does
not exist and the self-driving industry primarily test their
systems on pre-recorded real-world sensor data (i.e., log-
replay), or by driving new miles in the real-world. In the
former, the autonomous system cannot execute actions and
observe their effects as new sensor data different from the
original recording is not generated, while the latter is neither
safe, scalable, nor sustainable. The status quo calls for
novel closed-loop sensor simulation systems that are high
fidelity and represent the diversity of the real world.
Here, we aim to build an editable digital twin of the real
world (through the logs we captured), where existing ac-
tors in the scene can be modified or removed, new actors
can be added, and new autonomy trajectories can be exe-
cuted. This enables the autonomy system to interact with
the simulated world, where it receives new sensor obser-
vations based on its new location and the updated states
of the dynamic actors, in a closed-loop fashion. Such a
simulator can accurately measure self-driving performance,
as if it were actually in the real world, but without the
safety hazards, and in a much less capital-intensive manner.
Compared to manually-created game-engine based virtual
worlds [15, 62], it is a more scalable, cost-effective, realis-
tic, and diverse way towards closed-loop evaluation.
Towards this goal, we present UniSim, a realistic closed-
loop data-driven sensor simulation system for self-driving.
UniSim reconstructs and renders multi-sensor data for novel
views and new scene configurations from a single recorded
log. This setting is very challenging as the observations are
sparse and often captured from constrained viewpoints ( e.g.,
straight trajectories along the roads). To better handle ex-
trapolation from the observed views, we propose a series of
enhancements over prior neural rendering approaches. In particular, we leverage multi-resolution voxel-based neural
fields to represent and compose the static scene and dy-
namic agents, and volume render feature maps. To better
handle novel views and incorporate scene context to reduce
artifacts, a convolutional network (CNN) renders the fea-
ture map to form the final image. For dynamic agents, we
learn a neural shape prior that helps complete the objects to
render unseen areas. We use these sparse voxel-based rep-
resentations to efficiently simulate both image and LiDAR
observations under a unified framework. This is very useful
as SDVs often use several sensor modalities for robustness.
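At a high level, rendering a pixel amounts to querying the static and dynamic neural feature fields along the camera ray, compositing their densities and features with standard volume rendering, and letting the CNN turn the rendered feature map into an image. The sketch below shows only the per-ray compositing step with assumed field interfaces (`static_field`, `actor_fields`) and a simplified blending rule; it is a conceptual outline, not UniSim's actual renderer:

```python
import torch

def render_ray_features(static_field, actor_fields, pts, deltas):
    """Composite static scene and dynamic actors along rays (conceptual sketch).

    pts:    (R, S, 3) sample points on R rays, S samples per ray
    deltas: (R, S)    distances between consecutive samples
    Each field maps points to a density (R, S) and a feature (R, S, F).
    """
    sigma, feat = static_field(pts)
    for actor in actor_fields:                       # in practice queried in each actor's object frame
        s_a, f_a = actor(pts)
        # Density-weighted blend of features; densities add (a simplification of the compositing step).
        feat = (sigma.unsqueeze(-1) * feat + s_a.unsqueeze(-1) * f_a) / \
               (sigma + s_a + 1e-8).unsqueeze(-1)
        sigma = sigma + s_a
    alpha = 1.0 - torch.exp(-sigma * deltas)         # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha[:, :-1]], dim=1), dim=1)
    weights = alpha * trans                          # standard volume-rendering weights
    return (weights.unsqueeze(-1) * feat).sum(dim=1)  # (R, F): feature map fed to the CNN renderer
```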
Our experiments show that UniSim realistically simu-
lates camera and LiDAR observations at new views for
large-scale dynamic driving scenes, achieving SoTA perfor-
mance in photorealism. Moreover, we find UniSim reduces
the domain gap over existing camera simulation methods on
the downstream autonomy tasks of detection, motion fore-
casting and motion planning. We also apply UniSim to aug-
ment training data to improve perception models. Impor-
tantly, we show, for the first time, closed-loop evaluation of
an autonomy system on photorealistic safety-critical scenar-
ios, allowing us to better measure SDV performance. This
further demonstrates UniSim’s value in enabling safer and
more efficient development of self-driving.
|
Yoshiyasu_Deformable_Mesh_Transformer_for_3D_Human_Mesh_Recovery_CVPR_2023
|
Abstract
We present Deformable mesh transFormer (DeFormer),
a novel vertex-based approach to monocular 3D human
mesh recovery. DeFormer iteratively fits a body mesh
model to an input image via a mesh alignment feedback
loop formed within a transformer decoder that is equipped
with efficient body mesh driven attention modules: 1) body
sparse self-attention and 2) deformable mesh cross atten-
tion. As a result, DeFormer can effectively exploit high-
resolution image feature maps and a dense mesh model
which were computationally expensive to deal with in pre-
vious approaches using the standard transformer attention.
Experimental results show that DeFormer achieves state-of-
the-art performances on the Human3.6M and 3DPW bench-
marks. Ablation study is also conducted to show the ef-
fectiveness of the DeFormer model designs for leveraging
multi-scale feature maps. Code is available at https:
//github.com/yusukey03012/DeFormer .
|
1. Introduction
Human mesh recovery from a single image is an im-
portant problem in computer vision, with a wide range
of applications like virtual reality, sports motion analysis
and human-computer interactions. It is a challenging prob-
lem as it requires modeling of complex nonlinear mappings
from an image to 3D body shape and pose.
In the past decade, remarkable progress has been accom-
plished in the field of 3D human mesh recovery based on
convolutional neural networks (CNNs) [41]. A common
way used in this field is a parametric approach that employs
statistical human models parameterized by shape and pose
parameters [33]. Here, CNNs features are regressed with
body shape and pose parameters with approximately 100 di-
mensions. Then, a body mesh surface is obtained from these
body parameter predictions by inputting them to the human
body kinematics and statistical shape models. Leveraging
multi-scale image feature maps and iteratively refining the
body parameters based on global and local spatial contexts
in an image, the pyramidal mesh alignment feedbacks (PyMAF) model [53] produces a 3D mesh recovery result well-aligned to an input image.
Figure 1. Summary of this work. We propose DeFormer, a memory-efficient decoder-only transformer architecture for 3D human mesh recovery based on block sparse self-attention [16, 40, 51] and multi-scale (MS) deformable cross attention [56]. Leveraging MS feature maps efficiently in the transformer, our method outperforms previous parametric approaches using MS feature maps [52, 53] and vertex transformer approaches using a single-scale feature [28, 29]. Further, by learning sparse self-attention patterns based on body mesh and skeleton connectivity, DeFormer can recover a dense mesh with 6.9K joint/vertex queries at interactive rates. The units of the MPJPE and PA-MPJPE scores are millimeters. Lower is better.
Another paradigm for human mesh recovery is a vertex-
based approach that directly regresses 3D vertex coordi-
nates [14, 15, 26, 28, 29, 35]. Recently, transformer archi-
tectures have been applied to vertex-based human mesh re-
covery and show good reconstruction performances by cap-
turing long-range interactions between body parts via self-
attentions. Furthermore, as a vertex-based approach, they
naturally possess potential in learning fine-grained effects
like facial expressions, finger poses and clothing non-rigid
deformations, if such supervisions are given. Despite their
promising performances, the main challenge of the current
transformer-based approaches based on the standard trans-
former attention is its memory cost, which limits the use of
high-resolution image feature maps and a dense body mesh
model within it for better 3D reconstruction. In fact, the cur-
rent methods are limited to managing a single-scale coarse
feature map with 7×7 grids and a coarse mesh with around
400 vertices [28, 29].
In this paper, we propose a novel vertex-based trans-
former approach to 3D human mesh recovery, named
Deformable mesh transFormer (DeFormer). DeFormer it-
eratively fits a human mesh model to an image using the
mesh-alignment feedback loop formed in a transformer de-
coder. In order to leverage multi-scale feature maps and
a dense mesh model in the human mesh recovery trans-
former model, we design a decoder network architecture us-
ing block sparse attention and multi-scale (MS) deformable
attention [56] as illustrated in Fig. 1. The block sparse self-
attention module exploits the sparse connectivity patterns
established from a human body mesh and its skeleton. This
sparsifies self-attention access patterns and reduces mem-
ory consumption. The MS deformable cross attention mod-
ule aggregates multi-scale image features by attending only
to a small set of key sampling points around the mesh ver-
tex of the current reconstruction. DeFormer therefore can
attend to visual feature maps in both coarse and fine levels,
which contain not only global contexts but also local spa-
tial information. It can then extract useful contextualized
features for human mesh recovery and regress corrective
displacements to refine the estimated mesh shape. Conse-
quently, DeFormer can exploit high-resolution image fea-
ture maps and a dense mesh model that are not accessible
to previous approaches based on the standard transformer
attention [14,28,29] due to time/memory computational de-
mands. As a result, DeFormer achieves better performance
than previous approaches in 3D human mesh recovery.
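As an informal illustration of the body-driven sparsity idea (not the authors' code: real memory savings require block-sparse attention kernels, and this dense-mask version only shows the access pattern induced by mesh/skeleton connectivity):

import torch

def sparse_attention_mask(adjacency: torch.Tensor, hops: int = 2) -> torch.Tensor:
    """Allow attention only between joints/vertices within `hops` steps on the body graph."""
    reach = torch.eye(adjacency.shape[0], dtype=torch.bool)
    step = adjacency.bool() | reach
    for _ in range(hops):
        reach = reach | (reach.float() @ step.float() > 0)
    return reach                                                # (V, V) boolean access pattern

def masked_self_attention(q, k, v, mask):
    """Scaled dot-product self-attention with disallowed query-key pairs set to -inf."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

The deformable mesh cross attention follows [56]: for each vertex query, a small set of sampling offsets and weights is predicted around the vertex's current 2D reprojection at every feature-map scale, so only those few locations are read instead of the full feature maps.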
The main contributions of this paper include:
•DeFormer : A novel decoder-only transformer archi-
tecture for monocular 3D human mesh recovery based
on the new body-mesh-driven attention modules that
leverage multi-scale visual features and a dense mesh
reasonably efficiently. DeFormer achieves new SOTA
results on the Human3.6M and 3DPW benchmarks.
•Body Sparse Self-Attention that exploits the sparse
connectivity patterns extracted from a human body
mesh model and its skeleton to restrict self-attention
access patterns, improving memory efficiency.
•Deformable Mesh cross Attention (DMA) that effi-
ciently aggregates and exploits multi-scale image fea-
ture maps by attending only to a small set of key sam-
pling points around the reference points obtained from
the current body joint and mesh vertex reconstructions.
|
Yang_RILS_Masked_Visual_Reconstruction_in_Language_Semantic_Space_CVPR_2023
|
Abstract
Both masked image modeling (MIM) and natural lan-
guage supervision have facilitated the progress of trans-
ferable visual pre-training. In this work, we seek the syn-
ergy between two paradigms and study the emerging prop-
erties when MIM meets natural language supervision. To
this end, we present a novel masked visual Reconstruction
In Language semantic Space (RILS) pre-training frame-
work, in which sentence representations, encoded by the text
encoder, serve as prototypes to transform the vision-only
signals into patch-sentence probabilities as semantically
meaningful MIM reconstruction targets. The vision models
can therefore capture useful components with structured in-
formation by predicting proper semantic of masked tokens.
Better visual representations could, in turn, improve the text
encoder via the image-text alignment objective, which is
essential for the effective MIM target transformation. Ex-
tensive experimental results demonstrate that our method
not only enjoys the best of previous MIM and CLIP but
also achieves further improvements on various tasks due to
their mutual benefits. RILS exhibits advanced transferabil-
ity on downstream classification, detection, and segmenta-
tion, especially for low-shot regimes. Code is available at
https://github.com/hustvl/RILS.
|
1. Introduction
Learning transferable representations is a crucial task in
deep learning. Over the past few years, natural language
processing (NLP) has achieved great success in this line
of research [18, 45, 60]. To explore similar trajectories in
the vision domain, researchers tend to draw upon the suc-
cesses of NLP and have made tremendous progress: (i)
Inspired by the advanced model architecture [60] as well
as self-supervised learning paradigm [18] in NLP, vision
Transformers (ViT) [21, 42] and masked image modeling
*This work was done while S. Yang was an intern at ARC Lab, Tencent
PCG. †X. Wang and Y. Ge are the corresponding authors.
Figure 1. Overview of our RILS. Recon and Contra represent the masked reconstruction loss and the image-text contrastive loss. During pre-training, RILS learns to perform masked image modeling and image-text contrastive learning simultaneously. Masked predictions and corresponding targets are transformed to probabilistic distributions in language space by leveraging text features as semantic-rich prototypes. Under such a scheme, both objectives are unified and achieve mutual benefits from each other. The vision encoder obtains the ability to capture meaningful and fine-grained context information, sparking decent transfer capacity. (The diagram depicts a vision encoder (V-Enc), a text encoder (L-Enc), and a vision decoder (V-Dec) operating on masked image tokens and paired texts, linked by the Contra and Recon objectives.)
(MIM) [3, 28] open a new era of self-supervised visual
representation learning, and show unprecedented transfer-
ability on various tasks, especially on fine-grained tasks
such as object detection [22, 39]. (ii) Inspired by the great
scalability brought by leveraging web-scale collections of
texts as training data in NLP [5, 48, 49, 72], CLIP [47] and
ALIGN [35] bring such a principle to vision and manifest
the immense potential of leveraging natural language super-
vision for scalable visual pre-training. Strong transferability
of pre-trained visual models on low-shot regimes ensues. It
also facilitates diverse applications by extracting contextu-
alized image or text features [26, 50, 52].
The remarkable attainment achieved by these two re-
search lines pushes us to ponder: Is it possible to unify both
masked image modeling and natural language supervision
to pursue better visual pre-training? A straightforward way
towards this goal is to simply combine masked image mod-
eling (MIM) with image-text contrastive learning (ITC) for
multi-task learning. Although the naïve combination can inherit the above advantages, we find it remains un-
satisfactory because the mutual benefits between MIM and ITC have not yet been fully explored. Motivated by this, we
develop RILS, a tailored framework to seek the synergy of
masked image modeling and language supervision.
The core insight of RILS is to perform masked visual re-
construction in language semantic space . Specifically, in-
stead of reconstructing masked patches in the standalone vi-
sion space ( e.g., raw pixel [28,66], low-level features [3,62]
or high-level perceptions [12, 34, 63, 75]), we map patch
features to a probabilistic distribution over a batch of text
features as the reconstruction target, which is enabled by
ITC that progressively aligns the image and text spaces.
The text features serve as semantically rich prototypes and
probabilistic distributions explicitly inject the semantic in-
formation onto each image patch. The MIM objective is
formulated as a soft cross-entropy loss to minimize the KL
divergence between text-injected probabilistic distributions
of masked vision tokens and their corresponding targets.
The visual model optimized by our language-assisted re-
construction objective, in turn, improves ITC with better vi-
sual representations that capture fine-grained local contexts.
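To make this concrete, here is a hedged sketch of the objective in our own notation (symbols are ours, not necessarily the paper's): for a masked patch j with predicted feature \hat{z}_j, target feature z_j, and the batch of text (sentence) features \{t_k\} acting as prototypes,

\[
p_{j,k} = \frac{\exp(\langle \hat{z}_j, t_k\rangle/\tau)}{\sum_{k'}\exp(\langle \hat{z}_j, t_{k'}\rangle/\tau)}, \qquad
q_{j,k} = \frac{\exp(\langle z_j, t_k\rangle/\tau)}{\sum_{k'}\exp(\langle z_j, t_{k'}\rangle/\tau)},
\]
\[
\mathcal{L}_{\mathrm{MIM}} = \frac{1}{|\mathcal{M}|}\sum_{j\in\mathcal{M}} \mathrm{KL}\big(q_j \,\|\, p_j\big)
\;\propto\; -\frac{1}{|\mathcal{M}|}\sum_{j\in\mathcal{M}}\sum_{k} q_{j,k}\log p_{j,k},
\]

where \tau is a temperature and \mathcal{M} is the set of masked tokens; the overall pre-training loss adds the image-text contrastive term.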
Under such a working mechanism, the two objec-
tives (i.e., MIM and ITC) complement each other and form
a unified solution for transferable and scalable visual pre-
training. Note that a lot of works [34, 63, 75] have mani-
fested the importance of semantic information in the recon-
struction target of MIM objectives. However, it is difficult
to pursue such a semantically rich space with visual-only
signals due to its unstructured characteristics [29]. Thanks
to natural language supervision, this issue is alleviated by
performing masked reconstruction in language space.
Extensive experiments on various downstream tasks
demonstrate that our design enjoys the best of both
worlds. With a vanilla ViT-B/16 as the vision model and 25-epoch pre-training on 20 million image-text pairs, RILS achieves 83.3% top-1 accuracy when fine-tuned on ImageNet-1K [15], +1.2% and +0.6% better than the
MAE [28] and CLIP [47] counterparts. Advanced perfor-
mance can be consistently acquired when transfer to fine-
grained tasks such as detection and segmentation. More-
over, our approach exhibits promising out-of-the-box ca-
pability under an extremely low-shot regime. RILS also
demonstrates superior performance on zero-shot image
classification and image-text retrieval. On the ImageNet-1K benchmark, RILS obtains 45.0% zero-shot classification accuracy, +4.7%/+3.4% higher than CLIP [47]/SLIP [43]
under the same training recipe. Compelling results of RILS
imply the promising capacity in the unification of MIM and
language supervision.
|
Zhang_MP-Former_Mask-Piloted_Transformer_for_Image_Segmentation_CVPR_2023
|
Abstract
We present a mask-piloted Transformer which improves
masked-attention in Mask2Former for image segmenta-
tion. The improvement is based on our observation that
Mask2Former suffers from inconsistent mask predictions
between consecutive decoder layers, which leads to incon-
sistent optimization goals and low utilization of decoder
queries. To address this problem, we propose a mask-piloted
training approach, which additionally feeds noised ground-
truth masks in masked-attention and trains the model to
reconstruct the original ones. Compared with the predicted
masks used in mask-attention, the ground-truth masks serve
as a pilot and effectively alleviate the negative impact of
inaccurate mask predictions in Mask2Former. Based on this
technique, our MP-Former achieves a remarkable perfor-
mance improvement on all three image segmentation tasks
(instance, panoptic, and semantic), yielding +2.3AP and
+1.6mIoU on the Cityscapes instance and semantic seg-
mentation tasks with a ResNet-50 backbone. Our method
also significantly speeds up the training, outperforming
Mask2Former with half of the number of training epochs
on ADE20K with both ResNet-50 and Swin-L backbones.
Moreover, our method introduces only a little computation dur-
ing training and no extra computation during inference. Our
code will be released at https://github.com/IDEA-
Research/MP-Former.
|
1. Introduction
Image segmentation is a fundamental problem in com-
puter vision which includes semantic, instance, and panoptic
*Equal contribution.
†This work was done when Hao Zhang and Feng Li were interns at
IDEA.
‡Corresponding author.
segmentation. Early works design specialized architectures
for different tasks, such as mask prediction models [2, 15]
for instance segmentation and per-pixel prediction mod-
els [25, 28] for semantic segmentation. Panoptic segmen-
tation [18] is a unified task of instance and semantic seg-
mentation. But universal models for panoptic segmentation
such as Panoptic FPN [18] and K-net [34] are usually sub-
optimal on instance and semantic segmentation compared
with specialized models.
Recently, Vision Transformers [29] have achieved tremen-
dous success in many computer vision tasks, such as image
classification [11, 24] and object detection [1]. Inspired by
DETR’s set prediction mechanism, Mask2Former [7] pro-
poses a unified framework and achieves a remarkable per-
formance on all three segmentation tasks. Similar to DETR,
it uses learnable queries in Transformer decoders to probe
features and adopts bipartite matching to assign predictions
to ground truth (GT) masks. Mask2Former also uses pre-
dicted masks in a decoder layer as attention masks for the
next decoder layer. The attention masks serve as a guidance
to help next layer’s prediction and greatly ease training.
Despite its great success, Mask2Former is still an initial
attempt which is mainly based on a vanilla DETR model
that has a sub-optimal performance compared to its later
variants. For example, in each decoder layer, the predicted
masks are built from scratch by dot-producting queries and
feature map without performing layer-wise refinement as
proposed in deformable DETR [37]. Because each layer builds masks from scratch, the masks predicted by a query in different layers may change dramatically. To show these inconsistent predictions among layers, we build two metrics, mIoU-Li and Utili, to quantify this problem and analyze its
consequences in Section 3. As a result, this problem leads to
a low utilization rate of decoder queries, which is especially
severe in the first few decoder layers. As the attention masks
predicted from an early layer are usually inaccurate, when
serving as a guidance for its next layer, they will lead to
sub-optimal predictions in the next layer.
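The exact definitions of mIoU-Li and Utili are given in Section 3; purely as a hypothetical illustration of how cross-layer consistency could be quantified, one might measure the IoU between the masks each query predicts in adjacent decoder layers:

import torch

def layerwise_mask_iou(masks_per_layer):
    """masks_per_layer: list of (Q, H, W) boolean mask tensors, one entry per decoder layer.
    Returns the mean IoU between each query's masks in consecutive layers."""
    ious = []
    for prev, curr in zip(masks_per_layer[:-1], masks_per_layer[1:]):
        inter = (prev & curr).flatten(1).sum(-1).float()
        union = (prev | curr).flatten(1).sum(-1).float().clamp(min=1)
        ious.append((inter / union).mean())
    return torch.stack(ious)        # one consistency score per pair of adjacent layers

Low values of such a score would indicate exactly the dramatic layer-to-layer changes described above.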
Since detection is a similar task to segmentation, both
aiming to locate instances in images, we seek inspirations
from detection to improve segmentation. For example, the
recently proposed denoising training [19] for detection has
proven very effective in stabilizing bipartite matching between
training epochs. DINO [33] achieves a new SOTA on COCO
detection built upon an improved denoising training method.
The key idea in denoising training is to feed noised GT boxes
in parallel with learnable queries into Transformer decoders
and train the model to denoise and recover the GT boxes.
Inspired by DN-DETR [19], we propose a mask-piloted
(MP) training approach. We divide the decoder queries into
two parts, an MP part and a matching part. The matching part
is the same as in Mask2Former, feeding learnable queries
and adopting bipartite matching to assign predictions to GT
instances. In the MP part, we feed class embeddings of GT
categories as queries and GT masks as attention masks into
a Transformer decoder layer and let the model reconstruct
GT masks and labels. To reinforce the effectiveness of our
method, we propose to apply MP training for all decoder
layers. Moreover, we also add point noises to GT masks and
flipping noises to GT class embeddings to force the model
to recover GT masks and labels from inaccurate ones.
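A loose sketch of the mask-piloted inputs described above (hypothetical names, not the released code; the attention-mask wiring, label-flipping noise, and multi-layer application are omitted):

import torch

def add_point_noise(gt_masks, flip_ratio=0.2):
    """Randomly flip a fraction of pixels in each boolean GT mask, so the model must
    reconstruct the clean mask from an inaccurate pilot."""
    noise = torch.rand_like(gt_masks.float()) < flip_ratio
    return gt_masks ^ noise                                   # (N, H, W) noised masks

def build_decoder_queries(class_embed, gt_labels, learnable_queries):
    """MP part: class embeddings of the GT categories; matching part: learnable queries."""
    mp_queries = class_embed[gt_labels]                       # (N_gt, C)
    return torch.cat([mp_queries, learnable_queries], dim=0)  # fed jointly to the decoder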
We summarize our contributions as follows.
1. We propose a mask-piloted training approach to im-
proving masked-attention in Mask2Former, which suf-
fers from inconsistent and inaccurate mask predictions
among layers. We also develop techniques including
multi-layer mask-piloted training with point noises and
label-guided training to further improve the segmenta-
tion performance. Our method is computationally ef-
ficient, introducing no extra computation during infer-
ence and little computation during training.
2. Through analyzing the predictions of Mask2Former, we
discover a failure mode of Mask2Former—inconsistent
predictions between consecutive layers. We build two
novel metrics to measure this failure, which are layer-
wise query utilization and mean IoU. We also show
that our method improves the two metrics by a large
margin, which validates its effectiveness on alleviating
inconsistent predictions.
3. When evaluated on three datasets, including ADE20k,
Cityscapes, and MS COCO, our MP-Former outper-
forms Mask2Former by a large margin. Besides im-
proved performance, our method also speeds up train-
ing. Our model exceeds Mask2Former on all three seg-
mentation tasks with only half of the number of train-
ing steps on ADE20k. Also, our model trained for 36
epochs is comparable with Mask2Former trained for 50
epochs on instance and panoptic segmentation on MS
COCO.
|
Yan_Universal_Instance_Perception_As_Object_Discovery_and_Retrieval_CVPR_2023
|
Abstract
All instance perception tasks aim at finding certain ob-
jects specified by some queries such as category names, lan-
guage expressions, and target annotations, but this com-
plete field has been split into multiple independent sub-
tasks. In this work, we present a universal instance per-
ception model of the next generation, termed UNINEXT .
UNINEXT reformulates diverse instance perception tasks
into a unified object discovery and retrieval paradigm and
can flexibly perceive different types of objects by simply
changing the input prompts. This unified formulation brings
the following benefits: (1) enormous data from different
tasks and label vocabularies can be exploited for jointly
training general instance-level representations, which is es-
pecially beneficial for tasks lacking in training data. (2)
the unified model is parameter-efficient and can save re-
dundant computation when handling multiple tasks simul-
taneously. UNINEXT shows superior performance on 20
challenging benchmarks from 10 instance-level tasks in-
cluding classical image-level tasks (object detection and
instance segmentation), vision-and-language tasks (refer-
ring expression comprehension and segmentation), and six
video-level object tracking tasks. Code is available at
https://github.com/MasterBin-IIAU/UNINEXT.
|
1. Introduction
Object-centric understanding is one of the most essen-
tial and challenging problems in computer vision. Over the
years, the diversity of this field increases substantially. In
this work, we mainly discuss 10 sub-tasks, distributed on
the vertices of the cube shown in Figure 1. As the most
fundamental tasks, object detection [8, 9, 30, 59, 82, 84, 91]
and instance segmentation [6, 37, 64, 90, 97] require finding
all objects of specific categories by boxes and masks re-
spectively.
*This work was performed while Bin Yan worked as an intern at ByteDance. Email: yan [email protected]. †Corresponding authors: [email protected], [email protected].
Figure 1. Task distribution on the Format-Time-Reference space. Better view on screen with zoom-in. (The three axes are Format, from coarse to fine; Time, from static to dynamic; and Reference, from none to language or annotation; the vertices correspond to object detection, instance segmentation, MOT, MOTS, VIS, SOT, VOS, REC, RES, and R-VOS.)
Extending inputs from static images to dynamic
videos, Multiple Object Tracking (MOT) [3, 74, 126, 128],
Multi-Object Tracking and Segmentation (MOTS) [47, 93,
108], and Video Instance Segmentation (VIS) [43,100,103,
112] require finding all object trajectories of specific cate-
gories in videos. Besides category names, some tasks
provide other reference information. For example, Refer-
ring Expression Comprehension (REC) [115,121,130], Re-
ferring Expression Segmentation (RES) [116,119,121], and
Referring Video Object Segmentation (R-VOS) [7, 86, 104]
aim at finding objects matched with the given language ex-
pressions like “The fourth person from the left”. Besides,
Single Object Tracking (SOT) [5, 51, 106] and Video Ob-
ject Segmentation (VOS) [18, 78, 107] take the target an-
notations (boxes or masks) given in the first frame as the
reference, requiring to predict the trajectories of the tracked
objects in the subsequent frames. Since all the above tasks
aim to perceive instances of certain properties, we refer to
them collectively as instance perception .
Although bringing convenience to specific applications,
such diverse task definitions split the whole field into frag-
mented pieces. As a result, most current instance perception methods are developed for only a single sub-task or a subset of sub-tasks and trained on data from specific domains. Such
fragmented design philosophy brings the following draw-
backs: (1) Independent designs hinder models from learn-
ing and sharing generic knowledge between different tasks
and domains, causing redundant parameters. (2) The pos-
sibility of mutual collaboration between different tasks is
overlooked. For example, object detection data enables
models to recognize common objects, which can naturally
improve the performance of REC and RES. (3) Restricted
by fixed-size classifiers, traditional object detectors are hard
to jointly train on multiple datasets with different label vo-
cabularies [36,61,88] and to dynamically change object cat-
egories to detect during inference [22, 61, 74, 81, 112, 120].
Since essentially all instance perception tasks aim at find-
ing certain objects according to some queries, this raises a
natural question: could we design a unified model to solve
all mainstream instance perception tasks once and for all?
To answer this question, we propose UNINEXT, a uni-
versal instance perception model of the next generation.
We first reorganize 10 instance perception tasks into three
types according to the different input prompts: (1) cate-
gory names as prompts (Object Detection, Instance Seg-
mentation, VIS, MOT, MOTS). (2) language expressions
as prompts (REC, RES, R-VOS). (3) reference annota-
tions as prompts (SOT, VOS). Then we propose a uni-
fied prompt-guided object discovery and retrieval formula-
tion to solve all the above tasks. Specifically, UNINEXT
first discovers N object proposals under the guidance
of the prompts, then retrieves the final instances from
the proposals according to the instance-prompt match-
ing scores. Based on this new formulation, UNINEXT can
flexibly perceive different instances by simply changing the
input prompts. To deal with different prompt modalities,
we adopt a prompt generation module, which consists of a
reference text encoder and a reference visual encoder. Then
an early fusion module is used to enhance the raw visual
features of the current image and the prompt embeddings.
This operation enables deep information exchange and pro-
vides highly discriminative representations for the later in-
stance prediction step. Considering the flexible query-to-
instance fashion, we choose a Transformer-based object de-
tector [131] as the instance decoder. Specifically, the de-
coder first generates N instance proposals, then the prompt
is used to retrieve matched objects from these proposals.
This flexible retrieval mechanism overcomes the disadvan-
tages of traditional fixed-size classifiers and enables joint
training on data from different tasks and domains.
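As a minimal, hypothetical sketch of the retrieval step (the dot-product scoring and thresholding here are our simplification, not the paper's exact matching head):

import torch

def retrieve_instances(proposal_feats, prompt_embed, top_k=5, threshold=0.5):
    """Score the N discovered proposals against the prompt embedding and keep the best matches."""
    scores = torch.sigmoid(proposal_feats @ prompt_embed)     # (N,) instance-prompt matching scores
    top = scores.topk(min(top_k, scores.numel()))
    keep = top.indices[top.values > threshold]
    return keep, scores

Because the prompt embedding can come from category names, language expressions, or encoded target annotations, the same retrieval step serves all three prompt types.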
With the unified model architecture, UNINEXT can
learn strong generic representations on massive data from
various tasks and solve 10 instance-level perception tasks
using a single model with the same model parameters. Ex-
tensive experiments demonstrate that UNINEXT achieves
superior performance on 20 challenging benchmarks. The
contributions of our work can be summarized as follows.
• We propose a unified prompt-guided formulation for
universal instance perception, reuniting previously
fragmented instance-level sub-tasks into a whole.
• Benefiting from the flexible object discovery and re-
trieval paradigm, UNINEXT can train on different
tasks and domains, in no need of task-specific heads.
• UNINEXT achieves superior performance on 20 chal-
lenging benchmarks from 10 instance perception tasks
using a single model with the same model parameters.
|
Yao_Large-Scale_Training_Data_Search_for_Object_Re-Identification_CVPR_2023
|
Abstract
We consider a scenario where we have access to the tar-
get domain, but cannot afford on-the-fly training data an-
notation, and instead would like to construct an alternative
training set from a large-scale data pool such that a com-
petitive model can be obtained. We propose a search and
pruning (SnP) solution to this training data search prob-
lem, tailored to object re-identification (re-ID), an appli-
cation aiming to match the same object captured by differ-
ent cameras. Specifically, the search stage identifies and
merges clusters of source identities which exhibit similar
distributions with the target domain. The second stage,
subject to a budget, then selects identities and their im-
ages from the Stage I output, to control the size of the re-
sulting training set for efficient training. The two steps
provide us with training sets 80% smaller than the source
pool while achieving a similar or even higher re-ID accu-
racy. These training sets are also shown to be superior to
a few existing search methods such as random sampling
and greedy sampling under the same budget on training
data size. If we remove the budget constraint, training sets resulting
from the first stage alone allow even higher re-ID accu-
racy. We provide interesting discussions on the specificity
of our method to the re-ID problem and particularly its role
in bridging the re-ID domain gap. The code is available at
https://github.com/yorkeyao/SnP
|
1. Introduction
The success of a deep learning-based object re-ID relies
on one of its critical prerequisites: the labeled training data.
To achieve high accuracy, typically a massive amount of
data needs to be used to train deep learning models. How-
ever, creating large-scale object re-ID training data with
manual labels is expensive. Furthermore, collecting training
data that contributes to the high test accuracy of the trained
model is even more challenging. Recent years have seen a
large amount of datasets proposed and a significant increase
in the data size of a single dataset. For example, the Rand-
Person [37] dataset has 8000 identities, which is more than
6× larger than the previous PersonX [37] dataset.
Figure 1. We present a search and pruning (SnP) solution to the training data search problem in object re-ID. The source data pool is 1 order of magnitude larger than existing re-ID training sets in terms of the number of images and the number of identities. When the target is AlicePerson [1], from the source pool, our method (SnP) results in a training set 80% smaller than the source pool while achieving a similar or even higher re-ID accuracy. The searched training set is also superior to existing individual training sets such as Market-1501 [54], Duke [55], and MSMT [45]. (The plot shows Rank-1 accuracy (%) versus the number of training samples (×10^3) for PersonX, Duke, MSMT, Market, the source pool, and SnP (proposed).)
However, these datasets generally have their own dataset
bias, making the model trained on one dataset unable to
generalize well to another. For example, depending on the
filming scenario, different person re-ID datasets generally
have biases on camera positions, human races, and cloth-
ing style. Such dataset biases usually lead to model bias,
which results in the model’s difficulty performing well in
an unseen filming scenario. To address this, many try to
improve learning algorithms, including domain adaptation
and domain generalization methods [2, 10, 27, 33, 35, 56].
Whereas these algorithms are well-studied and have proven
successful in many re-ID applications, deciding what kind
of data to use for training the re-ID model is still an open
research problem, and has received relatively little attention
in the community. We argue that this is a crucial problem
to be answered in light of the ever-increasing scale of the
available datasets.
In this paper, we introduce SnP, a search and pruning so-
lution for sampling an efficient re-ID training set to a target
domain. SnP is designed for the scenario in that we have a
target dataset that does not have labeled training data. Our
collected source pool, in replace, provides suitable labeled
data to train a competitive model. Specifically, given a user-
specified budget ( e.g., maximum desired data size), we sam-
ple a subset of the source pool, which satisfies the budget re-
quirement and desirably has high-quality data to train deep
learning models. This scenario is especially helpful for de-
ploying a re-ID system for unknown test environments, as
it is difficult to manually label a training set for these new
environments. We note that due to the absence of an in-
distribution training set, the searched data are directly used
for training the re-ID model rather than pre-training.
In particular, we combine several popular re-ID datasets
into a source pool, and represent each image in the pool with
features. Those features are extracted from an ImageNet-
pretrained model [39]. The images with features are stored
on the dataserver to serve as a gallery. When there is a query
from the target, we extract the feature of the query image,
and search in the gallery for similar images on a feature
level. Specifically, in the search stage, we calculate feature-
level distance, i.e., Fréchet Inception Distance (FID) [16].
Given the constraint of a budget, we select the most repre-
sentative samples in the pruning stage, based on the outputs
from the search stage. This limits the size of the constructed
training set and enables efficient training.
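For reference, the feature-level distance used in the search stage is the standard FID, which fits a Gaussian to each feature set and compares them (our notation; see [16]):

\[
\mathrm{FID}(\mathcal{S}, \mathcal{T}) = \lVert \mu_{\mathcal{S}} - \mu_{\mathcal{T}} \rVert_2^2
+ \mathrm{Tr}\!\big(\Sigma_{\mathcal{S}} + \Sigma_{\mathcal{T}} - 2\,(\Sigma_{\mathcal{S}}\Sigma_{\mathcal{T}})^{1/2}\big),
\]

where (\mu_{\mathcal{S}}, \Sigma_{\mathcal{S}}) and (\mu_{\mathcal{T}}, \Sigma_{\mathcal{T}}) are the mean and covariance of the gallery (source-cluster) features and of the target query features, respectively; source clusters with low FID to the target are the ones identified and merged in the search stage.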
Combining search and pruning, we construct a train-
ing dataset that empirically shows significant accuracy im-
provements on several object re-ID targets, compared to the
baselines. Without budget constraints, our searched training
sets allow higher re-ID accuracy than the complete source
pool, due to their target-specificity. With budget constraints,
the pruned training set still achieves comparable or better
performance than the source pool. The proposed SnP is
demonstrated to be superior to random or greedy sampling.
We show in Fig. 1 that the training set constructed by SnP
leads to the best performance on the target compared to the
others. We provide discussions on the specificity of our
method and its role in bridging the re-ID domain gap.
|
Zhang_Improving_the_Transferability_of_Adversarial_Samples_by_Path-Augmented_Method_CVPR_2023
|
Abstract
Deep neural networks have achieved unprecedented suc-
cess on diverse vision tasks. However, they are vulnerable to
adversarial noise that is imperceptible to humans. This phe-
nomenon negatively affects their deployment in real-world
scenarios, especially security-related ones. To evaluate the
robustness of a target model in practice, transfer-based at-
tacks craft adversarial samples with a local model and have
attracted increasing attention from researchers due to their
high efficiency. The state-of-the-art transfer-based attacks
are generally based on data augmentation, which typically
augments multiple training images from a linear path when
learning adversarial samples. However, such methods se-
lect the image augmentation path heuristically and may
augment images that are semantics-inconsistent with the
target images, which harms the transferability of the gen-
erated adversarial samples. To overcome the pitfall, we
propose the Path-Augmented Method (PAM). Specifically,
PAM first constructs a candidate augmentation path pool.
It then settles the employed augmentation paths during ad-
versarial sample generation with greedy search. Further-
more, to avoid augmenting semantics-inconsistent images,
we train a Semantics Predictor (SP) to constrain the length
of the augmentation path. Extensive experiments confirm
that PAM can achieve an improvement of over 4.8% on av-
erage compared with the state-of-the-art baselines in terms
of the attack success rates.
|
1. Introduction
Deep neural networks (DNNs) appear to be the state-
of-the-art solutions for a wide variety of vision tasks [21, 28].
*Corresponding author.
Figure 1. Illustration of how SIM and our PAM augment images (red dots) during the generation of adversarial samples. SIM only considers one linear path from the target image X to a baseline image X′. Besides, SIM may augment images that are semantics-inconsistent with the target image. In contrast, our PAM augments images along multiple augmentation paths. We also constrain the length of the path to avoid augmenting images that are semantics-inconsistent with the target one.
However, DNNs are vulnerable to adversarial sam-
ples [47], which are elaborately designed by adding human-
imperceptible noise to the clean image to mislead DNNs
into wrong predictions. The existence of adversarial sam-
ples causes negative effects on security-sensitive DNN-
based applications, such as self-driving [27] and medical
diagnosis [6, 7]. Therefore, it is necessary to understand
the DNNs [15, 32, 33, 40] and enhance attack algorithms
to better identify the DNN model’s vulnerability, which is
the first step to improve their robustness against adversarial
samples [23, 24, 42].
There are generally two kinds of attacks in the litera-
ture [9]. One is the white-box attacks, which consider the
white-box setting where attackers can access the architec-
tures and parameters of the victim models. The other is
the black-box attacks, which focus on the black-box situ-
ation where attackers fail to get access to the specifics of
the victim models [10, 39]. Black-box attacks are more applicable to real-world systems than their white-box counterparts. There are two basic black-box attack methodolo-
gies: the query-based [1, 2, 37] and the transfer-based at-
tacks [38, 41]. Query-based attacks interact with the victim
model to generate adversarial samples, but they may incur
excessive queries. In contrast, transfer-based attacks craft
adversarial samples with a local source model and do not
need to query the victim model. Therefore, transfer-based
attacks have attracted more attention recently because of
their high efficiency [10, 39].
However, transfer-based attacks generally craft adversar-
ial samples by employing white-box strategies like the Fast
Gradient Sign Method (FGSM) [11] to attack a local model,
which often leads to limited transferability due to overfit-
ting to the employed local model. Most existing solutions
address the overfitting issue from the perspective of opti-
mization and generalization, which regards the local model
and the target image as the training data of the adversar-
ial sample. Therefore, the transferability of the learned
adversarial sample corresponds to its generalization abil-
ity across attacking different models [20]. Such method-
ologies to improve adversarial transferability can be cate-
gorized into two groups. One is the optimizer-based ap-
proaches [9, 20, 34, 36], which adopt more advanced opti-
mizers to escape from poor local optima during the genera-
tion of adversarial samples. The other is the augmentation-
based methods [10,20,35,44], which resort to data augmen-
tation and exploit multiple training images to learn a more
transferable adversarial sample.
Current state-of-the-art augmentation-based attacks gen-
erally apply a heuristics-based augmentation method. For
example, the Scale-Invariant attack Method (SIM) [20] aug-
ments multiple scale copies of the target image, while Ad-
mix [35] augments multiple scale copies of the mixtures of
the target image and the images from other categories. SIM
exponentially augments images along a linear path from the
target image to a baseline image, which is the origin. Ad-
mix, in contrast, first augments the target image with the
mixture of the target image and the images from other cate-
gories. Then it also exponentially augments images along a
linear path from the mixture image to the origin. Therefore,
such methods only consider the image augmentation path to
one baseline image, i.e., the origin. Besides, although they
attempt to augment images that are semantics-consistent with
the target image [20, 35], they fail to constrain the length of
the image augmentation path, which may result in augment-
ing semantics-inconsistent images.
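To spell out this linear-path view in our own notation: with baseline X′ = 0, SIM's scale copies are exponentially spaced points on a single segment,

\[
\gamma(t) = (1-t)\,X + t\,X', \qquad X' = \mathbf{0}
\;\Longrightarrow\;
\gamma(t_i) = X/2^{\,i} \ \ \text{for} \ \ t_i = 1 - 2^{-i},\ i = 0,1,\dots,m-1,
\]

whereas the method introduced below instead samples from several paths \gamma_k(t) = (1-t)X + tX'_k toward different baselines X'_k, with the admissible range of t on each path bounded so that the augmented images remain semantics-consistent with X.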
To overcome the pitfalls of existing augmentation-based
attacks, we propose a transfer-based attack called Path-
Augmented Method (PAM). PAM proposes to augment im-
ages from multiple image augmentation paths to improve
the transferability of the learned adversarial sample. However, due to the continuous space of images, the possible
image augmentation paths starting from the target image
are countless. In order to cope with the efficiency prob-
lem, we first select representative path directions to con-
struct a candidate augmentation path pool. Then we settle
the employed augmentation paths during adversarial sam-
ple generation with greedy search. Furthermore, to avoid
augmenting semantics-inconsistent images, we train a Se-
mantics Predictor, which is a lightweight neural network, to
constrain the length of each augmentation path.
The difference between our PAM and SIM is illustrated
in Figure 1. During the generation of adversarial samples,
PAM augments images along multiple image augmentation
paths from the target image to different baseline images,
while SIM only augments images along a single image aug-
mentation path from the target image to the origin. Be-
sides, PAM constrains the length of the image augmenta-
tion path to avoid augmenting images that are far away from
the target image and preserve the semantic meaning of the
target image. In contrast, SIM may augment images that
are semantics-inconsistent with the target image due to the
overlong image augmentation path.
To confirm the superiority of our PAM, we conduct ex-
tensive experiments against both undefended and defended
models on the ImageNet dataset. Experimental results show
that our PAM can achieve an improvement of over 3.7%
on average compared with the state-of-the-art baselines in
terms of the attack success rates. We also evaluate the per-
formance of the combination of PAM with other compatible
attack methods. Again, experimental results confirm that
our method can significantly outperform the state-of-the-art
baselines by about 7.2% on average.
In summary, our contributions in this paper are threefold:
• We discover that the state-of-the-art augmentation-
based attacks (SIM and Admix) actually augment
training images from a linear path for learning adver-
sarial samples. We argue that they suffer from limited
and overlong augmentation paths.
• To address their pitfalls, we propose the Path-
Augmented Method (PAM). PAM augments images
from multiple augmentation paths during the genera-
tion of adversarial samples. Besides, to make the aug-
mented images preserve the semantic meaning of the
target image, we train a Semantics Predictor (SP) to
constrain the length of each augmentation path.
• We conduct extensive experiments to validate the ef-
fectiveness of our methodologies. Experimental re-
sults confirm that our approaches can outperform the
state-of-the-art baselines by a margin of over 3.7% on
average. Besides, when combined with other compati-
ble strategies, our method can significantly surpass the
state-of-the-art baselines by 7.2% on average.
|
Xu_Where_Is_My_Wallet_Modeling_Object_Proposal_Sets_for_Egocentric_CVPR_2023
|
Abstract
This paper deals with the problem of localizing objects
in image and video datasets from visual exemplars. In par-
ticular, we focus on the challenging problem of egocen-
tric visual query localization. We first identify grave im-
plicit biases in current query-conditioned model design and
visual query datasets. Then, we directly tackle such bi-
ases at both frame and object set levels. Concretely, our
method solves these issues by expanding limited annota-
tions and dynamically dropping object proposals during
training. Additionally, we propose a novel transformer-
based module that allows for object-proposal set context to
be considered while incorporating query information. We
name our module Conditioned Contextual Transformer or
CocoFormer. Our experiments show the proposed adapta-
tions improve egocentric query detection, leading to a better
visual query localization system in both 2D and 3D configu-
rations. Thus, we can improve frame-level detection perfor-
mance from 26.28% to31.26% in AP , which correspond-
ingly improves the VQ2D and VQ3D localization scores
by significant margins. Our improved context-aware query
object detector ranked first and second respectively in the
VQ2D and VQ3D tasks in the 2nd Ego4D challenge. In ad-
dition to this, we showcase the relevance of our proposed
model in the Few-Shot Detection (FSD) task, where we also
achieve SOTA results. Our code is available at https://github.com/facebookresearch/vq2d_cvpr.
|
1. Introduction
The task of Visual Queries Localization can be described
as the question, ‘when was the last time that I saw X’, where
X is an object query represented by a visual crop. In the
Ego4D [24] setting, this task aims to retrieve objects from
an ‘episodic memory’, supported by the recordings from an
egocentric device, such as VR headsets or AR glasses.
*Work done during an internship at Meta AI.
Figure 1. Existing approaches to visual query localization are biased. Top: only positive frames containing query object are used to train and validate the query object detector, while the model is naturally tested on whole videos, where the query object is mostly absent. Training with background frames alleviates confusion caused by hard negatives during inference and helps predict fewer false positives in negative frames. Bottom: visual query, target object, and distractors of three query objects from easy to hard. The confidence scores in white from a biased model get suppressed in our model in cyan. Our method improves the visual query localization system with fewer false positives, even for the very challenging scenario shown in the last row.
A
real-world application of such functionality is localizing a
user’s items via a pre-registered object-centered image of
them. A functional visual query localization system will
allow users to find their belongings by a short re-play, or
via a 3D arrow pointing to their real-world localization.
The current solutions to this problem [24, 64] rely on
a so-called ‘Siam-detector’ that is trained on annotated
response tracks but tested on whole video sequences, as
shown in Fig. 1 (top, gray arrows). The Siam-detector
model design allows the incorporation of query exemplars
by independently comparing the query to all object propos-
als. During inference on a given video, the visual query is
fixed, and the detector runs over all the frames in the ego-
centric video recording.
Although existing methods offer promising results in
query object detection performance, it still suffers from do-
main and task biases. The domain bias appears due to only
training with frames with well-posed objects in clear view,
while daily egocentric recordings are naturally out of fo-
cus, blurry, and captured from uncommon view angles. On
the other hand, task bias refers to the issue of the query
object always being available during training, while in re-
ality, it is absent during most of the test time. Therefore,
baseline models predict false positives on distractors, espe-
cially when the query object is out of view. These issues are
exacerbated by the current models’ independent scoring of
each object proposal, as the baseline models learn to give
high scores to superficially similar objects while disregard-
ing other proposals to reassess those scores.
In Fig. 1 (bottom), we show the visual query, target
object, and distractors for three data samples from easy
to hard. The confidence scores in white of the distract-
ing objects are reported from a publicly-available ‘Siam-
RCNN’ model [24]. Due to random viewpoints and the
large number of possible object classes that are exhibited
in egocentric recordings, the target object is hard to dis-
cover and confused with high-confidence false positives. In
this work, we show that these biases can be largely tack-
led by training a conditioned contextual detector with aug-
mented object proposal sets. Our detector is based on a
hypernetwork architecture that allows incorporating open-
world visual queries, and a transformer-based design that
permits context from other proposals to take part in the de-
tection reasoning. We call this model Conditioned Contex-
tual Transformer or CocoFormer. CocoFormer has a condi-
tional projection layer that generates a transformation ma-
trix from the query. This transformation is then applied to
the proposal features to create query-conditioned proposal
embeddings. These query-aware proposal embeddings are
then fed to a set-transformer [33], which effectively allows
our model to utilize the global context of the correspond-
ing frame. We show our proposed CocoFormer surpasses Siam-RCNN and is more flexible in different applications
as a hyper-network, such as multimodal query object detec-
tion and few-shot object detection (FSD).
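The sketch below is a loose, hypothetical rendering of this design (not the released implementation; the paper uses a set-transformer [33], which we approximate here with a standard transformer encoder over the proposal set):

import torch
import torch.nn as nn

class ConditionalProjection(nn.Module):
    """Generate a transformation matrix from the query and apply it to the proposal features."""
    def __init__(self, dim=256):
        super().__init__()
        self.to_matrix = nn.Linear(dim, dim * dim)
        self.dim = dim

    def forward(self, query_feat, proposal_feats):
        # query_feat: (dim,), proposal_feats: (num_proposals, dim)
        W = self.to_matrix(query_feat).view(self.dim, self.dim)
        return proposal_feats @ W.T                            # query-conditioned proposal embeddings

class CocoFormerSketch(nn.Module):
    def __init__(self, dim=256, heads=8, layers=2):
        super().__init__()
        self.cond = ConditionalProjection(dim)
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.set_encoder = nn.TransformerEncoder(enc_layer, layers)   # proposals attend to each other
        self.score = nn.Linear(dim, 1)

    def forward(self, query_feat, proposal_feats):
        x = self.cond(query_feat, proposal_feats).unsqueeze(0)        # (1, N, dim)
        x = self.set_encoder(x)                                       # frame-level (set) context
        return self.score(x).squeeze(-1).squeeze(0)                   # per-proposal matching scores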
Our CocoFormer has the increased capability to model
objects for visual query localization because it tackles the
domain bias by incorporating conditional context, but it still
suffers from task bias induced by the current training strat-
egy. To alleviate this, we propose to sample proposal sets
from both labeled and unlabeled video frames, which col-
lects data closer to the real distribution. Concretely, we en-
large the positive query-frame training pairs by the natural
view-point change of the same object in the view, while we
create negative query-frame training pairs that incorporate
objects from background frames. Essentially, we sample
the proposal set from those frames in a balanced manner.
We collect all the objects in a frame as a set and decouple
the task into two set-level questions: (1) does the query ob-
ject exist? (2) what is the most similar object to the query?
Since the training bias issue only exists in the first prob-
lem, we sample positive/negative proposal sets to reduce it.
Note that this is an independent process of frame sampling,
as the domain bias impairs the understanding of objects in
the egocentric view, while task bias implicitly hinders the
precision of visual query detection.
Overall, our experiments show that diversifying the object
query and proposal sets, and leveraging global scene infor-
mation can evidently improve query object detection. For
example, training with unlabeled frames can improve the
detection AP from 27.55% to 28.74%, while optimizing the
conditional detector’s architecture can further push AP to
30.25%. Moreover, the visual query system can be evalu-
ated in both 2D and 3D configurations. In VQ2D, we ob-
serve a significant improvement from baseline 0.13 to 0.18
on the leaderboard. In VQ3D, we show consistent improve-
ment across all metrics. In the end, our improved context-
aware query object detector ranked first and second respec-
tively in the VQ2D and VQ3D tasks in the 2nd Ego4D chal-
lenge, and achieve SOTA in Few-Shot Detection (FSD) on
the COCO dataset.
|
Ye_Decoupling_Human_and_Camera_Motion_From_Videos_in_the_Wild_CVPR_2023
|
Abstract
We propose a method to reconstruct global human tra-
jectories from videos in the wild. Our optimization method
decouples the camera and human motion, which allows us
to place people in the same world coordinate frame. Most
existing methods do not model the camera motion; meth-
ods that rely on the background pixels to infer 3D human
motion usually require a full scene reconstruction, which
is often not possible for in-the-wild videos. However, even
when existing SLAM systems cannot recover accurate scene
reconstructions, the background pixel motion still provides
enough signal to constrain the camera motion. We show
that relative camera estimates along with data-driven hu-
man motion priors can resolve the scene scale ambiguity
and recover global human trajectories. Our method robustly
recovers the global 3D trajectories of people in challeng-
ing in-the-wild videos, such as PoseTrack. We quantify our
improvement over existing methods on 3D human dataset
Egobody. We further demonstrate that our recovered camera
scale allows us to reason about motion of multiple people in
a shared coordinate frame, which improves performance of
downstream tracking in PoseTrack.
|
1. Introduction
Consider the video sequence in Figure 1. As human ob-
servers, we can clearly perceive that the camera is following
the athlete as he runs across the field. However, when this
dynamic 3D scene is projected onto 2D images, because the
camera tracks the athlete, the athlete appears to be at the
center of the camera frame throughout the sequence — i.e.
the projection only captures the net motion of the underlying
human and camera trajectory. Thus, if we rely only on the
person’s 2D motion, as many human video reconstruction
methods do, we cannot recover their original trajectory in
the world (Figure 1 bottom left). To recover the person’s 3D
motion in the world (Figure 1 bottom right), we must also
reason about how much the camera is moving.
We present an approach that models the camera motion
to recover the 3D human motion in the world from videos
in the wild. Our system can handle multiple people and re-
constructs their motion in the same world coordinate frame,
enabling us to capture their spatial relationships. Recovering
the underlying human motion and their spatial relationships
is a key step towards understanding humans from in-the-wild
videos. Tasks, such as autonomous planning in environments
with humans [50], or recognition of human interactions with
the environment [68] and other people [21, 37], rely on in-
formation about global human trajectories. Current methods
that recover global trajectories either require additional sen-
sors, e.g. multiple cameras or depth sensors [11, 51], or
dense 3D reconstruction of the environment [12, 31], both
of which are only realistic in active or controlled capture
settings. Our method acquires these global trajectories from
videos in the wild, with no constraints on the capture setup,
camera motion, or prior knowledge of the environment. Be-
ing able to do this from dynamic cameras is particularly
relevant with the emergence of large-scale egocentric video
datasets [5, 9, 69].
To do this, given an input RGB video, we first estimate
the relative camera motion between frames from the static
scene’s pixel motion with a SLAM system [56]. At the
same time, we estimate the identities and body poses of all
detected people with a 3D human tracking system [45]. We
use these estimates to initialize the trajectories of the humans
and cameras in the shared world frame. We then optimize
these global trajectories over multiple stages to be consistent
with both the 2D observations in the video and learned priors
about how human move in the world [46]. We illustrate
our pipeline in Figure 2. Unlike existing works [11, 31], we
optimize over human and camera trajectories in the world
frame without requiring an accurate 3D reconstruction of
the static scene. Because of this, our method operates on
videos captured in the wild, a challenging domain for prior
methods that require good 3D geometry, since these videos
rarely contain camera viewpoints with sufficient baselines
for reliable scene reconstruction.
We combine two main insights to enable this optimization.
First, even when the scene parallax is insufficient for accu-
rate scene reconstruction, it still allows reasonable estimates
of camera motion up to an arbitrary scale factor. In fact, in
Figure 2, the recovered scene structure for the input video
is a degenerate flat plane, but the relative camera motion
still explains the scene parallax between frames. Second,
human bodies can move realistically in the world in a small
range of ways. Learned priors capture this space of realistic
human motion well. We use these insights to parameterize
the camera trajectory to be both consistent with the scene
parallax and the 2D reprojection of realistic human trajecto-
ries in the world. Specifically, we optimize over the scale of
camera displacement, using the relative camera estimates, to
be consistent with the human displacement. Moreover, when
multiple people are present in a video, as is often the case
in in-the-wild videos, the motions of all the people further
constrain the camera scale, allowing our method to operate
on complex videos of people.
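A heavily simplified, hypothetical sketch of that scale reasoning (rotations, the multi-stage optimization, reprojection terms, and the learned motion prior are all omitted; variable names are ours):

import torch

def fit_camera_scale(rel_cam_t, human_cam_t, human_world_t, steps=200, lr=0.05):
    """rel_cam_t:     (T, 3) unscaled relative camera translations from SLAM
    human_cam_t:   (T, 3) per-frame human root positions in the camera frame
    human_world_t: (T, 3) current estimate of the human root trajectory in the world frame
    Returns a scalar that scales the camera displacements to be consistent with the people."""
    log_scale = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([log_scale], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        cam_world_t = torch.cumsum(log_scale.exp() * rel_cam_t, dim=0)   # scaled camera path
        pred_world = cam_world_t + human_cam_t                           # people placed via the camera
        loss = ((pred_world - human_world_t) ** 2).mean()
        loss.backward()
        opt.step()
    return log_scale.exp().item()

In the full method this consistency is optimized jointly with the human trajectories themselves, and every person in the video contributes a constraint on the same scale.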
We evaluate our approach on EgoBody [69], a new dataset
of videos captured with a dynamic (ego-centric) camera
with ground truth 3D global human motion trajectory. Our
approach achieves significant improvement upon the state-of-the-art method that also tries to recover the human mo-
tion without considering the signal provided by the back-
ground pixels [63]. We further evaluate our approach on
PoseTrack [1], a challenging in-the-wild video dataset origi-
nally designed for tracking. To demonstrate the robustness
of our approach, we provide the results on all PoseTrack
validation sequences on our project page. On top of qualita-
tive evaluation, since there are no 3D ground-truth labels in
PoseTrack, we test our approach through an evaluation on
the downstream application of tracking. We show that the re-
covered scaled camera motion trajectory can be directly used
in the PHALP system [45] to improve tracking. The scaled
camera enables more persistent 3D human registration in the
3D world, which reduces the re-identification mistakes. We
provide video results and code at the project page.
|
Zhang_SINE_SINgle_Image_Editing_With_Text-to-Image_Diffusion_Models_CVPR_2023
|
Abstract
Recent works on diffusion models have demonstrated a
strong capability for conditioning image generation, e.g.,
text-guided image synthesis. Such success inspires many
efforts trying to use large-scale pre-trained diffusion mod-
els for tackling a challenging problem–real image editing.
Works conducted in this area learn a unique textual token
corresponding to several images containing the same ob-
ject. However, under many circumstances, only one im-
age is available, such as the painting of the Girl with a
Pearl Earring. Using existing works on fine-tuning the pre-
trained diffusion models with a single image causes severe
overfitting issues. The information leakage from the pre-
trained diffusion models means that the edit cannot keep the
same content as the given image while creating the new fea-
tures depicted by the language guidance. This work aims
to address the problem of single-image editing. We propose
a novel model-based guidance built upon the classifier-free
guidance so that the knowledge from the model trained on
a single image can be distilled into the pre-trained diffu-
sion model, enabling content creation even with one given
image. Additionally, we propose a patch-based fine-tuning
that can effectively help the model generate images of arbi-
trary resolution. We provide extensive experiments to vali-
date the design choices of our approach and show promis-
ing editing capabilities, including changing style, content
addition, and object manipulation. Our code is made pub-
licly available here.
|
1. Introduction
Automatic real image editing is an exciting direction,
enabling content generation and creation with minimal ef-
fort. Although many works have been conducted in this
area, achieving high-fidelity semantic manipulation on an
image is still a challenging problem for the generative mod-
els, considering the target image might be out of the train-
ing data distribution [4, 11, 25, 26, 32, 58, 59]. The recently
introduced large-scale text-to-image models, e.g., DALL·E
2 [37], Imagen [42], Parti [55], and StableDiffusion [39],
can perform high-quality and diverse image generation with
Figure 1. With only one real image, i.e., Source Image, our method
is able to manipulate and generate the content in various ways,
such as changing style, adding context, modifying the object, and
enlarging the resolution, through guidance from the text prompt.
natural language guidance. The success of these works has
inspired many subsequent efforts to leverage the pre-trained
large-scale models for real image editing [10, 15, 20, 40].
They show that, with properly designed prompts and a lim-
ited number of fine-tuning steps, the text-to-image models
can manipulate a given subject with text guidance.
On the downside, the recent text-guided editing works
that build upon diffusion models suffer from several limita-
tions. First, the fine-tuning process might cause the pre-
trained large-scale model to overfit the real image, which
degrades the quality of the synthesized images when editing. To
tackle these issues, methods like using multiple images with
the same content and applying regularization terms on the
same object have been introduced [4, 40]. However, query-
ing multiple images with identical content or objects might
not be possible; for instance, there is only one
painting of Girl with a Pearl Earring. Directly editing
the single image brings information leakage from the pre-
trained large-scale models, generating images with differ-
ent content (examples in Fig. 5); therefore, the application
scenarios of these methods are greatly constrained. Second,
these works lack a reasonable understanding of the object
geometry for the edited image. Thus, generating images
with a different spatial size than the training data causes unde-
sired artifacts, e.g., repeated objects and incorrectly modi-
fied geometry (examples in Fig. 5). Such drawbacks restrict
applying these methods for generating images with an ar-
bitrary resolution, e.g., synthesizing high-resolution images
from a single photo of a castle (as in Fig. 4), again limiting
the usage of these methods.
In this work, we present SINE, a framework utilizing
pre-trained text-to-image diffusion models for SINgle im-
ageEditing and content manipulation. We build our ap-
proach based upon existing text-guided image generation
approaches [10, 40] and propose the following novel tech-
niques to solve overfitting issues on content and geometry,
and language drift [28, 31]:
• First, by appropriately modifying the classifier-free guid-
ance [22], we introduce model-based classifier-free guid-
ance that utilizes the diffusion model to provide the score
guidance for content and structure. Taking advantage of
the step-by-step sampling process used in diffusion mod-
els, we use the model fine-tuned on a single image to
plant a content “seed” at the early stage of the denois-
ing process and allow the pre-trained large-scale text-to-
image model to edit creatively conditioned with the lan-
guage guidance at a later stage (see the sketch after this list).
• Second, to decouple the correlation between pixel posi-
tion and content, we propose a patch-based fine-tuning
strategy, enabling generation on arbitrary resolution.
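As a rough illustration of the model-based guidance in the first bullet, the sketch below (our simplification, not the released SINE code) hands the denoising process from the single-image fine-tuned model, which plants the content "seed" during the early steps, to the pre-trained text-to-image model with standard classifier-free guidance for the later, creative steps. The hard hand-off at a fixed fraction k, the prompt embeddings, and the plain DDIM update are all assumptions of this sketch.

```python
# Hedged sketch of two-stage, model-based guidance during sampling. `finetuned`
# and `pretrained` are noise-prediction networks; `alphas` is a 1-D tensor with
# the cumulative alpha schedule; c_src / c_edit / c_null are source, edit, and
# unconditional prompt embeddings. All of these are assumed inputs.
import torch

@torch.no_grad()
def sample(finetuned, pretrained, x_T, timesteps, alphas, c_src, c_edit, c_null,
           k=0.6, w=7.5):
    x = x_T
    n_early = int(k * len(timesteps))                 # early steps plant the content "seed"
    for i, t in enumerate(timesteps):                 # timesteps go from T down to 0
        if i < n_early:
            eps = finetuned(x, t, c_src)              # structure from the single-image model
        else:
            e_c = pretrained(x, t, c_edit)            # creative editing from the large model
            e_u = pretrained(x, t, c_null)
            eps = e_u + w * (e_c - e_u)               # classifier-free guidance
        t_prev = timesteps[i + 1] if i + 1 < len(timesteps) else 0
        a_t, a_prev = alphas[t], alphas[t_prev]
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()          # DDIM-style update
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
    return x
```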
With a text descriptor describing the content that is
aimed to be manipulated and language guidance depicting
the desired output, our approach can edit the single unique
image to the targeted domain with details preserved in arbi-
trary resolution. The output image keeps the structure and
background intact while having features well-aligned with
the target language guidance. As shown in Fig. 1, trained
on the painting Girl with a Pearl Earring at a resolution of
512×512, we can sample an image of a sculpture of the
girl at a resolution of 640×512 with the identity features
preserved. Moreover, our method can successfully handle
various edits such as style transfer, content addition, and
object manipulation (more examples in Fig. 3). We hope
our method can further boost creative content creation by
opening the door to editing arbitrary images.
|
Xu_Zero-Shot_Object_Counting_CVPR_2023
|
Abstract
Class-agnostic object counting aims to count object in-
stances of an arbitrary class at test time. Current methods
for this challenging problem require human-annotated ex-
emplars as inputs, which are often unavailable for novel
categories, especially for autonomous systems. Thus, we
propose zero-shot object counting (ZSC), a new setting
where only the class name is available during test time.
Such a counting system does not require human annotators
in the loop and can operate automatically. Starting from a
class name, we propose a method that can accurately iden-
tify the optimal patches which can then be used as counting
exemplars. Specifically, we first construct a class prototype
to select the patches that are likely to contain the objects
of interest, namely class-relevant patches. Furthermore, we
introduce a model that can quantitatively measure how suit-
able an arbitrary patch is as a counting exemplar. By ap-
plying this model to all the candidate patches, we can se-
lect the most suitable patches as exemplars for counting.
Experimental results on a recent class-agnostic counting
dataset, FSC-147, validate the effectiveness of our method.
Code is available at https://github.com/cvlab-
stonybrook/zero-shot-counting .
|
1. Introduction
Object counting aims to infer the number of objects in
an image. Most of the existing methods focus on counting
objects from specialized categories such as human crowds
[37], cars [29], animals [4], and cells [46]. These meth-
ods count only a single category at a time. Recently, class-
agnostic counting [28, 34, 38] has been proposed to count
objects of arbitrary categories. Several human-annotated
bounding boxes of objects are required to specify the ob-
jects of interest (see Figure 1a). However, having humans
in the loop is not practical for many real-world applications,
such as fully automated wildlife monitoring systems or vi-
*Work done prior to joining Amazon
(a) Few-shot Counting (b) Zero-Shot Counting
Figure 1. Our proposed task of zero-shot object counting (ZSC).
Traditional few-shot counting methods require a few exemplars of
the object category (a). We propose zero-shot counting where the
counter only needs the class name to count the number of object
instances. (b). Few-shot counting methods require human annota-
tors at test time while zero-shot counters can be fully automatic.
sual anomaly detection systems.
A more practical setting, exemplar-free class-agnostic
counting, has been proposed recently by Ranjan et al. [33].
They introduce RepRPN, which first identifies the objects
that occur most frequently in the image, and then uses them
as exemplars for object counting. Even though RepRPN
does not require any annotated boxes at test time, the
method simply counts objects from the class with the high-
est number of instances. Thus, it cannot be used for count-
ing a specific class of interest. The method is only suitable
for counting images with a single dominant object class,
which limits the potential applicability.
Our goal is to build an exemplar-free object counter
where we can specify what to count. To this end, we in-
troduce a new counting task in which the user only needs
to provide the name of the class for counting rather than the
exemplars (see Figure 1b). In this way, the counting model
can not only operate in an automatic manner but also allow
the user to define what to count by simply providing the
class name. Note that the class to count during test time can
be arbitrary. For cases where the test class is completely
unseen to the trained model, the counter needs to adapt to
the unseen class without any annotated data. Hence, we
name this setting zero-shot object counting (ZSC), inspired
by previous zero-shot learning approaches [6, 57].
To count without any annotated exemplars, our idea is
to identify a few patches in the input image containing the
target object that can be used as counting exemplars. Here
the challenges are twofold: 1) how to localize patches that
contain the object of interest based on the provided class
name, and 2) how to select good exemplars for counting.
Ideally, good object exemplars are visually representative
for most instances in the image, which can benefit the object
counter. In addition, we want to avoid selecting patches that
contain irrelevant objects or backgrounds, which likely lead
to incorrect object counts.
To this end, we propose a two-step method that first lo-
calizes the class-relevant patches which contain the objects
of interest based on the given class name, and then selects
among these patches the optimal exemplars for counting.
We use these selected exemplars, together with a pre-trained
exemplar-based counting model, to achieve exemplar-free
object counting.
In particular, to localize the patches containing the ob-
jects of interest, we first construct a class prototype in a pre-
trained embedding space based on the given class name. To
construct the class prototype, we train a conditional vari-
ational autoencoder (VAE) to generate features for an ar-
bitrary class conditioned on its semantic embedding. The
class prototype is computed by taking the average of the
generated features. We then select the patches whose em-
beddings are the k-nearest neighbors of the class prototype
as the class-relevant patches.
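A minimal sketch of this selection step, under the assumption of a pre-trained conditional VAE decoder `vae_decoder`, a class-name embedding `text_emb`, and pre-computed patch embeddings `patch_embs` (all names illustrative, not the authors' code):

```python
# Hedged sketch of class-relevant patch selection: average features generated by
# a conditional VAE decoder into a class prototype, then keep the k patches whose
# embeddings are nearest to it. Shapes and module names are assumptions.
import torch
import torch.nn.functional as F

def select_class_relevant_patches(vae_decoder, text_emb, patch_embs, n_samples=100, k=10):
    z = torch.randn(n_samples, vae_decoder.latent_dim)          # sample latent codes
    cond = text_emb.expand(n_samples, -1)                       # class-name embedding
    gen_feats = vae_decoder(z, cond)                            # (n_samples, D) generated features
    prototype = F.normalize(gen_feats.mean(dim=0), dim=0)       # class prototype
    sims = F.normalize(patch_embs, dim=1) @ prototype           # cosine similarity per patch
    return sims.topk(k).indices                                 # indices of class-relevant patches
```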
After obtaining the class-relevant patches, we further se-
lect among them the optimal patches to be used as counting
exemplars. Here we observe that the feature maps obtained
using good exemplars and badexemplars often exhibit dis-
tinguishable differences. An example of the feature maps
obtained with different exemplars is shown in Figure 2. The
feature map from a good exemplar typically exhibits some
repetitive patterns (e.g., the dots on the feature map) that
center around the object areas while the patterns from a bad
exemplar are more irregular and occur randomly across the
image. Based on this observation, we train a model to mea-
sure the goodness of an input patch based on its correspond-
ing feature maps. Specifically, given an arbitrary patch and
a pre-trained exemplar-based object counter, we train this
model to predict the counting error of the counter when us-
ing the patch as the exemplar. Here the counting error can
indicate the goodness of the exemplar. After this error pre-
dictor is trained, we use it to select those patches with the
smallest predicted errors as the final exemplars for counting.
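The selection itself then reduces to ranking candidates by the predicted error; a hedged sketch follows, where `counter.feature_map` and `error_predictor` are placeholder names for the frozen exemplar-based counter and the trained error regressor.

```python
# Hedged sketch of exemplar selection with a learned error predictor. The names
# and signatures below are illustrative stand-ins, not a real API.
import torch

@torch.no_grad()
def select_exemplars(counter, error_predictor, image, candidate_patches, n_exemplars=3):
    errors = []
    for patch in candidate_patches:
        fmap = counter.feature_map(image, patch)      # feature map when using this exemplar
        errors.append(error_predictor(fmap).item())   # predicted counting error (lower is better)
    order = torch.tensor(errors).argsort()
    return [candidate_patches[i] for i in order[:n_exemplars]]
```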
Experiments on the FSC-147 dataset show that our
method outperforms the previous exemplar-free counting
method [33] by a large margin. We also provide analy-
ses to show that patches selected by our method can be
Figure 2. Feature maps obtained using different exemplars given
a pre-trained exemplar-based counting model. The feature maps
obtained using good exemplars typically exhibit some repetitive
patterns while the patterns from bad exemplars are more irregular.
used in other exemplar-based counting methods to achieve
exemplar-free counting. In short, our main contributions
can be summarized as follows:
• We introduce the task of zero-shot object counting that
counts the number of instances of a specific class in
the input image, given only the class name and without
relying on any human-annotated exemplars.
• We propose a simple yet effective patch selec-
tion method that can accurately localize the optimal
patches across the query image as exemplars for zero-
shot object counting.
• We verify the effectiveness of our method on the FSC-
147 dataset, through extensive ablation studies and vi-
sualization results.
|
Yang_Vector_Quantization_With_Self-Attention_for_Quality-Independent_Representation_Learning_CVPR_2023
|
Abstract
Recently, the robustness of deep neural networks has
drawn extensive attention due to the potential distribution
shift between training and testing data (e.g., deep mod-
els trained on high-quality images are sensitive to corrup-
tion during testing). Many researchers attempt to make
the model learn invariant representations from multiple
corrupted data through data augmentation or image-pair-
based feature distillation to improve the robustness. In-
spired by sparse representation in image restoration, we opt
to address this issue by learning image-quality-independent
feature representation in a simple plug-and-play manner,
that is, to introduce discrete vector quantization (VQ) to re-
move redundancy in recognition models. Specifically, we
first add a codebook module to the network to quantize
deep features. Then we concatenate them and design a
self-attention module to enhance the representation. During
training, we enforce the quantization of features from clean
and corrupted images in the same discrete embedding space
so that an invariant quality-independent feature representa-
tion can be learned to improve the recognition robustness
of low-quality images. Qualitative and quantitative experi-
mental results show that our method achieved this goal ef-
fectively, leading to a new state-of-the-art result of 43.1%
mCE on ImageNet-C with ResNet50 as the backbone. On
other robustness benchmark datasets, such as ImageNet-R,
our method also has an accuracy improvement of almost
2%. The source code is available at https://see.
xidian.edu.cn/faculty/wsdong/Projects/
VQSA.htm
|
1. Introduction
The past few years have witnessed the remarkable devel-
opment of deep convolutional neural networks (DCNNs) in
many recognition tasks, such as classification [9,20,31,46],
*Corresponding author.
(a) Image (b) Clean (c) QualNet (d) Ours
Figure 1. The Grad-CAM [45] maps of different models on defo-
cus blur images. (a) The clean images. (b) The maps of vanilla
ResNet50 [20] model on clean images. (c) and (d) show the maps
of QualNet50 [29] and our proposed method on defocus blur im-
ages with severity level 3. The results show that our method still
can focus on the salient object area without being seriously af-
fected by corruption. Best viewed in color.
detection [43, 48] and segmentation [5, 19]. Although
its performance has exceeded that of humans in some
datasets, its robustness still lags behind [10, 15]. Many re-
searchers [11, 21, 22, 53] have shown that the performance
of deep models trained on high-quality data decreases dra-
matically on low-quality data encountered during deploy-
ment, which usually contain common corruptions, includ-
ing blur, noise, and weather influence. For example, the
vanilla ResNet50 model has an accuracy of 76% on the
clean ImageNet validation set, but its average accuracy is
less than 30% on the same dataset contaminated by Gaus-
sian noise.
As shown in Fig. 1, the model’s attention map is seri-
ously affected by image quality. That is, the model pays
close attention to an incomplete or incorrect area of the de-
focus blur image, which results in a wrong prediction. Deep
degradation prior (DDP) [55] also reveals that deep repre-
sentation space degradation is the reason for the decrease
in model performance, and finding a mapping between de-
graded and clean features will certainly benefit downstream
classification or detection tasks.
Generally, a simple and effective method is to consider
these common corruptions and to use low-quality simulated
images for augmented (fine-tuning) training. However, as
described in [29], this method does not explicitly map low-
quality features to high-quality features; instead, the model
tends to learn the average distribution among corruptions,
resulting in limited performance improvement. In addition,
when the type of corruption is unknown during training, this
approach is heavily based on well-designed augmentation
strategies to improve generalization and robustness.
To enhance the feature representation of low-quality im-
ages, another feasible solution is to use pairs of clean and
degraded images for training and align the degraded fea-
tures with the corresponding clean ones. Both DDP [55]
and QualNet [29] use high-quality features extracted from
clean images as supervision and improve low-quality fea-
tures through a distillation-like approach [1, 25, 26, 58]. Al-
though they learn the mapping relationship between low-
and high-quality feature representations, collecting paired images
for training is time-consuming, and some irrelevant features
that are not useful for recognition are also forcibly aligned,
such as background, color, lighting, and other features
which are affected by the image quality.
The vector quantization (VQ) process is essentially a
special case of sparse representation, that is, the represen-
tation coefficient is a one-hot vector [50]. Similarly to the
application of sparse representation in image restoration
[13, 59], the compression-reconstruction learning frame-
work of VQ is also conducive to the removal of noisy redun-
dant information and learning essential features for recog-
nition. Inspired by this, we propose to use VQ to bridge
the gap between low- and high-quality features and learn
a quality-independent representation. Specifically, we first
add a vector quantizer (codebook) to the recognition model.
During the training process, due to the codebook update
mechanism, both low- and high-quality features will be as-
signed to the same discrete embedding space. Subsequently,
since direct hard quantization (selecting and replacing) may
lose some useful information, we choose to concatenate the
quantized features with the original ones and use a self-
attention module to further enhance the quality-independent
representation and weaken irrelevant features. In summary,
the main contributions of this paper are listed below.
• To the best of our knowledge, it is the first time
that we propose to introduce vector quantization into
the recognition model for learning quality-independent feature representation and improving the models’ ro-
bustness on common corruptions.
• We concatenate the quantized feature vector with the
original one and use the self-attention module to en-
hance the quality-independent feature representation
instead of direct replacement in the standard vector
quantization method.
• Extensive experimental results show that our method
has achieved higher accuracy on benchmark low-
quality datasets than several current state-of-the-art
methods.
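For concreteness, here is a minimal sketch of the quantize-concatenate-attend pipeline described earlier in this introduction (our simplification, not the authors' released code); codebook size, feature dimension, and the single attention layer are illustrative choices.

```python
# Hedged sketch: quantize deep features with a learned codebook, concatenate the
# quantized and original features, and refine them with self-attention.
import torch
import torch.nn as nn

class VQSelfAttention(nn.Module):
    def __init__(self, dim=256, codebook_size=512, heads=4):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)
        self.attn = nn.MultiheadAttention(2 * dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, feat):                                  # feat: (B, N, dim) flattened spatial features
        flat = feat.reshape(-1, feat.size(-1))
        d = torch.cdist(flat, self.codebook.weight)           # distances to all codewords
        idx = d.argmin(dim=-1).view(feat.shape[:2])           # nearest codeword per location
        quant = self.codebook(idx)
        quant = feat + (quant - feat).detach()                # straight-through estimator
        x = torch.cat([feat, quant], dim=-1)                  # keep the original features instead of replacing them
        x, _ = self.attn(x, x, x)                             # self-attention enhancement
        return self.proj(x)                                   # quality-independent representation
```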
|
Zhang_PHA_Patch-Wise_High-Frequency_Augmentation_for_Transformer-Based_Person_Re-Identification_CVPR_2023
|
Abstract
Although recent studies empirically show that injecting
Convolutional Neural Networks (CNNs) into Vision Trans-
formers (ViTs) can improve the performance of person re-
identification, the rationale behind it remains elusive. From
a frequency perspective, we reveal that ViTs perform worse
than CNNs in preserving key high-frequency components
(e.g., clothes texture details) since high-frequency compo-
nents are inevitably diluted by low-frequency ones due to
the intrinsic Self-Attention within ViTs. To remedy such
inadequacy of the ViT, we propose a Patch-wise High-
frequency Augmentation (PHA) method with two core de-
signs. First , to enhance the feature representation ability
of high-frequency components, we split patches with high-
frequency components by the Discrete Haar Wavelet Trans-
form, then empower the ViT to take the split patches as aux-
iliary input. Second , to prevent high-frequency components
from being diluted by low-frequency ones when taking the
entire sequence as input during network optimization, we
propose a novel patch-wise contrastive loss. From the view
of gradient optimization, it acts as an implicit augmentation
to improve the representation ability of key high-frequency
components. This benefits the ViT to capture key high-
frequency components to extract discriminative person rep-
resentations. PHA is necessary during training and can be
removed during inference, without bringing extra complex-
ity. Extensive experiments on widely-used ReID datasets
validate the effectiveness of our method.
|
1. Introduction
*Corresponding author
https://github.com/zhangguiwei610/PHA
This work was partially supported by the National Natural Science
Foundation of China (No. 62072022) and the Fundamental Research
Funds for the Central Universities.
Figure 1. Performance comparisons between ResNet101 (44.7M
#Param) and TransReID (100M #Param) on (a) original person
images, (b) low-frequency components, and (c) high-frequency
components of Market1501 and MSMT17 datasets, respectively.
Note that “#Param” refers to the number of parameters.
Person re-identification (ReID) aims to retrieve a specific
person, given in a query image, from the search over a large
set of images captured by various cameras [33, 36, 37, 45].
Most research to date has focused on extracting discrimi-
native person representations from single images, either by
Convolutional Neural Networks (CNNs) [24, 25, 39, 42],
Vision Transformers (ViTs) [2, 8, 23, 27, 41, 43] or Hybrid-
based approaches [10, 11, 14, 16, 35]. These studies empir-
ically show that injecting CNNs into ViTs can improve the
discriminative of person representations [8, 35].
Despite a couple of empirical solutions, the rationale for
why ViTs can be improved by CNNs remains elusive. To
this end, we explore possible reasons from a frequency per-
spective, which is of great significance in digital image pro-
cessing [4, 12, 32]. As shown in Fig. 1, we first employ
Discrete Haar Wavelet Transform (DHWT) [19] to trans-
form original person images into low-frequency compo-
nents and high-frequency components, then conduct perfor-
mance comparisons between the ResNet101 [6] and Tran-
sReID [8] on original images, the low-frequency compo-
nents and high-frequency components of Market1501 and
MSMT17 datasets, respectively. Our comparisons reveal:
(1) Certain texture details of person images, which
are more related to the high-frequency components, are
crucial for ReID tasks. Specifically, from Fig. 1 (a) and
(b), with the same model, the performance on the origi-
nal images is consistently better than that on low-frequency
components. Taking the TransReID as an example, the
Rank-1/mAP on original images of MSMT17 dataset is
6.5%/10.0% higher than that on the low-frequency compo-
nents. The root reason might be that low-frequency compo-
nents only reflect coarse-grained visual patterns of images,
and lose texture details (e.g., bags and edges). In contrast,
the lost details are more related to the high-frequency com-
ponents, as shown in Fig. 1 (c). The degradation from Fig. 1
(a) to (b) indicates that certain details are key components
to improve the performance of ReID.
(2) The ViT performs worse than CNNs in preserv-
ing key high-frequency components (e.g., texture de-
tails of clothes and bags) of person images. As shown
in Fig. 1 (c), although the TransReID outperforms the
ResNet101 consistently on the original images and low-
frequency components, the ResNet101 exceeds the Tran-
sReID by 4.6%/2.4% and 5.9%/1.9% Rank-1/mAP on
high-frequency components of Market1501 and MSMT17
datasets. The poor performance of the TransReID on high-
frequency components shows its inadequacy in capturing
key high-frequency details of person images.
In view of the above, we analyze the possible reason
for such inadequacy of the ViT by revisiting Self-Attention
from a frequency perspective (Sec. 3.1). We reveal that
high-frequency components of person images are inevitably
diluted by low-frequency ones due to the Self-Attention
mechanism within ViTs. To remedy such inadequacy of
the ViT without modifying its architecture, we propose a
Patch-wise High-frequency Augmentation (PHA) method
with two core designs (Sec. 3.2). First , unlike previous
works that directly take frequency subbands as network in-
put [1, 4, 5, 17], we split patches with high-frequency com-
ponents by the Discrete Haar Wavelet Transform (DHWT)
and drop certain low-frequency components correspond-
ingly, then empower the ViT to take the split patches as
auxiliary input. This benefits the ViT to enhance the fea-
ture representation ability of high-frequency components.
Note that the dropped components are imperceptible to hu-
man eyes but essential for the model, thereby preventing
the model from overfitting to low-frequency components.
Second , to prevent high-frequency components from be-
ing diluted by low-frequency ones when taking the entire
sequence as input during network optimization, we pro-pose a novel patch-wise contrastive loss. From the view
of gradient optimization, it acts as an implicit augmentation
to enhance the feature representation ability of key high-
frequency components to extract discriminative person rep-
resentations (Sec. 3.3). With it, our PHA is necessary dur-
ing training and can be discarded during inference, without
bringing extra complexity. Our contributions include:
•We reveal that due to the intrinsic Self-Attention mech-
anism, the ViT performs worse than CNNs in capturing
high-frequency components of person images, which
are key ingredients for ReID. Hence, we develop a
Patch-wise High-frequency Augmentation (PHA) to
extract discriminative person representations by en-
hancing high-frequency components.
•We propose a novel patch-wise contrastive loss, en-
abling the ViT to preserve key high-frequency com-
ponents of person images. From the view of gradi-
ent optimization, it acts as an implicit augmentation to
enhance the feature representation ability of key high-
frequency components. By virtue of it, our PHA is
necessary during training and can be removed during
inference, without bringing extra complexity.
•Extensive experimental results perform favorably
against the mainstream methods on CUHK03-NP,
Market-1501, and MSMT17 datasets.
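As a concrete reference for the DHWT step described above, the following sketch performs a single-level 2D Haar decomposition and keeps only the high-frequency subbands; how these subbands are patchified and appended as auxiliary ViT tokens is simplified away here.

```python
# Hedged sketch of single-level 2D Haar decomposition used to separate low- and
# high-frequency content of person images.
import torch

def haar_dwt(x):
    """x: (B, C, H, W) with even H, W. Returns (LL, LH, HL, HH) subbands."""
    a = x[:, :, 0::2, 0::2]; b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]; d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2      # horizontal detail
    hl = (a - b + c - d) / 2      # vertical detail
    hh = (a - b - c + d) / 2      # diagonal detail
    return ll, lh, hl, hh

imgs = torch.randn(4, 3, 256, 128)                 # a batch of person images
ll, lh, hl, hh = haar_dwt(imgs)
high_freq = torch.cat([lh, hl, hh], dim=1)         # high-frequency components only
# e.g., patchify `high_freq` and append the tokens as auxiliary ViT input
```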
|
Yao_HGNet_Learning_Hierarchical_Geometry_From_Points_Edges_and_Surfaces_CVPR_2023
|
Abstract
Parsing an unstructured point set into constituent local
geometry structures (e.g., edges or surfaces) would be help-
ful for understanding and representing point clouds. This
motivates us to design a deep architecture to model the
hierarchical geometry from points, edges, surfaces (trian-
gles), to super-surfaces (adjacent surfaces) for the thor-
ough analysis of point clouds. In this paper, we present
a novel Hierarchical Geometry Network (HGNet) that in-
tegrates such hierarchical geometry structures from super-
surfaces, surfaces, edges, to points in a top-down man-
ner for learning point cloud representations. Technically,
we first construct the edges between every two neighbor
points. A point-level representation is learnt with edge-to-
point aggregation, i.e., aggregating all connected edges into
the anchor point. Next, as every two neighbor edges com-
pose a surface, we obtain the edge-level representation of
each anchor edge via surface-to-edge aggregation over all
neighbor surfaces. Furthermore, the surface-level repre-
sentation is achieved through super-surface-to-surface ag-
gregation by transforming all super-surfaces into the an-
chor surface. A Transformer structure is finally devised to
unify all the point-level, edge-level, and surface-level fea-
tures into the holistic point cloud representations. Extensive
experiments on four point cloud analysis datasets demon-
strate the superiority of HGNet for 3D object classification
and part/semantic segmentation tasks. More remarkably,
HGNet achieves the overall accuracy of 89.2% on ScanOb-
jectNN, improving PointNeXt-S by 1.5%.
|
1. Introduction
With the growing popularity of 3D acquisition technolo-
gies (such as 3D scanners, RGBD cameras, and LiDARs),
3D point cloud analysis has been a fast-developing topic in
the past decade. Practical point cloud analysis systems have
great potential for numerous applications, e.g., autonomous
driving, robotics, and virtual reality. In comparison to 2D
RGB image data, 3D data of point clouds has an additional
dimension of depth, leading to a three dimensional world as
Figure 1. The progressive development of a four-level hierarchi-
cal geometry structure among points, edges, surfaces, and super-
surfaces. Such hierarchical geometry structure further in turn
strengthens surface-level, edge-level, and point-level features in
a top-down manner.
in our daily life. The 3D point clouds are armed with several
key merits, e.g., preserving rich geometry information and
being invariant to lighting conditions. Nevertheless, consid-
ering that point clouds are naturally unstructured without
any discretization, it is not trivial to directly apply the
typical CNN operations widely adopted in 2D vision to
the raw point set for analyzing 3D data.
To alleviate this limitation, a series of efforts have at-
tempted to interpret the geometry information rooted in
unstructured point clouds, such as the regular point-based
[30, 38], edge/relation-based [32, 46], or surface-based [39]
representations. For instance, PointNet [30] builds up the
foundation of regular point-based feature extraction via
stacked Multi-Layer Perceptrons (MLPs) plus symmetric
aggregation. The subsequent works (e.g., PointNet++ [32]
and DGCNN [46]) further incorporate edge/relation-based
features by using set abstraction or graph models for aggre-
gating/propagating points and edges, pursuing the mining
of finer geometric structures. More recently, RepSurf [39]
learns surface-based features over triangle meshes to enable
an amplified expression of local geometry. Despite showing
encouraging performances, most existing point cloud anal-
ysis techniques lack a thorough understanding of the com-
positional local geometry structures in the point clouds.
In this work, we propose to mitigate this issue from
the standpoint of parsing an unstructured point set into a
hierarchical structure of constituent local geometry struc-
tures to comprehensively characterize the point clouds. Our
launching point is to construct a bottom-up local hierarchi-
cal topology from the leaf level of irregular and unordered
points, to the intermediary levels of edges and surfaces, and
the root level of super-surfaces (i.e., adjacent surfaces). Fig-
ure 1 conceptually depicts the progressive development of
a hierarchical geometry structure for a local set of points.
Such hierarchical structure in turn triggers the reinforce-
ment of 3D model capacity for capturing comprehensive ge-
ometry information. The learning of multi-granular geom-
etry features, i.e., surface-level, edge-level, and point-level
features, is enhanced by aggregating geometry information
from super-surfaces, surfaces, or edges derived from the hi-
erarchical topology in a top-down fashion.
By consolidating the idea of delving into the local hier-
archical geometry in point clouds, we present a new Hier-
archical Geometry Network (HGNet) to facilitate the learn-
ing of point cloud representations. Specifically, we con-
struct a four-level hierarchy among points, edges, surfaces,
and super-surfaces. The edges are first built between two
neighbor points and each is extended to a surface (trian-
gle) by connecting another neighbor point. The adjacent
surfaces are then grouped as a super-surface. Next, HGNet
executes top-down feature aggregation from super-surfaces,
surfaces, and edges, leading to three upgraded geometry
features of surfaces, edges, and points, respectively. Finally,
we capitalize on a Transformer structure to unify these re-
fined surface-level, edge-level, and point-level geometry
features with holistic contextual information, thereby im-
proving point cloud analysis.
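A minimal sketch of the lowest level of this hierarchy follows, assuming simple k-nearest-neighbor edges and an MLP-plus-max aggregation (illustrative choices, not the exact HGNet operators); the surface and super-surface levels would be stacked on top analogously.

```python
# Hedged sketch of edge-to-point aggregation: build edges between each point and
# its k nearest neighbors, then aggregate edge features back into the anchor point.
import torch
import torch.nn as nn

def knn(points, k):
    dist = torch.cdist(points, points)                           # (B, N, N)
    return dist.topk(k + 1, largest=False).indices[:, :, 1:]     # drop self-edge

class EdgeToPoint(nn.Module):
    def __init__(self, in_dim=3, out_dim=64, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, points):                                   # points: (B, N, 3)
        idx = knn(points, self.k)                                 # (B, N, k)
        nbrs = torch.gather(points.unsqueeze(1).expand(-1, points.size(1), -1, -1),
                            2, idx.unsqueeze(-1).expand(-1, -1, -1, points.size(-1)))
        anchor = points.unsqueeze(2).expand_as(nbrs)
        edges = torch.cat([anchor, nbrs - anchor], dim=-1)        # per-edge feature
        return self.mlp(edges).max(dim=2).values                  # aggregate edges into points
```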
We analyze and evaluate our HGNet through extensive
experiments over multiple 3D vision tasks for point cloud
analysis (e.g., 3D object classification and part/semantic
segmentation tasks), which empirically demonstrate its su-
perior advantage against state-of-the-art backbones. More
remarkably, on the real-world challenging benchmark of
ScanObjectNN, HGNet achieves to-date the best published
overall accuracy of 89.2%, which improves over
PointNeXt-S (87.7%) by an absolute 1.5%. For part and semantic
segmentation on PartNet and S3DIS datasets, HGNet sur-
passes PosPool and PointNeXt by 1.8% and 1.8% in mIoU.
|
Yang_BEVHeight_A_Robust_Framework_for_Vision-Based_Roadside_3D_Object_Detection_CVPR_2023
|
Abstract
While most recent autonomous driving systems focus
on developing perception methods on ego-vehicle sensors,
people tend to overlook an alternative approach to lever-
age intelligent roadside cameras to extend the perception
ability beyond the visual range. We discover that the state-
of-the-art vision-centric bird’s eye view detection methods
have inferior performances on roadside cameras. This is
because these methods mainly focus on recovering the depth
regarding the camera center, where the depth difference be-
*Work done during an internship at DAMO Academy, Alibaba Group.
†Corresponding Author.
tween the car and the ground quickly shrinks while the dis-
tance increases. In this paper, we propose a simple yet
effective approach, dubbed BEVHeight, to address this is-
sue. In essence, instead of predicting the pixel-wise depth,
we regress the height to the ground to achieve a distance-
agnostic formulation to ease the optimization process of
camera-only perception methods. On popular 3D detection
benchmarks of roadside cameras, our method surpasses
all previous vision-centric methods by a significant mar-
gin. The code is available at https://github.com/
ADLab-AutoDrive/BEVHeight .
|
1. Introduction
The rising tide of autonomous driving vehicles draws
vast research attention to many 3D perception tasks, of
which 3D object detection plays a critical role. While most
recent works tend to only rely on ego-vehicle sensors, there
are certain downsides of this line of work that hinders the
perception capability under given scenarios. For example,
as the mounting position of cameras is relatively close to
the ground, obstacles can be easily occluded by other vehi-
cles to cause severe crash damage. To this end, people have
started to develop perception systems that leverage intelli-
gent units on the roadside, such as cameras, to address such
occlusion issue and enlarge perception range so to increase
the response time in case of danger [5,11,28,34,36,37]. To
facilitate future research, there are two large-scale bench-
mark datasets [36,37] of various roadside cameras and pro-
vide an evaluation of certain baseline methods.
Recently, people have discovered that, in contrast to directly pro-
jecting the 2D images into a 3D space, leveraging a bird’s
eye view (BEV) feature space can significantly improve the
perception performance of vision-centric systems. One line
of recent approaches, which constitutes the state-of-the-art
camera-only methods, is to implicitly or explicitly generate
the depth for each pixel to ease the optimization process of
bounding box regression. However, as shown in Fig. 1, we
visualize the per-pixel depth of a roadside image and notice
a phenomenon. Consider two points, one on the roof of a
car and another on the nearest ground. If we measure the
depth of these points to the camera center, namely dandd′,
the difference between these depth d−d′would drastically
decrease when the car moves away from the camera. We
conjecture this leads to two potential downsides: i) unlike
the autonomous vehicle that has a consistent camera pose,
roadside ones usually have different camera poses across
the datasets, which makes regressing depth hard; ii) depth
prediction is very sensitive to changes in the extrinsic param-
eters, which happen quite often in the real world.
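A tiny numeric check of this observation, with an arbitrary camera mounting height and car height: the roof-versus-ground depth difference d - d' shrinks as the car moves away, while the roof's height above the ground stays fixed.

```python
# Illustrative geometry only; the specific numbers are not from the paper.
import math

H_cam, h = 6.0, 1.5                       # camera mounting height, car height (m)
for x in (10.0, 30.0, 90.0):              # ground distance to the car (m)
    d_ground = math.hypot(x, H_cam)       # depth of the nearest ground point
    d_roof = math.hypot(x, H_cam - h)     # depth of a point on the car roof
    print(f"x={x:5.1f}  d-d'={d_ground - d_roof:5.3f}  height={h}")
```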
On the contrary, we notice that the height to the ground is
consistent regardless of the distance between car and cam-
era center. To this end, we propose a novel framework
to predict the per-pixel height instead of depth, dubbed
BEVHeight. Specifically, our method firstly predicts cat-
egorical height distribution for each pixel to project rich
contextual feature information to the appropriate height in-
terval in wedgy voxel space. Followed by a voxel pooling
operation and a detection head to get the final output de-
tections. Besides, we propose a hyperparameter-adjustable
height sampling strategy. Note that our framework does not
depend on explicit supervision like point clouds.
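A hedged sketch of the height-prediction head implied by this description follows; the bin range, channel sizes, and single 1×1 convolutions are our assumptions, and the voxel-pooling step that scatters the lifted features into BEV is omitted.

```python
# Hedged sketch: predict a categorical height distribution per pixel and use it to
# weight image features into height bins (instead of depth bins).
import torch
import torch.nn as nn

class HeightHead(nn.Module):
    def __init__(self, in_ch=256, feat_ch=80, n_bins=60, h_min=-1.0, h_max=5.0):
        super().__init__()
        self.height_logits = nn.Conv2d(in_ch, n_bins, 1)       # per-pixel height distribution
        self.context = nn.Conv2d(in_ch, feat_ch, 1)            # per-pixel context feature
        self.register_buffer('bins', torch.linspace(h_min, h_max, n_bins))  # bin centers for lifting

    def forward(self, x):                                      # x: (B, in_ch, H, W)
        h_dist = self.height_logits(x).softmax(dim=1)          # (B, n_bins, H, W)
        ctx = self.context(x)                                  # (B, feat_ch, H, W)
        # outer product: place context features at every candidate ground height;
        # `frustum` would then be scattered into BEV voxels with the extrinsics
        # (voxel pooling), followed by a detection head.
        frustum = h_dist.unsqueeze(1) * ctx.unsqueeze(2)       # (B, feat_ch, n_bins, H, W)
        return frustum, h_dist
```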
We conduct extensive experiments on two popular
roadside perception benchmarks, DAIR-V2X [37] and
Rope3D [36]. On traditional settings where there is no
disruption to the cameras, our BEVHeight achieves state-of-the-art performance and surpasses all previous methods,
whether monocular 3D detectors or recent bird’s eye
view methods, by a margin of 5%. In realistic scenarios, the
extrinsic parameters of these roadside units can be subject
to changes due to various reasons, such as maintenance and
wind blows. We simulate these scenarios following [38] and
observe a severe performance drop of the BEVDepth, from
41.97% to 5.49%. Compared to these methods, we show-
case the benefit of predicting the height instead of depth
and achieve 26.88% improvement over the BEVDepth [15],
which further evidences the robustness of our method.
|
Zhang_Diverse_Embedding_Expansion_Network_and_Low-Light_Cross-Modality_Benchmark_for_Visible-Infrared_CVPR_2023
|
Abstract
For the visible-infrared person re-identification
(VIReID) task, one of the major challenges is the
modality gaps between visible (VIS) and infrared (IR)
images. However, the training samples are usually lim-
ited, while the modality gaps are too large, which leads
that the existing methods cannot effectively mine diverse
cross-modality clues. To handle this limitation, we propose
a novel augmentation network in the embedding space,
called diverse embedding expansion network (DEEN).
The proposed DEEN can effectively generate diverse em-
beddings to learn the informative feature representations
and reduce the modality discrepancy between the VIS
and IR images. Moreover, the VIReID model may be
seriously affected by drastic illumination changes, while
all the existing VIReID datasets are captured under suffi-
cient illumination without significant light changes. Thus,
we provide a low-light cross-modality (LLCM) dataset,
which contains 46,767 bounding boxes of 1,064 identities
captured by 9 RGB/IR cameras. Extensive experiments
on the SYSU-MM01, RegDB and LLCM datasets show
the superiority of the proposed DEEN over several other
state-of-the-art methods. The code and dataset are released
at:https://github.com/ZYK100/LLCM
|
1. Introduction
Person re-identification (ReID) aims to match a given
person with gallery images captured by different cameras
[3, 9, 52]. Most existing ReID methods [22, 24, 30, 38, 50]
only focus on matching RGB images captured by visible
cameras at daytime. However, these methods may fail
*Corresponding author.
Figure 1. Motivation of the proposed DEEN, which aims to gen-
erate diverse embeddings to make the network focus on learning
with the informative feature representations to reduce the modality
gaps between the VIS and IR images.
to achieve encouraging results when visible cameras can-
not effectively capture a person’s information under complex
conditions, such as at night or low-light environments. To
solve this problem, some visible (VIS)-infrared (IR) per-
son re-identification (VIReID) methods [15,39,41,48] have
been proposed to retrieve the VIS (IR) images according to
the corresponding IR (VIS) images.
Compared with the widely studied person ReID task,
the VIReID task is much more challenging due to the ad-
ditional cross-modality discrepancy between the VIS and
IR images [33, 45, 49, 51]. Typically, there are two popu-
lar types of methods to reduce this modality discrepancy.
One type is the feature-level methods [5, 11, 16, 35, 40, 42],
which try to project the VIS and IR features into a common
embedding space, where the modality discrepancy can be
minimized. However, the large modality discrepancy makes
these methods difficult to project the cross-modality images
into a common feature space directly. The other type is the
image-level methods [4,28,29,32], which aim to reduce the
modality discrepancy by translating an IR (or VIS) image
Figure 2. Comparison of person images on the SYSU-MM01 (1st
LLCMFigure 2. Comparison of person images on the SYSU-MM01 (1st
row), RegDB (2nd row), and LLCM (3rd-5th rows) datasets. Each
row shows four VIS images and four IR images of two identities.
It is obvious that our LLCM contains a more challenging and real-
istic VIReID environment.
into its VIS (or IR) counterpart by using the GANs [8]. De-
spite their success in reducing the modality gaps, the gen-
erated cross-modality images are usually accompanied by
some noises due to the lack of the VIS-IR image pairs.
In this paper, we propose a novel augmentation network
in the embedding space for the VIReID task, called diverse
embedding expansion network (DEEN), which consists of
a diverse embedding expansion (DEE) module and a multi-
stage feature aggregation (MFA) block. The proposed DEE
module can generate more embeddings, and a novel
center-guided pair mining (CPM) loss drives the DEE
module to focus on learning diverse feature rep-
resentations. As illustrated in Fig. 1, by exploiting the gen-
erated embeddings with diverse information, the proposed
DEE module achieves a clear performance improvement. The proposed MFA block
can aggregate the features from different stages for mining
potential channel-wise and spatial feature representations,
which increases the network’s capacity for mining different-
level diverse embeddings.
Moreover, we observe that the existing VIReID datasets
are captured under the environments with sufficient illumi-
nation. However, the performance of the VIReID methods
may be seriously affected by drastic illumination changes or
low illuminations. Therefore, we collect a challenging low-
light cross-modality dataset, called LLCM dataset, which is
shown in Fig. 2. Compared with the other VIReID datasets,
the LLCM dataset contains a larger number of identities and
images captured under low-light scenes, which introduces
more challenges to the real-world VIReID task.
In summary, the main contributions are as follows:
•We propose a novel diverse embedding expansion
(DEE) module with a center-guided pair mining (CPM) loss
to generate more embeddings for learning the diverse fea-ture representations. We are the first to augment the embed-
dings in the embedding space in VIReID. Besides, we also
propose an effective multistage feature aggregation (MFA)
block to mine potential channel-wise and spatial feature
representations.
•With the incorporation of DEE, CPM loss and MFA
into an end-to-end learning framework, we propose an
effective diverse embedding expansion network (DEEN),
which can effectively reduce the modality discrepancy be-
tween the VIS and IR images.
•We collect a low-light cross-modality (LLCM) dataset,
which contains 46,767 images of 1,064 identities captured
under the environments with illumination changes and low
illuminations. The LLCM dataset has more new and impor-
tant features, which can facilitate the research of VIReID
towards practical applications.
•Extensive experiments show that the proposed DEEN
outperforms the other state-of-the-art methods for the
VIReID task on three challenging datasets.
|
Ying_Mapping_Degeneration_Meets_Label_Evolution_Learning_Infrared_Small_Target_Detection_CVPR_2023
|
Abstract
Training a convolutional neural network (CNN) to detect
infrared small targets in a fully supervised manner has
gained remarkable research interest in recent years, but
is highly labor-expensive since a large number of per-pixel
annotations are required. To handle this problem, in this
paper, we make the first attempt to achieve infrared small
target detection with point-level supervision. Interestingly,
during the training phase supervised by point labels,
we discover that CNNs first learn to segment a cluster
of pixels near the targets, and then gradually converge
to predict groundtruth point labels. Motivated by this
“mapping degeneration” phenomenon, we propose a label
evolution framework named label evolution with single
point supervision (LESPS) to progressively expand the
point label by leveraging the intermediate predictions of
CNNs. In this way, the network predictions can finally
approximate the updated pseudo labels, and a pixel-level
target mask can be obtained to train CNNs in an end-to-end
manner. We conduct extensive experiments with insightful
visualizations to validate the effectiveness of our method.
Experimental results show that CNNs equipped with LESPS
can well recover the target masks from corresponding
point labels, and can achieve over 70% and 95% of
their fully supervised performance in terms of pixel-level
intersection over union (IoU ) and object-level probability
of detection (P d), respectively. Code is available at
https://github.com/XinyiYing/LESPS.
|
1. Introduction
Infrared small target detection has been a longstanding,
fundamental yet challenging task in infrared search and
tracking systems, and has various important applications in
civil and military fields [49,57], including traffic monitoring
This work was supported by National Key Research and
Development Program of China No. 2021YFB3100800.
Figure 1. An illustration of mapping degeneration under point
supervision. CNNs always tend to segment a cluster of pixels
near the targets with low confidence at the early stage, and then
gradually learn to predict GT point labels with high confidence.
[24, 54], maritime rescue [52, 53] and military surveillance
[7,47]. Due to the rapid response and robustness to
fast-moving scenes, single-frame infrared small target
(SIRST) detection methods have always attracted much
more attention, and numerous methods have been proposed.
Early methods, including filtering-based [9, 40], local
contrast-based [3, 16] and low rank-based [11, 43] methods,
require complex handcrafted features with carefully tuned
hyper-parameters. Recently, compact deep learning has
been introduced in solving the problem of SIRST detection
[24,45,54]. However, there are only a few attempts, and its
potential remains locked, unlike the extensive explorations
of deep learning for natural images. This is mainly due to
potential reasons, including lack of large-scale, accurately
annotated datasets and high stake application scenarios.
Infrared small targets are usually of very small size,
weak, shapeless and textureless, and are easily submerged
in diverse complex background clutters. As a result,
directly adopting existing popular generic object detectors
like RCNN series [13, 14,19,39], YOLO series [25, 37,
38] and SSD [29] to SIRST detection cannot produce
satisfactory performance. Realizing this, researchers have
been focusing on developing deep networks tailored for
infrared small targets by adequately utilizing the domain
knowledge. However, most existing deep methods for
SIRST detection [8, 24, 54] are fully supervised, which
usually requires a large dataset with accurate target mask
annotations for training. Clearly, this is costly [5, 26].
Therefore, a natural question arises: Can we develop
a new framework for SIRST detection with single point
supervision? In fact, to substantially reduce the annotation
cost for object detection tasks, weakly supervised object
detection methods with point supervision [4, 5, 26, 56] have
been studied in the field of computer vision. Although these
weakly supervised methods achieve promising results, they
are not designed for the problem of SIRST detection,
and the class-agnostic labels ( i.e., only foreground
and background) of infrared small targets hinder their
applications [42, 58]. Therefore, in this work, we intend
to conduct the first study of weakly supervised SIRST
detection with single-point supervision.
A key motivation of this work comes from an interesting
observation during the training of SIRST detection
networks. That is, with single point labels serving as
supervision, CNNs always tend to segment a cluster of
pixels near the targets with low confidence at the early
stage, and then gradually learn to predict groundtruth (GT)
point labels with high confidence, as shown in Fig. 1.
It reveals the fact that region-to-region mapping is the
intermediate result of the final region-to-point mapping1.
We attribute this “mapping degeneration” phenomenon to
the special imaging mechanism of infrared system [24, 54],
the local contrast prior of infrared small targets [3, 8], and
the easy-to-hard learning property of CNNs [44], in which
the first two factors result in extended mapping regions
beyond the point labels, and the last factor contributes to
the degeneration process.
Based on the aforementioned discussion, in this work,
we propose a novel framework for the problem of weakly
supervised SIRST detection, dubbed label evolution with
single point supervision (LESPS). Specifically, LESPS
leverages the intermediate network predictions in the
training phase to update the current labels, which serve as
supervision until the next label update. Through iterative
label update and network training, the network predictions
can finally approximate the updated pseudo mask labels,
and the network can be simultaneously trained to achieve
pixel-level SIRST detection in an end-to-end2manner.
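A minimal sketch of this loop follows, assuming a binary-segmentation network and a plain union-style update rule (the paper's expansion scheme is more careful than this); `images`, `point_labels`, and the network output shape are assumed.

```python
# Hedged sketch of label evolution with single point supervision: the point label
# is progressively expanded with the network's own confident predictions, which
# then serve as supervision until the next update.
import torch
import torch.nn.functional as F

def update_pseudo_label(pseudo, pred, thresh=0.5):
    """pseudo, pred: (1, H, W) in [0, 1]. Keep current positives, add confident pixels."""
    return torch.maximum(pseudo, (pred > thresh).float())

def train_lesps(net, images, point_labels, optimizer, epochs=200, update_every=10):
    pseudo = [p.clone() for p in point_labels]            # start from the point labels
    for epoch in range(epochs):
        for i, img in enumerate(images):
            pred = net(img.unsqueeze(0)).sigmoid().squeeze(0)   # assumed (1, H, W) output
            loss = F.binary_cross_entropy(pred, pseudo[i])      # supervised by current pseudo labels
            optimizer.zero_grad(); loss.backward(); optimizer.step()
            if (epoch + 1) % update_every == 0:                 # periodically evolve the label
                pseudo[i] = update_pseudo_label(pseudo[i], pred.detach())
    return net, pseudo
```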
Our main contributions are summarized as: (1) We
present the first study of weakly supervised SIRST
1“region-to-region mapping” represents the mapping learned by CNNs
from target regions in images to a cluster of pixels near the targets, while
“region-to-point mapping” represents the mapping from target regions in
images to the GT point labels.
2Different from generic object detection [32, 59], “end-to-end” here
represents achieving point-to-mask label regression and direct pixel-level
inference in once training.detection, and introduce LESPS that can significantly
reduce the annotation cost. (2) We discover the mapping
degeneration phenomenon, and leverage this phenomenon
to automatically regress pixel-level pseudo labels from the
given point labels via LESPS. (3) Experimental results
show that our framework can be applied to different existing
SIRST detection networks, and enable them to achieve
over 70% and 95% of its fully supervised performance
in terms of pixel-level intersection over union ( IoU) and
object-level probability of detection ( Pd), respectively.
|
Zhang_Learning_3D_Representations_From_2D_Pre-Trained_Models_via_Image-to-Point_Masked_CVPR_2023
|
Abstract
Pre-training by numerous image data has become de-
facto for robust 2D representations. In contrast, due to the
expensive data processing, a paucity of 3D datasets severely
hinders the learning for high-quality 3D features. In this
paper, we propose an alternative to obtain superior 3D
representations from 2D pre-trained models via Image-to-
Point Masked Autoencoders, named as I2P-MAE . By self-
supervised pre-training, we leverage the well learned 2D
knowledge to guide 3D masked autoencoding, which recon-
structs the masked point tokens with an encoder-decoder ar-
chitecture. Specifically, we first utilize off-the-shelf 2D mod-
els to extract the multi-view visual features of the input point
cloud, and then conduct two types of image-to-point learn-
ing schemes. For one, we introduce a 2D-guided masking
strategy that maintains semantically important point tokens
to be visible. Compared to random masking, the network
can better concentrate on significant 3D structures with
key spatial cues. For another, we enforce these visible to-
kens to reconstruct multi-view 2D features after the decoder.
This enables the network to effectively inherit high-level 2D
semantics for discriminative 3D modeling. Aided by our
image-to-point pre-training, the frozen I2P-MAE, without
any fine-tuning, achieves 93.4% accuracy for linear SVM
on ModelNet40, competitive with existing fully trained meth-
ods. By further fine-tuning on ScanObjectNN’s hard-
est split, I2P-MAE attains the state-of-the-art 90.11% ac-
curacy, +3.68% over the second-best, demonstrating superior transferable capacity. Code is available at https://github.com/ZrrSkywalker/I2P-MAE.
|
1. Introduction
Driven by huge volumes of image data [11, 40, 56, 61],
pre-training for better visual representations has gained
†Corresponding author
Figure 1. Image-to-Point Masked Autoencoders. We leverage the 2D pre-trained models to guide the MAE pre-training in 3D, which alleviates the need of large-scale 3D datasets and learns from 2D knowledge for superior 3D representations.
much attention in computer vision, which benefits a variety
of downstream tasks [8, 29, 39, 55, 62]. Besides supervised
pre-training with labels, many researches develop advanced
self-supervised approaches to fully utilize raw image data
via pre-text tasks, e.g., image-image contrast [5, 9, 10, 21,
28], language-image contrast [12, 53], and masked image
modeling [3, 4, 18, 19, 27, 31, 42, 70]. Despite the popularity of 2D pre-trained models, large-scale 3D datasets are still absent in the community, due to expensive data acquisition and labor-intensive annotation. The widely
adopted ShapeNet [6] only contains 50k point clouds of 55
object categories, far less than the 14 million ImageNet [11]
and 400 million image-text pairs [53] in 2D vision. Though
there have been attempts to extract self-supervisory signals
for 3D pre-training [33, 41, 47, 64, 69, 76, 78, 83], raw point
clouds with sparse structural patterns cannot provide suf-
Figure 2. Comparison of (Left) Existing Methods [47, 78] and (Right) our I2P-MAE. On top of the general 3D MAE architecture, I2P-MAE introduces two schemes of image-to-point learning: 2D-guided masking and 2D-semantic reconstruction.
ficient and diversified semantics compared to colorful im-
ages, which constrains the generalization capacity of pre-
training. Considering the homology of images and point
clouds, both of which depict certain visual characteristics
of objects and are related by 2D-3D geometric mapping,
we ask the question: can off-the-shelf 2D pre-trained mod-
els help 3D representation learning by transferring robust
2D knowledge into 3D domains?
To tackle this challenge, we propose I2P-MAE , a
Masked Autoencoding framework that conducts Image-to-
Point knowledge transfer for self-supervised 3D point cloud
pre-training. As shown in Figure 1, aided by 2D semantics
learned from abundant image data, our I2P-MAE produces
high-quality 3D representations and exerts strong transfer-
able capacity to downstream 3D tasks. Specifically, refer-
ring to 3D MAE models [47, 78] in Figure 2 (Left), we
first adopt an asymmetric encoder-decoder transformer [14]
as our fundamental architecture for 3D pre-training, which
takes as input a randomly masked point cloud and recon-
structs the masked points from the visible ones. Then, to
acquire 2D semantics for the 3D shape, we bridge the modal
gap by efficiently projecting the point cloud into multi-view
depth maps. This requires no time-consuming offline ren-
dering and largely preserves 3D geometries from different
perspectives. On top of that, we utilize off-the-shelf 2D
models to obtain the multi-view 2D features along with 2D
attention maps of the point cloud, and respectively guide the
pre-training from two aspects, as shown in Figure 2 (Right).
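As a rough illustration of projecting a point cloud into multi-view depth maps, the sketch below uses a simple per-view nearest-depth rasterization; the world-to-camera matrices, resolution, and normalization are placeholders and not the paper's exact projection scheme.

import numpy as np

def project_to_depth_maps(points, world_to_cams, size=224):
    # points: (N, 3) array; world_to_cams: list of (3, 4) matrices (assumed).
    depth_maps = []
    for w2c in world_to_cams:
        cam = points @ w2c[:, :3].T + w2c[:, 3]          # transform to camera frame
        x, y, z = cam[:, 0], cam[:, 1], cam[:, 2]
        # normalize x, y into pixel coordinates (a crude stand-in for intrinsics)
        u = ((x - x.min()) / (x.ptp() + 1e-8) * (size - 1)).astype(int)
        v = ((y - y.min()) / (y.ptp() + 1e-8) * (size - 1)).astype(int)
        depth = np.full((size, size), np.inf)
        for ui, vi, zi in zip(u, v, z):
            depth[vi, ui] = min(depth[vi, ui], zi)       # keep the nearest point per pixel
        depth[np.isinf(depth)] = 0.0                     # empty pixels become background
        depth_maps.append(depth)
    return np.stack(depth_maps)                          # (num_views, size, size)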
Firstly, different from existing methods [47, 78] that randomly sample visible tokens, we introduce a 2D-guided
masking strategy that reserves point tokens with more spa-
tial semantics to be visible for the MAE encoder. In de-
tail, we back-project the multi-view attention maps into 3D
space as a spatial attention cloud. Each element in the
Figure 3. Pre-training Epochs vs. Linear SVM Accuracy on ModelNet40 [67]. With the image-to-point learning schemes, I2P-MAE exerts superior transferable capability with much faster convergence speed than Point-MAE [47] and Point-M2AE [78].
cloud indicates the semantic significance of the correspond-
ing point token. Guided by this, the 3D network can bet-
ter focus on the visible critical structures to understand the
global 3D shape, and also reconstruct the masked tokens
from important spatial cues.
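A minimal sketch of what such 2D-guided masking could look like is given below; the per-view attention maps and the projection functions from token centers to pixels are assumed interfaces, not the paper's exact ones.

import torch

def two_d_guided_masking(token_centers, view_attention, visible_ratio=0.4):
    # token_centers: (T, 3) centers of point tokens.
    # view_attention: list of (attn_map, project_fn) pairs, where project_fn maps
    # (T, 3) centers to integer (T, 2) pixel coordinates of that view (assumed).
    num_tokens = token_centers.shape[0]
    scores = torch.zeros(num_tokens)
    for attn_map, project_fn in view_attention:
        uv = project_fn(token_centers)
        scores += attn_map[uv[:, 1], uv[:, 0]]             # accumulate per-token attention
    num_visible = int(visible_ratio * num_tokens)
    visible_idx = torch.topk(scores, num_visible).indices  # keep semantically salient tokens
    mask = torch.ones(num_tokens, dtype=torch.bool)        # True = masked
    mask[visible_idx] = False
    return visible_idx, mask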
Secondly, in addition to recovering the masked point
tokens, we propose to concurrently reconstruct 2D seman-
tics from the visible point tokens after the MAE decoder.
For each visible token, we respectively fetch its projected
2D representations from different views, and integrate them
as the 2D-semantic learning target. By simultaneously re-
constructing the masked 3D coordinates and visible 2D con-
cepts, I2P-MAE is able to learn both low-level spatial pat-
terns and high-level semantics pre-trained in 2D domains,
contributing to superior 3D representations.
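The 2D-semantic targets could be sketched as below; sampling projected view features per visible token and using a cosine-style loss are illustrative assumptions rather than the paper's exact choices.

import torch
import torch.nn.functional as F

def two_d_semantic_targets(token_centers, view_features, project_fns):
    # view_features: list of (C, H, W) maps from a frozen 2D model; project_fns:
    # matching functions mapping (T, 3) centers to (T, 2) pixel coordinates (assumed).
    targets = []
    for feat, project_fn in zip(view_features, project_fns):
        uv = project_fn(token_centers)
        targets.append(feat[:, uv[:, 1], uv[:, 0]].T)      # (T, C) sampled view features
    return torch.stack(targets).mean(dim=0)                # average over views

def two_d_semantic_loss(pred_tokens, targets):
    # Cosine-style reconstruction loss on visible tokens (loss form assumed).
    return 1.0 - F.cosine_similarity(pred_tokens, targets, dim=-1).mean()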
With the aforementioned image-to-point guidance, our
I2P-MAE significantly accelerates the convergence speed
of pre-training and exhibits state-of-the-art performance on
3D downstream tasks, as shown in Figure 3. Learning
from 2D ViT [14] pre-trained by CLIP [53], I2P-MAE,
without any fine-tuning, achieves 93.4% classification ac-
curacy by linear SVM on ModelNet40 [67], which has
surpassed the fully fine-tuned results of Point-BERT [76]
and Point-MAE [33]. After fine-tuning, I2P-MAE further
achieves 90.11% classification accuracy on the hardest split
of ScanObjectNN [63], significantly exceeding the second-
best Point-M2AE [78] by +3.68%. The experiments fully
demonstrate the effectiveness of learning from pre-trained
2D models for superior 3D representations.
Our contributions are summarized as follows:
• We propose Image-to-Point Masked Autoencoders
(I2P-MAE), a pre-training framework to leverage 2D
pre-trained models for learning 3D representations.
• We introduce two strategies, 2D-guided masking and
2D-semantic reconstruction, to effectively transfer the
well learned 2D knowledge into 3D domains.
• Extensive experiments have been conducted to indicate
the significance of our image-to-point pre-training.
|
Zhang_Towards_Unbiased_Volume_Rendering_of_Neural_Implicit_Surfaces_With_Geometry_CVPR_2023
|
Abstract
Learning surface by neural implicit rendering has been
a promising way for multi-view reconstruction in recent
years. Existing neural surface reconstruction methods,
such as NeuS [24] and VolSDF [32], can produce reliable
meshes from multi-view posed images. Although they build
a bridge between volume rendering and Signed Distance
Function (SDF), the accuracy is still limited. In this pa-
per, we argue that this limited accuracy is due to the bias of
their volume rendering strategies, especially when the view-
ing direction is nearly tangent to the surface. We revise
and provide an additional condition for the unbiased vol-
ume rendering. Following this analysis, we propose a new
rendering method by scaling the SDF field with the angle
between the viewing direction and the surface normal vec-
tor. Experiments on simulated data indicate that our render-
ing method reduces the bias of SDF-based volume render-
ing. Moreover, non-negligible bias still remains when the learnable standard deviation of the SDF is large at the early training stage, which makes it hard to supervise the rendered depth with depth priors. Alternatively, we supervise the zero-level set with surface points obtained from a pre-trained Multi-View Stereo network. We evaluate our method on the DTU dataset and show that it outperforms state-of-the-art neural implicit surface methods without mask supervision.
|
1. Introduction
3D reconstruction is an important task in 3D games and
AR/VR applications. As a key technique in computer vi-
sion and graphics, recovering surfaces and textures from
Multi-View calibrated RGB images has been widely studied
in recent decades. Early unsupervised Multi-View Stereo
(MVS) approaches [14, 20] provide solutions through a
*Corresponding author
certain multistage pipeline, including grouping the related
views, depth prediction, filtering with photometric con-
sistency and geometry consistency, fusion of points from
different views, meshing the dense points by off-the-shelf
methods such as screened Poisson Surface Reconstruction
[8], and texture mapping finally.
Later, MVS networks [5, 20, 27, 36] developed rapidly, benefiting from the available large-scale 3D datasets. These MVS networks use Convolutional Neural Networks (CNNs) to predict depth maps effectively, then follow the
traditional pipeline to fuse a global dense point cloud and
mesh it. However, MVS networks suffer from texture-less
regions and sudden depth changes, so there usually exist
many holes in the recovered meshes.
Recently, neural implicit surface and differentiable ren-
dering methods present a promising way to improve and
simplify the process of Multi-View 3D reconstruction.
The surfaces are represented as Signed Distance Functions
(SDF) [18, 24, 32, 33] or occupancy field [16, 17]. At the
same time, neural radiance fields [13, 35] are proposed with differentiable volume rendering. The neural surface-based rendering methods can recover reliable and smooth surfaces, but they are hard to train without mask supervision. On the contrary, differentiable volume rendering can achieve good 2D views without mask supervision, but the quality of the 3D geometry is rather coarse.
Is there a connection between the SDF field and the occupancy field? NeuS [24] and VolSDF [32] point out that the connection can be established through a certain Cumulative Distribution Function (CDF). Thanks to this significant progress, 3D surfaces can be learned effectively from neural implicit surfaces with self-supervised volume rendering, requiring only well-posed 2D images as input. Masks can be omitted, which matters because accurate masks are hard to obtain for many complex objects in the real world.
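As a rough sketch of this kind of SDF-to-density bridge, the snippet below uses a logistic-CDF form in the spirit of NeuS-style formulations; the exact transformation and its parameters are assumptions, not taken from this paper.

import torch

def sdf_to_density(sdf, inv_std):
    # Map signed distances to a density-like quantity via a logistic CDF whose
    # sharpness is controlled by a learnable inverse standard deviation.
    cdf = torch.sigmoid(sdf * inv_std)
    return inv_std * cdf * (1.0 - cdf)   # derivative of the CDF as a density proxy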
Although these methods have made great progress
on 3D reconstruction from calibrated multi-view images,
the accuracy of the meshes is still limited compared with methods [5, 14, 20, 27, 36] in the classical pipeline. We
find that there exist unexpected convex or concave surfaces
in regions with poor texture or strong highlights. In addition, the learned surface and rendered color tend to be smooth, and high-frequency details cannot be captured well.
Based on NeuS [24] and VolSDF [32], we further analyze the precision of the bridge between the SDF field and the density field, and we find that there exists a bias between the real depth and the depth rendered by SDF-based volume rendering. We argue that there are two factors leading to the
bias: (1) The angle between the view direction and the nor-
mal vector; (2) The learnable standard deviation of the SDF
field. The bias increases with the growth of the angle. It
decreases with the descent of the learnable standard devia-
tion, but it is still relatively large when the deviation is not
small enough at the early training stage. More details of the
analysis are described in Section 3. In order to reduce the
bias caused by the various angles, we modify the transfor-
mation between the SDF field and the density field. Further-
more, we adopt dense point clouds predicted by an acces-
sible MVS network to reduce the bias further. Finally, we
evaluate our method and compare it with other SDF-based
volume rendering methods on the public benchmark.
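To illustrate the angle-aware modification, the sketch below assumes the SDF is divided by |cos θ|, with θ the angle between the viewing direction and the surface normal, before the density transform; the paper's exact scaling, clamping, and transformation may differ.

import torch
import torch.nn.functional as F

def angle_scaled_density(sdf, normals, view_dirs, inv_std, eps=1e-4):
    # cos(theta) between the (normalized) viewing direction and surface normal
    cos_theta = (F.normalize(normals, dim=-1) * F.normalize(view_dirs, dim=-1)).sum(-1)
    scaled_sdf = sdf / cos_theta.abs().clamp(min=eps)    # grazing views stretch the SDF
    cdf = torch.sigmoid(scaled_sdf * inv_std)
    return inv_std * cdf * (1.0 - cdf)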
To summarize, we provide the following three contributions: a) We amend the conditions of unbiased SDF-based volume rendering and analyze the bias of VolSDF [32] and NeuS [24]. b) We propose a new transformation between the SDF field and the density field, which does not require a plane assumption and outperforms VolSDF [32] and NeuS [24] even without geometry priors. In particular, we scale the SDF field by the inverse of the angle between the view direction and the normal vector. c) Geometry priors from a pre-trained MVS network are used with annealing sampling to further reduce the bias at the early training stage and boost the reconstruction quality.
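A hedged sketch of how geometry priors from MVS surface points could supervise the zero-level set follows; the paper's exact prior term, weighting, and annealing schedule are not reproduced here.

import torch

def zero_level_set_loss(sdf_net, mvs_points, weight=1.0):
    # Encourage the SDF to vanish at surface points predicted by a pre-trained
    # MVS network (illustrative prior term, not the paper's exact formulation).
    sdf_values = sdf_net(mvs_points)
    return weight * sdf_values.abs().mean()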
|
Yan_SMAE_Few-Shot_Learning_for_HDR_Deghosting_With_Saturation-Aware_Masked_Autoencoders_CVPR_2023
|
Abstract
Generating a high-quality High Dynamic Range (HDR)
image from dynamic scenes has recently been extensively
studied by exploiting Deep Neural Networks (DNNs). Most
DNN-based methods require a large amount of training data with ground truth, which entails tedious and time-consuming work. Few-shot HDR imaging aims to generate
satisfactory images with limited data. However, it is dif-
ficult for modern DNNs to avoid overfitting when trained
on only a few images. In this work, we propose a novel
semi-supervised approach to realize few-shot HDR imag-
ing via two stages of training, called SSHDR. Unlike previous methods, which directly recover content and remove ghosts simultaneously and thus hardly reach an optimum, we first generate the content of saturated regions with a self-supervised mechanism and then address ghosts via an itera-
tive semi-supervised learning framework. Concretely, con-
sidering that saturated regions can be regarded as mask-
ing Low Dynamic Range (LDR) input regions, we design a
Saturated Mask AutoEncoder (SMAE) to learn a robust fea-
ture representation and reconstruct a non-saturated HDR
image. We also propose an adaptive pseudo-label selection
strategy to pick high-quality HDR pseudo-labels in the sec-
ond stage to avoid the effect of mislabeled samples. Ex-
periments demonstrate that SSHDR outperforms state-of-
the-art methods quantitatively and qualitatively within and
across different datasets, achieving appealing HDR visual-
ization with few labeled samples.
|
1. Introduction
Standard digital photography sensors are unable to cap-
ture the wide range of illumination present in natural scenes,
resulting in Low Dynamic Range (LDR) images that often
*†The first three authors contributed equally to this work. This work
was partially supported by NSFC (U19B2037, 61901384), Natural Science
Basic Research Program of Shaanxi (2021JCW-03, 2023-JC-QN-0685),
and National Engineering Laboratory for Integrated Aero-Space-Ground-
Ocean Big Data Application Technology. Corresponding author: Yu Zhu.
Figure 1. The proposed method generates high-quality images
with few labeled samples when compared with several methods.
suffer from over- or underexposed regions, which can dam-
age the details of the scene. High Dynamic Range (HDR)
imaging has been developed to address these limitations.
This technique combines several LDR images with different
exposures to generate an HDR image. While HDR imaging
can effectively recover details in static scenes, it may pro-
duce ghosting artifacts when used with dynamic scenes or
hand-held camera scenarios.
Historically, various techniques have been suggested
to address such issues, such as alignment-based methods
[3,10,27,37], patch-based methods [8,15,24], and rejection-
based methods [5, 11, 19, 20, 35, 40]. Two categories of
alignment-based approaches exist: rigid alignment ( e.g.,
homographies) that fail to address foreground motions, and
non-rigid alignment ( e.g., optical flow) that is error-prone.
Patch-based techniques merge similar regions using patch-
level alignment and produce superior results, but suffer
from high complexity. Rejection-based methods aim to
eliminate misaligned areas before fusing images, but may
result in a loss of information in motion regions.
As Deep Neural Networks (DNNs) become increas-
ingly prevalent, the DNN-based HDR deghosting methods
[9, 33, 36] achieve better visual results compared to tradi-
tional methods. However, these alignment approaches are
error-prone and inevitably cause ghosting artifacts (see Fig-
ure 1 Kalantari’s results). AHDR [31, 32] proposes spatial
attention to suppress motion and saturation, which effec-
tively alleviate misalignment problems. Based on AHDR,
ADNET [14] proposes a dual branch architecture using
spatial attention and PCD-alignment [29] to remove ghost-
ing artifacts. All these above methods directly learn the
complicated HDR mapping function with abundant HDR
ground truth data. However, it’s challenging to collect a
large amount of HDR-labeled data, because 1) generating a ghost-free HDR ground-truth sample requires an absolutely static background, and 2) manual post-examination is time-consuming and requires considerable manpower. This motivates a new setting that uses only a few labeled samples for HDR imaging.
Recently, FSHDR [22] attempts to generate a ghost-free
HDR image with only few labeled data. They use a prelim-
inary model trained with a large amount of unlabeled dy-
namic samples, and a few dynamic and static labeled sam-
ples to generate HDR pseudo-labels and synthesize artificial
dynamic LDR inputs to further improve the model’s performance on dynamic scenes. This approach expects the model to
handle both the saturation and the ghosting problems simul-
taneously, but this is hard to achieve with few labeled data, especially in misaligned regions caused by saturation and motion (see FSHDR in Figure 1). In addition,
FSHDR uses optical flow to forcibly synthesize dynamic
LDR inputs from poorly generated HDR pseudo-labels; the errors in optical flow further degrade the qual-
ity of artificial dynamic LDR images, resulting in an appar-
ent distribution shift between LDR training and testing data,
which hampers the performance of the network.
The above analysis makes it very challenging to di-
rectly generate a high-quality and ghost-free HDR image
with few labeled samples. A reasonable way is to address
the saturation problems first and then cope with the ghost-
ing problems with a few labeled samples. In this paper,
we propose a semi-supervised approach for HDR deghost-
ing, named SSHDR, which consists of two stages: self-
supervised learning network for content completion and
sample-quality-based iterative semi-supervised learning for
deghosting. In the first stage, we pretrain a Saturated Mask
AutoEncoder (SMAE), which learns the representation of
HDR features to generate content of saturated regions by
self-supervised learning. Specifically, considering that the
saturated regions can be regarded as masking the short LDR
input patches, inspired by [6], we randomly mask a high
proportion of the short LDR input and expect the model to
reconstruct a non-saturated HDR image from the remaining
LDR patches in the first stage. This self-supervised ap-
proach allows the model to recover the saturated regions with the capability to effectively learn a robust represen-
tation for the HDR domain and map an LDR image to an
HDR image. In the second stage, to prevent the overfit-
ting problem with a few labeled training samples and make
full use of the unlabeled samples, we iteratively train the
model with a few labeled samples and a large amount of
HDR pseudo-labels from unlabeled data. Based on the
pretrained SMAE, a sample-quality-based iterative semi-
supervised learning framework is proposed to address the
ghosting artifacts. Considering the quality of pseudo-labels
is uneven, we develop an adaptive pseudo-label selection strategy to pick high-quality HDR pseudo-labels (i.e., well-exposed, ghost-free) to prevent poor pseudo-labels from hamper-
ing the optimization process. This selection strategy is
guided by a few labeled samples and enhances the diver-
sity of training samples in each epoch. The experiments
demonstrate that our proposed approach can generate high-
quality HDR images with few labeled samples and achieves
state-of-the-art performance on individual and cross-public
datasets. Our contributions can be summarized as follows:
• We propose a novel and generalized HDR self-supervised
pretraining model, which uses a masking strategy to reconstruct an HDR image and addresses saturation problems from a single LDR image.
• We propose a sample-quality-based semi-supervised
training approach to select well-exposed and ghost-free
HDR pseudo-labels, which improves ghost removal.
• We perform both qualitative and quantitative experi-
ments, which show that our method achieves state-of-the-
art results on individual and cross-public datasets.
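As a rough illustration of the first-stage masking described above, the sketch below randomly masks a high proportion of short-exposure LDR patches and computes a reconstruction loss on the masked regions; the patch size, mask ratio, supervision target, and loss form are assumptions, not the paper's exact configuration.

import torch

def random_patch_mask(ldr_short, patch=16, mask_ratio=0.75):
    # ldr_short: (B, C, H, W) short-exposure LDR images; H and W divisible by `patch`.
    B, C, H, W = ldr_short.shape
    nh, nw = H // patch, W // patch
    num_patches = nh * nw
    num_masked = int(mask_ratio * num_patches)
    masked = torch.zeros(B, num_patches, dtype=torch.bool)
    for b in range(B):
        idx = torch.randperm(num_patches)[:num_masked]
        masked[b, idx] = True
    # build a pixel-level mask and zero out the masked patches
    mask = masked.view(B, 1, nh, nw).float()
    mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return ldr_short * (1.0 - mask), mask               # visible input, binary mask

def masked_reconstruction_loss(reconstructed, target, mask):
    # L1 loss restricted to the masked regions (target and loss choice assumed).
    return ((reconstructed - target).abs() * mask).sum() / mask.sum().clamp(min=1.0)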
|