title (stringlengths 28–135) | abstract (stringlengths 0–12k) | introduction (stringlengths 0–12k)
---|---|---|
Sawdayee_OReX_Object_Reconstruction_From_Planar_Cross-Sections_Using_Neural_Fields_CVPR_2023
|
Abstract
Reconstructing 3D shapes from planar cross-sections is
a challenge inspired by downstream applications like med-
ical imaging and geographic informatics. The input is an
in/out indicator function fully defined on a sparse collection
of planes in space, and the output is an interpolation of the
indicator function to the entire volume. Previous works ad-
dressing this sparse and ill-posed problem either produce
low-quality results or rely on additional priors such as target
topology, appearance information, or input normal direc-
tions. In this paper, we present OReX, a method for 3D shape
reconstruction from slices alone, featuring a Neural Field as
the interpolation prior. A modest neural network is trained
on the input planes to return an inside/outside estimate for a
given 3D coordinate, yielding a powerful prior that induces
smoothness and self-similarities. The main challenge for
this approach is high-frequency details, as the neural prior
tends to over-smooth. To alleviate this, we offer an iterative
estimation architecture and a hierarchical input sampling
scheme that encourage coarse-to-fine training, allowing the
training process to focus on high frequencies at later stages. In addition, we identify and analyze a ripple-like effect stem-
ming from the mesh extraction step. We mitigate it by regular-
izing the spatial gradients of the indicator function around
input in/out boundaries during network training, tackling
the problem at the root. Through extensive qualitative and
quantitative experimentation, we demonstrate our method
is robust, accurate, and scales well with the size of the in-
put. We report state-of-the-art results compared to previous
approaches and recent potential solutions, and demonstrate
the benefit of our individual contributions through analysis
and ablation studies. Code and data are available at https://github.com/haimsaw/OReX.
|
1. Introduction
Reconstructing a 3D object from its cross-sections is a
long-standing task. It persists in fields including medical
imaging, topography mapping, and manufacturing. The typi-
cal setting is where a sparse set of arbitrary planes is given,
upon which the “inside” and “outside” regions of the de-
picted domain are labeled, and the entire shape in 3D is to
be estimated (see Fig. 1). This is a challenging and ill-posed
problem, especially due to the sparse and irregular nature
of the data. Classical approaches first localize the problem
by constructing an arrangement of the input planes, and
then introduce a local regularizer that governs the interpo-
lation of the input to within each cell. While sound, these
approaches typically involve simplistic regularization func-
tions that only interpolate the volume within a cell bounded
by the relevant cross-sections; as a consequence, they intro-
duce over-smoothed solutions that do not respect features.
In addition, finding a cellular arrangement of planes is a
computationally-heavy procedure, adding considerable com-
plexity to the problem and rendering it quickly infeasible
for large inputs (see Sec. 4). As we demonstrate (Sec. 4),
recent approaches that reconstruct a mesh from an input
point cloud are not well suited to our setting, as they assume
a rather dense sampling of the entire shape. In addition,
these methods do not consider the information of an entire
cross-sectional plane, but rather only the shape boundary.
In this paper, we introduce OReX —a reconstruction ap-
proach based on neural networks that estimates an entire
shape from its cross-sectional slices. Similar to recent ap-
proaches, the neural network constitutes the prior that ex-
trapolates the input to the entire volume. Neural networks
in general have already been shown to inherently induce
smoothness [17], and self-similarities [12], allowing natural
recurrence of patterns. Specifically, we argue that Neural
Fields [25] are a promising choice for the task at hand. Neu-
ral Fields represent a 3D scene by estimating its density and
other local geometric properties for every given 3D coordi-
nate. They are typically trained on 2D planar images, and are
required to complete the entire 3D scene according to multi-
view renderings or photographs. This neural representation
is differentiable by construction, and hence allows native
geometric optimization of the scene, realized via training.
We pose the reconstruction problem as a classification of
space into “in” and “out” regions, which are known on the
entire slice planes, and thus generate the indicator function
whose decision boundary is the output shape.
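A minimal sketch of this indicator-field formulation: a small coordinate MLP is trained with a binary cross-entropy loss on points sampled from the slice planes, where labels come from the given in/out indicator. The positional encoding, network size, and toy data below are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class IndicatorField(nn.Module):
    """Illustrative coordinate MLP mapping a 3D point to an inside/outside logit."""
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 3 + 3 * 2 * n_freqs          # xyz + sin/cos positional encoding
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def encode(self, x):
        feats = [x]
        for k in range(self.n_freqs):
            feats += [torch.sin((2 ** k) * x), torch.cos((2 ** k) * x)]
        return torch.cat(feats, dim=-1)

    def forward(self, x):
        return self.net(self.encode(x))        # logit; sigmoid > 0.5 means "inside"

# Toy training loop on labeled points sampled from the input cross-sections.
model = IndicatorField()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
pts = torch.rand(1024, 3) * 2 - 1              # stand-in for on-slice samples
labels = (pts.norm(dim=-1) < 0.5).float()      # stand-in for the in/out indicator
for _ in range(100):
    opt.zero_grad()
    loss = bce(model(pts).squeeze(-1), labels)
    loss.backward()
    opt.step()
```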
The main challenge with applying neural fields to this
problem is high-frequency details. Directly applying es-
tablished training schemes [19] shows strong spectral bias,
yielding overly smoothed results and other artifacts (Fig. 12).
Spectral bias is a well-known effect, indicating that higher
frequency is effectively learned slower [22]. To facilitate
effective high-frequency learning, avoiding the shadow cast
by the low frequency, we introduce two alterations. First, we
sample the planar data mostly around the inside/outside tran-
sition regions, where the frequencies are higher. This sam-
pling is further ordered from low to high-frequency regions
(according to the distance from the inside/outside boundary),
to encourage a low-to-high-frequency training progression. In addition, we follow recent literature and allow the net-
work to iteratively infer the result, where later iterations are
responsible for finer, higher-frequency corrections [1, 23].
Finally, we consider another high-frequency artifact, also
found in other neural-field-based works [9]. The desired
density (or indicator) function dictates a sharp drop in value
at the shape boundary. This is contradictory to the induced
neural prior, causing sampling-related artifacts in the down-
stream task of mesh extraction (Sec. 3.4). To alleviate this,
we penalize strong spatial gradients around the boundary
contours. This enforces a smoother transition between the in
and out regions, allowing higher-quality mesh extraction.
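A hedged sketch of the boundary gradient regularization described above: spatial gradients of the predicted indicator are obtained with autograd at points near the input in/out contours, and their magnitude is penalized so the learned function transitions smoothly across the boundary. The weight `lambda_grad` and the way boundary points are chosen are assumptions for illustration.

```python
import torch

def boundary_gradient_penalty(model, boundary_pts, lambda_grad=0.1):
    """Penalize the norm of spatial gradients of the indicator near the boundary."""
    pts = boundary_pts.clone().requires_grad_(True)
    logits = model(pts)
    grads = torch.autograd.grad(
        outputs=logits, inputs=pts,
        grad_outputs=torch.ones_like(logits),
        create_graph=True,                      # keep the graph so the penalty is trainable
    )[0]
    return lambda_grad * grads.norm(dim=-1).mean()

# Usage: total_loss = bce_loss + boundary_gradient_penalty(model, pts_near_contours)
```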
As we demonstrate (see Fig. 1), our method yields state-
of-the-art reconstructions from planar data, both for man-
made and organic shapes. The careful loss and training
schemes are validated and analyzed through quantitative and
qualitative experimentation. Our method is arrangement-
free, and thus both interpolates all data globally, avoiding
local artifacts, and scales well to a large number of slices
(see Sec. 4).
|
Ryoo_Token_Turing_Machines_CVPR_2023
|
Abstract
We propose Token Turing Machines (TTM), a sequen-
tial, autoregressive Transformer model with memory for
real-world sequential visual understanding. Our model is
inspired by the seminal Neural Turing Machine, and has an
external memory consisting of a set of tokens which sum-
marise the previous history (i.e., frames). This memory is
efficiently addressed, read and written using a Transformer
as the processing unit/controller at each step. The model’s
memory module ensures that a new observation will only
be processed with the contents of the memory (and not the
entire history), meaning that it can efficiently process long
sequences with a bounded computational cost at each step.
We show that TTM outperforms other alternatives, such as
other Transformer models designed for long sequences and
recurrent neural networks, on two real-world sequential vi-
sual understanding tasks: online temporal activity detection
from videos and vision-based robot action policy learning.
Code is publicly available at: https://github.com/google-
research/scenic/tree/main/scenic/projects/token turing.
|
1. Introduction
Processing long, sequential visual inputs in a causal man-
ner is a problem central to numerous applications in robotics
and vision. For instance, human activity recognition mod-
els for monitoring patients and elders are required to make
real-time inference on ongoing activities from streaming
videos. As the observations grow continuously, these models
require an efficient way of summarizing and maintaining
information from their past image frames with limited com-
pute. Similarly, robots learning their action policies from
training videos need to abstract the history of past observations
and leverage it when making action decisions in real-time.
This is even more important if the robot is required to learn
complicated tasks with longer temporal horizons.
A traditional way of handling online observations of
variable sequence lengths is to use recurrent neural net-
works (RNNs), which are sequential, auto-regressive mod-
els [13, 22, 35]. As Transformers [64] have become the de facto model architecture for a range of perception tasks, sev-
eral works have proposed variants which can handle longer
sequence lengths [19, 61, 67]. However, in streaming or
sequential inference problems, efficient attention operations
for handling longer sequence lengths themselves are often
not sufficient since we do not want to run our entire trans-
former model for each time step when a new observation
(e.g., a new frame) is provided. This necessitates developing
models with explicit memories, enabling a model to fuse
relevant past history with the current observation to make a pre-
diction at the current time step. Another desideratum for such
models, to scale to long sequence lengths, is that the compu-
tational cost at each time step should be constant, regardless
of the length of the previous history.
In this paper, we propose Token Turing Machines (TTMs),
a sequential, auto-regressive model with external memory
and constant computational time complexity at each step.
Our model is inspired by Neural Turing Machines [30]
(NTM), an influential paper that was among the first to
propose an explicit memory and differentiable addressing
mechanism. The original NTM was notorious for being a
complex architecture that was difficult to train, and it has
therefore been largely forgotten as other modelling advances
have been made in the field. However, we show how we
can formulate an external memory as well as a processing
unit that reads and writes to this memory using Transformers
(plus other operations which are common in modern Trans-
former architectures). Another key component of TTMs is a
token summarization module, which provides an inductive
bias that intuitively encourages the memory to specialise to
different parts of its history during the reading and writing
operations. Moreover, this design choice ensures that the
computational cost of our network is constant irrespective
of the sequence length, enabling scalable, real-time, online
inference.
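The read–process–write loop can be summarized with a short sketch. Token summarization is stood in for here by a simple learned-attention pooling, and the controller is a plain Transformer encoder; the dimensions, layer counts, and class names are illustrative assumptions rather than TTM's exact configuration.

```python
import torch
import torch.nn as nn

class TokenSummarizer(nn.Module):
    """Pool a variable number of tokens down to k tokens via learned attention weights."""
    def __init__(self, k, dim):
        super().__init__()
        self.score = nn.Linear(dim, k)

    def forward(self, tokens):                         # tokens: (B, N, D)
        attn = self.score(tokens).softmax(dim=1)        # (B, N, k)
        return torch.einsum('bnk,bnd->bkd', attn, tokens)

class TTMStep(nn.Module):
    """One sequential step: read memory + input, process, write new memory."""
    def __init__(self, dim=256, mem_tokens=96, io_tokens=16, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.processor = nn.TransformerEncoder(layer, num_layers=2)
        self.read = TokenSummarizer(io_tokens, dim)      # read: summarize [memory || input]
        self.write = TokenSummarizer(mem_tokens, dim)    # write: summarize [memory || input || output]

    def forward(self, memory, inputs):                   # (B, M, D), (B, I, D)
        read_tokens = self.read(torch.cat([memory, inputs], dim=1))
        outputs = self.processor(read_tokens)            # bounded cost per step
        new_memory = self.write(torch.cat([memory, inputs, outputs], dim=1))
        return outputs, new_memory

step = TTMStep()
memory = torch.zeros(2, 96, 256)
for t in range(5):                                       # process a toy stream frame by frame
    frame_tokens = torch.randn(2, 16, 256)
    out, memory = step(memory, frame_tokens)
```

Because the processor only ever sees a fixed number of summarized tokens, the per-step cost stays constant regardless of how long the stream has been running.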
In contrast to the original NTM, our Transformer-based
modernisation is simple to implement and train. We demon-
strate its capabilities by achieving substantial improvements
over strong baselines in two diverse and challenging tasks:
(1) online temporal action detection (i.e., localisation) from
videos and (2) vision-based robot action policy learning.
|
Shi_Learning_3D-Aware_Image_Synthesis_With_Unknown_Pose_Distribution_CVPR_2023
|
Abstract
Existing methods for 3D-aware image synthesis largely
depend on the 3D pose distribution pre-estimated on the
training set. An inaccurate estimation may mislead the
model into learning faulty geometry. This work proposes
PoF3D that frees generative radiance fields from the re-
quirements of 3D pose priors. We first equip the generator
with an efficient pose learner, which is able to infer a pose
from a latent code, to approximate the underlying true pose
distribution automatically. We then assign the discriminator
a task to learn pose distribution under the supervision
of the generator and to differentiate real and synthesized
images with the predicted pose as the condition. The
pose-free generator and the pose-aware discriminator are
jointly trained in an adversarial manner. Extensive results
on a couple of datasets confirm that the performance of
our approach, regarding both image quality and geometry
quality, is on par with the state of the art. To the best of our
knowledge, PoF3D demonstrates the feasibility of learning
high-quality 3D-aware image synthesis without using 3D
pose priors for the first time. Project page can be found
here.
|
1. Introduction
3D-aware image generation has recently received grow-
ing attention due to its potential applications [3, 4, 9,
21, 30, 34, 47]. Compared with 2D synthesis, 3D-aware
image synthesis requires the understanding of the geometry
underlying 2D images, which is commonly achieved by
incorporating 3D representations, such as neural radiance
fields (NeRF) [2, 16, 17, 24, 25, 50], into generative models
like generative adversarial networks (GANs) [8]. Such a
formulation allows explicit camera control over the synthe-
sized results, which fits better with our 3D world.
To enable 3D-aware image synthesis from 2D image
collections, existing attempts usually rely on adequate
camera pose priors [3, 4, 9, 20, 21, 30, 47] for training.
Figure 1. Sensitivity to pose priors in existing 3D-aware image
synthesis approaches. π-GAN [4] works well given an adequate
pose distribution in (a), but fails to learn decent geometries given
a slightly changed distribution in (b). CAMPARI [20] delivers
reasonable results relying on a good initial pose distribution in (c),
but suffers from a wrongly estimated initialization in (d).
The priors are either estimated by conducting multiple pre-
experiments [4, 9, 47], or obtained by borrowing external
pose predictors or annotations [3]. Besides using a fixed
distribution for pose sampling, some studies also propose to
tune the pose priors together with the learning of image gen-
eration [20], but they are still dependent on the initial pose
distribution. Consequently, although previous methods can
produce satisfying images and geometry, their performance
is highly sensitive to the given pose prior. For example,
on the Cats dataset [51], π-GAN [4] works well with a
uniform pose distribution [-0.5, 0.5] (Fig. 1a), but fails
to generate a decent underlying geometry when changing
the distribution to [-0.3, 0.3] (Fig. 1b). Similarly, on the
CelebA dataset [15], CAMPARI [20] learns 3D rotation
when using a Gaussian distribution N(0, 0.24) as the initial
prior (Fig. 1c), but loses the canonical space (e.g., the
middle image of Fig. 1d should be under the frontal view)
when changing the initialization to N(0, 0.12) (Fig. 1d).
Such a reliance on pose priors causes unexpected instability
of 3D-aware image synthesis, which may burden this task
with heavy experimental cost.
In this work, we present a new approach for 3D-aware
image synthesis, which removes the requirements for pose
priors. Typically, a latent code is bound to the 3D content
alone, where the camera pose is independently sampled
from a manually designed distribution. Our method, how-
ever, maps a latent code to an image, which is implicitly
composed of a 3D representation (i.e., a neural radiance field)
and a camera pose that can render that image. In this way,
the camera pose is directly inferred from the latent code
and jointly learned with the content, simplifying the input
requirement. To help the pose-free generator better
capture the underlying pose distribution, we re-design
the discriminator to make it pose-aware. Concretely, we
tailor the discriminator with a pose branch which is required
to predict a camera pose from a given image. Then, the
estimated pose is further treated as the conditional pseudo
label when performing real/fake discrimination. The pose
branch in the discriminator learns from the synthesized data
and its corresponding pose that is encoded in the latent code,
and in turn uses the discrimination score to update the pose
branch in the generator. With such a loop-back optimization
process, two pose branches can align the fake data with the
distribution of the dataset.
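A minimal sketch of the pose-aware discrimination logic described in this paragraph: the discriminator predicts a pose from the image and uses it as a condition for the real/fake score; on generated images, the predicted pose is supervised by the pose that the generator inferred from the latent code. Network shapes, the pose parameterization (yaw/pitch), and the loss forms are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseAwareDiscriminator(nn.Module):
    """Predicts a camera pose from the image, then scores real/fake conditioned on it."""
    def __init__(self, pose_dim=2):                        # e.g., yaw and pitch
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pose_head = nn.Linear(64, pose_dim)
        self.score_head = nn.Linear(64 + pose_dim, 1)      # pose-conditioned real/fake score

    def forward(self, img):
        feat = self.backbone(img)
        pose = self.pose_head(feat)
        score = self.score_head(torch.cat([feat, pose], dim=-1))
        return score, pose

# Toy loss terms for a generated batch (fake_img, gen_pose from the generator's pose branch).
disc = PoseAwareDiscriminator()
fake_img, gen_pose = torch.randn(4, 3, 64, 64), torch.randn(4, 2)
score, pred_pose = disc(fake_img)
adv_loss = F.softplus(score).mean()                        # illustrative non-saturating fake loss
pose_loss = F.mse_loss(pred_pose, gen_pose)                # supervise the pose branch with the generator's pose
```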
We evaluate our pose-free method, which we call
PoF3D for short, on various datasets, including FFHQ [12],
Cats [51], and Shapenet Cars [5]. Both qualitative and
quantitative results demonstrate that PoF3D frees 3D-
aware image synthesis from hyper-parameter tuning on
pose distributions and dataset labeling, and achieves on-par
performance with the state of the art in terms of image quality
and geometry quality.
|
Seth_DeAR_Debiasing_Vision-Language_Models_With_Additive_Residuals_CVPR_2023
|
Abstract
Large pre-trained vision-language models (VLMs) re-
duce the time for developing predictive models for various
vision-grounded language downstream tasks by providing
rich, adaptable image and text representations. However,
these models suffer from societal biases owing to the skewed
distribution of various identity groups in the training data.
These biases manifest as the skewed similarity between the
representations for specific text concepts and images of peo-
ple of different identity groups and, therefore, limit the use-
fulness of such models in real-world high-stakes applica-
tions. In this work, we present DEAR (Debiasing with
Additive Residuals), a novel debiasing method that learns
additive residual image representations to offset the orig-
inal representations, ensuring fair output representations.
In doing so, it reduces the ability of the representations to
distinguish between the different identity groups. Further,
we observe that the current fairness tests are performed
on limited face image datasets that fail to indicate why a
specific text concept should/should not apply to them. To
bridge this gap and better evaluate DEAR, we introduce
the PROTECTED ATTRIBUTE TAG ASSOCIATION (PATA)
dataset – a new context-based bias benchmarking dataset
for evaluating the fairness of large pre-trained VLMs. Ad-
ditionally, PATAprovides visual context for a diverse human
population in different scenarios with both positive and neg-
ative connotations. Experimental results for fairness and
zero-shot performance preservation using multiple datasets
demonstrate the efficacy of our framework. The dataset is
released here.
|
1. Introduction
Deep learning-based vision-language models (VLMs) [45]
unify text and visual data into a common representation and
reduce the computing cost of training for specific computer
vision [19] and visually-grounded linguistic tasks [12, 47].
VLMs are trained with a large amount of data with the
aim of matching image and text representations for image-
caption pairs to capture diverse visual and linguistic con-
cepts. However, VLMs exhibit societal biases manifesting
as the skew in the similarity between their representation
of certain textual concepts and kinds of images [2, 4, 31].
These biases arise from the underlying imbalances in train-
ing data [2, 3] and flawed training practices [55]. In this
work, we present a method to significantly reduce the bias
in VLMs by modifying the visual features of the models.
These societal biases in VLMs show up as the selective
association (or dissociation) of their representations of hu-
man images with specific physical characteristics and their
text representations for describing social labels [52]. For in-
stance, a higher degree of similarity between the representa-
tion of the text “doctor” and images of men than of women
can have trust consequences for models using the
representations from these VLMs. To estimate the degree
of such biases in VLMs, we compute the cosine similarity
between the representations of a set of human images and
specific key text phrases and compare their distribution over
some associated protected attributes. These attributes rep-
resent visually discernible characteristics like gender, race,
and age common to certain collective identity groups. In
this work, we introduce DEAR, an additive residual-based
de-biasing technique that can be augmented with any pre-
trained visual-language model with separate visual and lan-
guage encoders [33, 45, 49] to improve their fairness.
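A small sketch of the measurement procedure described above: cosine similarities between normalized image and text embeddings are compared across protected-attribute groups. The encoder outputs are stubbed with random tensors, and the particular skew statistic (gap between group means) is an assumption for illustration; the paper's exact metric may differ.

```python
import torch
import torch.nn.functional as F

def group_similarity_skew(img_emb, txt_emb, groups):
    """img_emb: (N, D) image features, txt_emb: (D,) one text concept,
    groups: (N,) integer protected-attribute labels. Returns per-group mean
    cosine similarity and the max-min gap as a simple skew statistic."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    sims = img_emb @ txt_emb                               # cosine similarity per image
    means = torch.stack([sims[groups == g].mean() for g in groups.unique()])
    return means, (means.max() - means.min())

# Toy usage with random stand-ins for VLM features.
img_emb = torch.randn(100, 512)
txt_emb = torch.randn(512)
groups = torch.randint(0, 3, (100,))
per_group, skew = group_similarity_skew(img_emb, txt_emb, groups)
```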
Our empirical analysis indicates that the protected at-
tribute (PA) information from an image-text pair can be
disentangled using their representation by simply learning
a linear transformation of the visual representations pro-
duced by a VLM and subtracting it (adding a negative resid-
ual) from the original representation. We propose to
train our framework using two objectives: i) learn a resid-
ual representation that when added to the original repre-
sentation, renders it incapable of predicting different pro-
tected attributes, and ii) ensure that this modified represen-
tation is as close to the original as possible. We demon-
strate that learning an additive residual enables the de-biasing
of pre-trained VLMs using quantitative skew computations
on multiple datasets and qualitative evaluations. We also
show that the resulting representation retains much of its
predictive properties by means of zero-shot evaluation on
different downstream tasks.
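A hedged sketch of the two training objectives described here: a linear residual is added to frozen VLM image features so that (i) an adversary can no longer predict the protected attribute from the debiased feature, and (ii) the debiased feature stays close to the original. The uniform-prediction confusion loss, the adversary's training scheme, and the weight `alpha` are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, n_groups = 512, 3
residual = nn.Linear(dim, dim)                  # learns the additive (negative) residual
adversary = nn.Linear(dim, n_groups)            # tries to predict the protected attribute

def dear_losses(feat, pa_labels, alpha=1.0):
    """feat: frozen VLM image features (B, D); pa_labels: protected-attribute ids (B,)."""
    debiased = feat + residual(feat)            # offset the original representation
    logits = adversary(debiased)
    # (i) push the adversary's predictions toward uniform, i.e. strip PA information
    uniform = torch.full_like(logits, 1.0 / n_groups)
    confusion = F.kl_div(logits.log_softmax(dim=-1), uniform, reduction='batchmean')
    # (ii) keep the debiased feature close to the original
    recon = F.mse_loss(debiased, feat)
    # the adversary itself is trained on detached features with standard cross-entropy
    adv_ce = F.cross_entropy(adversary(debiased.detach()), pa_labels)
    return confusion + alpha * recon, adv_ce

feat = torch.randn(8, dim)
pa = torch.randint(0, n_groups, (8,))
debias_loss, adversary_loss = dear_losses(feat, pa)
```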
Recent efforts to mitigate these biases from VLMs,
such as by Berg et al. [2], use unimodal datasets like the
FairFace [28] and UTK [63] datasets. These face-image
datasets lack the context necessary to infer the situations
in which benign text associations can turn offensive when
applied selectively. For instance, if the image of a person
drinking water from a bottle is misconstrued as partaking
of alcohol and if this association (in terms of image-text
similarity) is selective to a specific group of people with
some specific visual characteristics, the association may
be deemed offensive. To this end, we introduce the PRO-
TECTED ATTRIBUTE TAG ASSOCIATION (PATA) dataset to
test different associations for VLMs. PATA comprises im-
ages of persons in different contexts and their respective
text captions with positive and negative connotations. We
present our de-biasing results on both the FairFace and the
PATA datasets. In summary, the paper makes the following
contributions:
• We present the DEAR framework – a simple,
computationally-efficient, and effective de-biasing
method for VLMs that only adapts the image encoder
of the VLM by adding a learned residual representa-
tion to it.
• We introduce the PROTECTED ATTRIBUTE TAG AS-
SOCIATION dataset – a novel context-based bias eval-
uation dataset that enables nuanced reporting of biases
in VLMs for race, age, and gender protected attributes.
|
Rahman_Ambiguous_Medical_Image_Segmentation_Using_Diffusion_Models_CVPR_2023
|
Abstract
Collective insights from a group of experts have al-
ways proven to outperform an individual’s best diagnostic
for clinical tasks. For the task of medical image segmen-
tation, existing research on AI-based alternatives focuses
more on developing models that can imitate the best indi-
vidual rather than harnessing the power of expert groups.
In this paper, we introduce a single diffusion model-based
approach that produces multiple plausible outputs by learn-
ing a distribution over group insights. Our proposed model
generates a distribution of segmentation masks by leverag-
ing the inherent stochastic sampling process of diffusion us-
ing only minimal additional learning. We demonstrate on
three different medical image modalities (CT, ultrasound,
and MRI) that our model is capable of producing several
possible variants while capturing the frequencies of their
occurrences. Comprehensive results show that our pro-
posed approach outperforms existing state-of-the-art am-
biguous segmentation networks in terms of accuracy while
preserving naturally occurring variation. We also propose
a new metric to evaluate the diversity as well as the ac-
curacy of segmentation predictions that aligns with the in-
terest of clinical practice of collective insights. Implemen-
tation code: https://github.com/aimansnigdha/Ambiguous-
Medical-Image-Segmentation-using-Diffusion-Models.
|
1. Introduction
Diagnosis is the central part of medicine, which heavily
relies on the individual practitioner assessment strategy. Re-
cent studies suggest that misdiagnosis with potential mor-
tality and morbidity is widespread for even the most com-
mon health conditions [32, 49]. Hence, reducing the fre-
quency of misdiagnosis is a crucial step towards improving
healthcare. Medical image segmentation, which is a cen-
tral part of diagnosis, plays a crucial role in clinical out-
comes. Deep learning-based networks for segmentation are
now getting traction for assisting in clinical settings, how-
ever, most of the leading segmentation networks in the lit-
erature are deterministic [17, 23, 34, 36, 41, 42, 44], mean-
ing they predict a single segmentation mask for each in-
put image. Unlike natural images, ground truths are not
Figure 1. a) Deterministic networks produce a single output for
an input image. b) c-VAE-based methods encode prior informa-
tion about the input image in a separate network and sample latent
variables from there and inject it into the deterministic segmenta-
tion network to produce stochastic segmentation masks. c) In our
method the diffusion model learns the latent structure of the seg-
mentation as well as the ambiguity of the dataset by modeling the
way input images are diffused through the latent space. Hence our
method does not need an additional prior encoder to provide latent
variables for multiple plausible annotations.
deterministic in medical images as different diagnosticians
can have different opinions on the type and extent of an
anomaly [1,15,37,39]. Due to this, the diagnosis from med-
ical images is quite challenging and often results in a low
inter-rater agreement [22,24,56]. Depending on only pixel-
wise probabilities and ignoring co-variances between the
pixels might lead to misdiagnosis. In clinical practice, ag-
gregating interpretations of multiple experts has been shown to
improve diagnosis and generate fewer false negatives [57].
In fact, utilizing the aptitude of multiple medical ex-
perts has been a part of long-standing clinical traditions
such as case conferences, specialist consultations, and tu-
mor boards. By harnessing the power of collective intelli-
gence, team-based decision-making provides safer health-
care through improved diagnosis [32, 40]. Although collec-
tive insight is gaining traction in healthcare for its potential
in enhancing diagnostic accuracy, the method and its im-
plications remain poorly characterized in the automated medical
vision literature. It has been suggested that the use of artifi-
cial intelligence can optimize these processes while consid-
ering physician workflows in clinical settings [40].
In recent times, there has been an outstanding improve-
ment in specialized deterministic models for different med-
ical image segmentation tasks [13, 44, 52–55, 61]. Deter-
ministic models are notorious for choosing the most likely
hypothesis even if there is uncertainty, which might lead to
sub-optimal segmentation. To overcome this, some models
incorporate pixel-wise uncertainty for segmentation tasks,
however, they produce inconsistent outputs [25,26]. Condi-
tional variational autoencoders (c-VAE) [48], a conditional
generative model, can be fused with deterministic segmen-
tation networks to produce unlimited numbers of predic-
tions by sampling from the latent space conditioned on the
input image. Probabilistic U-net and its variants use this
technique during the inference process. Here, the latent
spaces are sampled from a prior network which has been
trained to be similar to c-VAE [8, 29, 30]. This dependency
on a prior network as well as injecting stochasticity only
at the highest resolution of the segmentation network pro-
duces less diverse and blurry segmentation predictions [46].
To overcome this problem, we introduce a single inherently
probabilistic model without any additional prior network
that represents the collective intelligence of several experts
to leverage multiple plausible hypotheses in the diagnosis
pipeline (visualized in Figure 1).
Diffusion probabilistic models are a class of generative
models consisting of Markov chains trained using varia-
tional inference [21]. The model learns the latent structure
of the dataset by modeling the diffusion process through la-
tent space. A neural network is trained to denoise images
corrupted with Gaussian noise by learning the reverse
diffusion process [50]. Recently, diffusion models have
been found to be widely successful for various tasks such
as image generation [14], and inpainting [35]. Certain ap-
proaches have also been proposed to perform semantic seg-
mentation using diffusion models [6,59]. Here, the stochas-
tic element in each sampling step of the diffusion model
using the same pre-trained model paves the way for gen-
erating multiple segmentation masks from a single input
image. However, there is still no exploration of using dif-
fusion models for ambiguous medical image segmentation
despite its high potential. In this paper, we propose the
CIMD (Collectively Intelligent Medical Diffusion), which
addresses ambiguous segmentation tasks of medical imag-
ing. First, we introduce a novel diffusion-based probabilis-
tic framework that can generate multiple realistic segmen-
tation masks from a single input image. This is motivated
by our argument that the stochastic sampling process of the
diffusion model can be harnessed to sample multiple plausible annotations. The stochastic sampling process also elim-
inates the need for a separate ‘prior’ distribution during the
inference stage, which is critical for c-V AE-based segmen-
tation models to sample the latent distribution for ambigu-
ous segmentation. The hierarchical structure of our model
also makes it possible to control the diversity at each time
step hence making the segmentation masks more realistic
as well as heterogeneous. Lastly, in order to assess ambigu-
ous medical image segmentation models, one of the most
commonly used metrics is known as GED (Generalized En-
ergy Distance), which matches the ground truth distribu-
tion with the prediction distribution. In real-world scenarios
for ambiguous medical image segmentation, ground truth
distributions are characterized by only a set of samples. In
practice, the GED metric has been shown to reward sam-
ple diversity regardless of the generated samples’ fidelity
or their match with ground truths, which can be potentially
harmful in clinical applications [30]. In medical practice,
individual assessments are manually combined into a single
diagnosis and evaluated in terms of sensitivity. When real-
time group assessment occurs, the participants generate a
consensus among themselves. Lastly, the minimum agree-
ment and maximum agreement among radiologists are also
considered in clinical settings. Inspired by the current prac-
tice in collective insight medicine, we coin a new metric,
namely the CI score (Collective Insight), which considers total
sensitivity, general consensus, and variation among radiolo-
gists. In summary, the following are the major contributions
of this work:
• We propose a novel diffusion-based framework: Col-
lectively Intelligent Medical Diffusion (CIMD), which
realistically models the heterogeneity of the segmentation
masks without requiring any additional network to pro-
vide prior information during inference, unlike previ-
ous ambiguous segmentation works (see the sampling sketch after this list).
• We revisit and analyze the inherent problem of the
current evaluation metric, GED for ambiguous models
and explain why this metric is insufficient to capture
the performance of the ambiguous models. We intro-
duce a new metric inspired by collective intelligence
medicine, coined as the CI Score (Collective Insight).
• We demonstrate across three medical imaging modal-
ities that CIMD performs on par or better than the
existing ambiguous image segmentation networks in
terms of quantitative standards while producing supe-
rior qualitative results.
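As a rough illustration of how the stochastic reverse-diffusion process referenced in the first bullet yields multiple plausible masks from one image, the sketch below runs several independent reverse chains conditioned on the same input. The noise schedule, the conditioning interface of `denoiser(x_t, image, t)`, and the thresholding are illustrative assumptions rather than CIMD's exact design.

```python
import torch

@torch.no_grad()
def sample_masks(denoiser, image, n_samples=4, timesteps=1000):
    """Run several independent reverse-diffusion chains conditioned on the same image.
    denoiser(x_t, image, t) is assumed to predict the added noise (epsilon)."""
    betas = torch.linspace(1e-4, 0.02, timesteps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    masks = []
    for _ in range(n_samples):
        x = torch.randn(1, 1, *image.shape[-2:])           # start each chain from fresh noise
        for t in reversed(range(timesteps)):
            eps = denoiser(x, image, torch.tensor([t]))
            coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
            mean = (x - coef * eps) / torch.sqrt(alphas[t])
            noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            x = mean + torch.sqrt(betas[t]) * noise        # per-step stochasticity -> mask diversity
        masks.append((x.sigmoid() > 0.5).float())
    return torch.cat(masks, dim=0)

# Toy usage with a stand-in denoiser (a real one would be a trained conditional U-Net).
toy_denoiser = lambda x, img, t: torch.zeros_like(x)
masks = sample_masks(toy_denoiser, torch.randn(1, 3, 32, 32), n_samples=2, timesteps=50)
```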
|
Shen_Progressive_Transformation_Learning_for_Leveraging_Virtual_Images_in_Training_CVPR_2023
|
Abstract
To effectively interrogate UAV-based images for detect-
ing objects of interest, such as humans, it is essential to
acquire large-scale UAV-based datasets that include human
instances with various poses captured from widely varying
viewing angles. As a viable alternative to laborious and
costly data curation, we introduce Progressive Transfor-
mation Learning (PTL), which gradually augments a train-
ing dataset by adding transformed virtual images with en-
hanced realism. Generally, a virtual2real transformation
generator in the conditional GAN framework suffers from
quality degradation when a large domain gap exists be-
tween real and virtual images. To deal with the domain gap,
PTL takes a novel approach that progressively iterates the
following three steps: 1) select a subset from a pool of vir-
tual images according to the domain gap, 2) transform the
selected virtual images to enhance realism, and 3) add the
transformed virtual images to the training set while remov-
ing them from the pool. In PTL, accurately quantifying the
domain gap is critical. To do that, we theoretically demon-
strate that the feature representation space of a given object
detector can be modeled as a multivariate Gaussian dis-
tribution from which the Mahalanobis distance between a
virtual object and the Gaussian distribution of each object
category in the representation space can be readily com-
puted. Experiments show that PTL results in a substantial
performance increase over the baseline, especially in the
small data and the cross-domain regime.
|
1. Introduction
Training an object detector usually requires a large-scale
training image set so that the detector can acquire the abil-
ity to detect objects’ diverse appearances. This desire for a
large-scale training set is bound to be greater for object cat-
egories with more diverse appearances, such as the human
category whose appearances vary greatly depending on its
pose or viewing angles. Moreover, a person’s appearance
Figure 1. Overview of Progressive Transformation Learning.
becomes more varied in images captured by an unmanned
aerial vehicle (UAV), leading to a wide variety of camera
viewing angles compared to ground-based cameras, mak-
ing the desire for a large-scale training set even greater. In
this paper, we aim to satisfy this desire, especially when the
availability of UAV-based images to train a human detector
is scarce, where this desire is more pressing.
As an intuitive way to expand the training set, one might
consider synthesizing virtual images to imitate real-world
images by controlling the optical and physical conditions in
a virtual environment. Virtual images are particularly use-
ful for UAV-based object detection since abundant object
instances can be rendered with varying UAV locations and
camera viewing angles along with ground-truth informa-
tion (e.g., bounding boxes, segmentation masks) that comes
free of charge. Therefore, a large-scale virtual image set
covering diverse appearances of human subjects that are
rarely shown in existing UAV-based object detection bench-
marks [1,3,53] can be conveniently acquired by controlling
entities and parameters in a virtual environment, such as
poses, camera viewing angles, and illumination conditions.
To make virtual images usable for training real-world ob-
ject detection models, recent works [19, 29, 38, 39] trans-
form virtual images to look realistic. They commonly use
the virtual2real generator [36] trained with the conditional
GAN framework to transform images in the source domain
to have the visual properties of images in the target do-
main. Here, virtual and real images are treated as the source
and target domains, respectively. However, the large dis-
crepancy in visual appearance between the two domains,
referred to as the “domain gap”, results in the degraded
transformation quality of the generator. In fact, the afore-
mentioned works using virtual images validate their meth-
ods where the domain gap is not large (e.g., digit detec-
tion [19]) or when additional information is available (e.g.,
animal pose estimation with additional keypoint informa-
tion [29, 38, 39]). In our case, real and virtual humans in
UAV-based images inevitably have a large domain gap due
to the wide variety of human appearances.
To address the large domain gap, one critical question
inherent in our task is how to accurately measure the do-
main gap. Consequently, we estimate the domain gap in the
representation space of a human detector trained on the real
images. The representation space of the detector is learned
such that test samples, which have significantly different
properties than training samples from the perspective of the
detector, are located away from the training samples. In
this paper, we show that the feature distribution of object
entities belonging to a certain category, such as the human
category, in the representation space can be modeled with a
multivariate Gaussian distribution if the following two con-
ditions are met: i) the detector uses the sigmoid function to
normalize the final output and ii) the representation space
is constructed using the output of the penultimate layer of
the detector. This idea is inspired by [28], which shows
that softmax-based classifiers can be modeled as multivari-
ate Gaussian distributions. In this paper, we show that the
proposition is also applicable to sigmoid-based classifiers,
which are widely used by object detectors. Based on this
modeling, when the two aforementioned conditions are met,
the human category in the representation space can be rep-
resented by two parameters (i.e., mean and covariance) of a
multivariate Gaussian distribution that can be computed on
the training images. With the empirically calculated mean
and covariance, the domain gap from a single virtual hu-
man image to real human images (i.e., the training set) can
be measured using the Mahalanobis distance [35].
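A compact sketch of the domain-gap measurement described in this paragraph: the class-conditional mean and covariance are fitted on penultimate-layer features of real training humans, and the Mahalanobis distance of a virtual sample to that Gaussian serves as its domain gap. The covariance regularizer `eps` and the random stand-in features are assumptions for illustration.

```python
import numpy as np

def fit_gaussian(real_feats, eps=1e-6):
    """real_feats: (N, D) penultimate-layer features of real human crops."""
    mu = real_feats.mean(axis=0)
    cov = np.cov(real_feats, rowvar=False) + eps * np.eye(real_feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_gap(virtual_feat, mu, cov_inv):
    """Domain gap of one virtual sample to the real-human feature distribution."""
    diff = virtual_feat - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

# Toy usage with random stand-ins for detector features.
real_feats = np.random.randn(500, 64)
mu, cov_inv = fit_gaussian(real_feats)
gap = mahalanobis_gap(np.random.randn(64) + 3.0, mu, cov_inv)
```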
To add virtual images to the training set to include more
diverse appearances of objects while preventing the trans-
formation quality degradation caused by large domain gaps,
we introduce Progressive Transformation Learning (PTL)
(Figure 1). PTL progressively expands the training set by
adding virtual images through iterating the three steps: 1)
transformation candidate selection, 2) virtual2real transfor-
mation, and 3) set update. When selecting transformation
candidates from a virtual image pool, we use weighted ran-
dom sampling, which gives higher weights to images with
smaller domain gaps. The weight takes an exponential term
with one hyperparameter controlling the ratio between im-
ages with smaller domain gaps and images with more di-
verse appearances. Then, the virtual2real transformation
generator is trained via the conditional GAN, taking the se-lected transformation candidates as the “source” and the im-
ages in the training set as the “target”. After transforming
the transformation candidates by applying the virtual2real
transformation generator, the training set is expanded with
the transformed candidates while the original candidates are
excluded from the pool of virtual images.
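The candidate-selection step can be sketched as weighted random sampling without replacement, where smaller domain gaps receive exponentially larger weights. The exact form exp(-gap/tau), with `tau` as the single controlling hyperparameter, is an assumed instantiation of the exponential weighting described above, not the paper's verified formula.

```python
import numpy as np

def select_candidates(gaps, n_select, tau=1.0, rng=None):
    """gaps: (P,) Mahalanobis domain gaps of the virtual-image pool.
    Returns indices sampled without replacement, favoring small domain gaps."""
    rng = rng or np.random.default_rng()
    weights = np.exp(-np.asarray(gaps) / tau)       # smaller gap -> larger weight
    probs = weights / weights.sum()
    return rng.choice(len(gaps), size=n_select, replace=False, p=probs)

# Toy usage: pick 10 candidates from a pool of 100; a larger tau yields more diverse picks.
idx = select_candidates(np.random.rand(100) * 5.0, n_select=10, tau=0.5)
```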
The main contribution of this paper is that we have val-
idated the utility of virtual images in augmenting training
data via PTL coupled with carefully designed comprehen-
sive experiments. We first use the task of low-shot learn-
ing, where adequately expanding datasets has notable ef-
fects. Specifically, PTL provides better accuracy on three
UAV-view human detection benchmarks than other previ-
ous works that leverage virtual images in training, as well
as methods that only use real images. Then, we validate
PTL on the cross-domain detection task where training and
test sets are from distinct domains and virtual images can
serve as a bridge between these two sets. The experimen-
tal results indicate that a high-performance human detection
model can be effectively learned via PTL, even with the sig-
nificant lack of real-world training data.
|
Ren_VolRecon_Volume_Rendering_of_Signed_Ray_Distance_Functions_for_Generalizable_CVPR_2023
|
Abstract
The success of the Neural Radiance Fields (NeRF) in
novel view synthesis has inspired researchers to propose
neural implicit scene reconstruction. However, most exist-
ing neural implicit reconstruction methods optimize per-
scene parameters and therefore lack generalizability to
new scenes. We introduce VolRecon, a novel generalizable
implicit reconstruction method with Signed Ray Distance
Function (SRDF). To reconstruct the scene with fine de-
tails and little noise, VolRecon combines projection features
aggregated from multi-view features, and volume features
interpolated from a coarse global feature volume. Using
a ray transformer, we compute SRDF values of sampled
points on a ray and then render color and depth. On the DTU
dataset, VolRecon outperforms SparseNeuS by about 30%
in sparse view reconstruction and achieves comparable ac-
curacy as MVSNet in full view reconstruction. Furthermore,
our approach exhibits good generalization performance on
the large-scale ETH3D benchmark. Code is available at
https://github.com/IVRL/VolRecon/ .
|
1. Introduction
The ability to reconstruct 3D geometries from images or
videos is crucial in various applications in robotics [16, 43,
52] and augmented/virtual reality [29,35]. Multi-view stereo
(MVS) [13, 15, 39, 47, 54, 55] is a commonly used technique
for this task. A typical MVS pipeline involves multiple steps,
i.e., multi-view depth estimation, filtering, and fusion [5,13].
Recently, there has been a growing interest in neural im-
plicit representations for various 3D tasks, such as shape
modeling [28, 36], surface reconstruction [49, 56], and novel
view synthesis [30]. NeRF [30], a seminal work in this area,
employs Multi-Layer Perceptrons (MLP) to model a radiance
field, producing volume density and radiance estimates for a
given position and viewing direction. While NeRF’s scene
representation and volume rendering approach has proven
effective for tasks such as novel view synthesis, it cannot
Figure 1. Generalizable implicit reconstructions from three views
(top). The state-of-the-art method SparseNeuS [26] produces over-
smoothed surfaces (left), while our (VolRecon) reconstructs finer
details (right). Best viewed on a screen when zoomed in.
generate accurate surface reconstruction due to difficulties
in finding a universal density threshold for surface extrac-
tion [56]. To address this, researchers have proposed neural
implicit reconstruction using the Signed Distance Function
(SDF) for geometry representation and modeling the vol-
ume density function [49, 56]. However, utilizing SDF with
only color supervision leads to unsatisfactory reconstruction
quality compared to MVS methods [15, 54] due to a lack of
geometry supervision and potential radiance-geometry ambi-
guities [51,62]. As a result, subsequent works have sought to
improve reconstruction quality by incorporating additional
priors, such as sparse Structure-from-Motion (SfM) point
clouds [11], dense MVS point clouds [60], normals [48, 59],
and depth maps [59].
Many neural implicit reconstruction methods are re-
stricted to optimizing one model for a particular scene and
cannot be applied to new, unseen scenes, i.e., across-scene
generalization. However, the ability to generalize learned
priors to new scenes is valuable in challenging scenarios
such as reconstruction with sparse views [4, 50, 58]. In or-
der to achieve across-scene generalization in neural implicit
reconstruction, it is insufficient to simply input the spatial
coordinate of a point as NeRF. Instead, we need to incorpo-
rate information about the scene, such as the points’ pro-
jection features on the corresponding images [4, 50, 58].
SparseNeuS [26] recently achieved across-scene general-
ization in implicit reconstruction with global feature vol-
umes [4]. Despite achieving promising results, SparseNeuS
is limited by the resolution of the feature volume due to
the memory constraints [22, 32], leading to over-smoothed
surfaces even with a higher-resolution feature volume (Fig. 1).
In this paper, we propose VolRecon, a novel framework
for generalizable neural implicit reconstruction using the
Signed Ray Distance Function (SRDF). Unlike SDF, which
defines the distance to the nearest surface along any direc-
tion, SRDF [63] defines the distance to the nearest surface
along a given ray. We utilize a projection-based approach
to gather local information about surface location. We first
project each point on the ray into the feature map of each
source view to interpolate multi-view features. Then, we ag-
gregate the multi-view features to projection features using
a view transformer. However, when faced with challenging
situations such as occlusions and textureless surfaces, de-
termining the surface location along the ray with only local
information is difficult. To address this, we construct a coarse
global feature volume that encodes global shape priors like
SparseNeuS [26, 32]. We use the interpolated features from
the global feature volume, i.e., volume features, and pro-
jection features of all the sampled points along the ray to
compute their SRDF values with a ray transformer. Similar
to NeuS [49], we model the density function with SRDF
and then estimate the image and depth map with volume
rendering.
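The rendering step described above can be illustrated with a short sketch that converts per-sample SRDF values into opacities and composites color and depth along one ray. The logistic mapping from SRDF to opacity is a simplified stand-in for the NeuS-style density construction, and the transformers that actually produce the SRDF values are omitted.

```python
import torch

def render_ray(srdf, colors, depths, s=32.0):
    """srdf: (N,) signed ray distances at sampled points along one ray,
    colors: (N, 3), depths: (N,). Returns composited color and depth."""
    # Simplified NeuS-like opacity: drop of a steep sigmoid applied to SRDF.
    cdf = torch.sigmoid(s * srdf)
    alpha = ((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-6)).clamp(0.0, 1.0)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-6]), dim=0)[:-1]
    weights = alpha * trans                                    # volume-rendering weights
    color = (weights[:, None] * colors[:-1]).sum(dim=0)
    depth = (weights * depths[:-1]).sum(dim=0)
    return color, depth

# Toy usage: a surface crossing around depth 2.0 along the ray.
depths = torch.linspace(1.0, 3.0, 64)
srdf = 2.0 - depths                                            # positive in front of, negative past the surface
color, depth = render_ray(srdf, torch.rand(64, 3), depths)
```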
Extensive experiments on DTU [1] and ETH3D [40]
verify the effectiveness and generalization ability of our
method. On DTU, our method outperforms the state-of-
the-art method SparseNeuS [26] by 30% in sparse view
reconstruction and 22% in full view reconstruction. Further-
more, our method performs better than the MVS baseline
COLMAP [39]. Compared with MVSNet [54], a seminal
learning-based MVS method, our method performs better
in the depth evaluation and has comparable accuracy in full-
view reconstruction. On the ETH3D benchmark [40], we
show that our method has a good generalization ability to
large-scale scenes.
In summary, our contributions are as follows:
• We propose VolRecon, a new pipeline for generalizable
implicit reconstruction that produces detailed surfaces.
• Our novel framework comprises a view transformer to
aggregate multi-view features and a ray transformer to
compute SRDF values of all the points along a ray.
• We introduce a combination of local projection features
and global volume features, which enables the recon-
struction of surfaces with fine details and high quality.
|
Si_Angelic_Patches_for_Improving_Third-Party_Object_Detector_Performance_CVPR_2023
|
Abstract
Deep learning models have shown extreme vulnerabil-
ity to distribution shifts such as synthetic perturbations and
spatial transformations. In this work, we explore whether
we can adopt the characteristics of adversarial attack meth-
ods to help improve the robustness of object detection to dis-
tribution shifts such as synthetic perturbations and spa-
tial transformations. We study a class of realistic object
detection settings wherein the target objects have control
over their appearance. To this end, we propose a reversed
Fast Gradient Sign Method (FGSM) to obtain these angelic
patches that significantly increase the detection probabil-
ity, even without prior knowledge of the perturbations. In de-
tail, we apply the patch to each object instance simultane-
ously, strengthening not only classification, but also bound-
ing box accuracy. Experiments demonstrate the efficacy of
the partial-covering patch in solving the complex bounding
box problem. More importantly, the performance is also
transferable to different detection models even under severe
affine transformations and deformable shapes. To the best
of our knowledge, ours is the first object detection patch that
achieves both cross-model efficacy and multi-patch application.
We observed average accuracy improvements of 30% in the
real-world experiments. Our code is available at: https:
//github.com/averysi224/angelic_patches .
|
1. Introduction
Deep learning models have been heavily deployed in
many safety-critical settings such as autonomous vehicles.
However, these models have been notoriously fragile to
mild perturbations. For example, natural corruptions like
weather conditions and simple lighting effects can signif-
icantly degrade the performance of state-of-the-art mod-
els [11, 14]. Similarly, the performance under small spa-
tial transformations exhibits a large gap compared to clean
benchmarks [2, 7, 15]. On the other hand, a set of carefully
designed adversarial examples [10] are able to manipulate
the prediction behavior arbitrarily without notice of human
eyes. The untrustworthiness of deep learning systems leads
Figure 1. In this paper, we demonstrate that optimizing a partial-
cover patch for pre-trained object detectors can improve robust-
ness and significantly boost performance for both the classification
and bounding box regression accuracy. In the left picture, one of
the three people is mis-detected after applying the fog corruption.
However, in the right image with our angelic patch, the detector
was able to detect all three people with even more accurate bound-
ing boxes. If all three people wore an angelic raincoat, we
could save lives from a fog-blinded autonomous car!
to high-stakes failures and devastating consequences.
Two main streams of algorithms investigate solving these
problems. One is to improve out-of-distribution behav-
ior by adding more robustness interventions and diverse
data during training. However, they do not fully close the
gap between standard model performance and perturbed re-
sults [8]. Others applied domain adaptation over covari-
ate shift, achieving reasonable performance [23], yet this
approach does not generalize to unseen domains, whereas
misspecified test-time distributions occur frequently in
practice.
Motivated and inspired by the efficacy of these perturba-
tion/adversarial methods above, we ask the question: Can
we adopt the characteristics of adversarial attack methods
to help improve perturbation-robustness? We propose to
reconsider the problem setup itself and study in a scenario
where the target objects are in control of their appearance.
To simplify, we build the objects instead of the models to
improve detection reliability. As a concrete example, con-
sider a pedestrian interacting with autonomous cars that use
deep learning models for detection. Our approach is to pro-
vide a wearable patch designed to improve the visibility of
these people to these models (Figure 1). Such practices
(a) An example of the corruption-aware angelic patch training procedure. (b) An example of corruption-agnostic angelic patch training.
Figure 2. Examples of the two considered methods for constructing angelic patches. In the corruption-aware setting, we compute the loss
on the predictions for corrupted patched images and, at test time, test on the same type of corruption; in the corruption-agnostic setting, we
compute the loss on the predictions for corrupted clean images and, at test time, test on arbitrary corruptions.
are already common for dealing with human drivers—e.g.,
wearing bright or reflective clothing at night.
Furthermore, though the adversarial attacks and robust-
ness of classifiers are well-studied, we focus on the less
studied yet practically broadly used object detection setting.
In detail, the detectors learn not only the object class infor-
mation but also learn about the localization and the size of
an object, e.g., bounding boxes. Besides, the object detec-
tion problem is essentially a multi-instance one, causing im-
plicit interactions between objects, even with occlusions. To
our knowledge, we are the first to systematically investigate
this setting. We validated our method on both a single-
stage detector and a two-stage detector with a proposal
network, as well as in cross-model experiments.
Our contribution is three-fold: First, we propose the
novel angelic patches with a reversed Fast Gradient Sign
Method to improve the performance of both single-
stage and two-stage third-party object detectors. Second,
we demonstrate the efficacy of our framework on a wide va-
riety of detection settings, including dozens of synthetic cor-
ruptions and affine transformations, without additional aug-
mentation during training. Third, ours is the first defensive
physical patch that achieves cross-model validation on sev-
eral state-of-the-art unseen models. We extensively evalu-
ated our approach with both programmatic patches as well
as real-world experiments. We believe our approach identi-
fies a highly practical valuable strategy that can be used in
a broad range of applications.
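To make the core idea concrete, here is a hedged sketch of a reversed-FGSM patch update: instead of ascending the detection loss as an attack would, the patch pixels take a signed-gradient step that descends it, so detection confidence increases. The patch placement, the generic `detection_loss` callable, and the step size are illustrative assumptions, not the exact training procedure.

```python
import torch

def reversed_fgsm_step(patch, images, boxes, detection_loss, epsilon=2.0 / 255):
    """One reversed-FGSM update of a shared patch applied to every object instance.
    detection_loss(images, boxes) is assumed to return the detector's loss
    (classification + box regression) for the ground-truth objects."""
    patch = patch.clone().requires_grad_(True)
    patched = images.clone()
    for b in boxes:                                  # paste the patch onto each instance
        x1, y1 = int(b[0]), int(b[1])                # assumes the patch fits inside each box
        ph, pw = patch.shape[-2:]
        patched[..., y1:y1 + ph, x1:x1 + pw] = patch
    loss = detection_loss(patched, boxes)
    loss.backward()
    # Reversed FGSM: step *against* the gradient sign to decrease the detection loss.
    new_patch = (patch - epsilon * patch.grad.sign()).clamp(0.0, 1.0)
    return new_patch.detach()
```

Iterating this step over a training set yields a patch that nudges the detector toward correct, high-confidence predictions rather than away from them.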
In the following sections, we start with a review of re-
lated work, then give the proposed angelic patch framework
and experiment results. After that, we conclude the paper.
|
Shen_Disentangling_Orthogonal_Planes_for_Indoor_Panoramic_Room_Layout_Estimation_With_CVPR_2023
|
Abstract
Based on the Manhattan World assumption, most exist-
ing indoor layout estimation schemes focus on recovering
layouts from vertically compressed 1D sequences. However,
the compression procedure confuses the semantics of differ-
ent planes, yielding inferior performance with ambiguous
interpretability.
To address this issue, we propose to disentangle this 1D
representation by pre-segmenting orthogonal (vertical and
horizontal) planes from a complex scene, explicitly captur-
ing the geometric cues for indoor layout estimation. Con-
sidering the symmetry between the floor boundary and ceil-
ing boundary, we also design a soft-flipping fusion strategy
to assist the pre-segmentation. Besides, we present a fea-
ture assembling mechanism to effectively integrate shallow
and deep features with distortion distribution awareness.
To compensate for the potential errors in pre-segmentation,
we further leverage triple attention to reconstruct the dis-
entangled sequences for better performance. Experiments
on four popular benchmarks demonstrate our superiority
over existing SoTA solutions, especially on the 3DIoU met-
ric. The code is available at https://github.com/
zhijieshen-bjtu/DOPNet .
|
1. Introduction
Indoor panoramic layout estimation refers to recon-
structing 3D room layouts from omnidirectional images.
Since the panoramic vision system captures the whole-room
contextual information, we can estimate the complete room
layout with a single panoramic image. However, inferring
the 3D information from a 2D image is an ill-posed prob-
lem. Besides, the 360° field-of-view (FoV) of panoramas
brings severe distortions that increase along the latitude.
Both issues are challenging for indoor layout estimation.
Figure 1. (a) The commonly used architecture. (b) The proposed
one. Compared with the traditional pipeline, ours has two advan-
tages: (1) Disentangling the 1D representation into two separate
sequences with different plane semantics. (2) Adaptively integrat-
ing shallow and deep features with distortion awareness via a fea-
ture assembling mechanism rather than simple concatenation.
Different from outdoor scenarios, the indoor room has
the following properties: (1) The indoor scenes conform to
the Manhattan World assumption (The floors and ceilings
are all flat planes, and the walls are perpendicular to them).
(2) The room layout is always described as the corners or
the floor boundary and ceiling boundary. These characteris-
tics could be used as potential priors to guide the design of
a reasonable layout estimation method.
Based on the Manhattan World assumption, previous
approaches [19, 20, 8, 14, 21] prefer to estimate the lay-
out from a 1D sequence. They advocate compressing the
extracted 2D feature maps along the height dimension to obtain
a 1D sequence in which every element shares the same
distortion magnitude (Fig. 1a). We argue
that this compressed representation does not eliminate the
panoramic distortions because there is no explicit distortion
processing when extracting 2D feature maps before com-
pression. Moreover, this strategy roughly mixes the vertical
and horizontal planes together, confusing the semantics of
different planes that are crucial for layout estimation.
On the other hand, some researchers devoted themselves
to adopting different projection formats to boost performance,
e.g., the bird's-eye view of the room [29] and cubemap
projection [27]. These projection-based schemes weaken
the negative effect of the distortions. Nevertheless, frequent
projection operations between different formats increase computational
complexity. In addition, there exists an inevitable
domain gap between the feature maps from different for-
mats.
To address the above problems, we propose a novel ar-
chitecture (Fig.1b) that disentangles the orthogonal planes
in advance to capture an explicit geometric cue. Follow-
ing [21], our room layout can be recovered from the pre-
dicted horizon-depth map and the room height. Therefore,
the “clean” depth-relevant features and height-relevant fea-
tures can both help with the layout estimation. To obtain
such “clean” features free from the disturbance of decora-
tions and furniture, we disentangle the widely used 1D rep-
resentation into two separate sequences — the horizontal
and vertical ones. Specifically, we pre-segment the vertical
plane (i.e., walls) and horizontal planes (i.e., floors and ceil-
ings) from the whole-room contextual information. Then
these orthogonal planes are compressed into two 1D repre-
sentations. In particular, based on the symmetry between
the floor boundary and the ceiling boundary, we design a
soft-flipping fusion strategy to assist this process.
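As a loose illustration of the disentangling step, the sketch below gates a panoramic feature map with pre-segmented plane masks before compressing along the height dimension into two 1D sequences, and includes a fixed-weight stand-in for the soft-flipping fusion. The tensor shapes, the masked average pooling, and the blending weight are assumptions for illustration only, not the paper's actual design.

```python
import torch

def soft_flip_fuse(feat, alpha=0.5):
    """Blend a panoramic feature map (B, C, H, W) with its vertical flip to
    exploit floor/ceiling symmetry. A fixed weight is used here for simplicity."""
    return alpha * feat + (1 - alpha) * torch.flip(feat, dims=[2])

def disentangle_and_compress(feat, vertical_mask, horizontal_mask, eps=1e-6):
    """Split a feature map (B, C, H, W) into two 1D sequences of shape (B, C, W):
    one for vertical planes (walls) and one for horizontal planes (floor/ceiling),
    using soft pre-segmentation masks of shape (B, 1, H, W)."""
    def masked_height_pool(f, m):
        # Average over the height dimension, weighted by the plane mask.
        return (f * m).sum(dim=2) / (m.sum(dim=2) + eps)

    vert_seq = masked_height_pool(feat, vertical_mask)      # wall cues
    horiz_seq = masked_height_pool(feat, horizontal_mask)   # floor/ceiling cues
    return vert_seq, horiz_seq
```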
Moreover, we propose an assembling mechanism to fuse
multi-scale features with distortion awareness effectively.
To eliminate the negative effect of distortion, we com-
pute the attention among distortion-relevant positions fol-
lowing distortion distribution patterns. Meanwhile, cross-
scale interaction is carried out to integrate shallow geo-
metric structures and deep semantic features. Considering
the error of pre-segmentation, we further leverage triple at-
tention to reconstruct the two 1D sequences. Particularly,
we adopt graph-based attention to generate discriminative
channels, self-attention to rebuild long-range dependencies,
and cross-attention to provide the missing information for
different sequences.
To demonstrate our effectiveness, we conduct extensive
experiments on four popular datasets — MatterportLayout
[29], Zind [6], Stanford 2D-3D [1], and PanoContext [26].
The qualitative and quantitative results show that the pro-
posed solution outperforms other SoTA methods. Our con-
tributions can be summarized as follows:
• We propose to disentangle orthogonal planes to cap-
ture an explicit geometric cue for indoor 360° room
layout estimation, with a soft-flipping fusion strategy
to assist this procedure.
• We design a cross-scale distortion-aware assembling
mechanism to perceive distortion distribution as well
as integrate shallow geometric structures and deep se-
mantic features.
• On popular benchmarks, our solution outperforms
other SoTA schemes, especially on the metric of in-
tersection over the union of 3D room layouts.
|
Sanghi_CLIP-Sculptor_Zero-Shot_Generation_of_High-Fidelity_and_Diverse_Shapes_From_Natural_CVPR_2023
|
Abstract
Recent works have demonstrated that natural language
can be used to generate and edit 3D shapes. However,
these methods generate shapes with limited fidelity and di-
versity. We introduce CLIP-Sculptor, a method to address
these constraints by producing high-fidelity and diverse
3D shapes without the need for (text, shape) pairs during
training. CLIP-Sculptor achieves this in a multi-resolution
approach that first generates in a low-dimensional latent
space and then upscales to a higher resolution for im-
proved shape fidelity. For improved shape diversity, we
use a discrete latent space which is modeled using a trans-
former conditioned on CLIP’s image-text embedding space.
We also present a novel variant of classifier-free guid-
ance, which improves the accuracy-diversity trade-off. Fi-
nally, we perform extensive experiments demonstrating that
CLIP-Sculptor outperforms state-of-the-art baselines.
|
1. Introduction
In recent years, there has been rapid progress in text-
conditioned image generation [31, 32, 35], which has been
driven by advances in multimodal understanding learned
from web-scaled paired (text, image) data. These advances have led to applications in domains ranging from content
creation [21, 22, 31] to human–robot interaction [40]. Un-
fortunately, developing the analogue of a text-conditioned
3D shape generator is challenging because it is difficult to
obtain (text, 3D shape) pairs at large scale. Prior work has
attempted to address this problem by collecting text-shape
paired data [2, 5, 9, 23, 26], but these approaches have been
limited to a small number of object categories.
A promising way around this data bottleneck is to use
weak supervision from large-scale vision/language mod-
els such as CLIP [30]. One approach is to directly op-
timize a 3D representation such that (text, image render)
pairs are aligned when projected into the CLIP embedding
space. Prior work has applied this approach to stylize 3D
meshes [25, 41] and to create abstract “dreamlike” objects
using neural radiance fields [18] or meshes [19]. However,
neither of the aforementioned methods produce realistic ob-
ject geometry, and they can require expensive optimization.
Another approach, more in line with text-to-image gener-
ators [31, 32], is to train a conditional generative model.
The CLIP-Forge system [37] builds such a model without
paired (text, shape) data by using rendered images of shapes
at training time and leveraging the CLIP embedding space
to bridge the modalities of image and text at inference time.
CLIP-Forge demonstrates compelling zero-shot generation
Table 1. High-level comparison between zero-shot text-to-shape
generation methods. Inference time is calculated using a single
NVIDIA Tesla V100 GPU.
Method Inference Fidelity Diversity
DreamFields [18] >24 hrs Low Single
Clip-Mesh [19] 30 min Low Single
Clip-Forge [37] 6.41 ms Medium Medium
CLIP-Sculptor 0.91 sec High High
abilities but produces low-fidelity shapes which do not cap-
ture the full diversity of shapes found in the training distri-
bution.
In this paper, we propose CLIP-Sculptor, a text-
conditioned 3D shape generative model that outperforms
the state of the art by improving shape diversity and fi-
delity with only (image, shape) pairs as supervision. CLIP-
Sculptor’s novelty lies in its multi-resolution, voxel-based
conditional generation scheme. Without (text, shape) pairs,
CLIP-Sculptor learns to generate 3D shapes of common
object categories from the text-image joint embedding
of CLIP. To achieve high fidelity outputs, CLIP-Sculptor
adopts a multi-resolution approach: it first generates a low-
resolution latent grid representation that captures the se-
mantics from text/image, then upscales it to a higher res-
olution latent grid representation with a super-resolution
model, and finally decodes the output geometry. To gen-
erate diverse shapes, CLIP-Sculptor adopts discrete latent
representations which are obtained using a vector quanti-
zation scheme that avoids posterior collapse. To further
improve shape fidelity and diversity, CLIP-Sculptor uses a
masked transformer architecture. We additionally propose a
novel annealed strategy for the classifier-free guidance [16].
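The annealed classifier-free guidance mentioned above can be sketched as follows; the linear schedule and the start and end guidance scales are illustrative assumptions, since the paper's exact schedule is not reproduced here.

```python
def annealed_cfg_logits(cond_logits, uncond_logits, step, total_steps,
                        scale_start=6.0, scale_end=1.0):
    """Classifier-free guidance with an annealed guidance scale.

    The guidance scale decays linearly from scale_start to scale_end over the
    sampling steps, trading strong text adherence early for more diversity late.
    cond_logits / uncond_logits come from the conditional and unconditional
    passes of the same transformer over the discrete latent grid.
    """
    t = step / max(total_steps - 1, 1)
    scale = scale_start + t * (scale_end - scale_start)
    return uncond_logits + scale * (cond_logits - uncond_logits)
```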
To sum up, we make the following contributions:
• CLIP-Sculptor, a multi-resolution text-conditional
shape generative model that achieves both high fidelity
and diversity without the need for (text, shape) pairs.
• A novel variant of classifier-free guidance for gener-
ative models, with an annealed guidance schedule to
achieve better quality for a given diversity level.
|
Rony_Proximal_Splitting_Adversarial_Attack_for_Semantic_Segmentation_CVPR_2023
|
Abstract
Classification has been the focal point of research on ad-
versarial attacks, but only a few works investigate methods
suited to denser prediction tasks, such as semantic segmenta-
tion. The methods proposed in these works do not accurately
solve the adversarial segmentation problem and, therefore,
overestimate the size of the perturbations required to fool
models. Here, we propose a white-box attack for these mod-
els based on a proximal splitting to produce adversarial
perturbations with much smaller ℓ∞norms. Our attack
can handle large numbers of constraints within a nonconvex
minimization framework via an Augmented Lagrangian ap-
proach, coupled with adaptive constraint scaling and mask-
ing strategies. We demonstrate that our attack significantly
outperforms previously proposed ones, as well as classifica-
tion attacks that we adapted for segmentation, providing a
first comprehensive benchmark for this dense task.
|
1. Introduction
Research on white-box adversarial attacks has mostly fo-
cused on classification tasks, with several methods proposed
over the years for ℓp-norms [ 8,9,20–22,27,30,32,35,36,42].
In contrast, the literature on adversarial attacks for segmenta-
tion tasks has been much scarcer, with few works proposing
attacks [ 13,33,39]. The lack of studies on adversarial attacks
in segmentation may appear surprising because of the promi-
nence of this computer vision task in many applications
where a semantic understanding of image contents is needed.
In many safety-critical application areas, it is thus crucial to
assess the robustness of the employed segmentation models.
Although segmentation is treated as a per-pixel classifi-
cation problem, designing adversarial attacks for semantic
segmentation is much more challenging for several reasons.
First, from an optimization perspective, the problem of ad-
versarial example generation is more difficult. In a clas-
sification task, producing minimal adversarial examples is
a nonconvex constrained problem with a single constraint.
In a segmentation task, this optimization problem has multiple constraints, since at least one constraint must be
addressed for each pixel in the image. For a dataset such
as Cityscapes [ 19], the images have a size of 2048×1024 ,
resulting in more than 2 million constraints. Consequently,
most attacks originally designed for classification cannot
be directly extended to segmentation. For instance, penalty
methods such as C&W [ 9] cannot tackle multiple constraints
since they rely on a binary search of the penalty weight.
Second, the computational and memory cost of gener-
ating adversarial examples can be prohibitive. White-box
adversarial attacks usually rely on computing the gradient
of a loss w.r.t. the input. In segmentation tasks, the dense
outputs result in high memory usage to perform a backward
propagation of the gradient. For reference, computing the
gradients of the logits w.r.t. the input requires ∼22GiB of
memory for FCN HRNetV2 W48 [ 38] on Cityscapes with
a2048×1024 image. Additionally, most recent classifi-
cation adversarial attacks require between 100 and 1 000
iterations, resulting in a run-time of up to a few seconds per
image [ 34,36] (on GPU) depending on the dataset and model.
For segmentation, this increases to tens or even hundreds of
seconds per image with larger models.
In this article, we propose an adversarial attack to pro-
duce minimal adversarial perturbations w.r.t. the ℓ∞-norm,
for deep semantic segmentation models. Building on Aug-
mented Lagrangian principles, we introduce adaptive strate-
gies to handle a large number of constraints ( i.e.>106).
Furthermore, we tackle the nonsmooth ℓ∞-norm minimiza-
tion with a proximal splitting instead of gradient descent. In
particular, we show that we can efficiently compute the prox-
imity operator of the sum of the ℓ∞-norm and the indicator
function of the space of possible perturbations. This results
in an adversarial attack that significantly outperforms the
DAG attacks [ 39], in addition to several classification attacks
that we were able to adapt for segmentation. We propose a
methodology to evaluate adversarial attacks in segmentation,
and compare the different approaches on the Cityscapes [ 19]
and Pascal VOC 2012 [ 23] datasets with a diverse set of archi-
tectures, including well-known DeepLabV3+ models [ 11],
the recently proposed transformer-based SegFormer [ 40],
and the robust DeepLabV3 DDC-AT model from [41].
Figure 1. Untargeted adversarial examples for FCN HRNetV2 W48 on (a) Pascal VOC 2012 (∥δ∥∞ = 0.67/255) and (b) Cityscapes (∥δ∥∞ = 0.53/255). In both cases, more than 99% of pixels are incorrectly classified. For each dataset, left is the original image and its predicted segmentation, middle is the amplified perturbation and the ground-truth segmentation, and right is the adversarial image with its predicted segmentation. For Pascal VOC 2012, the predicted classes are TV monitor (blue), person (beige), and chair (bright red).
The proposed approach yields outstanding performances for all
models and datasets considered in our experiments. For in-
stance, our attack finds untargeted adversarial perturbations
withℓ∞-norms lower than 1/255on average for all models on
Cityscapes. With this attacker, we provide better means to as-
sess the robustness of deep segmentation models, which has
often been overestimated until now, as the existing attacks
could only find adversarial examples with larger norms.
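For intuition about the proximal step, the sketch below evaluates the proximity operator of λ∥·∥∞ via the Moreau decomposition, prox_{λ∥·∥∞}(v) = v − λ P_{B1}(v/λ), using the standard sort-based projection onto the ℓ1 ball. It covers only the ℓ∞ term; the combined prox with the indicator of the feasible (clipped) perturbation set used by the attack requires additional care that is not reproduced here.

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of a flat vector onto the l1 ball of given radius."""
    u = np.abs(v)
    if u.sum() <= radius:
        return v.copy()
    s = np.sort(u)[::-1]                    # magnitudes sorted in descending order
    cssv = np.cumsum(s) - radius
    idx = np.arange(1, len(s) + 1)
    rho = np.nonzero(s * idx > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.sign(v) * np.maximum(u - theta, 0.0)

def prox_linf(v, lam):
    """prox of lam * ||.||_inf via Moreau decomposition: v - lam * P_l1(v / lam)."""
    return v - lam * project_l1_ball(v / lam, radius=1.0)

# Example: the prox shrinks the largest-magnitude entries of a perturbation,
# reducing its l_inf norm while leaving smaller entries untouched.
delta = np.array([0.9, -0.8, 0.1])
print(prox_linf(delta, lam=0.3))   # approximately [0.7, -0.7, 0.1]
```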
|
Siddiqui_Panoptic_Lifting_for_3D_Scene_Understanding_With_Neural_Fields_CVPR_2023
|
Abstract
We propose Panoptic Lifting, a novel approach for learn-
ing panoptic 3D volumetric representations from images of
in-the-wild scenes. Once trained, our model can render
color images together with 3D-consistent panoptic segmen-
tation from novel viewpoints. Unlike existing approaches
which use 3D input directly or indirectly, our method re-
quires only machine-generated 2D panoptic segmentation
masks inferred from a pre-trained network. Our core con-
tribution is a panoptic lifting scheme based on a neural field
representation that generates a unified and multi-view con-
sistent, 3D panoptic representation of the scene. To ac-
count for inconsistencies of 2D instance identifiers across
views, we solve a linear assignment with a cost based on
the model’s current predictions and the machine-generated
segmentation masks, thus enabling us to lift 2D instances
to 3D in a consistent way. We further propose and ablate
contributions that make our method more robust to noisy,
machine-generated labels, including test-time augmenta-
tions for confidence estimates, segment consistency loss,
bounded segmentation fields, and gradient stopping. Ex-
perimental results validate our approach on the challeng-
ing Hypersim, Replica, and ScanNet datasets, improving by
8.4, 13.8, and 10.6% in scene-level PQ over state of the art.
|
1. Introduction
Robust panoptic 3D scene understanding models are key
to enabling applications such as VR, robot navigation, and
self-driving. Panoptic image understanding – the
task of segmenting a 2D image into categorical “stuff” areas
and individual “thing” instances – has experienced tremen-
dous progress over the past years. These advances can be at-
tributed to continuously improved model architectures and
the availability of large-scale labeled 2D datasets, leading to
state-of-the-art 2D panoptic segmentation models [6,19,43]
that generalize well to unseen images captured in the wild.
Single-image panoptic segmentation, unfortunately, is
still insufficient for tasks requiring coherency and consis-
tency across multiple views. In fact, panoptic masks often
contain view-specific imperfections and inconsistent clas-
sifications, and single-image 2D models naturally lack the
ability to track unique object identities across views (see
Fig. 2). Ideally, such consistency would stem from a full,
3D understanding of the environment, but lifting machine-
generated 2D panoptic segmentations into a coherent 3D
panoptic scene representation remains a challenging task.
Project page: nihalsid.github.io/panoptic-lifting/
Recent works [11,18,40,45] have addressed panoptic 3D
scene understanding from 2D images by leveraging Neu-
ral Radiance Fields (NeRFs) [22], gathering semantic scene
data from multiple sources. Some works [11, 40] rely on
ground truth 2D and 3D labels, which are expensive and
time-consuming to acquire. The work of Kundu et al. [18]
instead exploits machine-generated 3D bounding box detec-
tion and tracking together with 2D semantic segmentation,
both computed using off-the-shelf models. However, this
approach is limited by the fact that 3D detection models,
when compared to 2D panoptic segmentation ones, strug-
gle to generalize beyond the data they were trained on. This
is in large part due to the large difference in scale between
2D and 3D training datasets. Furthermore, dependence on
multiple pre-trained models increases complexity and intro-
duces potentially compounding sources of error.
In this work we introduce Panoptic Lifting, a novel for-
mulation which represents a static 3D scene as a panoptic
radiance field (see Sec. 3.2). Panoptic Lifting supports ap-
plications like novel panoptic view synthesis and scene edit-
ing, while maintaining robustness to a variety of diverse in-
put data. Our model is trained from only 2D posed images
and corresponding, machine-generated panoptic segmenta-
tion masks, and can render color, depth, semantics, and 3D-
consistent instance information for novel views of the scene.
Starting from a TensoRF [4] architecture that encodes
density and view-dependent color information, we intro-
duce lightweight output heads for learning semantic and
instance fields. The semantic field, represented as a small
MLP, is directly supervised with the machine-generated 2D
labels. An additional segment consistency loss guides this
supervision to avoid optima that fragment objects in the
presence of label inconsistencies. The 3D instance field
is modelled by a separate MLP, holding a fixed number
of class-agnostic, 3D-consistent surrogate object identifiers.
Losses for both the fields are weighted by confidence esti-
mates obtained by test-time augmentation on the 2D panop-
tic segmentation model. Finally, we discuss specific tech-
niques, e.g. bounded segmentation fields and stopping
semantics-to-geometry gradients (see Sec. 3.3), to further
limit inconsistent segmentations.
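The instance-lifting step described above reduces, per view, to a linear assignment between the K surrogate identifiers and the M machine-generated 2D instance masks. A minimal sketch using SciPy's Hungarian solver is shown below; the negative soft-IoU cost, the array shapes, and the function name are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_surrogates_to_masks(pred_probs, gt_masks):
    """Match rendered surrogate-instance probabilities to 2D instance masks.

    pred_probs: (K, N) per-ray probability of each of K surrogate identifiers
                rendered for the N sampled rays of one view.
    gt_masks:   (M, N) binary machine-generated instance masks for that view.
    Returns (surrogate_idx, mask_idx) giving the optimal one-to-one assignment.
    """
    inter = gt_masks @ pred_probs.T                                    # (M, K)
    union = gt_masks.sum(1, keepdims=True) + pred_probs.sum(1) - inter
    cost = -(inter / np.maximum(union, 1e-6))                          # negative soft IoU
    mask_idx, surrogate_idx = linear_sum_assignment(cost)
    return surrogate_idx, mask_idx
```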
In summary, our contributions are:
• A novel approach to panoptic radiance field represen-
tation that models the radiance, semantic class and in-
stance id for each point in the space for a scene by
directly lifting machine-generated 2D panoptic labels.
• A robust formulation to handle inherent noise and inconsistencies
in machine-generated labels, resulting in
clean, coherent, and view-consistent panoptic segmentations
from novel views, across diverse data.
Figure 2. Predictions from state-of-the-art 2D panoptic segmen-
tation methods such as Mask2Former [6] are typically noisy and
inconsistent when compared across views of the same scene. Typ-
ical failure modes include conflicting labels (e.g. sofa predicted
as a bed above) and segmentations (e.g. labeled void above). Fur-
thermore, instance identities are not preserved across frames (rep-
resented as different colors).
|
Schiappa_A_Large-Scale_Robustness_Analysis_of_Video_Action_Recognition_Models_CVPR_2023
|
Abstract
We have seen great progress in video action recognition
in recent years. There are several models based on convolutional
neural networks (CNNs) and some recent transformer-based
approaches that provide top performance on existing
benchmarks. In this work, we perform a large-scale
robustness analysis of these existing models for video ac-
tion recognition. We focus on robustness against real-world
distribution shift perturbations instead of adversarial per-
turbations. We propose four different benchmark datasets,
HMDB51-P, UCF101-P, Kinetics400-P, and SSv2-P, to perform
this analysis. We study the robustness of six state-of-the-art
action recognition models against 90 different perturbations.
The study reveals some interesting findings: 1) transformer-based
models are consistently more robust than CNN-based
models; 2) pretraining improves robustness more for transformer-based
models than for CNN-based models; and 3)
all of the studied models are robust to temporal perturbations
for all datasets but SSv2, suggesting that the importance of
temporal information for action recognition varies based on
the dataset and activities. Next, we study the role of augmen-
tations in model robustness and present a real-world dataset,
UCF101-DS , which contains realistic distribution shifts, to
further validate some of these findings. We believe this study
will serve as a benchmark for future research in robust video
action recognition1.
|
1. Introduction
Robustness of deep learning models against real-world
distribution shifts is crucial for various applications in vision,
such as medicine [4], autonomous driving [41], environment
monitoring [60], conversational systems [36], robotics [68]
and assistive technologies [5]. Distribution shifts with re-
spect to training data can occur due to the variations in en-
vironment such as changes in geographical locations, back-
ground, lighting, camera models, object scale, orientations,
1More details available at bit.ly/3TJLMUF .
Figure 1. Performance against robustness of action recognition
models on UCF101-P. y-axis: relative robustness γr; x-axis: accuracy
on clean videos. Model names appended with P indicate
pre-training, and circle size indicates FLOPs.
motion patterns, etc. Such distribution shifts can cause the
models to fail when deployed in a real world settings [26].
For example, an AI ball tracker that replaced human camera
operators was recently deployed in a soccer game and repeat-
edly confused a soccer ball with the bald head of a lineman,
leading to a bad experience for viewers [13].
Robustness has been an active research topic due to its
importance for real-world applications [4, 41, 60]. However,
most of the effort is directed towards images [7, 25, 26].
Video is a natural form of input to the vision systems that
function in the real world. Therefore studying robustness
in videos is an important step towards developing reliable
systems for real world deployment. In this work we perform
a large-scale analysis on robustness of existing deep models
for video action recognition against common real world
spatial and temporal distribution shifts.
Video action recognition provides an important test sce-
nario to study robustness in videos given there are sufficient,
large benchmark datasets and well developed deep learning
models. Although the existing approaches have made im-
pressive progress in action recognition, there are several fun-
damental questions that still remain unanswered in the field.
Do these approaches enable effective temporal modeling, the
crux of the matter for action recognition approaches? Are
these approaches robust to real-world corruptions like noise
(temporally consistent and inconsistent), blurring effects,
etc? Do we really need heavy architectures for robustness,
or are light-weight models good enough? Are the recently
introduced transformer-based models, which give state of
the art accuracy on video datasets, more robust? Does pre-
training play a role in model robustness? This study aims at
finding answers to some of these critical questions.
Towards this goal, we present multiple benchmark
datasets to conduct robustness analysis in video action
recognition. We utilize four different widely used action
recognition datasets including HMDB51 [34], UCF101
[54], Kinetics-400 [8], and SSv2 [43], and propose
four corresponding benchmarks: HMDB51-P, UCF101-P,
Kinetics400-P, and SSv2-P. In order to create this benchmark,
we introduce 90 different common perturbations, which include
20 noise corruptions, 15 blur perturbations,
15 digital perturbations, 25 temporal perturbations, and 15
camera motion perturbations. The study
covers 6 different deep architectures considering different
aspects, such as network size (small vs large), network archi-
tecture (CNN vs Transformers), and network depth (shallow
vs deep).
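As a concrete example of the kind of perturbation included in the benchmark, the sketch below corrupts a video clip with Gaussian noise in either a temporally consistent or a temporally inconsistent way. The severity-to-sigma mapping and the array conventions are illustrative assumptions, not the benchmark's exact parameters.

```python
import numpy as np

def add_gaussian_noise(video, severity=3, temporally_consistent=True, seed=0):
    """Corrupt a video clip of shape (T, H, W, C) with values in [0, 1].

    With temporally consistent noise, one noise pattern is shared by every
    frame; otherwise each frame receives independent noise.
    """
    rng = np.random.default_rng(seed)
    sigma = [0.02, 0.04, 0.08, 0.12, 0.18][severity - 1]   # illustrative severity scale
    if temporally_consistent:
        noise = rng.normal(0.0, sigma, size=video.shape[1:])   # one (H, W, C) pattern
        noise = np.broadcast_to(noise, video.shape)
    else:
        noise = rng.normal(0.0, sigma, size=video.shape)        # per-frame noise
    return np.clip(video + noise, 0.0, 1.0)
```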
This study reveals several interesting findings about ac-
tion recognition models. We observe that recent transformer
based models are not only better in performance, but they
are also more robust than CNN models against most distri-
bution shifts (Figure 1). We also observe that pretraining
is more beneficial to transformers compared to CNN based
models in robustness. We find that all the models are very
robust against temporal perturbations, with only a minor drop in
performance on Kinetics, UCF, and HMDB. On
the SSv2 dataset, however, the models behave differently, and
performance drops under various temporal perturbations.
These observations show interesting phenomena about the
video action recognition datasets, i.e., the importance of tem-
poral information varies based on the dataset and activities .
Next, we study the role of training with data augmenta-
tions in model robustness and analyze the generalization of
these techniques to novel perturbations. To further study
the capability of such techniques, we propose a real-world
dataset, UCF101-DS , which contains realistic distribution
shifts without simulation. This dataset also helps us to bet-
ter understand the behavior of CNN and Transformer-based
models under realistic scenarios. We believe such findings
will open up many interesting research directions in video
action recognition and will facilitate future research on video
robustness which will lead to more robust architectures for
real-world deployment.
We make the following contributions in this study:
•A large-scale robustness analysis of video action recog-
nition models to different real-world distribution shifts.
•Provide insights including a comparison of transformer vs.
of temporal perturbations on video robustness.
•Four large-scale benchmark datasets to study robustness
for video action recognition along with a real-world
dataset with realistic distribution shifts.
|
Saragadam_WIRE_Wavelet_Implicit_Neural_Representations_CVPR_2023
|
Abstract
Implicit neural representations (INRs) have recently ad-
vanced numerous vision-related areas. INR performance
depends strongly on the choice of activation function em-
ployed in its MLP network. A wide range of nonlinearities
have been explored, but, unfortunately, current INRs de-
signed to have high accuracy also suffer from poor robust-
ness (to signal noise, parameter variation, etc.). Inspired by
harmonic analysis, we develop a new, highly accurate and
robust INR that does not exhibit this tradeoff. Our Wavelet
Implicit neural REpresentation (WIRE) uses as its ac-
tivation function the complex Gabor wavelet that is well-
known to be optimally concentrated in space–frequency and
to have excellent biases for representing images. A wide
range of experiments (image denoising, image inpainting,
super-resolution, computed tomography reconstruction, im-
age overfitting, and novel view synthesis with neural radi-
ance fields) demonstrate that WIRE defines the new state of
the art in INR accuracy, training time, and robustness.
|
1. Introduction
Implicit neural representations (INRs), which learn a
continuous function over a set of points, have emerged as
a promising general-purpose signal processing framework.
An INR consists of a multilayer perceptron (MLP) com-
bining linear layers and element-wise nonlinear activation
functions. Thanks to the MLP, INRs do not share the local-
ity biases that limit the performance of convolutional neural
networks (CNNs). Consequently, INRs have advanced the
state of the art in numerous vision-related areas, including
computer graphics [22, 27, 28], image processing [10], in-
verse problems [42], and signal representations [41].
Currently, INRs still face a number of obstacles that limit
their use. First, for applications with high-dimensional data
such as 3D volumes, training an INR to high accuracy can
still take too long (tens of seconds) for real time applica-
tions. Second, INRs are not robust to signal noise or insuffi-
cient measurements. Indeed, most works on INRs in the lit-
Figure 1. Wavelet implicit neural representation (WIRE). We
propose a new nonlinearity for implicit neural representations
(INRs) based on the continuous complex Gabor wavelet that has
high representation capacity for visual signals. The top row shows
two commonly used nonlinearities: SIREN with sinusoidal non-
linearity and Gaussian nonlinearity, and WIRE that uses a contin-
uous complex Gabor wavelet. WIRE benefits from the frequency
compactness of sine, and spatial compactness of a Gaussian non-
linearity. The bottom row shows error maps for approximating
an image with strong edges. SIREN results in global ringing arti-
facts while Gaussian nonlinearity leads to compact but large error
at edges. WIRE produces results with the smallest and most spa-
tially compact error. This enables WIRE to learn representations
accurately, while being robust to noise and undersampling of data.
erature assume virtually no signal noise and large amounts
of data. We find in our own experiments that current INR
methods are ineffective for tasks such as denoising or super-
resolution. Finally, INRs still have room for improvement
in representational accuracy for fine details.
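For reference, the complex Gabor wavelet activation described in the abstract and in Figure 1 can be written in a few lines. Treating ω and s as fixed hyperparameters is a simplification, and the values shown are illustrative rather than the paper's settings.

```python
import torch
import torch.nn as nn

class GaborActivation(nn.Module):
    """Complex Gabor wavelet nonlinearity: psi(x) = exp(j*omega*x) * exp(-|s*x|^2).

    omega sets the frequency of the complex sinusoid and s the width of the
    Gaussian envelope, giving the space-frequency compactness discussed above.
    """
    def __init__(self, omega=20.0, scale=10.0):
        super().__init__()
        self.omega = omega
        self.scale = scale

    def forward(self, x):
        # x may be real (first layer) or complex (subsequent layers).
        return torch.exp(1j * self.omega * x) * torch.exp(-(self.scale * torch.abs(x)) ** 2)
```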
In this paper, we develop a new, faster, more accurate,
and robust INR that addresses these issues and takes INR
performance to the next level. To achieve this, we take in-
spiration from harmonic analysis and reconsider the nonlin-
ear activation function used in the MLP. Recent work has
shown that an INR can be interpreted as a structured signal
|
Ren_TinyMIM_An_Empirical_Study_of_Distilling_MIM_Pre-Trained_Models_CVPR_2023
|
Abstract
Masked image modeling (MIM) performs strongly in pre-
training large vision Transformers (ViTs). However, small
models that are critical for real-world applications can-
not or only marginally benefit from this pre-training ap-
proach. In this paper, we explore distillation techniques to
transfer the success of large MIM-based pre-trained mod-
els to smaller ones. We systematically study different op-
tions in the distillation framework, including distilling tar-
gets, losses, input, network regularization, sequential dis-
tillation, etc, revealing that: 1) Distilling token relations
is more effective than CLS token- and feature-based distil-
lation; 2) Using an intermediate layer of the teacher network as
the target performs better than using the last layer when
the depth of the student mismatches that of the teacher;
3) Weak regularization is preferred; etc. With these find-
ings, we achieve significant fine-tuning accuracy improve-
ments over the scratch MIM pre-training on ImageNet-1K
classification, using all the ViT-Tiny, ViT-Small, and ViT-
base models, with +4.2%/+2.4%/+1.4% gains, respectively.
Our TinyMIM model of base size achieves 52.2 mIoU in
ADE20K semantic segmentation, which is +4.1 higher than
the MAE baseline. Our TinyMIM model of tiny size achieves
79.6% top-1 accuracy on ImageNet-1K image classifica-
tion, which sets a new record for small vision models of
the same size and computation budget. This strong perfor-
mance suggests an alternative way for developing small
vision Transformer models, that is, by exploring better train-
ing methods rather than introducing inductive biases into
architectures as in most previous works. Code is available
at https://github.com/OliverRensu/TinyMIM.
|
1. Introduction
Masked image modeling (MIM), which masks a large
portion of the image area and trains a network to recover
the original signals for the masked area, has proven to be a
very effective self-supervised method for pre-training vision
Transformers [2, 11, 17, 49]. Thanks to its strong fine-tuning
performance, MIM has now been a main-stream pre-training
Figure 1. Comparison among TinyMIM (ours), MAE [17], and training from scratch using ViT-T, -S, and -B on ImageNet-1K. We report top-1 accuracy. We adopt DeiT [41] when training from scratch. For the first time, we successfully perform masked image modeling pre-training for smaller ViTs.
Model               Param. (M)   FLOPs (G)   Top-1 (%)   mIoU
DeiT-T [41] 5.5 1.3 72.2 38.0
PVT-T [43] 13.0 1.9 75.1 39.8
CiT-T [36] 5.5 1.3 75.3 38.5
Swin [29] 8.8 1.2 76.9 40.4
EdgeViT-XS [31] 6.4 1.1 77.5 42.1
MobileViTv1-S [30] 4.9 2.0 78.4 42.7
MobileViTv3-S [42] 4.8 1.8 79.3 43.1
TinyMIM⋆-T (Ours) 5.8 1.3 79.6 45.0
Table 1. Comparison with state-of-the-art tiny Transformers with
architecture variants. The parameter counts refer to the backbone,
excluding the last classification layer
in classification or the decoder in segmentation. We report top-1
accuracy on ImageNet-1K classification and mIoU on ADE20K
segmentation.
method for vision Transformers, and numerous follow-ups
have been carried out in this research line, such as study-
ing how to set decoding architectures [21], reconstruction
targets [10, 32, 45, 59], etc., as well as revealing its proper-
ties [46, 48, 50].
Method ViT-T ViT-S ViT-B ViT-L
Scratch 72.2 79.9 81.2 82.6
MAE 71.6 80.6 83.6 85.9
Gap -0.6 +0.7 +2.4 +3.3
Table 2. Comparison between MAE pre-trained ViTs and ViTs
trained from scratch by using ViT-T, -S, -B and -L on ImageNet-
1K. We adopt DeiT when training from scratch. We report top-1
accuracy. As model size shrinks, the superiority of MAE gradually
vanishes. MAE even hurts the performance of ViT-T.
However, as shown in Table 2, MIM pre-training [17]
is mainly effective for relatively large models. When the model
size is as small as ViT-Tiny (5 million parameters), which
is critical for real-world applications, MIM pre-training can
even hurt the fine-tuning accuracy on ImageNet-1K classifi-
cation. In fact, the accuracy drops by -0.6 compared to the
counterpart trained from scratch. This raises a question: can
small models also benefit from MIM pre-training, and how
can this be achieved?
In addition, existing studies on small vision Transformers
mainly focus on introducing certain inductive biases into
the architecture design [6, 22, 30, 31]. The additional architectural
inductive biases facilitate optimization yet limit the
expressive capacity. It is natural to ask whether we can boost
plain small vision Transformers to perform just as well.
In this work, we present TinyMIM, which answers the
above questions. Instead of directly training small ViT mod-
els using a MIM pretext task, TinyMIM uses distillation
technology [20] to transfer the knowledge of larger MIM
pre-trained models to smaller ones. Distillation endows the
nice properties of larger MIM pre-trained models to smaller
ones while avoiding solving a “too” difficult MIM task. Not-
ing that knowledge distillation has been well developed,
especially for supervised models [15], our main work is to
systematically study for the first time the effects of different
design options in a distillation framework when using MIM
pre-trained models as teachers. Specifically, we consider dis-
tillation targets, data augmentation, network regularization,
auxiliary losses, macro distillation strategy, etc., and draw
several useful findings:
•Distillation targets. There are two main findings related
to distillation targets: 1) Distilling token relations
is more effective than distilling the CLS token and feature
maps (a minimal sketch of a token-relation distillation loss follows this list). 2) Using intermediate layers as the target
may perform better than using the last layer, and the
optimal target layer for different down-stream tasks,
e.g., classification and segmentation, can be different.
•Data and network regularization. Weak augmentation
and regularization are preferred: 1) The performance of
using a masked image is worse than using the original image; 2) A relatively small drop path rate (0 for teacher
and 0.1 for student) performs best.
•Auxiliary losses. We find that an auxiliary MIM loss
does not improve fine-tuning accuracy.
•Macro distillation strategy. We find that using a sequential
distillation strategy, i.e., “ViT-B →ViT-S →
ViT-T”, performs better than distilling directly from
ViT-B to ViT-T.
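As referenced in the first bullet above, here is a minimal sketch of one way to distill token relations: the patch tokens of both models are turned into softmax-normalized token-to-token similarity matrices, and a soft cross-entropy aligns the student's relations with the teacher's. The specific relation definition, temperature, and tensor shapes are assumptions; the paper's exact relation targets may differ.

```python
import torch
import torch.nn.functional as F

def token_relation_distill_loss(student_tokens, teacher_tokens, tau=1.0):
    """Distill token-to-token relations instead of raw features.

    student_tokens / teacher_tokens: (B, N, C) patch tokens from a chosen layer
    of each model (the teacher layer may be an intermediate one).
    """
    def relation(tokens):
        t = F.normalize(tokens, dim=-1)
        sim = torch.bmm(t, t.transpose(1, 2)) / tau        # (B, N, N) similarity
        return F.log_softmax(sim, dim=-1)

    s_rel = relation(student_tokens)
    with torch.no_grad():
        t_rel = relation(teacher_tokens).exp()             # teacher relation probabilities
    # Soft cross-entropy between teacher and student relation distributions.
    return -(t_rel * s_rel).sum(-1).mean()
```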
By selecting the best framework options, we achieve sig-
nificant fine-tuning accuracy improvements over the direct
MIM pre-training on ImageNet-1K classification, using ViT
models of different sizes, as shown in Figure 1. Specifi-
cally, the gains of TinyMIM on the ViT-Tiny, ViT-Small, and
ViT-base models are +4.2%/+2.4%/+1.4%, respectively.
In particular, our TinyMIM⋆-T model with knowledge
distillation during fine-tuning achieves a top-1 accuracy
of 79.6% on ImageNet-1K classification (see Table 1), which
performs better than all previous works that develop small
vision Transformer models by introducing architectural in-
ductive biases or smaller feature resolutions. It sets a new
accuracy record using similar model size and computation
budget. On ADE20K semantic segmentation, TinyMIM⋆-T
achieves 45.0 mIoU, which is +1.9 higher than the second
best method, MobileViTv3-S [42]. The strong fine-tuning
accuracy by TinyMIM⋆-T suggests an alternative way for
developing small vision Transformer models, that is, by
exploring better training methods rather than introducing
inductive biases into architectures as most previous works
have done.
|
Shen_Self-Supervised_3D_Scene_Flow_Estimation_Guided_by_Superpoints_CVPR_2023
|
Abstract
3D scene flow estimation aims to estimate point-wise
motions between two consecutive frames of point clouds.
Superpoints, i.e., points with similar geometric features,
are usually employed to capture similar motions of local
regions in 3D scenes for scene flow estimation. However,
in existing methods, superpoints are generated with the
offline clustering methods, which cannot characterize local
regions with similar motions for complex 3D scenes well,
leading to inaccurate scene flow estimation. To this end,
we propose an iterative end-to-end superpoint based scene
flow estimation framework, where the superpoints can be
dynamically updated to guide the point-level flow predic-
tion. Specifically, our framework consists of a flow guided
superpoint generation module and a superpoint guided flow
refinement module. In our superpoint generation module,
we utilize the bidirectional flow information at the previ-
ous iteration to obtain the matching points of points and
superpoint centers for soft point-to-superpoint association
construction, in which the superpoints are generated for
pairwise point clouds. With the generated superpoints,
we first reconstruct the flow for each point by adaptively
aggregating the superpoint-level flow, and then encode
the consistency between the reconstructed flow of pairwise
point clouds. Finally, we feed the consistency encod-
ing along with the reconstructed flow into GRU to refine
point-level flow. Extensive experiments on several different
datasets show that our method can achieve promising
performance. Code is available at https://github.
com/supersyq/SPFlowNet .
|
1. Introduction
Scene flow estimation is one of the vital components
of numerous applications such as 3D reconstruction [10],
Figure 1. Comparison with other clustering-based methods.
(a) Other clustering based methods utilize offline clustering
algorithms to split the point clouds into some fixed superpoints
for subsequent flow refinement, which is not learnable. (b)
Our method embeds the differentiable clustering (superpoint
generation) into our pipeline and generates dynamic superpoints at
each iteration. We visualize part of the scene in FlyingThings3D
[38] for better visualization. Different colors indicate different
superpoints and red lines indicate the ground truth flow.
autonomous driving [37], and motion segmentation [2].
Estimating scene flow from stereo videos and RGB-D
images has been studied for many years [17, 19]. Recently,
with the rapid development of 3D sensors, estimating scene
flow from two consecutive point clouds has receiving more
and more attention. However, due to the irregularity and
sparsity of point clouds, scene flow estimation from point
clouds is still a challenging problem in real scenes.
In recent years, many 3D scene flow estimation methods
have been proposed [11, 34, 37, 56, 57, 59]. Most of
these methods [34, 56] rely on dense ground truth scene
flow as supervision for model training. However, col-
lecting point-wise scene flow annotations is expensive and
time-consuming. To avoid the expensive point-level an-
notations, some efforts have been dedicated to weakly-
supervised and self-supervised scene flow estimation [9,
23, 46, 60]. For example, Rigid3DSceneFlow [9]
and LiDARSceneFlow [7] both propose weakly-supervised
scene flow estimation frameworks, which take only the ego-
motion and background masks as inputs. In particular, they
utilize the DBSCAN clustering algorithm [8] to segment
the foreground points into local regions with flow rigid-
ity constraints. In addition, RigidFlow [31] first utilizes
the off-line oversegmentation method [32] to decompose
the source point clouds into some fixed supervoxels, and
then estimates the rigid transformations for supervoxels as
pseudo scene flow labels for model training. In summary,
these clustering based methods utilize offline clustering
algorithms with hand-crafted features (i.e., coordinates and
normals) to generate the superpoints and use the consistent
flow constraints on these fixed superpoints for scene flow
estimation. However, for some complex scenes, the offline
clustering methods may cluster points with different flow
patterns into the same superpoints. Figure 1(a) shows that
[32] falsely clusters points with entirely different flows
into the same superpoint, colored in purple (highlighted
by the dotted circle). Thus, applying flow constraints to
the incorrect and fixed superpoints for flow estimation will
mislead the model to generate false flow results.
To address this issue, we propose an iterative end-to-
end superpoint guided scene flow estimation framework
(dubbed as “SPFlowNet”), which consists of an online su-
perpoint generation module and a flow refinement module.
Our pipeline jointly optimizes the flow guided superpoint
generation and superpoint guided flow refinement for more
accurate flow prediction (Figure 1(b)). Specifically, we
first utilize farthest point sampling (FPS) to obtain the
initial superpoint centers, including the coordinate, flow,
and feature information. Then, we use the superpoint-level
and point-level flow information in the previous iteration to
obtain the matching points of points and superpoint centers.
With the pairs of points and superpoint centers, we can learn
the soft point-to-superpoint association map. And we utilize
the association map to adaptively aggregate the coordinates,
features, and flow values of points for superpoint center
updating. Next, based on the updated superpoint-wise
flow values, we reconstruct the flow of each point via
the generated association map. Furthermore, we encode
the consistency between the reconstructed flow of pairwise
point clouds. Finally, we feed the reconstructed flow along
with the consistency encoding into a gated recurrent unit
to refine the point-level flow. Extensive experiments on
several benchmarks show that our approach achieves state-
of-the-art performance.
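To make the association step concrete, the sketch below builds a soft point-to-superpoint association map from combined spatial and feature distances and uses it to reconstruct per-point flow from superpoint-level flow. The distance combination, temperature, and shapes are illustrative assumptions rather than the exact formulation in the paper.

```python
import torch

def reconstruct_point_flow(points, sp_centers, sp_flow, point_feat, sp_feat, tau=0.1):
    """Soft point-to-superpoint association and per-point flow reconstruction.

    points:     (N, 3) point coordinates
    sp_centers: (M, 3) superpoint center coordinates
    sp_flow:    (M, 3) superpoint-level flow
    point_feat / sp_feat: (N, C) / (M, C) features
    """
    d_xyz = torch.cdist(points, sp_centers)                  # (N, M) spatial distances
    d_feat = torch.cdist(point_feat, sp_feat)                 # (N, M) feature distances
    assoc = torch.softmax(-(d_xyz + d_feat) / tau, dim=1)     # soft association map
    point_flow = assoc @ sp_flow                               # (N, 3) reconstructed flow
    return point_flow, assoc
```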
Our main contributions are summarized as follows:
We propose a novel end-to-end self-supervised scene
flow estimation framework, which iteratively gener-
ates dynamic superpoints with similar flow patterns
and refines the point-level flow with the superpoints.
Different from other offline clustering based methods,
we embed the online clustering into our model to dynamically segment point clouds with the guidance
from pseudo flow labels generated at the last iteration.
A superpoint guided flow refinement layer is intro-
duced to refine the point-wise flow with superpoint-
level flow information, where the superpoint-wise flow
patterns are adaptively aggregated into the point-level
with the learned association map.
Our self-supervised scene flow estimation method out-
performs state-of-the-art methods by a large margin.
|
Sarkar_Parameter_Efficient_Local_Implicit_Image_Function_Network_for_Face_Segmentation_CVPR_2023
|
Abstract
Face parsing is defined as the per-pixel labeling of im-
ages containing human faces. The labels are defined to
identify key facial regions like eyes, lips, nose, hair, etc.
In this work, we make use of the structural consistency
of the human face to propose a lightweight face-parsing
method using a Local Implicit Function network, FP-LIIF.
We propose a simple architecture having a convolutional
encoder and a pixel MLP decoder that uses 1/26th the number
of parameters of state-of-the-art models
and yet matches or outperforms state-of-the-art models on
multiple datasets, like CelebAMask-HQ and LaPa. We do
not use any pretraining, and compared to other works, our
network can also generate segmentation at different reso-
lutions without any changes in the input resolution. This
work enables the use of facial segmentation on low-compute
or low-bandwidth devices because of its higher FPS and
smaller model size.
|
1. Introduction
Face parsing is the task of assigning pixel-wise labels
to a face image to distinguish various parts of a face, like
eyes, nose, lips, ears, etc. This segregation of a face image
enables many use cases, such as face image editing [20, 46,
57], face e-beautification [37], face swapping [16, 35, 36],
face completion [23].
Since the advent of semantic segmentation through the
use of deep convolutional networks [31], a multitude of
research has investigated face parsing as a segmentation
problem through the use of fully convolutional networks
[13, 14, 25, 26, 28, 29]. In order to achieve better results,
some methods [14, 28] make use of conditional random
fields (CRFs), in addition to CNNs. Other methods [25,27],
Figure 1. The simple architecture of the Local Implicit Image Function-based
FP-LIIF: a light convolutional encoder of modified
resblocks followed by a pixel-only MLP decoder.
focus on a two-step approach that predicts bounding boxes
of facial regions (nose, eyes, hair. etc.) followed by
segmentation within the extracted regions. Later works
like AGRNET [48] and EAGR [49] claim that earlier ap-
proaches do not model the relationship between facial com-
ponents and that a graph-based system can model these
statistics, leading to more accurate segmentation.
In more recent research, works such as FaRL [59] in-
vestigate pretraining on a human face captioning dataset.
They pre-train a Vision Transformer (ViT) [8] and finetune
on face parsing datasets and show improvement in com-
parison to pre-training with classification based pre-training
like ImageNet [44], etc., or no pre-training at all. The cur-
rent state-of-the-art model, DML CSR [58], tackles the face
parsing task using multiple concurrent strategies including
multi-task learning, graph convolutional network (GCN),
and cyclic learning. The Multi-task approach handles edge
discovery in addition to face segmentation. The proposed
GCN is used to provide global context instead of an aver-
age pooling layer. Additionally, cyclic learning is carried
out to arrive at an ensemble model and subsequently per-
form self-distillation using the ensemble model in order to
learn in the presence of noisy labels.
In this work, we perform face segmentation by taking
advantage of the consistency seen in human facial struc-
tures. We take our inspiration from various face modeling
works [1, 12, 61] that can reconstruct a 3D model of a face
from 2D face images. These works show it is possible to
create a low-dimensional parametric model of the human
face in 3D. This led us to conclude that 2D modeling of
the human face should also be possible with low dimen-
sion parametric model. Recent approaches, like NeRF [34]
and Siren [47] demonstrated that it is possible to reconstruct
complex 3D and 2D scenes with implicit neural represen-
tation. Many other works [2, 11, 43, 56] demonstrate that
implicit neural representation can also model faces both in
3D and 2D. However, to map 3D and 2D coordinates to
the RGB space, the Nerf [34] and Siren [47] variants of the
models require training a separate network for every scene.
This is different from our needs, one of which is that we
must map an RGB image into label space and require a sin-
gle network for the whole domain. That brings us to another
method known as LIIF [3], which is an acronym for a Lo-
cal Implicit Image Function and is used to perform image
super-resolution. They learn an approximation of a contin-
uous function that can take in any RGB image with low res-
olution and output RGB values at the sub-pixel level. This
allows them to produce an enlarged version of the input im-
age. Thus, given the current success of learning implicit
representations and the fact that the human face could be
modeled using a low-dimension parametric model, we came
to the conclusion that a low parameter count LIIF-inspired
model should learn a mapping from a face image to its la-
bel space or segmentation domain. In order to test this hy-
pothesis, we modify a low-parameter version of EDSR [24]
encoder such that it can preserve details during encoding.
We also modify the MLP decoder to reduce the comput-
ing cost of our decoder. Finally, we generate a probability
distribution in the label space instead of RGB values. We
use the traditional cross-entropy-based losses without any
complicated training mechanisms or loss adaptations. An
overview of the architecture is depicted in Figure 1, and
more details are in Section 3. Even with a parameter count
that is 1/26th of that of DML CSR [58], our model at-
tains state-of-the-art F1 and IoU results for CelebAMask-
HQ [21] and LaPa [29] datasets. Some visualizations of our
outputs are shared in Figure 3 and Figure 4.
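The pixel MLP decoder described above can be sketched as an LIIF-style query head that maps a local encoder feature plus a relative query coordinate to per-pixel class logits, which is what allows labels to be rendered at arbitrary output resolutions. The layer sizes, the 19-class output, and the interface are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PixelLabelDecoder(nn.Module):
    """LIIF-style decoder: an MLP maps (local feature, relative coordinate)
    pairs to per-pixel class logits, so labels can be queried at any resolution."""
    def __init__(self, feat_dim=64, num_classes=19, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, local_feat, rel_coord):
        # local_feat: (B, Q, feat_dim) features of the nearest encoder cell
        # rel_coord:  (B, Q, 2) query coordinate relative to that cell's center
        return self.mlp(torch.cat([local_feat, rel_coord], dim=-1))  # (B, Q, num_classes)
```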
To summarise, our key contributions are as follows:
• We propose an implicit representation-based simple
and lightweight neural architecture for human face se-
mantic segmentation.• We establish new state-of-the-art mean F1 and mean
IoU scores on CelebAMask-HQ [21] and LaPa [29].
• Our proposed model has a parameter count that is 1/26th
or less of that of the previous state-of-the-art
model. Our model’s SOTA configuration achieves an
FPS of 110 compared to DML CSR’s FPS of 76.
|
Shin_Deep_Depth_Estimation_From_Thermal_Image_CVPR_2023
|
Abstract
Robust and accurate geometric understanding against
adverse weather conditions is one of the top-priority conditions
for achieving high-level autonomy in self-driving cars.
However, autonomous driving algorithms relying on the vis-
ible spectrum band are easily impacted by weather and
lighting conditions. A long-wave infrared camera, also
known as a thermal imaging camera, is a potential res-
cue to achieve high-level robustness. However, well-established
large-scale datasets
and public benchmark results are still missing. To this end, in this pa-
per, we first built a large-scale Multi-Spectral Stereo (MS2)
dataset, including stereo RGB, stereo NIR, stereo thermal,
and stereo LiDAR data along with GNSS/IMU informa-
tion. The collected dataset provides about 195K synchro-
nized data pairs taken from city, residential, road, campus,
and suburban areas in the morning, daytime, and nighttime
under clear-sky, cloudy, and rainy conditions. Secondly,
we conduct an exhaustive validation process of monocu-
lar and stereo depth estimation algorithms designed on
visible spectrum bands to benchmark their performance
in the thermal image domain. Lastly, we propose a uni-
fied depth network that effectively bridges monocular depth
and stereo depth tasks from a conditional random field
approach perspective. Our dataset and source code are
available at https://github.com/UkcheolShin/
MS2-MultiSpectralStereoDataset .
|
1. Introduction
Recently, a number of studies have been conducted for accurate and robust geometric understanding in self-driving cars based on widely-used benchmark datasets, such as KITTI [15], DDAD [17], and nuScenes [4]. Modern computer vision algorithms deploy deep neural networks and data-driven machine learning techniques to achieve high-level accuracy, which requires large-scale datasets. However, from the perspective of robustness in the real world, these algorithms mostly rely on visible spectrum images and are easily degraded by weather and lighting conditions.
Figure 1. Depth from thermal images in various environments. (a) Depth from thermal images via unified depth network; (b) RGB (Reference); (c) Thermal image; (d) Depth map. Our proposed network can estimate both monocular and stereo depth maps regardless of whether a single or a stereo thermal image is given, via a unified network architecture. Furthermore, depth estimation results from thermal images show high-level reliability and robustness under day-light, low-light, and rainy conditions.
Therefore, recent works have actively investigated alter-
native sensors such as Near-Infrared (NIR) cameras [39],
LiDARs [16, 51], radars [14, 32], and long-wave infrared
(LWIR) cameras [35, 45] to achieve reliable and robust
geometric understanding in adverse conditions. Among
these alternative and complementary sensors, the LWIR camera (i.e., thermal camera) has become more popular be-
cause of its competitive price, adverse weather robustness,
and unique modality information ( i.e., temperature). There-
fore, various thermal image based computer vision solu-
tions [3, 21–23, 27, 35, 45, 47–50] to achieve high-level ro-
bustness have been actively attracting attention recently.
Table 1. Comprehensive comparison of multi-modal datasets. Compared to previous datasets [6, 8, 24, 25, 28, 53], the proposed Multi-Spectral Stereo (MS2) dataset provides about 195K synchronized and rectified multi-spectral stereo sensor data (i.e., RGB, NIR, thermal, LiDAR, and GNSS/IMU data) covering diverse locations (e.g., city, campus, residential, road, and suburban), times (e.g., morning, daytime, and nighttime), and weathers (e.g., clear-sky, cloudy, and rainy).

Dataset | Year | Environment | Platform | Total # of Data Pairs | LiDAR | IMU | RGB (Mono/Stereo) | NIR (Mono/Stereo) | Thermal (Mono/Stereo/RAW) | Weather (Daytime/Nighttime/Rain)
CATS [53] | 2017 | In/Outdoor | Handheld | 1.4K | ✓ | ✓ | ✓/✓ | ✕/✕ | ✓/✓/✓ | ✓/✓/✕
KAIST [6] | 2018 | Outdoor | Vehicle | Unknown | ✓ | ✓ | ✓/✓ | ✕/✕ | ✓/✕/✓ | ✓/✓/✕
ViViD [25] | 2019 | In/Outdoor | Handheld | 5.3K/4.3K | ✓ | ✓ | ✓/✕ | ✕/✕ | ✓/✕/✓ | ✓/✓/✕
MultiSpectralMotion [8] | 2021 | In/Outdoor | Handheld | 121K/27.3K | ✓ | ✓ | ✓/✕ | ✓/✕ | ✓/✕/✓ | ✓/✓/✕
ViViD++ [24] | 2022 | Outdoor | Vehicle | 56K | ✓ | ✓ | ✓/✕ | ✕/✕ | ✓/✕/✓ | ✓/✓/✕
OdomBeyondVision [28] | 2022 | Indoor | Handheld/UGV/UAV | 71K/117K/21K | ✓ | ✓ | ✓/✓ | ✓/✓ | ✓/✕/✓ | ✓/✓/✕
Ours | 2022 | Outdoor | Vehicle | 195K | ✓ | ✓ | ✓/✓ | ✓/✓ | ✓/✓/✓ | ✓/✓/✓
However, a well-established large-scale dataset and public benchmark results are still missing. The publicly available datasets for autonomous driving are overwhelmingly composed of the visible spectrum band (i.e., RGB images) and very rarely include other spectrum bands, such as the NIR and LWIR bands. Especially, despite the advantages of the LWIR band, only a few LWIR datasets have been released recently. However, these datasets are indoor-oriented [8, 25, 28], small-scale [25, 53], publicly unavailable [6], or limited in sensor diversity [6, 24]. Therefore, the need is growing for a large-scale multi-sensor driving dataset to investigate the feasibility and challenges associated with an autonomous driving perception system built on multi-spectral sensors.
The other necessity is thoroughly validating vision ap-
plications on the LWIR band. Estimating a depth map from
monocular and stereo images is one fundamental task for
geometric understanding. Despite numerous recent stud-
ies in depth estimation, these works have mainly focused
on depth estimation using RGB images. However, thermal
images, which typically have lower resolution, less texture,
and more noise than RGB images, could pose a challenge
for stereo-matching algorithms. This means that the perfor-
mances of these previous works in thermal image domains
are uncertain and may not be guaranteed.
To this end, in this paper, we provide a large-scale multi-
spectral dataset along with exhaustive experimental results
and a new perspective of depth unification to encourage ac-
tive research of various geometry algorithms from multi-
spectral data to achieve high-level performance, reliability,
and robustness against hostile conditions. Our contributions
can be summarized as follows:
• We provide a large-scale Multi-Spectral Stereo (MS2)
dataset, including stereo RGB, stereo NIR, stereo ther-
mal, and stereo LiDAR data along with GNSS/IMU
data. Our dataset provides about 195K synchronized
data pairs taken from city, residential, road, campus,
and suburban areas in the morning, daytime, and night-
time under clear-sky, cloudy, and rainy conditions.
• We perform exhaustive validation and investigate whether monocular and stereo depth estimation algorithms originally designed for visible spectral bands work reasonably well in thermal spectral bands.
• We propose a unified depth network that bridges
monocular depth and stereo depth estimation tasks
from the perspective of a conditional random field ap-
proach.
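As a rough illustration of the "unified" idea in the last contribution, the sketch below shows a single network interface that accepts either a monocular thermal image or a stereo pair. The module names, the additive feature fusion, and the PyTorch framing are assumptions made for illustration only; the paper's actual bridging of the two tasks is formulated from a conditional-random-field perspective.
```python
# Hedged sketch of a shared mono/stereo depth interface (not the paper's architecture).
import torch.nn as nn

class UnifiedThermalDepth(nn.Module):
    """One model serving both monocular and stereo thermal depth estimation."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder   # shared feature extractor for thermal images
        self.decoder = decoder   # predicts a dense depth map from (fused) features

    def forward(self, left, right=None):
        feat = self.encoder(left)
        if right is not None:
            # Additive fusion keeps the decoder input identical for the mono and stereo
            # paths; this is an assumption of this sketch, not the paper's CRF-based design.
            feat = feat + self.encoder(right)
        return self.decoder(feat)
```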
|
Ju_Human-Art_A_Versatile_Human-Centric_Dataset_Bridging_Natural_and_Artificial_Scenes_CVPR_2023
|
Abstract
Humans have long been recorded in a variety of forms
since antiquity. For example, sculptures and paintings were
the primary media for depicting human beings before the in-
vention of cameras. However, most current human-centric
computer vision tasks like human pose estimation and hu-
man image generation focus exclusively on natural images
in the real world. Artificial humans, such as those in sculp-
tures, paintings, and cartoons, are commonly neglected,
making existing models fail in these scenarios.
As an abstraction of life, art incorporates humans in both
natural and artificial scenes. We take advantage of it and
introduce the Human-Art dataset to bridge related tasks in
natural and artificial scenarios. Specifically, Human-Art
contains 50k high-quality images with over 123k person instances from 5 natural and 15 artificial scenarios, which
are annotated with bounding boxes, keypoints, self-contact
points, and text information for humans represented in both
2D and 3D. It is, therefore, comprehensive and versatile
for various downstream tasks. We also provide a rich set
of baseline results and detailed analyses for related tasks,
including human detection, 2D and 3D human pose estima-
tion, image generation, and motion transfer. As a challeng-
ing dataset, we hope Human-Art can provide insights for
relevant research and open up new research questions.
|
1. Introduction
" Art is inspired by life but beyond it."
Human-centric computer vision (CV) tasks such as hu-
man detection [39], pose estimation [30], motion trans-
fer [59], and human image generation [35] have been in-
tensively studied and achieved remarkable performances in
the past decade, thanks to the advancement of deep learning
techniques. Most of these works use datasets [14,23,27,39]
that focus on humans in natural scenes captured by cameras
due to the practical demands and easy accessibility.
Beyond being captured by cameras, humans
are frequently present and recorded in various other forms,
from ancient murals on walls to portrait paintings on pa-
per to computer-generated virtual figures in digital form.
However, existing state-of-the-art (SOTA) human detection
and pose estimation models [64, 71] trained on commonly
used datasets such as MSCOCO [27] generally work poorly
on these scenarios. For instance, the average precision of
such models can be as high as 63.2% and 79.8% on natural
scenes but drops significantly to 12.6% and 28.7% on the
sculpture scene. A fundamental reason is the domain gap
between natural and artificial scenes. Also, the scarcity of
datasets with artificial human scenes significantly restricts
the development of tasks such as anime character image
generation [9, 67, 74], character rendering [29], and char-
acter motion retargeting [1, 37, 68] in computer graphics
and other areas. With the growing interest in virtual reality
(VR), augmented reality (AR), and metaverse, this problem
is exacerbated and demands immediate attention.
There are a few small datasets incorporating humans in
artificial environments in the literature. Sketch2Pose [4]
and ClassArch [33] collect images in sketch and vase paint-
ing respectively. Consequently, they are only applicable to
the corresponding context. People-Art [61] is a human de-
tection dataset that consists of 1490 paintings. It covers arti-
ficial scenes in various painting styles, but its categories are
neither mutually exclusive nor collectively comprehensive.
More importantly, the annotation type and image number
in People-Art are limited, and hence this dataset is mainly
used for testing (instead of training) object detectors.
Art presents humans in both natural and artificial scenes
in various forms, e.g., dance, paintings, and sculptures. In
this paper, we take advantage of the classification of vi-
sual arts to introduce Human-Art , a versatile human-centric
dataset, to bridge the gap between natural and artificial
scenes. Human-Art is hierarchically structured and includes
high-quality human scenes in rich scenarios with precise
manual annotations.
Figure 1. Human-Art is a versatile human-centric dataset to bridge the gap between natural and artificial scenes. It includes 20 high-quality scenes, including natural and artificial humans in both 2D representation (yellow dashed boxes) and 3D representation (blue solid boxes).
Specifically, it is composed of 50k
images with about 123k person instances in 20 artistic categories, including 5 natural and 15 artificial scenarios in both
2D and 3D, as shown in Fig. 1. To support both recog-
nition and generation tasks, Human-Art provides precise
manual annotations containing human bounding boxes, 2D
keypoints, self-contact points, and text descriptions. It can
compensate for the lack of scenarios in prior datasets (e.g.,
MSCOCO [27]), link virtual and real worlds, and introduce
new challenges and opportunities for human-centric areas.
Human-Art has the following unique characteristics:
•Rich scenario :Human-Art focuses on scenes missing
in mainstream datasets (e.g., [27]), which covers most
human-related scenarios. Challenging human appear-
ances, diverse contexts, and various poses largely com-
plement the scenario deficiency of existing datasets
and will open up new challenges and opportunities.
•High quality : We guarantee inter-category variabil-
ity and intra-category diversity in style, author, origin,
and age. The 50k images are manually selected from
1,000k carefully collected images using standardized
data collection, filtering, and consolidating processes.
•Versatile annotations :Human-Art provides carefully
manual annotations of 2D human keypoints, human
bounding boxes, and self-contact points to support var-
ious downstream tasks. Also, we provide accessible
text descriptions to enable multi-modality learning.
With Human-Art , we conduct comprehensive experi-
ments and analysis on various downstream tasks including
human detection, human pose estimation, human mesh re-
covery, image generation, and motion transfer. Although
training on Human-Art can lead to a separate 31% and 21%
performance boost on human detection and human pose es-
timation, results demonstrate that human-related CV tasks
still have a long way to go before reaching maturity.
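For concreteness, the sketch below shows what one annotation record combining the label types described above (bounding box, 2D keypoints, self-contact points, and a text description) might look like. All field names and values are hypothetical placeholders for illustration, not the dataset's actual schema.
```python
import numpy as np

# Hypothetical, COCO-style record; every field name and value here is an assumption.
example_annotation = {
    "image_id": 0,
    "scenario": "oil_painting",                  # one of the 20 natural/artificial scenarios
    "bbox": [120.0, 80.0, 64.0, 180.0],          # [x, y, width, height] in pixels
    "keypoints": [130.0, 95.0, 2] * 17,          # (x, y, visibility) per joint, COCO-style
    "self_contact": [[150.0, 210.0]],            # image points where body parts touch
    "caption": "a painted figure leaning on a railing",
}

def to_training_target(ann):
    """Tiny helper (also an assumption) turning one record into (bbox, keypoints) arrays."""
    kpts = np.asarray(ann["keypoints"], dtype=np.float32).reshape(-1, 3)
    return np.asarray(ann["bbox"], dtype=np.float32), kpts
```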
|
Kang_Superclass_Learning_With_Representation_Enhancement_CVPR_2023
|
Abstract
In many real scenarios, data are often divided into a handful of artificial super categories according to expert knowledge rather than the visual representations of images. Concretely, a superclass may contain numerous and diverse raw categories, as in refuse sorting. Due to the lack of common semantic features, existing classification techniques struggle to recognize superclasses without raw class labels; thus, they suffer severe performance degradation or require huge annotation costs. To narrow this
gap, this paper proposes a superclass learning framework,
called SuperClass Learning with Representation Enhance-
ment (SCLRE), to recognize super categories by leverag-
ing enhanced representation. Specifically, by exploiting
the self-attention technique across the batch, SCLRE col-
lapses the boundaries of those raw categories and enhances
the representation of each superclass. On the enhanced
representation space, a superclass-aware decision bound-
ary is then reconstructed. Theoretically, we prove that by
leveraging attention techniques the generalization error of
SCLRE can be bounded under superclass scenarios. Exper-
imentally, extensive results demonstrate that SCLRE outper-
forms the baseline and other contrastive-based methods on
the CIFAR-100 dataset and four high-resolution datasets.
|
1. Introduction
In recent decades, basic-level raw categorization (e.g., cats vs. dogs, apples vs. bananas) has advanced greatly [9, 27], while high-level or super-coarse-grained visual categorization (e.g., recyclable waste vs. kitchen waste, creatures vs. non-creatures) has received little attention. In many real scenarios, there often exist a handful of high-level categories, wherein numerous images from diverse basic-level categories share one common label. We define this kind of super-coarse-grained class as a Superclass.
Figure 1. Illustration of superclass learning. (a) Image Feature Domain; (b) Separate Fine Domain; (c) Superclass Concept Domain. The samples from a same superclass will be scatteredly distributed in the embedding space. The process of superclass learning is to break old domains and construct new domains. Red indicates the superclass of kitchen waste, and blue indicates the superclass of recyclable waste.
Refuse sorting, as an example, is such a recognition problem with four superclasses, i.e., kitchen waste, recyclable waste, hazardous waste, and other waste. One task of refuse sorting is to accurately collect various items, such as rotten fruits, bones, raw vegetables, and eggshells, into kitchen waste. Traditional recognition needs to identify what exact basic-level categories they are, and then sort them out. Obviously, this is wasteful and unrealistic for superclass identification.
Essentially, high-level superclasses have two characteristics remarkably distinct from basic-level classes. First,
the basic-level classes contained in superclass problems are
usually scattered and share few common features. As de-
picted in the top-left corner subgraph of Fig. 1, the fruit
apple, bone, and eggs are remote from each other in fea-
ture spaces, though all of them belong to kitchen waste.
Second, the instances from two distinct superclasses may
share common features. As illustrated in Fig. 1, the fruit apple, from kitchen waste, and the toy apple, from recyclable waste, are close to each other as they share common semantic features. Obviously, the above-mentioned characteristics indicate that the smoothness assumption [27] underlying basic-level classification (nearby images tend to have the same label) does not hold in superclass scenarios. Thus, the existing classification techniques based on the smoothness assumption are not practically deployable or scalable, and they may suffer severe performance degradation in superclass settings. Accordingly, it is valuable and promising to inves-
tigate superclass identification.
To tackle the superclass problems, we have to address
two main challenges. First, we need to break the origi-
nal decision boundaries of basic-level classes and disclose
a superclass-aware boundary at the basic class level. As
depicted in bottom subgraph (a) of Fig. 1, the boundary of
the apple domain is the original one; however, it is useless and even harmful for refuse sorting as both fruit apples and toy
apples belong to this domain. To get the required domain
boundary, the apple domain needs to be separated into the
fruit domain and toy domain by leveraging their individ-
ual local features, as depicted in the bottom subgraph (b)
of Fig. 1. In superclass scenarios, it is not enough to in-
vestigate the boundary at a basic class level. Consequently,
the second challenge is to reconstruct a decision boundary
at a superclass level. To achieve this end, it is necessary to
merge the domain of classes, such as fruit apples, eggs, and
bones into a new rotten superclass domain, as depicted in
the bottom subgraph (c) of Fig. 1.
In this paper, we propose a SuperClass Learning framework with Representation Enhancement. Considering that the semantic representation at the basic class level is not workable for superclass recognition, we propose a cross-instance attention module that captures the representation across instances with the same superclass label. By leveraging a contrastive adjustment loss, the attention mechanism enhances this representation. Moreover, to overcome the imbalanced distribution of superclasses, we adopt a target adjustment loss to reconstruct a superclass-aware decision boundary on the enhanced representation space.
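A minimal sketch of the cross-instance attention idea described above is given below, assuming a PyTorch implementation. It is a single-head attention computed across the instances of a mini-batch; the dimensions, the residual connection, and the omission of the contrastive and target adjustment losses are simplifications, not the paper's exact module.
```python
# Hedged sketch: attention across the batch dimension to enhance per-instance embeddings.
import torch
import torch.nn as nn

class CrossInstanceAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, z):                       # z: (B, dim) per-instance embeddings
        q, k, v = self.q(z), self.k(z), self.v(z)
        # (B, B) attention matrix: every instance attends to every other instance in the batch.
        attn = torch.softmax(q @ k.t() / z.shape[-1] ** 0.5, dim=-1)
        return z + attn @ v                     # residual connection: enhanced representations
```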
In summary, this paper makes the following contribu-
tions:
• We propose an understudied but realistic problem, superclass identification, which has a notably distinct distribution from basic-level categorization.
• We propose a novel representation enhancement method by leveraging cross-instance attention and then exploit it in superclass identification. Through theoretical analyses, we verify that this self-attention technique can bound the generalization error of superclass recognition.
• Extensive experiments demonstrate that SCLRE outperforms SOTA classification techniques on one artificial superclass dataset and three real datasets.
The remainder of the paper is organized as follows: in
Sec. 2 we briefly review related work and in Sec. 3 we de-
scribe our method for superclass recognition. Then, exten-
sive experiments and generalization error analysis are con-
ducted in Secs. 4 and 5. Finally, Sec. 6 draws a brief con-
clusion.
|
Ju_Distilling_Vision-Language_Pre-Training_To_Collaborate_With_Weakly-Supervised_Temporal_Action_Localization_CVPR_2023
|
Abstract
Weakly-supervised temporal action localization (WTAL)
learns to detect and classify action instances with only cat-
egory labels. Most methods widely adopt the off-the-shelf
Classification-Based Pre-training (CBP) to generate video
features for action localization. However, the different optimization objectives of classification and localization make the temporally localized results suffer from a serious incompleteness issue. To tackle this issue without additional annotations, this paper considers distilling free action knowledge from Vision-Language Pre-training (VLP), as we surprisingly observe that the localization results of vanilla VLP have an over-complete issue, which is exactly complementary to the CBP results. To fuse such complementarity, we pro-
pose a novel distillation-collaboration framework with two
branches acting as CBP and VLP respectively. The frame-
work is optimized through a dual-branch alternate training
strategy. Specifically, during the B step, we distill the confi-
dent background pseudo-labels from the CBP branch; while
during the F step, the confident foreground pseudo-labels
are distilled from the VLP branch. As a result, the dual-
branch complementarity is effectively fused to promote one
strong alliance. Extensive experiments and ablation studies
on THUMOS14 and ActivityNet1.2 reveal that our method
significantly outperforms state-of-the-art methods.
|
1. Introduction
Temporal action localization (TAL), which aims to lo-
calize and classify action instances from untrimmed long
videos, has been recognized as an indispensable component
of video understanding [13,72,92]. To avoid laborious tem-
poral boundary annotations, the weakly-supervised setting (WTAL) [31, 62, 64, 78], i.e., only video-level category labels are available, has gained increasing attention.
Figure 1. (A) Complementarity. Most works use Classification-Based Pre-training (CBP) for localization, bringing high TN yet serious FN. Vanilla Vision-Language Pre-training (VLP) confuses action and background, bringing high TP yet serious FP. (B) Our distillation-collaboration framework distills foreground from the VLP branch while background from the CBP branch, and promotes mutual collaboration, bringing satisfactory results.
To date in the literature, almost all WTAL methods rely on Classification-Based Pre-training (CBP) for video fea-
ture extraction [5, 73]. A popular pipeline is to train an ac-
tion classifier with CBP features, then threshold the frame-
level classification probabilities for final localization re-
sults. As demonstrated in Figure 1 (A), these CBP meth-
ods suffer from the serious incompleteness issue, i.e., only
detecting sparse discriminative action frames and incurring
high false negatives . The main reason is that the optimiza-
tion objective of classification pre-training, is to find sev-
eral discriminative frames for action recognition, which is
far from the objective of complete localization. As a re-
sult, features from CBP are inevitably biased towards par-
tial discriminative frames. To solve the incompleteness is-
sue, many attempts [33, 43, 56, 96] have been made, but
most of them are trapped in a ‘performance-cost dilemma’,
namely, solely digging from barren category labels to keep
costs low. Lacking location labels fundamentally limits the
performance, leaving a huge gap from strong supervision.
To jump out of the dilemma, this paper raises one novel
question: is there free action knowledge available, to help
complete detection results while maintaining cheap annotation
overhead at the same time? We naturally turn our sights to
the prevalent Vision-Language Pre-training (VLP) [22, 66].
VLP has demonstrated great success to learn joint visual-
textual representations from large-scale web data. As lan-
guage covers rich information about objects, human-object
interactions, and object-object relationships, these learned
representations could provide powerful human-object co-
occurrence priors: valuable gifts for action localization.
We here take one step towards positively answering the
question, i.e.fill the research gap of distilling action priors
from VLP, namely CLIP [66], to solve the incomplete issue
for WTAL. As illustrated in Figure 1 (A), we first naively
evaluate the temporal localization performance of VLP by
frame-wise classification. But the results are far from satis-
factory, suffering from the serious over-complete issue, i.e.,
confusing multiple action instances into a whole, causing
high false positives . We conjecture the main reasons are:
(1) due to data and computational burden, almost all VLPs
are trained using image-text pairs. Hence, VLP lacks suffi-
cient temporal knowledge and relies more on human-object
co-occurrence for localization, making it struggle to distin-
guish the actions with visually similar background contexts;
(2) some background contexts have similar (confusing) tex-
tual semantics to actions, such as run-up vs. running.
Although simply steering VLP for WTAL is infeasible,
we fortunately observe the complementary property be-
tween CBP and VLP paradigms: the former localizes high
true negatives but serious false negatives, while the latter
has high true positives but serious false positives. To lever-
age the complementarity, as shown in Figure 1 (B), we de-
sign a novel distillation-collaboration framework that uses
two branches to play the roles of CBP and VLP, respec-
tively. The design rationale is to distill background knowl-
edge from the CBP branch, while foreground knowledge
from the VLP branch, for strong alliances. Specifically, we
first warm up the CBP branch using only category super-
vision to initialize confident background frames, and then
optimize the framework via an alternating strategy. Dur-
ing B step , we distill background pseudo-labels for the VLP
branch to solve the over-complete issue, hence obtaining
high-quality foreground pseudo-labels. During F step , we
leverage high-quality pseudo-labels for the CBP branch to
tackle the incomplete issue. Besides, in each step, we intro-
duce both confident knowledge distillation and representa-
tion contrastive learning for pseudo-label denoising, effec-
tively fusing complementarity for better results.
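The alternating B/F optimization described above can be sketched as follows. The branch interfaces, the confidence thresholds, and the binary cross-entropy losses are assumptions made for illustration; the actual framework additionally uses confident knowledge distillation and representation contrastive learning for pseudo-label denoising.
```python
# Hedged sketch of the alternating training strategy (assumes each branch outputs
# per-frame foreground probabilities of shape (B, T) in [0, 1]).
import torch
import torch.nn.functional as F

def alternate_train(cbp_branch, vlp_branch, loader, opt_cbp, opt_vlp,
                    bg_thresh=0.1, fg_thresh=0.9, epochs=10):
    for _ in range(epochs):
        for video_feats, _labels in loader:
            # ---- B step: background pseudo-labels from CBP supervise the VLP branch ----
            with torch.no_grad():
                bg_mask = (cbp_branch(video_feats) < bg_thresh).float()  # confident background
            vlp_fg = vlp_branch(video_feats)
            loss_b = F.binary_cross_entropy(vlp_fg, torch.zeros_like(vlp_fg), weight=bg_mask)
            opt_vlp.zero_grad()
            loss_b.backward()
            opt_vlp.step()

            # ---- F step: foreground pseudo-labels from VLP supervise the CBP branch ----
            with torch.no_grad():
                fg_mask = (vlp_branch(video_feats) > fg_thresh).float()  # confident foreground
            cbp_fg = cbp_branch(video_feats)
            loss_f = F.binary_cross_entropy(cbp_fg, torch.ones_like(cbp_fg), weight=fg_mask)
            opt_cbp.zero_grad()
            loss_f.backward()
            opt_cbp.step()
```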
On two standard benchmarks: THUMOS14 and Activi-
tyNet1.2, our method improves the average performance by
3.5% and 2.7% over state-of-the-art methods. We also con-
duct extensive ablation studies to reveal the effectiveness of
each component, both quantitatively and qualitatively.
To sum up, our contributions are threefold:
• We pioneer the first exploration of distilling free action
knowledge from off-the-shelf VLP to facilitate WTAL;
•We design a novel distillation-collaboration framework
that encourages the CBP branch and VLP branch to comple-
ment each other, by an alternating optimization strategy;
•We conduct extensive experiments and ablation studies
to reveal the significance of distilling VLP and our superior
performance on two public benchmarks.
|
Kang_Distilling_Self-Supervised_Vision_Transformers_for_Weakly-Supervised_Few-Shot_Classification__Segmentation_CVPR_2023
|
Abstract
We address the task of weakly-supervised few-shot im-
age classification and segmentation, by leveraging a Vision
Transformer (ViT) pretrained with self-supervision. Our
proposed method takes token representations from the self-
supervised ViT and leverages their correlations, via self-
attention, to produce classification and segmentation pre-
dictions through separate task heads. Our model is able
to effectively learn to perform classification and segmen-
tation in the absence of pixel-level labels during train-
ing, using only image-level labels. To do this it uses at-
tention maps, created from tokens generated by the self-
supervised ViT backbone, as pixel-level pseudo-labels. We
also explore a practical setup with “mixed” supervision,
where a small number of training images contains ground-
truth pixel-level labels and the remaining images have only
image-level labels. For this mixed setup, we propose to im-
prove the pseudo-labels using a pseudo-label enhancer that
was trained using the available ground-truth pixel-level la-
bels. Experiments on Pascal-5i and COCO-20i demonstrate
significant performance gains in a variety of supervision
settings, and in particular when little-to-no pixel-level la-
bels are available.
|
1. Introduction
Few-shot learning [22, 23, 77] is the problem of learn-
ing to perform a prediction task using only a small number
of examples ( i.e. supports) of the completed task, typically
in the range of 1-10 examples. This problem setting is ap-
pealing for applications where large amounts of annotated
data are expensive or impractical to collect. In computer vi-
sion, few-shot learning has been actively studied for image
classification [24, 36, 46, 66, 70, 85] and semantic segmen-
tation [17, 58, 61, 88, 89]. In this work, we focus on the
combined task of few-shot classification and segmentation
(FS-CS) [33], which aims to jointly predict (i) the presence
of each support class, i.e., multi-label classification, and (ii)
its pixel-level semantic segmentation.
Figure 1. Weakly-supervised few-shot classification and segmentation. In this paper, we explore training few-shot models using little-to-no ground-truth segmentation masks.
Few-shot learners are typically trained either by meta-
learning [24, 56, 63, 71] or by transfer learning with fine-
tuning [11, 12, 16]. Both these paradigms commonly as-
sume the availability of a large-scale and fully-annotated
training dataset. For FS-CS, we need Ground-Truth (GT)
segmentation masks for query and support images during
training. We also need these GT masks for support im-
ages during testing. A natural alternative to collecting such
expensive segmentation masks would be to instead em-
ploy a weaker supervision, e.g. image-level [52,59] or box-
level labels [13, 30], for learning, i.e., to adopt a weakly-
supervised learning approach [5, 41, 43, 51]. Compared to
conventional weakly-supervised learning however, weakly-
supervised few-shot learning has rarely been explored in the
literature and is significantly more challenging. This is be-
cause in few-shot learning, object classes are completely
disjoint between training and testing, resulting in models
which are susceptible to severe over-fitting to the classes
seen during training.
In this work we tackle this challenging scenario, address-
ing weakly-supervised FS-CS where only image-level la-
bels, i.e. class labels, are available ( cf.Fig. 1). Inspired by
recent work by Caron et al. [9], which showed that pixel-
level semantic attention emerges from self-supervised vi-
sion transformers (ViTs) [18], we leverage attention maps
from a frozen self-supervised ViT to generate pseudo-GT
segmentation masks. We train an FS-CS learner on top of
the frozen ViT using the generated pseudo-GT masks as
pixel-level supervision. The FS-CS learner is a small trans-
former network that takes features from the ViT as input
and is trained to predict the pseudo-GT masks. Our com-
plete model thus learns to segment using its own intermedi-
ate byproduct, with no segmentation labeling cost, in a form
of distillation [2, 28, 91, 94] between layers within a model.
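A minimal sketch of how attention maps from a frozen self-supervised ViT can be turned into pseudo-GT masks is shown below. The attention-tensor layout and the simple top-k thresholding rule are assumptions for illustration, not the paper's exact pseudo-label generation procedure.
```python
# Hedged sketch: binarize the CLS-token attention of the ViT's last block into a pseudo-mask.
import torch

@torch.no_grad()
def attention_pseudo_mask(attn, patch_h, patch_w, keep=0.6):
    """attn: (heads, 1 + patch_h*patch_w, 1 + patch_h*patch_w) attention of the last block."""
    cls_to_patch = attn[:, 0, 1:].mean(dim=0)            # average CLS->patch attention over heads
    k = max(1, int(keep * cls_to_patch.numel()))
    thresh = torch.topk(cls_to_patch, k).values.min()    # keep the top-`keep` fraction of patches
    mask = (cls_to_patch >= thresh).float()
    return mask.reshape(patch_h, patch_w)                # pseudo-GT mask at patch resolution
```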
We also explore a practical training setting where a lim-
ited number of training images have both image-level and
pixel-level annotations, while the remaining images have
only image-level labels. For this mixed-supervised setting,
we propose to train an auxiliary mask enhancer that refines
attention maps into pseudo-GT masks. The enhancer is su-
pervised using the small number of available GT masks and
is used to generate the final pseudo-GT masks for the train-
ing images that do not have GT masks.
Lastly, to effectively address this weakly-supervised FS-
CS task, we propose a Classification-Segmentation Trans-
former (CST) architecture. CST takes as input ViT tokens
for query and support images, computes correlations be-
tween them, and then predicts classification and segmen-
tation outputs through separate task heads; the classifica-
tion head is trained with class labels while the segmenta-
tion head is trained with either GT masks or pseudo-GT
masks, depending on availability. Unlike prior work using ViTs that generates predictions for a single task using either the class token [18, 74] or image tokens [50, 65], CST uses each of them for its respective task prediction, which proves ad-
vantageous for both tasks. CST achieves moderate-to-large
performance gains for FS-CS on all three supervision lev-
els (image-level-, mixed-level, and pixel-level), when com-
pared to prior state-of-the-art methods.
Our contributions are the following:
i. We introduce a powerful baseline for few-shot classi-
fication and segmentation (FS-CS) using only image-
level supervision, which leverages localized semantic
information encoded in self-supervised ViTs [9].
ii. We propose a learning setup for FS-CS with mixed su-
pervision, and present an auxiliary mask enhancer to
improve performance compared to image-level super-
vision only.
iii. We design Classification-Segmentation Transformer,
and a multi-task learning objective of classification and
segmentation, which is beneficial for both tasks, and al-
lows for flexibly tuning the degree of supervision.
|
Kang_Meta-Learning_With_a_Geometry-Adaptive_Preconditioner_CVPR_2023
|
Abstract
Model-agnostic meta-learning (MAML) is one of the most
successful meta-learning algorithms. It has a bi-level opti-
mization structure where the outer-loop process learns a
shared initialization and the inner-loop process optimizes
task-specific weights. Although MAML relies on the stan-
dard gradient descent in the inner-loop, recent studies have
shown that controlling the inner-loop’s gradient descent with
a meta-learned preconditioner can be beneficial. Existing
preconditioners, however, cannot simultaneously adapt in
a task-specific and path-dependent way. Additionally, they
do not satisfy the Riemannian metric condition, which can
enable the steepest descent learning with preconditioned
gradient. In this study, we propose Geometry-Adaptive Pre-
conditioned gradient descent (GAP) that can overcome the
limitations in MAML; GAP can efficiently meta-learn a
preconditioner that is dependent on task-specific param-
eters, and its preconditioner can be shown to be a Rieman-
nian metric. Thanks to the two properties, the geometry-
adaptive preconditioner is effective for improving the inner-
loop optimization. Experiment results show that GAP out-
performs the state-of-the-art MAML family and precondi-
tioned gradient descent-MAML (PGD-MAML) family in a
variety of few-shot learning tasks. Code is available at:
https://github.com/Suhyun777/CVPR23-GAP .
|
1. Introduction
Meta-learning, or learning to learn , enables algorithms
to quickly learn new concepts with only a small number
of samples by extracting prior-knowledge known as meta-
knowledge from a variety of tasks and by improving the gen-
eralization capability over the new tasks. Among the meta-
learning algorithms, the category of optimization-based
meta-learning [8, 17, 20, 21, 48] has been gaining popularity
due to its flexible applicability over diverse fields including
robotics [55, 59], medical image analysis [40, 54], language
modeling [37,42], and object detection [46,61]. In particular,
Model-Agnostic Meta-Learning (MAML) [20] is one of the
most prevalent gradient-based meta-learning algorithms.
Many recent studies have improved MAML by adopting
a Preconditioned Gradient Descent (PGD, not to be confused with Projected Gradient Descent [13]) for inner-loop
optimization [34, 36, 44, 49, 53, 57, 66]. In this paper, we
collectively address PGD-based MAML algorithms as the
PGD-MAML family. A PGD is different from the ordinary
gradient descent because it performs a preconditioning on
the gradient using a preconditioning matrix P, also called a
preconditioner . A PGD-MAML algorithm meta-learns not
only the initialization parameter θ0 of the network but also the meta-parameter ϕ of the preconditioner P.
For the inner-loop optimization, P was kept static in most of the previous works (Figure 1(b)) [34, 36, 44, 57, 66]. Some of the previous works considered adapting the preconditioner P with the inner-step k (Figure 1(c)) [49] and some others with the individual task (Figure 1(d)) [53].
They achieved performance improvement by considering
individual tasks and inner-step, respectively, and showed
that both factors were valuable. However, both factors have
not been considered simultaneously in the existing studies.
When a parameter space has a certain underlying structure, there exists a Riemannian metric corresponding to the parameter space [3, 4]. If the preconditioning matrix is the Riemannian metric, the preconditioned gradient is known to become the steepest descent on the parameter space [2–4, 6, 27].
An illustration of a toy example is shown in Figure 2. The
optimization path of an ordinary gradient descent is shown
in Figure 2(a). Compared to the ordinary gradient descent, a
preconditioned gradient descent with a preconditioner that
does not satisfy the Riemannian metric condition can actually
harm the optimization. For the example in Figure 2(b), the
preconditioner affects the optimization into an undesirable
direction and negatively affects the gradient descent.
Figure 1. Diagram of MAML and PGD-MAML family: (a) MAML; (b) non-adaptive P(ϕ); (c) adaptive P(k; ϕ); (d) adaptive P(D^tr_τ; ϕ); (e) adaptive P(θτ,k; ϕ). For the inner-loop adaptation in each diagram, the dotted lines of the same color indicate that they use a common preconditioning matrix (preconditioner). (a) MAML adaptation: no preconditioner is used (i.e., P = I). (b) P(ϕ): a constant preconditioner is used in the inner-loop, where the preconditioner’s meta-parameter ϕ is meta-learned. (c) P(k; ϕ): a constant preconditioner is used for each inner-step k. The preconditioner for each step is meta-learned, but P(k; ϕ) is not task-specific. (d) P(D^tr_τ; ϕ): a constant preconditioner is used for each task. The preconditioner for each task is meta-learned, but P(D^tr_τ; ϕ) is not dependent on k. (e) GAP adapts P(θτ,k; ϕ): a fully adaptive preconditioner is used that is task-specific and path-dependent. Instead of saying ‘dependent on k’, we specifically say it is path-dependent because the exact dependency is on the task-specific parameter set θτ,k, which is considerably more informative than k.
Figure 2. A toy example for illustrating the effect of the Riemannian metric (see Supplementary Section A): (a) Gradient Descent (GD); (b) Preconditioned GD (non-Riemannian metric); (c) Preconditioned GD (Riemannian metric). When the curvature of the parameter space is poorly conditioned, (a) gradient descent can suffer from the difficulty of finding the solution, (b) a preconditioned gradient descent with a preconditioner that does not satisfy the Riemannian metric condition can suffer further, and (c) a preconditioned gradient descent with a preconditioner that is a Riemannian metric can perform better.
On the contrary, if the preconditioner is the Riemannian metric corresponding to the parameter space, the preconditioned gradient
descent can become the steepest descent and can exhibit a
better optimization behavior as shown in Figure 2(c). While
the Riemannian metric condition ( i.e., positive definiteness)
is a necessary condition for steepest descent learning, the
existing studies on PGD-MAML family did not consider
constraining preconditioners to satisfy the condition for Rie-
mannian metric.
In this study, we propose a new PGD method named
Geometry-Adaptive Preconditioned gradient descent (GAP). Specifically, GAP satisfies two desirable properties which
have not been considered before. First, GAP’s precondi-
tioner PGAP is a fully adaptive preconditioner that can adapt to the individual task (task-specific) and to the optimization path (path-dependent). The full adaptation is made possible by having the preconditioner depend on the task-specific parameter θτ,k (Figure 1(e)). Second, we prove that PGAP is a Riemannian metric. To this end, we force the meta-parameters of PGAP to be positive definite. Thus, GAP guarantees steepest descent learning on the parameter space corresponding to PGAP. Owing to the two properties, GAP
enables a geometry-adaptive learning in inner-loop optimiza-
tion.
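To make the idea concrete, the sketch below shows one preconditioned inner-loop update whose preconditioner depends on the current task-specific parameters and is positive definite by construction. Note that the M Mᵀ + εI parameterization used here is a simple stand-in chosen for brevity; GAP's actual preconditioner is built via SVD, so treat this purely as an illustration of the idea rather than the method itself.
```python
# Hedged sketch of a task-specific, path-dependent preconditioned gradient step.
import torch

def preconditioned_inner_step(theta, grad, meta_M, lr=0.01, eps=1e-3):
    """theta, grad: flattened task parameters/gradient of shape (D,); meta_M: meta-learned (D, r)."""
    # Path dependence: modulate the meta-learned factor by a function of the current theta.
    scale = torch.sigmoid(theta @ meta_M)               # (r,) changes along the optimization path
    M = meta_M * scale                                  # (D, r)
    P = M @ M.t() + eps * torch.eye(theta.numel())      # positive definite, hence a valid metric
    return theta - lr * (P @ grad)                      # steepest-descent-style update under P
```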
For the implementation of GAP, we utilize the Singular
Value Decomposition (SVD) operation to come up with
our preconditioner satisfying the desired properties. For the
recently proposed large-scale architectures, computational
overhead can be an important design factor and we provide
a low-computational approximation, Approximate GAP , that
can be proven to asymptotically approximate the operation
of GAP.
To demonstrate the effectiveness of GAP, we empiri-
cally evaluate our algorithm on popular few-shot learning
tasks; few-shot regression, few-shot classification, and few-
shot cross-domain classification. The results show that GAP
outperforms the state-of-the-art MAML family and PGD-
MAML family.
The main contributions of our study can be summarized
as follows:
•We propose a new preconditioned gradient descent
method called GAP, where it learns a preconditioner
that enables a geometry-adaptive learning in the inner-
loop optimization.
•We prove that GAP’s preconditioner has two desir-
able properties: (1) It is both task-specific and path-
dependent ( i.e., dependent on task-specific parameter
θτ,k). (2) It is a Riemannian metric.
•For large-scale architectures, we provide a low-
computational approximation that can be theoretically
shown to approximate the GAP method.
•For popular few-shot learning benchmark tasks, we
empirically show that GAP outperforms the state-of-
the-art MAML family and PGD-MAML family.
|
Kang_Scaling_Up_GANs_for_Text-to-Image_Synthesis_CVPR_2023
|
Abstract
The recent success of text-to-image synthesis has taken
the world by storm and captured the general public’s imag-
ination. From a technical standpoint, it also marked a dras-
tic change in the favored architecture to design generative
image models. GANs used to be the de facto choice, with
techniques like StyleGAN. With DALL ·E 2, autoregressive
and diffusion models became the new standard for large-
scale generative models overnight. This rapid shift raises
a fundamental question: can we scale up GANs to benefit
from large datasets like LAION? We find that naïvely in-
creasing the capacity of the StyleGAN architecture quickly
becomes unstable. We introduce GigaGAN, a new GAN ar-
chitecture that far exceeds this limit, demonstrating GANs
as a viable option for text-to-image synthesis. GigaGAN
offers three major advantages. First, it is orders of mag-
nitude faster at inference time, taking only 0.13 seconds
to synthesize a 512px image. Second, it can synthesize
high-resolution images, for example, 16-megapixel images
in 3.66 seconds. Finally, GigaGAN supports various latent
space editing applications such as latent interpolation, style
mixing, and vector arithmetic operations.
|
1. Introduction
Recently released models, such as DALL ·E 2 [53], Ima-
gen [59], Parti [73], and Stable Diffusion [58], have ushered
in a new era of image generation, achieving unprecedented
levels of image quality and model flexibility. The now-
dominant paradigms, diffusion models and autoregressive
models, both rely on iterative inference. This is a double-
edged sword, as iterative methods enable stable training
with simple objectives but incur a high computational cost
during inference.
Contrast this with Generative Adversarial Networks
(GANs) [5,17,33,51], which generate images through a sin-
gle forward pass and are thus inherently efficient. While such
models dominated the previous “era” of generative mod-
eling, scaling them requires careful tuning of the networkarchitectures and training considerations due to instabilities
in the training procedure. As such, GANs have excelled at
modeling single or multiple object classes, but scaling to
complex datasets, much less an open world, has remained
challenging. As a result, ultra-large models, data, and com-
pute resources are now dedicated to diffusion and autore-
gressive models. In this work, we ask – can GANs continue
to be scaled up and potentially benefit from such resources,
or have they plateaued? What prevents them from further
scaling, and can we overcome these barriers?
We first experiment with StyleGAN2 [34] and observe
that simply scaling the backbone causes unstable training.
We identify several key issues and propose techniques to
stabilize the training while increasing the model capacity.
First, we effectively scale the generator’s capacity by re-
taining a bank of filters and taking a sample-specific linear
combination. We also adapt several techniques commonly
used in the diffusion context and confirm that they bring
similar benefits to GANs. For instance, interleaving both
self-attention (image-only) and cross-attention (image-text)
with the convolutional layers improves performance.
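The "bank of filters with a sample-specific linear combination" idea mentioned above can be sketched as follows (assuming a PyTorch implementation). The shapes, the softmax selection, and the grouped-convolution trick are illustrative assumptions, not GigaGAN's exact layer.
```python
# Hedged sketch: each sample mixes a bank of candidate kernels using weights predicted
# from its style/text code, then applies its own mixed kernel via a grouped convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FilterBankConv(nn.Module):
    def __init__(self, in_ch, out_ch, n_filters=8, ksz=3, style_dim=128):
        super().__init__()
        self.bank = nn.Parameter(0.02 * torch.randn(n_filters, out_ch, in_ch, ksz, ksz))
        self.select = nn.Linear(style_dim, n_filters)    # per-sample mixing weights

    def forward(self, x, style):                         # x: (B, C_in, H, W), style: (B, style_dim)
        B, C_in, H, W = x.shape
        w = F.softmax(self.select(style), dim=-1)        # (B, n_filters)
        kernels = torch.einsum('bn,noikl->boikl', w, self.bank)   # (B, C_out, C_in, k, k)
        C_out, ksz = kernels.shape[1], kernels.shape[-1]
        # Fold the batch into convolution groups so each sample is convolved with its own kernel.
        out = F.conv2d(x.reshape(1, B * C_in, H, W),
                       kernels.reshape(B * C_out, C_in, ksz, ksz),
                       padding=ksz // 2, groups=B)
        return out.reshape(B, C_out, H, W)
```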
Furthermore, we reintroduce multi-scale training, find-
ing a new scheme that improves image-text alignment and
low-frequency details of generated outputs. Multi-scale
training allows the GAN-based generator to use parameters
in low-resolution blocks more effectively, leading to better
image-text alignment and image quality. After careful tun-
ing, we achieve stable and scalable training of a one-billion-
parameter GAN (GigaGAN) on large-scale datasets, such as
LAION2B-en [63]. Our results are shown in Figure 1.
In addition, our method uses a multi-stage approach [12,
76]. We first generate at 64×64 and then upsample to
512×512. These two networks are modular and robust
enough to be used in a plug-and-play fashion. We show that
our text-conditioned GAN-based upsampling network can
be used as an efficient, higher-quality upsampler for a base
diffusion model such as DALL ·E 2, despite never having
seen diffusion images at training time (Figures 1).
Together, these advances enable our GigaGAN to go
far beyond previous GANs: 36× larger than StyleGAN2 [34] and 6× larger than StyleGAN-XL [62] and
XMC-GAN [75].
Figure 1. Our model, GigaGAN, shows GAN frameworks can also be scaled up for general text-to-image synthesis and super-resolution tasks, generating a 512px output at an interactive speed of 0.13s, and 4096px within 3.7s. Selected examples at 2K resolution (text-to-image synthesis) and 1k or 4k resolutions (super-resolution) are shown. For the super-resolution task, we use the caption of “Portrait of a colored iguana dressed in a hoodie.” and compare our model with the text-conditioned upscaler of Stable Diffusion [57] and unconditional Real-ESRGAN [26]. Please zoom in for more details. See our arXiv paper and website for more uncurated comparisons.
While our 1B parameter count is still
lower than the largest synthesis models, such as DALL ·E
2 (5.5B), and Parti (20B), we have not yet observed a qual-
ity saturation regarding the model size. GigaGAN achieves
a zero-shot FID of 9.09 on COCO2014, lower than the FID
of DALL ·E 2, Parti-750M, and Stable Diffusion.
Furthermore, GigaGAN has three major practical ad-
vantages compared to diffusion and autoregressive models.
First, it is orders of magnitude faster, generating a 512px
image in 0.13 seconds. Second, it can synthesize ultra high-
res images at 4k resolution in 3.66 seconds. Third, it is
endowed with a controllable, latent vector space that lends
itself to well-studied controllable image synthesis applica-
tions, such as prompt mixing (Figure 4), style mixing (Fig-
ure 5), and prompt interpolation (Figures A7 and A8).
In summary, our model is the first GAN-based method
that successfully trains a billion-scale model on billions
of real-world complex Internet images. This suggests that
GANs are still a viable option for text-to-image synthe-
sis and should be considered for future aggressive scaling.
Please visit our website for additional results.
|
Kamath_A_New_Path_Scaling_Vision-and-Language_Navigation_With_Synthetic_Instructions_and_CVPR_2023
|
Abstract
Recent studies in Vision-and-Language Navigation
(VLN) train RL agents to execute natural-language navi-
gation instructions in photorealistic environments, as a step
towards robots that can follow human instructions. How-
ever, given the scarcity of human instruction data and lim-
ited diversity in the training environments, these agents
still struggle with complex language grounding and spa-
tial language understanding. Pretraining on large text
and image-text datasets from the web has been extensively
explored but the improvements are limited. We investi-
gate large-scale augmentation with synthetic instructions.
We take 500+ indoor environments captured in densely-
sampled 360panoramas, construct navigation trajecto-
ries through these panoramas, and generate a visually-
grounded instruction for each trajectory using Marky [63],
a high-quality multilingual navigation instruction genera-
tor. We also synthesize image observations from novel view-
points using an image-to-image GAN [27]. The resulting
dataset of 4.2M instruction-trajectory pairs is two orders of
magnitude larger than existing human-annotated datasets,
and contains a wider variety of environments and view-
points. To efficiently leverage data at this scale, we train
a simple transformer agent with imitation learning. On the
challenging RxR dataset, our approach outperforms all ex-
isting RL agents, improving the state-of-the-art NDTW from
71.1 to 79.1 in seen environments, and from 64.6 to 66.8 in
unseen test environments. Our work points to a new path to
improving instruction-following agents, emphasizing large-
scale training on near-human quality synthetic instructions.
|
1. Introduction
Developing intelligent agents that follow human instruc-
tions is a long-term, formidable challenge in AI [66]. A
recent focus addressing this problem space is Vision-and-
Language Navigation (VLN) [3, 9].
Figure 1. Simpler agents with more data: We investigate large-scale augmentation using 500+ environments annotated with synthetic instructions that approach human quality.
Navigation is an ideal
test bed for studying instruction-following, since the task
can be simulated photo-realistically at scale and evaluation
is straightforward. However, datasets that capture the lin-
guistic diversity and idiosyncrasies of real human instruc-
tors are small and expensive to collect.
Shortages of human-annotated training data for other
vision-and-language tasks have been partially addressed
by pretraining transformers on up to billions of image-
text pairs. This has underpinned dramatic improvements
in image captioning [65, 70], visual question answering
[59], phrase grounding [26, 35], text-to-video retrieval,
video question answering [32] and text-to-image synthesis
[49, 69]. However, these are all static image or video tasks,
whereas VLN agents interact with 3D environments. In
VLN, pretraining on large image-text and text-only datasets
has been thoroughly explored [21, 22, 40, 45], but improve-
ments are more limited. Arguably, progress in VLN has
plateaued while still leaving a large gap between machine
and human performance [73]. We hypothesize that static
image-text and text-only datasets – despite their size – lack
the spatially grounded and action-oriented language needed
for effective VLN pretraining. Consider instructions from
the Room-across-Room (RxR) dataset [30], which illus-
trate that wayfinding requires an understanding of allocen-
tric and egocentric spatial expressions ( near a grey console
table behind you ), verbs ( climb the stairs ), imperatives and
negations ( do not enter the room in front ) and temporal
conditions ( walk until you see an entrance on your left ).
Such expressions are rarely found in image-text datasets.
Though similar expressions are found in text-only corpora,
their meaning as it relates to the physical world is hard to
infer from text alone (without sensorimotor context) [6].
To address this problem, we investigate large-scale
augmentation with synthetic in-domain data, i.e., model-
generated navigation instructions for trajectories in realistic
3D environments using previously developed components
[27, 63]. We construct a large dataset using Marky [63],
which generates VLN instructions that approach the qual-
ity of human instructors. [63] released the 1M Marky
instruction-trajectory pairs situated in 61 Matterport3D [7]
environments. To increase the diversity of the environments
(and thus the scenes and objects available in them), we au-
tomatically annotate an additional 491 environments from
the Gibson dataset [67]. Gibson environments have been
underutilized in prior VLN work due to the lack of navi-
gation graphs indicating navigable trajectories through its
densely-sampled 360° panoramas. We train a model that
classifies navigable directions for Matterport3D and use it
to construct the missing navigation graphs. We sample
3.2M trajectories from these graphs and annotate them with
Marky. To further increase the variability of trajectories, we
synthesize image observations from novel viewpoints using
an image-to-image GAN [27]. The resulting dataset is two
orders of magnitude larger than existing human-annotated
ones, and contains a wider variety of scenes and viewpoints.
We have released our Gibson navigation graphs and the
Marky-Gibson dataset (https://github.com/google-research-datasets/RxR/tree/main/marky-mT5).
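To make the graph-based sampling step concrete, the following sketch illustrates one simple way to draw trajectories of a desired length from a navigation graph. The function name, length thresholds, and the use of shortest paths are illustrative assumptions, not the exact sampling procedure used to build the Marky-Gibson data.

```python
import random
import networkx as nx

def sample_trajectories(graph, n, min_len=4, max_len=8, seed=0, max_tries=100000):
    """Sample shortest-path trajectories between random panorama nodes."""
    rng = random.Random(seed)
    nodes = list(graph.nodes)
    trajectories = []
    for _ in range(max_tries):
        if len(trajectories) >= n:
            break
        start, goal = rng.sample(nodes, 2)
        try:
            path = nx.shortest_path(graph, start, goal)
        except nx.NetworkXNoPath:
            continue
        if min_len <= len(path) <= max_len:   # keep paths of a useful length
            trajectories.append(path)
    return trajectories
```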
With orders of magnitude more training examples and
environments, we explore VLN agent performance with
imitation learning (IL), i.e., behavioral cloning and DAGGER [53]. IL can take advantage of high-throughput trans-
former frameworks such as T5 [48] and thus efficiently train
on 4.2M instructions (accumulating over 700M steps of ex-
perience). This is a departure from most prior VLN work
in low-data settings, e.g. [10] report that pure IL under-
performs by 8.5% success rate compared to agents trained
with both IL and online reinforcement learning (RL) algo-
rithms such as A3C [44]. However, IL outperforms RL
in related tasks with sufficient training data [50]. Online
RL also requires interacting with the environment at each
step; this precludes efficient data prefetching and paral-
lel preprocessing and thus imposes unavoidable overhead
compared to IL. Empirically, we confirm that training ex-
isting models such as HAMT [10] on 4.2M instructions is
infeasible without ground-up re-engineering, though we do
find incorporating 10K additional synthetic instructions into
HAMT training modestly improves performance. Training
with IL aligns with the trend towards large-scale multi-task
vision-and-language models trained with supervised learn-
ing; these have unified tasks as diverse as visual question
answering, image captioning, object detection, image clas-
sification, OCR and text reasoning [12] – and could include
VLN in future.
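As a rough illustration of the imitation-learning setup discussed above, the sketch below shows a single behavioral-cloning update that scores the expert's next action with cross-entropy. The toy agent, feature dimensions, and action space are hypothetical placeholders standing in for the much larger transformer-based models referenced here.

```python
import torch
import torch.nn as nn

class VLNAgent(nn.Module):
    """Toy agent: encodes instruction tokens and a panorama feature, predicts the next action."""
    def __init__(self, vocab_size, feat_dim=2048, hidden=512, num_actions=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.text_enc = nn.GRU(hidden, hidden, batch_first=True)
        self.obs_proj = nn.Linear(feat_dim, hidden)
        self.policy = nn.Linear(2 * hidden, num_actions)

    def forward(self, instr_tokens, obs_feats):
        _, h = self.text_enc(self.embed(instr_tokens))   # h: (1, B, hidden)
        ctx = torch.cat([h.squeeze(0), self.obs_proj(obs_feats)], dim=-1)
        return self.policy(ctx)                          # (B, num_actions)

def bc_step(agent, optimizer, batch):
    """One behavioral-cloning update: cross-entropy against the expert's next action."""
    logits = agent(batch["instr"], batch["obs"])
    loss = nn.functional.cross_entropy(logits, batch["expert_action"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

agent = VLNAgent(vocab_size=30000)
opt = torch.optim.AdamW(agent.parameters(), lr=1e-4)
batch = {"instr": torch.randint(0, 30000, (8, 32)),      # token ids
         "obs": torch.randn(8, 2048),                    # visual features for the current step
         "expert_action": torch.randint(0, 6, (8,))}     # expert's next action
print(bc_step(agent, opt, batch))
```

A DAGGER-style variant would periodically roll out the current policy, relabel the visited states with expert actions, and add them to the training pool before repeating such updates.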
Experimentally, in detailed ablation studies we show that
adding Gibson environments, synthesizing additional image
observations from novel viewpoints, increasing the capacity
of the transformer, and finetuning with DAGGER all im-
prove agent performance. On the challenging RxR dataset
– which contains multilingual instructions with a median
trajectory length of 15m – our best agent using only imita-
tion learning outperforms all prior RL agents. Evaluating
on novel instruction-trajectories in seen environments (Val-
Seen), we improve over the state-of-the-art by 8%, reaching
79.1 NDTW. In new, unseen environments (Test), we im-
prove by 2%, achieving 66.8 NDTW. We also show that
self-training with synthetic instructions in new envi-
ronments (still without human annotations) improves per-
formance by an additional 2% to 68.6 NDTW. Overall, our
RxR results point to a new path to improving instruction-
following agents, emphasizing large-scale training on near-
human quality synthetic instructions. Perhaps surprisingly,
on the English-only R2R dataset [3], our IL agent achieves
strong but not state-of-the-art results. Marky was trained
on RxR, so we attribute this to domain differences between
R2R and RxR, underscoring the domain dependence of syn-
thetic instructions.
|
Jung_Devils_on_the_Edges_Selective_Quad_Attention_for_Scene_Graph_CVPR_2023
|
Abstract
Scene graph generation aims to construct a semantic graph structure from an image such that its nodes and edges respectively represent objects and their relationships. One of the major challenges for the task lies in the presence of distracting objects and relationships in images; contextual reasoning is strongly distracted by irrelevant objects or backgrounds and, more importantly, a vast number of irrelevant candidate relations. To tackle the issue, we propose the Selective Quad Attention Network (SQUAT) that learns to select relevant object pairs and disambiguate them via diverse contextual interactions. SQUAT consists of two main components: edge selection and quad attention. The edge selection module selects relevant object pairs, i.e., edges in the scene graph, which helps contextual reasoning, and the quad attention module then updates the edge features using both edge-to-node and edge-to-edge cross-attentions to capture contextual information between objects and object pairs. Experiments demonstrate the strong performance and robustness of SQUAT, achieving the state of the art on the Visual Genome and Open Images v6 benchmarks.
|
1. Introduction
The task of scene graph generation (SGG) is to construct a visually-grounded graph from an image such that its nodes and edges respectively represent objects and their relationships in the image [27, 43, 46]. The scene graph provides a semantic structure of images beyond individual objects and thus is useful for a wide range of vision problems such as visual question answering [36, 37], image captioning [53], image retrieval [11], and conditional image generation [10], where a holistic understanding of the relationships among objects is required for high-level reasoning. With recent advances in deep neural networks for visual recognition, SGG has been actively investigated in the computer vision community. A vast majority of existing methods tackle SGG by first detecting candidate objects and then performing contextual reasoning between the objects via message passing [19, 21, 43] or sequential modeling [28, 36, 50]. Despite these efforts, the task of SGG remains extremely challenging, and even the state-of-the-art methods do not produce reliable results for practical usage.
Figure 1. (a) The ground-truth scene graph contains only 4 ground-truth objects and 4 relations between the objects. (b) Only 13% of edges in a fully-connected graph have the actual relationships according to the ground-truths. (c) The overview of the quad attention. The node features are updated by node-to-node (N2N) and node-to-edge (N2E) attentions, and the edge features are updated by edge-to-node (E2N) and edge-to-edge (E2E) attentions.
While there exist a multitude of challenges for SGG, the intrinsic difficulty may lie in the presence of distracting objects and relationships in images; there is a vast number of potential but irrelevant relations, i.e., edges, which quadratically increase with the number of candidate objects, i.e., nodes, in the scene graph. The contextual reasoning for SGG in the wild is thus largely distracted by irrelevant objects and their relationship pairs. Let us take a simple example as in Fig. 1, where 4 objects and 4 relations in its ground-truth scene graph exist in the given image. If our
object detector obtains 6 candidate boxes, 2 of which are
from the background (red), then the contextual reasoning, e.g., message passing or self-attention, needs to consider 30 potential relations, 87% of which are not directly related according to the ground-truth, and most of them may thus act as distracting outliers. In practice, the situation is far worse; in the Visual Genome dataset, the standard benchmark for SGG, an image contains 38 objects and 22 relationships on average [43], which means that only around 1% of object pairs have direct and meaningful relations even when object detection is perfect. As will be discussed in our experiments, we find that existing contextual reasoning schemes obtain only a marginal gain at best and often degrade the performance. The crux of the matter for SGG may lie in developing a robust model for contextual reasoning against irrelevant objects and relations.
To tackle the issue, we propose the Selective Quad Attention Network (SQUAT) that learns to select relevant object pairs and disambiguate them via diverse contextual interactions with objects and object pairs. The proposed method consists of two main components: edge selection and quad attention. The edge selection module removes irrelevant object pairs, which may distract contextual reasoning, by predicting the relevance score for each pair. The quad attention module then updates the edge features using edge-to-node and edge-to-edge cross-attentions as well as the node features using node-to-node and node-to-edge cross-attentions; it thus captures contextual information between all objects and object pairs, as shown in Figure 1(c). Compared to previous methods [19, 21], which perform either node-to-node or node-to-edge interactions, our quad attention provides more effective contextual reasoning by capturing diverse interactions in the scene graph. For example, in the case of Fig. 1(a), [‘man’, ‘feeding’, ‘horse’] relates to [‘man’, ‘holding’, ‘bucket’] and [‘horse’, ‘eat from’, ‘bucket’], and vice versa; node-to-node or node-to-edge interactions are limited in capturing such relations between the edges.
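The following sketch illustrates the two components just described in simplified form: an edge selector that scores and keeps the top-k object pairs, and a quad-attention block that updates node and edge features via N2N, N2E, E2N, and E2E cross-attentions. It uses standard multi-head attention and hypothetical dimensions, and is not the SQUAT implementation.

```python
import torch
import torch.nn as nn

class EdgeSelector(nn.Module):
    """Scores every candidate object pair and keeps the top-k as edges."""
    def __init__(self, dim=256):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.proj = nn.Linear(2 * dim, dim)              # pair feature -> edge feature

    def forward(self, nodes, k):
        B, N, D = nodes.shape
        pairs = torch.cat([nodes.unsqueeze(2).expand(B, N, N, D),
                           nodes.unsqueeze(1).expand(B, N, N, D)], dim=-1)
        pairs = pairs.reshape(B, N * N, 2 * D)
        scores = self.scorer(pairs).squeeze(-1)          # (B, N*N) relevance scores
        topk = scores.topk(k, dim=-1).indices            # indices of the kept pairs
        kept = pairs.gather(1, topk.unsqueeze(-1).expand(-1, -1, 2 * D))
        return self.proj(kept), topk

class QuadAttention(nn.Module):
    """Updates nodes with N2N/N2E attention and edges with E2N/E2E attention."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.n2n = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.n2e = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.e2n = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.e2e = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, nodes, edges):
        # nodes: (B, N, D) object features; edges: (B, K, D) selected pair features
        n_upd = nodes + self.n2n(nodes, nodes, nodes)[0] + self.n2e(nodes, edges, edges)[0]
        e_upd = edges + self.e2n(edges, nodes, nodes)[0] + self.e2e(edges, edges, edges)[0]
        return n_upd, e_upd

nodes = torch.randn(2, 6, 256)                           # 6 candidate objects per image
edges, _ = EdgeSelector(256)(nodes, k=8)                 # keep the 8 most relevant pairs
nodes, edges = QuadAttention(256)(nodes, edges)
```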
Our contributions can be summarized as follows:
• We introduce the edge selection module for SGG that
learns to select relevant edges for contextual reasoning.
• We propose the quad attention module for SGG that performs effective contextual reasoning by updating node and edge features via diverse interactions.
• The proposed SGG model, SQUAT, outperforms the state-of-the-art methods on both the Visual Genome and Open Images v6 benchmarks. In particular, SQUAT achieves remarkable improvement in the SGDet setting, which is the most realistic and challenging.
|
Jung_AnyFlow_Arbitrary_Scale_Optical_Flow_With_Implicit_Neural_Representation_CVPR_2023
|
Abstract
To apply optical flow in practice, it is often necessary
to resize the input to smaller dimensions in order to reduce
computational costs. However, downsizing inputs makes the
estimation more challenging because objects and motion
ranges become smaller. Even though recent approaches
have demonstrated high-quality flow estimation, they tend
to fail to accurately model small objects and precise bound-
aries when the input resolution is lowered, restricting their
applicability to high-resolution inputs. In this paper, we
introduce AnyFlow, a robust network that estimates accu-
rate flow from images of various resolutions. By repre-
senting optical flow as a continuous coordinate-based rep-
resentation, AnyFlow generates outputs at arbitrary scales
from low-resolution inputs, demonstrating superior perfor-
mance over prior works in capturing tiny objects with de-
tail preservation on a wide range of scenes. We establish
a new state-of-the-art performance of cross-dataset gener-
alization on the KITTI dataset, while achieving comparable
accuracy on the online benchmarks to other SOTA methods.
|
1. Introduction
Optical flow seeks to estimate per-pixel correspon-
dences, characterized as the horizontal and vertical shift,
from a pair of images. Specifically, it aims to identify
the correspondence across pixels in different images, which
is at the heart of numerous computer vision tasks such as
video denoising [4, 33, 70], action recognition [55, 60, 62]
and object tracking [11, 26, 46]. This is particularly chal-
lenging due to the fact that the scene geometry and object
motion are combined into a single observation, making the
inverse problem of estimating motion highly ill-posed.
*This work was done during the author's internship at Meta.
†Affiliated with Meta at the time of this work.
Figure 1. Predicting 50% downsized images on Sintel [5], we compare AnyFlow to RAFT [57]. AnyFlow shows clearer boundaries, accurate shapes, and better detection of small objects.
A common assumption for enabling computationally tractable optical flow is to explicitly account for small motion in local neighborhoods and to incorporate additional priors to constrain the solution space, such as a total variation prior [15, 71] or a smoothness prior [2, 3, 51]. While effective, these techniques are significantly restricted by these assumptions and do not generalize well to real-life scenes with sophisticated geometry and motions.
An alternative approach is motivated by the success of
deep neural networks. Different from using handcrafted
priors, learning-based methods [14, 67, 74] design end-to-
end networks to regress the motion field. Sun et al. [53]
use the coarse-to-fine approach to characterize the pixel-to-
pixel mapping and several methods [20, 22, 44, 67] are de-
veloped along this line. The drawback of these approaches
is that the accuracy of flow estimation is not only limited by
the sampling but also has a critical dependence on a good
initial solution because the pixel correspondence is funda-
mentally ambiguous. Teed and Deng [57] resort to an iterative
update module on top of the correlation volumes, allowing
better estimates in tiny and fast-moving objects. Indeed the
success of this model relies on having a correlation volume
that is high quality in characterizing the feature correspon-
dence for sub-pixel level. Inspired by its success, follow-
up methods [23, 25, 36, 50, 52] have further improved it in
multiple ways. One popular direction is to introduce atten-
tion mechanisms [59] for efficiency [64,72], occlusion han-
dling [24], and large displacements [49]. As attention has
been found to be effective in estimating long-range depen-
dencies, Xu et al. [65] and Huang et al. [19] adopt Vision
Transformers [34] to perform global matching and demon-
strate their effectiveness. However, these methods naturally
become less effective for low-resolution images, where in-
accuracy is introduced when computing the correlation vol-
umes and the iterative refinements. They tend to fail in ac-
curately modeling small objects and precise boundary when
the input resolution is lowered. This inherently limits the
applicability for mobile devices in particular, where resiz-
ing the input to small size is often necessary to reduce the
computational cost.
In this paper, we propose AnyFlow, a novel method that
is agnostic to the resolution of images. The key insight
behind AnyFlow is the use of implicit neural representa-
tion (INR) for flow estimation. INRs, such as LIIF [8],
have been demonstrated to be effective for image super-
resolution, as they model an image as a continuous function
without limiting the output to a fixed resolution. This en-
ables image generation at arbitrary scales. Inspired by this
capability, we design a continuous coordinate-based flow
upsampler to infer flow at any desired scale. In addition,
we propose a novel warping scheme utilizing multi-scale
feature maps together with dynamic correlation lookup to
generalize well on diverse shapes of input. As the proposed
methods produce a synergistic effect, we demonstrate su-
perior performance for tiny objects and detail preservation
on a wide range of scenes involving complex geometry and
motions. Specifically, we verify the robustness for the input
with low resolution. Fig. 1 showcases our technique against
RAFT [57]. Our contributions are summarized as follows:
• We introduce AnyFlow, a novel network that produces high-quality flow estimation for images of arbitrary size.
• We present methods to utilize multi-scale feature maps and to search for the optimal correspondence within a predicted range, enabling robust estimation across a wide range of motion types.
• We demonstrate strong performance on cross-dataset generalization, achieving more than 25% error reduction with only 0.1M additional parameters over the baseline, and rank 1st on the KITTI dataset.
Together, our contributions provide, for the first time, an approach that handles inputs of arbitrary size, which significantly improves the quality for low-resolution images and extends the applicability to portable devices.
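To illustrate the idea of querying flow at continuous, arbitrary-scale coordinates, here is a minimal LIIF-style decoder sketch: local features are bilinearly sampled at query locations and decoded to a 2-channel flow vector by an MLP. The module, its dimensions, and the query grid are assumptions for illustration, not the AnyFlow architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordFlowDecoder(nn.Module):
    """Decodes a 2-channel flow vector at arbitrary continuous query locations."""
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),                        # (u, v) flow at the queried point
        )

    def forward(self, feat, coords):
        # feat: (B, C, h, w) low-resolution features; coords: (B, N, 2) in [-1, 1]
        sampled = F.grid_sample(feat, coords.unsqueeze(1), align_corners=False)
        sampled = sampled.squeeze(2).permute(0, 2, 1)    # (B, N, C) local features
        return self.mlp(torch.cat([sampled, coords], dim=-1))

feat = torch.randn(1, 128, 46, 62)                       # features from a downsized input
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 368),
                        torch.linspace(-1, 1, 496), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).view(1, -1, 2)    # query a dense 368x496 grid
flow = CoordFlowDecoder()(feat, coords).view(1, 368, 496, 2)
```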
|
Kalb_Principles_of_Forgetting_in_Domain-Incremental_Semantic_Segmentation_in_Adverse_Weather_CVPR_2023
|
Abstract
Deep neural networks for scene perception in automated
vehicles achieve excellent results for the domains they were
trained on. However, in real-world conditions, the do-
main of operation and its underlying data distribution are
subject to change. Adverse weather conditions, in partic-
ular, can significantly decrease model performance when
such data are not available during training. Addition-
ally, when a model is incrementally adapted to a new do-
main, it suffers from catastrophic forgetting, causing a sig-
nificant drop in performance on previously observed do-
mains. Despite recent progress in reducing catastrophic
forgetting, its causes and effects remain obscure. Therefore,
we study how the representations of semantic segmenta-
tion models are affected during domain-incremental learn-
ing in adverse weather conditions. Our experiments and
representational analyses indicate that catastrophic forget-
ting is primarily caused by changes to low-level features in
domain-incremental learning and that learning more gen-
eral features on the source domain using pre-training and
image augmentations leads to efficient feature reuse in sub-
sequent tasks, which drastically reduces catastrophic for-
getting. These findings highlight the importance of meth-
ods that facilitate generalized features for effective contin-
ual learning algorithms.
|
1. Introduction
Semantic segmentation is widely used for environment
perception in automated driving, where it aims at recogniz-
ing and comprehending images at the pixel level. One fun-
damental constraint of the traditional deep learning-based
semantic segmentation models is that they are often only
trained and evaluated on data collected mostly in clear
weather conditions and that they assume that the domain of
the training data matches the domain they operate in. How-
ever, in the real world, those autonomous driving systems
Figure 1. Activation drift from model f1 to f0, measured by
relative mIoU on the first task of the models stitched together at
specific layers (horizontal axis). The layers of the encoder are
marked in the gray area, the decoder layers in the white area.
Layer-stitching reveals that during domain-incremental learning,
changes in low-level features are a major cause of forgetting. With
an improved training scheme, combining simple augmentations,
exchanging normalization layers and using pre-training, the model
is optimized to reuse low-level features during incremental learn-
ing, leading to significant reduction of catastrophic forgetting.
are faced with constantly changing driving environments
and variable input data distributions. Specifically, changing
weather conditions can have adverse effects on the perfor-
mance of segmentation models.
Therefore, a semantic segmentation model needs to be
adapted to these conditions. A naive solution to this prob-
lem would be to incrementally fine-tune the model to new
domains with labeled data. However, fine-tuning a neural
network to a novel domain will, in most cases, lead to a
severe performance drop in previously observed domains.
This phenomenon is usually referred to as catastrophic for-
getting and is a fundamental challenge when training a neu-
ral network on a continuous stream of data. Recently pro-
posed methods mostly mitigate this challenge by replaying
data from previous domains, re-estimating statistics or even
in an unsupervised manner by transferring training images
in the style of the novel domain [32, 48, 52]. The focus of
our work is to study how the internal representations of se-
mantic segmentation models are affected during domain-
incremental learning and how efficient feature reuse can
mitigate forgetting without explicit replay of the previous
domain. Our main contributions are:
1. We analyze the activation drift that a model's layers are subjected to when adapting from good to adverse weather conditions by stitching them with the previous task's network (a minimal stitching sketch is given after this list). We reveal that the major cause of forgetting is a shift of low-level representations in the first convolution layer that adversely affects the population statistics of the following BatchNorm layer.
2. Using different augmentation strategies to match the
target domains in color statistics or in the frequency
domain, we reveal that learning color-invariant fea-
tures stabilizes the representations in early layers, as
they don’t change when the model is adapted to a new
domain.
3. With a combination of pre-training, augmentations and
exchanged normalization layers, we achieve an over-
all reduction of forgetting of ∼20% mIoU compared
to fine-tuning without using any form of replay and
prove the effectiveness of pre-training and augmenta-
tions which are often overlooked in continual learning.
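A minimal sketch of the layer-stitching analysis referenced in contribution 1: run the first k layers of the adapted model f1, feed the result through the remaining layers of the original model f0, and evaluate on the first task. The toy backbone and helper names are placeholders, not the paper's segmentation networks.

```python
import torch
import torch.nn as nn

def stitch_forward(f0_layers, f1_layers, x, k):
    """Run x through the first k layers of f1, then the remaining layers of f0."""
    with torch.no_grad():
        for layer in f1_layers[:k]:
            x = layer(x)
        for layer in f0_layers[k:]:
            x = layer(x)
    return x

def make_backbone():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16),
                         nn.ReLU(), nn.Conv2d(16, 8, 3, padding=1))

f0, f1 = make_backbone().eval(), make_backbone().eval()  # original and adapted checkpoints
x = torch.randn(1, 3, 64, 64)
out = stitch_forward(list(f0), list(f1), x, k=2)         # stitch right after the BatchNorm
```

Sweeping k over all layers and scoring the stitched model on task-0 data yields the per-layer drift curve of the kind shown in Figure 1.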
|
Kalischek_BiasBed_-_Rigorous_Texture_Bias_Evaluation_CVPR_2023
|
Abstract
The well-documented presence of texture bias in modern
convolutional neural networks has led to a plethora of al-
gorithms that promote an emphasis on shape cues, often
to support generalization to new domains. Yet, common
datasets, benchmarks and general model selection strate-
gies are missing, and there is no agreed, rigorous evaluation
protocol. In this paper, we investigate difficulties and limi-
tations when training networks with reduced texture bias. In
particular, we also show that proper evaluation and mean-
ingful comparisons between methods are not trivial. We
introduce BiasBed , a testbed for texture- and style-biased
training, including multiple datasets and a range of exist-
ing algorithms. It comes with an extensive evaluation pro-
tocol that includes rigorous hypothesis testing to gauge the
significance of the results, despite the considerable train-
ing instability of some style bias methods. Our extensive
experiments shed new light on the need for careful, sta-
tistically founded evaluation protocols for style bias (and
beyond). E.g., we find that some algorithms proposed in the
literature do not significantly mitigate the impact of style
bias at all. With the release of BiasBed , we hope to fos-
ter a common understanding of consistent and meaning-
ful comparisons, and consequently faster progress towards
learning methods free of texture bias. Code is available at
https://github.com/D1noFuzi/BiasBed
|
1. Introduction
Visual object recognition is fundamental to our daily
lives. Identifying and categorizing objects in the environ-
ment is essential for human survival, indeed our brain is
able to assign the correct object class within a fraction of a
second, independent of substantial variations in appearance,
illumination and occlusion [28]. Recognition mainly takes
place along the ventral pathway [7], i.e., visual perception
induces a hierarchical processing stream from local patterns
to complex features, in feed-forward fashion [28]. Signals are filtered to low frequency and used in parallel also in a
top-down manner [2], emphasizing the robustness and in-
variance to deviations in appearance.
Inspired by our brain’s visual perception, convolutional
neural architectures build upon the hierarchical intuition,
stacking multiple convolutional layers to induce feed-
forward learning of basic concepts to compositions of com-
plex objects. Indeed, early findings suggested that neu-
rons in deeper layers are activated by increasingly complex
shapes, while the first layers are mainly tailored towards
low-level features such as color, texture and basic geometry.
However, recent work indicates the opposite: convolutional
neural networks (CNNs) exhibit a strong bias to base their
decision on texture cues [11–13, 17], which heavily influ-
ences their performance, in particular under domain shifts,
which typically affect local texture more than global shape.
This inherent texture bias has led to a considerable body
of work that tries to minimize texture and style bias and
shift towards “human-like” shape bias in object recognition.
Common to all approaches is the principle of incorporating
adversarial texture cues into the learning pipeline – either
directly in input space [13, 15, 27] or implicitly in feature
space [20,22,30,33]. Perturbing the texture cues in the input
forces the neural network to make “texture-free” decisions,
thus focusing on the objects’ shapes that remain stable dur-
ing training. While texture cues certainly boost perfor-
mance in fine-grained classification tasks, where local tex-
ture patterns may indicate different classes, they can cause
serious harm when the test data exhibit a domain shift w.r.t.
training. In this light, texture bias can be seen as a main rea-
son for degrading domain generalization performance, and
various algorithms have been developed to improve general-
ization to new domains with different texture properties (re-
spectively image styles), e.g., [13, 15, 20, 22, 27, 30, 33, 40].
However, while a considerable number of algorithms
have been proposed to address texture bias, they are neither
evaluated on common datasets nor with common evaluation
metrics. Moreover they often employ inconsistent model
selection criteria or do not report at all how model selection
is done. With the present paper we promote the view that:
• the large domain shifts induced by texture-biased train-
ing cause large fluctuations in accuracy, which call for
a particularly rigorous evaluation;
• model selection has so far been ignored in the litera-
ture; together with the high training volatility, this may
have lead to overly optimistic conclusions from spuri-
ous results;
• in light of the difficulties of operating under drastic do-
main shifts, experimental results should include a no-
tion of uncertainty in the evaluation metrics.
Motivated by these findings, we have implemented Bias-
Bed, an open-source PyTorch [35] testbed that comes with
six datasets of different texture and shape biases, four ad-
versarial robustness datasets and eight fully implemented
algorithms. Our framework allows the addition of new al-
gorithms and datasets with few lines of code, including full
flexibility of all parameter settings. In order to tackle the
previously discussed limitations and difficulties for evalu-
ating such algorithms, we have added a carefully designed
evaluation pipeline that includes training multiple runs and
employing multiple model selection methods, and we re-
port all results based on sound statistical hypothesis tests
– all run with a single command. In addition to our novel
framework, the present paper makes the following contribu-
tions:
• We highlight shortcomings in the evaluation protocols
used in recent work on style bias, including the ob-
servation that there is a very high variance in the per-
formance of different runs of the same algorithm, and
even between different checkpoints in the same run
with similar validation scores.
• We develop and openly release a testbed that rig-
orously compares different algorithms using well-
established hypothesis testing methods. This testbed
includes several of the most prominent algorithms and
datasets in the field, and is easily extensible.
• We observe in our results that current algorithms on
texture-bias datasets fail to surpass simple ERM in a
statistically significant way, which is the main moti-
vation for this work and for using rigorous hypothesis
tests for evaluating style bias algorithms.
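As an example of the kind of statistically founded comparison advocated here, the snippet below applies a Wilcoxon signed-rank test to paired per-run accuracies of a style-bias method and plain ERM. The numbers are made up, and this specific test is only one of several reasonable choices.

```python
import numpy as np
from scipy.stats import wilcoxon

acc_erm  = np.array([61.2, 59.8, 62.1, 60.5, 61.7])   # accuracy of 5 ERM training runs
acc_algo = np.array([62.0, 60.1, 61.5, 61.9, 62.3])   # accuracy of 5 runs of a bias method

# One-sided test: is the bias method significantly better than ERM?
stat, p_value = wilcoxon(acc_algo, acc_erm, alternative="greater")
print(f"Wilcoxon statistic={stat:.2f}, p={p_value:.3f}")
if p_value < 0.05:
    print("Improvement over ERM is statistically significant at alpha=0.05.")
else:
    print("No significant improvement over ERM.")
```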
In Sec. 2, we provide a comprehensive overview of exist-
ing work on reducing texture bias. Furthermore we describe
the main forms of statistical hypothesis testing. We con-
tinue in Sec. 3 with a formal introduction to biased learning,
and systematically group existing algorithms into families
with common properties. In Sec. 4, we investigate previous
evaluation practices in detail, discuss their limitations, and give a formal introduction to hypothesis testing. Sec. 5 de-
scribes our proposed BiasBed evaluation framework in de-
tail, while Sec. 6 presents experiments, followed by a dis-
cussion (Sec. 7), conclusions (Sec. 8) and a view towards
the broader impact of our work (Sec. 9).
|
Jung_On_the_Importance_of_Accurate_Geometry_Data_for_Dense_3D_CVPR_2023
|
Abstract
Learning-based methods to solve dense 3D vision prob-
lems typically train on 3D sensor data. The respectively
used principle of measuring distances provides advantages
and drawbacks. These are typically not compared nor
discussed in the literature due to a lack of multi-modal
datasets. Texture-less regions are problematic for structure
from motion and stereo, reflective material poses issues for
active sensing, and distances for translucent objects are in-
tricate to measure with existing hardware. Training on in-
accurate or corrupt data induces model bias and hampers
generalisation capabilities. These effects remain unnoticed
if the sensor measurement is considered as ground truth
during the evaluation. This paper investigates the effect of
sensor errors for the dense 3D vision tasks of depth estima-
tion and reconstruction. We rigorously show the significant
impact of sensor characteristics on the learned predictions
and notice generalisation issues arising from various tech-
nologies in everyday household environments. For evalu-
ation, we introduce a carefully designed dataset1 compris-
ing measurements from commodity sensors, namely D-ToF ,
I-ToF , passive/active stereo, and monocular RGB+P . Our
study quantifies the considerable sensor noise impact and
paves the way to improved dense vision estimates and tar-
geted data fusion.
|
1. Introduction
Our world is 3D. Distance measurements are essential
for machines to understand and interact with our environ-
ment spatially. Autonomous vehicles [23, 30, 50, 58] need
this information to drive safely, robot vision requires dis-
tance information to manipulate objects [15,62,72,73], and
AR realism benefits from spatial understanding [6, 31].
A variety of sensor modalities and depth predic-
1 dataset available at https://github.com/Junggy/HAMMER-dataset
Figure 1. Other datasets for dense 3D vision tasks reconstruct the
scene as a whole in one pass [8,12,56], resulting in low quality and
accuracy (cf. red boxes). On the contrary, our dataset scans the
background and every object in the scene separately a priori and
annotates them as dense and high-quality 3D meshes. Together
with precise camera extrinsics from robotic forward-kinematics,
this enables a fully dense rendered depth as accurate pixel-wise
ground truth with multimodal sensor data, such as RGB with po-
larization, D-ToF, I-ToF and Active Stereo. Hence, it allows quan-
tifying different downstream 3D vision tasks such as monocular
depth estimation, novel view synthesis, or 6D object pose estima-
tion.
tion pipelines exist. The computer vision community
thereby benefits from a wide diversity of publicly available
datasets [23, 51, 52, 57, 60, 61, 65], which allow for evalua-
tion of depth estimation pipelines. Depending on the setup,
different sensors are chosen to provide ground truth (GT)
depth maps, all of which have their respective advantages
and drawbacks determined by their individual principle of
distance reasoning. Pipelines are usually trained on the data
without questioning the nature of the depth sensor used for
supervision and do not reflect areas of high or low confi-
dence of the GT.
Popular passive sensor setups include multi-view stereo
cameras where the known or calibrated spatial relationship
between them is used for depth reasoning [51]. Correspond-
ing image parts or patches are photometrically or struc-
turally associated, and geometry allows to triangulate points
within an overlapping field of view. Such photometric cues
are not reliable in low-textured areas and with little ambient
light where active sensing can be beneficial [52,57]. Active
stereo can be used to artificially create texture cues in low-
textured areas and photon-pulses with a given sampling rate
are used in Time-of-Flight (ToF) setups either directly (D-
ToF) or indirectly (I-ToF) [26]. With the speed of light, one
can measure the distance of objects from the return time of
the light pulse, but unwanted multi-reflection artifacts also
arise. Reflective and translucent materials are measured at
incorrect far distances, and multiple light bounces distort
measurements in corners and edges. While ToF signals can
still be aggregated for dense depth maps, a similar setup is
used with LiDAR sensors which sparsely measure the dis-
tance using coordinated rays that bounce from objects in the
surrounding. The latter provides ground truth, for instance,
for the popular outdoor driving benchmark KITTI [23].
While LiDAR sensing can be costly, radar [21] provides
an even sparser but more affordable alternative. Multiple
modalities can also be fused to enhance distance estimates.
A common issue, however, is the inherent problem of warp-
ing onto a common reference frame which requires the in-
formation about depth itself [27, 37]. While multi-modal
setups have been used to enhance further monocular depth
estimation using self-supervision from stereo and temporal
cues [25, 60], its performance analysis is mainly limited to
average errors and restricted by the individual sensor used.
An unconstrained analysis of depth in terms of RMSE com-
pared against a GT sensor only shows part of the picture as
different sensing modalities may suffer from drawbacks.
Where are the drawbacks of current depth-sensing
modalities - and how does this impact pipelines trained
with this (potentially partly erroneous) data? Can self- or
semi-supervision overcome some of the limitations posed
currently? To objectively investigate these questions, we
provide multi modal sensor data as well as highly accurate
annotated depth so that one can analyse the deterioration
of popular monocular depth estimation and 3D reconstruc-
tion methods (see Fig. 1) on areas of different photometric
complexity and with varying structural and material prop-
erties while changing the sensor modality used for training.
To quantify the impact of sensor characteristics, we build a
unique camera rig comprising a set of the most popular in-
door depth sensors and acquire synchronised captures with
highly accurate ground truth data using 3D scanners and
aligned renderings. To this end, our main contributions can
be summarized as follows:
1. We question the measurement quality from commodity
depth sensor modalities and analyse their impact as
supervision signals for the dense 3D vision tasks of depth estimation and reconstruction.
2. We investigate performance on texture-varying material as well as photometrically challenging reflective, translucent and transparent areas, where learning methods systematically reproduce sensor errors (a toy per-region error computation is sketched after this list).
3. To objectively assess and quantify different data
sources, we contribute an indoor dataset compris-
ing an unprecedented combination of multi-modal
sensors, namely I-ToF, D-ToF, monocular RGB+P,
monochrome stereo, and active light stereo together
with highly accurate ground truth.
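The toy evaluation sketch referenced in contribution 2: depth error computed separately on photometrically challenging pixels (for example, a reflective-material mask) and on the remaining pixels. Array names, shapes, and the random placeholder data are assumptions; the dataset itself provides real masks and rendered ground truth.

```python
import numpy as np

def masked_rmse(pred_depth, gt_depth, mask):
    """RMSE over pixels where mask is True and ground truth is valid."""
    valid = mask & (gt_depth > 0)
    diff = pred_depth[valid] - gt_depth[valid]
    return float(np.sqrt(np.mean(diff ** 2))) if valid.any() else float("nan")

pred = np.random.rand(480, 640) * 5.0        # placeholder predicted depth (meters)
gt = np.random.rand(480, 640) * 5.0          # placeholder rendered ground-truth depth
reflective = np.zeros((480, 640), dtype=bool)
reflective[100:200, 300:400] = True          # placeholder reflective-material mask

print("RMSE (reflective):", masked_rmse(pred, gt, reflective))
print("RMSE (other):     ", masked_rmse(pred, gt, ~reflective))
```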
|
Kang_Benchmarking_Self-Supervised_Learning_on_Diverse_Pathology_Datasets_CVPR_2023
|
Abstract
Computational pathology can lead to saving human lives, but models are annotation hungry and pathology images are notoriously expensive to annotate. Self-supervised learning (SSL) has shown to be an effective method for utilizing unlabeled data, and its application to pathology could greatly benefit its downstream tasks. Yet, there are no principled studies that compare SSL methods and discuss how to adapt them for pathology. To address this need, we execute the largest-scale study of SSL pre-training on pathology image data, to date. Our study is conducted using 4 representative SSL methods on diverse downstream tasks. We establish that large-scale domain-aligned pre-training in pathology consistently out-performs ImageNet pre-training in standard SSL settings such as linear and fine-tuning evaluations, as well as in low-label regimes. Moreover, we propose a set of domain-specific techniques that we experimentally show leads to a performance boost. Lastly, for the first time, we apply SSL to the challenging task of nuclei instance segmentation and show large and consistent performance improvements. We release the pre-trained model weights1.
|
1. Introduction
*The first two authors contributed equally.
1 https://lunit-io.github.io/research/publications/pathology_ssl
The computational analysis of microscopic images of human tissue – also known as computational pathology – has emerged as an important topic of research, as its clinical implementations can result in the saving of human lives by improving cancer diagnosis [49] and treatment [42]. Deep Learning and Computer Vision methods in pathology allow for objectivity [15], large-scale analysis [20], and triaging [5] but often require large amounts of annotated data [52]. However, the annotation of pathology images requires specialists with many years of clinical residency [37], resulting in scarce labeled public datasets and the need for methods to train effectively on them.
When annotated data is scarce for a given Computer Vision task, one common and practical solution is to fine-tune a model that was pre-trained in a supervised manner using the ImageNet dataset [19, 34]. This paradigm of transfer learning [34] was recently challenged by self-supervised learning (SSL), which trains on large amounts of unlabeled data only, yet out-performs supervised pre-training on ImageNet [8, 10, 26]. In the field of pathology, large unlabeled datasets are abundant [4, 37, 38, 57] in contrast to the lack of annotated datasets [52]. If we were to apply SSL effectively to this huge amount of unlabeled data, downstream pathology tasks could benefit greatly even if they contain a limited amount of annotated training data. Naturally, we ask the question: How well does self-supervised learning help in improving the performance of pathology tasks?
ImageNet pre-trained weights are commonly used in
medical imaging and are known to be helpful in attaining
high task performance [30,32,43,59]. Due to the differ-
ence between natural images and medical images, large-
scale domain-aligned pre-training has the potential to push
performance beyond ImageNet pre-training [39]. Accord-
ingly, recent works show that SSL pre-training on pathol-
ogy data can improve performance on downstream pathol-
ogy tasks [3,16,23,55]. Our study aims to expand on these
previous works by evaluating multiple SSL methods on di-
verse downstream pathology tasks. In addition, we propose techniques to adapt SSL methods that were designed for nat-
ural image data, to better learn from pathology data.
To understand how to adapt existing SSL methods to work on pathology image data, we must identify several key differences between natural and pathology imagery. Unlike natural images, pathology images can be rotated arbitrarily (impossible to determine a canonical orientation) and exhibit fewer variations in color. Also, pathology images can be interpreted differently depending on the field-of-view (FoV) due to the multiple hierarchies and contextual differences involved in each task. We propose to overcome these differences when adapting SSL methods for pathology data, via changes to the training data augmentation scheme in particular, during pre-training.
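A minimal torchvision-style augmentation recipe reflecting the considerations above (arbitrary orientation, mild color variation, and varying field-of-view); the exact transforms and their parameters are illustrative assumptions rather than the paper's final configuration.

```python
import random
from torchvision import transforms

class RandomRotate90:
    """Rotate by a random multiple of 90 degrees (pathology has no canonical orientation)."""
    def __call__(self, img):
        return img.rotate(90 * random.randint(0, 3))

pathology_aug = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),   # vary the apparent field-of-view
    RandomRotate90(),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.1, hue=0.05),
    transforms.ToTensor(),
])
```

In a two-view SSL setup such as MoCo v2 or DINO, a pipeline like this would simply be applied twice to each patch to form the two augmented views.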
In this paper, we carry out an in-depth analysis of 4 recent and representative SSL methods: MoCo v2 [12], SwAV [7], Barlow Twins [61], and DINO [8], when applied to large-scale pathology data. For this purpose, we source 19 million image patches from Whole Slide Images (WSI) in The Cancer Genome Atlas (TCGA) dataset [57] and apply our domain-specific techniques in training the SSL methods on this data. The evaluations are conducted for 2 different downstream tasks over 5 datasets: (1) pathological image classification using the BACH [1], CRC [31], MHIST [56], and PatchCamelyon [54] datasets, and (2) nuclei instance segmentation and classification using the CoNSeP dataset [25].
Our large-scale study yields several useful contributions: (a) we conduct the largest-scale study of SSL pre-training on pathology image data to date, and show its benefit over using ImageNet pre-trained weights on diverse downstream tasks (see Fig. 1), (b) we propose a set of carefully designed data curation and data augmentation techniques that can further improve downstream performance, (c) we demonstrate that SSL is label-efficient, and is therefore a practical solution in pathology where gathering annotation is particularly expensive, and (d) for the first time, we apply SSL to the dense prediction task of nuclei instance segmentation and show its value under diverse evaluation settings. We release our pre-trained model weights at https://lunit-io.github.io/research/publications/pathology_ssl to further contribute to the research community.
|
Jones_Self-Supervised_Representation_Learning_for_CAD_CVPR_2023
|
Abstract
Virtually every object in the modern world was created,
modified, analyzed and optimized using computer aided de-
sign (CAD) tools. An active CAD research area is the use
of data-driven machine learning methods to learn from the
massive repositories of geometric and program represen-
tations. However, the lack of labeled data in CAD’s na-
tive format, i.e., the parametric boundary representation
(B-Rep), poses an obstacle at present difficult to overcome.
Several datasets of mechanical parts in B-Rep format have
recently been released for machine learning research. How-
ever, large-scale databases are mostly unlabeled, and la-
beled datasets are small. Additionally, task-specific label
sets are rare and costly to annotate. This work proposes
to leverage unlabeled CAD geometry on supervised learn-
ing tasks. We learn a novel, hybrid implicit/explicit surface
representation for B-Rep geometry. Further, we show that
this pre-training both significantly improves few-shot learn-
ing performance and achieves state-of-the-art performance
on several current B-Rep benchmarks.
|
1. Introduction
Almost every human-made object that exists today
started its life as a model in a CAD system. As the prevalent method of creating 3D shapes, repositories of CAD
models are extensive. Further, CAD models have a robust
structure, including geometric and program representations
that have the potential to expose design and manufactur-
ing intent. Learning from CAD data can therefore enable
a variety of applications in design automation and design-
and fabrication-aware shape reconstruction and reverse en-
gineering.
An important challenge in learning from CAD is that
most of this data does not have labels that can be lever-
aged for inference tasks. Manually labeling B-Rep data is
time consuming and expensive, and its specialized format
requires CAD expertise, making it impractical for large col-
lections.
In this work we ask: how can we leverage large databases
ofunlabeled CAD geometry for analysis and modeling
tasks that typically require labels for learning?
Our work is driven by a simple, yet fundamental observa-
tion: the CAD data format was not developed to enable easy
visualizing or straightforward geometric interpretation: it
is a format designed to be compact, have infinite resolu-
tion, and allow easy editing. Indeed, CAD interfaces con-
sistently run sophisticated algorithms to convert the CAD
representation into geometric formats for rendering. Driven
by this observation, our key insight is to leverage large col-
lections of unlabeled CAD data to learn to geometrically
interpret the CAD data format. We then leverage the net-
works trained over the geometric interpretation task in su-
pervised learning tasks where only small labeled collections
are available. In other words, we use geometry as a model
ofself-supervision and apply it to few-shot-learning.
Specifically, we learn to rasterize local CAD geometry
using an encoder-decoder structure. The standard CAD
format encodes geometry as parametric boundary repre-
sentations (B-Reps). B-Reps are graphs where the nodes
are parametric geometry (surfaces, curves, and points) and
edges denote the topological adjacency relationships be-
tween the geometry. Importantly, the parametric geome-
try associated with each node is unbounded , and bounds are
computed from the topological relationships: curves bound-
ing surfaces and points bounding curves. As shown in Fig-
ure 2, the geometry of a B-Rep face is computed by clipping
the surface primitive to construct a surface patch, where the
clipping mask is constructed from adjacent edges.
Figure 2. A B-Rep face (a) is a surface patch cut from a geometric primitive surface (b). The adjacent edges define a clipping mask (c), for which we learn an SDF (d).
Thus, B-reps are constructed piecewise by explicitly de-
fined surfaces with implicitly defined boundaries. This ob-
servation drives our proposed learning architecture, which
reconstructs faces by jointly decoding the explicit surface
parameterization as well as the implicit surface boundary.
Our proposed encoder uses message passing on the topo-
logical graph to capture the boundary information to encode
B-Rep faces. To handle graph heterogeneity (nodes comprised of faces, edges, and vertices), we use a hierarchical message passing architecture inspired by the Structured B-Rep GCN [16]. Our decoder uses the learned embeddings as latent codes for two per-face neural function evaluators: one mapping ℝ² → ℝ³ that encodes the face's parametric surface (Figure 2 (b)), and one mapping ℝ² → ℝ that encodes the face's boundary as a signed distance field
(SDF) within the parametric surface (Figure 2 (c,d)).
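The following sketch shows the two per-face function evaluators in simplified form: conditioned on a face embedding z, one MLP maps (u, v) to a 3D surface point and another maps (u, v) to a signed distance to the trimming boundary. Dimensions and module names are hypothetical assumptions; the actual decoder heads may differ.

```python
import torch
import torch.nn as nn

class PerFaceDecoder(nn.Module):
    def __init__(self, z_dim=64, hidden=256):
        super().__init__()
        def mlp(out_dim):
            return nn.Sequential(
                nn.Linear(z_dim + 2, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )
        self.surface = mlp(3)   # (u, v) -> (x, y, z) point on the parametric surface
        self.sdf = mlp(1)       # (u, v) -> signed distance to the trimming boundary

    def forward(self, z, uv):
        # z: (B, z_dim) face embeddings from the graph encoder; uv: (B, N, 2) queries
        z_exp = z.unsqueeze(1).expand(-1, uv.shape[1], -1)
        inp = torch.cat([z_exp, uv], dim=-1)
        return self.surface(inp), self.sdf(inp)

dec = PerFaceDecoder()
z = torch.randn(4, 64)                      # embeddings for 4 faces
uv = torch.rand(4, 256, 2)                  # 256 (u, v) samples per face
xyz, sdf = dec(z, uv)                       # (4, 256, 3) points and (4, 256, 1) distances
```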
We apply our proposed model of B-Rep self-supervision
to learn specialized B-Rep tasks from very small sets of la-
beled data—10s to 100s of examples vs 10k to 100k. To
do this, we use the embeddings learned on self-supervision
as input features to supervised tasks. We evaluate our ap-
proach on three tasks and datasets from prior work [3,6,21]
and validate our findings across varying training set sizes.
We show that our model consistently outperforms prior su-
pervised approaches, significantly improving performance
on smaller training sets. By using less data, our approach
also proves substantially faster to train, making possible ap-
plications that depend on training speed. We believe that
our differentiable CAD rasterizer paves the way to many
exciting future applications, and show one possibility by
prototyping a reverse engineering example.
|
Kang_Soft-Landing_Strategy_for_Alleviating_the_Task_Discrepancy_Problem_in_Temporal_CVPR_2023
|
Abstract
Temporal Action Localization (TAL) methods typically
operate on top of feature sequences from a frozen snippet
encoder that is pretrained with the Trimmed Action Classifi-
cation (TAC) tasks, resulting in a task discrepancy problem.
While existing TAL methods mitigate this issue either by re-
training the encoder with a pretext task or by end-to-end fine-
tuning, they commonly require an overload of high memory
and computation. In this work, we introduce Soft-Landing
(SoLa) strategy, an efficient yet effective framework to bridge
the transferability gap between the pretrained encoder and
the downstream tasks by incorporating a light-weight neural
network, i.e., a SoLa module, on top of the frozen encoder.
We also propose an unsupervised training scheme for the
SoLa module; it learns with inter-frame Similarity Match-
ing that uses the frame interval as its supervisory signal,
eliminating the need for temporal annotations. Experimen-
tal evaluation on various benchmarks for downstream TAL
tasks shows that our method effectively alleviates the task
discrepancy problem with remarkable computational effi-
ciency.
|
1. Introduction
Our world is full of untrimmed videos, including a
plethora of YouTube videos, security camera recordings, and
online streaming services. Analyzing never-ending video
streams is thus one of the most promising directions of com-
puter vision research in this era [29]. Amongst many long-
form video understanding tasks, the task of finding action
instances in time and classifying their categories, known as
Temporal Action Localization (TAL), has received intense attention from both academia and industry in recent years; TAL is considered to be the fundamental building
block for more sophisticated video understanding tasks since
it plays the basic role of distinguishing frame-of-interest
from irrelevant background frames [19, 34, 40].
Figure 1. (a-i) The standard TAL framework assumes a “frozen snippet encoder” and only focuses on designing a good TAL head, causing the task discrepancy problem. Straightforward approaches to alleviating the issue include devising (a-ii) a temporally sensitive pretext task [1, 32, 41], and (a-iii) an end-to-end TAL training procedure [20, 33]. However, both approaches break the aforementioned frozen snippet encoder assumption. On the other hand, (b) our SoLa strategy shares the “frozen snippet encoder assumption” with the standard TAL framework by providing a smooth linkage between the frozen encoder and the downstream head. It offers general applicability and, more importantly, exceptional computational efficiency.
Despite its importance, training a TAL model has a unique
computational challenge that hinders the naive extension of
conventional image processing models, mainly due to the
large size of the model input. For instance, videos in the
wild can be several minutes or even hours long, implying
that loading the whole video to a device for processing is often infeasible. In this context, the prevailing convention in processing a long-form video for TAL is to divide the
video into non-overlapping short snippets and deal with the
snippet-wise feature sequences. Specifically, a standard train-
ing pipeline for the long-form video understanding tasks
consists of two steps: (i) Train the snippet encoder with
a large-scale action recognition dataset (e.g., Kinetics400),
which is often different from the dataset for the downstream
task; (ii) Train the downstream head (e.g., T AL) that takes
the snippet feature sequences extracted from the pretrained
encoder. An issue here is that the mainstream pretext task for
the snippet-wise video encoder is “Trimmed” Action Classi-
fication (TAC), which does not handle action boundaries and
background frames. Although the current pipeline achieves
remarkable performance in TAL tasks due to the power of
large action recognition datasets, recent works [1, 32, 33, 41]
point out the task discrepancy problem that is inherent in
this two-staged approach. The task discrepancy problem, first introduced in [33], comes from the pretrained snippet
encoder’s insensitivity to different snippets within the same
action class. It results in a temporally invariant snippet
feature sequence, making it hard to distinguish foreground
actions from backgrounds. A straightforward approach to
the problem is adopting a temporally sensitive pretext task to
train the snippet encoder [1, 32], or devising an end-to-end
framework [20, 33], which are briefly described in Figure 1
(a). However, as all previous methods involve retraining the
snippet encoder, an excessive use of memory and computa-
tion is inevitable.
To tackle the task discrepancy problem, we propose a
new approach, namely Soft-Landing (SoLa) strategy, which
is neither memory nor computationally expensive. SoLa
strategy is a novel method which incorporates a light-weight
neural network, i.e., Soft-Landing (SoLa) module, between
the pretrained encoder and the downstream head. The prop-
erly trained SoLa module will act like a middleware between
the pretext and the downstream tasks, mitigating the task dis-
crepancy problem (Figure 1 (b)). Since the task adaptation
is solely done by the SoLa module, the parameters of the pretrained encoder are fixed in our SoLa strategy. The use
of a frozen encoder significantly differentiates our approach
from previous methods that mainly focus on designing an
appropriate training methodology for a snippet encoder. In
addition, our SoLa strategy only requires access to the pre-
extracted snippet feature sequence, being fully compatible
with the prevailing two-stage TAL framework.
We also propose Similarity Matching, an unsupervised
training scheme for the SoLa module that involves neither
frame-level data manipulation nor temporal annotations. Our
training strategy circumvents the need for strong frame-level
data augmentation which most existing unsupervised repre-
sentation learning techniques [5, 10] rely on. This strategy perfectly suits our condition where frame-level data augmentation is impossible, as we only have access to pre-
extracted snippet features. The new loss is based on a simple
empirical observation: “adjacent snippet features are simi-
lar, while distant snippet features remain distinct”. Coupled with the SimSiam [6] framework, Similarity Matching not
only prevents the collapse, but also induces temporally sen-
sitive feature sequences, resulting in a better performance in
various downstream tasks.
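One plausible instantiation of the "adjacent snippets are similar, distant snippets are distinct" observation is a loss that regresses pairwise cosine similarities of transformed snippet features onto a target that decays with the frame interval. This is a hedged sketch that omits the SimSiam predictor and stop-gradient machinery; it is not the paper's exact Similarity Matching loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def similarity_matching_loss(feats, decay=0.5):
    # feats: (T, D) snippet features after the SoLa-style module
    T = feats.shape[0]
    sim = F.cosine_similarity(feats.unsqueeze(1), feats.unsqueeze(0), dim=-1)  # (T, T)
    idx = torch.arange(T, dtype=feats.dtype, device=feats.device)
    interval = (idx.unsqueeze(1) - idx.unsqueeze(0)).abs()
    target = torch.exp(-decay * interval)      # similarity target shrinks with distance
    return F.mse_loss(sim, target)

sola = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, 512))
snippets = torch.randn(16, 2048)               # pre-extracted features from a frozen encoder
loss = similarity_matching_loss(sola(snippets))
```

Because the supervisory signal is the frame interval itself, no temporal annotations and no frame-level augmentations are required, which matches the constraint of operating only on pre-extracted snippet features.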
The contributions of the paper can be summarized as
follows:
•To tackle the task discrepancy problem, we introduce
a novel Soft-Landing (SoLa) strategy, which does not
involve retraining of the snippet encoder. As we can
directly deploy the “frozen” pretrained snippet encoder
without any modification, our SoLa strategy offers eas-
ier applicability compared to previous works that re-
quire snippet encoders to be retrained.
•We propose Similarity Matching , a new self-
supervised learning algorithm for the SoLa strategy.
As frame interval is utilized as its only learning signal,
it requires neither data augmentation nor temporal an-
notation.
•With our SoLa strategy, we show significant improve-
ment in performance for downstream tasks, outperform-
ing many of the recent works that involve computation-
ally heavy snippet encoder retraining.
|
Kang_The_Dialog_Must_Go_On_Improving_Visual_Dialog_via_Generative_CVPR_2023
|
Abstract
Visual dialog (VisDial) is a task of answering a sequence
of questions grounded in an image, using the dialog history
as context. Prior work has trained the dialog agents solely
on VisDial data via supervised learning or leveraged pre-
training on related vision-and-language datasets. This paper
presents a semi-supervised learning approach for visually-
grounded dialog, called Generative Self-Training (GST), to
leverage unlabeled images on the Web. Specifically, GST
first retrieves in-domain images through out-of-distribution
detection and generates synthetic dialogs regarding the im-
ages via multimodal conditional text generation. GST then
trains a dialog agent on the synthetic and the original Vis-
Dial data. As a result, GST scales the amount of training
data up to an order of magnitude that of VisDial (1.2M →
12.9M QA data). For robust training of the synthetic di-
alogs, we also propose perplexity-based data selection and
multimodal consistency regularization. Evaluation on Vis-
Dial v1.0 and v0.9 datasets shows that GST achieves new
state-of-the-art results on both datasets. We further observe
the robustness of GST against both visual and textual ad-
versarial attacks. Finally, GST yields strong performance
gains in the low-data regime. Code is available at https://github.com/gicheonkang/gst-visdial.
|
1. Introduction
Recently, there has been extensive research towards de-
veloping visually-grounded dialog systems [12, 13, 34, 36]
due to their significance in many real-world applications
(e.g., helping visually impaired people). Notably, Visual Di-
alog (VisDial) [12] has provided a testbed for studying such
systems, where a dialog agent should answer a sequence
of image-grounded questions. For instance, the agent is ex-
pected to answer open-ended questions like “What color is
it?” and “How old does she look?”. This task requires a
holistic understanding of visual information, linguistic se-
mantics in context (e.g., it and she), and most importantly,
the grounding of these two.
Most of the previous approaches in VisDial [9, 10, 18, 20,
25, 26, 30, 31, 35, 49, 54, 55, 64, 67, 78, 84] have trained the
dialog agents solely on VisDial data via supervised learning.
More recent studies [8,53,77] have employed self-supervised
pre-trained models such as BERT [14] or ViLBERT [48]
and finetuned them on VisDial data. The models are typi-
cally pre-trained to recover masked inputs and predict the
semantic alignment between two segments. This pretrain-
then-transfer learning strategy has shown promising results
by transferring knowledge from the models pre-trained on
large-scale data sources [4, 71, 85] to VisDial.
Our research question is the following: How can the dia-
log agent expand its knowledge beyond what it can acquire
via supervised learning or self-supervised pre-training on
the provided datasets? Some recent studies have shown that
semi-supervised learning and pre-training have complemen-
tary modeling capabilities in image [86] and text classifica-
tion [16]. Inspired by them, we consider semi-supervised
learning (SSL) as a way to address the above question.
Let us assume that large amounts of unlabeled images
are available. SSL for VisDial can be applied to generate
synthetic conversations for the unlabeled images and train
the agent with the synthetic data. However, there are two
critical challenges to this approach. First, the target output
for VisDial (i.e., multi-turn visual QA data) is more complex
than that of the aforementioned studies [16,86]. Specifically,
they have addressed the classification problems, yielding
class probabilities as pseudo labels [39]. In contrast, SSL
for VisDial should generate a sequence of pseudo queries
(i.e., visual questions) and pseudo labels (i.e., corresponding
answers) in natural language to train the answering agent.
It further indicates that the target output should be generated
while considering the multimodal andsequential nature of
the visual dialog task. Next, even if SSL yields synthetic
dialogs via text generation, there may be noise, such as
generating irrelevant questions or incorrect answers to given
contexts. A robust training method is required to leverage
such noisy synthetic dialog datasets.
In this paper, we study the above challenges in the con-
text of SSL, especially self-training [6, 16, 21, 28, 32, 39, 44,
52, 60, 65, 72, 73, 79, 80, 86], where a teacher model trained
on labeled data predicts the pseudo labels for unlabeled
data. Then, a student model jointly learns on the labeled
and the pseudo-labeled datasets. Unlike existing studies in
self-training that have mainly studied uni-modal, discrimi-
native tasks such as image classification [72, 80, 86] or text
classification [16, 32, 52], we extend the idea of self-training
to the task of multimodal conditional text generation.
To this end, we propose a new learning strategy, called
Generative Self-Training (GST), that artificially generates
multi-turn visual QA data and utilizes the synthetic data
for training. GST first trains the teacher model (answerer)
and the visual question generation model (questioner) using
VisDial data. It then retrieves a set of unlabeled images
from a Web image dataset, Conceptual 12M [7]. Next, the
questioner and the teacher generate a series of visual QA
pairs for the retrieved images. Finally, the student is trained
on the synthetic and the original VisDial data. We also pro-
pose perplexity-based data selection (PPL) and multimodal
consistency regularization (MCR) to effectively train the
student with the noisy dialog data. PPL selectively utilizes the answers whose teacher perplexity is below
a threshold. MCR encourages the student to yield consis-
tent predictions when the perturbed multimodal inputs are
given. As a result, GST successfully augments the training set with synthetic VisDial data (11.7M QA pairs), mitigating the need to scale up the size of the human-annotated VisDial data, which is prohibitively expensive and time-consuming.
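As a concrete illustration of the perplexity-based selection step, the sketch below keeps only synthetic answers whose teacher perplexity falls under a threshold; it assumes access to the per-token log-probabilities the teacher assigned to each generated answer, and the helper names are hypothetical.

```python
import math

def answer_perplexity(token_logprobs):
    """token_logprobs: log-probabilities the teacher assigned to the
    tokens of one generated answer."""
    return math.exp(-sum(token_logprobs) / max(len(token_logprobs), 1))

def select_synthetic_qa(qa_pairs, threshold):
    """Keep only QA pairs whose teacher perplexity is below the threshold.
    qa_pairs: iterable of (question, answer, token_logprobs) triples."""
    kept = []
    for question, answer, logprobs in qa_pairs:
        if answer_perplexity(logprobs) < threshold:
            kept.append((question, answer))
    return kept
```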
Our key contributions are three-fold. First, we propose
Generative Self-Training (GST) that generates multi-turn
visual QA data to leverage unlabeled Web images effectively.
Second, experiments show that GST achieves new state-
of-the-art performance on VisDial v1.0 and v0.9 datasets.
We further demonstrate two important results: (1) GST is
indeed effective when the human-annotated visual dialog
data is extremely scarce (improving up to 11.09 absolute
points on NDCG), and (2) PPL and MCR are effective when
training the noisy synthetic dialog data. Third, to validate
the robustness of GST, we evaluate our proposed method
under three different visual and textual adversarial attacks,
i.e., FGSM, coreference, and random token attacks. We
observe that GST significantly improves the performance
compared with the baseline models against all adversarial
attacks, especially boosting NDCG scores from 21.60% to
45.43% in the FGSM attack [19].
|
Kalluri_GeoNet_Benchmarking_Unsupervised_Adaptation_Across_Geographies_CVPR_2023
|
Abstract
In recent years, several efforts have been aimed at im-
proving the robustness of vision models to domains and
environments unseen during training. An important practi-
cal problem pertains to models deployed in a new geography
that is under-represented in the training dataset, posing
a direct challenge to fair and inclusive computer vision.
In this paper, we study the problem of geographic robust-
ness and make three main contributions. First, we intro-
duce a large-scale dataset GeoNet for geographic adapta-
tion containing benchmarks across diverse tasks like scene
recognition (GeoPlaces), image classification (GeoImNet)
and universal adaptation (GeoUniDA). Second, we inves-
tigate the nature of distribution shifts typical to the prob-
lem of geographic adaptation and hypothesize that the ma-
jor source of domain shifts arises from significant varia-
tions in scene context (context shift), object design (de-
sign shift) and label distribution (prior shift) across ge-
ographies. Third, we conduct an extensive evaluation of
several state-of-the-art unsupervised domain adaptation al-
gorithms and architectures on GeoNet, showing that they do
not suffice for geographical adaptation, and that large-scale
pre-training using large vision models also does not lead to
geographic robustness. Our dataset is publicly available at
https://tarun005.github.io/GeoNet .
|
1. Introduction
In recent years, domain adaptation has emerged as an
effective technique to alleviate dataset bias [80] during train-
ing and improve transferability of vision models to sparsely
labeled target domains [27, 36, 40, 42, 49 –51, 68, 69, 86, 89].
While being greatly instrumental in driving research forward,
methods and benchmark datasets developed for domain adap-
tation [56, 57, 64, 83] have been restricted to a narrow set of
divergences between domains. However, the geographic ori-
gin of data remains a significant source of bias, attributable to
several factors of variation between train and test data. Train-
ing on geographically biased datasets may cause a model
to learn the idiosyncrasies of their geographies, preventing
Figure 1. Summary of our contributions . (a): Training computer vision
models on geographically biased datasets suffers from poor generaliza-
tion to new geographies. We propose a new dataset called GeoNet to
study this problem and take a closer look at the various types of domain
shifts induced by geographic variations. (b) Prior unsupervised adapta-
tion methods that efficiently handle other variations do not suffice for
improving geographic transfer. (c) We highlight the limitations of mod-
ern convolutional and transformer architectures in addressing geographic
bias, exemplified here by USA →Asia transfer on GeoImNet.
generalization to novel domains with significantly different
geographic and demographic composition. Besides robust-
ness, this may have a deep impact on fair and inclusive
computer vision, as most modern benchmark datasets like
ImageNet [63] and COCO [47] suffer from a significant US
or UK-centric bias in data [24, 73], with poor representation
of images from various other geographies like Asia.
In this paper, we study the problem of geographic
adaptation by introducing a new large-scale dataset called
GeoNet, which constitutes three benchmarks – GeoPlaces
for scene classification, GeoImNet for object recognition and
GeoUniDA for universal domain adaptation. These bench-
marks contain images from USA and Asia, which are two
distinct geographical domains separated by various cultural,
economic, demographic and climatic factors. We addition-
ally provide rich metadata associated with each image, such
as GPS location, captions and hashtags, to facilitate algo-
rithms that leverage multimodal supervision.
GeoNet captures the multitude of novel challenges posed
by varying image and label distributions across geographies.
We analyze GeoNet through new sources of domain shift
caused by geographic disparity, namely (i) context shift ,
where the appearance and composition of the background in
images changes significantly across geographies, (ii) design
shift, where the design and make of various objects changes
across geographies, and (iii) prior shift , caused by different
per-category distributions of images in both domains. We
illustrate examples of performance drop caused by these fac-
tors in Fig. 1a, where models trained on images from USA
fail to classify common categories such as running track and
mailbox due to context and design shifts, respectively.
GeoNet is an order of magnitude larger than previous
datasets for geographic adaptation [58, 61], allowing the
training of modern deep domain adaptation methods. Im-
portantly, it allows comparative analysis of new challenges
posed by geographic shifts for algorithms developed on other
popular adaptation benchmarks [56, 57, 64, 83]. Specifically,
we evaluate the performance of several state-of-the-art un-
supervised domain adaptation algorithms on GeoNet, and
show their limitations in bridging domain gaps caused by
geographic disparities. As illustrated in Fig. 1b for the case
of DomainNet [56] vs. GeoNet, state-of-the-art models on
DomainNet often lead to accuracies even worse than a source
only baseline on GeoNet, resulting in negative relative gain
in accuracy (defined as the gain obtained by an adaptation
method over a source-only model as a percentage of gap be-
tween a source-only model and the target-supervised upper
bound). Furthermore, we also conduct a study of modern ar-
chitectures like vision transformers and various pre-training
strategies, to conclude that larger models with supervised
and self-supervised pre-training offer improvements in accu-
racy, which however are not sufficient to address the domain
gap (Fig. 1c). This highlights that the new challenges intro-
duced by geographic bias such as context and design shift are
relatively under-explored, where our dataset may motivate
further research towards this important problem.
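For reference, the relative accuracy gain reported in Fig. 1b can be computed as follows; this is a plain restatement of the definition given above, not code from the paper.

```python
def relative_accuracy_gain(adapted, source_only, target_supervised):
    """Gain of an adaptation method over the source-only baseline,
    expressed as a percentage of the gap between the source-only
    model and the target-supervised upper bound."""
    return 100.0 * (adapted - source_only) / (target_supervised - source_only)

# Example: a method reaching 52% when source-only gives 48% and the
# target-supervised upper bound is 68% yields a 20% relative gain.
print(relative_accuracy_gain(52.0, 48.0, 68.0))  # 20.0
```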
In summary, our contributions towards geographic domain adaptation are four-fold:
•A new large-scale dataset, GeoNet, with benchmarks for
diverse tasks like scene classification and object recogni-
tion, with labeled images collected from geographically
distant locations across hundreds of categories (Sec. 3).
•Analysis of domain shifts in geographic adaptation,
which may be more complex and subtle than style or
appearance variations (Sec. 3.4).
•Extensive benchmarking of unsupervised adaptation al-
gorithms, highlighting their limitations in addressing
geographic shifts (Sec. 4.2).
•Demonstration that large-scale pretraining and recent
advances like vision transformers do not alleviate these
geographic disparities (Sec. 4.3).
|
Liao_AttentionShift_Iteratively_Estimated_Part-Based_Attention_Map_for_Pointly_Supervised_Instance_CVPR_2023
|
Abstract
Pointly supervised instance segmentation (PSIS) learns
to segment objects using a single point within the object
extent as supervision. Challenged by the non-negligible se-
mantic variance between object parts, however, the single
supervision point causes semantic bias and false segmenta-
tion. In this study, we propose an AttentionShift method, to
solve the semantic bias issue by iteratively decomposing the
instance attention map to parts and estimating fine-grained
semantics of each part. AttentionShift consists of two mod-
ules plugged on the vision transformer backbone: (i) to-
ken querying for pointly supervised attention map genera-
tion, and (ii) key-point shift, which re-estimates part-based
attention maps by key-point filtering in the feature space.
These two steps are iteratively performed so that the part-
based attention maps are optimized spatially as well as in
the feature space to cover full object extent. Experiments
on PASCAL VOC and MS COCO 2017 datasets show that
AttentionShift improves the state-of-the-art by 7.7% and 4.8%, respectively, under mAP@0.50, setting a solid PSIS
baseline using vision transformer.
|
1. Introduction
Instance segmentation is one of the most important vi-
sion tasks with a wide range of applications in medical im-
age processing [21, 23, 36], human-machine interfaces [3, 7, 24] and advanced driver assistance systems [14, 37, 41]. Nevertheless, this task requires great human effort to annotate instance masks, particularly in the era of big data. For exam-
ple, it takes more than four years to create the ground-truth
instance masks in the MS COCO dataset by a human anno-
tator [29]. The large annotation cost hinders the deployment
Figure 1. Comparison of existing methods with the pro-
posed AttentionShift. Upper: The class activation map (CAM)
method [51] suffers from partial activation due to the lack of spatial
constraint. The self-attention map generated by vision trans-
former [16] encounters false and missed object parts. Lower: At-
tentionShift iteratively optimizes part-based attention maps (indi-
cated by key-points) by shifting the key-points in the feature space
to precisely localize the full object extent. (Best viewed in color)
of instance segmentation to real-world applications.
Pointly supervised instance segmentation (PSIS) [25,
26], where each instance is indicated by a single point, has
been a promising approach to solve the annotation cost is-
sue. Compared with the precise mask annotation, PSIS re-
quires only about 20% of the annotation cost, which is comparable with weakly supervised methods, while its performance is far beyond the latter [2].
Existing methods [25, 26] typically estimate a single
pseudo mask for each instance and refine the estimated
mask by training a segmentation model. However, such
methods ignore the fact that the semantic variance between
object parts is non-negligible. For example, a “dog head”
Figure 2. Flowchart of AttentionShift. During training, the model first learns the fine-grained semantics by decomposing the instance attention map into parts (indicated by key-points) and performing key-point shift in the feature space. It then takes the estimated key-points (each representing a part) as supervision and queries their locations using the vision transformer.
and a “dog tail” belong to the same semantic class “dog”, but their appearances are quite different (Fig. 1, upper). With a
single supervision point, these methods could estimate the
semantic of “dog” as that of the most discriminative part
(“dog tail” in middle of Fig. 1 upper) or that of the regions
around the supervision point (“dog head” in right of Fig. 1
upper), which is termed semantic bias. A statistical analysis on ViT shows that only 33% of its attention maps cover over 50% of the foreground pixels. Despite its
ability to model long-range feature dependencies, ViT ap-
pears to remain vulnerable to the issue of semantic bias.
In this study, we propose AttentionShift to solve the se-
mantic bias problem by estimating fine-grained semantics
of multiple instance parts and learning an instance segmen-
tor under the supervision of estimated fine-grained seman-
tics, Fig. 1(lower). Considering that instance parts are un-
available during training, AttentionShift adopts an iterative
optimization procedure, which spatially decomposes each
instance to parts based on key-points defined on mean fea-
ture vectors and adaptively updates the key-points in a way
like mean shift [11] in the feature space, Fig. 2.
Using the vision transformer (ViT) as the backbone, At-
tentionShift consists of two steps: (i) token querying of
point supervision for instance attention map generation. It
takes advantage of the ViT to generate an instance attention
map by matching the semantics and locations of patch to-
kens with those of the supervision point. (ii) key-point shift
which re-estimates the part-based attention map by key-
point initialization and filtering in the feature space. These
two steps are iteratively performed so that the part-based at-
tention map, indicated by key-points, is optimized spatially
as well as in the feature space to cover the full object extent.
We conduct experiments on the commonly used PASCAL
VOC and the challenging MS COCO 2017 datasets. At-
tentionShift improves the state-of-the-art by 7.7% and 4.8%, respectively, under mAP@0.50, demonstrating the poten-
tial to fill the performance gap between pointly-supervised
instance segmentation and fully supervised instance seg-
mentation methods.
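A minimal sketch of the key-point shift step described above is given below; it treats the update as a nearest-key-point assignment followed by a mean-feature update in feature space, which is a simplification of the paper's formulation, and all tensor shapes are assumptions.

```python
import torch

def keypoint_shift(pixel_feats, keypoints, iters=10):
    """pixel_feats: (N, D) features of pixels inside the instance attention map.
    keypoints:   (K, D) initial key-point features (e.g., part-wise mean features).
    Each pixel is assigned to its nearest key-point in feature space, and each
    key-point is then shifted to the mean feature of its assigned pixels."""
    keypoints = keypoints.clone()                       # plain tensors, no autograd
    for _ in range(iters):
        dists = torch.cdist(pixel_feats, keypoints)     # (N, K) pairwise distances
        assign = dists.argmin(dim=1)                    # part index for every pixel
        for k in range(keypoints.size(0)):
            mask = assign == k
            if mask.any():
                keypoints[k] = pixel_feats[mask].mean(dim=0)
    return keypoints, assign
```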
The contributions of this paper are summarized as follows:
• We propose a part-based attention map estimation ap-
proach for PSIS, which estimates fine-grained seman-
tics of instances to alleviate the semantic bias problem
in a systematic fashion.
• We represent object parts using key-points and leverage AttentionShift to operate on key-points in the feature space, providing a simple yet effective way to optimize part-based attention maps.
• AttentionShift achieves state-of-the-art performance,
setting a solid PSIS baseline using vision transformer.
|
Liu_SlowLiDAR_Increasing_the_Latency_of_LiDAR-Based_Detection_Using_Adversarial_Examples_CVPR_2023
|
Abstract
LiDAR-based perception is a central component of au-
tonomous driving, playing a key role in tasks such as ve-
hicle localization and obstacle detection. Since the safety
of LiDAR-based perceptual pipelines is critical to safe au-
tonomous driving, a number of past efforts have investi-
gated its vulnerability under adversarial perturbations of
raw point cloud inputs. However, most such efforts have fo-
cused on investigating the impact of such perturbations on
predictions (integrity), and little has been done to under-
stand the impact on latency (availability), a critical con-
cern for real-time cyber-physical systems. We present the
first systematic investigation of the availability of LiDAR
detection pipelines, and SlowLiDAR, an adversarial per-
turbation attack that maximizes LiDAR detection runtime.
The attack overcomes the technical challenges posed by the
non-differentiable parts of the LiDAR detection pipelines by
using differentiable proxies and uses a novel loss function
that effectively captures the impact of adversarial perturba-
tions on the execution time of the pipeline. Extensive ex-
perimental results show that SlowLiDAR can significantly
increase the latency of the six most popular LiDAR detec-
tion pipelines while maintaining imperceptibility1.
|
1. Introduction
The promise of autonomous transit has stimulated ex-
tensive efforts towards the development of self-driving plat-
forms [1–3, 5]. A central feature of most such platforms is
a LiDAR-based perceptual pipeline (usually integrated with
other sensors, such as camera and radar) for critical control
tasks, such as localization and object detection [1, 3, 51].
Since errors in localization or obstacle detection can cause
the vehicle to crash, these tasks are crucial in ensuring the
safety of an autonomous vehicle. As a result, extensive
prior research has been devoted to understanding and assur-
ing the robustness of the LiDAR-based perceptual pipelines
1Code is available at: https://github.com/WUSTL-CSPL/SlowLiDAR
Figure 1. SlowLiDAR attack. The attack goal is to maximize
the runtime latency of the state-of-the-art LiDAR detection models
with either an adding-based attack or a perturbation-based attack.
against adversarial perturbations on its raw point cloud in-
puts [22, 51, 53, 54]. A key focus in prior work has been on
perceptual prediction accuracy, for example, the ability of
the adversary to hide real obstacles or create phantom obsta-
cles [15,32,43,44]. Another significant aspect of the robust-
ness analysis of LiDAR-based perception that has received
little attention is its real-time performance (availability) . In
particular, a delay in perceptual processing caused solely by
computational quirks of the LiDAR processing pipeline can
be just as damaging as a mistaken prediction. For example,
a delay in obstacle recognition can cause the vehicle to react
to an obstacle too late, failing to avoid a crash.
We present the first systematic analysis of the impact
of adversarial point cloud perturbations on the runtime la-
tency of common LiDAR perceptual pipelines. To this
end, we propose SlowLiDAR, an algorithmic framework for
adding adversarial perturbations to point cloud data aim-
ing to increase the execution time of LiDAR-based ob-
ject detection. Specifically, we consider two attack mech-
anisms (see Figure 1): 1) perturbation of the 3D coor-
dinates of existing points in the raw point cloud ( point
perturbation attacks ), and 2) adding points to the raw
point cloud ( point addition attacks ). Due to the unique
representation and processing procedure of LiDAR detec-
tion pipelines, existing availability attacks on other AI
pipelines, such as camera-based detection [39], cannot be
directly applied. Specifically, there are three new chal-
lenges: C1. Non-differentiable aggregation. LiDAR point
clouds are generally sparsely distributed in a large 3D
space without an organized pattern. To process such un-
structured data, the state-of-the-art LiDAR-based detec-
tion models extract aggregated features of the points at
the level of either 2D or 3D cells [30, 52, 55]. However,
the aggregation operation is by nature non-differentiable
[14], which presents new challenges to end-to-end opti-
mization. C2. New objective function. Different from ex-
isting attacks targeting accuracy, a novel loss function
needs to be carefully designed to effectively capture the
impact of adversarial perturbations on runtime latency.
C3. Large search space. Because points are distributed in
a large space, the perturbation search space is quite large,
and conventional gradient-based optimization approaches
for crafting attacks may yield poor local optima.
We address these technical challenges as follows. First,
in order to enable end-to-end training of adversarial input
perturbations, we develop a differentiable proxy to approx-
imate the non-differentiable pre-processing pipelines. Sec-
ond, we conduct a response time analysis on the detection
pipeline to identify the most vulnerable components, and
design a novel loss function to best capture the impact of
input perturbations on the runtime latency of the identified
components. Third, to tackle the large search space, we
propose a probing algorithm to identify high-quality initial-
ization for gradient-based attack optimization.
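The overall attack loop can be pictured as standard gradient-based optimization over point perturbations; the sketch below is a heavily simplified illustration that assumes a hypothetical differentiable surrogate latency_loss (standing in for the paper's carefully designed loss and differentiable proxies) and a simple L-infinity bound for imperceptibility. None of these names come from the paper.

```python
import torch

def perturbation_attack(points, model, latency_loss, steps=100, eps=0.05, lr=0.01):
    """points: (N, 3) raw LiDAR point coordinates.
    model:        detection pipeline built with differentiable proxies.
    latency_loss: callable mapping model outputs to a scalar whose increase
                  is expected to correlate with runtime (a surrogate)."""
    delta = torch.zeros_like(points, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        out = model(points + delta)
        loss = -latency_loss(out)        # minimizing the negative = gradient ascent
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)      # L-infinity bound keeps changes small
    return (points + delta).detach()
```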
We evaluate SlowLiDAR on six popular LiDAR-based
detection frameworks that are adopted in modern commer-
cial autonomous driving systems. Our experiments show
that SlowLiDAR is effective in slowing down the models
while retaining comparable imperceptibility. Moreover, the
performance of our attacks remains consistent across differ-
ent hardware and implementations. Our contributions can
be summarized as follows:
• We systematically dissect the LiDAR detection models
to analyze the attack surface of their runtime latency to
adversarial inputs.
• We propose the first adversarial perturbation attacks
against LiDAR detection with the goal of maximizing
runtime latency. To this end, we overcome the non-
differentiability of the pre-processing pipelines in or-
der to perform end-to-end gradient-based attack opti-
mization, and design a novel loss function to capture
the impact of input perturbations on execution time.
• We evaluate our attacks on six popular LiDAR detec-
tion models in modern commercial autonomous driv-
ing systems in different hardware and implementations
to demonstrate the ability to significantly increase la-
tency with comparable imperceptibility.
|
Liu_Delving_Into_Shape-Aware_Zero-Shot_Semantic_Segmentation_CVPR_2023
|
Abstract
Thanks to the impressive progress of large-scale vision-
language pretraining, recent recognition models can clas-
sify arbitrary objects in a zero-shot and open-set manner,
with a surprisingly high accuracy. However, translating this
success to semantic segmentation is not trivial, because this
dense prediction task requires not only accurate semantic
understanding but also fine shape delineation and existing
vision-language models are trained with image-level lan-
guage descriptions. To bridge this gap, we pursue shape-
aware zero-shot semantic segmentation in this study. In-
spired by classical spectral methods in the image segmenta-
tion literature, we propose to leverage the eigenvectors of
Laplacian matrices constructed with self-supervised pixel-
wise features to promote shape-awareness. Despite that this
simple and effective technique does not make use of the
masks of seen classes at all, we demonstrate that it out-
performs a state-of-the-art shape-aware formulation that
aligns ground truth and predicted edges during training.
We also delve into the performance gains achieved on dif-
ferent datasets using different backbones and draw several
interesting and conclusive observations: the benefits of pro-
moting shape-awareness highly relates to mask compact-
ness and language embedding locality. Finally, our method
sets new state-of-the-art performance for zero-shot seman-
tic segmentation on both Pascal and COCO, with significant
margins. Code and models can be accessed at SAZS.
|
1. Introduction
Semantic segmentation has been an established research
area for some time now, which aims to predict the categories
of an input image in a pixel-wise manner. In real-world ap-
plications including autonomous driving [17], medical diag-
nosis [31,46] and robot vision and navigation [9,63], an ac-
curate semantic segmentation module provides a pixel-wise
Figure 1. Without retraining, SAZS is able to precisely segment
both seen and unseen objects in the zero-shot setting, largely out-
performing a strong baseline (* denotes unseen categories during training).
understanding of the input image and is crucial for subse-
quent tasks (like decision making or treatment selection).
Despite that significant progress has been made in the
field of semantic segmentation [6,7,32,49,52,54,57,59,61],
most existing methods focus on the closed-set setting in
which dense prediction is performed on the same set of cat-
egories in training and testing time. Thus, methods that are
trained and perform well in the closed-set setting may fail
when applied to the open world, as pixels of unseen ob-
jects in the open world are likely to be assigned categories
that are seen during training, causing catastrophic conse-
quences in safety-critical applications such as autonomous
driving [62]. Straightforward solutions include fine-tuning
or retraining the existing neural networks, but it is impracti-
cal to enumerate unlimited unseen categories during retraining, let alone the large amount of time and effort needed.
More recent works [4, 14, 24, 27, 40] address this issue
by shifting to the zero-shot setting, in which the methods
are evaluated with semantic categories that are unseen dur-
ing training. While large-scale pre-trained visual-language
models such as CLIP [41] or ALIGN [18] shed light on the
potential of solving zero-shot tasks with priors contained in
large-scale pre-trained models, how to perform dense prediction tasks in this setting is still under-explored. One recent
approach by Li et al. [24] closes the gap by leveraging the
shared embedding space for languages and images, but fails
to effectively segment regions with fine shape delineation.
If the segmented shape of the target object is not accurate, it poses a serious safety hazard in practical applications such as autonomous driving.
Inspired by classical spectral methods and their intrinsic capability of enhancing shape-awareness, we propose a novel Shape-Aware Zero-Shot semantic segmentation framework (SAZS) to address the task of zero-shot
semantic segmentation. Firstly, the framework enforces
vision-language alignment on the training set using known
categories, which exploits rich language priors in the large-
scale pre-trained vision-language model CLIP [41]. Mean-
while, the framework also jointly enforces the boundary of
predicted semantic regions to be aligned with that of the
ground truth regions. Lastly, we leverage the eigenvectors of the Laplacian of affinity matrices constructed from features learned in a self-supervised manner to decompose in-
puts into eigensegments. They are then fused with learning-
based predictions from the trained model. The fusion out-
puts are taken as the final predictions of the framework.
As illustrated in Fig. 1, compared with [24], the predic-
tions of our approach are better aligned with the shapes
of objects. We also demonstrate the effectiveness of our
approach with elaborate experiments on PASCAL-5i and COCO-20i, the results of which show that our method outperforms previous state-of-the-art methods [4, 24, 36, 37, 51, 53] by large margins. By examining a) the correlation between the shape compactness of the target object and IoU and b) the correlation between the language embedding locality and IoU, we discover that the distribution of language anchors and object shapes has a large impact on performance. Via extensive analyses, we demonstrate the ef-
fectiveness and generalization of SAZS framework’s shape
perception for segmenting semantic categories in the open
world.
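To make the spectral step concrete, the following sketch builds an affinity matrix from self-supervised per-pixel (or per-patch) features, forms the normalized graph Laplacian, and returns its low-frequency eigenvectors as eigensegments; downsampling and the fusion with the learned predictions are omitted, and all parameter choices are illustrative rather than taken from the paper.

```python
import numpy as np

def eigensegments(feats, num_vectors=3, temperature=0.1):
    """feats: (N, D) L2-normalized per-pixel (or per-patch) features.
    Returns the eigenvectors of the normalized graph Laplacian with the
    smallest non-trivial eigenvalues; thresholding the second eigenvector
    (the Fiedler vector) gives a coarse foreground/background split."""
    affinity = np.exp((feats @ feats.T - 1.0) / temperature)   # (N, N), values in (0, 1]
    degree = affinity.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(degree)
    laplacian = np.eye(len(feats)) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(laplacian)               # ascending eigenvalues
    return eigvecs[:, 1:1 + num_vectors]                       # skip the trivial eigenvector
```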
|
Liu_EfficientViT_Memory_Efficient_Vision_Transformer_With_Cascaded_Group_Attention_CVPR_2023
|
Abstract
Vision transformers have shown great success due to
their high model capabilities. However, their remarkable
performance is accompanied by heavy computation costs,
which makes them unsuitable for real-time applications. In
this paper, we propose a family of high-speed vision trans-
formers named EfficientViT. We find that the speed of ex-
isting transformer models is commonly bounded by mem-
ory inefficient operations, especially the tensor reshaping
and element-wise functions in MHSA. Therefore, we design
a new building block with a sandwich layout, i.e., using a
single memory-bound MHSA between efficient FFN layers,
which improves memory efficiency while enhancing channel
communication. Moreover, we discover that the attention
maps share high similarities across heads, leading to com-
putational redundancy. To address this, we present a cas-
caded group attention module feeding attention heads with
different splits of the full feature, which not only saves com-
putation cost but also improves attention diversity. Compre-
hensive experiments demonstrate EfficientViT outperforms
existing efficient models, striking a good trade-off between
speed and accuracy. For instance, our EfficientViT-M5 sur-
passes MobileNetV3-Large by 1.9% in accuracy, while get-
ting 40.4% and 45.2% higher throughput on Nvidia V100
GPU and Intel Xeon CPU, respectively. Compared to
the recent efficient model MobileViT-XXS, EfficientViT-M2
achieves 1.8% superior accuracy, while running 5.8×/3.7×
faster on the GPU/CPU, and 7.4× faster when converted to
ONNX format. Code and models are available at here.
|
1. Introduction
Vision Transformers (ViTs) have taken the computer vision
domain by storm due to their high model capabilities and
superior performance [18, 44, 69]. However, the constantly
improved accuracy comes at the cost of increasing model
sizes and computation overhead. For example, SwinV2 [43]
uses 3.0B parameters, while V-MoE [62] takes 14.7B pa-
rameters, to achieve state-of-the-art performance on Ima-
Figure 1. Speed and accuracy comparisons between EfficientViT
(Ours) and other efficient CNN and ViT models tested on an
Nvidia V100 GPU with ImageNet-1K dataset [17].
geNet [17]. Such large model sizes and the accompanying
heavy computational costs make these models unsuitable
for applications with real-time requirements [40, 78, 86].
There are several recent works designing light and effi-
cient vision transformer models [9,19,29,49,50,56,79,81].
Unfortunately, most of these methods aim to reduce model
parameters or Flops, which are indirect metrics for speed
and do not reflect the actual inference throughput of models.
For example, MobileViT-XS [50] using 700M Flops runs
much slower than DeiT-T [69] with 1,220M Flops on an
Nvidia V100 GPU. Although these methods have achieved
good performance with fewer Flops or parameters, many
of them do not show significant wall-clock speedup against
standard isomorphic or hierarchical transformers, e.g., DeiT
[69] and Swin [44], and have not gained wide adoption.
To address this issue, in this paper, we explore how to
go faster with vision transformers, seeking to find princi-
ples for designing efficient transformer architectures. Based
on the prevailing vision transformers DeiT [69] and Swin
[44], we systematically analyze three main factors that af-
fect model inference speed, including memory access, com-
putation redundancy, and parameter usage. In particular,
we find that the speed of transformer models is commonly
memory-bound. In other words, memory access delay prevents the full utilization of the computing power in GPUs/CPUs [21, 32, 72], leading to a critically negative
impact on the runtime speed of transformers [15, 31]. The
most memory-inefficient operations are the frequent tensor
reshaping and element-wise functions in multi-head self-
attention (MHSA). We observe that through an appropri-
ate adjustment of the ratio between MHSA and FFN (feed-
forward network) layers, the memory access time can be re-
duced significantly without compromising the performance.
Moreover, we find that some attention heads tend to learn
similar linear projections, resulting in redundancy in atten-
tion maps. The analysis shows that explicitly decomposing
the computation of each head by feeding them with diverse splits of the full feature alleviates this redundancy.
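A simplified sketch of a cascaded group attention block in this spirit is shown below: each head receives its own channel split, and the output of one head is added to the input split of the next; layer shapes and details are illustrative and may differ from the official EfficientViT implementation.

```python
import torch
import torch.nn as nn

class CascadedGroupAttention(nn.Module):
    """Each head attends over its own split of the channels; the output of
    head i is added to the input split of head i+1 (the 'cascade')."""
    def __init__(self, dim, num_heads):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkvs = nn.ModuleList(
            [nn.Linear(self.head_dim, 3 * self.head_dim) for _ in range(num_heads)])
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (B, N, dim), N tokens
        splits = x.chunk(self.num_heads, dim=-1)
        outs, carry = [], 0
        for i, qkv in enumerate(self.qkvs):
            h_in = splits[i] + carry             # cascade the previous head's output
            q, k, v = qkv(h_in).chunk(3, dim=-1)
            attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
            out = attn.softmax(dim=-1) @ v       # (B, N, head_dim)
            outs.append(out)
            carry = out
        return self.proj(torch.cat(outs, dim=-1))
```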
|
Lattas_FitMe_Deep_Photorealistic_3D_Morphable_Model_Avatars_CVPR_2023
|
Abstract
In this paper, we introduce FitMe, a facial reflectance
model and a differentiable rendering optimization pipeline,
that can be used to acquire high-fidelity renderable hu-
man avatars from single or multiple images. The model
consists of a multi-modal style-based generator, that cap-
tures facial appearance in terms of diffuse and specular re-
flectance, and a PCA-based shape model. We employ a fast
differentiable rendering process that can be used in an op-
timization pipeline, while also achieving photorealistic fa-
cial shading. Our optimization process accurately captures
both the facial reflectance and shape in high-detail, by ex-
ploiting the expressivity of the style-based latent represen-
tation and of our shape model. FitMe achieves state-of-the-
art reflectance acquisition and identity preservation on sin-
gle “in-the-wild” facial images, while it produces impres-
sive scan-like results, when given multiple unconstrained
facial images pertaining to the same identity. In contrast
with recent implicit avatar reconstructions, FitMe requires
only one minute and produces relightable mesh and texture-
based avatars, that can be used by end-user applications.
|
1. Introduction
Despite the tremendous steps forward witnessed in the
last decade, 3D facial reconstruction from a single uncon-
strained image remains an important research problem with
an active presence in the computer vision community. Its
applications are now wide-ranging, including but not lim-
ited to human digitization for virtual and augmented real-
ity applications, social media and gaming, synthetic dataset
creation, and health applications. However, recent works
come short of accurately reconstructing the identity of dif-
ferent subjects and usually fail to produce assets that can be
used for photorealistic rendering. This can be attributed to
the lack of diverse and big datasets of scanned human geom-
etry and reflectance, the limited and ambiguous information
available on a single facial image, and the limitations of the
current statistical and machine learning methods.
3D Morphable Models (3DMMs) [18] have been a stan-
dard method of facial shape and appearance acquisition
from a single “in-the-wild” image. The seminal 3DMM
work in [7] used Principal Component Analysis (PCA), to
model facial shape and appearance with variable identity
and expression, learned from about 200 subjects. Since
Figure 2. FitMe method overview. For a target image I0, we optimize the latent vector W of the generator G, and the shape identity ps, expression pe, camera pc and illumination pl parameters, by combining 3DMM fitting and GAN inversion methods, through accurate differentiable diffuse UD and specular US rendering R. Then, from the optimized Wp, we tune the generator G weights, through the same rendering process. The reconstructed shape S and facial reflectance (diffuse albedo AD, specular albedo AS and normals NS) achieve great identity similarity and can be directly used in typical renderers to achieve photorealistic rendering, as shown on the right-hand side.
then, larger models have been introduced, i.e., the LSFM
[10], Basel Face Model [52] and Facescape [71], with thou-
sands of subjects. Moreover, recent works have introduced
3DMMs of complete human heads [43, 54, 55] or other fa-
cial parts such as ears [54] and tongue [53]. Finally, recent
works have introduced extensions ranging from non-linear
models [49, 66, 67] to directly regressing 3DMM parame-
ters [61,68]. However, such models cannot produce textures
capable of photorealistic rendering.
During the last decade we have seen considerable im-
provements in deep generative models. Generative Adver-
sarial Networks (GANs) [30], and specifically progressive
GAN architectures [34] have achieved tremendous results
in learning distributions of high-resolution 2D images of
human faces. Recently, style-based progressive generative
networks [35–38] are able to learn meaningful latent spaces,
that can be traversed in order to reconstruct and manipulate
different attributes of the generated samples. Some methods
have also been shown effective in learning a 2D representa-
tion of 3D facial attributes, such as UV maps [21,23,24,45].
3D facial meshes generated by 3DMMs can be utilized
in rendering functions, in order to create 2D facial images.
Differentiating the rendering process is also required in or-
der to perform iterative optimization. Recent advances in
differentiable rasterization [44], photorealistic facial shad-
ing [41] and rendering libraries [20,25,56], enable the pho-
torealistic differentiable rendering of such assets. Unfor-
tunately, 3DMM works [10, 23, 45] rely on the Lambertian shading model, which falls short of capturing the complexity of facial reflectance. The issue is that photorealistic facial rendering requires various facial reflectance param-
eters instead of a single RGB texture [41]. Such datasets
are scarce, small and difficult to capture [27,46,57], despite
recent attempts to simplify such setups [39].
Several recent approaches have achieved either high-
fidelity facial reconstructions [6,23,45] or relightable facial reflectance reconstructions [16,17,19,40,41,65], including
infra-red [47], however, high-fidelity and relightable recon-
struction still remains elusive. Moreover, powerful mod-
els have been shown to capture facial appearance with deep
models [22, 42], but they fail to show single- or multi-image
reconstructions. A recent alternative paradigm uses implicit
representations to capture avatar appearance and geometry
[11, 69], the rendering of which depends on a learned neu-
ral rendering. Despite their impressive results, such implicit
representations cannot be used by common renderers and
are not usually relightable. Finally, the recently introduced
Albedo Morphable Model (AlbedoMM) [65] captures fa-
cial reflectance and shape with a linear PCA model, but per-
vertex color and normal reconstruction is too low-resolution
for photorealistic rendering. AvatarMe++ [40, 41] recon-
structs high-resolution facial reflectance texture maps from
a single “in-the-wild” image; however, its 3-step process (reconstruction, upsampling, reflectance) cannot be opti-
mized directly with the input image.
In this work, we introduce FitMe , a fully renderable
3DMM with high-resolution facial reflectance texture maps,
which can be fit on unconstrained facial images using ac-
curate differentiable renderings. FitMe achieves identity
similarity and high-detailed, fully renderable reconstruc-
tions, which are directly usable by off-the-shelf rendering
applications. The texture model is designed as a multi-
modal style-based progressive generator, which concur-
rently generates the facial diffuse-albedo, specular-albedo and surface-normals. A meticulously designed branched
discriminator enables smooth training with modalities of
different statistics. To train the model we create a capture-
quality facial reflectance dataset of 5k subjects, by fine-
tuning AvatarMe++ on the MimicMe [50] public dataset,
which we also augment in order to balance skin-tone rep-
resentation. For the shape, we use interchangeably a face
and head PCA model [54], both trained on large-scale ge-
8630
ometry datasets. We design a single or multi-image fitting
method, based on style-based generator projection [35] and
3DMM fitting. To perform efficient iterative fitting (in un-
der 1 minute), the rendering function needs to be differen-
tiable and fast, which makes models such as path tracing un-
usable. Prior works in the field [10,12,23] use simpler shad-
ing models (e.g. Lambertian), or much slower optimiza-
tion [16]. We add a more photorealistic shading than prior
work, with plausible diffuse and specular rendering, which
can acquire shape and reflectance capable of photorealistic
rendering in standard rendering engines (Fig. 1). The flexi-
bility of the generator’s extended latent space and the pho-
torealistic fitting, enables FitMe to reconstruct high-fidelity
facial reflectance and achieve impressive identity similar-
ity, while accurately capturing details in diffuse, specular
albedo and normals. Overall, in this work we present:
• the first 3DMM capable of generating high-resolution
facial reflectance and shape, with an increasing level
of detail, that can be photorealistically rendered,
• the first branched multi-modal style-based progressive
generator of high-resolution 3D facial assets (diffuse
albedo, specular albedo and normals), and a suitable
multi-modal branched discriminator,
• a method to acquire and augment a vast facial reflectance dataset, using assets from a public dataset,
• a multi-modal generator projection, optimized with
diffuse and specular differentiable rendering.
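As an illustration of the diffuse and specular shading involved in such differentiable rendering, the sketch below combines a Lambertian diffuse term with a Blinn-Phong-style specular term; the paper's actual shading model is more elaborate, and all tensor layouts and parameters are assumptions.

```python
import torch
import torch.nn.functional as F

def shade(diffuse_albedo, specular_albedo, normals, light_dir, view_dir,
          light_color=1.0, shininess=32.0):
    """All texture inputs are (..., 3) tensors in the same UV/image layout;
    light_dir and view_dir are (..., 3) direction vectors."""
    n = F.normalize(normals, dim=-1)
    l = F.normalize(light_dir, dim=-1)
    v = F.normalize(view_dir, dim=-1)
    h = F.normalize(l + v, dim=-1)                                           # half vector
    diffuse = diffuse_albedo * torch.clamp((n * l).sum(-1, keepdim=True), min=0.0)
    specular = specular_albedo * torch.clamp((n * h).sum(-1, keepdim=True), min=0.0) ** shininess
    return light_color * (diffuse + specular)
```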
|
Lin_ERM-KTP_Knowledge-Level_Machine_Unlearning_via_Knowledge_Transfer_CVPR_2023
|
Abstract
Machine unlearning can fortify the privacy and security
of machine learning applications. Unfortunately, the ex-
act unlearning approaches are inefficient, and the approxi-
mate unlearning approaches are unsuitable for complicated
CNNs. Moreover, the approximate approaches have serious
security flaws because even unlearning completely different
data points can produce the same contribution estimation
as unlearning the target data points. To address the above
problems, we try to define machine unlearning from the
knowledge perspective, and we propose a knowledge-level
machine unlearning method, namely ERM-KTP . Specifi-
cally, we propose an entanglement-reduced mask (ERM)
structure to reduce the knowledge entanglement among
classes during the training phase. When receiving the un-
learning requests, we transfer the knowledge of the non-
target data points from the original model to the unlearned
model and meanwhile prohibit the knowledge of the tar-
get data points via our proposed knowledge transfer and
prohibition (KTP) method. Finally, we will get the un-
learned model as the result and delete the original model
to accomplish the unlearning process. Especially, our pro-
posed ERM-KTP is an interpretable unlearning method be-
cause the ERM structure and the crafted masks in KTP
can explicitly explain the operation and the effect of un-
learning data points. Extensive experiments demonstrate
the effectiveness, efficiency, high fidelity, and scalability of
the ERM-KTP unlearning method. Code is available at
https://github.com/RUIYUN-ML/ERM-KTP
|
1. Introduction
In recent years, many countries have raised concerns
about protecting personal privacy. Privacy legislation, e.g., the well-known European Union’s GDPR [19], has
been promulgated to oblige information service providers
to remove personal data when receiving a request from
the data owner, i.e., the right-to-be-forgotten. Besides, the
GDPR stipulates that the service providers should remove
the corresponding impact of the data requested by the data
owner, among which machine learning models are the most representative. Many studies demonstrate that machine learning models can memorize knowledge of the data points; e.g., membership inference attacks [7, 14, 15]
can infer whether a data point is in the training set or not.
A naive approach is retraining the model after removing the target data points from the training set, but the human and material resources consumed are costly. Thus,
aiming to efficiently remove data as well as their gener-
ated contribution to the model, a new ML privacy protec-
tion research direction emerged, called machine unlearning .
A good deal of related work has attempted to solve the data-removal challenge of inefficient retraining, with two representative research directions: exact unlearning [1] and approximate unlearning [3–5]. Unfortunately,
these approaches usually incur huge computational, storage, and memory overhead for class-specific machine unlearning tasks on complicated convolutional
neural networks (CNNs). Furthermore, they may compro-
mise the model’s performance and even cause disastrous
forgetting. Most significantly, Thudi et al. [18] argued that
such approximate unlearning approaches have serious secu-
rity flaws because they usually define machine unlearning
as the distribution difference between the unlearned model
and the retrained model. According to this definition, even
unlearning completely different data points can produce the
same contribution estimation as unlearning the target data
points.
Unlearning algorithms are difficult to implement on deep
learning models because these models are often seen as
a black box and lack interpretability, which makes data
points’ contributions challenging to estimate. Though many
related works [2, 10, 16, 21] attempted to improve the inter-
pretability of deep learning models, they cannot be applied to
machine unlearning directly. For example, Zhou et al. [22]
leveraged the global average pooling (GAP) in CNNs to
generate a class activation mapping (CAM) to indicate the
discriminative image regions used by the CNNs to identify
the class.
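For reference, the GAP-based class activation mapping mentioned here can be sketched in a few lines; this is a generic illustration of CAM rather than code from [22].

```python
import torch

def class_activation_map(feature_maps, fc_weights, class_idx):
    """feature_maps: (C, H, W) output of the last conv layer for one image.
    fc_weights:   (num_classes, C) weights of the linear classifier that
                  follows global average pooling (GAP).
    Returns an (H, W) map highlighting regions that drive the prediction
    for class_idx."""
    weights = fc_weights[class_idx]                        # (C,)
    cam = (weights[:, None, None] * feature_maps).sum(0)   # weighted sum over channels
    cam = torch.relu(cam)                                  # keep positive evidence only
    return cam / (cam.max() + 1e-8)                        # normalize to [0, 1]
```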
Liang et al. [12] proposed class-specific filters to transform
|
Li_Lift3D_Synthesize_3D_Training_Data_by_Lifting_2D_GAN_to_CVPR_2023
|
Abstract
This work explores the use of 3D generative models to
synthesize training data for 3D vision tasks. The key require-
ments of the generative models are that the generated data
should be photorealistic to match the real-world scenar-
ios, and the corresponding 3D attributes should be aligned
with given sampling labels. However, we find that the recent
NeRF-based 3D GANs hardly meet the above requirements
due to their designed generation pipeline and the lack of
explicit 3D supervision. In this work, we propose Lift3D, an
inverted 2D-to-3D generation framework to achieve the data
generation objectives. Lift3D has several merits compared to
prior methods: (1) Unlike previous 3D GANs that the output
resolution is fixed after training, Lift3D can generalize to any
camera intrinsic with higher resolution and photorealistic
output. (2) By lifting well-disentangled 2D GAN to 3D object
NeRF , Lift3D provides explicit 3D information of generated
objects, thus offering accurate 3D annotations for down-
stream tasks. We evaluate the effectiveness of our framework
by augmenting autonomous driving datasets. Experimental
results demonstrate that our data generation framework can
effectively improve the performance of 3D object detectors.
Code: len-li.github.io/lift3d-web
|
1. Introduction
It is well known that the training of current deep learning
models requires a large amount of labeled data. However, col-
lecting and labeling the training data is often expensive and
time-consuming. This problem is especially critical when
the data is hard to annotate. For example, it is difficult for
humans to annotate 3D bounding boxes using a 2D image
due to the inherent ill-posed 3D-2D projection (3D bounding
boxes are usually annotated using LiDAR point clouds).
To alleviate this problem, a promising direction is to use
synthetic data to train our models. For example, data genera-
tion that can be conveniently performed using 3D graphics
Figure 1. Our goal is to generate novel objects and use them to
augment existing datasets. (a) Previous 3D GANs (e.g., [28, 45]) rely on a 2D upsampler to ease the training of the 3D generative radiance field, but struggle with a trade-off between high-resolution synthesis and 3D consistency. (b) Our 2D-to-3D lifting process
disentangles the 3D generation from generative image synthesis,
leading to arbitrary rendering resolution and object pose sampling
for downstream tasks.
engines offers incredible convenience for visual perception
tasks. Several such simulated datasets have been created
in recent years [3, 14, 30, 31, 42]. These datasets have been
used successfully to train networks for perception tasks such
as semantic segmentation and object detection. However,
these datasets are expensive to generate, requiring specialists
to model specific objects and environments in detail. Such
datasets also tend to have a large domain gap from real-world
ones.
With the development of Generative Adversarial Net-
works (GAN) [18], researchers have paid increasing atten-
tion to utilizing GANs to replace graphics engines for synthesizing training data. For example, BigDatasetGAN [23] utilizes a conditional GAN to generate classification datasets via
conditioning the generation process on the category labels.
SSOD [27] designs a GAN-based generator to synthesize
images with 2D bounding box annotations for the object
detection task. In this paper, we explore the use of 3D GANs
to synthesize datasets with 3D-related annotations, which is
valuable but rarely explored.
Neural radiance field (NeRF) [26] based 3D GANs [6, 28], which offer photorealistic synthesis and 3D controllability, are a natural choice for synthesizing 3D-related training data. However, our experimental results show that, by relying on a 2D upsampler, they struggle to produce outputs that are both high-resolution and geometry-consistent. Furthermore, the generated images are not well aligned with the given 3D pose, due to the lack of explicit 3D consistency regularization. This misalignment introduces large label noise into the dataset, limiting performance on downstream tasks. In addition, the camera parameters are fixed after training, making it challenging to align the output resolution with arbitrary downstream data.
In this paper, we propose Lift3D, a new paradigm for syn-
thesizing 3D training data by lifting pretrained 2D GAN
to 3D generative radiance field. Compared with the 3D
GANs that rely on a 2D upsampler, we invert the gener-
ation pipeline into 2D-to-3D rather than 3D-to-2D to achieve
higher-resolution synthesis. As depicted in Fig. 1, we first
take advantage of a well-disentangled 2D GAN to generate
multi-view images with corresponding pseudo pose annota-
tion. The multi-view images are then lifted to 3D represen-
tation with NeRF reconstruction. In particular, by distilling
from a pretrained 2D GAN, Lift3D achieves high-quality syn-
thesis that is comparable to SOTA 2D generative models. By
decoupling the 3D generation from generative image synthe-
sis, Lift3D can generate images that are tightly aligned with
the sampling label. Finally, getting rid of 2D upsamplers,
Lift3D can synthesize images at any resolution by accumulating single-ray evaluations. With these properties, we can leverage the generated objects to augment existing datasets with enhanced quantity and diversity.
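To make the lifting pipeline concrete, the following Python sketch outlines the generation loop as we read it; the helper names (gan_render, fit_object_nerf, nerf_render) and all shapes are hypothetical placeholders rather than the released Lift3D code:

# Minimal sketch of the 2D-to-3D lifting loop; all helpers are placeholders.
import numpy as np

def gan_render(z, pose, res=256):
    # Placeholder: a disentangled 2D GAN rendering object identity z under pose.
    return np.zeros((res, res, 3), dtype=np.float32)

def fit_object_nerf(images, poses):
    # Placeholder: reconstruct an object-level NeRF from posed multi-view images.
    return {"images": images, "poses": poses}

def nerf_render(nerf, pose, res):
    # Placeholder: ray-by-ray volume rendering, so any output resolution works.
    return np.zeros((res, res, 3), dtype=np.float32)

def generate_labeled_object(views=16, out_res=1024):
    z = np.random.randn(512)                          # one object identity
    poses = [np.eye(4) for _ in range(views)]         # pseudo pose annotations
    images = [gan_render(z, p) for p in poses]        # multi-view 2D synthesis
    nerf = fit_object_nerf(images, poses)             # lift to a 3D representation
    target_pose = np.eye(4)                           # pose sampled for the dataset
    image = nerf_render(nerf, target_pose, out_res)   # arbitrary-resolution render
    box3d = (np.zeros(3), np.ones(3), 0.0)            # dummy center, size, yaw label
    return image, target_pose, box3d

Because the final rendering step works ray by ray, the output resolution and camera intrinsics can be chosen to match whatever downstream dataset is being augmented.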
To validate the effectiveness of our data generation frame-
work, we conduct experiments on image-based 3D object
detection tasks with KITTI [16] and nuScenes [4] datasets.
Our framework outperforms the best prior data augmen-
tation method [24] with significantly better 3D detection
accuracy. Furthermore, even without any labeled data, it
achieves promising results in an unsupervised manner. Our
contributions are summarized as follows:
•We provide the first exploration of using 3D GANs to synthesize 3D training data, which opens up a new possibility of adapting NeRF's powerful novel view synthesis capability to benefit downstream 3D tasks.
•To synthesize datasets with high-resolution images and
accurate 3D labels, we propose Lift3D, an inverted 2D-
Figure 2. We compare our generation result with GIRAFFE
HD [45]. We zoom in or rotate the sampled 3D box to control
the generation of models. The rotation of the 3D box introduces
artifacts to images generated by GIRAFFE HD. All images are
plotted with sampled 3D bounding boxes.
to-3D data generation framework that disentangles 3D
generation from generative image synthesis.
•Our experimental results demonstrate that the synthe-
sized training data can improve image-based 3D detec-
tors across different settings and datasets.
|
Lan_Self-Supervised_Geometry-Aware_Encoder_for_Style-Based_3D_GAN_Inversion_CVPR_2023
|
Abstract
StyleGAN has achieved great progress in 2D face recon-
struction and semantic editing via image inversion and la-
tent editing. While studies over extending 2D StyleGAN to
3D faces have emerged, a corresponding generic 3D GAN
inversion framework is still missing, limiting the applica-
tions of 3D face reconstruction and semantic editing. In
this paper, we study the challenging problem of 3D GAN
inversion where a latent code is predicted given a single
face image to faithfully recover its 3D shape and detailed textures. The problem is ill-posed: innumerable compositions of shape and texture could render to the same observed image. Furthermore, with the limited capacity of a
global latent code, 2D inversion methods cannot preserve
faithful shape and texture at the same time when applied to
3D models. To solve this problem, we devise an effective
self-training scheme to constrain the learning of inversion.
The learning is done efficiently without any real-world 2D-
3D training pairs but proxy samples generated from a 3D
GAN. In addition, apart from a global latent code that cap-
tures the coarse shape and texture information, we augment
the generation network with a local branch, where pixel-
aligned features are added to faithfully reconstruct face de-
tails. We further consider a new pipeline to perform 3D
view-consistent editing. Extensive experiments show that
our method outperforms state-of-the-art inversion methods
in both shape and texture reconstruction quality.
|
1. Introduction
This work aims to devise an effective approach for
encoder-based 3D Generative Adversarial Network (GAN)
inversion. In particular, we focus on the reconstruction of
3D face, requiring just a single 2D face image as the input.
In the inversion process, we wish to map a given image to
the latent space and obtain an editable latent code with an
encoder. The latent code will be further fed to a generator to
reconstruct the corresponding 3D shape with high-quality
shape and texture. Further to the learning of an inversion
encoder, we also wish to develop an approach to synthesize
3D view-consistent editing results, e.g., changing a neutral
expression to smiling, by altering the estimated latent code.
GAN inversion [50] has been extensively studied for 2D
images but remains underexplored in the 3D world. Inversion can be achieved via optimization [1, 2, 41], which typi-
cally provides a precise image-to-latent mapping but can be
time-consuming, or encoder-based techniques [40, 47, 49],
which explicitly learn an encoding network that maps an
image into the latent space. Encoder-based techniques en-
joy faster inversion, but the mapping is typically inferior to
optimization. In this study, we extend the notion of encoder-
based inversion from 2D images to 3D shapes.
Adding the additional dimension makes inversion more
challenging beyond the goal of reconstructing an editable
shape with detail preservation. In particular, 1)Recovering
3D shapes from 2D images is an ill-posed problem, where
innumerable compositions of shape and texture could gen-
erate identical rendering results. 3D supervisions are cru-
cial to alleviate the ambiguity of shape inversion from im-
ages. Though high-quality 2D datasets are easily accessi-
ble, owing to the expensive cost of scans there is currently a
lack of large-scale labeled 3D datasets. 2)The global latent
code, due to its compact and low-dimensional nature, only
captures the coarse shape and texture information. With-
out high-frequency spatial details, we cannot generate high-
fidelity outputs. 3) Compared with 2D inversion methods, where the editing view mostly aligns with the input view, in 3D editing we expect the editing results to hold up over novel views with large pose variations. Therefore, 3D GAN inversion is a non-trivial task and cannot be achieved by directly applying existing approaches.
To this end, we propose a novel Encoder-based 3D GAN
invErsion framework, E3DGE, which addresses the afore-
mentioned three challenges. Our framework has three novel
components with a delicate model design. Specifically:
Learning Inversion with Self-supervised Learning - The
first component focuses on the training of the inversion en-
coder. To address the shape collapse of single-view 3D re-
construction without external 3D datasets, we retrofit the
generator of a 3D GAN model to provide us with diverse
pseudo training samples, which can then be used to train
our inversion encoder in a self-supervised manner. Specif-
ically, we generate 3D shapes from the latent space Wof
a 3D GAN, and then render diverse 2D views from each
3D shape given different camera poses. In this way, we can
generate many pseudo 2D-3D pairs together with the corre-
sponding latent codes. Since the pseudo pairs are generated
from a smooth latent space that learns to approximate a nat-
ural shape manifold, they serve as effective surrogate data
to train the encoder, avoiding potential shape collapse.
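A minimal sketch of this self-training scheme is given below; the 3D GAN generator and the inversion encoder are treated as abstract callables, and the latent dimensionality, camera parameterization, and plain latent-regression loss are illustrative assumptions rather than the paper's exact recipe:

# Sketch of encoder training on GAN-generated proxy samples (assumed interfaces).
import torch
import torch.nn.functional as F

def proxy_train_step(gan, encoder, optimizer, batch=4, views=2, latent_dim=512):
    with torch.no_grad():
        w = torch.randn(batch, latent_dim)              # sample latent codes in W
        cams = torch.randn(batch, views, 25)            # random camera poses
        # Render several views of each generated shape: pseudo 2D-3D pairs.
        imgs = torch.stack([gan(w, cams[:, v]) for v in range(views)], dim=1)
    # The encoder should recover the latent code from any single rendered view.
    w_pred = encoder(imgs.flatten(0, 1))                # (batch*views, latent_dim)
    w_target = w.unsqueeze(1).expand(-1, views, -1).flatten(0, 1)
    loss = F.mse_loss(w_pred, w_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the regression targets come from the generator's own latent space, no real-world 3D scans are needed, which is the point of the self-supervised design.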
Local Features for High-Fidelity Inversion - The second
component learns to reconstruct accurate texture details.
Our novelty here is to leverage local features to enhance the
representation capacity, beyond just the global latent code
generated by the inversion encoder. Specifically, in addi-
tion to inferring an editable global latent code to represent
the overall shape of the face, we further devise an hour-glass
model to extract local features over the residual details that
the global latent code fails to capture. The local features,
with proper projection to the 3D space, serve as conditions
to modulate the 2D image rendering. Through this effective
learning scheme, we marry the benefits of both global and
local priors and achieve high-fidelity reconstruction.
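As one way to picture the local branch, the sketch below samples pixel-aligned features for 3D query points by projecting them into the input view and bilinearly interpolating the hour-glass feature map; the pinhole projection model and tensor layout are simplifying assumptions, and the fusion with the global latent code is omitted:

# Illustrative pixel-aligned feature lookup for 3D points (simplified assumptions).
import torch
import torch.nn.functional as F

def sample_local_features(feat_map, points, K, cam2world):
    """feat_map: (B, C, H, W); points: (B, N, 3) world coords; K: (B, 3, 3)."""
    world2cam = torch.inverse(cam2world)                      # (B, 4, 4)
    pts_h = F.pad(points, (0, 1), value=1.0)                  # homogeneous coords
    cam_pts = (world2cam @ pts_h.transpose(1, 2))[:, :3]      # (B, 3, N)
    pix = K @ cam_pts                                         # perspective projection
    uv = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)             # (B, 2, N) pixel coords
    H, W = feat_map.shape[-2:]
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,           # normalize to [-1, 1]
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1)  # (B, N, 2)
    local = F.grid_sample(feat_map, grid.unsqueeze(1), align_corners=True)
    return local.squeeze(2).transpose(1, 2)                   # (B, N, C) per-point features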
Synthesizing View-consistent Edited Output - The third
component addresses novel view synthesis, a problem unique to 3D shape editing. Specifically, though we achieve high-fidelity reconstruction through the aforementioned designs, the local residual features may not fully
align with the scene when being semantically edited. More-
over, the occlusion issue further degrades the fusion perfor-
mance when rendering from novel views with large pose
variations. To this end, we propose a 2D-3D hybrid align-
ment module for high-quality editing. Specifically, a 2D
alignment module and a 3D projection scheme are intro-
duced to jointly align the local features with edited images
and inpaint occluded local features in novel view synthesis.
Extensive experiments show that our method achieves
3D GAN inversion with plausible shapes and high-fidelity
image reconstruction without affecting editability. Owing
to the self-supervised training strategy with delicate global-
local design, our approach performs well on real-world 2D
and 3D benchmarks without resorting to any real-world 3D
dataset for training. To summarize, our main contributions
are as follows:
• We propose an early attempt at learning an encoder-
based 3D GAN inversion framework for high-quality
shape and texture inversion. We show that, with care-
ful design, samples synthesized by a GAN could serve
as proxy data for self-supervised training in inversion.
• We present an effective framework that uses local fea-
tures to complement the global latent code for high-
fidelity inversion.
• We propose an effective approach to synthesize view-
consistent output with a 2D-3D hybrid alignment.
|
Li_Ego-Body_Pose_Estimation_via_Ego-Head_Pose_Estimation_CVPR_2023
|
Abstract
Estimating 3D human motion from an egocentric video
sequence plays a critical role in human behavior understand-
ing and has various applications in VR/AR. However, naively
learning a mapping between egocentric videos and human
motions is challenging, because the user’s body is often un-
observed by the front-facing camera placed on the head of
the user. In addition, collecting large-scale, high-quality
datasets with paired egocentric videos and 3D human mo-
tions requires accurate motion capture devices, which often
limit the variety of scenes in the videos to lab-like environ-
ments. To eliminate the need for paired egocentric video and
human motions, we propose a new method, Ego-Body Pose
Estimation via Ego-Head Pose Estimation (EgoEgo), which
decomposes the problem into two stages, connected by the
head motion as an intermediate representation. EgoEgo first
integrates SLAM and a learning approach to estimate accu-
rate head motion. Subsequently, leveraging the estimated
head pose as input, EgoEgo utilizes conditional diffusion
to generate multiple plausible full-body motions. This dis-
entanglement of head and body pose eliminates the need
for training datasets with paired egocentric videos and 3D
human motion, enabling us to leverage large-scale egocen-
tric video datasets and motion capture datasets separately.
Moreover, for systematic benchmarking, we develop a syn-
thetic dataset, AMASS-Replica-Ego-Syn (ARES), with paired
egocentric videos and human motion. On both ARES and
real data, our EgoEgo model performs significantly better
than the current state-of-the-art methods.
|
1. Introduction
Estimating 3D human motion from an egocentric video,
which records the environment viewed from the first-person
perspective with a front-facing monocular camera, is criti-
cal to applications in VR/AR. However, naively learning a
mapping between egocentric videos and full-body human
motions is challenging for two reasons. First, modeling
this complex relationship is difficult; unlike reconstructing
motion from third-person videos, the human body is often
out of view of an egocentric video. Second, learning this
mapping requires a large-scale, diverse dataset containing
paired egocentric videos and the corresponding 3D human
poses. Creating such a dataset requires meticulous instru-
mentation for data acquisition, and unfortunately, such a
dataset does not currently exist. As such, existing works
have only worked on small-scale datasets with limited mo-
tion and scene diversity [22, 47, 48].
We introduce a generalized and robust method, EgoEgo ,
to estimate full-body human motions from only egocentric
video for diverse scenarios. Our key idea is to use head
motion as an intermediate representation to decompose the
problem into two stages: head motion estimation from the
input egocentric video and full-body motion estimation from
the estimated head motion. For most day-to-day activities,
humans have an extraordinary ability to stabilize the head
such that it aligns with the center of mass of the body [13],
which makes head motion an excellent feature for full-body
motion estimation. More importantly, the decomposition of
our method removes the need to learn from paired egocentric
videos and human poses, enabling learning from a combina-
tion of large-scale, single-modality datasets (e.g., datasets
with egocentric videos or 3D human poses only), which are
commonly and readily available.
The first stage, estimating the head pose from an ego-
centric video, resembles the localization problem. However,
directly applying the state-of-the-art monocular SLAM meth-
ods [33] yields unsatisfactory results, due to the unknown
gravity direction and the scaling difference between the es-
timated space and the real 3D world. We propose a hybrid
solution that leverages SLAM and learned transformer-based
models to achieve significantly more accurate head motion
estimation from egocentric video. In the second stage, we
generate the full-body motion based on a diffusion model
conditioned on the predicted head pose. Finally, to evaluate
our method and train other baselines, we build a large-scale
synthetic dataset with paired egocentric videos and 3D hu-
man motions, which can also be useful for future work on
visuomotor skill learning and sim-to-real transfer.
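The decomposition can be summarized as a two-stage inference routine; every function below is a hypothetical stand-in for the corresponding component (monocular SLAM, the learned gravity/scale correction, and the head-conditioned diffusion sampler), returning dummy outputs so that only the control flow is shown:

# High-level sketch of EgoEgo-style two-stage inference (placeholder components).
import numpy as np

def run_slam(video_frames):
    # Placeholder monocular SLAM: per-frame 4x4 head/camera poses, up to scale.
    return np.tile(np.eye(4), (len(video_frames), 1, 1))

def refine_head_poses(raw_poses, video_frames):
    # Placeholder learned model correcting gravity direction and metric scale.
    return raw_poses

def sample_body_motion(head_poses, num_samples=3):
    # Placeholder conditional diffusion sampler: per-frame full-body joint rotations.
    T = len(head_poses)
    return [np.zeros((T, 22, 3)) for _ in range(num_samples)]

def ego_body_pose(video_frames):
    head = refine_head_poses(run_slam(video_frames), video_frames)   # stage 1
    bodies = sample_body_motion(head)                                # stage 2
    return head, bodies   # several plausible full-body motions per input video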
Our work makes four main contributions. First, we pro-
pose a decomposition paradigm, EgoEgo , to decouple the
problem of motion estimation from egocentric video into
two stages: ego-head pose estimation, and ego-body pose
estimation conditioned on the head pose. The decomposi-
tion lets us learn each component separately, eliminating
the need for a large-scale dataset with two paired modalities.
Second, we develop a hybrid approach for ego-head pose
estimation, integrating the results of monocular SLAM and
learning. Third, we propose a conditional diffusion model
to generate full-body poses conditioned on the head pose.
Finally, we contribute a large-scale synthetic dataset with
both egocentric videos and 3D human motions as a test bed
to benchmark different approaches and showcase that our
method outperforms the baselines by a large margin.
|
Lee_Learning_Geometry-Aware_Representations_by_Sketching_CVPR_2023
|
Abstract
Understanding geometric concepts, such as distance and
shape, is essential for understanding the real world and
also for many vision tasks. To incorporate such information
into a visual representation of a scene, we propose learning
to represent the scene by sketching, inspired by human be-
havior. Our method, coined Learning by Sketching (LBS),
learns to convert an image into a set of colored strokes
that explicitly incorporate the geometric information of the
scene in a single inference step without requiring a sketch
dataset. A sketch is then generated from the strokes where
CLIP-based perceptual loss maintains a semantic similar-
ity between the sketch and the image. We show theoreti-
cally that sketching is equivariant with respect to arbitrary
affine transformations and thus provably preserves geomet-
ric information. Experimental results show that LBS sub-
stantially improves the performance of object attribute clas-
sification on the unlabeled CLEVR dataset, domain trans-
fer between CLEVR and STL-10 datasets, and for diverse
downstream tasks, confirming that LBS provides rich geo-
metric information.
|
1. Introduction
Since geometric principles form the bedrock of our phys-
ical world, many real-world scenarios involve geometric
concepts such as position, shape, distance, and orienta-
tion. For example, grabbing an object requires estimating
its shape and relative distance. Understanding geometric
concepts is also essential for numerous vision tasks such
as image segmentation, visual reasoning, and pose estima-
tion [27]. Thus, it is crucial to learn a visual representation
of the image that can preserve such information [76], which
we call geometry-aware representation .
However, there is still a challenge in learning geometry-
aware representations in a compact way that can be useful
for various downstream tasks. Previous approaches have
focused on capturing geometric features of an image in a
2D grid structure, using methods such as handcrafted fea-
ture extraction [2, 4, 14, 31], segmentation maps [21, 57], or
Figure 1. Overview of LBS, which aims to generate sketches that
accurately reflect the geometric information of an image. A sketch
consists of a set of strokes represented by a parameterized vec-
tor that specifies curve, color, and thickness. We leverage it as a
geometry-aware representation for various downstream tasks.
convolution features [36, 55]. Although these methods are
widely applicable to various domains, they often lack com-
pactness based on a high-level understanding of the scene
and tend to prioritize nuisance features such as the back-
ground. Another line of work proposes architectures that
guarantee to preserve geometric structure [12,13,23,58,73]
or disentangle prominent factors in the data [8, 28, 39, 51].
Although these methods can represent geometric concepts
in a compact latent space, they are typically designed to
learn features that are specific to a particular domain and
often face challenges in generalizing to other domains [64].
In this study, we present a novel approach to learning
geometry-aware representations via sketching . Sketching,
which is the process of converting the salient features of
an image into a set of color-coded strokes, as illustrated in
Figure 1, is the primary means by which humans represent
images while preserving their geometry. Our key idea is
that sketches can be a compact, high-level representation
of an image that accurately reflects geometric information.
Sketching requires a high-level understanding of the scene,
as it aims to capture the most salient features of the image
and abstract them into a limited number of strokes. In ad-
dition, a sketch can be represented as a set of parameters
by replacing each stroke with parametric curves. Sketch-
Figure 2. Examples of how sketches capture essential geometric concepts such as shape, size, orientation, curvature, and local distortion.
The control points of each stroke compactly represent the local geometric information of the corresponding region in the sketch. The
strokes as a whole maintain the geometric structure of the entire image under various transformations in the image domain.
ing has also been linked to how people learn geometric
concepts [66]. Based on these properties, we directly use
strokes as a geometry-aware representation and utilize them
for downstream tasks. Under the theoretical framework of
geometric deep learning [3], we conduct theoretical analy-
sis to validate the effectiveness of strokes as a representation
and prove their ability to preserve affine transformations.
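In our notation (not necessarily the paper's), writing S(I) for the set of stroke control points produced from an image I, the affine-equivariance property can be stated compactly as

\[
S\bigl(T_{A,b}(I)\bigr) \;=\; \{\, A\,p + b \;:\; p \in S(I) \,\}, \qquad T_{A,b}(x) = Ax + b,
\]

with A an invertible 2x2 matrix and b a 2D translation: transforming the image and then sketching yields the same strokes as sketching first and transforming the control points.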
To validate our hypothesis, we introduce Learning by
Sketching (LBS), a method that generates abstract sketches
coherent with the geometry of an input image. Our model is
distinct from existing sketch generation methods as it does
not require a sketch dataset for training, which often has
limited abstraction or fails to accurately reflect geometric
information. Instead, LBS learns to convert an image into
a set of colored Bézier curves that explicitly represent the
geometric concepts of the input image. To teach the style
of sketching, we use perceptual loss based on the CLIP
model [59, 68], which measures both semantic and geo-
metric similarities between images and generated sketches.
We propose a progressive optimization process that predicts
how strokes will be optimized from their initial positions
through CLIP-based perceptual loss to generate abstract
sketches in a single inference step. As a result, LBS gen-
erates a representation that reflects visual understanding in
a single inference step, without requiring a sketch dataset.
This produces highly explainable representations through
sketches, as illustrated in Figure 2.
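The sketch below illustrates one plausible reading of the stroke parameterization and the CLIP-based perceptual loss; diff_rasterize stands in for a differentiable vector-graphics renderer and clip_model for a pretrained CLIP image encoder, and the exact per-stroke layout (four Bezier control points, RGB color, thickness) is our assumption rather than the paper's specification:

# Sketch of stroke parameters and a CLIP-based perceptual loss (assumed interfaces).
import torch
import torch.nn.functional as F

def strokes_from_vector(params, n_strokes):
    # Each stroke: 4 control points (8 values), RGB color (3), thickness (1) = 12.
    params = params.reshape(-1, n_strokes, 12)
    ctrl_pts = params[..., :8].reshape(-1, n_strokes, 4, 2)   # cubic Bezier control points
    color = params[..., 8:11].sigmoid()                       # RGB in [0, 1]
    width = params[..., 11:12].sigmoid() * 5.0                # thickness in pixels
    return ctrl_pts, color, width

def clip_perceptual_loss(image, stroke_params, clip_model, diff_rasterize, n_strokes=16):
    ctrl_pts, color, width = strokes_from_vector(stroke_params, n_strokes)
    sketch = diff_rasterize(ctrl_pts, color, width)           # (B, 3, H, W), differentiable
    f_img = F.normalize(clip_model(image), dim=-1)
    f_sketch = F.normalize(clip_model(sketch), dim=-1)
    return (1.0 - (f_img * f_sketch).sum(dim=-1)).mean()      # cosine distance

Because the rasterizer is differentiable, gradients of this loss flow back into the stroke parameters (and hence into the stroke generator), which is what allows training without any sketch dataset.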
We conduct experimental analyses to evaluate the ef-
fectiveness of our approach, learning by sketching, by ad-
dressing multiple research questions in various downstream
tasks, including: (i)describing the relationships between
geometric primitives, (ii)demonstrating simple spatial rea-
soning ability by conveying global and local geometric in-
formation, (iii) containing general geometric information
shared across different domains, and (iv)improving perfor-
mance on FG-SBIR, a traditional sketch task.
|
Li_Class_Balanced_Adaptive_Pseudo_Labeling_for_Federated_Semi-Supervised_Learning_CVPR_2023
|
Abstract
This paper focuses on federated semi-supervised learn-
ing (FSSL), assuming that few clients have fully labeled
data (labeled clients) and the training datasets in other
clients are fully unlabeled (unlabeled clients). Existing
methods attempt to deal with the challenges caused by not
independent and identically distributed data (Non-IID) set-
ting. Though methods such as sub-consensus models have
been proposed, they usually adopt standard pseudo label-
ing or consistency regularization on unlabeled clients which
can be easily influenced by imbalanced class distribution.
Thus, problems in FSSL are still yet to be solved. To seek
for a fundamental solution to this problem, we present Class
Balanced Adaptive Pseudo Labeling (CBAFed), to study
FSSL from the perspective of pseudo labeling. In CBAFed,
the first key element is a fixed pseudo labeling strategy to
handle the catastrophic forgetting problem, where we keep
a fixed set by letting informative unlabeled data pass at the beginning of the unlabeled client training in each
communication round. The second key element is that we
design class balanced adaptive thresholds via consider-
ing the empirical distribution of all training data in local
clients, to encourage a balanced training process. To make
the model reach a better optimum, we further propose a
residual weight connection in local supervised training and
global model aggregation. Extensive experiments on five
datasets demonstrate the superiority of CBAFed. Code will
be available at https://github.com/minglllli/
CBAFed .
|
1. Introduction
Federated learning (FL) aims to train machine learning
models on a decentralized manner while preserving data
privacy, i.e., separate local models are trained on separate
local training datasets independently. In recent years, FL
has received much attention for privacy protection reasons
[32]. However, most FL works have focused on supervised learning with fully labeled data. In practice, however, labeling large-scale training data is laborious and expensive. Due to a lack of funds or experts, large labeled training datasets are difficult for many companies and institutions to obtain. This may hinder the applicability of FL.
To handle this problem, federated semi-supervised learn-
ing (FSSL) has been explored by many researchers recently
[8, 14, 15]. There are broadly three lines of FSSL methods
by considering different places and status of labeled data.
The first two lines consider that there are only limited la-
beled data in the central server [8] or each client has par-
tially labeled data [15]. The third line assumes that few
clients have fully labeled data and the training datasets in
other clients are fully unlabeled [14, 18, 31]. Our paper
mainly focuses on the third line of FSSL; we refer to local clients with fully labeled data as labeled clients and to the other clients as unlabeled clients.
The main difficulties in training a third-line FSSL model are threefold: 1) There are no labeled data in unlabeled
clients. Thus, the training can be easily biased without
label guidance. 2) Due to the divergent class distribution
of labeled and unlabeled clients, namely the not indepen-
dent and identically distributed data (Non-IID) setting, in-
accurate supervisory signals may be generated in unlabeled
clients via employing the model trained in labeled clients by
either pseudo labeling or consistency regularization frame-
work. 3) Due to the catastrophic forgetting problem in CNNs, as training on unlabeled clients proceeds, models may forget the knowledge learned on labeled clients, drastically decreasing prediction accuracy.
RSCFed [14], the state-of-the-art method, significantly boosts FSSL performance by first distilling sub-consensus models and then aggregating them into the global model. The sub-consensus models can handle the Non-IID setting to some extent, but the mean-teacher-based consistency regularization framework on unlabeled clients inevitably causes accuracy degradation when classes are imbalanced. RSCFed at-
tempts to address the catastrophic forgetting problem by
adjusting aggregation weights for labeled and unlabeled
clients. However, this only seems to alleviate the negative impact on the unlabeled clients, and the problem is still yet to be tackled. Besides, the sub-consensus models may suffer from unstable training and increase the communication cost. Other methods like FedConsist [31] and FedIRM
[18] achieve promising results by utilizing pseudo labeling
methods to produce artificial labels, but they do not consider
the Non-IID setting among local clients.
Pseudo labeling methods are shown to be effective in
semi-supervised learning (SSL) [13, 24, 33, 37], which gen-
erate pseudo-labels for unlabeled images and propose sev-
eral strategies to ensure the quality of pseudo-labels. Ex-
isting FSSL works only adopt standard pseudo labeling or
consistency regularization. Since the aforementioned difficulties hamper the usage of these methods, additional remedies are usually proposed to alleviate them, which is only palliative. Thus, we propose to study FSSL from the perspective of pseudo labeling, seeking a fundamental solution to this problem.
Concretely, we present Class Balanced Adaptive Pseudo
Labeling, namely CBAFed, by rethinking standard pseudo
labeling methods in SSL. To handle the catastrophic forget-
ting problem, we propose a fixed pseudo labeling strategy,
which builds a fixed set by letting informative unlabeled data and their pseudo labels pass at the beginning of the unlabeled client training. Due to the Non-IID and heterogeneous data partition problems in FL, the training distribution of
unlabeled data can be highly imbalanced, so existing thresh-
olds are not suitable in FSSL. We design class balanced
adaptive thresholds via considering the empirical distribu-
tion of all training data in local clients at the previous com-
munication round. Analysis proves that our method sets
a reasonably high threshold for extremely scarce classes
and encourages a balanced training process. To enhance
the learning ability and discover unlabeled data from tail
classes, we propose to leverage information from so-called
“not informative” unlabeled data. Besides, we also explore
a novel training strategy for labeled clients and the central
server, termed residual weight connection, which skip-connects weights from previous epochs (for labeled clients) or previous global models (for the central server). It can help the model reach a better optimum when the training distribution is imbalanced and the amount of training data is small. We con-
duct extensive experiments on five datasets to show the ef-
fectiveness of CBAFed. Overall, our main contributions can
be summarized as follows:
• We present CBAFed to deal with the catastrophic forgetting problem in federated learning. Unlike existing FSSL frameworks that directly adopt pseudo labeling or consistency regularization methods, CBAFed explores a fundamental solution to FSSL via designing a novel pseudo labeling method.
• We introduce a residual weight connection method to improve the robustness of the models on labeled clients and the central server, which skip-connects weights from the previous epoch or communication round to reach a better optimum.
• Experiments are conducted on five datasets: four natural image datasets (CIFAR-10/100, SVHN, Fashion-MNIST) and one medical dataset (ISIC 2018 Skin). CBAFed outperforms state-of-the-art FSSL methods by a large margin, with an 11.3% improvement on the SVHN dataset.
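As a toy illustration of the class balanced adaptive pseudo labeling summarized above, the sketch below derives per-class thresholds from the empirical class distribution of the previous round and builds a fixed set of confident pseudo-labeled samples at the start of unlabeled-client training; the specific mapping from class frequency to threshold is a plausible stand-in, not the paper's exact formula:

# Toy sketch of class balanced adaptive thresholds and the fixed pseudo-label set.
import numpy as np

def class_balanced_thresholds(class_counts, base=0.5, span=0.45):
    """Map last-round empirical class counts to per-class confidence thresholds."""
    counts = np.asarray(class_counts, dtype=np.float64)
    freq = counts / max(counts.sum(), 1.0)
    # Scarcer class -> lower relative frequency -> higher threshold.
    return base + span * (1.0 - freq / max(freq.max(), 1e-8))

def build_fixed_pseudo_set(probs, thresholds):
    """probs: (N, C) softmax outputs on the client's unlabeled data."""
    probs = np.asarray(probs, dtype=np.float64)
    conf = probs.max(axis=1)
    pseudo = probs.argmax(axis=1)
    keep = conf >= thresholds[pseudo]        # per-class adaptive threshold
    return pseudo[keep], keep                # fixed set reused for the whole round

# Toy usage: class 2 is scarce, so it receives a higher threshold.
counts = np.array([500, 300, 20])
thresholds = class_balanced_thresholds(counts)
probs = np.random.dirichlet(np.ones(3), size=8)
labels, mask = build_fixed_pseudo_set(probs, thresholds)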
|
Lin_Catch_Missing_Details_Image_Reconstruction_With_Frequency_Augmented_Variational_Autoencoder_CVPR_2023
|
Abstract
The popular VQ-VAE models reconstruct images by learning a discrete codebook but suffer from rapid degradation of reconstruction quality as the compression rate rises. One major reason is that a higher compression rate induces more loss of visual signals in the higher frequency spectrum, which reflect the details in pixel space. In this paper, a Frequency
Complement Module (FCM) architecture is proposed to
capture the missing frequency information for enhancing
reconstruction quality. The FCM can be easily incorpo-
rated into the VQ-VAE structure, and we refer to the new
model as Frequency Augmented VAE (FA-VAE). In ad-
dition, a Dynamic Spectrum Loss (DSL) is introduced to
guide the FCMs to balance between various frequencies
dynamically for optimal reconstruction. FA-VAE is further
extended to the text-to-image synthesis task, and a Cross-
attention Autoregressive Transformer (CAT) is proposed to
obtain more precise semantic attributes in texts. Extensive
reconstruction experiments with different compression rates
are conducted on several benchmark datasets, and the re-
sults demonstrate that the proposed FA-VAE is able to re-
store more faithfully the details compared to SOTA meth-
ods. CAT also shows improved generation quality with bet-
ter image-text semantic alignment.
|
1. Introduction
VQ-VAE models [6, 11, 25, 29, 39, 46] reconstruct images
through learning a discrete codebook of latent embeddings.
They gained wide popularity due to the scalable and versa-
tile codebook, which can be broadly applied to many visual
tasks such as image synthesis [11,49] and inpainting [4,31].
A higher compression rate is typically preferable in VQ-
[Figure 1 panels: VQ-GAN [11], f = 16 (lpips 0.30, rFID 9.49); FA-VAE (ours), f = 16 (lpips 0.21, rFID 5.90); FA-VAE (ours), f = 4 (lpips 0.07, rFID 2.25); original image; image spectrum.]
Figure 1. Images and their frequency maps. Row 1: original and
reconstructed images. Row 2: the frequency maps of images, fre-
quency increases in any direction away from the center. f is the
compression rate. With a greater compression rate, more details
are lost during reconstruction, i.e. eyes and mouth shape, and
hair texture (pointed with red arrows) which align with the loss
of high-frequency features. All frequency figures in this paper use
the same colormap. Lower is better for rFID [14] and lpips [16]; frequency values increase from red to green. Zoom in for
better visualization.
VAE models, since it provides memory efficiency and better learning of coherent semantic structures [11, 40].
One main challenge quickly arises at higher compression rates: reconstruction accuracy is severely compromised. Figure 1 row 1 shows that although the recon-
structed images at higher compression rates appear consis-
tent with the original image, detail inconsistencies such
as the color and contour of the lips become apparent upon
closer scrutiny. Figure 1 row 2 reveals that similar degra-
dation also manifests in the frequency domain, where features towards the middle and higher frequency spectrum are the least recoverable at greater compression rates.
Several causes stand behind this gap between pixel and
frequency space. The convolutional nature of autoen-
coders makes them prone to spectral bias, which favors learning low-
frequency features [22, 36]. This challenge is further ag-
gravated when current methods exclusively design losses
or improve model architectures for better semantic resemblance [11, 16, 25] but often neglect alignment in the frequency domain [12, 15]. On top of that, it is intuitively more
challenging for a decoder to reconstruct an image patch
from a single codebook embedding (high compression) than
multiple embeddings (less compression). The reason is that
the former mixes up features of incomplete and diverse fre-
quencies, while the latter could preserve more fine-grained
and complete features at various frequencies.
Inspired by these insights, the Frequency Augmented
VAE (FA-VAE) model is proposed, which aims to improve
reconstruction quality by achieving better alignment on the
frequency spectrums between the original and reconstructed
images. More specifically, new modules named Frequency
Complement Modules (FCM) are crafted and embedded at
multiple layers of FA-VAE's decoder to learn to comple-
ment the decoder’s features with missing frequencies.
We observe that valuable middle and high frequencies
are mingled in the encoder's feature maps during compression, as shown in Figure 3 row 4.
Therefore, a new loss termed Spectrum Loss (SL) is pro-
posed to guide FCMs to generate missing features that
align with the same level’s encoder features on the fre-
quency domain. Since most image semantics reside on the
low-frequency spectrum [48], SL prioritizes learning lower-
frequency features with diminishing weights as frequencies
increase.
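A minimal sketch of such a frequency-weighted alignment loss is given below; the exponential decay over radial frequency is an illustrative stand-in for SL's diminishing weights, and DSL would additionally make the effective low-frequency range (here the decay rate) a learnable, per-layer quantity:

# Sketch of a frequency-domain alignment loss with weights that decay with frequency.
import torch

def spectrum_loss(dec_feat, enc_feat, decay=4.0):
    """dec_feat, enc_feat: (B, C, H, W) feature maps from the same level."""
    H, W = dec_feat.shape[-2:]
    f_dec = torch.fft.fftshift(torch.fft.fft2(dec_feat, norm="ortho"), dim=(-2, -1))
    f_enc = torch.fft.fftshift(torch.fft.fft2(enc_feat, norm="ortho"), dim=(-2, -1))
    # Radial frequency: 0 at the spectrum center (low), grows towards the borders (high).
    fy = torch.linspace(-0.5, 0.5, H, device=dec_feat.device).view(H, 1)
    fx = torch.linspace(-0.5, 0.5, W, device=dec_feat.device).view(1, W)
    radius = torch.sqrt(fy ** 2 + fx ** 2)
    weight = torch.exp(-decay * radius)            # emphasize lower frequencies
    return (weight * (f_dec - f_enc).abs() ** 2).mean()

# Toy usage with random feature maps of matching shape.
loss = spectrum_loss(torch.randn(2, 8, 32, 32), torch.randn(2, 8, 32, 32))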
Interestingly, we discover that checkerboard patterns ap-
pear in the complemented decoder’s features with SL, al-
though better reconstruction performance is achieved (Fig-
ure 3 column 4). We speculate that this is because SL sets a deterministic range for the low-frequency spectrum when applying weights to the frequencies, without considering that the importance of a frequency can vary from layer to layer.
Thus, an improved loss function Dynamic Spectrum Loss
(DSL) is crafted on top of SL with a learnable component to
adjust the range of low-frequency spectrum dynamically for
optimal reconstruction. DSL can improve reconstruction
quality even further than SL without the unnatural checker-
board artifacts in the features (Figure 3 column 5).
We further extend FA-VAE to the text-to-image gener-
ation task and propose the Cross-attention Autoregressive
Transformer (CAT) model. We first observe that only using
one or a few token embeddings is a coarse representation of
lengthy texts [8, 27, 33]. Thus CAT uses all token embed-
dings as a condition for more precise guidance. Moreover,
existing works typically use self-attention, and the text con-
dition is embedded merely at the beginning of the genera-
tion [11, 49]. This mechanism becomes problematic in the
autoregressive generation because one image token is gen-
erated at a time, thus the text condition gradually loosens its
connection with the generated tokens. To circumvent this issue, CAT embeds a cross-attention mechanism that allows
the text condition to guide generation at each step.
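A simplified decoder layer illustrating this per-step conditioning is sketched below; it is a generic causal self-attention plus cross-attention block in PyTorch, with placeholder dimensions, not the exact CAT architecture:

# Sketch of an autoregressive decoder layer with per-step text cross-attention.
import torch
import torch.nn as nn

class CrossAttnDecoderLayer(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, img_tokens, text_tokens, causal_mask):
        # Causal self-attention over the image tokens generated so far.
        q = self.norm1(img_tokens)
        x = img_tokens + self.self_attn(q, q, q, attn_mask=causal_mask)[0]
        # Cross-attention: every generation step attends to all text token embeddings.
        x = x + self.cross_attn(self.norm2(x), text_tokens, text_tokens)[0]
        return x + self.ff(self.norm3(x))

# Toy usage: 16 image tokens conditioned on 77 text token embeddings.
layer = CrossAttnDecoderLayer()
img = torch.randn(2, 16, 512)
txt = torch.randn(2, 77, 512)
mask = torch.triu(torch.ones(16, 16), diagonal=1).bool()   # causal mask
out = layer(img, txt, mask)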
To summarize, our work includes the following contri-
butions:
• We propose a new type of architecture called Fre-
quency Augmented VAE (FA-VAE) for improving im-
age reconstruction by restoring details more accurately.
• We propose a new loss called Spectrum Loss (SL)
and its enhanced version Dynamic Spectrum Loss (DSL), which guides the Frequency Complement Modules (FCM) in FA-VAE to adaptively learn different
low/high frequency mixtures for optimal reconstruc-
tion.
• We propose a new Cross-attention Autoregressive
Transformer (CAT) for text-to-image generation using
more fine-grained textual embeddings as a condition
with a cross-attention mechanism for better image-text
semantic alignment.
|
Li_An_In-Depth_Exploration_of_Person_Re-Identification_and_Gait_Recognition_in_CVPR_2023
|
Abstract
The target of person re-identification (ReID) and gait
recognition is consistent, that is to match the target pedes-
trian under surveillance cameras. For the cloth-changing
problem, video-based ReID is rarely studied due to the
lack of a suitable cloth-changing benchmark, and gait
recognition is often researched under controlled condi-
tions. To tackle this problem, we propose a Cloth-Changing
benchmark for Person re-identification and Gait recogni-
tion (CCPG). It is a cloth-changing dataset, and there are
several highlights in CCPG, (1) it provides 200 identities
and over 16K sequences are captured indoors and out-
doors, (2) each identity has seven different cloth-changing
statuses, which is hardly seen in previous datasets, (3)
RGB and silhouettes version data are both available for
research purposes. Moreover, aiming to investigate the
cloth-changing problem systematically, comprehensive ex-
periments are conducted on video-based ReID and gait
recognition methods. The experimental results demon-
strate the superiority of ReID and gait recognition sepa-
rately in different cloth-changing conditions and suggest
that gait recognition is a potential solution for addressing
the cloth-changing problem. Our dataset will be available
at https://github.com/BNU-IVC/CCPG.
|
1. Introduction
With the rapid development of video devices, more and
more surveillance systems are deployed in real applications
and play an essential role in protecting the security of our
society. However, along with the significantly growing
Figure 1. The targets of person re-identification and gait recogni-
tion are consistent: to search for the target person in the gallery.
The main difference is that gait recognition normally requires
pedestrian segmentation.
amount of data, traditional social security is facing two
dilemmas: data storage and inefficient monitoring. Due to
artificial intelligence progress and diverse demand in so-
cial security, many security technologies have come into
our sights, such as face recognition, person re-identification
(ReID), and gait recognition. These technologies reduce
data size vastly and improve efficiency in social security.
Video-based ReID plays a vital role in surveillance video
analysis, and it intends to match the probe sequence of the
specific person from gallery sequences in surveillance sys-
tems by learning features of multiple frames [5, 27, 43].
Compared with image-based ReID [10,21,36], video-based
ReID provides both appearance and rich temporal informa-
tion. Previous video-based ReID methods [9, 12, 19, 20, 33,
36, 38] focus on the cloth-consistent setting, where people
do not change their clothes in the short term. However, in
the real world, clothes-changing situations are everywhere.
Due to its practical application in social security, cloth-
changing ReID has gotten more attention in these years.
Another technology is gait recognition. Gait is a unique
biometric characteristic that describes the walking patterns
of one person [23] and contains spatial statics and temporal
dynamics in the walking process. Compared with other bio-
metric features, gait is hard to disguise and can work in long
distances [13], so it has a potential for social security. Popu-
lar gait recognition datasets [28,37] are collected under con-
trol conditions, and some real-world benchmarks [41, 44]
come into our sights in these years. Many gait recogni-
tion methods [4, 7, 17, 18, 24] are mainly developed based
on these datasets. Several challenges are studied explicitly,
such as carrying bags and cloth-changing. However, the re-
search on gait recognition in real scenarios is insufficient.
So gait recognition in the wild has attracted increasing at-
tention.
In real scenarios, the final targets of ReID and gait recog-
nition are consistent, that is, to recognize the target person
across cameras, as shown in Fig. 1. Previous works [3, 39]
conduct experiments to compare the performance of video-
based ReID and gait recognition on video-based ReID
datasets, which are cloth-unchanged. However, dressing
variation in practical applications is a significant problem,
especially the finer cloth-changing problem, e.g., changing
a whole fit, changing only tops, and changing only pants.
Little work specializes in these cloth-changing conditions,
and no comparison experiment has been conducted between
video-based ReID and gait recognition. The main reason
is the lack of such a cloth-changing benchmark for com-
parison of cloth-changing conditions. So it is time to build
such a cloth-changing dataset for video-based ReID and gait
recognition. First of all, we should take three aspects into
consideration. (1) RGB data . A benchmark should pro-
vide RGB data for comparison between video-based ReID
and gait recognition at least, because RGB data is the basic
research data for them. (2) Clothes statuses . People usu-
ally change their clothes daily, and a dataset should contain
different clothing statuses close to daily life. (3) Collec-
tion environment . The raw data should not be collected
under strict restrictions, which makes the dataset similar to
real-world applications as much as possible since our final
target is to satisfy social security demands in the real world.
To our best knowledge, no such public dataset could sat-
isfy all the requirements mentioned above. CCVID [8] only
contains the front views of the pedestrians, rather than di-
verse views. GREW [44] lacks RGB data and finer clothes
changing.
Therefore, in this paper, we propose a benchmark, Cloth-
Changing benchmark for Person re-identification and Gait
recognition (CCPG). Notably, the CCPG dataset has sev-
eral important features. (1)It contains 200 subjects wearing
many different clothes and over 16,000 sequences, and the
RGB data is available. (2) Subjects have seven different clothing statuses, whose cloth-changing ways include chang-
ing a whole fit, changing tops, changing pants, and carrying
bags, and these abundant types of dressing are supported
for cloth-changing comparison. In addition, more accu-
rate tracking results are provided and reduce the noise in
the dataset, because precise human detection guarantees the
following recognition mission. (3)Our raw data is collected
indoors and outdoors, and many challenges are considered,
including viewing angles, occlusions, illumination changes,
etc., which make it more similar to the real world.
Taking advantages of our new cloth-changing bench-
mark, we conduct a series of systematic experiments be-
tween video-based ReID and gait recognition, aiming to
compare their performance in different dressing settings.
First, leading video-based ReID methods are performed
on the CCPG dataset in different cloth-changing settings,
which indicates that they are sensitive to the appearance
features and still have room to improve on cloth-changing
problems. Second, popular gait recognition methods are
implemented on the CCPG dataset and achieve higher per-
formance in some cloth-changing settings. In addition,
gait recognition methods are as good as ReID in partial
cloth-changing situations, e.g., only changing tops and only
changing pants. Third, we compare the performance of
these pedestrian retrieval methods and analyze experimental
results in different cloth-changing settings. The key to ad-
dressing the cloth-changing problem is to exploit the cloth-
invariant features, and these experimental results demon-
strate the SOTA gait recognition methods have more poten-
tial for cloth-changing problems.
In summary, our main contributions are summarized as
follows:
• First, we present a brand new cloth-changing bench-
mark, named CCPG. Compared with others, our
benchmark provides finer cloth-changing conditions.
We hope it could help study the cloth-changing prob-
lem for video-based ReID and gait recognition. Impor-
tantly, it should be emphasized that our data collection
obtains permission from authorities and adult volun-
teers for research purposes.
• Second, with the motivation to study the performance
of video-based ReID and gait recognition methods un-
der different cloth-changing conditions, comprehen-
sive experiments are conducted on them, and the re-
sults show the better performance of gait recognition
on CCPG, and suggest that gait recognition is a poten-
tial solution for solving cloth-changing problems.
|
Liu_Progressive_Neighbor_Consistency_Mining_for_Correspondence_Pruning_CVPR_2023
|
Abstract
The goal of correspondence pruning is to recognize cor-
rect correspondences (inliers) from initial ones, with appli-
cations to various feature matching based tasks. Seeking
neighbors in the coordinate and feature spaces is a com-
mon strategy in many previous methods. However, it is
difficult to ensure that these neighbors are always consis-
tent, since the distribution of false correspondences is ex-
tremely irregular. For addressing this problem, we propose
a novel global-graph space to search for consistent neigh-
bors based on a weighted global graph that can explicitly
explore long-range dependencies among correspondences.
On top of that, we progressively construct three neighbor
embeddings according to different neighbor search spaces,
and design a Neighbor Consistency block to extract neigh-
bor context and explore their interactions sequentially. In
the end, we develop a Neighbor Consistency Mining Net-
work (NCMNet) for accurately recovering camera poses
and identifying inliers. Experimental results indicate that
our NCMNet achieves a significant performance advantage
over state-of-the-art competitors on challenging outdoor
and indoor matching scenes. The source code can be found
at https://github.com/xinliu29/NCMNet.
|
1. Introduction
Estimating high-quality feature correspondences be-
tween two images is of crucial significance to numerous
computer vision tasks, such as visual simultaneous local-
ization and mapping (SLAM) [33], structure from mo-
tion (SfM) [41, 49], image fusion [31], and image reg-
istration [48, 51]. Off-the-shelf feature extraction meth-
ods [4, 29, 52] can be employed to establish initial corre-
spondences. Due to complex matching situations ( e.g., se-
vere viewpoint variations, illumination changes, occlusions,
blurs, and repetitive structures), a great number of false cor-
respondences, called outliers, are inevitable [19, 30]. To
mitigate the negative impact of outliers for downstream
tasks, correspondence pruning [5, 24, 53] can be imple-
Figure 1. The acquisition process and visual comparison of (a)
spatial neighbors, (b) feature-space neighbors, and (c) global-
graph neighbors. SGC : the spectral graph convolution operation.
mented to further identify correct correspondences, also
known as inliers, from initial ones. However, unlike images
that contain sufficient information, e.g., texture and RGB
information, correspondence pruning is extremely challeng-
ing since the spatial positions of initial correspondences are
discrete and irregular [55].
Intuitively, inliers commonly conform to consistent con-
straints ( e.g., lengths, angles, and motion) under the
2D rigid transformation, while outliers are randomly dis-
tributed, see the top left of Fig. 1. Therefore, the con-
sistency of correspondences as a vital priori knowledge
has been studied extensively to distinguish inliers and out-
liers [9, 13, 14], in which the neighbor consistency has re-
ceived widespread attention [25, 26, 57]. For well-defined
neighbors, previous approaches [5, 8, 27] employ k-nearest
neighbor ( knn) search in the coordinate space of raw corre-
spondences to seek spatially consistent neighbors, denoted
as spatial neighbors. Subsequently, several works, such
as CLNet [57] and MS2DG-Net [11], search for feature-
consistent neighbors ( i.e., feature-space neighbors) by per-
forming knn search in the feature space learned from the
neural network. They devise various strategies to exploit
the neighbor consistency of correspondences, and show sat-
isfactory progress. Nevertheless, there may exist numerous
outliers in the vicinity of an inlier due to the random dis-
tribution of outliers, especially for the challenging match-
ing scenes. As shown in Fig. 1, in the coordinate and fea-
ture spaces, the searched neighbors of a sampled inlier (blue
line) always contain some unexpected outliers (red line).
To tackle this issue, we propose a new global-graph
space to seek consistent neighbors for each correspondence.
Inspired by the fact that inliers have strong consistency at
a global level [13, 22, 23], we first construct a weighted
global graph, in which nodes denote all correspondences
and edges represent their pairwise affinities calculated by
the preliminary inlier scores. The dependence between two
correspondences is determined to be tight if they have high
scores simultaneously. Next, we use a spectral graph con-
volution operation [21,57] to further explore long-range de-
pendencies among correspondences and increase the dis-
crimination between inliers and outliers. We finally adopt
knn search in the global-graph space to search for globally
consistent neighbors, called global-graph neighbors as il-
lustrated in Fig. 1(c). Noteworthily, the positions of global-
graph neighbors are not required to be spatially close to the
sampled inlier. In other words, this kind of neighbor has a
large search region (see the ablation for quantitative results)
due to our global operation.
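The following sketch shows one way to realize this global-graph neighbor search: pairwise affinities are derived from the preliminary inlier scores, a single normalized graph-convolution step propagates long-range information, and knn is then performed in the resulting embedding space; both the affinity definition and the one-step SGC are simplifying assumptions consistent with the description above:

# Sketch of global-graph neighbor search from preliminary inlier scores.
import torch

def global_graph_neighbors(feat, scores, k=9):
    """feat: (N, C) correspondence features; scores: (N,) preliminary inlier scores."""
    w = torch.sigmoid(scores)
    affinity = w[:, None] * w[None, :]               # tight when both scores are high
    affinity.fill_diagonal_(0)
    deg = affinity.sum(dim=1).clamp(min=1e-8)
    adj_norm = affinity / torch.sqrt(deg[:, None] * deg[None, :])   # D^-1/2 A D^-1/2
    embed = adj_norm @ feat                           # one spectral graph convolution step
    dist = torch.cdist(embed, embed)                  # pairwise distances in graph space
    return dist.topk(k + 1, largest=False).indices[:, 1:]   # drop self, keep k neighbors

# Toy usage on random correspondences.
neighbors = global_graph_neighbors(torch.randn(100, 128), torch.randn(100))   # (100, 9)

Note that nothing constrains these neighbors to be spatially close to the query correspondence, which is what gives this neighbor type its large search region.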
Moreover, a single type of neighbor is inadequate for
all complex matching situations. Therefore, we present a
new Neighbor Consistency (NC) block to take full advan-
tage of three types of neighbors and improve the robust-
ness. Specifically, we progressively construct three neigh-
bor embeddings according to the spatial, feature-space, and
global-graph neighbors. To extract corresponding neigh-
bor context and explore their interactions, we design two
successive layers, i.e., Self-Context Extraction (SCE) layer
and Cross-Context Interaction (CCI) layer. The SCE layer
is responsible for dynamically capturing intra-neighbor re-
lations and aggregating their context, while the CCI layer
fuses and modulates inter-neighbor interactive information.
Finally, an effective Neighbor Consistency Mining Network
(NCMNet) is developed to achieve correspondence pruning.
Our contributions are three-fold: (1) Based on the fact
that inliers have strong consistency at a global level, we pro-
pose a novel global-graph space to seek consistent neigh-
bors for each correspondence. (2) We present a new NC
block to progressively mine the consistency of three types of
neighbors by extracting intra-neighbor context and explor-
ing inter-neighbor interactions in a sequential manner. (3)We develop an effective NCMNet for correspondence prun-
ing, obtaining considerable performance gains when com-
pared to state-of-the-art works.
|
Lin_Neural_Scene_Chronology_CVPR_2023
|
Abstract
In this work, we aim to reconstruct a time-varying 3D
model, capable of rendering photo-realistic renderings with
independent control of viewpoint, illumination, and time,
from Internet photos of large-scale landmarks. The core
challenges are twofold. First, different types of temporal
changes, such as illumination and changes to the under-
lying scene itself (such as replacing one graffiti artwork
with another) are entangled together in the imagery. Sec-
ond, scene-level temporal changes are often discrete and
sporadic over time, rather than continuous. To tackle these
problems, we propose a new scene representation equipped
with a novel temporal step function encoding method that
can model discrete scene-level content changes as piece-
wise constant functions over time. Specifically, we represent
the scene as a space-time radiance field with a per-image
illumination embedding, where temporally-varying scene
changes are encoded using a set of learned step functions.
To facilitate our task of chronology reconstruction from In-
ternet imagery, we also collect a new dataset of four scenes
that exhibit various changes over time. We demonstrate that
our method exhibits state-of-the-art view synthesis results
on this dataset, while achieving independent control of view-
point, time, and illumination. Code and data are available
at https://zju3dv.github.io/NeuSC/.
|
1. Introduction
If we revisit a space we once knew during our childhood,
it might not be as we remembered it. The buildings may
have weathered, or have been newly painted, or may have
been replaced entirely. Accordingly, there is no such thing
as a single, authoritative 3D model of a scene—only a model
of how it existed at a given instant in time. For a famous
landmark, Internet photos can serve as a kind of chronicle
of that landmark’s state over time, if we could organize the
information in those photos in a coherent way. For instance,
if we could reconstruct a time-varying 3D model, then we
could revisit the scene at any desired point in time.
In this work, we explore this problem of chronology re-
construction , revisiting the work on Scene Chronology from
nearly a decade ago [27]. As in that work, we seek to use In-
ternet photos to build a 4D model of a scene, from which we
can dial in any desired time (within the time interval where
we have photos). However, the original Scene Chronol-
ogy work was confined to reconstructing planar, rectangular
scene elements, leading to limited photo-realism. We can
now revisit this problem with powerful neural scene represen-
tations, inspired by methods such as NeRF in the Wild [26].
However, recent neural reconstruction methods designed for
Internet photos assume that the underlying scene is static,
which works well for landmarks with a high degree of per-
manence, but fails for other scenes, like New York’s Times
Square, that feature more ephemeral elements like billboards
and advertisements.
However, we find that adapting neural reconstruction
methods [26] to the chronology reconstruction problem has
many challenges, and that straightforward extensions do
not work well. For instance, augmenting a neural radiance
field (NeRF) model with an additional time input t, and fit-
ting the resulting 4D radiance field to a set of images with
timestamps yields temporally oversmoothed models, where
different scene appearances over time are blended together,
forming ghosted content; such a model underfits the tempo-
ral signal. On the other hand, applying standard positional
encoding [31] to the time input overfits the temporal sig-
nal, conflating transient appearance changes due to factors
like illumination with longer-term, sporadic changes to the
underlying scene itself.
Instead, we seek a model that can disentangle transient,
per-image changes from longer-term, scene-level changes,
and that allows for independent control of viewpoint, time,
and illumination at render-time. Based on the observation
that scene-level content changes are often sudden, abrupt
“step function”-like changes (e.g., a billboard changing from
one advertisement to another), we introduce a novel encod-
ing method for time inputs that can effectively model piece-
wise constant scene content over time, and pair this method
with a per-image illumination code that models transient
appearance changes. Accordingly, we represent 4D scene
content as a multi-layer perceptron (MLP) that stores density
and radiance at each space-time (x, y, z, t) scene point, and takes an illumination code as a side input. The time input t to this MLP is encoded with our proposed step function
encoding that models piecewise constant temporal changes.
When fit to a set of input images, we find that our representa-
tion can effectively factor different kinds of temporal effects,
and can produce high-quality renderings of scenes over time.
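As a rough illustration of the idea, the sketch below encodes a normalized timestamp with a bank of learnable smooth step functions so that the encoding is nearly piecewise constant in t; the number of steps, the sharpness constant, and the module name are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class StepFunctionEncoding(nn.Module):
    """Encode a scalar time t as an (approximately) piecewise-constant vector.

    Sketch only: each output channel is a smooth step located at a learnable
    transition time; with a large sharpness the encoding is nearly constant
    between transitions, so scene content can change abruptly with t while
    staying flat elsewhere."""

    def __init__(self, num_steps=64, sharpness=100.0):
        super().__init__()
        # Learnable transition times, initialized uniformly in the unit interval.
        self.transitions = nn.Parameter(torch.linspace(0.0, 1.0, num_steps))
        self.sharpness = sharpness

    def forward(self, t):                # t: (B, 1), normalized to [0, 1]
        return torch.sigmoid(self.sharpness * (t - self.transitions))  # (B, num_steps)

# The encoding would be fed to the space-time radiance-field MLP together with
# the spatial input and a per-image illumination embedding (not shown here).
enc = StepFunctionEncoding()
print(enc(torch.rand(4, 1)).shape)  # torch.Size([4, 64])
```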
To evaluate our method, we collect a dataset of images from Flickr and calibrate them using COLMAP, resulting
in 52K successfully registered images. These photos are
sourced from four different scenes, including dense tourist ar-
eas, graffiti meccas, and museums, building upon the datasets
used in Scene Chronology. These scenes feature a variety of
elements that change over time, including billboards, graffiti
art, and banners. Experiments on these scenes show that
our method outperforms current state-of-the-art methods and
their extensions to space-time view synthesis [6, 26]. We
also present a detailed ablation and analysis of our proposed
time encoding method.
In summary, our work makes the following contributions:
•To the best of our knowledge, ours is the first work
to achieve photo-realistic chronology reconstruction,
allowing for high-quality renderings of scenes with
controllable viewpoint, time, and illumination.
•We propose a novel encoding method that can model
abrupt content changes without overfitting to transient
factors. This leads to a fitting procedure that can ef-
fectively disentangle illumination effects from content
changes in the underlying scene.
•We benchmark the task of chronology reconstruction
from Internet photos and make our dataset and code
available to the research community.
|
Karim_MED-VT_Multiscale_Encoder-Decoder_Video_Transformer_With_Application_To_Object_Segmentation_CVPR_2023
|
Abstract
Multiscale video transformers have been explored in a
wide variety of vision tasks. To date, however, the multiscale
processing has been confined to the encoder or decoder
alone. We present a unified multiscale encoder-decoder
transformer that is focused on dense prediction tasks in
videos. Multiscale representation at both encoder and de-
coder yields key benefits of implicit extraction of spatiotem-
poral features ( i.e. without reliance on input optical flow)
as well as temporal consistency at encoding and coarse-
to-fine detection for high-level ( e.g. object) semantics to
guide precise localization at decoding. Moreover, we pro-
pose a transductive learning scheme through many-to-many
label propagation to provide temporally consistent predic-
tions. We showcase our Multiscale Encoder-Decoder Video
Transformer (MED-VT) on Automatic Video Object Seg-
mentation (AVOS) and actor/action segmentation, where we
outperform state-of-the-art approaches on multiple bench-
marks using only raw images, without using optical flow.
|
1. Introduction
Transformers have been applied to a wide range of image
and video understanding tasks as well as other areas [13].
The ability of such architectures to establish data relation-
ships across space and time without the local biases in-
herent in convolutional and other similarly constrained ap-
proaches arguably is key to the success. Multiscale process-
ing has potential to enhance further the learning abilities of
transformers through cross-scale learning [4, 6, 8, 19, 43].
A gap exists, however, as no approach has emerged that
makes full use of multiscale processing during both encod-
ing and decoding in video transformers. Recent work has
focused on multiscale transformer encoding [8,19], yet does
not incorporate multiscale processing in the transformer de-
coder. Other work has proposed multiscale transformer de-
coding [6], yet was designed mainly for single images and
Figure 1. Comparison of state-of-the-art multiscale video trans-
formers and our approach. Video transformers take an input clip
and feed features to a transformer encoder-decoder, with alterna-
tive approaches using multiscale processing only in the encoder or
decoder. We present a unified multiscale encoder-decoder video
transformer (MED-VT), while predicting temporally consistent
segmentations through transductive learning. We showcase MED-
VT on two tasks, video object and actor/action segmentation.
did not consider the structured prediction nature inherent to
video tasks, i.e. the importance of temporal consistency.
In response, we present the first Multiscale Encoder
Decoder Video Transformer (MED-VT). At encoding, its
within and between-scale attention mechanisms allow it to
capture both spatial, temporal and integrated spatiotemporal
information. At decoding, it introduces learnable coarse-to-
fine queries that allow for precise target delineation, while
enforcing temporal consistency among the predicted masks
through transductive learning [38, 52].
We primarily illustrate the utility of MED-VT on the task
of Automatic Video Object Segmentation (AVOS). AVOS
separates primary foreground object(s) from background in
a video without any supervision, i.e. without information on
the objects of interest [53]. This task is important and chal-
lenging, as it is a key enabler of many subsequent visually-
guided operations, e.g., autonomous driving and augmented
reality. AVOS shares challenges common to any VOS task
(e.g., object deformation and clutter). Notably, however, the
requirement of complete automaticity imposes extra chal-
lenges to AVOS, as it does not benefit from any per-video
initialization. Lacking prior information, solutions must ex-
ploit appearance ( e.g., colour and shape) as well as motion
to garner as much information as possible.
MED-VT responds to these challenges. Its within and
between scale attention mechanisms capture both appear-
ance and motion information as well as yield temporal con-
sistency. Its learnable coarse-to-fine queries allow seman-
tically laden information at deeper layers to guide finer
scale features for precise object delineation. Its transductive
learning through many-to-many label propagation ensures
temporally consistent predictions. To showcase our model
beyond AVOS, we also apply it to actor/action segmenta-
tion. Figure 1 overviews MED-VT compared to others.
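The following is a minimal, illustrative sketch of many-to-many label propagation for temporal consistency: per-frame mask logits are smoothed through a feature-affinity matrix that connects every pixel to pixels in all frames of the clip. The shapes, the temperature, and the softmax affinity are assumptions, not MED-VT's exact transductive scheme.

```python
import torch
import torch.nn.functional as F

def propagate_labels(feats, logits, temperature=0.07):
    """Many-to-many label propagation across a clip (illustrative sketch only).

    feats:  (T, C, H, W) decoder features for T frames.
    logits: (T, K, H, W) initial per-frame segmentation logits.
    Every pixel attends to pixels of *all* frames, pulling predictions toward
    temporally consistent masks."""
    T, C, H, W = feats.shape
    f = F.normalize(feats.permute(0, 2, 3, 1).reshape(T * H * W, C), dim=-1)
    y = logits.permute(0, 2, 3, 1).reshape(T * H * W, -1)
    affinity = torch.softmax(f @ f.t() / temperature, dim=-1)  # (THW, THW)
    y_prop = affinity @ y                                       # propagated logits
    return y_prop.reshape(T, H, W, -1).permute(0, 3, 1, 2)

out = propagate_labels(torch.randn(4, 32, 16, 16), torch.randn(4, 2, 16, 16))
print(out.shape)  # torch.Size([4, 2, 16, 16])
```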
|
Lin_Actionlet-Dependent_Contrastive_Learning_for_Unsupervised_Skeleton-Based_Action_Recognition_CVPR_2023
|
Abstract
The self-supervised pretraining paradigm has achieved
great success in skeleton-based action recognition. How-
ever, these methods treat the motion and static parts equally,
and lack an adaptive design for different parts, which has a
negative impact on the accuracy of action recognition. To
realize the adaptive action modeling of both parts, we pro-
pose an Actionlet-Dependent Contrastive Learning method
(ActCLR). The actionlet, defined as the discriminative sub-
set of the human skeleton, effectively decomposes motion
regions for better action modeling. In detail, by con-
trasting with the static anchor without motion, we extract
the motion region of the skeleton data, which serves as
the actionlet, in an unsupervised manner. Then, center-
ing on actionlet, a motion-adaptive data transformation
method is built. Different data transformations are ap-
plied to actionlet and non-actionlet regions to introduce
more diversity while maintaining their own characteristics.
Meanwhile, we propose a semantic-aware feature pooling
method to build feature representations among motion and
static regions in a distinguished manner. Extensive experi-
ments on NTU RGB+D and PKUMMD show that the pro-
posed method achieves remarkable action recognition per-
formance. More visualization and quantitative experiments
demonstrate the effectiveness of our method. Our project
website is available at https://langlandslin.
github.io/projects/ActCLR/
|
1. Introduction
Skeletons represent human joints using 3D coordinate
locations. Compared with RGB videos and depth data,
skeletons are lightweight, privacy-preserving, and compact
to represent human motion. On account of being easier
and more discriminative for analysis, skeletons have been
widely used in action recognition task [19,23,31,32,46,48].
Figure 1. Our proposed approach (ActCLR) locates the motion regions as actionlet to guide contrastive learning.
Supervised skeleton-based action recognition methods [3,27,28] have achieved impressive performance. How-
ever, their success highly depends on a large amount of la-
beled training data, which is expensive to obtain. To get
rid of the reliance on full supervision, self-supervised learn-
ing [16, 32, 34, 49] has been introduced into skeleton-based
action recognition. It adopts a two-stage paradigm, i.e.first
applying pretext tasks for unsupervised pretraining and then
employing downstream tasks for finetuning.
According to learning paradigms, all methods can be
classified into two categories: reconstruction-based [14, 32,
41] and contrastive learning-based. Reconstruction-based
methods capture the spatial-temporal correlation by pre-
dicting masked skeleton data. Zheng et al. [49] first pro-
posed reconstructing masked skeletons for long-term global
motion dynamics. Besides, the contrastive learning-based
methods have shown remarkable potential recently. These
methods employ skeleton transformation to generate pos-
itive/negative samples. Rao et al. [24] applied Shear and
Crop as data augmentation. Guo et al. [8] further proposed
to use more augmentations, i.e., rotation, masking, and flipping, to improve the consistency of contrastive learning.
These contrastive learning works treat different regions
of the skeleton sequences uniformly. However, the motion
regions contain richer action information and contribute
more to action modeling. Therefore, it is sub-optimal to di-
rectly apply data transformations to all regions in the previ-
ous works, which may degrade the motion-correlated infor-
mation too much. For example, if the mask transformation
is applied to the hand joints in the hand raising action, the
motion information of the hand raising is totally impaired.
It will give rise to the false positive problem, i.e., the seman-
tic inconsistency due to the information loss between pos-
itive pairs. Thus, it is necessary to adopt a distinguishable
design for motion and static regions in the data sequences.
To tackle these problems, we propose a new actionlet-
dependent contrastive learning method (ActCLR) by treat-
ing motion and static regions differently, as shown in Fig. 1.
An actionlet [38] is defined as a conjunctive structure of
skeleton joints. It is expected to be highly representative of
one action and highly discriminative to distinguish the ac-
tion from others. The actionlet in previous works is defined
in a supervised way, which relies on action labels and has
a gap with the self-supervised pretext tasks. To this end,
in the unsupervised learning context, we propose to obtain
actionlet by comparing the action sequence with the aver-
age motion to guide contrastive learning. In detail, the av-
erage motion is defined as the average of all the series in
the dataset. Therefore, this average motion is employed as
the static anchor without motion. We contrast the action
sequence with the average motion to get the area with the
largest difference. This region is considered to be the re-
gion where the motion takes place, i.e., actionlet.
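A minimal sketch of this step, under simplifying assumptions: the static anchor is the element-wise mean over all training sequences, and the actionlet is taken to be the joints whose trajectories deviate most from that anchor. The deviation measure and the keep ratio are illustrative choices, not the paper's exact rule.

```python
import numpy as np

def actionlet_mask(sequence, average_motion, keep_ratio=0.3):
    """Unsupervised actionlet extraction (illustrative sketch).

    sequence:       (T, J, 3) one skeleton sequence (T frames, J joints, xyz).
    average_motion: (T, J, 3) dataset-wide average sequence (static anchor).
    Returns a boolean (J,) mask marking the joints whose trajectories deviate
    most from the static anchor, i.e. the assumed motion (actionlet) region."""
    # Per-joint deviation from the static anchor, accumulated over time.
    deviation = np.linalg.norm(sequence - average_motion, axis=-1).sum(axis=0)  # (J,)
    k = max(1, int(keep_ratio * deviation.shape[0]))
    threshold = np.sort(deviation)[-k]
    return deviation >= threshold

seqs = np.random.randn(100, 64, 25, 3)          # toy dataset: 100 sequences
anchor = seqs.mean(axis=0)                      # "average motion" static anchor
print(actionlet_mask(seqs[0], anchor).sum())    # number of actionlet joints
```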
Based on this actionlet, we design a motion-adaptive
transformation strategy. The actionlet region is transformed
by performing the proposed semantically preserving data
transformation. Specifically, we only apply stronger data
transformations to non-actionlet regions. With less inter-
ference in the motion regions, this motion-adaptive trans-
formation strategy makes the model learn better seman-
tic consistency and obtain stronger generalization perfor-
mance. Similarly, we utilize a semantic-aware feature pool-
ing method. By extracting the features in the actionlet re-
gion, the features can be more representative of the motion
without the interference of the semantics in static regions.
We provide thorough experiments and detailed analysis
on NTU RGB+D [17, 26] and PKUMMD [18] datasets to
prove the superiority of our method. Compared to the state-
of-the-art methods, our model achieves remarkable results
with self-supervised learning.
In summary, our contributions are summarized as fol-
lows:
• We propose a novel unsupervised actionlet-based con-
trastive learning method. Unsupervised actionlets are
mined as skeletal regions that are the most discrimina-
tive compared with the static anchor, i.e., the average
motion of all training data.
• A motion-adaptive transformation strategy is designed
for contrastive learning. In the actionlet region, we
employ semantics-preserving data transformations tolearn semantic consistency. And in non-actionlet re-
gions, we apply stronger data transformations to obtain
stronger generalization performance.
• We utilize semantic-aware feature pooling to extract
motion features of the actionlet regions. It makes fea-
tures to be more focused on motion joints without be-
ing distracted by motionless joints.
|
Li_Deep_Random_Projector_Accelerated_Deep_Image_Prior_CVPR_2023
|
Abstract
Deep image prior (DIP) has shown great promise in
tackling a variety of image restoration (IR) and general vi-
sual inverse problems, needing no training data . However,
the resulting optimization process is often very slow, in-
evitably hindering DIP’s practical usage for time-sensitive
scenarios. In this paper, we focus on IR, and propose
two crucial modifications to DIP that help achieve sub-
stantial speedup: 1) optimizing the DIP seed while freez-
ing randomly-initialized network weights, and 2) reduc-
ing the network depth. In addition, we reintroduce ex-
plicit priors, such as sparse gradient prior—encoded by
total-variation regularization, to preserve the DIP peak per-
formance. We evaluate the proposed method on three IR
tasks, including image denoising, image super-resolution,
and image inpainting, against the original DIP and vari-
ants, as well as the competing metaDIP that uses meta-
learning to learn good initializers with extra data . Our
method is a clear winner in obtaining competitive restora-
tion quality in a minimal amount of time. Our code is avail-
able at https://github.com/sun-umn/Deep-
Random-Projector .
|
1. Introduction
Deep image prior as an emerging “prior” for visual
inverse problems Deep image prior (DIP) [29] and its
variants [9, 21] have recently claimed numerous successes
in solving image restoration (IR) (e.g., image denoising,
super-resolution, and image inpainting) and general visual
inverse problems [22]. The idea is to parameterize a visual
object of interest, say x, as the output of a structured deep
neural network (DNN), i.e., x = G_θ(z), where G_θ is a DNN and z is a seed that is often drawn randomly and then frozen. Now given a visual inverse problem of the form y ≈ f(x)—y is the observation, f models the observation process, and ≈ allows modeling observational noise—and the typical maximum a posteriori (MAP)-inspired regularized data-fitting formulation (λ: regularization parameter):
\min_{x}\; \underbrace{\ell\big(y, f(x)\big)}_{\text{data-fitting loss}} + \lambda\, \underbrace{R(x)}_{\text{regularizer}}, \qquad (1)
one can plug in the reparametrization x = G_θ(z) to obtain
\min_{\theta}\; \ell\big(y, f\circ G_\theta(z)\big) + \lambda\, R\circ G_\theta(z), \qquad (2)
where ◦ denotes functional composition. A salient feature of DIP for IR is that it is training-free despite the presence of the DNN G_θ, i.e., no extra data other than y and f are needed for problem-solving.
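For concreteness, a minimal sketch of fitting the reparameterized objective in Eq. (2) for denoising (f taken as the identity and the explicit regularizer dropped); the tiny generator, seed size, and optimizer settings are placeholders rather than the DIP architecture used in the paper.

```python
import torch
import torch.nn as nn

# Toy stand-ins: f is the identity (denoising), R is omitted (lambda = 0),
# and G_theta is a small convolutional generator rather than the DIP hourglass.
y = torch.rand(1, 3, 64, 64)                       # noisy observation
z = torch.randn(1, 32, 64, 64)                     # random, frozen seed
G = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

for step in range(200):                            # early stopping would go here
    opt.zero_grad()
    loss = ((G(z) - y) ** 2).mean()                # data-fitting loss of Eq. (2)
    loss.backward()
    opt.step()
x_hat = G(z).detach()                              # recovered image estimate
```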
DIP is not a standalone prior, unlike traditional priors
such as sparse gradient [24], dark channel [8], and self-
similarity [7]: favorable results from DIP are obtained
as a result of the tight integration of architecture, over-
parametrization of G_θ, first-order optimization methods, and appropriate early stopping (ES). The G_θ in prac-
tice is often a structured convolutional neural network and
significantly overparameterized. Due to the heavy over-
parametrization, in principle, f ◦ G_θ(z) could perfectly fit y, especially when no extra regularization R ◦ G_θ(z) is present. When such overfitting occurs, the recovered x̂ = G_θ(z) accounts for the potential noise in y in addition to the desired visual content. What comes to the res-
cue is the hallmark “early-learning-then-overfitting” phe-
nomenon: the learning process picks up mostly the desired
visual content first before starting to fit the noise [16, 30],
which is believed to be a combined implicit regulariza-
tion effect of overparametrization, convolutional structures
in G_θ, and first-order optimization methods [10, 12]. So, if one can locate the peak-performance point and stop the fitting process right there, i.e., performing appropriate ES, they could get a good estimate for x.
Practical issues: overfitting and slowness Despite the
remarkable empirical success of DIP in solving IRs, there
Figure 1. Comparison of DIP and our DRP-DIP for image denoising. Left: intermediate reconstructions generated by DIP and DRP-DIP, respectively, at different time cutoffs. DRP-DIP can restore the skeleton of the image in 1 second, whereas DIP cannot do so even after 10 seconds. Right: PSNR trajectories over time. Our DRP-DIP reaches the performance peak in a much shorter time than DIP.
are a couple of pressing issues (see Fig. 1) impeding the
practical deployment of DIP:
•Overfitting : Most previous works reporting the suc-
cesses of DIP set the number of iterations directly, based
on visual inspection and trial-and-error. Visual inspec-
tion precludes large-scale batch processing or deployment
into unfamiliar (e.g., scientific imaging of unknown mi-
nuscule objects) or unobservable scenarios (e.g., multi-
dimensional objects that are hard to visualize), whereas
trial-and-error likely performs the tuning with reference
to the unknown groundtruth object. So, strictly speaking,
previous successes mostly only show the potential of DIP ,
without providing an operational ES strategy [16,22,30] .
Ideas that try to mitigate overfitting, including controlling
the capacity of Gθ, explicit regularization, and explicit
noise modeling, tend to prolong the iteration process and
push the peak performance to final iterations.
•Slowness : For DIP to reach the performance peak from
random initialization, it typically takes thousands of
steps. Although the number is comparable to that of typ-
ical iterative methods for solving Eq. (1), here each step
entails a forward and a backward pass through the Gθ
and hence is much more expensive. In fact, on a state-of-
the-art GPU card such as Nvidia V100, the whole pro-
cess can take up to tens of minutes for simple IR and
up to hours for advanced IR tasks. The optimization
slowness inevitably hinders DIP’s applicability to time-
sensitive problems.
These two issues are intertwined: mitigating overfitting es-
calates the slowness. Thus an ideal solution is to speed up
the process of climbing to the performance peak and then
stop around the peak .
Our focus and contributions Our previous works [16,
30] have proposed effective ES methods for DIP. In this
paper, we tackle the slowness issue. We propose an effec-
tive and efficient DIP variant, called deep random projector (DRP), that requires substantially less computation time
than DIP to obtain a comparable level of peak performance.
DRP consists of three judiciously chosen modifications to
DIP: (1) optimizing the DIP seed while freezing randomly-
initialized network weights, (2) reducing the network depth,
and (3) including additional explicit prior(s) for regulariza-
tion, such as total variation (TV) regularization that encodes
the sparse-gradient prior [24]. One can quickly see the su-
periority of our method from Fig. 1. Our main contributions
include:
• proposing deep random projector (DRP) that integrates
three essential modifications to DIP for speedup (Sec. 3);
• validating that DRP achieves peak performance compa-
rable to that of DIP in much less time on three IR tasks
(Sec. 4.2, Sec. 4.3, and Sec. 4.4), and showing that DRP is
much more efficient than meta-learning-based meta-DIP
for speedup (Sec. 4.6);
• demonstrating that our ES method in [30] can be directly
integrated, without modification, to detect near-peak per-
formance for DRP. Hence, we have a complete solution
to both overfitting and slowness issues in DIP (Sec. 4.5).
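A minimal sketch of the three modifications summarized above, under simplifying assumptions: the generator is shallow and its randomly initialized weights are frozen, only the seed z is optimized, and a total-variation term supplies the explicit prior. Network size, learning rate, and the TV weight are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

def total_variation(x):
    # Anisotropic TV: encourages the sparse-gradient prior.
    return (x[..., :, 1:] - x[..., :, :-1]).abs().mean() + \
           (x[..., 1:, :] - x[..., :-1, :]).abs().mean()

y = torch.rand(1, 3, 64, 64)                        # noisy observation, f = identity
G = nn.Sequential(                                  # modification (2): reduced depth
    nn.Conv2d(16, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
for p in G.parameters():
    p.requires_grad_(False)                         # modification (1): freeze random weights

z = torch.randn(1, 16, 64, 64, requires_grad=True)  # the seed is now the only variable
opt = torch.optim.Adam([z], lr=1e-2)
lam = 1e-2                                          # placeholder TV weight

for step in range(200):
    opt.zero_grad()
    x_hat = G(z)
    loss = ((x_hat - y) ** 2).mean() + lam * total_variation(x_hat)  # modification (3)
    loss.backward()
    opt.step()
```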
|
Li_A_Whac-a-Mole_Dilemma_Shortcuts_Come_in_Multiples_Where_Mitigating_One_CVPR_2023
|
Abstract
Machine learning models have been found to learn
shortcuts—unintended decision rules that are unable to
generalize—undermining models’ reliability. Previous works
address this problem under the tenuous assumption that only
a single shortcut exists in the training data. Real-world im-
ages are rife with multiple visual cues from background to
texture. Key to advancing the reliability of vision systems
is understanding whether existing methods can overcome
multiple shortcuts or struggle in a Whac-A-Mole game, i.e.,
where mitigating one shortcut amplifies reliance on others.
To address this shortcoming, we propose two benchmarks:
1) UrbanCars, a dataset with precisely controlled spuri-
ous cues, and 2) ImageNet-W, an evaluation set based on
ImageNet for watermark, a shortcut we discovered affects
nearly every modern vision model. Along with texture and
background, ImageNet-W allows us to study multiple short-
cuts emerging from training on natural images. We find
computer vision models, including large foundation models—
regardless of training set, architecture, and supervision—
struggle when multiple shortcuts are present. Even methods
explicitly designed to combat shortcuts struggle in a Whac-
A-Mole dilemma. To tackle this challenge, we propose Last
Layer Ensemble, a simple-yet-effective method to mitigate
multiple shortcuts without Whac-A-Mole behavior. Our re-
sults surface multi-shortcut mitigation as an overlooked chal-
lenge critical to advancing the reliability of vision systems.
The datasets and code are released: https://github.
com/facebookresearch/Whac-A-Mole .
|
1. Introduction
Machine learning often achieves good average perfor-
mance by exploiting unintended cues in the data [ 26]. For
instance, when backgrounds are spuriously correlated with
objects, image classifiers learn background as a rule for ob-
ject recognition [ 93]. This phenomenon—called “shortcut
learning”—at best suggests average metrics overstate model
performance and at worst renders predictions unreliable as
models are prone to costly mistakes on out-of-distribution
(OOD) data where the shortcut is absent. For example,
†Work done during the internship at Meta AI.∗Equal Contribution.COVID diagnosis models degraded significantly when spuri-
ous visual cues ( e.g., hospital tags) were removed [17].
Most existing works design and evaluate methods under
the tenuous assumption that a single shortcut is present in
the data [ 33,61,74]. For instance, Waterbirds [ 74], the most
widely-used dataset, only benchmarks the mitigation of the
background shortcut [ 7,15,59]. While this is a useful simpli-
fied setting, real-world images contain multiple visual cues;
models learn multiple shortcuts. From ImageNet [ 18,82]
to facial attribute classification [ 51] and COVID-19 chest
radiographs [ 17], multiple shortcuts are pervasive. Whether
existing methods can overcome multiple shortcuts or strug-
gle in a Whac-A-Mole game—where mitigating one shortcut
amplifies others—remains a critical open question.
We directly address this limitation by proposing two
datasets to study multi-shortcut learning: UrbanCars and
ImageNet-W . In UrbanCars (Fig. 1a), we precisely inject
two spurious cues—background and co-occurring object. Ur-
banCars allows us to conduct controlled experiments probing
multi-shortcut learning in standard training as well as short-
cut mitigation methods, including those requiring shortcut
labels. In ImageNet-W (IN-W) (Fig. 1b), we surface a new
watermark shortcut in the popular ImageNet dataset (IN-
1k). By adding a transparent watermark to IN-1k validation
set images, ImageNet-W, as a new test set, reveals vision
models ranging from ResNet-50 [ 31] to large foundation
models [ 10]universally rely on watermark as a spurious cue
for the “carton” class ( cf. cardboard box in Fig. 1b). When
a watermark is added, ImageNet top-1 accuracy drops by
10.7% on average across models. Some, such as ResNet-50,
suffer a catastrophic 26.7% drop (from 76.1% on IN-1k to
49.4% on IN-W) (Sec. 2.2)). Along with texture [ 27,34]
and background [ 93] benchmarks, ImageNet-W allows us to
study multiple shortcuts emerging in natural images.
We find that across a range of supervised/self-supervised
methods, network architectures, foundation models, and
shortcut mitigation methods, vision models struggle when
multiple shortcuts are present. Benchmarks on UrbanCars
and multiple shortcuts in ImageNet (including ImageNet-W)
reveal an overlooked challenge in the shortcut learning prob-
lem: multi-shortcut mitigation resembles a Whac-A-Mole
game, i.e., mitigating one shortcut amplifies reliance on others.
|
Liu_MarS3D_A_Plug-and-Play_Motion-Aware_Model_for_Semantic_Segmentation_on_Multi-Scan_CVPR_2023
|
Abstract
3D semantic segmentation on multi-scan large-scale
point clouds plays an important role in autonomous sys-
tems. Unlike the single-scan-based semantic segmentation
task, this task requires distinguishing the motion states of
points in addition to their semantic categories. However,
methods designed for single-scan-based segmentation tasks
perform poorly on the multi-scan task due to the lacking of
an effective way to integrate temporal information. We pro-
pose MarS3D, a plug-and-play motion-aware module for
semantic segmentation on multi-scan 3D point clouds. This
module can be flexibly combined with single-scan models
to allow them to have multi-scan perception abilities. The
model encompasses two key designs: the Cross-Frame Fea-
ture Embedding module for enriching representation learn-
ing and the Motion-Aware Feature Learning module for en-
hancing motion awareness. Extensive experiments show
that MarS3D can improve the performance of the base-
line model by a large margin. The code is available at
https://github.com/CVMI-Lab/MarS3D .
|
1. Introduction
3D semantic segmentation on multi-scan large-scale
point clouds is a fundamental computer vision task that ben-
efits many downstream problems in autonomous systems,
such as decision-making, motion planning, and 3D recon-
struction, to name just a few. Compared with the single-
scan semantic segmentation task, this task requires under-
standing not only the semantic categories but also the mo-
tion states ( e.g., moving or static) of points based on multi-
scan point cloud data.
In the past few years, extensive research has been con-
ducted on single-scan semantic segmentation with signifi-
cant research advancements [4, 5, 12, 25, 31, 33, 36]. These
approaches are also applied to process multi-scan point
Figure 1. Comparison of our proposed method, MarS3D, with
baseline method using SPVCNN [25] as the backbone on Se-
manticKITTI [1] dataset. MarS3D achieves excellent results in
the classification of semantic categories and motion states, while
the baseline method cannot distinguish moving from static points well.
clouds, wherein multiple point clouds are fused to form
a single point cloud before being fed to the network for
processing. Albeit simple, this strategy may lose tempo-
ral information and make distinguishing motion states a
challenging problem. As a result, they perform poorly
in classifying the motion states of objects. As shown
in Figure 1, the simple point cloud fusion strategy can-
not effectively enable the model to distinguish the motion
states of cars even with a state-of-the-art backbone network
SPVCNN [25]. Recently, there have been some early at-
tempts [7, 23, 24, 28] to employ attention modules [24] and
recurrent networks [7,23,28] to fuse information across dif-
ferent temporal frames. However, these approaches do not
perform well on the multi-scan task due to the insufficiency
of temporal representations and the limited feature extrac-
tion ability of the model.
In sum, a systematic investigation of utilizing the
rich spatial-temporal information from multiple-point cloud
scans is still lacking. This requires answering two critical
questions: (1) how can we leverage the multi-scan informa-
tion to improve representation learning on point clouds for
better semantic understanding? and (2) how can the tem-
poral information be effectively extracted and learned for
classifying the motion states of objects?
In this paper, we propose a simple plug-and-play
Motion-aware Segmentation module for 3D multi-scan analysis (MarS3D), which can seamlessly integrate with
existing single-scan semantic segmentation models and en-
dow them with the ability to perform accurate multi-scan
3D point cloud semantic segmentation with negligible com-
putational costs. Specifically, our method incorporates two
core designs: First, to enrich representation learning of
multi-frame point clouds, we propose a Cross-Frame Fea-
ture Embedding (CFFE) module which embeds time-step
information into features to facilitate inter-frame fusion and
representation learning. Second, inspired by the observa-
tion that objects primarily move along the horizontal ground
plane ( i.e.,xy-plane) in large-scale outdoor scenes, i.e., min-
imal motion along the z-axis, we propose a Motion-Aware
Feature Learning (MAFL) module based on Bird’s Eye
View (BEV), which learns the motion patterns of objects
between frames to facilitate effectively discriminating the
motion states of objects.
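As an illustration of the cross-frame idea (not the actual CFFE module), the sketch below tags each point's features with a learned embedding of the scan it came from, so temporal identity survives the point-cloud fusion; feature sizes and the module interface are assumptions.

```python
import torch
import torch.nn as nn

class CrossFrameEmbedding(nn.Module):
    """Add a learned time-step embedding to per-point features (sketch only)."""

    def __init__(self, feat_dim=64, num_scans=3):
        super().__init__()
        self.time_embed = nn.Embedding(num_scans, feat_dim)

    def forward(self, point_feats, scan_ids):
        # point_feats: (N, C) features of the fused multi-scan point cloud
        # scan_ids:    (N,)   index of the scan (time step) each point came from
        return point_feats + self.time_embed(scan_ids)

cffe = CrossFrameEmbedding()
feats = torch.randn(1000, 64)
scan_ids = torch.randint(0, 3, (1000,))
fused = cffe(feats, scan_ids)   # temporal identity preserved after fusion
print(fused.shape)              # torch.Size([1000, 64])
```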
We extensively evaluate our approach upon several
mainstream baseline frameworks on SemanticKITTI [1]
and nuScenes [3] dataset. It consistently improves the per-
formance of the baseline approaches, e.g., MinkUnet [5],
by 6.24% in mIoU on SemanticKITTI with a negligible in-
crease in model parameters, i.e., about 0.2%. The main con-
tributions are summarized as follows:
• We are the first to propose a plug-and-play mod-
ule for large-scale multi-scan 3D semantic segmenta-
tion, which can be flexibly integrated with mainstream
single-scan segmentation models without incurring too
much cost.
• We devise a Cross-Frame Feature Embedding module
to fuse multiple point clouds while preserving their
temporal information, thereby enriching representa-
tion learning for multi-scan point clouds.
• We introduce a BEV-based Motion-Aware Feature
Learning module to exploit temporal information and
enhance the model’s motion awareness, facilitating the
prediction of motion states.
• We conduct extensive experiments and comprehensive
analyses of our approach with different backbone mod-
els. The proposed model performs favorably compared
to the baseline methods while introducing negligible
extra parameters and inference time.
|
Kong_Understanding_Masked_Image_Modeling_via_Learning_Occlusion_Invariant_Feature_CVPR_2023
|
Abstract
Recently, Masked Image Modeling (MIM) achieves great
success in self-supervised visual recognition. However, as a
reconstruction-based framework, it is still an open question
to understand how MIM works, since MIM appears very dif-
ferent from previous well-studied siamese approaches such
as contrastive learning. In this paper, we propose a new
viewpoint: MIM implicitly learns occlusion-invariant fea-
tures, which is analogous to other siamese methods while the
latter learns other invariance. By relaxing MIM formulation
into an equivalent siamese form, MIM methods can be in-
terpreted in a unified framework with conventional methods,
among which only a) data transformations, i.e. what invari-
ance to learn, and b) similarity measurements are different.
Furthermore, taking MAE [29] as a representative example
of MIM, we empirically find the success of MIM models
relates a little to the choice of similarity functions, but the
learned occlusion invariant feature introduced by masked
image – it turns out to be a favored initialization for vision
transformers, even though the learned feature could be less
semantic. We hope our findings could inspire researchers to
develop more powerful self-supervised methods in computer
vision community.
|
1. Introduction
Invariance matters in science [33]. In self-supervised
learning, invariance is particularly important: since ground
truth labels are not provided, one could expect the favored
learned feature to be invariant (or more generally, equiv-
ariant [17]) to a certain group of transformations on the
inputs. In recent years, one of the most successful self-supervised frameworks in visual recognition – contrastive learning [22, 42, 47] – has benefited a lot from learning invariance.
The key insight of contrastive learning is, because recog-
nition results are typically insensitive to the deformations
(e.g. cropping, resizing, color jittering) on the input images,
a good feature should also be invariant to the transforma-
tions. Therefore, contrastive learning suggests minimizing
the distance between two (or more [10]) feature maps from
the augmented copies of the same data, which is formulated
as follows:
\min_{\theta}\; \mathbb{E}_{x\sim\mathcal{D}}\; M(z_1, z_2), \qquad z_1 = f_\theta(T_1(x)),\; z_2 = f_\theta(T_2(x)), \qquad (1)
where D is the data distribution; f_θ(·) is the encoder network parameterized by θ; T_1(·) and T_2(·) are two transformations on the input data, which define what invariance to learn; M(·,·) is the distance function¹ (or similarity measurement) used to measure the similarity between two feature maps z_1 and z_2. Clearly, the choices of T and M are essential in contrastive learning algorithms. Researchers have
come up with a variety of alternatives. For example, for the
transformation T, popular methods include random crop-
ping [3, 12, 28, 30], color jittering [12], rotation [25, 44],
jigsaw puzzle [41], colorization [56], etc. For the similarity measurement M, the InfoMax principle [3] (which can be implemented with MINE [7] or InfoNCE loss [12,14,30,42]), feature de-correlation [6, 55], asymmetric teacher [15, 28], triplet loss [36], etc., have been proposed.
Apart from contrastive learning, very recently Masked
Image Modeling (MIM, e.g. [5]) has quickly become a new trend in visual self-supervised learning. Inspired by Masked Language Modeling [18] in Natural Language Processing, MIM learns features via a form of denoising autoencoder [48]:
images which are occluded with random patch masks are
fed into the encoder, then the decoder predicts the original
embeddings of the masked patches:
\min_{\theta,\phi}\; \mathbb{E}_{x\sim\mathcal{D}}\; M\big(d_\phi(z),\; x\odot(1-M)\big), \qquad z = f_\theta(x\odot M), \qquad (2)
where “⊙” means element-wise product; M is the patch mask²; f_θ(·) and d_ϕ(·) are the encoder and decoder, respectively; z is
1Following the viewpoint in [15], we suppose distance functions could
contain parameters which are jointly optimized with Eq. (1). For example,
weights in project head [12] or predict head [15, 28] are regarded as a part
of distance function M(·).
2So “x⊙M” represents “unmasked patches” and vice versa.
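A minimal sketch of the reconstruction objective in Eq. (2): visible patches are encoded, a decoder predicts all patch embeddings, and an L2 distance is evaluated only on the masked positions. The toy MLP encoder/decoder, the masking ratio, and the patchified-feature stand-in are assumptions, not MAE's actual architecture.

```python
import torch
import torch.nn as nn

# Toy MIM objective of Eq. (2): predict masked patch embeddings from visible ones.
B, N, D = 8, 196, 256                        # batch, patches per image, patch dim
x = torch.randn(B, N, D)                     # patchified images (placeholder features)
mask = (torch.rand(B, N, 1) > 0.75).float()  # 1 = visible patch, 0 = masked patch

f = nn.Sequential(nn.Linear(D, D), nn.GELU(), nn.Linear(D, D))  # encoder f_theta
d = nn.Sequential(nn.Linear(D, D), nn.GELU(), nn.Linear(D, D))  # decoder d_phi
opt = torch.optim.AdamW(list(f.parameters()) + list(d.parameters()), lr=1e-4)

z = f(x * mask)                              # encode only the unmasked content
pred = d(z)
# The distance M is plain L2 here, evaluated on the masked positions x * (1 - mask).
loss = (((pred - x) * (1 - mask)) ** 2).mean()
loss.backward()
opt.step()
```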
|
Lei_A_Hierarchical_Representation_Network_for_Accurate_and_Detailed_Face_Reconstruction_CVPR_2023
|
Abstract
Limited by the nature of the low-dimensional represen-
tational capacity of 3DMM, most of the 3DMM-based face
reconstruction (FR) methods fail to recover high-frequency
facial details, such as wrinkles, dimples, etc. Some attempt
to solve the problem by introducing detail maps or non-
linear operations, however, the results are still not vivid. To
this end, we in this paper present a novel hierarchical repre-
sentation network (HRN) to achieve accurate and detailed
face reconstruction from a single image. Specifically, we
implement the geometry disentanglement and introduce the
hierarchical representation to fulfill detailed face modeling.
Meanwhile, 3D priors of facial details are incorporated to
enhance the accuracy and authenticity of the reconstruction
results. We also propose a de-retouching module to achieve
better decoupling of the geometry and appearance. It is
noteworthy that our framework can be extended to a multi-
view fashion by considering detail consistency of different
views. Extensive experiments on two single-view and two
multi-view FR benchmarks demonstrate that our method
outperforms the existing methods in both reconstruction ac-
curacy and visual effects. Finally, we introduce a high-
quality 3D face dataset FaceHD-100 to boost the research
of high-fidelity face reconstruction. The project homepage
is at https://younglbw.github.io/HRN-homepage/.
|
1. Introduction
High-fidelity 3D face reconstruction finds a wide range
of applications in many scenarios, such as AR/VR, medicaltreatment, film production, etc. While extensive works al-
ready achieved excellent reconstruction performance using
specialized hardware like LightStage [2, 11, 35], estimating
highly detailed face models from single or sparse-view im-
ages is still a challenging problem. Based on 3DMM [8],
a statistical model learned from a collection of face scans,
many works [16, 22, 23, 32] attempt to reconstruct the 3D
face from a single image and achieve impressive results.
However, limited by the nature of the low dimensional rep-
resentational ability of the 3DMM, these methods can not
recover the detailed facial geometry.
Recently, some methods [13, 24, 38] devote to capturing
high-frequency facial details such as wrinkles by predicting
a displacement map. They achieve realistic results, how-
ever, fail to model the mid-frequency details, such as the
detailed contour of the jaw, cheeck, etc. To this end, some
works try to capture the overall details by introducing latent
encoding of details [19] or non-linear operations [20, 44].
Nevertheless, it is hard to make a trade-off when handling
the mid- and high-frequency details simultaneously. An-
other challenge is how to obtain accurate shapes and de-
tailed 3D facial priors considering multifarious lightings
and skins for different images. [10, 13] resort to the wrin-
kle statistics computed from 3D face scans to fulfill realis-
tic high-frequency details, but still fail to model the mid-
frequency details.
Based on the observations above, we introduce a hierar-
chical representation network (HRN) for accurate and de-
tailed face reconstruction from a single image, as shown in
Fig. 2. Firstly, we decouple the facial geometry into low-
frequency geometry, mid-frequency (MF) details, and high-
frequency (HF) details. Then, in a hierarchical fashion, we
model these parts with face-wise blendshape coefficients,
vertex-wise deformation map, and pixel-wise displacement
map, respectively (shown in Fig. 1). Concretely, we employ
two image translation networks [27] to estimate the corre-
sponding detail maps (deformation and displacement map),
and further employ them to generate the detailed face model
in a coarse-to-fine manner. Moreover, we introduce the 3D
priors of MF and HF details by fitting face scans with our
hierarchical representation to facilitate accurate and faith-
ful modeling. Inspired by [33], we propose a de-retouching
module to adaptively refine the base texture to overcome the
ambiguities between skin blemishes and illuminations. Ex-
tensive experiments show that our method outperforms the
existing methods on two large-scale benchmarks, exhibiting
excellent performance in terms of detail capturing and accu-
rate shape modeling. Thanks to the detail disentanglement
strategy and the guidance of detail priors, we extend HRN
to a multi-view fashion and achieve accurate FR from only
a few views. Finally, to boost the research of sparse-view
and high-fidelity FR, we introduce a high-quality 3D face
dataset named FaceHD-100.
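To make the hierarchy concrete, here is an illustrative sketch of composing the three levels into one mesh: a blendshape-based low-frequency shape, plus a vertex-wise deformation for mid-frequency details, plus a displacement applied along vertex normals for high-frequency details. Treating the displacement as per-vertex (rather than a pixel-wise UV map) and all tensor shapes are simplifications.

```python
import torch

def compose_hierarchical_face(mean_shape, blendshapes, coeffs,
                              deformation, displacement, normals):
    """Compose low-/mid-/high-frequency geometry (illustrative sketch).

    mean_shape:   (V, 3) template vertices
    blendshapes:  (K, V, 3) blendshape basis
    coeffs:       (K,) face-wise blendshape coefficients  -> low-frequency shape
    deformation:  (V, 3) vertex-wise deformation map      -> mid-frequency details
    displacement: (V,)   per-vertex scalar displacement   -> high-frequency details
    normals:      (V, 3) vertex normals of the mid-frequency mesh"""
    base = mean_shape + torch.einsum('k,kvc->vc', coeffs, blendshapes)
    mid = base + deformation
    fine = mid + displacement.unsqueeze(-1) * normals
    return fine

V, K = 5000, 80
verts = compose_hierarchical_face(torch.randn(V, 3), torch.randn(K, V, 3),
                                  torch.randn(K), torch.randn(V, 3) * 1e-2,
                                  torch.randn(V) * 1e-3, torch.randn(V, 3))
print(verts.shape)  # torch.Size([5000, 3])
```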
Our main contributions in this work are as follows:
(A)We present a hierarchical modeling strategy and pro-
pose a novel framework HRN for single-view FR task. Our
HRN produces accurate and highly detailed FR results and
outperforms the existing state-of-the-art methods on two
large-scale single-view FR benchmarks.
(B)We introduce detail priors to guide the faithful modeling
of hierarchical details and design a de-retouching module to
facilitate the decoupling of geometry and appearance.
(C)We extend HRN to a multi-view fashion to form MV-
HRN, which enables accurate face modeling from sparse-
view images and outperforms the existing methods on two
large-scale multi-view FR benchmarks.
(D)To boost the research on sparse-view and high-fidelity
FR tasks, we introduce a high-quality 3D face dataset
FaceHD-100, containing 2,000 detailed 3D face models and
corresponding high-definition multi-view images.
|
Le_GamutMLP_A_Lightweight_MLP_for_Color_Loss_Recovery_CVPR_2023
|
Abstract
Cameras and image-editing software often process im-
ages in the wide-gamut ProPhoto color space, encompass-
ing 90% of all visible colors. However, when images are
encoded for sharing, this color-rich representation is trans-
formed and clipped to fit within the small-gamut standard
RGB (sRGB) color space, representing only 30% of visi-
ble colors. Recovering the lost color information is chal-
lenging due to the clipping procedure. Inspired by neu-
ral implicit representations for 2D images, we propose a
method that optimizes a lightweight multi-layer-perceptron
(MLP) model during the gamut reduction step to predict
the clipped values. GamutMLP takes approximately 2 sec-
onds to optimize and requires only 23 KB of storage. The
small memory footprint allows our GamutMLP model to be
saved as metadata in the sRGB image—the model can be
extracted when needed to restore wide-gamut color values.
We demonstrate the effectiveness of our approach for color
recovery and compare it with alternative strategies, includ-
ing pre-trained DNN-based gamut expansion networks and
other implicit neural representation methods. As part of
this effort, we introduce a new color gamut dataset of 2200
wide-gamut/small-gamut images for training and testing.
|
1. Introduction
The RGB values of our color images do not represent
the entire range of visible colors. The span of visible colors
that can be reproduced by a particular color space’s RGB
primaries is called a gamut. Currently, the vast majority of
color images are encoded using the standard RGB (sRGB)
color space [ 7]. The sRGB gamut is capable of reproducing
approximately 30% of the visible colors and was optimized
for the display hardware of the 1990s. Close to 30 years
later, this small-gamut color space still dominates how im-
ages are saved, even though modern display hardware is ca-
pable of much wider gamuts.
Interestingly, most modern DSLR and smartphone cam-
eras internally encode images using the ProPhoto color
space [ 12]. ProPhoto RGB primaries define a wide gamut
capable of representing 90% of all visible colors [ 33].
Figure 1. (A) shows a wide-gamut (ProPhoto) image that has
been converted and saved as a small-gamut (sRGB) image; color
clipping is required to fit the smaller sRGB gamut (as shown in
the chromaticity diagrams). (B) Standard color conversion back
to the wide-gamut color space is not able to recover the clipped
colors. (C) Conversion back to the wide-gamut RGB using our
lightweight GamutMLP (23 KB) can recover the clipped color val-
ues back to their original values.
Image-processing software, such as Adobe Photoshop, also
uses this color-rich space to manipulate images, especially
when processing camera RAW-DNG files. By process-
ing images in the wide-gamut ProPhoto space, cameras
and editing software allow users the option to save an
image in other color spaces—such as AdobeRGB, UHD,
and Display-P3—that have much wider color gamuts than
sRGB. However, these color spaces are still rare, and most
images are ultimately saved in sRGB. To convert color val-
ues between ProPhoto and sRGB, a gamut reduction step
is applied that clips the wide-gamut color values to fit the
smaller sRGB color gamut. Once gamut reduction is ap-
plied, it is challenging to recover the original wide-gamut
values. As a result, when images are converted back to a
wide-gamut color space for editing or display, much of the
color fidelity is lost, as shown in Figure 1.
Figure 2. An overview of the gamut reduction stage in our framework. This phase shows the gamut reduction step, where the wide-gamut
ProPhoto is converted to the small-gamut sRGB. While saving the sRGB image, an MLP is optimized based on the original and clipped
ProPhoto color values. The MLP is embedded in the sRGB image as metadata.
Contribution We address the problem of recovering the
RGB colors in sRGB images back to their original wide-
gamut RGB representation. Our work is inspired by
coordinate-based implicit neural image representations that
use multilayer perceptrons (MLPs) as a differentiable im-
age representation. We propose to optimize a lightweight
(23 KB) MLP model that takes the gamut-reduced RGB
values and their spatial coordinates as input and predicts
the original wide-gamut RGB values. The idea is to opti-
mize the MLP model when the ProPhoto image is saved to
sRGB and embed the MLP model parameters in the sRGB
image as a comment field. The lightweight MLP model is
extracted and used to recover the wide-gamut color values
when needed. We describe an optimization process for the
MLP that requires ∼2 seconds per full-sized image. We
demonstrate the effectiveness of our method against several
different approaches, including other neural image repre-
sentations and pre-trained deep-learning-based models. As
part of this work, we have created a dataset of 2200 wide-
gamut/small-gamut image pairs for training and testing.
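A minimal sketch of the per-image optimization described above: a tiny MLP maps (x, y, R', G', B') of the gamut-reduced image to the original wide-gamut values, and its weights become the metadata. The synthetic "clipping", layer sizes, iteration count, and sampling scheme are placeholders, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

# Per-image optimization of a tiny gamut-recovery MLP (sketch; sizes are guesses).
H, W = 256, 256
prophoto = torch.rand(H, W, 3)                      # original wide-gamut values
clipped = prophoto.clamp(0.1, 0.9)                  # stand-in for gamut-reduction clipping

ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing='ij')
inputs = torch.cat([xs.reshape(-1, 1), ys.reshape(-1, 1), clipped.reshape(-1, 3)], dim=1)
targets = prophoto.reshape(-1, 3)

mlp = nn.Sequential(nn.Linear(5, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 3))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(500):                             # short per-image optimization
    idx = torch.randint(0, inputs.shape[0], (4096,))
    opt.zero_grad()
    loss = ((mlp(inputs[idx]) - targets[idx]) ** 2).mean()
    loss.backward()
    opt.step()
# The state_dict of `mlp` (tens of KB) is what would be stored as sRGB metadata.
```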
|
Li_Improving_Vision-and-Language_Navigation_by_Generating_Future-View_Image_Semantics_CVPR_2023
|
Abstract
Vision-and-Language Navigation (VLN) is the task that
requires an agent to navigate through the environment
based on natural language instructions. At each step, the
agent takes the next action by selecting from a set of naviga-
ble locations. In this paper, we aim to take one step further
and explore whether the agent can benefit from generating
the potential future view during navigation. Intuitively, hu-
mans will have an expectation of how the future environ-
ment will look like, based on the natural language instruc-
tions and surrounding views, which will aid correct naviga-
tion. Hence, to equip the agent with this ability to generate
the semantics of future navigation views, we first propose
three proxy tasks during the agent’s in-domain pre-training:
Masked Panorama Modeling (MPM), Masked Trajectory
Modeling (MTM), and Action Prediction with Image Gen-
eration (APIG). These three objectives teach the model to
predict missing views in a panorama (MPM), predict miss-
ing steps in the full trajectory (MTM), and generate the
next view based on the full instruction and navigation his-
tory (APIG), respectively. We then fine-tune the agent on
the VLN task with an auxiliary loss that minimizes the dif-
ference between the view semantics generated by the agent
and the ground truth view semantics of the next step. Em-
pirically, our VLN-SIG achieves the new state-of-the-art
on both Room-to-Room dataset and CVDN dataset. We fur-
ther show that our agent learns to fill in missing patches in
future views qualitatively, which brings more interpretabil-
ity over agents’ predicted actions. Lastly, we demonstrate
that learning to predict future view semantics also enables
the agent to have better performance on longer paths.1
|
1. Introduction
In Vision-and-Language Navigation, the agent needs to
navigate through the environment based on natural lan-
1Code is available at https://github.com/jialuli-luka/
VLN-SIG.git .
Figure 1. Overview of our proposed method VLN-SIG. We obtain
the semantics of an image with a pre-trained image tokenizer, and
use codebook selection (Sec. 3.4) and patch semantic calculation
(Sec. 3.3) to adapt it to efficient in-domain VLN learning. We pre-
train the agent on three proxy tasks (Sec. 3.5) and fine-tune the
agent using Action Prediction with Image Generation (APIG) as
the additional auxiliary task (Sec. 3.6).
guage instructions. Many datasets have been proposed to
solve this challenging task [2, 5, 22, 33, 40]. Based on these
datasets, previous works aim to strengthen the navigation
model or agent from several aspects: understanding long
and detailed instructions in different languages, inferring
and locating target objects based on common knowledge,
navigating in diverse and potentially unseen environments,
and learning better alignment between the environment and
the instructions. However, most previous work simplifies
the navigation process as an action selection process, where
at each time step, the agent picks the action from a set of
pre-defined candidates. This simplification does not take
advantage of human’s ability to expect what scene will be
more likely to happen next during navigation. For exam-
ple, given the instruction “Walk through the bedroom into
the hallway. Turn right and wait in the kitchen doorway.”,
before walking out of the bedroom, humans will expect the
hallway to be a long passage with doors, and the kitchen
probably contains objects like a kitchen island and sink.
Humans have these expectations based on common sense
knowledge and use them to select candidates during navi-
gation. Thus, in this paper, we explore whether AI agents
could also benefit from the ability to generate future scenes
for action selection during navigation.
[21] first explores the useful idea of generating future
scenes with high quality, based on history observations on
an indoor Vision-and-Language Navigation dataset. Fur-
thermore, they adapt their synthetic future scenes for VLN
by replacing the original observations with the generated
observations. However, their replacement method did not
enhance the agents’ performance on the VLN task. The po-
tential of using semantic information from generated future
observations is still underexplored. Thus, in this paper, we
aim to equip the agent with both the ability to predict future
scenes and also benefit from learning semantics in future
generated observations.
We propose VLN-SIG: Vision-and-Language Naviga-
tion with Image Semantics Generation. As shown in Fig-
ure 1, to first calculate the overall image semantics, we
tokenize the view images into visual tokens with a pre-
trained discrete variational autoencoder (dV AE). While the
pre-trained dV AE has a large vocabulary of 8192, we pro-
pose and compare static and dynamic codebook selection
processes to optimize a subset of the codebook at one time
during training. This helps the agent to learn the more im-
portant tokens and focus on optimizing more difficult to-
kens. Then, we propose and compare three ways based on
mean value, block-wise weighted value, and sampling to
represent the semantics of all patches for efficient image
semantics learning. In the pre-training stage, we propose
three tasks: Masked Panorama Modeling (MPM), Masked
Trajectory Modeling (MTM) and Action Prediction with
Image Generation (APIG). In Masked Panorama Modeling,
the agent learns to predict the semantics of multiple miss-
ing views in a full panorama. In Masked Trajectory Mod-
eling, the agent learns to predict the semantics of multiple
steps in the full trajectory. MPM and MTM together give
the agent the ability to understand the semantics in each vi-
sual token and learn to recognize the semantics contained
in each view. In Action Prediction with Image Genera-
tion, the agent mimics the navigation process to predict the
next step by generating the visual tokens contained in the
next navigation view. This task enables the agent to imag-
ine the next step view semantics before making actions.
We then fine-tune the agent on the step-by-step Vision-and-
Language Navigation task. We further enhance agents’ abil-
ity to predict future views by optimizing an auxiliary task
during navigation, which minimizes the difference between
predicted observations and the target observation. Though
this task does not help the agent make navigation decisions
directly, the future visual semantics injected by this task
helps the agent understand the environment better.
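As a concrete illustration of the mechanism described above, the following is a minimal PyTorch sketch of the mean-value patch-semantic representation and of an auxiliary loss that matches the agent's generated view semantics to the ground-truth semantics of the next view. It is our own reading of the text, not the released VLN-SIG code; the KL form of the loss, the 14x14 patch grid, and all tensor names are assumptions (only the 8192-token codebook size comes from the paper).

```python
# Minimal sketch (not the authors' code): mean-value patch semantics and an
# auxiliary loss that matches the agent's generated view semantics to the
# ground-truth semantics of the next view. Names and shapes are assumptions.
import torch
import torch.nn.functional as F

VOCAB = 8192          # dVAE codebook size (per the paper)
PATCHES = 196         # e.g. 14x14 patches per view (assumed)

def mean_patch_semantics(token_ids: torch.Tensor, vocab: int = VOCAB) -> torch.Tensor:
    """Represent a view by the mean one-hot distribution of its patch tokens.

    token_ids: (B, PATCHES) integer visual tokens from a pre-trained image tokenizer.
    Returns:   (B, vocab) probability vector summarising the view's semantics.
    """
    one_hot = F.one_hot(token_ids, num_classes=vocab).float()  # (B, P, V)
    return one_hot.mean(dim=1)                                 # (B, V)

def semantics_auxiliary_loss(pred_logits: torch.Tensor,
                             gt_token_ids: torch.Tensor) -> torch.Tensor:
    """Auxiliary fine-tuning loss: KL between predicted and ground-truth semantics."""
    pred = F.log_softmax(pred_logits, dim=-1)      # (B, V) agent's generated semantics
    target = mean_patch_semantics(gt_token_ids)    # (B, V) semantics of the next view
    return F.kl_div(pred, target, reduction="batchmean")

# Toy usage
logits = torch.randn(2, VOCAB)
tokens = torch.randint(0, VOCAB, (2, PATCHES))
print(semantics_auxiliary_loss(logits, tokens).item())
```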
We conduct experiments on Room-to-Room (R2R)
datasets [2] and Cooperative Vision-and-Dialog Navigation
(CVDN) dataset [40]. Empirical results show that our proposed VLN-SIG outperforms the strong SotA agents [8, 10]
by a relative gain of 4.5% in goal progress (meters) on
CVDN test leaderboard, and 3% absolute gain in success
rate on Room-to-Room test leaderboard. We further demon-
strate that our proposed codebook selection methods and
patch semantic calculation methods are crucial for learn-
ing to generate image semantics with ablation studies. Be-
sides, we show that our agent achieves better performance
for longer paths. Lastly, we show that our agent learns to fill
in missing patches in the future views, which brings more
interpretability over agents’ predictions.
|
Kim_Generalizable_Implicit_Neural_Representations_via_Instance_Pattern_Composers_CVPR_2023
|
Abstract
Despite recent advances in implicit neural representa-
tions (INRs), it remains challenging for a coordinate-based
multi-layer perceptron (MLP) of INRs to learn a common
representation across data instances and generalize it for
unseen instances. In this work, we introduce a simple yet
effective framework for generalizable INRs that enables a
coordinate-based MLP to represent complex data instances
by modulating only a small set of weights in an early MLP
layer as an instance pattern composer; the remaining MLP
weights learn pattern composition rules for common repre-
sentations across instances. Our generalizable INR frame-
work is fully compatible with existing meta-learning and hy-
pernetworks in learning to predict the modulated weight for
unseen instances. Extensive experiments demonstrate that
our method achieves high performance on a wide range of
domains such as audio, images, and 3D objects, while the
ablation study validates our weight modulation.
|
1. Introduction
Implicit neural representations (INR) have shown the po-
tential to represent complex data as continuous functions.
Assuming that a data instance comprises the pairs of a co-
ordinate and its output features, INRs adopt a parameterized
neural network as a mapping function from an input coor-
dinate into its output features. For example, a coordinate-
based MLP [23] predicts RGB values at each 2D coordinate
as an INR of an image. Despite the popularity of INRs, a
trained MLP cannot be generalized to represent other in-
stances, since each MLP learns to memorize each data in-
stance. Thus, INRs necessitate separate training of MLPs to
represent a lot of data instances as continuous functions.
*Equal contribution
†Corresponding author
Figure 1. The reconstructed images of 178×178 ImageNette by
TransINR [4] (left) and our generalizable INRs (right).
Generalizable INRs aim to learn common representa-
tions of an MLP across instances, while modulating fea-
tures or weights of the coordinate-based MLP to adapt to
unseen data instances [4, 7, 24]. The feature-modulation
method exploits the latent vector of an instance to con-
dition the activations in MLP layers through concatena-
tion [17] or affine-transform [8, 18]. Despite the computa-
tional efficiency of feature-modulation, the modulated INRs
have unsatisfactory results to represent complex data due to
their limited modulation capacity. On the other hand, the
weight-modulation method learns to update the whole MLP
weights to increase the modulation capacity for high perfor-
mance. However, modulating whole MLP weights leads to
unstable and expensive training [4, 7, 9, 24].
In this study, we propose a simple yet effective frame-
work for generalizable INRs via Instance Pattern Com-
posers to modulate only a small set of MLP weights. We
postulate that a complex data instance can be represented by
composing low-level patterns in the instance [23]. Thus, we
rethink and categorize the weights of MLP into i) instance
pattern composers and ii) pattern composition rule . The in-
stance pattern composer is a weight matrix in the early layer
of our coordinate-based MLP to extract the instance content
patterns of each data instance as a low-level feature. The re-
maining weights of the MLP are defined as a pattern composition
rule, which composes the instance content patterns in an
instance-agnostic manner. In addition, our framework can
adopt both optimization-based meta-learning and hypernet-
works to predict the instance pattern composer for an INR
of unseen instance. In experiments, we demonstrate the ef-
fectiveness of our generalizable INRs via instance pattern
composers on various domains and tasks.
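To make the weight split concrete, the following is a minimal sketch (our assumption of the architecture, not the authors' implementation) of a coordinate-based MLP in which a single early weight matrix (the instance pattern composer) is supplied per instance, while the remaining layers (the pattern composition rule) are shared across instances. Layer sizes, depth, and the plain coordinate input are arbitrary choices.

```python
# Minimal sketch, assumptions throughout: one instance-specific weight matrix in
# an early layer; all other weights shared across instances.
import torch
import torch.nn as nn

class ComposerINR(nn.Module):
    def __init__(self, in_dim=2, hidden=256, out_dim=3, depth=4):
        super().__init__()
        self.first = nn.Linear(in_dim, hidden)          # shared input layer
        # shared "pattern composition rule" layers
        self.shared = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(depth)])
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, coords, composer_weight):
        """coords: (N, in_dim); composer_weight: (hidden, hidden), one per instance."""
        h = torch.relu(self.first(coords))
        h = torch.relu(h @ composer_weight.T)           # instance-specific modulation
        for layer in self.shared:
            h = torch.relu(layer(h))
        return self.head(h)

# Toy usage: one modulated weight per image; shared weights reused for all images.
model = ComposerINR()
composer = torch.randn(256, 256) * 0.02   # would come from a hypernetwork or meta-learning
xy = torch.rand(1024, 2)
rgb = model(xy, composer)
print(rgb.shape)  # torch.Size([1024, 3])
```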
Our main contributions are summarized as follows. 1)
Instance pattern composers enable a coordinate-based MLP
to represent complex data by modulating only one weight,
while the pattern composition rule learns the common repre-
sentation across data instances. 2) Our instance pattern
composers are compatible with optimization-based meta-
learning and hypernetworks to predict the modulated weights of
unseen data during training. 3) We conduct extensive exper-
iments to demonstrate the effectiveness of our framework
through quantitative and qualitative analysis.
|
Liu_PoseExaminer_Automated_Testing_of_Out-of-Distribution_Robustness_in_Human_Pose_and_CVPR_2023
|
Abstract
Human pose and shape (HPS) estimation methods
achieve remarkable results. However, current HPS bench-
marks are mostly designed to test models in scenarios that
are similar to the training data. This can lead to criti-
cal situations in real-world applications when the observed
data differs significantly from the training data and hence
is out-of-distribution (OOD). It is therefore important to
test and improve the OOD robustness of HPS methods. To
address this fundamental problem, we develop a simula-
tor that can be controlled in a fine-grained manner us-
ing interpretable parameters to explore the manifold of im-
ages of human pose, e.g. by varying poses, shapes, and
clothes. We introduce a learning-based testing method,
termed PoseExaminer, that automatically diagnoses HPS
algorithms by searching over the parameter space of hu-
man pose images to find the failure modes. Our strat-
egy for exploring this high-dimensional parameter space
is a multi-agent reinforcement learning system, in which
the agents collaborate to explore different parts of the pa-
rameter space. We show that our PoseExaminer discov-
ers a variety of limitations in current state-of-the-art mod-
els that are relevant in real-world scenarios but are missed
by current benchmarks. For example, it finds large regions
of realistic human poses that are not predicted correctly,
as well as reduced performance for humans with skinny
and corpulent body shapes. In addition, we show that
fine-tuning HPS methods by exploiting the failure modes
found by PoseExaminer improve their robustness and even
their performance on standard benchmarks by a significant
margin. The code is available for research purposes at
https://github.com/qihao067/PoseExaminer.
|
1. Introduction
In recent years, the computer vision community has
made significant advances in 3D human pose and shape
(HPS) estimation. But despite the high performance on
standard benchmarks, current methods fail to give reliable
predictions for configurations that have not been trained on
Figure 1. PoseExaminer is an automatic testing tool used to study
the performance and robustness of HPS methods in terms of ar-
ticulated pose, shape, global rotation, occlusion, etc. It system-
atically explores the parameter space and discovers a variety of
failure modes. (a) illustrates three failure modes in PARE [18] and
(b) shows the efficacy of training with PoseExaminer.
or in difficult viewing conditions such as when humans are
significantly occluded [18] or have unusual poses or cloth-
ing [34]. This lack of robustness in such out-of-distribution
(OOD) situations, which typically would not fool a human
observer, is generally acknowledged and is a fundamental
open problem for HPS methods that should be addressed.
The main obstacle, however, to testing the robustness of
HPS methods is that test data is limited, because it is ex-
pensive to collect and annotate. One way to address this
problem is to diagnose HPS systems by generating large
synthetic datasets that randomly generate images of humans
in varying poses, clothing, and background [4, 34]. These
studies suggest that the gap between synthetic and real do-
mains in HPS is sufficiently small for testing on synthetic to
be a reasonable strategy, as we confirm in our experiments.
These methods, however, only measure the average per-
formance on the space of images. In sensitive domains
that use HPS ( e.g. autonomous driving), the failure modes ,
where performance is low, can matter more. Most impor-
tantly, despite large scale, these datasets lack diversity in
some dimensions. For example, they mostly study pose for
common actions such as standing, sitting, walking, etc.
Inspired by the literature on adversarial machine learn-
ing [5, 11], recent approaches aim to systematically ex-
plore the parameter space of simulators for weaknesses in
the model performance. This has proven to be an effec-
tive approach for diagnosing limitations in image classifi-
cation [45], face recognition [43], and path planning [39].
However, they are not directly applicable to testing the ro-
bustness of the high-dimensional regression task of pose es-
timation. For example, [43, 45] were designed for simple
binary classification tasks and can only optimize a limited
number of parameters, and [39] only perturbs an initial real-
world scene to generate adversarial examples, which does
not enable the full exploration of the parameter space.
In this work, we introduce PoseExaminer (Fig. 1), a
learning-based algorithm to automatically diagnose the ro-
bustness of HPS methods. It efficiently searches through
the high-dimensional continuous space of a simulator of
human images to find failure modes. The strategy in Pose-
Examiner is a multi-agent reinforcement learning approach,
in which the agents collaborate with one another to search
the latent parameter space for model weaknesses. More
specifically, each agent starts at different randomly initialized
seeds and explores the latent parameter space to find failure
cases, while at the same time avoiding exploring the regions
close to the other agents in the latent space. This strategy
enables a highly parallelizable way of exploring the high-
dimensional continuous latent space of the simulator. After
converging to a local optimum, each agent explores the lo-
cal parameter space to find a connected failure region that
defines a whole subspace of images where the pose is in-
correctly predicted ( i.e. failure mode). We demonstrate that
very large subspaces of failures exist even in the best HPS
models. We use the size of these failure subspaces together
with the success rate of the agents as a new measure of out-
of-distribution robustness.
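The following toy sketch illustrates the search strategy in spirit only: it replaces the reinforcement-learning policies with simple hill climbing, and render_and_evaluate is a hypothetical stand-in for rendering a human image from simulator parameters and measuring the HPS error. The repulsion term that keeps agents apart mirrors the description above; everything else is an assumption.

```python
# Illustrative sketch, not the paper's RL system: several agents hill-climb in a
# simulator's pose-parameter space to maximise prediction error, with a repulsion
# term that keeps agents away from each other so they find distinct failure modes.
import numpy as np

def render_and_evaluate(params: np.ndarray) -> float:
    """Stand-in for: render a human image from `params` and return the HPS error."""
    return float(np.sin(params).sum() + 0.1 * np.random.randn())

def search_failure_modes(n_agents=4, dim=16, steps=200, sigma=0.1, repel=0.5):
    agents = np.random.uniform(-1, 1, size=(n_agents, dim))
    for _ in range(steps):
        for i in range(n_agents):
            candidate = agents[i] + sigma * np.random.randn(dim)
            others = np.delete(agents, i, axis=0)
            def score(p):
                # reward = model error plus a bonus for staying away from other agents
                dist = np.linalg.norm(others - p, axis=1).min()
                return render_and_evaluate(p) + repel * dist
            if score(candidate) > score(agents[i]):
                agents[i] = candidate
    return agents  # each row approximates a distinct failure mode

modes = search_failure_modes()
print(modes.shape)  # (4, 16)
```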
Our experiments on four state-of-the-art HPS models
show that PoseExaminer successfully discovers a variety
of failure modes that provide new insights about their real-
world performance. For example, it finds large subspaces of
realistic human poses that are not predicted correctly, and
reduced performance for humans with skinny and corpulent
body shapes. Notably, we find that the failure modes found
in synthetic data generalize well to real images: In addi-
tion to using PoseExaminer as a new benchmark, we also find that fine-tuning SOTA methods using the failure modes
discovered by PoseExaminer enhances their robustness and
even the performance on 3DPW [52] and AIST++ [25, 50]
benchmarks. We also note that computer graphics rendering
pipelines become increasingly realistic, which will directly
benefit the quality of our automated testing approach.
In short, our contributions are three-fold:
• We propose PoseExaminer, a learning-based algorithm
to automatically diagnose the robustness of human
pose and shape estimation methods. Compared to prior
work, PoseExaminer is the first to efficiently search for
a variety of failure modes in the high-dimensional con-
tinuous parameter space of a simulator using a multi-
agent reinforcement learning framework.
• We introduce new metrics, which have not been pos-
sible to measure before, for quantifying the robust-
ness of HPS methods based on our automated testing
framework. We perform an in-depth analysis of cur-
rent SOTA methods, revealing a variety of diverse fail-
ure modes that generalize well to real images.
• We show that the failure modes discovered by PoseEx-
aminer can be used to significantly improve the real-
world performance and robustness of current methods.
|
Kim_Achieving_a_Better_Stability-Plasticity_Trade-Off_via_Auxiliary_Networks_in_Continual_CVPR_2023
|
Abstract
In contrast to the natural capabilities of humans to
learn new tasks in a sequential fashion, neural networks
are known to suffer from catastrophic forgetting , where the
model’s performances on old tasks drop dramatically after
being optimized for a new task. Since then, the continual
learning (CL) community has proposed several solutions
aiming to equip the neural network with the ability to learn
the current task ( plasticity ) while still achieving high accu-
racy on the previous tasks ( stability ). Despite remarkable
improvements, the plasticity-stability trade-off is still far
from being solved and its underlying mechanism is poorly
understood. In this work, we propose Auxiliary Network
Continual Learning (ANCL), a novel method that applies
an additional auxiliary network which promotes plasticity
to the continually learned model which mainly focuses on
stability. More concretely, the proposed framework mate-
rializes in a regularizer that naturally interpolates between
plasticity and stability, surpassing strong baselines on task
incremental and class incremental scenarios. Through
extensive analyses on ANCL solutions, we identify some
essential principles beneath the stability-plasticity trade-
off. The code implementation of our work is available at
https://github.com/kim-sanghwan/ANCL .
|
1. Introduction
The continual learning (CL) model aims to learn from
current data while still maintaining the information from
previous training data. The naive approach of continuously
fine-tuning the model on sequential tasks, however, suffers
from catastrophic forgetting [8, 21]. Catastrophic forget-
ting occurs in a gradient-based neural network because the
updates made with the current task are likely to override
the model weights that have been changed by the gradients
from the old tasks.
Catastrophic forgetting can be understood in terms of the stability-plasticity dilemma [22], one of the well-known
challenges in continual learning. Specifically, the model
not only has to generalize well on past data ( stability ) but
also learn new concepts ( plasticity ). Focusing on stability
will hinder the neural network from learning the new data,
whereas too much plasticity will induce more forgetting
of the previously learned weights. Therefore, a CL model
should strike a balance between stability and plasticity.
There are various ways to define the problem of CL.
Generally speaking, it can be categorized into three sce-
narios [27] : Task Incremental Learning (TIL), Domain In-
cremental Learning (DIL), and Class Incremental Learning
(CIL). In TIL, the model is informed about the task that
needs to be solved; the task identity is given to the model
during the training session and the test time. In DIL, the
model is required to solve only one task at hand without
the task identity. In CIL, the model should solve the task
itself and infer the task identity. Since the model should
discriminate all classes that have been seen so far, it is usu-
ally regarded as the hardest continual learning scenario. Our
study performs extensive evaluations on TIL and CIL set-
ting which will be further explained in Sec. 4.
Recently, several papers [19, 20, 28, 31] proposed the
usage of an auxiliary network or an extra module that is
solely trained on the current dataset, with the purpose of
combining this additional structure with the previous net-
work or module that has been continuously trained on the
old datasets. For example, Active Forgetting with synap-
tic Expansion-Convergence (AFEC) [28] regularizes the
weights relevant to the current task through a new set of
network parameters called the expanded parameters based
on weight regularization methods. The expanded param-
eters are solely optimized on the current task and are al-
lowed to forget the previous ones. As a result, AFEC can
reduce potential negative transfer by selectively merging the
old parameters with the expanded parameters. The stability-
plasticity balance in AFEC is adjusted via hyperparameters
which scale the regularization terms for remembering the
old tasks and learning the new tasks.
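To fix ideas, here is a minimal sketch of how an auxiliary-network regularizer of this kind could interpolate between stability and plasticity. It is a simplified stand-in (plain L2 pulls toward the old and auxiliary parameters, with no Fisher or importance weighting), not the AFEC or ANCL objective itself, and the scales lambda_old and lambda_aux are hypothetical.

```python
# Minimal sketch under assumptions: task loss plus two weight-regularisation terms,
# one pulling toward the old (stability) network and one toward an auxiliary network
# trained only on the current task (plasticity).
import torch
import torch.nn as nn
import torch.nn.functional as F

def auxiliary_regularized_loss(model: nn.Module, old_model: nn.Module, aux_model: nn.Module,
                               task_loss: torch.Tensor, lambda_old=1.0, lambda_aux=0.1):
    reg_old, reg_aux = 0.0, 0.0
    for p, p_old, p_aux in zip(model.parameters(),
                               old_model.parameters(),
                               aux_model.parameters()):
        reg_old = reg_old + ((p - p_old.detach()) ** 2).sum()  # stability term
        reg_aux = reg_aux + ((p - p_aux.detach()) ** 2).sum()  # plasticity term
    return task_loss + lambda_old * reg_old + lambda_aux * reg_aux

# Toy usage with stand-in networks
net, old_net, aux_net = (nn.Linear(8, 2) for _ in range(3))
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
loss = auxiliary_regularized_loss(net, old_net, aux_net, F.cross_entropy(net(x), y))
loss.backward()
```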
The authors of the above papers propose to mitigate the
stability-plasticity dilemma by infusing plasticity through
the auxiliary network or module (detailed explanation in
Appendix A). However, a precise characterization of the in-
teractive mechanism between the previous model and the
auxiliary model is still missing in the literature. Therefore,
in this paper, we first formalize the framework of CL that
adopts the auxiliary network called Auxiliary Network Con-
tinual Learning (ANCL). Given this environment, we then
investigate the stability-plasticity trade-off through various
analyses from both a theoretical and empirical point of view.
Our main contributions can be summarized as follows:
• We propose the framework of Auxiliary Network Con-
tinual Learning (ANCL) that can naturally incorporate
the auxiliary network into a variety of CL approaches
as a plug-in method (Sec. 3.1).
• We empirically show that ANCL outperforms existing
CL baselines on both CIFAR-100 [16] and Tiny Ima-
geNet [17] (Sec. 4).
• Furthermore, we perform three analyses to investigate
the stability-plasticity trade-off within ANCL (Sec. 5):
Weight Distance ,Centered Kernel Alignment , and
Mean Accuracy Landscape .
|
Koley_Picture_That_Sketch_Photorealistic_Image_Generation_From_Abstract_Sketches_CVPR_2023
|
Subhadeep Koley, Ayan Kumar Bhunia, Aneeshan Sain, Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song
SketchX, CVSSP, University of Surrey, United Kingdom; iFlyTek-Surrey Joint Research Centre on Artificial Intelligence.
{s.koley, a.bhunia, a.sain, p.chowdhury, t.xiang, y.song}@surrey.ac.uk
Figure 1. (a) Set of photos generated by the proposed method. (b) While existing methods can generate faithful photos from perfectly pixel-aligned edgemaps, they fall short drastically in case of highly deformed and sparse free-hand sketches. In contrast, our autoregressive sketch-to-photo generation model produces highly photorealistic outputs from highly abstract sketches.
Abstract
Given an abstract, deformed, ordinary sketch from un-
trained amateurs like you and me, this paper turns it into a
photorealistic image – just like those shown in Fig. 1(a),
all non-cherry-picked. We differ significantly from prior
art in that we do not dictate an edgemap-like sketch to
start with, but aim to work with abstract free-hand human
sketches. In doing so, we essentially democratise the sketch-
to-photo pipeline, “picturing” a sketch regardless of how
good you sketch. Our contribution at the outset is a de-
coupled encoder-decoder training paradigm, where the de-
coder is a StyleGAN trained on photos only. This impor-
tantly ensures that generated results are always photoreal-
istic. The rest is then all centred around how best to deal
with the abstraction gap between sketch and photo. For
that, we propose an autoregressive sketch mapper trained
on sketch-photo pairs that maps a sketch to the StyleGAN
latent space. We further introduce specific designs to tackle
the abstract nature of human sketches, including a fine-
grained discriminative loss on the back of a trained sketch-
photo retrieval model, and a partial-aware sketch augmen-
tation strategy. Finally, we showcase a few downstream
tasks our generation model enables, amongst them is show-
ing how fine-grained sketch-based image retrieval, a well-
studied problem in the sketch community, can be reduced
to an image (generated) to image retrieval task, surpass-
ing state-of-the-arts. We put forward generated results in
the supplementary for everyone to scrutinise. Project page:
https://subhadeepkoley.github.io/PictureThatSketch
|
1. Introduction
People sketch, some better than others. Given a shoe
image like ones shown in Fig. 1(a), everyone can scribble
a few lines to depict the photo, again mileage may vary –
top left sketch arguably lesser than that at bottom left. The
opposite, i.e., hallucinating a photo based on even a very ab-
stract sketch, is however something humans are very good
at, having evolved on the task over millions of years. This
seemingly easy task for humans, is exactly one that this pa-
per attempts to tackle, and apparently does fairly well at –
given an abstract sketch from untrained amateurs like us,
our paper turns it into a photorealistic image (see Fig. 1).
This problem falls into the general image-to-image trans-
lation literature [41, 64]. Indeed, some might recall prior
arts ( e.g., pix2pix [41], CycleGAN [105], MUNIT [38], Bi-
cycleGAN [106]), and sketch-specific variants [33, 86] pri-
marily based on pix2pix [41] claiming to have tackled the
exact problem. We are strongly inspired by these works, but
significantly differ on one key aspect – we aim to generate
from abstract human sketches, not accurate photo edgemaps
which are already “photorealistic”.
This is apparent in Fig. 1(b), where when edgemaps are
used prior works can hallucinate high-quality photorealis-
tic photos, whereas rather “peculiar” looking results are ob-
tained when faced with amateur human sketches. This is be-
cause all prior arts assume pixel-alignment during transla-
tion – so your drawing skill (or lack of it) gets accurately re-
flected in the generated result. As a result, chances are you and
I will not get far on existing systems if not art-trained to
sketch photorealistic edgemaps – we, in essence, democra-
tise the sketch-to-photo generation technology, “picturing”
a sketch regardless of how good you sketch.
Our key innovation comes after a pilot study where we
discovered that the pixel-aligned artefact [69] in prior art
is a direct result of the typical encoder-decoder [41] archi-
tecture being trained end-to-end – this enforces the gener-
ated results to strictly follow boundaries defined in the in-
put sketch (edgemap). Our first contribution is therefore a
decoupled encoder-decoder training, where the decoder is
pre-trained StyleGAN [46] trained on photos only, and is
frozen once trained. This importantly ensures generated re-
sults are sampled from the StyleGAN [46] manifold there-
fore of photorealistic quality.
The second, perhaps more important innovation lies with
how we bridge the abstraction gap [20, 21, 36] between
sketch and photo. For that, we propose to train an encoder
that performs a mapping from abstract sketch representa-
tion to the learned latent space of Style-
GAN [46] ( i.e., not actual photos as per the norm). To train
this encoder, we use ground-truth sketch-photo pairs, and
impose a novel fine-grained discriminative loss between the
input sketch and the generated photo, together with a con-
ventional reconstruction loss [102] between the input sketch
and the ground-truth photo, to ensure the accuracy of this
mapping process. To double down on dealing with the ab-
stract nature of sketches, we further propose a partial-aware
augmentation strategy where we render partial versions of a
full sketch and allocate latent vectors accordingly (the more
partial the input, the lesser vectors assigned).
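A minimal sketch of the partial-aware latent allocation, under our assumptions rather than the released implementation: a (possibly partial) sketch is mapped to the first k of the W+ latent vectors and the remaining vectors are padded with Gaussian noise, so more complete sketches control more StyleGAN layers. The 18x512 W+ layout, the toy encoder, and the completeness parameter are all placeholders.

```python
# Minimal sketch, not the released code: map a (possibly partial) sketch to a
# subset of W+ latent vectors and pad the rest with Gaussian noise, so more
# complete sketches determine more of the frozen StyleGAN's layers.
import torch
import torch.nn as nn

N_LATENTS, W_DIM = 18, 512  # typical StyleGAN2 W+ layout (assumed)

sketch_encoder = nn.Sequential(            # stand-in encoder for a 64x64 sketch
    nn.Flatten(), nn.Linear(64 * 64, N_LATENTS * W_DIM)
)

def map_sketch(sketch: torch.Tensor, completeness: float) -> torch.Tensor:
    """sketch: (B, 1, 64, 64); completeness in (0, 1] controls how many
    latent vectors the sketch is allowed to determine."""
    B = sketch.shape[0]
    w_plus = sketch_encoder(sketch).view(B, N_LATENTS, W_DIM)
    k = max(1, int(round(completeness * N_LATENTS)))
    noise = torch.randn(B, N_LATENTS - k, W_DIM)      # pad the unassigned vectors
    return torch.cat([w_plus[:, :k], noise], dim=1)   # (B, 18, 512) fed to the frozen StyleGAN

w = map_sketch(torch.rand(2, 1, 64, 64), completeness=0.5)
print(w.shape)  # torch.Size([2, 18, 512])
```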
Our autoregressive generative model enjoys a few in-
teresting properties once trained: (i)abstraction level ( i.e.,
how well the fine-grained features in a sketch are reflected
in the generated photo) can be easily controlled by alter-
ing the number of latent vectors predicted and padding the
rest with Gaussian noise, (ii)robustness towards noisy and
partial sketches, thanks to our partial-aware sketch aug-
mentation strategy, and (iii) good generalisation on input
sketches across different abstraction levels (from edgemaps,
to sketches across two datasets). We also briefly show-
case two potential downstream tasks our generation model
enables: fine-grained sketch-based image retrieval (FG-
SBIR), and precise semantic editing. On the former, we
show how FG-SBIR, a well-studied task in the sketch com-
munity [10,71–73], can be reduced to an image (generated)
to image retrieval task, and that a simple nearest-neighbour
model based on VGG-16 [78] features can already surpass
state-of-the-art. On the latter, we demonstrate how pre-
cise local editing can be done that is more fine-grained than
those possible with text and attributes.
We evaluate using conventional metrics (FID, LPIPS),
plus a new retrieval-informed metric to demonstrate supe-
rior performance. But, as there is no better way to convince the jury other than presenting all facts, we offer all gener-
ated results in the supplementary for everyone to scrutinise.
|
Kong_Indescribable_Multi-Modal_Spatial_Evaluator_CVPR_2023
|
Abstract
Multi-modal image registration spatially aligns two im-
ages with different distributions. One of its major challenges
is that images acquired from different imaging machines
have different imaging distributions, making it difficult to
focus only on the spatial aspect of the images and ignore
differences in distributions. In this study, we developed a
self-supervised approach, Indescribable Multi-modal Spatial
Evaluator (IMSE), to address multi-modal image registration.
IMSE creates an accurate multi-modal spatial evaluator to
measure spatial differences between two images, and then op-
timizes registration by minimizing the error predicted by the
evaluator. To optimize IMSE performance, we also proposed
a new style enhancement method called Shuffle Remap which
randomizes the image distribution into multiple segments,
and then randomly disorders and remaps these segments, so
that the distribution of the original image is changed. Shuffle
Remap can help IMSE to predict the difference in spatial
location from unseen target distributions. Our results show
that IMSE outperformed the existing methods for registration
using T1-T2 and CT-MRI datasets. IMSE also can be easily
integrated into the traditional registration process, and can
provide a convenient way to evaluate and visualize registra-
tion results. IMSE also has the potential to be used as a new
paradigm for image-to-image translation. Our code is avail-
able at https://github.com/Kid-Liet/IMSE .
*Corresponding author.
Figure 1. The GAN-based methods can only ensure that the distribution of the X domain is mapped to that of the Y domain. Ideally, we want to achieve instance registration in which moving and target images are one-to-one corresponded.
|
1. Introduction
The purpose of multi-modal image registration is to align
two images with different distributions (Moving ( M) and
Target ( T) images) by warping the space through the defor-
mation field ϕ. A major challenge in multi-modal image
registration is that images from different modalities may dif-
fer in multiple aspects given the fact that images are acquired
using different imaging machines, or different acquisition
parameters. Due to dramatically different reconstruction
and acquisition methods, there is no simple one-to-one map-
ping between different imaging modalities. From the per-
spective of measuring similarity, mainstream unsupervised
multi-modal registration methods can be divided into two
categories: similarity operator based registration and image-
to-image translation based registration.
Similarity operator based registration uses multi-modal
similarity operators as loss functions for registration, for ex-
ample, normalized cross-correlation (NCC) [15, 22, 31, 32],
mutual information (MI) [5, 23, 27, 35], and modality-
independent neighborhood descriptor (MIND) [3, 8, 13, 39].
Similarity operators are based on a prior mathematical knowl-
edge. These are carefully designed and improved over time.
They can be applied to both traditional registration process
(Eq. 1) and neural network registration (Eq. 2):
$\hat{\phi} = \arg\min_{\phi} \mathcal{L}_{\mathrm{sim}}(M(\phi), T). \quad (1)$
Or
$\hat{\theta} = \arg\min_{\theta} \mathbb{E}_{(M,T)}\left[\mathcal{L}_{\mathrm{sim}}(M, T, g_{\theta}(M, T))\right]. \quad (2)$
Similarity operators have several limitations. 1) It is un-
likely that one can design similarity operators that maintain high
accuracy for all data from various imaging modalities. 2)
It is not possible to estimate the upper limit these operators
can achieve, and hence it is difficult to find directions for
improvement.
Image-to-image translation [1, 16, 17, 21, 29, 38, 42]
based multi-modal registration first translates multi-modal
images into single-modal images (Eq 3) using a generative
adversarial network (GAN [10]), and then use Mean Abso-
lute Error (MAE) or Mean Squared Error (MSE) to evaluate
the error at each pixel in space (Eq 4).
$\min_{G}\max_{D} \mathcal{L}_{\mathrm{Adv}}(G, D) = \mathbb{E}_{T}[\log(D(T))] + \mathbb{E}_{M}[\log(1 - D(G(M)))]. \quad (3)$
And
$\hat{\theta} = \arg\min_{\theta} \mathbb{E}_{(M,T)}\left[\left\|G(M), T, g_{\theta}(G(M), T)\right\|_{1}\right]. \quad (4)$
Image-to-image translation based registration cleverly avoids
the complex multi-modal problem and reduces the difficulty
of registration to a certain extent. However, it has obvious
drawbacks. 1)The methods based on GAN require training
a generator using existing multi-modal data. The trained
model will not work if it encounters unseen data, which
greatly limits its applicable scenarios. 2)More importantly,
registration is an instance problem. However, the method
based on GAN is to remap the data distribution between
different modalities. As shown in Figure 1, the distribution has
bias and variance. We cannot guarantee that the translated
image corresponds exactly to the instance target image at the
pixel level. For example, Figure 2 shows that there is still
residual distribution difference between the target image and
translated image. Therefore, even if they are well aligned in
space, there is still a large error in the same organ.
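To make the evaluator-driven alternative concrete before it is introduced below, here is a minimal sketch of registration in the spirit of Eq. (1), with a learned spatial evaluator in place of a hand-crafted similarity operator: a dense displacement field is optimized to minimize the error map the evaluator predicts for the warped moving image and the target. The evaluator here is an untrained stand-in and the penalty weight is arbitrary; this illustrates the optimization loop only, not the paper's network.

```python
# Minimal sketch, assumptions throughout: optimise a displacement field by
# minimising the error map predicted by a (stand-in) trained spatial evaluator.
import torch
import torch.nn.functional as F

def warp(moving: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """moving: (B,1,H,W); flow: (B,2,H,W) displacement in normalised coordinates."""
    B, _, H, W = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2)
    grid = base + flow.permute(0, 2, 3, 1)
    return F.grid_sample(moving, grid, align_corners=True)

evaluator = torch.nn.Sequential(           # stand-in for a trained spatial evaluator
    torch.nn.Conv2d(2, 8, 3, padding=1), torch.nn.ReLU(), torch.nn.Conv2d(8, 1, 3, padding=1)
)

moving, target = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
flow = torch.zeros(1, 2, 64, 64, requires_grad=True)
opt = torch.optim.Adam([flow], lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    err_map = evaluator(torch.cat([warp(moving, flow), target], dim=1))
    # predicted error magnitude plus a small penalty on displacement magnitude
    loss = err_map.abs().mean() + 1e-2 * flow.pow(2).mean()
    loss.backward()
    opt.step()
```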
To address these challenges, we propose a novel idea
based on self-supervision, namely, Indescribable Multi-
modal Spatial Evaluator, or IMSE for short. The IMSE
Figure 2. The distributions of the Moving images translated by CycleGAN still have large residual differences from the Target images. IMSE gives smaller error assessment values in the overlapping regions (converging to blue).
approach creates an accurate multi-modal spatial evaluator
to metric spatial differences between two images, and then
optimizes registration by minimizing the error predicted by
the evaluator. In Figure 2, we provide a visual demonstration
of IMSE in terms of spatial location for T1-T2 and MRI-CT,
respectively. Even though distribution differences between
the Moving and Target images are still significant, IMSE is
low in overlapping regions of the same organs. The main
contributions of this study can be summarized as follows:
•We introduce the concepts of relative single-modal and
absolute single-modal as an extension of the current
definition of single-modality.
•Based on relative single-modal and absolute single-
modal, we propose a self-supervised IMSE method
to evaluate spatial differences in multi-modal image
registration. The main advantage of IMSE is that it
focuses only on differences in spatial location while
ignoring differences in multi-modal distribution caused
by different image acquisition mechanisms. Our results
show that IMSE outperformed the existing metrics for
registration using T1-T2 and CT-MRI datasets.
•We propose a new style enhancement method named
Shuffle Remap. Shuffle Remap can help IMSE to ac-
curately predict the difference in spatial location from an
unseen target distribution (see the sketch after this list).
As an enhancement method, Shuffle Remap can be
impactful in the field of domain generalization.
•We develop some additional functions for IMSE. 1)
As a measure, IMSE can be integrated into both the
registration based on neural networks and the traditional
registration. 2) IMSE can also be used to establish a
new image-to-image translation paradigm. 3) IMSE
can provide a convenient way to evaluate and
visualize registration results.
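The sketch referenced in the Shuffle Remap contribution above, based only on the one-sentence description given earlier (split the intensity range into segments, shuffle their order, and remap); treat it as our assumption of the idea, not the released augmentation code.

```python
# Minimal sketch of the Shuffle Remap idea (an assumption of ours): split the
# intensity range into random segments, shuffle the segment order, and remap
# intensities so the distribution changes while spatial structure is preserved.
import numpy as np

def shuffle_remap(img: np.ndarray, n_segments: int = 8, rng=None) -> np.ndarray:
    """img: float array with values in [0, 1]."""
    rng = rng or np.random.default_rng()
    cuts = np.sort(rng.uniform(0, 1, n_segments - 1))            # random boundaries
    edges = np.concatenate(([0.0], cuts, [1.0]))                 # n_segments + 1 edges
    order = rng.permutation(n_segments)                          # shuffled output order
    widths = np.diff(edges)
    new_starts = np.concatenate(([0.0], np.cumsum(widths[order])))[:-1]
    out = np.empty_like(img)
    seg = np.clip(np.searchsorted(edges, img, side="right") - 1, 0, n_segments - 1)
    for s in range(n_segments):
        mask = seg == s
        k = int(np.where(order == s)[0][0])                      # output slot of segment s
        rel = (img[mask] - edges[s]) / max(widths[s], 1e-8)
        out[mask] = new_starts[k] + rel * widths[s]
    return out

print(shuffle_remap(np.random.rand(4, 4)).round(2))
```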
|
Lim_BiasAdv_Bias-Adversarial_Augmentation_for_Model_Debiasing_CVPR_2023
|
Abstract
Neural networks are often prone to bias toward spurious
correlations inherent in a dataset, thus failing to generalize
unbiased test criteria. A key challenge to resolving the issue
is the significant lack of bias-conflicting training data (i.e.,
samples without spurious correlations). In this paper, we
propose a novel data augmentation approach termed Bias-
Adversarial augmentation (BiasAdv) that supplements bias-
conflicting samples with adversarial images. Our key idea
is that an adversarial attack on a biased model that makes
decisions based on spurious correlations may generate syn-
thetic bias-conflicting samples, which can then be used as
augmented training data for learning a debiased model.
Specifically, we formulate an optimization problem for gen-
erating adversarial images that attack the predictions of
an auxiliary biased model without ruining the predictions
of the desired debiased model. Despite its simplicity, we
find that BiasAdv can generate surprisingly useful synthetic
bias-conflicting samples, allowing the debiased model to
learn generalizable representations. Furthermore, BiasAdv
does not require any bias annotations or prior knowledge of
the bias type, which enables its broad applicability to exist-
ing debiasing methods to improve their performances. Our
extensive experimental results demonstrate the superiority
of BiasAdv, achieving state-of-the-art performance on four
popular benchmark datasets across various bias domains.
|
1. Introduction
Real-world datasets are often inherently biased [2, 34],
where certain visual attributes are spuriously correlated
with class labels. For example, let us consider a binary clas-
sification task between cats and dogs. Unbeknownst to us,
our dataset could consist of most cats indoors and most dogs
outdoors, as illustrated in Figure 1. When trained on such
a biased dataset, neural networks often learn unintended
shortcuts [2, 8, 34, 39] ( e.g., making predictions based on
*Corresponding author: [email protected]
Figure 1. An overview of BiasAdv. In MetaShift [26], the bias attribute {Indoor, Outdoor} is spuriously correlated to the class label {Cat, Dog}. In this work, we refer to data with such spurious correlations as bias-guiding samples and without such correlations as bias-conflicting samples, respectively. Using the biased dataset,
we train an auxiliary model to be biased, and BiasAdv supple-
ments bias-conflicting samples using adversarial images which at-
tack the biased predictions of the auxiliary model while preserving
the predictions of the debiased model. By leveraging the diversi-
fied bias-conflicting data, BiasAdv allows the debiased model to
learn generalizable representations for unbiased classification.
the background) and fail to generalize in a new unbiased test
environment. To tackle the problem, conventional methods
have utilized explicit bias annotations [1,19,39,43] or prior
knowledge of the bias type [2, 3, 5, 9, 46]. However, bias
annotations are expensive and laborious to obtain, and pre-
suming certain bias types in advance limits the capability to
be universally applicable to various bias types.
To train a debiased model without bias annotations, the
main line of recent research [6, 22, 30, 34, 42] has com-
monly utilized an intentionally biased model as an auxiliary
model under the idea that bias attributes are easy-to-learn .
In essence, these methods identify bias-conflicting samples
based on the auxiliary model and train the debiased model
in a way that focuses more on the identified samples ( i.e., re-
weighting based on the auxiliary model). Although recent
re-weighting methods have achieved remarkable success in
debiasing without bias annotations, they have an inherent limi-
tation; since the number of bias-conflicting samples is often
too small for a model to learn generalizable representations,
the model is prone to over-fitting [25]. Consequently, re-
weighting methods suffer from the degraded performance
on bias-guiding samples [20, 44], which raises the question
of whether these methods truly make models debiased or
simply deflect models in unintended directions.
To resolve the aforementioned issues, data augmentation
methods have recently been proposed to supplement bias-
conflicting samples. For example, BiaSwap [20] conducts
image-to-image translation to synthesize bias-conflicting
samples. However, it requires delicate training of complex
and expensive image translation models [36], limiting its
applicability. On the other hand, DFA [25] utilizes feature-
level swapping based on disentangled representations be-
tween bias-guiding and bias-conflicting features. Learning
disentangled representations, however, is often challenging
on real-world datasets [27, 28, 31].
In this paper, we devise a much simpler yet more effec-
tive approach to generate bias-conflicting samples, coined
Bias-Adversarial augmentation (BiasAdv). Figure 1 shows
an overview of BiasAdv. We utilize an auxiliary model that
intentionally learns biased shortcuts, likewise [30, 34]. The
key idea of BiasAdv is that an adversarial attack on the
biased auxiliary model may generate adversarial images
that alter the bias cue from the input images ( i.e., bias-
conflicting samples). Concretely, we formulate an optimiza-
tion problem to generate adversarial images that attack the
predictions of the biased auxiliary model without ruining
the predictions of the desired debiased model. Then, the
generated adversarial images are used as additional training
data to train the debiased model. It is noteworthy that, un-
like previous data augmentation methods [20, 25], BiasAdv
does not require complex image translation models or dis-
entangled representations, so it can be seamlessly applied
to any debiasing method based on the biased model. Fur-
thermore, we show that BiasAdv, despite its simplicity, can
generate surprisingly useful synthetic bias-conflicting sam-
ples, which significantly improves debiasing quality.
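A minimal sketch of how such a bias-adversarial example could be generated; the PGD-style update, step sizes, and the exact way the debiased model's prediction is preserved (a penalty on its loss for the true label) are our assumptions, not the paper's formulation.

```python
# Minimal sketch under assumptions: perturb the input to attack the biased
# auxiliary model's prediction while penalising any increase of the debiased
# model's loss on the true label; the result is used as extra training data.
import torch
import torch.nn.functional as F

def bias_adversarial(x, y, debiased, biased, eps=8/255, alpha=2/255, steps=5, beta=1.0):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        attack = F.cross_entropy(biased(x_adv), y)   # maximise: break the biased shortcut
        keep = F.cross_entropy(debiased(x_adv), y)   # minimise: preserve debiased prediction
        loss = attack - beta * keep
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.clamp(x + torch.clamp(x_adv - x, -eps, eps), 0, 1)  # project to eps-ball
    return x_adv

# Toy usage with stand-in models
net_d = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
net_b = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
print(bias_adversarial(x, y, net_d, net_b).shape)
```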
The main contributions of our work are three-fold:
• We propose BiasAdv, a simple and effective data aug-
mentation method for model debiasing, which utilizes
adversarially attacked images as additional training
data. Our method does not require any bias annotations
or prior knowledge of the bias type during training.
• BiasAdv can be easily applied to existing re-weighting
methods without architectural or algorithmic changes.
We confirm that BiasAdv significantly improves the
performance, achieving up to 22.8%, 13.4%, 7.9%, and 8.0% better performance than the state-of-the-art
results on CIFAR-10C [25], BFFHQ [25], BAR [34],
and MetaShift [26], respectively.
• We demonstrate the effectiveness of BiasAdv through
extensive ablation studies and analyses. Our key find-
ing is that BiasAdv helps to learn generalizable repre-
sentations and prevents over-fitting; it does not degrade
the performance of bias-guiding samples and improves
model robustness against input corruptions.
|
Li_Causally-Aware_Intraoperative_Imputation_for_Overall_Survival_Time_Prediction_CVPR_2023
|
Abstract
Previous efforts in the vision community are mostly made on
learning good representations from visual patterns. Beyond
this, this paper emphasizes the high-level ability of causal
reasoning. We thus present a case study of solving the chal-
lenging task of Overall Survival (OS) time prediction in primary liver
cancers. Critically, the prediction of OS time at the early
stage remains challenging, due to the unobvious image pat-
terns of reflecting the OS. To this end, we propose a causal
inference system by leveraging the intraoperative attributes
and the correlation among them, as an intermediate super-
vision to bridge the gap between the images and the final
OS. Particularly, we build a causal graph, and train the im-
ages to estimate the intraoperative attributes for final OS
prediction. We present a novel Causally-aware Intraop-
erative Imputation Model (CAWIM) that can sequentially
predict each attribute using its parent nodes in the esti-
mated causal graph. To determine the causal directions,
*Equal contribution
†Corresponding authorwe propose a splitting-voting mechanism, which votes for
the direction for each pair of adjacent nodes among multi-
ple predictions obtained via causal discovery from hetero-
geneity. The practicability and effectiveness of our method
are demonstrated by the promising results on liver cancer
dataset of 361 patients with long-term observations.
|
1. Introduction
The success of recent deep learning models is largely at-
tributed to learning good representations for visual pat-
terns. Such representations essentially facilitate various vi-
sion tasks, such as recognition and synthesis [15, 25, 33].
Nevertheless, one important goal for the vision commu-
nity is to model and summarize the relationships of ob-
served variables of a system, in order to enable accurate pre-
dictions on similar data. Essentially, it is desirable to un-
derstand how the system is changed if one modifies these
relationships under certain conditions, e.g., the effects of
a treatment in healthcare. Thus this demands the high-
level ability of causal reasoning beyond the previous ef-
forts of only learning good representations for visual pat-
terns [1, 5, 16, 21, 35, 37]. This naturally leads to our task
of causal inference.
This paper presents a case study of solving the challeng-
ing task of Overall Survival (OS) time estimation in Pri-
mary Liver Cancers (PLC). Generally, liver cancer re-
mains one of the most common malignancies worldwide in
the 21st century, as there are about one million new cases
every year [36]. The five-year survival rate for advanced
PLC is only about 5% [7]. Therefore, early and accurate
prediction of OS time estimation can provide informative
guidance for individualized treatment planning and reduc-
ing the burden on medical resources [6, 27, 32, 34]. On the
other hand, one shall easily notice that with the renaissance
of deep learning, great achievements have been made on
medical imaging analysis, such as diagnosis, segmentation,
and classification [16, 21,35,37]. Unfortunately, it still re-
mains challenging for experienced clinicians to predict OS
time at early stage, even with advanced modern diagnostic
tools such as Magnetic Resonance Imaging (MRI) or tumor
marker tests [3, 11], and the deep learning tools [1, 5].
Some studies propose to leverage deep neural networks
for OS time prediction. They focus on fusion learning
of multi-modal image features and some basic information
(i.e., age and gender) [5, 8,14,22,24,30,39]. Neverthe-
less, the accurate prediction based on only preoperative in-
formation (such as image and tumor marker indexes) is still
challenging, possibly due to missing information from early
diagnosis stage to the final stage. This missing informa-
tion includes the texture and pathological attributes of the
liver, health level of the patient, and post-operative treat-
ment [9, 17, 20]. For example, as in Fig. 1, patients A and
B with different OS times have almost identical preopera-
tive indicators (cause, history of disease, etc.), making it
hard for a preoperative-based model to discriminate between them.
To amend this problem, we present a causal inference
system that can well utilize the intraoperative information,
which is readily accessible in the training data. Accord-
ing to medical priors, such information records patholog-
ical attributes, which can be more reflective about the pa-
tient’s health level and the postoperative recovery. Inspired
by this, we propose to leverage this auxiliary information
to help build our causal inference system. Again, we take
the example in Fig. 1. Although preoperative information
cannot differentiate patient A from B, their intraoperative
index shows great difference in distribution, which can thus
be employed for the discrimination. Additionally, there are
many indicators from medical experts that the intraoperative
indexes are related to each other. For instance, the clinico-
pathologic hepatocirrhosis is dependent on the hepatocir-
rhosis; the sum of tumor diameter is affected by the num-
ber of tumors; the fibrosis (reflected on S-Score) can prob-ably deteriorate to cirrhosis [4], etc. By leveraging these
relationships, we can build a more concrete causal picture
from both observed-data modelling and medical
expert-level knowledge of these variables.
To this end, we encapsulate these priors and the inspired
proposals into a new method, dubbed as Causally-Aware In-
traoperative Imputation Model (CAWIM). It incorporates
a causal discovery module to sequentially estimate intraop-
erative indexes as an intermediate stage towards final OS
time prediction. Specifically, our model is composed of
two key steps: i)estimating the intraoperative indexes us-
ing preoperative information, i.e., image and indexes; ii)
followed by OS time prediction using estimated intraopera-
tive indexes and preoperative information. To achieve more
accurate prediction of the intraoperative indexes, which largely
determines the prediction power of the whole method, we pro-
pose a Causally-aware Directed Acyclic Graph (CaDAG)
module. It learns the causal structure represented as a DAG
over intraoperative features. To identify the causal rela-
tions beyond the traditional PC algorithm [26], we propose
a splitting-voting mechanism, which is inspired by the re-
cent work [18] that learn the causal structure with the as-
sistance of an auxiliary domain index variable. Our pro-
posed mechanism can not only identify the causal relations
even when this domain index variable is not available, but
also comes with a theoretical guarantee that the learned graph
is acyclic. During the test stage, we sequentially estimate
each intraoperative index with preoperative information and
additionally, its parent set among other intraoperative in-
dexes. The utility of our method can be demonstrated by
a significant improvement of OS time prediction on an in-
house liver cancer dataset, as well as better interpretabil-
ity of learned causal structure, more accurate estimation of
intraoperative indexes and more interpretable visualization
results.
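To illustrate the sequential imputation step described above, here is a minimal sketch: given a learned acyclic parent structure over intraoperative attributes, each attribute is predicted in topological order from preoperative features plus its already-imputed parents. The attribute names, the per-attribute regressors, and the toy parent sets are hypothetical.

```python
# Minimal sketch under assumptions: sequential imputation along a learned DAG.
import numpy as np

def topological_order(parents: dict) -> list:
    """parents: attribute -> list of parent attributes (graph assumed acyclic)."""
    order, done = [], set()
    while len(order) < len(parents):
        for node, pa in parents.items():
            if node not in done and all(p in done for p in pa):
                order.append(node); done.add(node)
    return order

def impute(preop: np.ndarray, parents: dict, regressors: dict) -> dict:
    """Sequentially estimate each intraoperative attribute from preoperative
    features plus its already-predicted parents."""
    values = {}
    for node in topological_order(parents):
        feats = np.concatenate([preop] + [np.atleast_1d(values[p]) for p in parents[node]])
        values[node] = regressors[node](feats)
    return values

# Toy usage: cirrhosis -> clinicopathologic cirrhosis; tumor number -> diameter sum
parents = {"cirrhosis": [], "clin_cirrhosis": ["cirrhosis"],
           "tumor_num": [], "diam_sum": ["tumor_num"]}
regressors = {k: (lambda feats: float(feats.mean())) for k in parents}  # stand-in models
print(impute(np.random.rand(8), parents, regressors))
```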
In a nutshell, we for the first time present a case study
of building a causally-aware intraoperative imputation sys-
tem for the challenging task of overall survival time pre-
diction. The proposed method of building the causal in-
ference system can be naturally extended to other similar
medical tasks. Our key contributions are listed as follows.
(1) New Paradigm for OS time Prediction. We propose
to leverage intraoperative indexes as an intermediate stage
during training. The leverage of this information can sig-
nificantly alleviate the “missing information” issue. To the
best of our knowledge, we are the first to leverage auxiliary
information (in addition to preoperative features) for OS
time prediction. (2) Causal Structure Learning. We pro-
pose a novel splitting-voting mechanism that can identify
the causal structure even when the domain index variable is
missing. (3) Better Prediction Power. Our method can sig-
nificantly improve the prediction power for liver cancer over
the competitors. The methods are evaluated on the medical
dataset, which, to the best of our knowledge, is the largest
primary liver cancer dataset.
|
Li_Hard_Sample_Matters_a_Lot_in_Zero-Shot_Quantization_CVPR_2023
|
Abstract
Zero-shot quantization (ZSQ) is promising for compress-
ing and accelerating deep neural networks when the data
for training full-precision models are inaccessible. In ZSQ,
network quantization is performed using synthetic samples,
thus, the performance of quantized models depends heavily
on the quality of synthetic samples. Nonetheless, we find
that the synthetic samples constructed in existing ZSQ meth-
ods can be easily fitted by models. Accordingly, quantized
models obtained by these methods suffer from significant
performance degradation on hard samples. To address this
issue, we propose HArd sample Synthesizing and Training
(HAST). Specifically, HAST pays more attention to hard sam-
ples when synthesizing samples and makes synthetic samples
hard to fit when training quantized models. HAST aligns
features extracted by full-precision and quantized models to
ensure the similarity between features extracted by these two
models. Extensive experiments show that HAST significantly
outperforms existing ZSQ methods, achieving performance
comparable to models that are quantized with real data.
|
1. Introduction
Deep neural networks (DNNs) achieve great success in
many domains, such as image classification [28, 43, 44], ob-
ject detection [15, 16, 39], semantic segmentation [13, 53],
and embodied AI [3,4,11]. These achievements are typically
paired with the rapid growth of parameters and computa-
tional complexity, making it challenging to deploy DNNs
on resource-constrained edge devices. In response to the
challenge, network quantization proposes to represent the
full-precision models, i.e., floating-point parameters and ac-
tivations, using low-bit integers, resulting in a high compres-
sion rate and an inference-acceleration rate [24]. These meth-
*Equal contribution. Email: [email protected]
†Corresponding author. Email: [email protected]
Figure 1. Performance of the proposed HAST on three datasets compared with the state-of-the-art method IntraQ [50] and the method fine-tuning with real data [50]. HAST quantizes ResNet-20 on CIFAR-10/CIFAR-100 and ResNet-18 on ImageNet to 3-bit (left) and 4-bit (right), achieving performance comparable to the method fine-tuning with real data.
ods implicitly assume that the training data of full-precision
models are available for the process of network quantization.
However, the training data, e.g., medical records, can be
inaccessible due to privacy and security issues. Such a prac-
tical and challenging scenario has led to the development
of zero-shot quantization (ZSQ) [2], quantizing networks
without accessing the training data.
Many efforts have been devoted to ZSQ [2, 19, 38, 46, 48,
50]. In ZSQ, some works perform network quantization by
weight equalization [38], bias correction [1], or weight round-
ing strategy [19], at the cost of some performance degrada-
tion. To promote the performance of quantized models, ad-
vanced works propose to leverage synthetic data for network
quantization [2, 7, 46, 48, 50]. Specifically, they fine-tune
quantized models with data synthesized using full-precision
models, achieving promising improvement in performance.
Much attention has been paid to the generation of syn-
thetic samples, since high-quality synthetic samples lead to
high-performance quantized models [2, 46]. Recent works
employ generative models to synthesize data with fruitful
approaches, considering generator design [51], boundary
sample generation [6], adversarial training scheme [35], and
Figure 2. Analysis on synthetic data. (a) Performance of converged 3-bit ResNet-20. We quantize ResNet-20 to 3-bit using IntraQ [50], Real Data [50], and our HAST, respectively. The top-1 accuracy on both training data (synthetic data for ZSQ methods) and test data is reported.
(b) The error rate of test samples with different difficulties. (c) Distribution visualization of sample difficulty using GHM [30]. For each
converged quantized model, we randomly sample 10,000 synthetic/real samples and count the fraction of samples based on difficulty. Note
that the y-axis uses a log scale since the number of samples with different difficulties can differ by order of magnitude.
effective training strategy [7]. Since the quality of synthetic
samples is typically limited by the generator [36], advanced
works treat synthesizing samples as a problem of noise op-
timization [2]. Namely, the noise distribution is optimized
to approximate some specified properties of real data dis-
tributions, such as batch normalization statistics (BNS) and
inception loss (IL) [20]. To promote model performance,
IntraQ [50] focuses on the property of synthetic samples and
endows samples with heterogeneity, achieving state-of-the-
art performance as depicted in Figure 1.
Although existing ZSQ methods achieve considerable
performance gains by leveraging synthetic samples, there is
still a significant performance gap between models trained
with synthetic data and those trained with real data [50]. To
reduce the performance gap, we investigate the difference
between real and synthetic data. Specifically, we study the
difference in generalization error between models trained
with real data and those trained with synthetic data. Our
experimental results show that synthetic data lead to larger
generalization errors than real data, as illustrated in Figure 2a.
Namely, synthetic samples lead to a more significant gap
between training and test accuracy than real data.
We conjecture that the performance gap stems from the
misclassification of hard test samples. To verify the conjec-
ture, we conduct experiments to study how model perfor-
mance varies with sample difficulty, where GHM [30] is
employed to measure the difficulty of samples quantitatively.
The results shown in Figure 2b demonstrate that, on difficult
samples, models trained with synthetic data perform worse
than those trained with real data. This may result from the fact that
synthetic samples are easy to fit, which is consistent with the
observation on inception loss of synthetic data [33]. We ver-
ify the assumption through a series of experiments, where we
count the fraction of samples of different difficulties using
GHM [30]. The results are reported in Figure 2c, where we
observe a severe missing of hard samples in synthetic sam-
ples compared to real data. Consequently, quantized models
fine-tuned with these synthetic data may fail to generalize
well on hard samples in the test set. In light of the conclusions drawn from Figure 2, the samples
synthesized for fine-tuning quantized models in ZSQ should
be hard to fit. To this end, we propose a novel HArd sample
Synthesizing and Training (HAST) scheme. The insight of
HAST is two-fold: a) The samples constructed for fine-
tuning models should not be easy for models to fit; b) The
features extracted by full-precision and the quantized model
should be similar. To this end, in the process of synthesizing
samples, HAST pays more attention to hard samples in a re-
weighting manner, where the weights are equal to the sample
difficulty introduced in GHM [30]. Meanwhile, in the fine-
tuning process, HAST further promotes the sample difficulty
on the fly and aligns the features between the full-precision
and quantized models.
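To make the re-weighting idea concrete, the following is a minimal PyTorch sketch of a difficulty-weighted fine-tuning loss with feature alignment. The GHM-style difficulty measure, the loss weighting, the assumption that each model returns a (features, logits) pair, and the trade-off weight `alpha` are illustrative simplifications, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def sample_difficulty(logits, labels):
    # GHM-style difficulty: for a softmax classifier, the gradient norm of the
    # cross-entropy w.r.t. the true-class logit is |p_true - 1|.
    probs = torch.softmax(logits, dim=1)
    p_true = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    return (1.0 - p_true).detach()        # in [0, 1]; larger means harder

def hast_finetune_loss(fp_model, q_model, images, labels, alpha=1.0):
    # Assumed interface: both models return (features, logits).
    with torch.no_grad():
        fp_feat, fp_logits = fp_model(images)
    q_feat, q_logits = q_model(images)

    # Difficulty-weighted classification loss: hard samples contribute more.
    w = sample_difficulty(fp_logits, labels)
    ce = F.cross_entropy(q_logits, labels, reduction="none")
    cls_loss = (w * ce).mean()

    # Feature alignment between the quantized and full-precision models.
    align_loss = F.mse_loss(q_feat, fp_feat)
    return cls_loss + alpha * align_loss
```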
To verify the effectiveness of HAST, we conduct compre-
hensive experiments on three datasets under two quantization
precisions, following settings used in [50]. Our experimental
results show that HAST using only 5,120 synthetic samples
outperforms previous state-of-the-art method [50] and even
achieves performance comparable with quantization with
real data.
Our main contributions can be summarized as follows:
•We observe that the performance degradation in zero-
shot quantization is attributed to the lack of hard sam-
ples. Namely, the synthetic samples used in existing
ZSQ methods are easily fitted by quantized models, dis-
tinguishing models trained on synthetic samples from
those trained on real data, as depicted in Figure 2.
•Built upon our empirical observation, we propose a
novel HArd sample Synthesizing and Training (HAST)
scheme to promote the performance of ZSQ. Specifi-
cally, HAST generates hard samples and further pro-
motes the sample difficulty on the fly when training
models, paired with a feature alignment constraint to
ensure the similarity of features extracted by these two
models, as summarized in Algorithm 1.
•Extensive experiments demonstrate the superiority of
HAST over existing ZSQ methods. More specifically,
HAST using merely 5,120 synthetic samples outper-
forms the previous state-of-the-art method and achieves
performance comparable to models fine-tuned using
real training data, as shown in Figure 1.
|
Levy_SeaThru-NeRF_Neural_Radiance_Fields_in_Scattering_Media_CVPR_2023
|
Abstract
Research on neural radiance fields (NeRFs) for novel
view generation is exploding with new models and exten-
sions. However, a question that remains unanswered is what
happens in underwater or foggy scenes where the medium
strongly influences the appearance of objects. Thus far,
NeRF and its variants have ignored these cases. However,
since the NeRF framework is based on volumetric render-
ing, it has inherent capability to account for the medium’s
effects, once modeled appropriately. We develop a new ren-
dering model for NeRFs in scattering media, which is based
on the SeaThru image formation model, and suggest a suit-
able architecture for learning both scene information and
medium parameters. We demonstrate the strength of our
method using simulated and real-world scenes, correctly
rendering novel photorealistic views underwater. Even
more excitingly, we can render clear views of these scenes,
removing the medium between the camera and the scene
and reconstructing the appearance and depth of far objects,
which are severely occluded by the medium. Our code and
unique datasets are available on the project’s website.
|
1. Introduction
The pioneering work of Mildenhall et al. [25] on Neural
Radiance Fields (NeRFs) has tremendously advanced the
field of Neural Rendering, due to its flexibility and unprece-
dented quality of synthesized images. Yet, the formulations
of the original NeRF [25] and its follow-up variants assume that images were acquired in clear air, i.e., in a medium
that does not scatter or absorb light in a significant man-
ner and that the acquired image is composed solely of the
object radiance. The NeRF formulation is based on volu-
metric rendering equations that take into account sampled
points along 3D rays. Assuming a clear air environment, an
implicit assumption, which is often enforced explicitly with
dedicated loss components [5], is that a single opaque (high
density) object is encountered per ray, with zero density be-
tween the camera and the object.
In stark contrast to the clear-air case, when the medium is
absorbing and / or scattering (e.g., haze, fog, smog, and all
aquatic habitats), the volume rendering equation has a true
volumetric meaning, as the entire volume, and not only the
object, contributes to image intensity. As the NeRF model
estimates color and density at every point of a scene, it lends
itself perfectly to general volumetric rendering, given that
the appropriate rendering model is used. Here, we bridge
this gap with SeaThru-NeRF , a framework that incorporates
a rendering model that takes into account scattering media.
This is achieved by assigning separate color and density
parameters to the object (scene) and the medium, within the
NeRF framework. Our approach adopts the SeaThru un-
derwater image formation model [1, 3] to account for scat-
tering media. SeaThru is a generalization of the standard
wavelength-independent attenuation (e.g., fog) image for-
mation model, where two different wideband coefficients
are used to represent the medium, which is more accurate
when attenuation is wavelength-dependent (as in all wa-
ter bodies and under some atmospheric conditions). In our
model, the medium parameters are separate per color chan-
nel, and are learned functions of the viewing angles, enforc-
ing them to be constant only along 3D rays in the scene.
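As a rough illustration of how object and medium terms can be combined along a ray, here is a minimal PyTorch sketch of volume rendering with separate medium coefficients in the spirit of SeaThru. The tensor shapes, the per-sample discretization, and the single constant medium color are simplifying assumptions rather than the paper's exact rendering model.

```python
import torch

def render_ray_in_medium(t, sigma_obj, c_obj, beta_d, beta_b, c_med):
    """Hedged sketch: object signal attenuated by beta_d plus backscatter
    governed by beta_b, accumulated along one ray.
      t:         (S,)   sample distances along the ray
      sigma_obj: (S,)   object densities
      c_obj:     (S,3)  object colors
      beta_d, beta_b: (3,) wideband medium coefficients (per color channel)
      c_med:     (3,)   medium (backscatter) color
    """
    delta = torch.diff(t, append=t[-1:] + 1e10)              # sample spacing
    alpha_obj = 1.0 - torch.exp(-sigma_obj * delta)          # object opacity per sample
    T_obj = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha_obj + 1e-10])[:-1], dim=0
    )                                                        # transmittance to each sample

    # Object term: object color attenuated by the medium up to distance t.
    att = torch.exp(-beta_d[None, :] * t[:, None])           # (S,3)
    C_obj = (T_obj[:, None] * alpha_obj[:, None] * att * c_obj).sum(0)

    # Medium term: backscatter contributed by each ray segment.
    alpha_med = 1.0 - torch.exp(-beta_b[None, :] * delta[:, None])   # (S,3)
    T_med = torch.exp(-beta_b[None, :] * t[:, None])
    C_med = (T_obj[:, None] * T_med * alpha_med * c_med[None, :]).sum(0)
    return C_obj + C_med
```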
Attempting to optimize existing NeRFs on scenes with
scattering medium results in cloud-like objects floating in
space, while our formulation enables the network to learn
the correct representation of the entire 3D volume, that con-
sists of both the scene and the medium. Our experiments
demonstrate that SeaThru-NeRF produces state-of-the-art
photorealistic novel view synthesis on simulated and chal-
lenging real-world scenes (see Fig. 1) with scattering media,
that include complex geometries and appearances. In addi-
tion, it enables:
1.Color restoration of the scenes as if they were not im-
aged through a medium, as our modeling allows full sepa-
ration of object appearance from the medium effects.
2.Estimation of 3D scene structure which surpasses that
of structure-from-motion (SFM) or current NeRFs, espe-
cially in far areas of bad visibility, as we jointly reconstruct
and reason for the geometry and medium.
3.Estimation of wideband medium parameters , which
are informative properties of the captured environment, and
potentially allowing simulation under different conditions.
|
Kumari_Multi-Concept_Customization_of_Text-to-Image_Diffusion_CVPR_2023
|
Abstract
While generative models produce high-quality images of
concepts learned from a large-scale database, a user often
wishes to synthesize instantiations of their own concepts (for
example, their family, pets, or items). Can we teach a model
to quickly acquire a new concept, given a few examples? Fur-
thermore, can we compose multiple new concepts together?
We propose Custom Diffusion, an efficient method for aug-
menting existing text-to-image models. We find that only op-
timizing a few parameters in the text-to-image conditioning
mechanism is sufficiently powerful to represent new concepts
while enabling fast tuning ( ∼6minutes). Additionally, we
can jointly train for multiple concepts or combine multi-
ple fine-tuned models into one via closed-form constrained
optimization. Our fine-tuned model generates variations of
multiple new concepts and seamlessly composes them with
existing concepts in novel settings. Our method outperforms
or performs on par with several baselines and concurrent
works in both qualitative and quantitative evaluations, while
being memory and computationally efficient.
|
1. Introduction
Recently released text-to-image models [53, 57, 60, 79]
have represented a watershed year in image generation. By
simply querying a text prompt, users are able to generate
images of unprecedented quality. Such systems can generate
a wide variety of objects, styles, and scenes – seemingly
“anything and everything”.
However, despite the diverse, general capability of such
models, users often wish to synthesize specific concepts from
their own personal lives. For example, loved ones such as
family, friends, pets, or personal objects and places, such as
a new sofa or a recently visited garden, make for intriguing
concepts. As these concepts are by nature personal, they are
unseen during large-scale model training. Describing these
concepts after the fact, through text, is unwieldy and unable
to produce the personal concept with sufficient fidelity.
This motivates a need for model customization . Given the
few user-provided images, can we augment existing text-to-
image diffusion models with the new concept (for example,
their pet dog or a “moongate” as shown in Figure 1)? The
fine-tuned model should be able to generalize and compose
them with existing concepts to generate new variations. This
poses a few challenges – first, the model tends to forget [12,
35, 52] or change [34, 41] the meanings of existing concepts:
e.g., the meaning of “moon” being lost when adding the
“moongate” concept. Secondly, the model is prone to overfit
the few training samples and reduce sampling variations.
Moreover, we study a more challenging problem, compo-
sitional fine-tuning – the ability to extend beyond tuning for
a single, individual concept and compose multiple concepts
together, e.g., pet dog in front of moongate (Figure 1).
Improving compositional generation has been studied in re-
cent works [40]. But composing multiple new concepts poses
additional challenges, such as mixing unseen concepts.
In this work, we propose a fine-tuning technique, Custom
Diffusion for text-to-image diffusion models. Our method
is computationally and memory efficient. To overcome the
above-mentioned challenges, we identify a small subset of
model weights, namely the key and value mapping from
text to latent features in the cross-attention layers [5, 70].
Fine-tuning these is sufficient to update the model with the
new concept. To prevent model forgetting, we use a small
set of real images with similar captions as the target images.
We also introduce augmentation during fine-tuning, which
leads to faster convergence and improved results. To inject
multiple concepts, our method supports training on both
simultaneously or training them separately and then merging.
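A minimal PyTorch sketch of the parameter-selection step is shown below: freeze the UNet and leave only the cross-attention key and value projections trainable. The substring filters assume a diffusers-style naming convention ("attn2" for cross-attention, "to_k"/"to_v" for the projections); this naming is an assumption and may need adjusting for other codebases.

```python
import torch

def select_custom_diffusion_params(unet):
    """Freeze everything except the cross-attention key/value projections."""
    trainable = []
    for name, param in unet.named_parameters():
        if "attn2.to_k" in name or "attn2.to_v" in name:
            param.requires_grad = True
            trainable.append(param)
        else:
            param.requires_grad = False
    return trainable

# Usage sketch: optimize only the selected few percent of the weights.
# params = select_custom_diffusion_params(unet)
# optimizer = torch.optim.AdamW(params, lr=1e-5)
```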
We build our method on Stable Diffusion [1] and experi-
ment on various datasets with as few as four training images.
For adding single concepts, our method shows better text
alignment and visual similarity to the target images than con-
current works and baselines. More importantly, our method
can compose multiple new concepts efficiently, whereas con-
current methods struggle and often omit one. Finally, our
method only requires storing a small subset of parameters
(3% of the model weights) and reduces the fine-tuning time
(6 minutes on 2 A100 GPUs, 2−4× faster compared to
concurrent works). Full version of the paper is available at
https://arxiv.org/abs/2212.04488.
|
Kil_Towards_Unified_Scene_Text_Spotting_Based_on_Sequence_Generation_CVPR_2023
|
Abstract
Sequence generation models have recently made signif-
icant progress in unifying various vision tasks. Although
some auto-regressive models have demonstrated promising
results in end-to-end text spotting, they use specific detec-
tion formats while ignoring various text shapes and are
limited in the maximum number of text instances that can
be detected. To overcome these limitations, we propose
a UNIfied scene Text Spotter, called UNITS. Our model
unifies various detection formats, including quadrilater-
als and polygons, allowing it to detect text in arbitrary
shapes. Additionally, we apply starting-point prompting to
enable the model to extract texts from an arbitrary start-
ing point, thereby extracting more texts beyond the num-
ber of instances it was trained on. Experimental results
demonstrate that our method achieves competitive perfor-
mance compared to state-of-the-art methods. Further anal-
ysis shows that UNITS can extract a larger number of texts
than it was trained on. We provide the code for our method
at https://github.com/clovaai/units.
|
1. Introduction
End-to-end scene text spotting, which can jointly de-
tect and recognize text from an image at once, has recently
gained significant attention. This work has practical appli-
cations in various fields such as visual navigation, visual
question answering, and document image understanding.
In this paper, we formulate scene text spotting as a se-
quence generation task. This approach casts the vision task
as a language modeling task conditioned on the image and
text prompt [6,7,27,38]. By formulating the output as a se-
quence of discrete tokens, the vision task can be performed
by generating a sequence through an auto-regressive trans-
former decoder. Recent studies have attempted to integrate
various tasks into an auto-regressive model, including text
spotting. SPTS [28] treated all detection formats as the sin-
Figure 1. Various types of detection formats: (a) single central point, (b) bounding box, (c) quadrilateral, and (d) polygonal. The green line represents the boundary shape of the detection format, and the red dots mark the points used for the corresponding format.
gle central point and predicted the coordinate tokens of the
central point and word tokens auto-regressively. However,
there are several challenges associated with applying the se-
quence generation approach directly to scene text spotting.
As illustrated in Fig. 1, there are various methods to indi-
cate the location and boundaries of text instances, and each
method has its own trade-offs in terms of annotation costs
and potential applicability. For instance, a single central
point or bounding box annotation has a relatively low anno-
tation cost and is appropriate when the detailed shape infor-
mation is not necessary. However, in many fields where text
spotting is applied, handling text location information as
only a single point might be insufficient. As shown in Fig. 2,
in the scene text editing [17, 32], which converts the text in
the image to desired text while preserving the text style, de-
tailed text shape extraction is required. Therefore, for such
Figure 2. Example of image translation using scene text editing: (a) original image, (b) result of image translation.
tasks, outputting text location information in quadrilateral
or polygonal formats beyond a single point is necessary.
Similarly, in visual document understanding, which utilizes
optical character recognition as a pre-processing step, more
detailed detection information of texts could be useful. In
summary, since each detection format has its own trade-
offs, it is better to cover all detection formats instead of
relying on only one.
Also, conventional sequence generation models have a
limitation in terms of generating sequences longer than the
maximum length the model has been trained on. This limi-
tation is particularly problematic for scene text spotting, as
it requires generating more points than bounding boxes and
generating text transcriptions in addition to object classes.
For text-rich images such as documents, this limitation be-
comes a bottleneck in extracting all the texts.
To address these challenges in applying the sequence
generation method to text spotting, we propose a novel end-
to-end UNIfied scene Text Spotter, called UNITS, which
aims to overcome these limitations. We unify various de-
tection formats, such as quadrilateral and polygon, into the
model, allowing it to detect arbitrary-shaped text areas. This
approach enables the model to learn different annotation
formats together using a single unified model, and it can
extract all detection formats. We use prompts to unify var-
ious detection formats, enabling a single model to predict
the coarse or fine-grained location information of text in-
stances.
Moreover, we introduce starting-point prompting, which
enables the model to extract texts from arbitrary starting
points, allowing it to generate longer sequences than the
maximum decoding length. This approach enhances the
model’s efficiency and enables it to extract more text than
the number of text boxes it has been trained on.
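The control flow of starting-point prompting can be sketched as below. The `decode_step` callable and the `top_left` attribute on each instance are hypothetical stand-ins for the auto-regressive decoder and its output structure; only the idea of re-prompting from the last extracted instance is meant to be illustrated.

```python
def spot_all_text(decode_step, max_len, start=(0.0, 0.0)):
    """Hedged sketch of starting-point prompting.

    decode_step(start, max_len) is a hypothetical callable that runs one
    auto-regressive pass conditioned on a starting-point prompt and returns
    (instances, reached_limit), where reached_limit indicates the decoder hit
    its maximum sequence length before reading all remaining text.
    """
    results = []
    while True:
        instances, reached_limit = decode_step(start, max_len)
        results.extend(instances)
        if not reached_limit or not instances:
            break                           # nothing left after `start`
        start = instances[-1].top_left      # re-prompt from the last instance read
    return results
```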
In addition, we employ a multi-way transformer decoder
from the mixture-of-experts (MoE) [4, 9, 37]. This decoder
separates each time stamp into detection and recognition,
allowing the model to converge faster by guiding the model
to generate a detection or a recognition token and assigning an expert accordingly. Each block of the multi-way trans-
former consists of shared attention modules and two task
experts: detection and recognition experts.
Experimental results demonstrate that our proposed
method achieves competitive performance on text spotting
benchmarks while extracting texts in various detection for-
mats using a single unified model.
Our main contributions are:
• We propose a novel sequence generation-based scene
text spotting method that can extract arbitrary-shaped
text areas by unifying various detection formats.
• Our model can extract more texts than the decoder
length allows using the starting-point prompt, which
generalizes the model to spot more texts than it has
been trained on.
• Experimental results demonstrate that our method
achieves competitive performance on text spotting
benchmarks and provides additional functionalities
|
Liu_Reducing_the_Label_Bias_for_Timestamp_Supervised_Temporal_Action_Segmentation_CVPR_2023
|
Abstract
Timestamp supervised temporal action segmentation
(TSTAS) is more cost-effective than fully supervised coun-
terparts. However, previous approaches suffer from severe
label bias due to over-reliance on sparse timestamp an-
notations, resulting in unsatisfactory performance. In this
paper, we propose the Debiasing-TSTAS (D-TSTAS) frame-
work by exploiting unannotated frames to alleviate this bias
from two phases: 1) Initialization. To reduce the depen-
dencies on annotated frames, we propose masked times-
tamp predictions (MTP) to ensure that initialized model
captures more contextual information. 2) Refinement. To
overcome the limitation of the expressiveness from sparsely
annotated timestamps, we propose a center-oriented times-
tamp expansion (CTE) approach to progressively expand
pseudo-timestamp groups which contain semantic-rich mo-
tion representation of action segments. Then, these pseudo-
timestamp groups and the model output are used to iter-
atively generate pseudo-labels for refining the model in a
fully supervised setup. We further introduce segmental con-
fidence loss to enable the model to have high confidence
predictions within the pseudo-timestamp groups and more
accurate action boundaries. Our D-TSTAS outperforms the
state-of-the-art TSTAS method as well as achieves compet-
itive results compared with fully supervised approaches on
three benchmark datasets.
|
1. Introduction
Analyzing and understanding human actions in videos
is very important for many applications, such as human-
robot interaction [14] and healthcare [33]. Recently, sev-
eral approaches have been very successful in locating and
analyzing activities in videos, including action localization
[12,29,37,50], action segmentation [7,11,20,23], and action
recognition [9, 27, 39, 41, 47].
Figure 1. (a) The TSTAS task aims to segment actions in videos
by timestamp annotations. (b) An example of focus bias: exist-
ing initialization methods prefer to predict frames similar to the
annotated timestamp, leading to incorrectly identifying dissimilar
frames within the segment and similar frames outside the segment.
(c) An example of representation bias: the frames of complex
action share large semantic variance, e.g. the process of cutting
cheese includes both taking out and cutting, resulting in the mod-
els that rely on sparse timestamps to produce biased pseudo-labels.
Despite the success of previous approaches, they rely on
fully temporal supervision, where the start and end frames
of each action are annotated. As a lightweight alternative,
many researchers have started exploring timestamp super-
vised temporal action segmentation (TSTAS), where each
action segment is annotated with only one frame in the
untrimmed video, as shown in Fig. 1 (a).
Most previous approaches follow a two-step pipeline by
initializing and then iteratively refining the model with gen-
erated pseudo-labels. However, relying only on supervised
signals of sparse single-timestamp annotations, these meth-
ods fail to learn the semantics of entire action segments,
which is referred to as label bias in this paper, including fo-
cus bias andrepresentation bias . Specifically, to initialize
the segmentation model, the previous methods [17, 28, 32]
adopt the Naive approach proposed in [28] (referred to as
Naive), which computes the loss at the annotated frames
for training. However, utilizing only the annotated frames
leads to focus bias , where the initialized model tends to fo-
cus on frames distributed over segments of various action
categories similar to the annotated frames [51] (shown in
Fig. 1 (b)).
During refining the segmentation model, to utilize the
unlabeled frames, typical methods generate hard [17,28,51]
or soft weighted [32] pseudo-labels that are used to re-
fine the model like fully supervised methods. To gener-
ate pseudo-labels, the above methods depend on the times-
tamps that contain semantic or relative position informa-
tion. Despite the success of such refined models, these ap-
proaches suffer from representation bias that single-frame
fails to represent the entire action segment for complex ac-
tions with the large semantic variance of various frames (il-
lustrated in Fig. 1 (c)). Moreover, when refining the seg-
mentation model by these biased pseudo-labels, represen-
tation bias can be accumulated and expanded, resulting in
unsatisfactory performance that is still a gap compared to
fully supervised approaches.
In this paper, we propose a novel framework called
Debiasing-TSTAS (D-TSTAS) to reduce label bias, con-
taining masked timestamp predictions and center-oriented
timestamp expansion approaches. During the initialization
phase, we aim to alleviate the focus bias problem by prop-
agating timestamp supervision information to the contex-
tual frames of the timestamps. In contrast to previous ap-
proaches, we propose the Masked Timestamp Predictions
(MTP) approach that masks the input features of times-
tamps to remove the dependencies of the model on anno-
tated frames. In this way, the initialized model is forced
to reconstruct the annotated frames in corresponding out-
put features and then predict their action categories by con-
textual information of timestamps. Furthermore, to capture
the semantics of both timestamps and contextual frames, we
adopt the Naive after our MTP.
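A minimal sketch of the masked timestamp prediction loss is given below. It assumes per-frame features for a single video and a segmentation model that maps a (1, T, D) feature sequence to per-frame class logits; both the interface and the masking value are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn.functional as F

def mtp_loss(model, frame_feats, ts_idx, ts_labels, mask_token=0.0):
    """Masked Timestamp Predictions sketch.
      frame_feats: (T, D) per-frame features of one video
      ts_idx:      (K,)   indices of annotated timestamp frames
      ts_labels:   (K,)   action labels of those frames
    The features at the annotated timestamps are masked out, so the model must
    predict their classes from contextual frames alone.
    """
    masked = frame_feats.clone()
    masked[ts_idx] = mask_token                       # remove timestamp evidence
    logits = model(masked.unsqueeze(0)).squeeze(0)    # (T, num_classes), assumed output
    return F.cross_entropy(logits[ts_idx], ts_labels)
```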
While our initialized model can reduce focus bias by
capturing more contextual information, it does not guar-
antee that the generated pseudo-labels avoid represen-
tation bias during refining. Inspired by query expan-
sion, we propose a Center-oriented Timestamp Expansion
(CTE) approach for obtaining potential trustworthy unla-
beled frames, which we refer to as pseudo-timestamps,
to progressively expand the pseudo-timestamp groups that
contain more semantics than single annotated timestamps.
More specifically, it consists of three steps: 1) In the gen-
erating step, we generate pseudo-labels by current pseudo-
timestamp groups and the model output. 2) In the updating
step, we choose the semantically dissimilar center frames of
each segment in the pseudo-labels as pseudo-timestamps to
expand the pseudo-timestamp groups. 3) In the segmenting
step, the model is refined by pseudo-labels and our segmen-
tal confidence loss, which smooths the predicted probabil-
ities in each action segment and maintains high confidence
within pseudo-timestamp groups. The above steps are executed several times alternately to improve the model pre-
diction in a lazy manner instead of generating pseudo-labels
per epoch in previous methods [28, 51].
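The three-step CTE round can be sketched as follows, under strong simplifications: pseudo-labels are generated here by assigning every frame to the temporally nearest pseudo-timestamp group, and the new pseudo-timestamp of each segment is the frame least similar to the group's mean feature. The paper's actual generation and selection rules are more involved; this only illustrates the loop structure.

```python
import torch
import torch.nn.functional as F

def cte_update(frame_feats, groups):
    """One Center-oriented Timestamp Expansion round (sketch).
      frame_feats: (T, D) frame features
      groups:      list of per-action lists of pseudo-timestamp frame indices
    Returns per-frame segment assignments and the expanded groups.
    """
    # 1) Generating: assign every frame to the temporally nearest group center.
    centers = torch.tensor([sum(g) / len(g) for g in groups])
    t = torch.arange(frame_feats.shape[0]).float()
    seg_id = (t[:, None] - centers[None, :]).abs().argmin(dim=1)     # (T,)

    # 2) Updating: in each segment, add as a new pseudo-timestamp the frame
    #    that is least similar to the current group's mean feature.
    for k, g in enumerate(groups):
        seg = (seg_id == k).nonzero(as_tuple=True)[0]
        if len(seg) == 0:
            continue
        group_feat = frame_feats[g].mean(dim=0, keepdim=True)
        sim = F.cosine_similarity(frame_feats[seg], group_feat)
        g.append(int(seg[sim.argmin()]))

    # 3) Segmenting: the caller refines the model with these pseudo-labels
    #    and the segmental confidence loss.
    return seg_id, groups
```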
Our contributions can be summarised as follows:
• We study the label bias in the TSTAS task and propose
a novel D-TSTAS framework to reduce both focus and
representation bias.
• Our masked timestamp predictions approach is the first
attempt to alleviate the dependencies on timestamps,
promoting the model to capture contextual informa-
tion. Coupling MTP with Naive serves as a general solution for initializing the model in TSTAS.
• Compared to sparsely annotated timestamps, our
center-oriented timestamp expansion approach pro-
gressively expands pseudo-timestamp groups to con-
tain semantic-rich motion representations of action
segments.
• The proposed D-TSTAS not only outperforms state-of-
the-art TSTAS approaches but also achieves competi-
tive results compared with fully supervised approaches
on three benchmark datasets.
|
Kong_Efficient_Frequency_Domain-Based_Transformers_for_High-Quality_Image_Deblurring_CVPR_2023
|
Abstract
We present an effective and efficient method that explores
the properties of Transformers in the frequency domain for
high-quality image deblurring. Our method is motivated
by the convolution theorem that the correlation or convo-
lution of two signals in the spatial domain is equivalent to
an element-wise product of them in the frequency domain.
This inspires us to develop an efficient frequency domain-
based self-attention solver (FSAS) to estimate the scaled
dot-product attention by an element-wise product operation
instead of the matrix multiplication in the spatial domain. In
addition, we note that simply using the naive feed-forward
network (FFN) in Transformers does not generate good de-
blurred results. To overcome this problem, we propose a
simple yet effective discriminative frequency domain-based
FFN (DFFN), where we introduce a gated mechanism in
the FFN based on the Joint Photographic Experts Group
(JPEG) compression algorithm to discriminatively determine
which low- and high-frequency information of the features
should be preserved for latent clear image restoration. We
formulate the proposed FSAS and DFFN into an asymmetri-
cal network based on an encoder and decoder architecture,
where the FSAS is only used in the decoder module for better
image deblurring. Experimental results show that the pro-
posed method performs favorably against the state-of-the-art
approaches.
|
1. Introduction
Image deblurring aims to restore high-quality images
from blurred ones. This problem has achieved significant
progress due to the development of various effective deep
models with large-scale training datasets.
Most state-of-the-art methods for image deblurring are
mainly based on deep convolutional neural networks (CNNs).
The main success of these methods is due to developing kind-
s of network architectural designs, for example, the multi-
Figure 1. Comparisons of the proposed method and state-of-the-art ones (DeepRFT+, Stripformer, Restormer, MIMO-Unet+, MAXIM-3S, and IPT) on the GoPro dataset [16] in terms of accuracy, floating point operations (FLOPs), and network parameters. The circle size indicates the number of network parameters.
scale [4,16,22] or multi-stage [31,32] network architectures,
generative adversarial learning [11, 12], physics model in-
spired network structures [6 –8, 17, 33], and so on. As the
basic operation in these networks, the convolution opera-
tion is a spatially-invariant local operation, which does not
model the spatially variant properties of the image contents.
Most of them use larger and deeper models to remedy the
limitation of the convolution. However, simply increasing
the capacity of deep models does not always lead to better
performance as shown in [17, 33].
Different from the convolution operation that models the
local connectivity, Transformers are able to model the global
contexts by computing the correlations of one token to all
other tokens. They have been shown to be an effective ap-
proach in lots of high-level vision tasks and also have great
potential to be the alternatives of deep CNN models. In im-
age deblurring, the methods based on Transformers [27, 30]
also achieve better performance than the CNN-based meth-
ods. However, the computation of the scaled dot-product
attention in Transformers leads to quadratic space and time
complexity in terms of the number of tokens. Although
using smaller and fewer tokens can reduce the space and
time complexity, such strategy cannot model the long-range
information of features well and usually leads to significant
artifacts when handling high-resolution images, which thus
limits the performance improvement.
To alleviate this problem, most approaches use the down-
sampling strategy to reduce the spatial resolution of fea-
tures [26]. However, reducing the spatial resolution of fea-
tures will cause information loss and thus affect the image
deblurring. Several methods reduce the computational cost
by computing the scaled dot-product attention in terms of
the number of features [29, 30]. Although the computational
cost is reduced, the spatial information is well not explored,
which may affect the deblurring performance.
In this paper, we develop an effective and efficient method
that explores the properties of Transformers for high-quality
image deblurring. We note that the scaled dot-product at-
tention computation is actually to estimate the correlation
of one token from the query and all the tokens from the key.
This process can be achieved by a convolution operation
when rearranging the permutations of tokens. Based on this
observation and the convolution theorem that the convolu-
tion in the spatial domain equals a point-wise multiplication
in the frequency domain, we develop an efficient frequen-
cy domain-based self-attention solver (FSAS) to estimate
the scaled dot-product attention by an element-wise product
operation instead of the matrix multiplication. Therefore,
the space and time complexity can be reduced to O(N)
O(NlogN)for each feature channel, where Nis the num-
ber of the pixels.
In addition, we note that simply using the feed-forward
network (FFN) by [30] does not generate good deblurred
results. To generate better features for latent clear image
restoration, we develop a simple yet effective discrimina-
tive frequency domain-based FFN (DFFN). Our DFFN is
motivated by the Joint Photographic Experts Group (JPEG)
compression algorithm. It introduces a gated mechanism
in the FFN to discriminatively determine which low- and
high-frequency information should be preserved for latent
clear image restoration.
We formulate the proposed FSAS and DFFN into an end-
to-end trainable network based on an encoder and decoder
architecture to solve image deblurring. However, we find
that as features of shallow layers usually contain blur effects,
applying the scaled dot-product attention to shallow features
does not effectively explore global clear contents. As the
features from deep layers are usually clearer than those from
shallow layers, we develop an asymmetric network architec-
ture, where the FSAS is only used in the decoder module
for better image deblurring. Our analysis shows that exploring the properties of Transformers in the frequency domain facilitates blur removal. Experimental results demonstrate that the proposed method generates favorable results against state-of-the-art methods in terms of accuracy and efficiency (Figure 1).
The main contributions are summarized as follows:
• We develop an efficient frequency domain-based self-attention solver to estimate the scaled dot-product attention. Our analysis demonstrates that using the frequency domain-based solver reduces the space and time complexity and is much more effective and efficient.
• We propose a simple yet effective discriminative frequency domain-based FFN based on the JPEG compression algorithm to discriminatively determine which low- and high-frequency information should be preserved for latent clear image restoration.
• We develop an asymmetric network architecture based on an encoder and decoder network, where the frequency domain-based self-attention solver is only used in the decoder module for better image deblurring.
• We show through analysis that exploring the properties of Transformers in the frequency domain facilitates blur removal, and that our approach performs favorably against state-of-the-art methods.
|
Liu_Hierarchical_Prompt_Learning_for_Multi-Task_Learning_CVPR_2023
|
Abstract
Vision-language models (VLMs) can effectively transfer
to various vision tasks via prompt learning. Real-world sce-
narios often require adapting a model to multiple similar yet
distinct tasks. Existing methods focus on learning a specific
prompt for each task, limiting the ability to exploit poten-
tially shared information from other tasks. Naively training
a task-shared prompt using a combination of all tasks ig-
nores fine-grained task correlations. Significant discrepan-
cies across tasks could cause negative transferring. Consid-
ering this, we present Hierarchical Prompt (HiPro) learn-
ing, a simple and effective method for jointly adapting a
pre-trained VLM to multiple downstream tasks. Our method
quantifies inter-task affinity and subsequently constructs a
hierarchical task tree. Task-shared prompts learned by in-
ternal nodes explore the information within the correspond-
ing task group, while task-individual prompts learned by
leaf nodes obtain fine-grained information targeted at each
task. The combination of hierarchical prompts provides
high-quality content of different granularity. We evaluate
HiPro on four multi-task learning datasets. The results
demonstrate the effectiveness of our method.
|
1. Introduction
Vision-language pre-training [23, 34, 49, 71, 74] has re-
cently shown great potential to leverage human language
for addressing a wide range of downstream recognition
tasks. Vision-language models (VLMs), e.g., CLIP [49]
and ALIGN [23], align embeddings of images and texts
from massive web data, encouraging the matching image-
text pair to be similar and pushing away the unmatched
pair [6, 19]. During inference, the task-relevant content in
text modality can be provided to query the latent knowledge
of the pre-trained VLMs for facilitating visual recognition.
Figure 1. Task-shared prompt vs. task-individual prompt on multi-task learning. We visualize (a) the train loss and (b) the test error surface [15] for classifier weights (w_rand, w_ind, and w_shr), which are synthesized from the random-initialization prompt, the task-individual prompt, and the task-shared prompt, respectively, on one of the target tasks (i.e., the Art task of the Office-Home dataset [65]). The task-individual prompt is trained only on this task. The task-shared prompt is trained on the combination of all tasks. The average weights ( w_avg = (w_shr + w_ind)/2 ) can perform well on test samples. More details are given in the supplementary materials.
The provided task-relevant texts, often constructed by
theprompt template and category words, can significantly
influence the recognition performance. Prompt engineer-
ing[23, 49], i.e., manually designing prompts, is a straight-
forward way to obtain meaningful prompts for adapting
VLMs. However, it inevitably introduces artificial bias and
relies on time-consuming attempts [49]. Recent advances
on prompt learning [79,80] show an alternative way, which
Figure 2. The benefits of prompt learning with multiple tasks.
Describable Texture (DTD): task-individual prompt "a photo of a {class} texture." (Acc 40.54%); task-shared prompt "a photo with repeating {class} pattern." (Acc 39.72%); individual + shared prompts (Acc 42.73%).
Satellite Images (EuroSAT): task-individual prompt "a centered satellite photo of a {class}." (Acc 36.68%); task-shared prompt "a photo with repeating {class} pattern." (Acc 36.75%); individual + shared prompts (Acc 39.57%).
Note that the DTD [9] and EuroSAT [45] datasets employ the same task-shared prompt. Task-individual and task-shared prompts can represent different contents of recognition tasks. Ensembling their zero-shot classifiers can improve performance.
aims to learn the appropriate soft prompt in a data-driven
manner on each downstream task. With few training data,
prompt learning has shown considerable improvement com-
pared with the hand-crafted prompt.
Despite substantial progress, existing approaches [79–
81] still focus on adapting VLMs to the individual task.
However, challenges in realistic situations demand adapting
a model to several similar but different tasks, also known
as the problem of multi-task learning [20, 75]. More im-
portantly, current methods learn the specific prompt corre-
sponding to each task, which can not leverage information
in other tasks to benefit individual tasks. Actually, the trans-
ferred prompt can be reused for similar tasks. For exam-
ple, “a photo of a {class}.” is a general prompt for most
recognition tasks. Specifically, as shown in Figure 2, for
two distinct tasks, i.e., texture images and satellite images,
a well-designed prompt can leverage the potential connec-
tions across them.
This paper explores how to simultaneously adapt a pre-
trained VLM to multiple target tasks through prompting.
A straightforward way is to learn the same prompt for all
tasks. However, this naïve approach ignores the charac-
teristics of each task and fails to achieve the optimum on
each task. Nevertheless, we found that the task-shared
prompt can significantly complement the prompt designed
(or learned) individually for each task . As shown in Fig-
ure 2, the task-individual (hand-crafted) prompt captures
the fine-grained content of each task. The task-shared
(hand-crafted) prompt represents the general content across
tasks. The combination of task-shared and task-individual
prompts can embrace both general and fine-grained content
to enhance recognition.
Another perspective is provided for an in-depth explana-
tion. In Figure 1a, we see that, the classifier weights synthe-
sized from the task-individual prompt ( trained on the indi-
vidual target task) have lower training loss than the weights
from the task-shared prompt ( trained on the combination
of all tasks). However, the performance of task-individual
prompt on the test set is poor (Figure 1b), which implies
that the task-individual prompt has the risk of over-fitting. Meanwhile, the task-shared prompt, generalizing on various
tasks, can be considered as a regularization to avoid over-
fitting. Averaging weights from the task-shared prompt and
the task-individual prompt can improve the performance on
test data (Figure 1b).
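The averaging observation behind Figure 1 can be written down in a few lines. Here `text_encoder` and `tokenizer` stand in for a CLIP-like text tower, and the exact call signatures are assumptions; the point is only that the two zero-shot classifiers are synthesized from different prompts and then averaged.

```python
import torch

def averaged_zero_shot_classifier(text_encoder, tokenizer, classnames,
                                  shared_template, individual_template):
    """Build zero-shot classifier weights from a task-shared and a
    task-individual prompt and average them (sketch)."""
    def weights(template):
        texts = [template.format(c) for c in classnames]
        emb = text_encoder(tokenizer(texts))          # (num_classes, D), assumed API
        return emb / emb.norm(dim=-1, keepdim=True)   # L2-normalized class embeddings

    w_shr = weights(shared_template)
    w_ind = weights(individual_template)
    return 0.5 * (w_shr + w_ind)                      # w_avg in Figure 1
```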
Although similar tasks can facilitate each other by shar-
ing knowledge, we can notassume all the offered tasks
can benefit from training together. Significant discrepancies
across tasks could lead to poor performance, also known as
negative-transfer [73]. On the other hand, even for the ideal
case, i.e., there exists the same beneficial prompt across all
tasks, only learning the global (coarse-grained) task-shared
prompt neglects the information transferred within some
fine-grained task groups.
To address this problem, we present Hierarchical Prompt
(HiPro) learning to capture multi-grained shared informa-
tion while mitigating negative transfer between dissimilar
tasks. Our HiPro constructs a hierarchical task tree by
agglomerative hierarchical clustering based on inter-task
affinity. Specifically, the internal node of the tree repre-
sents a task group containing a cluster of similar tasks (at
descendant leaves). Meanwhile, dissimilar tasks would be
divided into different sub-trees, mitigating conflict. For
each node, HiPro learns a corresponding prompt to cap-
ture the general information of the fine-grained task group.
Our HiPro learns not only task-individual prompts (for leaf
nodes) but also multi-grained task-share prompts (for non-
leaf nodes). For inference, HiPro combines various weights
generated from learned prompts, leveraging the information
in all tasks to improve the performance of the individual
task.
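A minimal sketch of the tree-construction step using SciPy's agglomerative clustering is given below. The inter-task affinity measure and the prompt-learning loop at each node are left abstract, and the affinity-to-distance conversion is an illustrative choice rather than the paper's definition.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

def build_task_tree(affinity):
    """Construct a hierarchical task tree from an (N, N) inter-task affinity
    matrix (higher = more related). Each internal node of the tree corresponds
    to a task group that gets its own shared prompt; leaves keep their
    task-individual prompts."""
    dist = affinity.max() - affinity                 # turn affinity into a distance
    iu = np.triu_indices_from(dist, k=1)
    Z = linkage(dist[iu], method="average")          # condensed distances -> linkage
    return to_tree(Z)                                # root of the binary task tree

# Usage sketch: traverse the tree, learn one prompt per node, and at test time
# combine the classifiers generated by all prompts on the root-to-leaf path.
```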
Comprehensive experiments are constructed to validate
the effectiveness of our method. HiPro works well on a
large-scale multi-task learning benchmark consisting of di-
verse visual recognition tasks. Compared with the existing
prompt learning methods [40,79,80], HiPro has a significant
improvement demonstrating the benefit of learning prompts
with multiple tasks. Additional visualizations are also pro-
vided for analysis.
|
Litrico_Guiding_Pseudo-Labels_With_Uncertainty_Estimation_for_Source-Free_Unsupervised_Domain_Adaptation_CVPR_2023
|
Abstract
Standard Unsupervised Domain Adaptation (UDA) meth-
ods assume the availability of both source and target data
during the adaptation. In this work, we investigate Source-
free Unsupervised Domain Adaptation (SF-UDA), a specific
case of UDA where a model is adapted to a target domain
without access to source data. We propose a novel approach
for the SF-UDA setting based on a loss reweighting strategy
that brings robustness against the noise that inevitably af-
fects the pseudo-labels. The classification loss is reweighted
based on the reliability of the pseudo-labels that is measured
by estimating their uncertainty. Guided by such reweight-
ing strategy, the pseudo-labels are progressively refined by
aggregating knowledge from neighbouring samples. Further-
more, a self-supervised contrastive framework is leveraged
as a target space regulariser to enhance such knowledge
aggregation. A novel negative pairs exclusion strategy is pro-
posed to identify and exclude negative pairs made of samples
sharing the same class, even in presence of some noise in the
pseudo-labels. Our method outperforms previous methods
on three major benchmarks by a large margin. We set the new
SF-UDA state-of-the-art on VisDA-C and DomainNet with
a performance gain of +1.8% on both benchmarks and on
PACS with +12.3% in the single-source setting and +6.6%
in multi-target adaptation. Additional analyses demonstrate
that the proposed approach is robust to the noise, which re-
sults in significantly more accurate pseudo-labels compared
to state-of-the-art approaches.
|
1. Introduction
Deep learning methods achieve remarkable performance
in visual tasks when the training and test sets share a similar
distribution, while their generalisation ability on unseen data
decreases in presence of the so called domain shift [18, 48].
Moreover, DNNs require a huge amount of labelled data to
be trained on a new domain entailing a considerable cost
for collecting and labelling the data. Unsupervised DomainAdaptation (UDA) approaches aim to transfer the knowledge
learned on a labelled source domain to an unseen target
domain without requiring any target label [2, 13, 22, 57].
Common UDA techniques have the drawback of requiring
access to source data while they are adapting the model
to the target domain. This may not always be possible in
many applications, i.e. when data privacy or transmission
bandwidth become critical issues. In this work, we focus
on the setting of Source-free adaptation (SF-UDA) [37, 55,
61, 66], where source data is no longer accessible during
adaptation, but only unlabelled target data is available. As
a result, standard UDA methods cannot be applied in the
SF-UDA setting, since they require data from both domains.
Recently, several SF-UDA methods have been proposed,
focusing on generative models [36,43], class prototypes [37],
entropy-minimisation [37, 61], self-training [37] and auxil-
iary self-supervised tasks [55]. Yet, generative models re-
quire a large computational effort to generate images/features
in the target style [36], entropy-minimisation methods of-
ten lead to posterior collapse [37] and the performance of
self-training solutions [37] suffer from noisy pseudo-labels.
Self supervision and pseudo-labelling have also been intro-
duced as joint SF-UDA strategies [55, 60], raising the issue
of choosing a suitable pretext task and refining pseudo-labels.
For instance, Chen et al. [5] propose to refine predictions
within a self-supervised strategy. Yet, their work does not
take into account the noise inherent in pseudo-labels, which
leads to equally weighting all samples without explicitly ac-
counting for their uncertainty. This may cause pseudo-labels
with high uncertainty to still contributing in the classification
loss, resulting in detrimental noise overfitting and thus poor
adaptation.
In this work, we propose a novel Source-free adaptation
approach that builds upon an initial pseudo-labels assign-
ment (for all target samples) performed by using the pre-
trained source model, always assumed to be available. To
obtain robustness against the noise that inevitably affects
such pseudo-labels, we propose to reweight the classification
loss based on the reliability of the pseudo-labels. We mea-
sure it by estimating pseudo-labels uncertainty, after they
Figure 1. (a) We obtain the refined pseudo-label ŷ_t (green circle with black outline) for the current sample by looking at pseudo-labels of neighbour samples. (b) Predictions from neighbours are used to estimate the uncertainty of ŷ_t by computing the weight w through the entropy Ĥ. (c) A temporal queue Q_e storing predictions at the past T epochs, i.e., {e−T, ..., e−1}, is used within the contrastive framework to exclude pairs of samples sharing the same class from the list of negative pairs (query, key).
have been refined by knowledge aggregation from neigh-
bours sample, as shown in Figure 1 (a). The introduced
loss reweighting strategy penalises pseudo-labels with high
uncertainty to guide the learning through reliable pseudo-
labels. Differently from previous reweighting strategies, we
reweight the loss by estimating the uncertainty of the refined
pseudo-labels by simply analysing neighbours’ predictions,
as shown in Figure 1 (b).
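A compact PyTorch sketch of the refinement and re-weighting steps follows. The nearest-neighbour retrieval over a stored set of target features and the entropy-based weight normalization are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def refine_and_weight(probs, feats, bank_feats, bank_probs, k=10):
    """Refine pseudo-labels from neighbours and weight the loss by certainty.
      probs:      (B, C) current soft predictions of the batch
      feats:      (B, D) L2-normalized batch features
      bank_feats: (M, D) L2-normalized features of stored target samples
      bank_probs: (M, C) their soft predictions
    """
    sim = feats @ bank_feats.t()                      # (B, M) cosine similarities
    idx = sim.topk(k, dim=1).indices                  # k nearest neighbours per sample
    refined = bank_probs[idx].mean(dim=1)             # (B, C) aggregated soft pseudo-label
    pseudo = refined.argmax(dim=1)

    # Reliability weight: 1 minus the normalized entropy of the refined label.
    H = -(refined * refined.clamp_min(1e-8).log()).sum(dim=1)
    w = 1.0 - H / torch.log(torch.tensor(float(refined.shape[1])))

    log_p = probs.clamp_min(1e-8).log()
    loss = (w * F.nll_loss(log_p, pseudo, reduction="none")).mean()
    return pseudo, w, loss
```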
The process of refining pseudo-labels necessarily requires
a regularised target feature space in which neighbourhoods
are composed of semantically similar samples, possibly shar-
ing the same class. With this objective, we exploit an aux-
iliary self-supervised contrastive framework. Unlike prior
works, we introduce a novel negative pairs exclusion strat-
egy that is robust to noisy pseudo-labels, by leveraging past
predictions stored in a temporal queue. This allows us to
identify and exclude negative pairs made of samples belong-
ing to the same class, even if their pseudo-labels are noisy,
as shown in Figure 1 (c).
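The exclusion rule can be sketched as a boolean mask over candidate negatives, as below; the shapes and the simple any-epoch voting rule over the temporal queue are assumptions made for illustration.

```python
import torch

def negative_pair_mask(query_classes, key_classes, queue_classes):
    """Keep a key as a negative for a query only if their classes disagree
    both under the current pseudo-labels and under the past predictions
    stored in the temporal queue.
      query_classes: (B,)    current pseudo-classes of the queries
      key_classes:   (M,)    current pseudo-classes of the candidate negatives
      queue_classes: (T, M)  classes predicted for the candidates at the last T epochs
    """
    same_now = query_classes[:, None] == key_classes[None, :]                     # (B, M)
    # Also exclude a candidate if any recent epoch assigned it the query's class.
    same_past = (queue_classes[None, :, :] == query_classes[:, None, None]).any(dim=1)
    return ~(same_now | same_past)        # True where the pair is a valid negative
```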
We benchmark our method on three major domain adap-
tation datasets outperforming the state-of-the-art by a large
margin. Specifically, on VisDA-C and DomainNet, we set
the new state-of-the-art with 90.0% and 69.6% accuracy,
with a gain of +1.8% in both cases, while on PACS we im-
prove the only existing SF-UDA baseline [1] by +12.3% and
+6.6% in single-source and multi-target settings. Ablation
studies demonstrate the effectiveness of individual compo-
nents of our pipeline in adapting the model from the source
to the target domain. We also show how our method is able
to progressively reduce the noise in the pseudo-labels, better than the state-of-the-art.
To summarise, the main contributions of this work are:
•We introduce a novel loss re-weighting strategy that
evaluates the reliability of refined pseudo-labels by es-
timating their uncertainty. This enables our method to
mitigate the impact of the noise in the pseudo-labels.
To the best of our knowledge, this is the first work
that estimates the reliability of pseudo-labels after their
refinement.
•We propose a novel negative pairs exclusion strategy
which is robust to noisy pseudo-labels, being able to
identify and exclude those negative pairs composed of
same-class samples.
•We validate our method on three benchmark datasets,
outperforming SOTA by a large margin, while also
proving the potential of the approach in progressively
refining the pseudo-labels.
The remainder of the paper is organized as follows. In
Section 2, we discuss related works in the literature. Sec-
tion 3 describes the proposed method. Section 4 illustrates
the experimental setup and reports the obtained results. Fi-
nally, conclusions are drawn in Section 5 together with a
discussion of limitations.
|
Karim_C-SFDA_A_Curriculum_Learning_Aided_Self-Training_Framework_for_Efficient_Source_CVPR_2023
|
Abstract
Unsupervised domain adaptation (UDA) approaches fo-
cus on adapting models trained on a labeled source do-
main to an unlabeled target domain. In contrast to UDA,
source-free domain adaptation (SFDA) is a more practi-
cal setup as access to source data is no longer required
during adaptation. Recent state-of-the-art (SOTA) methods
on SFDA mostly focus on pseudo-label refinement based
self-training which generally suffers from two issues: i) in-
evitable occurrence of noisy pseudo-labels that could lead
to early training time memorization, ii) refinement process
requires maintaining a memory bank which creates a signif-
icant burden in resource constraint scenarios. To address
these concerns, we propose C-SFDA, a curriculum learn-
ing aided self-training framework for SFDA that adapts effi-
ciently and reliably to changes across domains based on se-
lective pseudo-labeling. Specifically, we employ a curricu-
lum learning scheme to promote learning from a restricted
amount of pseudo labels selected based on their reliabili-
ties. This simple yet effective step successfully prevents la-
bel noise propagation during different stages of adaptation
and eliminates the need for costly memory-bank based label
refinement. Our extensive experimental evaluations on both
image recognition and semantic segmentation tasks confirm
the effectiveness of our method. C-SFDA is also applica-
ble to online test-time domain adaptation and outperforms
previous SOTA methods in this task.
|
1. Introduction
Deep neural network (DNN) models have achieved re-
markable success in various visual recognition tasks [16,
22, 41, 44]. However, even very large DNN models of-
ten suffer significant performance degradation when there is a distribution or domain shift [54, 78] between training (source) and test (target) domains.
*Most of this work was done during Nazmul Karim's internship with SRI International. Project Page: https://sites.google.com/view/csfdacvpr2023/home
To address the prob-
lem of domain shifts, various Unsupervised Domain Adap-
tation (UDA) [19, 31] algorithms have been developed over
recent years. Most UDA techniques require access to la-
beled source domain data during adaptation, which limits
their application in many real-world scenarios, e.g. source
data is private, or adaptation in edge devices with lim-
ited computational capacity. In this regard, source-free do-
main adaptation setting has recently gained significant in-
terest [34, 35, 84], which considers the availability of only
source pre-trained model and unlabeled target domain data.
Recent state-of-the-art SFDA methods (e.g., SHOT [42],
NRC [83], G-SFDA [85], AdaContrast [4]) mostly rely on
the self-training mechanism that is guided by the source pre-
trained model generated pseudo-labels (PLs). Label refine-
ment using the knowledge of per-class cluster structure in
feature space is recurrently used in these methods. At early
stages of adaptation, the label information formulated based
on cluster structure can be severely misleading or noisy;
shown in Fig. 1. As the adaptation progresses, this label
noise can negatively impact the subsequent cluster struc-
ture as the key to learning meaningful clusters hinges on
the quality of pseudo-labels itself. Therefore, the inevitable
presence of label noise at early training time is a critical is-
sue in SFDA and requires proper attention. Furthermore,
distributing cluster knowledge among neighbor samples re-
quires a memory bank [4, 42] which creates a significant
burden in resource-constrained scenarios. In addition, most
memory bank dependent SFDA techniques are not suitable
for online test-time domain adaptation [73,76]; an emerging
area of UDA that has gained traction in recent times. De-
signing a memory-bank-free SFDA approach that can guide
the self-training with highly precise pseudo-labels is a very
challenging task and a major focus of this work.
In our work, we focus on increasing the reliability
of generated pseudo-labels without using a memory-bank
and clustering-based pseudo-label refinement.
Figure 1. Left: In SFDA, we only have a source model that needs to be adapted to the target data. Among the source-generated pseudo-labels, a large portion is noisy, which is important to avoid during supervised self-training (SST) with regular cross-entropy loss. Instead of using all pseudo-labels, we choose the most reliable ones and effectively propagate high-quality label information to unreliable samples. As the training progresses, the proposed selection strategy tends to choose more samples for SST due to the improved average reliability of pseudo-labels. Such a restricted self-training strategy creates a model with better discrimination ability and eventually corrects the noisy predictions. Here, T is the total number of iterations. Right: While existing SFDA techniques leverage cluster-structure knowledge in the feature space, there may exist many misleading neighbors, i.e., neighbors whose pseudo-labels differ from the anchor's true label. Therefore, clustering-based label propagation inevitably suffers from label noise in subsequent training.
Our analysis shows that avoiding early training-time memorization
(ETM) of noisy labels encourages noise-free learning in
subsequent stages of adaptation. We further analyze that
even with an expensive label refinement technique in place,
learning equally from all labels eventually leads to label-
noise memorization. Therefore, we employ a curriculum
learning-aided self-training framework, C-SFDA, that pri-
oritizes learning from easy-to-learn samples first and hard
samples later on. We show that one can effectively iden-
tify the group of easy samples by utilizing the reliabil-
ity of pseudo-labels, i.e. prediction confidence and uncer-
tainty. We then follow a carefully designed curriculum
learning pipeline to learn from highly reliable (easy) sam-
ples first and gradually propagate more refined label infor-
mation among less reliable (hard) samples later on. In ad-
dition to the self-training, we facilitate unsupervised con-
trastive representation learning that helps us prevent the
early training-time memorization phenomenon. Our main
contributions are summarized as follows:
• We introduce a novel SFDA technique that focuses
on noise-free self-training exploiting the reliability of
generated pseudo-labels. With the help of curriculum
learning, we aim to prevent early training time memo-
rization of noisy pseudo-labels and improve the quality
of subsequent self-training as shown in Fig. 1.
• By prioritizing the learning from highly reliable
pseudo-labels first, we aim to propagate refined and
accurate label information among less reliable sam-
ples. Such a selective self-training strategy elimi-
nates the requirement of a computationally costly and
memory-bank dependent label refinement framework.
• C-SFDA achieves state-of-the-art performance on ma-
jor benchmarks for image recognition and semantic
segmentation. Being highly memory-efficient, the pro-
posed method is also applicable to online test-time
adaptation settings and obtains SOTA performance.
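As a hedged sketch of the selective self-training described above, the snippet below keeps only pseudo-labels whose prediction confidence is high and whose uncertainty (estimated here, as an assumption, from the variance across augmented views) is low, and computes the supervised loss on that subset only. The thresholds, the uncertainty estimator, and the schedule for relaxing them across iterations are illustrative choices rather than the exact C-SFDA recipe.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_reliable(probs_augs, conf_thresh=0.9, unc_thresh=0.05):
    """probs_augs: (A, N, C) softmax outputs of A augmented views per sample.
    Returns hard pseudo-labels and a boolean mask of 'reliable' samples."""
    mean_probs = probs_augs.mean(dim=0)                # (N, C)
    conf, pseudo = mean_probs.max(dim=1)               # confidence and hard label
    unc = probs_augs.var(dim=0).mean(dim=1)            # per-sample predictive variance
    mask = (conf > conf_thresh) & (unc < unc_thresh)
    return pseudo, mask

def curriculum_step(model, x_augs, conf_thresh, unc_thresh):
    """One adaptation step: self-train only on reliable pseudo-labels."""
    with torch.no_grad():
        probs_augs = torch.stack([F.softmax(model(x), dim=1) for x in x_augs])  # (A, N, C)
    pseudo, mask = select_reliable(probs_augs, conf_thresh, unc_thresh)
    if mask.sum() == 0:
        return torch.zeros((), requires_grad=True)     # nothing reliable yet at this stage
    logits = model(x_augs[0])                          # fresh forward pass with gradients
    return F.cross_entropy(logits[mask], pseudo[mask])
```

In a curriculum, the thresholds would be gradually relaxed so that more (initially "hard") samples enter the supervised loss as pseudo-label reliability improves.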
|
Li_Learning_Generative_Structure_Prior_for_Blind_Text_Image_Super-Resolution_CVPR_2023
|
Abstract
Blind text image super-resolution (SR) is challenging as
one needs to cope with diverse font styles and unknown
degradation. To address the problem, existing methods
perform character recognition in parallel to regularize the
SR task, either through a loss constraint or intermediate
feature condition. Nonetheless, the high-level prior could
still fail when encountering severe degradation. The prob-
lem is further compounded given characters of complex
structures, e.g., Chinese characters that combine multiple
pictographic or ideographic symbols into a single charac-
ter. In this work, we present a novel prior that focuses
more on the character structure. In particular, we learn
to encapsulate rich and diverse structures in a StyleGAN
and exploit such generative structure priors for restora-
tion. To restrict the generative space of StyleGAN so that
it obeys the structure of characters yet remains flexible in
handling different font styles, we store the discrete fea-
tures for each character in a codebook. The code subse-
quently drives the StyleGAN to generate high-resolution
structural details to aid text SR. Compared to priors based
on character recognition, the proposed structure prior ex-
erts stronger character-specific guidance to restore faithful
and precise strokes of a designated character. Extensive
experiments on synthetic and real datasets demonstrate the
compelling performance of the proposed generative structure
prior in facilitating robust text SR. Our code is available at
https://github.com/csxmli2016/MARCONet .
|
1. Introduction
Blind text super-resolution (SR) aims at restoring a high-
resolution (HR) image from a low-resolution (LR) text image
that is corrupted by unknown degradation. The problem is
relatively less explored than SR of natural and face images.
This seemingly easy task actually possesses many unique
challenges. In particular, each character owns its unique
structure. A suboptimal restoration that destroys the struc-
ture with, e.g., distorted, missing or additional strokes, is
easily perceptible and may alter the meaning of the original
word. The task becomes more difficult when dealing with
writing systems such as Chinese, Kanji and Hanja characters,
in which the patterns of thousands of characters vary in com-
plex ways and are mostly composed of different subunits (or
compound ideographs). A slight deviation in strokes, say
‘太’ and ‘大’ carries entirely different meanings. The prob-
lem becomes more intricate given the different typefaces.
In addition, complex and unknown degradation make the
data-driven convolutional neural networks (CNNs) incapable
of generalizing well to real-world degraded observations.
To ameliorate the difficulties in blind text SR, espe-
cially in retaining the unique structure of each character,
recent studies design new loss functions to constrain SR re-
sults semantically close to the high-resolution (HR) ground-
truth [7,8,45,50,79], or incorporate the recognition prior into
the intermediate features [40,41,44]. Figure 1 shows several
representative results of these two types of methods, i.e., TB-
SRN [7] that applies content-aware constraint and TATT [41]
that embeds the recognition prior into the SR process. The
results are not satisfactory even though the models were retrained
with data synthesized with sophisticated degradation models,
i.e., BSRGAN [75] and Real-ESRGAN [66]. TBSRN and
TATT perform well when the LR character has fewer strokes.
However, they both fail to generate the desired structures
when the character is composed of complex strokes, e.g.,
the 4, 6, 8, 11-th characters in (a). The results suggest that
high-level recognition prior cannot well benefit the SR task
in restoring faithful and accurate structures.
When dealing with complex characters, we can identify
some commonly used strokes as constituents of characters or
their subparts. These obvious structural regularities have not
been studied thoroughly in the literature. In particular, most
existing text SR studies [7, 40, 41, 45, 50, 79] focus on scene
text images that are presented in different view perspectives
or arranged in irregular patterns. In contrast, our goal is
to explore structural regularities specific to complex char-
acters and investigate an effective way to exploit the prior.
The investigation has practical values in many applications,
including the restoration of old printed media ( e.g., newspa-
pers or books) for digitalization and preservation, and the
enhancement of captions in digital media.
The key idea of our solution is to mine structure prior
from a generative network, and use the prior to facilitate
blind SR of text images. To capture the structure prior of
characters, we train a StyleGAN to generate characters of
different styles. As our intent is not to generate infinite
and non-repetitive results, we restrict the generative space of
StyleGAN to obey the structure of characters by constraining
the generation with a codebook. Specifically, the codebook
stores the discrete code of each character, and each code
serves as a constant to StyleGAN for generating a specific
high-resolution character. The font style is governed by the
W latent space of StyleGAN. Once the character StyleGAN
is trained, we can extract its intermediate features serving
as the generative structure prior, which encapsulates the
intrinsic structure and style of the LR character.
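To make the codebook idea above more concrete, the sketch below replaces the single learned constant of a StyleGAN-like generator with a per-character code drawn from a codebook, while a W-space style vector crudely modulates the synthesis blocks. The number of characters, the dimensions, and the stub synthesis blocks are assumptions for illustration; this is not the actual MARCONet generator.

```python
import torch
import torch.nn as nn

class CodebookConditionedGenerator(nn.Module):
    """StyleGAN-style generator whose constant input is a per-character code (a sketch)."""
    def __init__(self, num_chars=6763, code_dim=512):
        super().__init__()
        # One learnable code per character replaces StyleGAN's single constant.
        self.codebook = nn.Embedding(num_chars, code_dim)
        self.to_const = nn.Linear(code_dim, code_dim * 4 * 4)    # reshape into a 4x4 feature map
        # Placeholder synthesis blocks; real StyleGAN blocks would also upsample.
        self.synthesis = nn.ModuleList(
            [nn.Conv2d(code_dim, code_dim, 3, padding=1) for _ in range(3)])

    def forward(self, char_index, w_style):
        code = self.codebook(char_index)                         # (B, code_dim)
        x = self.to_const(code).view(-1, 512, 4, 4)              # character-specific constant
        scale = 1.0 + w_style.view(-1, 512, 1, 1)                # crude stand-in for style modulation
        feats = []
        for block in self.synthesis:
            x = torch.relu(block(x) * scale)
            feats.append(x)                                      # intermediate features = structure prior
        return x, feats
```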
Using the prior for text SR can be conveniently achieved
through an additional Transformer-based encoder and a text
SR network that accepts prior. In particular, given a text
image that constitutes multiple characters ( e.g., Figure 1),
we use a Transformer-based encoder to predict the font style,
character bounding boxes, and their respective indexes in the
codebook jointly. The codebook indexes drive the structure
prior generation for each character. In the text SR network,
each LR character is super-resolved, guided by the respective
priors that are aligned using their bounding boxes.
Experiments demonstrate that our design can generate
robust and consistent structure priors for diverse and severely
degraded text input. With the embedded structure prior, our
approach performs superior against state-of-the-art methods
in generating photo-realistic text results and shows excellent generalization to real-world LR text images. While our study mainly employs Chinese characters as examples, the proposed method can be readily extended to characters of other languages. We call our approach MARCONet. The
main contributions are summarized as follows:
•We show that blind SR task, especially for characters
with complex structures, can be restored by using their
structure prior encapsulated in a generative network.
•We present a viable way of learning such generative
structure prior through reformulating a StyleGAN by
replacing its single constant with discrete codes that
represent different characters.
•To retrieve the prior accurately, we propose a
Transformer-based encoder to jointly predict the font
styles, character bounding boxes and their indexes in
codebook from the LR input.
|
Li_AMT_All-Pairs_Multi-Field_Transforms_for_Efficient_Frame_Interpolation_CVPR_2023
|
Abstract
We present All-Pairs Multi-Field Transforms (AMT), a new network architecture for video frame interpolation. It is based on two essential designs. First, we build bidirectional correlation volumes for all pairs of pixels, and use the predicted bilateral flows to retrieve correlations for updating both flows and the interpolated content feature. Second, we derive multiple groups of fine-grained flow fields from one pair of updated coarse flows for performing backward warping on the input frames separately. Combining these two designs enables us to generate promising task-oriented flows and reduce the difficulties in modeling large motions and handling occluded areas during frame interpolation. These qualities promote our model to achieve state-of-the-art performance on various benchmarks with high efficiency. Moreover, our convolution-based model competes favorably with Transformer-based models in terms of accuracy and efficiency. Our code is available at https://github.com/MCG-NKU/AMT.
|
1. Introduction
Video frame interpolation (VFI) is a long-standing video
processing technology, aiming to increase the temporal res-
olution of the input video by synthesizing intermediate
frames from the reference ones. It has been applied to
various downstream tasks, including slow-motion generation [22, 61], novel view synthesis [11, 29, 71], video com-
pression [ 60], text-to-video generation [ 52], etc.
Recently, flow-based VFI methods [ 17,22,26,34,53,69]
have been predominant in referenced research due to their effectiveness. A common flow-based technique estimates bilateral/bidirectional flows from the given frames and then propagates pixels/features to the target time step via back-
ward [ 2,17,26] or forward [ 14,37,38] warping. Thus, the
quality of a synthesized frame relies heavily on flow esti-
mation results. In fact, it is cumbersome to approximate
intermediate flows through pretrained optical flow models,
and these flows are unqualified for VFI usage [14, 17].
*Equal contribution.
†C.L. Guo is the corresponding author.
Figure 1. Performance vs. number of parameters and FLOPs. The PSNR values are obtained from the Vimeo90K dataset [65]. We use a 720p frame pair to calculate FLOPs. Our AMT outperforms the state-of-the-art methods with higher efficiency.
A feasible way to alleviate this issue is to estimate task-
oriented flows in an end-to-end training manner [ 22,26,
32,65]. However, some major challenges, such as large
motions and occlusions, are still pending to be resolved.
These challenges mainly arise from the defective estimation
of optical flows. Thus, a straightforward question should be: Why do previous methods have difficulties in predict-
ing promising task-oriented flows when facing these chal-
lenges? Inspired by the recent studies [ 26,65] that demon-
strate task-oriented flow is generally consistent with ground
truth optical flow but diverse in local details , we attempt to
answer the above question from two perspectives:
(i) The flow fields predicted by existing VFI methods
are not consistent enough with the true displacements, es-
pecially when encountering large motions (see Fig. 2). Ex-
isting methods mostly adopt the UNet-like architecture [ 48]
with plain convolutions to build VFI models. However, this
type of architecture is vulnerable to accumulating errors at
early stages when modeling large motions [ 45,56,63,70].
As a result, the predicted flow fields are not accurate.
Figure 2. Qualitative comparisons of estimated flows and the interpolated frames. Our AMT guarantees the general consistency of intermediate flows and synthesizes fast-moving objects with occluded regions precisely, while the previous state-of-the-art IFRNet [26] fails to achieve them.
(ii) Existing methods predict one pair of flow fields, restricting the solution set in a tight space. This makes them struggle to handle occlusions and details around the motion boundaries, which consequently deteriorates the final results (see Fig. 2 and Fig. 5).
In this paper, we present a new network architecture, dubbed All-pairs Multi-field Transforms (AMT), for video frame interpolation. AMT explores two new designs to improve the fidelity and diversity of predicted flows regarding
the above two main shortcomings of previous works.
Our first design is based on all-pairs correlation in
RAFT [ 56], which adequately models the dense correspon-
dence between frames, especially for large motions. We
propose to build bidirectional correlation volumes instead
of a unidirectional one and introduce a scaled lookup strat-
egy to solve the coordinate mismatch issue caused by the
invisible frame. Besides, the retrieved correlations assist
our model in jointly updating bilateral flows and the inter-
polated content feature in a cross-scale manner. Thus, the
network guarantees the fidelity of flows across scales, lay-
ing the foundation for the following refinement.
Second, considering that predicting one pair of flow
fields is hard to cope with the occlusions, we propose to derive multiple groups of fine-grained flow fields from one pair of updated coarse bilateral flows. The input frames can be separately backward warped to the target time step by these flows. Such diverse flow fields provide adequate potential solutions for each pixel to be interpolated, particularly alleviating the ambiguity issue in the occluded areas.
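The bidirectional all-pairs correlation described above (the first design) can be illustrated with the short sketch below, which builds RAFT-style correlation volumes in both directions; the predicted bilateral flows would then index into these volumes. The scaling and shapes follow the common RAFT convention and are assumptions here, not AMT's exact lookup or scaling strategy.

```python
import torch

def all_pairs_correlation(feat0, feat1):
    """Build bidirectional all-pairs correlation volumes.
    feat0, feat1: (B, C, H, W) feature maps of frame 0 and frame 1.
    Returns corr_0to1 and corr_1to0, each of shape (B, H*W, H, W)."""
    B, C, H, W = feat0.shape
    f0 = feat0.view(B, C, H * W)                               # (B, C, N)
    f1 = feat1.view(B, C, H * W)
    # Dot product between every pixel pair, scaled as in RAFT.
    corr = torch.einsum("bcn,bcm->bnm", f0, f1) / C ** 0.5     # (B, N, N)
    corr_0to1 = corr.view(B, H * W, H, W)                      # per pixel of frame 0, a map over frame 1
    corr_1to0 = corr.transpose(1, 2).contiguous().view(B, H * W, H, W)
    return corr_0to1, corr_1to0
```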
We examine the proposed AMT on several public benchmarks with different model scales, showing strong performance and high efficiency in contrast to the state-of-the-art (SOTA) methods (see Fig. 1). Our small model outperforms IFRNet-B, a SOTA lightweight model, by +0.17 dB PSNR on Vimeo90K [65] with only 60% of its FLOPs and parameters. For the large-scale setting, our AMT exceeds the previous SOTA (i.e., IFRNet-L) by +0.15 dB PSNR on Vimeo90K [65] with 75% of its FLOPs and 65% of its parameters. Besides, we provide a huge model for comparison with the SOTA transformer-based method VFIFormer [34]. Our convolution-based AMT shows comparable performance but needs nearly 23× less computational cost compared to VFIFormer [34]. Considering its effectiveness, we hope our AMT could bring a new perspective to architecture design in efficient frame interpolation.
|
Liu_ML2P-Encoder_On_Exploration_of_Channel-Class_Correlation_for_Multi-Label_Zero-Shot_Learning_CVPR_2023
|
Abstract
Recent studies usually approach multi-label zero-
shot learning (MLZSL) with visual-semantic mapping on
spatial-class correlation, which can be computationally
costly, and worse still, fails to capture fine-grained class-
specific semantics. We observe that different channels may
usually have different sensitivities on classes, which can
correspond to specific semantics. Such an intrinsic channel-
class correlation suggests a potential alternative for the
more accurate and class-harmonious feature representa-
tions. In this paper, our interest is to fully explore the power
of channel-class correlation as the unique base for MLZSL.
Specifically, we propose a light yet efficient Multi-Label
Multi-Layer Perceptron-based Encoder , dubbed (ML)2P-
Encoder, to extract and preserve channel-wise semantics.
We reorganize the generated feature maps into several
groups, of which each of them can be trained independently
with (ML)2P-Encoder. On top of that, a global group-
wise attention module is further designed to build the multi-
label specific class relationships among different classes,
which eventually fulfills a novel Channel- ClassCorrelation
MLZSL framework (C3-MLZSL)1. Extensive experiments on
large-scale MLZSL benchmarks including NUS-WIDE and
Open-Images-V4 demonstrate the superiority of our model
against other representative state-of-the-art models.
|
1. Introduction
The proliferation of smart devices has greatly enriched
human life when it comes to the era of big data. These smart
devices are usually equipped with cameras such that users
can easily produce and share their images. With the increas-
ing abundance of public images, how to analyze them ac-
curately has become a challenging problem.
*Jingcai Guo is the corresponding author.
1 Released code: github.com/simonzmliu/cvpr23_mlzsl
Figure 1. Example of Channel-Class Correlation. Our method achieves the prediction of unseen classes by exploiting the unique distribution of channel responses as semantic information for the class and building correlations with responses from the same channel (zoom in for a better view).
Recent years
have witnessed great success in classifying an image into
a specific class [20, 37, 39], namely, single-label classifica-
tion. However, in reality, the images [17,46] usually contain
abundant information and thereby consist of multiple labels.
In recent years, the multi-label classification has been
widely investigated by exploring the relationship among
different labels from multiple aspects [9, 13, 14, 16, 42].
However, in some scenarios where extensive collections of
images exist, e.g., Flickr2, users can freely set one or more
individual tags/labels for each image, while the presented
objects and labels in these images may not be fully shown
in any previous collection, and thus result in a domain gap
for the recognition. Therefore, in real-world applications,
the model is required to gain the ability to predict unseen
classes as well.
2 https://www.flickr.com
As one of the thriving research topics, zero-
shot learning (ZSL) [1, 12, 15, 34] is designed to transfer
tasks from seen classes to unseen classes, and naturally
recognizes novel objects of unseen classes. Specifically,
ZSL has made continuous success in single-label classifica-
tion [19, 26, 31, 45, 48]. However, these methods can hardly
be extended to the multi-label scenario since exploring the
cross-class relationships in an image is non-trivial.
Recently, some works have focused on multi-label zero-
shot learning (MLZSL) tasks and obtained some promising
results [33, 36, 49]. Other works considered incorporating
attention mechanisms into their models, such as LESA [22]
andBiAM [35]. LESA [22] designed an attention-sharing
mechanism for different patches in the image so that each
patch can output the corresponding class. In another way,
BiAM [35] designed a bi-level attention to extract relations
from regional context and scene context, which can enrich
the regional features of the model and separate the features
of different classes.
Although previous works have made considerable
progress, their designed methods have been limited to the
processing of spatial-domain information. First of all, the
over-reliance on spatial-class correlation fails to capture
fine-grained class-specific semantics. In addition, the ad-
ditional processing of spatial information greatly increases
the computational cost of the model and limits the infer-
ence speed. Given the shortcomings of the above methods,
we found through analysis that the channel response can
be used as the semantic information of the class. Firstly,
the response of each class in the channel is unique, which
creates conditions for obtaining the unique semantics. Sec-
ondly, for classes with certain semantic associations, there
must be some channels that capture their common infor-
mation. Therefore, channel information, as an easily over-
looked part after feature extraction, can complete the task
of capturing multi-label information. In MLZSL, we can
complete the prediction of unseen classes by obtaining the
responses of seen classes in the channel domain, and the
relationship between seen and unseen classes. Finally, the
subsequent analysis of the channel response greatly saves
computational costs.
Specifically, as shown in Figure 1, as seen classes, “wa-
ter” and “tree” have unique response distributions on feature
channels, and these responses can be used as semantic infor-
mation for classification tasks. Besides, in order to explore
the correlation of classes, we found that although the se-
mantic information of “water” and “tree” is different, there
are still some channels that respond simultaneously (i.e. the
blue channel). We need to build this correlation during the
training process through modeling so that the model can
learn multi-label correlations. In the ZSL process, for the
unseen class “garden”, we know that it is related to “water”
(i.e. purple layer) and “tree” (i.e. green, orange, and gray
layer) by obtaining its semantic information and matching with seen classes. This observation suggests that channels
can help not only to classify objects but also to establish as-
sociations between classes. Previous methods which only
consider spatial information are unable to obtain this intrin-
sic channel-class correlation and dissimilarity, thus achiev-
ing sub-optimal performance on the MLZSL task.
To address the above challenges and construct a more ac-
curate and robust MLZSL system, we propose to group the
generated feature maps and process them in a group-wise
manner, thus enhancing the model by fully exploring the
channel-class correlations. Besides, by properly designing
a light yet efficient Multi-Label Multi-Layer Perceptron-
based Encoder, i.e., (ML)2P-Encoder, we can easily analyze
the local relationship between channels while significantly
reducing the computation overhead. Finally, these groups
are recombined and then perform the calculation of group
attention, indicating that the model is analyzed locally and
globally from the perspective of the channels, which can
ensure the integrity of the representation.
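As a hedged sketch of the grouping idea described above, the snippet below splits the channel dimension into groups, encodes each group's channel responses with a small shared MLP, and applies a global attention over the resulting group tokens. The group count, the pooling, and the attention form are assumptions for illustration, not the exact (ML)2P-Encoder.

```python
import torch
import torch.nn as nn

class GroupedChannelEncoder(nn.Module):
    """Encode channel-wise responses per group, then attend across groups."""
    def __init__(self, channels=2048, num_groups=8, hidden=256):
        super().__init__()
        assert channels % num_groups == 0
        self.num_groups = num_groups
        group_ch = channels // num_groups
        self.mlp = nn.Sequential(nn.Linear(group_ch, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)

    def forward(self, feat):                         # feat: (B, C, H, W)
        B, C, H, W = feat.shape
        pooled = feat.mean(dim=(2, 3))               # (B, C): per-channel response
        groups = pooled.view(B, self.num_groups, C // self.num_groups)
        tokens = self.mlp(groups)                    # (B, G, hidden): per-group embedding
        out, _ = self.attn(tokens, tokens, tokens)   # global group-wise attention
        return out                                   # class scores would be predicted from these tokens
```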
In summary, our contributions are four-fold:
1. To the best of our knowledge, our method first suggests
the concept of channel-class correlation in MLZSL,
and proposes a channel-sensitive attention module
(ML)2P-Encoder to extract and preserve channel-wise
semantics for channel groups.
2. Different from previous works that use spatial-class
correlation to extract global and local features, we al-
ternatively explore the channel-class correlation as the
unique base for MLZSL.
3. In conjunction with (ML)2P-Encoder, a global group-
wise attention is also designed to establish the multi-
label specific class relationships among classes.
4. Extensive experiments on large-scale datasets NUS-
WIDE and Open-Images -V4 demonstrate the effec-
tiveness of our method against other state-of-the-art
models.
|
Lin_Optimal_Transport_Minimization_Crowd_Localization_on_Density_Maps_for_Semi-Supervised_CVPR_2023
|
Abstract
The accuracy of crowd counting in images has improved
greatly in recent years due to the development of deep neu-
ral networks for predicting crowd density maps. However,
most methods do not further explore the ability to localize
people in the density map, with those few works adopting
simple methods, like finding the local peaks in the density
map. In this paper, we propose the optimal transport mini-
mization (OT-M) algorithm for crowd localization with den-
sity maps. The objective of OT-M is to find a target point
map that has the minimal Sinkhorn distance with the in-
put density map, and we propose an iterative algorithm
to compute the solution. We then apply OT-M to gener-
ate hard pseudo-labels (point maps) for semi-supervised
counting, rather than the soft pseudo-labels (density maps)
used in previous methods. Our hard pseudo-labels pro-
vide stronger supervision, and also enable the use of recent
density-to-point loss functions for training. We also propose
a confidence weighting strategy to give higher weight to the
more reliable unlabeled data. Extensive experiments show
that our methods achieve outstanding performance on both
crowd localization and semi-supervised counting. Code is
available at https://github.com/Elin24/OT-M .
|
1. Introduction
Crowd understanding gains much attention due to its
wide applications in surveillance [33, 61] and crowd dis-
aster prevention. Most studies in this area concentrate
on crowd counting, whose objective is to provide the to-
tal number and distribution of crowds in a scene automat-
ically. Due to the development of deep learning, recent
methods [5,58,59,62,67] have achieved success on a variety
of counting benchmarks [50, 61, 62, 66]. Counting methods
can be extended to other applications, such as traffic man-
agement [63], animal protection [2], and health care [32].
Although crowd counting has been greatly developed,
most methods do not explore further applications of the
estimated density maps after obtaining the count.
Figure 1. The relationship between crowd counting CNNs and OT-M algorithm. CNNs are trained to generate density maps (soft labels). OT-M produces point maps (hard labels) from the predicted density maps, without needing training.
Specifically, there is limited research on crowd localization, track-
ing, or group analysis with predicted density maps. Taking
crowd localization as an example, only a few methods, such
as local maximum [13, 58, 65], integer programming [36],
or Gaussian mixture models (GMM) [14], have been pro-
posed to locate pedestrians or tiny objects from density
maps. Moreover, recent localization methods ignore the
counting density map, and instead are based on point de-
tection [40, 54], blob segmentation [1, 10, 17], or inverse
distance maps [24]. However, this will increase the inef-
ficiency of a crowd understanding system, since separate
networks are required for counting and localization.
To broaden the application of density maps for localiza-
tion, in this paper we propose a parameter-free algorithm,
Optimal Transport Minimization (OT-M), to estimate the
point map indicating locations of objects from a counting
density map (see Fig. 1). OT-M minimizes the Sinkhorn
distance [7] between the density map (source) and point lo-
calization map (target), through an alternating scheme that
estimates the optimal transport plan from the current point
map (the OT-step), and updates the point map by minimiz-
ing their transport cost (the M-step). OT-M is parameter-
free and requires no training, and thus can be applied to any
crowd-counting method using density maps.
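A minimal sketch of the alternating scheme described above is given below: the OT-step computes an entropic (Sinkhorn) transport plan between the density-map mass and the current point candidates, and the M-step moves each point to the transport-weighted mean of the density locations assigned to it. The initialization, stopping rule, and hyperparameters are assumptions; this is not the authors' released implementation.

```python
import torch

def sinkhorn_plan(a, b, cost, eps=0.1, iters=100):
    """Entropic OT: a (n,) source masses, b (m,) target masses, cost (n, m)."""
    K = torch.exp(-cost / eps)
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v + 1e-9)
        v = b / (K.t() @ u + 1e-9)
    return u[:, None] * K * v[None, :]               # transport plan (n, m)

def ot_m(density_xy, density_w, num_points, steps=20, eps=0.1):
    """density_xy: (n, 2) pixel coordinates; density_w: (n,) density values."""
    a = density_w / density_w.sum()
    b = torch.full((num_points,), 1.0 / num_points)
    # Initialize points at random density locations (an assumption).
    points = density_xy[torch.randint(0, density_xy.shape[0], (num_points,))].clone().float()
    for _ in range(steps):
        cost = torch.cdist(density_xy.float(), points) ** 2
        cost = cost / (cost.max() + 1e-9)            # normalize for numerical stability of exp(-cost/eps)
        plan = sinkhorn_plan(a, b, cost, eps)        # OT-step
        # M-step: move each point to the transport-weighted mean of its assigned density mass.
        points = (plan.t() @ density_xy.float()) / (plan.sum(dim=0, keepdim=True).t() + 1e-9)
    return points
```

In practice, num_points would typically be set to the rounded total count of the density map, which is an assumption consistent with how point maps are usually defined.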
To demonstrate the applicability of density-map based
localization, we apply OT-M to semi-supervised counting.
In previous work, [38] builds a baseline for semi-supervised
counting based on the mean-teacher framework [55], but
finds it ineffective. Looking closely, we note that the base-
line in [38] uses a soft pseudo-label (density map) to super-
vise the student model, whereas successful semi-supervised
classification [52] or segmentation [6, 55] methods usually
are based on hard pseudo-labels (e.g., class labels or binary
segmentation masks). In the context of semi-supervised
crowd counting, the hard label is the point map in which
each person is pseudo-annotated with a point. Thus, in this
paper we generate hard pseudo-labels using OT-M for the
unlabeled crowd images for semi-supervised crowd count-
ing. As an additional benefit, the hard pseudo-labels al-
low training CNNs under semi-supervised learning using
recent density-to-point loss functions (e.g., Bayesian loss
(BL) [34] or generalized loss (GL) [58]), which are more
effective than traditional losses, e.g. L2.
Similar to other semi-supervised tasks, some estimated
pseudo labels may be inaccurate due to limitations of the
current trained model. To reduce the effect of these noisy
pseudo-labels, we propose a confidence-weighted GL (C-
GL) for semi-supervised counting. Specifically, we com-
pute the unbalanced optimal transport plan between the stu-
dent’s predicted density map (source) and the hard pseudo-
labels from the teacher (target), and then define the pixel-
wise and point-wise confidences based on the consistencies
between source, target and the plan. Experiments show that
the trained model is more robust with our C-GL.
In summary, the contributions of this paper are 3-fold:
•We propose an OT-M algorithm to estimate the loca-
tions of objects from density maps, which is based on
minimizing the Sinkhorn distance between the den-
sity map and the target point map. Since OT-M is
parameter-free, it can be applied to any density map
without training.
•We use OT-M to produce hard pseudo-labels for semi-
supervised counting, which conforms with schemes in
other semi-supervised tasks. The hard label also al-
lows applying density-to-point loss like GL to unla-
beled data for more effective training.
•To mitigate risks brought by inaccurate pseudo-labels,
we propose a confidence-weighted Generalized Loss
to reduce the influence of inconsistencies between the
teacher’s and student’s predictions. Experiments show
that our loss improves semi-supervised counting per-
formance.
|
Kim_NeuralField-LDM_Scene_Generation_With_Hierarchical_Latent_Diffusion_Models_CVPR_2023
|
Abstract
Automatically generating high-quality real world 3D
scenes is of enormous interest for applications such as vir-
tual reality and robotics simulation. Towards this goal, we
introduce NeuralField-LDM, a generative model capable of
synthesizing complex 3D environments. We leverage Latent
Diffusion Models that have been successfully utilized for ef-
ficient high-quality 2D content creation. We first train a
scene auto-encoder to express a set of image and pose pairs
as a neural field, represented as density and feature voxel
grids that can be projected to produce novel views of the
scene. To further compress this representation, we train a
latent-autoencoder that maps the voxel grids to a set of la-
tent representations. A hierarchical diffusion model is then
fit to the latents to complete the scene generation pipeline.
We achieve a substantial improvement over existing state-
of-the-art scene generation models. Additionally, we show
how NeuralField-LDM can be used for a variety of 3D con-
tent creation applications, including conditional scene gen-
eration, scene inpainting and scene style manipulation.
|
1. Introduction
There has been increasing interest in modelling 3D real-
world scenes for use in virtual reality, game design, digital twin creation and more.
*Equal contribution. †Work done during an internship at NVIDIA.
However, designing 3D worlds
by hand is a challenging and time-consuming process, re-
quiring 3D modeling expertise and artistic talent. Recently,
we have seen success in automating 3D content creation
via 3D generative models that output individual object as-
sets [15, 46, 73]. Although a great step forward, automating
the generation of real-world scenes remains an important
open problem and would unlock many applications ranging
from scalably generating a diverse array of environments
for training AI agents ( e.g. autonomous vehicles) to the de-
sign of realistic open-world video games. In this work, we
take a step towards this goal with NeuralField-LDM (NF-
LDM), a generative model capable of synthesizing complex
real-world 3D scenes. NF-LDM is trained on a collection
of posed camera images and depth measurements which are
easier to obtain than explicit ground-truth 3D data, offering
a scalable way to synthesize 3D scenes.
Recent approaches [2, 6, 8] tackle the same problem of
generating 3D scenes, albeit on less complex data. In [6,8],
a latent distribution is mapped to a set of scenes using ad-
versarial training, and in GAUDI [2], a denoising diffusion
model is fit to a set of scene latents learned using an auto-
decoder. These models all have an inherent weakness of
attempting to capture the entire scene into a single vector
that conditions a neural radiance field. In practice, we find
that this limits the ability to fit complex scene distributions.
Recently, diffusion models have emerged as a very pow-
erful class of generative models, capable of generating high-
quality images, point clouds and videos [18, 24, 39, 48, 53,
73,78]. Yet, due to the nature of our task, where image data
must be mapped to a shared 3D scene without an explicit
ground truth 3D representation, straightforward approaches
fitting a diffusion model directly to data are infeasible.
In NeuralField-LDM, we learn to model scenes using a
three-stage pipeline. First, we learn an auto-encoder that en-
codes scenes into a neural field, represented as density and
feature voxel grids. Inspired by the success of latent diffu-
sion models for images [53], we learn to model the distribu-
tion of our scene voxels in latent space to focus the genera-
tive capacity on core parts of the scene and not the extrane-
ous details captured by our voxel auto-encoders. Specif-
ically, a latent-autoencoder decomposes the scene voxels
into a 3D coarse, 2D fine and 1D global latent. Hierar-
chical diffusion models are then trained on the tri-latent
representation to generate novel 3D scenes. We show how
NF-LDM enables applications such as scene editing, birds-
eye view conditional generation and style adaptation. Fi-
nally, we demonstrate how score distillation [46] can be
used to optimize the quality of generated neural fields, al-
lowing us to leverage the representations learned from state-
of-the-art image diffusion models that have been exposed to
orders of magnitude more data.
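The three-stage structure described above can be summarized with the schematic skeleton below; the class and method names are placeholders meant only to show how the scene auto-encoder, the latent auto-encoder, and the hierarchical diffusion models chain together, not the authors' actual code.

```python
from dataclasses import dataclass
import torch

@dataclass
class SceneLatents:
    coarse_3d: torch.Tensor   # 3D coarse latent
    fine_2d: torch.Tensor     # 2D fine latent
    global_1d: torch.Tensor   # 1D global latent

class NFLDMPipeline:
    """Schematic three-stage NF-LDM pipeline (component interfaces are assumed)."""
    def __init__(self, scene_autoencoder, latent_autoencoder, diffusion_hierarchy):
        self.scene_ae = scene_autoencoder      # images + poses -> density/feature voxel grids
        self.latent_ae = latent_autoencoder    # voxels -> (coarse 3D, fine 2D, global 1D) latents
        self.diffusion = diffusion_hierarchy   # hierarchical diffusion over the tri-latent

    def encode(self, images, poses):
        voxels = self.scene_ae.encode(images, poses)
        return SceneLatents(*self.latent_ae.encode(voxels))

    def sample_scene(self, cond=None):
        latents = self.diffusion.sample(cond)  # e.g., global -> coarse -> fine, each conditioned on the last
        voxels = self.latent_ae.decode(latents)
        return voxels                          # novel views are rendered by projecting the neural field
```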
Our contributions are: 1) We introduce NF-LDM, a hi-
erarchical diffusion model capable of generating complex
open-world 3D scenes and achieving state of the art scene
generation results on four challenging datasets. 2) We ex-
tend NF-LDM to semantic birds-eye view conditional scene
generation, style modification and 3D scene editing.
|
Li_BioNet_A_Biologically-Inspired_Network_for_Face_Recognition_CVPR_2023
|
Abstract
Recently, whether and how cutting-edge Neuroscience
findings can inspire Artificial Intelligence (AI) confuse both
communities and draw much discussion. As one of the most
critical fields in AI, Computer Vision (CV) also pays much
attention to the discussion. To show our ideas and exper-
imental evidence to the discussion, we focus on one of the
most broadly researched topics both in Neuroscience and
CV fields, i.e., Face Recognition (FR). Neuroscience studies
show that face attributes are essential to the human face-
recognizing system. How the attributes contribute is also
explained by the Neuroscience community. Even though a
few CV works improved the FR performance with attribute
enhancement, none of them are inspired by the human face-
recognizing mechanism nor boosted performance signifi-
cantly. To show our idea experimentally, we model the
biological characteristics of the human face-recognizing
system with classical Convolutional Neural Network Op-
erators (CNN Ops) purposely. We name the proposed
Biologically-inspired Network as BioNet . Our BioNet con-
sists of two cascade sub-networks, i.e., the Visual Cor-
tex Network (VCN) and the Inferotemporal Cortex Net-
work (ICN). The VCN is modeled with a classical CNN
backbone. The proposed ICN comprises three biologically-
inspired modules, i.e., the Cortex Functional Compartmen-
talization, the Compartment Response Transform, and the
Response Intensity Modulation. The experiments prove
that: 1) The cutting-edge findings about the human face-
recognizing system can further boost the CNN-based FR
network. 2) With the biological mechanism, both identity-
related attributes ( e.g., gender) and identity-unrelated at-
tributes ( e.g., expression) can benefit the deep FR models.
Surprisingly, the identity-unrelated ones contribute even
more than the identity-related ones. 3) The proposed BioNet
significantly boosts state-of-the-art on standard FR bench-
mark datasets. For example, BioNet boosts IJB-B@1e-6
from 52.12% to 68.28% and MegaFace from 98.74% to
99.19%. The source code is released in1.
1https://github.com/pengyuLPY/BioNet.git
Figure 1. (A) Face recognition mechanism of human brains. (B)
Architecture of BioNet.
|
1. Introduction
It sparked much discussion in both communities when Zador, Bengio et al. [54] claimed that investment in fundamental Neuroscience research is needed to accelerate AI progress.
Some researchers agree with Zador’s opinion. For example,
LeCun et al. [52] and Goodfellow et al. [10] proposed the
Convolutional Neural Network (CNN) inspired by past clas-
sical Neuroscience discoveries about the human visual cor-
tex. However, other researchers have some concerns, e.g.,
1) Except for the high-level and abstract senses from Neuro-
science, can their specific studies support the AI fields? 2)
Since CNN [9,10,26,52] was proposed many years ago, few
AI works have been inspired by recent Neuroscience find-
ings. It results in a lack of evidence to support that the latest
Neuroscience studies can continue to drive AI progress.
To show an idea and some experimental evidence to
this discussion, we focus on one of the most broadly re-
searched topics in both fields, i.e., Face Recognition (FR).
The latest Neuroscience studies [1, 2, 7, 38, 51] found that
besides the visual cortex, the inferotemporal cortex plays a
vital role in the human face-recognizing system. Because
of the following three biological characteristics, the infer-
otemporal cortex characterizes the complicated relationship
between attributes and makes attributes contribute to FR.
1) The inferotemporal cortex is functionally compartmen-
talized by the face stimuli ( i.e.,identity ,attributes ) [1, 7].
2) The responses of the functional compartments are trans-
formed into the successor neurons for processing com-
plex tasks [38], e.g., FR. 3) The intensities of the func-
tional compartment are variant in the inferotemporal cor-
tex [2], demonstrating that the attributes are not equally
essential to the human face-recognizing system. In the
AI field, some CNN-based FR works proposed effective
loss functions [4, 18, 21, 33, 40, 45] and others designed
task-oriented face recognition structures [22,23,27–29,42].
Because of these excellent works, FR in AI achieved big
progress. Although a few AI FR works [14, 19, 43] boosted
the performance with attribute enhancement, none of them
are inspired by the human face-recognizing mechanism
nor boosted performance significantly. We experimentally
prove that the deep FR models do not capture the biological
characteristics of the human face-recognizing system intro-
duced above. We suspect this is the factor limiting their
performance improvement.
To address the problem and experimentally support the
opinion of Zador, Bengio et al. [54], we purposely model
the biological characteristics of the human face-recognizing
system with classical CNN Ops. We name the proposed
Biologically-inspired Network as BioNet . The proposed
BioNet is constituted by two cascade sub-networks, i.e.,
Visual Cortex Network (VCN) andInferotemporal Cor-
tex Network (ICN) , as Fig.1 shows. The VCN is modeled
with a CNN backbone ( e.g., CASIA-Net [53], ResNet [12]),
which follows the common suggestions in [10, 52]. The
proposed ICN is composed of three biologically-inspired
modules, i.e. the Cortex Functional Compartmentalization
(CFC), the Compartment Response Transform (CRT), and
the Response Intensity Modulation (RIM). CFC is based
on the attention mechanism [8, 15, 44] to functionally com-
partmentalize the feature maps with face stimuli ( i.e.iden-
tityand attributes ). CRT is implemented by Multilayer
Perception (MLP) [36] to transform intra-compartment re-
sponses to successor neurons for processing complex FR
task. RIM follows the ensemble mechanism [55] to fuse
inter-compartment features via adaptive weights to achieve
the final FR representation. The proposed modules follow
the human face-recognizing mechanism and empower ICN
to characterize the complicated relationship between stim-
uli. All of them are indispensable in BioNet.
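As a hedged sketch of how the three ICN modules could be assembled from classical CNN operators, the snippet below uses per-compartment spatial attention for CFC, a shared MLP for CRT, and adaptive softmax weights for RIM. The compartment count, the dimensions, and the module internals are assumptions for illustration only, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ICN(nn.Module):
    """Inferotemporal Cortex Network sketch: CFC -> CRT -> RIM."""
    def __init__(self, in_ch=512, num_compartments=8, feat_dim=512):
        super().__init__()
        # CFC: one spatial attention map per functional compartment (identity / attributes).
        self.cfc = nn.Conv2d(in_ch, num_compartments, kernel_size=1)
        # CRT: transform each compartment's pooled response with an MLP.
        self.crt = nn.Sequential(nn.Linear(in_ch, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        # RIM: adaptive weights for fusing compartment features.
        self.rim = nn.Linear(feat_dim, 1)

    def forward(self, fmap):                                          # fmap: (B, C, H, W) from the VCN
        attn = torch.softmax(self.cfc(fmap).flatten(2), dim=-1)       # (B, K, H*W)
        pooled = torch.einsum("bkn,bcn->bkc", attn, fmap.flatten(2))  # (B, K, C) compartment responses
        transformed = self.crt(pooled)                                # (B, K, feat_dim)
        weights = torch.softmax(self.rim(transformed), dim=1)         # (B, K, 1) response intensities
        return (weights * transformed).sum(dim=1)                     # fused face-recognition representation
```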
With such a Biologically-inspired Network, we achieve
better attribute-enhanced deep FR models than ever. Fur-
thermore, we conduct careful analyses of the proposed modules and the impacts of attributes. We also compare
the BioNet to the attention, multi-task learning, and ensem-
ble mechanisms, verifying the advantage of our proposals.
We think the experiments in this paper can support the con-
clusions:
1. To the best of our knowledge, after CNN was pro-
posed, our BioNet is the first network inspired by the
latest Neuroscience studies. It provides experimental
evidence that the latest Neuroscience studies can fur-
ther boost the CNN-based FR network and continue to
drive AI progress.
2. With the Neuroscience mechanism, both identity-
related attributes ( e.g.,gender ) and identity-unrelated
attributes ( e.g.,expression ) can benefit the deep FR
models. Surprisingly, we find the identity-unrelated
ones contribute even more than the identity-related
ones. Besides, we also propose an online latent at-
tributes mining method and prove that the latent at-
tributes contribute to the FR task, too.
3. Without bells and whistles, Our BioNet consistently
and significantly boosts state-of-the-art of FR models
on standard FR benchmark datasets, e.g., IJB-A [24],
IJB-B [46], IJB-C [46], and MegaFace [20].
|
Kim_Diffusion_Video_Autoencoders_Toward_Temporally_Consistent_Face_Video_Editing_via_CVPR_2023
|
Abstract
Inspired by the impressive performance of recent face
image editing methods, several studies have been naturally
proposed to extend these methods to the face video editing
task. One of the main challenges here is temporal consis-
tency among edited frames, which is still unresolved. To this
end, we propose a novel face video editing framework based
on diffusion autoencoders that can successfully extract the
decomposed features - for the first time as a face video edit-
ing model - of identity and motion from a given video. This
modeling allows us to edit the video by simply manipulat-
ing the temporally invariant feature to the desired direction
for the consistency. Another unique strength of our model
is that, since our model is based on diffusion models, it can
satisfy both reconstruction and edit capabilities at the same
time, and is robust to corner cases in wild face videos ( e.g.
occluded faces) unlike the existing GAN-based methods.1
|
1. Introduction
As one of the standard tasks in computer vision to change
various face attributes such as hair color, gender, or glasses
of a given face image, face editing has been continuously gaining attention due to its various applications and entertainment.
1 Project page: https://diff-video-ae.github.io
In particular, with the improvement of analy-
sis and manipulation techniques for recent Generative Ad-
versarial Network (GAN) models [ 8,11,22,29,30], we
simply can do this task by manipulating a given image’s
latent feature. In addition, very recently, many methods
for face image editing also have been proposed based on
Diffusion Probabilistic Model (DPM)-based methods that
show high-quality and flexible manipulation performance
[3,12,17,19,23,25].
Naturally, further studies [ 2,35,41] have been proposed
to extend image editing methods to incorporate the tempo-
ral axis for videos. Now, given real videos with a human
face, these studies try to manipulate some target facial at-
tributes with the other remaining features and motion in-
tact. They all basically edit each frame of a video inde-
pendently via off-the-shelf StyleGAN-based image editing
techniques [ 22,29,41].
Despite the advantages of StyleGAN in this task such as
high-resolution image generation capability and highly dis-
entangled semantic representation space, one harmful draw-
back of GAN-based editing methods is that the encoded real
images cannot perfectly be recovered by the pretrained gen-
erator [ 1,26,34]. Especially, if a face in a given image is
unusually decorated or occluded by some objects, the fixed
generator cannot synthesize it. For perfect reconstruction,
several methods [ 6,27] are suggested to further tune the
generator for GAN-inversion [ 26,34] on one or a few tar-
get images, which is computationally expensive. Moreover,
after the fine-tuning, the original editability of GANs can-
not be guaranteed. This risk could be worse in the video domain since we have to finetune the model on multiple frames.
Aside from the reconstruction issue of existing GAN-
based methods, it is critical in video editing tasks to con-
sider the temporal consistency among edited frames to pro-
duce realistic results. To address this, some prior works rely
on the smoothness of the latent trajectory of the original
frames [ 35] or smoothen the latent features directly [ 2] by
simply taking the same editing step for all frames. However,
smoothness does not ensure temporal consistency. Rather,
the same editing step can make different results for differ-
ent frames because it can be unintentionally entangled with
irrelevant motion features. For example, in the middle row
of Fig. 1, eyeglasses vary across time and sometimes dimin-
ish when the man closes his eyes.
In this paper, we propose a novel video editing frame-
work for human face video, termed Diffusion Video Au-
toencoder , that resolves the limitations of the prior works.
First, instead of GAN-based editing methods suffering from
imperfect reconstruction quality, we newly introduce diffu-
sion based model for face video editing tasks. As the re-
cently proposed diffusion autoencoder (DiffAE) [ 23] does,
thanks to the expressive latent spaces of the same size as
input space, our model learns a semantically meaningful la-
tent space that can perfectly recover the original image back
and are directly editable. Not only that, for the first time as
a video editing model, encode the decomposed features of
the video: 1) identity feature shared by all frames, 2) fea-
ture of motion or facial expression in each frame, and 3)
background feature that could not have high-level represen-
tation due to large variances. Then, for consistent editing,
we simply manipulate a single invariant feature for the de-
sired attribute (single editing operation per video), which is
also computationally beneficial compared to the prior works
that require editing the latent features of all frames.
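To illustrate the claimed single-edit-per-video workflow, the sketch below encodes a clip once into a shared identity feature plus per-frame features, shifts only the identity feature along an attribute direction, and decodes each frame with its original per-frame features. The encoder/decoder interfaces and all names are placeholders, not the released implementation.

```python
import torch

def edit_video(video_autoencoder, frames, edit_direction, strength=1.0):
    """frames: (T, 3, H, W). Returns edited frames with temporally consistent attributes."""
    # Decompose: one identity feature shared by all frames, plus per-frame features.
    z_id, z_motion, z_bg = video_autoencoder.encode(frames)   # z_id: (D,); z_motion, z_bg: per-frame
    # Single editing operation per video: shift only the time-invariant feature.
    z_id_edit = z_id + strength * edit_direction               # e.g., a linear attribute direction
    edited = []
    for t in range(frames.shape[0]):
        # Decode each frame from the edited identity and its original motion/background features.
        edited.append(video_autoencoder.decode(z_id_edit, z_motion[t], z_bg[t]))
    return torch.stack(edited)
```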
We experimentally demonstrate that our model appropri-
ately decomposes videos into time-invariant and per-frame
variant features and can provide temporally consistent ma-
nipulation. Specifically, we explore two ways of manipula-
tion. The first one is to edit features in the predefined set
of attributes by moving semantic features to the target di-
rection found by learning linear classifier in semantic repre-
sentation space on annotated CelebA-HQ dataset [ 10]. Ad-
ditionally, we explore the text-based editing method that op-
timizes a time-invariant latent feature with CLIP loss [7]. It is worth noting that since we cannot fully generate edited
images for CLIP loss due to the computational cost, we pro-
pose the novel strategy that rather uses latent state of inter-
mediate time step for the efficiency.
To summarize, our contribution is four-fold:
•We devise diffusion video autoencoders based on dif-
fusion autoencoders [ 23] that decompose the video
into a single time-invariant and per-frame time-variant
features for temporally consistent editing.
•Based on the decomposed representation of diffusion
video autoencoder, face video editing can be con-
ducted by editing only the single time-invariant iden-
tity feature and decoding it together with the remaining
original features.
•Owing to the nearly-perfect reconstruction ability of
diffusion models, our framework can be utilized to edit
exceptional cases such that a face is partially occluded
by some objects as well as usual cases.
•In addition to the existing predefined attributes edit-
ing method, we propose a text-based identity editing
method based on the local directional CLIP loss [ 7,24]
for the intermediately generated product of diffusion
video autoencoders.
|
Kulkarni_Learning_To_Predict_Scene-Level_Implicit_3D_From_Posed_RGBD_Data_CVPR_2023
|
Abstract
We introduce a method that can learn to predict scene-
level implicit functions for 3D reconstruction from posed
RGBD data. At test time, our system maps a previously
unseen RGB image to a 3D reconstruction of a scene via
implicit functions. While implicit functions for 3D recon-
struction have often been tied to meshes, we show that we
can train one using only a set of posed RGBD images. This
setting may help 3D reconstruction unlock the sea of ac-
celerometer+RGBD data that is coming with new phones.
Our system, D2-DRDF , can match and sometimes outper-
form current methods that use mesh supervision and shows
better robustness to sparse data.
|
1. Introduction
Consider the image in Figure 1. From this ordinary RGB
image, we can understand the complete 3D geometry of the
scene, including the floor and walls behind the chairs. Our
goal is to enable computers to recover this geometry from
a single RGB image. To this end, we present a method that
does so while learning only on posed RGBD images.
The task of reconstructing the full 3D of a scene from a
single previously unseen RGB image has long been known
to be challenging. Early work on full 3D relied on vox-
els [6, 17] or meshes [19], but these representations fail on
scenes due to topology and memory challenges. Implicit
functions (or learning to map each point in R3to a value like
the distance to the nearest surface) have shown substantial
promise at overcoming these challenges. When conditioned
on an image, these have led to several successful methods.
Unfortunately, the implicit function status quo mainly
ties implicit function reconstruction methods to mesh su-
pervision. This symbiosis has emerged since meshes give
excellent direct supervision. However, methods are limited
to training with an image-aligned mesh that is usually wa-
tertight (and often artist-created) [35, 38, 44] and occasion-
ally non-watertight but professionally-scanned [27, 61].
We present a method, Depth to DRDF (D2-DRDF), that
breaks the implicit-function/mesh connection and can train
an effective single RGB image implicit 3D reconstruction
system using a set of RGBD images with known pose.
Figure 1. We propose a method, D2-DRDF, that reconstructs full 3D from a single RGB image. During inference (left), our method uses implicit functions to reconstruct the complete 3D including visible and occluded surfaces (visualized as surface normals), such as the occluded wall and empty floor. For training, our method uses only RGBD images and poses, unlike most prior works that need an explicit and often watertight 3D mesh.
We
envision that being able to entirely skip meshing will enable
the use of vast quantities of lower-quality data from con-
sumers (e.g., from increasingly common consumer phones
with LiDAR scanners and accelerometers) as well as robots.
In addition to permitting training, the bypassing of meshing
may enable adaptation in a new environment on raw data
without needing an expert to ensure mesh acquisition.
Our key insight is that we can use segments of observed
free space in depth maps in other views to constrain dis-
tance functions. We show this using the Directed Ray Dis-
tance Function (DRDF) [27] that has recently shown good
performance in 3D reconstruction using implicit functions
and has the benefit of not needing watertight meshes for
training. Given an input reference view, the DRDF breaks
the problem of predicting the 3D surface into a set of in-
dependent 1D distance functions, each along a ray through
a pixel in the reference view and accounting for only sur-
faces on the ray. Rather than use an ordinary unsigned dis-
tance function, the DRDF signs the distance using the lo-
cation of the camera’s pinhole and the intersections along
the ray. While [27] showed their method could be trained
on non-watertight meshes, their method is still dependent
on meshes. In our paper, we show that the DRDF can be
cleanly supervised using auxiliary views of RGBD obser-
vations and their poses. We derive constraints on the DRDF
that power loss functions for training our system. While the
losses on any one image are insufficient to produce a recon-
struction, they provide a powerful signal when accumulated
across thousands of training views.
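To make the per-ray formulation concrete, here is a minimal sketch of a directed ray distance for one ray; the helper name and the exact signing convention are illustrative, and the true DRDF definition follows [27]:

```python
import numpy as np

def directed_ray_distance(z_samples, hit_depths):
    """Sketch of a directed (signed) ray distance for a single ray.
    z_samples:  depths of query points along the ray.
    hit_depths: depths at which the ray intersects scene surfaces.
    The value is the distance to the nearest intersection, positive when that
    intersection lies farther along the ray and negative when it lies behind."""
    z = np.asarray(z_samples, dtype=float)[:, None]   # (N, 1)
    d = np.asarray(hit_depths, dtype=float)[None, :]  # (1, K)
    diff = d - z                                      # signed gap to every intersection
    nearest = np.argmin(np.abs(diff), axis=1)         # closest intersection per sample
    return diff[np.arange(z.shape[0]), nearest]

# Example: a ray hitting a chair at 2.1 m and the wall behind it at 4.0 m.
print(directed_ray_distance([0.5, 2.5, 5.0], [2.1, 4.0]))  # [ 1.6 -0.4 -1.0 ]
```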
We evaluate our method on realistic indoor scene
datasets and compare it with methods that train with full
mesh supervision. Our method is competitive and some-
times even better compared to the best-performing mesh-
supervised method [28] with full, professional captures. As
we degrade scan completeness, our method largely main-
tains its performance while mesh-based methods perform
substantially worse. We conclude by showing that fine-
tuning of our method on a handful of images enables a sim-
ple, effective method for fusing and completing scenes from
a handful of RGBD images.
|
Lang_SCOOP_Self-Supervised_Correspondence_and_Optimization-Based_Scene_Flow_CVPR_2023
|
Abstract
Scene flow estimation is a long-standing problem in com-
puter vision, where the goal is to find the 3D motion of
a scene from its consecutive observations. Recently, there
have been efforts to compute the scene flow from 3D point
clouds. A common approach is to train a regression model
that consumes source and target point clouds and outputs
the per-point translation vector. An alternative is to learn
point matches between the point clouds concurrently with
regressing a refinement of the initial correspondence flow.
In both cases, the learning task is very challenging since the
flow regression is done in the free 3D space, and a typical
solution is to resort to a large annotated synthetic dataset.
We introduce SCOOP , a new method for scene flow esti-
mation that can be learned on a small amount of data with-
out employing ground-truth flow supervision. In contrast to
previous work, we train a pure correspondence model fo-
cused on learning point feature representation and initial-
ize the flow as the difference between a source point and
its softly corresponding target point. Then, in the run-time
phase, we directly optimize a flow refinement component
with a self-supervised objective, which leads to a coherent
and accurate flow field between the point clouds. Experi-
ments on widespread datasets demonstrate the performance
gains achieved by our method compared to existing leading
techniques while using a fraction of the training data. Our
code is publicly available1.
|
1. Introduction
Scene flow estimation [27] is a fundamental problem
in computer vision with various use-cases, such as au-
tonomous driving, scene parsing, pose estimation, and ob-
ject tracking, to name a few. Given two consecutive obser-
vations of a 3D scene, the aim is to compute the dynamics
of the scene between the observations. Scene flow predic-
tion based on 2D images has been thoroughly investigated
in the literature [17, 19, 28, 32, 33]. However, in light of the
recent proliferation of 3D sensors, such as LiDAR, there is a
surge of interest in scene flow methods that operate directly
on the 3D data [10, 13, 16, 21, 34].
1 https://github.com/itailang/SCOOP
Figure 1. Flow accuracy on the KITTI benchmark vs. the train set size. Our method is trained on one or two orders of magnitude less data while surpassing the performance of the competing techniques [4, 13, 14, 16, 21, 24, 30, 31] by a large margin. Please see Table 1 for the complete details of the evaluation settings.
Liu et al . [16] were among the first to pursue this
research avenue. They proposed FlowNet3D, a fully-
supervised neural network that learned to regress the flow
between 3D point clouds and showed remarkable perfor-
mance improvement over image-based techniques [1, 19,
29]. Since their method required ground-truth flow anno-
tations, which are scarce for real-world data, they turned to
training on a large synthetic dataset that compromised the
generalization capability to real-world LiDAR data.
Follow-up works devised self-supervised learning
schemes [13, 21] and narrowed the domain gap by training
on unannotated LiDAR point cloud pairs. However, similar
to Liu et al. [16], they used a regression approach in which
the model should learn to compute the flow in the free 3D
space . This task is extremely challenging, given the irreg-
ular nature of point clouds, and requires a large amount of
training data for the network to converge.
In another line of work [8, 11, 24], researchers leveraged
point cloud correspondence for scene flow prediction. In
this approach, the flow is computed as the translation of
a point in the first point cloud (source) to its softly corre-
sponding point in the second one (target).
Figure 2. Comparison of scene flow approaches. Given a pair of point clouds, FlowNet3D [16] learns to regress the flow in the free 3D space, and the trained model is frozen for testing. FLOT [24] concurrently trains two network components: one that computes point correspondence and another that regresses a correction to the resulting correspondence flow. Neural Prior [15] optimizes the flow between the point clouds from scratch without learning. In contrast to previous work, we take a hybrid approach. We train a pure correspondence model without flow regression, which serves for flow initialization. Then, we directly optimize only the flow refinement at test time.
The softly cor-
responding point is a weighted sum of target points based
on point similarity in a learned latent space. Thus, rather
than the challenging regression problem in the 3D ambi-
ent space, the flow estimation task boils down to point
feature learning and is reduced to the convex combination
space [26] of existing target points. However, to relax this
constraint, another network component is trained to regress
flow corrections. The joint training of point representation
and flow refinement burdens the learning process and re-
tains the reliance on large datasets with flow supervision.
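A minimal sketch of this correspondence-based flow initialization (cosine similarity and a softmax with an assumed temperature; not necessarily SCOOP's exact matching rule):

```python
import torch

def soft_correspondence_flow(src_xyz, tgt_xyz, src_feat, tgt_feat, temperature=0.03):
    """Initialize flow as the offset from each source point to its softly
    corresponding target point (a similarity-weighted sum of target points).
    src_xyz: (N, 3), tgt_xyz: (M, 3); src_feat: (N, C), tgt_feat: (M, C)."""
    src_feat = torch.nn.functional.normalize(src_feat, dim=1)
    tgt_feat = torch.nn.functional.normalize(tgt_feat, dim=1)
    sim = src_feat @ tgt_feat.t() / temperature   # (N, M) feature similarities
    weights = torch.softmax(sim, dim=1)           # soft matching weights
    matched = weights @ tgt_xyz                   # convex combination of target points
    return matched - src_xyz                      # (N, 3) initial flow vectors
```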
Another emerging approach is an optimization-only flow
computation [15, 23]. In this case, no training data is in-
volved, and the flow is optimized at run-time for each scene
separately. Despite the high accuracy such a dedicated op-
timization achieves, it requires a long processing time.
We present SCOOP, a hybrid flow estimation method
that can be learned from a small amount of training data.
SCOOP consists of two parts: a self-supervised neural net-
work for point cloud correspondence and a direct flow re-
finement optimization module. During the training phase,
the network learns to extract point features for soft point
matches, which initialize the flow between the point clouds.
In contrast to previous work, our network is focused on
learning just the point embeddings, allowing its training on
a very small dataset, as shown in Figure 1. Additionally,
we consider the confidence of the network in the computed
correspondences to guide the learning process better.
Then, instead of training another network for regress-
ing flow updates, we define an optimization problem and
directly optimize residual flow refinement vectors at run-
time. The optimization objective encourages a coherent
flow field while retaining the translated source points close
to the target point cloud. Our design choices improve the
accuracy compared to learning-based methods and reduce
the processing time with respect to the optimization-only
approach [15, 23]. For both correspondence learning and
refinement optimization, we use a self-supervised distance
objective and a smoothness prior instead of ground-truth flow labels. Figure 2 presents the difference between our
approach and leading previous ones.
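A minimal sketch of such a run-time refinement, under assumptions that differ from SCOOP's exact objective (one-sided nearest-neighbor data term, k-nearest-neighbor smoothness, illustrative weights and step counts):

```python
import torch

def refine_flow(src, tgt, init_flow, k=8, lam=1.0, steps=150, lr=0.05):
    """Optimize a residual flow so that warped source points stay close to the
    target cloud while neighboring points move coherently.
    src: (N, 3), tgt: (M, 3), init_flow: (N, 3)."""
    residual = torch.zeros_like(init_flow, requires_grad=True)
    knn = torch.cdist(src, src).topk(k + 1, largest=False).indices[:, 1:]  # (N, k) neighbors
    opt = torch.optim.Adam([residual], lr=lr)
    for _ in range(steps):
        flow = init_flow + residual
        warped = src + flow
        data_term = torch.cdist(warped, tgt).min(dim=1).values.mean()     # stay near the target
        smooth_term = (flow.unsqueeze(1) - flow[knn]).norm(dim=2).mean()  # coherent flow field
        loss = data_term + lam * smooth_term
        opt.zero_grad(); loss.backward(); opt.step()
    return (init_flow + residual).detach()
```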
In summary, we propose a hybrid flow prediction ap-
proach for point clouds based on self-supervised correspon-
dence learning and direct run-time residual flow optimiza-
tion. Using well-established datasets in the scene flow liter-
ature, we show that our approach yields clear performance
improvement over existing state-of-the-art methods while
using a fraction of the training data and without employing
any ground-truth flow supervision.
|
Lin_Towards_Fast_Adaptation_of_Pretrained_Contrastive_Models_for_Multi-Channel_Video-Language_CVPR_2023
|
Abstract
Multi-channel video-language retrieval requires models
to understand information from different channels (e.g.
video +question, video +speech) to correctly link a video
with a textual response or query. Fortunately, contrastive
multimodal models are shown to be highly effective at align-
ing entities in images/videos and text, e.g., CLIP [20]; text
contrastive models are extensively studied recently for their
strong ability of producing discriminative sentence embed-
dings, e.g., SimCSE [5]. However, there is not a clear
way to quickly adapt these two lines to multi-channel video-
language retrieval with limited data and resources. In this
paper, we identify a principled model design space with two
axes: how to represent videos and how to fuse video and text
information. Based on categorization of recent methods, we
investigate the options of representing videos using contin-
uous feature vectors or discrete text tokens; for the fusion
method, we explore the use of a multimodal transformer or
a pretrained contrastive text model. We extensively evalu-
ate the four combinations on five video-language datasets.
We surprisingly find that discrete text tokens coupled with
a pretrained contrastive text model yields the best perfor-
mance, which can even outperform state-of-the-art on the
iVQA and How2QA datasets without additional training on
millions of video-text data. Further analysis shows that this
is because representing videos as text tokens captures the
key visual information and text tokens are naturally aligned
with text models that are strong retrievers after the con-
trastive pretraining process. All the empirical analysis es-
tablishes a solid foundation for future research on afford-
able and upgradable multimodal intelligence.
|
1. Introduction
From retrieving a trending video on TikTok with natural
language descriptions to asking a bot to solve your tech-
nical problem with the question and a descriptive video, AI agents handling multi-channel video-language retrieval-
style tasks have been increasingly demanded in this post-
social-media era. These tasks require the agent to fuse in-
formation from multiple channels, i.e., video and text to
retrieve a text response or return a multi-channel sample
for a text query. To power such agents, a popular ap-
proach [9, 10, 15, 34, 40] consists of two rounds of pre-
training: (1) The 1st round is to obtain unimodal pre-
trained models, such as visual-only encoders [3, 6, 17, 20]
(e.g., S3D,CLIP) and text-only encoders [4,13,22,24] (e.g.,
BERT) (2) The 2nd round aims at pretraining on visual-
text dataset - specifically, researchers leverage techniques
like masked token modeling [9, 40] or contrastive learn-
ing [10,15,33,34] to align and fuse unimodal features from
model pretrained in the 1st round.
Such methods achieve good performance on multi-
channel retrieval-style tasks but they suffer from two ma-
jor limitations: 1) huge amounts of data and computational
resources are required for the second-round “pretraining”,
which significantly limits the research exploration without
such resources; 2) the domain of video data used in the
second round “pretraining” has to be strongly correlated
with downstream tasks [9], which may restrict such meth-
ods from being generally applicable.
To alleviate such limitations, we study a novel problem:
fast adaptation of pretrained contrastive models on multi-
channel video-language retrieval under limited resources.
Specifically, we propose to adapt both contrastive multi-
modal models [16,20] and contrastive text models [5,22] to
enjoy their strong encoding ability and discriminative em-
bedding space. There has been tremendous progress re-
cently on large-scale contrastive multimodal models [16,
17, 20]. Through pretraining on millions of images/videos,
these models are highly effective at encoding visual inputs
and linking entities across modalities . Meanwhile, con-
trastive text models [5, 22] have been also densely stud-
ied to obtain discriminative sentence embeddings . These
models are shown to perform well on challenging text re-
trieval tasks such as semantic search [19], which requires
the model to understand both language and real-world
knowledge [5, 22]. Such an ability allows the model to
retrieve a text response or encode a text query in multi-
channel video-language retrieval-style tasks. Thereby, once
the video information is effectively incorporated, it elimi-
nates the necessity of second-round “pretraining” on large
scale multimodal datasets, enabling fast adaptation to any
video-language retrieval-style tasks.
We first conduct a systematic analysis on potential model
designs, as shown in Figure 1. We identify the model de-
sign space with two design principles: how to represent the
video information and how to fuse this video information
with questions or other text such as speech. To represent
video information, we could either adopt Continuous Fea-
tures which is commonly used in existing work [18, 34],
or project videos into unified Text Tokens [11] from vari-
ous modalities. To fuse information from multiple channels,
i.e., video and question/speech, there are two potential op-
tions, namely, a Multimodal Transformer [34] or a Text
Transformer [18]. Hence, there are four combinations de-
rived from this model design space, namely, Continuous
Features + Multimodal Transformer [34], Continuous
Features + Text Transformer ,Text Tokens + Multimodal
Transformer ,Text Tokens + Text Transformer .
Our exploration of this model design space results in
a simple yet effective approach that allows fast adaptation
of pretrained contrastive models, which first leverages con-
trastive multimodal models to retrieve a sequence of Text
Tokens for the visual input and then feeds these tokens to-
gether with other text to contrastive Text Transformer for
answer retrieval. Its fast adaptation ability not only comes
from the ability of linking entities across modalities of the
contrastive multimodal model, but it also enjoys the natural
alignment with contrastive text models to produce discrimi-
native embeddings. To the best of our knowledge, this is the
first proposal to adapt pretrained contrastive models in this
manner, although each individual design choice may have
been adopted previously. We further conduct in-depth anal-
ysis to understand 1) the trade-off between data efficiency
and accuracy, 2) the impact of pretrained contrastive text
model, and 3) the possible limitation of this framework.
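A rough sketch of the Text Tokens + Text Transformer variant; the CLIP-style frame and vocabulary embeddings and the contrastive text-model embeddings are assumed to be precomputed by the respective pretrained models, and the top-k rule is illustrative:

```python
import torch

def video_to_text_tokens(frame_feats, vocab_feats, vocab_words, k=10):
    """Represent a video as discrete text tokens: score a word vocabulary against
    CLIP-style frame embeddings and keep the k best-matching words.
    frame_feats: (T, D), vocab_feats: (V, D); both assumed L2-normalized."""
    sim = (frame_feats @ vocab_feats.t()).max(dim=0).values  # best frame match per word, (V,)
    return [vocab_words[i] for i in sim.topk(k).indices]

def retrieve_answer(query_emb, answer_embs):
    """Pick the answer whose contrastive text-model embedding is closest to the
    embedding of '<retrieved video tokens> + question/speech'.
    query_emb: (D,), answer_embs: (A, D); assumed L2-normalized."""
    return int((answer_embs @ query_emb).argmax())
```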
The contribution could be summarized three-fold:
• We identified the principled model design space for
fast adaption of pretrained contrastive multimodal
models and pretrained contrastive text models.
• We conducted extensive experiments on five video-
language datasets, observed a consistent trend across
these four variants, and even obtained state-of-the-art
performance (e.g., 6.5% improvement on How2QA)
with the proposed Text Tokens + Text Transformer
variant without using millions of extra multimodal
data samples, which is essential to democratize the community of video+language.
• The proposed Text Tokens + Text Transformer vari-
ant scales significantly better than the other variants,
w.r.t. the quality of pretrained text models. The
code will be released at https://github.com/XudongLinthu/upgradable-multimodal-intelligence to facilitate future research.
|
Liu_MixMAE_Mixed_and_Masked_Autoencoder_for_Efficient_Pretraining_of_Hierarchical_CVPR_2023
|
Abstract
In this paper, we propose Mixed and Masked AutoEn-
coder (MixMAE), a simple but efficient pretraining method
that is applicable to various hierarchical Vision Transform-
ers. Existing masked image modeling (MIM) methods for
hierarchical Vision Transformers replace a random subset
of input tokens with a special [MASK] symbol and aim at
reconstructing original image tokens from the corrupted im-
age. However, we find that using the [MASK] symbol greatly
slows down the training and causes pretraining-finetuning
inconsistency, due to the large masking ratio (e.g., 60%
in SimMIM). On the other hand, MAE does not introduce
[MASK] tokens at its encoder at all but is not applicable
for hierarchical Vision Transformers. To solve the issue and
accelerate the pretraining of hierarchical models, we replace
the masked tokens of one image with visible tokens of an-
other image, i.e., creating a mixed image. We then conduct
dual reconstruction to reconstruct the two original images
from the mixed input, which significantly improves efficiency.
While MixMAE can be applied to various hierarchical Trans-
formers, this paper explores using Swin Transformer with a
large window size and scales up to huge model size (to reach
600M parameters). Empirical results demonstrate that Mix-
MAE can learn high-quality visual representations efficiently.
Notably, MixMAE with Swin-B/W14 achieves 85.1% top-1
accuracy on ImageNet-1K by pretraining for 600 epochs.
Besides, its transfer performances on the other 6 datasets
show that MixMAE has better FLOPs / performance tradeoff
than previous popular MIM methods.
|
1. Introduction
Utilizing unlabeled visual data in self-supervised manners
to learn representations is intriguing but challenging. Follow-
ing BERT [12] in natural language processing, pretraining
with masked image modeling (MIM) shows great success
Corresponding author.in learning visual representations for various downstream
vision tasks [4, 16, 33, 39, 40], including image classifica-
tion [11], object detection [25], semantic segmentation [42],
video classification [15], and motor control [39]. While
those state-of-the-art methods [4, 16] achieved superior per-
formance on vanilla Vision Transformer (ViT) [14, 34], it is
still an open question that how to effectively pretrain hierar-
chical ViT to purchase further efficiencies [8, 10, 26, 28, 35]
on broad vision tasks.
In general, existing MIM approaches replace a portion of
input tokens with a special [MASK] symbol and aim at re-
covering the original image patches [4, 40]. However, using
[MASK] symbol leads to two problems. On the one hand,
the[MASK] symbol used in pretraining never appears in the
finetuning stage, resulting in pretraining-finetuning incon-
sistency [12]. On the other hand, the pretrained networks
waste much computation on processing the less informative
[MASK] symbols, making the pretraining process inefficient.
Those problems become more severe when a large masking ratio
is used [4,16,33,40]. For example, in SimMIM [40], a mask-
ing ratio of 60% is used during the pretraining, i.e., 60% of
the input tokens are replaced with the [MASK] symbols. As
a result, SimMIM needs relatively more epochs (i.e., 800)
for pretraining. In addition, as the high masking ratio causes
much pretraining-finetuning inconsistency, the performances
of SimMIM on downstream tasks are limited.
In contrast, MAE [16] does not suffer from the above
problems by discarding the masked tokens in the encoder and
uses the [MASK] symbols only in the lightweight decoder.
MAE utilizes the vanilla ViT [14] as the encoder, which can
process the partial input efficiently with the self-attention
operation. However, the design also limits the application of
MAE on hierarchical ViTs as the hierarchical ViTs cannot
process 1D token sequences with arbitrary lengths [28, 35].
In this work, we propose MixMAE, a generalized pre-
training method that takes advantage of both SimMIM [40]
and MAE [40] while avoiding their limitations. In particular,
given two random images from the training set, MixMAE
creates a mixed image with random mixing masks as input
and trains a hierarchical ViT to reconstruct the two original
images to learn visual representations.
Table 1. Key differences between MixMAE and related works.
Approach     | Compatible with hierarchical ViT | Pretraining efficient | Pretrain-finetune consistent
BEiT [4]     | ✓                                | ✗                     | ✗
SimMIM [40]  | ✓                                | ✗                     | ✗
MAE [16]     | ✗                                | ✓                     | ✓
MixMAE       | ✓                                | ✓                     | ✓
From one image's
perspective, instead of replacing the masked tokens of the
image with the special [MASK] symbols, the masked tokens
are replaced by visible tokens of the other image. MixMAE
adopts an encoder-decoder design. The encoder is a hierar-
chical ViT and processes the mixed image to obtain hidden
representations of the two partially masked images. Before
the decoding, the hidden representations are unmixed and
filled with the [MASK] tokens. Following MAE [16], the
decoder is a small ViT to reconstruct the two original images.
We illustrate the proposed MixMAE in Figure 1.
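The mixing step itself is simple; a minimal sketch follows (the patch size and the 50% mixing ratio are illustrative):

```python
import torch

def mix_images(img_a, img_b, patch=16, mask_ratio=0.5):
    """Create a MixMAE-style mixed input: for each patch position, take the patch
    from image A or image B according to a random binary mask.
    img_a, img_b: (C, H, W) with H and W divisible by `patch`."""
    c, h, w = img_a.shape
    gh, gw = h // patch, w // patch
    mask = torch.rand(gh, gw) < mask_ratio                                    # True -> use B's patch
    mask_pix = mask.repeat_interleave(patch, 0).repeat_interleave(patch, 1)   # (H, W)
    mixed = torch.where(mask_pix, img_b, img_a)
    # The token-level mask tells the decoder how to un-mix the hidden states:
    # tokens belonging to the other image are replaced by [MASK] embeddings
    # before the dual reconstruction of both originals.
    return mixed, mask
```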
MixMAE can be widely applied to pretrain different hi-
erarchical ViTs, such as Swin Transformer [28], Twins [8],
PVT [35], etc. Thanks to the utilization of the hierarchical
architecture, we can naturally apply the pretrained encoder
to object detection and semantic segmentation tasks. Em-
pirically, with similar model sizes and FLOPs, MixMAE
consistently outperforms BEiT [4] and MAE [16] on a wide
spectrum of downstream tasks, including image classifica-
tion on iNaturalist [18] and Places [41], object detection
and instance segmentation on COCO [25], and semantic seg-
mentation on ADE20K [42]. By abandoning the use of [MASK]
tokens in the encoder, MixMAE shows much better pretrain-
ing efficiency than SimMIM [40] on various hierarchical
ViTs [8, 28, 35].
|
Lei_Blind_Video_Deflickering_by_Neural_Filtering_With_a_Flawed_Atlas_CVPR_2023
|
Abstract
Many videos contain flickering artifacts; common causes
of flicker include video processing algorithms, video gener-
ation algorithms, and capturing videos under specific situ-
ations. Prior work usually requires specific guidance such
as the flickering frequency, manual annotations, or extra
consistent videos to remove the flicker. In this work, we
propose a general flicker removal framework that only re-
ceives a single flickering video as input without additional
guidance. Since it is blind to a specific flickering type or
guidance, we name this “blind deflickering. ” The core of
our approach is utilizing the neural atlas in cooperation
with a neural filtering strategy. The neural atlas is a uni-
fied representation for all frames in a video that provides
temporal consistency guidance but is flawed in many cases.
To this end, a neural network is trained to mimic a filter
to learn the consistent features (e.g., color, brightness) and
avoid introducing the artifacts in the atlas. To validate our
method, we construct a dataset that contains diverse real-
world flickering videos. Extensive experiments show that
our method achieves satisfying deflickering performance
and even outperforms baselines that use extra guidance on a
public benchmark. The source code is publicly available at
https://chenyanglei.github.io/deflicker.
|
1. Introduction
A high-quality video is usually temporally consistent,
but many videos suffer from flickering artifacts for var-
ious reasons, as shown in Figure 2. For example, the
brightness of old movies can be very unstable since some
old cameras with low-quality hardware cannot set the ex-
posure time of each frame to be the same [16]. Be-
sides, high-speed cameras with very short exposure time
can capture the high-frequency (e.g., 60 Hz) changes of in-
door lighting [25]. Effective processing algorithms such
as enhancement [38, 45], colorization [29, 60], and style
transfer [33] might bring flicker when applied to tempo-
rally consistent videos. Videos from video generations ap-
proaches [46, 50, 61] also might contain flickering artifacts.
Since temporally consistent videos are generally more vi-
sually pleasing, removing the flicker from videos is highly
desirable in video processing [9, 13, 14, 54, 59] and compu-
tational photography.
In this work, we are interested in a general approach for
deflickering: (1) it is agnostic to the patterns or levels of
flickering (e.g., old movies, high-speed cameras, process-
ing artifacts), (2) it only takes a single flickering video and
does not require other guidance (e.g., flickering types, ex-
tra consistent videos). That is to say, this model is blind
to flickering types and guidance, and we name this task as
blind deflickering.
Figure 2. Videos with flickering artifacts (panels: old movies, old cartoons, time-lapse, slow-motion, colorization, white balance, intrinsic, text-to-video). Flickering artifacts exist in unprocessed videos, including old movies, old cartoons, time-lapse videos, and slow-motion videos. Besides, some processing algorithms [6, 18, 60] can introduce flicker to temporally consistent unprocessed videos. Synthesized videos from video generation approaches [46, 50, 61] might also be temporally inconsistent. Blind deflickering aims to remove various types of flickering with only an unprocessed video as input.
Thanks to the blind property, blind de-
flickering has very wide applications.
Blind deflickering is very challenging since it is hard to
enforce temporal consistency across the whole video with-
out any extra guidance. Existing techniques usually de-
sign specific strategies for each flickering type with specific
knowledge. For example, for slow-motion videos captured
by high-speed cameras, prior work [25] can analyze the
lighting frequency. For videos processed by image process-
ing algorithms, blind video temporal consistency [31, 32]
obtains long-term consistency by training on a temporally
consistent unprocessed video. However, the flickering types
or unprocessed videos are not always available, and existing
flickering-specific algorithms cannot be applied in this case.
One intuitive solution is to use the optical flow to track the
correspondences. However, the optical flow from the flick-
ering videos is not accurate, and the accumulated errors of
optical flow also increase with the number of frames
due to inaccurate estimation [7].
With two key observations and designs, we successfully
propose the first approach for blind deflickering that can re-
move various flickering artifacts without extra guidance or
knowledge of flicker. First, we utilize a unified video rep-
resentation named neural atlas [26] to solve the major chal-
lenge of solving long-term inconsistency. This neural atlas
tracks all pixels in the video, and correspondences in differ-
ent frames share the same pixel in this atlas. Hence, a se-
quence of temporally consistent frames can be obtained by
sampling from the shared atlas. Secondly, while the frames
from the shared atlas are consistent, the structures of images
are flawed: the neural atlas cannot easily model dynamic
objects with large motion; the optical flow used to construct
the atlas is imperfect. Hence, we propose a neural filtering
strategy to take the treasure and throw the trash from the
flawed atlas. A neural network is trained to learn the invari-
ant under two types of distortion, which mimics the artifacts
in the atlas and the flicker in the video, respectively. At test
time, this network works as a filter to preserve the consis-
tency property and block the artifacts from the flawed atlas.
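A loose sketch of the filtering idea under strong assumptions: the two distortions below are crude stand-ins for atlas artifacts and flicker, and the paper's actual distortion design, network inputs, and losses differ in detail:

```python
import torch

def neural_filter_step(net, clean, optimizer):
    """One hedged training step for the neural filtering idea. `clean` is a
    (B, C, H, W) batch; `net` maps 2C input channels back to C channels.
    A structural corruption mimics atlas artifacts, a global brightness change
    mimics flicker, and the network must recover the clean frame from the pair."""
    atlas_like = torch.roll(clean, shifts=(3, -2), dims=(-2, -1))  # crude structural distortion
    flicker_like = clean * (0.7 + 0.6 * torch.rand(1))             # crude brightness flicker
    pred = net(torch.cat([atlas_like, flicker_like], dim=1))       # concatenate along channels
    loss = torch.nn.functional.l1_loss(pred, clean)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```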
We construct the first dataset containing various types
of flickering videos to evaluate the performance of blind deflickering methods faithfully. Extensive experimental re-
sults show the effectiveness of our approach in handling dif-
ferent flicker. Our approach also outperforms blind video
temporal consistency methods that use an extra input video
as guidance on a public benchmark.
Our contributions can be summarized as follows:
• We formulate the problem of blind deflickering and
construct a deflickering dataset containing diverse
flickering videos for further study.
• We propose the first blind deflickering approach to re-
move diverse flicker. We introduce the neural atlas to
the deflickering problem and design a dedicated strat-
egy to filter the flawed atlas for satisfying deflickering
performance.
• Our method outperforms baselines on our dataset and
even outperforms methods that use extra input videos
on a public benchmark.
|
Li_DISC_Learning_From_Noisy_Labels_via_Dynamic_Instance-Specific_Selection_and_CVPR_2023
|
Abstract
Existing studies indicate that deep neural networks
(DNNs) can eventually memorize the label noise. We ob-
serve that the memorization strength of DNNs towards each
instance is different and can be represented by the con-
fidence value, which becomes larger and larger during
the training process. Based on this, we propose a Dy-
namic Instance-specific Selection and Correction method
(DISC) for learning from noisy labels (LNL). We first use
a two-view-based backbone for image classification, ob-
taining confidence for each image from two views. Then
we propose a dynamic threshold strategy for each instance,
based on the momentum of each instance’s memorization
strength in previous epochs to select and correct noisy la-
beled data. Benefiting from the dynamic threshold strategy
and two-view learning, we can effectively group each in-
stance into one of the three subsets (i.e., clean, hard, and
purified) based on the prediction consistency and discrep-
ancy by two views at each epoch. Finally, we employ differ-
ent regularization strategies to conquer subsets with differ-
ent degrees of label noise, improving the whole network’s
robustness. Comprehensive evaluations on three control-
lable and four real-world LNL benchmarks show that our
method outperforms the state-of-the-art (SOTA) methods
to leverage useful information in noisy data while allevi-
ating the pollution of label noise. Code is available at
https://github.com/JackYFL/DISC .
|
1. Introduction
Label noise is inevitable in image classification model
learning, especially for large-scale database annotations
through web-crawling [31,40], crowd-sourcing [52], or pre-
trained models [12], etc.
Figure 1. An illustration of DNN's increasing memorization strength during network training. The class prototypes are weights of the DNN classifier, and for simplicity, we only take a two-class case as an example. (a) In the beginning, DNN first fits clean data whose features are closer to class prototypes than noisy data, which is more sparsely distributed in feature space. (b) As training progresses, DNN begins to fit slightly noisy data, some of which can also be classified to its labeled class with relatively high confidence. (c) By the end of the training, the DNN has greatly increased its memorization strength, and even extremely noisy data can also be grouped into its labeled class with high confidence.
Recent studies show that DNNs
are susceptible to label noise and could fit to the entire data
set [2, 55] including the noisy set. Meanwhile, researchers
found that DNNs have a memorization effect [2], i.e., the
learning process of DNNs follows a curriculum, in which
simple patterns are memorized first, followed by more diffi-
cult ones like data with noisy labels. Recent studies have
explored the use of memorization effect for LNL tasks,
with many of these approaches being "early-learning"-
based methods [1, 7, 11, 17, 23, 24, 29, 30, 32, 44, 53, 57, 58].
These methods leverage an early-stage DNN to improve the
model robustness and generalization ability.
Early-learning-based LNL methods can be divided into
three main directions: sample selection [7,17,23,24,29,53],
label correction [1,30,44,57] and regularization [11,32,37,
56, 58, 60]. Sample selection-based methods usually utilize
the early-stage DNN’s losses or confidence to select reliable
instances, which are utilized to update the network.
Figure 2. An illustration of different threshold strategies, including (a) the global threshold, (b) the class-wise threshold, and (c) the proposed dynamic instance-specific threshold.
Some
of these methods require a predefined threshold [24, 30] or
prior knowledge about the noise label rate [17, 53] to select
instances. Such a global predefined threshold, as shown in
Fig. 2 (a), is usually difficult to determine, and may require
prior knowledge about the noise label rate to avoid exces-
sive or insufficient noisy data selection, which will further
lead to over-fitting and confirmation bias issues [17]. La-
bel correction-based methods try to learn [44] or generate
pseudo-labels [30, 57] to replace the original noisy ones.
Many of these methods employ the semi-supervised learn-
ing (SSL) techniques to pseudo-label the noisy data, and
most of them use a global (e.g., MixMatch [3], FixMatch [39])
or class-wise threshold (e.g., FlexMatch [54]) to recalibrate
labels. As shown in Fig. 2 (b), while a class-wise thresh-
old considers the fitting difficulty of different classes, it still
applies a uniform threshold for individual instances in each
class. This remains sub-optimal if we consider an example
of face images, in which profile face images are more dif-
ficult to fit (relatively lower classification confidence) than
frontal ones (relatively higher classification confidence) of
the same subject. Regularization-based methods aim to de-
sign robust loss functions [11, 32, 37, 58, 60, 61] or regular-
ization techniques such as augmentation [56] that can uti-
lize all instances to improve the model robustness against
label noise. While these methods work well on moderately
noisy data, they may have poor generalization ability under
extremely noisy data (see Table 1), since all instances are
utilized during the training process.
Based on the observation that the memorization strength
for individual instances increases during network training,
we argue that neither a global threshold nor a class-wise
threshold is optimum for LNL. Therefore, we propose a Dy-
namic Instance-specific Selection and Correction (DISC)
approach (see Fig. 3 (a)) for LNL. DISC leverages a dy-
namic instance-specific threshold strategy (Fig. 2 (c)) fol-
lowing a memorization curriculum to select reliable in-
stances and correct noisy labels. Each threshold is ob-
tained through the momentum of each instance’s memoriza-
tion strength in previous epochs. Such a dynamic threshold
strategy can determine a reasonable threshold for each in-
stance according to its memorization strength by the net-
work. Inspired by previous methods of RRL [30], AugDisc
[35] and FixMatch [39], DISC also adopts weak and strong augmentations to produce two different views for image
classification via a shared-weight model. Unlike previous
methods, which use predictions from one view to select
reliable instances or generate pseudo-labels for unlabeled
data, DISC considers the consistency and discrepancy of
two views and divides the noisy data into reliable instances
(clean set), hard instances (hard set), and recalibrate noisy
labeled instances (purified set), reflecting different degrees
of label noise. By dividing the noisy data into three differ-
ent subsets, DISC can alleviate the contamination of noisy
labels to LNL model learning by conquering them via dif-
ferent regularization strategies. As a result, the method can
better make full use of the whole noisy dataset. The contri-
butions of this paper include:
• We observe the memorization strength of DNNs towards
individual instances can be represented by confidence
value, which increases along with training. We provide
evidence and experimental analyses to validate this claim.
• Based on the insight of memorization strength, we pro-
pose a simple yet effective dynamic instance-specific
threshold strategy of LNL that selects reliable instances
and recalibrates noisy labels following an easy to hard
curriculum.
• Additionally, we leverage the dynamic threshold strategy
to group noisy data into three subsets based on predictions
from two views generated from weak and strong augmen-
tations. We then adopt different regularization strategies
to handle individual subsets.
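Returning to the dynamic threshold strategy described above, the sketch below illustrates the momentum (EMA) update of a per-instance threshold and one plausible two-view grouping into clean, hard, and purified subsets; the exact update and grouping rules follow the paper, and all constants here are illustrative:

```python
import numpy as np

def select_and_correct(conf_w, conf_s, thresh, labels, momentum=0.9):
    """Loose sketch of dynamic instance-specific selection and correction.
    conf_w, conf_s: (N, C) softmax outputs of the weak / strong views.
    thresh: (N,) running per-instance threshold; labels: (N,) possibly noisy labels."""
    idx = np.arange(len(labels))
    p_w, p_s = conf_w[idx, labels], conf_s[idx, labels]
    # momentum update: the threshold tracks each instance's memorization strength
    thresh = momentum * thresh + (1 - momentum) * 0.5 * (p_w + p_s)

    ok_w, ok_s = p_w > thresh, p_s > thresh
    clean = idx[ok_w & ok_s]                      # both views trust the given label
    hard = idx[ok_w ^ ok_s]                       # only one view is confident
    pred_w, pred_s = conf_w.argmax(1), conf_s.argmax(1)
    purified = idx[~ok_w & ~ok_s & (pred_w == pred_s)]   # relabel by two-view agreement
    return thresh, clean, hard, purified
```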
|
Kong_Understanding_Masked_Autoencoders_via_Hierarchical_Latent_Variable_Models_CVPR_2023
|
Abstract
Masked autoencoder (MAE), a simple and effective self-
supervised learning framework based on the reconstruction
of masked image regions, has recently achieved prominent
success in a variety of vision tasks. Despite the emergence
of intriguing empirical observations on MAE, a theoreti-
cally principled understanding is still lacking. In this work,
we formally characterize and justify existing empirical in-
sights and provide theoretical guarantees of MAE. We for-
mulate the underlying data-generating process as a hierar-
chical latent variable model, and show that under reason-
able assumptions, MAE provably identifies a set of latent
variables in the hierarchical model, explaining why MAE
can extract high-level information from pixels. Further, we
show how key hyperparameters in MAE (the masking ratio
and the patch size) determine which true latent variables
to be recovered, therefore influencing the level of semantic
information in the representation. Specifically, extremely
large or small masking ratios inevitably lead to low-level
representations. Our theory offers coherent explanations
of existing empirical observations and provides insights for
potential empirical improvements and fundamental limita-
tions of the masked-reconstruction paradigm. We conduct
extensive experiments to validate our theoretical insights.
|
1. Introduction
Self-supervised learning (SSL) has achieved tremendous
success in learning transferable representations without la-
bels, showing strong results in a variety of downstream
tasks [12,14, 16,23,49]. As a major SSL paradigm, masked
image modeling (MIM) [1–3, 11,13,22,41,63,69] performs
the reconstruction of purposely masked image pixels as the
pretraining task. Among MIM methods, masked autoen-
coding (MAE) [22] has gained significant traction due to its
computational efficiency and state-of-the-art performance
in a wide range of downstream tasks.
Empirical observations from previous work reveal vari-
ous intriguing properties of MAE. In particular, aggressive
Figure 1. Masking-reconstruction under a hierarchical generating
process. In a hierarchical data-generating process, high-level latent vari-
ables (e.g., z1) represent high-level information such as semantics, and
low-level latent variables (e.g., [z2,z3,z4]) represent low-level informa-
tion such as texture. We show that through proper masking, MAE learns
to recover high-level latent variables with identifiability guarantees.
masking has been shown critical to downstream task per-
formances [22, 28,61,63]. It is conjectured that such mask-
ing forces the model to learn meaningful high-level seman-
tic understanding of the objects and scenes rather than the
low-level information such as texture. However, it remains
largely unclear whether such intuitions are sound in princi-
ple. Theoretically verifying and characterizing these empir-
ical insights would not only grant a certificate to the current
approaches but would also offer theoretical insights for al-
gorithmic advancements.
In this work, we establish a principled yet intuitive
framework for understanding MAE and providing identifia-
bility guarantees. Concretely, we first formulate the under-
lying data-generating process as a hierarchical latent vari-
able model (Figure 1), with high-level variables correspond-
ing to abstract and semantic information like classes, and
low-level variables corresponding to elaborate and granular
information like texture. Such latent variable models have
been studied in causal discovery [29, 62]. In [27, 50], it is
hypothesized that complex data, such as images, follow a
hierarchical latent structure.
Stemming from this formulation, we show that under
reasonable assumptions, MAE can recover a subset of the
true latent variables within the hierarchy, where the levels
of the learned latent variables are explicitly determined by
how masking is performed. Our theoretical framework not
only unifies existing empirical observations in a coherent
fashion but also gives rise to insights for potential empir-
ical improvements.
|
Kim_Coreset_Sampling_From_Open-Set_for_Fine-Grained_Self-Supervised_Learning_CVPR_2023
|
Abstract
Deep learning in general domains has constantly been
extended to domain-specific tasks requiring the recognition
of fine-grained characteristics. However, real-world appli-
cations for fine-grained tasks suffer from two challenges: a
high reliance on expert knowledge for annotation and ne-
cessity of a versatile model for various downstream tasks in
a specific domain (e.g., prediction of categories, bounding
boxes, or pixel-wise annotations). Fortunately, the recent
self-supervised learning (SSL) is a promising approach to
pretrain a model without annotations, serving as an effective
initialization for any downstream tasks. Since SSL does not
rely on the presence of annotation, in general, it utilizes the
large-scale unlabeled dataset, referred to as an open-set. In
this sense, we introduce a novel Open-Set Self-Supervised
Learning problem under the assumption that a large-scale
unlabeled open-set is available, as well as the fine-grained
target dataset, during a pretraining phase. In our problem
setup, it is crucial to consider the distribution mismatch be-
tween the open-set and target dataset. Hence, we propose
SimCore algorithm to sample a coreset, the subset of an
open-set that has a minimum distance to the target dataset in
the latent space. We demonstrate that SimCore significantly
improves representation learning performance through ex-
tensive experimental settings, including eleven fine-grained
datasets and seven open-sets in various downstream tasks.
|
1. Introduction
The success of deep learning in general computer vision
tasks has encouraged its widespread applications to specific
domains of industry and research [ 21,46,73], such as facial
recognition or vehicle identification. We particularly focus
on the visual recognition of fine-grained datasets, where the
goal is to differentiate between hard-to-distinguish images.
However, real-world application for fine-grained tasks poses
two challenges for practitioners and researchers developing
algorithms. First, it requires a number of experts for anno-
tation, which incurs a large cost [3, 14, 60]. For example,
ordinary people do not have professional knowledge about
aircraft types or fine-grained categories of birds. Therefore,
a realistic presumption for a domain-specific fine-grained
dataset is that there may be no or very few labeled samples.
Second, fine-grained datasets are often re-purposed or used
for various tasks according to the user’s demand, which mo-
tivates development of a versatile pretrained model. One
might ask, as a target task, that bird images be classified
by species or even segmented into foreground and back-
ground. A good initialization model can handle a variety
of annotations for fine-grained datasets, such as multiple
attributes [ 43,47,74], pixel-level annotations [ 53,74], or
bounding boxes [ 35,47,53].
Recently, self-supervised learning (SSL) [ 11,14,24,28]
has enabled learning how to represent the data even without
annotations, such that the representations serve as an effec-
tive initialization for any future downstream tasks. Since
labeling is not necessary, SSL generally utilizes an open-
set, or large-scale unlabeled dataset, which can be easily
obtained by web crawling [ 23,64], for the pretraining. In
this paper, we introduce a novel Open-Set Self-Supervised
Learning (OpenSSL) problem, where we can leverage the
open-set as well as the training set of fine-grained target
dataset. Refer to Figure 1 for the overview of OpenSSL.
In the OpenSSL setup, since the open-set may contain
instances irrelevant to the fine-grained dataset, we should
consider the distribution mismatch. A large distribution
mismatch might inhibit representation learning for the target
task. For instance, in Figure 2, SSL on the open-set ( OS) does
not always outperform SSL on the fine-grained dataset ( X)
because it depends on the semantic similarity between Xand
OS. This is in line with the previous observations [ 21,22,64]
that the performance of self-supervised representation on
downstream tasks is correlated with similarity of pretraining
and fine-tuning datasets.
To alleviate this mismatch issue, we could exploit a core-
set, a subset of an open-set, which shares similar semantics
with the target dataset. As a motivating experiment, we man-
ually selected the relevant classes from ImageNet ( OSoracle)
that are supposed to be helpful according to each target
dataset.
Figure 1. Overview of an OpenSSL problem. For any downstream tasks, we pretrain an effective model with the fine-grained dataset via self-supervised learning (SSL). Here, the assumption for a large-scale unlabeled open-set in the pretraining phase is well-suited for a real-world scenario. The main goal is to find a coreset, highlighted by the blue box, among the open-set to enhance fine-grained SSL.
Figure 2. Linear evaluation performance on the fine-grained target dataset. Each color corresponds to a pretraining dataset, while “+” means merging two datasets. The table on the right side shows the manually selected categories from an open-set (OS), ImageNet-1k [20] in this case, according to each target dataset (X). Selected categories and exact numbers are detailed in Appendix A. We followed the typical linear evaluation protocol [14, 24] and used the SimCLR method [14] on ResNet50 encoder [29]. (Table in Figure 2: Target (X) and classes for OSoracle (#): Aircraft: airliner, warplane, ... (8); Cars: convertible, jeep, ... (10); Pet: Persian cat, beagle, ... (24); Birds: goldfinch, junco, ... (20).)
Interestingly, in Figure 2, merging OSoracle to X
shows a significant performance gain, and its superiority
over merging the entire open-set ( X+OS) or the randomly
sampled subset ( X+OSrand) implies the necessity of a sam-
pling algorithm for the coreset in the OpenSSL problem.
Therefore, we propose SimCore , a simple yet effective
coreset sampling algorithm from an unlabeled open-set. Our
main goal is to find a subset semantically similar to the target
dataset. We formulate the data subset selection problem to
obtain a coreset that has a minimum distance to the target
dataset in the latent space. SimCore significantly improves
performance in extensive experimental settings (eleven fine-
grained datasets and seven open-sets), and shows consistent
gains with different architectures, SSL losses, and down-
stream tasks. Our contributions are outlined as follows:
•We first propose a realistic OpenSSL task, assuming an
unlabeled open-set available during the pretraining phase
on the fine-grained dataset.
•We propose a coreset selection algorithm, SimCore, to
leverage a subset semantically similar to the target dataset.
•Our extensive experiments with eleven fine-grained
datasets and seven open-sets substantiate the significance
of data selection in our OpenSSL problem.
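As a minimal sketch of the coreset idea referenced above (nearest-neighbor similarity in a shared latent space; the features are assumed to come from the same self-supervised encoder, and the simple top-k rule is illustrative rather than SimCore's exact formulation):

```python
import torch

def sample_coreset(openset_feat, target_feat, budget):
    """Pick the open-set samples closest to the target dataset in latent space.
    openset_feat: (M, D), target_feat: (N, D); both assumed L2-normalized features
    from the same encoder. Returns `budget` open-set indices.
    For large M and N, this similarity matrix would be chunked or replaced by a
    nearest-neighbor index in practice."""
    sim = openset_feat @ target_feat.t()       # (M, N) cosine similarity
    nearest_sim = sim.max(dim=1).values        # similarity to the closest target sample
    return nearest_sim.topk(budget).indices    # most target-like open-set samples
```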
|
Lai_Spherical_Transformer_for_LiDAR-Based_3D_Recognition_CVPR_2023
|
Abstract
LiDAR-based 3D point cloud recognition has benefited
various applications. Without specially considering the Li-
DAR point distribution, most current methods suffer from
information disconnection and limited receptive field, es-
pecially for the sparse distant points. In this work, we
study the varying-sparsity distribution of LiDAR points and
present SphereFormer to directly aggregate information
from dense close points to the sparse distant ones. We de-
sign radial window self-attention that partitions the space
into multiple non-overlapping narrow and long windows.
It overcomes the disconnection issue and enlarges the re-
ceptive field smoothly and dramatically, which significantly
boosts the performance of sparse distant points. Moreover,
to fit the narrow and long windows, we propose exponen-
tial splitting to yield fine-grained position encoding and
dynamic feature selection to increase model representation
ability. Notably, our method ranks 1st on both nuScenes and
SemanticKITTI semantic segmentation benchmarks with
81.9% and 74.8% mIoU, respectively. Also, we achieve
the 3rd place on nuScenes object detection benchmark with
72.8% NDS and 68.5% mAP. Code is available at
https://github.com/dvlab-research/SphereFormer.git.
|
1. Introduction
Nowadays, point clouds can be easily collected by Li-
DAR sensors. They are extensively used in various indus-
trial applications, such as autonomous driving and robotics.
In contrast to 2D images where pixels are arranged densely
and regularly, LiDAR point clouds possess the varying-
sparsity property — points near the LiDAR are quite dense,
while points far away from the sensor are much sparser, as
shown in Fig. 2 (a).
However, most existing work [12, 13, 24, 25, 55, 70–72]
does not specially consider the the varying-sparsity point
distribution of outdoor LiDAR point clouds. They inherit
from 2D CNNs or 3D indoor scenarios, and conduct local
operators ( e.g., SparseConv [24, 25]) uniformly for all lo-
cations. This causes inferior results for the sparse distant
points. As shown in Fig. 1, although decent performance
is yielded for the dense close points, it is difficult for these
methods to deal with the sparse distant points optimally.
Figure 1. Semantic segmentation performance on the nuScenes val set for points at different distances: close (<20m), medium (>=20m, <50m), and far (>=50m), comparing SparseConv, cubic self-attention, and ours (% mIoU).
We note that the root cause lies in limited receptive field.
For sparse distant points, there are few surrounding neigh-
bors. This not only results in inconclusive features, but also
hinders enlarging receptive field due to information discon-
nection. To verify this finding, we visualize the Effective
Receptive Field (ERF) [40] of the given feature (shown with
the yellow star) in Fig. 2 (d). The ERF cannot be expanded
due to disconnection, which is caused by the extreme spar-
sity of the distant car.
Although window self-attention [22, 30], dilated self-
attention [42], and large-kernel CNN [10] have been pro-
posed to conquer the limited receptive field, these methods
do not specially deal with LiDAR point distribution, and re-
main to enlarge receptive field by stacking local operators
as before, leaving the information disconnection issue still
unsolved. As shown in Fig. 1, the method of cubic self-
attention brings a limited improvement.
In this paper, we take a new direction to aggregate long-
range information directly in a single operator to suit the
varying-sparsity point distribution. We propose the module
of SphereFormer to perceive useful information from points
Figure 2. Effective Receptive Field (ERF) of SparseConv and ours. (a) LiDAR point cloud. (b) Radial window partition. Only a single
radial window is shown. Points inside the window are marked in red. (c) Zoom-in on sparse distant points. A sparse car is circled in yellow.
(d) ERF of SparseConv, given the point of interest (with yellow star). White and red denote high contribution. (e) ERF of ours.
50+ meters away and yield large receptive field for feature
extraction. Specifically, we represent the 3D space using
spherical coordinates (r, θ, ϕ) with the sensor being the ori-
gin, and partition the scene into multiple non-overlapping
windows. Unlike the cubic window shape, we design radial
windows that are long and narrow. They are obtained by
partitioning only along the θ and ϕ axes, as shown in Fig. 2
(b). It is noteworthy that we make it a plugin module to
conveniently insert into existing mainstream backbones.
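As a concrete illustration of the radial partition (our own sketch, not the released implementation; the angular window sizes below are assumed values), points can be grouped by discretizing only the two angular coordinates, so that every window stretches along the full radial range:

```python
# Hypothetical sketch of radial window partitioning.
# Points are converted to spherical coordinates with the sensor at the origin,
# and a window index is assigned by discretizing only theta and phi, so each
# narrow, long window spans from dense close points to sparse far ones.
import numpy as np

def radial_window_ids(xyz, delta_theta=np.radians(1.5), delta_phi=np.radians(1.5)):
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                # radius; windows span the full radial range
    theta = np.arctan2(y, x)                       # azimuth in [-pi, pi]
    phi = np.arccos(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))  # inclination from +z
    n_theta = int(np.ceil(2 * np.pi / delta_theta))
    t_idx = np.minimum(np.floor((theta + np.pi) / delta_theta).astype(np.int64), n_theta - 1)
    p_idx = np.floor(phi / delta_phi).astype(np.int64)
    return p_idx * n_theta + t_idx                 # one id per radial window

# Self-attention would then be computed independently among points sharing a window id.
```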
The proposed module does not rely on stacking local op-
erators to expand receptive field, thus avoiding the discon-
nection issue, as shown in Fig. 2 (e). Also, it facilitates
the sparse distant points to aggregate information from the
dense-point region, which is often semantically rich. So,
the performance of the distant points can be improved sig-
nificantly ( i.e., +17.1% mIoU) as illustrated in Fig. 1.
Moreover, to fit the long and narrow radial windows, we
propose exponential splitting to obtain fine-grained relative
position encoding. The radius rof a radial window can
be over 50 meters, which causes large splitting intervals.
It thus results in coarse position encoding when converting
relative positions into integer indices. Besides, to let points
at varying locations treat local and global information dif-
ferently, we propose dynamic feature selection to make fur-
ther improvements.
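A minimal sketch of how such an exponential splitting of the radial relative position could be indexed is given below; the bin count and distance bounds are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch of exponential splitting for the radial relative position index.
# Uniform splitting of a >50 m radial range would give coarse integer indices;
# a log-scale split keeps nearby intervals fine and distant ones coarse.
import numpy as np

def exp_split_index(rel_r, num_bins=24, r_min=0.5, r_max=60.0):
    sign = np.sign(rel_r)
    mag = np.clip(np.abs(rel_r), r_min, r_max)
    # map |rel_r| in [r_min, r_max] onto [0, num_bins) on a logarithmic scale
    idx = np.floor(np.log(mag / r_min) / np.log(r_max / r_min) * num_bins)
    idx = np.clip(idx, 0, num_bins - 1)
    return (sign * idx + num_bins).astype(np.int64)  # shift to non-negative indices

# The resulting integer index would look up a learnable relative position bias table.
```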
In total, our contribution is three-fold.
• We propose SphereFormer to directly aggregate long-
range information from dense-point region. It in-
creases the receptive field smoothly and helps improve the performance of sparse distant points.
• To accommodate the radial windows, we develop ex-
ponential splitting for relative position encoding. Our
dynamic feature selection further boosts performance.
• Our method achieves new state-of-the-art results on
multiple benchmarks of both semantic segmentation
and object detection tasks.
|
Liu_Revisiting_Temporal_Modeling_for_CLIP-Based_Image-to-Video_Knowledge_Transferring_CVPR_2023
|
Abstract
Image-text pretrained models, e.g., CLIP , have shown
impressive general multi-modal knowledge learned from
large-scale image-text data pairs, thus attracting increas-
ing attention for their potential to improve visual represen-
tation learning in the video domain. In this paper, based
on the CLIP model, we revisit temporal modeling in the
context of image-to-video knowledge transferring, which is
the key point for extending image-text pretrained models to
the video domain. We find that current temporal model-
ing mechanisms are tailored to either high-level semantic-
dominant tasks (e.g., retrieval) or low-level visual pattern-
dominant tasks (e.g., recognition), and fail to work on the
two cases simultaneously. The key difficulty lies in modeling
temporal dependency while taking advantage of both high-
level and low-level knowledge in CLIP model. To tackle
this problem, we present Spatial-Temporal Auxiliary Net-
work (STAN) – a simple and effective temporal modeling
mechanism extending CLIP model to diverse video tasks.
Specifically, to realize both low-level and high-level knowl-
edge transferring, STAN adopts a branch structure with
decomposed spatial-temporal modules that enable multi-
level CLIP features to be spatial-temporally contextual-
ized. We evaluate our method on two representative video
tasks: Video-Text Retrieval and Video Recognition. Exten-
sive experiments demonstrate the superiority of our model
over the state-of-the-art methods on various datasets, in-
cluding MSR-VTT, DiDeMo, LSMDC, MSVD, Kinetics-400,
and Something-Something-V2. Codes will be available at
https://github.com/farewellthree/STAN
|
1. Introduction
Recent years have witnessed the great success of image-
text pretrained models such as CLIP [31]. Pretrained on
over 400M image-text data pairs, these models learned
transferable rich knowledge for various image understand-
ing tasks. Similarly, video domains also call for a CLIP-like model to solve downstream video tasks. However, it is hard
to get a pretrained model as powerful as CLIP in the video
domain due to the unaffordable demands on computation re-
sources and the difficulty of collecting video-text data pairs
as large and diverse as image-text data. Instead of directly
pursuing video-text pretrained models [16, 26], a potential
alternative solution that benefits video downstream tasks is
to transfer the knowledge in image-text pretrained models
to the video domain, which has attracted increasing atten-
tion in recent years [11, 12, 25, 28, 29, 40].
Extending pretrained 2D image models to the video do-
main is a widely-studied topic in deep learning [4, 7], and
the key point lies in empowering 2D models with the ca-
pability of modeling temporal dependency between video
frames while taking advantages of knowledge in the pre-
trained models. In this paper, based on CLIP [31], we revisit
temporal modeling in the context of image-to-video knowl-
edge transferring, and present Spatial-Temporal Auxiliary
Network (STAN) – a new temporal modeling method that
is easy and effective for extending image-text pretrained
model to diverse downstream video tasks.
We find that current efforts on empowering CLIP with
temporal modeling capability can be roughly divided into
posterior structure based methods and intermediate struc-
ture based methods as shown in Fig. 1(a). Posterior struc-
ture based methods [11,12,25] employ a late modeling strat-
egy, which take CLIP as a feature extractor and conduct
temporal modeling upon the embeddings of video frames
extracted independently from CLIP . Upon the highly se-
mantic embeddings, though the structure is beneficial for
transferring the well-aligned visual-language representation
(i.e.,high-level knowledge) to downstream tasks, it hardly
captures the spatial-temporal visual patterns ( i.e.,low-level
knowledge) among different frames, which is important for
video understanding. As shown in Fig. 1(b), compared to
the CLIP baseline that employs a naive mean pooling to
aggregate the features of all frames to obtain a video rep-
resentation, the performance improvement brought by the
typical posterior structure, i.e., CLIP4clip-seqTrans [25], is
trivial, especially on the video action recognition task where
Figure 1. (a) Illustration of temporal modeling with the posterior structure (left), intermediate structure (middle), and our branch structure (right).
(b) Performance comparison among the posterior structure based CLIP4clip-seqTrans [25], intermediate structure based XCLIP [28], and
our branch structure based STAN. We take the CLIP model with naive mean pooling to aggregate the features of all frames into video
representations as the baseline. We present the improvement brought by different methods over this baseline w.r.t. Recall@1 on MSRVTT
for video-text retrieval and Top-1 accuracy on Kinetics-400 for video recognition.
spatial-temporal visual patterns are crucial.
In contrast to posterior structure based methods, inter-
mediate structure based methods [4, 28, 29] strengthen the
spatial-temporal modeling capability of CLIP via plugging
temporal modeling modules directly between CLIP layers,
and achieve 3.7% improvement over the baseline on the
video action recognition task. However, we find that in-
serting additional modules into CLIP would impact the pre-
trained high-level semantic knowledge in the model, which
only outperforms the baseline by 0.2% on the video-text
retrieval tasks. Therefore, modeling temporal dependency
while taking advantage of knowledge in different levels of
representation is important for extending the CLIP model to
the video domain.
Unlike the above methods, inspired by FPN [22] that
introduces a branch network to strengthen multi-level rep-
resentation learning for CNNs, our proposed STAN em-
ploys a new branch structure outside of the visual back-
bone, as shown in Fig. 1(a). Thanks to the branch structure,
STAN augments the features of video frames with spatial-
temporal contexts at different CLIP output levels without
affecting the forward-propagating of CLIP itself. Thus, it
is able to take advantage of both high-level and low-level
knowledge in the pretrained model simultaneously, and ef-
fectively extends CLIP to diverse downstream video tasks.
STAN consists of multiple layers with a spatial-temporal
separated design. Specifically, the layer operates spatial-
temporal modeling via alternatively stacking two separate
modules – an intra-frame module and a cross-frame module,
which enables the layer to boost the performance of model
via reusing the pretrained parameter of CLIP layers to ini-
tialize the intra-frame spatial modules. We further investi-
gate two instantiations of cross-frame modules, i.e.,the self-
attention-based module and 3D convolution based module,
to facilitate the comprehensive understanding of STAN in different implementations.
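As an illustration of the decomposed design (a hedged sketch under our own assumptions about hidden size, head count, and self-attention as the cross-frame module, not the authors' code), one STAN-style layer could alternate intra-frame and cross-frame attention as follows:

```python
# Illustrative PyTorch sketch of one layer with separated intra-frame (spatial)
# and cross-frame (temporal) modules; dimensions and layer choices are assumptions.
import torch
import torch.nn as nn

class STANLayer(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):                  # x: (B, T, N, C) frame tokens
        B, T, N, C = x.shape
        s = x.reshape(B * T, N, C)         # intra-frame attention over spatial tokens
        h = self.norm1(s)
        s = s + self.spatial(h, h, h)[0]
        t = s.reshape(B, T, N, C).permute(0, 2, 1, 3).reshape(B * N, T, C)
        g = self.norm2(t)                  # cross-frame attention over the time axis
        t = t + self.temporal(g, g, g)[0]
        return t.reshape(B, N, T, C).permute(0, 2, 1, 3)  # back to (B, T, N, C)
```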
We evaluate our STAN on both the high-level semantic-
dominant task ( i.e.,video-text retrieval) and low-level vi-
sual pattern-dominant task (i.e., video recognition), trial-
ing our methods from the two different perspectives. Ex-
tensive experiments demonstrate our expanded models are
generally effective on the two different tasks. For video-
text retrieval, we surpass the CLIP4clip by +3.7%, +3.1%,
and +2.1% R@1 on MSRVTT, DiDemo, and LSMDC.
For video recognition, we achieve competitive performance
on Kinetics-400, with 88× fewer FLOPs than Swin3D-L
[24] and improve CLIP baseline by 20%+ on Something-
Something-V2.
Our main contributions are summarized as: (1) we re-
visit temporal modeling in the context of image-to-video
knowledge transferring and figure out that the key challenge
lies in modeling temporal dependency while taking advan-
tage of both high-level and low-level knowledge; (2) we
propose Spatial-Temporal Auxiliary Network (STAN) – a
new branch structure for temporal modeling, which facil-
itates representation learning of video frames with includ-
ing spatial-temporal contexts at different levels and better
transfer the pretrained knowledge in CLIP to diverse video
tasks; (3) our method achieves competitive results on both
video-text retrieval and video recognition tasks compared to
SOTA methods.
|
Lee_Fix_the_Noise_Disentangling_Source_Feature_for_Controllable_Domain_Translation_CVPR_2023
|
Abstract
Recent studies show strong generative performance in
domain translation especially by using transfer learning
techniques on the unconditional generator. However, the
control between different domain features using a single
model is still challenging. Existing methods often require
additional models, which is computationally demanding
and leads to unsatisfactory visual quality. In addition, they
have restricted control steps, which prevents a smooth tran-
sition. In this paper, we propose a new approach for high-
quality domain translation with better controllability. The
key idea is to preserve source features within a disentangled
subspace of a target feature space. This allows our method
to smoothly control the degree to which it preserves source
features while generating images from an entirely new do-
main using only a single model. Our extensive experiments
show that the proposed method can produce more consistent
and realistic images than previous works and maintain pre-
cise controllability over different levels of transformation.
The code is available at LeeDongYeun/FixNoise.
|
1. Introduction
Image translation between different domains is a long-
standing problem in computer vision [8, 9,13,20,22,24,
35,52,62]. Controllability in domain translation is impor-
tant since it allows the users to set the desired properties.
Recently, several studies have shown promising results in
domain translation using a pre-trained unconditional gen-
erator, such as StyleGAN2 [27], and its fine-tuned version
[29,30,42,48]. These studies implemented domain trans-
lation by embedding an image from the source domain to
the latent space of the source model and by providing the
obtained latent code into the target model to generate a tar-
get domain image. To preserve semantic correspondence
between different domains, previous works commonly fo-
cused on the hierarchical design of the unconditional gener-
ator. They used several techniques like freezing [30] and
swapping [42] layers or both [29]. In these approaches,
users can control the degree of preserved source features
by setting the number of freezing or swapping layers of the
target model differently.
However, one of the notable limitations of the previous
methods is that they cannot control features across domains
in a single model. Imagine morphing between two images
x0andx1. Previous methods approximated midpoints be-
tween x0andx1by either building a new hybrid model by
converting weights or training a new model. In these ap-
proaches, each intermediate point is drawn from the output
distribution of different models, which would produce in-
consistent results. Moreover, getting an additional model
for each intermediate point (image) also increases the com-
putational cost. Another common limitation of these layer-
based methods is that their control levels are discrete and
restricted to the number of layers, which prevents fine-grain
control.
In this paper, we introduce a new training strategy,
FixNoise, for cross-domain controllable domain translation.
To control features across domains in a single model, we
argue that the source features should be preserved but dis-
entangled with the target in the model’s inherited space. To
this end, we focus on the fact that the noise input of Style-
GAN2, which is added after each convolution, expands the
functional space composed of the latent code expression. In
other words, the feature space could be seen as a set of sub-
spaces corresponding to each random noise. To preserve
the source features only to a particular subset of the fea-
ture space of the target model, we fix the noise input when
applying a simple feature matching loss. The disentangled
feature space allows our method to fine-grain control the
preserved source features only in a single model without
limited control steps through linear interpolation between
the fixed and random noise. The extensive experiments
demonstrate that our approach can generate more consistent
and realistic results than existing methods on cross-domain
feature control and also show better performance on domain
translation qualitatively and quantitatively.
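To make the control mechanism concrete, the following minimal sketch (assuming a StyleGAN2-style generator that accepts per-layer noise tensors; the interface shown is an illustrative assumption) interpolates between the fixed anchor noise and random noise:

```python
# Minimal sketch of noise-interpolation control with a single fine-tuned model.
# `anchor_noise` is the fixed noise used with the feature matching loss during
# fine-tuning; shapes and the generator interface are illustrative assumptions.
import torch

def controllable_translation(G, w, anchor_noise, alpha):
    """alpha = 0 stays in the source-preserving subspace; alpha = 1 is fully target-like."""
    random_noise = [torch.randn_like(n) for n in anchor_noise]
    mixed = [(1.0 - alpha) * a + alpha * r
             for a, r in zip(anchor_noise, random_noise)]
    return G(w, noise=mixed)   # smooth, continuous control without extra models
```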
|
Li_Efficient_and_Explicit_Modelling_of_Image_Hierarchies_for_Image_Restoration_CVPR_2023
|
Abstract
The aim of this paper is to propose a mechanism to
efficiently and explicitly model image hierarchies in the
global, regional, and local range for image restoration. To
achieve that, we start by analyzing two important prop-
erties of natural images including cross-scale similarity
and anisotropic image features. Inspired by that, we pro-
pose the anchored stripe self-attention which achieves a
good balance between the space and time complexity of
self-attention and the modelling capacity beyond the re-
gional range. Then we propose a new network architec-
ture dubbed GRL to explicitly model image hierarchies in
the G lobal, R egional, and L ocal range via anchored stripe
self-attention, window self-attention, and channel attention
enhanced convolution. Finally, the proposed network is ap-
plied to 7 image restoration types, covering both real and
synthetic settings. The proposed method sets the new state-
of-the-art for several of those. Code will be available at
https://github.com/ofsoundof/GRL-Image-
Restoration.git .
|
1. Introduction
Image restoration aims at recovering high-quality images
from low-quality ones, resulting from image degrada-
tion processes such as blurring, sub-sampling, noise cor-
ruption, and JPEG compression. Image restoration is an ill-
posed inverse problem since important content information
about the image is missing during the image degradation
processes. Thus, in order to recover a high-quality image,
the rich information exhibited in the degraded image should
be fully exploited.
Natural images contain a hierarchy of features at global,
regional, and local ranges which could be used by deep neu-
ral networks for image restoration. First , the local range
covers a span of several pixels and typical features are edges
and local colors. To model such local features, convo-
lutional neural networks (CNNs) with small kernels ( e.g.
3×3) are utilized. Second , the regional range is charac-
terized by a window with tens of pixels. This range of pix-
Figure 1. Natural images show a hierarchy of features in a global, regional, and local range. The local (edges, colors) and regional
features (the pink squares) could be well modelled by CNNs and window self-attention. By contrast, it is difficult to efficiently and
explicitly model the rich global features (cyan rectangles). (a) bridge from ICB, 2749×4049. (b) 0848x4 from DIV2K, 1020×768.
(c) 073 from Urban100, 1024×765.
els can cover small objects and components of large objects
(pink squares in Fig. 1). Due to the larger range, modelling
the regional features (consistency, similarity) explicitly with
large-kernel CNNs would be inefficient in both parameters
and computation. Instead, transformers with a window at-
tention mechanism are well suited for this task. Third , be-
yond local and regional, some features have a global span
(cyan rectangles in Fig. 1), incl. but not limited to symme-
try, multi-scale pattern repetition (Fig. 1a), same scale tex-
ture similarity (Fig. 1b), and structural similarity and con-
sistency in large objects and content (Fig. 1c). To model
features at this range, global image understanding is needed.
Different from the local and regional range features,
there are two major challenges to model the global range
features. Firstly, existing image restoration networks based
on convolutions and window attention could not capture
long-range dependencies explicitly by using a single com-
putational module. Although non-local operations are used
in some works, they are either used sparsely in the network
or applied to small image crops. Thus, global image under-
Figure 2. The proposed GRL achieves state-of-the-art performance on various image restoration tasks. Details provided in Sec. 5.
standing still mainly happens via progressive propagation
of features through repeated computational modules. Sec-
ondly, the increasing resolution of today’s images poses a
challenge for long-range dependency modelling. High im-
age resolution leads to a computational burden associated
with pairwise pixel comparisons and similarity searches.
The aforementioned discussion leads to a series of re-
search questions: 1) how to efficiently model global range
features in high-dimensional images for image restora-
tion; 2) how to model image hierarchies (local, regional,
global) explicitly by a single computational module for
high-dimensional image restoration; 3) and how can this
joint modelling lead to a uniform performance improvement
for different image restoration tasks. The paper tries to an-
swer these questions in Sec. 3, Sec. 4, and Sec. 5, resp.
First , we propose anchored stripe self-attention for effi-
cient dependency modelling beyond the regional range. The
proposed self-attention is inspired by two properties of nat-
ural images including cross-scale similarity and anisotropic
image features. Cross-scale similarity means that struc-
tures in a natural image are replicated at different scales.
Inspired by that, we propose to use anchors as an inter-
mediate to approximate the exact attention map between
queries and keys in self-attention. Since the anchors sum-
marize image information into a lower-dimensional space,
the space and time complexity of self-attention can be sig-
nificantly reduced. In addition, based on the observation
of anisotropic image features, we propose to conduct an-
chored self-attention within vertical and horizontal stripes.
Due to the anisotropic shrinkage of the attention range, a
further reduction of complexity is achieved. And the com-
bination of axial stripes also ensures a global view of the
image content. When equipped with the stripe shift oper-
ation, the four stripe self-attention modes (horizontal, ver-
tical, shifted horizontal, shifted vertical) achieve a good
balance between computational complexity and the capac-
ity of global range dependency modelling. Furthermore, the
proposed anchored stripe self-attention is analyzed from the
perspective of low-rankness and similarity propagation.
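A hedged sketch of the anchor-based approximation within one stripe is shown below (our own illustration; projections, multi-head splitting, stripe shifting, and the anchor construction are simplified assumptions):

```python
# Anchor-based attention approximation within a stripe, assuming N stripe tokens
# and M << N anchor tokens (e.g., obtained by strided pooling of the keys).
import torch
import torch.nn.functional as F

def anchored_attention(q, k, v, anchors):
    # q, k, v: (B, N, C); anchors: (B, M, C)
    scale = q.shape[-1] ** 0.5
    a_q = F.softmax(q @ anchors.transpose(1, 2) / scale, dim=-1)   # (B, N, M) query-anchor map
    a_k = F.softmax(anchors @ k.transpose(1, 2) / scale, dim=-1)   # (B, M, N) anchor-key map
    # cost O(N*M*C) instead of O(N*N*C): values are propagated through the anchors
    return a_q @ (a_k @ v)                                         # (B, N, C)
```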
Secondly , a new transformer network is proposed to ex-
plicitly model global, regional, and local range dependen-
cies in a single computational module. The hierarchical
modelling of images is achieved by the parallel computa-
tion of the proposed anchored stripe self-attention, window
self-attention, and channel-attention enhanced convolution.
And the transformer architecture is dubbed GRL. Thirdly, the proposed GRL transformer is applied to var-
ious image restoration tasks. Those tasks could be classi-
fied into three settings based on the availability of data in-
cluding real image restoration, synthetic image restoration,
and data synthesis based real image restoration. In total,
seven tasks are explored for the proposed network includ-
ing image super-resolution, image denoising, JPEG com-
pression artifacts removal, demosaicking, real image super-
resolution, single image motion deblurring, and defocus de-
blurring. As shown in Fig. 2, the proposed network shows
promising results on the investigated tasks.
|
Kim_DCFace_Synthetic_Face_Generation_With_Dual_Condition_Diffusion_Model_CVPR_2023
|
Abstract
Generating synthetic datasets for training face recogni-
tion models is challenging because dataset generation en-
tails more than creating high fidelity images. It involves
generating multiple images of same subjects under different
factors (e.g., variations in pose, illumination, expression,
aging and occlusion) which follows the real image condi-
tional distribution. Previous works have studied the gener-
ation of synthetic datasets using GAN or 3D models. In this
work, we approach the problem from the aspect of combin-
ing subject appearance (ID) and external factor (style) con-
ditions. These two conditions provide a direct way to con-
trol the inter-class and intra-class variations. To this end,
we propose a Dual Condition Face Generator (DCFace)
based on a diffusion model. Our novel Patch-wise style ex-
tractor and Time-step dependent ID loss enables DCFace to
consistently produce face images of the same subject under
different styles with precise control. Face recognition mod-
els trained on synthetic images from the proposed DCFace
provide higher verification accuracies compared to previ-
ous works by 6.11% on average in 4 out of 5 test datasets,
LFW, CFP-FP , CPLFW, AgeDB and CALFW. Code Link
|
1. Introduction
What does it take to create a good training dataset for
visual recognition? An ideal training dataset for recogni-
tion tasks would have 1) large inter-class variation, 2) large
intra-class variation and 3) small label noise. In the context
of face recognition (FR), it means, the dataset has a large
number of unique subjects, large intra-subject variations,
and reliable subject labels. For instance, large-scale face
datasets such as WebFace4M [73] contain over 1M subjects
and large number of images/subject. Both the number of
subjects and the number of images per subject are impor-
tant for training FR models [11,31]. Also, datasets amassed
by crawling the web are not free from label noise [7, 73].
In various domains, synthetic datasets are traditionally
Figure 1. Illustration of three factors that characterize a labeled
face dataset: large subject variation, style variation, and label
consistency. Synthetic face datasets should be created with
all three factors in mind. Face images in this figure are samples
generated by our proposed method, which combines an arbitrary ID
condition with a style condition while preserving subject identity.
used to help generalize deep models when only limited real
datasets could be collected [13, 21, 63, 74] or when bias ex-
ists in the real dataset [34, 64]. Lately, more attention has
been drawn to training with only synthetic datasets in the
face domain, as synthetic data can avoid leaking the privacy
of real individuals. This is important as real face datasets
have been under scrutiny for their lack of informed consent,
as web-crawling is the primary means of large-scale data
collection [17, 22, 73]. Also, synthetic training datasets can
remedy some long-standing issues in real datasets, e.g.the
long tail distribution, demographic bias, etc.
When it comes to generating synthetic training datasets,
the following questions should be raised. (i) How many
novel subjects can be synthesized (ii) How well can we
mimic the distribution of real images in the target domain
and (iii) How well can we consistently generate multiple
images of the same subjects? We start with the hypothesis
that face dataset generation can be formulated as a problem
that maximizes these criteria together.
Previous efforts in generating synthetic face datasets
touch on one of the three aspects but do not consider all of
them together [3, 49]. SynFace [49] generates high-fidelity
face images based on DiscoFaceGAN [12], coming close to
real images in terms of FID metric [18]. However, we were
Figure 2. Two-stage dataset generation paradigm. In the sampling stage, 1) G_id generates a high-quality face image X_id that defines how
a person looks and 2) the style bank selects a style image X_sty that defines the overall style of the final image. The mixing stage generates
an image with the identity of X_id and the style of X_sty. Repeating this process multiple times, one can generate a labeled synthetic face dataset.
surprised to find that the actual number of unique subjects
that can be generated by DiscoFaceGAN is less than 500, a
finding that will be discussed in Sec. 3.1. The recent state
of the art (SoTA), DigiFace [3], can generate 1M large-scale
synthetic face images with many unique subjects based on
3D parametric model rendering. However, it falls short in
matching the quality and style of real face images.
We propose a new data generation scheme that addresses
all three criteria, i.e.the large number of novel subjects
(uniqueness ), real dataset style matching ( diversity ) and la-
bel consistency ( consistency ). In Fig. 1, we illustrate the
high-level idea by showcasing some of our generated face
samples. The key motivation of our paper is that the syn-
thetic dataset generator needs to control the number of
unique subjects, match the training dataset’s style distribu-
tion and be consistent in the subject label.
In light of this, we formulate the face image generation
as a dual condition inverse problem, retrieving the unknown
image Y from the observable Identity condition X_id and
Style condition X_sty. Specifically, X_id specifies how a per-
son looks and X_sty specifies how X_id should be portrayed
in an image. X_sty contains identity-independent informa-
tion such as pose, expression, and image quality.
Our choice of dual conditions (identity and style) is im-
portant in how we generate a synthetic dataset as ID and
style conditions are controllable factors that govern the
dataset’s characteristics. To achieve this, we propose a
two-stage generation paradigm. First, we generate a high-
quality face image X_id using a face image generator and
sample a style image X_sty from a style bank. Secondly, we
mix these two conditions using a dual condition generator
which predicts an image that has the ID of X_id and a style
of X_sty. An illustration is given in Fig. 2.
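The overall loop can be summarized by the following sketch (purely illustrative; G_id, style_bank, and G_mix stand in for the identity generator, style bank, and dual condition generator, and the latent dimensionality and dataset sizes are assumptions):

```python
# Illustrative sketch of the two-stage generation loop of Fig. 2.
import torch

def generate_dataset(G_id, style_bank, G_mix, num_subjects=100, imgs_per_subject=20):
    dataset = []
    for label in range(num_subjects):
        x_id = G_id(torch.randn(1, 512))       # sampling stage: a new identity image
        for _ in range(imgs_per_subject):
            x_sty = style_bank.sample()        # sampling stage: a style image
            x = G_mix(x_id, x_sty)             # mixing stage: ID of x_id, style of x_sty
            dataset.append((x, label))         # same label for all images of this subject
    return dataset
```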
Training the mixing generator in stage 2 is not trivial as
it would require a triplet of (X_id^A, X_sty^B, X_sty^A), where X_sty^A
is a hypothetical combination of the ID of subject A and the
style of subject B. To solve this problem, we propose a new
dual condition generator that can learn from (X_id^A, X_sty^A), a
tuple of same-subject images that can always be obtained
in a labeled dataset. The novelty lies in our style condition extractor and ID loss, which prevents the training from
falling into a degenerate solution. We modify the diffusion
model [19, 55] to take in dual conditions and apply an aux-
iliary time-dependent ID loss that can control the balance
between sample diversity and label consistency.
We show that our Dual Condition Face Dataset Gener-
ator (DCFace) is capable of surpassing the previous meth-
ods in terms of FR performance, establishing a new bench-
mark in face recognition with synthetic face datasets. We
also show the roles dataset subject uniqueness, diversity and
consistency play in face recognition performance.
The followings are the contributions of the paper.
• We propose a two-stage face dataset generator that
controls subject uniqueness, diversity and consistency.
• For this, we propose a dual condition generator that
mixes the two independent conditions X_id and X_sty.
• We propose uniqueness, consistency and diversity met-
rics that quantify the respective properties of a given
dataset, useful measures that allow one to compare
datasets apart from the recognition performance.
• We achieve SoTA in FR with a 0.5M-image synthetic
training dataset, surpassing the previous methods by
6.11% on average on 5 popular test datasets.
|
Lao_Simultaneously_Short-_and_Long-Term_Temporal_Modeling_for_Semi-Supervised_Video_Semantic_CVPR_2023
|
Abstract
In order to tackle video semantic segmentation task at
a lower cost, e.g., only one frame annotated per video,
lots of efforts have been devoted to investigate the utiliza-
tion of those unlabeled frames by either assigning pseudo
labels or performing feature enhancement. In this work,
we propose a novel feature enhancement network to si-
multaneously model short- and long-term temporal corre-
lation. Compared with existing work that only leverage
short-term correspondence, the long-term temporal corre-
lation obtained from distant frames can effectively expand
the temporal perception field and provide richer contex-
tual prior. More importantly, modeling adjacent and dis-
tant frames together can alleviate the risk of over-fitting,
hence produce high-quality feature representation for the
distant unlabeled frames in training set and unseen videos
in testing set. To this end, we term our method SSLTM ,
short for Simultaneously Short- and Long-Term Temporal
Modeling. In the setting of only one frame annotated per
video, SSLTM significantly outperforms the state-of-the-art
methods by 2%∼3% mIoU on the challenging VSPW
dataset. Furthermore, when working with a pseudo label
based method such as MeanTeacher, our final model only
exhibits 0.13% mIoU less than the ceiling performance ( i.e.,
all frames are manually annotated).
|
1. Introduction
Deep neural networks have been the de-facto solution
for many vision tasks such as image recognition [12], ob-
ject detection [15] and semantic segmentation [21]. These
state-of-the-art results are generally achieved by training
very deep networks on large-scale labeled datasets, e.g., Im-
ageNet [27], COCO [16] and Cityscapes [4], etc. However,
building such labeled datasets is labor-intensive and com-
plicated. Hence, it is very appealing to explore less label-
dependent alternatives that only requires a (small) portion
of the dataset to be annotated [14, 24–26, 37].
Figure 1. The importance of involving distant frames in training. The exploitation of distant frames not only reduces the risk
of over-fitting to the labeled frame and its adjacent ones, but also provides temporally long-term context to enhance the feature
representation. (a) The model trained with distant frames demonstrates better segmentation output for distant unlabeled frames in the
training set. (b) The adjacent frames do not contain sufficient visual clues to segment the car in the current frame, while the distant
frame can provide richer context to guide the segmentation model.
In this work, we aim to train the video semantic seg-
mentation model under an extreme setting of annotation
availability, i.e., each video in the training set only has its
first frame annotated. The significance of this problem is
twofold: 1). Video semantic segmentation is a fundamen-
tal task in computer vision, with wide applications in many
scenarios like autonomous driving [9], robot controlling [7];
2). Compared with dense annotations, it takes much less (if
not the least) cost to label one frame per video. More im-
portantly, given the information redundancy [18, 41] within
a video, it seems intuitively unnecessary to annotate ev-
ery frame at all costs. Thus, it is of great practical and
theoretical interests to explore the feasibility of conduct-
ing video semantic segmentation with one-frame-per-video-
annotation.
Existing methods for this problem can be grouped into
Pseudo Label based approaches and Feature Enhancement
based ones, depending on whether explicit pseudo labels
are generated for the unlabeled frames. For the former ones
[1, 6, 40], a pseudo-label generator is often trained with the
annotated frames, then the model is updated using both la-
beled and unlabeled data. As a comparison, the latter group
of methods [18, 41] concentrates on obtaining high-quality
representations based on the features from both labeled and
unlabeled frames. Thus, these methods rely on feature en-
hancement modules that are specially designed for temporal
feature fusion. Note, Pseudo Label based approaches and
Feature Enhancement based ones are orthogonal, i.e., they
pay attention to different aspects of the semi-supervised
video semantic segmentation task, and can usually work to-
gether to combine the best of two worlds as shown in Sec-
tion 4.5. In this work, we will focus on the latter ones -
designing innovative feature enhancement modules.
Prior arts on feature enhancement mostly focus on mod-
eling short-term temporal correlation, under the assumption
of temporal consistency [18, 41] among adjacent frames.
Nevertheless, the distant frame is less exploited in existing
work, due to its severe content changes and weak temporal
consistency. However, in the setting of partial annotation,
the absence of distant frames in training results in signifi-
cant drawbacks: 1). As illustrated in Figure 1a, if the distant
frame is not involved in the training phase, the model will be
over adapted (or even over-fitted) to the labeled frame and
its adjacent ones. Consequently, the generalization to dis-
tant frames and unseen videos in the testing set is severely
hurt, leading to poor segmentation performance in the test-
ing stage. 2). Since the distant frame can provide long-term
temporal context, the representation quality of the current
frame can be improved by leveraging the information from
its distant frame. A qualitative sample is given in Figure 1b,
where a severely occluded car is correctly segmented with
the help of long-term context from the distant frame.
To address the aforementioned drawbacks, we propose
a novel Simultaneously Short- and Long-Term Temporal
Modeling (SSLTM) method to capture the temporal rela-
tionship from both adjacent and distant frames. To achieve
this goal, we design three novel components in our model
for representation learning: 1). We refer to the labeled
frame as query frame, for its adjacent frames in the same
video, we model the short-term inter-frame correlations
by a Spatial-Temporal Transformer (STT). 2). For the pur-
pose of long-term temporal modeling, we obtain a refer-
ence frame by randomly sampling a distant frame from the
same video of the query frame, then feeding the reference
frame’s feature to our proposed Reference Frame Context
Enhancement (RFCE) module, so as to enhance the rep-
resentation of query frame. Meanwhile, as the referenceframe is selected randomly from the entire video, our model
is potentially trained with all data, rather than just the la-
beled frames and their adjacent ones. As such, we expect
the model to be prevented from over-fitting to some extent.
3). To compensate for the semantic category representation
from RFCE, we further propose a Global Category Context
(GCC) module to model the global information across the
whole dataset.
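As a simple illustration of this sampling scheme (our own hedged sketch; the adjacency window and minimum gap are assumed values), adjacent and distant frames could be drawn per labeled query frame as follows:

```python
# Hedged sketch of frame sampling during training: a few adjacent frames for
# short-term modeling and one randomly drawn distant reference frame.
import random

def sample_frames(video_len, query_idx=0, num_adjacent=2, min_gap=10):
    adjacent = [min(query_idx + i, video_len - 1) for i in range(1, num_adjacent + 1)]
    distant_pool = [t for t in range(video_len) if abs(t - query_idx) >= min_gap]
    reference = random.choice(distant_pool) if distant_pool else adjacent[-1]
    return adjacent, reference   # adjacent frames feed STT, the reference frame feeds RFCE
```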
In summary, our method is a pioneer work to exploit both
short- and long-term inter-frame correlations in the video
semantic segmentation task. Thanks to the effective model-
ing of distant frames, our RFCE demonstrates outstanding
performance, especially under the setting of partial annota-
tion. Specifically, on the challenging VSPW dataset [22],
the mIoU of our final model only decreases by 0.13% when
switching from per-frame-annotation to the one-frame-per-
video-annotation setting. To our knowledge, this is the
first work that nearly closes the gap between dense anno-
tations and one-frame-per-video ones, which is of great sig-
nificance in practical applications. Compared with exist-
ing Feature Enhancement based video semantic segmenta-
tion methods [5,18,22,23,30,41], our SSLTM demonstrates
advantageous results (39.79% mIoU) by a large margin
(2%∼3% mIoU) on the VSPW dataset.
|
Kim_Feature_Separation_and_Recalibration_for_Adversarial_Robustness_CVPR_2023
|
Abstract
Deep neural networks are susceptible to adversarial at-
tacks due to the accumulation of perturbations in the fea-
ture level, and numerous works have boosted model robust-
ness by deactivating the non-robust feature activations that
cause model mispredictions. However, we claim that these
malicious activations still contain discriminative cues and
that with recalibration, they can capture additional use-
ful information for correct model predictions. To this end,
we propose a novel, easy-to-plugin approach named Fea-
ture Separation and Recalibration (FSR) that recalibrates
the malicious, non-robust activations for more robust fea-
ture maps through Separation and Recalibration. The Sep-
aration part disentangles the input feature map into the
robust feature with activations that help the model make
correct predictions and the non-robust feature with activa-
tions that are responsible for model mispredictions upon
adversarial attack. The Recalibration part then adjusts
the non-robust activations to restore the potentially useful
cues for model predictions. Extensive experiments verify
the superiority of FSR compared to traditional deactivation
techniques and demonstrate that it improves the robustness
of existing adversarial training methods by up to 8.57%
with small computational overhead. Codes are available
at https://github.com/wkim97/FSR.
|
1. Introduction
Despite the advancements of deep neural networks
(DNNs) in computer vision tasks [2,10,20,39], they are vul-
nerable to adversarial examples [19,43] that are maliciously
crafted to subvert the decisions of these models by adding
imperceptible noise to natural images. Adversarial exam-
ples are also known to be successful in real-world cases,
including autonomous driving [17] and biometrics [27, 41],
and to be effective even when target models are unknown
to the attacker [26, 31, 43]. Thus, it has become crucial to
devise effective defense strategies against this insecurity.
To this end, numerous defense techniques have been pro-
posed, including defensive distillation [38], input denois-
ing [30], and attack detection [36, 54]. Among these methods, adversarial training [19, 33], which robustifies a model
by training it on a set of worst-case adversarial examples,
has been considered to be the most successful and popular.
Even with adversarial training, however, small adversar-
ial perturbations on the pixel-level accumulate to a much
larger degree in the intermediate feature space and ruin the
final output of the model [50]. To solve this problem, re-
cent advanced methods disentangled and deactivated the
non-robust feature activations that cause model mispredic-
tions. Xie et al. [50] applied classical denoising techniques
to deactivate disrupted activations, and Bai et al. [4] and
Yan et al. [55] deactivated channels that are irrelevant to
correct model decisions. These approaches, however, in-
evitably neglect discriminative cues that potentially lie in
these non-robust activations. Ilyas et al. [22] have shown
that a model can learn discriminative information from non-
robust features in the input space. Based on this finding, we
argue that there exist potential discriminative cues in the
non-robust activations, and deactivating them could lead to
loss of these useful information that can provide the model
with better guidance for making correct predictions.
For the first time, we argue that with appropriate adjust-
ment, the non-robust activations that lead to model mis-
predictions could recapture discriminative cues for correct
model decisions. To this end, we propose a novel Feature
Separation and Recalibration (FSR) module that aims to
improve the feature robustness . We first separate the in-
termediate feature map of a model into the malicious non-
robust activations that are responsible for model mispre-
dictions and the robust activations that still provide useful
cues for correct model predictions even under adversarial
attacks. Exploiting only the robust feature just like the ex-
isting methods [4, 50, 55], however, could lead to loss of
potentially useful cues in the non-robust feature. Thus, we
recalibrate the non-robust activations to capture cues that
provide additional useful guidance for correct model deci-
sions. These additional cues can better guide the model to
make correct predictions and thus boost its robustness.
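To sketch the idea (a hedged illustration under our own layer choices, not the authors' exact architecture), separation via a learned soft mask followed by recalibration of the masked-out part could look like:

```python
# Hedged PyTorch sketch of separation / recalibration: a learned mask splits a
# feature map into robust and non-robust parts, and the non-robust part is
# recalibrated before the two are merged.
import torch
import torch.nn as nn

class FSRSketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.mask_net = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.recalib = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, f):
        m = self.mask_net(f)                 # soft separation mask
        f_robust = m * f                     # activations kept as-is
        f_nonrobust = (1.0 - m) * f          # activations responsible for mispredictions
        f_recal = self.recalib(f_nonrobust)  # restore potentially useful cues
        return f_robust + f_recal            # recalibrated feature passed onward
```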
Fig. 1 visualizes the attention maps [40] on the features
of natural images by a naturally trained model ( fnat) and
the robust ( f+), non-robust ( f−), and recalibrated features
(˜f−) on adversarial examples ( x′) obtained from an adver-
Figure 1. Visualization of attention maps [40] on the features of
natural images on a naturally trained model (fnat) and the robust
(f+), non-robust (f−), and recalibrated features (˜f−) of adver-
sarial examples (x′) obtained from an adversarial training [33]
model equipped with our FSR module on the CIFAR-10 dataset. The
robust feature captures discriminative cues regarding the ground
truth class, while the non-robust feature captures irrelevant cues.
To further boost feature robustness, we recalibrate the non-robust
feature (f− → ˜f−) and capture additional useful cues for model
predictions. Adversarial images x′ are upscaled for clearer visual
representations.
sarial training [33] model equipped with our FSR module.
Given an adversarial example, while the non-robust fea-
ture (f−) captures cues irrelevant to the ground truth class,
the robust feature ( f+) captures discriminative cues ( e.g.,
horse’s leg). Our FSR module recalibrates the non-robust
activations ( f−→˜f−), which are otherwise neglected by
the existing methods, and restores additional useful cues not
captured by the robust activations ( e.g., horse’s body). With
these additional cues, FSR further boosts the model’s ability
to make correct decisions on adversarial examples.
Thanks to its simplicity, our FSR module can be easily
plugged into any layer of a CNN model and is trained with
the entire model in an end-to-end manner. We extensively
evaluate the robustness of our FSR module on benchmark
datasets against various white-box and black-box attacks
and demonstrate that our approach improves the robustness
of different variants of adversarial training (Sec. 4.2) with
small computational overhead (Sec. 4.4). We also show
that our approach of recalibrating non-robust activations is
superior to existing techniques [4, 50, 55] that simply deac-
tivate them (Sec. 4.2). Finally, through ablation studies, we
demonstrate that our Separation stage can effectively dis-
entangle feature activations based on their effects on model
decision and that our Recalibration stage successfully re-
captures useful cues for model predictions (Sec. 4.3).
In summary, our contributions are as follows:
• In contrast to recent methods that deactivate distorted
feature activations, we present a novel point of view that these activations can be recalibrated to capture
useful cues for correct model decisions.
• We introduce an easy-to-plugin Feature Separation
and Recalibration (FSR) module, which separates
non-robust activations from feature maps and recali-
brates these feature units for additional useful cues.
• Experimental results demonstrate the effectiveness of
our FSR module on various white- and black-box at-
tacks with small computational overhead and verify
our motivation that recalibration restores discrimina-
tive cues in non-robust activations.
|
Liu_Delving_StyleGAN_Inversion_for_Image_Editing_A_Foundation_Latent_Space_CVPR_2023
|
Abstract
GAN inversion and editing via StyleGAN maps an in-
put image into the embedding spaces (W, W+, and F)
to simultaneously maintain image fidelity and meaningful
manipulation. From latent space W to extended latent
space W+ to feature space F in StyleGAN, the editability
of GAN inversion decreases while its reconstruction quality
increases. Recent GAN inversion methods typically explore
W+ and F rather than W to improve reconstruction fidelity
while maintaining editability. As W+ and F are derived
from W, which is essentially the foundation latent space of
StyleGAN, these GAN inversion methods focusing on the W+
and F spaces could be improved by stepping back to W. In
this work, we propose to first obtain the proper latent code
in foundation latent space W. We introduce contrastive
learning to align W and the image space for proper la-
tent code discovery. Then, we leverage a cross-attention en-
coder to transform the obtained latent code in W into W+
and F, accordingly. Our experiments show that our explo-
ration of the foundation latent space W improves the repre-
sentation ability of latent codes in W+ and features in F,
which yields state-of-the-art reconstruction fidelity and ed-
itability results on the standard benchmarks. Project page:
https://kumapowerliu.github.io/CLCAE .
|
1. Introduction
StyleGAN [29–31] achieves numerous successes in im-
age generation. Its semantically disentangled latent space
enables attribute-based image editing where image content
is modified based on the semantic attributes. GAN in-
version [62] projects an input image into the latent space,
which benefits a series of real image editing methods [4,36,
49, 65, 72]. The crucial part of GAN inversion is to find
the inversion space to avoid distortion while enabling ed-
itability. Prevalent inversion spaces include the latent space
W+ [1] and the feature space F [28]. W+ is shown to
balance distortion and editability [56, 71]. It attracts many
editing methods [1, 2, 5, 20, 25, 53] to map real images into
this latent space. On the other hand, F contains spatial im-
age representation and receives extensive studies from the
image embedding [28, 48, 59, 63] or StyleGAN’s parame-
ters [6, 14] perspectives.
The latent space W+ and feature space F receive wide
investigation. In contrast, Karras et al. [31] put effort into explor-
ing W, and the results are unsatisfying. This may be because
manipulation in W easily brings content distortions
during reconstruction [56], even though W is effective for
editability. Nevertheless, we observe that W+ and F are
indeed developed from W, which is the foundation latent
space in StyleGAN. In order to improve image editability
while maintaining reconstruction fidelity (i.e., W+ and F),
exploring W is necessary. Our motivation is similar to the
following quotation:
“You can’t build a great building on a weak foundation.
You must have a solid foundation if you’re going to have a
strong superstructure. ”
—Gordon B. Hinckley
In this paper, we propose a two-step design to improve
the representation ability of the latent code in W+ and F.
First, we obtain the proper latent code in W. Then, we use
the latent code in W to guide the latent code in W+ and F.
In the first step, we propose a contrastive learning paradigm
to align W and the image space. This paradigm is derived
from CLIP [51], where we switch the text branch with W.
Specifically, we construct paired data consisting of
one image I and its latent code w ∈ W from a pre-trained
StyleGAN. During contrastive learning, we train two en-
coders to obtain feature representations of I and w, re-
spectively. These two features are aligned after the train-
ing process. During GAN inversion, we fix this contrastive
learning module and regard it as a loss function. This loss
function is set to make a real image and its latent code
w sufficiently close. This design improves on existing stud-
ies of W [31], whose loss functions are set on the image
space (i.e., a similarity measurement between an input image
and its reconstructed image) rather than the unified image
and latent space. Supervision on the image space alone
does not enforce good alignment between the input image
and its latent code in W.
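A hedged sketch of such an image/latent-code contrastive objective (a CLIP-style symmetric InfoNCE loss; the encoders, feature dimension, and temperature are illustrative assumptions) is given below:

```python
# CLIP-style contrastive alignment between images and their W-space latent codes,
# used as a fixed loss during inversion once the two encoders are trained.
import torch
import torch.nn.functional as F

def w_image_contrastive_loss(img_feats, w_feats, temperature=0.07):
    # img_feats: (B, D) from the image encoder; w_feats: (B, D) from the latent-code encoder
    img_feats = F.normalize(img_feats, dim=-1)
    w_feats = F.normalize(w_feats, dim=-1)
    logits = img_feats @ w_feats.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(img_feats.shape[0], device=img_feats.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```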
After discovering the proper latent code in W, we lever-
age a cross-attention encoder to transform w into w+ ∈
W+ and f ∈ F. When computing w+, we set w as the
query and w+ as the value and key. Then, we calculate the
cross-attention map to reconstruct w+. This cross-attention
map enforces the value w+ to be close to the query w, which en-
ables the editability of w+ to be similar to that of w. Be-
sides, w+ is effective in preserving the reconstruction abil-
ity. When computing f, we set w as the value and key,
while setting f as the query, so w will guide f for fea-
ture refinement. Finally, we use w+ and f in StyleGAN to
generate the reconstruction result.
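The guidance can be sketched roughly as below (our own simplified illustration with assumed dimensions and a standard multi-head attention layer, not the paper's exact encoder):

```python
# Hedged sketch of the cross-attention guidance: w queries w+ (keeping w+ close
# to w), and f queries w (letting w refine the feature tokens).
import torch
import torch.nn as nn

class CrossAttentionGuide(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn_wplus = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_feat = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, w, w_plus, f_tokens):
        # w: (B, 1, C); w_plus: (B, L, C) per-layer codes; f_tokens: (B, N, C) feature tokens
        w_plus_ref, _ = self.attn_wplus(query=w.expand_as(w_plus), key=w_plus, value=w_plus)
        f_ref, _ = self.attn_feat(query=f_tokens, key=w, value=w)
        return w_plus + w_plus_ref, f_tokens + f_ref        # residual refinement of both
```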
We named our method CLCAE (i.e., StyleGAN inversion with Contrastive Learning and Cross-Attention
Encoder). We show that our CLCAE can achieve state-
of-the-art performance in both reconstruction quality and
editing capacity on benchmark datasets containing human
portraits and cars. Fig. 1 shows some results. This indi-
cates the robustness of our CLCAE. Our contributions are
summarized as follows:
• We propose a novel contrastive learning approach to
align the image space and foundation latent space W
of StyleGAN. This alignment ensures that we can ob-
tain proper latent code wduring GAN inversion.
• We propose a cross-attention encoder to transform la-
tent codes in WintoW+andF. The representation
of latent code in W+and feature in Fare improved to
benefit reconstruction fidelity and editability.
• Experiments indicate that our CLCAE achieves state-
of-the-art fidelity and editability results both qualita-
tively and quantitatively.
|
Lin_Zero-Shot_Everything_Sketch-Based_Image_Retrieval_and_in_Explainable_Style_CVPR_2023
|
Abstract
This paper studies the problem of zero-shot sketch-
based image retrieval (ZS-SBIR), however with two sig-
nificant differentiators to prior art (i) we tackle all vari-
ants (inter-category, intra-category, and cross datasets) of
ZS-SBIR with just one network (“everything”), and (ii)
we would really like to understand how this sketch-photo
matching operates (“explainable”). Our key innovation lies
with the realization that such a cross-modal matching prob-
lem could be reduced to comparisons of groups of key local
patches – akin to the seasoned “bag-of-words” paradigm.
Just with this change, we are able to achieve both of the
aforementioned goals, with the added benefit of no longer
requiring external semantic knowledge. Technically, ours is
a transformer-based cross-modal network, with three novel
components (i) a self-attention module with a learnable tok-
enizer to produce visual tokens that correspond to the most
informative local regions, (ii) a cross-attention module to
compute local correspondences between the visual tokens
across two modalities, and finally (iii) a kernel-based rela-
tion network to assemble local putative matches and pro-
duce an overall similarity metric for a sketch-photo pair.
Experiments show ours indeed delivers superior perfor-
mances across all ZS-SBIR settings. The all important ex-
plainable goal is elegantly achieved by visualizing cross-
modal token correspondences, and for the first time, via
sketch to photo synthesis by universal replacement of all
matched photo patches. Code and model are available at
https://github.com/buptLinfy/ZSE-SBIR .
|
1. Introduction
Zero-shot sketch-based image retrieval (ZS-SBIR) is a
central problem to sketch understanding [8, 16, 19, 20, 28,
34, 50, 54, 55, 57, 60, 67]. The zero-shot setting is largely
driven by the prevailing data scarcity problem of human
sketches [19,28,58] – they are much harder to acquire com-
pared with photos. As research matures on the non zero-
Figure 1. Attentive regions of self-/cross-attention and the learned
visual correspondence for tackling unseen cases. (a) The proposed
retrieval token [Ret] can attend to informative regions. Different
colors are attention maps from different heads. (b) Cross-attention
offers explainability by explicitly constructing local visual cor-
respondence. The local matches learned from training data are
shareable knowledge, which enables ZS-SBIR to work under di-
verse settings (inter-/intra-category and cross-dataset) with just
one model. (c) An input sketch can be transformed into its image
by the learned correspondence, i.e., sketch patches are replaced by
the closest image patches from the retrieved image.
shot [4,25,33,41–43,59,62], and in a push to make sketch-
based retrieval commercially viable, recent research efforts
had mainly focused on ZS-SBIR (or the simpler few-shot
setting) [16, 19, 28, 34, 50, 57, 60].
Great strides have been made but attempts have largely
aligned with the larger photo-based zero-shot literature,
where the key lies in leveraging external knowledge for
cross-category adaptation [19, 34]. How to conduct cross-modal matching is, however, less studied, and most prior art relies on a gold-standard triplet loss with some auxiliary modules [16] to learn a joint embedding. Furthermore, as problems such as domain shift and fine-grained matching come into play, research efforts are mostly carried out in silos for different settings: category-level (standard) [28, 50, 60], fine-grained [2], and cross-dataset [40]. Last but definitely not least, one cannot help but wonder why many of the proposed algorithms work – what is matched, and how is the transfer conducted?
This paper aims to tackle all said problems associated
with the current status quo for ZS-SBIR. In particular, we
advocate for (i) a single model to tackle all three settings of
ZS-SBIR, (ii) ditching the requirement on external knowl-
edge to conduct category transfer, and more importantly,
(iii) a way to explain why our model works (or not).
At the very core of our contribution lies a well-explored insight that predates "deep vision": that image
matching can be achieved by establishing local patch cor-
respondences and computing a distance score based on that
– yes, loosely similar to that of “bag of visual words” [11,
26, 51] (without building dictionaries). In our context of
ZS-SBIR, we would like to conduct this matching (i) across two diverse modalities, sketch and photo, and (ii) across category, granularity (fine-grained or not), and dataset (domain difference) boundaries. The biggest upside of this realization is that, just as "bag of visual words" is explain-
able, we can directly visualize the patch correspondences to
achieve a similar level of explainability (see Figure 1).
Our solution is, first, a transformer-based cross-modal network that (i) sources local patches independently in each
modality, (ii) establishes patch-to-patch correspondences
across two modalities, and (iii) computes matching scores
based on putative correspondences. We put forward a novel
design for each of the three components. We approach (i)
by proposing a novel CNN-based learnable tokenizer that is specifically tailored to sketch data. This is because the vanilla non-overlapping patch-wise tokenization proposed in ViT [18] is ill-suited to the sparse nature of sketches (most patches would fall on uninformative blank regions). Our tokenizer, on the other hand, attends to a larger receptive field [37] and is therefore better suited to sketch data. With this
tokenizer, visual cues from nearby regions are aggregated
when constructing visual tokens, so that structural informa-
tion is preserved. In the same spirit as the class token developed in ViT for image recognition, we introduce a learnable re-
trieval token to prioritize tokens for cross-modal matching.
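As a rough sketch of the idea just described (not the paper's released code; layer widths, strides, and names are our assumptions), an overlapping CNN tokenizer with a prepended learnable retrieval token could be written as follows.

```python
# Rough sketch of a CNN-based tokenizer with overlapping receptive fields,
# plus a learnable retrieval token prepended to the token sequence.
# Illustrative only; architecture details are assumed, not taken from the paper.
import torch
import torch.nn as nn

class OverlappingTokenizer(nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        # Strided convolutions with kernel > stride give overlapping patches,
        # so sparse sketch strokes contribute to several neighbouring tokens.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, embed_dim, kernel_size=3, stride=2, padding=1),
        )
        self.ret_token = nn.Parameter(torch.zeros(1, 1, embed_dim))

    def forward(self, x):                              # x: (B, 3, H, W)
        feat = self.conv(x)                            # (B, D, H/8, W/8)
        tokens = feat.flatten(2).transpose(1, 2)       # (B, N, D)
        ret = self.ret_token.expand(x.size(0), -1, -1)
        return torch.cat([ret, tokens], dim=1)         # (B, N+1, D)

tokens = OverlappingTokenizer()(torch.randn(2, 3, 224, 224))  # (2, 785, 256)
```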
To establish patch-to-patch correspondences, a novel
cross-attention module is proposed that operates across
sketch-photo modalities. Specifically, we propose cross-
modal multi-head attention, in which the query embeddings
are exchanged between the sketch and photo branches to reason about patch-level correspondences with only category-level
supervision. With the putative matches in place, inspired by
relation networks [53], we propose a kernel-based relation
network to aggregate the correspondences and calculate a
similarity score between each sketch-photo pair.
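The following sketch illustrates these two ingredients in simplified form: queries exchanged across modalities inside a multi-head attention layer, and a kernel-based aggregation of the resulting patch matches into one sketch-photo similarity. It is hypothetical; the paper's relation network is learned, whereas the fixed RBF kernel and shared attention layer below are stand-ins for brevity.

```python
# Illustrative sketch of query exchange across modalities and a kernelised
# aggregation of patch correspondences. Not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryExchangeAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        # A single shared attention layer is used here for brevity.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, sk_tokens, ph_tokens):
        # Sketch queries attend over photo keys/values, and vice versa.
        sk2ph, _ = self.attn(sk_tokens, ph_tokens, ph_tokens)
        ph2sk, _ = self.attn(ph_tokens, sk_tokens, sk_tokens)
        return sk2ph, ph2sk

def kernel_relation_score(sk_tokens, ph_tokens, gamma=0.5):
    """Aggregate putative patch matches with an RBF kernel into one score."""
    d = torch.cdist(F.normalize(sk_tokens, dim=-1),
                    F.normalize(ph_tokens, dim=-1))       # (B, Ns, Np)
    # Best photo patch for every sketch patch, softened by the kernel.
    return torch.exp(-gamma * d.min(dim=-1).values ** 2).mean(dim=-1)  # (B,)

sk, ph = torch.randn(2, 50, 256), torch.randn(2, 196, 256)
sk2ph, ph2sk = QueryExchangeAttention()(sk, ph)
score = kernel_relation_score(sk2ph, ph2sk)   # similarity per sketch-photo pair
```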
We achieve state-of-the-art performance across all said
ZS-SBIR settings. Explainability is offered (i) as per tradi-
tion in terms of visualizing patch correspondences, where
interesting local matches can be observed, such as the antlers of the deer in Figure 1(b), even when the sketch is very abstract, and (ii) by replacing all patches in a
sketch with their photo correspondences, to perform sketch
to photo synthesis as shown in Figure 1(c).
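As a toy illustration of the patch-replacement idea (a deliberate simplification with made-up shapes; the actual pipeline reassembles the replaced patches into a full image), nearest-neighbour replacement over token similarities might look like this.

```python
# Toy sketch: every sketch patch token is swapped for its nearest photo patch
# from the retrieved image. Shapes and names are illustrative assumptions.
import torch
import torch.nn.functional as F

def replace_patches(sk_tokens, ph_tokens, ph_patches):
    """sk_tokens: (Ns, D); ph_tokens: (Np, D); ph_patches: (Np, 3, p, p)."""
    sim = F.normalize(sk_tokens, dim=-1) @ F.normalize(ph_tokens, dim=-1).t()
    nearest = sim.argmax(dim=-1)          # index of the closest photo patch
    return ph_patches[nearest]            # (Ns, 3, p, p), ready to tile

synth = replace_patches(torch.randn(196, 256), torch.randn(196, 256),
                        torch.randn(196, 3, 16, 16))
```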
|
Liu_Hierarchical_Supervision_and_Shuffle_Data_Augmentation_for_3D_Semi-Supervised_Object_CVPR_2023
|
Abstract
State-of-the-art 3D object detectors are usually trained
on large-scale datasets with high-quality 3D annotations.
However, such 3D annotations are often expensive and time-consuming, which may not be practical for real applications. A natural remedy is to adopt semi-supervised learning (SSL) by leveraging a limited amount of labeled samples and abundant unlabeled samples. Current pseudo-labeling-based SSL object detection methods mainly adopt a teacher-student framework, with a single fixed-threshold strategy to generate supervision signals, which inevitably brings confused supervision when guiding the student network training. Besides, the data augmentation of the point cloud in the typical teacher-student framework is too weak, containing only basic downsampling and flip-and-shift (i.e., rotation and scaling), which hinders the effective learning of feature information. Hence, we address these issues by introducing Hierarchical Supervision and Shuffle Data Augmentation (HSSDA), a simple yet effective teacher-student framework. The teacher network generates more reasonable supervision for the student network through a dynamic dual-threshold strategy. Besides, a shuffle data augmentation strategy is designed to strengthen the feature representation ability of the student network. Extensive experiments show that HSSDA consistently outperforms recent state-of-the-art methods on different datasets. The code will be released at https://github.com/azhuantou/HSSDA.
|
1. Introduction
Figure 1. Illustration of (a) the previous teacher compared to our teacher framework and (b) the previous student compared to our student framework. The black dashed box includes the RGB image and the corresponding fully annotated 3D point cloud (green box). The left side of the yellow dotted line in (a) represents the pseudo-labeling scene generated by the single threshold of the vanilla teacher network; the student network may be severely misled by missed objects (high threshold) or false-positive objects (low threshold), while our proposed teacher network generates three groups of pseudo labels (shown as green, red, blue) to provide hierarchical supervision for the student network. (b) shows that our student network adopts a stronger shuffle data augmentation than the vanilla student network to learn a stronger feature representation.
Various important applications, especially autonomous driving, have been motivating the rapid development of 3D object detection from range sensor data (e.g., LiDAR point
cloud). Up to now, many point-based and point-voxel-based
methods [10, 30, 31, 49] have been proposed. Despite the impressive progress, a large amount of accurate instance-level 3D annotations have to be provided for training, which is more time-consuming and expensive than 2D object annotation. This hinders the application and deployment of exist-
ing advanced detection models. To this end, how to reduce the dependence on huge annotated datasets has attracted growing interest in the object detection community.
As one of the important machine learning schemes for reducing data annotation, semi-supervised learning (SSL) aims to improve the generalization ability of model training with a small number of labeled samples together with large-scale unlabeled samples. In the semi-supervised object detection community, most works focus on 2D object detection [13, 17, 34, 40, 43], among which the teacher-student model is the mainstream framework. Specifically, the teacher network generates pseudo labels from weakly augmented data to train the student network under strong data augmentation. This pipeline has been widely verified to be effective; the quality of the pseudo labels and the data augmentation strategy are its keys, and many works have been proposed to improve them [2, 6, 33, 34]. Benefiting from 2D semi-supervised object detection, several 3D semi-supervised object detection methods have been proposed [23, 39, 47, 52], which still mainly adopt the teacher-student model. These methods, as well as 2D semi-supervised object detection methods [21, 34, 55], mainly use a hard rule, e.g., a score threshold, to obtain pseudo labels for training the student network. Such a strategy makes it difficult to guarantee the quality of the pseudo labels. Taking the score-threshold strategy as an example, if the threshold is too low, the pseudo labels will contain many false objects, while if it is too high, the pseudo labels will miss many real objects, which are then improperly treated as background (see Fig. 1(a)). As shown in [39], only about 30% of the objects can be mined from the unlabeled scenes even at the end of network training. Both cases therefore give the student network confused supervision, which harms the performance of the teacher-student model. This inevitably happens with a single-threshold strategy, even when an optimal threshold search method is adopted [39]. Thus, how to produce reasonable pseudo labels from the teacher network output is an important issue for better training the student network.
Besides the quality of the pseudo labels, data augmentation is also key to the teacher-student model, as mentioned previously. Extensive works in 2D semi-supervised object detection have shown that strong data augmentation is very important for the student network to learn a strong feature representation. Thus, various strong data augmentation strategies, e.g., Mixup [48], Cutout [6], and Mosaic [4], have been widely adopted. However, current 3D semi-supervised object detection methods adopt only weak data augmentation strategies, e.g., flip-and-shift. These kinds of data augmentation cannot drive the student network to learn a strong feature representation. Thus, the benefit of data augmentation observed in 2D semi-supervised object detection does not clearly appear in 3D semi-supervised object detection.
To tackle the above issues of pseudo-label quality and data augmentation, we propose a Hierarchical Supervision and Shuffle Data Augmentation (HSSDA) method for 3D semi-supervised object detection. We still adopt the teacher-student model as our main framework. To obtain more reasonable pseudo labels for the student network, we design a dynamic dual-threshold strategy that divides the output of the teacher network into three groups: (1) high-confidence pseudo labels, (2) ambiguous pseudo labels, and (3) low-confidence pseudo labels, as shown in Fig. 1(a). This division provides hierarchical supervision signals for the student network. Specifically, the first group is used as strong labels for training the student network, while the second group joins training in a soft-weighted way: the higher the score, the more it affects learning. The third group is very likely to consist of false objects; we directly delete them from the point cloud to avoid confusing parts of object point clouds with the background.
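A minimal sketch of the dual-threshold grouping is given below. The fixed thresholds and tensor layouts are placeholders (the paper's thresholds are determined dynamically during training), so this conveys only the partition logic, not the released implementation.

```python
# Minimal sketch (not the released HSSDA code) of splitting teacher
# predictions into hierarchical supervision groups with two thresholds.
import torch

def split_pseudo_labels(boxes, scores, high_thr=0.7, low_thr=0.3):
    """boxes: (N, 7) 3D boxes; scores: (N,) teacher confidences."""
    strong = boxes[scores >= high_thr]                    # hard supervision
    amb_mask = (scores >= low_thr) & (scores < high_thr)
    ambiguous = boxes[amb_mask]                           # soft supervision
    soft_weights = scores[amb_mask]                       # weighted by confidence
    low = boxes[scores < low_thr]     # points inside these boxes get deleted
    return strong, (ambiguous, soft_weights), low

groups = split_pseudo_labels(torch.randn(100, 7), torch.rand(100))
```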
To strengthen the feature representation ability of the student network, we design a shuffle data augmentation strategy. As shown in Fig. 1(b), we first generate shuffled scenes by splitting and shuffling the point cloud patches in BEV (bird's-eye view) and use them as inputs to the student model. Next, the feature maps extracted by the detector backbone are unshuffled back to the original point cloud geometry locations.
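The following is a simplified, hypothetical sketch of the BEV split-and-shuffle step: points are binned into a BEV grid, the grid cells are permuted by translating their points, and the permutation is kept so that the backbone's BEV feature map can later be unshuffled back to the original layout. Grid size, scene range, and function names are our assumptions, not the paper's settings.

```python
# Simplified sketch of shuffling BEV patches of a point cloud for the student
# input, recording the permutation for later feature-map unshuffling.
import torch

def shuffle_bev(points, grid=4, scene_range=(-40.0, 40.0)):
    """points: (N, 3+) with x, y in scene_range. Returns shuffled points and perm."""
    lo, hi = scene_range
    cell = (hi - lo) / grid
    ix = ((points[:, 0] - lo) / cell).clamp(0, grid - 1).long()
    iy = ((points[:, 1] - lo) / cell).clamp(0, grid - 1).long()
    patch_id = ix * grid + iy                  # BEV patch index per point
    perm = torch.randperm(grid * grid)         # random patch permutation
    shuffled = points.clone()
    for src in range(grid * grid):
        dst = perm[src].item()
        mask = patch_id == src
        # Translate the patch from its source cell to the destination cell.
        shuffled[mask, 0] += (dst // grid - src // grid) * cell
        shuffled[mask, 1] += (dst % grid - src % grid) * cell
    return shuffled, perm   # perm's inverse is later used to unshuffle features

shuffled_pts, perm = shuffle_bev(torch.rand(2048, 4) * 80 - 40)
```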
To summarize, our contributions are as follows:
• We propose a novel hierarchical supervision generation and learning strategy for the teacher-student model. This strategy provides the student network with hierarchical supervision signals, fully utilizing the output of the teacher network.
• We propose a shuffle data augmentation strategy that
can strengthen the feature representation ability of the
student network.
• Our proposed hierarchical supervision strategy and shuffle data augmentation strategy can be directly applied to off-the-shelf 3D semi-supervised point cloud object detectors, and extensive experiments demonstrate that our method achieves state-of-the-art results.
|