Liu_Continual_Detection_Transformer_for_Incremental_Object_Detection_CVPR_2023
|
Abstract
Incremental object detection (IOD) aims to train an ob-
ject detector in phases, each with annotations for new ob-
ject categories. As in other incremental settings, IOD is sub-
ject to catastrophic forgetting, which is often addressed by
techniques such as knowledge distillation (KD) and exem-
plar replay (ER). However, KD and ER do not work well
if applied directly to state-of-the-art transformer-based ob-
ject detectors such as Deformable DETR [59] and UP-
DETR [9]. In this paper, we solve these issues by proposing
a ContinuaL DEtection TRansformer (CL-DETR), a new
method for transformer-based IOD which enables effective
usage of KD and ER in this context. First, we introduce a
Detector Knowledge Distillation (DKD) loss, focusing on
the most informative and reliable predictions from old ver-
sions of the model, ignoring redundant background predic-
tions, and ensuring compatibility with the available ground-
truth labels. We also improve ER by proposing a calibration
strategy to preserve the label distribution of the training set,
therefore better matching training and testing statistics. We
conduct extensive experiments on COCO 2017 and demon-
strate that CL-DETR achieves state-of-the-art results in the
IOD setting.1
|
1. Introduction
Humans inherently learn in an incremental manner, ac-
quiring new concepts over time without forgetting previ-
ous ones. In contrast, machine learning suffers from catas-
trophic forgetting [21, 35, 36], where learning from non-
i.i.d. data can override knowledge acquired previously. Un-
surprisingly, forgetting also affects object detection [2, 12,
20, 37, 44, 50, 54]. In this context, the problem was formal-
ized by Shmelkov et al. [44], who defined an incremental
object detection (IOD) protocol, where the training samples
for different object categories are observed in phases, re-
1Code: https://lyy.mpi-inf.mpg.de/CL-DETR/
[Figure 1 plot: Average Precision (%) on old categories and all categories for w/ ER+KD, w/ Ours, and Upper Bound.]
Figure 1. The final Average Precision (AP, %) of two-phase incremental object detection on COCO 2017. We observe 70 and 10
categories in the first and second phases, respectively. The base-
line is Deformable DETR [59]. “Upper bound” shows the results
of joint training with all previous data accessible in each phase.
stricting the ability of the trainer to access past data.
Popular methods to address forgetting in tasks other than
detection include Knowledge Distillation (KD) and Exem-
plar Replay (ER). KD [11,16,17,26,57] uses regularization
in an attempt to preserve previous knowledge when train-
ing the model on new data. The key idea is to encourage
the new model’s logits or feature maps to be close to those
of the old model. ER methods [5, 29, 32, 33, 41, 52] work
instead by memorising some of the past training data (the
exemplars ), replaying them in the following phases to “re-
member” the old object categories.
Recent state-of-the-art results in object detection have
been achieved by a family of transformer-based architec-
tures that include DETR [4], Deformable DETR [59] and
UP-DETR [9]. In this paper, we show that KD and ER
do not work well if applied directly to these models. For
instance, in Fig. 1 we show that applying KD and ER to
Deformable DETR leads to much worse results compared
to training with all data accessible in each phase ( i.e., the
standard non-incremental setting).
We identify two main issues that cause this drop in per-
formance. First, transformer-based detectors work by test-
ing a large number of object hypotheses in parallel. Because
the number of hypotheses is much larger than the typical
number of objects in an image, most of them are negative,
resulting in an unbalanced KD loss. Furthermore, because
both old and new object categories can co-exist in any given
training image, the KD loss and regular training objective
can provide contradictory evidence. Second, ER methods
for image classification try to sample the same number of
exemplars for each category. In IOD, this is not a good strat-
egy because the true object category distribution is typically
highly skewed. Balanced sampling causes a mismatch be-
tween the training and testing data statistics.
In this paper, we solve these issues by proposing Con-
tinuaL DEtection TRansformer (CL-DETR), a new method
for transformer-based IOD which enables effective usage of
KD and ER in this context. CL-DETR introduces the con-
cept of Detector Knowledge Distillation (DKD), selecting
the most confident object predictions from the old model,
merging them with the ground-truth labels for the new cate-
gories while resolving conflicts, and applying standard joint
bipartite matching between the merged labels and the cur-
rent model’s predictions for training. This approach sub-
sumes the KD loss, applying it only for foreground predic-
tions correctly matched to the appropriate model’s hypothe-
ses. CL-DETR also improves ER by introducing a new cal-
ibration strategy to preserve the distribution of object cat-
egories observed in the training data. This is obtained by
carefully engineering the set of exemplars remembered to
match the desired distribution. Furthermore, each phase
consists of a main training step followed by a smaller one
focusing on better calibrating the model.
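A minimal sketch of how such merged distillation targets could be assembled is given below; the top-k size, the IoU threshold, the (x1, y1, x2, y2) box format expected by box_iou, and the convention that the background class occupies the last logit are illustrative assumptions rather than values taken from CL-DETR.

```python
import torch
from torchvision.ops import box_iou

def build_dkd_targets(old_logits, old_boxes, gt_labels, gt_boxes,
                      topk=30, iou_thresh=0.7):
    # Keep the most confident foreground predictions of the old model
    # (background assumed to be the last logit; boxes in (x1, y1, x2, y2) format).
    fg_probs = old_logits.softmax(-1)[:, :-1]
    scores, labels = fg_probs.max(dim=-1)
    keep = scores.topk(min(topk, scores.numel())).indices
    pseudo_boxes, pseudo_labels = old_boxes[keep], labels[keep]

    # Resolve conflicts: drop pseudo labels that overlap the new ground truth.
    if gt_boxes.numel() > 0 and pseudo_boxes.numel() > 0:
        overlap = box_iou(pseudo_boxes, gt_boxes).max(dim=1).values
        keep_mask = overlap < iou_thresh
        pseudo_boxes, pseudo_labels = pseudo_boxes[keep_mask], pseudo_labels[keep_mask]

    # Merge with the ground truth for the new categories; the merged set is then
    # matched to the current model's predictions by bipartite matching.
    merged_boxes = torch.cat([gt_boxes, pseudo_boxes], dim=0)
    merged_labels = torch.cat([gt_labels, pseudo_labels], dim=0)
    return merged_boxes, merged_labels
```

The merged boxes and labels would then replace the plain ground truth in the standard bipartite-matching loss of the underlying DETR-style detector.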
We also propose a more realistic variant of the IOD
benchmark protocol. In previous works [12, 44], in each
phase, the incremental detector is allowed to observe all im-
ages that contain a certain type of object. Because images
often contain a mix of object classes, both old and new, this
means that the same images can be observed in different
training phases. This is incompatible with the standard def-
inition of incremental learning [16, 33, 41] where, with the
exception of the examples deliberately stored in the exem-
plar memory, the images observed in different phases do not
repeat. We redefine the IOD protocol to avoid this issue.
We demonstrate CL-DETR by applying it to dif-
ferent transformer-based detectors including Deformable
DETR [59] and UP-DETR [9]. As shown in Fig. 1, our
results on COCO 2017 show that CL-DETR leads to signif-
icant improvements compared to the baseline, boosting AP
by 4.2 percentage points compared to a direct application
of KD and ER to the underlying detector model. We further
study and justify our modelling choices via ablations.
To summarise, we make four contributions: (1) The
DKD loss, which improves knowledge distillation by resolving conflicts between distilled knowledge and new ev-
idence and by ignoring redundant background detections;
(2) A calibration strategy for ER to match the stored ex-
emplars to the training set distribution; (3) A revised IOD
benchmark protocol that avoids observing the same images
in different training phases; (4) Extensive experiments on
COCO 2017, including state-of-the-art results, an in-depth
ablation study, and further visualizations.
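As a rough illustration of the calibration strategy for ER described above, the greedy selection rule below is an assumption of ours, not the paper's procedure: exemplar images are added one at a time so that the per-category label counts in the replay memory track the label distribution of the full training set.

```python
def calibrated_exemplar_selection(image_labels, category_counts, memory_size):
    """image_labels: dict image_id -> list of category ids present in that image.
    category_counts: dict category id -> number of annotations in the training set."""
    total = float(sum(category_counts.values()))
    target = {c: n / total for c, n in category_counts.items()}
    memory, mem_counts = [], {c: 0 for c in category_counts}

    def gap_if_added(cats):
        # L1 distance between the memory label distribution and the target
        counts = dict(mem_counts)
        for c in cats:
            counts[c] = counts.get(c, 0) + 1
        s = float(sum(counts.values())) or 1.0
        return sum(abs(counts[c] / s - target[c]) for c in target)

    for _ in range(memory_size):
        candidates = [i for i in image_labels if i not in memory]
        if not candidates:
            break
        # pick the image whose inclusion keeps the memory closest to the target distribution
        best = min(candidates, key=lambda i: gap_if_added(image_labels[i]))
        memory.append(best)
        for c in image_labels[best]:
            mem_counts[c] = mem_counts.get(c, 0) + 1
    return memory
```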
|
Ling_PanoSwin_A_Pano-Style_Swin_Transformer_for_Panorama_Understanding_CVPR_2023
|
Abstract
In panorama understanding, the widely used equirectan-
gular projection (ERP) entails boundary discontinuity and
spatial distortion, which severely degrade the performance of
conventional CNNs and vision Transformers on panoramas. In this pa-
per, we propose a simple yet effective architecture named
PanoSwin to learn panorama representations with ERP. To
deal with the challenges brought by equirectangular projec-
tion, we explore a pano-style shift windowing scheme and
novel pitch attention to address the boundary discontinu-
ity and the spatial distortion, respectively. Besides, based
on spherical distance and Cartesian coordinates, we adapt
absolute positional embeddings and relative positional bi-
ases for panoramas to enhance panoramic geometry infor-
mation. Realizing that planar image understanding might
share some common knowledge with panorama understand-
ing, we devise a novel two-stage learning framework to
facilitate knowledge transfer from the planar images to
panoramas. We conduct experiments against the state-of-
the-art on various panoramic tasks, i.e., panoramic object
detection, panoramic classification, and panoramic layout
estimation. The experimental results demonstrate the effec-
tiveness of PanoSwin in panorama understanding.
|
1. Introduction
Panoramas are widely used in many real applications,
such as virtual reality, autonomous driving, civil surveil-
lance, etc. Panorama understanding has attracted increas-
ing interest in the research community [5, 27, 34]. Among
these methods, the most popular and convenient represen-
tation of panorama is adopted via equirectangular projec-
tion (ERP), which maps the latitude and longitude of the
spherical representation to horizontal and vertical grid co-
ordinates. However, the inherent omnidirectional nature of panoramas re-
mains a challenge for panorama understanding. Al-
though convolutional neural networks (CNNs) [11, 14, 28]
have shown outstanding performances on planar image un-
derstanding, most CNN-based methods are unsuitable for
panoramas because of two fundamental problems entailed
[Figure 1 panels: a. original windows (distant in ERP but close on the sphere); b. shifted windows; c. rotated panorama with pitch attention.]
Figure 1. (1). Fig. a is how a panoramic image looks, just like a planar world map, where top/bottom regions are connected to the earth's poles; the right side is connected to the left. (2). Our PanoSwin is based on window attention [19]. Fig. a also shows the original window partition in dotted orange, where the two windows in bold orange are separated by equirectangular projection (ERP). (3). Fig. b shows pano-style shift windowing scheme, which brings the two departed regions together. (4). Fig. c shows our pitch attention module, which helps a distorted window to interact with an undistorted one.
by ERP: (1) polar and side boundary discontinuity and (2)
spatial distortion. Specifically, the north/south polar regions
in the spherical representation are closely connected, but the
converted regions cover the whole top/bottom boundaries.
On this account, polar boundary continuity is destroyed by
ERP. Similarly, side boundary continuity is also destroyed
since the left and right sides are split by ERP. Meanwhile,
spatial distortion also severely deforms the image content,
especially in polar regions.
A common solution is to adapt convolution to the spher-
ical space [4, 5, 24, 34]. However, these methods might suf-
fer from high computation costs from the adaptation pro-
cess. Besides, Spherical Transformer [2] and PanoFormer
[22] specially devise patch sampling approaches to remove
panoramic distortion. However, the specially designed
patch sampling approaches might not be feasible for planar
images. In our experiments, we demonstrate that exploiting
planar knowledge can boost the performance of panorama
understanding.
Inspired by Swin Transformer [19], we propose
PanoSwin Transformer to reduce the distortion of
panoramic images, as briefly shown in Fig. 1. To cope with
boundary discontinuity , we explore a pano-style shift win-
dowing scheme (PSW). In PSW, side continuity is estab-
lished by horizontal shift. To establish polar continuity, we
first split the panorama in half and then rotate the right half
counterclockwise. To overcome spatial distortion , we first
rotate the pitch of the panorama by 0.5π. So the polar re-
gions of the original feature map are “swapped” with some
equator regions of the rotated panorama. For each win-
dow in the original panorama, we locate a corresponding
window in the rotated panorama. Then we perform cross-
attention between these two windows. We name the module
pitch attention (PA), which is plug-and-play and can be in-
serted in various backbones. Intuitively, pitch attention can
help a window “know” how it looks without distortion.
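A loose, illustrative reading of the pano-style shift is sketched below: a horizontal roll restores side continuity, and the right half of the map is rotated by 180° and stacked onto the left half so that regions adjacent across the poles land in the same window. The half-map split and the 180° rotation are simplifying assumptions, not the exact PSW construction.

```python
import torch

def pano_style_shift(x, shift_w):
    """x: ERP feature map of shape (B, C, H, W)."""
    # Side continuity: roll along the width so the left/right seam falls inside a window.
    x = torch.roll(x, shifts=shift_w, dims=3)
    # Polar continuity: split into left/right halves; a 180-degree rotation of the
    # right half makes columns that meet at the poles vertically adjacent.
    left, right = x.chunk(2, dim=3)             # each (B, C, H, W/2)
    right_rot = torch.flip(right, dims=[2, 3])  # rotate by 180 degrees
    return torch.cat([right_rot, left], dim=2)  # (B, C, 2H, W/2)
```

Window attention would then be applied on the rearranged map and the rearrangement undone afterwards; pitch attention additionally rotates the sphere by 0.5π and cross-attends between corresponding windows of the two maps.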
To leverage planar knowledge, some works [24, 25]
proposed to make novel panoramic kernels mimic the out-
puts of planar convolution kernels layer by layer. How-
ever, PanoSwin is elaborately designed to be compati-
ble with planar images: PanoSwin can be switched from
pano mode to vanilla swin mode. Let PanoSwin in these
two modes be denoted as PanoSwin_p and PanoSwin_s.
PanoSwin_p/PanoSwin_s can be adopted to process panora-
mas/planar images, details about which will be introduced
in Sec. 3.6. In our paper, PanoSwin is under pano mode
by default. The double-mode feature of PanoSwin makes
it possible to devise a simple two-stage learning paradigm
based on knowledge preservation to leverage planar knowl-
edge: we first pretrain PanoSwin_s with planar images; then
we switch it to PanoSwin_p and train it with a knowledge
preservation (KP) loss and downstream task losses. This
paradigm is able to facilitate transferring common visual
knowledge from planar images to panoramas.
Our main contributions are summarized as follows: (1)
We propose PanoSwin to learn panorama features, in which
Pano-style Shift Windowing scheme (PSW) is proposed to
resolve polar and side boundary discontinuity; (2) we pro-
pose a pitch attention module (PA) to overcome spatial dis-
tortion introduced by ERP; (3) PanoSwin is designed to be
compatible with planar images; therefore, we propose a
KP-based two-stage learning paradigm to transfer common
visual knowledge from planar images to panoramas; (4) we
conduct experiments on various panoramic tasks, including
panoramic object detection, panoramic classification, and
panoramic layout estimation on five datasets. The results
have validated the effectiveness of our proposed method.
|
Li_LoGoNet_Towards_Accurate_3D_Object_Detection_With_Local-to-Global_Cross-Modal_Fusion_CVPR_2023
|
Abstract
LiDAR-camera fusion methods have shown impressive
performance in 3D object detection. Recent advanced multi-modal methods mainly perform global fusion, where image features and point cloud features are fused across
the whole scene. Such practice lacks fine-grained region-level information, yielding suboptimal fusion performance.
In this paper, we present the novel Local-to-Global fusion network (LoGoNet), which performs LiDAR-camera fusion at both local and global levels. Concretely, the Global Fusion (GoF) of LoGoNet is built upon previous literature, while we exclusively use point centroids to more precisely represent the position of voxel features, thus achieving better cross-modal alignment. As to the Local Fusion (LoF), we first divide each proposal into uniform grids and then project these grid centers to the images. The image features around the projected grid points are sampled to be fused with position-decorated point cloud features, maximally utilizing the rich contextual information around the proposals. The Feature Dynamic Aggregation (FDA) module is further proposed to achieve information interaction between these locally and globally fused features, thus producing more
informative multi-modal features. Extensive experiments on both Waymo Open Dataset (WOD) and KITTI datasets show that LoGoNet outperforms all state-of-the-art 3D de-
tection methods. Notably, LoGoNet ranks 1st on Waymo
3D object detection leaderboard and obtains 81.02 mAPH
(L2) detection performance. It is noteworthy that, for the first time, the detection performance on three classes surpasses 80 APH (L2) simultaneously. Code will be available at https://github.com/sankin97/LoGoNet.
*Corresponding author
|
1. Introduction
3D object detection, which aims to localize and classify the objects in the 3D space, serves as an essential perception task and plays a key role in safety-critical autonomous driving [1,20,58]. LiDAR and cameras are two widely used sensors. Since LiDAR provides accurate depth and geometric information, a large number of methods [24,48,63,68,72,73] have been proposed and achieve competitive performance in various benchmarks. However, due to the inherent limitation of LiDAR sensors, point clouds are usually sparse and cannot provide sufficient context to distinguish between distant regions, thus causing suboptimal performance.
To boost the performance of 3D object detection, a natural remedy is to leverage rich semantic and texture information of images to complement the point cloud. As shown in Fig. 1(a), recent advanced methods introduce the global fusion to enhance the point cloud with image features [2,5,7,8,22,23,25,27,34,54,55,60,69,71]. They typically fuse the point cloud features with image features across the whole scene. Although certain progress has been achieved, such practice lacks fine-grained local information. For 3D detection, foreground objects only account for a small percentage of the whole scene. Merely performing global fusion brings marginal gains.
To address the aforementioned problems, we propose a novel Local-to-Global fusion Network, termed LoGoNet, which performs LiDAR-camera fusion at both global and local levels, as shown in Fig. 1(b). Our LoGoNet is comprised of three novel components, i.e., Global Fusion (GoF), Local Fusion (LoF) and Feature Dynamic Aggregation (FDA). Specifically, our GoF module is built on previous literature [8,25,34,54,55] that fuses point cloud features and image features in the whole scene, where we use the point centroid to more accurately represent the position of
[Figure 1 panels (a)/(b): global vs. local fusion of point cloud and image features (spatial location guidance, instance semantic guidance, ROI point cloud/image); panel (c): leaderboard mAPH (L2) from 75.54 up to 81.02 for LoGoNet; * denotes test-time augmentations or ensemble.]
Figure 1. Comparison between (a) global fusion and (b) local fusion. Global fusion methods perform fusion of point cloud features and image features across the whole scene, which lacks fine-grained region-level information. The proposed local fusion method fuses features of two modalities on each proposal, complementary to the global fusion methods. (c) Performance comparison of various methods in Waymo 3D detection leaderboard [51]. Our LoGoNet attains the top 3D detection performance, clearly outperforming all state-of-the-art global fusion based and LiDAR-only detectors. Please refer to Table 1 for a detailed comparison with more methods.
each voxel feature, achieving better cross-modal alignment.
And we use the global voxel features localized by point centroids to adaptively fuse image features through deformable cross-attention [75] and adopt the ROI pooling [9,48] to generate the ROI-grid features.
To provide more fine-grained region-level information for objects at different distances and retain the original position information within a much finer granularity, we propose the Local Fusion (LoF) module with the Position Information Encoder (PIE) to encode position information of the raw point cloud in the uniformly divided grids of each proposal and project the grid centers onto the image plane to sample image features. Then, we fuse sampled image features and the encoded local grid features through the cross-attention [53] module. To achieve more information interaction between globally fused features and locally fused ROI-grid features for each proposal, we propose the FDA module through self-attention [53] to generate more informative multi-modal features for second-stage refinement.
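A minimal sketch of the grid-center projection and image-feature sampling in LoF, as we read it, is given below; the lidar2img projection matrix and the bilinear grid_sample readout are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def sample_image_features(img_feat, grid_centers_3d, lidar2img, img_size):
    """img_feat: (C, Hf, Wf) image feature map; grid_centers_3d: (G, 3) grid
    centers of one 3D proposal; lidar2img: (4, 4) LiDAR-to-image projection;
    img_size: (H, W) of the input image used to normalize coordinates."""
    G = grid_centers_3d.shape[0]
    ones = grid_centers_3d.new_ones(G, 1)
    pts_h = torch.cat([grid_centers_3d, ones], dim=1)           # (G, 4) homogeneous
    uvd = pts_h @ lidar2img.T                                    # project to the image plane
    uv = uvd[:, :2] / uvd[:, 2:3].clamp(min=1e-5)                # (G, 2) pixel coordinates

    H, W = img_size
    uv_norm = torch.stack([uv[:, 0] / W, uv[:, 1] / H], dim=1) * 2 - 1  # to [-1, 1]
    grid = uv_norm.view(1, 1, G, 2)                              # grid_sample layout
    sampled = F.grid_sample(img_feat[None], grid, align_corners=False)  # (1, C, 1, G)
    return sampled[0, :, 0].T                                    # (G, C) per-grid features
```

The per-grid image features returned here would then be fused with the PIE-encoded local grid features via cross-attention.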
Our LoGoNet achieves superior performance on two 3D
detection benchmarks, i.e., Waymo Open Dataset (WOD)
and KITTI datasets. Notably, LoGoNet ranks 1st on Waymo
3D object detection leaderboard and obtains 81.02 mAPH
(L2) detection performance. Note that, for the first time, the detection performance on three classes surpasses 80 APH (L2) simultaneously.
The contributions of our work are summarized as fol-
lows:
• We propose a novel local-to-global fusion network, termed LoGoNet, which performs LiDAR-camera fu-
sion at both global and local levels.
• Our LoGoNet is comprised of three novel components,
i.e., GoF, LoF and FDA modules. LoF provides fine-
grained region-level information to complement GoF. FDA achieves information interaction between globally and locally fused features, producing more informative multi-modal features.
• LoGoNet achieves state-of-the-art performance on
WOD and KITTI datasets. Notably, our Lo-
GoNet ranks 1st on Waymo 3D detection leaderboard
with 81.02 mAPH (L2).
|
Li_Metadata-Based_RAW_Reconstruction_via_Implicit_Neural_Functions_CVPR_2023
|
Abstract
Many low-level computer vision tasks benefit from
using the unprocessed RAW image as input, which retains
the linear relationship between pixel values and scene ra-
diance. Recent works advocate embedding RAW im-
age samples into sRGB images at capture time, and recon-
structing the RAW from the sRGB with this metadata when needed.
However, there still exist some limitations in making full
use of the metadata. In this paper, instead of following
the perspective of sRGB-to-RAW mapping, we reformulate
the problem as mapping the 2D coordinates of the meta-
data to its RAW values conditioned on the corresponding
sRGB values. With this novel formulation, we propose to
reconstruct the RAW image with an implicit neural function,
which achieves significant performance improvement (more
than 10dB average PSNR) only with the uniform sampling.
Compared with most deep learning-based approaches, our
method is trained in a self-supervised way, requiring no
pre-training on different camera ISPs. We perform further
experiments to demonstrate the effectiveness of our method,
and show that our framework is also suitable for the task of
guided super-resolution.
|
1. Introduction
Low-level computer vision tasks benefit a lot from the
scene-referred RAW images [7, 39, 19, 17, 16], which are
rendered to the display-referred standard RGB (sRGB) im-
ages via camera image signal processors (ISPs). Compared
with sRGB images, typical RAW images have the advantages
of linear relationship between pixel values and scene ra-
diance, as well as higher dynamic range. However, RAW
images occupy obviously more memory than the sRGB im-
ages in common formats like JPEG, which is unfavourable
for transferring and sharing. Moreover, since most dis-
play and printing devices are designed for images stored
and shared in sRGB format, it is inconvenient to directly re-
place sRGB with RAW. Consequently, mapping sRGB im-
ages back to their RAW counterparts, which is also called
RAW reconstruction, is regarded as the appropriate way to
[Figure 1 panels: sRGB (input), RIR [25] (PSNR 49.15 dB), SAM [28] (PSNR 46.72 dB), CAM [24] (PSNR 51.12 dB), Ours (PSNR 63.59 dB), RAW (GT).]
Figure 1. As RAW images are beneficial to many low-level computer vision tasks, we aim to reconstruct the RAW image from the corresponding sRGB image with the assistance of extra metadata. In this figure, the reconstructed RAW images are visualized through error maps. As can be seen, our method remarkably outperforms other related methods with the improvement of more than 10 dB PSNR. We owe this performance boost to the effectiveness of our proposed implicit neural function (INF).
utilize the advantage of RAW data [23, 25, 36, 10, 28, 24].
Early RAW reconstruction methods focus on building
standard models to reverse ISPs, which are parameterized
by either explicit functions [4, 18, 14, 5] or neural net-
works [23, 36, 10]. However, these approaches are faced
with the same issue that a parameterized model is only
suitable for a specific ISP. Meanwhile, a series of meth-
ods [25, 26, 28, 24] propose to overcome this problem by
embedding extra metadata into sRGB images at capture
time. For such methods, the main challenge is to improve
the accuracy with lower metadata generation cost. RIR [25]
implements a complex optimization algorithm to estimate the
global mapping parameters as metadata, but suffers from high
computational cost. SAM [28] adopts a uniform sampling
on RAW images to generate the metadata, which is further
replaced with a sampler network by CAM [24].
For the metadata-based methods of SAM [28] and
CAM [24], the embedded RAW samples store partial infor-
mation of the ISP, which helps to reconstruct the RAW images
better; also, by conditioning the reconstruction algorithm on
these metadata, the recovery of the RAW data turns into a
conditional mapping function instead of a function fitted to
a specific case, enabling the potential to achieve better gen-
eralization. Therefore, we adopt this strategy in the paper.
Despite the progress that SAM [28] and CAM [24] have
made, there still exist some limitations for the metadata-
based RAW reconstruction methods. SAM utilizes RBF in-
terpolation [3], the main idea of which is to calculate the
difference between sampling and target points by a kernel
function. However, a fixed kernel function lacks the flexi-
bility to model various sRGB-to-RAW mappings. CAM di-
rectly uses a neural network for reconstruction but requires
pre-training on pairs of sRGB and raw data from different
types of ISPs. Also, we observe that the results of these
methods fail to recover the saturated regions [26] (i.e., pix-
els with any channel value close to the maximum), as is
shown in Figure 1.
To address these limitations, we propose a two-way RAW
reconstruction algorithm based on an implicit neural func-
tion (INF). Previously, RAW reconstruction was formulated
as mapping an sRGB image and the metadata to its RAW im-
age. In this paper, we reformulate the problem as mapping
the 2D coordinates of the metadata to its RAW values con-
ditioned on the corresponding sRGB values, i.e. an implicit
function. With this novel formulation, we can also decom-
pose the problem into two aspects: a mapping function from
the sRGB values to the corresponding raw values; a super-
resolution function to interpolate the RAW image from the
sparse samples. We observe that the super-resolution part
usually exhibits much higher errors, indicating that the latter is
the more challenging task. Accordingly, two branches are de-
signed for each task inside an implicit neural network and
the hyper-parameters for these branches are tuned to accom-
modate the difficulty of the tasks. Also, notice that with this
formulation, the network can be trained in a self-supervised
way, without the need of corresponding RAW images.
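The toy module below illustrates the two-branch implicit formulation described above, with one branch conditioned on the sRGB value and one on the 2D coordinate; the layer widths, the additive fusion, and the three-channel output are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ImplicitRawFunction(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # sRGB-to-RAW mapping branch, conditioned on the sRGB value
        self.color_branch = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # spatial (super-resolution) branch, conditioned on the 2D coordinate
        self.coord_branch = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden, 3))  # 3 channels assumed

    def forward(self, xy, srgb):
        # xy: (N, 2) normalized coordinates, srgb: (N, 3) sRGB values -> (N, 3) RAW estimate
        return self.head(self.coord_branch(xy) + self.color_branch(srgb))
```

Training is self-supervised in the sense described above: the network is fitted to the sparse RAW samples stored as metadata for a given image and then queried at every pixel coordinate.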
Our contribution can be summarized as follows:
• We reformulate the RAW reconstruction problem as
a RAW image approximation problem that learns the
2D-to-RAW mapping of image coordinates to RAW
values conditioned on its sRGB image.
• We decompose the reconstruction into two aspects and
design the implicit neural network accordingly.
• We conduct extensive experiments on different cam-
eras and demonstrate our algorithm outperforms exist-
ing work significantly.
|
Kawahara_Teleidoscopic_Imaging_System_for_Microscale_3D_Shape_Reconstruction_CVPR_2023
|
Abstract
This paper proposes a practical method of microscale
3D shape capturing by a teleidoscopic imaging system. The
main challenge in microscale 3D shape reconstruction is
to capture the target from multiple viewpoints with a large
enough depth-of-field. Our idea is to employ a teleidoscopic
measurement system consisting of three planar mirrors and
a monocentric lens. The planar mirrors virtually define mul-
tiple viewpoints by multiple reflections, and the monocen-
tric lens realizes high magnification with less blur and a
surround view even in closeup imaging. Our contributions
include a structured ray-pixel camera model which han-
dles refractive and reflective projection rays efficiently, an-
alytical evaluations of depth of field of our teleidoscopic
imaging system, and a practical calibration algorithm of
the teleidoscopic imaging system. Evaluations with real im-
ages prove the concept of our measurement system.
|
1. Introduction
Microscale 3D reconstruction has found profound appli-
cations in a wide range of domains including medical imag-
ing, life science, and aquaculture, due to its non-constrained
and non-invasive measurements. The main challenges in
image-based microscopic 3D shape measurement are the shal-
low depth of field and the camera arrangement in the closeup
scenario. Applying a conventional multiple-camera system
designed for human-size capture [14, 30] cannot be a feasi-
ble solution due to limitations on camera placement. Conventional
multiple-mirror systems [33] also inevitably have difficulties
in depth of focus due to differences in their opti-
cal path lengths with varying numbers of bounces.
In this paper, we show that a fuller 3D shape of a mi-
croscale object can be recovered. Our key idea is to employ
a catadioptric imaging system which realizes a practical
closeup multi-view imaging. The point of our design is that
the system has a monocentric front lens like a teleidoscope,
instead of using microscopic system in the camera side.
That is, as shown in Fig. 1(a), we introduce a kaleidoscopic
multi-facet mirror between the front lens and the camera,
where the design realizes a deeper depth-of-field and results
in less blurry imaging. Unlike conventional microscopic
imaging system such as differential phase contrast (DPC)
microscopy [ 4,34] and multi-focus approaches [ 15,24,26],
our method realizes a multi-view capture of the target from
a single physical viewpoint which can contribute to free-
viewpoint rendering, 3D shape reconstruction, and reflec-
tion analysis. Our system also has an advantage over ex-
isting imaging systems that build multiple views behind the
main lens [ 7,21] in a closeup environment. The wide FoV
with a monocentric main lens and virtual multiple views
with mirrors allow closeup and surrounding view capturing
in focus.
We call our system teleidoscopic imaging system and
show that the system can be compactly modeled by a struc-
tured ray-pixel camera model [ 11], which handles refractive
and reflective projection rays efficiently. Based on our ray-
pixel camera model, we derive a practical calibration algo-
rithm that estimates the positions of the monocentric lens
and multi-facet mirror w.r.t. the camera by using a single
reference planar pattern ( i.e., a checkerboard). Given the
calibration parameters, a scene point can then be linearly
triangulated from its teleidoscopic projection in a direct lin-
ear transform (DLT) manner [ 13].
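As a small worked example of the linear triangulation step, the routine below recovers a scene point as the least-squares intersection of the projection rays produced by the ray-pixel model, one ray per mirror reflection; the explicit origin/direction parameterization is an illustrative assumption and not necessarily the exact DLT system used in the paper.

```python
import numpy as np

def triangulate_from_rays(origins, directions):
    """origins, directions: (N, 3) arrays defining one projection ray per observation."""
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, di in zip(origins, d):
        # each ray contributes the constraint (I - d d^T)(X - o) = 0
        P = np.eye(3) - np.outer(di, di)
        A += P
        b += P @ o
    return np.linalg.solve(A, b)  # point closest to all rays in the least-squares sense
```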
We implement our method with an imaging system
consisting of a camera, three planar mirrors, a monocen-
tric lens, and a projector placed outside the mirrors for
structured-light casting. We quantitatively evaluate the
computation cost of numerical projections, the robustness
of the calibration, and the depth of field of our teleido-
scopic imaging system. We also validate the effectiveness
of our method qualitatively on a number of real-world mi-
croscale objects. These results demonstrate that our method
enables holistic and dense reconstruction of microscale ob-
jects. We believe our method expands the avenues of three-
dimensional analysis of microscale objects and scenes in
real world scenarios.
|
Li_DynaMask_Dynamic_Mask_Selection_for_Instance_Segmentation_CVPR_2023
|
Abstract
The representative instance segmentation methods
mostly segment different object instances with a mask of
fixed resolution, e.g., a 28×28 grid. However, a low-
resolution mask loses rich details, while a high-resolution
mask incurs quadratic computation overhead. It is a chal-
lenging task to predict the optimal binary mask for each in-
stance. In this paper, we propose to dynamically select suit-
able masks for different object proposals. First, a dual-level
Feature Pyramid Network (FPN) with adaptive feature ag-
gregation is developed to gradually increase the mask grid
resolution, ensuring high-quality segmentation of objects.
Specifically, an efficient region-level top-down path (r-FPN)
is introduced to incorporate complementary contextual and
detailed information from different stages of image-level
FPN (i-FPN). Then, to alleviate the increase of computa-
tion and memory costs caused by using large masks, we de-
velop a Mask Switch Module (MSM) with negligible compu-
tational cost to select the most suitable mask resolution for
each instance, achieving high efficiency while maintaining
high segmentation accuracy. Without bells and whistles, the
proposed method, namely DynaMask, brings consistent and
noticeable performance improvements over other state-of-
the-arts at a moderate computation overhead. The source
code: https://github.com/lslrh/DynaMask .
|
1. Introduction
Instance segmentation (IS) is an important computer vi-
sion task, aiming at simultaneously predicting the class la-
bel and the binary mask for each instance of interest in an
image. It works as the cornerstone of many downstream
vision applications, such as autonomous driving, video
surveillance, and robotics, to name a few. Recent years
have witnessed the significant advances of deep convolu-
tional neural networks (CNNs) based IS methods with a rich
amount of training data as the rocket fuel [1, 10, 17, 25, 26].
Existing IS methods can be roughly divided into two ma-
*denotes the equal contribution, †denotes the corresponding author.
This work is supported by the Hong Kong RGC RIF grant (R5001-18).
Figure 1. Dynamic mask selection results. Some “hard” sam-
ples with irregular shapes like “person” are assigned larger masks,
while the “easy” ones like “frisbee” are assigned smaller ones.
jor categories: two-stage [6, 10, 17] and single-stage meth-
ods [1, 2, 26]. The former first detects a sparse set of pro-
posals and then performs mask predictions based on them,
while the latter directly predicts classification scores and
masks based on the pre-defined anchors. Generally speak-
ing, two-stage methods could produce higher segmenta-
tion accuracy but cost more computational resources than
single-stage methods.
Among the many recently developed IS methods, the
proposal-based two-stage methods [6, 10, 17], which fol-
low a detection-and-segmentation paradigm, are still the top
performers. These methods need to predict a binary grid
mask of uniform resolution for all proposals, e.g., 28×28,
and then upsample it to the original image size. For in-
stance, Mask R-CNN [10] first generates a group of propos-
als with an object detector and then performs per pixel fore-
ground/background segmentation on the Regions of Inter-
est (RoIs) [24]. Despite achieving promising performance,
the low-resolution mask of Mask R-CNN is insufficient to
capture more detailed information, resulting in unsatisfac-
tory predictions, especially over object boundaries. An in-
tuitive solution to improve the segmentation quality is to
adopt a larger mask. Nevertheless, high-resolution masks
often generate excessive predictions on the smooth regions,
resulting in high computational complexity.
It is difficult to segment different objects in an image
with masks of the same resolution. Objects with irregu-
lar shapes and complicated boundaries demand more fine-
grained masks to predict, referred to as “hard” samples,
such as the “person” in Fig. 1. In comparison, the “easy”
samples with regular shapes and fewer details can be effi-
ciently segmented using coarser masks, like the “frisbee”
in Fig. 1. Inspired by the above observations, we propose
to adaptively adjust the mask size for each instance for bet-
ter IS performance. Specifically, instead of using a uniform
resolution for all instances, we assign high-resolution masks
to “hard” objects and low-resolution masks to “easy” ob-
jects. In this way, the redundant computation for “easy”
samples can be reduced while the accuracy of “hard” sam-
ples can be improved, achieving a balance between accu-
racy and speed. As shown in Tab. 1, however, directly pre-
dicting a high-resolution mask by Mask R-CNN [10] unex-
pectedly degrades the mask average precision (AP). This is at-
tributed to two main reasons. First, the RoI features of larger
objects are extracted from higher pyramid levels, which are
very coarse due to the downsampling operations [20]. Thus
simply increasing the mask size of these RoIs will not bring
extra useful information. Second, the mask head of Mask
R-CNN is oversimplified, so it cannot make more precise
predictions as the mask grid size increases.
To overcome the above mentioned problems, we propose
a dual-level FPN framework to gradually enlarge the mask
grid. Specifically, in addition to traditional image-level FPN
(i-FPN), a region-level FPN (r-FPN) is designed to achieve
coarse-to-fine mask prediction. Wherein we construct infor-
mation flows between i-FPN and r-FPN at different pyra-
mid levels, aiming to incorporate complementary contex-
tual and detailed information from multiple feature levels
for high-quality segmentation. With the dual-level FPN, we
present a data-dependent Mask Switch Module (MSM) with
negligible computational cost to adaptively select masks for
each instance. The overall approach, namely DynaMask, is
evaluated on benchmark instance segmentation datasets to
demonstrate its superiority over state-of-the-arts. The major
contributions of this work are summarized as follows:
A dynamic mask selection method (DynaMask) is pro-
posed to adaptively assign appropriate masks to dif-
ferent instances. Specifically, it assigns low-resolution
masks to “easy” samples for efficiency while assigning
high-resolution masks to “hard” samples for accuracy.
A dual-level FPN framework is developed for IS. We
construct direct information flows from i-FPN to r-
FPN at multiple levels, facilitating complementary in-
formation aggregation from multiple pyramid levels.
Extensive experiments demonstrate that DynaMask
achieves a good trade-off between IS accuracy and effi-
ciency, outperforming many state-of-the-art two-stage
IS methods at a moderate computation overhead.

Method           Resolution  AP    FLOPs
Mask R-CNN [10]  14×14       32.9  0.2G
                 28×28       34.7  0.5G
                 56×56       33.8  2.0G
                 112×112     32.5  8.0G
DynaMask         14×14       32.9  0.2G
                 28×28       36.1  0.6G
                 56×56       37.1  1.0G
                 112×112     37.6  1.4G

Table 1. Mask AP and FLOPs with different mask resolutions. For Mask R-CNN, directly increasing the mask resolution will decrease the mask AP, while for our DynaMask, higher mask resolution results in better performance.
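The module below is a hypothetical sketch, in the spirit of the data-dependent mask selection discussed above, of how a per-instance resolution could be chosen from the candidates in Table 1; the pooled-feature scorer and the Gumbel-softmax relaxation are our assumptions, not the actual MSM design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskSwitchSketch(nn.Module):
    def __init__(self, in_channels=256, resolutions=(14, 28, 56, 112)):
        super().__init__()
        self.resolutions = resolutions
        self.scorer = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_channels, len(resolutions)))

    def forward(self, roi_feats, tau=1.0):
        # roi_feats: (N, C, h, w) RoI features; returns a one-hot choice per instance
        logits = self.scorer(roi_feats)
        if self.training:
            # differentiable discrete selection (Gumbel-softmax relaxation, assumed)
            return F.gumbel_softmax(logits, tau=tau, hard=True)
        return F.one_hot(logits.argmax(dim=-1), len(self.resolutions)).float()
```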
|
Li_DSFNet_Dual_Space_Fusion_Network_for_Occlusion-Robust_3D_Dense_Face_CVPR_2023
|
Abstract
Sensitivity to severe occlusion and large view angles lim-
its the usage scenarios of the existing monocular 3D dense
face alignment methods. The state-of-the-art 3DMM-based
method directly regresses the model's coefficients, under-
utilizing the low-level 2D spatial and semantic information,
which can actually offer cues for face shape and orienta-
tion. In this work, we demonstrate how modeling 3D facial
geometry in image and model space jointly can solve the oc-
clusion and view angle problems. Instead of predicting the
whole face directly, we regress image space features in the
visible facial region by dense prediction first. Subsequently,
we predict our model’s coefficients based on the regressed
feature of the visible regions, leveraging the prior knowl-
edge of whole face geometry from the morphable models to
complete the invisible regions. We further propose a fusion
network that combines the advantages of both the image
and model space predictions to achieve high robustness and
accuracy in unconstrained scenarios. Thanks to the pro-
posed fusion module, our method is robust not only to occlu-
sion and large pitch and roll view angles, which is the benefit of our image space approach, but also to noise and large
yaw angles, which is the benefit of our model space method.
Comprehensive evaluations demonstrate the superior per-
formance of our method compared with the state-of-the-art
methods. On the 3D dense face alignment task, we achieve
3.80% NME on the AFLW2000-3D dataset, which outper-
forms the state-of-the-art method by 5.5%. Code is avail-
able at https://github.com/lhyfst/DSFNet .
|
1. Introduction
3D dense face alignment is an important prob-
lem with many applications, e.g. video conferencing,
AR/VR/metaverse, games, facial analysis, etc. Many meth-
ods have been proposed [8–12, 16, 21, 26, 27, 29, 33, 39, 41,
47, 49]. However, these methods are sensitive to severe oc-
clusion and large view angles [19, 26, 31, 32], limiting the
applicability of 3D dense face alignment to wild images
where occlusion and large view angles often occur.
3D dense face alignment from a single image is an ill-
posed problem, mainly because of the depth ambiguity. The
Figure 2. In this case, only one eye is visible. (a) The 3DMM-based method fails. (b) Face parsing algorithm [43] still works. (c) Our method first (i) predicts reliable geometry in visible region by dense prediction, then (ii) completes the whole face by facial geometry prior, producing a reasonable result. (iii) Viewed in image view.
existing methods [8,10,27,29,33] use a contractive CNN to
predict the coefficients of 3DMM [4] directly. However,
contractive CNNs are essentially ill-suited for this task [18]
for several reasons, including: mixed depth and 2D spatial
information, the loss of low-level 2D spatial information as
a result of the invariance attribute of CNNs, and the mixing of facial
regions and occluders in the process of the contraction.
Severe occlusion and large view angles pose problems
due to the complexity of the many-to-one mapping from 2D
image to 3D shape. In contrast, low-level vision features
are less variant according to geometry transform. There-
fore, dense prediction is essentially more robust to the above
problem in the visible region, because dense prediction re-
lies more on local information, where an example is shown
in Fig. 2 (b). Even if most of the face is masked out and
only the left eye is visible, the face parsing algorithm is still
able to deduce a reasonable parsing result.
Based on this observation, we decentralize the instance-
level 3DMM coefficients regression (i.e., whole-face level)
to pixel-level dense prediction in image space to improve
the robustness against occlusion and large view angles, by
proposing a 3D facial geometry’s 2D image space repre-
sentation. To complete the invisible region due to extra- or
self-occlusion, a novel post-process algorithm is proposed
to convert the dense prediction for the visible face region
into 3D facial geometry that includes the whole face area.
Fig. 2 (c) shows that our image space prediction recovers
reasonable results only seeing one eye, while the SOTA
method fails to produce a reasonable result.
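A simplified, hypothetical sketch of this completion idea is given below: the dense image-space prediction constrains only the visible vertices, and a least-squares fit of the morphable-model coefficients on those vertices lets the prior fill in the invisible region. This is one plausible reading of the post-processing step, not the paper's algorithm.

```python
import numpy as np

def complete_face_from_visible(mean_shape, basis, visible_idx, visible_pred):
    """mean_shape: (3V,) mean face; basis: (3V, K) shape basis;
    visible_idx: indices of the coordinates covered by the dense prediction;
    visible_pred: predicted values at those coordinates."""
    A = basis[visible_idx]                      # rows of the basis for visible coordinates
    b = visible_pred - mean_shape[visible_idx]  # residual w.r.t. the mean face
    coeff, *_ = np.linalg.lstsq(A, b, rcond=None)  # fit coefficients on the visible region
    return mean_shape + basis @ coeff           # the prior completes the whole face
```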
We further compare the robustness and accuracy be-
tween the image space prediction with the model space
prediction that directly regresses 3DMM’s coefficients, and
discover that there is a complementary relationship between
these two spaces. Thus, we propose a dual space fusion
network (DSFNet) that makes predictions in the image and model
spaces with a two-branch architecture. With the fusion module, our DSFNet effectively combines the advantages
of both spaces. In summary, the main contributions of this
paper are:
• We propose a novel 3D facial geometry’s 2D im-
age space representation, followed by a novel post-
processing algorithm. It achieves robust 3D dense face
alignment to occlusion and large view angles.
• We introduce a fusion network, which combines the
advantages of both the image and model space predic-
tions to achieve high robustness and accuracy in un-
constrained scenarios.
• On the 3D dense face alignment task, we achieve
3.80% NME on AFLW2000-3D dataset, which out-
performs the state-of-the-art method by 5.5% .
|
Kim_Sampling_Is_Matter_Point-Guided_3D_Human_Mesh_Reconstruction_CVPR_2023
|
Abstract
This paper presents a simple yet powerful method for
3D human mesh reconstruction from a single RGB image.
Most recently, the non-local interactions of the whole mesh
vertices have been effectively estimated in the transformer
while the relationship between body parts also has begun
to be handled via the graph model. Even though those ap-
proaches have shown remarkable progress in 3D hu-
man mesh reconstruction, it is still difficult to directly infer
the relationship between features, which are encoded from
the 2D input image, and 3D coordinates of each vertex. To
resolve this problem, we propose to design a simple fea-
ture sampling scheme. The key idea is to sample features
in the embedded space by following the guide of points,
which are estimated as projection results of 3D mesh ver-
tices (i.e., ground truth). This helps the model to concen-
trate more on vertex-relevant features in the 2D space, thus
leading to the reconstruction of the natural human pose.
Furthermore, we apply progressive attention masking to
precisely estimate local interactions between vertices even
under severe occlusions. Experimental results on bench-
mark datasets show that the proposed method efficiently im-
proves the performance of 3D human mesh reconstruction.
The code and model are publicly available at: https:
//github.com/DCVL-3D/PointHMR_release .
|
1. Introduction
The goal of 3D human mesh reconstruction is to esti-
mate 3D coordinates of points, which make up the human
body surface. Since the high-quality 3D human model has
been consistently required for various immersive applica-
tions, many studies have devoted considerable efforts to
accurately reconstruct the 3D human mesh. In the early
stage of this field, complex optimization techniques were
*equal contribution
†corresponding author
Figure 1. (a) Traditional process of feature extraction for estimat-
ing 3D coordinates. (b) Vertex-relevant feature extraction process
based on the proposed point-guided sampling method for estimat-
ing 3D coordinates.
adopted to generate the 3D human model based on the re-
lationship between multiple scenes, which are acquired by
using stereo or multiple-view camera systems. Recently,
owing to the great success of deep learning, the problem of
3D human mesh reconstruction now can be resolved only
with a single RGB image, thus the majority has begun to
develop compact network architectures and efficient train-
ing strategies. Even though such deep learning-based ap-
proaches have shown significant progress in 3D human
mesh reconstruction, this task is still challenging due to se-
vere occlusions by diverse human poses and depth ambigu-
ities by the monocular setting.
Deep learning-based approaches can be divided into
two main groups: model-based and model-free methods.
In the former, most methods aim to estimate shape and
pose parameters of the skinned multi-person linear (SMPL)
model [24], which is capable of yielding the whole vertices
via these two simple factors, thus most widely employed in
this field. Traditional encoder-decoder architectures, which
are mostly composed of stacked convolutional layers, are
sufficient to conduct the regression for estimating those pa-
rameters. Despite their great performance, model-based
methods have the obvious shortcoming, i.e., reconstruction
results are limited to the pre-defined types of human body
models. On the other hand, model-free methods have at-
tempted to directly infer 3D coordinates of mesh vertices
from input features without using any specific human body
model. Compared to the model-based approach, which ob-
tains the well-defined full mesh by adjusting shape and pose
parameters, the model-free approach needs to estimate 3D
coordinates of whole vertices directly from the network.
Most methods in this category are based on the transformer
to grasp non-local interactions between mesh vertices. The
graph model (e.g., graph convolution) also has been utilized
together to allow for body part relations in a local manner.
One important advantage of the model-free approach is the
flexibility to adapt to other applications, e.g., hand pose es-
timation, without significant changes of the data format and
the training strategy. However, inferring the 3D coordinate
from a single monocular image is still challenging due to
the lack of learning the correspondence between encoded fea-
tures and spatial positions.
In this paper, we propose a simple yet powerful method
for 3D human mesh reconstruction. To this end, we con-
duct feature sampling at vertex-relevant points of the input
image as shown in Fig. 1, which are estimated through the
heatmap decoder trained by projection results of 3D mesh
vertices (i.e., ground truth). These sampled features are sub-
sequently fed into the transformer encoder as the form of
the vertex token (see Fig. 2). In a similar way to [6], we
apply attention masking to the transformer encoder; how-
ever, the difference is that the local connection is defined
with the range of multiple levels through the sequence of
transformer encoders. This progressive attention masking
helps the model understand local relations between vertices
precisely even in occlusions. The main contribution of the
proposed method can be summarized as follows:
• We propose to utilize the correspondence between en-
coded features and vertex positions, which are pro-
jected into the 2D space, via our point-guided fea-
ture sampling scheme. By explicitly indicating such
vertex-relevant features to the transformer encoder, co-
ordinates of the 3D human mesh are accurately esti-
mated.
• Our progressive attention masking scheme helps the
model efficiently deal with local vertex-to-vertex rela-
tions even under complicated poses and occlusions.
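A minimal sketch of the point-guided feature sampling described above is given below; the tensor shapes and the soft-argmax step that produces the 2D points are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def point_guided_sampling(feat_map, points_2d):
    """feat_map: (B, C, H, W) encoder features; points_2d: (B, V, 2) vertex
    locations in normalized [-1, 1] image coordinates (e.g. from a soft-argmax
    over the predicted heatmaps). Returns (B, V, C) vertex tokens."""
    grid = points_2d.unsqueeze(2)                                  # (B, V, 1, 2)
    sampled = F.grid_sample(feat_map, grid, align_corners=False)   # (B, C, V, 1)
    return sampled.squeeze(-1).permute(0, 2, 1)                    # (B, V, C)
```

The resulting tokens are what would be fed to the transformer encoder, with progressive attention masking restricting which vertex pairs may attend to each other at each stage.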
|
Liu_Few-Shot_Non-Line-of-Sight_Imaging_With_Signal-Surface_Collaborative_Regularization_CVPR_2023
|
Abstract
The non-line-of-sight imaging technique aims to recon-
struct targets from multiply reflected light. For most exist-
ing methods, dense points on the relay surface are raster
scanned to obtain high-quality reconstructions, which re-
quires a long acquisition time. In this work, we propose a
signal-surface collaborative regularization (SSCR) frame-
work that provides noise-robust reconstructions with a min-
imal number of measurements. Using Bayesian inference,
we design joint regularizations of the estimated signal, the
3D voxel-based representation of the objects, and the 2D
surface-based description of the targets. To our best knowl-
edge, this is the first work that combines regularizations in
mixed dimensions for hidden targets. Experiments on syn-
thetic and experimental datasets illustrated the efficiency of
the proposed method under both confocal and non-confocal
settings. We report the reconstruction of the hidden targets
with complex geometric structures with only 5×5 confocal
measurements from public datasets, indicating an acceler-
ation of the conventional measurement process by a factor
of 10,000. Besides, the proposed method enjoys low time
and memory complexity with sparse measurements. Our
approach has great potential in real-time non-line-of-sight
imaging applications such as rescue operations and au-
tonomous driving.
|
1. Introduction
The non-line-of-sight (NLOS) imaging technique en-
ables reconstructions of targets out of the direct line of
sight, which is attractive in various applications such as au-
tonomous driving, remote sensing, rescue operations and
medical imaging [1,5,6,10,15,16,19,21,26,33–35,38–40].
A typical scenario of NLOS imaging is shown in Figure 1.
Several points on the visible surface are illuminated by a
laser and the back-scattered light from the target is de-
tected to reconstruct the target. The NLOS detection sys-
tem is confocal if each illumination point is the same with
Figure 1. A typical non-line-of-sight imaging scenario. a) The
time-resolved signals are measured at only 3×3 focal points. b)
The three views of the reconstructed target obtained with the pro-
posed SSCR method.
tem is confocal if each illumination point coincides with the detection point, and non-confocal otherwise. The time-
correlated single-photon counting (TCSPC) technique is ap-
plied in the detection process due to the extremely low pho-
ton intensity after multiple diffuse reflections. In practice, a
single-photon avalanche diode (SPAD) in the Geiger-mode
can be used to record the photon events with time-of-flight
(TOF) information [3]. The first experimental demonstra-
tion of NLOS imaging dates back to 2012, where the targets
are reconstructed with the back-projection (BP) method
[37]. Extensions of this approach include its fast implemen-
tation [2], the filtering technique for reconstruction quality
enhancement [17], and weighting factors for noise reduc-
tion [11].
A number of efficient methods have been designed
for fast reconstructions. The light cone transform (LCT)
method [30] formulates the physical model as a convolu-
tion operator, so that the reconstructions can be obtained us-
ing the Wiener deconvolution method with the fast Fourier
transform. The directional light cone transform (D-LCT)
[42] generalizes the LCT and reconstructs the albedo and
surface normal simultaneously. The method of frequency
wavenumber migration (F-K) [20] formulates the propaga-
tion of light using the wave equation, and also provides a
fast inversion algorithm with the frequency-domain interpo-
lation technique. Whereas the LCT, D-LCT and F-K meth-
ods only work directly in confocal measurement scenarios,
the phasor field (PF) method [23,24,32] converts the NLOS
imaging scenarios to LOS cases and works for the gen-
eral non-confocal setting with low computation complex-
ity. For high-quality and noise-robust reconstructions, the
signal-object collaborative regularization (SOCR) method
can be applied, but brings additional computational cost. In
recent years, deep learning-based methods are also intro-
duced to the field of NLOS imaging [7, 8, 27, 43]. Besides,
advances in hardware extend the distance of NLOS detec-
tion to kilometers [39], or make it possible to reconstruct
targets on the scale of millimeters [38].
Despite these breakthroughs, the trade-off between the
acquisition time and the imaging quality is inevitable. In
the raster scanning mode, the acquisition time is propor-
tional to the number of measurement points with fixed scan-
ning speed. Due to the intrinsic ill-posedness of the NLOS
reconstruction problem [22] and heavy measurement noise
[11], dense measurements are necessary for high quality re-
constructions [20, 23, 30]. The measurement process may
take from seconds to hours, which poses a great challenge
for applications such as autonomous driving, where real-
time reconstruction of the video stream is needed. The ac-
quisition process can be accelerated by reducing the number
of pulses used for each illumination point. In the work [18],
the pulse number that records the first returning photon is
used to reconstruct the target. Another way to reduce the ac-
quisition time is to design array detectors for non-confocal
measurements. For example, the implementation of the
phasor field method with SPAD arrays realizes low-latency
real-time video imaging of the hidden scenes [28]. A third
way to accelerate the NLOS detection process is to reduce
the number of measurement points. It is shown that 16×16
confocal measurements are enough to reconstruct the hid-
den target by incorporating the compressed sensing tech-
nique [41].
In this paper, we study the randomness in the photon de-
tection process of NLOS scenarios and propose an imag-
ing method that deals with a very limited number of spatial
measurements, which we term the few-shot NLOS detec-
tion scenarios.
Figure 2. The least-squares solution of the statue with 3×3 con-
focal measurements [20]. The target cannot be identified even
though its simulated signal matches the measurements well (see
Fig. 3). Strong regularizations are needed to reconstruct the target.
See also Fig. 6 for a comparison.
Figure 3. Comparisons of the measured data and the simulated
data of the least-squares solution for the instance of the statue [20].
The measured signals are shown in black. The simulated data of
the least-squares solution are shown in red. The shapes of the
signals are very close to each other.
We design joint regularizations of the esti-
mated signal, the 3D voxel-based representation of the ob-
jects, and the 2D surface-based description of the targets,
which leads to faithful reconstruction results. The main
contributions of this work are as follows.
• We propose a signal-surface collaborative regulariza-
tion (SSCR) framework for few-shot non-line-of-sight
reconstructions, which works under both confocal and
non-confocal settings.
• We report the reconstruction of the hidden targets with
complex geometric structures with only 5×5 confo-
cal measurements from public datasets, indicating an
acceleration of the conventional measurement process
by a factor of 10,000.
|
Ko_MELTR_Meta_Loss_Transformer_for_Learning_To_Fine-Tune_Video_Foundation_CVPR_2023
|
Abstract
Foundation models have shown outstanding perfor-
mance and generalization capabilities across domains.
Since most studies on foundation models mainly focus on
the pretraining phase, a naive strategy to minimize a sin-
gle task-specific loss is adopted for fine-tuning. However,
such fine-tuning methods do not fully leverage other losses
that are potentially beneficial for the target task. There-
fore, we propose MEta Loss TRansformer (MELTR), a
plug-in module that automatically and non-linearly com-
bines various loss functions to aid learning the target task
via auxiliary learning. We formulate the auxiliary learn-
ing as a bi-level optimization problem and present an ef-
ficient optimization algorithm based on Approximate Im-
plicit Differentiation (AID). For evaluation, we apply our
framework to various video foundation models (UniVL,
Violet and All-in-one), and show significant performance
gain on all four downstream tasks: text-to-video retrieval,
video question answering, video captioning, and multi-
modal sentiment analysis. Our qualitative analyses demon-
strate that MELTR adequately ‘transforms’ individual loss
functions and ‘melts’ them into an effective unified loss.
Code is available at https://github.com/mlvlab/MELTR.
|
1. Introduction
Large-scale models trained on a huge amount of data
have gained attention due to their adaptability to a wide
range of downstream tasks. As introduced in [1], deep
learning models with such generalizability are referred to
as foundation models. In recent years, several foundation
models for various domains have been proposed ( e.g., [2,3]
for natural language processing, [4, 5] for images and lan-
guage, and [6–8] for videos), and they mainly focus on pre-
training the model, often with multiple pretext tasks.
On the other hand, strategies for fine-tuning on downstream
tasks are less explored. For instance, a recently proposed
video foundation model UniVL [7] is pretrained with a lin-
ear combination of several pretext tasks such as text-video
alignment, masked language/frame modeling, and caption
generation. However, like other domains, fine-tuning is
simply performed by minimizing a single target loss. Other
potentially beneficial pretext tasks have remained largely
unexplored for fine-tuning.
Auxiliary learning is a natural way to utilize multiple
pretext task losses for learning. Contrary to multi-task
learning that aims for generalization across tasks, auxiliary
learning focuses only on the primary task by taking ad-
vantage of several auxiliary tasks. Most auxiliary learning
frameworks [9,10] manually selected auxiliary tasks, which
require domain knowledge and may not always be benefi-
cial for the primary task. To automate task selection, meta
learning was integrated into auxiliary learning [11–13].
Here, the model learns to adaptively leverage multiple aux-
iliary tasks to assist learning of the primary task. Likewise,
the pretext task losses can be unified into a single auxiliary
loss to be optimized in a way that helps the target down-
stream task.
To this end, we propose Meta Loss Transformer
(MELTR), a plug-in module that automatically and non-
linearly transforms various auxiliary losses into a unified
loss. MELTR, built on Transformers [14], takes the tar-
get task loss as well as pretext task losses as input and
learns their relationship via self-attention. In other words,
MELTR learns to fine-tune a foundation model by combin-
ing the primary task with multiple auxiliary tasks, and this
can be viewed as a meta-learning (or ‘learning-to-learn’)
problem. Similar to meta-learning-based auxiliary learning
frameworks [13,15], this can be formulated as a bi-level op-
timization problem, which generally involves a heavy com-
putational cost due to the second-order derivative and its in-
verse, e.g., the inverse Hessian matrix. To circumvent this,
we present an efficient training scheme that approximates
the inverse Hessian matrix. We further provide empirical
analyses on the time-performance trade-off of various opti-
mization algorithms.
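A rough sketch of such a loss-combining module is given below in PyTorch; the embedding size, head count, two-layer encoder, and softplus output are illustrative choices and do not reproduce the exact MELTR architecture or its AID-based bi-level optimization.

```python
import torch
import torch.nn as nn

class LossTransformer(nn.Module):
    """Embed per-task loss values as tokens, mix them with self-attention,
    and reduce them to a single non-negative auxiliary objective."""
    def __init__(self, n_losses, d_model=32, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(1, d_model)               # scalar loss -> token
        self.task_emb = nn.Parameter(torch.randn(n_losses, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, losses):                           # losses: (n_losses,)
        tok = self.embed(losses.unsqueeze(-1)) + self.task_emb
        tok = self.encoder(tok.unsqueeze(0))             # (1, n_losses, d_model)
        return nn.functional.softplus(self.head(tok).sum())

combiner = LossTransformer(n_losses=4)
losses = torch.tensor([0.7, 1.2, 0.3, 0.9])              # target + 3 pretext losses
unified = combiner(losses)
unified.backward()   # in practice the losses themselves carry gradients
print(float(unified))
```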
To verify the generality of our proposed method, we ap-
ply it to three video foundation models: UniVL [7], Vi-
olet [16], and All-in-one [17]. These foundation models
are originally pretrained with a linear combination of sev-
eral pretext tasks such as text-video alignment, masked lan-
guage/frame modeling, and caption generation. We exper-
iment by fine-tuning on the text-to-video retrieval, video
question answering, video captioning, and multi-modal
sentiment analysis task with five datasets: YouCook2,
MSRVTT, TGIF, MSVD, and CMU-MOSI. For each task
and dataset, our MELTR improves both previous foun-
dation models and task-specific models by large margins.
Furthermore, our extensive qualitative analyses and abla-
tion studies demonstrate that MELTR effectively learns to
non-linearly combine pretext task losses, and adaptively re-
weights them for the target downstream task.
To sum up, our contributions are threefold:
• We propose MEta Loss TRansformer (MELTR),
a novel fine-tuning framework for video foundation
models. We also present an efficient optimization al-
gorithm to alleviate the heavy computational cost of
bi-level optimization.
• We apply our framework to three video foundation
models in four downstream tasks on five benchmark
video datasets, where MELTR significantly outper-
forms the baselines fine-tuned with single-task and
multi-task learning schemes.
• We provide in-depth qualitative analyses on how
MELTR non-linearly transforms individual loss func-
tions and combines them into an effective unified loss
for the target downstream task.
|
Li_ACSeg_Adaptive_Conceptualization_for_Unsupervised_Semantic_Segmentation_CVPR_2023
|
Abstract
Recently, self-supervised large-scale visual pre-training
models have shown great promise in representing pixel-
level semantic relationships, significantly promoting the de-
velopment of unsupervised dense prediction tasks, e.g., un-
supervised semantic segmentation (USS). The extracted re-
lationship among pixel-level representations typically con-
tains rich class-aware information that semantically iden-
tical pixel embeddings in the representation space gather
together to form sophisticated concepts. However, lever-
aging the learned models to ascertain semantically con-
sistent pixel groups or regions in the image is non-trivial
since over-/under-clustering overwhelms the conceptualiza-
tion procedure under various semantic distributions of dif-
ferent images. In this work, we investigate the pixel-level
semantic aggregation in self-supervised ViT pre-trained
models as image Segmentation and propose the Adaptive
Conceptualization approach for USS, termed ACSeg . Con-
cretely, we explicitly encode concepts into learnable proto-
types and design the Adaptive Concept Generator (ACG),
which adaptively maps these prototypes to informative con-
cepts for each image. Meanwhile, considering the scene
complexity of different images, we propose the modularity
loss to optimize ACG independent of the concept number
based on estimating the intensity of pixel pairs belonging to
the same concept. Finally, we turn the USS task into clas-
sifying the discovered concepts in an unsupervised manner.
Extensive experiments with state-of-the-art results demon-
strate the effectiveness of the proposed ACSeg.
|
1. Introduction
Semantic segmentation is one of the primary tasks in
computer vision, which has been widely used in many do-
mains, such as autonomous driving [7, 14] and medical
*Corresponding author. Project page: https://lkhl.github.io/ACSeg.
(a) Under-clustering. (b) Over-clustering. (c) Our Adaptive Conceptualization.
Figure 1. Comparison between existing methods and our adaptive
conceptualization on finding underlying “concepts” in the pixel-
level representations produced by a pre-trained model. While
under-clustering just focuses on a single object and over-clustering
splits objects, our adaptive conceptualization processes different
images adaptively through updating the initialized prototypes with
the representations for each image.
imaging [12, 24, 41]. With the development of deep learn-
ing and the increasing amount of data [7, 11, 28, 57], uplift-
ing performance has been achieved on this task by optimiz-
ing deep neural networks with pixel-level annotations [29].
However, large-scale pixel-level annotations are expensive
and laborious to obtain. Different kinds of weak supervision
have been explored to achieve label efficiency [37], e.g.,
image-level [1,49], scribble-level [27], and box-level super-
vision [35]. More than this, some methods also achieve se-
mantic segmentation without relying on any labels [19, 20],
namely unsupervised semantic segmentation (USS).
Early approaches for USS are based on pixel-level self-
supervised representation learning by introducing cross-
view consistency [6,20], edge detection [19,56], or saliency
prior [43]. Recently, the self-supervised ViT [4] provides a
new paradigm for USS due to its property of containing se-
mantic information in pixel-level representations. We make
it more intuitive through Figure 1, which shows that in the
representation space of an image, the pixel-level representa-
tions produced by the self-supervised ViT contain underly-
ing clusters. When projecting these clusters into the image,
they become semantically consistent groups of pixels or re-
gions representing “concepts”.
In this work, we aim to achieve USS by accurately ex-
tracting and classifying these “concepts” in the pixel rep-
resentation space of each image. Unlike the previous at-
tempts which only consider foreground-background parti-
tion [40, 44, 48] or divide each image into a fixed number
of clusters [18, 32], we argue that it is crucial to treat
different images distinctly due to the complexity of
various scenarios (Figure 1). We thus propose the Adaptive
Conceptualization for unsupervised semantic Segmentation
(ACSeg), a framework that finds these underlying concepts
adaptively for each image and achieves USS by classifying
the discovered concepts in an unsupervised manner.
To achieve conceptualization, we explicitly encode con-
cepts to learnable prototypes and adaptively update them for
different images by a network, as shown in Figure 2. This
network, named as Adaptive Concept Generator (ACG), is
implemented by iteratively applying scaled dot-product at-
tention [45] on the prototypes and pixel-level representa-
tions in the image to be processed. Through such a struc-
ture, the ACG learns to project the initial prototypes to the
concept in the representation space depending on the in-
put pixel-level representations. Then the concepts are ex-
plicitly presented in the image as different regions by as-
signing each pixel to the nearest concept in the representa-
tion space. The ACG is end-to-end optimized without any
annotations by the proposed modularity loss. Specifically,
we construct an affinity graph on the pixel-level representa-
tions and use the connection relationship of two pixels in the
affinity graph to adjust the strength of assigning two pixels
to the same concept, motivated by the modularity [34].
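The prototype-update-and-assign scheme can be pictured with the simplified sketch below; the single unprojected attention head, the residual update, and the cosine-similarity assignment are assumptions for brevity and omit the modularity loss and the exact ACG design.

```python
import torch
import torch.nn.functional as F

def update_prototypes(prototypes, pixels, n_iter=3):
    """prototypes: (K, d) learnable concepts, pixels: (N, d) ViT features."""
    d = prototypes.shape[-1]
    for _ in range(n_iter):
        attn = torch.softmax(prototypes @ pixels.T / d ** 0.5, dim=-1)  # (K, N)
        prototypes = prototypes + attn @ pixels       # residual update from pixels
    return prototypes

def assign_concepts(prototypes, pixels):
    sim = F.normalize(pixels, dim=-1) @ F.normalize(prototypes, dim=-1).T
    return sim.argmax(dim=-1)                         # (N,) concept id per pixel

pixels = torch.randn(28 * 28, 384)                    # e.g. 28x28 patch tokens
prototypes = torch.randn(8, 384, requires_grad=True)  # 8 candidate concepts
concepts = assign_concepts(update_prototypes(prototypes, pixels), pixels)
print(concepts.shape, concepts.unique().numel(), "concepts received pixels")
```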
As the main part of ACSeg, the ACG achieves precise
conceptualization for different images due to its adaptive-
ness, which is reflected in two aspects: Firstly, it can adap-
tively operate on pixel-level representations of different im-
ages thanks to the dynamic update structure. Secondly, the
training objective does not enforce the number of concepts,
resulting in an adaptive number of concepts for different im-
ages. With these properties, we obtain accurate partitions for
images with different scene complexity via the concepts
produced by the ACG, as shown in Figure 1(c). Therefore,
in ACSeg, the semantic segmentation of an image can fi-
nally be achieved by matting the corresponding regions in
the image and classifying them with the help of powerful
image-level pre-trained models.
Figure 2. Intuitive explanation for the basic idea of the ACG.
The concepts are explicitly encoded to learnable prototypes and
dynamically updated according to the input pixel-level representa-
tions. After update, the pixels are assigned to the nearest concept
in the representation space.
For evaluation, we apply ACSeg on commonly used
semantic segmentation datasets, including PASCAL VOC
2012 [11] and COCO-Stuff [20, 28]. The experimental
results show that the proposed ACSeg surpasses previous
methods on different settings of unsupervised semantic seg-
mentation tasks and achieves state-of-the-art performance
on the PASCAL VOC 2012 unsupervised semantic segmen-
tation benchmark without post-processing and re-training.
Moreover, the visualization of the pixel-level representa-
tions and the concepts shows that the ACG is applicable for
decomposing images with various scene complexity. Since
the ACG is fast to converge without learning new represen-
tations and the concept classifier is employed in a zero-shot
manner, we draw the proposed ACSeg as a generalizable
method which is easy to modify and adapt to a wide range
of unsupervised image understanding.
|
Lin_Adaptive_Human_Matting_for_Dynamic_Videos_CVPR_2023
|
Abstract
The most recent efforts in video matting have focused
on eliminating trimap dependency since trimap annotations
are expensive and trimap-based methods are less adapt-
able for real-time applications. Despite the latest tripmap-
free methods showing promising results, their performance
often degrades when dealing with highly diverse and un-
structured videos. We address this limitation by introduc-
ingAdaptive Matting for Dynamic Videos, termed AdaM ,
which is a framework designed for simultaneously differen-
tiating foregrounds from backgrounds and capturing alpha
matte details of human subjects in the foreground. Two in-
terconnected network designs are employed to achieve this
goal: (1) an encoder-decoder network that produces alpha
mattes and intermediate masks which are used to guide the
transformer in adaptively decoding foregrounds and back-
grounds, and (2) a transformer network in which long- and
short-term attention combine to retain spatial and tempo-
ral contexts, facilitating the decoding of foreground de-
tails. We benchmark and study our methods on recently
introduced datasets, showing that our model notably im-
proves matting realism and temporal coherence in complex
real-world videos and achieves new best-in-class general-
izability. Further details and examples are available at
https://github.com/microsoft/AdaM .
|
1. Introduction
Video human matting aims to estimate a precise alpha
matte to extract the human foreground from each frame ofan input video. In comparison with image matting [6,10,14,
21, 30, 39, 44, 47], video matting [2, 5, 11, 18, 19, 33, 36, 38]
presents additional challenges, such as preserving spatial
and temporal coherence.
Many different solutions have been put forward for the
video matting problem. A straightforward approach is to
build on top of image matting models [44], which is to im-
plement an image matting approach frame by frame. It may,
however, result in inconsistencies in alpha matte predictions
across frames, which will inevitably lead to flickering ar-
tifacts [42]. On the other hand, top performers leverage
dense trimaps to predict alpha mattes, which is expensive
and difficult to generalize across large video datasets. To
alleviate the substantial trimap limitation, OTVM [35] pro-
posed a one-trimap solution recently. BGM [22, 33] pro-
poses a trimap-free solution, which needs to take an addi-
tional background picture without the subject at the time of
capture. While the setup is less time-consuming than cre-
ating trimaps, it may not work well if used in a dynamic
background environment. The manual prior required by
these methods limits their use in some real-time applica-
tions, such as video conferencing. Lately, more general so-
lutions, e.g., MODNet [16] and RVM [23], have been pro-
posed which involve manual-free matting without auxiliary
inputs. However, in challenging real-world videos, back-
grounds are inherently non-differentiable at some points,
causing these solutions to produce blurry alpha mattes.
It is quite challenging to bring together the benefits of
both worlds, i.e., a manual-free model that produces accu-
rate alpha mattes in realistic videos. In our observation, the
significant challenges can mostly be explained by the inher-
ent unconstrained and diverse nature of real-world videos.
Figure 2. Qualitative sample results of MODNet [16], RVM [23] and the
proposed AdaM on real video scenes.
As a camera moves in unstructured and complex scenes,
foreground colors can resemble passing objects in the back-
ground, which makes it hard to separate foreground subjects
from cluttered backgrounds. This might result in blurred
foreground boundaries or revealing backgrounds behind the
subjects, as shown in Fig. 2. MODNet [16] and RVM
[23] are both nicely designed models with auxiliary-free ar-
chitectures to implicitly model backgrounds. In complex
scenes, however, models without guidance may experience
foreground-background confusion, thereby degrading the
alpha matte accuracy.
In this paper, we aim to expand the applicability of
the matting architecture such that it can serve as a reli-
able framework for human matting in real-world videos.
Our method does not require manual efforts (e.g., manu-
ally annotated trimaps or pre-captured backgrounds). The
main idea is straightforward: understanding and structur-
ing background appearance can make the underlying mat-
ting network easier to render high-fidelity foreground alpha
mattes in dynamic video scenes. Toward this goal, an in-
terconnected two-network framework is employed: (1) an
encoder-decoder network with skip connections produces
alpha mattes and intermediate masks that guide the trans-
former network in adaptively enhancing foregrounds and
backgrounds, and (2) a transformer network with long- and
short-term attention that retain both spatial and temporal
contexts, enabling foreground details to be decoded. Based
on a minimal-auxiliary strategy, the transformer network
obtains an initial mask from an off-the-shelf segmenter for
coarse foreground/background (Fg/Bg) guidance, but the
decoder network predicts subsequent masks automatically
in a data-driven manner. The proposed method seeks to pro-
duce accurate alpha mattes in challenging real-world envi-
ronments while eliminating the sensitivities associated with
handling an ill-initialized mask. Compared to the recently
published successes in video matting study, our main con-
tributions are as follows:
• We propose a framework for human matting with unified handling of complex unconstrained videos without
requiring manual efforts. The proposed method pro-
vides a data-driven estimation of the foreground masks
to guide the network to distinguish foregrounds and
backgrounds adaptively.
• Our network architecture and training scheme have
been carefully designed to take advantage of both long-
and short-range spatial and motion cues. It reaches
top-tier performance on the VM [23] and CRGNN [42]
benchmarks.
|
Liu_SCOTCH_and_SODA_A_Transformer_Video_Shadow_Detection_Framework_CVPR_2023
|
Abstract
Shadows in videos are difficult to detect because of the
large shadow deformation between frames. In this work,
we argue that accounting for shadow deformation is essen-
tial when designing a video shadow detection method. To
this end, we introduce the shadow deformation attention
trajectory ( SODA ), a new type of video self-attention mod-
ule, specially designed to handle the large shadow defor-
mations in videos. Moreover, we present a new shadow
contrastive learning mechanism ( SCOTCH ) which aims at
guiding the network to learn a unified shadow represen-
tation from massive positive shadow pairs across differ-
ent videos. We demonstrate empirically the effectiveness
of our two contributions in an ablation study. Furthermore,
we show that SCOTCH and SODA significantly outperform
existing techniques for video shadow detection. Code is
available at the project page: https://lihaoliu-cambridge.github.io/scotch_and_soda/
|
1. Introduction
Shadow is an inherent part of videos, and they have
an adverse effect on a wide variety of video vision tasks.
Therefore, the development of robust video shadow detec-
tion techniques, to alleviate those negative effects, is of
great interest for the community. Video shadow detection
is usually formulated as a segmentation problem for videos,
however and due to the nature of the problem, shadow de-
tection greatly differs from other segmentation tasks such
as object segmentation. For inferring the presence of shad-
ows in an image, one has to account for the global content
information such as light source orientation, and the pres-
ence of objects casting shadows. Importantly, in a given
video, shadows considerably change appearance (deforma-
tion) from frame to frame due to light variation and object
motion. Finally, shadows can span over different back-
grounds over different frames, making approaches relying
on texture information unreliable.
Figure 1. Overview of our SCOTCH and SODA framework. A MiT
encoder extracts multi-scale features for each frame of the video
(stage 1). Then, our deformation attention trajectory is applied
to features individually to incorporate temporal information (stage
2). Finally, an MLP layer combines the multi-scale information to
generate the segmentation masks (stage 3). The model is trained
to contrast shadow and non-shadow features, by minimising our
shadow contrastive loss with massive positive shadow pairs.
Particularly, video shadow detection methods can be
broadly divided into two main categories. The first category
refers to image shadow detection (ISD) [9,15,35,36,43,46].
This family of techniques computes the shadow detec-
tion frame by frame. Although computationally saving,
these methods are incapable of handling temporal informa-
tion. The second category refers to video shadow detection
(VSD) [6, 9, 14, 16, 25]. These methods offer higher per-
formance as the analysis involves spatial-temporal informa-
tion. Hence, our main focus is video shadow detection.
State-of-the-art video shadow detection methods rely on
deep neural networks, which are trained on large annotated
datasets. Specifically, those methods are composed of three
parts: (i) a feature extraction network that extracts spatial
features for each frame of the video: (ii) a temporal ag-
gregation mechanism [6, 14] enriching spatial features with
information from different frames; and (iii) a decoder, that
maps video features to segmentation masks. Additionally,
some works enforce consistency between frames prediction
by using additional training criterion [9,25]. We retain from
these studies that the design of the temporal aggregation
mechanism and the temporal consistency loss is crucial to
the performance of a video shadow detection network, and
we will investigate both of those aspects in this work.
The current temporal aggregation mechanisms available
in the literature were typically designed for video tasks
such as video action recognition, or video object segmenta-
tion. Currently, the most widely used temporal aggregation
mechanism is based on a variant of the self-attention mech-
anism [1, 29, 32, 40, 41]. Recently, trajectory attention [29]
has been shown to provide state-of-the-art results on video
processing. Intuitively, trajectory attention aggregates in-
formation along the object’s moving trajectory, while ignor-
ing the context information, deemed as irrelevant. However,
shadows in videos are subject to strong deformations, mak-
ing them difficult to track, and thus they might cause the
trajectory attention to fail.
In this work, we first introduce the ShadOw Deformation
Attention trajectory ( SODA ), a spatial-temporal aggregation
mechanism designed to better handle the large shadow de-
formations that occur in videos. SODA operates in two steps.
First, for each spatial location, an associated token is com-
puted between the given spatial location and the video,
which contains information in every time-step for the given
spatial location. Second, by aggregating every associated
spatial token, a new token is yielded with enriched spatial
deformation information. Aggregating spatial-location-to-
video information along the spatial dimension helps the net-
work to detect shape changes in videos.
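One loose reading of this two-step aggregation is sketched below with a single head and no learned projections; both simplifications are assumptions, so this is an illustration of the aggregation pattern rather than the exact SODA operator.

```python
import torch

def deformation_attention(x):
    """x: video features (T frames, N spatial tokens, d channels)."""
    T, N, d = x.shape
    tokens = x.reshape(T * N, d)
    # step 1: each space-time location gathers evidence from the whole video
    attn = torch.softmax(tokens @ tokens.T / d ** 0.5, dim=-1)   # (T*N, T*N)
    assoc = (attn @ tokens).reshape(T, N, d)                     # associated tokens
    # step 2: aggregate the associated tokens along the spatial dimension
    w = torch.softmax((assoc * x).sum(-1) / d ** 0.5, dim=1)     # (T, N)
    return torch.einsum('tn,tnd->td', w, assoc)                  # (T, d) per frame

video_feats = torch.randn(4, 14 * 14, 256)        # 4 frames of 14x14 tokens
print(deformation_attention(video_feats).shape)   # torch.Size([4, 256])
```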
Besides, we introduce the Shadow COnTrastive meCH-
anism ( SCOTCH ), a supervised contrastive loss with massive
positive shadow pairs aiming to drive our network to learn
more discriminative features for the shadow regions in dif-
ferent videos. Specifically, in training, we add a contrastive
loss at the coarsest layer of the encoder, driving the fea-
tures from shadow regions close together, and far from the
features from the non-shadow region. Intuitively, this con-
trastive mechanism drives the encoder to learn high-level
representations of shadow, invariant to all the various fac-
tors of shadow variations, such as shape and illumination.
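A minimal version of such a shadow contrastive objective is sketched below; the temperature, the pooled region features, and the symmetric treatment of non-shadow features as their own positive set are assumptions rather than the exact SCOTCH formulation.

```python
import torch
import torch.nn.functional as F

def shadow_contrastive_loss(feats, is_shadow, tau=0.1):
    """feats: (M, d) pooled region features, is_shadow: (M,) boolean labels."""
    z = F.normalize(feats, dim=-1)
    logits = z @ z.T / tau - 1e9 * torch.eye(len(z))       # mask self-similarity
    pos = (is_shadow[:, None] == is_shadow[None, :]).float() - torch.eye(len(z))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-likelihood of all positive (same-label) region pairs
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()

feats = torch.randn(12, 128, requires_grad=True)   # 6 shadow + 6 non-shadow regions
labels = torch.tensor([True] * 6 + [False] * 6)
loss = shadow_contrastive_loss(feats, labels)
loss.backward()
print(float(loss))
```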
In summary, our contributions are as follows:
• We introduce a new video shadow detection frame-
work, in which we highlight:
– SODA, a new type of trajectory attention that har-
monise the features of the different video frames
at each resolution.
– SCOTCH, a contrastive loss that highlights a mas-
sive positive shadow pairs strategy in order to
make our encoder learn more robust high-level
representations of shadows.
• We evaluate our proposed framework on the video
shadow benchmark dataset ViSha [6], and compare
with the state-of-the-art methods. Numerical and vi-
sual experimental results demonstrate that our ap-
proach outperforms, by a large margin, existing ones
on video shadow detection. Furthermore, we provide
an ablation study to further support the effectiveness of
the technical contributions.
|
Karunratanakul_HARP_Personalized_Hand_Reconstruction_From_a_Monocular_RGB_Video_CVPR_2023
|
Abstract
We present HARP (HAnd Reconstruction and Personal-
ization), a personalized hand avatar creation approach that
takes a short monocular RGB video of a human hand as
input and reconstructs a faithful hand avatar exhibiting a
high-fidelity appearance and geometry. In contrast to the
major trend of neural implicit representations, HARP mod-
els a hand with a mesh-based parametric hand model, a
vertex displacement map, a normal map, and an albedo
without any neural components. The explicit nature of our
representation enables a truly scalable, robust, and efficient
approach to hand avatar creation as validated by our ex-
periments. HARP is optimized via gradient descent from
a short sequence captured by a hand-held mobile phone
and can be directly used in AR/VR applications with real-
time rendering capability. To enable this, we carefully de-
sign and implement a shadow-aware differentiable render-
ing scheme that is robust to high degree articulations and
self-shadowing regularly present in hand motions, as well as challenging lighting conditions. It also generalizes to un-
seen poses and novel viewpoints, producing photo-realistic
renderings of hand animations. Furthermore, the learned
HARP representation can be used for improving 3D hand
pose estimation quality in challenging viewpoints. The key
advantages of HARP are validated by the in-depth analyses
on appearance reconstruction, novel view and novel pose
synthesis, and 3D hand pose refinement. It is an AR/VR-
ready personalized hand representation that shows superior
fidelity and scalability.
|
1. Introduction
Advancements in AR/VR devices are introducing a new
reality in which the physical and digital worlds merge. The
human hand is a crucial element for an intimate and in-
teractive experience in these environments, serving as the
primary interface between humans and the digital world.
Therefore, it is essential to capture, reconstruct, and animate
life-like digital hands for AR and VR applications. Without
this capability, the authenticity and practicality of AR/VR
consumer products will always be limited.
Despite its importance, the research into hand avatar cre-
ation has so far been limited. Most works [8, 35, 59] focus
on creating an appearance space on top of a parametric hand
model such as MANO [62]. Such an appearance space pro-
vides a compact way to represent hand texture but is rather
limited in expressivity to handle non-standard textures. The
recent LISA [12] model has emerged as an alternative, us-
ing an implicit function to represent hand geometry and tex-
ture color fields. Training a new identity in LISA, how-
ever, requires a multi-view capturing setup as well as a large
amount of data and computing power. In the nearby fields of
face and body avatar creation, many works that leverage an
implicit function [19,40,41,74] or NeRF-based [42] volume
rendering [39,53,77] have also been recently explored. The
NeRF-based method such as HumanNeRF [77] produces a
convincing novel view synthesis but still shows blurry ar-
tifacts around highly articulated parts and cannot be easily
exported to other applications.
We argue that democratizing hand avatar creation for
AR/VR users requires a method that is (1) accurate : so that
personalized hand appearance and geometry can be faith-
fully reconstructed; (2) scalable : allowing hand avatars to
be obtained using a commodity camera; (3) robust : ca-
pable of handling out-of-distribution appearance and self-
shadows between fingers and palm; and (4) efficient : with
real-time rendering capability.
To this end, we propose HARP, a personalized hand re-
construction method that can create a faithful hand avatar
from a short RGB video captured by a hand-held mobile
phone. HARP leverages a parametric hand model, an ex-
plicit appearance, and a differentiable rasterizer and shader
to reconstruct a hand avatar and environment lighting in
an analysis-by-synthesis manner, without any neural net-
work component . Our observation is that human hands are
highly articulated. The appearance changes of observed
hands in a captured sequence can be dramatic and largely
attributed to articulations and light interaction. Learning
neural representations, such as implicit texture fields [12] or
volume-based representations like NeRF [70], is vulnerable
to the over-fitting to a short monocular training sequence
and can hardly generalize well to sophisticated and dexter-
ous hand movements. By properly disentangling geometry,
appearance, and self-shadow with explicit representations,
HARP can significantly improve the reconstruction quality
and generate life-like renderings on novel views and novel
animations performing highly articulated motions. Further-
more, the nature of the explicit representation allows the
results from HARP to be conveniently exported to standard
graphics applications.
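The analysis-by-synthesis loop can be pictured as follows; `differentiable_render` is a hypothetical stand-in for the mesh rasterizer and shadow-aware shader, and all parameter shapes, learning rates, and regularization weights are illustrative assumptions rather than HARP's actual settings.

```python
import torch

def differentiable_render(pose, displacement, albedo, normal_map, light):
    # hypothetical placeholder: a real version would pose a parametric hand mesh,
    # apply the displacement/normal/albedo maps, and rasterize with shadows
    return (pose.mean() + displacement.mean() + albedo.mean()
            + normal_map.mean() + light.mean()) * torch.ones(3, 256, 256)

frames = [torch.rand(3, 256, 256) for _ in range(8)]       # captured RGB frames
poses = torch.zeros(8, 48, requires_grad=True)             # per-frame pose parameters
displacement = torch.zeros(778, 3, requires_grad=True)     # per-vertex displacements
albedo = torch.rand(3, 512, 512, requires_grad=True)       # explicit texture
normal_map = torch.zeros(3, 512, 512, requires_grad=True)
light = torch.rand(9, requires_grad=True)                  # e.g. low-order lighting

opt = torch.optim.Adam([poses, displacement, albedo, normal_map, light], lr=1e-2)
for step in range(50):
    loss = torch.zeros(())
    for t, frame in enumerate(frames):
        pred = differentiable_render(poses[t], displacement, albedo,
                                     normal_map, light)
        loss = loss + (pred - frame).abs().mean()           # photometric L1
    loss = loss + 1e-3 * displacement.pow(2).mean()         # penalize large offsets
    opt.zero_grad()
    loss.backward()
    opt.step()
```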
In summary, the key advantages of HARP are: (1)
HARP is a simple personalized hand avatar creation method that reconstructs high-fidelity appearance and geometry us-
ing only a short monocular video. HARP demonstrates
that an explicit representation with a differentiable raster-
izer and shader is enough to obtain life-like hand avatars.
(2) The hand avatar from HARP is controllable and com-
patible with standard rasterization graphics pipelines allow-
ing for photo-realistic rendering in AR/VR applications. (3)
Moreover, HARP can be used to improve 3D hand pose
estimation in challenging viewpoints. We perform exten-
sive experiments on the tasks of appearance reconstruction,
novel-view-and-pose synthesis, and 3D hand poses refine-
ment. Compared to existing approaches, HARP is more ac-
curate, robust, and generalizable with superior scalability.
|
Li_KERM_Knowledge_Enhanced_Reasoning_for_Vision-and-Language_Navigation_CVPR_2023
|
Abstract
Vision-and-language navigation (VLN) is the task to en-
able an embodied agent to navigate to a remote location
following the natural language instruction in real scenes.
Most of the previous approaches utilize the entire fea-
tures or object-centric features to represent navigable can-
didates. However, these representations are not efficient
enough for an agent to perform actions to arrive the tar-
get location. As knowledge provides crucial information
which is complementary to visible content, in this paper, we
propose a Knowledge Enhanced Reasoning Model (KERM)
to leverage knowledge to improve agent navigation ability.
Specifically, we first retrieve facts (i.e., knowledge described
by language descriptions) for the navigation views based on
local regions from the constructed knowledge base. The re-
trieved facts range from properties of a single object (e.g.,
color, shape) to relationships between objects (e.g., action,
spatial position), providing crucial information for VLN. We
further present the KERM which contains the purification,
fact-aware interaction, and instruction-guided aggregation
modules to integrate visual, history, instruction, and fact
features. The proposed KERM can automatically select and
gather crucial and relevant cues, obtaining more accurate
action prediction. Experimental results on the REVERIE,
R2R, and SOON datasets demonstrate the effectiveness of
the proposed method. The source code is available at
https://github.com/XiangyangLi20/KERM .
|
1. Introduction
Vision-and-language navigation (VLN) [3,12,23,24,36,
38] is one of the most attractive embodied AI tasks, where
agents should be able to understand natural language in-
structions, perceive visual content in dynamic 3D environ-
ments, and perform actions to navigate to the target loca-
tion.
Figure 1. Illustration of knowledge related navigable candidates,
which provides crucial information such as attributes and relation-
ships between objects for VLN. Best viewed in color.
Most previous methods [9,15,22,31,34] depend on se-
quential models ( e.g., LSTMs and Transformers) to contin-
uously receive visual observations and align them with the
instructions to predict actions at each step. More recently,
transformer-based architectures [5, 7, 25] which make use
of language instructions, current observations, and histori-
cal information have been widely used.
Most of the previous approaches utilize the entire fea-
tures [5,12,13,25] or object-centric features [1,7,10,20] to
represent navigable candidates. For example, Qi et al. [22]
and Gao et al. [10] encode discrete images within each
panorama with detected objects. Moudgil et al. [20] utilize
both object-level and scene-level features to represent visual
observations. However, these representations are not effi-
cient enough for an agent to navigate to the target location.
For example, as shown in Figure 1, there are three candi-
dates. According to the instruction and the current location,
candidate2 is the correct navigation. Based on the entire
features of a candidate view, it is hard to select the correct
one, as candidate2 and candidate3 belong to the same cat-
egory ( i.e.,“dining room”). Meanwhile, it is also hard to
differentiate them from individual objects, as “lamp” and
“light” are the common components for them.
As humans make inferences based on their knowledge [11],
it is important to incorporate knowledge related to navigable
candidates for VLN tasks. First, knowledge provides cru-
cial information which is complementary to visible content.
In addition to visual information, high-level abstraction of
the objects and relationships contained by knowledge pro-
vides essential information. Such information is indispens-
able to align the visual objects in the view image with the
concepts mentioned in the instruction. As shown in Fig-
ure 1, with the knowledge related to candidate2 ( i.e.,<light
hanging over kitchen island >), the agent is able to navigate
to the target location. Second, the knowledge improves the
generalization ability of the agent. As the alignment be-
tween the instruction and the navigable candidate is learned
in limited-seen environments, leveraging knowledge ben-
efits the alignment in the unseen environment, as there is
no specific regularity for target object arrangement. Third,
knowledge increases the capability of VLN models. As
rich conceptual information is injected into VLN models,
the correlations among numerous concepts are learned. The
learned correlations are able to benefit visual and language
alignment, especially for tasks with high-level instructions.
In this work, we incorporate knowledge into the VLN
task. To obtain knowledge for view images, facts ( i.e.,
knowledge described by language descriptions) are re-
trieved from the knowledge base constructed on the Vi-
sual Genome dataset [16]. The retrieved facts by CLIP
[26] provide rich and complementary information for vi-
sual view images. And then, a knowledge enhanced rea-
soning model (KERM) which leverages knowledge for suf-
ficient interaction and better alignment between vision and
language information is proposed. Especially, the proposed
KERM consists of a purification module, a fact-aware inter-
action module, and an instruction-guided aggregation mod-
ule. The purification model aims to extract key informa-
tion in the fact representations, the visual region representa-
tions, and the historical representations respectively guided
by the instruction. The fact-aware interaction module al-
lows visual and historical representations to obtain the in-
teraction of the facts with cross-attention encoders. And
the instruction-guided aggregation module extracts the most
relevant components of the visual and historical representa-
tions according to the instruction for fusion.
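The fact-retrieval step can be sketched as below; the fact list, embedding dimension, and top-k value are toy placeholders, and the CLIP-style region and text embeddings are assumed to be precomputed rather than obtained through any specific CLIP API.

```python
import torch
import torch.nn.functional as F

facts = ["lamp to right of sofa", "light hanging over kitchen island",
         "sofa next to wall", "wood countertop", "ceiling recessed light"]
fact_emb = F.normalize(torch.randn(len(facts), 512), dim=-1)   # stand-in text embeddings
region_emb = F.normalize(torch.randn(4, 512), dim=-1)          # 4 regions of one view

def retrieve_facts(region_emb, fact_emb, k=2):
    sim = region_emb @ fact_emb.T                              # cosine similarities
    return sim.topk(k, dim=-1).indices                         # (n_regions, k)

for r, idx in enumerate(retrieve_facts(region_emb, fact_emb)):
    print(f"region {r}:", [facts[i] for i in idx])
```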
We conduct the experiments on three VLN datasets, i.e.,
the REVERIE [24], SOON [36], and R2R [3]. Our ap-
proach outperforms state-of-the-art methods on all splits of these datasets under most metrics. Further experimental
analysis demonstrates the effectiveness of our method.
In summary, we make the following contributions:
• We incorporate region-centric knowledge to compre-
hensively depict navigation views in VLN tasks. For
each navigable candidate, the retrieved facts ( i.e.,
knowledge described by language descriptions) are
complementary to visible content.
• We propose the knowledge enhanced reasoning model
(KERM) to inject fact features into the visual represen-
tations of navigation views for better action prediction.
• We conduct extensive experiments to validate the ef-
fectiveness of our method and show that it outperforms
existing methods with a better generalization ability.
|
Klingner_X3KD_Knowledge_Distillation_Across_Modalities_Tasks_and_Stages_for_Multi-Camera_CVPR_2023
|
Abstract
Recent advances in 3D object detection (3DOD) have
obtained remarkably strong results for LiDAR-based mod-
els. In contrast, surround-view 3DOD models based on
multiple camera images underperform due to the neces-
sary view transformation of features from perspective view
(PV) to a 3D world representation which is ambiguous
due to missing depth information. This paper introduces
X3KD, a comprehensive knowledge distillation framework
across different modalities, tasks, and stages for multi-
camera 3DOD. Specifically, we propose cross-task distil-
lation from an instance segmentation teacher (X-IS) in the
PV feature extraction stage providing supervision without
ambiguous error backpropagation through the view trans-
formation. After the transformation, we apply cross-modal
feature distillation (X-FD) and adversarial training (X-AT)
to improve the 3D world representation of multi-camera
features through the information contained in a LiDAR-
based 3DOD teacher. Finally, we also employ this teacher
for cross-modal output distillation (X-OD), providing dense
supervision at the prediction stage. We perform extensive
ablations of knowledge distillation at different stages of
multi-camera 3DOD. Our final X3KD model outperforms
previous state-of-the-art approaches on the nuScenes and
Waymo datasets and generalizes to RADAR-based 3DOD.
Qualitative results video at https://youtu.be/1do9DPFmr38.
|
1. Introduction
3D object detection (3DOD) is an essential task in vari-
ous real-world computer vision applications, especially au-
tonomous driving. Current 3DOD approaches can be cate-
gorized by their utilized input modalities, e.g., camera im-
ages [28, 40, 46] or LiDAR point clouds [25, 55, 60], which
dictates the necessary sensor suite during inference.
Figure 1. While previous approaches considered multi-camera
3DOD in a standalone fashion or with depth supervision, we pro-
pose X3KD, a knowledge distillation framework using cross-
modal and cross-task information by distilling information from
LiDAR-based 3DOD and instance segmentation teachers into dif-
ferent stages (marked by red arrows) of the multi-camera 3DOD.
Recently, there has been significant interest in surround-view
multi-camera 3DOD, aiming to leverage multiple low-cost
monocular cameras, which are conveniently embedded in
current vehicle designs in contrast to expensive LiDAR
scanners. Existing solutions to 3DOD are mainly based on
extracting a unified representation from multiple cameras
[28,30,37,41] such as the bird’s-eye view (BEV) grid. How-
ever, predicting 3D bounding boxes from 2D perspective-
view (PV) images involves an ambiguous 2D to 3D transfor-
mation without depth information, which leads to lower per-
formance compared to LiDAR-based 3DOD [1, 28, 30, 55].
While LiDAR scanners may not be available in commer-
cially deployed vehicle fleets, they are typically available in
training data collection vehicles to facilitate 3D annotation.
Therefore, LiDAR data is privileged; it is often available
during training but not during inference.
Model        LSS++  DS  GFLOPS  mAP↑  NDS↑
BEVDepth†    ✗      ✗   298     32.4  44.9
BEVDepth†    ✗      ✓   298     33.1  44.9
BEVDepth†    ✓      ✗   316     34.9  47.0
BEVDepth†    ✓      ✓   316     35.9  47.2
X3KD (Ours)  ✓      ✓   316     39.0  50.5
Table 1. Analysis of BEVDepth† (re-implementation of [28]):
We compare the architectural improvement of a larger Lift-Splat-
Shoot (LSS++) transform to using depth supervision (DS).
The recently intro-
duced BEVDepth [28] approach pioneers using accurate 3D
information from LiDAR data at training time to improve
multi-camera 3DOD, see Fig. 1 (top part). Specifically, it
proposed an improved Lift-Splat-Shoot PV-to-BEV trans-
form (LSS++) and depth supervision (DS) by projected Li-
DAR points, which we analyze in Table 1. We observe that
the LSS++ architecture yields significant improvements,
though depth supervision seems to have less effect. This
motivates us to find additional types of supervision to trans-
fer accurate 3D information from LiDAR point clouds to
multi-camera 3DOD. To this end, we propose cross-modal
knowledge distillation (KD) to not only use LiDAR data but
a high-performing LiDAR-based 3DOD model , as in Fig. 1
(middle part). To provide an overview of the effectiveness
of cross-modal KD at various multi-camera 3DOD network
stages, we present three distillation techniques: feature dis-
tillation (X-FD) and adversarial training (X-AT) to improve
the feature representation by the intermediate information
contained in the LiDAR 3DOD model as well as output dis-
tillation (X-OD) to enhance output-stage supervision.
For optimal camera-based 3DOD, extracting useful PV
features before the view transformation to BEV is equally
essential. However, gradient-based optimization through an
ambiguous view transformation can induce non-optimal su-
pervision signals. Recent work proposes pre-training the
PV feature extractor on instance segmentation to improve
the extracted features [49]. Nevertheless, neural networks
are subject to catastrophic forgetting [23] such that knowl-
edge from pre-training will continuously degrade if not re-
tained by supervision. Therefore, we propose cross-task in-
stance segmentation distillation (X-IS) from a pre-trained
instance segmentation teacher into a multi-camera 3DOD
model, see Fig. 1 (bottom part). As shown in Table 1, our
X3KD framework significantly improves upon BEVDepth
without additional complexity during inference.
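The feature- and output-stage distillation terms can be sketched as follows; the 1×1 projection layer, the temperature, and the loss weights are assumptions, and the adversarial term (X-AT) and instance segmentation distillation (X-IS) are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

proj = nn.Conv2d(80, 256, kernel_size=1)      # align student channels to the teacher

def xfd_loss(student_bev, teacher_bev):
    """Feature distillation: match projected camera-BEV to LiDAR-BEV features."""
    return F.mse_loss(proj(student_bev), teacher_bev.detach())

def xod_loss(student_hm, teacher_hm, tau=2.0):
    """Output distillation: match softened per-cell class distributions."""
    t = torch.softmax(teacher_hm.detach() / tau, dim=1)
    log_s = torch.log_softmax(student_hm / tau, dim=1)
    return F.kl_div(log_s, t, reduction='batchmean') * tau ** 2

student_bev = torch.randn(2, 80, 128, 128, requires_grad=True)    # camera branch
teacher_bev = torch.randn(2, 256, 128, 128)                       # LiDAR branch
student_hm = torch.randn(2, 10, 128, 128, requires_grad=True)     # 10 classes
teacher_hm = torch.randn(2, 10, 128, 128)

loss = xfd_loss(student_bev, teacher_bev) + 0.5 * xod_loss(student_hm, teacher_hm)
loss.backward()
print(float(loss))
```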
To summarize, our main contributions are as follows:
• We propose X3KD, a KD framework across modali-
ties, tasks, and stages for multi-camera 3DOD.
• Specifically, we introduce cross-modal KD from a
strong LiDAR-based 3DOD teacher to the multi-
camera 3DOD student, which is applied at multiple
network stages in bird’s eye view, i.e., feature-stage(X-FD and X-AT) and output-stage (X-OD).
• Further, we present cross-task instance segmentation
distillation (X-IS) at the PV feature extraction stage.
• X3KD outperforms previous approaches for multi-
camera 3DOD on the nuScenes and Waymo datasets.
• We transfer X3KD to RADAR-based 3DOD and train
X3KD only through KD without using ground truth.
• Our extensive ablation studies on nuScenes and
Waymo provide a comprehensive evaluation of KD at
different network stages for multi-camera 3DOD.
|
Liu_Multiple_Instance_Learning_via_Iterative_Self-Paced_Supervised_Contrastive_Learning_CVPR_2023
|
Abstract
Learning representations for individual instances when
only bag-level labels are available is a fundamental
challenge in multiple instance learning (MIL). Recent
works have shown promising results using contrastive self-
supervised learning (CSSL), which learns to push apart
representations corresponding to two different randomly-
selected instances. Unfortunately, in real-world applica-
tions such as medical image classification, there is often
class imbalance, so randomly-selected instances mostly be-
long to the same majority class, which precludes CSSL from
learning inter-class differences. To address this issue, we
propose a novel framework, Iterative Self-paced Supervised
Contrastive Learning for MIL Representations (ItS2CLR),
which improves the learned representation by exploiting
instance-level pseudo labels derived from the bag-level la-
bels. The framework employs a novel self-paced sampling
strategy to ensure the accuracy of pseudo labels. We evalu-
ate ItS2CLR on three medical datasets, showing that it im-
proves the quality of instance-level pseudo labels and repre-
sentations, and outperforms existing MIL methods in terms
of both bag and instance level accuracy.1
|
1. Introduction
The goal of multiple instance learning (MIL) is to per-
form classification on data that is arranged in bags of in-
stances. Each instance is either positive or negative, but
these instance-level labels are not available during training;
only bag-level labels are available. A bag is labeled as pos-
itive if anyof the instances in it are positive, and negative
otherwise. An important application of MIL is cancer diag-
nosis from histopathology slides. Each slide is divided into
hundreds or thousands of tiles but typically only slide-level
labels are available [6, 9, 17, 35, 39, 53].
1Code is available at https://github.com/Kangningthu/ItS2CLR
Histopathology slides are typically very large, in the or-
der of gigapixels (the resolution of a typical slide can be
as high as 10^5 × 10^5), so end-to-end training of deep neu-
ral networks is typically infeasible due to memory limi-
tations of GPU hardware. Consequently, state-of-the-art
approaches [ 6,35,39,44,53] utilize a two-stage learning
pipeline: (1) a feature-extraction stage where each instance
is mapped to a representation which summarizes its content,
and (2) an aggregation stage where the representations ex-
tracted from all instances in a bag are combined to produce
a bag-level prediction (Figure 1). Notably, our results indi-
cate that even in the rare settings where end-to-end training
is possible, this pipeline still tends to be superior (see Sec-
tion 4.3).
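The aggregation stage of this two-stage pipeline can be sketched as follows; attention pooling is a generic choice used here for illustration and is not necessarily the aggregator adopted in this work, and the feature dimensions are placeholders.

```python
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    """Pool precomputed instance features into a bag embedding and classify it."""
    def __init__(self, d=512, h=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(d, h), nn.Tanh(), nn.Linear(h, 1))
        self.cls = nn.Linear(d, 1)

    def forward(self, inst_feats):                 # (n_instances, d), one bag
        a = torch.softmax(self.attn(inst_feats), dim=0)    # per-instance weights
        bag = (a * inst_feats).sum(dim=0)                  # weighted bag embedding
        return self.cls(bag), a.squeeze(-1)                # bag logit + weights

head = AttentionMILHead()
tiles = torch.randn(300, 512)                      # features of 300 tiles in one slide
bag_logit, weights = head(tiles)
print(bag_logit.shape, weights.shape)              # torch.Size([1]) torch.Size([300])
```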
In this work, we focus on a fundamental challenge in
MIL: how to train the feature extractor. Currently, there are
three main strategies to perform feature-extraction, which
have significant shortcomings. (1) Pretraining on a large
natural image dataset such as ImageNet [ 39,44] is problem-
atic for medical applications because features learned from
natural images may generalize poorly to other domains [ 38].
(2) Supervised training using bag-level labels as instance-
level labels is effective if positive bags contain mostly posi-
tive instances [ 11,34,50], but in many medical datasets this
is not the case [ 5,35]. (3) Contrastive self-supervised learn-
ing (CSSL) outperforms prior methods [ 14,35], but is not as
effective in settings with heavy class imbalance, which are
of crucial importance in medicine. CSSL operates by push-
ing apart the representations of different randomly selected
instances. When positive bags contain mostly negative in-
stances, CSSL training ends up pushing apart negative in-
stances from each other, which precludes it from learning
features that distinguish positive samples from the negative
ones (Figure 2). We discuss this finding in Section 2.
Our goal is to address the shortcomings of current
feature-extraction methods. We build upon several key in-
sights. First, it is possible to extract instance-level pseudo
|
Kolmogorov_Solving_Relaxations_of_MAP-MRF_Problems_Combinatorial_In-Face_Frank-Wolfe_Directions_CVPR_2023
|
Abstract
We consider the problem of solving LP relaxations
of MAP-MRF inference problems, and in particular the
method proposed recently in [16, 35]. As a key computa-
tional subroutine, it uses a variant of the Frank-Wolfe (FW)
method to minimize a smooth convex function over a combi-
natorial polytope. We propose an efficient implementation
of this subroutine based on in-face Frank-Wolfe directions ,
introduced in [4] in a different context. More generally, we
define an abstract data structure for a combinatorial sub-
problem that enables in-face FW directions, and describe
its specialization for tree-structured MAP-MRF inference
subproblems. Experimental results indicate that the result-
ing method is the current state-of-art LP solver for some
classes of problems. Our code is available at
pub.ist.ac.at/~vnk/papers/IN-FACE-FW.html.
|
1. Introduction
The main focus of this paper is on the problem of min-
imizing a function of discrete variables $z = (z_1, \ldots, z_n)$
with unary and pairwise terms:
$$\min_{z \in D_1 \times \ldots \times D_n} \;\; \sum_{v \in [n]} f_v(z_v) \;+\; \sum_{uv \in E} f_{uv}(z_u, z_v) \qquad (1)$$
Here $G = ([n], E)$ is an undirected graph and $D_1, \ldots, D_n$
are finite sets. This problem is often referred to as
MAP-MRF inference (maximum a posteriori inference in a
Markov Random Field ).
A prominent approach to tackle this NP-hard problem in
practice is to solve its natural LP relaxation (see e.g. [39]),
also called Basic LP relaxation [17]:
$$\min_{\mu \ge 0} \;\; \sum_{v \in [n]} \sum_{a \in D_v} f_v(a)\, \mu_{va} \;+\; \sum_{uv \in E} \sum_{(a,b) \in D_u \times D_v} f_{uv}(a,b)\, \mu_{ua,vb} \qquad \text{(2a)}$$
$$\sum_{b' \in D_v} \mu_{ua,vb'} = \mu_{ua}, \qquad \sum_{a' \in D_u} \mu_{ua',vb} = \mu_{vb} \qquad \forall\, uv,\, a,\, b \qquad \text{(2b)}$$
$$\sum_{a \in D_v} \mu_{va} = 1 \qquad \forall\, v \qquad \text{(2c)}$$
ation for large-scale problems has been a very active area
of research. A popular approach is to use message passing
techniques, which perform a block-coordinate ascent on the
dual objective [5, 11, 13, 36, 38, 40] This strategy is very ef-
fective for some problems, but for other problems it may
get stuck in a suboptimal point. Many techniques have been
developed that are guaranteed to converge to the optimal so-
lution of the LP relaxation [8,9,16,18,20,21,25,26,28–32,
34, 35].
In this paper we revisit the approach in [16, 35]. Its key
computational subroutine is to minimize a quadratic con-
vex function over combinatorial polytope, which is done by
invoking a variant of the Frank-Wolfe (FW) algorithm [3].
We study efficient implementations of the latter in the con-
text of MAP-MRF inference. Our main contribution is in-
corporating in-face FW directions introduced in [4]. The
idea is to speed up computations by running the FW algorithm
on a smaller “contracted” subproblem obtained by taking
a face of the polytope containing the current point. It has
been used for applications such as low-rank matrix com-
pletion [4], cluster detection in networks [1], and training
sparse neural networks with ℓ1 regularization [6]. We inves-
tigate the use of in-face FW directions for general combina-
torial polytopes, and describe an abstract data structure that
enables such directions. We then specialize it to subprob-
lems corresponding to tree-structured MAP-MRF inference
problems. Our approach has the following features:
• It may happen that the contracted subproblem splits into independent subproblems. These subproblems are handled by a block-coordinate version of FW.
• We store a cache of “atoms” for each contracted subproblem. We describe how to efficiently transform these atoms when the current face is recomputed.
• For an edge uv ∈ E and fixed fractional unary vectors for u, v we can compute an optimal fractional pairwise vector for edge uv by solving a small-scale optimal transportation (OT) problem. Such computations were used in [27] for computing primal feasible solutions of relax-
ation (2). We show how to use them for improving the
performance of in-face FW directions.
To our knowledge, the issues above have not been discussed
in the literature so far.
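For reference, the generic Frank-Wolfe template that the above discussion builds on is short: given only a linear minimization oracle (LMO) for the polytope, it repeatedly moves toward the vertex minimizing the linearized objective. The sketch below uses the standard 2/(k+2) step size and a toy simplex LMO; an in-face step would replace the oracle call with one restricted to the face containing the current point. This is a textbook sketch, not the paper's implementation.

```python
# Generic Frank-Wolfe loop over a polytope accessed through a linear minimization
# oracle (LMO). This is the textbook template that the in-face variant specializes:
# an in-face step would call an LMO restricted to the face containing x.
import numpy as np

def frank_wolfe(grad_f, lmo, x0, num_iters=100):
    """grad_f: gradient of the smooth convex objective; lmo: c -> argmin_{s in P} <c, s>;
    x0: feasible starting point (a vertex of the polytope P)."""
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        s = lmo(grad_f(x))                 # vertex minimizing the linearized objective
        gamma = 2.0 / (k + 2.0)            # standard open-loop step size
        x = (1 - gamma) * x + gamma * s    # stay inside P via convex combination
    return x

# Example: project a point onto the probability simplex, i.e. minimize ||x - p||^2.
p = np.array([0.7, 2.0, -0.3])
lmo = lambda c: np.eye(len(c))[np.argmin(c)]   # simplex LMO: pick the best coordinate vertex
x_star = frank_wolfe(lambda x: 2 * (x - p), lmo, x0=np.array([1.0, 0.0, 0.0]))
```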
We remark that in-face FW directions effectively imple-
ment the following rather natural idea: the optimization
should be performed only over “active” pairs (v, a) that are
likely to be present in the support of an optimal solution; the
other pairs should be fixed. A related idea appeared in the
context of message passing algorithms in [37], where mes-
sages are updated only in a subgraph in which the current
best labels keep changing. The method in [37] works only
with dual variables, and uses heuristic criteria for choosing
the subgraph. We believe that in-face FW directions allow
a more principled criterion for deciding which variables to
fix and for how long. Note that Freund et al. [4] proved that
their criterion retains the convergence rate of the basic FW
algorithm. We use a different criterion that also takes into
account the ratio of runtimes on the original and on con-
tracted subproblems.
In Section 4 we test the algorithms on benchmark prob-
lems in the evaluation [10], and compare them with LP
solvers used in [10]. Results suggest that the method
in [16, 35] with in-face FW directions is the current state-
of-the-art LP solver for certain classes of problems.
|
Liu_InstMove_Instance_Motion_for_Object-Centric_Video_Segmentation_CVPR_2023
|
Abstract
Despite significant efforts, cutting-edge video segmenta-
tion methods still remain sensitive to occlusion and rapid
movement, due to their reliance on the appearance of ob-
jects in the form of object embeddings, which are vul-
nerable to these disturbances. A common solution is to
use optical flow to provide motion information, but essen-
tially it only considers pixel-level motion, which still re-
lies on appearance similarity and hence is often inaccu-
rate under occlusion and fast movement. In this work,
we study the instance-level motion and present InstMove,
which stands for Instance Motion for Object-centric Video
Segmentation. In comparison to pixel-wise motion, Inst-
Move mainly relies on instance-level motion information
that is free from image feature embeddings, and features
physical interpretations, making it more accurate and ro-
bust toward occlusion and fast-moving objects. To better
fit in with the video segmentation tasks, InstMove uses in-
stance masks to model the physical presence of an object
and learns the dynamic model through a memory network
to predict its position and shape in the next frame. With only
a few lines of code, InstMove can be integrated into current
SOTA methods for three different video segmentation tasks
and boost their performance. Specifically, we improve the
previous arts by 1.5 AP on OVIS dataset, which features
heavy occlusions, and 4.9 AP on YouTubeVIS-Long dataset,
which mainly contains fast moving objects. These results
suggest that instance-level motion is robust and accurate,
and hence serving as a powerful solution in complex sce-
narios for object-centric video segmentation.
|
1. Introduction
Segmenting and tracking object instances in a given
video is a critical topic in computer vision, with vari-
ous applications in video understanding, video editing, au-
tonomous driving, augmented reality, etc. Three represen-
tative tasks include video object segmentation (VOS), video
*First two authors contributed equally. Work done during an internship
at ByteDance. The code and models are available for research purposes at
https://github.com/wjf5203/VNext.
Figure 1. Different from optical flow that estimates pixel-level
motion, InstMove learns instance-level motion and deformation
directly from previous instance masks and predicts more accurate
and robust position and shape estimates for the current frame, even
in scenarios with occlusions and rapid motion.
instance segmentation (VIS), and multi-object tracking and
segmentation (MOTS). These tasks differ significantly from
video semantic segmentation [16, 19, 31, 52], which aims
to classify every pixel in a video frame, hence we refer to
them as object-centric video segmentation in this paper. De-
spite significant progress, state-of-the-art (SOTA) methods
still struggle with occlusion, rapid motion, and signifi-
cant changes in objects, resulting in a marked drop in han-
dling longer or more complex videos.
One reason we observe is that most methods rely solely
on appearance to localize objects and track them across
frames. Specifically, a majority of VOS methods [10, 30,
35, 40, 51, 67] use the previous frames as target templates
and construct a feature memory bank of embeddings for all
target objects. This is then used to match the pixel-level fea-
ture in the new frame. Online VIS [6, 15, 21, 28, 65, 72, 73]
and MOTS [25, 71] methods directly perform per-frame in-
stance segmentation based on image features and use the
object embeddings to track them through the video. While
these paradigms work well on simple videos, they are sen-
sitive to intense appearance changes and struggle with han-
dling multiple object instances with similar appearances, re-
sulting in large errors when dealing with complex scenarios
with complex motion patterns, occlusion, or deformation.
Apart from appearance cues, object motion, which is an-
other crucial piece of information provided by videos, has
also been extensively studied for video segmentation. The
majority of motion models in related fields fall into two cat-
egories: One line of work uses optical flow to learn pixel-
level motion. However, this approach does not help solve
the problem of occlusion or fast motion since flow itself is
often inaccurate in these scenarios [39, 57]. The main reason for this failure, we argue, is that optical flow still heavily relies on appearance cues to compute pixel-level motion across frames. The other line of work uses a linear
speed model, which helps alleviate these tracking problems
caused by occlusion and fast motion in MOT [5,44,63,77].
However, it oversimplifies the problem and thus provides
limited benefits in other tasks such as VOS and VIS.
In this work, we aim at narrowing the gap between the
two aforementioned lines of work by reformulating the mo-
tion module and providing InstMove, a simple yet efficient
motion prediction plugin that enjoys the advantages of both
solutions. First, it is portable: it is compatible with, and beneficial to, approaches across video segmentation tasks. More
importantly, similar to optical flow, it also provides high-
dimensional information of position and shape, which can
be beneficial for a range of downstream tasks in a variety of
ways, and, similar to the dynamic motion model, it learns
physical interpretation to model motion information, im-
proving robustness toward occlusion and fast motion.
To achieve our objective, we utilize an instance mask
to indicate the position and shape of a target object, and
provide an RNN-based module with a memory network to
extract motion features from previous masks, store and re-
trieve dynamic information, and predict the position and
shape information of the next frame based on motion cues.
However, while being robust towards appearance changes,
predicting shape without the object appearance or image
features results in an even less accurate boundary in sim-
ple cases. To solve this, we incorporate the low-level image
features at the end of InstMove. Finally, to prove the effec-
tiveness of InstMove on object-centric video segmentation
tasks, we present two simple ways to integrate InstMove
into recent SOTA methods in VIS, VOS, and MOTS, which
improve their robustness with minimal modifications.
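One plausible form of such a lightweight integration is sketched below: the mask predicted by the motion module is concatenated to the current frame's features as an extra spatial prior before the segmentation head. The module and variable names are placeholders for illustration, not InstMove's released code.

```python
# A minimal sketch of fusing a predicted instance-motion mask with image features
# as an extra spatial prior. This is an illustrative integration under our own
# assumptions; `motion_predictor` stands for the RNN/memory-based module above.
import torch
import torch.nn as nn

class MaskPriorFusion(nn.Module):
    def __init__(self, feat_channels):
        super().__init__()
        # 1x1 conv maps [features ; predicted mask] back to the original width.
        self.fuse = nn.Conv2d(feat_channels + 1, feat_channels, kernel_size=1)

    def forward(self, feats, pred_mask):
        # feats: (B, C, H, W) backbone features of the current frame
        # pred_mask: (B, 1, H, W) mask predicted from previous masks by the motion module
        return self.fuse(torch.cat([feats, pred_mask], dim=1))

# Usage inside an existing per-frame segmentation pipeline (names are placeholders):
# pred_mask = motion_predictor(prev_masks)           # position/shape prior for frame t
# feats     = fusion(backbone(frame_t), pred_mask)   # features now carry the motion prior
# outputs   = segmentation_head(feats)
```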
In the experiments section, we first validate that our mo-
tion module is more accurate and compatible with existing
methods compared with learning motion and deformation
with optical flow methods such as RAFT [55]. We also
show that it is more robust to occlusion and rapid move-
ments. Then we demonstrate the improvement of integrat-
ing our motion plugin into all recent SOTA methods in
VOS, VIS, and MOTS tasks, particularly in complex sce-
narios with heavy occlusion and rapid motion. Remarkably,
with only a few lines of code, we significantly boost the current art by 1.5 AP on OVIS [47], 4.9 AP on YouTubeVIS-
Long [69], and reduce IDSw on BDD100K [75] by 28.6%.
In summary, we have revisited the motion models used
in video segmentation tasks and propose InstMove, which
contains both pixel-level information and instance-level dy-
namic information to predict shape and position. It pro-
vides additional information that is robust to occlusion and
rapid motion. The improvements in SOTA methods of all
three tasks demonstrate the effectiveness of incorporating
instance-level motion in tackling complex scenarios.
|
Li_Exploring_the_Effect_of_Primitives_for_Compositional_Generalization_in_Vision-and-Language_CVPR_2023
|
Abstract
Compositionality is one of the fundamental properties of
human cognition (Fodor & Pylyshyn, 1988). Compositional
generalization is critical to simulate the compositional ca-
pability of humans, and has received much attention in the
vision-and-language (V&L) community. It is essential to
understand the effect of the primitives, including words, im-
age regions, and video frames, to improve the compositional
generalization capability. In this paper, we explore the ef-
fect of primitives for compositional generalization in V&L.
Specifically, we present a self-supervised learning based
framework that equips existing V&L methods with two char-
acteristics: semantic equivariance andsemantic invari-
ance . With the two characteristics, the methods understand
primitives by perceiving the effect of primitive changes on
sample semantics and ground-truth. Experimental results
on two tasks: temporal video grounding and visual question
answering, demonstrate the effectiveness of our framework.
|
1. Introduction
Compositionality is one of the fundamental properties of
human cognition argued by Fodor and Pylyshyn [11]. Com-
positional generalization in vision-and-language (V&L) has
received increasing attention and significant progress in re-
cent years, but has not been fully explored. Compositional
generalization requires V&L methods to generalize well to
sentences with novel combinations of seen words, which is
critical to simulate the compositional properties of human
cognition.
*Corresponding author: Chenchen Jing and Yuwei Wu
Figure 1. An example in the context of temporal video grounding, showing that primitives are the determinants of sample semantics and ground-truth.
An indispensable premise for improving compositional
generalization is to understand the effect of the primitives,
including words, image regions, and video frames. Primi-
tives are compositional building blocks mainly involved in
V&L tasks and the determinants of sample semantics. For
example, for a sample with the query “A person opens the
door” in the context of temporal video grounding (TVG),
its semantics are changed completely when the primitive
“opens” is changed to “closes”, but are unchanged when
the primitives “A” and “the” are modified to “The” and “a”,
respectively, as shown in Fig. 1. We investigate if existing
V&L methods are sensitive to the sample semantic changes
brought by primitive changes. Our observations show that
the methods erroneously keep almost 90% of the predic-
tions unchanged when the sample semantics are corrupted
by replacing 50% of the critical words (e.g., nouns, verbs) in sen-
tences. This suggests that existing methods cannot correctly
establish the relationship between the primitives and the
sample semantics and thus the ground-truth, so they cannot
achieve compositional generalization.
In this paper, we explore the effect of primitives for com-
positional generalization from two aspects: semantic equiv-
Figure 2. The samples generated in our framework by masking different primitives. (a) An original example in the context of temporal video grounding. (b) Equivariant samples generated by masking critical primitives. (c) Invariant samples generated by masking irrelevant primitives.
ariance and semantic invariance. Semantic equivariance
means that the predictions of the methods should be equiv-
ariant with the sample semantics, which are determined
by the primitives. Once the semantics of the sample are
changed, the predictions of the methods should faithfully
change. To ensure semantic equivariance, methods are en-
couraged to learn which primitives have a high effect on
the sample semantics and thus the ground-truth. Seman-
tic invariance means that the methods should maintain the
same predictions when irrelevant primitives ( e.g., function
words and background in visual content) are changed. This
helps the methods to learn the primitives with low semantic richness and a low effect on the ground-truth, and is complementary to semantic equivariance. With the two char-
acteristics, the methods understand the effect of primitive
changes on sample semantics and ground-truth.
We propose a self-supervised learning based framework
to equip existing methods with semantic equivariance and
semantic invariance . By masking critical and irrelevant
primitives, we generate numerous labeled training samples,
including equivariant samples and invariant samples, re-
spectively, as shown in Fig. 2. To assign labels to the gen-
erated samples, we estimate the effect of masked primitives
on the ground-truth. The larger the effect of the masked primitives, the further the label assigned to the generated sample is shifted away from the ground-truth of the original sample. By
training with the generated samples, the methods learn to
make equivariant and invariant predictions when the sam-
ple semantics change and do not, respectively. Extensive experiments on two V&L tasks: temporal video grounding
[2] and visual question answering [3], demonstrate that our
framework improves the compositional generalization ca-
pability of existing methods.
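The sketch below illustrates the sample-generation step with a simple part-of-speech heuristic for distinguishing critical from irrelevant primitives; the tag sets, mask token, and label-softening rule are illustrative assumptions rather than the exact recipe described above.

```python
# Illustrative generation of equivariant / invariant samples by masking words.
# The POS-based notion of "critical" vs. "irrelevant" primitives and the soft
# label rule are assumptions of this sketch, not the exact procedure.
import random

CRITICAL_TAGS = {"NOUN", "VERB", "ADJ"}      # assumed to carry the semantics
IRRELEVANT_TAGS = {"DET", "ADP", "PRON"}     # assumed function words

def mask_query(tagged_query, mask_ratio=0.5, critical=True, mask_token="[MASK]"):
    """tagged_query: list of (word, pos_tag). Returns (masked words, #masked)."""
    target = CRITICAL_TAGS if critical else IRRELEVANT_TAGS
    candidates = [i for i, (_, tag) in enumerate(tagged_query) if tag in target]
    chosen = set(random.sample(candidates, max(1, int(len(candidates) * mask_ratio)))) if candidates else set()
    words = [mask_token if i in chosen else w for i, (w, _) in enumerate(tagged_query)]
    return words, len(chosen)

query = [("A", "DET"), ("person", "NOUN"), ("opens", "VERB"), ("the", "DET"), ("door", "NOUN")]
equivariant, n = mask_query(query, critical=True)    # semantics corrupted -> soften the label
invariant, _   = mask_query(query, critical=False)   # semantics preserved -> keep the label
label_weight = 1.0 - n / sum(t in CRITICAL_TAGS for _, t in query)  # assumed softening rule
```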
In summary, our contributions are as follows:
• We explore the effect of primitives on improving
the compositional generalization capability of exist-
ing V&L methods by perceiving the effect of primitive
changes on sample semantics and ground-truth.
• We propose a self-supervised learning based frame-
work for compositional generalization, in which nu-
merous labeled samples are generated to equip existing
V&L methods with semantic equivariance and seman-
tic invariance.
|
Liu_Marching-Primitives_Shape_Abstraction_From_Signed_Distance_Function_CVPR_2023
|
Abstract
Representing complex objects with basic geometric
primitives has long been a topic in computer vision.
Primitive-based representations have the merits of com-
pactness and computational efficiency in higher-level tasks
such as physics simulation, collision checking, and robotic
manipulation. Unlike previous works which extract polyg-
onal meshes from a signed distance function (SDF), in
this paper, we present a novel method, named Marching-
Primitives, to obtain a primitive-based abstraction directly
from an SDF . Our method grows geometric primitives (such
as superquadrics) iteratively by analyzing the connectivity
of voxels while marching at different levels of signed dis-
tance. For each valid connected volume of interest, we
march on the scope of voxels from which a primitive is
able to be extracted in a probabilistic sense and simulta-
neously solve for the parameters of the primitive to cap-
ture the underlying local geometry. We evaluate the per-
formance of our method on both synthetic and real-world
datasets. The results show that the proposed method out-
performs the state-of-the-art in terms of accuracy, and is di-
rectly generalizable among different categories and scales.
The code is open-sourced at https://github.com/
ChirikjianLab/Marching-Primitives.git .
|
1. Introduction
Recent years have witnessed great progress in the areas
of 3D shape representation and environmental perception.
Low-level representations such as surface meshes, point
clouds, and occupancy grids are widely used as inputs to
high-level computer vision algorithms and artificial intel-
ligence tasks. They have the advantage of being able to
represent and visualize objects with high accuracy and rich
local geometric features. However, the low-level represen-
tations are ineffective in delivering a general and intuitive
sense of structural geometry as well as part-level scene un-
derstanding. Studies [3, 20] show that human vision, unlike
*Corresponding author
Figure 1. Primitive-base representation versus mesh. For each pair
of objects, the left one is the superquadric abstraction obtained by
our algorithm, and the right one is the original mesh. The mesh
of the chair is 6MB in size, while our representation only needs
4KB. An SDF representation discretized on a 128³ voxel grid oc-
cupies 19MB. Our abstraction is equivalent to an implicit continu-
ous SDF, which is an approximation to the discrete SDF.
computer vision, tends to perceive and understand scenes
as combinations of simple primitive shapes. Human beings
perform well and robustly in complex tasks, providing a
basic geometric description of the scene is available [32].
Therefore, researchers turn to exploring the possibility of
interpreting complex objects and scenes with basic geo-
metric primitives. Taking advantage of the primitive-based
representation, many higher-level tasks, such as segmenta-
tion [14, 16, 21, 30], scene understanding [29, 31, 41, 47],
grasping [33, 44, 45] and motion planning [35, 36], are able
to be solved efficiently.
However, it still remains challenging to extract primitive-
based abstractions from low-level representations. Start-
ing from the 1990s, Solina et al. [1, 17, 39] aim to extract
a single superquadric representation from a simple object
by minimizing the least-square error between the primitive
and the measured points. Later in [7, 22], their method is
extended to represent more complex objects with multiple
primitives. More recently, the authors of [24, 47] reformu-
late the task as a probabilistic inference problem with en-
hanced accuracy and robustness to noise and outliers. At
the same time, with the surge of data-driven techniques, re-
searchers attempt to train neural networks to infer cuboids
[27,38,41,48,50] and superquadrics [29,31] representations
in an end-to-end fashion. However, both the computational
and learning-based approaches have their own limitations.
The computational methods are vulnerable to the inherent
ambiguity of the point-to-surface relationship. For exam-
ple, the algorithms tend to fill empty spaces of a non-convex
object with primitives by mistake, due to the inside/outside
ambiguity of a surface depicted by a set of points [24, 48].
The main drawback of the learning approaches lies in the
lack of generalizability beyond the object category on which
the model is trained [24,31,47,48]. Also, the shape abstrac-
tion accuracy is inferior to the computational methods.
The signed distance function (SDF) has been a success-
ful 3D volumetric representation in a variety of computer vi-
sion and graphics tasks. It is the basic framework for many
classic 3D reconstruction algorithms such as TSDF volume
reconstruction [10, 15], KinectFusion [19], and Dynamic-
Fusion [26]. Recently, the SDF representation is adapted to
the deep learning frameworks, and exhibits boosted poten-
tials in shape encoding [8, 18, 28, 43], surface reconstruc-
tion [23, 46], and shape completion [11, 12, 34]. Usually,
triangular mesh surfaces are extracted from the SDF rep-
resentation with the marching cubes algorithm [25]. Point
cloud and occupancy grid representations are also obtained
by keeping the vertices of the meshes and the sign of each
voxel point, respectively. The SDF is among the most in-
formative 3D representations since it encodes not only the
surface geometry but also the distance and side of a point
relative to the shape. Meanwhile, it is easily achievable via
range images from 3D sensors [10], or learnable from other
input modalities [8, 18, 28]. Since we are able to extract
meshes from an SDF, it is natural to think about the pos-
sibility of extracting primitives as well. Furthermore, the
primitive-based abstraction is a continuous interpretation of
the complete geometric information encoded in the original
discrete SDF, but requires much less storage size (Fig.1).
Motivated by the aforementioned facts and the bottle-
neck of the current shape abstraction algorithms, we pro-
posed a general shape abstraction method by reasoning di-
rectly on the informative SDF representation. The goal of
our method is to find a combination of geometric primi-
tives whose underlying SDF values match the target values
evaluated on the evenly spaced discrete grid points (Sec.
3.1 and Sec. 3.2). To solve this problem, we propose a
two-step iterative algorithm called the Marching-Primitives.
Our algorithm ‘marches’ on two domains: the signed dis-
tance domain and the voxelized space domain, alternately.
Firstly, the connectivity of volumes is analyzed by gen-
erating isosurfaces on a sequence of decreasing levels of
negative signed distances (Sec.3.3). By doing so, volumes
of interest (VOIs) where primitives are likely to be encoded
can be identified sequentially. In the second step, for each of
the VOIs, our algorithm marches on the neighbouring vox-
els to infer their probabilistic correspondences to the primi-
tive and simultaneously optimizes the shape and pose of the
primitive (Sec.3.4). After the primitive representation of a
VOI is achieved, the fitted volumes are deactivated from the
voxel grid. Our algorithm continues marching on the signed distance domain until it approaches zero, i.e., all the inte-
rior volumes of the SDF have been captured by the recov-
ered primitives. We compare our algorithm with the state-
of-the-art of both the computational and learning-based ap-
proaches on the ShapeNet object dataset [6] and D-FAUST
human shape dataset [4] (Sec. 4.1). We also study the per-
formance of our algorithm under different conditions (Sec. 4.2).
Finally, we demonstrate the scene abstraction result of the
Stanford Reading Room [49], which contains several pieces
of furniture of various categories (Sec. 4.3).
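To make the first marching step concrete, the sketch below thresholds a discretized SDF at a sequence of negative levels approaching zero and collects connected voxel components as candidate volumes of interest; the level schedule, minimum-size filter, and deactivation rule are simplifying assumptions of this illustration, not the released implementation.

```python
# Illustrative first step of a marching-on-signed-distance scheme: sweep a set of
# increasingly shallow negative levels and collect connected voxel components
# (candidate volumes of interest). Level schedule and size filter are assumptions.
import numpy as np
from scipy import ndimage

def volumes_of_interest(sdf, levels=(-0.20, -0.10, -0.05, -0.02), min_voxels=50):
    """sdf: (D, H, W) array of signed distances (negative = inside the shape)."""
    claimed = np.zeros(sdf.shape, dtype=bool)   # voxels already assigned to a primitive
    vois = []
    for level in levels:                        # march from deep inside toward the surface
        interior = (sdf <= level) & ~claimed
        labels, num = ndimage.label(interior)   # connectivity analysis at this level
        for k in range(1, num + 1):
            voi = labels == k
            if voi.sum() >= min_voxels:
                vois.append(voi)
                # In the full algorithm a primitive would be fitted to this VOI here,
                # and the voxels it explains would be deactivated; we simply mark the VOI.
                claimed |= voi
    return vois
```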
|
Liu_SAP-DETR_Bridging_the_Gap_Between_Salient_Points_and_Queries-Based_Transformer_CVPR_2023
|
Abstract
Recently, the dominant DETR-based approaches apply
central-concept spatial prior to accelerating Transformer
detector convergency. These methods gradually refine the
reference points to the center of target objects and imbue ob-
ject queries with the updated central reference information
for spatially conditional attention. However, centralizing ref-
erence points may severely deteriorate queries’ saliency and
confuse detectors due to the indiscriminative spatial prior.
To bridge the gap between the reference points of salient
queries and Transformer detectors, we propose SAlient
Point-based DETR (SAP-DETR ) by treating object detec-
tion as a transformation from salient points to instance ob-
jects. Concretely, we explicitly initialize a query-specific
reference point for each object query, gradually aggregate
them into an instance object, and then predict the distance
from each side of the bounding box to these points. By
rapidly attending to query-specific reference regions and the
conditional box edges, SAP-DETR can effectively bridge the
gap between the salient point and the query-based Trans-
former detector with a significant convergency speed. Exper-
imentally, SAP-DETR achieves 1.4 ×convergency speed with
competitive performance and stably promotes the SoTA ap-
proaches by ∼1.0 AP . Based on ResNet-DC-101, SAP-DETR
achieves 46.9 AP . The code will be released at https:
//github.com/liuyang-ict/SAP-DETR.
|
1. Introduction
Object detection is a fundamental task in computer vi-
sion, whose target is to recognize and localize each ob-
ject from input images. In the last decade, various detec-
*This work was done when working as an intern at AI Lab, Lenovo
Research, Beijing, China.
†Corresponding author.
Figure 1. Comparison of SAP-DETR and DAB-DETR under 3-
layer decoder model and 36-epoch training scheme. (a) Statistics
of the query count in different classification score intervals. (b)
and (c) Visualize the distribution of all reference points (pink) and
highlight the top-20 classification score queries with their bounding
boxes (blue) and reference points (red) in different decoder layers.
(d) Visualize the outputs of positive queries and ground truth (red)
during the training process, each query has a representative color
for its reference point and bounding box. Best viewed in color.
tors [6,11,14,18,20,22] based on Convolutional Neural Net-
works (CNNs), have received widespread attention and made
significant progress. Recently, Carion et al. [2] proposed a
new end-to-end paradigm for object detection based on the
Transformer [24], called DEtection TRansformer (DETR),
which treats object detection as a problem of set prediction.
In DETR, a set of learnable positional encodings, namely
object queries, are employed to aggregate instance features
from the context image in Transformer Decoder. The predic-
tions of queries are finally assigned to the ground truth via
bipartite matching to achieve end-to-end detection.
Despite the promising results of DETR, its application
is largely limited by considerably longer training time com-
pared to conventional CNNs. To address this problem, many
variants attempted to take a close look at the query paradigm and
introduced various spatial priors for model convergency and
efficacy. According to the type of spatial prior, they can be
categorized into implicit and explicit methods. The implicit
ones [5, 16, 31] attempt to decouple a reference point from
the object query and make use of this spatial prior to attend
to the image context features efficiently. The current state-of-
the-arts (SoTAs) are dominated by the explicit ones [13, 25],
which suggest to instantiate a position with spatial prior for
each query, i.e., explicit reference coordinates with a center
point or an anchor box. These reference coordinates serve
as helpful priors and enable the queries to focus on their
expected regions easily. For instance, Anchor DETR [25] in-
troduced an anchor concept (center point with different box
size patterns) to formulate the query position and directly
regressed the central offsets of the bounding boxes. DAB-
DETR [13] further stretched the center point to a 4D anchor
box concept [cx, cy, w, h] to refine proposal bounding boxes
in a cascaded manner. However, instantiating the query loca-
tion as a target center may severely degrade the classification
accuracy and convergency speed. As illustrated in Fig. 1,
there exist many plausible queries [19] with high-quality
classification scores (Fig. 1(a) within red box) and box In-
tersection over Union (IoU, see the redundant blue boxes
in Fig. 1(b) and (c)), which only bring a slight improvement in precision but inevitably confuse the detector on the positive query assignments when training with the bipartite
matching strategy. This is because the plausible predictions
are considered in negative classification loss, which severely
decelerates the model convergency. As shown in Fig. 1(b)
and (c), the predefined reference point of the positive query
may not be the nearest one to the center of the ground truth
bounding box, and the reference points tend to be centralized
or marginalized (cyan arrows in Fig. 1(b)), hence losing the
spatial specificity. With further insight into the one-to-one
label assignment during the training process, we find that the
query, whose reference point is closest to the center point,
also has a high-quality IoU, but there still exists a disparity with the positive query in classification confidence. Therefore,
we argue that such a centralized spatial prior may cause de-
generation of target consistency in both classification and
localization tasks, which leads to inconsistent predictions.
Furthermore, the mentioned central point-based variants
also have difficulties in detecting occluded objects. For ex-
ample, Fig. 1(d) shows that DAB-DETR detects the left
baseman twice, while the query point in SAP-DETR is not
necessarily the center point, so the query point of the bound-
ing box for the occluded baseman is from a pixel on the
occluded baseman on the top right area instead of from the
left baseman. One solution for center-based method [25] is
to predefine different receptive fields (similar to the scaling
anchor box in YOLO [17]) for the position of each query.
However, increasing the diversity of the receptive fields for each position query is unsuitable for non-overlapped targets,
as it still generates massive indistinguishable predictions for
one position as same as other center-based models.
To bridge these gaps, in this paper, we present a novel
framework for Transformer detector, called SAlient Point-
based DETR (SAP-DETR), which treats object detection
as a transformation from salient points to instance objects.
Instead of regressing the reference point to the target center,
we define the reference point belonging to one positive query
as a salient point , keep this query-specific spatial prior with
a scaling amplitude, and then gradually update them to an in-
stance object by predicting the distance from each side of the
bounding box. Specifically, we tile the mesh-grid referenced
points and initialize their center/corner as the query-specific
reference point. To disentangle the reference sparsity as well
as stabilize the training process, a movable strategy with
scaling amplitude is applied for reference point adjustment,
which prompts queries to consider their reference grid as
the salient region to perform image context attention. By
localizing each side of the bounding box layer by layer, such
query-specific spatial prior enables compensation for the
over-smooth/inadequacy problem during center-based de-
tection, thereby vastly promoting model convergency speed.
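To make the geometry described above concrete, the sketch below tiles mesh-grid reference points and decodes a box from the predicted distances to its four sides; the grid resolution and normalized-coordinate convention are assumptions of this illustration rather than SAP-DETR's exact parameterization.

```python
# Illustrative query-specific reference points and side-distance box decoding.
# Grid size and normalization are assumptions; this is not the official code.
import torch

def mesh_grid_reference_points(num_queries_per_side):
    """Tile reference points on a normalized mesh grid, one per object query."""
    s = num_queries_per_side
    ys, xs = torch.meshgrid(
        torch.linspace(0.5 / s, 1 - 0.5 / s, s),
        torch.linspace(0.5 / s, 1 - 0.5 / s, s),
        indexing="ij",
    )
    return torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (s*s, 2) points in [0, 1]

def decode_boxes(ref_points, side_dists):
    """side_dists: (N, 4) non-negative distances (left, top, right, bottom)
    from each query's reference point to the four box sides."""
    x, y = ref_points[:, 0], ref_points[:, 1]
    l, t, r, b = side_dists.unbind(dim=-1)
    return torch.stack([x - l, y - t, x + r, y + b], dim=-1)  # (N, 4) xyxy boxes

refs = mesh_grid_reference_points(10)                       # 100 queries
boxes = decode_boxes(refs, torch.rand(refs.size(0), 4) * 0.1)
```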
Inspired by [5, 13, 16], we also take advantage of both Gaus-
sian spatial prior and conditional cross-attention mechanism,
and then a salient point enhanced cross-attention mecha-
nism is developed to distinguish the salient region and other
conditional extreme regions from the context image features.
We bridge the gap between salient points and query-based
Transformer detector by speedily attending to the query-
specific region and other conditional regions. The extensive
experiments have shown that SAP-DETR achieves superior
convergency speed and performance. To the best of our
knowledge, this is the first work to introduce the salient point
based regression into end-to-end query-based Transformer
detectors. Our contributions can be summarized as follows.
1)We introduce the salient point concept into query-based
Transformer detectors by assigning query-specific reference
points to object queries. Unlike center-based methods, we
restrict the reference location and define the point of the posi-
tive query as the salient one, hence enlarging the discrepancy between queries as well as reducing redundant predictions (see
Fig. 1). Thanks to the efficacy of the query-specific prior,
our SAP-DETR accelerates the convergency speed greatly,
achieving competitive performance with 30% fewer train-
ing epochs. The proposed movable strategy further boosts
SAP-DETR to a new SoTA performance.
2)We devise a point-enhanced cross-attention mechanism
to imbue query with spatial prior based on both reference
point and box sides for final specific region attention.
3)Evaluation over COCO dataset has demonstrated that
SAP-DETR achieves superior convergency speed and detec-
tion accuracy. Under the same training settings, SAP-DETR
outperforms the SoTA approaches with a large margin.
|
Li_DropKey_for_Vision_Transformer_CVPR_2023
|
Abstract
In this paper, we focus on analyzing and improving the
dropout technique for self-attention layers of Vision Trans-
former, which is important while surprisingly ignored by
prior works. In particular, we conduct researches on three
core questions: First, what to drop in self-attention layers?
Different from dropping attention weights in literature, we
propose to move dropout operations forward ahead of atten-
tion matrix calculation and set the Key as the dropout unit,
yielding a novel dropout-before-softmax scheme. We theo-
retically verify that this scheme helps keep both regulariza-
tion and probability features of attention weights, alleviat-
ing the overfittings problem to specific patterns and enhanc-
ing the model to globally capture vital information; Second,
how to schedule the drop ratio in consecutive layers? In
contrast to exploit a constant drop ratio for all layers, we
present a new decreasing schedule that gradually decreases
the drop ratio along the stack of self-attention layers. We
experimentally validate the proposed schedule can avoid
overfittings in low-level features and missing in high-level
semantics, thus improving the robustness and stableness
of model training; Third, is a structured dropout operation needed as in CNNs? We attempt a patch-based block version of the dropout operation and find that this useful trick for CNNs is not essential for ViT. Given exploration
on the above three questions, we present the novel Drop-
Key method that regards Key as the drop unit and exploits
decreasing schedule for drop ratio, improving ViTs in a gen-
eral way. Comprehensive experiments demonstrate the ef-
fectiveness of DropKey for various ViT architectures, e.g.
T2T, VOLO, CeiT and DeiT, as well as for various vision
*Equal contribution
†Corresponding authortasks, e.g., image classification, object detection, human-
object interaction detection and human body shape recov-
ery.
|
1. Introduction
Vision Transformer (ViT) [6] has achieved great suc-
cess for various vision tasks, e.g., image recognition [7, 12,
20, 34, 35], object detection [1], human body shape esti-
mation [18], etc. Prior works mainly focus on researches
of patch division, architecture design and task extension.
However, the dropout technique for self-attention layer,
which plays the essential role to achieve good generaliz-
ability, is surprisingly ignored by the community.
Different from the counterpart for Convolutional Neural
Networks (CNNs), the dropout in ViT directly utilizes the
one in original Transformer designed for Natural Language
Processing, which sets attention weights as the manipula-
tion unit with a constant dropout ratio for all layers. Despite
of its simplicity, this vanilla design faces three major prob-
lems. First, it breaks the probability distribution of atten-
tion weights due to the averaging operation on non-dropout
units after softmax normalization. Although this regularizes
the attention weights, it still overfits specific patterns locally
due to its failure to penalize score peaks, as shown in
Fig. 1 (a) and (b); Second, the vanilla design is sensitive to
the constant dropout ratio: a high ratio causes the loss of semantic information in high-level representations, while a low ratio causes overfitting to low-level features, resulting in an unstable training process; Third, it ignores the structured
characteristic of input patch grid to ViT, which plays an ef-
fective role to improve performance with blockwise dropout
in CNNs. These three problems degrade the performance
and limit the generalizability of ViTs.
Motivated by this, we propose to analyze and improve
the dropout technique in self-attention layer, further push-
ing forward the frontier of ViTs for vision tasks in a general
way. Specifically, we focus on three core aspects:
What to drop in self-attention layer Different from drop-
ping attention weights as in the vanilla design, we propose
to set the Key as the dropout unit, which is essential input
of self-attention layer and significantly affects the output.
This moves the dropout operation forward before calculat-
ing the attention matrix as shown in Fig. 1 (c) and yields
a novel dropout-before-softmax scheme. This scheme reg-
ularizes attention weights and keeps their probability dis-
tribution at the same time, which intuitively helps penalize
weight peaks and lift weight foots. We theoretically verify
this property via implicitly introducing an adaptive smooth-
ing coefficient for the attention operator from the perspec-
tive of gradient optimization by formulating a Lagrange
function. With the dropout-before-softmax scheme, self-
attention layers can capture vital information in a global
manner, thus overcoming the problem of overfitting to specific patterns that occurs with the vanilla dropout and enhancing the model generalizability, as shown by the feature-map visualization in Fig. 1 (c). For the training phase, this scheme can be simply
implemented by swapping the operation order of softmax
and dropout in vanilla design, which provides a general way
to effectively enhance ViTs. For inference phase, we con-
duct an additional finetune phase to align the expectations
to training phase, further improving the performance.
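A minimal sketch of this dropout-before-softmax scheme follows: a random subset of Keys is masked out of the attention logits before normalization, so the surviving weights still form a probability distribution. The tensor shapes and the additive −inf masking are illustrative choices of this sketch, not the exact implementation.

```python
# Dropout-before-softmax on Keys (illustrative): masked keys are removed from the
# attention logits before normalization, so the surviving attention weights still
# sum to one. Shapes and masking details are assumptions of this sketch.
import torch

def attention_with_dropkey(q, k, v, drop_ratio=0.1, training=True):
    # q, k, v: (B, heads, tokens, dim)
    logits = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5      # (B, H, T, T)
    if training and drop_ratio > 0:
        # Bernoulli mask over Keys, shared across queries within each head.
        drop = torch.rand(logits.shape[:-2] + (1, logits.size(-1)),
                          device=logits.device) < drop_ratio
        logits = logits.masked_fill(drop, float("-inf"))
    weights = logits.softmax(dim=-1)                          # still a distribution
    return weights @ v
```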
How to schedule the drop ratio In contrast to exploiting
a constant drop ratio for all layers, we present a new linear
decreasing schedule that gradually decreases the drop ratio
along the stack of self-attention layers. This schedule leads
to a high drop ratio in shallow layers while the low one in
deep layers, thus avoiding overfitting to low-level features
and preserving sufficient high-level semantics. We experi-
mentally verify the effectiveness of the proposed decreasing
schedule for drop ratio to stable the training phase and im-
prove the robustness.
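The decreasing schedule itself reduces to a one-line interpolation of the per-layer ratio; the endpoint values below are assumed purely for illustration.

```python
# Linearly decreasing drop ratio across the stack of self-attention layers
# (endpoint values 0.3 -> 0.0 are assumed for illustration).
def drop_ratio_for_layer(layer_idx, num_layers, max_ratio=0.3, min_ratio=0.0):
    frac = layer_idx / max(num_layers - 1, 1)     # 0 for the first layer, 1 for the last
    return max_ratio + (min_ratio - max_ratio) * frac

ratios = [drop_ratio_for_layer(i, 12) for i in range(12)]   # high early, low late
```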
Whether structured drop is needed Inspired by
the DropBlock [10] method for CNNs, we implement two
structured versions of the dropout operation for ViTs: the
block-version dropout that drops keys corresponding to
contiguous patches in images or feature maps; the cross-
version dropout that drops keys corresponding to patches
in horizontal and vertical stripes. We conduct thorough ex-
periments to validate their efficacy and find that the struc-
ture trick useful for CNN is not essential for ViT, due to the
powerful capability of ViT to grasp contextual information
in full image range.
Given exploration on the above three aspects, we present
a novel DropKey method that utilizes Key as the drop unit
and decreasing schedule for drop ratio. In particular, Drop-
Key overcomes drawbacks of the vanilla dropout technique
for ViTs, improving performance in a general and effective way. Comprehensive experiments on different ViT archi-
tectures and vision tasks demonstrate the efficacy of Drop-
Key. Our contributions are in three folds: First, to our best
knowledge, we are the first to theoretically and experimen-
tally analyze dropout technique for self-attention layers in
ViT from three core aspects: drop unit, drop schedule and
structured necessity; Second, according to our analysis, we
present a novel DropKey method to effectively improve the
dropout technique in ViT. Third, with DropKey, we improve
multiple ViT architectures to achieve new SOTAs on vari-
ous vision tasks.
|
Liang_Visual_Exemplar_Driven_Task-Prompting_for_Unified_Perception_in_Autonomous_Driving_CVPR_2023
|
Abstract
Multi-task learning has emerged as a powerful paradigm
to solve a range of tasks simultaneously with good efficiency
in both computation resources and inference time. However,
these algorithms are designed for different tasks mostly not
within the scope of autonomous driving, thus making it
hard to compare multi-task methods in autonomous driving.
Aiming to enable the comprehensive evaluation of present
multi-task learning methods in autonomous driving, we ex-
tensively investigate the performance of popular multi-task
methods on the large-scale driving dataset, which covers
four common perception tasks, i.e., object detection, seman-
tic segmentation, drivable area segmentation, and lane de-
tection. We provide an in-depth analysis of current multi-
task learning methods under different common settings and
find that the existing methods make progress but there
is still a large performance gap compared with single-task
baselines. To alleviate this dilemma in autonomous driving,
we present an effective multi-task framework, VE-Prompt,
which introduces visual exemplars via task-specific prompt-
ing to guide the model toward learning high-quality task-
specific representations. Specifically, we generate visual
exemplars based on bounding boxes and color-based mark-
ers, which provide accurate visual appearances of target
categories and further mitigate the performance gap. Fur-
thermore, we bridge transformer-based encoders and con-
volutional layers for efficient and accurate unified percep-
tion in autonomous driving. Comprehensive experimental
results on the diverse self-driving dataset BDD100K show
that the VE-Prompt improves the multi-task baseline and
further surpasses single-task models.
|
1. Introduction
Multi-task learning (MTL) has been the source of a num-
ber of breakthroughs in autonomous driving over the last
†Corresponding author.
Figure 1. Comparison of different prompts in computer vision. (a) Extracting textual prompts from a text encoder to perform image-text alignment [62]. (b) Prepending learnable prompts to the embeddings of image patches [20]. (c) Visual exemplar driven prompts for multi-task learning (ours). The generated task prompts encode high-quality task-specific knowledge for downstream tasks.
few years [23, 50, 56] and general vision tasks recently
[2, 12, 26, 34, 55]. As the foundation of autonomous driv-
ing, a robust vision perception system is required to pro-
vide critical information, including the position of traffic
participants, traffic signals like lights, signs, lanes, and ob-
stacles that influence the drivable space, to ensure driving
safety and comfort. These tasks gain knowledge from the
same data source and present prominent relationships be-
tween each other, like traffic participants, are more likely to
appear within drivable spaces and traffic signs may appear
near traffic lights, etc. Training these tasks independently is
time costing and fails to mine the latent relationship among
them. Therefore, it is crucial to solve these multiple tasks
simultaneously, which can improve data efficiency and re-
duce training and inference time.
Some recent works have attempted to apply unified train-
ing on multiple tasks in autonomous driving. Uncertainty
[21] trains per-pixel depth prediction, semantic segmenta-
tion, and instance segmentation in a single model. CIL [19]
introduces an extra traffic light classifier to learn different
traffic patterns following traffic light changes. CP-MTL
[4] learns object detection and depth prediction together to
identify dangerous traffic scenes. However, these works dif-
fer in task types, evaluation matrix, and dataset, making it
hard to compare their performances. For example, most of
them are developed upon dense prediction [2, 55] and natu-
ral language understanding [7,47], rather than being tailored
for more common perception tasks for autonomous driving,
thus these methods may produce poor results when applied
to a self-driving system. As a result, there is an emerg-
ing demand for a thorough evaluation of existing multi-task
learning methods covering common tasks in autonomous
driving.
In this paper, we focus on heterogeneous multi-task
learning in common scenarios of autonomous driving and
cover popular self-driving tasks, i.e., object detection, se-
mantic segmentation, drivable area segmentation, and lane
detection. We provide a systematic study of present MTL
methods on large-scale driving dataset BDD100K [58].
Specifically, we find that task scheduling [26] is better than
zeroing loss [51], but worse than pseudo labeling [15] on
most tasks. Interestingly, in task-balancing methods, Un-
certainty [21] produces satisfactory results on most tasks,
while MGDA [41] only performs well on lane detection.
This indicates that negative transfer [8], a phenomenon in which improving a model's performance on one task hurts its performance on another task with different needs, is common among these approaches.
To mitigate the negative transfer problem, we introduce
the visual exemplar-driven task-prompting (shortened as VE-
Prompt ) based on the following motivations: (1) Given the
visual clues of each task, the model can extract task-related
information from the pre-trained model. Different from cur-
rent prompting methods which introduce textual prompts
[6, 38, 62, 63] or learnable context [20], we leverage exem-
plars containing information of target objects to generate
task-specific prompts by considering that the visual clues
should represent the specific task to some extent, and give
hints for learning task-specific information; (2) Transformer
has achieved competitive performance on many vision tasks
but usually requires long training time, thus tackling four
tasks simultaneously on a pure transformer is resource-
intensive. To overcome this challenge, we efficiently bridge
transformer encoders and convolutional layers to build the
hybrid multi-task architecture. Extensive experiments show
that VE-Prompt surpasses multi-task baselines by a large
margin.
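To illustrate motivation (1) above, the sketch below crops exemplars of a task's target categories, embeds them with a frozen image encoder, and averages the embeddings into a single task prompt that a task head can attend to; the encoder, crop size, and pooling choice are placeholders for illustration, not the actual prompt generator.

```python
# Illustrative visual-exemplar prompt generation: crop exemplars of a task's
# target categories, embed them with a frozen encoder, and average the embeddings
# into one task prompt. Encoder and pooling are placeholders, not the paper's design.
import torch
import torch.nn as nn

@torch.no_grad()
def build_task_prompt(images, boxes, encoder, crop_size=96):
    """images: list of (3, H, W) tensors; boxes: list of (x1, y1, x2, y2) ints.
    encoder: frozen module mapping (N, 3, crop_size, crop_size) -> (N, D)."""
    crops = []
    for img, (x1, y1, x2, y2) in zip(images, boxes):
        crop = img[:, y1:y2, x1:x2].unsqueeze(0)
        crops.append(nn.functional.interpolate(crop, size=crop_size, mode="bilinear",
                                               align_corners=False))
    feats = encoder(torch.cat(crops, dim=0))        # (N, D) exemplar embeddings
    return feats.mean(dim=0, keepdim=True)          # (1, D) task-specific prompt

# The prompt can then be prepended to a task head's token sequence, e.g.:
# tokens = torch.cat([task_prompt.unsqueeze(0).expand(B, -1, -1), image_tokens], dim=1)
```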
We summarize the main contributions of our work be-
low:
• We provide an in-depth analysis of current multi-task
learning approaches under multiple settings that complywith real-world scenarios, consisting of three common
multi-task data split settings, two partial-label learning
approaches, three task scheduling techniques, and three
task balancing strategies.
• We propose an effective framework VE-Prompt, which
utilizes visual exemplars to provide task-specific visual
clues and guide the model toward learning high-quality
task-specific representations.
• The VE-Prompt framework is constructed in a computa-
tionally efficient way and outperforms competitive multi-
task methods on all tasks.
|
Li_DynIBaR_Neural_Dynamic_Image-Based_Rendering_CVPR_2023
|
Abstract
We address the problem of synthesizing novel views from
a monocular video depicting a complex dynamic scene. State-
of-the-art methods based on temporally varying Neural Ra-
diance Fields (aka dynamic NeRFs ) have shown impressive
results on this task. However, for long videos with com-
plex object motions and uncontrolled camera trajectories,
these methods can produce blurry or inaccurate renderings,
hampering their use in real-world applications. Instead of
encoding the entire dynamic scene within the weights of
MLPs, we present a new approach that addresses these lim-
itations by adopting a volumetric image-based rendering
framework that synthesizes new viewpoints by aggregating
features from nearby views in a scene motion–aware manner.
Our system retains the advantages of prior methods in its
ability to model complex scenes and view-dependent effects,
but also enables synthesizing photo-realistic novel views
from long videos featuring complex scene dynamics with
unconstrained camera trajectories. We demonstrate signifi-cant improvements over state-of-the-art methods on dynamic
scene datasets, and also apply our approach to in-the-wild
videos with challenging camera and object motion, where
prior methods fail to produce high-quality renderings.
|
1. Introduction
Computer vision methods can now produce free-
viewpoint renderings of static 3D scenes with spectacular
quality. What about moving scenes, like those featuring peo-
ple or pets? Novel view synthesis from a monocular video of
a dynamic scene is a much more challenging dynamic scene
reconstruction problem. Recent work has made progress
towards synthesizing novel views in both space and time,
thanks to new time-varying neural volumetric representa-
tions like HyperNeRF [ 50] and Neural Scene Flow Fields
(NSFF) [ 35], which encode spatiotemporally varying scene
content volumetrically within a coordinate-based multi-layer
perceptron (MLP).
However, these dynamic NeRF methods have limitations
that prevent their application to casual, in-the-wild videos.
Local scene flow–based methods like NSFF struggle to
scale to longer input videos captured with unconstrained
camera motions: the NSFF paper only claims good perfor-
mance for 1-second, forward-facing videos [ 35]. Methods
like HyperNeRF that construct a canonical model are mostly
constrained to object-centric scenes with controlled camera
paths, and can fail on scenes with complex object motion.
In this work, we present a new approach that is scalable to
dynamic videos captured with 1) long time duration, 2) un-
bounded scenes, 3) uncontrolled camera trajectories, and 4)
fast and complex object motion. Our approach retains the ad-
vantages of volumetric scene representations that can model
intricate scene geometry with view-dependent effects, while
significantly improving rendering fidelity for both static and
dynamic scene content compared to recent methods [ 35,50],
as illustrated in Fig. 1.
We take inspiration from recent methods for rendering
static scenes that synthesize novel images by aggregating
local image features from nearby views along epipolar
lines [ 39,64,70]. However, scenes that are in motion vi-
olate the epipolar constraints assumed by those methods. We
instead propose to aggregate multi-view image features in
scene motion–adjusted ray space, which allows us to cor-
rectly reason about spatio-temporally varying geometry and
appearance.
We also encountered many efficiency and robustness chal-
lenges in scaling up aggregation-based methods to dynamic
scenes. To efficiently model scene motion across multiple
views, we model this motion using motion trajectory fields
that span multiple frames, represented with learned basis
functions. Furthermore, to achieve temporal coherence in
our dynamic scene reconstruction, we introduce a new tem-
poral photometric loss that operates in motion-adjusted ray
space. Finally, to improve the quality of novel views, we pro-
pose to factor the scene into static and dynamic components
through a new IBR-based motion segmentation technique
within a Bayesian learning framework.
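The trajectory idea above amounts to writing each point's displacement over time as a linear combination of shared temporal basis functions; the sketch below uses a DCT-style basis as a stand-in for the learned basis, so the basis choice and sizes are assumptions of this illustration.

```python
# Illustrative motion trajectory field: a 3D point's position at frame t is its
# canonical position plus a linear combination of temporal basis functions.
# The DCT-style basis here is a stand-in for a learned basis.
import numpy as np

def trajectory_basis(num_frames, num_basis=6):
    """Returns (num_frames, num_basis) temporal basis functions."""
    t = np.arange(num_frames) / max(num_frames - 1, 1)
    return np.stack([np.cos(np.pi * k * t) for k in range(num_basis)], axis=1)

def motion_adjusted_points(x_canonical, coeffs, basis, t):
    """x_canonical: (N, 3) points; coeffs: (N, num_basis, 3) per-point coefficients;
    basis: (num_frames, num_basis); t: frame index. Returns the points at frame t."""
    displacement = np.einsum("k,nkc->nc", basis[t], coeffs)   # (N, 3)
    return x_canonical + displacement

basis = trajectory_basis(num_frames=60)
pts_t = motion_adjusted_points(np.zeros((100, 3)), np.random.randn(100, 6, 3) * 0.01, basis, t=10)
```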
On two dynamic scene benchmarks, we show that our
approach can render highly detailed scene content and sig-
nificantly improves upon the state-of-the-art, leading to an
average reduction in LPIPS errors of over 50%, both across entire scenes and on regions corresponding to dynamic objects. We also show that our method can be applied to in-
the-wild videos with long duration, complex scene motion,
and uncontrolled camera trajectories, where prior state-of-
the-art methods fail to produce high quality renderings. We
hope that our work advances the applicability of dynamic
view synthesis methods to real-world videos.
|
Liu_AdaptiveMix_Improving_GAN_Training_via_Feature_Space_Shrinkage_CVPR_2023
|
Abstract
Due to the outstanding capability for data generation, Gen-
erative Adversarial Networks (GANs) have attracted
considerable attention in unsupervised learning. However,
training GANs is difficult, since the training distribution is
dynamic for the discriminator, leading to unstable image
representation. In this paper, we address the problem
of training GANs from a novel perspective, i.e., robust
image classification. Motivated by studies on robust image
representation, we propose a simple yet effective module,
namely AdaptiveMix, for GANs, which shrinks the regions
of training data in the image representation space of the
discriminator. Considering it is intractable to directly
bound feature space, we propose to construct hard samples
and narrow down the feature distance between hard and
easy samples. The hard samples are constructed by mixing
a pair of training images. We evaluate the effectiveness
of our AdaptiveMix with widely-used and state-of-the-art
GAN architectures. The evaluation results demonstrate
that our AdaptiveMix can facilitate the training of GANs
and effectively improve the image quality of generated
samples. We also show that our AdaptiveMix can be further
applied to image classification and Out-Of-Distribution
(OOD) detection tasks, by equipping it with state-of-the-
art methods. Extensive experiments on seven publicly
available datasets show that our method effectively boosts
the performance of baselines. The code is publicly avail-
able at https://github.com/WentianZhang-
ML/AdaptiveMix .
|
1. Introduction
Artificial Curiosity [40, 41] and Generative Adversar-
ial Networks (GANs) have attracted extensive attention
due to their remarkable performance in image generation
†Equal Contribution
Corresponding Authors: Bing Li and Yuexiang Li.
[Figure 1 panels: StyleGAN2 | StyleGAN2 + AdaptiveMix, on AFHQ-Cat-5k (5,153 img) and FFHQ-5k (5,000 img)]
Figure 1. Results generated by StyleGAN-V2 [20] and our method
(StyleGAN-V2 + AdaptiveMix) on AFHQ-Cat and FFHQ-5k. We
propose a simple yet effective module AdaptiveMix, which can
be used for helping the training of unsupervised GANs. When
trained on a small amount of data, StyleGAN-V2 generates images
with artifacts, due to unstable training. However, our AdaptiveMix
effectively boosts the performance of StyleGAN-V2 in terms of
image quality.
[18,45,55,57]. A standard GAN consists of a generator and
a discriminator network, where the discriminator is trained
to discriminate real/generated samples, and the generator
aims to generate samples that can fool the discriminator.
Nevertheless, the training of GANs is difficult and unstable,
leading to low-quality generated samples [23, 34].
Many efforts have been devoted to improving the train-
ing of GANs ( e.g. [1,5,12,25,27,34,39]). Previous studies
[37] attempted to co-design the network architecture of the
generator and discriminator to balance the iterative training.
Following this research line, PG-GAN [16] gradually trains
the GANs with progressive growing architecture according
to the paradigm of curriculum learning. More recently, data
augmentation-based methods, such as APA [15], ADA [17],
and adding noises into the generator [20], were further pro-
posed to stabilize the training of GANs. A few works ad-
dress this problem on the discriminator side. For example,
WGAN [1] proposes to enforce a Lipschitz constraint by us-
ing weight clipping. Instead, WGAN-GP [6] directly penal-
izes the norm of the discriminator’s gradient. These meth-
ods have shown that revisions of discriminators can achieve
promising performance. However, improving the training
of GANs remains an unsolved and challenging problem.
In this paper, considering that the discriminator is critical
to the training of GANs, we address the problem of training
GANs from a novel perspective, i.e.,robust image classifi-
cation. In particular, the discriminator can be regarded as
performing a classification task that discriminates real/fake
samples. Our insight is that controlling the image repre-
sentation ( i.e., feature extractor) of the discriminator can
improve the training of GANs, motivated by studies on ro-
bust image classification [28, 44]. More specifically, recent
work [44] on robust image representation presents inspir-
ing observations that training data is scattered in the learn-
ing space of vanilla classification networks; hence, the net-
works would improperly assign high confidences to samples
that are off the underlying manifold of training data. This
phenomenon also leads to the vulnerability of GANs, i.e.,
the discriminator cannot focus on learning the distribution
of real data. Therefore, we propose to shrink the regions of
training data in the image representation space of the dis-
criminator.
Different from existing works [15, 17], we explore a
question for GANs: Would the training stability of GANs be
improved if we explicitly shrink the regions of training data
in the image representation space supported by the discrim-
inator? To this end, we propose a module named Adap-
tiveMix to shrink the regions of training data in the latent
space constructed by a feature extractor. However, it is non-
trivial and challenging to directly capture the boundaries of
feature space. Instead, our insight is that we can shrink the
feature space by reducing the distance between hard and
easy samples in the latent space, where hard samples are
regarded as the samples that are difficult for classification
networks to discriminate/classify. To this end, AdaptiveMix
constructs hard samples by mixing a pair of training images
and then narrows down the distance between mixed images
and easy training samples represented by the feature extrac-
tor for feature space shrinking. We evaluate the effective-
ness of our AdaptiveMix with state-of-the-art GAN archi-
tectures, including DCGAN [37] and StyleGAN-V2 [20],
which demonstrates that the proposed AdaptiveMix facil-
itates the training of GANs and effectively improves the im-
age quality of generated samples.
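As a rough illustration of the mechanism described above, the snippet below mixes a pair of training images to construct a hard sample and penalizes the feature distance between the mixed sample and its easy constituents. It is a minimal sketch rather than the paper's exact objective: the Beta-sampled mixing ratio, the interpolated feature target, and the MSE distance are our assumptions.

```python
import torch
import torch.nn.functional as F

def adaptive_mix_loss(feature_extractor, images: torch.Tensor, alpha: float = 1.0):
    """Shrink the feature space by pulling mixed (hard) samples toward easy ones.

    Hard samples are built by convexly mixing a pair of training images; the loss is
    the feature distance between the mixed image and the lambda-weighted combination
    of the two original (easy) features.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0), device=images.device)

    mixed = lam * images + (1.0 - lam) * images[perm]        # hard samples
    feat_easy = feature_extractor(images)                    # (B, D)
    feat_mixed = feature_extractor(mixed)                    # (B, D)

    # Pull the hard sample's feature toward the interpolation of its constituents,
    # which narrows the region occupied by training data in the feature space.
    target = lam * feat_easy + (1.0 - lam) * feat_easy[perm]
    return F.mse_loss(feat_mixed, target.detach())

# Usage sketch (the feature-extractor handle is illustrative): add this term to the
# discriminator's objective, e.g.
#   d_loss = gan_loss + mix_weight * adaptive_mix_loss(discriminator_features, real_images)
```

In a GAN, this term would be added on the discriminator side so that its feature extractor keeps the training data within a tighter region of the representation space.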
Besides image generation, our AdaptiveMix can be ap-
plied to image classification [9, 53] and Out-Of-Distribution (OOD) detection [11, 14, 50] tasks, by equipping it with
suitable classifiers. To show the way of applying Adap-
tiveMix, we integrate it with the Orthogonal classifier in
recent state-of-the-art work [52] in OOD. Extensive experi-
mental results show that our AdaptiveMix is simple yet ef-
fective, which consistently boosts the performance of [52]
on both robust image classification and Out-Of-Distribution
tasks on multiple datasets.
In a nutshell, the contributions of this paper can be sum-
marized as:
• We propose a novel module, namely AdaptiveMix, to
improve the training of GANs. Our AdaptiveMix is
simple yet effective and plug-and-play, which is help-
ful for GANs to generate high-quality images.
• We show that GANs can be stably and efficiently
trained by shrinking regions of training data in image
representation supported by the discriminator.
• We show our AdaptiveMix can be applied to not only
image generation, but also OOD and robust image
classification tasks. Extensive experiments show that
our AdaptiveMix consistently boosts the performance
of baselines for four different tasks ( e.g., OOD) on
seven widely-used datasets.
|
Liu_StyleRF_Zero-Shot_3D_Style_Transfer_of_Neural_Radiance_Fields_CVPR_2023
|
Abstract
3D style transfer aims to render stylized novel views of
a 3D scene with multi-view consistency. However, most ex-
isting work suffers from a three-way dilemma over accu-
rate geometry reconstruction, high-quality stylization, and
being generalizable to arbitrary new styles. We propose
StyleRF (Style Radiance Fields), an innovative 3D style
transfer technique that resolves the three-way dilemma by
performing style transformation within the feature space
of a radiance field. StyleRF employs an explicit grid of
high-level features to represent 3D scenes, with which high-
fidelity geometry can be reliably restored via volume ren-
dering. In addition, it transforms the grid features ac-
cording to the reference style which directly leads to high-
quality zero-shot style transfer. StyleRF consists of two
innovative designs. The first is sampling-invariant con-
tent transformation that makes the transformation invari-
*Shijian Lu is the corresponding author.
ant to the holistic statistics of the sampled 3D points and
accordingly ensures multi-view consistency. The second is
deferred style transformation of 2D feature maps which is
equivalent to the transformation of 3D points but greatly re-
duces memory footprint without degrading multi-view con-
sistency. Extensive experiments show that StyleRF achieves
superior 3D stylization quality with precise geometry recon-
struction and it can generalize to various new styles in a
zero-shot manner. Project website: https://kunhao-
liu.github.io/StyleRF/
|
1. Introduction
Given a set of multi-view images of a 3D scene and an
image capturing a target style, 3D style transfer aims to gen-
erate novel views of the 3D scene that have the target style
consistently across the generated views (Fig. 1). Neural
style transfer has been investigated extensively, and state-
of-the-art methods allow transferring arbitrary styles in a
zero-shot manner. However, most existing work focuses
on style transfer across 2D images [15, 21, 24] but cannot
extend to a 3D scene that has arbitrary new views. Prior
studies [19, 22, 37, 39] have shown that naively combining
3D novel view synthesis and 2D style transfer often leads
to multi-view inconsistency or poor stylization quality, and
3D style transfer should optimize novel view synthesis and
style transfer jointly.
However, the current 3D style transfer is facing a three-
way dilemma over accurate geometry reconstruction, high-
quality stylization, and being generalizable to new styles.
Different approaches have been investigated to resolve the
three-way dilemma. For example, multiple style trans-
fer [11, 22] requires a set of pre-defined styles but can-
not generalize to unseen new styles. Point-cloud-based
style transfer [19, 37] requires a pre-trained depth estima-
tion module that is prone to inaccurate geometry reconstruc-
tion. Zero-shot style transfer with neural radiance fields
(NeRF) [8] cannot capture detailed style patterns and tex-
tures as it implicitly injects the style information into neu-
ral network parameters. Optimization-based style trans-
fer [17, 39, 63] suffers from slow optimization and cannot
scale with new styles.
In this work, we introduce StyleRF to resolve the three-
way dilemma by performing style transformation in the fea-
ture space of a radiance field. A radiance field is a contin-
uous volume that can restore more precise geometry than
point clouds or meshes. In addition, transforming a radi-
ance field in the feature space is more expressive with bet-
ter stylization quality than implicit methods [8], and it can
also generalize to arbitrary styles. We construct a 3D scene
representation with a grid of deep features to enable fea-
ture transformation. In addition, multi-view consistent style
transformation in the feature space could be achieved by
either transforming the whole feature grid or transforming
the sampled 3D points. We adopt the latter as the former in-
curs much more computational cost during training to styl-
ize the whole feature grid in every iteration, whereas the
latter can reduce computational cost through decreasing the
size of training patch and the number of sampled points.
However, applying off-the-shelf style transformations to a
batch of sampled 3D points impairs the multi-view consis-
tency as they are conditioned on the holistic statistics of the
batch. Beyond that, transforming every sampled 3D point is
memory-intensive since NeRF needs to query hundreds of
sampled points along each ray for rendering a single pixel.
We decompose the style transformation into sampling-
invariant content transformation (SICT) and deferred style
transformation (DST), the former eliminating the depen-
dency on holistic statistics of sampled point batch and the
latter deferring style transformation to 2D feature maps for
better efficiency. In SICT, we introduce volume-adaptive
normalization that learns the mean and variance of the whole volume instead of computing them from a sampled
batch. In addition, we apply channel-wise self-attention to
transform each 3D point independently to make it condi-
tioned on the feature of that point regardless of the holis-
tic statistics of the sampled batch. In DST, we defer the
style transformation to the volume-rendered 2D feature
maps based on the observation that the style transforma-
tion of each point is the same. By formulating the style
transformation by pure matrix multiplication and adaptive
bias addition, transforming 2D feature maps is mathemat-
ically equivalent to transforming 3D point features but it
saves computation and memory greatly. Thanks to the
memory-efficient representation of 3D scenes and deferred
style transformation, our network can train with 256×256
patches directly without requiring sub-sampling like previ-
ous NeRF-based 3D style transfer methods [8, 11, 22].
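The equivalence behind deferred style transformation can be checked numerically. The toy snippet below uses a simplified weighted-sum renderer (real volume rendering uses transmittance-based weights) and random tensors; it only illustrates why a pure matrix multiplication plus an adaptive, weight-scaled bias commutes with the linear rendering step, which is what allows the transformation to be applied to the rendered 2D feature map instead of every sampled 3D point.

```python
import torch

def volume_render(point_feats: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # point_feats: (R, S, C) features of S samples per ray; weights: (R, S) rendering weights.
    return (weights[..., None] * point_feats).sum(dim=1)            # (R, C)

R, S, C = 4, 64, 32
feats = torch.randn(R, S, C)
w = torch.softmax(torch.randn(R, S), dim=-1)                        # toy rendering weights
W_style, b_style = torch.randn(C, C), torch.randn(C)                # style matrix and bias

# (a) transform every sampled 3D point, then volume-render (naive, memory-intensive)
out_3d = volume_render(feats @ W_style + b_style, w)

# (b) volume-render first, then transform the 2D feature map, with the bias scaled by
#     the accumulated rendering weight (the "adaptive bias addition")
out_2d = volume_render(feats, w) @ W_style + w.sum(dim=-1, keepdim=True) * b_style

assert torch.allclose(out_3d, out_2d, atol=1e-4)                    # equal up to float error
```

Option (b) touches one feature vector per ray rather than one per sample, which is where the memory saving comes from.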
The contributions of this work can be summarized in
three aspects. First , we introduce StyleRF, an innovative
zero-shot 3D style transfer framework that can generate
zero-shot high-quality 3D stylization via style transforma-
tion within the feature space of a radiance field. Second ,
we design sampling-invariant content transformation and
deferred style transformation, the former achieving multi-
view consistent transformation by eliminating dependency
on holistic statistics of sampled point batch while the lat-
ter greatly improves stylization efficiency by deferring style
transformation to 2D feature maps. Third , extensive experi-
ments show that StyleRF achieves superior 3D style transfer
with accurate geometry reconstruction, high-quality styliza-
tion, and great generalization to new styles.
|
Liu_Fine-Grained_Face_Swapping_via_Regional_GAN_Inversion_CVPR_2023
|
Abstract
We present a novel paradigm for high-fidelity face swap-
ping that faithfully preserves the desired subtle geometry
and texture details. We rethink face swapping from the per-
spective of fine-grained face editing, i.e., “editing for swap-
ping” (E4S), and propose a framework that is based on the
explicit disentanglement of the shape and texture of facial
components. Following the E4S principle, our framework
enables both global and local swapping of facial features,
as well as controlling the amount of partial swapping spec-
ified by the user. Furthermore, the E4S paradigm is in-
herently capable of handling facial occlusions by means of
facial masks. At the core of our system lies a novel Re-
gional GAN Inversion (RGI) method, which allows the ex-
plicit disentanglement of shape and texture. It also allows
face swapping to be performed in the latent space of Style-
GAN. Specifically, we design a multi-scale mask-guided en-
coder to project the texture of each facial component into
regional style codes. We also design a mask-guided injec-
tion module to manipulate the feature maps with the style
Work done when Zhian Liu was an intern at Tencent AI Lab
†Equal contribution.
*Corresponding author: [email protected]
codes. Based on the disentanglement, face swapping is re-
formulated as a simplified problem of style and mask swap-
ping. Extensive experiments and comparisons with current
state-of-the-art methods demonstrate the superiority of our
approach in preserving texture and shape details, as well as
working with high resolution images. The project page is
https://e4s2022.github.io
|
1. Introduction
Face swapping aims at transferring the identity infor-
mation ( e.g., shape and texture of facial components) of
a source face to a given target face, while retaining the
identity-irrelevant attribute information of the target ( e.g.,
expression, head pose, background, etc.). It has immense
potential applications in the entertainment and film produc-
tion industry, and thus has drawn considerable attention in
the field of computer vision and graphics.
The first and foremost challenge in face swapping is
identity preservation, i.e., how to faithfully preserve the
unique facial characteristics of the source image. Most
existing methods [ 9,24,38] rely on a pre-trained 2D face
recognition network [ 12] or a 3D morphable face model
(3DMM) [ 7,13] to extract the global identity-related fea-
tures, which are then injected into the face generation pro-
cess. However, these face models are mainly designed for
classification rather than generation, thus some informative
and important visual details related to facial identity may
not be captured. Furthermore, the 3D face model built from
a single input image can hardly meet the requirement of ro-
bust and accurate facial shape recovery. Consequently, re-
sults from previous methods often exhibit the “in-between
effect”: i.e., the swapped face resembles both the source
and the target faces, which looks like a third person instead
of faithfully preserving the source identity. A related prob-
lem is skin color , where we argue that skin color is some-
times an important aspect of the source identity and should
be preserved, while previous methods will always maintain
the skin color of the target face, resulting in unrealistic re-
sults when swapping faces with distinct skin tones.
Another challenge is how to properly handle facial oc-
clusion. In real applications, for example, it is a common
situation that some face regions are occluded by hair in the
input images. An ideal swapped result should maintain the
hair from the target, meaning that the occluded part should
be recovered in the source image. To handle occlusion, FS-
GAN [ 25] designs an inpainting sub-network to estimate
the missing pixels of the source, but their inpainted faces
are blurry. A refinement network is carefully designed in
FaceShifter [ 24] to maintain the occluded region in the tar-
get; however, the refinement network may bring back some
identity information of the target.
To address the above challenges more effectively, we re-
think face swapping from a new perspective of fine-grained
face editing, i.e., “editing for swapping” (E4S). Given that
both the shape and texture of individual facial components
are correlated with facial identity, we consider to disentan-
gle shape and texture explicitly for better identity preserva-
tion. Instead of using a face recognition model or 3DMMs
to extract global identity features, inspired by fine-grained
face editing [ 23], we exploit component masks for local fea-
ture extraction. With such disentanglement, face swapping
can be transformed as the replacement of local shape and
texture between two given faces. The locally-recomposed
shapes and textures are then fed into a mask-guided gener-
ator to synthesize the final result. One additional advantage
of our E4S framework is that the occlusion challenge can be
naturally handled by the masks, as the current face parsing
network [ 44] can provide the pixel-wise label of each face
region. The generator can fill out the missing pixels with
the swapped texture features adaptively according to those
labels. It requires no additional effort to design a dedicated
module as in previous methods [ 24,25].
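Under the E4S view, the swapping step itself reduces to recomposing per-component style codes and masks. The sketch below shows that recomposition in its simplest form; which regions take their texture from the source, and how source and target masks are reconciled, are illustrative assumptions rather than the paper's exact recipe.

```python
# Regions whose texture should follow the source identity (an illustrative split).
ID_REGIONS = ["skin", "nose", "eyes", "brows", "mouth"]
ALL_REGIONS = ID_REGIONS + ["hair", "background"]

def swap_styles_and_masks(src_codes, tgt_codes, src_masks, tgt_masks):
    """Recompose regional style codes and masks for 'editing for swapping'.

    src_codes / tgt_codes: dict mapping region -> per-layer style codes from a
    mask-guided encoder; src_masks / tgt_masks: dict mapping region -> binary mask
    from a face parser. Returns the inputs for a mask-guided generator.
    """
    swapped_codes, swapped_masks = {}, {}
    for region in ALL_REGIONS:
        # Texture (style codes): identity regions from the source, the rest from the target.
        swapped_codes[region] = src_codes[region] if region in ID_REGIONS else tgt_codes[region]
        # Layout (masks): taken from the target here, so pixels occluded in the source are
        # simply filled by the generator according to the swapped texture codes.
        swapped_masks[region] = tgt_masks[region]
    return swapped_codes, swapped_masks

# Usage sketch: swapped_image = generator(swapped_codes, swapped_masks)
```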
The key to our E4S is the disentanglement of shape and
texture of facial components. Recently, StyleGAN [ 18]
has been applied to various image synthesis tasks due to
its amazing performance on high-quality image generation, which inspires us to exploit a pre-trained StyleGAN for the
disentanglement. This is an ambitious goal because current
GAN inversion methods [ 30,33,36] only focus on global
attribute editing (age, gender, expression, etc.) in the global
style space of StyleGAN, and provide no mechanism for
local shape and texture editing.
To solve this, we propose a novel Regional GAN Inver-
sion (RGI) method that resides in a new regional-wise W+
space, denoted as Wr+. Specifically, we design a mask-
guided multi-scale encoder to project an input face into the
style space of StyleGAN. Each facial component has a set
of style codes for different layers of the StyleGAN gener-
ator. We also design a mask-guided injection module that
uses the style codes to manipulate the feature maps in the
generator according to the given masks. In this way, the
shape and texture of each facial component are fully disen-
tangled, where the texture is represented by the style codes
while the shape is by the masks. Moreover, this new in-
version latent space supports the editing of each individual
face component in shape and texture, enabling various ap-
plications such as face beautification, hairstyle transfer, and
controlling the swapping extent of face swapping. To sum
up, our contributions are:
•We tackle face swapping from a new perspective of
fine-grained editing, i.e.,editing for swapping , and
propose a novel framework for high-fidelity face swap-
ping with identity preservation and occlusion handling.
•We propose a StyleGAN-based Regional GAN Inver-
sion (RGI) method that resides in a novel Wr+ space,
for the explicit disentanglement of shape and texture.
It simplifies face swapping as the swapping of the cor-
responding style codes and masks.
•The extensive experiments on face swapping, face edit-
ing, and other extension tasks demonstrate the effec-
tiveness of our E4S framework and RGI.
|
Khurana_Point_Cloud_Forecasting_as_a_Proxy_for_4D_Occupancy_Forecasting_CVPR_2023
|
Abstract
Predicting how the world can evolve in the future is cru-
cial for motion planning in autonomous systems. Classi-
cal methods are limited because they rely on costly human
annotations in the form of semantic class labels, bounding
boxes, and tracks or HD maps of cities to plan their mo-
tion — and thus are difficult to scale to large unlabeled
datasets. One promising self-supervised task is 3D point
cloud forecasting [11, 18–20] from unannotated LiDAR se-
quences. We show that this task requires algorithms to im-
plicitly capture (1) sensor extrinsics (i.e., the egomotion
of the autonomous vehicle), (2) sensor intrinsics (i.e., the
sampling pattern specific to the particular LiDAR sensor),
and (3) the shape and motion of other objects in the scene.
But autonomous systems should make predictions about the
world and not their sensors! To this end, we factor out (1)
and (2) by recasting the task as one of spacetime (4D) oc-
Equal contribution
cupancy forecasting. But because it is expensive to obtain
ground-truth 4D occupancy, we “render” point cloud data
from 4D occupancy predictions given sensor extrinsics and
intrinsics, allowing one to train and test occupancy algo-
rithms with unannotated LiDAR sequences. This also al-
lows one to evaluate and compare point cloud forecasting
algorithms across diverse datasets, sensors, and vehicles.
|
1. Introduction
Motion planning in a dynamic environment requires au-
tonomous agents to predict the motion of other objects.
Standard solutions consist of perceptual modules such as
mapping, object detection, tracking, and trajectory fore-
casting. Such solutions often rely on human annotations
in the form of HD maps of cities, or semantic class labels,
bounding boxes, and object tracks, and therefore are diffi-
cult to scale to large unlabeled datasets. One promising self-
supervised task is 3D point cloud forecasting [11, 18–20].
Figure 2. Points depend on the intersection of rays from the
depth sensor and the environment. Therefore, accurately predict-
ing points requires accurately predicting sensor extrinsics (sensor
egomotion) and intrinsics (ray sampling pattern). But we want to
understand dynamics of the environment, not our LiDAR sensor!
Since points appear where lasers from the sensor and scene
intersect, the task of forecasting point clouds requires algo-
rithms to implicitly capture (1) sensor extrinsics ( i.e., the
ego-motion of the autonomous vehicle), (2) sensor intrin-
sics ( i.e., the sampling pattern specific to the LiDAR sen-
sor), and (3) the shape and motion of other objects in the
scene. This task can be non-trivial even in a static scene
(Fig. 2). We argue that autonomous systems should focus
on making predictions about the world and not themselves,
since an ego-vehicle has access to its future motion plans
(extrinsics) and calibrated sensor parameters (intrinsics).
We factor out these (1) sensor extrinsics and (2) intrin-
sics by recasting the task of point cloud forecasting as one
of spacetime (4D) occupancy forecasting. This disentan-
gles and simplifies the formulation of point cloud forecast-
ing, which now focuses solely on forecasting the central
quantity of interest, the 4D occupancy. Because it is ex-
pensive to obtain ground-truth 4D occupancy, we “render”
point cloud data from 4D occupancy predictions given sen-
sor extrinsics and intrinsics. In some ways, our approach
can be seen as the spacetime analog of novel-view syn-
thesis from volumetric models such as NeRFs [12]; rather
than rendering images by querying a volumetric model with
rays from a known camera view, we render a LiDAR scan
by querying a 4D model with rays from known sensor in-
trinsics and extrinsics. This allows one to train and test
4D occupancy forecasting algorithms with un-annotated Li-
DAR sequences. This also allows one to evaluate and
compare point cloud forecasting algorithms across diverse
datasets, sensors, and vehicles. We find that our approach
to 4D occupancy forecasting, which can also render point
clouds, performs drastically better than SOTAs in point
cloud forecasting, both quantitatively (by up to 3.26m L1
error, Tab. 1) and qualitatively (Fig. 6). Our method beats
prior art with zero-shot cross-sensor generalization (Tab. 2).
To our knowledge, these are first results that generalize
across train/test sensor rigs, illustrating the power of dis-
entangling sensor motion from scene motion.
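As a minimal illustration of how a LiDAR scan can be "rendered" from forecasted occupancy given known intrinsics and extrinsics, the sketch below marches each ray through an occupancy field and returns the expected range. The uniform step sampling, the first-hit probability model, and the expected-range estimator are our assumptions; the paper's renderer may differ in these details.

```python
import torch

def render_lidar_from_occupancy(occ, origin, dirs, t_max=70.0, n_steps=256):
    """Render a LiDAR scan from an occupancy volume by ray marching.

    occ: callable mapping (N, 3) query points -> (N,) occupancy probabilities
         (e.g., a forecasted occupancy grid sampled with trilinear interpolation);
    origin: (3,) sensor origin (extrinsics); dirs: (R, 3) unit ray directions (intrinsics).
    Returns the expected range per ray and the corresponding 3D points.
    """
    t = torch.linspace(0.0, t_max, n_steps, device=dirs.device)            # (S,)
    pts = origin[None, None, :] + t[None, :, None] * dirs[:, None, :]      # (R, S, 3)
    p = occ(pts.reshape(-1, 3)).reshape(dirs.shape[0], n_steps)            # (R, S)

    # Probability that the ray first terminates at step s (free space before, occupied at s).
    free = torch.cumprod(1.0 - p + 1e-6, dim=1)
    hit = torch.cat([p[:, :1], p[:, 1:] * free[:, :-1]], dim=1)            # (R, S)

    depth = (hit * t[None, :]).sum(dim=1) / hit.sum(dim=1).clamp(min=1e-6)  # expected range
    return depth, origin[None, :] + depth[:, None] * dirs                   # (R,), (R, 3)

# Toy usage with a dummy occupancy field (high occupancy inside a sphere of radius 10):
dirs = torch.nn.functional.normalize(torch.randn(128, 3), dim=-1)
depth, points = render_lidar_from_occupancy(lambda x: torch.sigmoid(-x.norm(dim=-1) + 10.0),
                                             torch.zeros(3), dirs)
```

A point cloud forecasting loss can then be computed directly between such rendered ranges and the ground-truth LiDAR returns, which is what allows training and evaluation from unannotated sequences.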
|
Liu_FlatFormer_Flattened_Window_Attention_for_Efficient_Point_Cloud_Transformer_CVPR_2023
|
Abstract
Transformer, as an alternative to CNN, has been proven
effective in many modalities ( e.g., texts and images). For 3D
point cloud transformers, existing efforts focus primarily on
pushing their accuracy to the state-of-the-art level. However,
their latency lags behind sparse convolution-based models
(3× slower), hindering their usage in resource-constrained,
latency-sensitive applications (such as autonomous driving).
This inefficiency comes from point clouds’ sparse and irreg-
ular nature, whereas transformers are designed for dense,
regular workloads. This paper presents FlatFormer to close
this latency gap by trading spatial proximity for better com-
putational regularity. We first flatten the point cloud with
window-based sorting and partition points into groups of
equal sizes rather than windows of equal shapes. This effec-
tively avoids expensive structuring and padding overheads.
We then apply self-attention within groups to extract local
features, alternate sorting axis to gather features from differ-
ent directions, and shift windows to exchange features across
groups. FlatFormer delivers state-of-the-art accuracy on
Waymo Open Dataset with 4.6× speedup over (transformer-
based) SST and 1.4× speedup over (sparse convolutional)
CenterPoint. This is the first point cloud transformer that
achieves real-time performance on edge GPUs and is faster
than sparse convolutional methods while achieving on-par
or even superior accuracy on large-scale benchmarks.
|
1. Introduction
Transformer [75] has become the model of choice in nat-
ural language processing (NLP), serving as the backbone
of many successful large language models (LLMs) [2, 17].
Recently, its impact has further been expanded to the vision
community, where vision transformers (ViTs) [18, 45, 74]
have demonstrated on-par or even superior performance com-
pared with CNNs in many visual modalities ( e.g., image and
video). 3D point cloud, however, is not yet one of them.
∗indicates equal contributions.
[Figure 1 plot: mean mAPH L2 vs. latency (ms) for FlatFormer (Ours), CenterPoint, CenterPoint++, SST, SST (Center), and VoxSet; point cloud transformers vs. sparse convolutional models]
Figure 1. Previous point cloud transformers (⋆) are 3-4× slower
than sparse convolution-based models ( •) despite achieving similar
detection accuracy. FlatFormer is the first point cloud transformer
that is faster than sparse convolutional methods with on-par accu-
racy. Latency is measured on an NVIDIA Quadro RTX A6000.
Different from images and videos, 3D point clouds are
intrinsically sparse and irregular. Most existing point cloud
models [94] are based on 3D sparse convolution [24], which
computes convolution only on non-zero features. They re-
quire dedicated system support [14, 71, 91] to realize high
utilization on parallel hardware ( e.g., GPUs).
Many efforts have been made toward point cloud trans-
formers (PCTs) to explore their potential as an alternative to
sparse convolution. Global PCTs [26] benefit from the reg-
ular computation pattern of self-attention but suffer greatly
from the quadratic computational cost (w.r.t. the number of
points). Local PCTs [50, 98] apply self-attention to a local
neighborhood defined in a similar way to point-based mod-
els [57] and are thus bottlenecked by the expensive neighbor
gathering [47]. These methods are only applicable to single
objects or partial indoor scans (with <4k points) and cannot
be efficiently scaled to outdoor scenes (with >30k points).
Inspired by Swin Transformer [45], window PCTs [19,70]
compute self-attention at the window level, achieving im-
pressive accuracy on large-scale 3D detection benchmarks.
Despite being spatially regular, these windows could have
drastically different numbers of points (which differ by more
than 80×) due to the sparsity. This severe imbalance results
in redundant computation with inefficient padding and par-
titioning overheads. As a result, window PCTs can be 3×
slower than sparse convolutional models (Figure 1), limiting
their applications in resource-constrained, latency-sensitive
scenarios ( e.g., autonomous driving, augmented reality).
This paper introduces FlatFormer to close this huge la-
tency gap. Building upon window PCTs, FlatFormer trades
spatial proximity for better computational regularity by par-
titioning 3D point cloud into groups of equal sizes instead
of windows of equal shapes. It applies self-attention within
groups to extract local features, alternates the sorting axis
to aggregate features from different orientations, and shifts
windows to exchange features across groups. Benefiting from
the regular computation pattern, FlatFormer achieves 4.6×
speedup over (transformer-based) SST and 1.4× speedup
over (sparse convolutional) CenterPoint while delivering the
state-of-the-art accuracy on Waymo Open Dataset.
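The grouping step that trades spatial proximity for computational regularity can be sketched in a few lines. The snippet below is illustrative only: the composite sorting key, window size, group size, and dropping the final partial group (instead of padding it) are simplifications we assume for brevity.

```python
import torch
import torch.nn as nn

def flatten_and_group(coords: torch.Tensor, feats: torch.Tensor,
                      window: float = 4.0, group_size: int = 64, alt_axis: bool = False):
    """Window-based sorting followed by equal-size grouping.

    coords: (N, 3) point/voxel coordinates; feats: (N, C) features.
    Returns grouped features of shape (G, group_size, C) plus the sorting order.
    """
    wx = (coords[:, 0] / window).floor().long()
    wy = (coords[:, 1] / window).floor().long()
    # Simplistic composite sorting key; alternating the major axis gathers features
    # from different directions in successive blocks.
    key = wy * 100000 + wx if alt_axis else wx * 100000 + wy
    order = torch.argsort(key)                      # flatten the sparse cloud into a 1D sequence
    n_keep = (coords.shape[0] // group_size) * group_size
    grouped = feats[order][:n_keep].view(-1, group_size, feats.shape[1])
    return grouped, order[:n_keep]

class GroupAttention(nn.Module):
    """Self-attention applied independently within each equal-size group."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, groups: torch.Tensor) -> torch.Tensor:   # (G, group_size, C)
        out, _ = self.attn(groups, groups, groups)
        return out

coords, feats = torch.rand(5000, 3) * 100, torch.randn(5000, 128)
groups, order = flatten_and_group(coords, feats)
feats_out = GroupAttention(128)(groups)            # equal-size groups -> regular computation
```

Because every group holds exactly group_size points, the attention runs as a dense, perfectly regular batch, which is the property that equal-shape windows with wildly varying point counts lack.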
To the best of our knowledge, FlatFormer is the first point
cloud transformer that achieves on-par or superior accuracy
than sparse convolutional methods with lower latency. It is
also the first to achieve real-time performance on edge GPUs.
With better hardware support for transformers ( e.g., NVIDIA
Hopper), point cloud transformers will have a huge potential
to be the model of choice in 3D deep learning. We believe
our work will inspire future research in this direction.
|
Kobayashi_Two-Way_Multi-Label_Loss_CVPR_2023
|
Abstract
A natural image frequently contains multiple classifi-
cation targets, accordingly providing multiple class labels
rather than a single label per image. While the single-label
classification is effectively addressed by applying a softmax
cross-entropy loss, the multi-label task is tackled mainly in
a binary cross-entropy (BCE) framework. In contrast to the
softmax loss, the BCE loss involves issues regarding imbal-
ance as multiple classes are decomposed into a bunch of
binary classifications; recent works improve the BCE loss
to cope with the issue by means of weighting. In this pa-
per, we propose a multi-label loss by bridging a gap be-
tween the softmax loss and the multi-label scenario. The
proposed loss function is formulated on the basis of relative
comparison among classes which also enables us to fur-
ther improve discriminative power of features by enhanc-
ing classification margin. The loss function is so flexible
as to be applicable to a multi-label setting in two ways for
discriminating classes as well as samples. In the exper-
iments on multi-label classification, the proposed method
exhibits competitive performance to the other multi-label
losses, and it also provides transferrable features on single-
label ImageNet training. Codes are available at https:
//github.com/tk1980/TwowayMultiLabelLoss .
|
1. Introduction
Deep neural networks are successfully applied to super-
vised learning [12, 26, 38] through back-propagation based
on a loss function exploiting plenty of annotated samples.
In supervised learning, classification is one of the primary
tasks to utilize as annotation a class label to which an image
sample belongs. As in ImageNet [7], most of image datasets
provide a single class label per image, and a softmax loss is
widely employed to deal with the single-label annotation,
producing promising performance on various tasks.
The single-label setting, however, is a limited scenario
from practical viewpoints. An image frequently contains
multiple classification targets [39], such as objects, requir-ing laborious cropping to construct single-label annotations.
There are also targets, such as visual attributes [25], which
are hard to disentangle and thereby incapable of produc-
ing single-label instances. Those realistic situations pose
so-called multi-label classification where an image sample
is equipped with multiple labels beyond a single label.
While a softmax loss works well in a single-label learn-
ing, the multi-label tasks are addressed mainly by applying
a binary cross-entropy (BCE) loss. Considering multiple
labels are drawn from Cclass categories, the multi-label
classification can be decomposed into Cbinary classifica-
tion tasks, each of which focuses on discriminating samples
in a target class category [28]; the BCE loss is well coupled
with the decomposition approach. Such a decomposition,
however, involves an imbalance issue. Even in a case of
balanced class distribution, the number of positive samples
is much smaller than that of negatives, as a small portion of
the whole C-class categories are assigned to each sample as an-
notation (positive) labels. The biased distribution is prob-
lematic in a naive BCE loss. To cope with the imbalance
issue in BCE, a simple weighting approach based on class
frequencies [25] is commonly applied and in recent years it
is further sophisticated by incorporating adaptive weighting
scheme such as in Focal loss [18] and its variant [2]. On
the other hand, the softmax loss naturally copes with multi-
ple classes without decomposition nor bringing the above-
mentioned imbalance issue; it actually works well in the
balanced (single-label) class distribution. The softmax loss
is intrinsically based on relative comparison among classes
(3) which is missed in the BCE-based losses, though being
less applicable to multi-label classification.
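To make the contrast concrete, the snippet below places the decomposed BCE treatment next to one generic relative-comparison formulation, in which every negative-class score is pushed below every positive-class score through a log-sum-exp. This is only an illustration of the comparison principle discussed here; it is not the two-way loss proposed in this paper, whose exact form is introduced later.

```python
import torch
import torch.nn.functional as F

def bce_multilabel(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # C classes decomposed into C binary problems; positives are heavily outnumbered.
    return F.binary_cross_entropy_with_logits(logits, targets)

def relative_comparison_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Generic relative-comparison multi-label loss (illustrative only):
    log(1 + sum over (negative n, positive p) of exp(s_n - s_p))."""
    pos_mask = targets.bool()
    very_neg = torch.finfo(logits.dtype).min / 2
    lse_neg = torch.logsumexp(logits.masked_fill(pos_mask, very_neg), dim=1)       # over negatives
    lse_pos = torch.logsumexp((-logits).masked_fill(~pos_mask, very_neg), dim=1)   # over positives
    return F.softplus(lse_neg + lse_pos).mean()

logits = torch.randn(8, 20)
targets = (torch.rand(8, 20) < 0.15).float()
print(bce_multilabel(logits, targets).item(), relative_comparison_loss(logits, targets).item())
```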
In this paper, we propose a multi-label loss to effec-
tively deal with multiple labels in a manner similar to the
softmax loss. Through analyzing the intrinsic loss func-
tion of the softmax loss, we formulate an efficient multi-
label loss function to exploit relative comparison between
positive and negative classes. The relative comparison is
related to classification margin between positive and neg-
ative classes, and we propose an approach to enlarge the
margin by s
|
Liao_A_Light_Weight_Model_for_Active_Speaker_Detection_CVPR_2023
|
Abstract
Active speaker detection is a challenging task in audio-
visual scenarios, with the aim to detect who is speaking in
one or more speaker scenarios. This task has received con-
siderable attention because it is crucial in many applica-
tions. Existing studies have attempted to improve the per-
formance by inputting multiple candidate information and
designing complex models. Although these methods have
achieved excellent performance, their high memory and
computational power consumption render their application
to resource-limited scenarios difficult. Therefore, in this
study, a lightweight active speaker detection architecture
is constructed by reducing the number of input candidates,
splitting 2D and 3D convolutions for audio-visual feature
extraction, and applying gated recurrent units with low
computational complexity for cross-modal modeling. Ex-
perimental results on the AVA-ActiveSpeaker dataset reveal
that the proposed framework achieves competitive mAP per-
formance (94.1% vs. 94.2%), while the resource costs are
significantly lower than the state-of-the-art method, partic-
ularly in model parameters (1.0M vs. 22.5M, approximately
23×) and FLOPs (0.6G vs. 2.6G, approximately 4×). Ad-
ditionally, the proposed framework also performs well on
the Columbia dataset, thus demonstrating good robustness.
The code and model weights are available at https:
//github.com/Junhua-Liao/Light-ASD .
|
1. Introduction
Active speaker detection is a multi-modal task that aims
to identify active speakers from a set of candidates in ar-
bitrary videos. This task is crucial in speaker diariza-
tion [7, 42], speaker tracking [28, 29], automatic video edit-
ing [10, 20], and other applications, and thus has attracted
considerable attention from both the industry and academia.
*Corresponding author
Figure 1. mAP vs. FLOPs, size ∝ parameters. This figure shows
the mAP of different methods [1,2,19,23,37,45] on the benchmark
and the FLOPs required to predict one frame containing three can-
didates. The size of the blobs is proportional to the number of
model parameters. The legend shows the size of blobs correspond-
ing to the model parameters from 1×10^6 to 30×10^6.
Research on active speaker detection dates back more
than two decades [8, 35]. However, the lack of reliable
large-scale data has delayed the development of this field.
With the release of the first large-scale active speaker detec-
tion dataset, AVA-ActiveSpeaker [33], significant progress
has been made in this field [15, 37, 38, 40, 47], following
the rapid development of deep learning for audio-visual
tasks [22]. These studies improved the performance of ac-
tive speaker detection by inputting face sequences of mul-
tiple candidates simultaneously [1, 2, 47], extracting visual
features with 3D convolutional neural networks [3, 19, 48],
and modeling cross-modal information with complex atten-
tion modules [9,44,45], etc, which resulted in higher mem-
ory and computation requirements. Therefore, applying the
existing methods to scenarios requiring real-time process-
ing with limited memory and computational resources, such
as automatic video editing and live television, is difficult.
This study proposes a lightweight end-to-end architec-
ture designed to detect active speakers in real time, where
improvements are made from the three aspects of: (a) Sin-
gle input: inputting a single candidate face sequence with
the corresponding audio; (b) Feature extraction: split-
ting the 3D convolution of visual feature extraction into 2D
and 1D convolutions to extract spatial and temporal infor-
mation, respectively, and splitting the 2D convolution for
audio feature extraction into two 1D convolutions to ex-
tract the frequency and temporal information; (c) Cross-
modal modeling: using gated recurrent unit (GRU) [6]
with lower computational cost, instead of complex attention mod-
ules, for cross-modal modeling. Based on the character-
istics of the lightweight architecture, a novel loss function
is designed for training. Figure 1 visualizes multiple met-
rics of different active speaker detection approaches. The
experimental results reveal that the proposed active speaker
detection method (1.0M params, 0.6G FLOPs, 94.1% mAP)
significantly reduces the model size and computational
cost, and its performance is still comparable to that of the
state-of-the-art method [23] (22.5M params, 2.6G FLOPs,
94.2% mAP) on the benchmark. Moreover, the proposed
method demonstrates good robustness in cross-dataset test-
ing. Finally, the single-frame inference time of the proposed
method ranges from 0.1ms to 4.5ms, which is feasible for
deployment in real-time applications.
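The factorized building blocks in (b) and the GRU-based cross-modal modeling in (c) can be sketched as follows. Channel widths, kernel sizes, and the way audio and visual features are fused are illustrative assumptions, not the released architecture.

```python
import torch
import torch.nn as nn

class FactorizedVisualBlock(nn.Module):
    """Split a 3D convolution into a 2D spatial conv followed by a 1D temporal conv."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.spatial = nn.Conv3d(c_in, c_out, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(c_out, c_out, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):            # x: (B, C, T, H, W) face-crop sequence
        return self.act(self.temporal(self.act(self.spatial(x))))

class FactorizedAudioBlock(nn.Module):
    """Split a 2D conv over the (time, frequency) spectrogram into two 1D convs."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.freq = nn.Conv2d(c_in, c_out, kernel_size=(1, 3), padding=(0, 1))
        self.time = nn.Conv2d(c_out, c_out, kernel_size=(3, 1), padding=(1, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):            # x: (B, C, T, F) mel-spectrogram
        return self.act(self.time(self.act(self.freq(x))))

# Cross-modal modeling with a lightweight GRU instead of an attention module:
fused = torch.randn(2, 25, 256)                     # (B, T, D) concatenated A/V features
gru = nn.GRU(256, 128, batch_first=True, bidirectional=True)
scores = nn.Linear(256, 1)(gru(fused)[0])           # per-frame speaking score
```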
The major contributions can be summarized as follows:
• A lightweight design is developed from the three as-
pects of information input, feature extraction, and
cross-modal modeling; subsequently, a lightweight
and effective end-to-end active speaker detection
framework is proposed. In addition, a novel loss func-
tion is designed for training.
• Experiments on AVA-ActiveSpeaker [33], a bench-
mark dataset for active speaker detection released by
Google, reveal that the proposed method is comparable
to the state-of-the-art method [23], while still reducing
model parameters by 95.6% and FLOPs by 76.9%.
• Ablation studies, cross-dataset testing, and qualitative
analysis demonstrate the state-of-the-art performance
and good robustness of the proposed method.
|
Li_Adversarially_Masking_Synthetic_To_Mimic_Real_Adaptive_Noise_Injection_for_CVPR_2023
|
Abstract
This paper considers the synthetic-to-real adaptation of
point cloud semantic segmentation, which aims to segment
the real-world point clouds with only synthetic labels avail-
able. Contrary to synthetic data which is integral and
clean, point clouds collected by real-world sensors typi-
cally contain unexpected and irregular noise because the
sensors may be impacted by various environmental condi-
tions. Consequently, the model trained on ideal synthetic
data may fail to achieve satisfactory segmentation results
on real data. Influenced by such noise, previous adversar-
ial training methods, which are conventional for 2D adap-
tation tasks, become less effective. In this paper, we aim to
mitigate the domain gap caused by target noise via learn-
ing to mask the source points during the adaptation pro-
cedure. To this end, we design a novel learnable masking
module, which takes source features and 3D coordinates as
inputs. We incorporate Gumbel-Softmax operation into the
masking module so that it can generate binary masks and be
trained end-to-end via gradient back-propagation. With the
help of adversarial training, the masking module can learn
to generate source masks to mimic the pattern of irregular
target noise, thereby narrowing the domain gap. We name
our method “Adversarial Masking” as adversarial training
and learnable masking module depend on each other and
cooperate with each other to mitigate the domain gap. Ex-
periments on two synthetic-to-real adaptation benchmarks
verify the effectiveness of the proposed method.
|
1. Introduction
Recently, point cloud semantic segmentation task at-
tracts increasing attention because of its important role in
various real-world applications, e.g., autonomous driving,
augmented reality, etc. Despite remarkable progress [5,
*Work done during an internship at Baidu.
†Corresponding author: Guoliang Kang
Figure 1. Comparison between a synthetic LiDAR scan (upper)
and a real scan (lower). Both original point clouds and projected
LiDAR images are given. Black points denote noise and other col-
ors denote points from different classes. Compared with synthetic
data which is integral and clean, point clouds collected from the
real world typically contain unexpected and irregular noise which
may impede the adaptation.
19, 30, 33, 39, 40, 62, 63], most algorithms are designed
for the fully-supervised setting, where massive annotated
data is available. In the real world, it is costly and time-
consuming to annotate large amounts of data, especially
for labeling each point in the segmentation task. Syn-
thetic data is easy to obtain and its label can be automat-
ically generated, which largely reduces the human effort
of annotating data. However, it is usually infeasible to di-
rectly apply networks trained on synthetic data to real-world
data due to the apparent domain gap between them. In
this paper, we consider the synthetic-to-real domain adap-
tation [7, 10, 29, 38, 42, 45, 56, 68] for point cloud segmen-
tation. Specifically, we aim to utilize the fully-annotated
synthetic point clouds (source domain) and unlabeled point
clouds collected from imperfect real-world sensors (target
domain) to train a network to support the segmentation of
real-world point clouds (target domain).
Domain adaptation solutions [8, 9, 20, 21] aim to dis-
cover and mitigate the domain shift from source to target
domain. Through comparing the synthetic and real-world
point clouds, we observe that the domain shift can be largely
attributed to the unexpected and irregular noise existing in
the target domain data. As with [53], we consider “noise”
to be the missing points of certain instances/objects, where
all pixel channels are zero. Such noise may be caused by
various factors such as non-reflective surfaces ( e.g., glass).
As shown in Fig. 1, the synthetic point cloud is integral
and clean, but the real one contains large amounts of noisy
points. A model trained on clean source data may find it
hard to understand the scene context under the distraction
of noises and thus cannot achieve satisfactory segmentation
results on target point clouds.
Previous domain adaptation methods [4, 13, 14, 18, 28,
31, 32, 38, 61] ( e.g., adversarial training), which have been
proven effective in the 2D visual tasks, can be applied
to this 3D segmentation setting. For example, Squeeze-
SegV2 [54] employs geodesic correlation alignment [37] to
align the point-wise feature distributions of two domains.
However, without explicitly modeling and dealing with the
noise, these methods bring quite weak benefits to the adap-
tation performance. Recently, several works attempt to deal
with the target noise to mitigate the domain gap. Rochan
et al. [43] randomly select target noise masks and apply the
selected mask to source samples. Wu et al. [53] compute
one dataset-level mask and apply it to all source samples.
Zhao et al. [67] use CycleGAN [69] to perform noise in-
painting which is then used to learn synthetic noise gen-
eration module. The issues of these previous works are
two-fold: 1) they cannot adaptively determine the injected
noises according to the context of source samples; 2) the
generated mask cannot be guaranteed to reduce the domain
shift. Thus, these methods may achieve sub-optimal results.
In this paper, we aim to mitigate the domain shift caused
by the target noise by learning to adaptively mask the source
points during the adaptation procedure. To reach this goal,
we need to deal with two problems: 1) how to learn a spa-
tial mask that can be adaptively determined according to the
specific context of a source sample, and 2) how to guaran-tee the learned masks help narrow the domain gap. To solve
the first problem, we design a learnable masking module
named “Adaptive Spatial Masking (ASM)” module, which
takes source Cartesian coordinates and features as input, to
generate point-wise source masks. We incorporate Gumbel-
Softmax operation into the masking module so that it can
generate binary masks and be trained end-to-end via gra-
dient back-propagation. To solve the second problem, we
incorporate adversarial training into the masking module
learning process. Specifically, during training, we add an
additional domain discriminator on top of the feature ex-
tractor. By encouraging features from two domains (fea-
tures of masked source samples and those of normal tar-
get samples) to be indistinguishable, the masking module is
able to learn to generate masks mimicking the pattern of tar-
get noise and narrow the domain gap. Note that these two
designs cooperate with each other to better align features
across domains and improve the adaptation performance.
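A minimal sketch of such a masking module is given below: a point-wise MLP predicts keep/drop logits from coordinates and features, and hard Gumbel-Softmax yields binary masks while keeping the module differentiable via the straight-through estimator. The MLP size, temperature, and the exact way the mask enters the adversarial objective are our assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSpatialMasking(nn.Module):
    """Learn a point-wise binary mask from source coordinates and features."""
    def __init__(self, feat_dim: int, hidden: int = 64, tau: float = 1.0):
        super().__init__()
        self.tau = tau
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),                                  # logits for (drop, keep)
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor):
        logits = self.mlp(torch.cat([xyz, feats], dim=-1))                  # (N, 2)
        keep = F.gumbel_softmax(logits, tau=self.tau, hard=True)[:, 1:]     # (N, 1) binary mask
        return feats * keep, keep                                           # masked source features

xyz, feats = torch.rand(2048, 3), torch.randn(2048, 64)
masked_feats, mask = AdaptiveSpatialMasking(64)(xyz, feats)

# Adversarial training sketch (discriminator D tries to tell masked-source from target):
#   d_loss   = bce(D(masked_src_feat), 0) + bce(D(tgt_feat), 1)
#   the masking module and feature extractor are additionally trained with
#   bce(D(masked_src_feat), 1) so the masked source mimics the target noise pattern.
```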
In a nutshell, our contributions can be summarized as:
• We notice that the pattern of target noise is unexpected
and irregular. Thus, we propose to model the target
noise in a learnable way. Previous works, which don’t
explicitly model the target noise or ignore such char-
acteristics, are less effective.
• We propose to adversarially mask source samples to
mimic the target noise patterns. In detail, we design a
novel learnable masking module and incorporate ad-
versarial training. Both components cooperate with
each other to promote the adaptation.
• Experiments on two synthetic-to-real adaptation
benchmarks, i.e. SynLiDAR →SemKITTI and Syn-
LiDAR →nuScenes, demonstrate that our method can
effectively improve the adaptation performance.
|
Liu_Learned_Image_Compression_With_Mixed_Transformer-CNN_Architectures_CVPR_2023
|
Abstract
Learned image compression (LIC) methods have exhib-
ited promising progress and superior rate-distortion per-
formance compared with classical image compression stan-
dards. Most existing LIC methods are Convolutional Neural
Networks-based (CNN-based) or Transformer-based, which
have different advantages. Exploiting both advantages is a
point worth exploring, which has two challenges: 1) how
to effectively fuse the two methods? 2) how to achieve
higher performance with a suitable complexity? In this pa-
per, we propose an efficient parallel Transformer-CNN Mix-
ture (TCM) block with a controllable complexity to incor-
porate the local modeling ability of CNN and the non-local
modeling ability of transformers to improve the overall ar-
chitecture of image compression models. Besides, inspired
by the recent progress of entropy estimation models and at-
tention modules, we propose a channel-wise entropy model
with parameter-efficient swin-transformer-based attention
(SWAtten) modules by using channel squeezing. Experi-
mental results demonstrate our proposed method achieves
state-of-the-art rate-distortion performances on three dif-
ferent resolution datasets (i.e., Kodak, Tecnick, CLIC Pro-
fessional Validation) compared to existing LIC methods.
The code is at https://github.com/jmliu206/
LIC_TCM .
|
1. Introduction
Image compression is a crucial topic in the field of im-
age processing. With the rapidly increasing image data,
lossy image compression plays an important role in storing
and transmitting efficiently. Over the past decades, many
classical standards, including JPEG [31], WebP
[11], and VVC [29], which all contain three steps (transform,
quantization, and entropy coding), have achieved impres-
sive Rate-Distortion (RD) performance. On the other hand,
different from the classical standards, end-to-end learned
*Heming Sun is the corresponding author.
[Figure 1 panels (bit rate | PSNR | MS-SSIM): VVC (VTM-12.1) 0.131bpp | 34.69dB | 15.04dB; WebP 0.180bpp | 30.77dB | 12.47dB; Ours [MSE] 0.127bpp | 35.78dB | 16.25dB; Ours [MS-SSIM] 0.114bpp | 30.81dB | 17.07dB; Ground Truth]
Figure 1. Visualization of decompressed images of kodim23 from
Kodak dataset based on different methods (The subfigure is titled
as “Method | Bit rate | PSNR | MS-SSIM”).
image compression (LIC) is optimized as a whole. Some
very recent LIC works [5, 13, 32, 34, 36, 37] have outper-
formed VVC, which is the best classical image and video
coding standard at present, on both Peak signal-to-noise
ratio (PSNR) and Multi-Scale Structural Similarity (MS-
SSIM). This suggests that LIC has great potential for next-
generation image compression techniques.
Most LIC methods are CNN-based methods [6, 10, 20,
21, 33] using the variational auto-encoder (VAE) which is
proposed by Ballé et al. [3]. With the development of vision
transformers [9,22] recently, some vision transformer-based
LIC methods [23, 36, 37] are also investigated. For CNN-
based example, Cheng et al. [6] proposed a residual block-
based image compression model. For transformer-based ex-
ample, Zou et al. [37] tried a swin-transformer-based image
compression model. These two kinds of methods have dif-
ferent advantages. CNN has the ability of local modeling,
while transformers have the ability to model non-local in-
formation. It is still worth exploring whether the advantages
of these two methods can be effectively combined with a
suitable complexity. In our method, we try to efficiently
incorporate both advantages of CNN and transformers by
proposing an efficient parallel Transformer-CNN Mixture
(TCM) block under a controllable complexity to improve
the RD performance of LIC.
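For illustration, a parallel Transformer-CNN mixture block in the spirit of the TCM block described above can be sketched as follows; the real block uses Swin-style windowed attention and a more elaborate residual layout, so the global multi-head attention, the fixed 50/50 channel split, and the 1x1 fusion below are simplifying assumptions.

```python
import torch
import torch.nn as nn

class TCMBlock(nn.Module):
    """Parallel Transformer-CNN mixture block (sketch): split channels, run a local CNN
    branch and a non-local attention branch in parallel, and fuse them back."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        half = dim // 2
        self.cnn = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1), nn.GELU(),
            nn.Conv2d(half, half, 3, padding=1),
        )
        self.attn = nn.MultiheadAttention(half, heads, batch_first=True)
        self.fuse = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:            # x: (B, C, H, W)
        b, c, h, w = x.shape
        x_cnn, x_tr = x.chunk(2, dim=1)
        local = self.cnn(x_cnn)                                     # local modeling (CNN)
        tokens = x_tr.flatten(2).transpose(1, 2)                    # (B, H*W, C/2)
        nonlocal_, _ = self.attn(tokens, tokens, tokens)            # non-local modeling
        nonlocal_ = nonlocal_.transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.fuse(torch.cat([local, nonlocal_], dim=1))  # residual fusion

y = TCMBlock(64)(torch.randn(1, 64, 16, 16))
```

Adjusting how many channels go to each branch is one simple knob for keeping the block's complexity controllable.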
In addition to the type of the neural network, the design
of entropy model is also an important technique in LIC.
The most common way is to introduce extra latent variables
as hyper-prior to convert the probability model of compact
coding-symbols to a joint model [3]. On that basis, many
methods spring up. Minnen et al. [26] utilized the masked
convolutional layer to capture the context information. Fur-
thermore, they [27] proposed a parallel channel-wise auto-
regressive entropy model by splitting the latent to 10 slices.
The results of encoded slices can assist in the encoding of
remaining slices in a pipeline manner.
Recently, many different attention modules [6, 21, 37]
were designed and proposed to improve image compres-
sion. Attention modules can help the learned model pay
more attention to complex regions. However, many of them
are time-consuming, or can only capture local informa-
tion [37]. At the same time, these attention modules are
usually placed in both the main and the hyper-prior path
of image compression network, which will further intro-
duce large complexity because of a large input size for the
main path. To overcome that problem, we try to move at-
tention modules to the channel-wise entropy model which
has a 1/16 input size compared with that of the main path to re-
duce complexity. Nevertheless, if the above attention mod-
ules are directly added to the entropy model, a large num-
ber of parameters will be introduced. Therefore, we pro-
pose a parameter-efficient swin-transformer-based attention
module (SWAtten) with channel squeezing for the channel-
wise entropy model. At the same time, to avoid the latency
caused by too many slices, we reduce the number of slices
from 10 to 5 to achieve the balance between running speed
and RD performance. As Fig. 1 shows, our method produces
visually pleasing results compared with other methods.
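The channel-wise auto-regressive structure that hosts these attention modules can be sketched as below. Channel counts, the parameter-network layout, and the slice conditioning are illustrative assumptions, and the per-slice SWAtten module is omitted; the sketch only shows how previously processed slices condition the entropy parameters of the next one.

```python
import torch
import torch.nn as nn

class ChannelARM(nn.Module):
    """Channel-wise auto-regressive entropy model sketch: the latent is split into a few
    slices, and each slice's entropy parameters are predicted from the hyper-prior plus
    all previously processed slices (5 slices here, as a speed/performance balance)."""

    def __init__(self, latent_ch: int = 320, hyper_ch: int = 192, num_slices: int = 5):
        super().__init__()
        self.num_slices = num_slices
        self.slice_ch = latent_ch // num_slices
        self.param_nets = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(hyper_ch + i * self.slice_ch, 224, 3, padding=1), nn.GELU(),
                nn.Conv2d(224, 2 * self.slice_ch, 3, padding=1),   # mean and scale per slice
            )
            for i in range(num_slices)
        ])

    def forward(self, y: torch.Tensor, hyper: torch.Tensor):
        slices = y.chunk(self.num_slices, dim=1)
        decoded, params = [], []
        for i, net in enumerate(self.param_nets):
            ctx = torch.cat([hyper] + decoded, dim=1)       # hyper-prior + previous slices
            mean, scale = net(ctx).chunk(2, dim=1)
            params.append((mean, scale))
            decoded.append(slices[i])   # training-time teacher forcing; at decode time the
                                        # dequantized slice would be appended instead
        return params

params = ChannelARM()(torch.randn(1, 320, 16, 16), torch.randn(1, 192, 16, 16))
```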
The contributions of this paper can be summarized as
follows:
• We propose a LIC framework with parallel
transformer-CNN mixture (TCM) blocks that ef-
ficiently incorporate the local modeling ability of
CNN and the non-local modeling ability of transform-
ers, while maintaining controllable complexity.
• We design a channel-wise auto-regressive entropy
model by proposing a parameter-efficient swin-
transformer-based attention (SWAtten) module with
channel squeezing.
• Extensive experiments demonstrate that our approach
achieves state-of-the-art (SOTA) performance on three
datasets (i.e., Kodak, Tecnick, and CLIC datasets)
with different resolutions. The method outperforms
VVC (VTM-12.1) by 12.30%, 13.71%, 11.85% in
Bjøntegaard-delta-rate (BD-rate) [4] on Kodak, Tec-
nick, and CLIC datasets, respectively.
|
Liu_COT_Unsupervised_Domain_Adaptation_With_Clustering_and_Optimal_Transport_CVPR_2023
|
Abstract
Unsupervised domain adaptation (UDA) aims to trans-
fer the knowledge from a labeled source domain to an
unlabeled target domain. Typically, to guarantee desir-
able knowledge transfer, aligning the distribution between
source and target domain from a global perspective is
widely adopted in UDA. Recent researchers further point
out the importance of local-level alignment and propose to
construct instance-pair alignment by leveraging Optimal Transport (OT) theory. However, existing OT-based UDA approaches are limited in handling class imbalance and introduce heavy computation overhead in large-scale training settings. To cope with the two aforementioned issues, we propose a Clustering-based Optimal Transport (COT) algorithm, which formulates the alignment procedure as an Optimal Transport problem and constructs a mapping between clustering centers in the source and target domain in an end-to-end manner. With
this alignment on clustering centers, our COT eliminates
the negative effect caused by class imbalance and reduces
the computation cost simultaneously. Empirically, our COT
achieves state-of-the-art performance on several authorita-
tive benchmark datasets.
|
1. Introduction
Benefiting from the availability of large-scale data, deep
learning has achieved tremendous success over the past
few years. However, directly applying a well-trained convolutional neural network to a new domain frequently suf-
fers from the domain gap/discrepancy challenge, resulting
in spurious predictions on the new domain. To remedy
this, Unsupervised Domain Adaptation (UDA) has attracted
many researchers’ attention, which can transfer the knowl-
edge from a labeled domain to an unlabeled domain.
A major line of UDA approaches [1,28,42,49,53] aims to learn a global domain shift by aligning the global source
and target distributions while ignoring the local-level alignment between the two domains. With global domain adaptation, the global distributions of the source and target domains become nearly identical, but the fine-grained, per-class information (class structure) of the source and target domains is lost.
Recently, to preserve class structure in both domains, several works [6,15,23,30,38,40,44,51,54] adopt optimal transport (OT) to minimize the sample-level transportation cost between the source and target domains, achieving significant performance on UDA. However, recent OT-based UDA approaches have two issues. (i) In the realistic situation where class imbalance1 occurs between the source and target domains, samples belonging to the same class in the target domain are assigned false pseudo-labels due to the mechanism of optimal transport, which requires that each sample in the source domain be mapped to target samples. As a result, existing OT-based UDA methods provide poor pair-wise matching when facing class imbalance. (ii) OT-based UDA methods seek a sample-level optimal counterpart, which requires a large amount of computation overhead, especially when training on large-scale datasets.
To solve the two aforementioned issues, we propose a Clustering-based Optimal Transport algorithm, termed COT, which constructs a clustering-level instead of sample-level mapping between the source and target domains. Clusters in the source domain are obtained from the classifier supervised by the labeled source-domain data, while for the target domain, COT utilizes a set of learnable clusters to represent the target feature distribution, which can describe sub-domain information [50,57]. For instance, in many object recognition tasks [13,20] an object can contain many attributes, and each attribute can be viewed as a sub-domain. The clusters of the source and target domains thus represent their respective sub-domain information, so that optimal transport between clusters intrinsically provides a local mapping from sub-domains in the source domain to those in the target domain.
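As a rough illustration of clustering-level transport (not the paper's exact formulation, which relies on a loss derived from the Kantorovich dual), the sketch below computes an entropy-regularized OT plan between source cluster centers, taken here to be the classifier weights, and a set of learnable target clusters via Sinkhorn iterations; the dimensions are hypothetical.

```python
import math
import torch

def sinkhorn_plan(cost, eps=0.05, iters=50):
    """Entropy-regularized OT between two uniform discrete measures."""
    k, m = cost.shape
    log_K = -cost / eps
    log_a = torch.full((k,), -math.log(k))
    log_b = torch.full((m,), -math.log(m))
    u, v = torch.zeros(k), torch.zeros(m)
    for _ in range(iters):
        u = log_a - torch.logsumexp(log_K + v[None, :], dim=1)
        v = log_b - torch.logsumexp(log_K + u[:, None], dim=0)
    return torch.exp(log_K + u[:, None] + v[None, :])    # (k, m) transport plan

# Hypothetical usage: source clusters = classifier weight rows (K x D),
# target clusters = learnable parameters (M x D).
source_clusters = torch.randn(31, 256)                      # e.g. K = 31 classes
target_clusters = torch.nn.Parameter(torch.randn(31, 256))
cost = torch.cdist(source_clusters, target_clusters) ** 2   # pairwise cost
plan = sinkhorn_plan(cost.detach())                         # plan treated as constant
ot_loss = (plan * cost).sum()      # pulls matched clusters toward each other
ot_loss.backward()                 # gradients reach the learnable target clusters
```

Because the plan is computed between K source and M target clusters rather than between all samples, the cost matrix stays small regardless of dataset size.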
Moreover, we provide a theoretical analysis and comprehensive experimental results to show that (i) COT alleviates the negative effect caused by class imbalance, and (ii) compared to existing OT-based UDA approaches, COT saves substantial computation overhead.
1Label distributions differ between the two domains, i.e., P_s(y) ≠ P_t(y).
In summary, our main contributions include:
•We propose a novel Clustering-based Optimal Transport module as well as a specially designed loss derived from the discrete Kantorovich dual form, which resolves the two aforementioned challenges of existing OT-based UDA algorithms and facilitates the development of the OT-based UDA community.
•We provide a theoretical analysis to guarantee the ad-
vantages of our COT.
•Our COT achieves state-of-the-art performance on sev-
eral UDA benchmark datasets.
|
Liu_What_You_Can_Reconstruct_From_a_Shadow_CVPR_2023
|
Abstract
3D reconstruction is a fundamental problem in computer
vision, and the task is especially challenging when the object
to reconstruct is partially or fully occluded. We introduce a
method that uses the shadows cast by an unobserved object
in order to infer the possible 3D volumes under occlusion.
We create a differentiable image formation model that allows
us to jointly infer the 3D shape of an object, its pose, and
the position of a light source. Since the approach is end-to-
end differentiable, we are able to integrate learned priors of
object geometry in order to generate realistic 3D shapes of
different object categories. Experiments and visualizations
show that the method is able to generate multiple possible
solutions that are consistent with the observed shadow. Our approach works even when the light-source position and object pose are both unknown, and it is also robust to real-world images where the ground-truth shadow mask is unknown.
|
1. Introduction
Reconstructing the 3D shape of objects is a fundamental
challenge in computer vision, with a number of applications
in robotics, graphics, and data science. The task aims to estimate a 3D model from one or more camera views, and re-
searchers over the last twenty years have developed excellent
methods to reconstruct visible objects [1, 13 –15, 24, 25, 43].
However, objects are often occluded, with the line of sight
obstructed either by another object in the scene, or by them-
selves (self-occlusion). Reconstruction from a single image
is an under-constrained problem, and occlusions further re-
duce the number of constraints. To reconstruct occluded
objects, we need to rely on additional context.
One piece of evidence that people use to uncover occlu-
sions is the shadow cast on the floor by the hidden object.
For example, figure 1 shows a scene with an object that has
become fully occluded. Even though no appearance features
are visible, the shadow reveals that another object exists be-
hind the chair, and the silhouette constrains the possible 3D
shapes of the occluded object. What hidden object caused
that shadow?
In this paper, we introduce a framework for reconstructing
3D objects from their shadows. We formulate a generative
model of objects and their shadows cast by a light source,
which we use to jointly infer the 3D shape, its pose, and
the location of the light source. Our model is fully differen-
tiable, which allows us to use gradient descent to efficiently
search for the best shapes that explain the observed shadow.
Our approach integrates both learned empirical priors about
the geometry of typical objects and the geometry of cam-
eras in order to estimate realistic 3D volumes that are often
encountered in the visual world.
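A minimal sketch of one way such a differentiable shadow term could be implemented (this is an illustrative stand-in, not the paper's actual image formation model): a soft occupancy grid is sampled along rays cast from ground-plane points toward a directional light, and the resulting soft shadow mask can be compared with the observed one so that gradients reach the occupancy and, if learnable, the light direction.

```python
import torch
import torch.nn.functional as F

def soft_shadow(occ_logits, light_dir, n_ground=64, n_steps=32):
    """Render a soft shadow mask on the ground plane y = -1 from a voxel
    occupancy grid defined on the cube [-1, 1]^3 (axes ordered z, y, x).

    occ_logits: (1, 1, D, H, W) voxel logits; light_dir: (3,) unit vector
    pointing from the scene toward the light.  Returns (n_ground, n_ground).
    """
    occ = torch.sigmoid(occ_logits)
    xs = torch.linspace(-1, 1, n_ground)
    gx, gz = torch.meshgrid(xs, xs, indexing="ij")
    ground = torch.stack([gx, torch.full_like(gx, -1.0), gz], dim=-1)   # (G, G, 3)

    t = torch.linspace(0.05, 2.0, n_steps).view(n_steps, 1, 1, 1)
    pts = ground.unsqueeze(0) + t * light_dir.view(1, 1, 1, 3)          # (S, G, G, 3)

    # grid_sample expects (x, y, z) coordinates in [-1, 1]; with the voxel axes
    # ordered (z, y, x), world (x, y, z) already matches that convention.
    samples = F.grid_sample(occ, pts.view(1, n_steps, n_ground, n_ground, 3),
                            align_corners=True)                         # (1,1,S,G,G)
    opacity = samples[0, 0].sum(dim=0) * (2.0 / n_steps)
    return 1.0 - torch.exp(-5.0 * opacity)          # soft shadow mask in [0, 1]

# Hypothetical usage with an observed shadow mask `shadow_gt` in [0, 1]:
#   loss = F.binary_cross_entropy(soft_shadow(occ_logits, light_dir), shadow_gt)
#   loss.backward()  # gradients reach the occupancy (and the light, if learnable)
```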
Since we model the image formation process, we are able
to jointly reason over the object geometry and the parame-
ters of the light source. When the light source is unknown,
we recover multiple different shapes and multiple different
positions of the light source that are consistent with each
other. When the light source location is known, our approach
can make use of that information to further refine its outputs.
We validate our approach for a number of different object
categories on a new ground truth dataset.
The primary contribution of this paper is a method to use
the shadows in a scene to infer the 3D structure, and the rest
of the paper will analyze this technique in detail. Section 2
provides a brief overview of related work for using shadows.
Section 3 formulates a generative model for objects and how
they cast shadows, which we are able to invert in order to in-
fer shapes from shadows. Section 4 analyzes the capabilities of this approach with both known and unknown light sources.
We believe the ability to use shadows to estimate the spatial
structure of the scene will have a large impact on computer
vision systems’ ability to robustly handle occlusions.
|
Khattak_MaPLe_Multi-Modal_Prompt_Learning_CVPR_2023
|
Abstract
Pre-trained vision-language (V-L) models such as CLIP
have shown excellent generalization ability to downstream
tasks. However, they are sensitive to the choice of input text
prompts and require careful selection of prompt templates to
perform well. Inspired by the Natural Language Processing
(NLP) literature, recent CLIP adaptation approaches learn
prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to im-
prove alignment between the vision and language represen-
tations. Our design promotes strong coupling between the
vision-language prompts to ensure mutual synergy and dis-
courages learning independent uni-modal solutions. Fur-
ther, we learn separate prompts across different early stages
to progressively model the stage-wise feature relationships
to allow rich context learning. We evaluate the effectiveness
of our approach on three representative tasks of generaliza-
tion to novel classes, new target datasets and unseen do-
main shifts. Compared with the state-of-the-art method Co-
CoOp, MaPLe exhibits favorable performance and achieves
an absolute gain of 3.45% on novel classes and 2.72% on
overall harmonic-mean, averaged over 11 diverse image
recognition datasets. Our code and pre-trained models are
available at https://github.com/muzairkhattak/multimodal-
prompt-learning .
|
1. Introduction
Foundational vision-language (V-L) models such as CLIP
(Contrastive Language-Image Pretraining) [ 32] have shown
excellent generalization ability to downstream tasks. Such
models are trained to align language and vision modali-
ties on web-scale data, e.g., 400 million text-image pairs in CLIP. These models can reason about open-vocabulary visual concepts, thanks to the rich supervision provided by natural language. During inference, hand-engineered text prompts are used, e.g., ‘a photo of a <category>’, as a query for the text encoder. The output text embeddings are
matched with the visual embeddings from an image encoder
to predict the output class. Designing high-quality contextual prompts has been proven to enhance the performance of CLIP and other V-L models [17, 42].
Despite the effectiveness of CLIP towards generalization
to new concepts, its massive scale and the scarcity of training data (e.g., the few-shot setting) make it infeasible to fine-tune the full model for downstream tasks. Such fine-tuning can also forget the useful knowledge acquired in the large-scale pretraining phase and can pose a risk of overfitting to the
downstream task. To address the above challenges, exist-
ing works propose language prompt learning to avoid man-
ually adjusting the prompt templates and providing a mech-
anism to adapt the model while keeping the original weights
frozen [ 14,25,29,48,49]. Inspired from Natural Language
Processing (NLP), these approaches only explore prompt
learning for the text encoder in CLIP (Fig. 1:a) while adap-
tation choices together with an equally important image en-
coder of CLIP remains an unexplored topic in the literature.
Our motivation derives from the multi-modal nature of
CLIP, where a text and image encoder co-exist and both
contribute towards properly aligning the V-L modalities.
We argue that any prompting technique should adapt the
model completely and therefore, learning prompts only for
the text encoder in CLIP is not sufficient to model the adaptations needed for the image encoder. To this end, we set out to achieve completeness in the prompting approach and propose Multi-modal Prompt Learning (MaPLe) to adequately fine-tune the text and image encoder representations such
that their optimal alignment can be achieved on the down-
stream tasks (Fig. 1:b). Our extensive experiments on three
key representative settings including base-to-novel gener-
alization, cross-dataset evaluation, and domain generaliza-
tion demonstrate the strength of MaPLe. On base-to-novel
generalization, our proposed MaPLe outperforms existing
prompt learning approaches across 11 diverse image recog-
nition datasets (Fig. 1:c) and achieves an absolute average gain of 3.45% on novel classes and 2.72% on harmonic mean over the state-of-the-art method Co-CoOp [48].
Figure 1. Comparison of MaPLe with standard prompt learning methods. (a) Existing methods adopt uni-modal prompting techniques to fine-tune CLIP representations as prompts are learned only in a single branch of CLIP (language or vision). (b) MaPLe introduces branch-aware hierarchical prompts that adapt both language and vision branches simultaneously for improved generalization. (c) MaPLe surpasses state-of-the-art methods on 11 diverse image recognition datasets for the novel class generalization task.
Further,
MaPLe demonstrates favorable generalization ability and
robustness in cross-dataset transfer and domain generaliza-
tion settings, leading to consistent improvements compared
to existing approaches. Owing to its streamlined architec-
tural design, MaPLe exhibits improved efficiency during both training and inference without much overhead, compared to Co-CoOp, which lacks efficiency due to its image-instance-conditioned design. In summary, the main contri-
butions of this work include:
• We propose multi-modal prompt learning in CLIP to favourably align its vision-language representations. To the best of our knowledge, this is the first multi-modal prompting approach for fine-tuning CLIP.
• To link prompts learned in the text and image encoders, we propose a coupling function to explicitly condition vision prompts on their language counterparts (a minimal sketch follows this list). It acts as a bridge between the two modalities and allows mutual propagation of gradients to promote synergy.
• Our multi-modal prompts are learned across multiple transformer blocks in both vision and language branches to progressively learn the synergistic behaviour of both modalities. This deep prompting strategy allows modeling the contextual relationships independently, thus providing more flexibility to align the vision-language representations.
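A minimal sketch of the coupling idea (the prompt length, depth, and embedding widths below are assumptions, not the paper's exact configuration): language prompts are learnable parameters, and each layer's vision prompts are produced from them by a linear coupling function so that gradients propagate across both branches.

```python
import torch
import torch.nn as nn

class CoupledPrompts(nn.Module):
    """Language prompts are learnable; vision prompts are generated from them
    by a per-layer linear coupling function, so gradients flow between branches."""
    def __init__(self, n_ctx=2, depth=9, text_dim=512, vision_dim=768):
        super().__init__()
        self.text_prompts = nn.ParameterList(
            [nn.Parameter(torch.randn(n_ctx, text_dim) * 0.02) for _ in range(depth)]
        )
        self.couple = nn.ModuleList(
            [nn.Linear(text_dim, vision_dim) for _ in range(depth)]
        )

    def forward(self, layer_idx):
        t = self.text_prompts[layer_idx]   # prepended to the text encoder layer
        v = self.couple[layer_idx](t)      # conditioned vision prompts
        return t, v

# usage: t_p, v_p = CoupledPrompts()(0)   # prepend to layer-0 token sequences
```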
|
Li_MAGE_MAsked_Generative_Encoder_To_Unify_Representation_Learning_and_Image_CVPR_2023
|
Abstract
Generative modeling and representation learning are
two key tasks in computer vision. However, these models
are typically trained independently, which ignores the po-
tential for each task to help the other, and leads to training
and model maintenance overheads. In this work, we pro-
pose MAsked Generative Encoder (MAGE), the first frame-
work to unify SOTA image generation and self-supervised
representation learning. Our key insight is that using vari-
able masking ratios in masked image modeling pre-training
can allow generative training (very high masking ratio)
and representation learning (lower masking ratio) under
the same training framework. Inspired by previous gen-
erative models, MAGE uses semantic tokens learned by
a vector-quantized GAN at inputs and outputs, combining
this with masking. We can further improve the represen-
tation by adding a contrastive loss to the encoder output.
We extensively evaluate the generation and representation
learning capabilities of MAGE. On ImageNet-1K, a single
MAGE ViT-L model obtains 9.10 FID in the task of class-
unconditional image generation and 78.9% top-1 accuracy
for linear probing, achieving state-of-the-art performance
in both image generation and representation learning. Code
is available at https://github.com/LTH14/mage .
|
1. Introduction
In recent years, we have seen rapid progress in both gen-
erative models and representation learning of visual data.
Generative models have demonstrated increasingly spectac-
ular performance in generating realistic images [3,7,15,46],
while state-of-the-art self-supervised representation learn-
ing methods can extract representations at a high seman-
tic level to achieve excellent performance on a number of
downstream tasks such as linear probing and few-shot transfer [2, 6, 8, 13, 25, 26].
Figure 1. Linear probing and class-unconditional generation performance of different methods trained and evaluated on ImageNet-1K. MAGE achieves SOTA performance in linear probing and establishes a new SOTA in class-unconditional generation.
Currently, these two families of models are typically
trained independently. Intuitively, since generation and
recognition tasks require both visual and semantic under-
standing of data, they should be complementary when com-
bined in a single framework. Generation benefits represen-
tation by ensuring that both high-level semantics and low-
level visual details are captured; conversely, representation
benefits generation by providing rich semantic guidance.
Researchers in natural language processing have observed
this synergy: frameworks such as BERT [14] have both
high-quality text generation and feature extraction. Another
example is DALLE-2 [43], where latents conditioned on
a pre-trained CLIP representation are used to create high-
quality text-to-image generations.
However, in computer vision, there are currently no
widely adopted models that unify image generation and
representation learning in the same framework.
Figure 2. Reconstruction results using MAE and MAGE with a 75% masking ratio. MAE reconstructs blurry images with low quality, while MAGE can reconstruct high-quality images with detail and further improves quality through iterative decoding (see subsection 3.2 for details). With the same mask, MAGE generates diverse reconstruction results with different random seeds. Note that the mask for MAGE is on semantic tokens, whereas that of MAE is on patches in the input image.
Such unification is non-trivial due to the structural difference be-
tween these tasks: in generative modeling, we output
high-dimensional data, conditioned on low-dimension in-
puts such as class labels, text embeddings, or random noise.
In representation learning, we input a high-dimensional im-
age and create a low-dimensional compact embedding use-
ful for downstream tasks.
Recently, a number of papers have shown that represen-
tation learning frameworks based on masked image mod-
eling (MIM) can obtain high-quality representations [2, 18,
26,31], often with very high masking ratios (e.g. 75%) [26].
Inspired by NLP, these methods mask some patches at the
input, and the pre-training task is to reconstruct the origi-
nal image by predicting these masked patches. After pre-
training, task-specific heads can be added to the encoder to
perform linear probing or fine-tuning.
These works inspire us to revisit the unification question.
Our key insight in this work is that generation is viewed as
“reconstructing” images that are 100% masked, while rep-
resentation learning is viewed as “encoding” images that are
0%masked. We can therefore enable a unified architecture
by using a variable masking ratio during MIM pre-training.
The model is trained to reconstruct over a wide range of
masking ratios covering high masking ratios that enable
generation capabilities, and lower masking ratios that en-
able representation learning. This simple but very effec-
tive approach allows a smooth combination of generative
training and representation learning in the same framework :
same architecture, training scheme, and loss function.
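A minimal sketch of variable-ratio masking over semantic tokens (the truncated-Gaussian parameters below are illustrative assumptions):

```python
import torch

def variable_mask(token_ids, mean_ratio=0.55, std=0.25, lo=0.5, hi=1.0):
    """Mask a random subset of semantic tokens with a per-sample masking ratio
    drawn from a truncated Gaussian, so a single model covers both the
    representation regime (lower ratios) and generation (ratio -> 1)."""
    b, n = token_ids.shape
    ratio = torch.clamp(mean_ratio + std * torch.randn(b), lo, hi)
    n_mask = (ratio * n).long().clamp(min=1, max=n)
    order = torch.rand(b, n).argsort(dim=1)          # random permutation per sample
    mask = torch.zeros(b, n, dtype=torch.bool)
    for i in range(b):
        mask[i, order[i, : int(n_mask[i])]] = True   # True = replace with [MASK]
    return mask

# usage: mask = variable_mask(token_ids)  # token_ids: (B, N) VQGAN token indices
```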
However, directly combining existing MIM methods with a variable masking ratio is insufficient for high qual-
ity generation because such methods typically use a simple
reconstruction loss on pixels, leading to blurry outputs. For
example, as a representative of such methods, the recon-
struction quality of MAE [27] is poor: fine details and tex-
tures are missing (Figure 2). A similar issue exists in many
other MIM methods [11, 36].
This paper focuses on bridging this gap. We propose
MAGE, a framework that can both generate realistic im-
ages and extract high-quality representations from images.
Besides using variable masking ratio during pre-training,
unlike previous MIM methods whose inputs are pixels,
both the inputs and the reconstruction targets of MAGE are
semantic tokens . This design improves both generation
and representation learning, overcoming the issue described
above. For generation, as shown in Figure 2, operating in
token space not only allows MAGE to perform image gen-
eration tasks iteratively (subsection 3.2), but also enables
MAGE to learn a probability distribution of the masked to-
kens instead of an average of all possible masked pixels,
leading to diverse generation results. For representation
learning, using tokens as inputs and outputs allows the net-
work to operate at a high semantic level without losing low-
level details, leading to significantly higher linear probing
performances than existing MIM methods.
We evaluate MAGE on multiple generative and repre-
sentation downstream tasks. As shown in Figure 1, for
class- unconditional image generation on ImageNet-1K, our
method surpasses state of the art with both ViT-B and ViT-
L (ViT-B achieves 11.11 FID [29] and ViT-L achieves 9.10
FID), outperforming the previous state-of-the-art result by
a large margin (MaskGIT [7] with 20.68 FID). This significantly pushes the limit of class-unconditional generation to a
level even close to the state-of-the-art of class-conditional
image generation ( 6 FID [7, 46]), which is regarded as a
much easier task in the literature [38]. For linear probing on
ImageNet-1K, our method with ViT-L achieves 78.9% top-
1 accuracy, surpassing all previous MIM-based represen-
tation learning methods and many strong contrastive base-
lines such as MoCo-v3 [13]. Moreover, when combined
with a simple contrastive loss [9], MAGE-C with ViT-L
can further get 80.9% accuracy, achieving state-of-the-art
performance in self-supervised representation learning. We
summarize our contributions:
We introduce MAGE, a novel method that unifies generative modeling and representation learning via a single token-based MIM framework with variable masking ratios, introducing new insights to resolve the unification problem.
MAGE establishes a new state of the art on the task of
class-unconditional image generation on ImageNet-1K.
MAGE further achieves state of the art in different down-
stream tasks, such as linear probing, few-shot learning,
transfer learning, and class-conditional image generation.
|
Lan_Vision_Transformers_Are_Good_Mask_Auto-Labelers_CVPR_2023
|
Abstract
We propose Mask Auto-Labeler (MAL), a high-quality
Transformer-based mask auto-labeling framework for in-
stance segmentation using only box annotations. MAL takes
box-cropped images as inputs and conditionally generates
their mask pseudo-labels. We show that Vision Transform-
ers are good mask auto-labelers. Our method significantly
reduces the gap between auto-labeling and human annota-
tion regarding mask quality. Instance segmentation models
trained using the MAL-generated masks can nearly match
the performance of their fully-supervised counterparts, re-
taining up to 97.4% performance of fully supervised mod-
els. The best model achieves 44.1% mAP on COCO in-
stance segmentation (test-dev 2017), outperforming state-
of-the-art box-supervised methods by significant margins.
Qualitative results indicate that masks produced by MAL
are, in some cases, even better than human annotations.
|
1. Introduction
Computer vision has seen significant progress over the
last decade. Tasks such as instance segmentation have made
it possible to localize and segment objects with pixel-level
accuracy. However, these tasks rely heavily on expensive human mask annotations. For instance, when creating the COCO dataset, about 55k worker hours were spent on masks, which accounts for about 79% of the total annotation time [1]. Moreover, humans also make mistakes. Human
annotations are often misaligned with actual object bound-
aries. On complicated objects, human annotation quality
tends to drop significantly if there is no quality control. Due
to the expensive cost and difficulty of quality control, some
other large-scale detection datasets such as Open Images [2]
and Objects365 [3], only contain partial or even no instance
segmentation labels.
In light of these limitations, there is an increasing in-
terest in pursuing box-supervised instance segmentation,
where the goal is to predict object masks from bounding
box supervision directly. Recent box-supervised instance
segmentation methods [4–8] have shown promising perfor-
mance. The emergence of these methods challenges the
long-held belief that mask annotations are needed to train
instance segmentation models. However, there is still a non-
negligible gap between state-of-the-art approaches and their
fully-supervised oracles.
Our contributions: To address box-supervised instance
segmentation, we introduce a two-phase framework consist-
ing of a mask auto-labeling phase and an instance segmenta-
tion training phase (see Fig. 2). We propose a Transformer-
based mask auto-labeling framework, Mask Auto-Labeler
(MAL), that takes Region-of-interest (RoI) images as inputs
and conditionally generates high-quality masks (demonstrated in Fig. 1) within the box.
Figure 2. An overview of the two-phase framework of box-supervised instance segmentation. For the first phase, we train Mask Auto-Labeler using box supervision and conditionally generate masks of the cropped regions in training images (top). We then train the instance segmentation models using the generated masks (bottom).
Our contributions can be
summarized as follows:
• Our two-phase framework presents a versatile design
compatible with any instance segmentation architecture.
Unlike existing methods, our framework is simple and
agnostic to instance segmentation module designs.
• We show that Vision Transformers (ViTs) used as image encoders yield surprisingly strong auto-labeling results. We also demonstrate that some specific designs in MAL, such as our attention-based decoder, multiple-instance learning with box expansion (a simplified sketch of such a loss follows this list), and class-agnostic training, are crucial for strong auto-labeling performance. Thanks to these components, MAL sometimes even surpasses humans in annotation quality.
• Using MAL-generated masks for training, instance seg-
mentation models achieve up to 97.4% of their fully
supervised performance on COCO and LVIS. Our re-
sult significantly narrows down the gap between box-
supervised and fully supervised approaches. We also
demonstrate the outstanding open-vocabulary general-
ization of MAL by labeling novel categories not seen
during training.
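As a simplified illustration of box-supervised multiple-instance learning in this spirit (MAL's exact formulation, box expansion, and attention-based decoder are not reproduced here), each row and column crossing the box forms a positive bag whose maximum prediction should be foreground, while pixels outside the box should be background:

```python
import torch
import torch.nn.functional as F

def mil_box_loss(mask_logits, box):
    """MIL-style loss from a box label only.

    mask_logits: (H, W) raw logits of the predicted instance mask.
    box: (x0, y0, x1, y1) integer pixel coordinates of the tight box.
    """
    x0, y0, x1, y1 = box
    prob = mask_logits.sigmoid()
    inside = prob[y0:y1, x0:x1]

    pos = torch.cat([inside.max(dim=1).values,      # one bag per row
                     inside.max(dim=0).values])     # one bag per column
    loss_pos = F.binary_cross_entropy(pos, torch.ones_like(pos))

    outside = torch.ones_like(prob, dtype=torch.bool)
    outside[y0:y1, x0:x1] = False
    neg = prob[outside]
    loss_neg = (F.binary_cross_entropy(neg, torch.zeros_like(neg))
                if neg.numel() else prob.sum() * 0.0)
    return loss_pos + loss_neg
```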
Our method outperforms all the existing state-of-the-
art box-supervised instance segmentation methods by large
margins. This might be attributed to good representations
of ViTs and their emerging properties such as meaningful
grouping [9], where we observe that the attention to objects
might benefit our task significantly (demonstrated in Fig.
6). We also hypothesize that our class-agnostic training de-
sign enables MAL to focus on learning general grouping
instead of focusing on category information. Our strong re-
sults pave the way to remove the need for expensive human
annotation for instance segmentation in real-world settings.
|
Li_DANI-Net_Uncalibrated_Photometric_Stereo_by_Differentiable_Shadow_Handling_Anisotropic_Reflectance_CVPR_2023
|
Abstract
Uncalibrated photometric stereo (UPS) is challenging
due to the inherent ambiguity brought by the unknown light.
Although the ambiguity is alleviated on non-Lambertian objects, the problem remains difficult for more general objects whose complex shapes introduce irregular shadows and whose general materials exhibit complex reflectance such as anisotropic reflectance. To exploit cues from shadow and
reflectance to solve UPS and improve performance on gen-
eral materials, we propose DANI-Net, an inverse render-
ing framework with differentiable shadow handling and
anisotropic reflectance modeling. Unlike most previous
methods that use non-differentiable shadow maps and as-
sume isotropic material, our network benefits from cues of
shadow and anisotropic reflectance through two differen-
tiable paths. Experiments on multiple real-world datasets
demonstrate our superior and robust performance.
|
1. Introduction
Photometric stereo (PS) [48] aims at recovering the sur-
face normal from several images captured under varying
light conditions with a fixed viewpoint. It has been ap-
plied to many fields ( e.g., movies production [6], indus-
trial quality inspection [47], and biometrics [53]) due to
its advantage in recovering fine-detailed surfaces over other
approaches [10, 16] ( e.g., multi-view stereo [38], active
sensor-based solutions [61]). Light calibration is crucial to
the performance [52]. However, it is also tedious, restricting
the applicability of PS. Therefore, uncalibrated photometric
stereo (UPS) methods estimating surface normal with un-
known lights have been widely studied in the literature.
Uncalibrated photometric stereo suffers from General
Bas-Relief (GBR) ambiguity [4] for an integrable, Lam-
bertian surface. However, GBR ambiguity is alleviated
on a non-Lambertian surface [13]. Therefore, recent ad-
vances in UPS ( e.g., [26,56]) adopt the isotropic reflectance
model accounting for non-Lambertian effects to solve UPS.
Nonetheless, such a model restricts methods’ performance
on objects with more general ( e.g., anisotropic) materials,
while modeling general reflectance is challenging due to
extra unknowns, which eventually make UPS intractable.
Other works ( e.g., [25, 56, 57]) notice the benefits of the
shadow cues in utilizing global shape-light information to
solve PS/UPS because the shadow reflects the interaction
of shape and light [24, 59]. However, these methods either
fail to exploit the shadow cues due to the lack of a differ-
entiable path from the shadow to the concerned unknowns
like shape [25], or the shadow cues have limited effects on
the visible shape reconstruction due to the implicit shape
representation [56, 57].
To this end, this paper proposes the DANI-Net , which
solves UPS by Differentiable shadow handling, Anisotropic
reflectance modeling, and Neural Inverse Rendering.
DANI-Net builds the differentiable path in the sequence
of inverse rendering errors, shadow maps, and surface nor-
mal maps (or light conditions) (Fig. 1) to fully exploit the
shadow cues to solve UPS. Since those cues facilitate solv-
ing extra unknowns introduced by a more sophisticated
reflectance model, DANI-Net manages to build up such
a model (Fig. 1) to improve the performance on general
materials. During optimization, DANI-Net propagates in-
verse rendering errors via two paths of shadow cues and
anisotropic reflectance, respectively, and simultaneously
optimizes the shape ( i.e., the depth map and surface nor-
mal map), anisotropic reflectance model, shadow map, and
light conditions (i.e., direction and intensity).
Figure 1. The proposed DANI-Net differs from other PS or UPS methods in two aspects: 1) Shadow Handling: the path in the sequence of inverse rendering errors, shadow maps, and surface normal maps (or light conditions) of DANI-Net is differentiable; 2) Reflectance Modeling: DANI-Net adopts an anisotropic reflectance model. The state-of-the-art UPS method SCPS-NIR [26] is compared in this figure. As can be observed, the proposed DANI-Net produces a smoother and more realistic shadow map of the copper BUNNY thanks to the differentiable shadow handling, and renders a more realistic copper BALL image and Bidirectional Reflectance Distribution Function (BRDF) sphere (of the reference point) due to the anisotropic reflectance modeling. Data of the copper BUNNY and BALL are from DiLiGenT10² [36].
As a result, DANI-Net achieves state-of-the-art performance on several
real-world benchmark datasets. In a nutshell, our contribu-
tions are summarized as follows:
• We propose a differentiable shadow handling method
that facilitates exploiting shadow cues with global
shape-light information to solve UPS. Experimental
results demonstrate its effectiveness in shadow map re-
covery and surface normal estimation.
• We introduce an anisotropic reflectance model that describes both isotropic and anisotropic materials to improve performance on general materials (an illustrative anisotropic BRDF is sketched after this list). Experimental results demonstrate its effectiveness on surface normal estimation for objects with a broad range of isotropic and anisotropic materials.
• We propose the DANI-Net that simultaneously opti-
mizes shape, anisotropic reflectance, shadow map, and
light conditions in an unsupervised manner, propagat-
ing inverse rendering errors through two paths involv-
ing the shadow cues and anisotropic reflectance, re-
spectively. DANI-Net achieves state-of-the-art perfor-
mance on several real-world benchmark datasets.
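DANI-Net's reflectance model is its own learned parameterization; purely as an illustration of what anisotropic reflectance means, the classic Ward BRDF below makes the specular lobe depend on the azimuth of the half vector relative to a tangent frame through two roughness values:

```python
import math
import torch
import torch.nn.functional as F

def ward_anisotropic(n, t, l, v, rho_d, rho_s, ax, ay, eps=1e-6):
    """Classic Ward anisotropic BRDF (illustrative only; not DANI-Net's model).

    n, t, l, v: (..., 3) unit surface normal, tangent, light and view vectors.
    rho_d, rho_s: diffuse/specular albedo; ax, ay: roughness along t and b.
    """
    b = torch.cross(n, t, dim=-1)                 # bitangent completes the frame
    h = F.normalize(l + v, dim=-1)                # half vector
    nl = (n * l).sum(-1).clamp(min=eps)
    nv = (n * v).sum(-1).clamp(min=eps)
    nh = (n * h).sum(-1).clamp(min=eps)
    th, bh = (t * h).sum(-1), (b * h).sum(-1)
    # tan^2(theta_h) split along the two tangent axes gives the anisotropy
    expo = -((th / ax) ** 2 + (bh / ay) ** 2) / (nh ** 2)
    spec = rho_s * torch.exp(expo) / (4 * math.pi * ax * ay * torch.sqrt(nl * nv))
    return rho_d / math.pi + spec                 # per-point BRDF value
```

In an inverse-rendering loss, the rendered intensity (BRDF × foreshortening × shadow × light intensity) is compared against the observed images, so gradients reach both the reflectance parameters and the shadow/normal path.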
|
Li_3D-Aware_Face_Swapping_CVPR_2023
|
Abstract
Face swapping is an important research topic in com-
puter vision with wide applications in entertainment and
privacy protection. Existing methods directly learn to swap
2D facial images, taking no account of the geometric in-
formation of human faces. In the presence of large pose
variance between the source and the target faces, there
always exist undesirable artifacts on the swapped face.
In this paper, we present a novel 3D-aware face swap-
ping method that generates high-fidelity and multi-view-
consistent swapped faces from single-view source and tar-
get images. To achieve this, we take advantage of the strong
geometry and texture prior of 3D human faces, where the
2D faces are projected into the latent space of a 3D genera-
tive model. By disentangling the identity and attribute fea-
tures in the latent space, we succeed in swapping faces in
a 3D-aware manner, being robust to pose variations while
transferring fine-grained facial details. Extensive experi-
ments demonstrate the superiority of our 3D-aware face
swapping framework in terms of visual quality, identity sim-
ilarity, and multi-view consistency. Code is available at
https://lyx0208.github.io/3dSwap .
|
1. Introduction
Face swapping aims to transfer the identity of a person
in the source image to another person in the target image
while preserving other attributes like head pose, expression,
illumination, background, etc. It has attracted extensive at-
tention recently in the academic and industrial world for its
potential wide applications in entertainment [14,30,38] and
privacy protection [7, 37, 48].
The key of face swapping is to transfer the geometric
shape of the facial region ( i.e., eyes, nose, mouth) and
detailed texture information (such as the color of eyes)
from the source image to the target image while pre-
serving both geometry and texture of non-facial regions
(i.e., hair, background, etc). Currently, some 3D-based
methods consider geometry prior of human faces by fit-
ting the input image to 3D face models such as 3D Mor-
phable Model (3DMM) [8] to overcome the differences of
face orientation and expression between sources and tar-
gets [7, 15, 34, 43]. However, these parametric face mod-
els only produce coarse frontal faces without fine-grained
details, leading to low-resolution and fuzzy swapping re-
sults. On the other hand, following Generative Adversarial
Network [24], GAN-based [6, 23, 32, 39, 40, 42] or GAN-
inversion-based [44, 55, 57, 60] approaches adopt the ad-
versarial training strategy to learn texture information from
inputs. Despite the demonstrated photorealistic and high-
resolution images, the swapped faces via 2D GANs sustain
undesirable artifacts when two input faces undergo large
pose variation since the strong 3D geometry prior of human
faces is ignored. Moreover, learning to swap faces in 2D
images makes little use of the shape details from the sources,
leading to poorer performance on identity transferring.
Motivated by the recent advances of 3D generative mod-
els [12, 13, 20, 25, 45] in synthesizing multi-view consis-
tent images and high-quality 3D shapes, it naturally raises
a question: can we perform face swapping in a 3D-aware
manner to exploit the strong geometry and texture priors?
To answer this question, two challenges arise. First , how to
infer 3D prior directly from 3D-GAN models still remains
open. Current 3D-aware generative models synthesize their
results from a random Gaussian noise z, so that their output
images are not controllable. This increases the complexity
of inferring the required prior from arbitrary input. Second ,
the inferred prior corresponding to input images is in the
form of a high-dimensional feature vector in the latent space
of 3D GANs. Simply synthesizing multi-view target im-
ages referring to the prior and applying 2D face swapping
to them produces not only inconsistent artifacts but also a
heavy computational load.
To address these challenges, we systematically inves-
tigate the geometry and texture prior of these 3D gener-
ative models and propose a novel 3D-aware face swap-
ping framework 3dSwap. We introduce a 3D GAN inver-
sion framework to project the 2D inputs into the 3D latent
space, motivated by recent GAN inversion approaches [46,
47, 51]. Specifically, we design a learning-based inver-
sion algorithm that trains an encoding network to efficiently
and robustly project input images into the latent space of
EG3D [12]. However, directly borrowing the architecture
from 2D approaches is not yet enough since a single-view
input provides limited information about the whole human
face. To further improve the multi-view consistency of la-
tent code projection, we design a pseudo-multi-view train-
ing strategy. This design effectively bridges the domain
gap between 2D and 3D. To tackle the second problem,
we design a face swapping algorithm based on the 3D la-
tent codes and directly synthesize the swapped faces with
the 3D-aware generator. In this way, we achieve 3D GAN-
inversion-based face swapping by a latent code manipulat-
ing algorithm consisting of style-mixing and interpolation,
where latent code interpolation is responsible for identity
transferring while style-mixing helps to preserve attributes.
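A minimal sketch of such latent-code manipulation on W+-style codes (the layer split and blending weight are assumptions, not the paper's values):

```python
import torch

def swap_latents(w_src, w_tgt, alpha=0.6, id_layers=7):
    """Combine two inverted W+ codes: early (identity-related) layers are an
    interpolation toward the source, later (attribute-related) layers are
    style-mixed from the target.  w_*: (num_layers, dim), e.g. (14, 512)."""
    w_swap = w_tgt.clone()
    w_swap[:id_layers] = alpha * w_src[:id_layers] + (1 - alpha) * w_tgt[:id_layers]
    return w_swap

# usage: feed w_swap to the pretrained 3D-aware generator (e.g. EG3D) together
# with the target camera pose to render the swapped face.
```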
In summary, our contributions are threefold:
• To the best of our knowledge, we first address the
3D-aware face swapping task. The proposed 3dSwap
method sets a strong baseline and we hope this work
will foster future research into this task.
• We design a learning-based 3D GAN inversion with
the pseudo-multi-view training strategy to extract ge-
ometry and texture prior from arbitrary input images.
We further utilize these strong prior by designing a la-
tent code manipulating algorithm, with which we di-
rectly synthesize the final results with the pretrained
generator.
• Extensive experiments on benchmark datasets demon-
strate the superiority of the proposed 3dSwap over
state-of-the-art 2D face swapping approaches in iden-
tity transferring. Our reconstruction module for 3D-
GAN inversion performs favorably over the state-of-
the-art methods as well.
|
Li_Correlational_Image_Modeling_for_Self-Supervised_Visual_Pre-Training_CVPR_2023
|
Abstract
We introduce Correlational Image Modeling ( CIM), a
novel and surprisingly effective approach to self-supervised
visual pre-training. Our CIM performs a simple pretext
task: we randomly crop image regions (exemplars) from
an input image (context) and predict correlation maps be-
tween the exemplars and the context. Three key designs
enable correlational image modeling as a nontrivial and
meaningful self-supervisory task. First, to generate useful
exemplar-context pairs, we consider cropping image regions
with various scales, shapes, rotations, and transformations.
Second, we employ a bootstrap learning framework that in-
volves online and target encoders. During pre-training, the
former takes exemplars as inputs while the latter converts the
context. Third, we model the output correlation maps via a
simple cross-attention block, within which the context serves
as queries and the exemplars offer values and keys. We show
thatCIM performs on par or better than the current state of
the art on self-supervised and transfer benchmarks. Code
is available at https://github.com/weivision/
Correlational-Image-Modeling.git .
|
1. Introduction
Recent advances in self-supervised visual pre-training
have shown great capability in harvesting meaningful
representations from hundreds of millions of—often eas-ily accessible— unlabeled images. Among existing pre-
training paradigms, Multi-View Self-Supervised Learning
(MV-SSL) [8 –12, 21, 23] and Masked Image Modeling
(MIM) [2, 22, 54, 68] are two leading methods in the self-
supervised learning racetrack, thanks to their nontrivial and
meaningful self-supervisory pretext tasks .
MV-SSL follows an augment-and-compare paradigm
(Figure 1(a)) – randomly transforming an input image into
two augmented views and then comparing two different
views in the representation space. Such an instance-wise
discriminative task is rooted in view-invariant learning [43],
i.e., changing views of data does not affect the conveyed
information. On the contrary, following the success of
Masked Language Modeling (MLM) [16], MIM conducts
amask-and-predict pretext task within a single view (Fig-
ure 1(b)) – removing a proportion of random image patches
and then learning to predict the missing information. This
simple patch-wise generative recipe enables Transformer-
based deep architectures [17] to learn generalizable repre-
sentations from unlabeled images.
Beyond augment-and-compare ormask-and-predict pre-
text tasks in MV-SSL and MIM, in this paper, we endeavor
to investigate another simple yet effective paradigm for self-
supervised visual representation learning. We take inspira-
tion from visual tracking [70] in computer vision that defines
the task of estimating the motion or trajectory of a target
object ( exemplar ) in a sequence of scene images ( contexts ).
To cope with challenging factors such as scale variations,
deformations, and occlusions, one typical tracking pipeline
is formulated as maximizing the correlation between the spe-
cific exemplar and holistic contexts [3,5,46,52]. Such simple correlational modeling can learn meaningful representations with the capability of both localization and discrimination, making it appealing as a promising pretext task for self-supervised learning.
Training a standard correlational tracking model, however, requires access to abundant labeled data, which is unavailable in unsupervised learning. Also, the task goal of visual tracking is intrinsically learning toward one-shot object detection—demanding rich prior knowledge of objectness—and is therefore less generic for representation learning. Hence, it is nontrivial to retrofit supervised correlational modeling for visual tracking into a useful self-supervised pretext task.
Driven by this revelation, we present a novel crop-and-
correlate paradigm for self-supervised visual representa-
tion learning, dubbed Correlational Image Modeling (CIM). To enable correlational modeling as an effective self-supervised visual pre-training task, we introduce three key de-
signs. First, as shown in Figure 1(c), we randomly crop
image regions (treated as exemplars ) with various scales,
shapes, rotations, and transformations from an input image
(context ). The corresponding correlation maps can be de-
rived from the exact crop regions directly. This simple crop-
ping recipe allows us to easily construct the exemplar-context
pairs together with ground-truth correlation maps without
human labeling cost. Second, we employ a bootstrap learn-
ing framework that is comprised of two networks: an online
encoder and a target encoder, which, respectively, encode
exemplars andcontext into latent space. This bootstrapping
effect works in a way that the model learns to predict the
spatial correlation between the updated representation of
exemplars and the slow-moving averaged representation of
context . Third, to realize correlational learning, we introduce
a correlation decoder built with a cross-attention layer and a
linear predictor, which computes queries from context , with
keys and values from exemplars , to predict the corresponding
correlation maps.
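A minimal sketch of such a correlation decoder (token counts and dimensions assume a ViT-B/16-like backbone on 224×224 inputs and are only illustrative):

```python
import torch
import torch.nn as nn

class CorrelationDecoder(nn.Module):
    """Context tokens act as queries; exemplar tokens provide keys/values;
    a linear head predicts a per-token correlation score."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.pred = nn.Linear(dim, 1)

    def forward(self, context_tokens, exemplar_tokens, map_size=14):
        # context_tokens: (B, Nc, D) from the target encoder (whole image)
        # exemplar_tokens: (B, Ne, D) from the online encoder (cropped region)
        x, _ = self.cross_attn(query=context_tokens,
                               key=exemplar_tokens,
                               value=exemplar_tokens)
        x = self.norm(x + context_tokens)
        corr = self.pred(x).squeeze(-1)             # (B, Nc), Nc = map_size**2
        return corr.view(-1, map_size, map_size)    # predicted correlation map

# The ground-truth map is simply the (transformed) footprint of the crop inside
# the context image, so no human labels are required.
```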
Our contributions are summarized as follows: 1)We
present a simple yet effective pretext task for self-supervised
visual pre-training, characterized by a novel unsupervised
correlational image modeling framework ( CIM).2)We
demonstrate the advantages of our CIM in learning trans-
ferable representations for both ViT and ResNet models that
can perform on par or better than the current state-of-the-art
MIM and MV-SSL learners while improving model robust-
ness and training efficiency. We hope our work can motivate
future research in exploring new useful pretext tasks for
self-supervised visual pre-training.
|
Khan_Temporally_Consistent_Online_Depth_Estimation_Using_Point-Based_Fusion_CVPR_2023
|
Abstract
Depth estimation is an important step in many computer
vision problems such as 3D reconstruction, novel view syn-
thesis, and computational photography. Most existing work
focuses on depth estimation from single frames. When ap-
plied to videos, the result lacks temporal consistency, show-
ing flickering and swimming artifacts. In this paper we
aim to estimate temporally consistent depth maps of video
streams in an online setting. This is a difficult problem as
future frames are not available and the method must choose
between enforcing consistency and correcting errors from
previous estimations. The presence of dynamic objects fur-
ther complicates the problem. We propose to address these
challenges by using a global point cloud that is dynami-
cally updated each frame, along with a learned fusion ap-
proach in image space. Our approach encourages consis-
tency while simultaneously allowing updates to handle er-
rors and dynamic objects. Qualitative and quantitative re-
sults show that our method achieves state-of-the-art quality
for consistent video depth estimation.
|
1. Introduction
Depth reconstruction is a long-standing, fundamental
problem in computer vision. For decades the most popular
depth estimation techniques were based on stereo match-
ing [37] or structure-from-motion [36]. However, more re-
cently the best results have come from learning-based ap-
proaches [7]. As the overall reconstruction quality has im-
proved, focus has shifted to new areas such as monocular es-
timation [21,32,33], edge quality [58], and temporal consis-
tency [20, 26]. The latter is particularly important for video
applications in computational photography and virtual re-
ality [3, 52] as inconsistent depth can cause objectionable
flickering and swimming artifacts.
Consistent video depth estimation, however, remains a
difficult problem as even the best method will suffer from
unpredictable errors and imperfections based on scene con-
tent, especially in textureless and specular regions. This
difficulty is aggravated by many of the aforementioned ap-
plications requiring online reconstruction: future frames are
not known beforehand and temporal consistency must be balanced with error correction.
Figure 1. Comparing monocular depth estimation on the ScanNet dataset across four frames. Clockwise from top-left: input RGB image, Ranftl et al.'s DPT [32], our result with a DPT backbone, RGB-D sensor ground truth.
Furthermore, the presence
of dynamic objects — which are inherently inconsistent —
adds an additional layer of complication. Luo et al. [26]
address these problems by assuming all frames are known
beforehand and fine-tuning their method for each input
video. Other approaches seek an online solution by encod-
ing consistency in network weights either through a training
loss [21], by using recurrent architectures [8, 30, 56], or by
conditioning it on the input [24]. Each method, however,
ultimately relies on the raw output of a neural network be-
ing consistent, which is difficult to achieve due to camera
noise and aliasing in convolutional networks [14, 44, 57].
In this work, we propose the use of a global point cloud
to encourage temporal consistency in online video depth es-
timation. We demonstrate how to tackle the twin problems
of handling dynamic objects and updating a static — and
potentially erroneous — point cloud when future frames
are not known. With quantitative and qualitative results,
we show our approach significantly improves the temporal
consistency of both stereo and monocular depth estimation
without sacrificing spatial quality.
Figure 2. We generate temporally consistent depth maps for each RGB video frame t by fusing the projected depth d_p^t from a prior point cloud with the estimated depth d^t. This is done by temporally fusing d_p^t to update dynamic regions, followed by spatial fusion with d^t based on confidence estimates of accuracy. The final result d_o^t is used to update the point cloud for the next frame.
In summary, our contributions are as follows:
1. We propose point cloud-based fusion for temporally
consistent video depth estimation.
2. We present a three-stage approach to encourage con-
sistency in online settings, while simultaneously al-
lowing updates to improve the accuracy of reconstruc-
tion and handle dynamic scenes.
3. We present an image-space approach to dynamics estimation and depth fusion that is lightweight and has low runtime overhead (a simplified sketch of the fusion step follows this list).
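A minimal sketch of the confidence-based spatial fusion step (the weighting scheme and the agreement kernel below are illustrative assumptions, not the paper's exact formulation):

```python
import torch

def spatial_fusion(d_proj, d_est, conf_proj, conf_est, sigma=0.05):
    """Blend depth splatted from the global point cloud (d_proj) with the
    per-frame estimate (d_est) using per-pixel confidences; where the two
    strongly disagree, the blend falls back toward the current estimate."""
    agree = torch.exp(-((d_proj - d_est) ** 2) / (2 * sigma ** 2))
    w = conf_proj * agree / (conf_proj * agree + conf_est + 1e-8)
    d_out = w * d_proj + (1 - w) * d_est
    return d_out, w     # d_out also updates the point cloud for the next frame
```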
|
Kong_LaserMix_for_Semi-Supervised_LiDAR_Semantic_Segmentation_CVPR_2023
|
Abstract
Densely annotating LiDAR point clouds is costly, which
often restrains the scalability of fully-supervised learning
methods. In this work, we study the underexplored semi-
supervised learning (SSL) in LiDAR semantic segmenta-
tion. Our core idea is to leverage the strong spatial cues
of LiDAR point clouds to better exploit unlabeled data. We
propose LaserMix to mix laser beams from different Li-
DAR scans and then encourage the model to make con-
sistent and confident predictions before and after mixing.
Our framework has three appealing properties. 1) Generic:
LaserMix is agnostic to LiDAR representations (e.g., range
view and voxel), and hence our SSL framework can be uni-
versally applied. 2) Statistically grounded: We provide
a detailed analysis to theoretically explain the applicabil-
ity of the proposed framework. 3) Effective: Comprehen-
sive experimental analysis on popular LiDAR segmentation
datasets (nuScenes, SemanticKITTI, and ScribbleKITTI)
demonstrates our effectiveness and superiority. Notably,
we achieve competitive results over fully-supervised coun-
terparts with 2× to 5× fewer labels and improve the supervised-only baseline significantly by a relative 10.8%.
We hope this concise yet high-performing framework could
facilitate future research in semi-supervised LiDAR seg-
mentation. Code is publicly available1.
|
1. Introduction
LiDAR segmentation is one of the fundamental tasks in
autonomous driving perception [41]. It enables autonomous
vehicles to semantically perceive the dense 3D structure of
the surrounding scenes [15, 34, 39]. However, densely an-
notating LiDAR point clouds is inevitably expensive and
labor-intensive [18, 23, 47], which restrains the scalability
of fully-supervised LiDAR segmentation methods. Semi-
supervised learning (SSL) that directly leverages the easy-
to-acquire unlabeled data is hence a viable and promising
1https://github.com/ldkong1205/LaserMix .
solution to achieve scalable LiDAR segmentation [13, 14].
Yet, semi-supervised LiDAR segmentation is still under-
explored. Modern SSL frameworks are mainly designed
for image recognition [2, 3, 42] and semantic segmenta-
tion [6,21,37] tasks, which only yield sub-par performance
on LiDAR data due to the large modality gap between 2D
and 3D. Recent research [20] considered semi-supervised point cloud semantic segmentation as a fresh task and proposed a point contrastive learning framework.
However, their solutions do not differentiate indoor and out-
door scenes and therefore overlook the intrinsic and impor-
tant properties that only exist in LiDAR point clouds.
In this work, we explore the use of spatial prior for semi-
supervised LiDAR segmentation. Unlike the general 2D/3D
segmentation tasks, the spatial cues are especially signif-
icant in LiDAR data. In fact, LiDAR point clouds serve
as a perfect reflection of real-world distributions, which
is highly dependent on the spatial areas in the LiDAR-
centered 3D coordinates. As shown in Fig. 1 (left), the
top laser beams travel a long distance outward and perceive mostly vegetation, while the middle and bottom beams tend to detect car and road at medium and close distances,
respectively. To effectively leverage this strong spatial prior,
we propose LaserMix to mix laser beams from different
LiDAR scans, and then encourage the LiDAR segmenta-
tion model to make consistent and confident predictions be-
fore and after mixing. Our SSL framework is statistically
grounded, which consists of the following components:
1)Partitioning the LiDAR scan into low-variation areas.
We observe a strong distribution pattern on laser beams as
shown in Fig. 1 (left) and thus propose the laser partition.
2)Efficiently mixing every area in the scan with foreign
data and obtaining model predictions. We propose Laser-
Mix to manipulate the laser-grouped areas from two LiDAR
scans in an intertwining way as depicted in Fig. 1 (middle)
which serves as an efficient LiDAR mixing strategy for SSL
(a short sketch follows this list).
3)Encouraging models to make confident and consistent
predictions on the same area in different mixing. We hence
propose a mixing-based teacher-student training pipeline.
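As a concrete illustration of steps 1) and 2), the following is a minimal NumPy sketch of how two LiDAR scans could be partitioned by inclination angle and intertwined. It is our own sketch under stated assumptions, not the released implementation: the (N, 4) point layout, the pitch range, and the number of areas are illustrative.

import numpy as np

def inclination(points):
    # Pitch angle of each point w.r.t. the LiDAR origin.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.arctan2(z, np.sqrt(x ** 2 + y ** 2))

def laser_mix(scan_a, labels_a, scan_b, labels_b, n_areas=6,
              pitch_range_deg=(-25.0, 3.0)):
    # Intertwine two scans area-by-area along the inclination axis.
    # scan_*: (N, 4) [x, y, z, intensity]; labels_*: (N,) semantic labels
    # (ground truth for the labeled scan, pseudo-labels for the unlabeled one).
    lo, hi = np.deg2rad(pitch_range_deg)
    edges = np.linspace(lo, hi, n_areas + 1)
    area = lambda pts: np.clip(np.digitize(inclination(pts), edges) - 1,
                               0, n_areas - 1)
    even_a = area(scan_a) % 2 == 0        # even areas come from scan A
    odd_b = area(scan_b) % 2 == 1         # odd areas come from scan B
    mix_ab = np.concatenate([scan_a[even_a], scan_b[odd_b]])
    lab_ab = np.concatenate([labels_a[even_a], labels_b[odd_b]])
    mix_ba = np.concatenate([scan_a[~even_a], scan_b[~odd_b]])
    lab_ba = np.concatenate([labels_a[~even_a], labels_b[~odd_b]])
    return (mix_ab, lab_ab), (mix_ba, lab_ba)

In the full framework, a teacher model would supply pseudo-labels for the unlabeled scan, and a consistency loss would encourage the student to make the same predictions on each area before and after mixing.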
Despite the simplicity of our overall pipeline, it achieves
competitive results over the fully supervised counterpart us-
ing 2× to 5× fewer labels as shown in Fig. 1 (right) and sig-
nificantly outperforms all prevailing semi-supervised seg-
mentation methods on nuScenes [11] (up to +5.7% mIoU)
and SemanticKITTI [1] (up to +3.5% mIoU). Moreover,
LaserMix directly operates on point clouds so as to be
agnostic to different LiDAR representations, e.g., range
view [32] and voxel [58]. Therefore, our pipeline is
highly compatible with existing state-of-the-art (SoTA)
LiDAR segmentation methods under various representa-
tions [46, 56, 57]. Besides, our pipeline achieves compet-
itive performance using very limited annotations on the
weak-supervision dataset [47]: it achieves 54.4% mIoU on
SemanticKITTI [1] using only 0.8% labels, which is on par
with PolarNet [56] (54.3%), RandLA-Net [19] (53.9%),
and RangeNet++ [32] (52.2%) using 100% labels. The spatial
prior is proven to play a pivotal role in the success of our
framework through comprehensive empirical analysis. To
summarize, this work has the following key contributions:
• We present a statistically grounded SSL framework
that effectively leverages the spatial cues in LiDAR
data to facilitate learning with semi-supervision.
• We propose LaserMix, a novel and representation-
agnostic mixing technique that strives to maximize the
“strength” of the spatial cues in our SSL framework.
• Our overall pipeline significantly outperforms previ-
ous SoTA methods in both low- and high-data regimes.
We hope this work could lay a solid foundation for
semi-supervised LiDAR segmentation.
|
Lei_EFEM_Equivariant_Neural_Field_Expectation_Maximization_for_3D_Object_Segmentation_CVPR_2023
|
Abstract
We introduce Equivariant Neural Field Expectation
Maximization ( EFEM ), a simple, effective, and robust ge-
ometric algorithm that can segment objects in 3D scenes
without annotations or training on scenes. We achieve
such unsupervised segmentation by exploiting single ob-
ject shape priors. We make two novel steps in that direc-
tion. First, we introduce equivariant shape representations
to this problem to eliminate the complexity induced by the
variation in object configuration. Second, we propose a
novel EM algorithm that can iteratively refine segmenta-
tion masks using the equivariant shape prior. We collect
a novel real dataset Chairs and Mugs that contains vari-
ous object configurations and novel scenes in order to verify
the effectiveness and robustness of our method. Experimen-
tal results demonstrate that our method achieves consis-
tent and robust performance across different scenes where
the (weakly) supervised methods may fail. Code and data
available at https://www.cis.upenn.edu/~leijh/projects/efem
|
1. Introduction
Learning how to decompose 3D scenes into object in-
stances is a fundamental problem in visual perception sys-
tems. Past developments in 3D computer vision have
made huge strides on this problem by training neural net-
works on 3D scene datasets with segmentation masks [55,
63, 67]. However, these works heavily rely on large la-
beled datasets [3, 15] that require laborious 3D annotation
based on special expertise. Few recent papers alleviate this
problem by reducing the required annotation to sparse point
labeling [24, 60] or bounding boxes [12].
In this work, we follow an object-centric approach in-
spired by the Gestalt school of perception that captures an
object as a whole shape [32, 47] invariant to its pose and
scale [31]. A holistic approach builds up a prior for each
object category, which then enables object recognition in dif-
ferent complex scenes with varying configurations. Directly
learning object-centric priors instead of analyzing each 3D
Figure 1. We present EFEM, an unsupervised 3D object segmen-
tation method applicable to real-world scenes (results on the right)
by only training on ShapeNet single object reconstruction.
scene inspires a more efficient way of learning instance seg-
mentation: both a mug on the table and a mug in the dish-
washer are mugs, and one does not have to learn to seg-
ment out a mug in all possible environmental contexts if
we have a unified shape concept for mugs. Such object-
centric recognition facilitates a robust scene analysis for au-
tonomous systems in many interactive real-world environ-
ments with a diversity of object configurations: Imagine a
scenario where a robot is doing the dishes in the kitchen.
Dirty bowls are piled in the sink and the robot is clean-
ing them and placing them into a cabinet. Objects of the
same category appear in the scene repeatedly under differ-
ent configurations (piles, neat lines in the cabinet). What
is even more challenging is that even within this one single
task (doing dishes) the scene configuration can drastically
change when objects are moved. We show that such scenar-
ios cannot be addressed by the state-of-the-art strongly or
weakly supervised methods that struggle under such scene
configuration variations.
In this paper, we introduce a method that can segment
3D object instances from 3D static scenes by learning pri-
ors of single object shapes (ShapeNet [4]) without using any
scene-level labels. Two main challenges arise when we re-
move the scene-level annotation. First, objects in the scene
can have a different position, rotation, and scale than the
canonical poses where the single object shapes were trained.
Second, the shape encoder which is trained on object-level
input cannot be directly applied to the scene observations
unless the object masks are known. We address the first
challenge by introducing equivariance to this problem. By
learning a shape prior that is equivariant to the similitude
group SIM(3), the composition of a rotation, a translation,
and a uniform scaling in 3D (Sec. 3.1), we address the com-
plexity induced by the SIM(3) composition of objects. For
the second challenge, we introduce a simple and effective
iterative algorithm, Equivariant neural Field Expectation
Maximization ( EFEM ), that refines the object segmentation
mask, by alternately iterating between mask updating and
shape reconstruction (Sec. 3.2). The above two steps en-
able us to directly exploit the learned single instance shape
prior to perform segmentation in real-world scenes. We col-
lected and annotated a novel real-world test set (240 scenes)
(Sec. 4.4) that contains diverse object configurations and
novel scenes to evaluate the generalizability and robustness
to novel object instances and object configuration changes.
Experiments on both synthetic data (Sec. 4.3) and our novel
real dataset (Sec. 4.4) give insight into the effectiveness
of the method. When the testing scene setup is similar to the
training setup, our method shows only a small performance gap to the (weakly)
supervised baselines. However, when the testing scenes are
drawn from novel object configurations, our method consis-
tently outperforms the (weakly) supervised baselines.
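For intuition only, here is a highly simplified Python sketch of what one EM-style alternation between mask updating and shape reconstruction could look like; the encoder/decoder interfaces, seed radius, thresholds, and the toy spherical prior in the demo are assumptions made for exposition, not the method's actual components.

import numpy as np

def efem_refine(points, encode, sdf, n_iters=10, tau=0.05):
    # points: (N, 3) scene points.
    # encode: masked points -> shape code of the single-object prior
    #         (assumed insensitive to the object's pose and scale, i.e. SIM(3)).
    # sdf:    (code, points) -> (N,) signed distances to the decoded surface.
    seed = points[np.random.randint(len(points))]
    mask = np.linalg.norm(points - seed, axis=1) < 0.3   # assumed seed radius
    for _ in range(n_iters):
        if mask.sum() < 16:                              # too few points to fit
            break
        code = encode(points[mask])                      # refit the shape
        new_mask = np.abs(sdf(code, points)) < tau       # re-assign points
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return mask

# Toy stand-in prior: a sphere fitted to the masked points (center + mean radius).
encode = lambda p: (p.mean(0), np.linalg.norm(p - p.mean(0), axis=1).mean())
sdf = lambda c, p: np.linalg.norm(p - c[0], axis=1) - c[1]
mask = efem_refine(np.random.randn(2000, 3), encode, sdf, tau=0.2)
print(mask.sum(), "points currently explained by the prior")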
Our paper makes the following novel contributions to the
3D scene segmentation problem: (1) a simple and effective
iterative EM algorithm that can segment objects from the
scenes using only single object shape priors. (2) addressing
the diversity of object composition in 3D scenes by combin-
ing representations equivariant to rotation, translation, and
scaling of the objects. (3) an unsupervised pipeline for 3D
instance segmentation that works in real-world data and can
generalize to novel setups. (4) a novel real-world test set
Chairs and Mugs that contains diverse object configura-
tions and scenes.
|
Ke_Neural_Preset_for_Color_Style_Transfer_CVPR_2023
|
Abstract
In this paper, we present a Neural Preset technique to
address the limitations of existing color style transfer meth-
ods, including visual artifacts, vast memory requirement,
and slow style switching speed. Our method is based on two
core designs. First, we propose Deterministic Neural Color
Mapping (DNCM) to consistently operate on each pixel via
an image-adaptive color mapping matrix, avoiding artifacts
and supporting high-resolution inputs with a small memory
footprint. Second, we develop a two-stage pipeline by divid-
ing the task into color normalization and stylization, which
allows efficient style switching by extracting color styles as
presets and reusing them on normalized input images. Due
to the unavailability of pairwise datasets, we describe how
to train Neural Preset via a self-supervised strategy. Vari-
ous advantages of Neural Preset over existing methods are
demonstrated through comprehensive evaluations. Besides,
we show that our trained model can naturally support mul-
tiple applications without fine-tuning, including low-light
image enhancement, underwater image correction, image
dehazing, and image harmonization. The project page is:
https://ZHKKKe.github.io/NeuralPreset.
|
1. Introduction
With the popularity of social media ( e.g., Instagram and
Facebook), people are increasingly willing to share pho-
tos in public. Before sharing, color retouching becomes
an indispensable operation to help express the story cap-
tured in images more vividly and leave a good first impres-
sion. Photo editing tools usually provide color style presets,
such as image filters or Look-Up Tables (LUTs), to help
users explore efficiently. However, these filters/LUTs are
handcrafted with pre-defined parameters, and are not able
to generate consistent color styles for images with diverse
appearances. Therefore, careful adjustments by the users are
still necessary. To address this problem, color style trans-
fer techniques have been introduced to automatically map
the color style from a well-retouched image ( i.e., the style
image) to another ( i.e., the input image).
Earlier color style transfer methods [41–43, 49] focus
on retouching the input image according to low-level fea-
ture statistics of the style image. They disregard high-level
information, resulting in unexpected changes in image in-
herent colors. Although recent deep learning based mod-
els [1,6,19,34,36,54] give promising results, they typically
suffer from three obvious limitations in practice (Fig. 1 (a)).
First, they produce unrealistic artifacts ( e.g., distorted tex-
tures or inharmonious colors) in the stylized image since
they perform color mapping based on convolutional mod-
els, which operate on image patches and may have incon-
sistent outputs for pixels with the same value. Although
some auxiliary constraints [36] or post-processing strate-
gies [34] have been proposed, they still fail to prevent ar-
tifacts robustly. Second, they cannot handle high-resolution
(e.g., 8K) images due to their huge runtime memory foot-
print. Even using a GPU with 24GB of memory, most recent
models suffer from the out-of-memory problem when pro-
cessing 4K images. Third, they are inefficient in switching
styles because they carry out color style transfer as a single-
stage process, which requires running the whole model every time.
In this work, we present a Neural Preset technique with
two core designs to overcome the above limitations:
(1)Neural Preset leverages Deterministic Neural Color
Mapping (DNCM) as an alternative to the color mapping
process based on convolutional models. By multiplying
an image-adaptive color mapping matrix, DNCM converts
pixels of the same color to a specific color, avoiding un-
realistic artifacts effectively. Besides, DNCM operates on
each pixel independently with a small memory footprint,
supporting very high-resolution inputs. Unlike adaptive 3D
LUTs [7, 55] that need to regress tens of thousands of pa-
rameters or automatic filters [23, 27] that perform particu-
lar color mappings, DNCM can model arbitrary color map-
pings with only a few hundred learnable parameters.
(2)Neural Preset carries out color style transfer in two
stages to enable fast style switching. Specifically, the first
stage builds a nDNCM from the input image for color nor-
malization, which maps the input image to a normalized
color style space representing the “image content”; the sec-
ond stage builds a sDNCM from the style image for color
stylization, which transfers the normalized image to the tar-
get color style. Such a design has two advantages in terms
of efficiency: the parameters of sDNCM can be stored as
color style presets and reused by different input images,
while the input image can be stylized by diverse color style
presets after being normalized once with nDNCM.
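To make the two designs above more tangible, the following is a small PyTorch sketch of an image-adaptive, pixel-wise color mapping and of the normalize-then-stylize reuse; the projection size k, the tiny encoder, and the names are our assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class TinyDNCM(nn.Module):
    # Pixel-wise color mapping driven by an image-adaptive k x k matrix.
    def __init__(self, k=16):
        super().__init__()
        self.k = k
        self.encoder = nn.Sequential(          # predicts T from a small view
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(3 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, k * k))
        self.P = nn.Linear(3, k, bias=False)   # 3 -> k projection
        self.Q = nn.Linear(k, 3, bias=False)   # k -> 3 projection

    def forward(self, image, condition):
        b, _, h, w = image.shape
        T = self.encoder(condition).view(b, self.k, self.k)
        pixels = image.permute(0, 2, 3, 1).reshape(b, h * w, 3)
        out = self.Q(torch.bmm(self.P(pixels), T))     # same color -> same color
        return out.reshape(b, h, w, 3).permute(0, 3, 1, 2)

# Two-stage use: normalize with the input image, stylize with a stored preset.
nDNCM, sDNCM = TinyDNCM(), TinyDNCM()
content, style = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
normalized = nDNCM(content, content)     # strip the input's own color style
stylized = sDNCM(normalized, style)      # apply the "preset" built from style

Because every pixel is mapped by the same matrix, identical input colors always map to identical output colors, which is the property that avoids patch-wise inconsistencies.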
In addition, since there are no pairwise datasets avail-
able, we propose a new self-supervised strategy that makes
Neural Preset trainable. Our comprehensive evaluations
demonstrate that Neural Preset outperforms state-of-the-art
methods significantly in various aspects. Notably, Neural
Preset can produce faithful results for 8K images (Fig. 1 (b))
and can provide consistent color style transfer results across
video frames without post-processing. Compared to re-
cent deep learning based models, Neural Preset achieves
∼28× speedup on an Nvidia RTX3090 GPU, supporting real-time performance at 4K resolution. Finally, we show
that our trained model can be applied to other color map-
ping tasks without fine-tuning, including low-light image
enhancement [30], underwater image correction [52], im-
age dehazing [16], and image harmonization [37].
|
Ling_Learning_Optical_Expansion_From_Scale_Matching_CVPR_2023
|
Abstract
This paper addresses the problem of optical expansion
(OE). OE describes the object scale change between two
frames, widely used in monocular 3D vision tasks. Previ-
ous methods estimate optical expansion mainly from opti-
cal flow results, but this two-stage architecture makes their
results limited by the accuracy of optical flow and less ro-
bust. To solve these problems, we propose the concept of
3D optical flow by integrating optical expansion into the
2D optical flow, which is implemented by a plug-and-play
module, namely TPCV . TPCV implements matching features
at the correct location and scale, thus allowing the simul-
taneous optimization of optical flow and optical expansion
tasks. Experimentally, we apply TPCV to the RAFT optical
flow baseline. Experimental results show that the baseline
optical flow performance is substantially improved. More-
over, we apply the optical flow and optical expansion re-
sults to various dynamic 3D vision tasks, including motion-
in-depth, time-to-collision, and scene flow, often achiev-
ing significant improvement over the prior SOTA. Code is
available at https://github.com/HanLingsgjk/TPCV.
|
1. Introduction
Optical expansion (OE) is a fundamental and important
concept in monocular dynamic 3D vision tasks [1, 3, 22, 23,
32]. OE describes the scale change of an object between
two frames, which can be translated into motion in the depth
direction. It has essential applications in time-to-collision,
scene flow, and motion-in-depth estimation. OE schemes
have unique advantages in 3D motion estimation tasks, re-
quiring only a single camera and producing dense results that are
independent of a fixed camera baseline. In this paper, we discuss a ro-
bust and novel approach for OE estimation.
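For readers unfamiliar with OE, the standard pinhole-camera relation behind these applications can be written in a few lines: the scale change s of a rigid object equals the ratio of its depths in the two frames, from which motion-in-depth and time-to-collision follow. The numbers below are made-up toy values.

# Pinhole camera: image size l = f * L / Z for an object of physical size L,
# so between two frames  s = l2 / l1 = Z1 / Z2  (optical expansion),
# motion-in-depth  tau = Z2 / Z1 = 1 / s, and, at constant closing speed,
# time-to-collision  TTC = dt / (s - 1).
def motion_in_depth(s):
    return 1.0 / s

def time_to_collision(s, dt):
    return dt / (s - 1.0) if s > 1.0 else float("inf")

s, dt = 1.05, 0.1            # object appears 5% larger after 0.1 s (toy values)
print(motion_in_depth(s))        # ~0.952: depth shrank to ~95% of its old value
print(time_to_collision(s, dt))  # 2.0 s until the object would reach the camera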
Figure 1. Scale matching idea. Left: multi-scale matching be-
tween two frames. Right: texture around the matching point,
where s is the size of the image scaling. The core idea of scale
matching is to match texture features at the correct location and
scale. As seen above, the texture at the license plate can be better
matched when the second frame is scaled 0.7 times in size, where
the scaling magnification also reflects the optical expansion of that
local pixel between the two frames. Furthermore, scale matching
can better handle the motion in the depth direction and contains
potential 3D motion information.
Prior works. In early time-to-collision (TTC) studies
[4, 22, 23], OE was obtained from motion modeling, where
the motion estimation was provided by optical flow or SIFT
[10]. Such algorithms relied on optical flow results and
specific model assumptions, often yielding only sparse and
low-accuracy results. Some recent methods [32] regress OE
based on existing optical flow results and achieve better out-
comes. However, these two-stage estimation methods rely
on accurate optical flow results and decrease computational
efficiency. Instead, we consider optical flow and expan-
sion estimation as two complementary tasks. Introduc-
ing OE into optical flow makes it possible to match features at the
correct location and scale. As shown in Fig. 1, this fusion
Figure 2. Three different matching modes. We match objects
between two consecutive frames, where the cat is away from the
camera and the car is close to the camera. (a) The 2D optical
flow matches the cat and car in the original size image. (b) Match
the second frame with the first frame after multiscale scaling; we
found that the cat can be better matched when magnified by 1.25
times, and the car can be better matched when shrunk by 0.75 times.
However, obtaining an accurate enlarged picture of a cat is impos-
sible. (c) Transpose matching, where the cat zoomed in in frame
1 to
|
Liu_MMVC_Learned_Multi-Mode_Video_Compression_With_Block-Based_Prediction_Mode_Selection_CVPR_2023
|
Abstract
Learning-based video compression has been extensively
studied over the past years, but it still has limitations in
adapting to various motion patterns and entropy models.
In this paper, we propose multi-mode video compression
(MMVC), a block wise mode ensemble deep video com-
pression framework that selects the optimal mode for fea-
ture domain prediction adapting to different motion pat-
terns. Proposed multi-modes include ConvLSTM-based fea-
ture domain prediction, optical flow conditioned feature do-
main prediction, and feature propagation to address a wide
range of cases from static scenes without apparent mo-
tions to dynamic scenes with a moving camera. We parti-
tion the feature space into blocks for temporal prediction
in spatial block-based representations. For entropy coding,
we consider both dense and sparse post-quantization resid-
ual blocks, and apply optional run-length coding to sparse
residuals to improve the compression rate. In this sense, our
method uses a dual-mode entropy coding scheme guided by
a binary density map, which offers significant rate reduc-
tion surpassing the extra cost of transmitting the binary se-
lection map. We validate our scheme with some of the most
popular benchmarking datasets. Compared with state-of-
the-art video compression schemes and standard codecs,
our method yields better or competitive results measured
with PSNR and MS-SSIM.
|
1. Introduction
Over the past several years, with the emergence and
boom of short videos and video conferencing across the
world, video has become the major carrier of informa-
tion and interaction among people on a daily basis. Conse-
quently, we have been witnessing a vast demand increase on
transmission bandwidth and storage space, together with the
vibrant growth and discovery of handcrafted codecs such
as AVC/H.264 [23], HEVC [23], and the recently released
VVC [22], along with a number of learning based meth-
ods [1, 7, 9, 11, 12, 15–17, 21, 27, 30, 31].
Prior works in deep video codecs have underlined the
importance of utilizing and benefiting from deep neural net-
work models, which can exploit complex spatial-temporal
correlations and have the ability of ‘learning’ contextual and
motion features. The main objective of deep video com-
pression is to predict the next frame from previous frames
or historical data, which reduces the amount
of residual information that needs to be encoded and trans-
mitted. This has so far led to two directions: (1) to build ef-
ficient prediction or estimation models, and extract motion
information by leveraging the temporal correlation across
the frames [1, 9, 16, 31]; (2) to make accurate estimation
of the distribution of residual data and push down the in-
formation entropy statistically by appropriate conditioning
[7, 11, 30]. The existing works usually fall in one or a com-
bination of the above two realms. In the light of learning
capability that deep neural networks can offer, we argue and
demonstrate in this work that some measures of adaptively
selecting the right mode among different available models
in the encoding path can be advantageous on top of the ex-
isting schemes, especially when the adaptive model selec-
tion is applied at the block level in the feature space.
Drawing wisdom from conventional video codec stan-
dards that typically address various types of motions (in-
cluding the unchanged contextual information) in the unit of
macroblocks, we present a learning-based, block wise video
compression scheme that applies content-driven mode se-
lection on the fly. Our proposed method consists of four
modes targeting different scenarios:
•Skip mode (S) aims to utilize the frame buffer on the
decoder and find the most condensed representation to
transmit unchanged blocks to achieve the best possi-
ble bitrate. This mode is particularly useful for static
scenes where same backgrounds are captured by a
fixed view camera.
•Optical Flow Conditioned Feature Prediction mode
(OFC) leverages the temporal locality of motions. In
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
18487
Figure 1. Overview of our proposed multi-mode video coding method. The current and previous frames are fed into the feature extractor
and then go through branches of prediction modes followed by residual channel removal, quantization, and entropy coding process. We
then select the optimal prediction and entropy coding schemes for each block that lead to the smallest code size.
this mode, we capture the optical flow [24] between the
past two frames, and the warped new frame is treated
as a preliminary prediction of the current frame. This
warping serves as the condition to provide guidance to
the temporal prediction DNN model.
•Feature Propagation mode (FPG) applies to blocks
where changes are detected, but there is no better pre-
diction mode available. This mode copies the previ-
ously reconstructed feature block as the prediction, and
encodes the residual from there.
• For other generic cases, we propose the Feature Pre-
diction mode (FP) for feature domain inter-frame pre-
diction with a ConvLSTM network to produce a pre-
dicted current frame (block).
Prior to the mode selection step, the transmitter pro-
duces the optimal low-dimensional representation of each
frame using a learned encoder and decoder pair based on
the image compression framework in [14] for the mapping
from pixel to feature space. The block by block difference
between the previous frame and the current frame repre-
sents the block wise motion. Unlike some state-of-the-art
video compression frameworks that separately encode mo-
tions and residuals, our method does not encode the motion
as it is automatically generated by the prediction using the
information available on both the transmitter and receiver.
To adapt to different dynamics that may exist even within
a single frame for different blocks, our method evaluates multiple prediction modes listed above at the block
level. As a result, we can always obtain residuals that have
the highest sparsity and thereby the shortest code length per
block. Furthermore, we propose a residual channel remov-
ing strategy to mask out residual channels that are inessen-
tial to frame reconstruction, exploiting favorable tradeoffs
between noticeably higher compression ratio and negligible
quality degradation.
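As a rough, self-contained sketch of the per-block decision logic described above (not the actual MMVC implementation), the snippet below picks, for one block, the prediction mode whose residual gives the shortest proxy code, and flags whether run-length or dense entropy coding should be used; the code-length proxies and the sparsity threshold are assumptions.

import numpy as np

def run_length_encode(x):
    # Run-length code a flat integer array as (value, count) pairs.
    x = np.asarray(x).ravel()
    change = np.flatnonzero(np.diff(x)) + 1
    starts = np.concatenate(([0], change))
    counts = np.diff(np.concatenate((starts, [len(x)])))
    return list(zip(x[starts].tolist(), counts.tolist()))

def choose_mode_for_block(residuals_per_mode, sparsity_threshold=0.9):
    # residuals_per_mode: dict mode_name -> quantized residual block (2-D array).
    best = None
    for mode, res in residuals_per_mode.items():
        sparsity = 1.0 - np.count_nonzero(res) / res.size
        if sparsity >= sparsity_threshold:
            cost, coder = 2 * len(run_length_encode(res)), "rle"
        else:
            cost, coder = res.size, "dense"        # crude proxy for dense coding
        if best is None or cost < best[2]:
            best = (mode, coder, cost)
    return best

blocks = {"skip": np.zeros((8, 8), dtype=np.int32),
          "feature_pred": np.random.randint(-2, 3, size=(8, 8))}
print(choose_mode_for_block(blocks))   # e.g. ('skip', 'rle', 2) on this toy input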
Technical contributions of this work are summarized as
follows:
• We present MMVC, a dynamic mode selection-based
video compression approach that adapts to different
motion and contextual patterns with Skip mode and
different feature-domain prediction paths in the unit of
block.
• To improve the residual sparsity without losing much
quality while minimizing the bitrate, we propose a
block wise channel removal scheme and a density-
adaptive entropy coding strategy.
• We perform extensive experiments and a compara-
tive study to showcase that MMVC exhibits supe-
rior or similar performance compared to state-of-the-
art learning-based methods and conventional codecs.
In the ablation study, we show the effectiveness of
our scheme by quantifying the utilization of different
modes that varies by video contents and scenes.
|
Lin_DynamicDet_A_Unified_Dynamic_Architecture_for_Object_Detection_CVPR_2023
|
Abstract
Dynamic neural network is an emerging research topic
in deep learning. With adaptive inference, dynamic mod-
els can achieve remarkable accuracy and computational
efficiency. However, it is challenging to design a power-
ful dynamic detector, because there is no suitable dynamic ar-
chitecture or exiting criterion for object detection. To
tackle these difficulties, we propose a dynamic framework
for object detection, named DynamicDet. Firstly, we care-
fully design a dynamic architecture based on the nature of
the object detection task. Then, we propose an adaptive
router to analyze the multi-scale information and to de-
cide the inference route automatically. We also present a
novel optimization strategy with an exiting criterion based
on the detection losses for our dynamic detectors. Last, we
present a variable-speed inference strategy, which helps to
realize a wide range of accuracy-speed trade-offs with only
one dynamic detector. Extensive experiments conducted
on the COCO benchmark demonstrate that the proposed
DynamicDet achieves new state-of-the-art accuracy-speed
trade-offs. For instance, with comparable accuracy, the
inference speed of our dynamic detector Dy-YOLOv7-W6
surpasses YOLOv7-E6 by 12%, YOLOv7-D6 by 17%, and
YOLOv7-E6E by 39%. The code is available at https:
//github.com/VDIGPKU/DynamicDet .
|
1. Introduction
Object detection is an essential topic in computer vision,
as it is a fundamental component for other vision tasks,
e.g., autonomous driving [26, 40, 56], multi-object track-
ing [52,57], intelligent transportation [36,55], etc. In recent
years, tremendous progress has been made toward more ac-
curate and faster detectors, such as Network Architecture
Search (NAS)-based detectors [10,25,48] and YOLO series
models [2, 9, 11, 21, 44, 45]. However, these methods need
to design and train multiple models to achieve a few good
trade-offs between accuracy and speed, which is not flexible
enough for various application scenarios. To alleviate this
Figure 1. Comparison of the proposed dynamic detectors and other
efficient object detectors (COCO APtest (%) vs. V100 batch 1 inference time (ms)). Our method can achieve a wide range
of state-of-the-art trade-offs between accuracy and speed with a
single model.
Figure 2. Examples of “easy” and “hard” images for the object
detection task.
problem, we focus on dynamic inference for the object de-
tection task, and attempt to use only one dynamic detector
to achieve a wide range of good accuracy-speed trade-offs,
as shown in Fig. 1.
The human brain inspires many fields of deep learning,
and the dynamic neural network [12] is a typical one. As
two examples shown in Fig. 2, we can quickly identify
all objects on the left “easy” image, while we need more
time to achieve the same effect for the right one. In other
words, the processing speeds of images are different in our
brains [18, 34], which depend on the difficulties of the im-
ages. This property motivates the image-wise dynamic neu-
ral network, and many exciting works have been proposed
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
6282
(e.g., Branchynet [43], MSDNet [17], DVT [50]). Although
these approaches have achieved remarkable performance,
they are all designed specifically for the image classifica-
tion task and are not suitable for other vision tasks, espe-
cially object detection [12]. The main difficulties in
designing an image-wise dynamic detector are as follows.
Dynamic detectors cannot utilize the existing dy-
namic architectures. Most existing dynamic architectures
are cascaded with multiple stages ( i.e., a stack of multiple
layers) [17, 20, 33, 54], and predict whether to stop the in-
ference at each exiting point. Such a paradigm is feasible
in image classification but is ineffective in object detection,
since an image contains multiple objects, and these objects usu-
ally have different categories and scales, as shown in Fig. 2.
Hence, almost all detectors depend heavily on multi-scale
information, utilizing the features on different scales to de-
tect objects of different sizes (which are obtained by fusing
the multi-scale features of the backbone with a detection
neck, i.e., FPN [27]). In this case, the exiting points for
detectors can only be placed behind the last stage. Con-
sequently, the entire backbone module has to be run com-
pletely [58], and it is impossible to achieve dynamic infer-
ence on multiple cascaded stages.
Dynamic detectors cannot exploit the existing exiting
criteria for image classification. For the image classi-
fication task, the threshold of top-1 accuracy is a widely
used criterion for decision-making [17, 50]. Notably, it
only needs one fully connected layer to predict the top-1
accuracy at an intermediate layer, which is easy and costless.
However, the object detection task requires the neck and the
head to predict the categories and locations of the object in-
stances [3, 14, 27, 39]. Hence, the existing exiting criteria
for image classification are not suitable for object detection.
To deal with the above difficulties, we propose a dynamic
framework to achieve dynamic inference for object detec-
tion, named DynamicDet. Firstly, we design a dynamic ar-
chitecture for the object detection task, which can exit with
multi-scale information during the inference. Then, we pro-
pose an adaptive router to choose the best route for each
image automatically. Besides, we present the correspond-
ing optimization and inference strategies for the proposed
DynamicDet.
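The sketch below illustrates, under our own simplifying assumptions (feature channels, threshold, and the placeholder detector interfaces), how an adaptive router of this kind could pool multi-scale features into a difficulty score and gate a second, heavier detector.

import torch
import torch.nn as nn

class AdaptiveRouter(nn.Module):
    # Predicts an image difficulty score in (0, 1) from multi-scale features.
    def __init__(self, channels=(256, 512, 1024)):
        super().__init__()
        self.fc = nn.Linear(sum(channels), 1)

    def forward(self, feats):
        pooled = [f.mean(dim=(2, 3)) for f in feats]   # global average pooling
        return torch.sigmoid(self.fc(torch.cat(pooled, dim=1)))

def dynamic_inference(image, detector1, detector2, router, threshold=0.5):
    # detector1/detector2 are placeholders for two cascaded detectors.
    feats = detector1.backbone(image)
    if router(feats).item() < threshold:   # "easy" image: exit early
        return detector1.head(feats)
    return detector2(image, feats)         # "hard" image: refine with detector 2

router = AdaptiveRouter()
feats = [torch.rand(1, c, s, s) for c, s in ((256, 64), (512, 32), (1024, 16))]
print(router(feats).item())   # difficulty score; raising `threshold` favors speed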
Our main contributions are as follows:
• We propose a dynamic architecture for object detec-
tion, named DynamicDet, which consists of two cas-
caded detectors and a router. This dynamic architec-
ture can be easily adapted to mainstream detectors,
e.g., Faster R-CNN and YOLO.
• We propose an adaptive router to predict the difficulty
scores of the images based on the multi-scale features,
and achieve automatic decision-making. In addition,
we propose a hyperparameter-free optimization strat-
egy and a variable-speed inference strategy for our dy-namic architecture.
• Extensive experiments show that DynamicDet can ob-
tain a wide range of accuracy-speed trade-offs with
only one dynamic detector. We also achieve new state-
of-the-art trade-offs for real-time object detection ( i.e.,
56.8% AP at 46 FPS).
|
Lentsch_SliceMatch_Geometry-Guided_Aggregation_for_Cross-View_Pose_Estimation_CVPR_2023
|
Abstract
This work addresses cross-view camera pose estimation,
i.e., determining the 3-Degrees-of-Freedom camera pose of
a given ground-level image w.r.t. an aerial image of the lo-
cal area. We propose SliceMatch, which consists of ground
and aerial feature extractors, feature aggregators, and a
pose predictor. The feature extractors extract dense features
from the ground and aerial images. Given a set of can-
didate camera poses, the feature aggregators construct a
single ground descriptor and a set of pose-dependent aerial
descriptors. Notably, our novel aerial feature aggregator
has a cross-view attention module for ground-view guided
aerial feature selection and utilizes the geometric projec-
tion of the ground camera’s viewing frustum on the aerial
image to pool features. The efficient construction of aerial
descriptors is achieved using precomputed masks. Slice-
Match is trained using contrastive learning and pose es-
timation is formulated as a similarity comparison between
the ground descriptor and the aerial descriptors. Compared
to the state-of-the-art, SliceMatch achieves a 19% lower
median localization error on the VIGOR benchmark using
the same VGG16 backbone at 150 frames per second, and
a 50% lower error when using a ResNet50 backbone.
|
1. Introduction
Cross-view camera pose estimation aims to estimate the
3-Degrees-of-Freedom (3-DoF) ground camera pose, i.e.,
planar location and orientation, by comparing the captured
ground-level image to a geo-referenced overhead aerial im-
age containing the camera’s local surroundings. In prac-
tice, the local aerial image can be obtained from a reference
database using any rough localization prior, e.g., Global
Navigation Satellite Systems (GNSS), image retrieval [19],
or dead reckoning [13]. However, this prior is not nec-
essarily accurate, for example, GNSS can contain errors
up to tens of meters in urban canyons [2, 48, 49]. The
cross-view formulation provides a promising alternative to
Figure 1. SliceMatch identifies for a ground-level image (a) its
camera’s 3-DoF pose within a corresponding aerial image (b).
It divides the camera’s Horizontal Field-of-View (HFoV) into
‘slices’, i.e., vertical regions in (a). After self-attention, our novel
aggregation step (c) applies cross-view attention to create ground
slice-specific aerial feature maps. To efficiently test many candi-
date poses, the slice features are aggregated using pose-dependent
aerial slice masks that represent the camera’s sliced HFoV at that
pose. The slice masks for each pose are precomputed. All aerial
pose descriptors are compared to the ground descriptor, resulting
in a dense scoring map (d). Our output is the best-scoring pose.
ground-level camera pose estimation techniques that require
detailed 3D point cloud maps [31] or semantic maps [3,43],
since the aerial imagery provides continuous coverage of
the Earth’s surface including the area where accurate point
clouds are difficult to collect. Moreover, acquiring up-to-
date aerial imagery is less costly than maintaining and up-
dating large-scale 3D point clouds or semantic maps.
Recently, several works have addressed cross-view cam-
era localization [55] or 3-DoF pose estimation [33, 36,
44, 50]. Roughly, those methods can be categorized into
global image descriptor-based [50, 55] and dense pixel-
level feature-based [33, 36, 44] methods. Global descriptor-
based methods take advantage of the compactness of the
image representation and often have relatively fast infer-
ence time [50, 55]. Dense pixel-level feature-based meth-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
17225
ods [33, 36, 44] are potentially more accurate as they pre-
serve more details in the image representation. They use
the geometric relationship between the ground and aerial
view to project features across views and estimate the cam-
era pose via computationally expensive iterations. Aiming
for both accurate and efficient camera pose estimation, in
this work, we improve the global descriptor-based approach
and enforce feature locality in the descriptor.
We observe several limitations in existing global
descriptor-based cross-view camera pose estimation meth-
ods [50,55]. First, they rely on the aerial encoder to encode
all spatial context and the aerial encoder has to learn how
to aggregate local information, e.g., via the SAFA mod-
ule [34], into the global descriptor, without accessing the
information in the ground view or exploiting geometric con-
straints between the ground-camera viewing frustum and
the aerial image. Second, existing global descriptor-based
methods for cross-view localization [50, 55] do not explic-
itly consider the orientation of the ground camera in their
descriptor construction. As a result, they either do not esti-
mate the orientation [55] or require multiple forward passes
on different rotated samples to infer the orientation [50].
Third, existing global descriptor-based methods [50, 55]
are not trained discriminatively against different orienta-
tions. Therefore, the learned features may be less discrimi-
native for orientation prediction.
To address the observed gaps, we devise a novel, accu-
rate, and efficient method for cross-view camera pose es-
timation called SliceMatch (see Figure 1). Its novel aerial
feature aggregation explicitly encodes directional informa-
tion and pools features using known camera geometry to ag-
gregate the extracted aerial features into an aerial global de-
scriptor. The proposed aggregation step ‘slices’ the ground
Horizontal Field-of-View (HFoV) into orientation-specific
descriptors. For each pose in a set of candidates, it aggre-
gates the extracted aerial features into corresponding aerial
slice descriptors. The aggregation uses cross-view attention
to weigh aerial features w.r.t. the ground descriptor, and
exploits the geometric constraint that every vertical slice in
the ground image corresponds to an azimuth range extrud-
ing from the projected ground camera position in the aerial
image. The feature extraction is done only once for con-
structing the descriptors for all pose candidates, resulting in
fast training and inference speed. We contrastively train the
model by pairing the ground image descriptor with aerial
descriptors at different locations and orientations. Hence,
the model learns to extract discriminative features for both
localization and orientation estimation.
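The following NumPy sketch shows one way the pose-dependent slice masks and the descriptor comparison could be realized; the grid size, HFoV, slice count, and feature dimensions are illustrative assumptions rather than the paper's configuration.

import numpy as np

def slice_masks(h, w, cam_xy, heading, hfov=np.deg2rad(90), n_slices=8):
    # Boolean masks (n_slices, h, w): which aerial cells fall into each
    # vertical ground-image slice for the candidate pose (cam_xy, heading).
    ys, xs = np.mgrid[0:h, 0:w]
    az = np.arctan2(ys - cam_xy[1], xs - cam_xy[0]) - heading
    az = (az + np.pi) % (2 * np.pi) - np.pi            # wrap to (-pi, pi]
    in_fov = np.abs(az) < hfov / 2
    idx = ((az + hfov / 2) / hfov * n_slices).astype(int)
    return np.stack([(idx == k) & in_fov for k in range(n_slices)])

def score_pose(aerial_feat, ground_desc, masks, eps=1e-8):
    # aerial_feat: (C, h, w); ground_desc: (n_slices, C). Mean cosine similarity.
    C = aerial_feat.shape[0]
    aerial_desc = np.stack([aerial_feat[:, m].mean(axis=1) if m.any()
                            else np.zeros(C) for m in masks])
    a = aerial_desc / (np.linalg.norm(aerial_desc, axis=1, keepdims=True) + eps)
    g = ground_desc / (np.linalg.norm(ground_desc, axis=1, keepdims=True) + eps)
    return float((a * g).sum(axis=1).mean())

feat, ground = np.random.rand(32, 64, 64), np.random.rand(8, 32)
candidates = [(x, y, r) for x in range(0, 64, 8) for y in range(0, 64, 8)
              for r in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
best = max(candidates, key=lambda p: score_pose(
    feat, ground, slice_masks(64, 64, (p[0], p[1]), p[2])))
print(best)   # highest-scoring (x, y, heading) candidate on this random input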
Contributions: i) A novel aerial feature aggregation
step that uses a cross-view attention module for ground-
view guided aerial feature selection, and the geometric rela-
tionship between the ground camera’s viewing frustum and
the aerial image to construct pose-dependent aerial descrip-tors. ii)SliceMatch’s design allows for efficient implemen-
tation, which runs significantly faster than previous state-of-
the-art methods. Namely, for an input ground-aerial image
pair, SliceMatch extracts dense features only once, aggre-
gates aerial descriptors at a set of poses without extra com-
putation, and compares the aerial descriptor of each pose
with the ground descriptor. iii)Compared to the previous
state-of-the-art global descriptor-based cross-view camera
pose estimation method, SliceMatch constructs orientation-
aware descriptors and adopts contrastive learning for both
locations and orientations. Powered by the above designs,
SliceMatch sets the new state-of-the-art for cross-view pose
estimation on two commonly used benchmarks.
|
Li_Center_Focusing_Network_for_Real-Time_LiDAR_Panoptic_Segmentation_CVPR_2023
|
Abstract
LiDAR panoptic segmentation facilitates an autonomous
vehicle to comprehensively understand the surrounding ob-
jects and scenes and is required to run in real time. The
recent proposal-free methods accelerate the algorithm, but
their effectiveness and efficiency are still limited owing to
the difficulty of modeling non-existent instance centers and
the costly center-based clustering modules. To achieve ac-
curate and real-time LiDAR panoptic segmentation, a novel
center focusing network (CFNet) is introduced. Specifically,
the center focusing feature encoding (CFFE) is proposed to
explicitly understand the relationships between the origi-
nal LiDAR points and virtual instance centers by shifting
the LiDAR points and filling in the center points. More-
over, to leverage the redundantly detected centers, a fast
center deduplication module (CDM) is proposed to select
only one center for each instance. Experiments on the Se-
manticKITTI and nuScenes panoptic segmentation bench-
marks demonstrate that our CFNet outperforms all existing
methods by a large margin and is 1.6 times faster than the
most efficient method.
|
1. Introduction
Panoptic segmentation [18] combines both semantic seg-
mentation and instance segmentation in a single framework.
It predicts semantic labels for the uncountable stuff classes
(e.g. road, sidewalk), while it simultaneously provides se-
mantic labels and instance IDs for the countable things
classes (e.g. car, pedestrian). The LiDAR panoptic segmen-
tation is one of the bases for the safety of autonomous driv-
ing, which employs the point clouds collected by the Light
Detection and Ranging (LiDAR) sensors to effectively de-
pict the surroundings. Existing LiDAR panoptic segmenta-
tion methods first conduct semantic segmentation, and then
achieve instance segmentation for the things categories in
Figure 1. PQ vs. runtime on the SemanticKITTI test set. Runtime
measurements are taken on a single NVIDIA RTX 3090 GPU. The
panoptic quality (PQ) is introduced in section 4.1.
two ways, the proposal-based and proposal-free methods.
The proposal-based methods [17, 31, 37] adopt a two-
stage process similar to the well-known Mask R-CNN [14]
in the image domain. They first generate object proposals for
the things points by using 3D detection networks [19, 30]
and then refine the instance segmentation results within
each proposal. As shown in Fig. 1, these methods are usu-
ally complicated and hardly achieve real-time processing,
owing to their sequential multi-stage pipelines.
The proposal-free frameworks [13, 15, 21, 22, 29, 35, 39]
are more compact. To associate the things points with in-
stance IDs, these methods usually leverage the instance cen-
ters. Specifically, they regress the offsets from the points
to their corresponding centers, and then adopt the class-
agnostic center-based clustering modules [13, 15, 29] or the
bird’s-eye view (BEV) center heatmap [22, 35, 39]. How-
ever, two problems exist in these methods. First, for center
feature extraction and center modeling, the non-existent in-
Figure 2. Instance segmentation of a car. Without our CFFE, the
car is split into parts (a), while the CFFE significantly alleviates
this problem (b). Different colors represent different instances.
stance centers increase the difficulty, considering that the
LiDAR points are usually surface-aggregated [35] and an
instance center is imaginary in most cases. As shown in
Fig. 2(a), the difficulty often results in the fault that one
instance is incorrectly split into several parts. Second,
for exploiting the redundantly detected centers, the clus-
tering modules ( e.g. MeanShift, DBSCAN) are too time-
consuming to support the real-time autonomous driving per-
ception systems, while the BEV center heatmap cannot dis-
tinguish objects with different altitudes in the same BEV
grid.
For accurate and fast LiDAR panoptic segmentation, a
proposal-free center focusing network (CFNet) is proposed.
To better encode center features, a novel center focusing
feature encoding (CFFE) is proposed to generate center-
focusing feature maps by shifting the things points to fill
in the non-existent instance centers for more accurate pre-
dictions (as shown in Fig. 2(b)). For center modeling, the
CFNet not only decomposes the panoptic segmentation task
into the widely-used semantic segmentation and center off-
set regression, but also proposes a new confidence score
prediction for indicating the accuracy of the center offset re-
gression. Subsequently, to exploit the detected centers,
a novel center deduplication module (CDM) is designed to
select one center for a single instance. The CDM keeps
the predicted centers with higher confidence scores, while
suppressing the ones with lower confidence. Finally, in-
stance segmentation is achieved by assigning the shifted
things points to the closest center. For efficiency, the pro-
posed CFNet is built on the 2D projection-based segmenta-
tion paradigm. Our contributions are as follows:
• A proposal-free CFNet is proposed to achieve accu-
rate and fast LiDAR panoptic segmentation by solving
the bottleneck problems of center modeling and center-
based clustering in previous methods.
• The CFFE is proposed to alleviate the difficulty of
modeling the non-existent instance centers and the
CDM is designed to efficiently keep one center for
each instance.
• The proposed CFNet is evaluated on the nuScenes and
SemanticKITTI LiDAR panoptic segmentation bench-
marks. Our CFNet achieves the state-of-the-art perfor-
mance with a real-time inference speed.
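To give a concrete flavour of the CDM and the final closest-center assignment described above, here is a small NumPy sketch; the greedy suppression radius and the toy data are our assumptions, not the paper's exact procedure.

import numpy as np

def deduplicate_centers(centers, scores, radius=0.5):
    # Keep one center per instance: greedily take the most confident center
    # and suppress all other predictions within `radius` (assumed value).
    order = np.argsort(-scores)
    kept = []
    for i in order:
        if all(np.linalg.norm(centers[i] - centers[j]) > radius for j in kept):
            kept.append(i)
    return centers[kept]

def assign_instances(shifted_points, kept_centers):
    # Instance ID = index of the closest kept center for each shifted point.
    d = np.linalg.norm(shifted_points[:, None, :] - kept_centers[None], axis=2)
    return d.argmin(axis=1)

# Toy example: two point clusters and three noisy center votes.
pts = np.concatenate([np.random.randn(50, 3) * 0.1,
                      np.random.randn(50, 3) * 0.1 + 5.0])
votes = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [5.0, 5.0, 5.0]])
conf = np.array([0.9, 0.4, 0.8])
centers = deduplicate_centers(votes, conf)   # the 0.4-confidence vote is removed
print(len(centers), np.bincount(assign_instances(pts, centers)))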
|
Lin_Deep_Frequency_Filtering_for_Domain_Generalization_CVPR_2023
|
Abstract
Improving the generalization ability of Deep Neural Net-
works (DNNs) is critical for their practical uses, which has
been a longstanding challenge. Some theoretical studies
have uncovered that DNNs have preferences for some fre-
quency components in the learning process and indicated
that this may affect the robustness of learned features. In
this paper, we propose Deep Frequency Filtering (DFF) for
learning domain-generalizable features, which is the first
endeavour to explicitly modulate the frequency components
of different transfer difficulties across domains in the latent
space during training. To achieve this, we perform Fast
Fourier Transform (FFT) for the feature maps at different
layers, then adopt a light-weight module to learn attention
masks from the frequency representations after FFT to en-
hance transferable components while suppressing the com-
ponents not conducive to generalization. Further, we empir-
ically compare the effectiveness of adopting different types
of attention designs for implementing DFF . Extensive exper-
iments demonstrate the effectiveness of our proposed DFF
and show that applying our DFF on a plain baseline out-
performs the state-of-the-art methods on different domain
generalization tasks, including close-set classification and
open-set retrieval.
|
1. Introduction
Domain Generalization (DG) seeks to break through the
i.i.d. assumption that training and testing data are identi-
cally and independently distributed. This assumption does
not always hold in reality since domain gaps are commonly
seen between the training and testing data. However, col-
lecting enough training data from all possible domains is
costly and even impossible in some practical environments.
Thus, learning generalizable feature representations is of
high practical value for both industry and academia.
Recently, a series of research works [78] analyze deep
learning from the frequency perspective. These works,
represented by the F-Principle [75], uncover that there
are different preference degrees of DNNs for the infor-
mation of different frequencies in their learning processes.
Specifically, DNNs optimized with stochastic gradient-
based methods tend to capture low-frequency components
of the training data with a higher priority [74] while exploit-
ing high-frequency components to trade the robustness (on
unseen domains) for the accuracy (on seen domains) [66].
This observation indicates that different frequency compo-
nents are of different transferability across domains.
In this work, we seek to learn generalizable features from
a frequency perspective. To achieve this, we conceptual-
ize Deep Frequency Filtering (DFF), which is a new tech-
nique capable of enhancing the transferable frequency com-
ponents and suppressing the ones not conducive to general-
ization in the latent space. With DFF, the frequency compo-
nents of different cross-domain transferability are dynam-
ically modulated in an end-to-end manner during training.
This is conceptually simple, easy to implement, yet remark-
ably effective. In particular, for a given intermediate fea-
ture, we apply Fast Fourier Transform (FFT) along its spa-
tial dimensions to obtain the corresponding frequency rep-
resentations where different spatial locations correspond to
different frequency components. In such a frequency do-
main, we are allowed to learn a spatial attention map and
multiply it with the frequency representations to filter out
the components adverse to the generalization across do-
mains.
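A minimal PyTorch sketch of such frequency-domain filtering is given below; the lightweight mask head (two 1x1 convolutions) is an assumed design used only to illustrate the idea of learning a spatial attention mask over frequency components.

import torch
import torch.nn as nn

class FrequencyFilter(nn.Module):
    # Instance-adaptive filtering of a feature map in the frequency domain.
    def __init__(self, channels):
        super().__init__()
        self.mask_head = nn.Sequential(        # assumed lightweight mask head
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())

    def forward(self, x):
        freq = torch.fft.rfft2(x, norm="ortho")          # B, C, H, W//2+1
        mask = self.mask_head(freq.abs())                # one weight per bin
        return torch.fft.irfft2(freq * mask, s=x.shape[-2:], norm="ortho")

x = torch.rand(2, 64, 32, 32)
y = FrequencyFilter(64)(x)
print(y.shape)   # torch.Size([2, 64, 32, 32]): same shape, filtered content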
The attention map above is learned in an end-to-end
manner using a lightweight module, which is instance-
adaptive. As indicated in [66, 74], low-frequency com-
ponents are relatively easier to generalize than high-
frequency ones while high-frequency components are com-
monly exploited to trade robustness for accuracy. Although
this phenomenon can be observed consistently over differ-
ent instances, it does not mean that high-frequency com-
ponents have the same proportion in different samples or
have the same degree of effect on the generalization abil-
ity. Thus, we experimentally compare the effectiveness of
task-wise filtering with that of instance-adaptive filtering.
Here, the task-wise filtering uses a shared mask over all in-
stances while the instance-adaptive filtering uses unshared
masks. We find the former one also works but is inferior to
our proposed design by a clear margin. As analyzed in [10],
the spectral transform theory [32] shows that updating a sin-
gle value in the frequency domain globally affects all orig-
inal data before FFT, rendering frequency representation as
a global feature complementary to the local features learned
through regular convolutions. Thus, a two-branch architec-
ture named Fast Fourier Convolution (FFC) is introduced
in [32] to exploit the complementarity of features in the fre-
quency and original domains with an efficient ensemble. To
evaluate the effectiveness of our proposed DFF, we choose
this two-branch architecture as a base architecture and apply
our proposed frequency filtering mechanism to its spectral
transform branch. Note that FFC provides an effective im-
plementation for frequency-space convolution while we in-
troduce a novel frequency-space attention mechanism. We
evaluate and demonstrate our effectiveness on top of it.
Our contributions can be summarized in the following:
• We discover that the cross-domain generalization ability
of DNNs can be significantly enhanced by a simple learn-
able filtering operation in the frequency domain.
• We propose an effective Deep Frequency Filtering (DFF)
module where we learn an instance-adaptive spatial mask
to dynamically modulate different frequency components
during training for learning generalizable features.
• We conduct an empirical study for the comparison of dif-
ferent design choices on implementing DFF, and find that
the instance-level adaptability is required when learning
frequency-space filtering for domain generalization.
|
Lee_Multimodal_Prompting_With_Missing_Modalities_for_Visual_Recognition_CVPR_2023
|
Abstract
In this paper, we tackle two challenges in multimodal
learning for visual recognition: 1) when missing-modality
occurs either during training or testing in real-world sit-
uations; and 2) when the computation resources are not
available to finetune on heavy transformer models. To
this end, we propose to utilize prompt learning and miti-
gate the above two challenges together. Specifically, our
modality-missing-aware prompts can be plugged into mul-
timodal transformers to handle general missing-modality
cases, while only requiring less than 1%learnable param-
eters compared to training the entire model. We further ex-
plore the effect of different prompt configurations and an-
alyze the robustness to missing modality. Extensive experi-
ments are conducted to show the effectiveness of our prompt
learning framework that improves the performance under
various missing-modality cases, while alleviating the re-
quirement of heavy model re-training. Code is available.1
|
1. Introduction
Our observations in daily life are typically multi-
modal, such as visual, linguistic, and acoustic signals; thus
modeling and coordinating multimodal information is of
great interest and has broad application potentials. Re-
cently, multimodal transformers [13, 17, 22, 25, 35] emerge
as the pre-trained backbone models in several multimodal
downstream tasks, including genre classification [22], mul-
timodal sentiment analysis [25, 35], and cross-modal re-
trieval [13,15,17,30], etc. Though providing promising per-
formance and generalization ability on various tasks, chal-
lenges remain when multimodal transformers are ap-
plied in practical scenarios: 1) how to efficiently adapt the
multimodal transformers without using heavy computation
resource to finetune the entire model? 2) how to ensure the
robustness when there are missing modalities, e.g., incom-
plete training data or observations in testing?
1https://github.com/YiLunLee/missing_aware_prompts
Figure 1. Illustration of missing-modality scenarios in training
multimodal transformers. Prior work [22] investigates the robust-
ness of multimodal transformers to modality-incomplete test data,
with the requirement to finetune the entire model using modality-
complete training data. In contrast, our work studies a more gen-
eral scenario where various modality-missing cases would occur
differently not only for each data sample but also learning phases
(training, testing, or both), and we adopt prompt learning to adapt
the pre-trained transformer for downstream tasks without requir-
ing heavy computations on finetuning the entire model.
Most multimodal transformer-based methods have a
common assumption of data completeness, which may
not hold in practice due to privacy, device, or security
constraints. Thus, the performance may degrade when the
data is modality-incomplete (regardless of training or test-
ing). On the other hand, transformers pretrained on large-
scale datasets are frequently adopted as backbone and fine-
tuned for addressing various downstream tasks, thanks to
the strong generalizability of transformers. However, as the
model size of transformers increases (e.g., up to billions of
parameters [5,26,27]), finetuning becomes significantly ex-
pensive (e.g., up to millions of A100-GPU-hours [31]) and
is even not feasible for practitioners due to the limited com-
putation resources in most real-world applications. In addi-
tion, finetuning a transformer on relatively small-scale tar-
get datasets can result in restricted generalizability [9, 10]
and stability [24], thus hindering it from being reused for
further learning with new data or in other tasks.
This motivates us to design a method that allows multi-
modal transformers to alleviate these two real-world chal-
lenges. One pioneer work [22] investigates the sensitiv-
ity of vision-language transformers to the presence of
modal-incomplete test data (i.e., either texts or images are
missing). However, they only consider the case of miss-
ing a specific modality for all the data samples, while in
real-world scenarios the missing modality for each input
sample cannot be known in advance. Moreover, [22] in-
troduces additional task tokens to handle different missing-
modal scenarios (e.g., text-only token when missing visual
modality) and requires optimizing cross-modal features in
the model. Hence, finetuning the entire transformer becomes
inevitable, leading to significant computational expense.
In this paper, we study multimodal transformers under
a more general modality-incomplete scenario, where var-
ious missing-modality cases may occur in any data sam-
ples, e.g., there can be both text-only and image-only data
during training or testing. In particular, we also focus on
alleviating the requirement of finetuning the entire trans-
formers. To this end, we propose a framework stemmed
from prompt learning techniques for addressing the afore-
mentioned challenges. Basically, prompt learning meth-
ods [2,5,8,16,18,32,42] emerge recently as efficient and ef-
fective solutions for adapting pre-trained transformers to the
target domain via only training very few parameters (i.e.,
prompts), and achieve comparable performance with fine-
tuning the whole heavy model. As motivated by [29] which
shows that prompts are good indicators for different distri-
butions of input, we propose to regard different situations of
missing modalities as different types of input and adopt the
learnable prompts to mitigate the performance drop caused
by missing modality. As a result, the size of our learnable
prompts can be less than 1% of the entire transformer, and
thus the computation becomes more affordable compared to
holistic finetuning. The key differences between our work
and [22] are illustrated in Figure 1.
In order to further explore the prompt designs for
multimodal transformers to tackle the general modality-
incomplete scenario, we investigate two designs of inte-
grating our missing-aware prompts2 into pre-trained mul-
timodal transformers: 1) input-level, and 2) attention-level
prompt learning. We find that, the location of attaching
prompts to transformers is crucial for the studied missing-
modality cases in this paper, which also aligns the findings
in [36], though under a different problem setting.
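As a rough illustration of the input-level design (a minimal sketch under our own assumptions, not the authors' implementation), the missing-aware prompts can be stored in a small learnable table indexed by each sample's missing pattern and prepended to the token sequence of the selected layers; the class name, prompt length, and embedding size below are hypothetical.
```python
import torch
import torch.nn as nn

class MissingAwarePrompts(nn.Module):
    """Sketch of input-level missing-aware prompts: one learnable prompt
    block per missing pattern (0: complete, 1: image-only, 2: text-only),
    per prompted layer. Sizes are illustrative, not the paper's."""
    def __init__(self, num_patterns=3, prompt_len=16, dim=768, num_layers=6):
        super().__init__()
        # Tiny prompt table: far fewer parameters than the frozen backbone.
        self.prompts = nn.Parameter(
            torch.randn(num_layers, num_patterns, prompt_len, dim) * 0.02)

    def forward(self, tokens, layer_idx, pattern_idx):
        # tokens: (B, N, dim); pattern_idx: (B,) long tensor of missing patterns.
        p = self.prompts[layer_idx][pattern_idx]        # (B, prompt_len, dim)
        return torch.cat([p, tokens], dim=1)            # prepend prompts to tokens
```
Only such a prompt table (plus a small task head) would be trained while the pre-trained multimodal transformer stays frozen, which is what keeps the learnable parameters below 1% of the entire model.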
We conduct experiments to explore different prompt
configurations and observe the impact of the length and
location of prompts: 1) As the number of prompting
layers increases, the model intuitively performs better, but
this is not the most important factor; 2) Attaching prompts
to the layers near the data input achieves better performance;
3) The prompt length has only a slight impact on model
performance for attention-level prompts but may influence
input-level prompts more on certain datasets. Moreover, we
show extensive results to validate the effectiveness of adopting
our prompting framework to alleviate the missing-modality
issue under various cases, while reducing the learnable
parameters to less than 1% of the entire model.
2In this paper, we use "missing-aware prompts" and "modality-missing-aware prompts" interchangeably.
Our main contributions are as follows:
• We introduce a general scenario for multimodal learn-
ing, where the missing modality may occur differently
for each data sample, either in training or testing phase.
• We propose to use missing-aware prompts to tackle the
missing-modality situations, while requiring less than
1% of the parameters to adapt pre-trained models, thus
avoiding finetuning heavy transformers.
• We further study two designs of attaching prompts
onto different locations of a pretrained transformer,
input-level and attention-level prompting, where the
input-level prompting is generally a better choice but
the attention-level one can be less sensitive to certain
dataset settings.
|
Liu_RIATIG_Reliable_and_Imperceptible_Adversarial_Text-to-Image_Generation_With_Natural_Prompts_CVPR_2023
|
Abstract
The field of text-to-image generation has made remark-
able strides in creating high-fidelity and photorealistic im-
ages. As this technology gains popularity, there is a grow-
ing concern about its potential security risks. However,
there has been limited exploration into the robustness of
these models from an adversarial perspective. Existing re-
search has primarily focused on untargeted settings, and
lacks holistic consideration for reliability (attack success
rate) and stealthiness (imperceptibility).
In this paper, we propose RIATIG, a reliable and im-
perceptible adversarial attack against text-to-image mod-
els via inconspicuous examples. By formulating the exam-
ple crafting as an optimization process and solving it using
a genetic-based method, our proposed attack can generate
imperceptible prompts for text-to-image generation models
in a reliable way. Evaluation of six popular text-to-image
generation models demonstrates the efficiency and stealthi-
ness of our attack in both white-box and black-box settings.
To allow the community to build on top of our findings,
we’ve made the artifacts available1.
|
1. Introduction
Text-to-image generation has captured widespread
attention from the research community with its creative and
realistic image generation capability [52, 55, 56]. The abil-
ity to generate text-consistent images from natural language
descriptions could potentially bring tremendous benefits to
many areas of life, such as multimedia editing, computer-
aided design, and art creation [23, 27, 37, 43].
Driven by recent advances in models trained with large
datasets [42, 57] and multimodal learning [45] (e.g., diffu-
sion models [17]), text-to-image generation has made sig-
nificant progress in synthesizing high-fidelity and photore-
alistic images, such as DALL·E [43], DALL·E 2 [42] and
Imagen [45]. At the same time, there are a growing num-
1Code is available at: https://github.com/WUSTL-CSPL/RIATIG
Figure 1. Examples of RIATIG attacks against DALL·E mini, DALL·E 2, and Imagen. The top texts are the adversarial prompts, and the bottom texts are the target models. The first row shows the target images, while the second row shows the adversarial images generated by the prompts.
ber of ethical concerns about the potential misuse of this
technology [36,45,53]. Generative models could be used to
generate synthetic video/audio/images of individuals (e.g.,
Deepfakes [14]), or synthetic contents with harmful stereo-
types, violence, or obscenities [10, 45, 53]. To prevent the
generation of such harmful content, content moderation fil-
ters are deployed in public APIs (e.g., DALL·E 2) to filter
unsafe text prompts that may lead to harmful content. How-
ever, despite their best intentions, existing model-based text
filters remain susceptible to adaptive adversarial attacks.
Deep neural networks (DNNs) have been shown to be
vulnerable to adversarial examples [9, 31, 32]. By applying
these techniques, it is possible to craft an adversarial text
that looks natural to bypass the content filters, yet gener-
ates a completely different category of potentially malicious
images. However, the adversarial attacks on text-to-image
generators are less explored. To the best of our knowledge,
there are two closely related studies [15, 36]. Nevertheless,
two challenges remain:
1) Reliability. One significant limitation is that the existing
works [15, 36] do not offer a reliable method to find ad-
versarial examples. [15] discovers that DALL·E 2 has cer-
tain hidden vocabularies that can be used to generate images
with some absurd (non-natural) prompts; however, this vo-
cabulary is often limited and not stealthy (natural). Building on
evocative prompting, [36] crafts adversarial examples via
the morphological similarity between existing words. How-
ever, it is very difficult to find texts with such linguistic sim-
ilarity, and as a result, it can be challenging for this method
to be adopted and generalized in different scenarios.
2) Stealthiness. The existing approaches can only craft ad-
versarial examples that appear to be non-natural compared
to normal texts or retain similar meanings, making them
easily filtered and recognized by human examiners. For ex-
ample, [15] crafts Apoploe vesrreaitais to represent bugs,
and [36] crafts falaiscoglieklippantilado to represent cliff.
Also, [36] combines creepy and spooky into creepooky to
generate an image that looks creepy and scary, yet the in-
ferred meaning of this new word is highly related to the
generated image, limiting its stealthiness.
To address these challenges, we propose RIATIG , a re-
liable and imperceptible adversarial attack against text-to-
image models using natural examples. To achieve this, we
first formulate the generation of the adversarial examples as
an optimization problem and apply genetic-based optimiza-
tion methods to solve it, thus making our methods much
more reliable in finding working adversarial examples. Fur-
thermore, in order to improve the stealthiness, we propose
a new text mutation technique to generate adversarial text
that is visually and semantically similar to its normal ver-
sion (some example results are shown in Figure 1).
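As a rough sketch of how such a genetic search could be organized (our own simplification: the fitness and mutation functions are placeholders for the paper's actual components, and crossover is omitted for brevity):
```python
import random

def genetic_prompt_search(seed_prompt, fitness_fn, mutate_fn,
                          pop_size=20, generations=50, elite=4):
    """Sketch of a genetic search for adversarial prompts.
    fitness_fn(prompt) -> float scores how close the image generated from
    `prompt` is to the attacker's target (e.g., a CLIP image similarity);
    mutate_fn(prompt) -> str applies a small, visually/semantically similar
    text mutation. Both are placeholders, not RIATIG's exact operators."""
    population = [mutate_fn(seed_prompt) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness_fn, reverse=True)
        parents = ranked[:elite]                         # keep the fittest prompts
        children = [mutate_fn(random.choice(parents))    # mutate elite parents
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=fitness_fn)
```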
RIATIG is evaluated on six popular text-to-image mod-
els with both white-box and black-box attack settings. Ex-
perimental results show that compared with the state-of-
the-art text-to-image-oriented adversarial attacks, RIATIG
demonstrates significantly better performance in terms of
attack effectiveness and sample quality. Overall, the contri-
butions of this work are summarized as follows:
• We are the first to systematically analyze the adver-
sarial robustness of text-to-image generation models in
both the white-box and black-box settings.
• We propose genetic-based optimization methods to
find natural adversarial examples reliably.
• We evaluate our attacks on six popular text-to-image
generation models and compare our attacks with five
baselines. The evaluation results show that our meth-
ods achieve a much higher success rate and sample
quality, raising awareness of improving and securing
the robustness of text-to-image models.
|
Koneputugodage_Octree_Guided_Unoriented_Surface_Reconstruction_CVPR_2023
|
Abstract
We address the problem of surface reconstruction from
unoriented point clouds. Implicit neural representations
(INRs) have become popular for this task, but when infor-
mation relating to the inside versus outside of a shape is
not available (such as shape occupancy, signed distances or
surface normal orientation), optimization relies on heuris-
tics and regularizers to recover the surface. These meth-
ods can be slow to converge and easily get stuck in local
minima. We propose a two-step approach, OG-INR, where
we (1) construct a discrete octree and label what is inside
and outside, and (2) optimize for a continuous and high-fidelity
shape using an INR that is initially guided by the octree’s
labelling. To solve for our labelling, we propose an en-
ergy function over the discrete structure and provide an ef-
ficient move-making algorithm that explores many possible
labellings. Furthermore we show that we can easily inject
knowledge into the discrete octree, providing a simple way
to influence the result from the continuous INR. We evaluate
the effectiveness of our approach on two unoriented surface
reconstruction datasets and show competitive performance
compared to other unoriented, and some oriented, methods.
Our results show that the exploration by the move-making
algorithm avoids many of the bad local minima reached by
purely gradient descent optimized methods (see Figure 1).
|
1. Introduction
Surface reconstruction from 3D point clouds has been
studied extensively in computer vision and computer graph-
ics. The task requires estimating a mesh of the shape’s sur-
face from a point cloud sampled from the surface of the
shape. We focus on the reconstruction of watertight 3D
shapes, i.e., shapes that have a well defined interior and ex-
terior. Such shapes are often represented as signed distance
fields (SDFs) or occupancy fields, which can be efficiently
encoded by a neural network [28, 30]. As these representa-
tions are fields parameterized by neural networks, they are
often called neural fields. Furthermore, they implicitly rep-
Figure 1. Our method with a SIREN INR (OG-SIREN, left) com-
pared to SIREN w/o n (right) for two shapes from the ShapeNet
dataset. Our octree guidance allows for consistent inside-outside
determinism. On the other hand, SIREN w/o n gets stuck in lo-
cal minima from which it cannot escape (due to needing to com-
pletely change the occupancy of certain areas) creating extraneous
surfaces (often called ghost geometries in the literature).
resent the shape by a level set of the field, thus they are also
often referred to as implicit neural representations (INRs).
The broader class of neural fields, including INRs, have
been very popular over the last few years as they can handle
arbitrary topology, are memory efficient, and are continu-
ous with potentially infinite resolution [39, 46]. Among the
INRs for 3D shapes, SDFs are the most popular as they pro-
vide more useful information (distances not just occupancy)
and are required for downstream graphics algorithms such
as sphere tracing and approximate soft shadows [29, 35].
When learning an implicit representation of a shape, a
major difficulty is predicting whether points in space are
inside or outside the shape. Many methods require ori-
ented surface normals for the input points or signed dis-
tances from the surface, which usually are not given by raw
data from scans. While these can be estimated using line-
of-sight information [17] or estimation algorithms [4, 6, 15, 21],
the resulting predictions are noisy and, even after postprocessing,
can still lead to bad results (see Section 4.4). We consider the task
of unoriented surface reconstruction, where such informa-
tion is not available, and only the sampled surface points
are given. We demonstrate that our method performs com-
petitively and sometimes better than oriented methods, even
when they are given the ground truth (GT) normals.
Figure 2. An illustration of our method, OG-INR. Given an unoriented point cloud (left), we progressively build and label an octree around
the points (middle). The octree at depths 3-7 are shown. Surface leaves of the octree (yellow) are leaves that contain points from the point
cloud, other leaves are labelled as inside (blue) or outside (transparent) by minimizing an energy function. We then train an INR model to
obtain an SDF, using the labelling as supervision for the initial training, after which we can extract a mesh (right).
We propose OG-INR, which uses a discrete representa-
tion in conjunction with the continuous INR representation
(see Figure 2). Given an input unoriented point cloud, we
progressively build an octree from the input points and de-
termine which leaf nodes are surface leaves ( i.e., leaves that
contain a point). We also label all other leaves as inside or
outside leaves (if they should be within or outside the shape
respectively). To do this, we minimize an energy function
that trades off the watertight surface property (that every
surface point should border both the inside and the outside of
the shape) against a minimal surface constraint. We then train
standard INR architectures, initially guiding the training by
the octree labelling, and show that the INR converges much
faster and is less prone to large failure regions.
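A minimal sketch of how such octree labelling could supervise the early INR training is given below; the exact loss and margin in OG-INR may differ, so this is only an illustration.
```python
import torch
import torch.nn.functional as F

def octree_guided_loss(sdf_net, centers, labels, margin=0.01):
    """Push the INR's SDF to agree with the octree labelling.
    centers: (N, 3) leaf-cell centres; labels: (N,) with 0 = outside,
    1 = inside, 2 = surface leaf; margin is an illustrative clearance."""
    sdf = sdf_net(centers).squeeze(-1)                # predicted signed distances
    outside, inside, surface = labels == 0, labels == 1, labels == 2
    loss = sdf.new_zeros(())
    if outside.any():                                 # outside leaves: SDF >= margin
        loss = loss + F.relu(margin - sdf[outside]).mean()
    if inside.any():                                  # inside leaves: SDF <= -margin
        loss = loss + F.relu(sdf[inside] + margin).mean()
    if surface.any():                                 # surface leaves: SDF near zero
        loss = loss + sdf[surface].abs().mean()
    return loss
```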
Our main contributions are:
• We introduce a novel method to guide the initial stage
of INR training. It uses a labelled octree structure to
allow the INR to converge significantly faster and alle-
viates the local minima problem of INRs.
• We propose an energy function over octree labels that
captures the task of surface reconstruction. It balances
the constraint of maintaining known surface regions
with minimising the overall surface area.
• We provide an efficient move-making algorithm to
optimize the energy function, which explores many
inside-outside possibilities in a structured manner.
• Our discrete representation is easily understandable
and human-modifiable, giving an intuitive method for
applying changes to the resulting SDF.
|
Lei_RGBD2_Generative_Scene_Synthesis_via_Incremental_View_Inpainting_Using_RGBD_CVPR_2023
|
Abstract
We address the challenge of recovering an underlying
scene geometry and colors from a sparse set of RGBD
view observations. In this work, we present a new solution
termed RGBD2 that sequentially generates novel RGBD
views along a camera trajectory, and the scene geometry is
simply the fusion result of these views. More specifically,
we maintain an intermediate surface mesh used for rendering
new RGBD views, which subsequently becomes complete
by an inpainting network; each rendered RGBD view
is later back-projected as a partial surface and is supple-
mented into the intermediate mesh. The use of the intermediate
mesh and camera projection helps solve the tough problem
of multi-view inconsistency. We practically implement the
RGBD inpainting network as a versatile RGBD diffusion
model, which was previously used for 2D generative model-
ing; we make a modification to its reverse diffusion process
to enable our use. We evaluate our approach on the task
of 3D scene synthesis from sparse RGBD inputs; extensive
experiments on the ScanNet dataset demonstrate the supe-
riority of our approach over existing ones. Project page:
https://jblei.site/proj/rgbd-diffusion.
|
1. Introduction
Scene synthesis is an essential requirement for many
practical applications. The resulting scene representationcan be readily utilized in diverse fields, such as virtual re-
ality, augmented reality, computer graphics, and game de-
velopment. Nevertheless, conventional approaches to scenesynthesis usually involve reconstructing scenes (e.g., indoor
scenes with varying sizes) by fitting given observations,
such as multi-view images or point clouds. The increas-ing prevalence of RGB/RGBD scanning devices has estab-
lished multi-view data as a favored input modality, driving
and promoting technical advancements in the realm of scenereconstruction from multi-view images.
Neural Radiance Fields (NeRFs) [ 42] have demonstrated
†Correspondence to Kui Jia: <[email protected] >.
Figure 1. Illustration of Our Generative Scene Synthesis. We incrementally reconstruct the scene geometry by inpainting RGBD views (t = 0, 1, 2, 3, ..., final result) as the camera moves in the scene.
potential in this regard, yet they are not exempt from limi-
tations. NeRFs are designed to reconstruct complete scenes
by fitting multi-view images, and they cannot generate or
infer missing parts when the input is inevitably incomplete
or missing. While recently some studies [3,5,8,56,63] have
attempted to equip NeRFs with generative and extrapolation
capabilities, this functionality relies on a comparatively
short representation with limited elements (e.g., typically,
the length of a global latent code is much shorter than that of
an image: (F = 512) ≪ (H × W = 128 × 128 = 16,384))
that significantly constrains their capacity to accurately cap-
ture fine-grained details in the observed data. Consequently,
the effectiveness of these methods has only been established
for certain categories of canonical objects, such as faces or
cars [8,63], or relatively small toy scenes [5].
We introduce a novel task of generative scene synthesis
from sparse RGBD views, which involves learning across
multiple scenes to later enable scene synthesis from a sparse
set of multi-view RGBD images. This task presents a chal-
lenging setting wherein a desired solution should simulta-
neously (1) preserve observed regions and hallucinate missing
parts of the scene, (2) eliminate additional computational
costs during inference for each individual test scene, (3) en-
sure exact 3D consistency, and (4) maintain scalability to
scenes with unfixed scales.
We will elaborate on these in detail as follows. Firstly, to
maximize the preservation of intricate details while simulta-
neously hallucinating potentially absent parts that may be-
come more pronounced when views are exceedingly sparse,
we perform straightforward reconstruction whose details
come from images that can describe fine structures using
a maximum of H×W elements (i.e., an image size) in a
view-completion manner. This is particularly compatible
with diffusion models that operate at full image resolution
with an inpainting mechanism. We also found that RGBD
diffusion models greatly simplify the training complexity
of a completion model, thanks to their versatile generative
ability to inpaint missing RGBD pixels while preserving
the integrity of known regions through a convenient train-
ing process operated solely on complete RGBD data. Sec-
ondly, our method employs back-projection that requires
no optimization, thus eliminating the necessity for test-time
training for each individual scene, ultimately leading to a
significant enhancement in test-time efficiency. Thirdly, to
ensure consistency among multi-view images, an interme-
diate mesh representation is utilized as a means of bridging
the 2D domain (i.e., multi-view RGBD images) with the 3D
domain (i.e., the 3D intermediate mesh) through the aid of
camera projection. Fourthly, to enable our method to handle
scenes of indeterminate sizes, we utilize images with freely
designated poses as the input representation. Such a representation
naturally ensures SE(3) equivariance, and thus offers scala-
bility, since the range of the generated content can be
controlled by simply specifying the camera extrinsic matrices.
Our proposal involves generating multi-view consistent
RGBD views along a predetermined camera trajectory, us-
ing an intermediate mesh to render novel RGBD images
that are subsequently inpainted using a diffusion model,
transforming each RGBD view into a 3D partial mesh
via back-projection, and finally merging it with the inter-
mediate scene mesh to produce the final output. Specifi-
cally, our proposed approach begins by ingesting multiple
posed RGBD images as input and utilizing back-projection
to construct an intermediate scene mesh. This mesh encom-
passes color attributes that facilitate the rendering of RGBD
images from the representation under arbitrarily specified
camera viewpoints. Once a camera pose is selected from
the test-time rendering trajectory, the intermediate mesh is
rendered to generate a new RGBD image for this pose. No-
tably, the test-time view typically exhibits only slight over-
lap with the known cameras, leading to naturally partially
rendered RGBD images. To fill the gaps in the incom-
plete view, we employ an inpainting network implemented
as an RGBD diffusion model with minor modifications to
its reverse sampling process. The resulting inpainted out-
put is then back-projected into 3D space, forming a partial
mesh that complements the entire intermediate scene mesh.
We iterate these steps until all test-time camera viewpoints
are covered, and the intermediate scene mesh gradually be-
comes complete during this process. The final output of our
pipeline is the mesh obtained from the last step.
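The incremental loop described above can be summarized with the following sketch; the render, inpaint, back-projection, and fusion routines are placeholders standing in for the actual components (mesh rasterization, the RGBD diffusion model, and mesh fusion), not the authors' code.
```python
def incremental_scene_synthesis(posed_rgbd_inputs, trajectory,
                                render, inpaint, backproject, fuse):
    """Sketch of the incremental view-inpainting pipeline.
    render(mesh, pose) -> (partial_rgbd, mask); inpaint(rgbd, mask) -> rgbd;
    backproject(rgbd, pose) -> partial mesh; fuse(a, b) -> merged mesh."""
    # Build the initial intermediate mesh from the sparse posed RGBD inputs.
    mesh = None
    for rgbd, pose in posed_rgbd_inputs:
        part = backproject(rgbd, pose)
        mesh = part if mesh is None else fuse(mesh, part)
    # Walk the test-time camera trajectory, completing one view at a time.
    for pose in trajectory:
        partial_rgbd, mask = render(mesh, pose)          # render from current mesh
        full_rgbd = inpaint(partial_rgbd, mask)          # fill gaps with the diffusion model
        mesh = fuse(mesh, backproject(full_rgbd, pose))  # merge new partial surface
    return mesh                                          # final scene mesh
```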
Extensive experiments on the ScanNet [12] dataset demon-
strate the superiority of our approach over existing solutions
on the task of scene synthesis from sparse RGBD inputs.
|
Lin_Harmonious_Feature_Learning_for_Interactive_Hand-Object_Pose_Estimation_CVPR_2023
|
Abstract
Joint hand and object pose estimation from a single
image is extremely challenging as serious occlusion often
occurs when the hand and object interact. Existing ap-
proaches typically first extract coarse hand and object fea-
tures from a single backbone, then further enhance them
with reference to each other via interaction modules. How-
ever, these works usually ignore that the hand and ob-
ject are competitive in feature learning, since the backbone
takes both of them as foreground and they are usually mu-
tually occluded. In this paper, we propose a novel Har-
monious Feature Learning Network (HFL-Net). HFL-Net
introduces a new framework that combines the advantages
of single- and double-stream backbones: it shares the pa-
rameters of the low- and high-level convolutional layers of
a common ResNet-50 model for the hand and object, leav-
ing the middle-level layers unshared. This strategy enables
the hand and the object to be extracted as the sole targets
by the middle-level layers, avoiding their competition in
feature learning. The shared high-level layers also force
their features to be harmonious, thereby facilitating their
mutual feature enhancement. In particular, we propose to
enhance the feature of the hand via concatenation with the
feature in the same location from the object stream. A sub-
sequent self-attention layer is adopted to deeply fuse the
concatenated feature. Experimental results show that our
proposed approach consistently outperforms state-of-the-
art methods on the popular HO3D and Dex-YCB databases.
Notably, the performance of our model on hand pose esti-
mation even surpasses that of existing works that only per-
form the single-hand pose estimation task. Code is avail-
able at https://github.com/lzfff12/HFL-Net.
|
1. Introduction
When humans interact with the physical world, they pri-
marily do so by using their hands. Thus, an accurate un-
derstanding of how hands interact with objects is essen-
*Corresponding author.
Figure 1. HFL-Net accurately predicts the 3D hand and object poses from single monocular RGB images (shown as input image, front view, and other view), even in serious occlusion scenarios.
tial to the understanding of human behavior. It can be
widely applied to a range of fields, including the devel-
opment of virtual reality [36], augmented reality [33, 34],
and imitation-based robot learning [35], among others. Re-
cently, hand pose estimation [12–16] and 6D object pose
estimation [17–19] based on monocular RGB images have
respectively achieved remarkable results. However, the re-
search into joint hand-object pose estimation under circum-
stances of interaction remains in its infancy [2,3,23,26–28].
As illustrated in Figure 1, joint hand-object pose esti-
mation from a single image is extremely challenging. The
main reason for this is that when the hand and object inter-
act with each other, serious occlusion occurs; occlusion, in
turn, results in information loss, increasing the difficulty of
each task.
One mainstream solution to this problem is to utilize
context. Due to physical constraints, the interacting hand
and object tend to be highly correlated in terms of their
poses, meaning that the appearance of one can be useful
context for the other [1–3]. Methods that adopt this solu-
tion typically employ a single backbone to extract features
for the hand and object, respectively [2, 22, 27]. This uni-
fied backbone model ensures that the hand and object fea-
tures are in the same space, which facilitates the subsequent
mutual feature enhancement between hand and object via
attention-based methods [2].
However, the hand and object pose estimation tasks are
competitive in feature learning if a single backbone model
is utilized. In more detail, when the hand and object are
close to each other, the backbone model treats them both as
foreground, and may thus be unable to differentiate the hand
features from those of the object. A straightforward solution
is to utilize two backbones [1, 3, 23], one for the hand and
the other one for the object; when this approach is adopted,
each backbone has only one target as the foreground. The
main downsides of this strategy include large model size
and (more importantly) the different feature spaces between
backbones, which introduce difficulties with regard to mu-
tual feature enhancement between the hand and object.
To solve the aforementioned problems, we propose a
novel Harmonious Feature Learning Network (HFL-Net).
HFL-Net introduces a new framework that combines the
advantages of single- and double-stream backbones. In
more detail, our backbone shares the parameters of the low-
and high-level convolutional layers of a common ResNet-50
model [4] for the hand and object, leaving the middle-level
layers unshared. Feature maps produced by low-level layers
are fed into the two sets of middle-level layers, which regard
the hand and object respectively as the sole foreground tar-
get. As a result, feature learning for the hand and object is
no longer competitive. Finally, through sharing the param-
eters of the high-level convolutional layers, the hand and
object features are forced to be in similar feature spaces. In
this way, our backbone realizes harmonious feature learning
for the hand and object pose estimation.
We further enhance the representation power of the hand
and object features through the use of efficient attention
models. Several existing methods have successfully real-
ized hand-to-object feature enhancement via cross-attention
operations [1, 2]; however, object-to-hand feature enhance-
ment usually turns out to be difficult [1, 2]. Motivated by
the observation that when one pixel on the hand is occluded,
the object feature in the same location usually provides use-
ful cues, we propose a simple but effective strategy for fa-
cilitating object-to-hand feature enhancement. Specifically,
we adopt ROIAlign [6] to extract fixed-size feature maps
from the two output streams of our backbone respectively
according to the hand bounding box. We then concatenate
the two feature maps along the channel dimension and feed
the obtained feature maps into a self-attention module [7].
Object-to-hand feature enhancement is automatically real-
ized via the fully-connected and multi-head attention layers
in the self-attention module. Finally, we split the output
feature maps by the self-attention layer along the channel
dimension, and take the first half as the enhanced hand feature maps.
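A rough PyTorch-style sketch of this object-to-hand enhancement step is given below; the feature dimensions, ROI size, and attention configuration are illustrative assumptions, and the paper's module (which also contains fully-connected layers) may differ in detail.
```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class ObjectToHandEnhancer(nn.Module):
    """Concatenate hand-box features from the hand and object streams,
    fuse them with self-attention, and keep the first half of the channels
    as the enhanced hand feature. Dimensions are illustrative."""
    def __init__(self, dim=256, roi_size=32, heads=8):
        super().__init__()
        self.roi_size = roi_size
        self.attn = nn.MultiheadAttention(2 * dim, heads, batch_first=True)

    def forward(self, hand_feat, obj_feat, hand_boxes):
        # hand_boxes: list of (K_i, 4) boxes per image, in feature-map coordinates.
        h = roi_align(hand_feat, hand_boxes, self.roi_size)   # (R, C, S, S)
        o = roi_align(obj_feat, hand_boxes, self.roi_size)    # same boxes, object stream
        x = torch.cat([h, o], dim=1)                          # (R, 2C, S, S)
        tokens = x.flatten(2).transpose(1, 2)                 # (R, S*S, 2C)
        fused, _ = self.attn(tokens, tokens, tokens)          # self-attention fusion
        fused = fused.transpose(1, 2).reshape_as(x)           # back to (R, 2C, S, S)
        return fused[:, : x.shape[1] // 2]                    # first half: hand features
```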
We demonstrate the effectiveness of HFL-Net through
comprehensive experiments on two benchmarks: HO3D [9]
and Dex-YCB [10], and find that our method consistently
outperforms state-of-the-art works on the joint hand-object
pose estimation task. Moreover, benefiting from the learned
harmonious hand and object features, the hand and object
pose estimation tasks in HFL-Net are mutually beneficial
rather than competitive. In our experiments, the perfor-
mance of HFL-Net on the hand pose estimation task sur-
passes even recent works [12,15,32] that only estimate hand
poses in both the training and testing stages.
|
Kan_Self-Correctable_and_Adaptable_Inference_for_Generalizable_Human_Pose_Estimation_CVPR_2023
|
Abstract
A central challenge in human pose estimation, as well
as in many other machine learning and prediction tasks, is
the generalization problem. The learned network does not
have the capability to characterize the prediction error, gen-
erate feedback information from the test sample, and cor-
rect the prediction error on the fly for each individual test
sample, which results in degraded performance in general-
ization. In this work, we introduce a self-correctable and
adaptable inference (SCAI) method to address the general-
ization challenge of network prediction and use human pose
estimation as an example to demonstrate its effectiveness
and performance. We learn a correction network to correct
the prediction result conditioned on a fitness feedback er-
ror. This feedback error is generated by a learned fitness
feedback network which maps the prediction result to the
original input domain and compares it against the original
input. Interestingly, we find that this self-referential feed-
back error is highly correlated with the actual prediction
error. This strong correlation suggests that we can use this
error as feedback to guide the correction process. It can
be also used as a loss function to quickly adapt and opti-
mize the correction network during the inference process.
Our extensive experimental results on human pose estima-
tion demonstrate that the proposed SCAI method is able to
significantly improve the generalization capability and per-
formance of human pose estimation.
|
1. Introduction
Human pose estimation (HPE) aims to correctly predict
and localize human body joints. A variety of downstream
applications are based on human pose estimation, such as
motion capture [7, 27], activity recognition [1, 6, 37], per-
son tracking [36, 41] and video surveillance [18]. Recently,
deep learning-based methods for human pose estimation
*Corresponding author.
have achieved remarkable success [2,4,12,26,28,30]. How-
ever, in complex or unseen scenarios, pose estimation re-
mains very challenging due to occlusions, cluttered back-
ground, and large variations of appearance and scenes, es-
pecially for those distal keypoints at the end locations of
body parts, such as wrists and ankles, which have large de-
grees of motion freedom and often suffer from severe oc-
clusions [13, 39].
We recognize that one major challenge in current human
pose estimation, as well as in many other prediction tasks,
is generalization. Network models, which have been well
learned on the training set, often experience significant per-
formance degradation on the test samples which are col-
lected from different environments or scenarios. For exam-
ple, in human pose estimation, there are different types of
occlusions of body parts due to complex scene structures
and free-style motions of human bodies. More importantly,
the occlusion scenarios of the test samples could be much
different from those in the training samples. This often
leads to the significant performance degradation of human
pose estimation from the training data to the test data. For
example, in our experiment, the average prediction accuracy
on the training samples is 95.5%. However, on the test set,
this accuracy drops to 67%. For those distal keypoints at tip
locations of body parts which often experience more signif-
icant occlusions, their average performance drop is much
more significant, from 95.3% to 57%.
To address this performance degradation or generaliza-
tion problem, there are two major questions that need to be
carefully answered: (1) how can we tell if the prediction
is accurate or not during testing and how to characterize
the prediction error? This is difficult because the ground
truth values of the test samples are not available during test-
ing. Specifically, in pose estimation, we do not have the
labeled ground truth locations of the body keypoints. (2)
How to correct the prediction error based on the specific
characteristics of the test sample? Current network models,
once successfully trained with labeled samples at the train-
ing side, remain fixed during testing, performing the feed-
forward-only inference process to generate the prediction
result. There is no mechanism for us to examine the spe-
cific characteristics of the test sample and use them as feed-
back to correct the prediction error or adjust the network
model. We believe that this unique capability of sample-
specific prediction error characterization, error correction,
and model optimization is very important for the general-
ization performance of learned network models. It also has
the potential to significantly improve the prediction accu-
racy of test samples.
To address these two challenging issues, in this work,
we propose to explore a learning-based feedback-control or
correction method for prediction, with applications to hu-
man pose prediction. Specifically, let v̂ = Φ(u) be the
prediction network which is tasked to predict the true value
of v from input u. To answer the first question, we design
and learn a fitness feedback network Γ which compares the
prediction result v̂ = Φ(u) of the prediction network Φ
against the original input u and generates a self-referential
feedback error. Very interestingly, in this work, we find that
this self-referential feedback error is highly correlated with
the prediction error of the network Φ. Note that, when com-
puting the self-referential error, we do not need the ground
truth data. It can be directly computed on the input sample
using the prediction-feedback networks. This allows us to
characterize the prediction error of test samples.
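A minimal sketch of this idea is given below; the squared reconstruction error, the way the error conditions the correction network, and the adaptation schedule are all our own assumptions rather than the paper's exact formulation.
```python
import torch

def self_referential_error(u, v_hat, feedback_net):
    """Map the prediction back to the input domain with the fitness feedback
    network (Gamma) and compare against the original input u."""
    u_rec = feedback_net(v_hat)                           # Gamma(v_hat)
    return ((u_rec - u) ** 2).flatten(1).mean(dim=1)      # per-sample feedback error

def correct_and_adapt(u, pred_net, feedback_net, corr_net, steps=5, lr=1e-4):
    """Correct the prediction with C conditioned on the feedback error, and
    adapt C on the test sample by minimizing the self-referential loss."""
    opt = torch.optim.Adam(corr_net.parameters(), lr=lr)
    v_hat = pred_net(u).detach()                          # base prediction, kept fixed
    for _ in range(steps):                                # test-time adaptation of C
        err = self_referential_error(u, v_hat, feedback_net)
        v_corr = corr_net(v_hat, err)                     # corrected prediction
        loss = self_referential_error(u, v_corr, feedback_net).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        err = self_referential_error(u, v_hat, feedback_net)
        return corr_net(v_hat, err)                       # final corrected output
```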
Under the guidance of self-referential error feedback, we
train a prediction error correction network C to adjust the
inference results during the prediction process to improve
the prediction accuracy for the test samples. Besides, we
find that the self-referential error and the fitness feedback
network (FFN) can be used to construct a self-referential
loss function on the test samples to quickly adapt and opti-
mize the network model during the inference stage, making
the model learnable on the test side. We apply the above
self-correctable and adaptable inference (SCAI) method to
human pose estimation. Our extensive experimental results
on benchmark datasets demonstrate that the proposed SCAI
method is able to significantly improve the generalization
capability of the underlying prediction algorithm. It out-
performs the existing state-of-the-art methods on human
pose estimation by large margins. For example, on the MS
COCO-testdev dataset, our method improves upon the cur-
rent best method by up to 1.4%, which is quite significant.
|
Kim_PartMix_Regularization_Strategy_To_Learn_Part_Discovery_for_Visible-Infrared_Person_CVPR_2023
|
Abstract
Modern data augmentation using a mixture-based tech-
nique can regularize models against overfitting to the
training data in various computer vision applications,
but a proper data augmentation technique tailored for
the part-based Visible-Infrared person Re-IDentification
(VI-ReID) models remains unexplored. In this paper, we
present a novel data augmentation technique, dubbed
PartMix , that synthesizes the augmented samples by mixing
the part descriptors across the modalities to improve the
performance of part-based VI-ReID models. Especially,
we synthesize the positive and negative samples within
the same and across different identities and regularize
the backbone model through contrastive learning. In
addition, we also present an entropy-based mining strat-
egy to weaken the adverse impact of unreliable positive
and negative samples. When incorporated into existing
part-based VI-ReID model, PartMix consistently boosts
the performance. We conduct experiments to demonstrate
the effectiveness of our PartMix over the existing VI-ReID
methods and provide ablation studies.
|
1. Introduction
Person Re-IDentification (ReID), aiming to match per-
son images in a query set to ones in a gallery set cap-
tured by non-overlapping cameras, has recently received
substantial attention in numerous computer vision applica-
tions, including video surveillance, security, and persons
analysis [64, 76]. Many ReID approaches [1, 2, 25, 26, 32,
41, 59, 73, 78] formulate the task as a visible-modality re-
trieval problem, which may fail to achieve satisfactory re-
sults under poor illumination conditions. To address this,
most surveillance systems use an infrared camera that can
capture the scene even in low-light conditions. However,
*Corresponding author
This research was supported by the National Research Founda-
tion of Korea (NRF) grant funded by the Korea government (MSIP)
(NRF2021R1A2C2006703).
(Figure 1 panels: (a) MixUp [69]; (b) CutMix [68]; (c) PartMix (Ours), each mixing visible-ID and infrared-ID samples.)
Figure 1. Comparison of data augmentation methods for VI-
ReID. (a) MixUp [69] using a global image mixture and (b) Cut-
Mix [68] using a local image mixture can be used to regularize
a model for VI-ReID, but these methods provide limited perfor-
mances because they yield unnatural patterns or local patches with
only background or single human part. Unlike them, we present
(c) PartMix using a part descriptor mixing strategy, which boosts
the VI-ReID performance (Best viewed in color).
directly matching these infrared images to visible ones for
ReID poses additional challenges due to inter-modality
variation [60, 62, 65].
To alleviate these inherent challenges, Visible-Infrared
person Re-IDentification (VI-ReID) [5–7, 12, 24, 35, 48,
54, 60–62, 66] has been popularly proposed to handle the
large intra- and inter-modality variations between visible
images and their infrared counterparts. Formally, these ap-
proaches first extract a person representation from whole
visible and infrared images, respectively, and then learn
a modality-invariant feature representation using feature
alignment techniques, e.g., triplet [5,7,24,60–62] or ranking
criterion [12, 66], so as to remove the inter-modality varia-
tion. However, these global feature representations solely
focus on the most discriminative part while ignoring the di-
verse parts which are helpful to distinguish the person iden-
tity [53, 63].
Recent approaches [53, 57, 63] attempted to further en-
hance the discriminative power of person representation for
VI-ReID by capturing diverse human body parts across dif-
ferent modalities. Typically, they first capture several hu-
man parts through, e.g., horizontal stripes [63], cluster-
ing [53], or attention mechanisms [57] from both visible
and infrared images, extract the features from these human
parts, and then reduce inter-modality variation in a part-
level feature representation. Although these methods re-
duce inter-modality variation through the final prediction
(e.g., identity probability), learning such a part detector still
leads to overfitting to specific parts because the model
mainly focuses on the most discriminative part to classify
the identity, as demonstrated in [4, 15, 31, 52]. In addition,
since these parts differ depending on the modality, errors accu-
mulate in the subsequent inter-modality alignment
process, which hinders the generalization ability on unseen
identities in the test set.
On the other hand, many data augmentation methods [19, 22, 38,
45, 68, 69] enlarge the training set through the image mix-
ture technique [69]. They typically exploit the samples that
linearly interpolate the global [38, 44, 69] or local [19, 68]
images and label pairs for training, allowing the model
to have smoother decision boundaries that reduce overfit-
ting to the training samples. This framework also can be
a promising solution to reduce inter-modality variation by
mixing the different modality samples to mitigate overfit-
ting to the specific modality, but directly applying these
techniques to part-based VI-ReID models is challenging in
that they inherit the limitation of global and local image
mixture methods (e.g., ambiguous and unnatural patterns,
and local patches with only background or a single human
part). Therefore, the performance of part-based VI-ReID
with these existing augmentations would be degraded.
In this paper, we propose a novel data augmentation tech-
nique for VI-ReID task, called PartMix , that synthesizes
the part-aware augmented samples by mixing the part de-
scriptors. Based on the observation that learning with the
unseen combination of human parts may help better regu-
larize the VI-ReID model, we randomly mix the inter- and
intra-modality part descriptors to generate positive and neg-
ative samples within the same and across different identi-
ties, and regularize the model through the contrastive learn-
ing. In addition, we also present an entropy-based mining
strategy to weaken the adverse impact of unreliable posi-tive and negative samples. We demonstrate the effective-
ness of our method on several benchmarks [33,55]. We also
provide an extensive ablation study to validate and analyze
components in our model.
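A minimal sketch of the core mixing operation is shown below; our own simplification omits the labels, the contrastive pairing, and the entropy-based mining, and the part count is hypothetical.
```python
import torch

def partmix(parts_vis, parts_inf, num_mix=None):
    """Swap a random subset of part descriptors between a visible and an
    infrared sample. parts_vis, parts_inf: (P, D) part descriptors."""
    P = parts_vis.shape[0]
    num_mix = num_mix or int(torch.randint(1, P, (1,)))   # how many parts to swap
    idx = torch.randperm(P)[:num_mix]                      # which parts to swap
    mixed_vis, mixed_inf = parts_vis.clone(), parts_inf.clone()
    mixed_vis[idx], mixed_inf[idx] = parts_inf[idx], parts_vis[idx]
    return mixed_vis, mixed_inf
```
Mixed descriptors generated within the same identity would serve as positives and those generated across identities as negatives for the contrastive regularization described above.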
|
Lee_BAAM_Monocular_3D_Pose_and_Shape_Reconstruction_With_Bi-Contextual_Attention_CVPR_2023
|
Abstract
A 3D traffic scene comprises various types of 3D information about
car objects, including their pose and shape. However,
most recent studies pay relatively less attention to recon-
structing detailed shapes. Furthermore, most of them treat
each 3D object as an independent one, resulting in losses
of relative context inter-objects and scene context reflect-
ing road circumstances. A novel monocular 3D pose and
shape reconstruction algorithm, based on bi-contextual at-
tention and attention-guided modeling (BAAM), is proposed
in this work. First, given 2D primitives, we reconstruct
3D object shape based on attention-guided modeling that
considers the relevance between detected objects and ve-
hicle shape priors. Next, we estimate 3D object pose
through bi-contextual attention, which leverages relation-
context inter objects and scene-context between an object
and road environment. Finally, we propose a 3D non-
maximum suppression algorithm to eliminate spurious ob-
jects based on their Bird-Eye-View distance. Extensive
experiments demonstrate that the proposed BAAM yields
state-of-the-art performance on ApolloCar3D. Also, they
show that the proposed BAAM can be plugged into any
mature monocular 3D object detector on KITTI and sig-
nificantly boost their performance. Code is available at
https://github.com/gywns6287/BAAM.
|
1. Introduction
3D traffic scene understanding provides enriched de-
scriptions of the dynamic objects, e.g., 3D shape, pose,
and location, compared to representing objects as bounding
boxes. 3D visual perception is crucial for the autonomous
driving system to develop downstream tasks such as mo-
tion prediction and planning, and helps to faithfully recon-
Corresponding author
†These authors contributed equally to this work.
Figure 1. Reconstructed 3D scene from an input image, with rough bounding boxes (right up) and with detailed shapes (right down). For better 3D reconstruction, detailed 3D shapes are needed rather than simple 3D bounding boxes.
struct the traffic scene from recorded data. To acquire pre-
cise 3D information, some prior arts have relied on specific
devices such as LiDAR [3,10,42] and stereo vision [26,44].
However, as such systems become complex and costly, they
quickly reach a scalability limit. In contrast, 3D perception
using monocular vision has been receiving attention due to its
simplicity and cost efficiency [4, 7, 19, 29, 33, 50, 51].
Monocular 3D perception is an ill-posed problem in that
projective geometry inherently loses depth information. In
particular, a traffic scene contains partially observable ob-
jects and shows fine-grained classes which are visually
confusing. Pseudo-LiDAR [49] presents a feasible solution
for image-based 3D object detection. To reconstruct 3D
poses of the objects, many studies [25, 27, 30, 33, 40, 41, 50,
51] focus on using geometry constraints between 2D and
3D. Yet, it is less studied in the line of research that leverage
relative context among the objects and global scene context
depending on road environment.
Figure 1 compares the reconstructed 3D scene with 3D
bounding boxes and detailed 3D shapes. With a detailed 3D
shape, we render the traffic scene realistically and provide
intuitive representations of the objects. Despite scale ambi-
guity of the monocular 3D perception, 3D mesh provides a
strong clue to align instances’ scales and orientations. Con-
currently, there have been many attempts [8, 20, 22, 31, 45,
46] to reconstruct the 3D shape of human objects. These
methods mainly focus on learning a PCA basis to represent
human shapes. Inspired by human shape reconstruction,
recent methods [21, 24] also design a PCA basis for vehicle
shape reconstruction. However, as pointed out in [1, 34],
a PCA basis often loses object details and thus leads to un-
satisfactory reconstruction.
In this work, we propose a novel 3D pose and shape
estimation algorithm, utilizing bi-contextual attention and
attention-guided modeling (BAAM). Given a monocular
RGB image, the proposed BAAM first extracts various 2D
primitive features such as appearance, position, and size.
It then constructs object features that embed internal object
structures by aggregating primitive features. For detailed
object shapes, we introduce shape priors consisting of the
mean shape and various template offsets to represent de-
tails of vehicle shapes. Then, BAAM reconstructs objects’
3D shapes as mesh structures with attention-guided mod-
eling, which combines shape prior and individual object
features based on their relevance. For accurate pose es-
timation, we present the notion of bi-contextual attention
consisting of relation-context and scene-context, which de-
scribe the relationship inter objects and between object and
road environment, respectively. Based on this rich infor-
mation, BAAM integrates object features to predict ob-
jects’ 3D poses through a carefully designed bi-contextual
attention module. Finally, we propose a novel 3D non-
maximum suppression (NMS) algorithm that effectively re-
moves spurious objects based on Bird-Eye-view (BEV) ge-
ometry. Extensive experiments on Apollocar3D [43] and
KITTI [12] datasets demonstrate the effectiveness of the
proposed BAAM algorithm. Also, experiments show that
the proposed method significantly outperforms the state of the
art [21, 43] in both pose and shape estimation. The main
contributions of our work are four folds:
We propose attention-guided modeling, which recon-
structs objects' shapes based on the relevance between
objects and vehicle shape priors.
We propose the bi-contextual attention module, which
estimates objects' poses by exploiting relation context
among objects and scene context between an object and
the road environment.
We also develop a novel 3D non-maximum suppres-
sion algorithm to remove spurious objects based on
their Bird-Eye-View distance (see the sketch after this list).
The proposed BAAM algorithm achieves state-of-
the-art performance on ApolloCar3D [43]. Also, ex-
periments on KITTI [12] show that the proposed al-
gorithm can significantly improve the performance of
existing monocular 3D detectors.
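To make the BEV-distance-based 3D NMS referenced above concrete, here is a minimal sketch; the greedy scheme and the distance threshold are our own illustrative assumptions, not necessarily the paper's exact procedure.
```python
import numpy as np

def bev_nms(bev_centers, scores, dist_thresh=2.0):
    """Greedy 3D NMS on Bird-Eye-View distance: keep the highest-scoring
    detection and suppress any detection whose BEV (x, z) centre lies within
    dist_thresh (metres, illustrative) of an already kept one."""
    centers = np.asarray(bev_centers, dtype=float)     # (N, 2) BEV centres
    order = np.argsort(-np.asarray(scores))            # highest score first
    suppressed = np.zeros(len(centers), dtype=bool)
    keep = []
    for i in order:
        if suppressed[i]:
            continue
        keep.append(i)
        dists = np.linalg.norm(centers - centers[i], axis=1)
        suppressed |= dists < dist_thresh              # suppress close duplicates
    return keep
```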
|
Li_Diffusion-SDF_Text-To-Shape_via_Voxelized_Diffusion_CVPR_2023
|
Abstract
With the rising industrial attention to 3D virtual mod-
eling technology, generating novel 3D content based on
specified conditions ( e.g. text) has become a hot issue.
In this paper, we propose a new generative 3D model-
ing framework called Diffusion-SDF for the challenging
task of text-to-shape synthesis. Previous approaches lack
flexibility in both 3D data representation and shape gen-
eration, thereby failing to generate highly diversified 3D
shapes conforming to the given text descriptions. To ad-
dress this, we propose a SDF autoencoder together with
the Voxelized Diffusion model to learn and generate rep-
resentations for voxelized signed distance fields (SDFs) of
3D shapes. Specifically, we design a novel UinU-Net ar-
chitecture that implants a local-focused inner network in-
side the standard U-Net architecture, which enables better
reconstruction of patch-independent SDF representations.
We extend our approach to further text-to-shape tasks in-
cluding text-conditioned shape completion and manipula-
tion. Experimental results show that Diffusion-SDF gen-
erates both higher quality and more diversified 3D shapes
that conform well to given text descriptions when compared
to previous approaches. Code is available at: https:
//github.com/ttlmh/Diffusion-SDF .
†Corresponding author.
|
1. Introduction
Exploring data representations for 3D shapes has been a
fundamental and critical issue in 3D computer vision. Ex-
plicit 3D representations including point clouds [38, 39],
polygon meshes [17, 24] and occupancy voxel grids [9, 53]
have been widely applied in various 3D downstream ap-
plications [1, 35, 56]. While explicit 3D representations
achieve encouraging performance, there are some primary
limitations including not being suitable for generating wa-
tertight surfaces ( e.g. point clouds), or being subject to topo-
logical constraints ( e.g. meshes). On the other hand, im-
plicit 3D representations have been widely studied more
recently [3, 14, 37], with representative works including
DeepSDF [37], Occupancy Network [32] and IM-Net [8].
In general, implicit functions encode the shapes by the iso-
surface of the function, which is a continuous field and can
be evaluated at arbitrary resolution.
In recent years, numerous explorations have been con-
ducted for implicit 3D generative models, which show
promising performance on several downstream applications
such as single/multi-view 3D reconstruction [23, 54] and
shape completion [12, 34]. Besides, several studies have
also explored the feasibility of directly generating novel 3D
shapes based on implicit representations [15, 21]. How-
ever, these approaches are incapable of generating specified
3D shapes that match a given condition, e.g. a short text
describing the shape characteristics as shown in Figure 1.
Text-based visual content synthesis has the advantages of
flexibility and generality [41, 42]. Users may generate
rich and diverse 3D shapes based on easily obtained natural
language descriptors. In addition to generating 3D shapes
directly based on text descriptions, manipulating 3D data
with text guidance can be further utilized for iterative 3D
synthesis and fine-grained 3D editing, which can be benefi-
cial for non-expert users to create 3D visual content.
In the literature, there have been few attempts at the
challenging task of text-to-shape generation based on im-
plicit 3D representations [29, 34, 48]. For example, Au-
toSDF [34] introduced a vector quantized SDF autoen-
coder together with an autoregressive generator for shape
generation. While encouraging progress has been made,
the quality and diversity of generated shapes still require
improvement. The current approaches struggle to gener-
ate highly diversified 3D shapes that both guarantee gen-
eration quality and conform to the semantics of the input
text. Motivated by the success of denoising diffusion models (DMs) in 2D image [13, 19, 36] and even explicit 3D point cloud [31, 58, 59] generation, we observe that DMs achieve high-quality and highly diversified generation while remaining stable to train. To this end, we aim to design an
implicit 3D representation-based generative diffusion pro-
cess for text-to-shape synthesis that can achieve better gen-
eration flexibility and generalization performance.
In this paper, we propose the Diffusion-SDF framework
for text-to-shape synthesis based on truncated signed dis-
tance fields (TSDFs). Considering that 3D shapes share
structural similarities at local scales, and the cubic data vol-
ume of 3D voxels may lead to slow sampling speed for dif-
fusion models, we propose a two-stage separated generation
pipeline. First, we introduce a patch-based SDF autoen-
coder that maps the original signed distance fields into patch-
independent local Gaussian latent representations. Sec-
ond, we introduce the Voxelized Diffusion model that cap-
tures the intra-patch information along with both patch-to-
patch and patch-to-global relations. Specifically, we de-
sign a novel UinU -Net architecture to replace the standard
U-Net [46] for the noise estimator in the reverse process.
UinU -Net implants a local-focused inner network inside
the outer U-Net backbone, which takes into account the
patch-independent prior of SDF representations to better re-
construct local patch features from noise. Our work further explores the potential of diffusion model-based approaches for text-conditioned 3D shape synthesis based on voxelized TSDFs. Experiments on the largest ex-
isting text-shape dataset [6] show that our Diffusion-SDF
approach achieves promising generation performance on
text-to-shape tasks compared to existing state-of-the-art ap-
proaches, in both qualitative and quantitative evaluations.
Strictly speaking, our approach employs a combined explicit-implicit
representation in the form of voxelized signed distance fields.
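For concreteness, the following is a minimal PyTorch-style sketch of the two-stage idea described above: a patch-wise SDF encoder that maps each local TSDF patch to an independent latent, and a noise estimator whose patch-local ("inner") branch is combined with a standard ("outer") U-Net-like backbone. It is our own illustration under stated assumptions (module sizes, the additive fusion of the two branches, and names such as PatchSDFEncoder are invented here; timestep and text conditioning are omitted), not the released implementation.

import torch
import torch.nn as nn

class PatchSDFEncoder(nn.Module):
    """Illustrative patch-wise encoder: each non-overlapping 4x4x4 TSDF patch -> one latent vector."""
    def __init__(self, patch=4, dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=patch, stride=patch),  # one latent per patch
            nn.SiLU(),
            nn.Conv3d(32, dim, kernel_size=1),
        )

    def forward(self, sdf):           # sdf: (B, 1, D, H, W) voxelized TSDF
        return self.net(sdf)          # (B, dim, D/4, H/4, W/4) patch-independent latents

class UinUNoiseEstimator(nn.Module):
    """Outer branch models patch-to-patch and patch-to-global relations; the inner branch uses
    only 1x1x1 convolutions, so it reconstructs each patch latent independently from noise."""
    def __init__(self, dim=8, hidden=64):
        super().__init__()
        self.outer = nn.Sequential(   # stand-in for a full 3D U-Net backbone
            nn.Conv3d(dim, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv3d(hidden, dim, 3, padding=1),
        )
        self.inner = nn.Sequential(   # local-focused network implanted inside the outer one
            nn.Conv3d(dim, hidden, 1), nn.SiLU(),
            nn.Conv3d(hidden, dim, 1),
        )

    def forward(self, z_t, t=None):   # z_t: noisy patch latents; timestep embedding omitted here
        return self.outer(z_t) + self.inner(z_t)   # predicted noise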
|
Liu_Diversity-Measurable_Anomaly_Detection_CVPR_2023
|
Abstract
Reconstruction-based anomaly detection models achieve
their purpose by suppressing the generalization ability for anomalies. However, diverse normal patterns are consequently not well reconstructed either. Although some ef-
forts have been made to alleviate this problem by modeling
sample diversity, they suffer from shortcut learning due to
undesired transmission of abnormal information. In this
paper, to better handle the tradeoff problem, we propose
the Diversity-Measurable Anomaly Detection (DMAD) framework to enhance reconstruction diversity while avoiding undesired generalization to anomalies. To this end, we design the Pyramid Deformation Module (PDM), which mod-
els diverse normals and measures the severity of anomaly
by estimating multi-scale deformation fields from recon-
structed reference to original input. Integrated with an in-
formation compression module, PDM essentially decouples
deformation from prototypical embedding and makes the fi-
nal anomaly score more reliable. Experimental results on
both surveillance videos and industrial images demonstrate
the effectiveness of our method. In addition, DMAD works
equally well in the presence of contaminated data and anomaly-like
normal samples.
|
1. Introduction
Visual anomaly detection is a fundamental and important
problem in computer vision community, with wide applica-
tions in video surveillance and industrial inspection. It aims
to detect outliers from seen classes and novel patterns from
unseen classes. This task is very challenging because abnor-
mal data is diversely distributed and expensive to collect. So
we have to construct models based on only normal samples
under an unsupervised setting, aiming at high discrimination
between normal and abnormal samples.
During the past decade, reconstruction-based methods
have achieved great progress in anomaly detection. These
methods use Autoencoders (AEs) [8, 9, 20, 24, 26, 29, 40] or
Generative Adversarial Networks (GANs) [1, 19, 32] to re-
Figure 1. Illustration of difficulty in anomaly detection in MNIST
dataset. The prototype is indicated by orange triangle and the
anomaly by red point. In this case, the anomaly can hardly
be detected based on reconstruction error or distance in high-
dimensional feature space. Our solution is illustrated in Fig. 2.
construct the normal counterparts from any input images or
video frames. AE-based methods firstly compress the in-
puts to discard the information beyond normal prototypes,
and then decode the embedding to reconstruct the inputs.
According to the estimated reconstruction error, the anoma-
lies can be detected.
However, the performance of reconstruction-based
methods for anomaly detection has long been limited by a
tough problem, i.e., the tradeoff between reconstructing diverse normals and detecting unknown anomalies. In order
to discriminate anomalies more easily, previous works [8]
impose more constraints to suppress abnormal information
during autoencoding, which leads to high reconstruction error
for diverse normal instances. For example, in Figs. 1 and 2g,
the severely deformed normal (a.k.a. anomaly-like) sample "7"
has an even higher error than the abnormal sample "4". To
better reconstruct diverse normals, each query vector corresponds
to multiple prototypes in the memory, which may be combined
into an abnormal embedding even if the abnormal projection is
far away from the prototypes. As a consequence, anomalies
distributed in the low-likelihood area between prototypical
embeddings are difficult to distinguish from diverse normals.
MNAD [29] introduces skip-connections for diverse reconstruction
and additional constraints to get around the incorrect combination
problem. But the latter forces the model to transmit more
unrestrained information, including the abnormal part, through
the skip-connections, resulting in shortcut learning and undesired
reconstruction of anomalies.
Figure 2. Illustration of our diversity-measurable method in addressing
the detection difficulty. Numbers in white are anomaly scores.
a) Original input; b) Reconstructed reference; c) Coarse deformation;
d) Fine deformation; e) Measurement of diversity (in this case we only
count fine deformation, because deformations in position and angle are
considered normal; in real-world experiments we consider both coarse
and fine deformations); f) Deformation-augmented error map, which
assigns a lower anomaly score to the anomaly-like sample than to the
true anomaly; g) Pixel-wise reconstruction error, which yields incorrect
anomaly scores.
A key to addressing the above tradeoff problem is to find a proper measurement of the diversity that normal and abnormal samples exhibit, which is positively correlated with the
severity of anomaly. With such a measure, we do not need
to fight against imperfect reconstruction of normals or un-
desired reconstruction of anomalies, because anomalies can
be detected more accurately by the diversity measure to-
gether with the reconstruction error. Note that pixel-wise
reconstruction error is not an ideal measurement of diver-
sity, because the high-error region often confuses anomalies
with diverse normals, e.g. normals with structural deforma-
tion and anomalies with colors close to the background may
yield unreliable reconstruction error.
In this paper, we propose a Diversity-Measurable
Anomaly Detection (DMAD) framework to enhance the
measurability of reconstruction diversity so as to measure
abnormality more accurately. Our basic idea is to decouple
the reconstruction into compact representation of prototyp-
ical normals and measurable deformations of more diverse
normals and anomalies. The under-estimated reconstruc-
tion error can be compensated by the diversity, which can
be properly measured. To this end, the DMAD framework
includes a Pyramid Deformation Module (PDM) to model
and measure the diversity and an Information Compression
Module (ICM) to learn the prototypical normal patterns.
Inspired by [4, 15], we assume anomalies ( e.g. in video
surveillance) can be represented as significant deformation
of appearances, including positional changes and fine mo-
tions. In contrast, diverse normal samples can be repre-
sented as weaker deformations, thus easily distinguished from the abnormal ones. Therefore, we design PDM to
model the diversity of normals as well as the severity of
anomalies. More specifically, PDM learns hierarchical two-
dimensional deformation fields (Fig. 2 c,d) that describe the
pixel-level transformation direction and distance from ref-
erence (Fig. 2 b, which is reconstructed from prototypes in
memory) to the original input. In ICM, we learn compressed representations as sparse prototypes. As a result, a single memory item is enough to represent each normal cluster. This is more compact than other memory-based works
which require multiple memory items. Integrating PDM
with ICM, DMAD essentially decouples the deformation
information (Fig. 2 e) from class prototypes and makes the
final anomaly score more discriminative (Fig. 2 f).
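As a rough illustration of how such a deformation-based diversity measure can be combined with reconstruction error, consider the sketch below. It is our simplification, not the paper's exact formulation: the pyramid structure, field resolutions, and weighting are collapsed into a single step, and the function name and weights are placeholders.

import torch
import torch.nn.functional as F

def dmad_style_score(x, x_ref, coarse_field, fine_field, w_coarse=1.0, w_fine=1.0):
    """Illustrative anomaly scoring: warp the prototype-based reconstruction toward the input
    with the estimated deformation fields, then score by residual error plus deformation severity.
    x, x_ref: (B, C, H, W); coarse_field, fine_field: (B, H, W, 2) offsets in normalized coords."""
    B, C, H, W = x.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H, device=x.device),
                            torch.linspace(-1, 1, W, device=x.device), indexing="ij")
    base_grid = torch.stack((xs, ys), dim=-1).expand(B, H, W, 2)

    # Diverse normals are explained by deformation rather than by reconstruction error.
    warped_ref = F.grid_sample(x_ref, base_grid + coarse_field + fine_field,
                               align_corners=True, padding_mode="border")

    recon_err = (x - warped_ref).abs().mean(dim=(1, 2, 3))                 # residual after warping
    diversity = (w_coarse * coarse_field.norm(dim=-1).mean(dim=(1, 2)) +
                 w_fine * fine_field.norm(dim=-1).mean(dim=(1, 2)))        # deformation severity
    return recon_err + diversity                                           # per-sample anomaly score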
We evaluate our anomaly detection framework in scenar-
ios of video surveillance and industrial defect detection. To
apply DMAD in the latter scenario, we propose a variant of
PDM, PPDM, to deal with the false positive issue in texture
reconstruction. Extensive experimental results verify the ef-
ficacy of our approach. Moreover, our method works well
even in the presence of contaminated data and anomaly-like nor-
mals. The main contributions of our work are as follows:
• We introduce the diversity-measurable anomaly detection framework, which allows reconstruction-based models to achieve a better tradeoff between reconstructing diverse
normals and detecting unknown anomalies.
• We propose pyramid deformation module to implement
diversity measurement, in which the deformation infor-
mation is explicitly separated from compact class pro-
totypes and the resulting diversity measure is positively
correlated to abnormality.
• Our approach outperforms previous works on video
anomaly detection and industrial defect detection, and
works well in the presence of contaminated data and anomaly-
like normals, demonstrating its broad suitability and ro-
bustness.
|
Lin_Learning_To_Detect_Mirrors_From_Videos_via_Dual_Correspondences_CVPR_2023
|
Abstract
Detecting mirrors from static images has received signif-
icant research interest recently. However, detecting mirrors
over dynamic scenes is still under-explored due to the lack
of a high-quality dataset and an effective method for video
mirror detection (VMD). To the best of our knowledge, this
is the first work to address the VMD problem from a deep-
learning-based perspective. Our observation is that there
are often correspondences between the contents inside (re-
flected) and outside (real) of a mirror, but such correspon-
dences may not always appear in every frame, e.g., due to
the change of camera pose. This inspires us to propose a
video mirror detection method, named VMD-Net, that can
tolerate spatially missing correspondences by considering
the mirror correspondences at both the intra-frame level as
well as inter-frame level via a dual correspondence mod-
ule that looks over multiple frames spatially and tempo-
rally for correlating correspondences. We further propose
a first large-scale dataset for VMD (named VMD-D), which
contains 14,987 image frames from 269 videos with corre-
sponding manually annotated masks. Experimental results
show that the proposed method outperforms SOTA methods
from relevant fields. To enable real-time VMD, our method
efficiently utilizes the backbone features by removing the
redundant multi-level module design and gets rid of post-
processing of the output maps commonly used in existing
methods, making it very efficient and practical for real-time
video-based applications. Code, dataset, and models are
available at https://jiaying.link/cvpr2023-vmd/
|
1. Introduction
Mirrors appear everywhere. They can adversely affect
the performance of computer vision tasks ( e.g., depth esti-
mation [ 35], vision-and-language navigation [ 2], semantic
segmentation [ 49]), due to their fundamental property that
they reflect objects from their surroundings.
Figure 1. Although the state-of-the-art single-image mirror detection
method VCNet [36] performs well on a single image (e.g., the first
row) by implicitly using intra-frame correspondence, it may fail
when the intra-frame cue is weak or even absent in some video
frames (e.g., the second and third rows). The lack of inter-frame
information causes current mirror detection methods to produce
inaccurate and inconsistent results when applied to VMD. In contrast,
our method performs well in both situations by utilizing the proposed
dual correspondence module to exploit intra-frame (spatial) and
inter-frame (temporal) correspondences. (Rows from top to bottom:
Image, VCNet, Ours, GT.)
Thus, it is necessary to build a robust computer vision model that can dis-
tinguish mirrors from their surrounding objects correctly.
Existing single-image mirror detection methods exploit
different cues, such as context contrast [42], explicit correspondences [22], semantic association [14], and chirality
and implicit correspondences [ 36], to detect mirrors from
single RGB input images. Despite these recent efforts being
put into the mirror detection problem, none of them focuses
on detecting mirrors from videos. However, a lot of real-
world computer vision applications are video-based ( e.g.,
robotic navigation, autonomous driving, and surveillance),
rather than image-based. Hence, solving the video mirror
detection (VMD) problem can benefit these applications.
In this paper, we aim to address the VMD problem.
There are two major challenges with this problem. First, to
the best of our knowledge, there are no existing large-scale
datasets that can be used for training and evaluation on the
VMD problem. Second, existing mirror detection methods,
Figure 2. Quantitative comparison on the performance and effi-
ciency between existing mirror detection methods and our method
for VMD. All models are trained/tested on the proposed VMD-D
dataset, under a single RTX 3090 GPU. Our model has ∼5× fewer network parameters and runs ∼18× faster than the state-of-the-art image-based mirror detection method, VCNet [36], and still outperforms it by a large margin.
which are all developed for the image-based mirror detec-
tion task, are all based on static cues. None of them take ad-
vantage of the dynamic nature of videos in the VMD prob-
lem. Figure 1shows that the current state-of-the-art mirror
detection method, VCNet [ 36], may fail when correspon-
dences are missing in some challenging frames ( e.g., second
and third rows) due to, for example, the change of camera
pose, even though it may perform well in some easy cases
(e.g., the first row). Besides, as the image-based mirror de-
tection task is already very challenging, existing methods
for this task often adopt heavy network design and time-
consuming post-processing techniques [ 19] to improve their
results. Figure 2shows that existing image-based mirror
detection models run at about 1fps, even on one of the lat-
est GPUs, which cannot support real-time VMD. All these
drawbacks motivate us to develop a large-scale dataset and
an effective/efficient method for video mirror detection.
In this paper, we address the VMD problem in two ways.
First, we construct the first large-scale video mirror detec-
tion benchmark dataset (VMD-D). It contains 14,987 im-
age frames in 269 videos, coming from diverse scenes. The
constructed VMD-D dataset provides large-scale and high-
diversity data for training and evaluation on the VMD prob-
lem. Second, we propose an effective and efficient method,
called VMD-Net, for the VMD problem. The proposed
method exploits multi-frame correspondences at both intra-
frame (spatial) and inter-frame (temporal) levels. Compared
with state-of-the-art image-based mirror detection methods,
which typically adopt heavy pipelines, our method uses a
light-weight network architecture without the need for any post-processing techniques. As a result, our method is efficient for real-time applications. In particular, our method has ∼5× fewer network parameters and runs ∼18× faster than the latest state-of-the-art image-based mirror de-
tection method, VCNet [ 36]. We conduct comprehensive
experiments to demonstrate the effectiveness and efficiency
of our proposed method. Experimental results show that our
method outperforms state-of-the-art methods from relevant
tasks on the proposed large-scale VMD-D dataset.
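To illustrate the dual-correspondence idea (the current frame attending both to itself and to other frames), here is a minimal attention-style sketch. The actual DC module's architecture, fusion, and losses are not specified here, and the module and parameter names are our own assumptions.

import torch
import torch.nn as nn

class DualCorrespondence(nn.Module):
    """Illustrative dual-correspondence block: the current frame attends to its own tokens
    (intra-frame, spatial) and to tokens from other frames (inter-frame, temporal)."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, cur, refs):
        # cur:  (B, N, C)   flattened spatial tokens of the current frame
        # refs: (B, T*N, C) flattened tokens of T reference frames
        intra_corr, _ = self.intra(cur, cur, cur)     # mirror/real correspondences inside the frame
        inter_corr, _ = self.inter(cur, refs, refs)   # correspondences that only appear in other frames
        return self.fuse(torch.cat([intra_corr, inter_corr], dim=-1))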
Our key contributions can be summarized as follows:
•We construct the first large-scale video mirror detec-
tion dataset, called VMD-D. The new dataset contains
14,988 image frames from 269 videos with precisely annotated masks.
•We propose a novel network, called VMD-Net, to
exploit both intra-frame and inter-frame correspon-
dences via a dual correspondence (DC) module. This
DC module allows VMD-Net to tolerate occasionally
missing correspondences in the temporal dimension.
•Extensive evaluations show that our method outper-
forms existing state-of-the-art methods from relevant
tasks on our proposed VMD-D dataset.
|
Liu_Building_Rearticulable_Models_for_Arbitrary_3D_Objects_From_4D_Point_CVPR_2023
|
Abstract
We build rearticulable models for arbitrary everyday
man-made objects containing an arbitrary number of parts
that are connected together in arbitrary ways via 1 degree-
of-freedom joints. Given point cloud videos of such every-
day objects, our method identifies the distinct object parts,
what parts are connected to what other parts, and the prop-
erties of the joints connecting each part pair. We do this
by jointly optimizing the part segmentation, transformation,
and kinematics using a novel energy minimization frame-
work. Our inferred animatable models enable retargeting
to novel poses with sparse point correspondences guidance.
We test our method on a new articulating robot dataset,
and the Sapiens dataset with common daily objects. Ex-
periments show that our method outperforms two leading
prior works on various metrics.
|
1. Introduction
Consider the sequence of point cloud observations of
articulating everyday objects shown in Figure 1.
Figure 2. Many man-made everyday objects can be explained
with rigid parts connected in a kinematic tree with 1DOF joints.
As humans, we can readily infer the kinematic structure of the
underlying object, i.e. the different object parts and their
connectivity and articulation relative to one another [21].
This paper develops computational techniques with similar
abilities. Given point cloud videos of arbitrary everyday
objects (with an arbitrary number of parts) undergoing ar-
ticulation, we develop techniques to build animatable 3D re-
constructions of the underlying object by a) identifying the
distinct object parts, b) inferring what parts are connected
to what other parts, and c) estimating the properties of the joint be-
tween each connected part pair. Success at this task enables
                          Arbitrary   Realistic Joint   Arbitrary
                          Parts       Constraints       Kinematics
Category-specific, e.g.
  people [31]             no          yes               no
  quadrupeds [51, 61]     no          yes               no
  cartoons [53]           no          yes               no
DeepPart [57]             yes         no                no
NPP [11]                  yes         no                no
ScrewNet [17]             yes         yes               no
UnsupMotion [42]          no          yes               no
Ditto [20]                yes         yes               no
MultiBodySync [15]        yes         no                no
WatchItMove [35]          yes         no                yes
Ours                      yes         yes               yes
Table 1. Most past work on inferring rearticulable models is category
specific. Building rearticulable models for arbitrary everyday
man-made objects requires reasoning about arbitrary part geometries,
arbitrary part connectivity, and realistic joint constraints
(1DOF w.r.t. parent part). We situate past work along these 3 dimensions,
and discuss major trends in Sec. 2.
rearticulation of objects. Given just a few user clicks spec-
ifying what point goes where, we can fill in the remaining
geometry as shown on the right side in Figure 1.
Most past work on inferring how objects articulate tack-
les it in a category-specific manner, be it for people [31,
36, 40], quadrupeds [51, 61], or even everyday objects [34].
Category-specific treatment allows the use of specialized
shape models (such as the SMPL model [31] for people),
or defines the output space ( e.g. location of 2 hinge joints
for the category eye-glasses). This limits applicability of
such methods to categories that have a canonical topology,
leaving out categories with large intra-class variation ( e.g.
boxes that can have 1-4 hinge joints), or in-the-wild objects
which may have an arbitrary number of parts connected in
arbitrary ways ( e.g. robots).
Only a very few past works tackle the problem of in-
ferring rearticulable models in a category-agnostic manner.
Huang et al. [15] only infer part segmentations, which by
itself, is insufficient for rearticulation. Jiang et al. [20] only
consider a single 1-DOF joint per object, dramatically re-
stricting its application (think about a humanoid robot with
four limbs, but the articulable model can only articulate
one). Noguchi et al. [35] present the most complete solution
but instead work with visual input and don’t incorporate the
1DOF constraint, i.e. a part can only rotate or translate about
a fixed axis on the parent part, common to a large number
of man-made objects as can be seen in Fig. 2. Inferring
3DOF / 6DOF joints leads to unrealistic rearticulation and
is thus undesirable (consider that the leg of a pair of eyeglasses could then freely move or rotate). Our work fills this gap in the literature. Our
method extracts 3D rearticulable models for arbitrary every-
day objects (containing an arbitrary number of parts that are
connected together in arbitrary ways via 1DOF joint) from
point cloud videos. To the best of our knowledge, this is the first work to tackle this specific problem.
Our proposed method jointly reasons about part geome-
tries and their 1-DOF inter-connectivity with one another.
At the heart of our approach is a novel continuous-discrete
energy formulation that seeks to jointly learn parameters of
the object model ( i.e. assignments of points in the canon-
ical view to parts, and the connectivity of parts to one an-
other) by minimizing shape and motion reconstruction error
(after appropriate articulation of the inferred model) on the
given views. As it is difficult to directly optimize in the
presence of continuous and discrete variables with struc-
tured constraints, we first estimate a relaxed model that in-
fers parts that are free to follow an arbitrary 6DOF trajec-
tory over time ( i.e. doesn’t require parts to be connected
in a kinematic tree with 1DOF joints). We project the es-
timated relaxed model to a kinematic model and continue
to optimize with the reconstruction error to further finetune
the estimated joint parameters. Our joint approach leads to
better models and improved rearticulation as compared to
adaptations of past methods [15, 35] to this task.
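To make the "relaxed model" step concrete, the sketch below writes a simplified version of such an energy: soft per-point part assignments and free per-frame, per-part 6DOF poses are evaluated against a point-wise reconstruction error. It assumes points are already in correspondence across frames and omits the subsequent projection onto 1DOF kinematic joints and all regularizers, so it is only an illustration of the formulation, not the authors' solver.

import torch

def relaxed_reconstruction_energy(canon_pts, obs_pts, part_logits, R, t):
    """Simplified relaxed-model energy.
    canon_pts:   (N, 3)            canonical point cloud
    obs_pts:     (F, N, 3)         observed points per frame (assumed in correspondence)
    part_logits: (N, K)            per-point part assignment scores
    R, t:        (F, K, 3, 3) and (F, K, 3)  free per-frame, per-part rotations and translations"""
    assign = part_logits.softmax(dim=-1)                       # (N, K) soft assignment
    # Move each canonical point by every part's pose: (F, K, N, 3)
    moved = torch.einsum("fkij,nj->fkni", R, canon_pts) + t[:, :, None, :]
    # Expected position under the soft assignment: (F, N, 3)
    pred = torch.einsum("nk,fkni->fni", assign, moved)
    return ((pred - obs_pts) ** 2).sum(dim=-1).mean()          # shape/motion reconstruction error

In the full method, the relaxed solution is then projected onto a kinematic tree whose joints each have a single rotational or translational axis, and the joint parameters are further refined against the same reconstruction error.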
|
Li_3D_Cinemagraphy_From_a_Single_Image_CVPR_2023
|
Abstract
We present 3D Cinemagraphy, a new technique that mar-
ries 2D image animation with 3D photography. Given a
single still image as input, our goal is to generate a video
that contains both visual content animation and camera mo-
tion. We empirically find that naively combining existing 2D
image animation and 3D photography methods leads to ob-
vious artifacts or inconsistent animation. Our key insight
is that representing and animating the scene in 3D space
offers a natural solution to this task. To this end, we first
convert the input image into feature-based layered depth
images using predicted depth values, followed by unproject-
ing them to a feature point cloud. To animate the scene, we
perform motion estimation and lift the 2D motion into the
3D scene flow. Finally, to resolve the problem of hole emer-
gence as points move forward, we propose to bidirectionally
displace the point cloud as per the scene flow and synthe-
size novel views by separately projecting them into target
image planes and blending the results. Extensive experi-
ments demonstrate the effectiveness of our method. A user
study is also conducted to validate the compelling rendering
results of our method.
|
1. Introduction
Nowadays, since people can easily take images using
smartphone cameras, the number of online photos has in-
creased drastically. However, with the rise of online video-
sharing platforms such as YouTube and TikTok, people are
no longer content with static images as they have grown ac-
customed to watching videos. It would be great if we could
animate those still images and synthesize videos for a bet-
ter experience. These living images, termed cinemagraphs,
have already been created and gained rapid popularity on-
line [1, 71]. Although cinemagraphs may engage people
with the content for longer than a regular photo, they usu-
ally fail to deliver an immersive sense of 3D to audiences.
This is because cinemagraphs are usually based on a static
camera and fail to produce parallax effects. We are there-
fore motivated to explore ways of animating the photos and
moving around the cameras at the same time. As shown in
Fig. 1, this will bring many still images to life and provide
a drastically vivid experience.
In this paper, we are interested in making the first step
towards 3D cinemagraphy that allows both realistic anima-
tion of the scene and camera motions with compelling par-
allax effects from a single image. There are plenty of at-
tempts to tackle either of the two problems. Single-image
animation methods [12, 19, 35] manage to produce a real-
istic animated video from a single image, but they usually
operate in 2D space, and therefore they cannot create cam-
era movement effects. Classic novel view synthesis meth-
ods [5, 6, 9, 14, 25] and recent implicit neural representa-
tions [37, 40, 58] entail densely captured views as input to
render unseen camera perspectives. Single-shot novel view
synthesis approaches [21, 39, 52, 66] exhibit the potential
for generating novel camera trajectories of the scene from a
single image. Nonetheless, these methods usually hypoth-
esize that the observed scene is static without moving el-
ements. Directly combining existing state-of-the-art solu-
tions of single-image animation and novel view synthesis
yields visual artifacts or inconsistent animation.
To address the above challenges, we present a novel
framework that solves the joint task of image animation and
novel view synthesis. This framework can be trained to cre-
ate 3D cinemagraphs from a single still image. Our key
intuition is that handling this new task in 3D space would
naturally enable both animation and moving cameras simul-
taneously. With this in mind, we first represent the scene as
feature-based layered depth images (LDIs) [50] and unpro-
ject the feature LDIs into a feature point cloud. To ani-
mate the scene, we perform motion estimation and lift the
2D motion to 3D scene flow using depth values predicted
by DPT [45]. Next, we animate the point cloud according
to the scene flow. To resolve the problem of hole emer-
gence as points move forward, we are inspired by prior
works [3, 19, 38] and propose a 3D symmetric animation
technique to bidirectionally displace point clouds, which
can effectively fill in those unknown regions. Finally, we
synthesize novel views at time tby rendering point clouds
into target image planes and blending the results. In this
manner, our proposed method can automatically create 3D
cinemagraphs from a single image. Moreover, our frame-
work is highly extensible, e.g., we can augment our motion
estimator with user-defined masks and flow hints for accu-
rate flow estimation and controllable animation.
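As a rough sketch of the two geometric steps described above (lifting 2D motion to 3D scene flow with predicted depth, and the bidirectional "3D symmetric" displacement), consider the following. It assumes the estimated motion leaves each point's depth unchanged and uses a simple linear blending weight, and it ignores the layered/feature point-cloud representation and the rendering itself, so it is illustrative only.

import torch

def lift_flow_to_scene_flow(depth, flow2d, K):
    """Lift 2D pixel motion to 3D scene flow (assumes motion keeps each point's depth fixed).
    depth: (H, W);  flow2d: (H, W, 2) pixel offsets;  K: (3, 3) camera intrinsics."""
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()       # (H, W, 3) homogeneous pixels
    pts = (pix @ torch.linalg.inv(K).T) * depth[..., None]                 # unproject to 3D
    pix_next = pix.clone()
    pix_next[..., :2] += flow2d
    pts_next = (pix_next @ torch.linalg.inv(K).T) * depth[..., None]       # same-depth assumption
    return pts.reshape(-1, 3), (pts_next - pts).reshape(-1, 3)             # points and scene flow

def symmetric_displacement(points, scene_flow, t, T):
    """Bidirectional animation: one copy moves forward by t, the other backward by (T - t);
    blending the two renderings fills the holes left behind by forward motion."""
    forward = points + scene_flow * t
    backward = points - scene_flow * (T - t)
    w_forward = 1.0 - t / T                                                # simple linear blend weight
    return forward, backward, w_forward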
In summary, our main contributions are:
• We propose a new task of creating 3D cinemagraphs
from single images. To this end, we propose a novel
framework that jointly learns to solve the task of image
animation and novel view synthesis in 3D space.
• We design a 3D symmetric animation technique to ad-
dress the hole problem as points move forward.
• Our framework is flexible and customized. We can
achieve controllable animation by augmenting our mo-
tion estimator with user-defined masks and flow hints.
|
Li_Are_Data-Driven_Explanations_Robust_Against_Out-of-Distribution_Data_CVPR_2023
|
Abstract
As black-box models increasingly power high-stakes ap-
plications, a variety of data-driven explanation methods
have been introduced. Meanwhile, machine learning mod-
els are constantly challenged by distributional shifts. A
question naturally arises: Are data-driven explanations ro-
bust against out-of-distribution data? Our empirical re-
sults show that even when it predicts correctly, the model might still yield unreliable explanations under distribu-
tional shifts. How to develop robust explanations against
out-of-distribution data? To address this problem, we
propose an end-to-end model-agnostic learning framework
Distributionally Robust Explanations (DRE). The key idea
is, inspired by self-supervised learning, to fully utilize the
inter-distribution information to provide supervisory sig-
nals for the learning of explanations without human anno-
tation. Can robust explanations benefit the model’s general-
ization capability? We conduct extensive experiments on a
wide range of tasks and data types, including classification
and regression on image and scientific tabular data. Our
results demonstrate that the proposed method significantly
improves the model’s performance in terms of explanation
and prediction robustness against distributional shifts.
|
1. Introduction
There has been an increasing trend to apply black-box
machine learning (ML) models for high-stakes applications.
The lack of explainability of models can have severe con-
sequences in healthcare [48], criminal justice [61], and
other domains. Meanwhile, ML models are inevitably ex-
posed to unseen distributions that lie outside their train-
ing space [28, 56]; a highly accurate model on average can
fail catastrophically on out-of-distribution (OOD) data due
to naturally-occurring variations, sub-populations, spurious
correlations, and adversarial attacks. For example, a can-
cer detector would erroneously predict samples from hospi-
tals having different data acquisition protocols or equipment
manufacturers. Therefore, reliable explanations across distributions
are crucial for the safe deployment of ML models. However,
existing works focus on the reliability of data-driven explanation
methods [1, 64] while ignoring the robustness of explanations
against distributional shifts.
The source code and pre-trained models are available at:
https://github.com/tangli-udel/DRE.
Figure 1. The explanations for in- and out-of-distribution data
of the Terra Incognita [4] dataset. Note that GroupDRO [50] and
IRM [2] are methods explicitly designed to predict accurately
across distributions. Although their predictions are correct, the
explanations of models trained by such methods also highlight
distribution-specific associations (e.g., tree branches) in addition
to the object. This leads to unreliable explanations on OOD data.
On the contrary, our model consistently focuses on the most
discriminative features shared across distributions. (Compared in
the figure: Original, ERM, GroupDRO, IRM, and DRE (ours), on
in-distribution (train) and OOD (test) examples from several locations.)
A question naturally arises: Are data-driven explana-
tions robust against out-of-distribution data? We empir-
ically investigate this problem across different methods.
Results of the Grad-CAM [51] explanations are shown in
Fig. 1. We find that the distributional shifts would fur-
ther obscure the decision-making process due to the black-
box nature of ML models. As shown, the explanations fo-
cus not only on the object but also spurious factors ( e.g.,
background pixels). Such distribution-specific associations
would yield inconsistent explanations across distributions.
Eventually, it leads to unreliable explanations ( e.g., tree
branches) on OOD data. This contradicts the human prior
that the most discriminative features ought to be invariant.
How to develop robust explanations against out-of-
distribution data? Existing works on OOD generalization
are limited to data augmentation [42, 52, 59], distribution
alignment [3, 16, 31], Meta learning [12, 29, 40], or invari-
ant learning [2,26]. However, without constraints on expla-
nations, the model would still recklessly absorb all associ-
ations found in the training data, including spurious cor-
relations. To constrain the learning of explanations, ex-
isting methods rely on explanation annotations [44, 54] or
one-to-one mapping between image transforms [8, 19, 39].
However, there is no such mapping in general naturally-
occurring distributional shifts. Furthermore, obtaining
ground truth explanation annotations is prohibitively expen-
sive [60], or even impossible due to subjectivity in real-
world tasks [46]. To address the aforementioned limita-
tions, we propose an end-to-end model-agnostic training
framework, Distributionally Robust Explanations (DRE).
The key idea is, inspired by self-supervised learning, to
fully utilize the inter-distribution information to provide su-
pervisory signals for explanation learning.
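One way to make "inter-distribution supervision for explanations" concrete is to penalize disagreement between the explanations a model produces for the same underlying content drawn from two training distributions. The sketch below does this with a plain input-gradient saliency and an L1 penalty; it assumes x_a and x_b are paired views of the same images under two distributions, and it is our illustration of the general idea, not necessarily the exact DRE objective.

import torch
import torch.nn.functional as F

def explanation_consistency_loss(model, x_a, x_b, y):
    """Cross-distribution explanation consistency (illustrative).
    x_a, x_b: paired same-content batches from two training distributions; y: class labels."""
    def saliency(x):
        x = x.clone().requires_grad_(True)
        logits = model(x)
        score = logits.gather(1, y[:, None]).sum()
        grad, = torch.autograd.grad(score, x, create_graph=True)
        sal = grad.abs().sum(dim=1)                                  # (B, H, W) input-gradient saliency
        return sal / (sal.flatten(1).sum(dim=1)[:, None, None] + 1e-8)

    return F.l1_loss(saliency(x_a), saliency(x_b))                   # penalize cross-distribution disagreement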
Can robust explanations benefit the model’s generaliza-
tion capability? We evaluate the proposed methods on a
wide range of tasks in Sec. 4, including the classification
and regression tasks on image and scientific tabular data.
Our empirical results demonstrate the robustness of our ex-
planations. The explanations of the model trained via the
proposed method outperform existing methods in terms of
explanation consistency, fidelity, and scientific plausibility.
The extensive comparisons and ablation studies prove that
our robust explanations significantly improve the model’s
prediction accuracy on OOD data. As shown, the robust
explanations would alleviate the model’s excessive reliance
on spurious correlations, which are unrelated to the causal
correlations of interest [2]. Furthermore, the enhanced ex-
plainability can be generalized to a variety of data-driven
explanation methods.
In summary, our main contributions:
• We comprehensively study the robustness of data-
driven explanations against naturally-occurring distri-
butional shifts.
• We propose an end-to-end model-agnostic learn-
ing framework Distributionally Robust Explanations
(DRE). It fully utilizes inter-distribution information
to provide supervisory signals for explanation learning
without human annotations.
• Empirical results in a wide range of tasks including
classification and regression on image and scientific
tabular data demonstrate superior explanation and pre-
diction robustness of our model against OOD data.
|
Li_Adjustment_and_Alignment_for_Unbiased_Open_Set_Domain_Adaptation_CVPR_2023
|
Abstract
Open Set Domain Adaptation (OSDA) transfers the
model from a label-rich domain to a label-free one con-
taining novel-class samples. Existing OSDA works over-
look abundant novel-class semantics hidden in the source
domain, leading to biased model learning and trans-
fer. Although the causality has been studied to remove the
semantic-level bias, the non-available novel-class samples
result in the failure of existing causal solutions in OSDA.
To break through this barrier, we propose a novel causality-
driven solution with the unexplored front-door adjustment
theory, and then implement it with a theoretically grounded
framework, coined Adjustment and Alignment (ANNA), to
achieve an unbiased OSDA. In a nutshell, ANNA consists of
Front-Door Adjustment (FDA) to correct the biased learn-
ing in the source domain and Decoupled Causal Align-
ment (DCA) to transfer the model unbiasedly. On the one
hand, FDA delves into fine-grained visual blocks to discover
novel-class regions hidden in the base-class image. Then,
it corrects the biased model optimization by implementing
causal debiasing. On the other hand, DCA disentangles the
base-class and novel-class regions with orthogonal masks,
and then adapts the decoupled distribution for an unbiased
model transfer. Extensive experiments show that ANNA
achieves state-of-the-art results. The code is available at
https://github.com/CityU-AIM-Group/Anna.
|
1. Introduction
Unsupervised Domain Adaptation (UDA) [5, 8, 11, 13]
has been well studied to transfer a model from a labeled
domain to an unlabeled novel one, notably saving the label-
ing labor for model re-implementation. However, existing
UDA research follows a strong assumption that the two do-
mains must share the same class space, which cannot make
correct predictions for novel-class samples. This severely
limits real-world applications [25, 29], e.g., product recommendation
and pathology identification with unseen classes.
This work was supported by Hong Kong Research Grants Council
(RGC) General Research Fund 11211221, and Innovation and Technology
Commission-Innovation and Technology Fund ITS/100/20.
Aiming at addressing this issue, Open Set Domain Adap-
tation (OSDA) [3, 17, 20, 29, 35] has been studied, which
also needs to recognize the novel-class samples in the target
domain as unknown. As shown in Figure 1(a) (top), follow-
ing a similar pipeline, most existing works [3,17,20,29,35]
utilize labeled base-class data to train a closed-set classi-
fier in the source domain. Then, in the target domain, they
adjust the model with two objectives, i.e., exploring novel
samples to achieve base/novel-class separation (novel-class
detection) and adapting the base-class distribution (domain
alignment). Based on this pipeline, these works can suc-
cessfully recognize some novel samples in the unlabelled
target domain and align the base-class distribution well.
While achieving great success, existing works [17, 20,
29] only consider base-class semantics in the source do-
main, ignoring the novel-class semantics spreading everywhere. This
leads to a semantic-level bias between the base and novel
class, further yielding a biased domain transfer for OSDA.
To illustrate the impact of this bias, we visualize the
base/novel-class activated regions, as shown in col. 1-2 of
Figure 1(a) (bottom). It can be observed that existing ap-
proaches can successfully find the base-class regions con-
sistent with the image-level ground-truth chair, but cannot discover novel-class semantics, e.g., the yacht, sea, and ground, etc. (The base and novel regions are highlighted in Figure 1(c) for a better view.) Further, we conduct a per-pixel prediction on the deepest features without global average
pooling (col. 3), illustrating that the novel regions are mis-
classified as some non-correlated base classes. These obser-
vations imply that this semantic-level bias severely affects
the judgment of the classifier even though the classifier can
give a correct prediction for the whole image.
Recently, several causality-based approaches [36,44,45]
have been proposed to solve the semantic-level bias in the
closed-set setting. These works [36,44,45] first conduct per-
class statistics over the whole dataset to decouple the con-
text, and then use decoupled components to correct the bi-
ased model training in a class-balanced manner. This causal
solution can successfully avoid biased model learning since
Figure 1. Illustration of the general pipeline (top) and observed bias (bottom) with the base/novel-class activation and per-pixel prediction
(we conduct dense classification on each pixel of the 7×7 ResNet-50 [9] feature and highlight the pixels with the same result in the same
color.) for (a) existing OSDA approaches, (b) our solution, and (c) base and novel regions in each image.
the knowledge of all classes contributes to training each
sample. Hence, a natural idea is to explore causality to solve the newly observed OSDA bias.
However, it is intractable to implement existing causal
solutions [36, 45] in OSDA since the context is unobserv-
able in the open-set setting [42, 43]. Existing works [36, 45] use
backdoor adjustment theory [26] to remove the bias, which
relies on the observable context with available data samples.
Differently, in OSDA, the context is unobservable [42] since
novel-class samples are missing in the source domain [29]
and labels are unavailable in the target domain, leading to
the failure [42] of existing backdoor solutions [36, 44, 45].
Although the front-door adjustment [26] can break through
this unobservable dilemma [26] by decoupling data samples
instead of context (see the supplementary materials for a more detailed explanation), it is still tricky to implement a semantic-
level decoupling [36,45] on each data sample since each im-
age is only assigned a single class label in classification [9].
Fortunately, as shown in Figure 1(c), we observe that each
image can be decoupled into base/novel-class regions in this
open-set setting, which motivates us to use the unexplored
yet effective front-door adjustment [26] to remove the bias.
Thus, we aim to correct the biased learning in the source
domain and then align the decoupled cross-domain distribu-
tion to achieve unbiased OSDA. See Sec. 3 for a theoretical
analysis with a Structural Causal Model.
To address the problems mentioned above, we pro-
pose a theoretically grounded framework, Adjustment and
Alignment (ANNA) for OSDA (see Figure 1(b) (top)) with
causality, which consists of Front-Door Adjustment (FDA)
to address the biased learning in the source domain, and
Decoupled Causal Alignment (DCA) to transfer the model
to the target domain unbiasedly. Specifically, in each base-
class image, FDA delves into fine-grained visual blocks to
discover novel-class regions, which serve to correct biased
model learning with causal adjustment. As for the DCA
module, we disentangle cross-domain images into base-
class and novel-class regions with orthogonal masks, and
then align the decoupled distribution free of bias (a minimal sketch of this mask-based decoupling is given after the contribution list below). As shown in Figure 1(b) (bottom), after eliminating the OSDA bias,
the model can capture labeled base-class regions (col. 1)
and unlabeled novel-class regions (col. 2) well. Besides,
the per-pixel prediction (col. 3) gives a closer look at model
inference, showing that ANNA fully considers fine-grained
novel semantics like humans before making an image-level
prediction. Our main contributions are as follows,
• This work represents the first attempt that observes and
formulates the ever-overlooked semantic-level bias in
OSDA. To address this issue, we propose a theoreti-
cally grounded framework, Adjustment and Alignment
(ANNA) with causality, achieving an unbiased OSDA.
• We propose a Front-Door Adjustment (FDA) module
to correct the biased closed-set learning, discovering
and fully using novel-class regions hidden in images.
• We design a Decoupled Causal Alignment (DCA) to
achieve an unbiased model transfer, which decouples
cross-domain images with fine-grained regions and
aligns the decoupled distribution unbiasedly.
• Extensive experiments on three benchmarks verify that
ANNA achieves state-of-the-art performance. ANNA
achieves the best HOS on all 12 sub-tasks of the chal-
lenging Office-Home benchmark.
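As referenced above, the following is a minimal sketch of the mask-based decoupling behind DCA: complementary (orthogonal) spatial masks split each feature map into base-class and novel-class regions, and the two region-wise distributions are aligned separately across domains. The simple mean-feature distance below is only a stand-in for the actual alignment objective, and all names are our own.

import torch

def decoupled_alignment(feat_src, feat_tgt, base_mask_src, base_mask_tgt):
    """Illustrative decoupled feature alignment.
    feat_*: (B, C, H, W) features;  base_mask_*: (B, 1, H, W) in [0, 1]; novel mask = 1 - base mask."""
    def region_prototype(feat, mask):
        return (feat * mask).sum(dim=(0, 2, 3)) / (mask.sum() + 1e-8)     # (C,) mean feature of the region

    novel_mask_src, novel_mask_tgt = 1.0 - base_mask_src, 1.0 - base_mask_tgt
    base_gap = (region_prototype(feat_src, base_mask_src) -
                region_prototype(feat_tgt, base_mask_tgt)).pow(2).sum()
    novel_gap = (region_prototype(feat_src, novel_mask_src) -
                 region_prototype(feat_tgt, novel_mask_tgt)).pow(2).sum()
    return base_gap + novel_gap      # align base-class and novel-class regions independently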
|
Kim_DATID-3D_Diversity-Preserved_Domain_Adaptation_Using_Text-to-Image_Diffusion_for_3D_Generative_CVPR_2023
|
Abstract
Recent 3D generative models have achieved remarkable
performance in synthesizing high resolution photorealistic
images with view consistency and detailed 3D shapes, but
training them for diverse domains is challenging since it re-
quires massive training images and their camera distribution
information. Text-guided domain adaptation methods have
shown impressive performance in converting a 2D generative model trained on one domain into models for other domains
with different styles by leveraging the CLIP (Contrastive
Language-Image Pre-training), rather than collecting mas-
sive datasets for those domains. However, one drawback of
them is that the sample diversity in the original generative
model is not well-preserved in the domain-adapted genera-
tive models due to the deterministic nature of the CLIP text
encoder. Text-guided domain adaptation will be even more
challenging for 3D generative models not only because of
catastrophic diversity loss, but also because of inferior text-
image correspondence and poor image quality. Here we pro-
pose DATID-3D, a domain adaptation method tailored for
3D generative models using text-to-image diffusion models
that can synthesize diverse images per text prompt without
collecting additional images and camera information for the
target domain. Unlike 3D extensions of prior text-guided
domain adaptation methods, our novel pipeline was able
to fine-tune the state-of-the-art 3D generator of the source
domain to synthesize high resolution, multi-view consistent
images in text-guided targeted domains without additional
data, outperforming the existing text-guided domain adap-
tation methods in diversity and text-image correspondence.
Furthermore, we propose and demonstrate diverse 3D image
manipulations such as one-shot instance-selected adapta-
tion and single-view manipulated 3D reconstruction to fully
enjoy diversity in text.
|
1. Introduction
Recently, 3D generative models [5, 6, 13, 18, 19, 22, 31,
40–42, 59, 60, 65, 69, 74, 75] have been developed to extend
2D generative models for multi-view consistent and explic-
itly pose-controlled image synthesis. Especially, some of
them [5, 18, 74] combined 2D CNN generators like Style-
GAN2 [28] with 3D inductive bias from the neural ren-
dering [38], enabling efficient synthesis of high-resolution
photorealistic images with remarkable view consistency and
detailed 3D shapes. These 3D generative models can be
trained with single-view images and then can sample infinite
3D images in real-time, while 3D scene representations as
neural implicit fields using NeRF [38] and its variants [3, 4,
8, 10, 14, 17, 20, 32 –34, 36, 45, 47, 50, 53, 54, 64, 66, 70 –73]
require multi-view images and training for each scene.
Training these state-of-the-art 3D generative models is
challenging because it requires not only a large set of images
but also the information on the camera pose distribution of
those images. This requirement, unfortunately, has restricted
these 3D models to the handful of domains where camera pa-
rameters are annotated (ShapeNetCar [7,61]) or off-the-shelf
pose extractors are available (FFHQ [27], AFHQ [9, 26]).
StyleNeRF [18] assumed the camera pose distribution as
either Gaussian or uniform, but this assumption is valid only
for a few pre-processed datasets. Transfer learning methods
for 2D generative models [30, 39, 43, 44, 48, 55, 67, 68] with
small datasets can potentially widen the scope of 3D models to multiple domains, but in practice they are also limited to a handful of domains with a camera pose distribution similar to that of the source domain.
Text-guided domain adaptation methods [1,16] have been
developed for 2D generative models as a promising approach
to bypass the additional data curation issue for the target do-
main. Leveraging the CLIP (Contrastive Language-Image
Pre-training) models [51] pre-trained on a large number
of image-text pairs with non-adversarial fine-tuning strate-
gies, these methods perform text-driven domain adaptation.
However, one drawback of them is the catastrophic loss of
diversity inherent in a text prompt due to the determinis-
tic embedding of the CLIP text encoder so that the sample
diversity of the source domain 2D generative model is not
preserved in the target domain 2D generative models.
We confirmed this diversity loss with experiments. A
text prompt “a photo of a 3D render of a face in Pixar
style” should include lots of different characters’ styles in
Pixar films such as Toy Story, Incredible, etc. However,
CLIP-guided adapted generator can only synthesize samples
with alike styles as illustrated in Figure 1 (see StyleGAN-
NADA∗). Thus, we confirmed that naive extensions of these
for 3D generative models show inferior text-image corre-
spondence and poor quality of generated images in diversity.
Optimizing with one text embedding yielded almost similar
results even with different training seeds as shown in Fig-
ure 2(a). Paraphrasing the text for obtaining different CLIP
embeddings was also trained, but it also did not yield that
many different results as illustrated in Figure 2(b). Using
different CLIP encoders for a single text as in Figure 2(c)
did provide different samples, but it was not an option in
general since only a few CLIP encoders have been released,
and retraining them requires massive servers in practice.
Figure 2. Existing text-guided domain adaptation [1, 16] did not
preserve the diversity in the source domain for the target domain.
We propose DATID-3D, a novel method of Domain Adaptation using Text-to-Image Diffusion tailored for 3D-
aware Generative Models. Recent progress in text-to-image
diffusion models enables to synthesize diverse high-quality
images from one text prompt [52, 56, 58]. We first lever-
age them to convert the samples from the pre-trained 3D
generator into diverse pose-aware target images. Then, the
target images are rectified through our novel CLIP and pose
reconstruction-based filtering process. Using these filtered
target images, 3D domain adaptation is performed while pre-
serving diversity in the text as well as multi-view consistency.
We apply our novel pipeline to the EG3D [5], a state-of-the-
art 3D generator, enabling the synthesis of high-resolution
multi-view consistent images in text-guided target domains
as illustrated in Figure 1, without collecting additional im-
ages with camera information for the target domains. Our
results demonstrate superior quality, diversity, and high text-
image correspondence in qualitative comparison, KID, and
human evaluation compared to those of existing 2D text-
guided domain adaptation methods for the 3D generative
models. Furthermore, we propose one-shot instance-selected
adaptation and single-view manipulated 3D reconstruction
to fully enjoy diversity in the text by extending useful 2D
applications of generative models.
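The data pipeline described above can be summarized in a few lines of Python. Every callable below (pose sampling, synthesis, diffusion-based translation, CLIP scoring, pose-error estimation) is supplied by the user; none of these names refer to a released API, and the thresholds are placeholders, so this is an outline of the idea rather than the authors' code.

def build_target_dataset(sample_pose, synthesize, translate, clip_score, pose_error,
                         text_prompt, n_samples, clip_thresh, pose_thresh):
    """Illustrative DATID-3D-style pipeline: pose-aware source samples are translated to the
    target domain with a text-to-image diffusion model, then filtered (rectified) before
    being used to fine-tune the 3D generator."""
    dataset = []
    for _ in range(n_samples):
        pose = sample_pose()                       # camera pose drawn from the source-domain prior
        src_img = synthesize(pose)                 # pose-aware sample from the pre-trained 3D generator
        tgt_img = translate(src_img, text_prompt)  # diverse text-guided translation per prompt
        # Rectification: keep samples that match the text and preserve the source pose.
        if clip_score(tgt_img, text_prompt) >= clip_thresh and pose_error(tgt_img, pose) <= pose_thresh:
            dataset.append((tgt_img, pose))
    return dataset                                 # (image, pose) pairs for fine-tuning the 3D generator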
|
Kulinski_StarCraftImage_A_Dataset_for_Prototyping_Spatial_Reasoning_Methods_for_Multi-Agent_CVPR_2023
|
Abstract
Spatial reasoning tasks in multi-agent environments such
as event prediction, agent type identification, or miss-
ing data imputation are important for multiple applica-
tions (e.g., autonomous surveillance over sensor networks
and subtasks for reinforcement learning (RL)). StarCraft
II game replays encode intelligent (and adversarial) multi-
agent behavior and could provide a testbed for these tasks;
however, extracting simple and standardized representa-
tions for prototyping these tasks is laborious and hinders
reproducibility. In contrast, MNIST and CIFAR10, despite
their extreme simplicity, have enabled rapid prototyping
and reproducibility of ML methods. Following the simplic-
ity of these datasets, we construct a benchmark spatial rea-
soning dataset based on StarCraft II replays that exhibit
complex multi-agent behaviors, while still being as easy to
use as MNIST and CIFAR10. Specifically, we carefully sum-
marize a window of 255 consecutive game states to create
3.6 million summary images from 60,000 replays, includ-
ing all relevant metadata such as game outcome and player
races. We develop three formats of decreasing complexity:
Hyperspectral images that include one channel for every
unit type (similar to multispectral geospatial images), RGB
images that mimic CIFAR10, and grayscale images that
mimic MNIST. We show how this dataset can be used for
prototyping spatial reasoning methods. All datasets, code
for extraction, and code for dataset loading can be found at
https://starcraftdata.davidinouye.com/.
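To illustrate the relationship between the three formats, the sketch below converts a hyperspectral summary image (one channel per unit type) into the CIFAR10-like RGB and MNIST-like grayscale encodings described in the paper's Figure 1. The channel grouping, intensity levels, and function name are our assumptions rather than the released loader, and resizing to 32×32 / 28×28 is omitted.

import numpy as np

def to_rgb_and_gray(hyper, p1_channels, p2_channels, neutral_channels):
    """hyper: (C, H, W) non-negative unit-presence values, one channel per unit type."""
    p1 = hyper[p1_channels].sum(axis=0)         # player 1 units
    p2 = hyper[p2_channels].sum(axis=0)         # player 2 units
    neu = hyper[neutral_channels].sum(axis=0)   # neutral units (terrain, resources)

    rgb = np.stack([p2, neu, p1], axis=-1)      # red = player 2, green = neutral, blue = player 1
    rgb = np.clip(rgb / (rgb.max() + 1e-8), 0.0, 1.0)

    gray = 0.9 * (p1 > 0) + 0.3 * (p2 > 0) + 0.6 * (neu > 0)   # light / dark / medium gray
    return rgb, np.clip(gray, 0.0, 1.0)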
|
1. Introduction
Spatial tasks in multi-agent environments require rea-
soning over both agents’ positions and the environmental
context such as buildings, obstacles, or terrain features.
These complex spatial reasoning tasks have applications in
autonomous driving, autonomous surveillance over sensor
networks, or reinforcement learning (RL) as subtasks of the
RL agent. For example, to predict a car collision, an autonomous
driving system needs to reason about other cars,
road conditions, road signs, and buildings. For autonomous
surveillance over sensor networks, the system would need
to reason over the positions of objects, buildings, and other
agents to determine if a new agent is normal or abnormal or
to impute missing sensor values. An RL system may want
to predict the cumulative or final reward or impute missing
values given only an incomplete snapshot of the world
state, i.e., partial observability. Yet, collecting large realistic
datasets for these tasks is expensive and laborious.
Figure 1. Two samples (one per row) showing (Blue box/left) our
64 x 64 StarCraftHyper dataset, which contains all unit IDs and
corresponding values for both players (color for unit IDs denotes
categorical unit IDs); (Green box/middle) StarCraftCIFAR10 (32 x
32), which is easy to interpret: blue is player 1, red is player
2, and green are neutral units such as terrain or resources; and
(Orange box/right) StarCraftMNIST (28 x 28), which are grayscale
images further simplified to show player 1 as light-gray, player 2
as dark-gray, and neutral as medium-level shades of gray.
Due to the challenge of collecting real-world data, prac-
titioners have turned to (semi-)synthetic sources for creat-
ing large clean datasets of photo-realistic images or videos
[11, 12, 24, 40]. For example, [11] leveraged the Grand
Theft Auto V game engine to collect a synthetic video
dataset for pedestrian detection and tracking. [4] overlays
aerial images with crowd simulations to provide a crowd
density estimation dataset. Yet, despite near photo-realism,
these prior datasets focus on simple multi-agent environ-
ments (e.g., pedestrian-like simulations [11, 40]) and thus
lack complex (or strategic) agent and object positioning. In
sharp contrast to these prior datasets, human-based replays
of the real-time strategy game StarCraft II capture complex
strategic and naturally adversarial positioning of agents and
objects (e.g., buildings and outposts). Indeed, the human
player provides thousands of micro-commands that produce
an overall intelligent and strategic positioning of agents and
building units. The release of the S
|
Li_BBDM_Image-to-Image_Translation_With_Brownian_Bridge_Diffusion_Models_CVPR_2023
|
Abstract
Image-to-image translation is an important and chal-
lenging problem in computer vision and image process-
ing. Diffusion models (DM) have shown great potential
for high-quality image synthesis, and have gained competi-
tive performance on the task of image-to-image translation.
However, most of the existing diffusion models treat image-
to-image translation as conditional generation processes,
and suffer heavily from the gap between distinct domains.
In this paper, a novel image-to-image translation method
based on the Brownian Bridge Diffusion Model (BBDM)
is proposed, which models image-to-image translation as a
stochastic Brownian Bridge process, and learns the trans-
lation between two domains directly through the bidirec-
tional diffusion process rather than a conditional gener-
ation process. To the best of our knowledge, it is the
first work that proposes Brownian Bridge diffusion process
for image-to-image translation. Experimental results on
various benchmarks demonstrate that the proposed BBDM
model achieves competitive performance through both vi-
sual inspection and measurable metrics.
|
1. Introduction
Image-to-image translation [14] refers to building a map-
ping between two distinct image domains. Numerous prob-
lems in computer vision and graphics can be formulated as
image-to-image translation problems, such as style trans-
fer [3,9,13,22], semantic image synthesis [21,24,34,36,37,
40] and sketch-to-photo synthesis [2, 14, 43].
A natural approach to image-to-image translation is to
learn the conditional distribution of the target images given
the samples from the input domain. Pix2Pix [14] is one
of the most popular image-to-image translation methods.
It is a typical conditional Generative Adversarial Network
(GAN) [26], and the domain translation is accomplished by
learning a mapping from the input image to the output im-
Figure 1. Comparison of directed graphical models of BBDM
(Brownian Bridge Diffusion Model) and DDPM (Denoising Dif-
fusion Probabilistic Model).
age. In addition, a specific adversarial loss function is also
trained to constrain the domain mapping. Despite the high
fidelity translation performance, they are notoriously hard
to train [1, 10] and often drop modes in the output distri-
bution [23, 27]. In addition, most GAN-based image-to-
image translation methods also suffer from the lack of di-
verse translation results since they typically model the task
as a one-to-one mapping. Although other generative models
such as Autoregressive Models [25, 39], VAEs (Variational
Autoencoders) [16,38], and Normalizing Flows [7,15] suc-
ceeded in some specific applications, they have not gained
the same level of sample quality and general applicability
as GANs.
Recently, diffusion models [12, 31] have shown compet-
itive performance on producing high-quality images com-
pared with GAN-based models [6]. Several conditional dif-
fusion models [2, 4, 28–30] have been proposed for image-
to-image translation tasks. These methods treat image-to-
image translation as conditional image generation by in-
tegrating the encoded feature of the reference image into
the U-Net in the reverse process (the first row of Figure 1)
to guide the diffusion towards the target domain. De-
Figure 2. Architecture of our proposed Brownian Bridge Diffusion Model (BBDM).
spite some practical success, the above condition mecha-
nism does not have a clear theoretical guarantee that the
final diffusion result yields the desired conditional distri-
bution. Therefore, most of the conditional diffusion mod-
els suffer from poor model generalization, and can only be
adapted to some specific applications where the conditional
input has high similarity with the output, such as inpaint-
ing and super-resolution [2, 4, 30]. Although LDM (Latent
Diffusion Model) [28] improved the model generalization
by conducting diffusion process in the latent space of cer-
tain pre-trained models, it is still a conditional generation
process, and the multi-modal condition is projected and en-
tangled via a complex attention mechanism, which makes it
even harder to obtain such a theoretical guarantee for LDM.
Meanwhile, the performance of LDM varies greatly across
different levels of latent features, indicating instability.
In this paper, we propose a novel image-to-image trans-
lation framework based on Brownian Bridge diffusion pro-
cess. Compared with the existing diffusion methods, the
proposed method directly builds the mapping between the
input and the output domains through a Brownian Bridge
stochastic process, rather than a conditional generation pro-
cess. In order to speed up the training and inference pro-
cess, we conduct the diffusion process in the same latent
space as used in LDM [28]. However, the proposed method
differs from LDM inherently in the way the mapping be-
tween two image domains is modeled. The framework of
BBDM is shown in the second row of Figure 1. It is easy
to find that the reference image y sampled from domain B
is only set as the initial point x_T = y of the reverse diffu-
sion, and it will not be utilized as a conditional input in the
prediction network µθ(x_t, t) at each step, as done in related
works [2, 4, 28, 30]. The main contributions of this paper
include:
1. A novel image-to-image translation method based on Brownian Bridge diffusion process is proposed in this
paper. As far as we know, it is the first work of Brow-
nian Bridge diffusion process proposed for image-to-
image translation.
2. The proposed method models image-to-image trans-
lation as a stochastic Brownian Bridge process, and
learns the translation between two domains directly
through the bidirectional diffusion process. The proposed
method thus avoids the reliance on conditional information
found in related work on conditional diffusion models.
3. Quantitative and qualitative experiments demonstrate
that the proposed BBDM method achieves competitive per-
formance on various image-to-image translation tasks.
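For readers who want a concrete picture of the Brownian Bridge process described above, the following Python sketch shows one possible forward-diffusion step and training loss. The schedule m_t = t/T, the variance 2(m_t − m_t^2), and the noise-prediction target are illustrative assumptions rather than BBDM's exact formulation; the point is that y enters only as the endpoint x_T = y and is never fed to the prediction network as a condition.

import torch
import torch.nn.functional as F

def brownian_bridge_sample(x0, y, t, T):
    """Diffuse x0 (domain A latent) toward y (domain B latent) at step t.
    The bridge is pinned at x_0 = x0 and x_T = y."""
    m_t = (t.float() / T).view(-1, 1, 1, 1)       # (B,1,1,1) in [0,1]
    delta_t = 2.0 * (m_t - m_t ** 2)              # bridge variance, 0 at t=0 and t=T
    eps = torch.randn_like(x0)
    x_t = (1.0 - m_t) * x0 + m_t * y + delta_t.sqrt() * eps
    return x_t, eps

def training_loss(model, x0, y, T):
    """The denoiser mu_theta sees only (x_t, t); at inference the reverse
    chain is simply initialized with x_T = y and run back to x_0."""
    t = torch.randint(1, T + 1, (x0.shape[0],), device=x0.device)
    x_t, eps = brownian_bridge_sample(x0, y, t, T)
    return F.mse_loss(model(x_t, t), eps)         # assumed eps-prediction target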
|
Li_3D-Aware_Multi-Class_Image-to-Image_Translation_With_NeRFs_CVPR_2023
|
Abstract
Recent advances in 3D-aware generative models (3D-
aware GANs) combined with Neural Radiance Fields
(NeRF) have achieved impressive results. However, no prior
works investigate 3D-aware GANs for 3D consistent multi-
class image-to-image (3D-aware I2I) translation. Naively
using 2D-I2I translation methods suffers from unrealistic
shape/identity change. To perform 3D-aware multi-class
I2I translation, we decouple this learning process into a
multi-class 3D-aware GAN step and a 3D-aware I2I trans-
lation step. In the first step, we propose two novel tech-
niques: a new conditional architecture and an effective
training strategy. In the second step, based on the well-
trained multi-class 3D-aware GAN architecture, that pre-
serves view-consistency, we construct a 3D-aware I2I trans-
lation system. To further reduce the view-consistency prob-
lems, we propose several new techniques, including a U-
net-like adaptor network design, a hierarchical representa-
tion constraint and a relative regularization loss. In exten-
sive experiments on two datasets, quantitative and qualita-
tive results demonstrate that we successfully perform 3D-
aware I2I translation with multi-view consistency. Code is
available in 3DI2I.
*The corresponding author.
|
1. Introduction
Neural Radiance Fields (NeRF) have increasingly gained
attention with their outstanding capacity to synthesize high-
quality view-consistent images [31,39,66]. Benefiting from
the adversarial mechanism [11], StyleNeRF [12] and con-
current works [4, 8, 44, 69] have successfully synthesized
high-quality view-consistent, detailed 3D scenes by com-
bining NeRF with StyleGAN-like generator design [22].
This recent progress in 3D-aware image synthesis has not
yet been extended to 3D-aware I2I translation, where the
aim is to translate in a 3D-consistent manner from a source
scene to a target scene of another class (see Figure 1).
A naive strategy is to use well-designed 2D-I2I trans-
lation methods [15, 16, 26, 28, 46, 63, 65, 70]. These meth-
ods, however, suffer from unrealistic shape/identity changes
when changing the viewpoint, which are especially notable
when looking at a video. Main target class characteristics,
such as hairs, ears, and noses, are not geometrically realis-
tic, leading to unrealistic results which are especially dis-
turbing when applying I2I to translate videos. Also, these
methods typically underestimate the viewpoint change and
result in target videos with less viewpoint change than
the source video. Another direction is to apply video-to-
video synthesis methods [2, 3, 6, 30, 53]. These approaches,
however, either rely heavily on labeled data or multi-view
frames for each object. In this work, we assume that we
only have access to single-view RGB data.
To perform 3D-aware I2I translation, we extend the the-
ory developed for 2D-I2I with recent developments in 3D-
aware image synthesis. We decouple the learning process
into a multi-class 3D-aware generative model step and a
3D-aware I2I translation step. The former can synthesize
view-consistent 3D scenes given a scene label, thereby ad-
dressing the 3D inconsistency problems we discussed for
2D-I2I. We will use this 3D-aware generative model to ini-
tialize our 3D-aware I2I model. It therefore inherits the ca-
pacity of synthesizing 3D consistent images. To effectively
train a multi-class 3D-aware generative model (see Fig-
ure 2(b)), we provide a new training strategy consisting of:
(1) training an unconditional 3D-aware generative model
(i.e., StyleNeRF) and (2) partially initializing the multi-
class 3D-aware generative model (i.e., multi-class StyleN-
eRF) with the weights learned from StyleNeRF. In the 3D-
aware I2I translation step, we design a 3D-aware I2I trans-
lation architecture (Figure 2(f)) adapted from the trained
multi-class StyleNeRF network. To be specific, we use the
main network of the pretrained discriminator (Figure 2(b))
to initialize the encoder E of the 3D-aware I2I translation
model (Figure 2(f)), and correspondingly, the pretrained
generator (Figure 2(b)) to initialize the 3D-aware I2I gen-erator (Figure 2(f)). This initialization inherits the capacity
of being sensitive to the view information.
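A minimal sketch of the partial initialization described above, assuming generic PyTorch modules; the attribute and key names are placeholders, not the actual StyleNeRF implementation.

import torch

def partial_load(dst_module, src_state):
    """Copy only the weights whose names and shapes match; everything else
    keeps its fresh initialization, which is what 'partially initializing'
    amounts to in practice."""
    dst_state = dst_module.state_dict()
    kept = {k: v for k, v in src_state.items()
            if k in dst_state and v.shape == dst_state[k].shape}
    dst_state.update(kept)
    dst_module.load_state_dict(dst_state)
    return len(kept)

def init_i2i_from_pretrained(i2i_encoder, i2i_generator,
                             pretrained_disc, pretrained_gen):
    # Encoder E inherits the (view-sensitive) discriminator backbone,
    # and the I2I generator inherits the multi-class StyleNeRF generator.
    n_enc = partial_load(i2i_encoder, pretrained_disc.state_dict())
    n_gen = partial_load(i2i_generator, pretrained_gen.state_dict())
    print(f"initialized {n_enc} encoder tensors and {n_gen} generator tensors")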
Directly using the constructed 3D-aware I2I translation
model (Figure 2(f)) still leads to view-consistency prob-
lems, because multi-view consistency is not regularized and
only single-view images are used. Therefore, to address
these problems, we introduce
several techniques, including a U-net-like adaptor network
design, a hierarchical representation constraint and a relative
regularization loss.
In sum, our work makes the following contributions :
• We are the first to explore 3D-aware multi-class I2I trans-
lation, which allows generating 3D consistent videos.
• We decouple 3D-aware I2I translation into two steps.
First, we propose a multi-class StyleNeRF. To train this
multi-class StyleNeRF effectively, we provide a new
training strategy. The second step is the proposal of a
3D-aware I2I translation architecture.
• To further address the view-inconsistency problem of 3D-
aware I2I translation, we propose several techniques: a U-
net-like adaptor, a hierarchical representation constraint
and a relative regularization loss.
• On extensive experiments, we considerably outperform
existing 2D-I2I systems with our 3D-aware I2I method
when evaluating temporal consistency.
|
Liu_Soft_Augmentation_for_Image_Classification_CVPR_2023
|
Abstract
Modern neural networks are over-parameterized and
thus rely on strong regularization such as data augmenta-
tion and weight decay to reduce overfitting and improve
generalization. The dominant form of data augmentation
applies invariant transforms, where the learning target of a
sample is invariant to the transform applied to that sam-
ple. We draw inspiration from human visual classifica-
tion studies and propose generalizing augmentation with
invariant transforms to soft augmentation where the learn-
ing target softens non-linearly as a function of the de-
gree of the transform applied to the sample: e.g., more ag-
gressive image crop augmentations produce less confident
learning targets. We demonstrate that soft targets allow
for more aggressive data augmentation, offer more robust
performance boosts, work with other augmentation poli-
cies, and interestingly, produce better calibrated models
(since they are trained to be less confident on aggressively
cropped/occluded examples). Combined with existing ag-
gressive augmentation strategies, soft targets 1) double the
top-1 accuracy boost across Cifar-10, Cifar-100, ImageNet-
1K, and ImageNet-V2, 2) improve model occlusion perfor-
mance by up to 4×, and 3) halve the expected calibration
error (ECE). Finally, we show that soft augmentation gen-
eralizes to self-supervised classification tasks. Code avail-
able at https://github.com/youngleox/soft_
augmentation
|
1. Introduction
Deep neural networks have enjoyed great success in the
past decade in domains such as visual understanding [42],
natural language processing [5], and protein structure pre-
diction [41]. However, modern deep learning models are
often over-parameterized and prone to overfitting. In addi-
tion to designing models with better inductive biases, strong
regularization techniques such as weight decay and data
augmentation are often necessary for neural networks to
achieve ideal performance. Data augmentation is often a
computationally cheap and effective way to regularize mod-els and mitigate overfitting. The dominant form of data aug-
mentation modifies training samples with invariant trans-
forms – transformations of the data where it is assumed that
the identity of the sample is invariant to the transforms.
Indeed, the notion of visual invariance is supported by
evidence found from biological visual systems [54]. The
robustness of human visual recognition has long been docu-
mented and inspired many learning methods including data
augmentation and architectural improvement [19, 47]. This
paper focuses on the counterpart of human visual robust-
ness, namely how our vision fails . Instead of maintaining
perfect invariance, human visual confidence degrades non-
linearly as a function of the degree of transforms such as
occlusion, likely as a result of information loss [44]. We
propose modeling the transform-induced information loss
for learned image classifiers and summarize the contribu-
tions as follows:
• We propose Soft Augmentation as a generalization of data
augmentation with invariant transforms. With Soft Aug-
mentation, the learning target of a transformed training
sample softens. We empirically compare several soften-
ing strategies and prescribe a robust non-linear softening
formula.
• With a frozen softening strategy, we show that replac-
ing standard crop augmentation with soft crop augmenta-
tion allows for more aggressive augmentation, and dou-
bles the top-1 accuracy boost of RandAugment [8] across
Cifar-10, Cifar-100, ImageNet-1K, and ImageNet-V2.
• Soft Augmentation improves model occlusion robustness
by achieving up to more than 4×Top-1 accuracy boost
on heavily occluded images.
• Combined with TrivialAugment [37], Soft Augmentation
further reduces top-1 error and improves model calibra-
tion by reducing expected calibration error by more than
half, outperforming 5-ensemble methods [25].
• In addition to supervised image classification models,
Soft Augmentation also boosts the performance of self-
supervised models, demonstrating its generalizability.
Figure 1. Traditional augmentation encourages invariance by requiring augmented samples to produce the same target label; we visualize
the translational offset range (tx, ty) of Standard Hard Crop augmentations for 32×32 images from Cifar-100 on the left, reporting
the top-1 error of a baseline ResNet-18. Naively increasing the augmentation range without reducing target confidence increases error
(middle), but softening the target label by reducing the target confidence for extreme augmentations reduces the error (right), allowing
for training with even more aggressive augmentations that may even produce blank images. Our work also shows that soft augmentations
produce models that are more robust to occlusions (since they encounter larger occlusions during training) and models that are better
calibrated (since they are trained to be less-confident on such occluded examples).
(Panel top-1 errors: Standard Hard Crop 20.80; Aggressive Hard Crop 22.99 (+2.19); Soft Augmentation 18.31 (−2.49).)
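To make the label softening concrete, here is a minimal Python sketch assuming a power-law confidence schedule p = p_max · visible^k; this is an illustrative assumption, not the exact softening formula prescribed in the paper. The softened targets can then replace one-hot labels in a standard soft cross-entropy loss.

import torch
import torch.nn.functional as F

def soft_target(labels, visible_frac, num_classes, k=2.0, p_max=1.0):
    """labels: (B,) class indices; visible_frac: (B,) fraction of the image
    left visible after cropping/occlusion. Confidence decays non-linearly
    toward the uniform distribution as the crop becomes more aggressive."""
    p = p_max * visible_frac.clamp(0.0, 1.0) ** k            # (B,)
    one_hot = F.one_hot(labels, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    return p.unsqueeze(1) * one_hot + (1.0 - p.unsqueeze(1)) * uniform

def soft_cross_entropy(logits, soft_labels):
    return -(soft_labels * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()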
|
Lin_Supervised_Masked_Knowledge_Distillation_for_Few-Shot_Transformers_CVPR_2023
|
Abstract
Vision Transformers (ViTs) have emerged to achieve impres-
sive performance on many data-abundant computer vision
tasks by capturing long-range dependencies among local
features. However, under few-shot learning (FSL) settings
on small datasets with only a few labeled data, ViT tends
to overfit and suffers from severe performance degradation
due to its absence of CNN-like inductive bias. Previous
works in FSL avoid this problem either through the help
of self-supervised auxiliary losses, or through the dexterous
use of label information under supervised settings. But
the gap between self-supervised and supervised few-shot
Transformers is still unfilled. Inspired by recent advances
in self-supervised knowledge distillation and masked image
modeling (MIM), we propose a novel Supervised Masked
Knowledge Distillation model (SMKD) for few-shot Trans-
formers which incorporates label information into self-
distillation frameworks. Compared with previous self-
supervised methods, we allow intra-class knowledge dis-
tillation on both class and patch tokens, and introduce
the challenging task of masked patch tokens reconstruc-
tion across intra-class images. Experimental results on
four few-shot classification benchmark datasets show that
our method with simple design outperforms previous meth-
ods by a large margin and achieves a new state-of-the-art.
Detailed ablation studies confirm the effectiveness of each
component of our model. Code for this paper is available
here: https://github.com/HL-hanlin/SMKD.
|
1. Introduction
Vision Transformers (ViTs) [16] have emerged as a
competitive alternative to Convolutional Neural Networks
(CNNs) [31] in recent years, and have achieved impressive
performance in many vision tasks including image classi-
fication [16, 38, 59, 66], object detection [3, 10, 26–28, 79],
and object segmentation [50, 55]. Compared with CNNs,
which introduce inductive bias through convolutional ker-
nels with fixed receptive fields [35], the attention layers in
Equal contribution. †Corresponding author.
Figure 1. Comparison of the proposed idea and other exist-
ing methods for few-shot Transformers: (a) few-shot Transform-
ers with soft label regularization, (b) few-shot Transformers with
self-supervision auxiliary regularization, and (c) few-shot Trans-
formers with our supervised masked knowledge distillation. Our
model mitigates the overfitting issue of few-shot Transformers, by
extending the masked knowledge distillation framework into the
supervised setting, and enforcing the alignment of [cls] and
corresponding [patch] tokens for intra-class images.
ViT allow it to model global token relations to capture long-
range token dependencies. However, such flexibility also
comes at a cost: ViT is data-hungry and it needs to learn
token dependencies purely from data. This property often
makes it easy to overfit to datasets with a small training set
and suffer from severe generalization performance degra-
dation [36, 37]. Therefore, we are motivated to study how
to make ViTs generalize well on these small datasets, espe-
cially under the few-shot learning (FSL) setting [19, 39, 62]
which aims to recognize unseen new instances at test time
just from only a few (e.g. one or five) labeled samples from
each new category.
Most of the existing methods mitigate the overfitting is-
sue of few-shot Transformers [14] using various regulariza-
tions. For instance, some works utilize label information
in a weaker [70], softer [42] way, or use label information
efficiently through patch-level supervision [14]. However,
these models usually design sophisticated learning targets.
On the other hand, self-distillation techniques [4, 8], and
particularly, the recent masked self-distillation [29, 45, 77],
which distills knowledge learned from an uncorrupted im-
age to the knowledge predicted from a masked image, have
led an emerging trend in self-supervised Transformers in
various fields [15, 68]. Inspired by such success, recent
works in FSL attempt to incorporate self-supervised pretext
tasks into the standard supervised learning through auxil-
iary losses [37, 43, 46], or to adopt a self-supervised pre-
training, supervised training two-stage framework to train
few-shot Transformers [18, 32]. Compared with traditional
supervised methods, self-supervision can learn less biased
representations towards base class, which usually leads to
better generalization ability for novel classes [40]. How-
ever, the two learning objectives of self-supervision and
supervision are conflicting and it is hard to balance them
during training. Therefore, how to efficiently leverage the
strengths of self-supervised learning to alleviate the overfit-
ting issue of supervised training remains a challenge.
In this work, we propose a novel supervised masked
knowledge distillation framework (SMKD) for few-shot
Transformers, which handles the aforementioned challenge
through a natural extension of the self-supervised masked
knowledge distillation framework into the supervised set-
ting (shown in Fig. 1). Different from supervised con-
trastive learning [33] which only utilizes global image fea-
tures for training, we leverage multi-scale information from
the images (both global [cls] token and local [patch]
tokens) to formulate the learning objectives, which has been
demonstrated to be effective in the recent self-supervised
Transformer methods [29, 77]. For global [cls] tokens,
we can simply maximize the similarity for intra-class im-
ages. However, it is non-trivial and challenging to formulate
the learning objectives for local [patch] tokens because
we do not have ground-truth patch-level annotations. To
address this problem, we propose to estimate the similarity
between [patch] tokens across intra-class images using
cross-attention, and enforce the alignment of correspond-
ing [patch] tokens. Particularly, reconstructing masked
[patch] tokens across intra-class images increases the
difficulty of model learning, thus encouraging learning gen-
eralizable few-shot Transformer models by jointly exploit-
ing the holistic knowledge of the image and the similarity of
intra-class images.
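The following Python sketch illustrates one way to align [patch] tokens across intra-class images with a cosine-similarity (cross-attention-style) correspondence; the real SMKD objective distills projected token distributions between a momentum teacher and the student, so treat this as a simplified stand-in rather than the paper's exact loss.

import torch
import torch.nn.functional as F

def patch_correspondence_loss(student_patches, teacher_patches, mask):
    """student_patches: (N, D) tokens from a masked view of image A (student).
    teacher_patches:   (M, D) tokens from an intact intra-class image B (teacher).
    mask: (N,) bool, True where the student's patch was masked out."""
    s = F.normalize(student_patches, dim=-1)
    t = F.normalize(teacher_patches, dim=-1).detach()   # stop-gradient on teacher
    sim = s @ t.T                                        # (N, M) cosine similarities
    target = t[sim.argmax(dim=-1)]                       # best-matching teacher token
    loss = 1.0 - (s * target).sum(dim=-1)                # 1 - cosine, per patch
    return loss[mask].mean()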
As shown in Fig. 2, we compare our model with the
existing self-supervised/supervised learning methods. Our
model is a natural extension of the supervised contrastive
learning method [33] and self-supervised knowledge dis-
tillation methods [4, 77]. Thus our model inherits both
the advantage of method [33] for effectively leveraging la-
bel information, and the advantages of methods [4, 77] for
not needing large batch size and negative samples. Mean-
while, the newly-introduced challenging task of masked
Figure 2. Comparison of other self-supervised/supervised frame-
works: (a) self-supervised contrastive (SimCLR, MoCo), (b) self-
supervised knowledge distillation (DINO, iBOT), (c) supervised
contrastive (SupCon), and (d) supervised masked knowledge dis-
tillation (ours). Our method (d) is a natural extension of (b) and
(c), with the newly-introduced challenging task of masked [patch]
tokens reconstruction across intra-class images.
[patch] tokens reconstruction across intra-class images
makes our method more powerful for learning generalizable
few-shot Transformer models.
Compared with contemporary works on few-shot Trans-
formers [14, 32, 70], our framework enjoys several good
properties from a practical point of view. (1) Our method
does not introduce any additional learnable parameters be-
sides the ViT backbone and projection head, which makes it
easy to be combined with other methods [32,70,74]. (2) Our
method is both effective and training-efficient, with stronger
performance and less training time on four few-shot classi-
fication benchmarks, compared with [32, 70]. In a nutshell,
our main contributions can be summarized as follows:
We propose a new supervised knowledge distillation
framework (SMKD) that incorporates class label in-
formation into self-distillation, thus filling the gap be-
tween self-supervised knowledge distillation and tra-
ditional supervised learning.
Within the proposed framework, we design two
supervised-contrastive losses on both class and patch
levels, and introduce the challenging task of masked
patch tokens reconstruction across intra-class images.
Given its simple design, we test our SMKD on four
few-shot datasets, and show that it achieves a new
SOTA on CIFAR-FS and FC100 by a large margin,
as well as competitive performance on mini-ImageNet
and tiered-ImageNet using the simple prototype clas-
sification method for few-shot evaluation.
|
Liu_OSAN_A_One-Stage_Alignment_Network_To_Unify_Multimodal_Alignment_and_CVPR_2023
|
Abstract
Extending from unimodal to multimodal is a critical
challenge for unsupervised domain adaptation (UDA). Two
major problems emerge in unsupervised multimodal domain
adaptation: domain adaptation and modality alignment. An
intuitive way to handle these two problems is to fulfill these
tasks in two separate stages: aligning modalities followed
by domain adaptation, or vice versa. However, domains
and modalities are not associated in most existing two-stage
studies, and the relationship between them is not lever-
aged which can provide complementary information to each
other. In this paper, we unify these two stages into one to
align domains and modalities simultaneously. In our model,
a tensor-based alignment module (TAL) is presented to ex-
plore the relationship between domains and modalities. By
this means, domains and modalities can interact sufficiently
and guide them to utilize complementary information for
better results. Furthermore, to establish a bridge between
domains, a dynamic domain generator (DDG) module is
proposed to build transitional samples by mixing the shared
information of two domains in a self-supervised manner,
which helps our model learn a domain-invariant common
representation space. Extensive experiments prove that our
method can achieve superior performance in two real-world
applications. The code will be publicly available.
|
1. Introduction
With explosively emerging multimedia data on the In-
ternet, the field of multimodal analysis achieves more and
more attention [10, 13, 18, 19, 43]. Compared to extensive
unimodal models in NLP and CV , learning adequate knowl-
edge from multimodal signals is still preliminary but very
important. Abundant data plays a key role in different sce-
narios of multimodal analysis, such as pre-training or down-
stream multimedia tasks. However, it is prohibitively ex-
pensive and time-consuming to obtain large amounts of la-
beled data. To eliminate this issue, domain adaptation (DA)
Figure 1. Conception of our one-stage model: positive (P) and
negative (N) samples from three modalities in the source and tar-
get domains are mapped into a shared space, combining modality
alignment and domain alignment in one stage.
is raised to learn a model from a labeled dataset (source
domain) that can be generalized to other related tasks with-
out sufficient labeled data (target domain) [3]. Classical do-
main adaptation can be classified into different categories:
unsupervised domain adaptation (UDA), fully supervised
domain adaptation, and semi-supervised domain adaptation
[31]. In this paper, we focus on UDA where no samples
in target domain are annotated. With this technique, it is
not necessary to prepare a customized training dataset for
a specific task, but it can perform the task effectively and
efficiently.
There are two challenges when applying domain adap-
tation to multimodal scenarios [17]: (1) how to align the
source and target domains and remit domain discrepancy,
and (2) how to align multiple modalities and leverage mul-
timodal information. Most existing works address these
two problems in two consecutive stages: multimodal align-
ment followed by domain adaptation [34, 41], or vice versa
[14, 44]. However, they solve these two issues separately
without considering their relationship: domain and modal-
ity can be treated as two views to portray the intrinsic char-
acteristic of multimodal data [8], and the hidden underlying
relationship in these two views can provide complemen-
tary information to each other. Unimodal domain adapta-
tion methods cannot work well in multimodal tasks due to
the inability to preserve the relations between modalities at
the same time. Through our experiments and analysis, we
observe that the two-stage model could not achieve ideal
performance. Fig.2 shows the learning curve of two-stage
model during the training phase over 800 iterations for the
task of multimodal sentiment analysis. It can be found that
the learning curve of the two-stage model oscillates and
converges slowly, which indicates that the two-stage model
is probably not a superior solution. To handle these challenges, the
objective of multimodal domain adaptation can be defined
as: (1) Exploring the relationship between domains and
modalities; (2) Finding a common domain-invariant cross-
modal representation space to align domains and modalities
simultaneously.
Therefore, in this paper, we design a One-Stage
Alignment Network (OSAN) to unify multimodal align-
ment and domain adaptation in one stage. Fig.1 shows the
conception of our one-stage model. Our method benefits
from: (1) The modality and domain are associated and in-
teracted to capture the relationship between domains and
modalities, which can provide rich complementary infor-
mation to each other. (2) Multimodal alignment and domain
adaptation are unified in one stage, which allows our model
to perform domain adaptation and leverage multimodal in-
formation at the same time. In Fig.2, we observe that the
learning curve of our method is relatively stable and con-
verges better, which indicates that exploring the relation be-
tween modality and domain contributes to our task.
In summary, our contributions are as follows:
(1) To capture the relationship between domain and
modality, we propose a one-stage alignment network, called
OSAN, to associate domain and modality. In this way,
a joint domain-invariant and cross-modal representation
space is learned in one stage.
(2) We design a TAL module to bring sufficient interac-
tions between domains and modalities and guide them to
utilize complementary information for each other.
(3) To effectively bridge distinct domains, a DDG mod-
ule is developed to dynamically construct multiple new do-
mains by combining knowledge of source and target do-
mains and exploring intrinsic structure of data distribution.
(4) Extensive experiments on two totally different tasks
demonstrate the effectiveness of our method compared to
the supervised and strong UDA methods.
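As a rough, hedged illustration of the DDG idea only, the sketch below builds transitional feature batches by convexly mixing source and target representations; the actual module is learned, works jointly with the TAL tensor alignment, and is not reproduced here.

import torch

def dynamic_domain_mix(src_feat, tgt_feat, num_domains=4, jitter=0.05):
    """src_feat, tgt_feat: (B, D) features from the two domains.
    Returns a list of mixed batches interpolating source -> target."""
    mixed = []
    for i in range(1, num_domains + 1):
        lam = i / (num_domains + 1)                                  # base mixing ratio
        lam = lam + jitter * torch.randn(src_feat.size(0), 1,
                                         device=src_feat.device)    # dynamic perturbation
        lam = lam.clamp(0.0, 1.0)
        mixed.append((1.0 - lam) * src_feat + lam * tgt_feat)
    return mixed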
|
Liu_Target-Referenced_Reactive_Grasping_for_Dynamic_Objects_CVPR_2023
|
Abstract
Reactive grasping, which enables the robot to success-
fully grasp dynamic moving objects, is of great interest in
robotics. Current methods mainly focus on the temporal
smoothness of the predicted grasp poses but few consider
their semantic consistency. Consequently, the predicted
grasps are not guaranteed to fall on the same part of the
same object, especially in cluttered scenes. In this paper,
we propose to solve reactive grasping in a target-referenced
setting by tracking through generated grasp spaces. Given
a targeted grasp pose on an object and detected grasp
poses in a new observation, our method is composed of two
stages: 1) discovering grasp pose correspondences through
an attentional graph neural network and selecting the one
with the highest similarity with respect to the target pose; 2)
refining the selected grasp poses based on target and histor-
ical information. We evaluate our method on a large-scale
benchmark GraspNet-1Billion. We also collect 30 scenes
of dynamic objects for testing. The results suggest that our
method outperforms other representative methods. Further-
more, our real robot experiments achieve an average suc-
cess rate of over 80 percent. Code and demos are available
at:https://graspnet.net/reactive .
|
1. Introduction
Reactive grasping is in great demand in the industry.
For instance, in places where human-robot collaboration is
heavily required like factories, stress on laborers will be sig-
nificantly relieved if robots can receive tools from humans
and complete the harder work for laborers. Such a vision is
based on reactive grasping.
On the contrary to static environments, in reactive grasp-
ing, dynamic task setting poses new challenges for algo-
rithm design. Previous research in this area mainly focuses
† Cewu Lu is the corresponding author, a member of Qing Yuan Re-
search Institute and MoE Key Lab of Artificial Intelligence, AI Institute,
Shanghai Jiao Tong University, Shanghai, China
Figure 1. (a) Classic reactive grasping guarantees the smoothness
of the grasp poses but cannot predict grasps on the same part of
the hammer. (b) Our target-referenced reactive grasping takes se-
mantic consistency into consideration. The generated grasp poses
across frames are illustrated with blue grippers.
on planning temporally smooth grasps [22, 42] to avoid
wavy and jerky robot motion. Few of them pay attention
to its semantic consistency. In short, given a targeted grasp
at the first frame, we want the robot to grasp the same part
of the object in the following frames. Additionally, it is not
guaranteed that grasp predictions made by classical meth-
ods fall on the same object in cluttered scenes. Hence, most
of their experiments are conducted on single-object scenes.
Unlike previous works, this work is aimed at achieving tem-
porally smooth and semantically consistent reactive grasp-
ing in clutter given a targeted grasp. We refer to such a task
setting as target-referenced reactive grasping as shown in
Fig.1. Note that despite robot handover is a major applica-
tion scenario of reactive grasping, this work focuses on a
more general task setting - dynamic object grasping.
A naive idea to solve this task is to generate reference
grasp poses for the initial scene and consecutively track the
object’s 6D pose. As the object moves, the initial grasp
pose can be projected to a new coming frame based on the
object’s 6D pose. Although such an idea seems to be natu-
ral and valid, some bottleneck greatly degrades its viabil-
ity. First of all, the solution to reactive grasping should
be able to handle objects’ motion in real-time, meaning
that it requires fast inference speed and immediate response
to continuous environmental changes. However, 6D pose
tracking may be time-consuming due to commonly-used in-
stance segmentation [11,40]. Second, 6D pose tracking usu-
ally requires objects’ prior knowledge, such as CAD mod-
els [4,41], which is not always available in the real world as
well or achieves only category-level generalization [37].
Different from tracking objects, we propose to track
grasps by a two-stage policy instead. We also comply
with the restriction that no prior knowledge of the ob-
jects is allowed. Given a target grasp on a partial-view
point cloud, we first discover its corresponding grasp among
future frame’s detected grasp poses as coarse estimation.
These gasp poses can be given by an off-the-shelf grasp de-
tector. Inspired by recent progress in local feature match-
ing, which often uses image descriptors like SIFT [17] to
describe interesting regions of images, we view grasp poses
and their corresponding features as geometric descriptors
on a partial-view point cloud. Based on such an assump-
tion, we can simply estimate correspondences between two
grasp sets from two different observation frames by match-
ing the associated grasp features. Note that in opposition
to classical local feature matching, features of the entire
scene are also incorporated to help achieve global aware-
ness. Furthermore, since consecutive matching along an ob-
servation sequence may lead to error accumulation, on top
of the coarse estimation through correspondence matching
we further use a memory-augmented coarse-to-fine module
which uses both target grasp features and historical grasp
features to refine the grasp tracking results for better tem-
poral smoothness and semantic consistency.
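A minimal Python sketch of the coarse correspondence step described above: the target grasp is matched against the grasps detected in a new frame by comparing grasp feature descriptors. The refinement with target and historical features is omitted, and the tensor layouts are assumptions.

import torch
import torch.nn.functional as F

def match_target_grasp(target_feat, candidate_feats, candidate_poses):
    """target_feat: (D,) descriptor of the grasp tracked so far.
    candidate_feats: (K, D) descriptors of grasps detected in the new frame.
    candidate_poses: (K, 4, 4) their 6-DoF poses as homogeneous transforms.
    Returns the most similar candidate, i.e. the coarse estimate before refinement."""
    sim = F.cosine_similarity(candidate_feats, target_feat.unsqueeze(0), dim=-1)  # (K,)
    best = sim.argmax()
    return candidate_poses[best], sim[best]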
We conduct extensive experiments on two benchmarks
to evaluate our method and demonstrate its effectiveness.
The results show that our method outperforms two repre-
sentative baseline methods. We also conduct real robot ex-
periments on both single-object scenes and cluttered scenes.
We report success rates of 81.25% for single-object scenes
and81.67% for multi-object scenes.
|
Kennerley_2PCNet_Two-Phase_Consistency_Training_for_Day-to-Night_Unsupervised_Domain_Adaptive_Object_CVPR_2023
|
Abstract
Object detection at night is a challenging problem due
to the absence of night image annotations. Despite several
domain adaptation methods, achieving high-precision re-
sults remains an issue. False-positive error propagation is
still observed in methods using the well-established student-
teacher framework, particularly for small-scale and low-
light objects. This paper proposes a two-phase consistency
unsupervised domain adaptation network, 2PCNet, to ad-
dress these issues. The network employs high-confidence
bounding-box predictions from the teacher in the first phase
and appends them to the student’s region proposals for the
teacher to re-evaluate in the second phase, resulting in a
combination of high and low confidence pseudo-labels. The
night images and pseudo-labels are scaled-down before be-
ing used as input to the student, providing stronger small-
scale pseudo-labels. To address errors that arise from low-
light regions and other night-related attributes in images,
we propose a night-specific augmentation pipeline called
NightAug. This pipeline involves applying random aug-
mentations, such as glare, blur, and noise, to daytime im-
ages. Experiments on publicly available datasets demon-
strate that our method achieves superior results to state-of-
the-art methods by 20%, and to supervised models trained
directly on the target data.1
|
1. Introduction
Nighttime object detection is critical in many applica-
tions. However, the requirement of annotated data by su-
pervised methods is impractical, since night data with anno-
tations is few, and supervised methods are generally prone
to overfitting to the training data. Among other reasons,
this scarcity is due to poor lighting conditions which makes
nighttime images hard to annotate. Hence, methods that
1www.github.com/mecarill/2pcnet
Figure 1. Qualitative results of state-of-the-art DA methods, DA
Faster-RCNN [3], UMT [7], Adaptive Teacher (AT) [15] and our
method 2PCNet on the BDD100K [36] dataset. Unlike the SOTA
methods, our method is able to detect dark and small scale objects
with minimal additional false positive predictions.
do not assume the availability of the annotations are more
advantageous. Domain adaptation (DA) is an efficient solu-
tion to this problem by allowing the use of readily available
annotated source daytime datasets.
A few domain adaptation methods have been proposed,
e.g., adversarial learning which uses image and instance
level classifiers [3] and similar concepts [22, 32]. However,
these methods isolate the domain adaptation task purely to-
wards the feature extractor, and suppress features of the
target data for the sake of domain invariance. Recent un-
supervised domain adaptation methods exploit the student-
teacher framework (e.g. [1,7,11,15]). Since the student ini-
tially learns from the supervised loss, there is a bias towards
the source data. Augmentation [7,11] and adversarial learn-
ing [15] have been proposed to address this problem. Un-
fortunately, particularly for day-to-night unsupervised do-
main adaptation, these methods suffer from a large num-
ber of inaccurate pseudo-labels produced by the teacher. In
our investigation, the problem is notably due to insufficient
knowledge of small scale features in the nighttime domain,
which are then propagated through the learning process be-
tween the teacher and student, resulting in poor object de-
tection performance.
To address the problem, in this paper, we present 2PC-
Net, a two-phase consistency unsupervised domain adapta-
tion network for nighttime object detection. Our 2PCNet
merges the bounding-boxes of highly-confident pseudo-
labels, which are predicted in phase one, together with re-
gions proposed by the student’s region proposal network
(RPN). The merged proposals are then used by the teacher
to generate a new set of pseudo-labels in phase two. This
provides a combination of high and low confidence pseudo-
labels. These pseudo-labels are then matched with pre-
dictions generated by the student. We can then utilise a
weighted consistency loss to ensure that a higher weightage
of our unsupervised loss is based on stronger pseudo-labels,
yet allow for weaker pseudo-labels to influence the training.
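A hedged Python sketch of the two-phase labeling and the weighted consistency loss described above; the detector interface (teacher.detect, teacher.classify_proposals), the confidence threshold, and the weighting scheme are illustrative assumptions rather than the actual 2PCNet code.

import torch
import torch.nn.functional as F

def two_phase_pseudo_labels(teacher, student_rpn_proposals, night_image,
                            high_thresh=0.8):
    with torch.no_grad():
        # Phase 1: keep only highly confident teacher detections.
        boxes, scores, labels = teacher.detect(night_image)          # assumed API
        confident_boxes = boxes[scores > high_thresh]
        # Phase 2: append them to the student's region proposals and let the
        # teacher re-evaluate everything, giving mixed-confidence pseudo-labels.
        merged = torch.cat([confident_boxes, student_rpn_proposals], dim=0)
        p_boxes, p_scores, p_labels = teacher.classify_proposals(
            night_image, merged)                                      # assumed API
    return p_boxes, p_scores, p_labels

def weighted_consistency_loss(student_logits, pseudo_labels, pseudo_scores):
    """Per-box cross-entropy weighted by teacher confidence, so strong
    pseudo-labels dominate while weak ones still contribute."""
    ce = F.cross_entropy(student_logits, pseudo_labels, reduction="none")
    return (pseudo_scores * ce).sum() / pseudo_scores.sum().clamp(min=1e-6)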
Equipped with this two-phase strategy, we address the
problem of errors from small-scale objects. We devise
a student-scaling technique, where night images and their
pseudo-labels for the student are deliberately scaled down.
In order to generate accurate pseudo-labels, images to the
teacher remain at their full scale. This results in the pseudo-
labels of larger objects, which are easier to predict, to be
scaled down to smaller objects, allowing for an increase in
small scale performance of the student.
Nighttime images suffer from multiple complications not
found in daytime scenes such as dark regions, glare, promi-
nent noise, prominent blur, imbalanced lighting, etc. All
these cause a problem, since the student, which was trained
on daytime images, is much more biased towards the day-
time domain’s characteristics. To mitigate this problem,
we propose NightAug, a set of random nighttime specific
augmentations. NightAug adds artificial glare, noise, blur,
and other effects that mimic night conditions to daytime
images. With NightAug, we are able to reduce the bias
of the student network towards the source data without re-
sorting to adversarial learning or compute-intensive trans-
lations. Overall, using 2PCNet, we can see the qualitative
improvements of our result in Figure 1. In summary, the
contributions of this paper are as follows:
• We present 2PCNet, a two-phase consistency approach
for student-teacher learning. 2PCNet takes advantage
of highly confident teacher labels augmented with less
confident regions, which are proposed by the scaled
student. This strategy produces a sharp reduction of
the error propagation in the learning process.
• To address the bias of the student towards the source
domain, we propose NightAug, a random night spe-cific augmentation pipeline to shift the characteristics
of daytime images toward nighttime.
• The effectiveness of our approach has been verified by
comparing it with the state-of-the-art domain adapta-
tion approaches. An improvement of +7.9 AP (+20%)
and +10.2 AP (26%) over the SOTA on BDD100K and
SHIFT has been achieved, respectively.
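To complement the two-phase sketch above, the following is a hedged illustration of a NightAug-style pipeline on a daytime image tensor in [0, 1]; the specific effects, probabilities, and parameter ranges are assumptions, not the authors' exact augmentation settings.

import random
import torch
import torch.nn.functional as F

def night_aug(img):
    """img: (3, H, W) daytime image. Randomly darken, add glare, noise, blur."""
    if random.random() < 0.9:                       # global darkening
        img = img * random.uniform(0.2, 0.7)
    if random.random() < 0.5:                       # artificial glare spot
        _, h, w = img.shape
        cy, cx = random.randrange(h), random.randrange(w)
        yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        sigma = random.uniform(10.0, 60.0)
        glare = torch.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        img = img + glare.unsqueeze(0)
    if random.random() < 0.5:                       # sensor noise
        img = img + 0.05 * torch.randn_like(img)
    if random.random() < 0.3:                       # cheap blur via average pooling
        img = F.avg_pool2d(img.unsqueeze(0), 3, stride=1, padding=1).squeeze(0)
    return img.clamp(0.0, 1.0)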
|
Ling_ShadowNeuS_Neural_SDF_Reconstruction_by_Shadow_Ray_Supervision_CVPR_2023
|
Abstract
By supervising camera rays between a scene and multi-
view image planes, NeRF reconstructs a neural scene rep-
resentation for the task of novel view synthesis. On the
other hand, shadow rays between the light source and the
scene have yet to be considered. Therefore, we propose a
novel shadow ray supervision scheme that optimizes both
the samples along the ray and the ray location. By su-
pervising shadow rays, we successfully reconstruct a neu-
ral SDF of the scene from single-view images under mul-
tiple lighting conditions. Given single-view binary shad-
ows, we train a neural network to reconstruct a complete
scene not limited by the camera’s line of sight. By further
modeling the correlation between the image colors and the
shadow rays, our technique can also be effectively extended
to RGB inputs. We compare our method with previous works
on challenging tasks of shape reconstruction from single-
view binary shadow or RGB images and observe signif-
icant improvements. The code and data are available at
https://github.com/gerwang/ShadowNeuS .
|
1. Introduction
Neural field [ 43] has been used for 3D scene representa-
tion in recent years. It achieves remarkable quality because
of the ability to continuously parameterize a scene with a
compact neural network. The neural network nature makes
it amenable to various optimization tasks in 3D vision, in-
cluding long-standing problems like image-based [ 28,51]
and point cloud-based [ 26,31] 3D reconstruction. So more
and more works are using neural fields as the 3D scene rep-
resentation for various related tasks.
Among these works, NeRF [ 27] is a representative
method that incorporates a part of physically based light
transport [ 38] into the neural field. The light transport de-
scribes light travels from the light source to the scene and
then from the scene to the camera. NeRF considers the latter
part to model the interaction between the scene and the cam-
eras along the camera rays (rays from the camera through
*Corresponding author
Figure 1. Our method can reconstruct neural scenes from single-
view images captured under multiple lightings by effectively lever-
aging a novel shadow ray supervision scheme. (Left: single-view
inputs; right: reconstructions viewed at novel views.)
the scene). By supervising these camera rays of different
viewpoints with the corresponding recorded images, NeRF
optimizes a neural field to represent the scene. Then NeRF
casts camera rays from novel viewpoints through the opti-
mized neural field to generate novel-view images.
However, NeRF does not model the rays from the scene
to the light source, which motivates us to consider: can we
optimize a neural field by supervising these rays? These
rays are often called shadow rays as the light emitted from
the light source can be absorbed by scene particles along the
rays, resulting in varying light visibility (a.k.a. shadows) at
the scene surface. By recording the incoming radiance at
the surface, we should be able to supervise the shadow rays
to infer the scene geometry.
Given this observation, we derive a novel problem of su-
pervising the shadow rays to optimize a neural field rep-
resenting the scene, analogizing to NeRF that models the
camera rays. Like multiple viewpoints in NeRF, we illumi-
nate the scene multiple times using different light directions
to obtain sufficient observations. For each illumination, we
use a fixed camera to record the light visibility at the scene
surface as supervision labels for the shadow rays. As rays
connecting the scene and the light source march through the
3D space, we can reconstruct a complete 3D shape not con-
strained by the camera’s line of sight.
We solve several challenges when supervising the
shadow rays using camera inputs. In NeRF, each ray’s
position can be uniquely determined by the known cam-
era center, but shadow rays need to be determined by the
scene surface, which is not given and has yet to be recon-
structed. We solve this using an iterative updating strategy,
where we sample shadow rays starting at the current sur-
face estimation. More importantly, we make the sampled
locations differentiable to the geometry representation, thus
can optimize the starting positions of shadow rays. How-
ever, this technique is insufficient to derive correct gradi-
ents at surface boundaries with abrupt depth changes, which
coincides with recent findings in differentiable rendering
[2,20,23,40,54]. Thus, we compute surface boundaries
by aggregating shadow rays starting at multiple depth can-
didates. It remains efficient as boundaries only occupy a
small amount of surface, while it significantly improves
the surface reconstruction quality. In addition, RGB val-
ues recorded by the camera encode the outgoing radiance at
the surface instead of the incoming radiance. The outgoing
radiance is a coupling effect of light, material, and surface
orientation. We propose to model the material and surface
orientation to decompose the incoming radiance from RGB
inputs to achieve reconstruction without needing shadow
segmentation (Row 1 and 2 in Fig. 1). As material modeling
is optional, our framework can also take binary shadow im-
ages [ 18] to achieve shape reconstruction (Row 3 in Fig. 1).
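A minimal Python sketch of differentiable light visibility along a shadow ray, using a generic sigmoid-style SDF-to-density mapping; the paper's exact density definition, sampling scheme, and boundary handling are not reproduced here.

import torch

def light_visibility(sdf_fn, surface_pts, light_dir, near=0.02, far=2.0,
                     n_samples=64, inv_s=64.0):
    """surface_pts: (N, 3) current surface estimates; light_dir: (3,) unit
    direction toward a distant light. Returns the transmittance in [0, 1]
    along each shadow ray, i.e. the predicted (soft) shadow value."""
    ts = torch.linspace(near, far, n_samples, device=surface_pts.device)   # (S,)
    pts = surface_pts[:, None, :] + ts[None, :, None] * light_dir          # (N,S,3)
    sdf = sdf_fn(pts.reshape(-1, 3)).reshape(pts.shape[:2])                # (N,S)
    density = inv_s * torch.sigmoid(-inv_s * sdf)        # crude SDF -> density
    delta = (far - near) / n_samples
    return torch.exp(-(density * delta).sum(dim=-1))                       # (N,)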
We compare our method with previous single-view re-
construction methods (including shadow-only and RGB-
based) and observe significant improvements in shape re-
construction. Theoretically, our method handles a dual
problem of NeRF. Comparing the corresponding parts of
the two techniques can therefore give readers a deeper un-
derstanding of the essence of neural scene representation,
as well as of the relationship between the two.
Our contributions are:
• A framework that exploits light visibility to reconstruct
neural SDF from shadow or RGB images under multi-
ple light conditions.
• A shadow ray supervision scheme that embraces dif-
ferentiable light visibility by simulating physical inter-
actions along shadow rays, with efficient handling of
surface boundaries.
• Comparisons with previous works on either RGB or
binary shadow inputs to verify the accuracy and com-
pleteness of the reconstructed scene representation.
|
Lee_Decomposed_Cross-Modal_Distillation_for_RGB-Based_Temporal_Action_Detection_CVPR_2023
|
Abstract
Temporal action detection aims to predict the time inter-
vals and the classes of action instances in the video. Despite
the promising performance, existing two-stream models ex-
hibit slow inference speed due to their reliance on compu-
tationally expensive optical flow. In this paper, we intro-
duce a decomposed cross-modal distillation framework to
build a strong RGB-based detector by transferring knowl-
edge of the motion modality. Specifically, instead of direct
distillation, we propose to separately learn RGB and motion
representations, which are in turn combined to perform ac-
tion localization. The dual-branch design and the asymmet-
ric training objectives enable effective motion knowledge
transfer while preserving RGB information intact. In addi-
tion, we introduce a local attentive fusion to better exploit
the multimodal complementarity. It is designed to preserve
the local discriminability of the features that is important
for action localization. Extensive experiments on the bench-
marks verify the effectiveness of the proposed method in en-
hancing RGB-based action detectors. Notably, our frame-
work is agnostic to backbones and detection heads, bring-
ing consistent gains across different model combinations.
|
1. Introduction
With the popularization of mobile devices, a significant
number of videos are generated, uploaded, and shared ev-
ery single day through various online platforms such as
YouTube and TikTok. Accordingly, there arises the impor-
tance of automatically analyzing untrimmed videos. As one
of the major tasks, temporal action detection (or localiza-
tion) [ 56] has attracted much attention, whose goal is to find
the time intervals of action instances in the given video. In
recent years, a lot of efforts have been devoted to improving
the action detection performance [ 28–30,36,37,74,79,84].
Most existing action detectors take as input two-stream
data consisting of RGB frames and motion cues, e.g., opti-
cal flow [ 21,66,78]. Indeed, it is widely known that differ-
*Corresponding author
Figure 1. Comparison between conventional distillation and ours:
(a) conventional distillation; (b) decomposed distillation (ours).
Framework       Method              RGB+OF   RGB    ∆      (Average mAP, %)
Anchor-based    G-TAD [74]          41.5     26.9   −14.6
Anchor-free     AFSD [34]           52.4     43.3   −9.1
                Actionformer [80]   62.2     55.5   −6.7
DETR-like       TadTR [42]          56.7     46.0   −10.7
Proposal-free   TAGS [47]           52.8     47.9   −4.9
Table 1. Impact of motion modality. We measure the average mAP
under the IoU thresholds of [0.3:0.7:0.1] on THUMOS’14.
ent modalities provide complementary information [ 6,24,
58,69]. To examine how much two-stream action detec-
tors rely on the motion modality, we conduct an ablative
study using a set of representative models1. As shown in
Table 1, regardless of the framework types, all the mod-
els experience sharp performance drops when the motion
modality is absent, probably due to the static bias of video
models [ 11,27,31,32]. This indicates that explicit motion
cues are essential for accurate action detection.
However, two-stream action detectors pose a practical
dilemma for real-world applications due to the heavy com-
putational cost of motion modality. For instance, the most
popular form of motion cues for action detection, TV- L1
optical flow [ 66], is not real-time, taking 1.8 minutes to
process a 1-min 224×224 video of 30 fps on a single
GPU [ 58]. Although cheaper motion clues such as temporal
gradient [ 63,70,85] can be alternatives, two-stream models
still exhibit inefficiency at inference by doubling the net-
1Each model is reproduced by its official codebase.
work forwarding process. Therefore, it would be desirable
to build strong RGB-based action detectors that can bridge
the performance gap with two-stream methods.
To this end, we focus on cross-modal knowledge distilla-
tion [ 12,16], where the helpful knowledge of motion modal-
ity is transferred to an RGB-based action detector during
training in order to improve its performance. In contrast to
conventional knowledge distillation [ 19,20,46,52] where
the superior teacher guides the weak student, cross-modal
distillation requires exploiting the complementarity of the
teacher and student. However, existing cross-modal distil-
lation approaches [ 12,13] fail to consider the difference and
directly transfer the motion knowledge to the RGB model
(Fig. 1a), as conventional distillation does. By design, the
RGB and motion information are entangled with each other,
making it difficult to balance between them. As a result,
they often achieve limited gains without careful tuning.
To tackle the issue, we introduce a novel framework,
named decomposed cross-modal distillation (Fig. 1b). In
detail, our model adopts the split-and-merge paradigm,
where the high-level features are decomposed into appear-
ance and motion components within a dual-branch design.
Then only the motion branch receives the distillation signal,
while the other branch remains intact to learn appearance in-
formation. For explicit decomposition, we adopt the shared
detection head and the asymmetric objective functions for
the branches. Moreover, we design a novel attentive fusion
to effectively combine the multimodal information provided
by the two branches. In contrast to existing attention meth-
ods, the proposed fusion preserves local sensitivity which
is important for accurate action detection. With these key
components, we build a strong action detector that produces
precise action predictions given only RGB frames.
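As a rough illustration of the asymmetric training objective described above, the hedged PyTorch sketch below splits RGB features into an appearance branch and a motion branch and applies the distillation term only to the latter; the module structure, feature shapes, and the simple additive fusion are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedDistillationSketch(nn.Module):
    """Dual-branch sketch: the appearance branch is trained only by the
    detection loss, while the motion branch additionally receives a
    feature-distillation signal from a (frozen) optical-flow teacher."""

    def __init__(self, dim=256):
        super().__init__()
        self.appearance_branch = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.motion_branch = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, rgb_feat, teacher_motion_feat=None):
        # rgb_feat: (B, C, T) snippet-level features from an RGB backbone
        app = self.appearance_branch(rgb_feat)   # appearance component (kept intact)
        mot = self.motion_branch(rgb_feat)       # motion component (receives distillation)
        fused = app + mot                        # placeholder for the attentive fusion
        distill_loss = rgb_feat.new_zeros(())
        if teacher_motion_feat is not None:
            # only the motion branch is pulled toward the flow teacher
            distill_loss = F.mse_loss(mot, teacher_motion_feat.detach())
        return fused, distill_loss

# usage sketch with hypothetical tensors
model = DecomposedDistillationSketch(dim=256)
rgb = torch.randn(2, 256, 128)            # RGB snippet features
flow_teacher = torch.randn(2, 256, 128)   # teacher (optical-flow) features
fused, l_distill = model(rgb, flow_teacher)
# a total loss would combine a detection loss on `fused` with lambda_kd * l_distill
```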
We conduct extensive experiments on the popular bench-
marks, THUMOS’14 [ 22] and ActivityNet1.3 [ 4]. Experi-
mental results show that the proposed framework enables
effective cross-modal distillation by separating the RGB
and motion features. Consequently, our model largely im-
proves the performance of RGB-based action detectors, ex-
hibiting its superiority over conventional distillation. The
resulting RGB-based action detectors effectively bridge the
gap with two-stream models. Moreover, we validate our ap-
proach by utilizing another motion clue, i.e., temporal gra-
dient, which has been underexplored for action detection.
To summarize, our contributions are three-fold: 1) We
propose a decomposed cross-modal distillation framework,
where motion knowledge is transferred in a separate way
such that appearance information is not harmed. 2) We de-
sign a novel attentive fusion method that is able to exploit
the complementarity of two modalities while sustaining the
local discriminability of features. 3) Our method is gener-
alizable to various backbones and detection heads, showing
consistent improvements.
|
Liu_Class_Adaptive_Network_Calibration_CVPR_2023
|
Abstract
Recent studies have revealed that, beyond conventional
accuracy, calibration should also be considered for train-
ing modern deep neural networks. To address miscalibra-
tion during learning, some methods have explored different
penalty functions as part of the learning objective, along-
side a standard classification loss, with a hyper-parameter
controlling the relative contribution of each term. Never-
theless, these methods share two major drawbacks: 1) the
scalar balancing weight is the same for all classes, hinder-
ing the ability to address different intrinsic difficulties or
imbalance among classes; and 2) the balancing weight is
usually fixed without an adaptive strategy, which may pre-
vent from reaching the best compromise between accuracy
and calibration, and requires hyper-parameter search for
each application. We propose Class Adaptive Label Smooth-
ing (CALS) for calibrating deep networks, which allows
to learn class-wise multipliers during training, yielding a
powerful alternative to common label smoothing penalties.
Our method builds on a general Augmented Lagrangian
approach, a well-established technique in constrained opti-
mization, but we introduce several modifications to tailor it
for large-scale, class-adaptive training. Comprehensive eval-
uation and multiple comparisons on a variety of benchmarks,
including standard and long-tailed image classification, se-
mantic segmentation, and text classification, demonstrate the
superiority of the proposed method. The code is available at
https://github.com/by-liu/CALS .
|
1. Introduction
Deep Neural Networks (DNNs) have become the prevail-
ing model in machine learning, particularly for computer
vision [ 13] and natural language processing applications [ 44].
Increasingly powerful architectures [ 3,13,24], learning meth-
ods [ 4,12] and a large body of other techniques [ 15,27] are
constantly introduced. Nonetheless, recent studies [ 11,31]
have shown that regardless of their superior discriminative
performance, high-capacity modern DNNs are poorly cali-
brated, i.e. failing to produce reliable predictive confidences.
Specifically, they tend to yield over-confident predictions,
where the probability associated with the predicted class
overestimates the actual likelihood. Since this is a critical is-
sue in safety-sensitive applications like autonomous driving
or computational medical diagnosis, the problem of DNN
calibration has been attracting increasing attention in recent
years [ 11,31,38].
Current calibration methods can be categorized into two
main families. The first family involves techniques that per-
form an additional post-processing parameterized operation
on the output logits (or pre-softmax activations) [ 11], with
the calibration parameters of that operation obtained from a
validation set by either learning or grid-search. Despite the
simplicity and low computational cost, these methods have
empirically proven to be highly effective [ 8,11]. However,
their main drawback is that the choice of the optimal cali-
bration parameters is highly sensitive to the trained model
instance and validation set [ 22,31].
The second family of methods attempts to simultaneously
optimize for accuracy and calibration during network train-
ing. This is achieved by introducing, explicitly or implicitly,
a secondary optimization goal involving the model’s predic-
tive uncertainty, alongside the main training objective. As
a result, a scalar balancing hyper-parameter is required to
tune the relative contribution of each term in the overall loss
function. Some examples of this type of approaches include:
Explicit Confidence Penalty (ECP) [ 38], Label Smoothing
(LS) [ 32], Focal Loss (FL) [ 21] and its variant, Sample-
Dependent Focal Loss (FLSD) [ 31]. It has been recently
demonstrated in [ 22] that all these methods can be formu-
lated as different penalty terms that enforce the same equal-
ity constraint on the logits of the DNN: driving the logit
distances towards zero. Here, logit distances refers to the
vector of L1 distances between the highest logit value and the
rest. Observing the non-informative nature of this equality
constraint, [ 22] proposed to use a generalized inequality con-
straint, only penalizing those logits for which the distance is
larger than a pre-defined margin, achieving state-of-the-art
calibration performance on many different benchmarks.
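To make the notion of logit distances and the margin-based inequality constraint concrete, here is a small hedged PyTorch sketch; the margin value and the mean reduction are illustrative choices, and the exact penalty used in [22] may differ in detail.

```python
import torch
import torch.nn.functional as F

def logit_distances(logits):
    """d_k = max_j(logits_j) - logits_k: the vector of L1 distances between
    the highest logit and every logit (zero at the winning class)."""
    return logits.max(dim=-1, keepdim=True).values - logits

def margin_penalty(logits, margin=10.0):
    """Generalized inequality constraint: only distances exceeding the margin
    are penalized; the equality-constraint case is recovered with margin=0."""
    d = logit_distances(logits)
    return F.relu(d - margin).mean()

logits = torch.tensor([[12.0, 3.0, 1.0, -2.0]])
print(logit_distances(logits))              # tensor([[ 0.,  9., 11., 14.]])
print(margin_penalty(logits, margin=10.0))  # mean of [0, 0, 1, 4] = 1.25
```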
Although learning based methods achieve greater calibra-
Figure 1. Many techniques have been proposed for jointly improving accuracy and calibration during training [ 11,31], but they fail to consider
uneven learning scenarios like high class imbalance or long-tail distributions. We show a comparison of the proposed CALS-ALM method
and different learning approaches in terms of Calibration Error (ECE) vs Accuracy on the (a) ImageNet and (b) ImageNet-LT (long-tailed
ImageNet) datasets. A lower ECE indicates better calibration: a better model should attain high ACC and low ECE . Among all the
considered methods, CALS-ALM shows superior performance when considering both discriminative power and well-balanced probabilistic
predictions, achieving best accuracy and calibration on ImageNet, and best calibration and second best accuracy on ImageNet-LT.
tion performance [ 22,31], they have two major limitations:
1) The scalar balancing weight is equal for all classes. This
hinders the network performance when some classes are
harder to learn or less represented than others, such as in
datasets with a large number of categories (ImageNet) or
considerable class imbalance (ImageNet-LT). 2) The balanc-
ing weight is usually fixed before network optimization, with
no learning or adaptive strategy throughout training. This
can prevent the model from reaching the best compromise
between accuracy and calibration. To address the above is-
sues, we introduce Class Adaptive Label Smoothing method
based on an Augmented Lagrangian Multiplier algorithm,
which we refer to as CALS-ALM. Our Contributions can
be summarized as follows:
•We propose Class Adaptive Label Smoothing (CALS) for
network calibration. Adaptive class-wise multipliers are
introduced instead of the widely used single balancing
weight, which addresses the above two issues: 1) CALS
can handle a high number of classes with different intrinsic
difficulties, e.g. ImageNet; 2) CALS can effectively learn
from data suffering from class imbalance or a long-tailed
distribution, e.g. ImageNet-LT.
•Different from previous penalty based methods, we solve
the resulting constrained optimization problem by imple-
menting a modified Augmented Lagrangian Multiplier
(ALM) algorithm, which yields adaptive and optimal
weights for the constraints. We make some critical de-
sign decisions in order to adapt ALM to the nature of
modern learning techniques: 1) The inner convergence cri-
terion in ALM is relaxed to a fixed number of iterations in
each inner stage, which is amenable to mini-batch stochastic gradient optimization in deep learning. 2) Popular
techniques, such as data augmentation, batch normaliza-
tion [ 15] and dropout [ 10], rule out the possibility of track-
ing original samples and applying sample-wise multipliers.
To overcome this complication, we introduce class-wise
multipliers, instead of sample-wise multipliers in the stan-
dard ALM. 3) The outer-step update for estimating optimal
ALM multipliers is performed on the validation set, which
is meaningful for training on large-scale training set and
avoids potential overfitting.
•Comprehensive experiments over a variety of applications
and benchmarks, including standard image classification
(Tiny-ImageNet and ImageNet), long-tailed image clas-
sification (ImageNet-LT), semantic segmentation (PAS-
CAL VOC 2012), and text classification (20 Newsgroups),
demonstrate the effectiveness of our CALS-ALM method.
As shown in Figure 1 , CALS-ALM yields superior per-
formance over baselines and state-of-the-art calibration
losses when considering both accuracy and calibration, es-
pecially for more realistic large-scaled datasets with large
number of classes or class imbalance.
|
Kim_Grounding_Counterfactual_Explanation_of_Image_Classifiers_to_Textual_Concept_Space_CVPR_2023
|
Abstract
Concept-based explanation aims to provide concise and
human-understandable explanations of an image classifier.
However, existing concept-based explanation methods typ-
ically require a significant amount of manually collected
concept-annotated images. This is costly and runs the
risk of human biases being involved in the explanation.
In this paper, we propose Counterfactual explanation with
text-driven concepts (CounTEX), where the concepts are
defined only from text by leveraging a pre-trained multi-
modal joint embedding space without additional concept-
annotated datasets. A conceptual counterfactual explana-
tion is generated with text-driven concepts. To utilize the
text-driven concepts defined in the joint embedding space to
interpret target classifier outcome, we present a novel pro-
jection scheme for mapping the two spaces with a simple yet
effective implementation. We show that CounTEX generates
faithful explanations that provide a semantic understanding
of model decision rationale robust to human bias.
|
1. Introduction
Explainable artificial intelligence (XAI) aims to unveil
the reasoning process of a black-box deep neural network.
In the vision field, heatmap-style explanation has been ex-
tensively studied to interpret image classifiers [20, 21, 24].
However, simply highlighting the pixels that significantly
contribute to model outcome does not answer intuitive and
actionable questions such as “What aspect of the region
is important? Is it color? Or pattern?”. On the other
hand, drawing human-understandable rationale from the
highlighted pixels requires domain expert’s intervention and
can thus be impacted by the human subjectivity [11].
In contrast, concept-based explanation can provide a
more human-understandable and high-level semantic expla-
Figure 1. (a) Conventional concept-based explanation derives a
CAV with the target model’s embedding of manually collected
concept-annotated images. (b) CounTEX derives the concept di-
rection directly from texts in CLIP latent space.
nation [3, 4, 7, 9, 11]. Concept fundamentally indicates an
abstract idea, and it is generally equated as a word such as
“stripe” or “red”. The earliest approach to interpret how
a specific concept affects the outcome of the target image
classifier is the concept activation vector, or CAV [11]. A CAV
represents the direction of a concept within the target clas-
sifier embedding space and has been widely adopted in sub-
sequent concept-based explanations [15, 19, 25].
However, the acquisition of CAVs requires collections of
human annotations. The CAV of a concept is typically pre-
computed via two steps as depicted in Figure 1 (a); 1) col-
lecting a number of positive and negative images that best
represent a concept (e.g., images with and without stripes),
2) training a linear classifier (commonly support vector ma-
chine) with the images. The vector normal to the deci-
sion boundary serves as a CAV. Collecting positive/negative
datasets in step 1 is not only costly but also poses the risk of
admitting human biases in two aspects; diverging CA Vs for
the same concept and unintended entanglement of multiple
concepts. We will demonstrate in Section 2 that this may
threaten credibility of explanation.
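For concreteness, the following hedged sketch shows the conventional two-step CAV computation outlined above (the embeddings, regularization strength, and dataset sizes are placeholders); CounTEX would instead obtain the concept direction from text alone, e.g., a normalized difference between CLIP text embeddings of "striped object" and "object".

```python
import numpy as np
from sklearn.svm import LinearSVC

def concept_activation_vector(pos_feats, neg_feats):
    """Conventional CAV: fit a linear classifier separating embeddings of
    concept-positive images from concept-negative ones; the normalized
    normal of the decision boundary is the concept direction."""
    X = np.concatenate([pos_feats, neg_feats], axis=0)
    y = np.concatenate([np.ones(len(pos_feats)), np.zeros(len(neg_feats))])
    clf = LinearSVC(C=0.01).fit(X, y)   # C is an illustrative choice
    w = clf.coef_[0]
    return w / np.linalg.norm(w)

# hypothetical target-classifier embeddings of "striped" vs. non-striped images
pos = np.random.randn(50, 512)
neg = np.random.randn(50, 512)
cav = concept_activation_vector(pos, neg)   # unit vector in the target latent space
```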
To tackle such challenges, we propose Counterfactual
explanation with text-driven concepts (CounTEX), which
derives the concept direction only from a text by leveraging
the text-image joint embedding space, CLIP [16] (Figure 1(b)).
|
Liu_VLPD_Context-Aware_Pedestrian_Detection_via_Vision-Language_Semantic_Self-Supervision_CVPR_2023
|
Abstract
Detecting pedestrians accurately in urban scenes is sig-
nificant for realistic applications like autonomous driving
or video surveillance. However, confusing human-like ob-
jects often lead to wrong detections, and small scale or
heavily occluded pedestrians are easily missed due to their
unusual appearances. To address these challenges, only
object regions are inadequate, thus how to fully utilize
more explicit and semantic contexts becomes a key problem.
Meanwhile, previous context-aware pedestrian detectors ei-
ther only learn latent contexts with visual clues, or need
laborious annotations to obtain explicit and semantic con-
texts. Therefore, we propose in this paper a novel approach
via Vision-Language semantic self-supervision for context-
aware Pedestrian Detection (VLPD) to model explicitly se-
mantic contexts without any extra annotations. Firstly, we
propose a self-supervised Vision-Language Semantic (VLS)
segmentation method, which learns both fully-supervised
pedestrian detection and contextual segmentation via self-
generated explicit labels of semantic classes by vision-
language models. Furthermore, a self-supervised Prototyp-
ical Semantic Contrastive (PSC) learning method is pro-
posed to better discriminate pedestrians and other classes,
based on more explicit and semantic contexts obtained from
VLS. Extensive experiments on popular benchmarks show
that our proposed VLPD achieves superior performances
over the previous state-of-the-arts, particularly under chal-
lenging circumstances like small scale and heavy occlusion.
Code is available at https://github.com/lmy98129/VLPD.
|
1. Introduction
With the recent advances of pedestrian detection, enor-
mous applications benefit from such a fundamental per-
Figure 1. Illustration of the problems of previous works (top) and
our proposed method to tackle them (bottom). (a) and (b) are pre-
dicted by [27]. Green boxes are correct, red ones are human-like
traffic signs, and dashed blue ones are missing heavily occluded or
small scale pedestrians. (c) and (d): We propose self-supervisions
to recognize the contexts and discriminate them from pedestrians.
ception technique, including person re-identification, video
surveillance and autonomous driving. In the meantime,
various challenges from the urban contexts, i.e., pedestri-
ans and non-human objects, still hinder the better perfor-
mances of detection. For example, confusing appearances
of human-like objects often mislead the detector, as shown
in Figure 1(a). Moreover, heavily occluded or small scale
pedestrians have unusual appearances and cause missing
detections as Figure 1(a) and (b). Apart from the object re-
gions, the contexts are crucial to address these challenges.
Nevertheless, previous methods still make inadequate in-
vestigations on the contexts in urban scenarios. For in-
Figure 2. The overall architecture of our proposed VLPD approach. (a) Vision-Language Semantic (VLS) segmentation obtains pseudo
labels via Cross-Modal Mapping, then the Pretrained Visual Encoder learns fully-supervised detection (L_Det) and self-supervised segmen-
tation to recognize semantic classes for explicit contexts without any annotations. (b) Prototypical Semantic Contrastive (PSC) learning
pulls the pixel-wise pedestrian features, as queries, closer to positive prototypes and further from negative ones based on Pixel-wise Aggregation.
stance, manual contextual annotations from CityScapes [6]
boost SMPD [15] on the pedestrian benchmark CityPersons
[43], because they share homologous image data. Besides,
a semi-supervised model yields pseudo labels for the Cal-
tech dataset [9]. However, both these two solutions require
expensive fine-grained annotations, especially for training
the semi-supervised model. Moreover, other methods learn
regional latent contexts merely from limited visual neigh-
borhood [47], or non-human local proposals as negative
samples for contrastive learning [23]. Without an explicit
awareness of semantic classes in the contexts, these meth-
ods thus still suffer from unsatisfactory performance.
Besides, some pedestrian detection methods also indi-
rectly handle the contexts. For the occlusion problems,
many part-aware methods [4, 13, 19, 20, 28, 33, 41, 44, 45]
adopt visible annotations for the occluded pedestrians,
which indicate the occlusion by other pedestrians or non-
human objects in the contexts. Whereas, these labels still
need heavy labors of human annotators. For scale varia-
tion [2,8,21,39,46], crowd occlusion [14,25,38,40,48,50]
or generic hard pedestrians [1, 24, 26, 27, 34], most previ-
ous works are intra-class, e.g., small pedestrians or crowded
scenes, and thus irrelevant to context modeling problems.
Inspired by the vision-language models, we notice a
more explicit context modeling without any annotations via
cross-modal mapping. For instance, DenseCLIP [32] is ini-
tialized with vision-language pretrained CLIP model [31]
to learn cross-modal mapping from pixel-wise features to
linguistic vectors of human-annotated classes. Meanwhile,
MaskCLIP [49] generates pseudo labels via cross-modal
mapping and train another visual model. Hence, comple-
menting the initialized mapping and pseudo labeling, we
propose to recognize the semantic classes for explicit con-texts via self-supervised Vision-Language Semantic (VLS)
segmentation, as shown in Figure 1(c) and 2(a).
Furthermore, we consider that only pixel-wise scores are
ambiguous to discriminate pedestrians and contexts. Due to
the coarse-grained pseudo labels, some parts of pedestrians
might have higher scores of other classes. Different from
the regional contrastive learning [23], we introduce the con-
cept of prototype [35, 51] for a global discrimination. Each
pixel of pedestrian features is pulled closer to pixel-wise
aggregated positive prototypes and pushed away from the
negative ones of other classes based on the explicit contexts
obtained from VLS. As illustrated in Figure 1(d) and 2(b),
a novel contrastive self-supervision for pedestrian detection
is proposed to better discriminate pedestrians and contexts.
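The prototype-based contrast described here can be sketched as an InfoNCE-style loss in which each pedestrian pixel feature is the query, the pedestrian prototype is the single positive, and the contextual class prototypes are negatives; the temperature and tensor shapes below are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def psc_loss(ped_pixel_feats, pos_proto, neg_protos, tau=0.1):
    """Prototype-contrastive sketch: each pedestrian pixel feature (query)
    should be closer to the positive (pedestrian) prototype than to the
    prototypes of the contextual semantic classes."""
    q = F.normalize(ped_pixel_feats, dim=-1)                                        # (N, C)
    protos = F.normalize(torch.cat([pos_proto[None], neg_protos], dim=0), dim=-1)   # (1+K, C)
    logits = q @ protos.t() / tau                                                   # (N, 1+K)
    target = torch.zeros(q.size(0), dtype=torch.long)    # index 0 = positive prototype
    return F.cross_entropy(logits, target)

queries = torch.randn(128, 256)   # hypothetical pedestrian pixel features
pos = torch.randn(256)            # aggregated from pedestrian bounding boxes
negs = torch.randn(7, 256)        # aggregated from contextual class score maps
loss = psc_loss(queries, pos, negs)
```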
In conclusion, we have observed a dilemma between the
heavy burden of manual annotation for explicit contexts and
local implicit context modeling. Hence, we propose a novel
approach to tackle these problems via Vision- Language se-
mantic self-supervision for Pedestrian Detection ( VLPD ).
The main contributions of this paper are as follows:
• Firstly, the Vision-Language Semantic (VLS) segmen-
tation method is proposed to model explicit seman-
tic contexts by vision-language models. With pseudo
labels via cross-modal mapping, the visual encoder
learns fully-supervised detection and self-supervised
segmentation to recognize the semantic classes for ex-
plicit contexts. To our best knowledge, this is the
first work to propose such a vision-language extra-
annotation-free method for pedestrian detection .
• Secondly, we further propose the Prototypical Seman-
tic Contrastive (PSC) learning method to better dis-
criminate pedestrians and contexts. The negative and
positive prototypes are aggregated via the score maps
of contextual semantic classes obtained from VLS and
pedestrian bounding boxes, respectively. Each pixel
of pedestrian features is pulled close to positive proto-
types and pushed away from the negative ones, in order
to strengthen the discrimination power of the detector.
• Finally, by the integration of VLS and PSC, our pro-
posed approach VLPD achieves superior performances
over the previous state-of-the-art methods on popu-
lar Caltech and CityPersons benchmarks, especially on
the challenging small scale and occlusion subsets.
|
Lin_Memory-Friendly_Scalable_Super-Resolution_via_Rewinding_Lottery_Ticket_Hypothesis_CVPR_2023
|
Abstract
Scalable deep Super-Resolution (SR) models are increasingly in
demand, whose memory can be customized and tuned to the
computational resource of the platform. The existing dynamic
scalable SR methods are not memory-friendly enough because
multi-scale models have to be saved with a fixed size for each
model. Inspired by the success of the Lottery Ticket Hypothesis
(LTH) on image classification, we explore the existence of
unstructured scalable SR deep models, that is, we find gradual
shrinkage sub-networks of extreme sparsity named winning tickets.
In this paper, we propose a Memory-friendly Scalable SR framework
(MSSR). The advantage is that only a single scalable model covers
multiple SR models with different sizes, instead of reloading SR
models of different sizes. Concretely, MSSR consists of the forward
and backward stages, the former for model compression and the
latter for model expansion. In the forward stage, we take advantage
of LTH with rewinding weights to progressively shrink the SR model
and the pruning-out masks that form nested sets. Moreover,
stochastic self-distillation (SSD) is conducted to boost the
performance of sub-networks. By stochastically selecting multiple
depths, the current model inputs the selected features into the
corresponding parts in the larger model and improves the
performance of the current model based on the feedback results of
the larger model. In the backward stage, the smaller SR model could
be expanded by recovering and fine-tuning the pruned parameters
according to the pruning-out masks obtained in the forward stage.
Extensive experiments show the effectiveness of MSSR. The
smallest-scale sub-network could achieve a sparsity of 94% and
outperforms the compared lightweight SR methods.
Figure 1. The flowchart of the scalable SR network. S_n and S_w
denote the neurons and the neural connections (weights) of the
simplest subnetwork; M_n and M_w denote the intermediate neurons
and neural connections; Cur_n and Cur_w denote the specific neurons
and neural connections belonging to the current subnetwork. Free_n
denotes the pruning-out neurons. The final model (with 100%
recovered parameters) reaches the original size. The scalable model
is adjustable to the memory resource allocation.
|
1. Introduction
Single image super-resolution (SISR) aims to reconstruct
a high-resolution (HR) image from the corresponding low-
resolution (LR) one. With the rise of deep learning, deep SR
methods have made incredible progress. However, the existing SR
models mostly require large computational and memory resources,
so they do not favor resource-limited devices such as mobile
phones, robotics, and some edge devices.
The lightweight SR methods are attracting more attention for
better application to resource-limited devices. The existing
lightweight SR methods mainly focus on designing compact
architectures [17, 20] with a fixed size, such as multi-scale
pyramid [20], multiple-level receptive fields [17, 18], and
recursive learning [19]. However, most lightweight SR models with
fixed sizes are not flexible in applications. If one model does
not match the resources of
the platform, it has to be retrained by compression methods
to match the resources and then reloaded onto the devices.
The urgent demand to customize models based on deployment
resources is increasing. Dynamic neural networks for SR [14, 22]
are proposed to adjust the network architecture according to
different computational resources. The existing dynamic deep SR
models often explore dynamic depth or width [22, 26], but they
either require large memory resources or force users to wait for
another SR model to be retrained. The former leads to saving
multi-scale SR models of different sizes, and the latter leads to
retraining the model before it can be used again. The limitation
lies in that they are not memory-friendly. In many edge-device SR
applications, the devices may be scalable, that is, their memories
may be small in the beginning and be expanded later. Thus, we
discuss two issues in this paper: 1) how to make a scalable
lightweight model for multi-scale computational resources; 2) how
to make the lightweight model expand to a larger-size model for
better performance if the computational resource is increased.
As for the first issue, we are inspired by the success of the
Lottery Ticket Hypothesis (LTH) [10], which points out that there
could exist a properly pruned sub-network, named a winning ticket,
that achieves comparable performance against the original dense
network in model compression for classification; we use it to find
the sub-networks for SR. We are the first to study the existence of
scalable winning tickets for SR. Iterative pruning and rewinding
weights in LTH are beneficial to the scalable lightweight SR model.
Iterative pruning may compress the SR model to an arbitrary size.
It is observed in [24] that the winning tickets are related to an
insufficient DNN, and rewinding LTH outperforms the original LTH.
That is, the initial weights in LTH are replaced with the
T-iteration weights during pruning and fine-tuning. The scalable
deep SR model is shown in Fig. 1.
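A minimal sketch of iterative magnitude pruning with weight rewinding, which naturally yields the nested masks mentioned above, is shown below; the pruning ratio, number of rounds, and the hypothetical train_fn routine are placeholders, not the actual MSSR training loop.

```python
import copy
import torch

def iterative_rewind_pruning(model, train_fn, prune_ratio=0.2, rounds=4):
    """Rewinding-LTH sketch producing nested masks: after each fine-tuning
    round, the smallest surviving weights are pruned, weights are rewound to
    an early (iteration-T) checkpoint, and training continues. `train_fn`
    is a hypothetical routine that fine-tunes the masked model and returns it."""
    rewind_state = copy.deepcopy(model.state_dict())   # weights at iteration T
    mask = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    masks = []
    for _ in range(rounds):
        model = train_fn(model, mask)
        for name, p in model.named_parameters():
            alive = p[mask[name].bool()].abs()
            if alive.numel() == 0:
                continue
            thresh = alive.quantile(prune_ratio)
            # prune the smallest surviving weights; masks only shrink, hence nested
            mask[name] = mask[name] * (p.abs() > thresh).float()
        masks.append({n: m.clone() for n, m in mask.items()})
        model.load_state_dict(rewind_state)            # rewind, then fine-tune again
    return masks   # from mildest to sparsest sub-network
```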
As for the second issue, the scalable SR model can customize
parameters to adapt to different memory resources rather than load
or offload different models for different devices. In other words,
during real applications, there will be only one simple model to be
employed for inference, whose size is decided by the computational
resource.
In this paper, we propose a memory-friendly scalable deep SR model
(MSSR) via rewinding LTH. We use rewinding LTH [10] to generate our
unstructured scalable masks. MSSR is backtracking and contains
forward and backward stages. The former focuses on model
compression by rewinding LTH with iterative pruning and
fine-tuning, and the latter focuses on iterative model expansion
until the model goes back to its original size. Multi-scale winning
tickets together with the pruning-out masks are obtained by
rewinding LTH in the forward stage as the number of parameters
decreases. The pruning-out masks are nested. In order to keep the
compressed SR model from degrading significantly, stochastic
self-distillation (SSD) is used to improve the representation of
the small-scale SR model, and knowledge is transferred from the
last-scale model to the current-scale model. In the backward stage,
the smallest model is expanded gradually to the model with the
original size with the expanded mask.
The main contributions of this work are three-fold:
• A memory-friendly scalable dynamic SR lightweight model via
rewinding LTH is proposed. MSSR is re-configurable and switchable
to sub-networks with different sizes according to on-device
resource constraints on the fly.
• MSSR is backtracking, which contains forward and backward stages.
Multi-scale winning tickets form nested masks for the multi-scale
models. SSD is conducted by replacing the features in randomly
selected layers between Teacher and Student to improve the
performance of the scalable SR lightweight models.
• Extensive experiments demonstrate that MSSR can generalize to
different SR models as well as state-of-the-art attention-based
models, e.g., ENLCN [1].
|
Kim_Relational_Context_Learning_for_Human-Object_Interaction_Detection_CVPR_2023
|
Abstract
Recent state-of-the-art methods for HOI detection typ-
ically build on transformer architectures with two decoder
branches, one for human-object pair detection and the other
for interaction classification. Such disentangled transform-
ers, however, may suffer from insufficient context exchange
between the branches and lead to a lack of context informa-
tion for relational reasoning, which is critical in discover-
ing HOI instances. In this work, we propose the multiplex
relation network (MUREN) that performs rich context ex-
change between three decoder branches using unary, pair-
wise, and ternary relations of human, object, and interac-
tion tokens. The proposed method learns comprehensive re-
lational contexts for discovering HOI instances, achieving
state-of-the-art performance on two standard benchmarks
for HOI detection, HICO-DET and V-COCO.
|
1. Introduction
The task of Human-Object Interaction (HOI) detection
is to discover the instances of ⟨human, object, interaction ⟩
from a given image, which reveal semantic structures of hu-
man activities in the image. The results can be useful for
a wide range of computer vision problems such as human
action recognition [1,25,42], image retrieval [9,33,37], and
image captioning [12,34,36] where a comprehensive visual
understanding of the relationships between humans and ob-
jects is required for high-level reasoning.
With the recent success of transformer networks [31] in
object detection [2, 45], transformer-based HOI detection
methods [4, 15, 16, 29, 38, 44, 46] have been actively devel-
oped to become a dominant base architecture for the task.
Existing transformer-based methods for HOI detection can
be roughly divided into two types: single-branch and two-
branch. The single-branch methods [16, 29, 46] update a
token set through a single transformer decoder and detect
HOI instances using the subsequent FFNs directly. As a sin-
gle transformer decoder is responsible for all sub-tasks ( i.e.,
Figure 1. The illustration of relation context information in an HOI
instance. We define three types of relation context information in
an HOI instance: unary, pairwise, and ternary relation contexts.
Each relation context provides useful information for detecting an
HOI instance. For example, in our method, the unary context about
an interaction (green) helps to infer that a human (yellow) and
an object (red) are associated with the interaction, and vice versa.
Our method utilizes the multiplex relation context consisting of the
three relation contexts to perform context exchange for relational
reasoning.
human detection, object detection, and interaction classifi-
cation), they are limited in adapting to the different sub-
tasks with multi-task learning, simultaneously [38]. To re-
solve the issue, the two-branch methods [4, 15, 38, 40, 44]
adopt two separated transformer decoder branches where
one detects human-object pairs from a human-object to-
ken set while the other classifies interaction classes between
human-object pairs from an interaction token set. However,
the insufficient context exchange between the branches pre-
vents the two-branch methods [15,38,40] from learning re-
lational contexts, which plays a crucial role in identifying
HOI instances. Although some methods [4, 44] tackle this
issue with additional context exchange, they are limited to
propagating human-object context to interaction context.
To address the problem, we introduce the MUtiplex
RElation Network (MUREN) that performs rich context ex-
change using unary, pairwise, and ternary relations of hu-
man, object, and interaction tokens for relational reasoning.
As illustrated in Figure 1, we define three types of relation
context information in an HOI instance: unary, pairwise,
and ternary, each of which provides useful information to
discover HOI instances. The ternary relation context gives
holistic information about the HOI instance while the unary
and pairwise relation contexts provide more fine-grained in-
formation about the HOI instance. For example, as shown in
Figure 1, the unary context about an interaction ( e.g., ‘rid-
ing’) helps to infer which pair of a human and an object
is associated with the interaction in a given image, and the
pairwise context between a human and an interaction ( e.g.,
‘human’ and ‘riding’) helps to detect an object ( e.g., ‘bicy-
cle’). Motivated by this, our multiplex relation embedding
module constructs the context information that consists of
the three relation contexts, thus effectively exploiting their
benefits for relational reasoning. Since each sub-task re-
quires different context information for relational reason-
ing, our attentive fusion module selects requisite context in-
formation for each sub-task from multiplex relation context
and propagates the selected context information for con-
text exchange between the branches. Unlike previous meth-
ods [4, 15, 38, 44], we adopt three decoder branches which
are responsible for human detection, object detection, and
interaction classification, respectively. Therefore, the pro-
posed method learns discriminative representation for each
sub-task.
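As a loose illustration only, the sketch below forms unary, pairwise, and ternary relation embeddings from human, object, and interaction tokens and fuses them into a single context vector; the concrete token combinations, projections, and the attentive fusion in MUREN are assumptions here and are certainly more elaborate in the actual model.

```python
import torch
import torch.nn as nn

class MultiplexRelationSketch(nn.Module):
    """Hedged sketch of building unary, pairwise, and ternary relation
    contexts from human (h), object (o), and interaction (i) tokens."""

    def __init__(self, dim=256):
        super().__init__()
        self.unary = nn.Linear(dim, dim)
        self.pairwise = nn.Linear(2 * dim, dim)
        self.ternary = nn.Linear(3 * dim, dim)
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, h, o, i):
        # h, o, i: (B, N, C) tokens from the three decoder branches
        u = self.unary(i)                              # unary context (e.g. interaction)
        p = self.pairwise(torch.cat([h, i], dim=-1))   # one of the pairwise contexts
        t = self.ternary(torch.cat([h, o, i], dim=-1)) # holistic ternary context
        return self.fuse(torch.cat([u, p, t], dim=-1)) # multiplex relation context

ctx = MultiplexRelationSketch()(torch.randn(2, 64, 256),
                                torch.randn(2, 64, 256),
                                torch.randn(2, 64, 256))
```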
We evaluate MUREN on two public benchmarks, HICO-
DET [3] and V-COCO [10], showing that MUREN achieves
state-of-the-art performance on two benchmarks. The abla-
tion study demonstrates the effectiveness of the multiplex
relation embedding module and the attentive fusion mod-
ule. Our contribution can be summarized as follows:
• We propose multiplex relation embedding module for
HOI detection, which generates context information
using unary, pairwise, and ternary relations in an HOI
instance.
• We propose the attentive fusion module that effectively
propagates requisite context information for context
exchange.
• We design a three-branch architecture to learn more
discriminative features for sub-tasks, i.e., human de-
tection, object detection, and interaction classification.
• Our proposed method, dubbed MUREN, outperforms
state-of-the-art methods on HICO-DET and V-COCO
benchmarks.
|
Lee_TTA-COPE_Test-Time_Adaptation_for_Category-Level_Object_Pose_Estimation_CVPR_2023
|
Abstract
Test-time adaptation methods have been gaining atten-
tion recently as a practical solution for addressing source-
to-target domain gaps by gradually updating the model
without requiring labels on the target data. In this paper, we
propose a method of test-time adaptation for category-level
object pose estimation called TTA-COPE. We design a pose
ensemble approach with a self-training loss using pose-
aware confidence. Unlike previous unsupervised domain
adaptation methods for category-level object pose estima-
tion, our approach processes the test data in a sequential,
online manner, and it does not require access to the source
domain at runtime. Extensive experimental results demon-
strate that the proposed pose ensemble and the self-training
loss improve category-level object pose performance dur-
ing test time under both semi-supervised and unsupervised
settings.
|
1. Introduction
Object pose estimation is a crucial problem in com-
puter vision and robotics. Advanced methods that fo-
cus on diverse variations of object 6D pose estimation
have been introduced, such as known 3D objects (instance-
level) [28, 38], category-level [18, 36, 43], few-shot [52],
and zero-shot pose estimation [13, 47]. These techniques
are useful for downstream applications requiring an on-
line operation, such as robotic manipulation [6, 25, 48] and
augmented reality [23, 24, 32]. Our paper focuses on the
category-level object pose estimation problem since it is
more broadly applicable than the instance-level problem.
Many works on category-level object pose estimation [2,
3, 17, 18, 36, 43, 44] have been proposed recently. These
approaches estimate multiple classes of object pose more
efficiently in a single network compared to the instance-
level object pose estimation methods [27, 38, 41, 49–51],
which depend on known 3D shape knowledge and the size
of the objects. Notably, Wang et al . [43] introduced a
novel representation called Normalized Object Coordinate
Space (NOCS) to align various object instances within each
Figure 1. We propose a Test-Time Adaptation for Category-level
Object Pose Estimation framework (TTA-COPE) that automati-
cally improves the network in an online manner without labeled
target data. As new image frames are processed, our method fine-
tunes the network using the unlabeled data and simultaneously ap-
plies the network to perform pose estimation via inference. This
approach successfully handles domain shifts compared with no
adaptation, as seen here.
category in a canonical 3D space. The strengths of the
NOCS representation have led to its adoption by follow-up
work [3, 17, 36].
In order to obtain accurate category-level object pose
methods in unseen real-world scenarios, it is desirable to
fine-tune the models in the new environment with labeled
target data. The model that is not fine-tuned on the tar-
get domain distribution will almost certainly exhibit lower
performance than the fine-tuned model [37]. However, an-
notating 6D poses of objects in the target environment is
an expensive process [1, 39, 43, 45] that we seek to avoid.
Table 1. Comparison with prior unsupervised works for
category-level object pose estimation. Our unsupervised method
trains models without 2D or 3D labels of target data, similar to
Self-DPDN [16]. Unlike previous methods, our proposed ap-
proach updates the model online without offline batch processing.
Moreover, we do not use the source data during test time (source-
free) because it is impractical to train on a large amount of source
data every iteration. There also may be privacy or legal constraints
to access source data [21].
Method           Unsupervised             Test-time Adaptation
                 Target 3D   Target 2D    Source-Free   Online Adaptation
Supervised       ✗           ✗            ✗             ✗
SSC-6D [29]      ✓           ✗            ✗             ✗
RePoNet [5]      ✓           ✗            ✗             ✗
UDA-COPE [15]    ✓           ✗            ✓             ✗
Self-DPDN [16]   ✓           ✓            ✗             ✗
Ours             ✓           ✓            ✓             ✓
Compared to annotating in 2D space, labeling in 3D space
requires specific knowledge about geometry [7] from the
annotator and is much more laborious, time-consuming,
and error-prone due to the complex nature of SE(3)space.
Therefore, it is usually challenging to annotate real-world
data with 3D annotations for fine-tuning.
In order to solve the aforementioned problem of anno-
tating object pose data in the real world, several recent
methods [5, 15, 16] propose unsupervised domain adapta-
tion (UDA) that aims to train the network without utilizing
the ground truth of target pose labels. Although they show
promising results using UDA techniques, these approaches
still do not meet some of the requirements for online ap-
plications. For example, when a robot encounters a new
environment, it is desirable to adapt the scene online man-
ner while estimating object poses rather than waiting for
enough data to be collected in the novel scene to train the
model offline.
This problem definition of online fine-tuning is more
practical for real applications, where we desire to update
the model instantly when new data becomes available for
fast domain adaptation. This setting is known as test-time
adaptation (TTA) [42]. For TTA, the requirements are as
follows: 1) labeled source data should not be accessed at
test time, 2) adaptation should be online (rather than of-
fline batch processing), and 3) the method should be fully
unsupervised, without using 2D or 3D target labels during
online fine-tuning. Since we do not have access to labeled
source data (source-free) at test time this problem is more
challenging than existing unsupervised category-level ob-
ject pose methods [5,15,16,29]. Table 1 summarizes the dif-
ference between our problem definition and existing meth-
ods, showing that test-time adaptation for category-level ob-
ject pose estimation remains an open problem.
In this paper, we propose Test-time Adaptation for
Category-level Object Pose Estimation (TTA-COPE) to handle domain shifts without any target domain annota-
tions (see Fig. 1). Prior works on general test-time adapta-
tion [42,46] propose self-training to minimize entropy loss.
TENT [42] has shown improvement in 2D classification and
segmentation tasks. We show, however, that simply extend-
ing TENT for the category-level object pose estimation is
not effective. Another self-training strategy is the teacher-
student framework [35] with pseudo labels. However, since
pseudo labels are created without any noise filtering, naive
pseudo labels may be unreliable and cause convergence to
a suboptimal model.
To tackle this problem, we design a novel pose ensemble
method to perform test-time adaptation for category-level
object pose estimation by extending the pose-aware filter-
ing of UDA-COPE [15]. The proposed method uses an en-
semble of teacher-student predictions based on pose-aware
confidence, which is used both for generating pseudo labels
and inference. Also, the pose ensemble helps to train mod-
els with additional self-training loss to reduce the domain
shift for category-level pose estimation by using pose-aware
confidence. We demonstrate the advantages of our proposed
pose ensemble and self-training loss with extensive stud-
ies in both semi-supervised and unsupervised settings. We
show that our TTA-COPE framework achieves state-of-the-
art performance compared to strong TTA baselines.
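One plausible reading of the confidence-based ensemble is sketched below: for each object, the teacher or student pose with the higher pose-aware confidence is kept and reused both as a pseudo label and as the inference output. The hard selection rule and tensor shapes are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def pose_ensemble(pose_teacher, pose_student, conf_teacher, conf_student):
    """Confidence-based ensemble sketch: keep, per object, the prediction
    (teacher or student) with the higher pose-aware confidence; the selected
    poses can serve as pseudo labels and as the inference output."""
    # poses: (N, 4, 4) SE(3) transforms, confidences: (N,)
    take_teacher = (conf_teacher >= conf_student).view(-1, 1, 1)
    return torch.where(take_teacher, pose_teacher, pose_student)

poses = pose_ensemble(torch.eye(4).repeat(3, 1, 1),
                      torch.eye(4).repeat(3, 1, 1),
                      torch.tensor([0.9, 0.4, 0.7]),
                      torch.tensor([0.5, 0.8, 0.6]))
```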
In summary, the main contributions of our work are as
follows:
• We propose Test-Time Adaptation for Category-level
Object Pose Estimation (TTA-COPE), which handles
domain shifts without labeling target data and without
accessing source data during test time.
• We introduce a pose ensemble with self-training loss
that utilizes the teacher-student predictions to generate
robust pseudo labels and estimates accurate poses for
inference.
• We evaluate our framework with experimental com-
parisons against strong test-time baselines and state-
of-the-art methods under both semi-supervised and un-
supervised settings.
|
Koryakovskiy_One-Shot_Model_for_Mixed-Precision_Quantization_CVPR_2023
|
Abstract
Neural network quantization is a popular approach for
model compression. Modern hardware supports quanti-
zation in mixed-precision mode, which allows for greater
compression rates but adds the challenging task of search-
ing for the optimal bit width. The majority of existing
searchers find a single mixed-precision architecture. To
select an architecture that is suitable in terms of perfor-
mance and resource consumption, one has to restart search-
ing multiple times. We focus on a specific class of methods
that find tensor bit width using gradient-based optimization.
First, we theoretically derive several methods that were em-
pirically proposed earlier. Second, we present a novel One-
Shot method that finds a diverse set of Pareto-front architec-
tures in O(1) time. For large models, the proposed method
is 5 times more efficient than existing methods. We verify
the method on two classification and super-resolution mod-
els and show above 0.93 correlation score between the pre-
dicted and actual model performance. The Pareto-front ar-
chitecture selection is straightforward and takes only 20 to
40 supernet evaluations, which is the new state-of-the-art
result to the best of our knowledge.
|
1. Introduction
In recent years, neural network quantization [31] has be-
come a popular hardware-friendly compression technique.
It is common to quantize linear and convolutional layer
operands while leaving vector operands unchanged. Mod-
ern algorithms achieve lossless quantization into fixed 8-bit
integer values in many applications [45, 15, 40, 35, 49, 25,
5]. At higher compression rates, mixed-precision is often
needed [22, 44]. For example, models often require 8-bit
precision for the first and last layers, while the middle lay-
ers can tolerate lower precision [15, 40]. In addition, the
selected precision may depend on a quantized operation [7]
or a hardware at hand [41]. This motivates many vendors to
Figure 1. The searching time taken by each algorithm to dis-
cover a single bit width architecture belonging to a Pareto front.
EdMIPS [7], DNAS [44], GMPQ [42], and One-Shot MPS (our)
use a proxy dataset for ResNet-18 and MobileNet-v2. Bayesian
Bits [38] and HAQ [41] roughly take 100 times more searching
time compared to our method.
support mixed-precision models in hardware.
To attain the best mixed-precision performance, it is cru-
cial to find an optimal precision for each matrix multipli-
cation factor. Unfortunately, all possible bit width combi-
nations cannot be examined since the search space scales
exponentially with the number of multiplications. The in-
ability to predict the influence of individual loss coefficients
on the resulting compression rate furthermore exacerbates
the difficulty of searching. Existing methods [44, 7, 38, 9]
require multiple restarts of the searching process until a sat-
isfactory bit width allocation is found. This results in O(N)
searching time, where Nis the number of restarts.
The authors of EdMIPS [7] mention that “sometimes, it
is mysterious why and how an architecture is found by Neu-
ral Architecture Search (NAS)”. We answer this question in
the context of EdMIPS and DNAS [44] methods. To do
so, we simplify and generalize Bayesian Bits [38], where a
variational inference (VI) approach is used to derive the loss
function for a hierarchical supernet. Then, we demonstrate
how the EdMIPS and DNAS loss functions can be derived.
Next, using our derivation, we propose a novel One-
Shot Mixed-Precision Search (One-Shot MPS) method that
finds a diverse set of Pareto-front architectures in O(1)
time. We extend the commonly used supernet transforma-
tion of a floating-point model with a set of trainable func-
tions that predict the bit width probability depending on a
hardware regularization parameter. For ImageNet [11], we
observe at least a 0.96 correlation score between child mod-
els sampled from a One-Shot model and standalone fine-
tuned models, cf. the previous result attains a maximum score
of 0.55 [16]. Such a high score allows one to plot a Pareto
front of performance versus hardware resources using a lin-
ear sweep over the regularization parameter, and select the
most promising precision given hardware constraints be-
fore fine-tuning . The Pareto-front architecture selection is
straightforward and takes only 20 to 40 supernet evaluations
while existing One-Shot methods require at least 1000 eval-
uations [16, 10].
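A toy version of such a bit width predictor is sketched below: a small trainable function maps the hardware regularization strength to a distribution over candidate bit widths, so a linear sweep over the regularizer traces out candidate precisions without retraining. The candidate bit set, network size, and expected-bit cost proxy are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BitWidthPredictor(nn.Module):
    """Per-layer sketch: map the hardware regularization strength to a
    probability distribution over candidate bit widths."""

    def __init__(self, bit_choices=(2, 4, 8)):
        super().__init__()
        self.bit_choices = torch.tensor(bit_choices, dtype=torch.float32)
        self.mlp = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                 nn.Linear(16, len(bit_choices)))

    def forward(self, reg_strength):
        probs = self.mlp(reg_strength.view(1, 1)).softmax(dim=-1).squeeze(0)
        expected_bits = (probs * self.bit_choices).sum()   # differentiable cost proxy
        return probs, expected_bits

predictor = BitWidthPredictor()
for lam in torch.linspace(0.0, 1.0, 5):     # linear sweep over the regularizer
    probs, bits = predictor(lam)
    # in the supernet, `probs` would weight the quantized branches of this layer
```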
To sum up, our contribution is twofold. First, we pro-
vide a theoretical derivation of the earlier empirically-found
state-of-the-art searching methods. Second, we propose to
augment a supernet with a bit width prediction model that
allows searching for Pareto-front bit width combinations
corresponding to different compression rates in a constant
time. We validate the benefits of the approach on sev-
eral widely-used models including mobile-friendly archi-
tectures. To the best of our knowledge, the proposed pre-
dictor is not described in the existing literature.
|
Lee_Shape-Aware_Text-Driven_Layered_Video_Editing_CVPR_2023
|
Abstract
Temporal consistency is essential for video editing appli-
cations. Existing work on layered representation of videos
allows propagating edits consistently to each frame. These
methods, however, can only edit object appearance rather
than object shape changes due to the limitation of using
a fixed UV mapping field for texture atlas. We present
a shape-aware, text-driven video editing method to tackle
this challenge. To handle shape changes in video editing,
we first propagate the deformation field between the in-
put and edited keyframe to all frames. We then leverage
a pre-trained text-conditioned diffusion model as guidance
for refining shape distortion and completing unseen regions.
The experimental results demonstrate that our method can
achieve shape-aware consistent video editing and compare
favorably with the state-of-the-art.
|
1. Introduction
Image editing. Recently, image editing [19, 20, 24, 34,
40, 44] has made tremendous progress, especially those
using diffusion models [19, 20, 40, 44]. With free-form
text prompts, users can obtain photo-realistic edited images
without artistic skills or labor-intensive editing. However,
unlike image editing, video editing is more challenging due
to the requirement of temporal consistency. Independently
editing individual frames leads to undesired inconsistent
frames, as shown in Fig. 2a. A na ¨ıve way to deal with tem-
poral consistency in video editing is to edit a single frame
and then propagate the change to all the other frames. Nev-
ertheless, artifacts are presented when there are unseen pix-
els from the edited frame in the other frames, as shown in
Fig. 2b.
Video editing and their limitations. For consistent video
editing, Neural Layered Atlas (NLA) [18] decomposes a
video into unified appearance layers atlas . The layered de-
composition helps consistently propagate the user edit to
(a) Multi-frame editing with frame interpolation [42]
(b) Single-frame editing with frame propagation [17]
(c) Text2LIVE [2] with prompt “ sports car ”
Figure 2. Limitation of existing work. Compare these results from baseline methods with our “ sports car ” result in Fig. 1. (a)
Multiple frames are edited independently and interpolated by frame interpolation method [42]. Such an approach shows realistic
per-frame results but suffers from temporal flickering. (b) Extracting a single keyframe for image editing, the edits are propagated to each
frame via [17]. The propagated edits are temporally stable. However, it yields visible distortions due to the unseen pixels from the
keyframe. (c) The SOTA Text2LIVE [2] results demonstrate temporally-consistent appearance editing but retain the source shape
“Jeep” instead of the target prompt “sports car” by using the fixed UV mapping of NLA.
individual frames with per-frame UV sampling association.
Based on NLA, Text2LIVE [2] performs text-driven editing
on atlases with the guidance of the Vision-Language model,
CLIP [39]. Although Text2LIVE [2] makes video editing
easier with a text prompt, it can only achieve appearance
manipulation due to the use of fixed-shape associated UV
sampling. Since per-frame UV sampling gathers informa-
tion on motion and shape transformation in each frame to
learn the pixel mapping from the atlas, shape editing is not
feasible, as shown in Fig. 2c.
Our work. In this paper, we propose a shape-aware text-
guided video editing approach. The core idea in our work
lies in a novel UV map deformation formulation. With a se-
lected keyframe and target text prompt, we first generate an
edited frame by image-based editing tool ( e.g., Stable Diffu-
sion [44]). We then perform pixel-wise alignment between
the input and edited keyframe pair through a semantic cor-
respondence method [51]. The correspondence specifies the
deformation between the input-edited pair at the keyframe.
According to the correspondence, the shape and appearance
change can then be mapped back to the atlas space. We can
thus obtain per-frame deformation by sampling the defor-
mation from the atlas to the original UV maps. While this
method helps with shape-aware editing, it is insufficient due
to unseen pixels in the edited keyframe. We tackle this by further optimizing the atlas texture and the deformation us-
ing a pretrained diffusion model by adopting the gradient
update procedure described in DreamFusion [38]. Through
the atlas optimization, we achieve consistent shape and ap-
pearance editing, even in challenging cases where the mov-
ing object undergoes 3D transformation (Fig. 1).
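As a rough illustration of the deformation propagation step above, the sketch below assumes the keyframe deformation has already been mapped into atlas space and uses per-frame UV maps (as produced by NLA) to sample per-frame deformation fields; the tensor shapes and the function name are hypothetical, and this is not the authors' implementation.

```python
# Minimal sketch (hypothetical shapes, not the authors' code): propagating a
# keyframe deformation through a shared atlas to every frame via UV maps.
import torch
import torch.nn.functional as F

def propagate_deformation(atlas_deform, uv_maps):
    """
    atlas_deform: (1, 2, Ha, Wa) 2D offset field defined in atlas space,
                  e.g. derived from keyframe semantic correspondences.
    uv_maps:      (T, H, W, 2) per-frame UV coordinates in [-1, 1] mapping
                  each pixel to its atlas location (from the NLA decomposition).
    returns:      (T, 2, H, W) per-frame deformation fields.
    """
    T = uv_maps.shape[0]
    return F.grid_sample(atlas_deform.expand(T, -1, -1, -1), uv_maps,
                         mode="bilinear", align_corners=True)

atlas_deform = torch.zeros(1, 2, 256, 256)        # toy atlas-space deformation
uv_maps = torch.rand(8, 128, 128, 2) * 2 - 1      # 8 frames of toy UV maps
print(propagate_deformation(atlas_deform, uv_maps).shape)  # torch.Size([8, 2, 128, 128])
```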
Our contributions.
• We extend the capability of existing video editing
methods to enable shape-aware editing.
• We present a deformation formulation for frame-
dependent shape deformation to handle target shape
edits.
• We demonstrate the use of a pre-trained diffusion
model for guiding atlas completion in layered video
representation.
|
Lin_Being_Comes_From_Not-Being_Open-Vocabulary_Text-to-Motion_Generation_With_Wordless_Training_CVPR_2023
|
Abstract
Text-to-motion generation is an emerging and challeng-
ing problem, which aims to synthesize motion with the same
semantics as the input text. However, due to the lack of di-
verse labeled training data, most approaches either limit to
specific types of text annotations or require online optimiza-
tions to cater to the texts during inference at the cost of ef-
ficiency and stability. In this paper, we investigate offline
open-vocabulary text-to-motion generation in a zero-shot
learning manner that neither requires paired training data
nor extra online optimization to adapt for unseen texts. In-
spired by the prompt learning in NLP , we pretrain a motion
generator that learns to reconstruct the full motion from
the masked motion. During inference, instead of chang-
ing the motion generator, our method reformulates the in-
put text into a masked motion as the prompt for the motion
generator to “reconstruct” the motion. In constructing the
prompt, the unmasked poses of the prompt are synthesized
by a text-to-pose generator. To supervise the optimization
of the text-to-pose generator, we propose the first text-pose
alignment model for measuring the alignment between texts
and 3D poses. And to prevent the pose generator from over-
fitting to limited training texts, we further propose a novel
wordless training mechanism that optimizes the text-to-pose
generator without any training texts. The comprehensive
experimental results show that our method obtains a signif-
icant improvement against the baseline methods. The code
is available at https://github.com/junfanlin/
oohmg .
|
1. Introduction
Motion generation has attracted increasing attention due
to its practical value in the fields of virtual reality, video
games, and movies. Especially for text-conditional motion
generation, it can largely improve the user experience if
the virtual avatars can react to the communication texts in
real time. However, most current text-to-motion approaches
Figure 1. Demonstrations of our OOHMG. Given an unseen open-
vocabulary text (e.g., an object name “a basketball”, or a simile
description “fly like a bird”, or a usual text “he walks”), OOHMG
translates the text into the text-consistent pose, which is used to
prompt the motion generator for synthesizing the motion.
are trained on paired text-motion data with limited types of
annotations, and thus could not well-generalize to unseen
open-vocabulary texts.
To handle the open-vocabulary texts, recent works lever-
age the powerful zero-shot text-image alignment ability of
the pretrained model, i.e., CLIP [35], to facilitate the text-
to-motion generation. Some works like MotionCLIP [42]
use the CLIP text encoder to extract text features and learn
a motion decoder to decode the features into motions. How-
ever, they require paired text-motion training data and still
could not handle texts that are dissimilar to the training
texts. Instead of learning an offline motion generator with
paired data, some works like AvatarCLIP [13] generate mo-
tions for the given textual descriptions via online matching
and optimization. Nevertheless, matching cannot generate
new poses to fit diverse texts and online optimization is usu-
ally time-consuming and unstable.
In this paper, we investigate filling the blank of offline
open-vocabulary text-to-motion generation in a zero-shot
learning manner. For convenience, we term our method
asOOHMG which stands for Offline Open-vocabulary
Human Motion Generation. The main philosophy of
Figure 2. The sketch of OOHMG. A text is fed to the text-to-pose
generator to obtain a text-consistent pose. Then, the pose is used
to construct the motion prompt for the pretrained motion model to
generate a motion.
OOHMG is inspired by prompt learning [5,19,41,48,50,52]
in the field of natural language processing (NLP). Specifi-
cally, instead of changing the pretrained motion generator
to cater to the given texts online, OOHMG reformulates
the texts into a familiar input format to prompt the pre-
trained motion generator for synthesizing motions in the
manner of “reconstruction”. As for prompt construction,
OOHMG learns a text-to-pose generator using the novel
wordless training mechanism so that the pose generator can
generalize to unseen texts during inference. After train-
ing, OOHMG uses the text-to-pose generator to translate
texts into poses to construct the prompt. The overall sketch
and demonstrations of OOHMG are illustrated in Fig. 2 and
Fig. 1, respectively. In this sense, the two key ingredients of
OOHMG include the motion generator pretraining and the
prompt construction for open-vocabulary texts. In the fol-
lowing, we further elaborate on each of these ingredients.
As for the motion generator, we learn a motion genera-
tor by mask-reconstruction self-supervised learning. Par-
ticularly, our method adopts a bidirectional transformer-
based [44] architecture for the motion generator. During
training, the motion generator takes the randomly-masked
motions as inputs and is optimized to reconstruct the orig-
inal motions. To predict and reconstruct the masked poses
from the unmasked, the motion generator is required to fo-
cus on learning motion dynamics which is the general need
for diverse motion generation tasks. By this means, unlike
previous methods that design different models for different
tasks [1,2,11,18], our motion model can be directly applied
to diverse downstream tasks by unifying the input of these
tasks into masked motions to prompt the generator for mo-
tion generation. Moreover, our generator can flexibly con-
trol the generated content, such as the number, the order,
and the positions of different poses of the generated motion
by editing the masked motions, resulting in a controllable
and flexible motion generation.
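A minimal sketch of the mask-reconstruction pretraining described above is given below; the pose dimensionality, model size, and masking ratio are assumptions, and positional encodings are omitted for brevity, so this is not the released OOHMG generator.

```python
# Minimal sketch (assumed shapes, positional encodings omitted): pretraining a
# bidirectional transformer to reconstruct full motions from masked motions.
import torch
import torch.nn as nn

class MotionGenerator(nn.Module):
    def __init__(self, pose_dim=72, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, pose_dim)

    def forward(self, motion, mask):
        # motion: (B, T, pose_dim); mask: (B, T), True where the pose is masked
        x = self.embed(motion)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        return self.head(self.encoder(x))

model = MotionGenerator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
motion = torch.randn(4, 60, 72)              # a toy batch of 60-frame motions
mask = torch.rand(4, 60) < 0.5               # randomly mask half of the poses
recon = model(motion, mask)
loss = ((recon - motion) ** 2)[mask].mean()  # reconstruct only the masked poses
opt.zero_grad(); loss.backward(); opt.step()
```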
In constructing the motion prompt for open-vocabulary
motion generation, OOHMG learns a text-to-pose generator
and uses it to generate the unmasked poses of the masked
motions, as shown in Fig. 2. There are two major difficulties in learning the text-to-pose generator: 1) what can
associate diverse texts and poses to supervise the pose gen-
erator, and 2) how to obtain diverse texts as the training in-
puts. For difficulty 1, we build the first large-scale text-pose
alignment model based on CLIP, namely TPA, that can effi-
ciently measure the alignment between texts and 3D SMPL
poses [27, 31] in the feature space. With TPA, the text-to-
pose generator learns to generate poses for texts by maxi-
mizing the text-pose alignments via gradient descent. As for
difficulty 2, instead of collecting massive texts laboriously
for training, we consider an extreme training paradigm,
termed wordless training. Just as its name implies, word-
less training only samples random training inputs from the
latent space of texts. And we found that the optimized pose
generator can well-generalize to real-world texts.
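The sketch below illustrates wordless training under the assumption that the frozen alignment model (TPA-like) and the text-to-pose generator operate in a shared, CLIP-sized feature space; the toy modules are stand-ins for the paper's architectures, not the released code.

```python
# Minimal sketch (toy stand-in modules): wordless training of a text-to-pose
# generator against a frozen text-pose alignment model.
import torch
import torch.nn as nn
import torch.nn.functional as F

pose_generator = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 72))
tpa_pose_encoder = nn.Linear(72, 512)        # stands in for the frozen TPA pose encoder
for p in tpa_pose_encoder.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(pose_generator.parameters(), lr=1e-4)

for step in range(100):                      # no real-world texts needed
    latent = F.normalize(torch.randn(32, 512), dim=-1)  # sample the text latent space
    pose = pose_generator(latent)
    pose_feat = F.normalize(tpa_pose_encoder(pose), dim=-1)
    loss = -(pose_feat * latent).sum(-1).mean()          # maximise text-pose alignment
    opt.zero_grad(); loss.backward(); opt.step()
```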
Overall, the contributions of OOHMG are as follows. 1)
We propose an offline open-vocabulary text-to-motion gen-
eration framework, inspired by prompt learning, and 2) to
supervise the training process of the text-to-pose generator,
we propose the first text-pose alignment model, i.e., TPA,
and 3) to endow the text-to-pose generator with the ability
to handle open-vocabulary texts, we train the generator with
the novel wordless training mechanism. 4) Extensive exper-
iment results show that OOHMG is able to generate motions
for open-vocabulary texts efficiently and effectively, and ob-
tain clear improvement over the advanced baseline methods
qualitatively and quantitatively.
|
Liu_MixTeacher_Mining_Promising_Labels_With_Mixed_Scale_Teacher_for_Semi-Supervised_CVPR_2023
|
Abstract
Scale variation across object instances remains a key
challenge in object detection task. Despite the remarkable
progress made by modern detection models, this challenge
is particularly evident in the semi-supervised case. While
existing semi-supervised object detection methods rely on
strict conditions to filter high-quality pseudo labels from
network predictions, we observe that objects with extreme
scale tend to have low confidence, resulting in a lack of
positive supervision for these objects. In this paper, we
propose a novel framework that addresses the scale vari-
ation problem by introducing a mixed scale teacher to im-
prove pseudo label generation and scale-invariant learn-
ing. Additionally, we propose mining pseudo labels using
score promotion of predictions across scales, which bene-
fits from better predictions from mixed scale features. Our
extensive experiments on MS COCO and PASCAL VOC
benchmarks under various semi-supervised settings demon-
strate that our method achieves new state-of-the-art per-
formance. The code and models are available at https:
//github.com/lliuz/MixTeacher .
|
1. Introduction
The remarkable performance of deep learning on various
tasks can largely be attributed to large-scale datasets with
accurate annotations. However, collecting a large amount of
high-quality annotations is infeasible as it is labor-intensive
and time-consuming, especially for tasks with complicated
annotations such as object detection [23, 30] and segmen-
tation [5, 6]. To reduce reliance on manual labeling, semi-
supervised learning (SSL) has gained much attention. SSL
aims to train models on a small amount of labeled im-
ages and a large amount of easily accessible unlabeled data.
Figure 1. Detection results with input of regular 1× scale and
0.5× down-sampled scale images. We plot the precision and recall
under different score thresholds for (a) all objects and (b) large
objects in COCO val2017 with the same model but different
input scales. Two examples of unlabeled images are given in (c).
Large scale inputs have clear advantages in overall metrics, but
down-sampled images are more suitable for large objects.
Following extensive pioneering studies on semi-supervised
image classification [2, 14, 32], several methods on semi-
supervised object detection have emerged.
Most early studies on semi-supervised object detec-
tion [13, 24, 33] can be considered as a direct extension
of SSL methods designed for image classification, using a
teacher-student training paradigm [2,32,35]. In these meth-
ods, a teacher model generates pseudo bounding boxes and
corresponding class predictions on unlabeled images, and
the pseudo labels are used to train a student model. Despite
the performance improvement from using a large amount of
unlabeled data, these methods overlooked the characteris-
tics of object detection to some extent, resulting in a huge
gap from the fully supervised counterpart.
Compared to image classification, object instances in de-
tection tasks can vary in a wider range of scales. To address
this challenge of detecting and localizing multiple objects
across scales and locations, numerous works in object de-
tection have been proposed, such as FPN [21], Trident [20],
and SNIP [31]. However, the large scale variation brings
new challenges in the semi-supervised context. In order to
guarantee high precision, most existing semi-supervised ob-
ject detection methods adopt strict conditions ( e.g. score >
0.9) to filter out highly confident pseudo labels. Although
this ensures the quality of pseudo labels, many objects with
low confidence are wrongly assigned as background, es-
pecially for those with extreme scales. As shown in Fig-
ure 1 (c), inappropriate scales will lead to false negatives,
which can mislead the network in semi-supervised learn-
ing. We further observe the influence of the test scale of the
images. Consistent with common sense, large-scale inputs
have clear advantages in overall metrics, as shown in Fig-
ure 1 (a). However, down-sampled images show a superior-
ity for large objects, as shown in Figure 1 (b). This provides
a new perspective for handling the scale variation issue.
It is worth mentioning that recent works have paid at-
tention to the scale variation issue in semi-supervised ob-
ject detection. As shown in Figure 2 (a) and (b), previ-
ous methods have introduced an additional down-sampled
view to encourage the model to make scale-invariant pre-
dictions. Specifically, SED [10] proposes to distill predic-
tions of class probability from the regular scale to the down-
sampled scale and constrain consistent predictions of local-
ization for all proposals in two scales. PseCo [17] adopts
the same pseudo labels generated from the regular scale for
both scales. However, these methods mainly focus on the
consistency of predictions across scales, which indirectly
improves the models with regularization. Moreover, they
highly rely on the pseudo labels generated from the regular
scale in the teacher network. The false negatives caused by
inappropriate scales still remain in these methods.
Based on the above methods, which are equipped with
an additional down-sampled view of unlabeled images, we
propose to explicitly improve the quality of pseudo la-
bels to handle the scale variation of objects. As shown
in Figure 2 (c), we introduce a mixed-scale feature pyra-
mid, which is built from the large-scale feature pyramid in
the regular view and the small-scale feature pyramid in the
down-sampled view. The mixed-scale feature pyramid is
supposed to be capable of adaptively fusing features across
scales, thus making better predictions in the teacher net-
work. Furthermore, to avoid object instances missing in the
pseudo labels due to low confidence scores, we propose to
leverage the improvement of score as an indicator for min-
ing pseudo labels from low confidence predictions. In sum-
mary, the main contributions are as follows:
• We propose a semi-supervised object detection frame-
work MixTeacher, in which high-quality pseudo labels
are generated from a mixed scale feature pyramid.
• We propose a pseudo-label mining method, which
leverages the improvement of predictions across scales
as the indicator to mine promising pseudo labels (see
the sketch after this list).
• Our method achieves state-of-the-art performance on
MS COCO and Pascal VOC benchmarks under various
semi-supervised settings.
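The sketch below illustrates the score-promotion mining idea from the introduction with simplified per-box tensors and illustrative thresholds; the actual MixTeacher pipeline differs in detail.

```python
# Minimal sketch (illustrative thresholds, not the released code): keep pseudo
# boxes that are either confident at the mixed scale or clearly promoted
# relative to the regular scale.
import torch

def mine_pseudo_labels(boxes, scores_regular, scores_mixed,
                       high_thr=0.9, promote_thr=0.1):
    # boxes: (N, 4) teacher proposals shared across scales
    # scores_regular / scores_mixed: (N,) confidences from the two pyramids
    confident = scores_mixed > high_thr                       # standard filtering
    promoted = (scores_mixed - scores_regular) > promote_thr  # score promotion
    keep = confident | promoted
    return boxes[keep], scores_mixed[keep]

boxes = torch.rand(100, 4)
s_regular, s_mixed = torch.rand(100), torch.rand(100)
kept_boxes, kept_scores = mine_pseudo_labels(boxes, s_regular, s_mixed)
```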
|
Li_GLIGEN_Open-Set_Grounded_Text-to-Image_Generation_CVPR_2023
|
Abstract
Large-scale text-to-image diffusion models have made
amazing advances. However, the status quo is to use
text input alone, which can impede controllability. In this
work, we propose GLIGEN ,Grounded- Language-to- Image
Generation, a novel approach that builds upon and extends
the functionality of existing pre-trained text-to-image dif-
fusion models by enabling them to also be conditioned on
grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject
the grounding information into new trainable layers via a
gated mechanism. Our model achieves open-world grounded
text2img generation with caption and bounding box condi-
tion inputs, and the grounding ability generalizes well to
novel spatial configurations and concepts. GLIGEN ’s zero-
shot performance on COCO and LVIS outperforms existing
supervised layout-to-image baselines by a large margin.
|
1. Introduction
Image generation research has witnessed huge advances
in recent years. Over the past couple of years, GANs [13]
were the state-of-the-art, with their latent space and con-
ditional inputs being well-studied for controllable manipu-
lation [42, 54] and generation [25, 27, 41, 75]. Text condi-
tional autoregressive [46, 67] and diffusion [45, 50] models
have demonstrated astonishing image quality and concept
coverage, due to their more stable learning objectives and
large-scale training on web image-text paired data. These
models have gained attention even among the general public
due to their practical use cases ( e.g., art design and creation).
Despite exciting progress, existing large-scale text-to-
image generation models cannot be conditioned on other
input modalities apart from text, and thus lack the ability to
precisely localize concepts, use reference images, or other
conditional inputs to control the generation process. The cur-
rent input, i.e., natural language alone, restricts the way that
information can be expressed. For example, it is difficult to
describe the precise location of an object using text, whereas
bounding boxes / keypoints can easily achieve this, as shown
in Figure 1. While conditional diffusion models [9, 47, 49]
and GANs [24, 33, 42, 64] that take in input modalities other
than text for inpainting, layout2img generation, etc., do exist,
they rarely combine those inputs for controllable text2img
generation.
Moreover, prior generative models—regardless of the
generative model family—are usually independently trained
on each task-specific dataset. In contrast, in the recognition
field, the long-standing paradigm has been to build recogni-
tion models [29, 37, 76] by starting from a foundation model
pretrained on large-scale image data [4,15,16] or image-text
pairs [30, 44, 68]. Since diffusion models have been trained
on billions of image-text pairs [47], a natural question is:
Can we build upon existing pretrained diffusion models and
endow them with new conditional input modalities? In this
way, analogous to the recognition literature, we may be able
to achieve better performance on other generation tasks due
to the vast concept knowledge that the pretrained models
have, while acquiring more controllability over existing text-
to-image generation models.
With the above aims, we propose a method for providing
new grounding conditional inputs to pretrained text-to-image
diffusion models. As shown in Figure 1, we still retain the
text caption as input, but also enable other input modalities
such as bounding boxes for grounding concepts, grounding
reference images, grounding part keypoints, etc. The key
challenge is preserving the original vast concept knowledge
in the pretrained model while learning to inject the new
grounding information. To prevent knowledge forgetting,
we propose to freeze the original model weights and add
new trainable gated Transformer layers [61] that take in the
new grounding input (e.g., bounding box). During training, we gradually fuse
pretrained model using a gated mechanism [1]. This design
enables flexibility in the sampling process during generation
for improved quality and controllability; for example, we
show that using the full model (all layers) in the first half of
the sampling steps and only using the original layers (without
the gated Transformer layers) in the latter half can lead
to generation results that accurately reflect the grounding
conditions while also having high image quality.
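The sketch below shows one plausible form of such a gated injection layer: new attention over grounding tokens is added to the frozen backbone's visual tokens, scaled by a zero-initialised learnable gate so training starts from the original model's behaviour. Layer sizes and details are assumptions, not GLIGEN's exact module.

```python
# Minimal sketch (not GLIGEN's exact layer): gated attention that injects
# grounding tokens into a frozen visual backbone.
import torch
import torch.nn as nn

class GatedGroundingAttention(nn.Module):
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.gate = nn.Parameter(torch.zeros(1))   # gate starts closed

    def forward(self, visual_tokens, grounding_tokens):
        # visual_tokens: (B, Nv, dim) from the frozen diffusion backbone
        # grounding_tokens: (B, Ng, dim) encoded phrases + box locations
        x = self.norm(torch.cat([visual_tokens, grounding_tokens], dim=1))
        out, _ = self.attn(x, x, x)
        out = out[:, : visual_tokens.shape[1]]     # keep only visual positions
        return visual_tokens + torch.tanh(self.gate) * out

layer = GatedGroundingAttention(dim=320)
v, g = torch.randn(2, 64, 320), torch.randn(2, 4, 320)
print(layer(v, g).shape)   # torch.Size([2, 64, 320])
```

With the gate initialised to zero, the layer is an identity at the start of training, which is one way to avoid disturbing the pre-trained model's concept knowledge.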
In our experiments, we primarily study grounded
text2img generation with bounding boxes, inspired by the
recent scaling success of learning grounded language-image
understanding models with boxes in GLIP [31]. To en-
able our model to ground open-world vocabulary con-
cepts [29,31,69,72], we use the same pre-trained text encoder
(for encoding the caption) to encode each phrase associated
with each grounded entity ( i.e., one phrase per bounding
box) and feed the encoded tokens into the newly inserted
layers with their encoded location information. Due to the
shared text space, we find that our model can generalize to
unseen objects even when only trained on the COCO [36]
dataset. Its generalization on LVIS [14] outperforms a strong
fully-supervised baseline by a large margin. To further im-
prove our model’s grounding ability, we unify the object
detection and grounding data formats for training, following
GLIP [31]. With larger training data, our model’s general-
ization is consistently improved.
Contributions. 1) We propose a new text2img genera-
tion method that endows new grounding controllability over
existing text2img diffusion models. 2) By preserving the pre-
trained weights and learning to gradually integrate the new
localization layers, our model achieves open-world grounded
text2img generation with bounding box inputs, i.e., synthesis
of novel localized concepts unobserved in training. 3) Our
model’s zero-shot performance on layout2img tasks signifi-
cantly outperforms the prior state-of-the-art, demonstrating
the power of building upon large pretrained generative mod-
els for downstream tasks.
|
Li_AShapeFormer_Semantics-Guided_Object-Level_Active_Shape_Encoding_for_3D_Object_Detection_CVPR_2023
|
Abstract
3D object detection techniques commonly follow a
pipeline that aggregates predicted object central point fea-
tures to compute candidate points. However, these can-
didate points contain only positional information, largely
ignoring the object-level shape information. This eventu-
ally leads to sub-optimal 3D object detection. In this work,
we propose AShapeFormer, a semantics-guided object-level
shape encoding module for 3D object detection. This is a
plug-n-play module that leverages multi-head attention to
encode object shape information. We also propose shape
tokens and object-scene positional encoding to ensure that
the shape information is fully exploited. Moreover, we in-
troduce a semantic guidance sub-module to sample more
foreground points and suppress the influence of background
points for a better object shape perception. We demon-
strate a straightforward enhancement of multiple existing
methods with our AShapeFormer. Through extensive exper-
iments on the popular SUN RGB-D and ScanNetV2 dataset,
we show that our enhanced models are able to outperform
the baselines by a considerable absolute margin of up to
8.1%. Code will be available at https://github.
com/ZechuanLi/AShapeFormer
|
1. Introduction
As an important scene understanding task, 3D object de-
tection [13,20,47] aims to detect 3D bounding boxes and se-
mantic categories in 3D point cloud scenes. It plays an im-
portant role in many downstream tasks, such as augmented
reality [2, 3], mobile robots [18, 41, 52], and autonomous
navigation [1,36,37,39]. Object detection has made signifi-
cant progress in the 2D domain [15,22,33]. However, owing
to the sparse and irregular nature of the point cloud data, 2D
detection techniques are generally not readily applicable to
the 3D object detection task.
Figure 1. (Top) VoteNet [29] seed points contain many back-
ground points, leading to sub-optimal candidate points, which are
also intrinsically weak as they fail to account for object shape
and contour features. (Bottom) The proposed AShapeFormer se-
lects more relevant seed points, leading to more appropriate can-
didates that additionally encode the object shape information ac-
tively. This results in high quality 3D object detection.
Inspired by their 2D counterparts, early attempts in
3D object detection, e.g., [16, 39], mapped irregular point
clouds to regular 3D voxels, thereafter using 3DCNNs for
feature extraction and object detection. However, voxeliza-
tion inevitably loses fine-grained information of the point
clouds, which adversely affects the detection performance.
With the advances that allow direct processing of the point
clouds with deep neural models, e.g., [30, 31], recent meth-
ods aim at directly predicting the 3D bounding boxes from
the original unordered point clouds. Among these tech-
niques, VoteNet [29] and its variants [7, 12, 28, 45, 46, 50]
have achieved remarkable performance.
These point-wise methods follow a common underlying
pipeline which includes first aggregating certain predicted
point features into candidate points. The candidate points
are later used to estimate the 3D bounding box information,
e.g., center, size, and orientation, along with the associated
semantic labels. As illustrated in Fig. 1, despite their ex-
cellent performance, these methods still face a few major
challenges. (1) The final prediction relies strongly on the
quality of the candidate points. However, these points fail
to encode important object-level features such as contours
and the shape of the 3D objects. (2) The methods must
regress over the candidate points, and these points are often
influenced by the background points. This propagates the
error to cause offsets in the eventual predictions. Current
attempts to mitigate these issues rely on generating new fea-
tures [7, 45] or sampling more points [42]. However, these
are resource intensive solutions, which must still rely on the
vote point regression quality.
To address the problems, we introduce a novel plug-n-
play neural module, named AShapeFormer. It can be easily
assembled with many existing 3D object detection methods
to provide a considerable performance boost. Our key driv-
ing insight is that by utilizing implicit object-level shape
features, a detector can be made aware of the object shape
distribution. Specifically, our module utilizes multi-head at-
tention to encode the object shape information. We aggre-
gate the object shape features using a self-attention mecha-
nism. Inspired by ViT [10] and BERT [19], we introduce a
shape token as the output of the final shape feature to avoid
information loss caused by simplistic operations, e.g., pool-
ing. Additionally, we devise a semantic guidance mecha-
nism to sample more foreground points and assign different
weights to their features, which improves the shape feature
generation. Semantic segmentation scores are also utilized
during the aggregation of vote points to reduce the influence
of irrelevant vote points and obtain better candidates.
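A minimal sketch of the shape-token idea with semantics-weighted point features follows; the feature sizes and layer counts are assumptions rather than the AShapeFormer architecture.

```python
# Minimal sketch (assumed shapes): encoding an object-level shape feature with
# a learnable shape token over semantics-weighted point features.
import torch
import torch.nn as nn

class ShapeEncoder(nn.Module):
    def __init__(self, dim=128, n_heads=4, n_layers=2):
        super().__init__()
        self.shape_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, point_feats, fg_scores):
        # point_feats: (B, N, dim) features of points grouped for one candidate
        # fg_scores:   (B, N) semantic foreground scores in [0, 1]
        weighted = point_feats * fg_scores.unsqueeze(-1)    # semantic guidance
        token = self.shape_token.expand(point_feats.size(0), -1, -1)
        out = self.encoder(torch.cat([token, weighted], dim=1))
        return out[:, 0]                                    # shape token output

enc = ShapeEncoder()
feats, scores = torch.randn(8, 64, 128), torch.rand(8, 64)
print(enc(feats, scores).shape)   # torch.Size([8, 128])
```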
We provide successful demonstration of boosting both
point-based [28, 29] and Transformer [23] baselines with
our method, achieving strong performance gains. Our ex-
perimental results (§ 4.1) show that AShapeFormer boosts
the multi-class mean average precision (mAP) up to 3.5%
on the challenging SUN RGB-D dataset [38] and 8.1% on
the ScanNet V2 dataset [9]. Highlights of our contributions
include the following.
• We propose a plug-and-play active shape encoding
module named AShapeFormer, which can be com-
bined with many existing 3D object detection networks
to achieve a considerable performance boost.
• To the best of our knowledge, our method is the first to
combine multi-head attention and semantic guidance
to encode strong object shape features for robust clas-
sification and accurate bounding box regression.
• We demonstrate a considerable mAP boost on SUN
RGB-D ([email protected]) and ScanNet V2 datasets by en-
hancing the state-of-the-art methods with our module.
|
Lee_Revisiting_Self-Similarity_Structural_Embedding_for_Image_Retrieval_CVPR_2023
|
Abstract
Despite advances in global image representation, exist-
ing image retrieval approaches rarely consider geometric
structure during the global retrieval stage. In this work,
we revisit the conventional self-similarity descriptor from
a convolutional perspective, to encode both the visual and
structural cues of the image into a global image representation.
Our proposed network, named Structural Embedding Net-
work (SENet), captures the internal structure of the images
and gradually compresses them into dense self-similarity
descriptors while learning diverse structures from various
images. These self-similarity descriptors and original im-
age features are fused and then pooled into global embed-
ding, so that global embedding can represent both geomet-
ric and visual cues of the image. Along with this novel
structural embedding, our proposed network sets new state-
of-the-art performances on several image retrieval bench-
marks, convincing its robustness to look-alike distractors.
The code and models are available: https://github.
com/sungonce/SENet .
|
1. Introduction
Content-based image retrieval is the task of searching
for images with the same content present in the query im-
age in the large-scale database. What across images rep-
resents the same content are two things: the visual prop-
erties and the geometrical structure, so comparing them
well is the key to the image retrieval task. To achieve
this goal, two image representation types have been exten-
sively explored in many image retrieval solutions. The first
one is local features [1–3, 5, 19–22, 24, 47] that comprise
visual descriptors and spatial information about local re-
gions of the image, and the other one is a global descriptor
[3,11–13,18,24–26,28,33,43], also known as global embed-
ding, that summarizes the local features of the entire image.
In a general sense, the global descriptor loses spatial infor-
mation of local features during the summarization process.
Thus, many image retrieval solutions [3, 18, 24, 35, 36, 42]
first retrieve coarse candidates with similar visual proper-
Figure 1. Images of the same content share both similar image
properties and internal self-similarities. Our proposed networks
leverage both visual features and self-similarity features and en-
code them into a global embedding in an end-to-end manner.
ties for the query using global embeddings (typically re-
ferred to as global retrieval) and further verify that coarse
candidates have geometrically similar shapes to the query
using local features (typically referred to as local feature re-
ranking). This separation of tasks may sound reasonable at
first glance. However, in fact, they miss the opportunities to
perform robust retrieval by comparing structural informa-
tion also in the global retrieval stage.
In computer vision, a self-similarity descriptor [31] has
long been used as a regional descriptor for matching images
based on the aggregation of local internal structures. This
work has shown its effectiveness in challenging matching
problems even in situations where the visual properties of
images are not shared at all ( e.g. matching between draw-
ing and photo domains). However, their self-similarity en-
coding process is neither learnable nor differentiable. And
it also completely ignores visual properties, making it dif-
ficult to use directly for image retrieval tasks where visual
properties are also valuable cues.
In this paper, we revisit the self-similarity descriptor in
a convolutional manner and propose a novel global em-
bedding network named Structural Embedding Network
(SENet ). The proposed network captures the internal struc-
tures of the images and encodes them to self-similarity de-
scriptors while learning diverse structures from various im-
ages. These self-similarity descriptors and original image
features are fused and then pooled into global embedding,
so that global embedding can represent both valuable geo-
metric and visual cues of the image. All proposed modules
of our networks are comprised of point-wise operations, en-
abling efficient descriptor encoding. Our proposed network
sets state-of-the-art performance on several image retrieval
benchmarks, convincing its robustness to look-alike distrac-
tors.
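To make the convolutional view of self-similarity concrete, the sketch below computes a dense local self-similarity descriptor from a CNN feature map; the window size and the absence of any learned fusion are simplifications relative to SENet.

```python
# Minimal sketch (simplified, not the SENet code): dense local self-similarity
# from a feature map via unfold + normalized correlation.
import torch
import torch.nn.functional as F

def local_self_similarity(feat, window=5):
    # feat: (B, C, H, W) backbone features
    # returns: (B, window*window, H, W) cosine similarity between each location
    # and its (window x window) spatial neighbourhood.
    B, C, H, W = feat.shape
    feat = F.normalize(feat, dim=1)
    neigh = F.unfold(feat, window, padding=window // 2)     # (B, C*w*w, H*W)
    neigh = neigh.view(B, C, window * window, H, W)
    center = feat.unsqueeze(2)                              # (B, C, 1, H, W)
    return (neigh * center).sum(dim=1)                      # (B, w*w, H, W)

feat = torch.randn(2, 256, 32, 32)
print(local_self_similarity(feat).shape)   # torch.Size([2, 25, 32, 32])
```

Such a descriptor could then be fused with the original features and pooled into the global embedding.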
|
Lin_Magic3D_High-Resolution_Text-to-3D_Content_Creation_CVPR_2023
|
Abstract
DreamFusion [ 31] has recently demonstrated the utility
of a pre-trained text-to-image diffusion model to optimize
Neural Radiance Fields (NeRF) [ 23], achieving remarkable
text-to-3D synthesis results. However, the method has two in-
herent limitations: (a) extremely slow optimization of NeRF
and (b) low-resolution image space supervision on NeRF ,
leading to low-quality 3D models with a long processing
time. In this paper, we address these limitations by utilizing a
two-stage optimization framework. First, we obtain a coarse
model using a low-resolution diffusion prior and accelerate
with a sparse 3D hash grid structure. Using the coarse repre-
sentation as the initialization, we further optimize a textured
3D mesh model with an efficient differentiable renderer in-
teracting with a high-resolution latent diffusion model. Our
method, dubbed Magic3D, can create high quality 3D mesh
models in 40 minutes, which is 2× faster than DreamFu-
sion (reportedly taking 1.5 hours on average), while also
achieving higher resolution. User studies show 61.7% raters
to prefer our approach over DreamFusion. Together with
the image-conditioned generation capabilities, we provide
users with new ways to control 3D synthesis, opening up new
avenues to various creative applications.
|
1. Introduction
3D digital content has been in high demand for a variety
of applications, including gaming, entertainment, architec-
ture, and robotics simulation. It is slowly finding its way into
virtually every possible domain: retail, online conferencing,
virtual social presence, education, etc. However, creating
professional 3D content is not for just anyone: it requires
immense artistic and aesthetic training with 3D modeling ex-
pertise. Developing these skill sets takes a significant amount
of time and effort. Augmenting 3D content creation with
natural language could considerably help democratize 3D
content creation for novices and turbocharge expert artists.
Image content creation from text prompts [2, 28, 33, 36]
has seen significant progress with the advances of diffusion
models [ 13,41,42] for generative modeling of images. The
key enablers are large-scale datasets comprising billions
of samples (images with text) scraped from the Internet
and massive amounts of compute. In contrast, 3D content
generation has progressed at a much slower pace. Existing
3D object generation models [ 4,9,47] are mostly categorical.
A trained model can only be used to synthesize objects for a
single class, with early signs of scaling to multiple classes
shown recently by Zeng et al. [47]. Therefore, what a user
can do with these models is extremely limited and not yet
ready for artistic creation. This limitation is largely due to the
lack of diverse large-scale 3D datasets; compared to image
and video content, 3D content is much less accessible on the
Internet. This naturally raises the question of whether 3D
generation capability can be achieved by leveraging powerful
text-to-image generative models.
Recently, DreamFusion [ 31] demonstrated its remarkable
ability for text-conditioned 3D content generation by uti-
lizing a pre-trained text-to-image diffusion model [ 36] that
generates images as a strong image prior. The diffusion
model acts as a critic to optimize the underlying 3D repre-
sentation. The optimization process ensures that rendered
images from a 3D model, represented by Neural Radiance
Fields (NeRF) [ 23], match the distribution of photorealis-
tic images across different viewpoints, given the input text
prompt. Since the supervision signal in DreamFusion oper-
ates on very low-resolution images ( 64×64), DreamFusion
cannot synthesize high-frequency 3D geometric and texture
details. Due to the use of inefficient MLP architectures for
the NeRF representation, practical high-resolution synthesis
may not even be possible as the required memory footprint
and the computation budget grows quickly with the resolu-
tion. Even at a resolution of 64×64, optimization times are
in hours (1.5 hours per prompt on average using TPUv4).
In this paper, we present a method that can synthesize
highly detailed 3D models from text prompts within a re-
duced computation time. Specifically, we propose a coarse-
[Figure 1 image grid omitted; example prompts include “a silver platter piled high with fruits”, “a blue poison-dart frog sitting on a water lily”, and “a metal bunny sitting on top of a stack of chocolate cookies”.]
Figure 1. Results and applications of Magic3D. Top: high-resolution text-to-3D generation . Magic3D can generate high-quality
and high-resolution 3D models from text prompts. Bottom: high-resolution prompt-based editing . Magic3D can edit 3D models by
fine-tuning with the diffusion prior using a different prompt. Taking the low-resolution 3D model as the input (left), Magic3D can modify
different parts of the 3D model corresponding to different input text prompts. Together with various creative controls on the generated 3D
models, Magic3D is a convenient tool for augmenting 3D content creation.
to-fine optimization approach that uses multiple diffusion
priors at different resolutions to optimize the 3D representa-
tion, enabling the generation of both view-consistent geome-
try as well as high-resolution details. In the first stage, we
optimize a coarse neural field representation akin to Dream-
Fusion, but with a memory- and compute-efficient scene
representation based on a hash grid [25]. In the second stage, we switch to optimizing mesh representations, a critical step
that allows us to utilize diffusion priors at resolutions as high
as512×512. As 3D meshes are amenable to fast graphics
renderers that can render high-resolution images in real-time,
we leverage an efficient differentiable rasterizer [ 9,26] and
make use of camera close-ups to recover high-frequency
details in geometry and texture. As a result, our approach
produces high-fidelity 3D content (see Fig. 1) that can con-
veniently be imported and visualized in standard graphics
software and does so at 2 ×the speed of DreamFusion. Fur-
thermore, we showcase various creative controls over the 3D
synthesis process by leveraging the advancements developed
for text-to-image editing applications [ 2,35]. Our approach,
dubbed Magic3D, endows users with unprecedented control
in crafting their desired 3D objects with text prompts and
reference images, bringing this technology one step closer
to democratizing 3D content creation.
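Both stages rely on a score-distillation style objective from a frozen diffusion prior, following DreamFusion. The sketch below is a heavily simplified, conceptual version of one such update, with hypothetical `render` and `unet` callables and toy stand-ins; it is not Magic3D's implementation, and the per-timestep weighting is omitted.

```python
# Minimal conceptual sketch (hypothetical APIs): a score-distillation style
# update in which a frozen diffusion model supplies gradients for rendered
# images of the 3D representation.
import torch

def score_distillation_step(render, params, unet, text_emb, alphas_bar, opt):
    img = render(params)                                   # differentiable render
    B = img.shape[0]
    t = torch.randint(20, 980, (B,))
    a = alphas_bar[t].view(B, 1, 1, 1)
    noise = torch.randn_like(img)
    noisy = a.sqrt() * img + (1 - a).sqrt() * noise        # forward diffusion
    with torch.no_grad():
        eps_pred = unet(noisy, t, text_emb)                # frozen prior
    # Pass (eps_pred - noise) straight back to the renderer parameters,
    # skipping the U-Net Jacobian as in score distillation.
    img.backward(gradient=(eps_pred - noise))
    opt.step(); opt.zero_grad()

# toy stand-ins so the sketch runs end-to-end
params = torch.nn.Parameter(torch.randn(1, 3, 64, 64))
render = lambda p: torch.sigmoid(p)            # stands in for NeRF/mesh rendering
unet = lambda x, t, e: torch.zeros_like(x)     # stands in for the frozen prior
alphas_bar = torch.linspace(0.999, 0.01, 1000)
opt = torch.optim.Adam([params], lr=1e-2)
score_distillation_step(render, params, unet, None, alphas_bar, opt)
```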
In summary, our work makes the following contributions:
•We propose Magic3D, a framework for high-quality 3D
content synthesis using text prompts by improving several
major design choices made in DreamFusion. It consists of
a coarse-to-fine strategy that leverages both low- and high-
resolution diffusion priors for learning the 3D representa-
tion of the target content. Magic3D, which synthesizes 3D
content with 8× higher resolution supervision, is also
2× faster than DreamFusion. 3D content synthesized by
our approach is significantly preferable by users (61.7%).
•We extend various image editing techniques developed for
text-to-image models to 3D object editing and show their
applications in the proposed framework.
|
Liu_FAC_3D_Representation_Learning_via_Foreground_Aware_Feature_Contrast_CVPR_2023
|
Abstract
Contrastive learning has recently demonstrated great
potential for unsupervised pre-training in 3D scene un-
derstanding tasks. However, most existing work ran-
domly selects point features as anchors while building con-
trast, leading to a clear bias toward background points
that often dominate in 3D scenes. Also, object aware-
ness and foreground-to-background discrimination are ne-
glected, making contrastive learning less effective. To
tackle these issues, we propose a general foreground-aware
feature contrast (FAC) framework to learn more effective
point cloud representations in pre-training. FAC consists
of two novel contrast designs to construct more effective
and informative contrast pairs. The first is building positive
pairs within the same foreground segment where points tend
to have the same semantics. The second is that we prevent
over-discrimination between 3D segments/objects and en-
courage foreground-to-background distinctions at the seg-
ment level with adaptive feature learning in a Siamese cor-
respondence network, which adaptively learns feature cor-
relations within and across point cloud views effectively.
Visualization with point activation maps shows that our
contrast pairs capture clear correspondences among fore-
ground regions during pre-training. Quantitative exper-
iments also show that FAC achieves superior knowledge
transfer and data efficiency in various downstream 3D se-
mantic segmentation and object detection tasks. All code,
data, and models are available.
|
1. Introduction
3D scene understanding is crucial to many tasks such
as robot grasping and autonomous navigation [12, 21, 30].
However, most existing work is fully supervised which re-
lies heavily on large-scale annotated 3D data that is often
laborious to collect. Self-supervised learning (SSL), which
allows learning rich and meaningful representations from
large-scale unannotated data, has recently demonstrated
great potential to mitigate the annotation constraint [1, 5].
It learns with auxiliary supervision signals derived from
unannotated data, which are usually much easier to col-
Figure 1. Constructing informative contrast pairs matters in con-
trastive learning: Conventional contrast requires strict point-level
correspondence. The proposed method FAC takes both fore-
ground grouping and foreground-background distinction cues into
account, thus forming better contrast pairs to learn more informa-
tive and discriminative 3D feature representations.
lect. In particular, contrastive learning as one prevalent SSL
approach has achieved great success in various visual 2D
recognition tasks [6, 29].
Contrastive learning has also been explored for point
cloud representation learning in various downstream tasks
such as semantic segmentation [7, 18, 22, 42], instance seg-
mentation [19, 20], and object detection [26, 44]. However,
many successful 2D contrastive learning methods [6,14,46]
do not work well for 3D point clouds, largely because
point clouds often capture wide-view scenes which con-
sist of complex points of many irregularly distributed fore-
ground objects as well as a large number of background
points. Several studies attempt to design specific contrast
to cater to the geometry and distribution of point clouds.
For example, [22] employ max-pooled features of two aug-
mented scenes to form the contrast, but they tend to over-
emphasize holistic information and overlook informative
features about foreground objects. The methods in [19, 26, 42] directly use
registered point/voxel features as positive pairs and treat all
non-registered as negative pairs, causing many false con-
trast pairs in semantics.
We propose exploiting scene foreground evidence and
foreground-background distinction to construct more fore-
ground grouping aware and foreground-background distinc-
tion aware contrast for learning discriminative 3D represen-
tations. For foreground grouping aware contrast, we first
obtain regional correspondences with over-segmentation
and then build positive pairs with points of the same re-
gion across views, leading to semantically coherent repre-
sentations. In addition, we design a sampling strategy
to sample more foreground point features while building
contrast, because the background point features are often
less-informative and have repetitive or homogeneous pat-
terns. For foreground-background contrast, we first en-
hance foreground-background point feature distinction, and
then design a Siamese correspondence network that se-
lects correlated features by adaptively learning affinities
among feature pairs within and across views in both fore-
ground and background to avoid over-discrimination be-
tween parts/objects. Visualizations show that foreground-
enhanced contrast guides the learning toward foreground re-
gions while foreground-background contrast enhances dis-
tinctions among foreground and background features effec-
tively in a complementary manner, the two collaborating to
learn more informative and discriminative representation as
illustrated in Fig. 1.
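The sketch below illustrates the region-level positive-pair construction with a plain InfoNCE loss; the pooling and pair selection are simplified relative to FAC, which additionally relies on the Siamese correspondence network and foreground sampling.

```python
# Minimal sketch (simplified, not the FAC code): region-level InfoNCE where
# point features from the same over-segmented region across two views are
# positives and other regions are negatives.
import torch
import torch.nn.functional as F

def region_contrast(feat_v1, feat_v2, seg_v1, seg_v2, tau=0.07):
    # feat_v1: (N, C), feat_v2: (M, C) point features from two augmented views
    # seg_v1:  (N,),   seg_v2:  (M,)  over-segmentation ids shared across views
    ids = [i for i in torch.unique(seg_v1).tolist() if (seg_v2 == i).any()]
    reg1 = torch.stack([feat_v1[seg_v1 == i].mean(0) for i in ids])
    reg2 = torch.stack([feat_v2[seg_v2 == i].mean(0) for i in ids])
    reg1, reg2 = F.normalize(reg1, dim=-1), F.normalize(reg2, dim=-1)
    logits = reg1 @ reg2.t() / tau                   # (R, R) region similarities
    return F.cross_entropy(logits, torch.arange(len(ids)))

f1, f2 = torch.randn(1024, 96), torch.randn(900, 96)
s1, s2 = torch.randint(0, 20, (1024,)), torch.randint(0, 20, (900,))
print(region_contrast(f1, f2, s1, s2))
```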
The contributions of this work can be summarized as
follows. First, we propose FAC, a foreground-aware
feature contrast framework for large-scale 3D pre-training.
Second , we construct region-level contrast to enhance the
local coherence and better foreground awareness in the
learned representations. Third , on top of that, we de-
sign a Siamese correspondence framework that can lo-
cate well-matched keys to adaptively enhance the intra-
and inter-view feature correlations, as well as enhance the
foreground-background distinction. Lastly , extensive ex-
periments over multiple public benchmarks show that FAC
achieves superior self-supervised learning when compared
with the state-of-the-art. FAC is compatible with the preva-
lent 3D segmentation backbone network SparseConv [15]
and 3D detection backbone networks including PV-RCNN,
PointPillars [25], and PointRCNN [36]. It is also applica-
ble to both indoor dense RGB-D and outdoor sparse LiDAR
point clouds.
|
Liu_Learning_Customized_Visual_Models_With_Retrieval-Augmented_Knowledge_CVPR_2023
|
Abstract
Image-text contrastive learning models such as CLIP
have demonstrated strong task transfer ability. The high
generality and usability of these visual models is achieved
via a web-scale data collection process to ensure broad con-
cept coverage, followed by expensive pre-training to feed
all the knowledge into model weights. Alternatively, we
propose REACT ,REtrieval- Augmented CusTomization, a
framework to acquire the relevant web knowledge to build
customized visual models for target domains. We retrieve the
most relevant image-text pairs ( ∼3% of CLIP pre-training
data) from the web-scale database as external knowledge
and propose to customize the model by only training new
modularized blocks while freezing all the original weights.
The effectiveness of REACT is demonstrated via extensive
experiments on classification, retrieval, detection and seg-
mentation tasks, including zero, few, and full-shot settings.
Particularly, on the zero-shot classification task, compared
with CLIP , it achieves up to 5.4% improvement on ImageNet
and 3.7% on the ELEVATER benchmark (20 datasets).
|
1. Introduction
It has been a fundamental research problem in computer
vision (CV) to build a transferable visual system that can
easily adapt to a wide range of downstream tasks. With
remarkable advances in deep learning, a de facto solution
to achieve this is to train deep neural networks on a large
amount of data to pursue the so-called generic visual repre-
sentations. This dates back to the standard supervised train-
ing on ImageNet [10], whose superb representation power
is further demonstrated in BiT [23]/ViT [12] by scaling up
the training to JFT300M [50]. Along the way, recent efforts
have been applied to the popular image self-supervised learn-
ing [6, 16, 17] to reduce the demand for labeled data. The
third approach is image-text contrastive learning trained on
billion-scale web-crawled image-text pairs. Such models,
like CLIP [43] and ALIGN [20], are able to achieve great
performance on different downstream domains, without the
need of any human labels.
Excellent empirical performance has been achieved with
the above three pre-training methods, by following the well
established two-stage pre-training then adaptation pipeline:
model pre-training from scratch on large data, then model
adaptation directly on downstream tasks. Specifically, the
pre-trained models are adapted to downstream tasks by con-
sidering the available task-specific samples only: either eval-
uated in a zero-shot task transfer manner, or updated using
linear probing (LP) [43], finetuning (FT) [27], or prompt tun-
ing [44,71]. Following this two-stage pipeline, most research
has reverted to the faith that building transferable visual sys-
tems is equivalent to developing more generic visual models
by feeding all knowledge in the model pre-training stage.
Therefore, the community has been witnessing a trend in
exploring scaling success of pre-training model and data size
with less care on the target domain, hoping that the model
can adapt to any downstream scenario.
In this paper, we argue that the conventional two-stage
pipeline above is over-simplified and less efficient in achiev-
ing the goal of building a transferable visual system in real-
world settings. Instead, we propose a customization stage in
between the pre-training and adaptation, where customiza-
tion is implemented by systematically leveraging retrieved
external knowledge. The inspiration comes from how hu-
mans are specialized in society for better generalization:
instead of trying to memorize all concepts, humans are
trained/prepared in a relevant subject to master a certain
skill, while maintaining the basic skills in pre-training.
To this end, we explore a systematic approach to acquire
and learn with external knowledge sources from a large
image-text corpus for model customization. The process of
collecting external image-text knowledge is fully automatic
without extra human annotation. The acquired knowledge
typically contains richer information about the concept: rel-
evant images that never appear in the downstream training
Figure 1. REACT achieves the best zero-shot ImageNet performance among public checkpoints (Left), achieves new SoTA on semi-supervised
ImageNet classification in the 1% labeled data setting (Middle), and consistently transfers better than CLIP across a variety of tasks,
including ImageNet classification, zero/few/full-shot classification on 20 datasets in the ELEVATER benchmark, image-text retrieval, object
detection and segmentation (Right). Please see the detailed numbers and settings in the experimental section. For the left figure, circle size
indicates model size.
and evaluation set, and richer text descriptions about concept
semantics. Such multi-modal knowledge sources are gen-
erally available on the web, and further open-sourced like
LAION [45, 46]. They cover a variety of domains, making it
possible to develop customized visual models for task-level
transfer. Similar retrieval-augmented intuitions have been
exploited in computer vision for class-level transfer [32], but
not yet for task-level transfer (similar to that of CLIP). Our
main findings/contributions can be summarized as follows.
We propose to explore the potential of the web-scale
image-text corpus as external knowledge to significantly
improve task-level transfer performance on the target do-
main at an affordable cost. A simple and effective strategy
is proposed. To begin with, we build a large-scale multi-
modal indexing system to retrieve the relevant image-text
pairs using CLIP features and approximate nearest neigh-
bor search. For a CV problem, the task instruction is often
sufficiently specified with text such as class names, which
allows us to utilize them as queries to retrieve the relevant
image-text pair knowledge from the indexing system. No
images from the CV problem are needed. To efficiently build
the customized visual model, we propose a novel modular-
ized learning strategy: only updating the additional trainable
weights on the retrieved knowledge, and freezing the origi-
nal model weights. Hence, the model masters the new skill
without forgetting basic skills.
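The retrieval step might look like the sketch below, which uses FAISS as one possible nearest-neighbour backend (the choice of library is an assumption, not something specified here) and random vectors as stand-ins for CLIP embeddings; a production system would use an approximate index over billions of pairs.

```python
# Minimal sketch (FAISS chosen for illustration; embeddings are random
# stand-ins for CLIP features): retrieving relevant image-text pairs for
# class-name queries.
import numpy as np
import faiss

d = 512
pair_emb = np.random.randn(100_000, d).astype("float32")   # web image-text pairs
faiss.normalize_L2(pair_emb)
index = faiss.IndexFlatIP(d)         # exact inner-product index for the demo
index.add(pair_emb)

class_names = ["golden retriever", "fire truck"]            # task instruction only
queries = np.random.randn(len(class_names), d).astype("float32")  # CLIP text feats
faiss.normalize_L2(queries)
scores, idx = index.search(queries, 256)    # top-256 pairs per concept
```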
The generality and effectiveness of the proposed cus-
tomization strategy is demonstrated on four CV problems .
We instantiate it with CLIP, and develop the customized
visual models for image classification on ImageNet and
20 datasets in ELEVATER [27], image-text retrieval on
COCO [30]/Flickr [41], as well as object detection and se-
mantic segmentation on COCO [30]. The knowledge bases
are considered as LAION [46] and larger web-crawled multi-
modal data. The retrieval-augmented knowledge ( ∼3%
image-text pairs compared with the original training data)
significantly improves the model’s zero-shot performance without the need to access any images from downstream
tasks. See Figure 1 for highlighted results. For example,
our ViT-L/14 checkpoint achieves 78.5% zero-shot accuracy
on ImageNet [10], surpassing all public checkpoints from
CLIP [43] and OpenCLIP [18], including those with larger
model size and trained on a much larger LAION-2B [45].
The new customized models demonstrate higher few/full-
shot performance than the generic model counterparts.
Our retrieval system, codebase, and pre-trained models
are publicly available . To make this line of research more
accessible, our retrieved subsets for both ELEVATER and
ImageNet will also be made available, with an easy-to-use
toolkit to download the subsets without storing the whole
dataset locally. It poses a feasible direction for leveraging
the ever-increasing data from the Internet for customized
visual recognition, especially for the low-resource regimes.
|
Kong_vMAP_Vectorised_Object_Mapping_for_Neural_Field_SLAM_CVPR_2023
|
Abstract
We present vMAP, an object-level dense SLAM system
using neural field representations. Each object is repre-
sented by a small MLP, enabling efficient, watertight object
modelling without the need for 3D priors.
As an RGB-D camera browses a scene with no prior in-
formation, vMAP detects object instances on-the-fly, and
dynamically adds them to its map. Specifically, thanks to
the power of vectorised training, vMAP can optimise as
many as 50 individual objects in a single scene, with an
extremely efficient training speed of 5Hz map update. We
experimentally demonstrate significantly improved scene-
level and object-level reconstruction quality compared to
prior neural field SLAM systems. Project page: https:
//kxhit.github.io/vMAP .
|
1. Introduction
For robotics and other interactive vision applications, an
object-level model is arguably semantically optimal, with
scene entities represented in a separated, composable way,
but also efficiently focusing resources on what is important
in an environment.
The key question in building an object-level mapping
system is what level of prior information is known about
the objects in a scene in order to segment, classify and reconstruct them. If no 3D object priors are available, then
usually only the directly observed parts of objects can be
reconstructed, leading to holes and missing parts [4, 46].
Prior object information such as CAD models or category-
level shape space models enable full object shape estima-
tion from partial views, but only for the subset of objects in
a scene for which these models are available.
In this paper, we present a new approach which applies to
the case where no 3D priors are available but still often en-
ables watertight object reconstruction in realistic real-time
scene scanning. Our system, vMAP, builds on the attractive
properties shown by neural fields as a real-time scene repre-
sentation [31], with efficient and complete representation of
shape, but now reconstructs a separate tiny MLP model of
each object. The key technical contribution of our work is
to show that a large number of separate MLP object models
can be simultaneously and efficiently optimised on a single
GPU during live operation via vectorised training.
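As a rough illustration of what vectorised training of many tiny per-object MLPs can look like, the sketch below stacks the weights of K object networks and evaluates them with batched matrix multiplies, so a single optimiser step updates all objects at once. The layer sizes, loss, and data are placeholders, and this is not the vMAP implementation (an equivalent pattern could be written with torch.func.vmap).

```python
import torch
from torch import nn

class VectorisedMLPs(nn.Module):
    """K independent tiny MLPs evaluated with one batched matmul per layer."""
    def __init__(self, k, d_in=3, d_hidden=32, d_out=4):
        super().__init__()
        def layer(din, dout):
            w = nn.Parameter(torch.randn(k, din, dout) * din ** -0.5)
            b = nn.Parameter(torch.zeros(k, 1, dout))
            return w, b
        self.w1, self.b1 = layer(d_in, d_hidden)
        self.w2, self.b2 = layer(d_hidden, d_hidden)
        self.w3, self.b3 = layer(d_hidden, d_out)

    def forward(self, x):                       # x: (k, n_points, d_in), one point batch per object
        h = torch.relu(torch.bmm(x, self.w1) + self.b1)
        h = torch.relu(torch.bmm(h, self.w2) + self.b2)
        return torch.bmm(h, self.w3) + self.b3  # (k, n_points, d_out)

k, n = 50, 1024
model = VectorisedMLPs(k)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

pts = torch.randn(k, n, 3)       # sampled 3D points per object (placeholder data)
target = torch.randn(k, n, 4)    # per-point supervision (placeholder data)
loss = ((model(pts) - target) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()   # one step updates all 50 object networks
```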
We show that we can achieve much more accurate and
complete scene reconstruction by separately modelling ob-
jects, compared with using a similar number of weights in
a single neural field model of the whole scene. Our real-
time system is highly efficient in terms of both computation
and memory, and we show that scenes with up to 50 objects
can be mapped with 40KB per object of learned parameters
across the multiple, independent object networks.
We also demonstrate the flexibility of our disentangled
object representation to enable recomposition of scenes
with new object configurations. Extensive experiments have
been conducted on both simulated and real-world datasets,
showing state-of-the-art scene-level and object-level recon-
struction performance.
|
Lee_Learning_Rotation-Equivariant_Features_for_Visual_Correspondence_CVPR_2023
|
Abstract
Extracting discriminative local features that are invariant to imaging variations is an integral part of establishing correspondences between images. In this work, we introduce a self-supervised learning framework to extract discriminative rotation-invariant descriptors using group-equivariant CNNs. Thanks to employing group-equivariant CNNs, our method effectively learns to obtain rotation-equivariant features and their orientations explicitly, without having to perform sophisticated data augmentations. The resultant features and their orientations are further processed by group aligning, a novel invariant mapping technique that shifts the group-equivariant features by their orientations along the group dimension. Our group aligning technique achieves rotation invariance without any collapse of the group dimension and thus eschews loss of discriminability. The proposed method is trained end-to-end in a self-supervised manner, where we use an orientation alignment loss for the orientation estimation and a contrastive descriptor loss for local descriptors that are robust to geometric/photometric variations. Our method demonstrates state-of-the-art matching accuracy among existing rotation-invariant descriptors under varying rotation and also shows competitive results when transferred to the task of keypoint matching and camera pose estimation.
|
1. Introduction
Extracting local descriptors is an essential step for visual correspondence across images, which is used for a wide range of computer vision problems such as visual localization [29,47,48], simultaneous localization and mapping [7,8,39], and 3D reconstruction [1,16,17,49,66]. To establish reliable visual correspondences, the properties of invariance and discriminativeness are required for local descriptors; the descriptors need to be invariant to geometric/photometric variations of images while being discriminative enough to distinguish true matches from false ones. Since the remarkable success of deep learning for visual recognition, deep neural networks have also been adopted to learn local descriptors, showing enhanced performance on visual correspondence [44,45,64]. Learning rotation-invariant local descriptors, however, remains challenging; the classical techniques [11,27,46] for rotation-invariant descriptors, which are used for shallow gradient-based feature maps, cannot be applied to feature maps from standard deep neural networks, in which rotation of the input induces unpredictable feature variations. Achieving rotation invariance without sacrificing discriminativeness is particularly important for local descriptors, as rotation is one of the most frequent imaging variations in reality.
In this work, we propose a self-supervised approach to obtain rotation-invariant and discriminative local descriptors by leveraging rotation-equivariant CNNs. First, we use group-equivariant CNNs [60] to jointly extract rotation-equivariant local features and their orientations from an image. To extract reliable orientations, we use an orientation alignment loss [21,23,63], which trains the network to predict the dominant orientation robustly against other imaging variations, including illumination or viewpoint changes. Using group-equivariant CNNs enables the local features to be empowered with explicitly encoded rotation equivariance without having to perform rigorous data augmentations [58,60]. Second, to obtain discriminative rotation-invariant descriptors from rotation-equivariant features, we propose group-aligning, which shifts the group-equivariant features by their dominant orientation along their group dimension. Conventional methods to yield invariant features from group-equivariant features collapse the group dimension by group-pooling, e.g., max-pooling or bilinear-pooling [26], resulting in a drop in feature discriminability and quality. In contrast, our group-aligning preserves the group dimension, achieving rotation invariance while eschewing loss of discriminability. Furthermore, by preserving the group dimension, we can obtain multiple descriptors by performing group-aligning using multiple orientation candidates, which improves the matching performance by compensating for potential errors in dominant orientation prediction. Finally, we evaluate our rotation-invariant descriptors against existing local descriptors, and
our group-aligning scheme against group-pooling methods on various image matching benchmarks to demonstrate the efficacy of our method. The contribution of our paper is fourfold:
• We propose to extract discriminative rotation-invariant local descriptors to tackle the task of visual correspondence by utilizing rotation-equivariant CNNs.
• We propose group-aligning, a method to shift a group-equivariant descriptor in the group dimension by its dominant orientation to obtain a rotation-invariant descriptor without collapsing the group information, thereby preserving feature discriminability.
• We use self-supervisory losses: an orientation alignment loss for orientation estimation and a contrastive descriptor loss for robust local descriptor extraction.
• We demonstrate state-of-the-art performance under varying rotations on the Roto-360 dataset and show competitive transferability on the HPatches dataset [2] and the MVS dataset [53].
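To illustrate the group-aligning idea described above, the following toy snippet circularly shifts group-equivariant descriptors along the group axis by a predicted dominant-orientation index; the tensor layout, the shift-sign convention, and all names are our assumptions rather than the authors' code.

```python
import torch

def group_align(feat: torch.Tensor, dominant: torch.Tensor) -> torch.Tensor:
    """Shift features along the group axis by the dominant orientation so the
    descriptor becomes rotation-invariant while the group dimension is kept
    (no max/bilinear pooling collapse).

    feat:      (N, C, G) descriptors over a cyclic rotation group of size G
    dominant:  (N,) dominant-orientation index per descriptor
    """
    n, c, g = feat.shape
    base = torch.arange(g, device=feat.device)                 # (G,)
    idx = (base.unsqueeze(0) + dominant.unsqueeze(1)) % g      # (N, G) circular shift
    idx = idx.unsqueeze(1).expand(n, c, g)                     # broadcast over channels
    return torch.gather(feat, dim=2, index=idx)

# Toy usage over a C8 rotation group: rotating the input permutes `feat`
# cyclically along the group axis, and the aligned descriptor stays
# (approximately) unchanged.
feat = torch.randn(4, 64, 8)
dominant = torch.tensor([0, 3, 5, 7])   # e.g., argmax of an orientation histogram
aligned = group_align(feat, dominant)
```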
|
Kuang_PaletteNeRF_Palette-Based_Appearance_Editing_of_Neural_Radiance_Fields_CVPR_2023
|
Abstract
Recent advances in neural radiance fields have enabled
the high-fidelity 3D reconstruction of complex scenes for
novel view synthesis. However, it remains underexplored
how the appearance of such representations can be effi-
ciently edited while maintaining photorealism. In this work,
we present PaletteNeRF , a novel method for photorealis-
tic appearance editing of neural radiance fields (NeRF)
based on 3D color decomposition. Our method decom-
poses the appearance of each 3D point into a linear com-
bination of palette-based bases (i.e., 3D segmentations de-
fined by a group of NeRF-type functions) that are shared
across the scene. While our palette-based bases are view-
independent, we also predict a view-dependent function to
capture the color residual (e.g., specular shading). Dur-
ing training, we jointly optimize the basis functions and
the color palettes, and we also introduce novel regulariz-
ers to encourage the spatial coherence of the decomposition.
*Parts of this work were done when Zhengfei Kuang was an intern at Adobe Research.
Our method allows users to efficiently edit the appear-
ance of the 3D scene by modifying the color palettes. We
also extend our framework with compressed semantic fea-
tures for semantic-aware appearance editing. We demon-
strate that our technique is superior to baseline methods
both quantitatively and qualitatively for appearance edit-
ing of complex real-world scenes. Our project page is
https://palettenerf.github.io.
|
1. Introduction
Neural Radiance Fields (NeRF) [23] and its variants
[8, 25, 27, 39] have received increasing attention in recent
years for their ability to robustly reconstruct real-world 3D
scenes from 2D images and enable high-quality, photoreal-
istic novel view synthesis. However, such volumetric repre-
sentations are challenging to edit due to the fact that scene
appearance is implicitly encoded in neural features and net-
work weights that do not support local manipulation or in-
tuitive modification.
Multiple approaches have been proposed to support edit-
ing of NeRF. One category of methods [4, 18,41,45] re-
cover the material properties of the scene so that they
can re-render them under novel lighting conditions or ad-
just the material properties such as surface roughness.
Such methods rely on accurate estimation of the scene
reflectance, which is typically challenging for real-world
complex scenes captured in unconstrained environments.
Another category of methods [21, 35] learns a latent code on
which NeRF can be conditioned to produce the desired ap-
pearance. However, these methods often suffer from limited
capacity and flexibility and do not support fine-grained edit-
ing. In addition, some other methods [40] learn to transfer
the appearance of NeRF to match a given style image, but
sometimes fail to maintain the same level of photorealism
in the original scene.
In this paper, we propose PaletteNeRF, a novel method
to support flexible and intuitive editing of NeRF. Our
method is inspired by previous image-editing methods
based on color palettes [7, 31], where a small set of col-
ors are used to represent the full range of colors in the im-
age. We model the radiance of each point using a combi-
nation of specular and diffuse components, and we further
decompose the diffuse component into a linear combina-
tion of view-independent color bases that are shared across
the scene. During training, we jointly optimize the per-
point specular component, the global color bases and the
per-point linear weights to minimize the difference between
the rendered images and the ground truth images. We also
introduce novel regularizers on the weights to encourage
the sparseness and spatial coherence of the decomposition
and achieve more meaningful grouping. With the proposed
framework, we can intuitively edit the appearance of NeRF
by freely modifying the learned color bases (Fig. 1). We
further show that our framework can be combined with se-
mantic features to support semantic-aware editing. Unlike
previous palette-based image [1, 31] or video [10] editing
methods, our method produces more globally coherent and
3D consistent recoloring results of the scene across arbi-
trary views. We demonstrate that our method can enable
more fine-grained local color editing while faithfully main-
taining the photorealism of the 3D scene, and achieves bet-
ter performance than baseline methods both quantitatively
and qualitatively. In summary, our contributions include:
• We propose a novel framework to facilitate the edit-
ing of NeRF by decomposing the radiance field into a
weighted combination of learned color bases.
• We introduce a robust optimization scheme with
novel regularizers to achieve intuitive decompositions.
• Our approach enables practical palette-based appear-
ance editing, making it possible for novice users to in-
teractively edit NeRF in an intuitive and controllable
manner on commodity hardware.
|
Liu_PD-Quant_Post-Training_Quantization_Based_on_Prediction_Difference_Metric_CVPR_2023
|
Abstract
Post-training quantization (PTQ) is a neural network
compression technique that converts a full-precision model
into a quantized model using lower-precision data types.
Although it can help reduce the size and computational cost
of deep neural networks, it can also introduce quantiza-
tion noise and reduce prediction accuracy, especially in ex-
tremely low-bit settings. How to determine the appropriate
quantization parameters ( e.g., scaling factors and round-
ing of weights) is the main problem facing now. Existing
methods attempt to determine these parameters by minimize
the distance between features before and after quantization,
but such an approach only considers local information and
may not result in the most optimal quantization parameters.
We analyze this issue and propose PD-Quant, a method
that addresses this limitation by considering global infor-
mation. It determines the quantization parameters by us-
ing the information of differences between network predic-
tion before and after quantization. In addition, PD-Quant
can alleviate the overfitting problem in PTQ caused by the
small number of calibration sets by adjusting the distribu-
tion of activations. Experiments show that PD-Quant leads
to better quantization parameters and improves the predic-
tion accuracy of quantized models, especially in low-bit
settings. For example, PD-Quant pushes the accuracy of
ResNet-18 up to 53.14% and RegNetX-600MF up to 40.67%
in weight 2-bit activation 2-bit. The code is released at
https://github.com/hustvl/PD-Quant .
|
1. Introduction
Various neural networks have been used in many real-
world applications with high prediction accuracy. When de-
ployed on resource-limited devices, networks’ vast memory
and computation costs become significant challenges.
⋆Equal contribution. ⋄This work was done when Jiawei Liu and Lin Niu were interns at Houmo AI. †Corresponding authors.
Reducing overhead while maintaining the model accuracy has
received considerable attention. Network quantization is an
effective technique that can compress the neural networks
by converting the format of values from floating-point to
low-bit [10, 12, 27]. There are two types of quantization:
post-training quantization (PTQ) [32] and quantization-
aware training (QAT) [18]. QAT requires retraining a model
on the labeled training dataset, which is time-consuming
and computationally expensive. While PTQ only requires a
small number of unlabeled calibration samples to quantize
the pre-trained models without retraining, which is suitable
for quick deployment. Existing PTQ methods can achieve
good prediction accuracy with 8-bit or 4-bit quantization by
selecting appropriate quantization parameters [22, 23, 30].
Local metrics (such as MSE [7] or cosine distance [45] of
the activation before and after quantization in layers) are
commonly used to search for quantization scaling factors.
These factors are chosen layer by layer by minimizing the
local metric with a small number of calibration samples. In
this paper, we observe that there is a gap between the se-
lected scaling factors and the optimal scaling factors.
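For context on how such local metrics are typically used, the sketch below grid-searches a per-tensor scaling factor that minimizes the MSE between a tensor and its fake-quantized version; the symmetric uniform quantizer, the candidate grid, and all names are illustrative assumptions, not a specific method's implementation.

```python
import torch

def fake_quant(x, scale, n_bits=2):
    """Uniform symmetric fake quantization of x with a given scale."""
    qmax = 2 ** (n_bits - 1) - 1
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q * scale

def search_scale_mse(x, n_bits=2, n_candidates=100):
    """Pick the scale minimizing the local MSE ||x - fake_quant(x)||^2."""
    qmax = 2 ** (n_bits - 1) - 1
    max_val = x.abs().max()
    best_scale, best_err = None, float("inf")
    for i in range(1, n_candidates + 1):
        scale = max_val * i / n_candidates / qmax   # candidate clipping ranges
        err = ((x - fake_quant(x, scale, n_bits)) ** 2).mean().item()
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale

w = torch.randn(256, 256)             # e.g., one layer's weight tensor
s = search_scale_mse(w, n_bits=4)     # layer-wise scale chosen from a small calibration set
```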
Since the noise from quantization will be more severe
at low-bit, the prediction accuracy of the quantized model
significantly decreases at 2-bit. Recently, some meth-
ods [24, 25, 44] have added a new class of quantization pa-
rameters, weight rounding value, to adjust the rounding of
weights. They optimize both quantization scaling factors
and rounding values by reconstructing features layer-wisely
or block-wisely. Besides, the quantized model by PTQ re-
construction is more likely to be overfitting to the calibra-
tion samples because adjusting the rounding of weights will
significantly increase the PTQ’s degree of freedom.
We propose an effective PTQ method, PD-Quant, to ad-
dress the above-mentioned issues. In this paper, we fo-
cus on improving the performance of PTQ on extremely
low bit-width. PD-Quant uses a metric that considers the global information from the prediction difference between the quantized model and the full-precision (FP) model. (We define the optimal quantization scaling factors as the factors that make the quantized model have the lowest task loss, i.e., the cross-entropy loss computed with the real labels, on the validation set.) We
show that the quantization parameters optimized by predic-
tion difference are more accurate in modeling the quan-
tization noise. Besides, PD-Quant adjusts the activations
for calibration in PTQ to mitigate the overfitting problem.
The distribution of the activations is adjusted to meet the
mean and variance saved in batch normalization layers. Ex-
periments show that PD-Quant leads to better quantization
parameters and improves the prediction accuracy of quan-
tized models, especially in low-bit settings. Our PD-Quant
achieves state-of-the-art performance in PTQ. For example,
PD-Quant pushes the accuracy of weight 2-bit activation
2-bit ResNet-18 up to 53.14% and RegNetX-600MF up to
40.67%. Our contributions are summarized as follows:
1. We analyze the influence of different metrics and indi-
cate that the widely used local metric can be improved
further.
2. We propose to use the information of the prediction
difference in PTQ, which improves the performance
of the quantized model.
3. We propose Distribution Correction (DC) to adjust the
activation distribution to approximate the mean and
variance stored in the batch normalization layer, which
mitigates the overfitting problem.
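The two ideas above can be sketched as follows: a global prediction-difference objective comparing the quantized model's predictions with the full-precision model's, and a distribution-correction step that nudges calibration activations toward the BatchNorm statistics. The KL form, the per-channel standardization, and all names are our assumptions and only approximate the paper's actual losses.

```python
import torch
import torch.nn.functional as F

def prediction_difference_loss(fp_logits, q_logits):
    """Global metric: KL divergence between the FP model's predictions and the
    quantized model's predictions (one plausible instantiation of a
    prediction-difference objective)."""
    log_p_q = F.log_softmax(q_logits, dim=-1)      # quantized model (input, log-probs)
    p_fp = F.softmax(fp_logits, dim=-1)            # FP model (target, probs)
    return F.kl_div(log_p_q, p_fp, reduction="batchmean")

def distribution_correction(act, bn_mean, bn_var, eps=1e-5):
    """Nudge calibration activations toward the per-channel mean/variance stored
    in the corresponding BatchNorm layer (a simplified stand-in for DC).
    act: (B, C, H, W); bn_mean, bn_var: (C,)."""
    mu = act.mean(dim=(0, 2, 3), keepdim=True)
    var = act.var(dim=(0, 2, 3), keepdim=True)
    act_hat = (act - mu) / torch.sqrt(var + eps)
    return act_hat * torch.sqrt(bn_var.view(1, -1, 1, 1) + eps) + bn_mean.view(1, -1, 1, 1)
```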
|
Li_Lite_DETR_An_Interleaved_Multi-Scale_Encoder_for_Efficient_DETR_CVPR_2023
|
Abstract
Recent DEtection TRansformer-based (DETR) models
have obtained remarkable performance. Its success cannot
be achieved without the re-introduction of multi-scale feature
fusion in the encoder. However, the excessively increased to-
kens in multi-scale features, especially for about 75% of low-
level features, are quite computationally inefficient, which
hinders real applications of DETR models. In this paper, we
present Lite DETR, a simple yet efficient end-to-end object
detection framework that can effectively reduce the GFLOPs
of the detection head by 60% while keeping 99% of the origi-
nal performance. Specifically, we design an efficient encoder
block to update high-level features (corresponding to small-
resolution feature maps) and low-level features (correspond-
ing to large-resolution feature maps) in an interleaved way.
In addition, to better fuse cross-scale features, we develop
a key-aware deformable attention to predict more reliable
attention weights. Comprehensive experiments validate the
effectiveness and efficiency of the proposed Lite DETR, and
the efficient encoder strategy can generalize well across
existing DETR-based models. The code will be available
at https://github.com/IDEA-Research/Lite-DETR.
|
1. Introduction
Object detection aims to detect objects of interest in im-
ages by localizing their bounding boxes and predicting the
corresponding classification scores. In the past decade, re-
markable progress has been made by many classical de-
tection models [23, 24] based on convolutional networks.
*This work was done when Feng Li was an intern at IDEA.
†Corresponding author.
Figure 1. Average precision (Y axis) versus GFLOPs (X axis) for
different detection models on COCO without extra training data.
All models except EfficientDet [29] and YOLO series [12, 30] use
ResNet-50 and Swin-Tiny as backbones. Specifically, two markers
on the same line use ResNet-50 and Swin-Tiny, respectively. In-
dividual markers only use ResNet-50. Each dashed line connects
algorithm variants before and after adding our algorithm. The size
of the listed models vary from 32M to 82M.
Recently, DEtection TRansformer [1] (DETR) introduces
Transformers into object detection, and DETR-like models
have achieved promising performance on many fundamental
vision tasks, such as object detection [13, 36, 37], instance
segmentation [5, 6, 14], and pose estimation [26, 28].
Conceptually, DETR [1] is composed of three parts: a
backbone, a Transformer encoder, and a Transformer de-
coder. Many research works have been improving the back-
bone and decoder parts. For example, the backbone in DETR
is normally inherited and can largely benefit from a pre-
trained classification model [10, 20]. The decoder part in
DETR is the major research focus, with many research works
trying to introduce proper structure to DETR query and im-
prove its training efficiency [11, 13, 18, 21, 36, 37]. By con-
trast, much less work has been done to improve the encoder
part. The encoder in vanilla DETR includes six Transformer
encoder layers, stacked on top of a backbone to improve its
feature representation. Compared with classical detection
models, it lacks multi-scale features, which are of vital im-
portance for object detection, especially for detecting small
objects [9, 16, 19, 22, 29]. Simply applying Transformer en-
coder layers on multi-scale features is not practical due to
the prohibitive computational cost that is quadratic to the
number of feature tokens. For example, DETR uses the C5
feature map, which is 1/32 of the input image resolution, to
apply the Transformer encoder. If a C3 feature (1/8 scale) is
included in the multi-scale features, the number of tokens
from this scale alone will be 16 times of the tokens from the
C5 feature map. The computational cost of self-attention in
Transformer will be 256 times high.
To address this problem, Deformable DETR [37] devel-
ops a deformable attention algorithm to reduce the self-
attention complexity from quadratic to linear by compar-
ing each query token with only a fixed number of sampling
points. Based on this efficient self-attention computation,
Deformable DETR introduces multi-scale features to DETR,
and the deformable encoder has been widely adopted in
subsequent DETR-like models [11, 13, 18, 36].
However, due to a large number of query tokens intro-
duced from multi-scale features, the deformable encoder
still suffers from a high computational cost. To reveal this
problem, we conduct some analytic experiments as shown
in Table 1 and 2 using a DETR-based model DINO [36] to
analyze the performance bottleneck of multi-scale features.
Some interesting results can be observed. First, the low-level
(high-resolution map) features account for more than 75% of
all tokens. Second, directly dropping some low-level features
(DINO-3scale) mainly affects the detection performance for
small objects (AP_S) by a 10% drop but has little impact on
large objects (AP_L).
Inspired by the above observations, we are keen to address
a question: can we use fewer feature scales but maintain
important local details? Taking advantage of the structured
multi-scale features, we present an efficient DETR frame-
work, named Lite DETR. Specifically, we design a simple yet effective encoder block with several deformable self-attention layers, which can be plugged into any multi-scale DETR-based model to reduce encoder GFLOPs by 62%∼78% while maintaining competitive performance. The
encoder block splits the multi-scale features into high-level
features (e.g., C6, C5, C4) and low-level features (e.g., C3).
High-level and low-level features will be updated in an in-
terleaved way to improve the multi-scale feature pyramid.
That is, in the first few layers, we let the high-level features
query all feature maps and improve their representations, but
keep low-level tokens intact. Such a strategy can effectively
reduce the number of query tokens to 5%∼25% of the original tokens and save a great amount of computational
cost. At the end of the encoder block, we let low-level to-
kens query all feature maps to update their representations,
thus maintaining multi-scale features. In this interleaved
way, we update high-level and low-level features in different
frequencies for efficient computation.
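A schematic of the interleaved update described above is sketched below: the (few) high-level tokens are refined for several layers while the (many) low-level tokens stay fixed, and the low-level tokens are updated only once at the end of the block. Standard multi-head attention stands in for (key-aware) deformable attention purely for brevity; the layer counts, dimensions, and token counts are placeholders, not Lite DETR's configuration.

```python
import torch
from torch import nn

class InterleavedEncoderBlock(nn.Module):
    """Schematic interleaved update: refine high-level tokens for several layers
    (querying the full multi-scale sequence), then update the low-level tokens
    once at the end of the block."""
    def __init__(self, dim=256, num_high_layers=3, heads=8):
        super().__init__()
        self.high_layers = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(num_high_layers)]
        )
        self.low_layer = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, high_tokens, low_tokens):
        # high_tokens: (B, N_high, C)  e.g., C6/C5/C4 scales (~25% of tokens)
        # low_tokens:  (B, N_low,  C)  e.g., C3 scale        (~75% of tokens)
        for layer in self.high_layers:
            memory = torch.cat([high_tokens, low_tokens], dim=1)      # query all scales
            high_tokens = high_tokens + layer(high_tokens, memory, memory)[0]
        # single low-level update at the end of the block keeps the pyramid fresh
        memory = torch.cat([high_tokens, low_tokens], dim=1)
        low_tokens = low_tokens + self.low_layer(low_tokens, memory, memory)[0]
        return high_tokens, low_tokens

block = InterleavedEncoderBlock()
h, l = block(torch.randn(1, 300, 256), torch.randn(1, 900, 256))
```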
Moreover, to enhance the lagged low-level feature up-
date, we propose a key-aware deformable attention (KDA)
approach to replace all the attention layers. When performing
deformable attention, for each query, it samples both keys
and values from the same sampling locations in a feature
map. Then, it can compute more reliable attention weights
by comparing the query with the sampled keys. Such an
approach can also be regarded as an extended deformable
attention or a sparse version of dense attention. We have
found KDA very effective in bringing the performance back
with our proposed efficient encoder block.
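The key-aware weighting itself can be summarised by the small function below, which compares each query with keys sampled at the same locations as the values instead of predicting the attention weights from the query alone. Gathering the sampled keys/values (normally done with bilinear sampling at predicted offsets) is assumed to have happened already, and the shapes and names are our own, not the paper's implementation.

```python
import torch

def key_aware_attention(q, k_sampled, v_sampled):
    """Key-aware weighting over a fixed set of sampled locations per query.

    q:         (B, Nq, C)      queries
    k_sampled: (B, Nq, P, C)   keys gathered at P sampled points per query
    v_sampled: (B, Nq, P, C)   values gathered at the same points
    """
    d = q.shape[-1]
    attn = torch.einsum("bqc,bqpc->bqp", q, k_sampled) / d ** 0.5   # query-key similarity
    attn = attn.softmax(dim=-1)                                     # weights over sampled points
    return torch.einsum("bqp,bqpc->bqc", attn, v_sampled)

out = key_aware_attention(torch.randn(2, 100, 256),
                          torch.randn(2, 100, 4, 256),
                          torch.randn(2, 100, 4, 256))
```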
To summarize, our contributions are as follows.
•We propose an efficient encoder block to update high-
level and low-level features in an interleaved way,
which can significantly reduce the feature tokens for
efficient detection. This encoder can be easily plugged
into existing DETR-based models.
•To enhance the lagged feature update, we introduce a
key-aware deformable attention for more reliable atten-
tion weights prediction.
•Comprehensive experiments show that Lite DETR can
reduce the detection head GFLOPs by 60% and main-
tain99% detection performance. Specifically, our Lite-
DINO-SwinT achieves 53.9AP with 159GFLOPs.
|
Kolek_Explaining_Image_Classifiers_With_Multiscale_Directional_Image_Representation_CVPR_2023
|
Abstract
Image classifiers are known to be difficult to interpret
and therefore require explanation methods to understand
their decisions. We present ShearletX, a novel mask ex-
planation method for image classifiers based on the shear-
let transform – a multiscale directional image representa-
tion. Current mask explanation methods are regularized
by smoothness constraints that protect against undesirable
fine-grained explanation artifacts. However, the smooth-
ness of a mask limits its ability to separate fine-detail pat-
terns, that are relevant for the classifier, from nearby nui-
sance patterns, that do not affect the classifier. ShearletX
solves this problem by avoiding smoothness regularization
all together, replacing it by shearlet sparsity constraints.
The resulting explanations consist of a few edges, textures,
and smooth parts of the original image, that are the most
relevant for the decision of the classifier. To support our
method, we propose a mathematical definition for explana-
tion artifacts and an information theoretic score to evaluate
the quality of mask explanations. We demonstrate the supe-
riority of ShearletX over previous mask based explanation
methods using these new metrics, and present exemplary
situations where separating fine-detail patterns allows ex-
plaining phenomena that were not explainable before.
|
1. Introduction
Modern image classifiers are known to be difficult to
explain. Saliency maps comprise a well-established ex-
plainability tool that highlights important image regions
for the classifier and helps interpret classification deci-
sions. An important saliency approach frames saliency
map computation as an optimization problem over masks
[8,10,13,14,18,24,29]. The explanation mask is opti-
mized to keep only parts of the image that suffice to retain
the classification decision. However, Fong and Vedaldi [ 14]
showed that an unregularized explanation mask is very sus-
ceptible to explanation artifacts and is hence unreliable.
Figure 1. Left column: ImageNet samples with prediction. Mid-
dle column: Smooth pixel mask explanation from Fong et al. [ 13].
Right column: ShearletX (ours). Retained probability is computed
as class probability after masking divided by class probability be-
fore masking. ShearletX is the first mask explanation method that
can separate fine-detail patterns, that are relevant for the classifier,
from nearby patterns that are irrelevant, without producing arti-
facts.
Therefore, current practice [ 8,13,14] heavily regularizes the
explanation masks to be smooth. The smooth explanation
masks can communicate useful explanatory information by
roughly localizing the relevant image region. However, the
pattern that is relevant for the classifier is often overlaid on
patterns that do not affect the classifier. In such a situa-
tion the mask cannot effectively separate the relevant pat-
tern from the nuisance pattern, due to the smoothness con-
straints. As a result, many details that are irrelevant to the
classifier, such as background elements, textures, and other
spatially localized patterns, appear in the explanation.
An ideal mask explanation method should be resistant to
explanation artifacts and capable of highlighting only rele-
vant patterns. We present such a method, called ShearletX ,
that is able to separate different patterns that occupy nearby
spatial locations by optimizing a mask in the shearlet rep-
resentation of an image [ 25]. Due to the ability of shear-
lets to efficiently encode directional features in images, we
can separate relevant fine-grained image parts, like edges,
smooth areas, and textures, extremely well. We show both
theoretically and experimentally that defining the mask in
the shearlet domain circumvents explanation artifacts. The
masked image is optimized so that the classifier retains its
prediction as much as possible and to have small spatial sup-
port (but not high spatial smoothness), while regularizing
the mask to be sparse in the shearlet domain. This regular-
ization assures that ShearletX retains only relevant parts, a
fact that we support by a new information theoretic score for
the quality of mask explanations. Figure 1gives examples
demonstrating that ShearletX can separate relevant details
from nuisance patterns, which smooth pixel masks cannot.
Our contributions are summarized as follows:
1.ShearletX: The first mask explanation method that can
effectively separate fine-detail patterns, that are rele-
vant for the classifier, from nearby nuisance patterns,
that do not affect the classifier.
2.Artifact Analysis: Our explanation method is based
on low-level vision for maximal interpretability and
belongs to the family of methods that produce out-
of-distribution explanations. To validate that the re-
sulting out-of-distribution explanations are meaning-
ful, we develop a theory to analyze and quantify expla-
nation artifacts, and prove that ShearletX is resilient to
such artifacts.
3.Hallucination Score: a new metric for mask explana-
tions that quantifies explanation artifacts by measuring
the amount of edges in the explanation that do not ap-
pear in the original image.
4.Conciseness-Preciseness Score: A new information
theoretic metric for mask explanations that gives a high
score for explanations that extract the least amount of
information from the image to retain the classification
decision as accurately as possible.
5.Experimental Results: We demonstrate that ShearletX
performs better than previous mask explanations using
our new metrics and give examples where ShearletX
allows to explain phenomena that were not explainable
with previous saliency methods.
The source code for the experiments is publicly available at https://github.com/skmda37/ShearletX.
|